Inviting Submissions on XAI for Decision Support Systems
— We are inviting submissions to the special issue “Explainable AI for Enhanced Decision Making” of the journal Decision Support Systems. —
• Paper submission system opens: November 1st, 2022.
• Paper submission deadline: June 15th, 2023.
Artificial Intelligence (AI), defined as the development of computer systems able to perform tasks normally requiring human intelligence by understanding, processing, and analyzing large amounts of data, has been a prevalent research domain for several decades. An increasing number of businesses rely on AI to achieve outcomes that operationally and/or strategically support (human) decision making across domains. At present, machine learning (ML) has become a widely popular subfield of AI, both in industry and in academia. ML has been used to enhance decision making in applications including predicting organ transplantation risk (Topuz et al., 2018), forecasting the remaining useful life of machinery (Kraus et al., 2020), predicting student dropout (Coussement et al., 2020), and detecting money laundering (Fu et al., 2021; Vandervorst et al., 2022), among others. In the early days, AI attempts to imitate human decision-making rules were only partially successful, as humans often could not accurately describe the decision-making rules they use to solve problems (Fügener et al., 2022). With the development of advanced AI, exciting progress has been made in algorithmic support for decision making in various fields, including finance, economics, marketing, human resource management, tourism, computer science, biological science, and medical science (Liu et al., 2022).
Recent advances have focused heavily on boosting the predictive accuracy of AI methods, with deep learning (DL) being a prevalent example. This strict focus on prediction performance often comes at the expense of explainability, which leads to decision makers’ distrust and even rejection of AI systems (Shin, 2021). Explainable AI describes the process that allows one to understand how an AI system decides, predicts, and performs its operations. Explainable AI thereby reveals the strengths and weaknesses of the decision-making strategy and exposes the rationale of the decision support system (Rai, 2020). Numerous scholars confirm that explainability is key to developing and deploying AI in industries such as retail, banking and financial services, manufacturing, and supply chain/logistics (Kim et al., 2020; Shin, 2021; Zhdanov et al., 2022). Explainable AI has also received attention from governments for its ability to improve the efficiency and effectiveness of government functions and decision support (Phillips-Wren et al., 2021).
In many cases, understanding why a model makes certain decisions and predictions is as important as its accuracy, because model explainability helps managers better understand a model’s parameters and apply them more confidently, allowing them to communicate the analytical rationale for their decisions more convincingly to stakeholders (Wang et al., 2022). Exploring the applications of AI explainability and understandability in decision making is one of the main aims of this special issue.
This special issue on “Explainable AI for Enhanced Decision Making” therefore invites submissions on topics including, but not limited to, the following:
- Explainability and interpretability in AI decision support systems
- Explainable AI for corporate investment decisions
- Explainable AI in banking, insurance, and micro enterprises
- Explainable AI in healthcare, transportation, and education
- Causality in AI models
- Property risk assessment using explainable AI
- Enhanced business decision making using explainable AI
- Explainable AI predictions for IT-industry decisions
- Explainable AI, big data, and decision support systems
- Explainable AI applications and services
- Explainable methods for deep learning architectures
- Decision model visualization
- Evaluation of decision-making metrics and processes
- Measuring explainability in decision support systems
Please note that we are particularly interested in research papers that focus on the explainability aspects of AI-based ML research. Articles that simply improve the accuracy of AI algorithms or ML classifiers, without highlighting the benefit to explainable decision making, are strongly discouraged.
More information is available at https://www.journals.elsevier.com/decision-support-systems/call-for-papers/dss-special-issue-on-explainable-ai-for-enhanced-decision-making.