Abstract

With advancements in artificial intelligence (AI), explainable AI (XAI) has emerged as a promising tool for enhancing the explainability of complex machine learning models. However, the explanations generated by an XAI system may induce cognitive biases in human users. To address this problem, this study investigates how to mitigate users' cognitive biases based on their individual characteristics. In the literature review, we identified two factors that can help remedy such biases: 1) debiasing strategies, which have been reported to reduce biases in users' decision-making through additional information or changes in information delivery, and 2) explanation modality types. To examine the effects of these factors, we conducted an experiment with a 4 (debiasing strategy) × 3 (explanation type) between-subjects design. In the experiment, participants were exposed to an explainable interface that presents an AI's outcomes with explanatory information, and their behavioral and attitudinal responses were collected. Specifically, we statistically examined the effects of textual and visual explanations on users' trust in and confirmation bias toward AI systems, considering the moderating effects of debiasing methods and watching time. The results demonstrated that textual explanations lead to higher trust in XAI systems than visual explanations. Moreover, we found that textual explanations are particularly beneficial for quick decision-makers when evaluating the outputs of AI systems. Finally, the results indicated that cognitive bias can be effectively mitigated by providing users with a priori information. These findings have theoretical and practical implications for designing AI-based decision support systems that generate more trustworthy and equitable explanations.

Keywords: Artificial intelligence; explanation; trust; satisfaction; cognitive bias

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the faculty research fund of Sejong University in 2023.

Notes on contributors

Taehyun Ha
Taehyun Ha is an Assistant Professor in the Department of Data Science at Sejong University. His research focuses on online user behavior, human-AI interaction, and trust formation.

Sangyeon Kim
Sangyeon Kim is a research professor at the Institute of Engineering Research at Korea University. He received a PhD from the Department of Interaction Science at Sungkyunkwan University in 2022. His research interests include human-computer interaction, gestural interaction, accessible computing, and human-centered AI.