Rationality
Transparency (behavior)
Task (project management)
Process (computing)
Machine learning
Knowledge management
Artificial intelligence
Operations research
Computer science
Economics
Management
Mathematics
Political science
Computer security
Law
Operating system
Identifier
DOI:10.1016/j.ipm.2024.103732
Abstract
Artificial intelligence models can process massive amounts of data and surpass human experts in prediction. However, the lack of trust in algorithms sealed in a "black box" is one of the most challenging barriers to taking advantage of AI in human decision-making. Improving algorithm transparency by presenting explanations is one of the most common approaches to addressing this. Explainable artificial intelligence (XAI) has been a recent research focus, but most studies concentrate on developing explainable algorithms rather than on human factors. Thus, the objective of this study is twofold: (1) to explore whether XAI can improve human performance and trust in AI in the competitive task of sales prediction, and (2) to reveal the different impact routines of XAI on individuals with different task-related capacities. Based on a quasi-experimental study, our results indicate that XAI can improve human decision accuracy in the scenario of sales prediction in cross-border e-commerce. XAI does not improve self-reported trust in AI but does improve behavioral trust. We also found a placebo effect of explanations for individuals with relatively low task-related capacity.