Interpretability
Construct (Python library)
Artificial intelligence
Black box
Computer science
Machine learning
Data science
Management science
Engineering
Programming language
Authors
Changdong Chen, Yuchen Zheng
Identifier
DOI: 10.1080/0144929x.2023.2279658
Abstract
Due to the “black-box” nature of artificial intelligence (AI) recommendations, interpretability is critical to the consumer experience of human-AI interaction. Unfortunately, improving the interpretability of AI recommendations is technically challenging and costly. Therefore, there is an urgent need for the industry to identify when the interpretability of AI recommendations is most likely to be needed. This study defines the construct of Need for Interpretability (NFI) of AI recommendations and empirically tests consumers’ need for interpretability of AI recommendations in different decision-making domains. Across two experimental studies, we demonstrate that consumers do indeed have a need for interpretability toward AI recommendations, and that this need is higher in utilitarian domains than in hedonic domains. This study can help companies identify the varying need for interpretability of AI recommendations across different application scenarios.