Keywords
Perception
Psychology
Social psychology
Applied psychology
Artificial intelligence
Computer science
Cognitive psychology
Neuroscience
Authors
Hou Tsung-Yu, Tseng Yu-Chia, Yuan Chien Wen
Identifiers
DOI: 10.1016/j.ijinfomgt.2024.102775
Abstract
Biases in artificial intelligence (AI), a pressing issue in human-AI interaction, can be exacerbated by AI systems' opaqueness. This paper reports on our development of a user-centered explainable-AI approach to reducing such opaqueness, guided by the theoretical framework of anthropomorphism and the results of two 3 × 3 between-subjects experiments (n = 207 and n = 223). Specifically, those experiments investigated how, in a gender-biased hiring scenario, three levels of AI human-likeness (low, medium, high) and three levels of richness of AI explanation (none, lean, rich) influenced users' (1) perceptions of AI bias and (2) adoption of the AI's recommendations, as well as how such perceptions and adoption varied across participant characteristics such as gender and pre-existing trust in AI. We found that comprehensive explanations helped users recognize AI bias and mitigate its influence, and that this effect was particularly pronounced among female participants in a scenario where females were being discriminated against. Follow-up interviews corroborated our quantitative findings. These results can usefully inform explainable-AI interface design.