Interpretability
Computer science
Artificial intelligence
Perception
Field (mathematical analysis)
Human–computer interaction
Psychology
Mathematics
Mathematical analysis
Neuroscience
Authors
Changhoon Oh,Seonghyun Kim,Jinhan Choi,Jinsu Eun,Soomin Kim,Juho Kim,Joonhwan Lee,Bongwon Suh
Identifier
DOI:10.1145/3357236.3395430
Abstract
Artificial intelligence (AI) algorithms are making remarkable achievements even in creative fields such as aesthetics. However, whether those outside the machine learning (ML) community can sufficiently interpret or agree with their results, especially in such highly subjective domains, is being questioned. In this paper, we try to understand how different user communities reason about AI algorithm results in subjective domains. We designed AI Mirror, a research probe that tells users the algorithmically predicted aesthetic scores of photographs. We conducted a user study of the system with 18 participants from three different groups: AI/ML experts, domain experts (photographers), and general public members. They performed tasks consisting of taking photos and reasoning about AI Mirror's prediction algorithm with think-aloud sessions, surveys, and interviews. The results showed the following: (1) Users understood the AI using their own group-specific expertise; (2) Users employed various strategies to close the gap between their judgments and AI predictions over time; (3) The difference between users' thoughts and AI predictions was negatively related to users' perceptions of the AI's interpretability and reasonability. We also discuss design considerations for AI-infused systems in subjective domains.