Authors
Tita Alissa Bach, Amna Nauman Khan, Harry Hallock, Gabriela Beltrão, Sónia Sousa
Identifier
DOI: 10.1080/10447318.2022.2138826
Abstract
User trust in Artificial Intelligence (AI)-enabled systems has been increasingly recognized and proven as a key element in fostering adoption. It has been suggested that AI-enabled systems must move beyond technology-centric approaches and embrace a more human-centric approach, a core principle of the human-computer interaction (HCI) field. This review provides an overview of user trust definitions, influencing factors, and measurement methods drawn from 23 empirical studies, with the aim of informing future technical and design strategies, research, and initiatives for calibrating the user-AI relationship. The findings confirm that trust can be defined in more than one way; rather than comparing definitions, the focus should be on selecting the definition that best captures user trust in a given context. User trust in AI-enabled systems is found to be influenced by three main themes: socio-ethical considerations, technical and design features, and user characteristics. User characteristics dominate the findings, reinforcing the importance of user involvement from development through to monitoring of AI-enabled systems. In conclusion, user trust needs to be addressed directly in every context where AI-enabled systems are used or discussed. In addition, calibrating the user-AI relationship requires finding the optimal balance that works not only for the user but also for the system.