Authors
Alexandra D. Kaplan,Theresa T. Kessler,J. Christopher Brill,Peter A. Hancock
Source
Journal: Human Factors [SAGE]
Date: 2021-05-28
Volume/Issue: 65 (2): 337-359
Citations: 121
Identifier
DOI:10.1177/00187208211013988
Abstract
The present meta-analysis sought to determine significant factors that predict trust in artificial intelligence (AI). Such factors were divided into those relating to (a) the human trustor, (b) the AI trustee, and (c) the shared context of their interaction.

There are many factors influencing trust in robots, automation, and technology in general, and there have been several meta-analytic attempts to understand the antecedents of trust in these areas. However, no targeted meta-analysis has been performed examining the antecedents of trust in AI.

Data from 65 articles examined the three predicted categories, as well as the subcategories of human characteristics and abilities, AI performance and attributes, and contextual tasking. Lastly, four common uses for AI (i.e., chatbots, robots, automated vehicles, and nonembodied, plain algorithms) were examined as further potential moderating factors.

Results showed that all of the examined categories were significant predictors of trust in AI, as were many individual antecedents such as AI reliability and anthropomorphism, among others.

Overall, the results of this meta-analysis identified several factors that influence trust, including some that have no bearing on AI performance. Additionally, we highlight the areas where there is currently no empirical research. Findings from this analysis will allow designers to build systems that elicit higher or lower levels of trust, as they require.