Automation
Situational ethics
Trustworthiness
Situation awareness
Computer science
Empirical research
Variety (cybernetics)
Knowledge management
Data science
Psychology
Computer security
Social psychology
Artificial intelligence
Engineering
Epistemology
Mechanical engineering
Philosophy
Aerospace engineering
Authors
Kevin A. Hoff, Masooda Bashir
Source
Journal: Human Factors
[SAGE]
Date: 2014-09-02
Volume/Issue: 57 (3): 407-434
Citations: 1605
Identifier
DOI: 10.1177/0018720814547570
Abstract
We systematically review recent empirical research on factors that influence trust in automation to present a three-layered trust model that synthesizes existing knowledge. Much of the existing research on factors that guide human-automation interaction is centered around trust, a variable that often determines the willingness of human operators to rely on automation. Studies have utilized a variety of different automated systems in diverse experimental paradigms to identify factors that impact operators' trust. We performed a systematic review of empirical research on trust in automation from January 2002 to June 2013. Papers were deemed eligible only if they reported the results of a human-subjects experiment in which humans interacted with an automated system in order to achieve a goal. Additionally, a relationship between trust (or a trust-related behavior) and another variable had to be measured. Altogether, 101 papers, containing 127 eligible studies, were included in the review. Our analysis revealed three layers of variability in human-automation trust (dispositional trust, situational trust, and learned trust), which we organize into a model. We propose design recommendations for creating trustworthy automation and identify environmental conditions that can affect the strength of the relationship between trust and reliance. Future research directions are also discussed for each layer of trust. Our three-layered trust model provides a new lens for conceptualizing the variability of trust in automation. Its structure can be applied to help guide future research and develop training interventions and design procedures that encourage appropriate trust.