Subjects
Computer science
Perception
Context (archaeology)
Business
Social psychology
Psychology
Biology
Paleontology
Neuroscience
Authors
Taenyun Kim, Hayeon Song
Identifier
DOI:10.1016/j.tele.2021.101595
Abstract
Trust is essential to individuals’ perception, behavior, and evaluation of intelligent agents. Because trust is the primary motive for people to accept new technology, it is crucial to repair it when damaged. This study investigated how intelligent agents should apologize to recover trust, and how the effectiveness of the apology differs when the agent is human-like versus machine-like, drawing on two seemingly competing frameworks: the Computers-Are-Social-Actors paradigm and automation bias. A 2 (agent: human-like vs. machine-like) × 2 (apology attribution: internal vs. external) between-subjects experiment was conducted (N = 193) in the context of the stock market. Participants were presented with a scenario in which they made investment choices based on an artificial intelligence agent’s advice. To trace the trajectory of initial trust building, trust violation, and trust repair, we designed an investment game consisting of five rounds of eight investment choices (40 investment choices in total). The results show that trust was repaired more efficiently when a human-like agent apologized with internal rather than external attribution. The opposite pattern was observed among participants with machine-like agents: the external rather than the internal attribution condition showed better trust repair. Both theoretical and practical implications are discussed.