Preference
Trustworthiness
Computer science
Artificial intelligence
Data science
Knowledge management
Human–computer interaction
Psychology
Internet privacy
Microeconomics
Economics
Authors
Mike Seymour, Lingyao Yuan, Kai Riemer, Alan R. Dennis
Identifier
DOI:10.1287/isre.2022.0203
Abstract
Practice- and policy-oriented abstract: Companies are increasingly deploying highly realistic digital human agents (DHAs) controlled by advanced AI for online customer service, tasks typically handled by chatbots. We conducted four experiments to assess users’ perceptions (trustworthiness, affinity, and willingness to work with) of and behaviors toward DHAs, using quantitative surveys, qualitative interviews, direct observations, and neurophysiological measurements. Our studies involved four DHAs: two commercial products (found to be immature) and two future-focused ones (where participants believed human-controlled DHAs were AI-driven). In the first study, comparing perceptions of a DHA, a chatbot, and a human agent based on descriptions alone revealed few differences between the DHA and the chatbot. The second study, involving actual use of a commercial DHA, showed that participants found it uncanny, robotic, or difficult to converse with. The third and fourth studies used a “Wizard of Oz” design, with participants believing a human-controlled DHA was AI-driven. Results showed a preference for human agents via video conferencing, but no significant differences between DHAs and human agents when visual fidelity was controlled. Current DHAs, despite communication issues, trigger more affinity than chatbots. When DHAs match human communication abilities, they are perceived similarly to human agents for simple tasks. This research also suggests DHAs may alleviate algorithm aversion.