Conversation
Reputation
Harm
Computer science
Service (business)
Internet privacy
Customer service
Psychology
Knowledge management
Human–computer interaction
Social psychology
Business
Marketing
Communication
Sociology
Social science
Authors
G. Mark Grimes,Ryan M. Schuetzler,Justin Scott Giboney
Identifier
DOI:10.1016/j.dss.2021.113515
Abstract
Artificial intelligence is increasingly integrated into many aspects of human life. One particular form of AI is the conversational agent (CA), such as Siri, Alexa, and the chatbots used for customer service on websites and other information systems. It is widely accepted that humans treat systems as social actors. Leveraging this bias, companies sometimes attempt to pass off a CA as a human customer service representative. Beyond the ethical and legal questions surrounding this practice, the benefits and drawbacks of a CA pretending to be human remain unclear because they have been little studied. While more human-like interactions can improve outcomes, users who discover that the CA is not human may react negatively, which can cause reputational harm to the company. In this research we use Expectation Violation Theory to explain what happens when users have high or low expectations of a conversation. We conducted an experiment with 175 participants in which some participants were told they were interacting with a CA while others were told they were interacting with a human. We further divided the groups so that some participants interacted with a CA with low conversational capability while others interacted with a CA with high conversational capability. The results show that the expectations users form before the interaction change how they evaluate the CA, beyond the CA's actual performance. These findings provide guidance not just for developers of conversational agents, but also for developers of other technologies where users may be uncertain of a system's capabilities.