Chatbot
Sympathy
Empathy
Uncanny valley
Psychology
Social media
Advice (programming)
Proof by contradiction
Expression (computer science)
Social psychology
Internet privacy
Perception
Computer science
World Wide Web
Interpretation (philosophy)
Neuroscience
Programming language
Authors
Bingjie Liu,S. Shyam Sundar
Source
Journal: Cyberpsychology, Behavior, and Social Networking
[Mary Ann Liebert]
Date: 2018-10-01
Volume/Issue: 21 (10): 625-636
Citations: 265
Identifier
DOI:10.1089/cyber.2018.0110
Abstract
When we ask a chatbot for advice about a personal problem, should it simply provide informational support and refrain from offering emotional support? Or should it show sympathy and empathize with our situation? Although expression of caring and understanding is valued in supportive human communication, do we want the same from a chatbot, or do we reject it because of its artificiality and uncanniness? To answer this question, we conducted two experiments with a chatbot providing online medical advice about a sensitive personal issue. In Study 1, participants (N = 158) simply read a dialogue between a chatbot and a human user. In Study 2, participants (N = 88) interacted with a real chatbot. We tested the effect of three types of empathic expression (sympathy, cognitive empathy, and affective empathy) on individuals' perceptions of the service and the chatbot. Data reveal that expression of sympathy and empathy is favored over unemotional provision of advice, in support of the Computers Are Social Actors (CASA) paradigm. This is particularly true for users who are initially skeptical about machines possessing social cognitive capabilities. Theoretical, methodological, and practical implications are discussed.