Chatbot
Popularity
Transparency (behavior)
Computer science
Perception
Recommender system
Internet privacy
World Wide Web
Psychology
Social psychology
Computer security
Neuroscience
Authors
Daricia Wilkinson,Öznur Alkan,Q. Vera Liao,Massimiliano Mattetti,Inge Vejsbjerg,Bart P. Knijnenburg,Elizabeth Daly
Source
Journal: ACM Transactions on Information Systems
Date: 2021-10-22
Volume/Issue: 39 (4): 1-21
Citations: 26
Abstract
Chatbots or conversational recommenders have gained increasing popularity as a new paradigm for Recommender Systems (RS). Prior work on RS showed that providing explanations can improve transparency and trust, which are critical for the adoption of RS. Their interactive and engaging nature makes conversational recommenders a natural platform to not only provide recommendations but also justify the recommendations through explanations. The recent surge of interest in explainable AI enables diverse styles of justification, and also invites questions on how styles of justification impact user perception. In this article, we explore the effect of "why" justifications and "why not" justifications on users' perceptions of explainability and trust. We developed and tested a movie-recommendation chatbot that provides users with different types of justifications for the recommended items. Our online experiment (n = 310) demonstrates that the "why" justifications (but not the "why not" justifications) have a significant impact on users' perception of the conversational recommender. In particular, "why" justifications increase users' perception of system transparency, which impacts perceived control and trusting beliefs, and in turn influences users' willingness to depend on the system's advice. Finally, we discuss the design implications for decision-assisting chatbots.