Keywords
Computer science
Human-computer interaction
Set (abstract data type)
Quality (philosophy)
Modality (human-computer interaction)
Domain (mathematical analysis)
Domain (mathematics)
Dialogue system
User experience design
Multimedia
World Wide Web
Mathematical analysis
Philosophy
Mathematics
Epistemology
Dialog box
Pure mathematics
Programming language
Authors
Thiemo Wambsganß, Naim Zierau, Matthias Söllner, Tanja Käser, Kenneth R. Koedinger, Jan Marco Leimeister
Source
Journal: Proceedings of the ACM on Human-Computer Interaction
[Association for Computing Machinery]
Date: 2022-11-07
Volume/Issue: 6 (CSCW2): 1-27
Citations: 5
Abstract
Conversational agents (CAs) provide opportunities for improving the interaction in evaluation surveys. To investigate whether and how a user-centered conversational evaluation tool affects users' response quality and their experience, we built EVA, a novel conversational course evaluation tool for educational scenarios. In a field experiment with 128 students, we compared EVA against a static web survey. Our results confirm prior findings from the literature about the positive effect of conversational evaluation tools in the domain of education. We then investigated the differences between a voice-based and a text-based conversational interaction with EVA in the same experimental setup. Contrary to our expectations, students using the voice-based interaction answered with higher information quality but a lower quantity of information compared to the text-based modality. Our findings indicate that using a conversational CA (voice- or text-based) results in higher response quality and a better user experience compared to a static web survey interface.