Crowdsourcing
Task (project management)
Work (physics)
Computer science
Quality (philosophy)
Online assessment
Applied psychology
Psychology
Engineering
World Wide Web
Pedagogy
Formative assessment
Mechanical engineering
Epistemology
Philosophy
Systems engineering
Authors
Steven P. Dow, Anand Kulkarni, Scott R. Klemmer, Björn Hartmann
Identifier
DOI:10.1145/2145204.2145355
Abstract
Micro-task platforms provide massively parallel, on-demand labor. However, it can be difficult to reliably achieve high-quality work because online workers may behave irresponsibly, misunderstand the task, or lack necessary skills. This paper investigates whether timely, task-specific feedback helps crowd workers learn, persevere, and produce better results. We investigate this question through Shepherd, a feedback system for crowdsourced work. In a between-subjects study with three conditions, crowd workers wrote consumer reviews for six products they own. Participants in the None condition received no immediate feedback, consistent with most current crowdsourcing practices. Participants in the Self-assessment condition judged their own work. Participants in the External assessment condition received expert feedback. Self-assessment alone yielded better overall work than the None condition and helped workers improve over time. External assessment also yielded these benefits. Participants who received external assessment also revised their work more. We conclude by discussing interaction and infrastructure approaches for integrating real-time assessment into online work.