Keywords
Task (project management), Workload, Reliability (semiconductor), Psychology, Dictation, Validity, Computer science, Applied psychology, Cognitive psychology, Psychometrics, Developmental psychology, Power (physics), Physics, Management, Quantum mechanics, Economics, Speech recognition, Operating systems
Authors
Yimin Tong, Christian D. Schunn, Hong Wang
Identifier
DOI:10.1016/j.stueduc.2022.101233
Abstract
The number of raters is theoretically central to peer assessment reliability and validity, yet it has rarely been studied. Further, requiring each student to assess more peers’ documents not only increases the number of evaluations per document but also increases assessor workload, which can degrade performance. Moreover, task complexity is likely a moderating factor, influencing both workload and validity. This study examined whether changing the number of required peer assessments per student (and thus the number of raters per document) affected peer assessment reliability and validity for tasks at different levels of task complexity. 181 students completed, and provided peer assessments for, tasks at three levels of task complexity: low complexity (dictation), medium complexity (oral imitation), and high complexity (writing). Adequate validity of peer assessments was observed for all three task complexities at low reviewing loads. However, the impacts of increasing reviewing load varied by outcome (reliability vs. validity) and by task complexity.