Rasch model
Generalizability theory
Language proficiency
Task (project management)
Computer science
Psychology
Test (biology)
Item response theory
Rating scale
Consistency (knowledge base)
Natural language processing
Artificial intelligence
Mathematics education
Psychometrics
Developmental psychology
Paleontology
Economics
Management
Biology
Clinical psychology
Authors
Claudia Harsch,André Rupp
Identifiers
DOI:10.1080/15434303.2010.535575
Abstract
The Common European Framework of Reference (CEFR; Council of Europe, 2001) provides a competency model that is increasingly used as a point of reference to compare language examinations. Nevertheless, aligning examinations to the CEFR proficiency levels remains a challenge. In this article, we propose a new, level-centered approach to designing and aligning writing tasks in line with the CEFR levels. Much work has been done on assessing writing via tasks spanning several levels of proficiency, but little research exists on a level-specific approach, where one task targets one specific proficiency level. In our study, situated in a large-scale assessment project where such a level-specific approach was employed, we investigate the influence of the design factors (tasks, assessment criteria, raters, and student proficiency) on the variability of ratings, using descriptive statistics, generalizability theory, and multifaceted Rasch modeling. Results show that the level-specific approach yields plausible inferences about task difficulty, rater harshness, rating criteria difficulty, and student distribution. Moreover, Rasch analyses show a high level of consistency between a priori task classifications in terms of CEFR levels and empirical task difficulty estimates. This allows for a test-centered approach to standard setting by suggesting empirically grounded cut-scores in line with the CEFR proficiency levels targeted by the tasks.