Keywords: Generalizability theory; Dichotomy; Competence (human resources); Item response theory; Structural equation modeling; Psychology; Variance (accounting); Categorization; Trait; Moderation; Cognitive psychology; Computer science; Epistemology; Social psychology; Artificial intelligence; Mathematics; Psychometrics; Statistics; Machine learning; Developmental psychology; Accounting; Philosophy; Business; Programming language
Authors
Sigrid Blömeke, Jan-Eric Gustafsson, Richard J. Shavelson
Source
Journal: Zeitschrift für Psychologie
[Hogrefe Publishing Group]
Date: 2015-01-01
Volume/Issue: 223 (1): 3-13
Citations: 920
Identifiers
DOI:10.1027/2151-2604/a000194
Abstract
In this paper, the state of research on the assessment of competencies in higher education is reviewed. Fundamental conceptual and methodological issues are clarified by showing that current controversies are built on misleading dichotomies. By systematically sketching conceptual controversies, competing competence definitions are unpacked (analytic/trait vs. holistic/real-world performance) and commonplaces are identified. Disagreements are also highlighted. Similarly, competing statistical approaches to assessing competencies, namely item-response theory (latent trait) versus generalizability theory (sampling error variance), are unpacked. The resulting framework moves beyond dichotomies and shows how the different approaches complement each other. Competence is viewed along a continuum from traits that underlie perception, interpretation, and decision-making skills, which in turn give rise to observed behavior in real-world situations. Statistical approaches are also viewed along a continuum from linear to nonlinear models that serve different purposes. Item response theory (IRT) models may be used for scaling item responses and modeling structural relations, and generalizability theory (GT) models pinpoint sources of measurement error variance, thereby enabling the design of reliable measurements. The proposed framework suggests multiple new research studies and may serve as a “grand” structural model.
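The abstract's contrast between the two statistical traditions can be made concrete: generalizability theory partitions observed-score variance into person, item, and residual components to gauge measurement error, while item response theory maps a latent trait nonlinearly onto response probabilities. The following sketch (an illustrative example with simulated data, not material from the paper; all names and values are assumptions) estimates variance components for a crossed persons × items G-study and shows a Rasch item response function for the IRT side.

```python
# Illustrative sketch (assumed example, not from the paper): a persons x items
# G-study variance decomposition (GT) next to a Rasch item response function (IRT).
import numpy as np

rng = np.random.default_rng(0)
n_p, n_i = 200, 10  # persons, items

# Simulate scores: grand mean + person effect + item effect + residual
person = rng.normal(0, 1.0, size=(n_p, 1))   # true sigma^2_person = 1.00
item = rng.normal(0, 0.5, size=(1, n_i))     # true sigma^2_item   = 0.25
resid = rng.normal(0, 0.8, size=(n_p, n_i))  # true sigma^2_resid  = 0.64
X = 5.0 + person + item + resid

# Two-way ANOVA mean squares (crossed design, no replication)
grand = X.mean()
ms_p = n_i * ((X.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
ms_i = n_p * ((X.mean(axis=0) - grand) ** 2).sum() / (n_i - 1)
ss_res = ((X - X.mean(axis=1, keepdims=True)
             - X.mean(axis=0, keepdims=True) + grand) ** 2).sum()
ms_res = ss_res / ((n_p - 1) * (n_i - 1))

# Variance components recovered from expected mean squares
var_res = ms_res
var_p = (ms_p - ms_res) / n_i
var_i = (ms_i - ms_res) / n_p

# Generalizability coefficient for an n_i-item average (relative decisions):
# this is how GT "pinpoints sources of measurement error variance"
g_coef = var_p / (var_p + var_res / n_i)
print(f"sigma2_p={var_p:.2f}, sigma2_i={var_i:.2f}, sigma2_res={var_res:.2f}")
print(f"G coefficient ({n_i} items): {g_coef:.2f}")

# IRT end of the continuum: a Rasch model maps the same latent trait theta
# to the probability of a correct response on an item with difficulty b
def rasch(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))
```

The two pieces are complementary in the sense the abstract describes: the G-study tells you how many items (or raters, occasions) are needed for a reliable score, while the Rasch function scales item responses onto a latent-trait continuum.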