Keywords
Translation; Certification; Inter-rater reliability; Psychology; Fluency; Reliability (semiconductor); Rating scale; Applied psychology; Computer science; Developmental psychology; Mathematics education; Power (physics); Physics; Quantum mechanics; Political science; Law; Programming language
Source
Journal: Interpreting (John Benjamins Publishing Company)
Date: 2015-09-03
Volume/Issue: 17 (2): 255-283
Citations: 42
Identifier
DOI: 10.1075/intp.17.2.05han
Abstract
Rater-mediated performance assessment (RMPA) is a critical component of interpreter certification testing systems worldwide. Given the acknowledged rater variability in RMPA and the high-stakes nature of certification testing, it is crucial to ensure rater reliability in interpreter certification performance testing (ICPT). However, a review of current ICPT practice indicates that rigorous research on rater reliability is lacking. Against this background, the present study reports on use of multifaceted Rasch measurement (MFRM) to identify the degree of severity/leniency in different raters’ assessments of simultaneous interpretations (SIs) by 32 interpreters in an experimental setting. Nine raters specifically trained for the purpose were asked to evaluate four English-to-Chinese SIs by each of the interpreters, using three 8-point rating scales (information content, fluency, expression). The source texts differed in speed and in the speaker’s accent (native vs non-native). Rater-generated scores were then subjected to MFRM analysis, using the FACETS program. The following general trends emerged: 1) homogeneity statistics showed that not all raters were equally severe overall; and 2) bias analyses showed that a relatively large proportion of the raters had significantly biased interactions with the interpreters and the assessment criteria. Implications for practical rating arrangements in ICPT, and for rater training, are discussed.
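The multifaceted Rasch measurement (MFRM) model underlying the FACETS analysis described above models the log-odds of an interpreter receiving adjacent rating categories as a function of interpreter ability, criterion difficulty, and rater severity. The following is a minimal illustrative sketch of that rating-scale formulation, not code from the study; the function name, parameter values, and threshold values are hypothetical, chosen only to show how an 8-point scale (as used for information content, fluency, and expression) maps onto seven category thresholds.

```python
import math

def mfrm_category_probs(theta, delta, alpha, taus):
    """Category probabilities under a many-facet rating scale model (sketch).

    theta: interpreter ability (logits)
    delta: assessment-criterion difficulty (logits)
    alpha: rater severity (positive = more severe)
    taus:  K-1 category thresholds for a K-category scale
    """
    # Cumulative exponent for category k is the sum over thresholds h <= k of
    # (theta - delta - alpha - tau_h); the lowest category has exponent 0.
    exponents = [0.0]
    running = 0.0
    for tau in taus:
        running += theta - delta - alpha - tau
        exponents.append(running)
    exps = [math.exp(e) for e in exponents]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical example: an able interpreter (theta=1.5) rated by a lenient
# rater (alpha=-0.5) on an 8-point scale -> probability mass shifts upward.
probs = mfrm_category_probs(theta=1.5, delta=0.0, alpha=-0.5,
                            taus=[-2, -1, 0, 1, 2, 3, 4])
```

In this formulation, a severe rater (larger alpha) shifts probability toward lower score categories exactly as if the interpreter were less able, which is why MFRM can separate rater severity from interpreter ability when each interpretation is scored by multiple raters.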