Authors
Alister Cumming,Robert Kantor,Kyoko Baba,Usman Erdosy,Keanre Eouanzoui,Mark James
Abstract
We assessed whether and how the discourse written for prototype integrated tasks (involving writing in response to print or audio source texts) field tested for Next Generation TOEFL® differs from the discourse written for independent essays (i.e., the TOEFL Essay®). We selected 216 compositions written for six tasks by 36 examinees in a field test—representing score levels 3, 4, and 5 on the TOEFL Essay—then coded the texts for lexical and syntactic complexity, grammatical accuracy, argument structure, orientations to evidence, and verbatim uses of source text. Analyses with non-parametric MANOVAs followed a three (task type: TOEFL Essay, writing in response to a reading passage, writing in response to a listening passage) by three (English proficiency level: score levels 3, 4, and 5 on the TOEFL Essay) within-subjects factorial design. The discourse produced for the integrated writing tasks differed significantly from the discourse produced for the independent essay on the following variables: lexical complexity (text length, word length, ratio of different words to total words written), syntactic complexity (number of words per T-unit, number of clauses per T-unit), rhetoric (quality of propositions, claims, data, warrants, and oppositions in argument structure), and pragmatics (orientations to source evidence with respect to self or others and phrasing of the message as declarations, paraphrases, or summaries). Across the three English proficiency levels, significant differences appeared for grammatical accuracy as well as all indicators of lexical complexity (text length, word length, ratio of different words to total words written), one indicator of syntactic complexity (words per T-unit), one rhetorical aspect (quality of claims in argument structure), and two pragmatic aspects (expression of self as voice, messages phrased as summaries).