Authors
Stephanie Link, Mohaddeseh Mehrzad, Mohammad Rahimi
Identifier
DOI:10.1080/09588221.2020.1743323
Abstract
Recent years have witnessed an increasing interest in the use of automated writing evaluation (AWE) in second language writing classrooms. This increase is partially due to the belief that AWE can assist teachers by allowing them to devote more feedback to higher-level (HL) writing skills, such as content and organization, while the technology addresses lower-level (LL) skills, such as grammar. As is speculated, student revisions will then be positively impacted. However, little evidence has supported these claims, calling into question the impact of AWE on teaching and learning. The current study explored these claims by comparing two second language writing classes that were assigned to either an AWE + teacher feedback condition or a teacher-only-feedback condition. Findings suggest that using AWE as a complement to teacher feedback did not have a significant impact on the amount of HL teacher feedback, but the teacher who did not use AWE tended to provide a greater amount of LL feedback than AWE alone. Furthermore, students seemed to revise the teacher’s LL feedback more frequently than LL feedback from the computer. Interestingly, students retained their improvement in accuracy in the long-term when they had access to AWE, but students who did not have access appeared to have lower retention. We explain the relevance of our findings in relation to an argument-based validation framework to align our work with state-of-the-art research in the field and contribute to a broader discussion about how AWE can be best provided to support second language writing development.