Authors
Claudia Leacock, Martin Chodorow, Michael Gamon, Joel Tetreault
Source
Journal: Synthesis Lectures on Human Language Technologies
[Morgan & Claypool]
Date: 2014-01-01
Pages: 31-45
Identifier
DOI: 10.1007/978-3-031-02153-4_4
Abstract
In the first edition of this volume, we painted a gloomy picture of the state-of-the-art in evaluating error detection systems. At that time, unlike other areas of NLP, there was no shared task/repository to establish agreed-upon standards for evaluation. While it is still the case that researchers working in this field often find themselves using proprietary or licensed corpora that cannot be made available to the community as a whole, three shared tasks have now been sponsored so that researchers have the opportunity to compare results on at least some shared training and testing materials. The Helping Our Own (HOO) shared task was piloted in 2011 [Dale and Kilgarriff, 2011a] and was held again in 2012 [Dale et al., 2012]. Grammatical error correction was the featured task at CoNLL 2013 [Ng et al., 2013].