Abstract
In the first edition of this volume, we painted a gloomy picture of the state of the art in evaluating error detection systems. At that time, unlike other areas of NLP, there was no shared task or repository to establish agreed-upon standards for evaluation. While researchers in this field still often find themselves using proprietary or licensed corpora that cannot be made available to the community as a whole, three shared tasks have since been sponsored, so researchers now have the opportunity to compare results on at least some shared training and testing materials. The Helping Our Own (HOO) shared task was piloted in 2011 [Dale and Kilgarriff, 2011a] and was held again in 2012 [Dale et al., 2012]. Grammatical error correction was the featured task at CoNLL 2013 [Ng et al., 2013].
© 2014 Springer Nature Switzerland AG
Cite this chapter
Leacock, C., Chodorow, M., Gamon, M., Tetreault, J. (2014). Evaluating Error Detection Systems. In: Automated Grammatical Error Detection for Language Learners, Second Edition. Synthesis Lectures on Human Language Technologies. Springer, Cham. https://doi.org/10.1007/978-3-031-02153-4_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-01025-5
Online ISBN: 978-3-031-02153-4