DOI: 10.3115/1073012.1073081

Text chunking using regularized Winnow

Tong Zhang, Fred Damerau, David Johnson
Published: 06 July 2001

ABSTRACT

Many machine learning methods have recently been applied to natural language processing tasks. Among them, the Winnow algorithm has been argued to be particularly suitable for NLP problems, due to its robustness to irrelevant features. In theory, however, Winnow may not converge on non-separable data. To remedy this problem, a modification called regularized Winnow has been proposed. In this paper, we apply this new method to text chunking. We show that it achieves state-of-the-art performance with significantly less computation than previous approaches.
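
For context, the following is a minimal sketch of the classic mistake-driven Winnow update that regularized Winnow builds on. It is an illustration only, not the regularized algorithm evaluated in the paper; the function name, the promotion factor alpha, and the threshold choice are illustrative assumptions.

    import numpy as np

    def train_winnow(X, y, alpha=1.5, n_epochs=10):
        """Classic Winnow on 0/1 feature vectors X with labels y in {0, 1}.

        Weights start at 1 and are updated multiplicatively, and only for
        features active in a mistaken example. Irrelevant features that are
        never active on mistakes keep their initial weight, which is the
        robustness property the abstract refers to.
        """
        n, d = X.shape
        w = np.ones(d)        # positive weights, one per feature
        theta = d / 2.0       # a common threshold choice
        for _ in range(n_epochs):
            for xi, yi in zip(X, y):
                pred = 1 if w @ xi >= theta else 0
                if pred != yi:               # mistake-driven update
                    if yi == 1:
                        w[xi == 1] *= alpha  # promote active features
                    else:
                        w[xi == 1] /= alpha  # demote active features
        return w, theta

On linearly separable data Winnow has a finite mistake bound, so this loop eventually stops updating; on non-separable data it can oscillate indefinitely. That is the convergence problem the abstract raises, and regularized Winnow addresses it by deriving the updates from a regularized optimization objective rather than from raw mistakes.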

References

  1. S. P. Abney. 1991. Parsing by chunks. In R. C. Berwick, S. P. Abney, and C. Tenny, editors, Principle-Based Parsing: Computation and Psycholinguistics, pages 257--278. Kluwer, Dordrecht.]] Google ScholarGoogle ScholarDigital LibraryDigital Library
  2. Eric Brill. 1994. Some advances in rule-based part of speech tagging. In Proc. AAAI 94, pages 722--727.]] Google ScholarGoogle ScholarDigital LibraryDigital Library
  3. I. Dagan, Y. Karov, and D. Roth. 1997. Mistake-driven learning in text categorization. In Proceedings of the Second Conference on Empirical Methods in NLP.]]Google ScholarGoogle Scholar
  4. C. Gentile and M. K. Warmuth. 1998. Linear hinge loss and average margin. In Proc. NIPS'98.]] Google ScholarGoogle ScholarDigital LibraryDigital Library
  5. A. Grove and D. Roth. 2001. Linear concepts and hidden variables. Machine Learning, 42:123--141.]]Google ScholarGoogle ScholarDigital LibraryDigital Library
  6. R. Khardon, D. Roth, and L. Valiant. 1999. Relational learning for NLP using linear threshold elements. In Proceedings IJCAI-99.]] Google ScholarGoogle ScholarDigital LibraryDigital Library
  7. Taku Kudoh and Yuji Matsumoto. 2000. Use of support vector learning for chunk identification. In Proc. CoNLL-2000 and LLL-2000, pages 142--144.]] Google ScholarGoogle ScholarDigital LibraryDigital Library
  8. N. Littlestone. 1988. Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning, 2:285--318.]] Google ScholarGoogle ScholarCross RefCross Ref
  9. Michael McCord. 1989. Slot grammar: a system for simple construction of practical natural language grammars. Natural Language and Logic, pages 118--145.]] Google ScholarGoogle ScholarDigital LibraryDigital Library
  10. Vasin Punyakanok and Dan Roth. 2001. The use of classifiers in sequential inference. In Todd K. Leen, Thomas G. Dietterich, and Volker Tresp, editors, Advances in Neural Information Processing Systems 13, pages 995--1001. MIT Press.]]Google ScholarGoogle Scholar
  11. Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the conll-2000 shared tasks: Chunking. In Proc. CoNLL-2000 and LLL-2000, pages 127--132.]] Google ScholarGoogle ScholarDigital LibraryDigital Library
  12. Hans van Halteren. 2000. Chunking with wpdv models. In Proc. CoNLL-2000 and LLL-2000, pages 154--156.]] Google ScholarGoogle ScholarDigital LibraryDigital Library
  13. Tong Zhang. 2001. Regularized winnow methods. In Advances in Neural Information Processing Systems 13, pages 703--709.]]Google ScholarGoogle Scholar
  1. Text chunking using regularized Winnow

Published in

ACL '01: Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics
July 2001, 562 pages

Publisher: Association for Computational Linguistics, United States

Overall acceptance rate: 85 of 443 submissions, 19%
