Research article
DOI: 10.1145/1835449.1835526

Collecting high quality overlapping labels at low cost

Published: 19 July 2010

ABSTRACT

This paper studies the quality of the human labels used to train search engines' rankers. Our specific focus is on the performance improvements obtained by using overlapping relevance labels, that is, by collecting multiple human judgments for each training sample. The paper explores whether, when, and for which samples one should obtain overlapping training labels, as well as how many labels per sample are needed. The proposed selective labeling scheme collects additional labels only for a subset of training samples, specifically for those that are labeled relevant by a judge. Our experiments show that this labeling scheme improves the NDCG of two Web search rankers on several real-world test sets, with a low labeling overhead of around 1.4 labels per sample. It also outperforms several other methods of using overlapping labels, such as simple k-overlap, majority vote, and taking the highest label. Finally, the paper presents a study of how many overlapping labels are needed to obtain the best improvement in retrieval accuracy.
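
To make the selective scheme concrete, below is a minimal Python sketch of the idea as the abstract describes it: each sample receives one judgment, and additional judgments are requested only when the first judge marks the sample relevant. The judging function, the five-level relevance scale, the max_extra parameter, and the majority-vote aggregation are all hypothetical stand-ins; the abstract does not specify the judging interface, the label scale, or how the overlapping labels are resolved.

    import random
    from collections import Counter

    # Hypothetical five-level relevance scale; the paper's actual scale
    # is not given in the abstract.
    LABELS = ["Bad", "Fair", "Good", "Excellent", "Perfect"]
    RELEVANT = set(LABELS[1:])  # treat anything above "Bad" as relevant

    def get_judgment(sample):
        """Stand-in for asking one human judge; simulated with a random pick."""
        return random.choice(LABELS)

    def selective_labels(sample, max_extra=2):
        """Collect overlapping labels only when the first judge says relevant.

        Non-relevant first judgments are accepted as-is (one label); relevant
        ones trigger up to max_extra additional judgments. The paper reports
        an average overhead of about 1.4 labels per sample on its data; the
        figure here depends entirely on the simulated label distribution.
        """
        labels = [get_judgment(sample)]
        if labels[0] in RELEVANT:
            labels += [get_judgment(sample) for _ in range(max_extra)]
        return labels

    def aggregate(labels):
        """Resolve overlapping labels by majority vote (illustrative only;
        the abstract lists majority vote as one of several baselines)."""
        return Counter(labels).most_common(1)[0][0]

    if __name__ == "__main__":
        random.seed(0)
        samples = [f"query-doc-{i}" for i in range(10)]
        collected = {s: selective_labels(s) for s in samples}
        for s, labels in collected.items():
            print(s, labels, "->", aggregate(labels))
        overhead = sum(len(v) for v in collected.values()) / len(samples)
        print(f"average labels per sample: {overhead:.2f}")

Note that with a uniformly random judge, four of the five labels count as relevant, so the simulated overhead is far above the paper's 1.4; in practice the overhead is governed by how often the first judge marks a sample relevant.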


    Published in

      SIGIR '10: Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval
      July 2010
      944 pages
      ISBN: 9781450301534
      DOI: 10.1145/1835449

      Copyright © 2010 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States




      Acceptance Rates

      SIGIR '10 paper acceptance rate: 87 of 520 submissions (17%). Overall acceptance rate: 792 of 3,983 submissions (20%).
