Research article
DOI: 10.1145/1571941.1572029

Including summaries in system evaluation

Published: 19 July 2009

ABSTRACT

In batch evaluation of retrieval systems, performance is calculated from predetermined relevance judgements applied to the list of documents returned by a system for a query. This evaluation paradigm, however, ignores the current standard operation of search systems, which require the user to view summaries of documents before reading the documents themselves.
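
To make the batch-evaluation step concrete, the following is a minimal Python sketch of the two metrics discussed in the paper, P@10 and average precision (MAP is the mean of average precision over queries), computed from a ranked list and a set of predetermined relevance judgements. The function and variable names are illustrative, not taken from the paper.

```python
# Minimal sketch of standard batch evaluation: P@10 and average precision (AP)
# computed from a ranked list of document ids and a set of judged-relevant ids.
# MAP is the mean of AP over all queries in the test collection.

def precision_at_k(ranking, relevant, k=10):
    """Fraction of the top-k retrieved documents that are judged relevant."""
    top_k = ranking[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

def average_precision(ranking, relevant):
    """Mean of the precision values at each rank where a relevant document occurs."""
    if not relevant:
        return 0.0
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant)

# Example: one query, documents d3 and d7 judged relevant.
ranking = ["d1", "d3", "d5", "d7", "d9"]
relevant = {"d3", "d7"}
print(precision_at_k(ranking, relevant, k=10))  # 0.2
print(average_precision(ranking, relevant))     # (1/2 + 2/4) / 2 = 0.5
```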

In this paper we modify the popular IR metrics MAP and P@10 to incorporate the summary-reading step of the search process, and study the effects on system rankings using TREC data. Based on a user study, we establish likely disagreements between relevance judgements of summaries and of documents, and use these values to seed simulations of summary relevance in the TREC data. Re-evaluating the runs submitted to the TREC Web Track, we find that the average correlation between the resulting system rankings and the original TREC rankings is 0.8 (Kendall τ), which is lower than the threshold commonly accepted for two system orderings to be considered equivalent. The system with the highest MAP in TREC generally remains amongst the highest-MAP systems when summaries are taken into account, but other systems become equivalent to the top-ranked system depending on the simulated summary relevance.
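
As an illustration only, the sketch below shows one plausible way to fold summary judgements into P@10 (crediting a retrieved document only when both its summary and the document itself are judged relevant) and how Kendall's τ can be used to compare the resulting system ordering against the original one. This particular modification, the helper names, and the toy scores are assumptions made for illustration; they are not the paper's exact formulation or data.

```python
# Illustrative sketch only: a summary-aware P@10 variant under the assumption
# that a document contributes to the score only when its summary is also judged
# relevant, plus a Kendall tau comparison of two system orderings.
from scipy.stats import kendalltau

def summary_precision_at_k(ranking, doc_relevant, summary_relevant, k=10):
    """P@10 variant: credit a document only if its summary is also judged relevant."""
    top_k = ranking[:k]
    return sum(1 for d in top_k if d in doc_relevant and d in summary_relevant) / k

# Compare two system orderings (e.g. original TREC scores vs. summary-aware
# scores) with Kendall's tau; values near 1 indicate near-identical orderings,
# and the paper reports an average of about 0.8 between the two. The scores
# below are invented for the example.
original_scores      = {"sysA": 0.31, "sysB": 0.28, "sysC": 0.25, "sysD": 0.20}
summary_aware_scores = {"sysA": 0.27, "sysB": 0.29, "sysC": 0.21, "sysD": 0.18}
systems = sorted(original_scores)
tau, _ = kendalltau([original_scores[s] for s in systems],
                    [summary_aware_scores[s] for s in systems])
print(round(tau, 2))  # 0.67 for this toy example
```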

Given that system orderings alter when summaries are taken into account, the small additional effort required to judge summaries as well as documents (19 seconds versus 88 seconds per judgement, on average, in our data) is worth undertaking when constructing test collections.

Published in

SIGIR '09: Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval
July 2009, 896 pages
ISBN: 9781605584836
DOI: 10.1145/1571941
Copyright © 2009 ACM

Publisher: Association for Computing Machinery, New York, NY, United States