ABSTRACT
In batch evaluation of retrieval systems, performance is calculated based on predetermined relevance judgements applied to a list of documents returned by the system for a query. This evaluation paradigm, however, ignores how current search systems are typically used: the user views a summary of each returned document before deciding whether to read the document itself.
In this paper we modify the popular IR metrics MAP and P@10 to incorporate the summary reading step of the search process, and study the effects on system rankings using TREC data. Based on a user study, we establish likely disagreements between relevance judgements of summaries and of documents, and use these values to seed simulations of summary relevance in the TREC data. Re-evaluating the runs submitted to the TREC Web Track, we find that the average correlation between the resulting system rankings and the original TREC rankings is 0.8 (Kendall τ), which is below the level commonly accepted for two system orderings to be considered equivalent. The system with the highest MAP in TREC generally remains amongst the highest-MAP systems when summaries are taken into account, but other systems become equivalent to the top-ranked system depending on the simulated summary relevance.
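To make the idea concrete, the sketch below shows one plausible way of folding summary judgements into a precision-style metric and then comparing the resulting system orderings with Kendall τ. This is a minimal illustration with invented judgements and runs, not the exact modification or data used in the paper: here a retrieved document is assumed to contribute to the metric only when both its summary and the document itself are judged relevant.

```python
from itertools import combinations

def precision_at_k(ranked_docs, doc_rel, sum_rel=None, k=10):
    """P@k over a ranked list; summary-aware when sum_rel is supplied."""
    hits = 0
    for doc in ranked_docs[:k]:
        relevant = doc_rel.get(doc, 0) > 0
        if sum_rel is not None:
            # Assumption: a relevant document only counts if its summary is
            # also judged relevant (i.e. the user would actually click it).
            relevant = relevant and sum_rel.get(doc, 0) > 0
        hits += relevant
    return hits / k

def kendall_tau(order_a, order_b):
    """Kendall's tau between two orderings of the same systems (no ties)."""
    pos_a = {s: i for i, s in enumerate(order_a)}
    pos_b = {s: i for i, s in enumerate(order_b)}
    concordant = discordant = 0
    for s, t in combinations(order_a, 2):
        agree = (pos_a[s] - pos_a[t]) * (pos_b[s] - pos_b[t])
        if agree > 0:
            concordant += 1
        elif agree < 0:
            discordant += 1
    n = len(order_a)
    return (concordant - discordant) / (n * (n - 1) / 2)

# Toy judgements and runs (entirely illustrative, not TREC data).
doc_rel = {"d1": 1, "d2": 1, "d3": 1, "d4": 0}   # document judgements
sum_rel = {"d1": 0, "d2": 1, "d3": 0, "d4": 0}   # simulated summary judgements
runs = {"sysA": ["d1", "d3", "d4", "d2"],
        "sysB": ["d2", "d4", "d1", "d3"]}

doc_only = sorted(runs, key=lambda s: precision_at_k(runs[s], doc_rel, k=2),
                  reverse=True)
with_sums = sorted(runs, key=lambda s: precision_at_k(runs[s], doc_rel, sum_rel, k=2),
                   reverse=True)
print(doc_only, with_sums, kendall_tau(doc_only, with_sums))
```

In this toy example the document-only ordering and the summary-aware ordering reverse (τ = −1), illustrating on a small scale how simulated summary relevance can reorder systems that look well separated under document judgements alone.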
Given that system orderings change when summaries are taken into account, the comparatively small effort required to judge summaries in addition to documents (19 seconds versus 88 seconds on average in our data) is worth undertaking when constructing test collections.