DOI: 10.1145/1031171.1031182

Scoring missing terms in information retrieval tasks

Published: 13 November 2004

ABSTRACT

A common approach to the vocabulary mismatch problem is to augment the original query using dictionaries and other lexical resources, and/or by examining pseudo-relevant documents. Either way, terms are added to form a new query that is then used to score all documents in a subsequent retrieval pass, and as a consequence the original query's focus may drift because of the newly added terms. We propose a new method to address the vocabulary mismatch problem, expanding original query terms only when necessary and complementing the user query for missing terms while scoring documents. This allows related semantic aspects to be included in a conservative and selective way, reducing the possibility of query drift. Our results using replacements for the missing query terms in modified document and passage retrieval methods show significant improvement over the original ones.
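The core idea of complementing the query only for the terms a document is missing can be sketched as follows. This is a minimal illustration, not the paper's actual model: the `RELATED` similarity table, the example terms, and the plain term-frequency scoring are all hypothetical stand-ins for the lexical-affinity statistics and retrieval formula the authors would use.

```python
from collections import Counter

# Hypothetical similarity table: for each query term, related terms with a
# similarity weight in [0, 1]. In practice these weights would come from a
# co-occurrence or lexical-affinity model; here they are hand-picked.
RELATED = {
    "car": [("automobile", 0.8), ("vehicle", 0.6)],
    "price": [("cost", 0.7)],
}

def score(query, doc_tokens):
    """Score a document, substituting only for the query terms it is missing.

    Terms present in the document contribute their raw term frequency.
    For a missing term, the best related term that does occur contributes
    its frequency discounted by the similarity weight, so a replacement
    never outweighs an exact match. The query itself is never expanded.
    """
    tf = Counter(doc_tokens)
    total = 0.0
    for term in query:
        if tf[term] > 0:            # term present: score as usual
            total += tf[term]
        else:                       # term missing: try the best replacement
            candidates = [sim * tf[rel]
                          for rel, sim in RELATED.get(term, [])
                          if tf[rel] > 0]
            if candidates:
                total += max(candidates)
    return total

doc = "the automobile cost is listed in the catalogue".split()
print(score(["car", "price"], doc))  # 0.8 (automobile) + 0.7 (cost) = 1.5
```

Because replacements are applied per document and only when a term is absent, documents that contain the original term are unaffected, which is what makes the expansion conservative compared with rewriting the query up front.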


Published in

CIKM '04: Proceedings of the thirteenth ACM international conference on Information and knowledge management
November 2004, 678 pages
ISBN: 1581138741
DOI: 10.1145/1031171

      Copyright © 2004 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

