Some(what) grand challenges for information retrieval

Published: 01 June 2008

Abstract

Although we see the positive results of information retrieval research embodied throughout the Internet, on our computer desktops, and in many other aspects of daily life, people still have a wide variety of difficulties in finding information that is useful in resolving their problematic situations. This suggests that substantial challenges for research in IR remain. As early as 1988, on the occasion of receiving the ACM SIGIR Gerard Salton Award, Karen Spärck Jones suggested that substantial progress in information retrieval was likely to come only through addressing issues associated with the (actual or potential) users of IR systems, rather than continuing IR research's almost exclusive focus on document representation and on matching and ranking techniques. In recent years her message appears to have begun to be heard, yet we still have relatively few substantive results that respond to it. In this paper, I identify and discuss a few challenges for IR research that are associated with users, and which I believe, if properly addressed, are likely to lead to substantial increases in the usefulness, usability and pleasurability of information retrieval.

References

  1. Arapakis, I. & Jose, J. (2008). Affective feedback: An investigation of the role of emotions during an information seeking process. In SIGIR 2008, Proceedings of the 31st Annual ACM SIGIR International Conference on Research and Development in Information Retrieval (in press). New York: ACM.
  2. Borlund, P. (2003). The IIR evaluation model: A framework for evaluation of interactive information retrieval systems. Information Research, 8(3), paper no. 152. Available at: http://informationr.net/ir/8-3/paper152.html
  3. Budzik, J. & Hammond, K. J. (2000). User interactions with everyday applications as context for just-in-time information access. In IUI 2000, Proceedings of the ACM Conference on Intelligent User Interfaces (pp. 44--51). New York: ACM.
  4. Cool, C. & Belkin, N. J. (2002). A classification of interactions with information. In Proceedings of the Fourth International Conference on Conceptions of Library and Information Science (pp. 1--15). Greenwood Village, CO: Libraries Unlimited.
  5. Fuhr, N. (2008). A probability ranking principle for interactive information retrieval. Information Retrieval, 11, 251--265.
  6. Ingwersen, P. & Järvelin, K. (2005). The turn: Integration of information seeking and retrieval in context. Dordrecht: Springer.
  7. Järvelin, K., Price, S. L., Delcambre, L. M. L. & Nielsen, M. L. (2008). Discounted cumulated gain based evaluation of multiple-query IR sessions. In ECIR 2008, Proceedings of the 2008 European Conference on Information Retrieval (pp. 4--15). Berlin: Springer Verlag.
  8. Kelly, D. (2005). Implicit feedback: Using behavior to infer relevance. In A. Spink & C. Cole (Eds.), New Directions in Cognitive Information Retrieval (pp. 169--186). Berlin: Springer Verlag.
  9. Kelly, D. & Belkin, N. J. (2004). Display time as implicit feedback: Understanding task effects. In SIGIR 2004, Proceedings of the 27th Annual ACM International Conference on Research and Development in Information Retrieval (pp. 377--384). New York: ACM.
  10. Kelly, D. & Teevan, J. (2003). Implicit feedback for inferring user preference: A bibliography. SIGIR Forum, 37(2), 18--28.
  11. Kuhlthau, C. C. (1991). Inside the search process: Information seeking from the user's perspective. Journal of the American Society for Information Science, 42, 361--371.
  12. Nahl, D. & Bilal, D. (Eds.) (2007). Information and emotion: The emergent affective paradigm in information behavior research and theory. Medford, NJ: Information Today for ASIST.
  13. Olston, C. & Chi, E. H. (2003). ScentTrails: Integrating browsing and searching on the Web. ACM Transactions on Computer-Human Interaction, 10(3), 177--197.
  14. Robertson, S. E. & Hancock-Beaulieu, M. (1992). On the evaluation of IR systems. Information Processing and Management, 28(4), 457--466.
  15. Saracevic, T. (1997). Users lost: Reflections on the past, future and limits of information science. SIGIR Forum, 31(2), 16--27.
  16. Spärck Jones, K. (1988). A look back and a look forward. In SIGIR '88, Proceedings of the 11th Annual ACM SIGIR International Conference on Research and Development in Information Retrieval (pp. 13--29). New York: ACM.
  17. Spärck Jones, K. (2005). Meta-reflections on TREC. In E. M. Voorhees & D. K. Harman (Eds.), TREC: Experiment and Evaluation in Information Retrieval (pp. 421--448). Cambridge, MA: MIT Press.
  18. Teevan, J., Dumais, S. T. & Liebling, D. J. (2008). To personalize or not to personalize. In SIGIR 2008, Proceedings of the 31st Annual ACM SIGIR International Conference on Research and Development in Information Retrieval (in press). New York: ACM.
  19. Turpin, A. H. & Hersh, W. (2001). Why batch and user evaluations do not give the same results. In SIGIR 2001, Proceedings of the 24th Annual ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 225--231). New York: ACM.
  20. Turpin, A. & Scholer, F. (2006). User performance versus precision measures for simple search tasks. In SIGIR 2006, Proceedings of the 29th Annual ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 11--18). New York: ACM.
  21. White, R. W. & Kelly, D. (2006). A study on the effects of personalization and task information on implicit feedback performance. In CIKM '06, Proceedings of the Conference on Information and Knowledge Management. New York: ACM.


Published in

ACM SIGIR Forum, Volume 42, Issue 1 (June 2008), 76 pages
ISSN: 0163-5840
DOI: 10.1145/1394251
Copyright © 2008 Author
Publisher: Association for Computing Machinery, New York, NY, United States
