DOI: 10.1145/1835449.1835512

A comparison of general vs personalised affective models for the prediction of topical relevance

Published: 19 July 2010

ABSTRACT

Information retrieval systems face a number of challenges, originating mainly from the semantic gap problem. Implicit feedback techniques have been employed in the past to address many of these issues. Although this was a step in the right direction, a need to personalise and tailor the search experience to user-specific needs has become evident. In this study we examine ways of personalising affective models trained on facial expression data. Using personalised data we adapt these models to individual users and compare their performance to that of a general model. The main goal is to determine whether users' behavioural differences affect the models' ability to determine topical relevance and whether personalisation can improve their accuracy. For modelling relevance we extract a set of features from the facial expression data and classify them using Support Vector Machines. Our initial evaluation indicates that accounting for individual differences through personalisation yields, in most cases, a noticeable improvement in the models' performance.
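The abstract describes the pipeline only at a high level: features extracted from facial expression data are fed to Support Vector Machine classifiers, and a general model trained on pooled data is compared against models adapted to individual users. The exact features, kernel, and evaluation protocol are not given here, so the following is only a minimal sketch of that comparison in Python with scikit-learn; the compare_models helper, its parameters, and the RBF-kernel configuration are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def compare_models(X, y, users, test_size=0.3, seed=0):
    """Return (general_accuracy, mean_personalised_accuracy).

    X     : (n_samples, n_features) array of facial-expression feature vectors
    y     : (n_samples,) binary topical-relevance labels
    users : (n_samples,) participant id for each sample
    """
    # General model: pool every participant's data into a single SVM.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=seed)
    general = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
    general_acc = accuracy_score(y_te, general.predict(X_te))

    # Personalised models: one SVM per participant, trained and evaluated
    # only on that participant's own samples.
    per_user_acc = []
    for u in np.unique(users):
        Xu, yu = X[users == u], y[users == u]
        Xu_tr, Xu_te, yu_tr, yu_te = train_test_split(
            Xu, yu, test_size=test_size, stratify=yu, random_state=seed)
        model = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xu_tr, yu_tr)
        per_user_acc.append(accuracy_score(yu_te, model.predict(Xu_te)))

    return general_acc, float(np.mean(per_user_acc))
```

A per-participant split of this kind assumes each user contributes enough labelled samples for a stratified hold-out; with very sparse per-user data, leave-one-out or k-fold cross-validation would be a safer evaluation protocol.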


Published in

          SIGIR '10: Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
          July 2010
          944 pages
ISBN: 9781450301534
DOI: 10.1145/1835449

          Copyright © 2010 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

          Publisher

          Association for Computing Machinery

          New York, NY, United States


          Qualifiers

          • research-article

          Acceptance Rates

SIGIR '10 paper acceptance rate: 87 of 520 submissions (17%). Overall acceptance rate: 792 of 3,983 submissions (20%).
