DOI: 10.1145/2647868.2656408
Short paper

Predicting Viewer Perceived Emotions in Animated GIFs

Published: 03 November 2014

ABSTRACT

Animated GIFs are everywhere on the Web. Our work focuses on the computational prediction of the emotions viewers perceive when shown animated GIF images. We evaluate our results on a dataset of over 3,800 animated GIFs gathered from MIT's GIFGIF platform, each scored on 17 discrete emotions aggregated from over 2.5M user annotations; to our knowledge, this is the first computational evaluation of its kind for content-based prediction on animated GIFs. In addition, we advocate a conceptual paradigm for emotion prediction, showing that it is important to delineate distinct types of emotion and to be explicit about the emotion target. One of our objectives is to systematically compare different types of content features for emotion prediction, including low-level, aesthetic, semantic, and face features. We also formulate a multi-task regression problem to evaluate whether viewer-perceived emotion prediction benefits from jointly learning across emotion classes compared with disjoint, independent learning.
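The joint-versus-independent comparison described above can be sketched in a few lines: an independent baseline fits one ridge regressor per emotion class, while a joint variant additionally shrinks each class's weight vector toward the across-class average, coupling the 17 regression tasks. This is an illustrative toy, not the paper's actual formulation — the data dimensions, the mean-regularization scheme, and all hyperparameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 10, 17                      # samples, features, emotion classes (k=17 as in the paper)
X = rng.normal(size=(n, d))                # content features (synthetic stand-in)
Y = X @ rng.normal(size=(d, k)) + 0.1 * rng.normal(size=(n, k))  # emotion scores

lam = 1.0                                  # per-task ridge penalty (assumed)
mu = 50.0                                  # cross-task coupling strength (assumed)

# Independent learning: a separate closed-form ridge solution per emotion class.
W_ind = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Joint (multi-task) learning, mean-regularized: each task's weights are also
# pulled toward the average weight vector across tasks. Solved by alternating
# between updating the task weights and the shared mean.
W = W_ind.copy()
for _ in range(100):
    w_bar = W.mean(axis=1, keepdims=True)  # shared across-task mean
    W = np.linalg.solve(X.T @ X + (lam + mu) * np.eye(d),
                        X.T @ Y + mu * w_bar)

# The coupling shrinks per-class weights toward their common mean, so the
# joint solution's spread across tasks is smaller than the independent one's.
spread_joint = np.linalg.norm(W - W.mean(axis=1, keepdims=True))
spread_ind = np.linalg.norm(W_ind - W_ind.mean(axis=1, keepdims=True))
```

Structural multi-task penalties of this flavor (and richer ones, e.g. low-rank or group-sparse) are the kind of joint-learning machinery the paper's multi-task regression evaluates against independent per-class models.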


Published in

MM '14: Proceedings of the 22nd ACM International Conference on Multimedia
November 2014, 1310 pages
ISBN: 9781450330633
DOI: 10.1145/2647868
Copyright © 2014 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance rates

MM '14 paper acceptance rate: 55 of 286 submissions (19%). Overall acceptance rate: 995 of 4,171 submissions (24%).
