ABSTRACT
Animated GIFs are everywhere on the Web. Our work focuses on the computational prediction of emotions perceived by viewers after they are shown animated GIF images. We evaluate our results on a dataset of over 3,800 animated GIFs gathered from MIT's GIFGIF platform, each with scores for 17 discrete emotions aggregated from over 2.5M user annotations; to our knowledge, this is the first computational evaluation of its kind for content-based prediction on animated GIFs. In addition, we advocate a conceptual paradigm in emotion prediction that emphasizes the importance of delineating distinct types of emotion and of being concrete about which emotion is targeted. One of our objectives is to systematically compare different types of content features for emotion prediction, including low-level, aesthetic, semantic, and face features. We also formulate a multi-task regression problem to evaluate whether viewer perceived emotion prediction benefits from jointly learning across emotion classes, compared to disjoint, independent learning.
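The abstract names the multi-task regression setup without specifying it. Below is a minimal, hypothetical sketch of what contrasting disjoint and joint learning over the 17 emotion dimensions could look like: independent per-emotion ridge regression versus a multi-task regression whose stacked weight matrix is coupled through an L2,1 penalty (joint feature selection across emotions), solved with proximal gradient descent. The feature dimension, regularization weights, and solver are illustrative placeholders, not the paper's actual configuration.

```python
# Sketch only: independent vs. jointly regularized regression over 17 emotion scores.
import numpy as np

def independent_ridge(X, Y, lam=1.0):
    """Disjoint learning: one ridge regressor per emotion column of Y.
    Solving the normal equations for all columns at once is equivalent to
    fitting each emotion independently."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ Y)           # d x T weight matrix

def multitask_l21(X, Y, lam=0.1, iters=500):
    """Joint learning: all emotion regressors share an L2,1 penalty on the
    rows of W, optimized by proximal gradient descent (ISTA)."""
    n, d = X.shape
    T = Y.shape[1]
    W = np.zeros((d, T))
    lr = 1.0 / np.linalg.norm(X, 2) ** 2         # conservative step, <= 1/Lipschitz
    for _ in range(iters):
        grad = X.T @ (X @ W - Y) / n             # gradient of mean squared loss
        V = W - lr * grad
        norms = np.linalg.norm(V, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - lr * lam / np.maximum(norms, 1e-12))
        W = V * shrink                           # row-wise soft-thresholding (L2,1 prox)
    return W

# Toy usage with placeholder shapes: ~3,800 GIFs, 128-d features, 17 emotions.
rng = np.random.default_rng(0)
X = rng.normal(size=(3800, 128))
Y = rng.normal(size=(3800, 17))
W_independent = independent_ridge(X, Y)
W_joint = multitask_l21(X, Y)
```

Comparing held-out regression error of `W_independent` and `W_joint` is one simple way to test whether sharing structure across emotion classes helps, which is the question the multi-task formulation is designed to answer.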