Research Article
DOI: 10.1145/2393347.2393399

In the eye of the beholder: employing statistical analysis and eye tracking for analyzing abstract paintings

Published: 29 October 2012

ABSTRACT

Most artworks are explicitly created to evoke a strong emotional response. Over the centuries, art movements have employed different techniques to achieve the emotional expression conveyed by an artwork, yet people have consistently been able to read the emotional message even of the most abstract paintings. Can a machine learn what makes an artwork emotional? In this work, we consider a set of 500 abstract paintings from the Museum of Modern and Contemporary Art of Trento and Rovereto (MART), where each painting was scored as carrying a positive or negative response on a Likert scale of 1-7. We employ a state-of-the-art recognition system to learn which statistical patterns are associated with positive and negative emotions. Additionally, we dissect the classification machinery to determine which parts of an image evoke which emotions. This opens new opportunities to research why a specific painting is perceived as emotional. We also demonstrate how the quantification of evidence for positive and negative emotions can be used to predict the way in which people observe paintings.
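The learning setup described in the abstract — per-painting Likert scores binarized into positive versus negative emotional responses, fed to a discriminative classifier over image features — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the feature vectors here are random placeholders for the image descriptors the recognition system would extract, the dataset is synthetic rather than the MART collection, and the choice of a linear SVM and of discarding the neutral midpoint (score 4) are assumptions made for the sketch.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the dataset: 500 paintings, each with a
# feature vector (random here; a real system would use image
# descriptors) and a Likert score from 1 (negative) to 7 (positive).
n_paintings, n_features = 500, 64
features = rng.normal(size=(n_paintings, n_features))
likert = rng.integers(1, 8, size=n_paintings)  # scores in 1..7

# Binarize into negative (<4) vs. positive (>4) responses,
# discarding the neutral midpoint, then train a linear SVM.
mask = likert != 4
X, y = features[mask], (likert[mask] > 4).astype(int)

clf = LinearSVC(dual=False)
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f}")
```

With random features the cross-validated accuracy hovers near chance; the point of the sketch is the data flow from ordinal emotion ratings to a binary classification problem, which is the form of supervision the abstract describes.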


Published in

MM '12: Proceedings of the 20th ACM international conference on Multimedia
October 2012, 1584 pages
ISBN: 9781450310895
DOI: 10.1145/2393347
Copyright © 2012 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 995 of 4,171 submissions, 24%

