Understanding the underlying processes of aesthetic perception is one of the ultimate goals in empirical aesthetics. While deep learning and convolutional neural networks (CNNs) have already arrived in the area of aesthetic rating of art and photographs, few attempts have been made to apply CNNs as the underlying model of aesthetic perception. The information-processing architecture of CNNs closely matches the visual processing pipeline of the human visual system. It therefore seems reasonable to exploit such models to gain better insight into the universal processes that drive aesthetic perception. This work presents first results supporting this claim by analyzing known common statistical properties of visual art, such as sparsity and self-similarity, with the help of CNNs. We report observed differences in the responses of individual layers between art and non-art images, in both forward and backward (simulation) processing, which may open new directions of research in empirical aesthetics.
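The abstract mentions sparsity as one of the statistical properties analyzed in CNN layer responses. As a minimal illustration of how such a property can be quantified, the following sketch computes the Hoyer (2004) sparseness measure on hypothetical activation maps; the shapes and data here are assumptions for illustration, not the paper's actual setup, and in practice the inputs would be activations extracted from individual layers of a trained CNN.

```python
import numpy as np

def hoyer_sparseness(x):
    """Hoyer (2004) sparseness in [0, 1]:
    0 for a uniform vector, 1 for a one-hot vector."""
    x = np.abs(np.ravel(x)).astype(float)
    n = x.size
    l1 = x.sum()
    l2 = np.sqrt((x ** 2).sum())
    if l2 == 0.0:
        return 0.0
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

# Hypothetical activation maps (channels x height x width);
# stand-ins for responses of a single CNN layer.
rng = np.random.default_rng(0)
dense_act = rng.uniform(size=(8, 7, 7))   # densely active layer
sparse_act = np.zeros((8, 7, 7))
sparse_act[0, 3, 3] = 1.0                 # nearly silent layer, one strong unit

print(hoyer_sparseness(dense_act))
print(hoyer_sparseness(sparse_act))       # exactly 1.0 for a one-hot response
```

Comparing such per-layer sparseness values between art and non-art images is one simple way to operationalize the kind of layer-wise analysis the abstract describes.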