
OmniEyes: Analysis and Synthesis of Artistically Painted Eyes

Authors: Gjorgji Strezoski, Rogier Knoester, Nanne van Noord, Marcel Worring

Published in: MultiMedia Modeling

Publisher: Springer International Publishing


Abstract

Faces in artistic paintings most often contain the same elements (eyes, nose, mouth, etc.) as faces in the real world, but they are not a photo-realistic transfer of physical visual content. The creative nuances that artists introduce in their work act as interference when facial detection models are used in the artistic domain. In this work we introduce models that can accurately detect, classify, and conditionally generate artistically painted eyes in portrait paintings. In addition, we introduce the OmniEyes Dataset, which captures the essence of painted eyes with annotated patches from 250K artistic paintings and their metadata. We evaluate our approach on inpainting, out-of-context eye generation, and classification on portrait paintings from the OmniArt dataset. We conduct a user study to further examine the quality of our generated samples, assess their aesthetic aspects, and provide quantitative and qualitative results for our model's performance.
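
The detection step summarized above (locating painted eyes before they can be classified or regenerated) can be illustrated with a short sketch. The snippet below is a minimal illustration only, not the authors' pipeline: it crops eye patches from a portrait image using dlib's stock 68-point facial landmark model, and the model file path, the margin parameter, and the helper name extract_eye_patches are assumptions made for this example.

import cv2
import dlib
import numpy as np

# The 68-point landmark model ships separately from dlib and must be downloaded first.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"  # assumed local path

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

# In the 68-point scheme, indices 36-41 cover the left eye and 42-47 the right eye.
LEFT_EYE = range(36, 42)
RIGHT_EYE = range(42, 48)

def extract_eye_patches(image_path, margin=10):
    """Return cropped eye patches from a portrait image, one per detected eye."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    patches = []
    for face in detector(gray, 1):  # upsample once to catch smaller faces
        landmarks = predictor(gray, face)
        for eye in (LEFT_EYE, RIGHT_EYE):
            pts = np.array([(landmarks.part(i).x, landmarks.part(i).y) for i in eye],
                           dtype=np.int32)
            x, y, w, h = cv2.boundingRect(pts)
            patch = img[max(y - margin, 0):y + h + margin,
                        max(x - margin, 0):x + w + margin]
            patches.append(patch)
    return patches

# Usage: patches = extract_eye_patches("portrait.jpg")

Landmark detectors of this kind are trained on photographs, so on heavily stylized portraits they may miss faces or misplace the eye region, which is exactly the kind of interference the abstract describes.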
Metadata
Title
OmniEyes: Analysis and Synthesis of Artistically Painted Eyes
Authors
Gjorgji Strezoski
Rogier Knoester
Nanne van Noord
Marcel Worring
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-37731-1_51