Published in: Neural Computing and Applications 1/2021

12 January 2021 | S.I.: Neural Networks in Art, Sound and Design

Artificial Neural Networks and Deep Learning in the Visual Arts: a review

Authors: Iria Santos, Luz Castro, Nereida Rodriguez-Fernandez, Álvaro Torrente-Patiño, Adrián Carballal


Abstract

In this article, we perform an exhaustive analysis of the use of Artificial Neural Networks and Deep Learning in the Visual Arts. We begin by tracing how Artificial Intelligence has changed over the years, and then examine in depth the latest work on prediction, classification, evaluation, generation, and identification with Artificial Neural Networks across the different Visual Arts. While we highlight the contributions in photography and pictorial art, we also cover applications in 3D modeling, video games, architecture, and comics. The results of the investigations discussed show that the use of Artificial Neural Networks in the Visual Arts continues to evolve and has recently experienced significant growth. To complement the text, we include a glossary and a table with information about the most commonly employed image datasets.


115.
Zurück zum Zitat Rodriguez CS, Lech M, Pirogova E (2018) Classification of style in fine-art paintings using transfer learning and weighted image patches. In: 2018 12th international conference on signal processing and communication systems (ICSPCS). IEEE, pp 1–7 Rodriguez CS, Lech M, Pirogova E (2018) Classification of style in fine-art paintings using transfer learning and weighted image patches. In: 2018 12th international conference on signal processing and communication systems (ICSPCS). IEEE, pp 1–7
117.
Zurück zum Zitat Florea C, Toca C, Gieseke F (2017) Artistic movement recognition by boosted fusion of color structure and topographic description. In: 2017 IEEE winter conference on applications of computer vision (WACV). IEEE, pp 569–577 Florea C, Toca C, Gieseke F (2017) Artistic movement recognition by boosted fusion of color structure and topographic description. In: 2017 IEEE winter conference on applications of computer vision (WACV). IEEE, pp 569–577
118.
Zurück zum Zitat Mooers CN (1977) Preventing software piracy. Computer 10(3):29–30 Mooers CN (1977) Preventing software piracy. Computer 10(3):29–30
120.
Zurück zum Zitat Jangtjik KA, Yeh M-C, Hua K-L (2016) Artist-based classification via deep learning with multi-scale weighted pooling. In: Proceedings of the 24th ACM international conference on Multimedia. pp 635–639 Jangtjik KA, Yeh M-C, Hua K-L (2016) Artist-based classification via deep learning with multi-scale weighted pooling. In: Proceedings of the 24th ACM international conference on Multimedia. pp 635–639
121.
Zurück zum Zitat Elgammal A, Kang Y, Den Leeuw M (2018) Picasso, matisse, or a fake? Automated analysis of drawings at the stroke level for attribution and authentication. In: Thirty-second AAAI conference on artificial intelligence Elgammal A, Kang Y, Den Leeuw M (2018) Picasso, matisse, or a fake? Automated analysis of drawings at the stroke level for attribution and authentication. In: Thirty-second AAAI conference on artificial intelligence
122.
Zurück zum Zitat Chen J, Deng A (2018) Comparison of machine learning techniques for artist identification Chen J, Deng A (2018) Comparison of machine learning techniques for artist identification
123.
Zurück zum Zitat Sandoval C, Pirogova E, Lech M (2019) Two-stage deep learning approach to the classification of fine-art paintings. IEEE Access 7:41770–41781 Sandoval C, Pirogova E, Lech M (2019) Two-stage deep learning approach to the classification of fine-art paintings. IEEE Access 7:41770–41781
124.
Zurück zum Zitat Kim Y-M (2018) What makes the difference in visual styles of comics: from classification to style transfer. In: 2018 3rd international conference on computational intelligence and applications (ICCIA). IEEE, pp 181–185 Kim Y-M (2018) What makes the difference in visual styles of comics: from classification to style transfer. In: 2018 3rd international conference on computational intelligence and applications (ICCIA). IEEE, pp 181–185
125.
Zurück zum Zitat Young-Min K (2019) Feature visualization in comic artist classification using deep neural networks. J Big Data 6(1):56 Young-Min K (2019) Feature visualization in comic artist classification using deep neural networks. J Big Data 6(1):56
126.
Zurück zum Zitat Furusawa C, Hiroshiba K, Ogaki K, Odagiri Y (2017) Comicolorization: semi-automatic manga colorization. In: SIGGRAPH Asia 2017 Technical Briefs. pp 1–4 Furusawa C, Hiroshiba K, Ogaki K, Odagiri Y (2017) Comicolorization: semi-automatic manga colorization. In: SIGGRAPH Asia 2017 Technical Briefs. pp 1–4
127.
Zurück zum Zitat Yoshimura Y, Cai B, Wang Z, Ratti C (2019) Deep learning architect: classification for architectural design through the eye of artificial intelligence. In: International conference on computers in urban planning and urban management. Springer, pp 249–265 Yoshimura Y, Cai B, Wang Z, Ratti C (2019) Deep learning architect: classification for architectural design through the eye of artificial intelligence. In: International conference on computers in urban planning and urban management. Springer, pp 249–265
128.
129.
Zurück zum Zitat Zhang J, Shih KJ, Elgammal A, Tao A, Catanzaro B (2019) Graphical contrastive losses for scene graph parsing. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 11535–11543 Zhang J, Shih KJ, Elgammal A, Tao A, Catanzaro B (2019) Graphical contrastive losses for scene graph parsing. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 11535–11543
130.
Zurück zum Zitat Gu J, Zhao H, Lin Z, Li S, Cai J, Ling M (2019) Scene graph generation with external knowledge and image reconstruction. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 1969–1978 Gu J, Zhao H, Lin Z, Li S, Cai J, Ling M (2019) Scene graph generation with external knowledge and image reconstruction. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 1969–1978
131.
Zurück zum Zitat Zhang J, Kalantidis Y, Rohrbach M, Paluri M, Elgammal A, Elhoseiny M (2019) Large-scale visual relationship understanding. In: Proceedings of the AAAI conference on artificial intelligence, vol 33. pp 9185–9194 Zhang J, Kalantidis Y, Rohrbach M, Paluri M, Elgammal A, Elhoseiny M (2019) Large-scale visual relationship understanding. In: Proceedings of the AAAI conference on artificial intelligence, vol 33. pp 9185–9194
132.
Zurück zum Zitat Tian X, Dong Z, Yang K, Mei T (2015) Query-dependent aesthetic model with deep learning for photo quality assessment. IEEE Trans Multimed 17(11):2035–2048 Tian X, Dong Z, Yang K, Mei T (2015) Query-dependent aesthetic model with deep learning for photo quality assessment. IEEE Trans Multimed 17(11):2035–2048
133.
Zurück zum Zitat Luo W, Wang X, Tang X (2011) Content-based photo quality assessment. In: 2011 International conference on computer vision. IEEE, pp 2206–2213 Luo W, Wang X, Tang X (2011) Content-based photo quality assessment. In: 2011 International conference on computer vision. IEEE, pp 2206–2213
135.
Zurück zum Zitat Machado P, Romero J, Nadal M, Santos A, Correia J, Carballal A (2015) Computerized measures of visual complexity. Acta Psychol 160:43–57 Machado P, Romero J, Nadal M, Santos A, Correia J, Carballal A (2015) Computerized measures of visual complexity. Acta Psychol 160:43–57
136.
Zurück zum Zitat Haykin S (1994) Neural networks: a comprehensive foundation. Prentice Hall PTR, LondonMATH Haykin S (1994) Neural networks: a comprehensive foundation. Prentice Hall PTR, LondonMATH
137.
Zurück zum Zitat Denzler J, Rodner E, Simon M (2016) Convolutional neural networks as a computational model for the underlying processes of aesthetics perception. In: European conference on computer vision. Springer, pp 871–887 Denzler J, Rodner E, Simon M (2016) Convolutional neural networks as a computational model for the underlying processes of aesthetics perception. In: European conference on computer vision. Springer, pp 871–887
138.
Zurück zum Zitat Amirshahi SA, Denzler J, Redies C, Jenaesthetics—a public dataset of paintings for aesthetic research. Computer Vision Group [Google Scholar], Jena Amirshahi SA, Denzler J, Redies C, Jenaesthetics—a public dataset of paintings for aesthetic research. Computer Vision Group [Google Scholar], Jena
139.
Zurück zum Zitat Redies C, Amirshahi SA, Koch M, Denzler J (2012) Phog-derived aesthetic measures applied to color photographs of artworks, natural scenes and objects. In: European conference on computer vision. Springer, pp 522–531 Redies C, Amirshahi SA, Koch M, Denzler J (2012) Phog-derived aesthetic measures applied to color photographs of artworks, natural scenes and objects. In: European conference on computer vision. Springer, pp 522–531
140.
Zurück zum Zitat Amirshahi SA, Redies C, Denzler J (2013) How self-similar are artworks at different levels of spatial resolution?. In: Proceedings of the symposium on computational aesthetics, pp 93–100 Amirshahi SA, Redies C, Denzler J (2013) How self-similar are artworks at different levels of spatial resolution?. In: Proceedings of the symposium on computational aesthetics, pp 93–100
141.
Zurück zum Zitat Carballal A, Santos A, Romero J, Machado P, Correia J, Castro L (2018) Distinguishing paintings from photographs by complexity estimates. Neural Comput Appl 30(6):1957–1969 Carballal A, Santos A, Romero J, Machado P, Correia J, Castro L (2018) Distinguishing paintings from photographs by complexity estimates. Neural Comput Appl 30(6):1957–1969
142.
Zurück zum Zitat Prasad M, Jwala Lakshmamma B, Chandana AH, Komali K, Manoja M, Rajesh Kumar P, Sasi Kiran P (2018) An efficient classification of flower images with convolutional neural networks. Int J Eng Technol 7(11):384–391 Prasad M, Jwala Lakshmamma B, Chandana AH, Komali K, Manoja M, Rajesh Kumar P, Sasi Kiran P (2018) An efficient classification of flower images with convolutional neural networks. Int J Eng Technol 7(11):384–391
143.
Zurück zum Zitat Collomosse J, Bui T, Wilber MJ, Fang C, Jin H (2017) Sketching with style: visual search with sketches and aesthetic context. In: Proceedings of the IEEE international conference on computer vision, pp 2660–2668 Collomosse J, Bui T, Wilber MJ, Fang C, Jin H (2017) Sketching with style: visual search with sketches and aesthetic context. In: Proceedings of the IEEE international conference on computer vision, pp 2660–2668
144.
Zurück zum Zitat Eitz M, Hays J, Alexa M (2012) How do humans sketch objects? ACM Trans Graphics TOG 31(4):1–10 Eitz M, Hays J, Alexa M (2012) How do humans sketch objects? ACM Trans Graphics TOG 31(4):1–10
145.
Zurück zum Zitat Lu NGM, Deformsketchnet: Deformable convolutional networks for sketch classification Lu NGM, Deformsketchnet: Deformable convolutional networks for sketch classification
146.
Zurück zum Zitat Shen X, Efros AA, Mathieu A, Discovering visual patterns in art collections with spatially-consistent feature learning. arXiv:1903.02678 Shen X, Efros AA, Mathieu A, Discovering visual patterns in art collections with spatially-consistent feature learning. arXiv:​1903.​02678
149.
Zurück zum Zitat En S, Nicolas S, Petitjean C, Jurie F, Heutte L (2016) New public dataset for spotting patterns in medieval document images. J Electron Imaging 26(1):011010 En S, Nicolas S, Petitjean C, Jurie F, Heutte L (2016) New public dataset for spotting patterns in medieval document images. J Electron Imaging 26(1):011010
150.
Zurück zum Zitat Fernando B, Tommasi T, Tuytelaars T (2015) Location recognition over large time lags. Comput Vis Image Underst 139:21–28 Fernando B, Tommasi T, Tuytelaars T (2015) Location recognition over large time lags. Comput Vis Image Underst 139:21–28
151.
Zurück zum Zitat Philbin J, Chum O, Isard M, Sivic J, Zisserman A (2007) Object retrieval with large vocabularies and fast spatial matching. In IEEE conference on computer vision and pattern recognition. IEEE, pp 1–8 Philbin J, Chum O, Isard M, Sivic J, Zisserman A (2007) Object retrieval with large vocabularies and fast spatial matching. In IEEE conference on computer vision and pattern recognition. IEEE, pp 1–8
152.
Zurück zum Zitat Castellano G, Vessio G (2020) Towards a tool for visual link retrieval and knowledge discovery in painting datasets. In: Italian research conference on digital libraries. Springer, pp 105–110 Castellano G, Vessio G (2020) Towards a tool for visual link retrieval and knowledge discovery in painting datasets. In: Italian research conference on digital libraries. Springer, pp 105–110
154.
Zurück zum Zitat Datta R, Joshi D, Li J, Wang JZ (2006) Studying aesthetics in photographic images using a computational approach. In: European conference on computer vision. Springer, pp 288–301 Datta R, Joshi D, Li J, Wang JZ (2006) Studying aesthetics in photographic images using a computational approach. In: European conference on computer vision. Springer, pp 288–301
155.
Zurück zum Zitat Ke Y, Tang X, Jing F (2006) The design of high-level features for photo quality assessment. In: 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR’06), vol 1. IEEE, pp 419–426 Ke Y, Tang X, Jing F (2006) The design of high-level features for photo quality assessment. In: 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR’06), vol 1. IEEE, pp 419–426
156.
Zurück zum Zitat Wong L-K, Low K-L (2009) Saliency-enhanced image aesthetics class prediction. In: 2009 16th IEEE international conference on image processing (ICIP). pp 997–1000 Wong L-K, Low K-L (2009) Saliency-enhanced image aesthetics class prediction. In: 2009 16th IEEE international conference on image processing (ICIP). pp 997–1000
157.
Zurück zum Zitat Marchesotti L, Perronnin F, Larlus D, Csurka G (2011) Assessing the aesthetic quality of photographs using generic image descriptors. In: 2011 international conference on computer vision. IEEE, pp 1784–1791 Marchesotti L, Perronnin F, Larlus D, Csurka G (2011) Assessing the aesthetic quality of photographs using generic image descriptors. In: 2011 international conference on computer vision. IEEE, pp 1784–1791
158.
Zurück zum Zitat Wang W, Cai D, Wang L, Huang Q, Xu X, Li X (2016) Synthesized computational aesthetic evaluation of photos. Neurocomputing 172:244–252 Wang W, Cai D, Wang L, Huang Q, Xu X, Li X (2016) Synthesized computational aesthetic evaluation of photos. Neurocomputing 172:244–252
159.
Zurück zum Zitat Xia Y, Liu Z, Yan Y, Chen Y, Zhang L, Zimmermann R (2017) Media quality assessment by perceptual gaze-shift patterns discovery. IEEE Trans Multimedia 19(8):1811–1820 Xia Y, Liu Z, Yan Y, Chen Y, Zhang L, Zimmermann R (2017) Media quality assessment by perceptual gaze-shift patterns discovery. IEEE Trans Multimedia 19(8):1811–1820
160.
Zurück zum Zitat Tong H, Li M, Zhang H-J, He J, Zhang C (2004) Classification of digital photos taken by photographers or home users. In: Pacific-Rim conference on multimedia. Springer, pp 198–205 Tong H, Li M, Zhang H-J, He J, Zhang C (2004) Classification of digital photos taken by photographers or home users. In: Pacific-Rim conference on multimedia. Springer, pp 198–205
161.
Zurück zum Zitat Friedman J, Hastie T, Tibshirani R et al (2000) Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors). Ann Statist 28(2):337–407MathSciNetMATH Friedman J, Hastie T, Tibshirani R et al (2000) Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors). Ann Statist 28(2):337–407MathSciNetMATH
162.
Zurück zum Zitat Luo Y, Tang X (2008) Photo and video quality evaluation: focusing on the subject. In: European conference on computer vision. Springer, pp 386–399 Luo Y, Tang X (2008) Photo and video quality evaluation: focusing on the subject. In: European conference on computer vision. Springer, pp 386–399
163.
Zurück zum Zitat Wu O, Hu W, Gao J (2011) Learning to predict the perceived visual quality of photos. In: 2011 international conference on computer vision. IEEE, pp 225–232 Wu O, Hu W, Gao J (2011) Learning to predict the perceived visual quality of photos. In: 2011 international conference on computer vision. IEEE, pp 225–232
164.
Zurück zum Zitat Tan Y, Zhou Y, Li G, Huang A (2016) Computational aesthetics of photos quality assessment based on improved artificial neural network combined with an autoencoder technique. Neurocomputing 188:50–62 Tan Y, Zhou Y, Li G, Huang A (2016) Computational aesthetics of photos quality assessment based on improved artificial neural network combined with an autoencoder technique. Neurocomputing 188:50–62
165.
Zurück zum Zitat Gao F, Wang Y, Li P, Tan M, Yu J, Zhu Y (2017) Deepsim: deep similarity for image quality assessment. Neurocomputing 257:104–114 Gao F, Wang Y, Li P, Tan M, Yu J, Zhu Y (2017) Deepsim: deep similarity for image quality assessment. Neurocomputing 257:104–114
166.
Zurück zum Zitat Meng X, Gao F, Shi S, Zhu S, Zhu J (2018) Mlans: image aesthetic assessment via multi-layer aggregation networks. In: 2018 Eighth international conference on image processing theory, tools and applications (IPTA). IEEE, pp 1–6 Meng X, Gao F, Shi S, Zhu S, Zhu J (2018) Mlans: image aesthetic assessment via multi-layer aggregation networks. In: 2018 Eighth international conference on image processing theory, tools and applications (IPTA). IEEE, pp 1–6
167.
Zurück zum Zitat Talebi H, Milanfar P (2018) Nima: neural image assessment. IEEE Trans Image Process 27(8):3998–4011MathSciNetMATH Talebi H, Milanfar P (2018) Nima: neural image assessment. IEEE Trans Image Process 27(8):3998–4011MathSciNetMATH
169.
Zurück zum Zitat Verkoelen SD, Lamers MH, van der Putten P (2017) Exploring the exactitudes portrait series with restricted boltzmann machines. In: International conference on evolutionary and biologically inspired music and art. Springer, pp 321–337 Verkoelen SD, Lamers MH, van der Putten P (2017) Exploring the exactitudes portrait series with restricted boltzmann machines. In: International conference on evolutionary and biologically inspired music and art. Springer, pp 321–337
170.
Zurück zum Zitat Hinton GE, Salakhutdinov RR (2006) Reducing the dimensionality of data with neural networks. Science 313(5786):504–507MathSciNetMATH Hinton GE, Salakhutdinov RR (2006) Reducing the dimensionality of data with neural networks. Science 313(5786):504–507MathSciNetMATH
172.
Zurück zum Zitat Larson EC, Chandler DM (2010) Most apparent distortion: full-reference image quality assessment and the role of strategy. J Electron Imaging 19(1):011006 Larson EC, Chandler DM (2010) Most apparent distortion: full-reference image quality assessment and the role of strategy. J Electron Imaging 19(1):011006
173.
Zurück zum Zitat Ghadiyaram D, Bovik AC (2015) Massive online crowd sourced study of subjective and objective picture quality. IEEE Trans Image Process 25(1):372–387MATH Ghadiyaram D, Bovik AC (2015) Massive online crowd sourced study of subjective and objective picture quality. IEEE Trans Image Process 25(1):372–387MATH
174.
Zurück zum Zitat Jayaraman D, Mittal A, Moorthy AK, Bovik AC (2012) Objective quality assessment of multiply distorted images. In: 2012 Conference record of the forty sixth asilomar conference on signals, systems and computers (ASILOMAR). IEEE, pp 1693–1697 Jayaraman D, Mittal A, Moorthy AK, Bovik AC (2012) Objective quality assessment of multiply distorted images. In: 2012 Conference record of the forty sixth asilomar conference on signals, systems and computers (ASILOMAR). IEEE, pp 1693–1697
175.
Zurück zum Zitat Ponomarenko N, Ieremeiev O, Lukin V, Egiazarian K, Jin L, Astola J, Vozel B, Chehdi K, Carli M, Battisti F et al (2013) Color image database tid2013: peculiarities and preliminary results. In: European workshop on visual information processing (EUVIP). IEEE, pp 106–111 Ponomarenko N, Ieremeiev O, Lukin V, Egiazarian K, Jin L, Astola J, Vozel B, Chehdi K, Carli M, Battisti F et al (2013) Color image database tid2013: peculiarities and preliminary results. In: European workshop on visual information processing (EUVIP). IEEE, pp 106–111
176.
Zurück zum Zitat Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H, Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv:1704.04861 Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H, Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv:​1704.​04861
177.
Zurück zum Zitat Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 2818–2826 Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 2818–2826
178.
Zurück zum Zitat Sheikh HR, Sabir MF, Bovik AC (2006) A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans Image Process 15(11):3440–3451 Sheikh HR, Sabir MF, Bovik AC (2006) A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans Image Process 15(11):3440–3451
179.
Zurück zum Zitat Ponomarenko N, Jin L, Ieremeiev O, Lukin V, Egiazarian K, Astola J, Vozel B, Chehdi K, Carli M, Battisti F et al (2015) Image database tid2013: peculiarities, results and perspectives. Sig Process Image Commun 30:57–77 Ponomarenko N, Jin L, Ieremeiev O, Lukin V, Egiazarian K, Astola J, Vozel B, Chehdi K, Carli M, Battisti F et al (2015) Image database tid2013: peculiarities, results and perspectives. Sig Process Image Commun 30:57–77
180.
Zurück zum Zitat Ma K, Duanmu Z, Wu Q, Wang Z, Yong H, Li H, Zhang L (2016) Waterloo exploration database: new challenges for image quality assessment models. IEEE Trans Image Process 26(2):1004–1016MathSciNetMATH Ma K, Duanmu Z, Wu Q, Wang Z, Yong H, Li H, Zhang L (2016) Waterloo exploration database: new challenges for image quality assessment models. IEEE Trans Image Process 26(2):1004–1016MathSciNetMATH
181.
Zurück zum Zitat Carballal A, Perez R, Santos A, Castro L (2014) A complexity approach for identifying aesthetic composite landscapes. In: International conference on evolutionary and biologically inspired music and art. Springer, pp 50–61 Carballal A, Perez R, Santos A, Castro L (2014) A complexity approach for identifying aesthetic composite landscapes. In: International conference on evolutionary and biologically inspired music and art. Springer, pp 50–61
182.
Zurück zum Zitat Lu X, Lin Z, Jin H, Yang J, Wang JZ (2014) Rapid: rating pictorial aesthetics using deep learning. In: Proceedings of the 22nd ACM international conference on Multimedia. pp 457–466 Lu X, Lin Z, Jin H, Yang J, Wang JZ (2014) Rapid: rating pictorial aesthetics using deep learning. In: Proceedings of the 22nd ACM international conference on Multimedia. pp 457–466
183.
Zurück zum Zitat Zhou Y, Li G, Tan Y (2015) Computational aesthetics of photos quality assessment and classification based on artificial neural network with deep learning methods. Int J Signal Process Image Process Pattern Recognit 8(7):273–282 Zhou Y, Li G, Tan Y (2015) Computational aesthetics of photos quality assessment and classification based on artificial neural network with deep learning methods. Int J Signal Process Image Process Pattern Recognit 8(7):273–282
184.
Zurück zum Zitat Dong Z, Tian X (2015) Multi-level photo quality assessment with multi-view features. Neurocomputing 168:308–319 Dong Z, Tian X (2015) Multi-level photo quality assessment with multi-view features. Neurocomputing 168:308–319
185.
Zurück zum Zitat Campbell A, Ciesielksi V, Qin AK (2015) Feature discovery by deep learning for aesthetic analysis of evolved abstract images. In: International conference on evolutionary and biologically inspired music and art. Springer, pp 27–38 Campbell A, Ciesielksi V, Qin AK (2015) Feature discovery by deep learning for aesthetic analysis of evolved abstract images. In: International conference on evolutionary and biologically inspired music and art. Springer, pp 27–38
186.
Zurück zum Zitat Xu Q, D’Souza D, Ciesielski V (2007) Evolving images for entertainment. In: Proceedings of the 4th Australasian conference on Interactive entertainment. RMIT University, p 26 Xu Q, D’Souza D, Ciesielski V (2007) Evolving images for entertainment. In: Proceedings of the 4th Australasian conference on Interactive entertainment. RMIT University, p 26
187.
Zurück zum Zitat Wang W, Zhao M, Wang L, Huang J, Cai C, Xu X (2016) A multi-scene deep learning model for image aesthetic evaluation. Signal Process Image Commun 47:511–518 Wang W, Zhao M, Wang L, Huang J, Cai C, Xu X (2016) A multi-scene deep learning model for image aesthetic evaluation. Signal Process Image Commun 47:511–518
188.
Zurück zum Zitat Jin X, Chi J, Peng S, Tian Y, Ye C, Li X (2016) Deep image aesthetics classification using inception modules and fine-tuning connected layer. In: 2016 8th international conference on wireless communications and signal processing (WCSP). IEEE, pp 1–6 Jin X, Chi J, Peng S, Tian Y, Ye C, Li X (2016) Deep image aesthetics classification using inception modules and fine-tuning connected layer. In: 2016 8th international conference on wireless communications and signal processing (WCSP). IEEE, pp 1–6
189.
Zurück zum Zitat Kao Y, Huang K, Maybank S (2016) Hierarchical aesthetic quality assessment using deep convolutional neural networks. Signal Process Image Commun 47:500–510 Kao Y, Huang K, Maybank S (2016) Hierarchical aesthetic quality assessment using deep convolutional neural networks. Signal Process Image Commun 47:500–510
192.
Zurück zum Zitat Kong S, Shen X, Lin Z, Mech R, Fowlkes C (2016) Photo aesthetics ranking network with attributes and content adaptation. In: European conference on computer vision. Springer, pp 662–679 Kong S, Shen X, Lin Z, Mech R, Fowlkes C (2016) Photo aesthetics ranking network with attributes and content adaptation. In: European conference on computer vision. Springer, pp 662–679
193.
Zurück zum Zitat Tan Y, Tang P, Zhou Y, Luo W, Kang Y, Li G (2017) Photograph aesthetical evaluation and classification with deep convolutional neural networks. Neurocomputing 228:165–175 Tan Y, Tang P, Zhou Y, Luo W, Kang Y, Li G (2017) Photograph aesthetical evaluation and classification with deep convolutional neural networks. Neurocomputing 228:165–175
194.
Zurück zum Zitat Li Y-X, Pu Y-Y, Xu D, Qian W-H, Wang L-P (2017) Image aesthetic quality evaluation using convolution neural network embedded learning. Optoelectron Lett 13(6):471–475 Li Y-X, Pu Y-Y, Xu D, Qian W-H, Wang L-P (2017) Image aesthetic quality evaluation using convolution neural network embedded learning. Optoelectron Lett 13(6):471–475
195.
Zurück zum Zitat Lemarchand F et al (2017) From computational aesthetic prediction for images to films and online videos. AVANT. Pismo Awangardy Filozoficzno-Naukowej (S):69–78 Lemarchand F et al (2017) From computational aesthetic prediction for images to films and online videos. AVANT. Pismo Awangardy Filozoficzno-Naukowej (S):69–78
196.
Zurück zum Zitat Tzelepis C, Mavridaki E, Mezaris V, Patras I (2016) Video aesthetic quality assessment using kernel support vector machine with isotropic gaussian sample uncertainty (ksvm-igsu). In: 2016 IEEE international conference on image processing (ICIP). IEEE, pp 2410–2414 Tzelepis C, Mavridaki E, Mezaris V, Patras I (2016) Video aesthetic quality assessment using kernel support vector machine with isotropic gaussian sample uncertainty (ksvm-igsu). In: 2016 IEEE international conference on image processing (ICIP). IEEE, pp 2410–2414
198.
Zurück zum Zitat He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 770–778 He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 770–778
199.
Zurück zum Zitat Bianco S, Celona L, Napoletano P, Schettini R (2016) Predicting image aesthetics with deep learning. In: International conference on advanced concepts for intelligent vision systems. Springer, pp 117–125 Bianco S, Celona L, Napoletano P, Schettini R (2016) Predicting image aesthetics with deep learning. In: International conference on advanced concepts for intelligent vision systems. Springer, pp 117–125
200.
Zurück zum Zitat Zhou B, Lapedriza A, Xiao J, Torralba A, Oliva A (2014) Learning deep features for scene recognition using places database. In: Advances in neural information processing systems. pp 487–495 Zhou B, Lapedriza A, Xiao J, Torralba A, Oliva A (2014) Learning deep features for scene recognition using places database. In: Advances in neural information processing systems. pp 487–495
201. Lemarchand F (2018) Fundamental visual features for aesthetic classification of photographs across datasets. Pattern Recogn Lett 112:9–17
202. Zhang C, Zhu C, Xu X, Liu Y, Xiao J, Tillo T (2018) Visual aesthetic understanding: sample-specific aesthetic classification and deep activation map visualization. Signal Process Image Commun 67:12–21
204. Jin X, Wu L, Zhao G, Zhou X, Zhang X, Li X (2020) IDEA: a new dataset for image aesthetic scoring. Multimed Tools Appl 79(21):14341–14355
205. Jin X, Wu L, Zhao G, Zhou X, Zhang X, Li X (2018) Photo aesthetic scoring through spatial aggregation perception DCNN on a new IDEA dataset. In: International symposium on artificial intelligence and robotics. Springer, pp 41–50
206. Apostolidis K, Mezaris V (2019) Image aesthetics assessment using fully convolutional neural networks. In: International conference on multimedia modeling. Springer, pp 361–373
209. Sheng K, Dong W, Chai M, Wang G, Zhou P, Huang F, Hu B-G, Ji R, Ma C (2019) Revisiting image aesthetic assessment via self-supervised feature learning. arXiv:1911.11419
211. Cetinic E, Lipic T, Grgic S (2019) A deep learning perspective on beauty, sentiment, and remembrance of art. IEEE Access 7:73694–73710
215. Semmo A, Isenberg T, Döllner J (2017) Neural style transfer: a paradigm shift for image-based artistic rendering? In: Proceedings of the symposium on non-photorealistic animation and rendering. pp 1–13
216. Gatys L, Ecker AS, Bethge M (2015) Texture synthesis using convolutional neural networks. In: Advances in neural information processing systems. pp 262–270
217. Portilla J, Simoncelli EP (2000) A parametric texture model based on joint statistics of complex wavelet coefficients. Int J Comput Vis 40(1):49–70
219. Gatys LA, Ecker AS, Bethge M (2016) Image style transfer using convolutional neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 2414–2423
221. Chen Y-L, Hsu C-T (2016) Towards deep style transfer: a content-aware perspective. In: BMVC
223. Li C, Wand M (2016) Combining Markov random fields and convolutional neural networks for image synthesis. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 2479–2486
224. Joshi B, Stewart K, Shapiro D (2017) Bringing impressionism to life with neural style transfer in Come Swim. In: Proceedings of the ACM SIGGRAPH digital production symposium. pp 1–5
225. Chen Y, Lai Y-K, Liu Y-J (2017) Transforming photos to comics using convolutional neural networks. In: 2017 IEEE international conference on image processing (ICIP). IEEE, pp 2010–2014
226. Krishnan U, Sharma A, Chattopadhyay P. Feature fusion from multiple paintings for generalized artistic style transfer. Available at SSRN 3387817
229. Correia J, Martins T, Martins P, Machado P (2016) X-faces: the exploit is out there. In: Proceedings of the seventh international conference on computational creativity
230. Machado P, Correia J, Romero J (2012) Improving face detection. In: European conference on genetic programming. Springer, pp 73–84
231. Machado P, Correia J, Romero J (2012) Expression-based evolution of faces. In: International conference on evolutionary and biologically inspired music and art. Springer, pp 187–198
232. Correia J, Martins T, Machado P (2019) Evolutionary data augmentation in deep face detection. In: Proceedings of the genetic and evolutionary computation conference companion. pp 163–164
233. Machado P, Vinhas A, Correia J, Ekárt A (2015) Evolving ambiguous images. In: Twenty-fourth international joint conference on artificial intelligence
236. Colton S, Halskov J, Ventura D, Gouldstone I, Cook M, Ferrer BP (2015) The Painting Fool sees! New projects with the automated painter. In: ICCC. pp 189–196
237. Krzeczkowska A, El-Hage J, Colton S, Clark S (2010) Automated collage generation-with intent. In: ICCC. pp 36–40
238. Colton S (2008) Automatic invention of fitness functions with application to scene generation. In: Workshops on applications of evolutionary computation. Springer, pp 381–391
239. Colton S (2008) Experiments in constraint-based automated scene generation. In: Proceedings of the 5th international joint workshop on computational creativity. pp 127–136
240. Colton S, Ferrer BP (2012) No photos harmed/growing paths from seed: an exhibition. In: Proceedings of the symposium on non-photorealistic animation and rendering. pp 1–10
241. Colton S (2012) Evolving a library of artistic scene descriptors. In: International conference on evolutionary and biologically inspired music and art. Springer, pp 35–47
243. Colton S, Ventura D (2014) You can't know my mind: a festival of computational creativity. In: ICCC. pp 351–354
245. Radford A, Metz L, Chintala S (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434
246. Yu F, Seff A, Zhang Y, Song S, Funkhouser T, Xiao J (2015) LSUN: construction of a large-scale image dataset using deep learning with humans in the loop. arXiv:1506.03365
248. Krizhevsky A, Hinton G et al (2009) Learning multiple layers of features from tiny images
249. Tan WR, Chan CS, Aguirre HE, Tanaka K (2017) ArtGAN: artwork synthesis with conditional categorical GANs. In: 2017 IEEE international conference on image processing (ICIP). IEEE, pp 3760–3764
250. Elgammal A, Liu B, Elhoseiny M, Mazzone M (2017) CAN: creative adversarial networks, generating "art" by learning about styles and deviating from style norms. arXiv:1706.07068
251. Neumann A, Pyromallis C, Alexander B (2018) Evolution of images with diversity and constraints using a generative adversarial network. In: International conference on neural information processing. Springer, pp 452–465
252. Neumann A, Pyromallis C, Alexander B (2018) Evolution of images with diversity and constraints using a generator network. arXiv:1802.05480
253. Talebi H, Milanfar P (2018) Learned perceptual image enhancement. In: 2018 IEEE international conference on computational photography (ICCP). IEEE, pp 1–13
254. Bychkovsky V, Paris S, Chan E, Durand F (2011) Learning photographic global tonal adjustment with a database of input/output image pairs. In: CVPR 2011. IEEE, pp 97–104
255. Bontrager P, Lin W, Togelius J, Risi S (2018) Deep interactive evolution. In: International conference on computational intelligence in music, sound, art and design. Springer, pp 267–282
256. Liu Z, Luo P, Wang X, Tang X (2015) Deep learning face attributes in the wild. In: Proceedings of the IEEE international conference on computer vision. pp 3730–3738
257. Yu A, Grauman K (2014) Fine-grained visual comparisons with local learning. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 192–199
258. Aubry M, Maturana D, Efros AA, Russell BC, Sivic J (2014) Seeing 3D chairs: exemplar part-based 2D-3D alignment using a large dataset of CAD models. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 3762–3769
260. Pathak D, Krahenbuhl P, Donahue J, Darrell T, Efros AA (2016) Context encoders: feature learning by inpainting. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 2536–2544
261. Tanjil F, Ross BJ (2019) Deep learning concepts for evolutionary art. In: International conference on computational intelligence in music, sound, art and design (part of EvoStar). Springer, pp 1–17
262. Deng J, Berg A, Satheesh S, Su H, Khosla A, Li F (2012) Large scale visual recognition challenge 2012. In: ILSVRC 2012 workshop
263. Elgammal A (2019) AI is blurring the definition of artist: advanced algorithms are using machine learning to create art autonomously. Am Sci 107(1):18–22
265. Blair A (2019) Adversarial evolution and deep learning: how does an artist play with our visual system? In: International conference on computational intelligence in music, sound, art and design (part of EvoStar). Springer, pp 18–34
267. Shen X, Darmon F, Efros AA, Aubry M (2020) RANSAC-Flow: generic two-stage image alignment. In: 16th European conference on computer vision
268. Barath D, Matas J (2018) Graph-Cut RANSAC. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 6733–6741
269. Barath D, Matas J, Noskova J (2019) MAGSAC: marginalizing sample consensus. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 10197–10205
270. Fischler MA, Bolles RC (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun ACM 24(6):381–395
271. Plötz T, Roth S (2018) Neural nearest neighbors networks. In: Advances in neural information processing systems. pp 1087–1098
272. Qi CR, Su H, Mo K, Guibas LJ (2017) PointNet: deep learning on point sets for 3D classification and segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 652–660
273. Raguram R, Chum O, Pollefeys M, Matas J, Frahm J-M (2012) USAC: a universal framework for random sample consensus. IEEE Trans Pattern Anal Mach Intell 35(8):2022–2038
274. Ranftl R, Koltun V (2018) Deep fundamental matrix estimation. In: Proceedings of the European conference on computer vision (ECCV). pp 284–299
275. Zhang J, Sun D, Luo Z, Yao A, Zhou L, Shen T, Chen Y, Quan L, Liao H (2019) Learning two-view correspondences and geometry using order-aware network. In: Proceedings of the IEEE international conference on computer vision. pp 5845–5854
276. Jason JY, Harley AW, Derpanis KG (2016) Back to basics: unsupervised learning of optical flow via brightness constancy and motion smoothness. In: European conference on computer vision. Springer, pp 3–10
277. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612
278. Yin Z, Shi J (2018) GeoNet: unsupervised learning of dense depth, optical flow and camera pose. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 1983–1992
279. Temizel A et al (2018) Paired 3D model generation with conditional generative adversarial networks. In: Proceedings of the European conference on computer vision (ECCV)
280. Wu Z, Song S, Khosla A, Yu F, Zhang L, Tang X, Xiao J (2015) 3D ShapeNets: a deep representation for volumetric shapes. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 1912–1920
281. Li H, Zheng Y, Wu X, Cai Q (2019) 3D model generation and reconstruction using conditional generative adversarial network. Int J Comput Intell Syst 12(2):697–705
282. Chang AX, Funkhouser T, Guibas L, Hanrahan P, Huang Q, Li Z, Savarese S, Savva M, Song S, Su H et al (2015) ShapeNet: an information-rich 3D model repository. arXiv:1512.03012
283. Lim JJ, Pirsiavash H, Torralba A (2013) Parsing IKEA objects: fine pose estimation. In: Proceedings of the IEEE international conference on computer vision. pp 2992–2999
284. Volz V, Schrum J, Liu J, Lucas SM, Smith A, Risi S (2018) Evolving Mario levels in the latent space of a deep convolutional generative adversarial network. In: Proceedings of the genetic and evolutionary computation conference. pp 221–228
286. Togelius J, Karakovskiy S, Baumgarten R (2010) The 2009 Mario AI competition. In: IEEE congress on evolutionary computation. IEEE, pp 1–8
287. Hollingsworth B, Schrum J (2019) Infinite art gallery: a game world of interactively evolved artwork. In: 2019 IEEE congress on evolutionary computation (CEC). IEEE, pp 474–481
289. Bishop CM (2006) Pattern recognition and machine learning. Springer, Berlin
290. McLachlan GJ, Do K-A, Ambroise C (2005) Analyzing microarray gene expression data, vol 422. Wiley, New York
291. Kramer MA (1991) Nonlinear principal component analysis using autoassociative neural networks. AIChE J 37(2):233–243
292. Géron A (2019) Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow: concepts, tools, and techniques to build intelligent systems. O'Reilly Media, Newton
293. Lin T-Y, RoyChowdhury A, Maji S (2015) Bilinear CNN models for fine-grained visual recognition. In: Proceedings of the IEEE international conference on computer vision. pp 1449–1457
295. Bengio Y (2009) Learning deep architectures for AI. Now Publishers Inc, New York
296. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. In: Advances in neural information processing systems. pp 2672–2680
297. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780
298. Barriga NA (2019) A short introduction to procedural content generation algorithms for videogames. Int J Artif Intell Tools 28(02):1930001
299. Togelius J, Kastbjerg E, Schedl D, Yannakakis GN (2011) What is procedural content generation? Mario on the borderline. In: Proceedings of the 2nd international workshop on procedural content generation in games. pp 1–6
300. Sokolova M, Lapalme G (2009) A systematic analysis of performance measures for classification tasks. Inf Process Manag 45(4):427–437
301. Smolensky P (1986) Information processing in dynamical systems: foundations of harmony theory. Technical report, University of Colorado at Boulder, Department of Computer Science
302. Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3):273–297
303. Philbin J, Chum O, Isard M, Sivic J, Zisserman A (2008) Lost in quantization: improving particular object retrieval in large scale image databases. In: 2008 IEEE conference on computer vision and pattern recognition. IEEE, pp 1–8
304. Ren J, Shen X, Lin Z, Mech R, Foran DJ (2017) Personalized image aesthetics. In: Proceedings of the IEEE international conference on computer vision. pp 638–647
305. You Q, Luo J, Jin H, Yang J (2015) Robust image sentiment analysis using progressively trained and domain transferred deep networks. In: Twenty-ninth AAAI conference on artificial intelligence
306. Katsurai M, Satoh S (2016) Image sentiment analysis using latent correlations among visual, textual, and sentiment views. In: 2016 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 2837–2841
307. Khosla A, Raju AS, Torralba A, Oliva A (2015) Understanding and predicting image memorability at a large scale. In: Proceedings of the IEEE international conference on computer vision. pp 2390–2398
308. Mohammad S, Kiritchenko S (2018) WikiArt Emotions: an annotated dataset of emotions evoked by art. In: Proceedings of the eleventh international conference on language resources and evaluation (LREC 2018)
309. Yanulevskaya V, Uijlings J, Bruni E, Sartori A, Zamboni E, Bacci F, Melcher D, Sebe N (2012) In the eye of the beholder: employing statistical analysis and eye tracking for analyzing abstract paintings. In: Proceedings of the 20th ACM international conference on multimedia. pp 349–358
Metadata
Title: Artificial Neural Networks and Deep Learning in the Visual Arts: a review
Authors: Iria Santos, Luz Castro, Nereida Rodriguez-Fernandez, Álvaro Torrente-Patiño, Adrián Carballal
Publication date: 12.01.2021
Publisher: Springer London
Published in: Neural Computing and Applications / Issue 1/2021
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI: https://doi.org/10.1007/s00521-020-05565-4
