
2024 | OriginalPaper | Chapter

Quantitative Evaluation Based on CLIP for Methods Inhibiting Imitation of Painting Styles

Authors: Motoi Iwata, Keito Okamoto, Koichi Kise

Published in: Document Analysis and Recognition – ICDAR 2024 Workshops

Publisher: Springer Nature Switzerland

Abstract

Image generation AIs that can produce a variety of high-quality images from text and images have been gaining attention. At the same time, they cause a serious problem: a third party can generate AI art that imitates an artist’s style by fine-tuning a model on that artist’s works. Against this background, methods have been proposed that add small noise perturbations to an artwork to inhibit imitation of its painting style. Currently, two such methods exist: Glaze, which applies perturbations to artworks so that a false style is learned, and Mist, which makes it difficult to extract features from the work. Glaze and Mist are evaluated in different manners, and neither evaluation addresses the real problem described above, i.e., the inhibition of imitating the style of a particular artist. Therefore, the purpose of this study is to realize a quantitative evaluation based on an index of artist-likeness. In this paper, we discuss three points: whether the proposed artist-likeness index and correct prediction rate are suitable for quantitative evaluation of artist-likeness, how much each method inhibits the imitation of painting style, and whether Glaze or Mist is superior. Our experimental results confirm that the proposed metrics reflect artist-likeness and quantitatively measure how much Glaze and Mist inhibit the imitation of painting styles, and we find that Mist is superior to Glaze according to our approach.
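The abstract gives no implementation details, but the general idea of a CLIP-based artist-likeness readout can be illustrated with a minimal sketch. The example below is an assumption-laden illustration, not the authors' method: it uses the Hugging Face transformers CLIP API, the generic openai/clip-vit-base-patch32 checkpoint, hypothetical artist names, an assumed prompt template, and a placeholder image file, and it simply scores one generated image against text prompts naming candidate artists.

# A minimal sketch (not the authors' implementation) of scoring a generated
# image for artist-likeness with CLIP, by comparing the image against text
# prompts that name candidate artists.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Generic public CLIP checkpoint; the paper's exact model is not specified here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical candidate artists and prompt template, chosen for illustration.
artists = ["Vincent van Gogh", "Claude Monet", "Katsushika Hokusai"]
prompts = [f"a painting in the style of {name}" for name in artists]

# An image produced by a model fine-tuned on one artist's (possibly protected) works.
image = Image.open("generated_sample.png")

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds CLIP's scaled image-text similarities; softmax turns
# them into a probability distribution over the candidate artists.
scores = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for name, score in zip(artists, scores.tolist()):
    print(f"{name}: {score:.3f}")

Under this reading, the probability assigned to the imitated artist would act as an artist-likeness score, and the fraction of generated images whose top-scoring prompt names that artist would play the role of a correct prediction rate; lower values after protecting the training images with Glaze or Mist would indicate stronger inhibition of style imitation. These readouts are assumptions for illustration, not the paper's exact metrics.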


Metadata
Title
Quantitative Evaluation Based on CLIP for Methods Inhibiting Imitation of Painting Styles
Authors
Motoi Iwata
Keito Okamoto
Koichi Kise
Copyright Year
2024
DOI
https://doi.org/10.1007/978-3-031-70645-5_14
