Published in: Programming and Computer Software, Special Issue 2/2023

01.12.2023

Analysis of Vulnerabilities of Neural Network Image Recognition Technologies

Authors: A. V. Trusov, E. E. Limonova, V. V. Arlazarov, A. A. Zatsarinnyy

Abstract

This paper considers the vulnerability of artificial intelligence technologies based on neural networks. It is shown that the use of neural networks introduces numerous vulnerabilities, illustrated by examples such as the misclassification of images containing adversarial noise or patches, the failure of recognition systems in the presence of special patterns in an image (including patterns applied to objects in the real world), and the poisoning of training data. Based on this analysis, the need to improve the security of artificial intelligence technologies is demonstrated, and some considerations that contribute to this improvement are discussed.
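The abstract mentions misclassification of images containing adversarial noise. As a minimal illustrative sketch (not taken from the paper itself), the classic fast gradient sign method (FGSM) constructs such noise from the gradient of the loss with respect to the input; the hypothetical PyTorch snippet below assumes a differentiable classifier model and a batched input image normalized to [0, 1].

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        # Track gradients on the input itself, not on the model weights.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # One signed-gradient step: at small epsilon the change is
        # imperceptible to humans but often flips the prediction.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

For a typical image classifier, a perturbation of a few percent of the input range is usually enough to change the predicted class while leaving the image visually indistinguishable from the original.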


Metadata
Title
Analysis of Vulnerabilities of Neural Network Image Recognition Technologies
Authors
A. V. Trusov
E. E. Limonova
V. V. Arlazarov
A. A. Zatsarinnyy
Publication date
01.12.2023
Publisher
Pleiades Publishing
Published in
Programming and Computer Software / Special Issue 2/2023
Print ISSN: 0361-7688
Electronic ISSN: 1608-3261
DOI
https://doi.org/10.1134/S0361768823100079
