Defense against Adversarial Attacks on Image Recognition Systems Using an Autoencoder

Authors: V. V. Platonov, N. M. Grigorjeva

Published in: Automatic Control and Computer Sciences | Issue 8/2023

Abstract

Adversarial attacks on artificial neural network systems for image recognition are considered. To improve the security of image recognition systems against adversarial attacks (evasion attacks), the use of autoencoders is proposed. Various attacks are examined, and software prototypes of autoencoders with fully connected and convolutional architectures are developed as a means of defense against evasion attacks. The possibility of using the developed prototypes as a basis for designing autoencoders with more complex architectures is substantiated.
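The following is a minimal, self-contained sketch of the general approach described in the abstract: a convolutional autoencoder is trained to reconstruct clean images and then placed in front of the classifier, so that adversarially perturbed inputs are projected back toward the clean data manifold before recognition. The MNIST-scale 28×28 input shape, the layer sizes, and the use of FGSM as the example evasion attack are illustrative assumptions, not the authors' exact prototypes.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_conv_autoencoder(input_shape=(28, 28, 1)):
    """Convolutional autoencoder: the encoder compresses the image through
    a bottleneck and the decoder reconstructs it; fine-grained adversarial
    noise tends not to survive the compression."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D(2, padding="same")(x)          # 28x28 -> 14x14
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
    encoded = layers.MaxPooling2D(2, padding="same")(x)    # 14x14 -> 7x7
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(encoded)
    x = layers.UpSampling2D(2)(x)                          # 7x7 -> 14x14
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)                          # 14x14 -> 28x28
    outputs = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    return models.Model(inputs, outputs)

def fgsm_perturb(classifier, x, y, eps=0.1):
    """Fast gradient sign method (a one-step evasion attack): shift the
    input by eps in the sign direction of the loss gradient."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, classifier(x))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

# Train the autoencoder on clean images only (reconstruction objective).
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")[..., np.newaxis] / 255.0
x_test = x_test.astype("float32")[..., np.newaxis] / 255.0

autoencoder = build_conv_autoencoder()
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=10, batch_size=128,
                validation_data=(x_test, x_test))

# Deployment: the classifier sees the reconstruction, not the raw input.
# `classifier` denotes a separately trained recognition model (hypothetical here):
#   x_adv = fgsm_perturb(classifier, x_test, y_test)
#   y_pred = classifier.predict(autoencoder.predict(x_adv))
```

A fully connected variant would replace the Conv2D/pooling pairs with Dense layers over flattened pixels; the convolutional version typically reconstructs image structure better at the same bottleneck size.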
Metadata
Title
Defense against Adversarial Attacks on Image Recognition Systems Using an Autoencoder
Authors
V. V. Platonov
N. M. Grigorjeva
Publication date
01-12-2023
Publisher
Pleiades Publishing
Published in
Automatic Control and Computer Sciences / Issue 8/2023
Print ISSN: 0146-4116
Electronic ISSN: 1558-108X
DOI
https://doi.org/10.3103/S0146411623080230
