Published in: Mobile Networks and Applications 4/2021

20-01-2020

An Evolutionary-Based Black-Box Attack to Deep Neural Network Classifiers

Authors: Yutian Zhou, Yu-an Tan, Quanxin Zhang, Xiaohui Kuang, Yahong Han, Jingjing Hu


Abstract

Deep neural networks are susceptible to tiny crafted adversarial perturbations, which are typically added to every pixel of an image to craft an adversarial example. Most existing adversarial attacks minimize the L2 distance between the adversarial image and the source image but ignore the L0 distance, which remains large. To address this issue, we introduce a new black-box adversarial attack based on an evolutionary method and the bisection method, which greatly reduces the L0 distance while limiting the L2 distance. An adversarial example is generated by flipping pixels of the target image, so that a small number of pixels come from the target image and the remaining pixels come from the source image. Experiments show that our attack method steadily generates high-quality adversarial examples, and it performs especially well on large-scale images.
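The abstract describes the attack only at a high level. The general idea of evolving a mask that borrows as few pixels as possible from the target image while the blended result still fools a black-box classifier can be sketched as a toy in Python. Everything below is an illustrative assumption, not the paper's actual procedure: `classify` is a stand-in threshold "classifier" (a real attack would query an actual DNN's output label), the `blend` helper and `evolve_attack` names are hypothetical, a simple elitist mutate-and-select loop replaces the paper's evolutionary and bisection methods, and the L2-limiting step is omitted entirely.

```python
import random

def classify(image):
    # Stand-in black-box classifier: label 1 if the mean pixel intensity
    # exceeds 0.5, else 0. Only the output label is observed, mimicking
    # the black-box setting in which the attacker cannot see gradients.
    return 1 if sum(image) / len(image) > 0.5 else 0

def blend(source, target, mask):
    # Build a candidate image: pixels where mask[i] is True come from the
    # target image; all remaining pixels come from the source image.
    return [t if m else s for s, t, m in zip(source, target, mask)]

def evolve_attack(source, target, target_label, pop=20, gens=200, seed=0):
    rng = random.Random(seed)
    n = len(source)

    def fitness(mask):
        # Adversarial candidates (classified as target_label) are ranked
        # by how many pixels they borrow from the target -- the L0 cost.
        # Non-adversarial candidates always rank worse than adversarial ones.
        flipped = sum(mask)
        adv = classify(blend(source, target, mask)) == target_label
        return flipped if adv else n + 1 + (n - flipped)

    # Start from fully-flipped masks (guaranteed adversarial) and let
    # mutation gradually un-flip pixels, shrinking the L0 distance.
    population = [[True] * n for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop // 2]   # elitist selection
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(n)] = not child[rng.randrange(0, 1) or 0] if False else child[0]
            children.append(child)
        population = parents + children
    best = min(population, key=fitness)
    return best, blend(source, target, best)
```

With a 16-pixel all-zeros source and all-ones target under this toy classifier, at least 9 pixels must be borrowed for the mean to exceed 0.5, and the loop drives the mask down toward that minimum while every surviving elite stays adversarial.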

Metadata
Title
An Evolutionary-Based Black-Box Attack to Deep Neural Network Classifiers
Authors
Yutian Zhou
Yu-an Tan
Quanxin Zhang
Xiaohui Kuang
Yahong Han
Jingjing Hu
Publication date
20-01-2020
Publisher
Springer US
Published in
Mobile Networks and Applications / Issue 4/2021
Print ISSN: 1383-469X
Electronic ISSN: 1572-8153
DOI
https://doi.org/10.1007/s11036-019-01499-x
