Published in: Mobile Networks and Applications 4/2021

20.01.2020

An Evolutionary-Based Black-Box Attack to Deep Neural Network Classifiers

By: Yutian Zhou, Yu-an Tan, Quanxin Zhang, Xiaohui Kuang, Yahong Han, Jingjing Hu


Abstract

Deep neural networks are susceptible to tiny crafted adversarial perturbations, which are typically added to every pixel of an image to craft an adversarial example. Most existing adversarial attacks minimize the L2 distance between the adversarial image and the source image but ignore the L0 distance, which remains large. To address this issue, we introduce a new black-box adversarial attack based on an evolutionary method and the bisection method, which greatly reduces the L0 distance while limiting the L2 distance. By flipping pixels toward the target image, an adversarial example is generated in which a small number of pixels come from the target image and the remaining pixels come from the source image. Experiments show that our attack method consistently generates high-quality adversarial examples, and it performs particularly well on large-scale images.
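To make the mechanism concrete, the following is a minimal Python sketch of this kind of evolutionary pixel-flipping attack, not the authors' implementation. Everything in it is illustrative: `predict` is a hypothetical stand-in for the attacked black-box model (a fixed random linear classifier here), the mask-mutation scheme in `evolutionary_l0_attack` is an assumed design, and the paper's bisection step for bounding the L2 distance is omitted. It also assumes the chosen target image is already classified differently from the source; only the evolutionary reduction of the L0 distance is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the attacked black-box model: a fixed random
# linear classifier over flattened pixels. Only hard labels are exposed,
# matching the decision-based black-box setting the abstract describes.
W = rng.normal(size=(10, 28 * 28))

def predict(image):
    """Return the top-1 label for a 28x28 image with values in [0, 1]."""
    return int(np.argmax(W @ image.ravel()))

def compose(source, target, mask):
    """Candidate image: target pixels where mask is True, source elsewhere."""
    return np.where(mask, target, source)

def evolutionary_l0_attack(source, target, steps=2000, revert_frac=0.05):
    """Evolve a pixel-flip mask so that as few pixels as possible come
    from the target image while the composite stays adversarial (i.e. is
    not classified as the source label). Every accepted mutation strictly
    shrinks the L0 distance to the source image."""
    source_label = predict(source)
    mask = np.ones(source.shape, dtype=bool)  # start from the all-target image
    for _ in range(steps):
        flipped = np.flatnonzero(mask)
        if flipped.size == 0:
            break
        # Mutation: revert a random subset of flipped pixels to source values.
        k = max(1, int(revert_frac * flipped.size))
        trial = mask.copy()
        trial.flat[rng.choice(flipped, size=k, replace=False)] = False
        # Selection: keep the mutation only if the candidate stays adversarial.
        if predict(compose(source, target, trial)) != source_label:
            mask = trial
    return compose(source, target, mask), mask

source = rng.random((28, 28))
target = rng.random((28, 28))
adv, mask = evolutionary_l0_attack(source, target)
print("pixels from target (L0):", int(mask.sum()),
      "| L2 to source: %.3f" % np.linalg.norm(adv - source))
```

Against a real model, `predict` would wrap network queries, and a bisection over each remaining flipped pixel's value (in the spirit of the bisection method the abstract mentions) could further tighten the L2 distance; both are left out here for brevity.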

Metadata
Title
An Evolutionary-Based Black-Box Attack to Deep Neural Network Classifiers
Authors
Yutian Zhou
Yu-an Tan
Quanxin Zhang
Xiaohui Kuang
Yahong Han
Jingjing Hu
Publication date
20.01.2020
Publisher
Springer US
Published in
Mobile Networks and Applications / Issue 4/2021
Print ISSN: 1383-469X
Electronic ISSN: 1572-8153
DOI
https://doi.org/10.1007/s11036-019-01499-x
