2019 | Original Paper | Book Chapter

Adversarial Attack, Defense, and Applications with Deep Learning Frameworks

Authors: Zhizhou Yin, Wei Liu, Sanjay Chawla

Published in: Deep Learning Applications for Cyber Security

Publisher: Springer International Publishing


Abstract

In recent years, deep learning frameworks have been applied in many domains and achieved promising performance. However, recent work has demonstrated that deep learning frameworks are vulnerable to adversarial attacks: a trained neural network can be manipulated by small perturbations added to legitimate samples. In the computer vision domain, these small perturbations can be imperceptible to humans. As deep learning techniques have become the core of many security-critical applications, including identity-recognition cameras, malware-detection software, and self-driving cars, adversarial attacks have become a crucial security threat to many real-world deep learning applications. In this chapter, we first review some state-of-the-art adversarial attack techniques for deep learning frameworks in both white-box and black-box settings. We then discuss recent methods to defend against adversarial attacks on deep learning frameworks. Finally, we explore recent work applying adversarial attack techniques to some popular commercial deep learning applications, such as image classification, speech recognition, and malware detection. These projects demonstrate that many commercial deep learning frameworks are vulnerable to malicious cyber security attacks.
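To make the idea of "small perturbations added to legitimate samples" concrete, the sketch below illustrates the fast gradient sign method (FGSM) of Goodfellow, Shlens, and Szegedy, one of the white-box attacks surveyed in this chapter. It is shown on a toy logistic-regression "network" so the loss gradient can be written in closed form; the weights, input, and epsilon are illustrative values chosen for this sketch, not taken from the chapter.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Return x plus an epsilon-bounded (L-infinity) perturbation that
    increases the cross-entropy loss for the true label y_true."""
    p = sigmoid(w @ x + b)        # model's predicted probability
    grad_x = (p - y_true) * w     # d(loss)/dx for cross-entropy + sigmoid
    return x + epsilon * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])    # toy "trained" weights (assumed)
b = 0.1
x = np.array([0.2, -0.4, 1.0])    # legitimate sample
y = 1.0                           # true label

x_adv = fgsm_perturb(x, w, b, y, epsilon=0.3)
print(sigmoid(w @ x + b))         # confidence on the clean input
print(sigmoid(w @ x_adv + b))     # reduced confidence after the attack
```

The same single-step recipe applies to deep networks, where the gradient with respect to the input is obtained by backpropagation rather than in closed form; the perturbation stays bounded by epsilon in every coordinate, which is what keeps it visually small.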


Metadata
Title
Adversarial Attack, Defense, and Applications with Deep Learning Frameworks
Authors
Zhizhou Yin
Wei Liu
Sanjay Chawla
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-13057-2_1