
2018 | Original Paper | Book Chapter

Vulnerability Detection and Analysis in Adversarial Deep Learning

Authors: Yi Shi, Yalin E. Sagduyu, Kemal Davaslioglu, Renato Levy

Published in: Guide to Vulnerability Analysis for Computer Networks and Systems

Publisher: Springer International Publishing


Abstract

Machine learning has been applied in various information systems, but its vulnerabilities are not yet well understood. This chapter studies the vulnerability of information systems to adversarial machine learning, focusing on online services whose interfaces accept user data inputs and return machine learning results such as labels. Two types of attacks are considered: the exploratory (or inference) attack and the evasion attack. In an exploratory attack, the adversary collects labels of input data from an online classifier and applies deep learning to train a functionally equivalent classifier without knowing the inner workings of the target classifier. The vulnerability includes the theft of intellectual property (quantified by the statistical similarity of the target and inferred classifiers) and the support of other attacks built upon the inference results. An example of such a follow-up attack is the evasion attack, where the adversary deceives the classifier into misclassifying input data samples that are systematically selected based on the classification scores from the inferred classifier. This attack is strengthened by generative adversarial networks (GANs) and adversarial perturbations, which produce synthetic data samples that are likely to be misclassified. The vulnerability is measured by the increase in misdetection rates. This quantitative understanding of the vulnerability in machine learning systems provides valuable insights into designing defence mechanisms against adversarial machine learning.
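To make the exploratory attack concrete, here is a minimal sketch in Python (numpy and scikit-learn). The black-box target_classifier, the query distribution, and the surrogate architecture are illustrative assumptions, not the classifiers studied in the chapter:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Hypothetical black-box target: the adversary sees only its output labels,
    # never its inner workings.
    def target_classifier(x):
        return (x.sum(axis=1) > 0).astype(int)  # stand-in for the online service

    # 1. Query the online classifier with input data and collect the returned labels.
    queries = rng.normal(size=(5000, 20))
    labels = target_classifier(queries)

    # 2. Train a functionally equivalent deep classifier on the query/label pairs.
    surrogate = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    surrogate.fit(queries, labels)

    # 3. Quantify the theft of intellectual property as the statistical similarity
    #    (agreement rate) of the target and inferred classifiers on fresh inputs.
    test = rng.normal(size=(1000, 20))
    agreement = np.mean(surrogate.predict(test) == target_classifier(test))
    print(f"target/surrogate agreement: {agreement:.2%}")

In this toy setting the agreement rate approaches 100% as the query budget grows; in practice the attainable similarity depends on how many queries the adversary can afford and how well the query distribution covers the target's input space.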
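The inferred classifier then supports the follow-up evasion attack, sketched below (reusing surrogate and rng from the example above): candidate inputs are ranked by the inferred classifier's classification scores, and the samples closest to its decision boundary are submitted, since these are the ones the target is most likely to misclassify. The candidate pool and selection size are illustrative:

    # Score candidate samples with the inferred classifier.
    candidates = rng.normal(size=(2000, 20))
    scores = surrogate.predict_proba(candidates)[:, 1]

    # Systematically select the samples nearest the decision boundary (score ~ 0.5),
    # i.e. the inputs the target classifier is most likely to misclassify.
    margin = np.abs(scores - 0.5)
    evasion_set = candidates[np.argsort(margin)[:100]]

Submitting evasion_set to the online classifier and measuring the resulting increase in misdetection rate quantifies the vulnerability; GAN-generated or adversarially perturbed variants of these samples strengthen the attack further.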


Metadata
Title
Vulnerability Detection and Analysis in Adversarial Deep Learning
Authors
Yi Shi
Yalin E. Sagduyu
Kemal Davaslioglu
Renato Levy
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-92624-7_9