
2021 | Original Paper | Book Chapter

Countering Adversarial Inference Evasion Attacks Towards ML-Based Smart Lock in Cyber-Physical System Context

Authors: Petri Vähäkainu, Martti Lehto, Antti Kariluoto

Published in: Cybersecurity, Privacy and Freedom Protection in the Connected World

Publisher: Springer International Publishing


Abstract

Machine Learning (ML) has taken significant evolutionary steps and provides sophisticated means for developing novel, smart, up-to-date applications. However, this development has also brought to light new types of hazards, some with potentially destructive consequences, that need to be addressed. Evasion attacks are among the most commonly used attacks that can be mounted in adversarial settings during system operation. The ML environment is assumed to be benign, but in reality perpetrators may exploit vulnerabilities to conduct gradient-free or gradient-based adversarial inference attacks against cyber-physical systems (CPS), such as smart buildings. Evasion attacks give a perpetrator the means to modify, for example, the test-time inputs fed to a victim ML model. In this article, we conduct a literature review of evasion attacks and countermeasures and discuss how these attacks can be used to deceive the ML classifier of, e.g., a CPS smart lock system in order to gain access to the smart building.
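To make the gradient-based evasion setting concrete, the following is a minimal sketch, not the chapter's own method: a toy logistic-regression "authorization" classifier attacked with a Fast Gradient Sign Method (FGSM)-style perturbation. The weights `w`, `b`, the input dimension, and the epsilon budget are all illustrative assumptions; a white-box attacker is assumed to know the model's gradient.

```python
import numpy as np

# Hypothetical logistic-regression "smart lock" classifier; in the
# white-box, gradient-based setting the attacker knows w and b.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1

def predict(x):
    """Probability that input x is classified as 'authorized'."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_evasion(x, eps):
    """FGSM-style step: nudge x toward the 'authorized' class.

    For a linear logit the gradient w.r.t. x is simply w, so moving
    along sign(w) increases the classifier's score for class 1 while
    keeping each feature change bounded by eps.
    """
    return x + eps * np.sign(w)

x = rng.normal(size=8)              # a benign, rejected input
x_adv = fgsm_evasion(x, eps=0.5)    # small, bounded perturbation
assert predict(x_adv) > predict(x)  # perturbation raises the score
```

Gradient-free (black-box) variants discussed in the chapter pursue the same goal without access to `w`, for example by querying the model and estimating a useful search direction from its outputs.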


Metadata
Title
Countering Adversarial Inference Evasion Attacks Towards ML-Based Smart Lock in Cyber-Physical System Context
Authors
Petri Vähäkainu
Martti Lehto
Antti Kariluoto
Copyright year
2021
DOI
https://doi.org/10.1007/978-3-030-68534-8_11
