
2021 | OriginalPaper | Chapter

Countering Adversarial Inference Evasion Attacks Towards ML-Based Smart Lock in Cyber-Physical System Context

Authors: Petri Vähäkainu, Martti Lehto, Antti Kariluoto

Published in: Cybersecurity, Privacy and Freedom Protection in the Connected World

Publisher: Springer International Publishing


Abstract

Machine Learning (ML) has taken significant evolutionary steps and provides sophisticated means for developing novel, smart applications. The same development, however, has brought to light new kinds of hazards, some with potentially destructive consequences, that need to be addressed. Evasion attacks are among the most widely used attacks that can be mounted in adversarial settings during system operation. The ML environment is assumed to be benign, but in reality perpetrators may exploit vulnerabilities to conduct gradient-free or gradient-based malicious adversarial inference attacks against cyber-physical systems (CPS), such as smart buildings. Evasion attacks give perpetrators a means to manipulate, for example, the test-time data of a victim ML model. In this article, we conduct a literature review of evasion attacks and countermeasures and discuss how such attacks can be used to deceive the ML classifier of a CPS smart lock system in order to gain access to the smart building.
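
To make the gradient-based case concrete, the sketch below shows the Fast Gradient Sign Method (FGSM), a well-known one-step gradient-based evasion attack, applied to a stand-in smart-lock classifier. It is a minimal illustration only: the model architecture, input shape, and the "authorized"/"denied" label convention are assumptions made for this example, not the chapter's implementation.

```python
# Minimal FGSM evasion sketch against a hypothetical smart-lock classifier.
import torch
import torch.nn as nn

# Stand-in classifier: maps a 64x64 grayscale sensor crop to "authorized" vs. "denied".
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 128),
    nn.ReLU(),
    nn.Linear(128, 2),
)
model.eval()

def fgsm_evasion(x, true_label, epsilon=0.03):
    """Craft x' = x + epsilon * sign(grad_x loss), a one-step evasion example."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), true_label)
    loss.backward()
    # Step in the direction that increases the classifier's loss, then keep pixels valid.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage: nudge one "denied" probe toward misclassification as "authorized".
x = torch.rand(1, 1, 64, 64)   # placeholder input; a real system would use sensor data
y = torch.tensor([1])          # assumed label convention: 1 = "denied"
x_adv = fgsm_evasion(x, y)
print("clean:", model(x).argmax(1).item(), "adversarial:", model(x_adv).argmax(1).item())
```

A gradient-free (black-box) attacker would replace the gradient step with query-based search, as in the GenAttack and Bayesian-optimization approaches discussed in the chapter.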


Metadata
Title
Countering Adversarial Inference Evasion Attacks Towards ML-Based Smart Lock in Cyber-Physical System Context
Authors
Petri Vähäkainu
Martti Lehto
Antti Kariluoto
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-68534-8_11
