
2021 | OriginalPaper | Chapter

4. Security of AI Hardware Systems

Author: Haoting Shen

Published in: Emerging Topics in Hardware Security

Publisher: Springer International Publishing


Abstract

Artificial intelligence (AI) systems are changing our lives. Alongside these benefits, AI security raises growing concerns: AI systems not only access personal and sensitive data, they are also being deployed in life-safety systems (e.g., autonomous vehicles and medical systems) and critical infrastructures. In this chapter, we begin with a brief introduction to modern AI systems, review reported AI security issues, and discuss possible countermeasures.


Metadata
Title
Security of AI Hardware Systems
Author
Haoting Shen
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-64448-2_4