Neural Network: Predator, Victim, and Information Security Tool

Authors: V. B. Betelin, V. A. Galatenko, K. A. Kostiukhin

Published in: Optical Memory and Neural Networks, Issue 4/2022 (01-12-2022)

Abstract

The article deals with information security problems associated with neural networks: malicious neural networks, attacks on neural networks, neural networks as information security tools, and neural networks as attack tools. Methods for improving the information security of systems that include neural network components are proposed.
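Of the threat classes the abstract names, evasion by adversarial examples is the easiest to illustrate concretely. The following minimal sketch, written against a toy linear classifier with a logistic loss (the model, data, and epsilon are assumptions for illustration and do not come from the article), shows the core idea of an FGSM-style attack: a small step along the sign of the input gradient increases the loss for the current label and can flip the classifier's decision.

    # Illustrative sketch only: the toy linear model, logistic loss, and epsilon
    # are assumed for demonstration; this code is not taken from the article.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear classifier: score = w . x + b, predicted class is 1 if score > 0
    w = rng.normal(size=16)
    b = 0.1

    def predict(x):
        return int(w @ x + b > 0)

    def input_gradient(x, y):
        # Gradient of the logistic loss with respect to the input x for label y:
        # d(loss)/dx = (sigmoid(score) - y) * w
        p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
        return (p - y) * w

    # Take a random sample and treat the model's own prediction as its label
    x = rng.normal(size=16)
    y = predict(x)

    # FGSM-style perturbation: step along the sign of the loss gradient for the
    # current label to increase the loss and cross the decision boundary
    epsilon = 1.0
    x_adv = x + epsilon * np.sign(input_gradient(x, y))

    print("clean prediction:", y, "adversarial prediction:", predict(x_adv))

The same gradient-guided idea underlies many practical evasion attacks, and defenses such as adversarial training reuse it by including perturbed samples during training.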


Metadata

Title: Neural Network: Predator, Victim, and Information Security Tool
Authors: V. B. Betelin, V. A. Galatenko, K. A. Kostiukhin
Publication date: 01-12-2022
Publisher: Pleiades Publishing
Published in: Optical Memory and Neural Networks, Issue 4/2022
Print ISSN: 1060-992X
Electronic ISSN: 1934-7898
DOI: https://doi.org/10.3103/S1060992X22040026
