Abstract
In this paper, adversarial attacks on machine learning models and their classification are considered. Methods for assessing the resistance of a long short-term memory (LSTM) classifier to adversarial attacks are proposed. The Jacobian-based saliency map attack (JSMA) and the fast gradient sign method (FGSM), chosen because adversarial examples are transferable between machine learning models, are discussed in detail. A poisoning attack on the LSTM classifier is proposed. Methods of protection against the considered adversarial attacks are formulated.
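To illustrate one of the attacks named above: FGSM perturbs an input by a small step in the direction of the sign of the loss gradient, x' = x + ε·sign(∇ₓJ). The sketch below is not the paper's implementation; it uses a toy logistic classifier (weights, inputs, and ε are made-up values) purely to show the perturbation rule.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """Fast gradient sign method: shift every feature by eps in the
    direction that increases the loss, i.e. x' = x + eps * sign(grad)."""
    return x + eps * np.sign(grad)

# Toy logistic "classifier" sigmoid(w.x) with cross-entropy loss;
# the loss gradient with respect to the input x is (sigmoid(w.x) - y) * w.
w = np.array([0.5, -1.2, 0.8])   # hypothetical model weights
x = np.array([1.0, 0.5, -0.3])   # hypothetical input sample
y = 1.0                          # true label

p = 1.0 / (1.0 + np.exp(-(w @ x)))
grad = (p - y) * w

x_adv = fgsm_perturb(x, grad, eps=0.1)
print(x_adv)  # each feature moved by exactly 0.1 toward higher loss
```

The transferability property mentioned in the abstract is what makes such gradient-based examples dangerous in a black-box setting: a perturbation crafted against one model often fools a different model trained on similar data.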
Ethics declarations
The authors declare that they have no conflicts of interest.
Additional information
Translated by A. Ivanov
Kulikov, D.A., Platonov, V.V. Adversarial Attacks on Intrusion Detection Systems Using the LSTM Classifier. Aut. Control Comp. Sci. 55, 1080–1086 (2021). https://doi.org/10.3103/S0146411621080174