
Adversarial Attacks on Intrusion Detection Systems Using the LSTM Classifier

Published in: Automatic Control and Computer Sciences

Abstract

In this paper, adversarial attacks on machine learning models and their classification are considered. Methods are proposed for assessing the resistance of a long short-term memory (LSTM) classifier to adversarial attacks. The Jacobian-based saliency map attack (JSMA) and the fast gradient sign method (FGSM), chosen because adversarial examples built with them transfer between machine learning models, are discussed in detail. A poisoning attack on the LSTM classifier is proposed, and methods of protection against the considered adversarial attacks are formulated.
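The FGSM attack mentioned above perturbs an input in the direction of the sign of the loss gradient, x_adv = x + ε·sign(∇x L). As a minimal sketch, assuming a Keras/TensorFlow LSTM classifier of the kind discussed in the paper (the model architecture, feature shapes, and ε value here are illustrative, not taken from the article):

```python
import numpy as np
import tensorflow as tf

def fgsm_perturb(model, x, y_true, epsilon=0.05):
    """Generate an FGSM adversarial example: x_adv = x + eps * sign(grad_x L)."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        pred = model(x)
        loss = tf.keras.losses.binary_crossentropy(y_true, pred)
    grad = tape.gradient(loss, x)
    # Step in the direction that maximizes the loss
    x_adv = x + epsilon * tf.sign(grad)
    return x_adv.numpy()

# Hypothetical LSTM classifier over traffic-feature windows (5 steps, 3 features)
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(4, input_shape=(5, 3)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
x = np.random.rand(2, 5, 3).astype("float32")
y = np.ones((2, 1), dtype="float32")
x_adv = fgsm_perturb(model, x, y, epsilon=0.1)
```

Because the perturbation depends only on the sign of the gradient, each feature moves by at most ε, which is one reason such examples often transfer between models trained on the same data.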


REFERENCES

  1. Lavrova, D., Zegzhda, D., and Yarmak, A., Predicting cyber attacks on industrial systems using the Kalman filter, Third World Conf. on Smart Trends in Systems Security and Sustainability (WorldS4), London, 2019, IEEE, 2019, pp. 317–321. https://doi.org/10.1109/WorldS4.2019.8904038

  2. Lavrova, D., Zegzhda, D., and Yarmak, A., Using GRU neural network for cyber-attack detection in automated process control systems, IEEE Int. Black Sea Conf. on Communications and Networking (BlackSeaCom), Sochi, Russia, 2019, IEEE, 2019, pp. 1–3.  https://doi.org/10.1109/BlackSeaCom.2019.8812818

  3. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R., Intriguing properties of neural networks, 2013. arXiv:1312.6199 [cs.CV]

  4. Tabassi, E., Burns, K., Hadjimichael, M., Molina-Markham, A., and Sexton, J., NISTIR 8269: A taxonomy and terminology of adversarial machine learning, 2019. https://doi.org/10.6028/NIST.IR.8269-draft

  5. Moustafa, N., The UNSW-NB15 dataset. www.unsw.adfa.edu.au/unsw-canberra-cyber/cybersecurity/ADFA-NB15-Datasets/. Cited January 9, 2021.

  6. Moustafa, N. and Slay, J., UNSW-NB15: A comprehensive data set for network intrusion detection systems (UNSW-NB15 network data set), Military Communications and Information Systems Conf. (MilCIS), Canberra, 2015, IEEE, 2015, pp. 1–6.  https://doi.org/10.1109/MilCIS.2015.7348942

  7. Nikolenko, S., Kadurin, A.A., and Arkhangel’skaya, E.O., Glubokoe obuchenie. Pogruzhenie v mir neironnykh setei (Deep Learning: Dive into the World of Neural Networks), St. Petersburg: Piter, 2018.

  8. Elsayed, N., Maida, A.S., and Bayoumi, M., Deep gated recurrent and convolutional network hybrid model for univariate time series classification, Int. J. Adv. Comput. Sci. Appl., 2019, vol. 10, no. 5. https://doi.org/10.14569/IJACSA.2019.0100582

  9. Goodfellow, I.J., Shlens, J., and Szegedy, C., Explaining and harnessing adversarial examples, 2014. arXiv:1412.6572 [stat.ML]

  10. Wiyatno, R. and Xu, A., Maximal Jacobian-based saliency map attack, 2018. arXiv:1808.07945 [cs.LG]

  11. Keras. https://keras.io/about/. Cited January 9, 2021.

  12. TensorFlow. https://www.tensorflow.org/. Cited January 9, 2021.


Author information

Corresponding author: V. V. Platonov.

Ethics declarations

The authors declare that they have no conflicts of interest.

Additional information

Translated by A. Ivanov

About this article


Cite this article

Kulikov, D.A., Platonov, V.V. Adversarial Attacks on Intrusion Detection Systems Using the LSTM Classifier. Aut. Control Comp. Sci. 55, 1080–1086 (2021). https://doi.org/10.3103/S0146411621080174
