Published in: Information Systems Frontiers 2/2023

30.08.2022

Deep Reinforcement Learning in the Advanced Cybersecurity Threat Detection and Protection

Authors: Mohit Sewak, Sanjay K. Sahay, Hemant Rathore

Abstract

The cybersecurity threat landscape has lately become highly complex. Threat actors exploit weaknesses in network and endpoint security in a coordinated manner to perpetrate sophisticated attacks that can bring down an entire network and many of its critical hosts. To defend against such attacks, cybersecurity solutions are upgrading from traditional to advanced deep and machine learning defense mechanisms for threat detection and protection. The application of these techniques has been well reviewed in the scientific literature. Deep reinforcement learning has shown great promise in developing AI solutions for areas that previously required advanced human cognition. Different deep reinforcement learning techniques and algorithms have shown great promise in applications ranging from games to industrial processes, where they are claimed to augment systems with general AI capabilities. These algorithms have recently also been used in cybersecurity, especially in threat detection and protection, where they are producing state-of-the-art results. Unlike supervised machine learning and deep learning, deep reinforcement learning is used in more diverse ways and is empowering many innovative applications in the threat defense landscape. However, no comprehensive review of deep reinforcement learning applications in advanced cybersecurity threat detection and protection exists. Therefore, in this paper, we intend to fill this gap and provide a comprehensive review of the different applications of deep reinforcement learning in this field.
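
The paper is a survey rather than a single method, but the framing common to the detection-side applications it reviews can be illustrated with a small, self-contained sketch. The example below is not taken from the paper; the data, feature dimensions, and hyperparameters are synthetic assumptions. It casts detection as a reinforcement learning problem: an agent observes a feature vector for a sample, chooses to allow or flag it, receives a reward for a correct decision, and updates a simple linear Q-function under an epsilon-greedy policy.

import numpy as np

# Illustrative sketch only: epsilon-greedy Q-learning with a linear Q-function
# on synthetic "sample feature" vectors. Action 0 = allow, action 1 = flag;
# reward is +1 for a correct decision and -1 otherwise. All data and
# hyperparameters here are assumptions for demonstration, not from the paper.

rng = np.random.default_rng(0)
N_FEATURES, N_ACTIONS = 8, 2
ALPHA, EPSILON, STEPS = 0.05, 0.1, 5000

def sample_observation():
    # Synthetic data: benign features centred at 0, malicious shifted to +1.
    label = rng.integers(0, 2)                      # 0 = benign, 1 = malicious
    features = rng.normal(loc=label, scale=1.0, size=N_FEATURES)
    return np.append(features, 1.0), label          # append a constant bias feature

# Linear Q-function: Q(s, a) = W[a] . s (one weight vector per action).
W = np.zeros((N_ACTIONS, N_FEATURES + 1))

for _ in range(STEPS):
    state, label = sample_observation()
    if rng.random() < EPSILON:                      # epsilon-greedy exploration
        action = int(rng.integers(0, N_ACTIONS))
    else:
        action = int(np.argmax(W @ state))
    reward = 1.0 if action == label else -1.0
    # One-step update: each sample is treated as its own episode, so there is
    # no bootstrapped next-state term (a contextual-bandit simplification).
    td_error = reward - W[action] @ state
    W[action] += ALPHA * td_error * state

# Rough check of the greedy policy on fresh synthetic samples.
test = [sample_observation() for _ in range(1000)]
accuracy = np.mean([int(np.argmax(W @ s)) == y for s, y in test])
print(f"Greedy-policy accuracy on synthetic data: {accuracy:.2f}")

The systems reviewed in the paper replace this linear Q-function with deep networks (for example, DQN-style agents) and operate over far richer state and action spaces, but the observe-act-reward loop shown above is the basic pattern.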


Metadata
Title
Deep Reinforcement Learning in the Advanced Cybersecurity Threat Detection and Protection
Authors
Mohit Sewak
Sanjay K. Sahay
Hemant Rathore
Publication date
30.08.2022
Publisher
Springer US
Published in
Information Systems Frontiers / Issue 2/2023
Print ISSN: 1387-3326
Electronic ISSN: 1572-9419
DOI
https://doi.org/10.1007/s10796-022-10333-x
