
2021 | Original Paper | Book Chapter

Explanation-Driven Characterization of Android Ransomware

Authors: Michele Scalas, Konrad Rieck, Giorgio Giacinto

Published in: Pattern Recognition. ICPR International Workshops and Challenges

Publisher: Springer International Publishing


Abstract

Machine learning is currently used successfully to address several cybersecurity detection and classification tasks. Typically, such detectors are built on complex learning algorithms that employ a wide variety of features. Although these settings achieve considerable performance, gaining insight into the learned knowledge is difficult. To address this issue, research on the interpretability of machine learning approaches to cybersecurity tasks is on the rise. In particular, relying on explanations could improve prevention and detection capabilities, since they can help human experts identify the distinctive features that truly characterize malware attacks. From this perspective, Android ransomware represents a serious threat. Leveraging state-of-the-art explanation techniques, we present a first approach that enables the identification of the most influential discriminative features for ransomware characterization. We propose strategies for adopting explanation techniques appropriately and describe ransomware families and their evolution over time. The reported results suggest that our proposal can help cyber threat intelligence teams in the early detection of new ransomware families, and that it could be applied to other malware detection systems through the identification of their distinctive features.
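The abstract's core idea is to rank the features that drive a detector's predictions via state-of-the-art attribution techniques. As a hedged illustration (not the authors' implementation), the sketch below applies Integrated Gradients, one such gradient-based attribution method, to a toy logistic detector over binary app features; the model, its weights, and the feature vector are all hypothetical stand-ins.

```python
import numpy as np

# Hypothetical logistic "detector" over binary features (e.g., presence of
# API calls or permissions). The weights are illustrative, not from the paper.
rng = np.random.default_rng(0)
weights = rng.normal(size=8)

def model(x: np.ndarray) -> float:
    """Probability that a sample is ransomware (toy model)."""
    return 1.0 / (1.0 + np.exp(-x @ weights))

def integrated_gradients(x: np.ndarray, baseline: np.ndarray,
                         steps: int = 200) -> np.ndarray:
    """Attribute the prediction to each feature by averaging gradients
    along the straight path from `baseline` to `x` (midpoint rule)."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.zeros_like(x, dtype=float)
    for a in alphas:
        p = model(baseline + a * (x - baseline))
        grads += p * (1.0 - p) * weights  # gradient of sigmoid(w @ x) w.r.t. x
    avg_grad = grads / steps
    return (x - baseline) * avg_grad

x = (rng.random(8) > 0.5).astype(float)  # one app's binary feature vector
baseline = np.zeros(8)                   # "empty app" reference point
attr = integrated_gradients(x, baseline)

# Completeness axiom: attributions sum to the prediction difference,
# so each entry can be read as that feature's share of the score.
print(np.isclose(attr.sum(), model(x) - model(baseline), atol=1e-4))
```

Sorting `attr` then yields a per-sample ranking of influential features, which is the kind of output that can be aggregated over samples to characterize a ransomware family.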


Metadata
Title
Explanation-Driven Characterization of Android Ransomware
Authors
Michele Scalas
Konrad Rieck
Giorgio Giacinto
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-68796-0_17