Published in: International Journal of Machine Learning and Cybernetics 1/2022

23.10.2021 | Original Article

Do gradient-based explanations tell anything about adversarial robustness to android malware?

Written by: Marco Melis, Michele Scalas, Ambra Demontis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli


Abstract

While machine-learning algorithms have demonstrated a strong ability to detect Android malware, they can be evaded by sparse evasion attacks crafted by injecting a small set of fake components, e.g., permissions and system calls, without compromising the malware's intrusive functionality. Previous work has shown that, to improve robustness against such attacks, learning algorithms should avoid overemphasizing a few discriminant features and instead provide decisions that rely upon a large subset of components. In this work, we investigate whether gradient-based attribution methods, used to explain classifiers' decisions by identifying the most relevant features, can help identify and select more robust algorithms. To this end, we propose to exploit two different metrics that represent the evenness of explanations, along with a new compact security measure called Adversarial Robustness Metric. Our experiments, conducted on two different datasets and five classification algorithms for Android malware detection, show that a strong connection exists between the uniformity of explanations and adversarial robustness. In particular, we found that popular techniques such as Gradient*Input and Integrated Gradients are strongly correlated with security when applied to both linear and nonlinear detectors, whereas more elementary explanation techniques such as the simple Gradient do not provide reliable information about the robustness of such classifiers.
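The intuition in the abstract — classifiers that spread relevance over many components are harder to evade with sparse attacks than classifiers that lean on a few features — can be sketched numerically. The snippet below is a minimal illustration, not the authors' exact metrics: it computes Gradient*Input attributions for a linear detector (where the gradient is simply the weight vector) and scores their evenness with a normalized-entropy measure chosen here for simplicity; the weight vectors and the 8-feature binary representation are hypothetical.

```python
import numpy as np

def gradient_x_input(w, x):
    # For a linear model f(x) = w . x + b, the gradient w.r.t. x is w,
    # so the Gradient*Input attribution is the elementwise product w * x.
    return w * x

def evenness(attributions, eps=1e-12):
    # Illustrative evenness score: normalized Shannon entropy of the
    # absolute attribution mass. Values near 1 mean relevance is spread
    # uniformly across features; values near 0 mean it is concentrated
    # on very few features (the fragile case for sparse evasion).
    p = np.abs(attributions)
    p = p / (p.sum() + eps)
    nz = p[p > 0]
    h = -(nz * np.log(nz)).sum()
    return float(h / np.log(len(attributions)))

# Two hypothetical detectors over 8 binary components (e.g. permissions):
x = np.ones(8)                                   # sample with all components present
w_sparse = np.array([5.0, 0, 0, 0, 0, 0, 0, 0])  # relies on a single feature
w_even = np.full(8, 0.625)                       # spreads weight evenly

print(evenness(gradient_x_input(w_sparse, x)))   # near 0: fragile
print(evenness(gradient_x_input(w_even, x)))     # near 1: more robust
```

Flipping the one feature that `w_sparse` depends on is enough to change its score drastically, while an attacker must inject or remove many components to move `w_even` by the same amount — the connection between evenness of explanations and adversarial robustness that the paper quantifies.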

Footnotes
1. MD5: f8bcbd48f44ce973036fac0bce68a5d5.
2. MD5: eb1f454ea622a8d2713918b590241a7e.
Metadata
Title
Do gradient-based explanations tell anything about adversarial robustness to android malware?
Written by
Marco Melis
Michele Scalas
Ambra Demontis
Davide Maiorca
Battista Biggio
Giorgio Giacinto
Fabio Roli
Publication date
23.10.2021
Publisher
Springer Berlin Heidelberg
Published in
International Journal of Machine Learning and Cybernetics / Issue 1/2022
Print ISSN: 1868-8071
Electronic ISSN: 1868-808X
DOI
https://doi.org/10.1007/s13042-021-01393-7
