
2017 | OriginalPaper | Chapter

Adversarial Examples for Malware Detection

Authors: Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, Patrick McDaniel

Published in: Computer Security – ESORICS 2017

Publisher: Springer International Publishing


Abstract

Machine learning models are known to lack robustness against inputs crafted by an adversary. Such adversarial examples can, for instance, be derived from regular inputs by introducing minor—yet carefully selected—perturbations.
In this work, we expand on existing adversarial example crafting algorithms to construct a highly effective attack that uses adversarial examples against malware detection models. To this end, we identify and overcome key challenges that prevent existing algorithms from being applied against malware detection: our approach operates in discrete and often binary input domains, whereas previous work operated only in continuous and differentiable domains. In addition, our technique guarantees that the adversarially manipulated program retains its malware functionality. In our evaluation, we train a neural network for malware detection on the DREBIN data set and achieve classification performance matching the state of the art reported in the literature. Using the augmented adversarial crafting algorithm, we then mislead this classifier for 63% of all malware samples. We also present a detailed evaluation of defensive mechanisms previously introduced in the computer vision context, including distillation and adversarial training, which show promising results.
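The crafting step sketched in the abstract works on binary feature vectors and preserves program functionality by only adding features (flipping entries from 0 to 1), never removing them. The following is a minimal, hypothetical sketch of that constraint, using a toy logistic-regression "malware score" with made-up weights; the paper's actual attack applies a Jacobian-based crafting step to a trained feed-forward neural network on DREBIN features, but the greedy add-only loop is the essential idea. All names (`malware_score`, `craft`, the weights) are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: craft an adversarial malware sample in a binary feature
# space by only ADDING features (0 -> 1), so the original program keeps its
# malicious functionality. Toy linear detector; the paper uses a neural net.
import numpy as np

rng = np.random.default_rng(0)
n_features = 200                      # e.g. DREBIN-style binary indicators
w = rng.normal(size=n_features)       # hypothetical trained weights
b = 0.0

def malware_score(x):
    """Probability that x is malware under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def craft(x, max_changes=20, threshold=0.5):
    """Greedily add features whose gradient most reduces the malware score."""
    x = x.copy()
    for _ in range(max_changes):
        p = malware_score(x)
        if p < threshold:                          # already classified benign
            break
        grad = p * (1.0 - p) * w                   # d(score)/dx for the toy model
        candidates = np.where((x == 0) & (grad < 0))[0]  # only 0 -> 1 flips allowed
        if candidates.size == 0:
            break
        best = candidates[np.argmin(grad[candidates])]   # steepest decrease
        x[best] = 1                                # add the feature (e.g. a manifest entry)
    return x

# Usage: start from a (random) malware feature vector and perturb it.
x_malware = (rng.random(n_features) < 0.1).astype(float)
x_adv = craft(x_malware)
print(malware_score(x_malware), malware_score(x_adv),
      int(np.sum(x_adv != x_malware)), "features added")
```

Restricting the perturbation to feature additions is what makes the attack practical for malware: removing a feature could break the program, whereas adding an unused one generally does not.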


Metadata
Title
Adversarial Examples for Malware Detection
Authors
Kathrin Grosse
Nicolas Papernot
Praveen Manoharan
Michael Backes
Patrick McDaniel
Copyright Year
2017
DOI
https://doi.org/10.1007/978-3-319-66399-9_4
