Published in: Empirical Software Engineering 1/2022

01.01.2022

Omni: automated ensemble with unexpected models against adversarial evasion attack

Authors: Rui Shu, Tianpei Xia, Laurie Williams, Tim Menzies


Abstract

Context

Machine learning-based security detection models have become prevalent in modern malware and intrusion detection systems. However, previous studies show that such models are susceptible to adversarial evasion attacks. In this type of attack, inputs (i.e., adversarial examples) are specially crafted by intelligent malicious adversaries with the aim of being misclassified by existing state-of-the-art models (e.g., deep neural networks). Once attackers can fool a classifier into thinking that a malicious input is benign, they can render a machine learning-based malware or intrusion detection system ineffective.
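To make such crafting concrete, the sketch below applies the fast gradient sign method (FGSM), one of the attacks evaluated later, to a hypothetical logistic-regression detector. This is a minimal illustration under stated assumptions, not the paper's code; the weights w, b and the perturbation budget eps are made-up values.

    import numpy as np

    # Minimal FGSM sketch against a hypothetical logistic-regression detector.
    # In a white-box attack the adversary knows the model parameters w, b.
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=20), 0.1   # illustrative target-model parameters
    x = rng.normal(size=20)           # a malicious input, true label y = 1
    y = 1.0

    def score(v):                     # P(malicious) under the target model
        return 1.0 / (1.0 + np.exp(-(w @ v + b)))

    # Cross-entropy gradient with respect to the input: dL/dx = (p - y) * w.
    grad_x = (score(x) - y) * w

    # FGSM: one signed gradient step that increases the loss on the true
    # label, pushing the malicious sample toward the benign class.
    eps = 0.3                         # illustrative perturbation budget
    x_adv = x + eps * np.sign(grad_x)

    print(score(x), score(x_adv))     # the malicious score drops after the step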

Objective

To help security practitioners and researchers build models that are more robust against non-adaptive, white-box, and non-targeted adversarial evasion attacks through the idea of ensemble models.

Method

We propose an approach called Omni, whose main idea is to create an ensemble of “unexpected models”, i.e., models whose control hyperparameters are distant from those of the adversary’s target model, and with which we then make an optimized weighted ensemble prediction.
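As a rough sketch of this idea (ours, not the authors' implementation): randomly sample candidate hyperparameter configurations, keep those most distant from the target model's configuration, train one model per surviving configuration, and combine their predictions with a weighted vote. The two-dimensional random-forest hyperparameter space, the search bounds, the normalized Euclidean distance, and the uniform weights below are all illustrative assumptions; Omni optimizes the ensemble weights rather than fixing them.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Assumed target-model configuration (n_estimators, max_depth) that the
    # adversary attacked, plus assumed bounds for the random search.
    target_cfg = np.array([100, 10])
    lo, hi = np.array([10, 2]), np.array([300, 30])

    def dist(cfg):
        # Distance to the target config, each hyperparameter scaled to [0, 1].
        return np.linalg.norm((cfg - target_cfg) / (hi - lo))

    # Random search over the space; keep the most "unexpected" configurations.
    rng = np.random.default_rng(1)
    candidates = rng.integers(lo, hi, size=(50, 2))
    unexpected = sorted(candidates, key=dist, reverse=True)[:5]

    # Train one model per unexpected configuration (toy data for illustration).
    X, y = make_classification(n_samples=500, random_state=1)
    models = [RandomForestClassifier(n_estimators=int(n), max_depth=int(d),
                                     random_state=1).fit(X, y)
              for n, d in unexpected]

    # Weighted ensemble prediction; uniform weights here for brevity, whereas
    # Omni would optimize them (e.g., on held-out validation data).
    weights = np.ones(len(models)) / len(models)
    proba = sum(wt * m.predict_proba(X) for wt, m in zip(weights, models))
    prediction = proba.argmax(axis=1)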

Results

In studies with five types of adversarial evasion attacks (FGSM, BIM, JSMA, DeepFool, and Carlini-Wagner) on five security datasets (NSL-KDD, CIC-IDS-2017, CSE-CIC-IDS2018, CICAndMal2017, and the Contagio PDF dataset), we show that Omni is a promising defense strategy against adversarial attacks compared with other baseline treatments.

Conclusions

When employing ensemble defense against adversarial evasion attacks, we suggest creating the ensemble from unexpected models that are distant from the attacker’s expected model (i.e., the target model), through methods such as hyperparameter optimization.


Metadata
Title
Omni: automated ensemble with unexpected models against adversarial evasion attack
Authors
Rui Shu
Tianpei Xia
Laurie Williams
Tim Menzies
Publication date
01.01.2022
Publisher
Springer US
Published in
Empirical Software Engineering / Issue 1/2022
Print ISSN: 1382-3256
Electronic ISSN: 1573-7616
DOI
https://doi.org/10.1007/s10664-021-10064-8
