2024 | Original Paper | Book Chapter

15. Evolutionary Model Validation—An Adversarial Robustness Perspective

Authors: Inês Valentim, Nuno Lourenço, Nuno Antunes

Published in: Handbook of Evolutionary Machine Learning

Publisher: Springer Nature Singapore


Abstract

When building Machine Learning models, either manually or automatically, we need to make sure that they are able to solve the task at hand and generalize, i.e., perform well on unseen data. By properly validating a model and estimating its generalization performance, we not only get a clearer idea of how it behaves but may also identify problems (e.g., overfitting) before they lead to significant losses in a production environment. Model validation usually focuses on predictive performance, but with models being applied in safety-critical areas, robustness should also be taken into consideration. In this context, a robust model produces correct outputs even when presented with data that deviates from the training data, including adversarial examples: samples to which small perturbations are purposely added to fool the model. There are, however, few studies on the robustness of models designed by evolution. In this chapter, we address this gap in the literature by performing adversarial attacks against models created by two prominent NeuroEvolution methods, DENSER and NSGA-Net, and evaluating their robustness. The results confirm that, despite achieving competitive results in standard settings where only predictive accuracy is analyzed, the evolved models are vulnerable to adversarial examples. This highlights the need to also address model validation from an adversarial robustness perspective.
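To make the attack setting concrete, the following minimal sketch (in PyTorch) crafts adversarial examples with the Fast Gradient Sign Method (FGSM) of Goodfellow et al. (ref. 21 below) and compares clean and adversarial accuracy. It is an illustration under our own assumptions, not the chapter's exact protocol: fgsm_attack and accuracy are hypothetical helper names, the model is assumed to be a trained classifier in eval mode, and inputs are assumed to lie in [0, 1] with a perturbation budget eps of our choosing.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Craft FGSM adversarial examples: one gradient-sign step of size eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each input in the direction that increases the loss,
    # then clip back to the valid pixel range.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

@torch.no_grad()
def accuracy(model, x, y):
    """Fraction of samples whose predicted class matches the label."""
    return (model(x).argmax(dim=1) == y).float().mean().item()

# Hypothetical usage on a test batch (x, y):
# clean_acc  = accuracy(model, x, y)
# robust_acc = accuracy(model, fgsm_attack(model, x, y), y)

A large gap between clean_acc and robust_acc under even this one-step attack already signals fragility; stronger multi-step attacks, such as the PGD attack of Madry et al. (ref. 36 below), typically widen the gap further.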


Footnotes
1
In the ML literature, it is common to find references to a validation set. Typically, that validation set is used to tune model hyperparameters and to perform model selection. Model selection is, to a certain degree, related to model validation, but the goals of the two tasks ultimately differ: model validation is concerned with fully specified models.
 
References
1.
Alzantot, M., Sharma, Y., Chakraborty, S., Zhang, H., Hsieh, C.-J., Srivastava, M.B.: GenAttack: practical black-box attacks with gradient-free optimization. In: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 1111–1119 (2019)
2.
Andriushchenko, M., Croce, F., Flammarion, N., Hein, M.: Square attack: a query-efficient black-box adversarial attack via random search. In: ECCV (2020)
3.
Assunção, F., Lourenço, N., Machado, P., Ribeiro, B.: DENSER: deep evolutionary network structured representation (2018). arXiv:1801.01563
4.
Athalye, A., Carlini, N., Wagner, D.: Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. In: International Conference on Machine Learning (2018)
5.
Benz, P., Zhang, C., Ham, S., Karjauv, A., Kweon, I.S.: Robustness comparison of vision transformer and MLP-mixer to CNNs. In: CVPR 2021 Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (AML-CV) (2021)
6.
Brendel, W., Rauber, J., Bethge, M.: Decision-based adversarial attacks: reliable attacks against black-box machine learning models. In: International Conference on Learning Representations (2018)
8.
Carlini, N., Athalye, A., Papernot, N., Brendel, W., Rauber, J., Tsipras, D., Goodfellow, I., Madry, A., Kurakin, A.: On evaluating adversarial robustness (2019). arXiv:1902.06705
9.
Carlini, N., Wagner, D.A.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE Computer Society (2017)
10.
Croce, F., Andriushchenko, M., Sehwag, V., Debenedetti, E., Flammarion, N., Chiang, M., Mittal, P., Hein, M.: RobustBench: a standardized adversarial robustness benchmark (2020). arXiv:2010.09670
11.
Croce, F., Hein, M.: Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In: Daumé III, H., Singh, A. (eds.) Proceedings of the 37th International Conference on Machine Learning, vol. 119, pp. 2206–2216. PMLR (2020)
12.
Devaguptapu, C., Agarwal, D., Mittal, G., Gopalani, P., Balasubramanian, V.N.: On adversarial robustness: a neural architecture search perspective. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, pp. 152–161 (2021)
13.
14.
Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., Li, J.: Boosting adversarial attacks with momentum. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9185–9193 (2018)
15.
Dong, Y., Fu, Q.-A., Yang, X., Pang, T., Su, H., Xiao, Z., Zhu, J.: Benchmarking adversarial robustness on image classification. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 318–328 (2020)
16.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: transformers for image recognition at scale. In: International Conference on Learning Representations (2021)
17.
Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., Song, D.: Robust physical-world attacks on deep learning visual classification. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1625–1634 (2018)
18.
Geraeinejad, V., Sinaei, S., Modarressi, M., Daneshtalab, M.: RoCo-NAS: robust and compact neural architecture search. In: 2021 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2021)
19.
Gilmer, J., Adams, R.P., Goodfellow, I.J., Andersen, D.G., Dahl, G.E.: Motivating the rules of the game for adversarial example research (2018). arXiv:1807.06732
20.
Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016)
21.
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (2015)
22.
Guo, M., Yang, Y., Xu, R., Liu, Z., Lin, D.: When NAS meets robustness: in search of robust architectures against adversarial attacks. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 628–637 (2020)
23.
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016)
25.
Hendrycks, D., Dietterich, T.: Benchmarking neural network robustness to common corruptions and perturbations. In: International Conference on Learning Representations (2019)
26.
Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7132–7141 (2018)
27.
Huang, H., Wang, Y., Erfani, S., Gu, Q., Bailey, J., Ma, X.: Exploring architectural ingredients of adversarially robust deep neural networks. Adv. Neural Inf. Process. Syst. 34, 5545–5559 (2021)
28.
Jacobsen, J.-H., Behrmann, J., Carlini, N., Tramèr, F., Papernot, N.: Exploiting excessive invariance caused by norm-bounded adversarial robustness (2019). arXiv:1903.10484
29.
Kar, O.F., Yeo, T., Atanov, A., Zamir, A.: 3D common corruptions and data augmentation. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 18941–18952 (2022)
30.
Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: International Conference on Computer Aided Verification, pp. 97–117. Springer (2017)
31.
Krizhevsky, A.: Learning multiple layers of features from tiny images. Technical report, University of Toronto (2009)
34.
López-Ibáñez, M., Branke, J., Paquete, L.: Reproducibility in evolutionary computation. ACM Trans. Evol. Learn. Optim. 1(4) (2021)
35.
Lu, Z., Whalen, I., Boddeti, V., Dhebar, Y., Deb, K., Goodman, E., Banzhaf, W.: NSGA-Net: neural architecture search using multi-objective genetic algorithm. In: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 419–427 (2019)
36.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: International Conference on Learning Representations (2018)
37.
Metzen, J.H., Genewein, T., Fischer, V., Bischoff, B.: On detecting adversarial perturbations. In: International Conference on Learning Representations (2017)
38.
Moosavi-Dezfooli, S., Fawzi, A., Fawzi, O., Frossard, P.: Universal adversarial perturbations. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 86–94 (2017)
39.
Moreno-Torres, J.G., Raeder, T., Alaíz-Rodríguez, R., Chawla, N.V., Herrera, F.: A unifying view on dataset shift in classification. Pattern Recogn. 45(1), 521–530 (2012)
40.
Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 427–436 (2015)
41.
Nicolae, M.-I., Sinn, M., Tran, M.N., Buesser, B., Rawat, A., Wistuba, M., Zantedeschi, V., Baracaldo, N., Chen, B., Ludwig, H., Molloy, I., Edwards, B.: Adversarial robustness toolbox v1.2.0 (2018). arXiv:1807.01069
42.
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C.L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P.F., Leike, J., Lowe, R.: Training language models to follow instructions with human feedback (2022). arXiv:2203.02155
43.
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Berkay Celik, Z., Swami, A.: Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506–519. ACM (2017)
45.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. (IJCV) 115(3), 211–252 (2015)
46.
Sinn, M., Wistuba, M., Buesser, B., Nicolae, M.-I., Tran, M.: Evolutionary search for adversarially robust neural networks. In: Safe Machine Learning workshop at ICLR (2019)
47.
Stanley, K.O.: Compositional pattern producing networks: a novel abstraction of development. Genet. Program Evolvable Mach. 8, 131–162 (2007)
48.
Storn, R., Price, K.: Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 11, 341–359 (1997)
49.
Su, J., Vargas, D.V., Sakurai, K.: One pixel attack for fooling deep neural networks. IEEE Trans. Evol. Comput. 23(5), 828–841 (2019)
50.
Suganuma, M., Shirakawa, S., Nagao, T.: A genetic programming approach to designing convolutional neural network architectures. In: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 497–504 (2017)
51.
Sutskever, I., Martens, J., Dahl, G., Hinton, G.: On the importance of initialization and momentum in deep learning. In: International Conference on Machine Learning (2013)
52.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I.J., Fergus, R.: Intriguing properties of neural networks. In: International Conference on Learning Representations (2014)
53.
Taori, R., Dave, A., Shankar, V., Carlini, N., Recht, B., Schmidt, L.: Measuring robustness to natural distribution shifts in image classification. In: Advances in Neural Information Processing Systems, vol. 33 (2020)
54.
Telikani, A., Tahmassebi, A., Banzhaf, W., Gandomi, A.H.: Evolutionary machine learning: a survey. ACM Comput. Surv. 54(8) (2021)
55.
Tjeng, V., Xiao, K.Y., Tedrake, R.: Evaluating robustness of neural networks with mixed integer programming. In: International Conference on Learning Representations (2019)
56.
Tolstikhin, I.O., Houlsby, N., Kolesnikov, A., Beyer, L., Zhai, X., Unterthiner, T., Yung, J., Steiner, A., Keysers, D., Uszkoreit, J., Lucic, M., Dosovitskiy, A.: MLP-Mixer: an all-MLP architecture for vision (2021). arXiv:2105.01601
57.
Tramèr, F., Carlini, N., Brendel, W., Madry, A.: On adaptive attacks to adversarial example defenses. In: Conference on Neural Information Processing Systems (NeurIPS) (2020)
58.
Valentim, I., Lourenço, N., Antunes, N.: Adversarial robustness assessment of neuroevolution approaches. In: 2022 IEEE Congress on Evolutionary Computation (CEC) (2022)
59.
60.
Webb, G.I., Hyde, R., Cao, H., Nguyen, H.-L., Petitjean, F.: Characterizing concept drift. Data Min. Knowl. Disc. 30(4), 964–994 (2016)
61.
Xie, X., Liu, Y., Sun, Y., Yen, G.G., Xue, B., Zhang, M.: BenchENAS: a benchmarking platform for evolutionary neural architecture search. IEEE Trans. Evol. Comput. 26(6), 1473–1485 (2022)
62.
Yang, J., Wang, P., Zou, D., Zhou, Z., Ding, K., Peng, W., Wang, H., Chen, G., Li, B., Sun, Y., Du, X., Zhou, K., Zhang, W., Hendrycks, D., Li, Y., Liu, Z.: OpenOOD: benchmarking generalized out-of-distribution detection. In: Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (2022)
63.
Yin, D., Gontijo Lopes, R., Shlens, J., Cubuk, E.D., Gilmer, J.: A Fourier perspective on model robustness in computer vision. In: Advances in Neural Information Processing Systems, vol. 32 (2019)
64.
Yuan, X., He, P., Zhu, Q., Li, X.: Adversarial examples: attacks and defenses for deep learning. IEEE Trans. Neural Netw. Learn. Syst. 30(9), 2805–2824 (2019)
65.
Zagoruyko, S., Komodakis, N.: Wide residual networks. In: Proceedings of the British Machine Vision Conference (BMVC). BMVA Press (2016)
66.
Zoph, B., Vasudevan, V., Shlens, J., Le, Q.V.: Learning transferable architectures for scalable image recognition. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8697–8710 (2018)
Metadata
Title
Evolutionary Model Validation—An Adversarial Robustness Perspective
Authors
Inês Valentim
Nuno Lourenço
Nuno Antunes
Copyright Year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-99-3814-8_15
