26.02.2020 | Original Article | Issue 4/2020 | Open Access

International Journal of Machine Learning and Cybernetics 4/2020

Robustness to adversarial examples can be improved with overfitting

Journal:
International Journal of Machine Learning and Cybernetics > Issue 4/2020
Authors:
Oscar Deniz, Anibal Pedraza, Noelia Vallez, Jesus Salido, Gloria Bueno
Important notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Abstract

Deep learning (henceforth DL) has become the most powerful machine learning methodology. Under specific circumstances, its recognition rates even surpass those obtained by humans. Despite this, several works have shown that deep learning produces outputs that are very far from human responses when confronted with the same task; this is the case of the so-called “adversarial examples” (henceforth AE). The fact that such implausible misclassifications exist points to a fundamental difference between machine and human learning. This paper focuses on the possible causes of this intriguing phenomenon. We first argue that the error in adversarial examples is caused by high bias, i.e. by regularization that has negative local effects. This idea is supported by our experiments, in which robustness to adversarial examples is measured with respect to the level of fitting to the training samples: higher fitting was associated with higher robustness to adversarial examples. This ties the phenomenon to the trade-off that exists in machine learning between fitting and generalization.
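The abstract describes the experiment only at a high level. As a rough illustration (not the authors' code), the following PyTorch sketch tracks robustness, measured as accuracy under an FGSM attack, at increasing levels of fit to the training data; the architecture, the toy data, the attack choice (FGSM) and the perturbation size eps are all illustrative assumptions.

```python
# Minimal sketch of the abstract's experiment: train a small network to
# increasing levels of fit and, at each stage, measure robustness as
# accuracy on FGSM adversarial examples. Model, data, attack and epsilon
# are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: perturb x by eps in the direction of
    the sign of the loss gradient (Goodfellow et al., 2015)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_accuracy(model, x, y, eps=0.1):
    """Fraction of adversarially perturbed inputs still classified correctly."""
    model.eval()
    x_adv = fgsm(model, x, y, eps)
    with torch.no_grad():
        return (model(x_adv).argmax(dim=1) == y).float().mean().item()

# Toy data and model: replace with a real dataset/architecture to probe
# the trend the paper reports (higher fit -> higher robustness).
torch.manual_seed(0)
x_train = torch.randn(512, 20)
y_train = (x_train.sum(dim=1) > 0).long()
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(1, 201):
    model.train()
    opt.zero_grad()
    loss = F.cross_entropy(model(x_train), y_train)
    loss.backward()
    opt.step()
    if epoch % 50 == 0:  # compare fit to training data vs adversarial robustness
        with torch.no_grad():
            fit = (model(x_train).argmax(dim=1) == y_train).float().mean().item()
        adv = adversarial_accuracy(model, x_train, y_train)
        print(f"epoch {epoch:3d}  train fit {fit:.3f}  adv. accuracy {adv:.3f}")
```

In this sketch the "level of fitting" is simply the number of training epochs; the paper's claim would correspond to the adversarial accuracy column rising together with the train-fit column rather than falling as the model overfits.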