19.09.2022

Generate Usable Adversarial Examples via Simulating Additional Light Sources

Authors: Chen Xi, Guo Wei, Zhang Fan, Du Jiayu

Published in: Neural Processing Letters


Abstract

Deep neural networks have been shown to be critically vulnerable to adversarial attacks. This has led to a proliferation of methods that generate adversarial examples from different perspectives, and the resulting examples are rapidly becoming harder to perceive, faster to generate, and more effective at attacking. Inspired by the cyberspace attack process, this paper analyzes adversarial attacks from the perspective of the attack path and finds that meaningless noise perturbations make these adversarial examples efficient to produce but difficult for an attacker to apply. This paper instead generates adversarial examples from the original, realistic features of the picture: the deep convolutional network is deceived by simulating the addition of tiny light sources that produce subtle feature changes in the image. The generated adversarial perturbations are no longer meaningless noise, which makes the approach a theoretically promising avenue for applications. Experiments demonstrate that the generated adversarial examples still achieve good attack results against deep convolutional networks and can be applied to black-box attacks.
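The abstract page does not include code, but the core idea of perturbing an image by simulating a small additional light source can be illustrated with a minimal sketch. The snippet below assumes a simple additive white light spot with Gaussian falloff; the function name add_light_source and its parameters are illustrative and not taken from the paper.

```python
import numpy as np

def add_light_source(image, center, radius, intensity):
    """Overlay a small, smoothly decaying light spot on an image.

    image:     float array in [0, 1], shape (H, W, 3)
    center:    (row, col) position of the simulated light source
    radius:    spatial extent of the Gaussian falloff, in pixels
    intensity: peak brightness added at the center
    """
    h, w = image.shape[:2]
    rows, cols = np.mgrid[0:h, 0:w]
    # Gaussian falloff so the added light blends smoothly into the scene
    dist_sq = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    spot = intensity * np.exp(-dist_sq / (2.0 * radius ** 2))
    # Add the same brightness to all channels (white light) and clip to a valid range
    return np.clip(image + spot[..., None], 0.0, 1.0)

if __name__ == "__main__":
    img = np.random.rand(224, 224, 3)  # stand-in for a real photograph
    adv_candidate = add_light_source(img, center=(60, 150), radius=12, intensity=0.15)
    # In an attack loop, one would query the target classifier with adv_candidate
    # and search over center/radius/intensity until the predicted label changes.
```

In a black-box setting, the search over the light-source parameters could be driven purely by the classifier's output labels or scores, since no gradient access is assumed.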
Metadata
Title
Generate Usable Adversarial Examples via Simulating Additional Light Sources
Authors
Chen Xi
Guo Wei
Zhang Fan
Du Jiayu
Publication date
19.09.2022
Publisher
Springer US
Published in
Neural Processing Letters
Print ISSN: 1370-4621
Electronic ISSN: 1573-773X
DOI
https://doi.org/10.1007/s11063-022-11024-z