Published in: International Journal of Machine Learning and Cybernetics | Issue 12/2023

28.06.2023 | Original Article

Vulnerable point detection and repair against adversarial attacks for convolutional neural networks

Authors: Jie Gao, Zhaoqiang Xia, Jing Dai, Chen Dang, Xiaoyue Jiang, Xiaoyi Feng

Abstract

Convolutional neural networks have recently been shown to be sensitive to artificially crafted perturbations that are imperceptible to the naked eye. Image classification, semantic segmentation, and object detection all face this problem, and the existence of adversarial examples raises questions about the security of intelligent applications. Several works have addressed this problem and proposed defensive strategies to resist adversarial attacks, but none has explored which areas of a model are vulnerable under multiple attacks. In this work, we fill this gap by locating the areas of a model that are vulnerable to adversarial attacks. Specifically, under various attack methods of different strengths, we conduct extensive experiments on two datasets with three different networks and report the observed phenomena. In addition, by exploiting a Siamese network, we propose a novel approach that reveals the deficiencies of a model more intuitively. We further provide an adaptive vulnerable point repair method to improve the adversarial robustness of the model. Extensive experimental results show that the proposed method effectively improves adversarial robustness.
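The paper's exact detection and repair procedures are not reproduced on this page. As a rough illustration of the pipeline the abstract describes, the following PyTorch sketch generates a standard one-step FGSM perturbation, scores how far an input's embedding moves under attack as a Siamese-style vulnerability signal, and fine-tunes on the offending examples. The function names, the cosine-distance criterion, and the plain adversarial fine-tuning step are illustrative assumptions, not the authors' method.

    # Minimal sketch, assuming a classifier `model`, an embedding network
    # `embed`, and (x, y) batches in [0, 1]. Not the paper's implementation.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        """One-step FGSM: perturb x along the sign of the loss gradient."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    @torch.no_grad()
    def embedding_shift(embed, x, x_adv):
        """Siamese-style score: cosine distance between embeddings of the
        clean and attacked inputs; large shifts hint at vulnerable points."""
        return 1 - F.cosine_similarity(embed(x), embed(x_adv), dim=1)

    def repair_step(model, optimizer, x_adv, y):
        """Generic repair step: fine-tune on the adversarial examples that
        exposed the weakness (standard adversarial training, not the
        paper's adaptive scheme)."""
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

In such a setup, the inputs with the largest embedding shifts would mark the model's vulnerable points, and a repair loop could prioritize fine-tuning on exactly those examples.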

Metadata
Title
Vulnerable point detection and repair against adversarial attacks for convolutional neural networks
Authors
Jie Gao
Zhaoqiang Xia
Jing Dai
Chen Dang
Xiaoyue Jiang
Xiaoyi Feng
Publication date
28.06.2023
Publisher
Springer Berlin Heidelberg
Published in
International Journal of Machine Learning and Cybernetics / Issue 12/2023
Print ISSN: 1868-8071
Electronic ISSN: 1868-808X
DOI
https://doi.org/10.1007/s13042-023-01888-5
