Published in: Neural Processing Letters 1/2021

03-01-2021

Computation of CNN’s Sensitivity to Input Perturbation

Authors: Lin Xiang, Xiaoqin Zeng, Shengli Wu, Yanjun Liu, Baohua Yuan


Abstract

Although Convolutional Neural Networks (CNNs) are considered "approximately invariant" to nuisance perturbations such as image transformations, shifts, scaling, and other small deformations, some existing studies show that intense noise can cause noticeable variation in CNNs' outputs. This paper explores a method of measuring sensitivity by observing the output variation that corresponds to input perturbation in CNNs. The sensitivity is statistically defined in a bottom-up way, from neuron to layer and finally to the entire CNN. An iterative algorithm is proposed for approximating the defined sensitivity. On the basic architecture of CNNs, the theoretically computed sensitivity is verified on the MNIST database with four commonly used noise distributions: Gaussian, Uniform, Salt and Pepper, and Rayleigh. Experimental results show that the theoretical sensitivity is, on the one hand, in agreement with the actual output variation at the map, layer, and network levels, and, on the other hand, an applicable quantitative measure for selecting robust networks.
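The paper's iterative algorithm is not reproduced in the abstract. As a rough empirical counterpart to the idea it describes (perturb the input, observe the output variation of a convolutional layer), the following NumPy-only sketch estimates a layer's sensitivity by Monte Carlo sampling under the four noise distributions named above. All function names here are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x, kind, scale=0.1):
    """Perturb x with one of the four noise types named in the abstract."""
    if kind == "gaussian":
        return x + rng.normal(0.0, scale, x.shape)
    if kind == "uniform":
        return x + rng.uniform(-scale, scale, x.shape)
    if kind == "salt_pepper":
        y = x.copy()
        mask = rng.random(x.shape) < scale      # fraction of corrupted pixels
        y[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
        return y
    if kind == "rayleigh":
        return x + rng.rayleigh(scale, x.shape)
    raise ValueError(f"unknown noise kind: {kind}")

def conv_layer(x, kernel):
    """A minimal 'valid' 2-D convolution followed by ReLU, standing in for one CNN layer."""
    kh, kw = kernel.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)

def empirical_sensitivity(layer, x, kind, n_trials=200):
    """Monte Carlo estimate of the expected output deviation under input perturbation."""
    base = layer(x)
    devs = [np.abs(layer(add_noise(x, kind)) - base).mean() for _ in range(n_trials)]
    return float(np.mean(devs))

x = rng.random((8, 8))                 # toy "image"
k = rng.normal(0.0, 1.0, (3, 3))       # toy convolution kernel
for kind in ("gaussian", "uniform", "salt_pepper", "rayleigh"):
    print(kind, round(empirical_sensitivity(lambda z: conv_layer(z, k), x, kind), 4))
```

A network-level measure in the spirit of the paper's bottom-up definition would aggregate such per-layer estimates across maps and layers; the paper instead derives the quantity analytically and uses an iterative approximation rather than sampling.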


Metadata
Title
Computation of CNN’s Sensitivity to Input Perturbation
Authors
Lin Xiang
Xiaoqin Zeng
Shengli Wu
Yanjun Liu
Baohua Yuan
Publication date
03-01-2021
Publisher
Springer US
Published in
Neural Processing Letters / Issue 1/2021
Print ISSN: 1370-4621
Electronic ISSN: 1573-773X
DOI
https://doi.org/10.1007/s11063-020-10420-7
