2021 | OriginalPaper | Chapter

The Effect of Noise and Brightness on Convolutional Deep Neural Networks

Authors : José A. Rodríguez-Rodríguez, Miguel A. Molina-Cabello, Rafaela Benítez-Rochel, Ezequiel López-Rubio

Published in: Pattern Recognition. ICPR International Workshops and Challenges

Publisher: Springer International Publishing

Abstract

The classification performance of Convolutional Neural Networks (CNNs) can be hampered by several factors, and sensor noise is one of these nuisances. In this work, a study of the effect of noise on these networks is presented. The methodological framework includes two realistic noise models for present-day CMOS vision sensors. The models allow Poisson, Gaussian, salt & pepper, speckle, and uniform noise to be included separately as sources of defects in image acquisition sensors. Synthetic noise may then be added to images with this methodology in order to simulate common sources of image distortion. Additionally, the impact of brightness in conjunction with each selected kind of noise is also addressed: the proposed methodology incorporates a brightness scale factor to emulate images captured under low illumination conditions. Based on these models, experiments are carried out for a selection of state-of-the-art CNNs. The results of the study demonstrate that Poisson noise has a small impact on the performance of CNNs, while speckle and salt & pepper noise, together with the global illumination level, can substantially degrade the classification accuracy. Gaussian and uniform noise have a moderate effect on the CNNs.
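
As a minimal illustration of the kind of corruption pipeline the abstract describes (and not the authors' exact sensor model), the Python sketch below scales an image by a brightness factor to emulate low illumination and then injects one of the five studied noise types. All names and parameters here (apply_brightness_and_noise, strength, etc.) are illustrative assumptions.

    import numpy as np

    def apply_brightness_and_noise(img, brightness=1.0, noise="gaussian",
                                   strength=0.05, rng=None):
        """Scale image brightness and add synthetic sensor-like noise.

        img:        float array with values in [0, 1]
        brightness: multiplicative factor; values < 1 emulate low illumination
        strength:   generic noise level (its meaning depends on the noise type)
        """
        rng = np.random.default_rng() if rng is None else rng
        out = np.clip(img * brightness, 0.0, 1.0)

        if noise == "poisson":
            # Photon shot noise: counts scale with the signal level.
            peak = 1.0 / max(strength, 1e-6)
            out = rng.poisson(out * peak) / peak
        elif noise == "gaussian":
            out = out + rng.normal(0.0, strength, out.shape)
        elif noise == "uniform":
            out = out + rng.uniform(-strength, strength, out.shape)
        elif noise == "speckle":
            # Multiplicative noise: perturbation proportional to pixel intensity.
            out = out * (1.0 + rng.normal(0.0, strength, out.shape))
        elif noise == "salt_pepper":
            mask = rng.random(out.shape)
            out = np.where(mask < strength / 2, 0.0, out)        # pepper
            out = np.where(mask > 1 - strength / 2, 1.0, out)    # salt

        return np.clip(out, 0.0, 1.0)

A corrupted evaluation image could then be produced with, for example, apply_brightness_and_noise(img, brightness=0.5, noise="salt_pepper", strength=0.02) before feeding it to a CNN under test.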


Metadata
Title
The Effect of Noise and Brightness on Convolutional Deep Neural Networks
Authors
José A. Rodríguez-Rodríguez
Miguel A. Molina-Cabello
Rafaela Benítez-Rochel
Ezequiel López-Rubio
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-68780-9_49
