Published in: International Journal of Machine Learning and Cybernetics 6/2021

13.02.2021 | Original Article

A study on the uncertainty of convolutional layers in deep neural networks

Authors: Haojing Shen, Sihong Chen, Ran Wang



Abstract

This paper shows a Min–Max property in the connection weights of the convolutional layers of a neural network structure, namely LeNet. Specifically, the Min–Max property means that, during back-propagation-based training of LeNet, the weights of the convolutional layers move away from the centers of their intervals, i.e., decrease toward their minimum or increase toward their maximum. From the perspective of uncertainty, we demonstrate that the Min–Max property corresponds to minimizing the fuzziness of the model parameters through a simplified formulation of convolution. It is experimentally confirmed that a model with the Min–Max property has stronger adversarial robustness, so this property can be incorporated into the design of the loss function. This paper points out a changing tendency of uncertainty in the convolutional layers of the LeNet structure and offers some insights into the interpretability of convolution.
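The abstract's notion of "fuzziness" of weights relative to the centers of their intervals can be illustrated with a standard fuzziness measure from fuzzy set theory. The sketch below is an assumption for illustration, not the paper's exact formulation: it uses the linear index of fuzziness on min–max-normalized weights, which is 0 when every weight sits at an interval endpoint (the Min–Max property) and maximal when every weight sits at the interval center.

```python
import numpy as np

def linear_fuzziness(w, lo=None, hi=None):
    """Linear index of fuzziness of a weight tensor (illustrative measure).

    Weights are min-max normalized into [0, 1] memberships. The result is
    0.0 when every normalized weight is at an endpoint (0 or 1) and 1.0
    when every normalized weight is at the interval center (0.5).
    """
    w = np.asarray(w, dtype=float).ravel()
    lo = w.min() if lo is None else lo
    hi = w.max() if hi is None else hi
    mu = (w - lo) / (hi - lo)               # membership values in [0, 1]
    return 2.0 * float(np.mean(np.minimum(mu, 1.0 - mu)))

# Weights pushed to interval endpoints (Min-Max property): fuzziness 0.
endpoint_weights = [-1.0, 1.0, -1.0, 1.0]
# Weights clustered at the interval center: maximal fuzziness 1.
center_weights = [0.0, 0.0, 0.0]
print(linear_fuzziness(endpoint_weights, lo=-1.0, hi=1.0))  # 0.0
print(linear_fuzziness(center_weights, lo=-1.0, hi=1.0))    # 1.0
```

A term of this form, evaluated on each convolutional kernel, is one plausible way such a quantity could be added to a loss function as a regularizer that encourages the Min–Max property; the paper's own loss design may differ.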

Metadata
Title
A study on the uncertainty of convolutional layers in deep neural networks
Authors
Haojing Shen
Sihong Chen
Ran Wang
Publication date
13.02.2021
Publisher
Springer Berlin Heidelberg
Published in
International Journal of Machine Learning and Cybernetics / Issue 6/2021
Print ISSN: 1868-8071
Electronic ISSN: 1868-808X
DOI
https://doi.org/10.1007/s13042-021-01278-9
