Published in: Neural Computing and Applications 8/2022

18.01.2022 | Original Article

Comparing the performance of Hebbian against backpropagation learning using convolutional neural networks

Authors: Gabriele Lagani, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato


Abstract

In this paper, we investigate Hebbian learning strategies applied to Convolutional Neural Network (CNN) training. We consider two unsupervised learning approaches: Hebbian Winner-Takes-All (HWTA) and Hebbian Principal Component Analysis (HPCA). The Hebbian learning rules are used to train the layers of a CNN in order to extract features that are then used for classification, without requiring backpropagation (backprop). Experimental comparisons are made with state-of-the-art unsupervised (but backprop-based) Variational Auto-Encoder (VAE) training. For completeness, we consider two supervised Hebbian learning variants (Supervised Hebbian Classifiers, SHC, and Contrastive Hebbian Learning, CHL) for training the final classification layer, which are compared to Stochastic Gradient Descent training. We also investigate hybrid learning methodologies, where some network layers are trained following the Hebbian approach and others are trained by backprop. We tested our approaches on the MNIST, CIFAR10, and CIFAR100 datasets. Our results suggest that Hebbian learning is generally suitable for training early feature extraction layers, or for retraining higher network layers in fewer training epochs than backprop. Moreover, our experiments show that Hebbian learning outperforms VAE training, with HPCA performing generally better than HWTA.
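To make the two unsupervised rules named in the abstract concrete, the following is a minimal NumPy sketch of their common single-layer formulations: competitive learning for Winner-Takes-All, and Sanger's generalized Hebbian algorithm for Hebbian PCA. The function names, weight shapes, and learning rate are illustrative assumptions, not taken from the paper's actual implementation (which operates on convolutional layers).

```python
import numpy as np

rng = np.random.default_rng(0)

def hwta_update(W, x, lr=0.01):
    """Hebbian Winner-Takes-All: only the most responsive neuron
    pulls its weight vector toward the current input (competitive learning).
    W has one row per neuron; x is a single input vector."""
    winner = int(np.argmax(W @ x))
    W[winner] += lr * (x - W[winner])
    return W

def hpca_update(W, x, lr=0.01):
    """Sanger's rule (generalized Hebbian algorithm): with repeated
    updates over an input stream, row i of W converges to the i-th
    principal component of the inputs."""
    y = W @ x
    dW = np.zeros_like(W)
    for i in range(W.shape[0]):
        # deflate: subtract the reconstruction by components 0..i
        residual = x - W[:i + 1].T @ y[:i + 1]
        dW[i] = lr * y[i] * residual
    W += dW
    return W
```

Both updates are local (each weight change depends only on pre- and post-synaptic activity), which is what allows training layer by layer without backpropagating a global error signal.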


Footnotes
1.
The code to reproduce the experiments is available at: github.com/GabrieleLagani/HebbianPCA/tree/hebbpca.
 
Metadata
Title
Comparing the performance of Hebbian against backpropagation learning using convolutional neural networks
Authors
Gabriele Lagani
Fabrizio Falchi
Claudio Gennaro
Giuseppe Amato
Publication date
18.01.2022
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 8/2022
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-021-06701-4
