
2024 | Original Paper | Book Chapter

The Forward-Forward Algorithm: Analysis and Discussion

Authors: Sudhanshu Thakur, Reha Dhawan, Parth Bhargava, Kaustubh Tripathi, Rahee Walambe, Ketan Kotecha

Published in: Advanced Computing

Publisher: Springer Nature Switzerland


Abstract

This study explores the potential and application of the recently proposed Forward-Forward algorithm (FFA). Its primary aim is to analyze the results the algorithm achieves and to compare them with those of existing training methods. Specifically, we examine how effectively FFA can be deployed in a neural network and whether it can produce results comparable to those of conventional backpropagation, in order to better understand the algorithm's benefits and limitations in the context of neural network training. The experiments use four datasets: MNIST, COVID-19 chest X-ray, Brain MRI, and Cat vs. Dog. Our findings suggest that FFA shows promise on certain computer vision tasks but is still far from replacing backpropagation for common workloads. The paper describes the experimental setup and procedure used to evaluate the efficacy of FFA and presents the obtained results and a comparative analysis.
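Since the chapter's analysis centers on this algorithm, a brief illustration of its mechanism may help. In Hinton's formulation, each layer is trained locally: it should produce high "goodness" (the sum of squared activations) for positive, real data and low goodness for negative, corrupted data, so learning needs only two forward passes and no network-wide backward pass. The following is a minimal PyTorch sketch of that idea, not the authors' experimental code; the layer sizes, threshold, learning rate, and random stand-in data are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FFLayer(nn.Module):
        """One Forward-Forward layer, trained locally with no backward pass through other layers."""

        def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)
            self.threshold = threshold  # goodness target separating positive from negative data
            self.opt = torch.optim.Adam(self.parameters(), lr=lr)

        def forward(self, x):
            # Length-normalize the input so only the *direction* of activity is passed on;
            # this stops a layer from trivially inheriting the previous layer's goodness.
            x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
            return F.relu(self.linear(x))

        def train_step(self, x_pos, x_neg):
            # Goodness = sum of squared activations. Push it above the threshold for
            # positive (real) data and below the threshold for negative (corrupted) data.
            g_pos = self.forward(x_pos).pow(2).sum(dim=1)
            g_neg = self.forward(x_neg).pow(2).sum(dim=1)
            loss = F.softplus(torch.cat([
                self.threshold - g_pos,   # penalize low goodness on positive data
                g_neg - self.threshold,   # penalize high goodness on negative data
            ])).mean()
            self.opt.zero_grad()
            loss.backward()               # gradients stay inside this layer
            self.opt.step()
            # Detach the outputs so no gradient ever crosses a layer boundary.
            return self.forward(x_pos).detach(), self.forward(x_neg).detach()

    # Toy usage: two layers trained greedily, layer by layer, on random stand-in data.
    if __name__ == "__main__":
        torch.manual_seed(0)
        layers = [FFLayer(784, 500), FFLayer(500, 500)]
        x_pos, x_neg = torch.rand(64, 784), torch.rand(64, 784)  # placeholders for real/corrupted inputs
        for _ in range(10):
            h_pos, h_neg = x_pos, x_neg
            for layer in layers:
                h_pos, h_neg = layer.train_step(h_pos, h_neg)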


Metadata
Title
The Forward-Forward Algorithm: Analysis and Discussion
Authors
Sudhanshu Thakur
Reha Dhawan
Parth Bhargava
Kaustubh Tripathi
Rahee Walambe
Ketan Kotecha
Copyright Year
2024
DOI
https://doi.org/10.1007/978-3-031-56700-1_31
