
2024 | OriginalPaper | Chapter

The Forward-Forward Algorithm: Analysis and Discussion

Authors: Sudhanshu Thakur, Reha Dhawan, Parth Bhargava, Kaustubh Tripathi, Rahee Walambe, Ketan Kotecha

Published in: Advanced Computing

Publisher: Springer Nature Switzerland


Abstract

This study explores the potential and applications of the recently proposed Forward-Forward algorithm (FFA). Its primary aim is to analyze the results achieved with FFA and to compare them against existing training algorithms. Specifically, we investigate the extent to which FFA can be deployed effectively in a neural network and whether it can produce results comparable to those of the conventional backpropagation method, in order to better understand the new algorithm's benefits and limitations in the context of neural network training. Four datasets are used in the experiments: MNIST, a COVID-19 chest X-ray dataset, a brain MRI dataset, and the Cat vs. Dog dataset. Our findings suggest that FFA shows promise on certain computer vision tasks, but it is still far from replacing backpropagation for common tasks. The paper describes the experimental setup and procedure used to assess the efficacy of FFA and presents the obtained results together with a comparative analysis.
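To make the mechanism the abstract refers to concrete, the sketch below shows the core of the Forward-Forward idea as Hinton describes it: each layer is trained with a purely local objective, pushing its "goodness" (the sum of squared activations) above a threshold for positive (real) data and below it for negative data, so no backward pass through the whole network is needed. This is a minimal illustration in PyTorch, not the authors' experimental code; the threshold, learning rate, and layer interface are assumed values chosen for demonstration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One fully connected layer trained with a local Forward-Forward objective."""

    def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.threshold = threshold  # goodness target (assumed value)
        # Each layer owns its optimizer: training is layer-local.
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Length-normalise the input so a layer cannot simply reuse
        # the previous layer's goodness.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Goodness = sum of squared activations per example.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        # Logistic loss: push g_pos above the threshold, g_neg below it.
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()  # gradients stay within this single layer
        self.opt.step()
        # Return detached outputs so no gradient flows between layers.
        with torch.no_grad():
            return self.forward(x_pos), self.forward(x_neg)

In a full network, layers of this kind are stacked and trained greedily, each receiving the detached output of the one below. For supervised classification, Hinton's recipe embeds the label in the input (for example, overwriting the first pixels of an MNIST image with a one-hot label) to form positive examples, and pairs images with incorrect labels to form negative ones; at test time the label yielding the highest accumulated goodness across layers is predicted.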


DOI
https://doi.org/10.1007/978-3-031-56700-1_31
