2006 | Original Paper | Book Chapter
An Experimental Study on Training Radial Basis Functions by Gradient Descent
Authors: Joaquín Torres-Sospedra, Carlos Hernández-Espinosa, Mercedes Fernández-Redondo
Published in: Artificial Neural Networks in Pattern Recognition
Publisher: Springer Berlin Heidelberg
In this paper, we present experiments comparing different training algorithms for Radial Basis Function (RBF) neural networks. In particular, we compare the classical training procedure, which consists of unsupervised training of the centers followed by supervised training of the output weights, with the fully supervised training by gradient descent proposed recently in some papers. We conclude that fully supervised training generally performs better. We also compare batch training with online training, and we conclude that online training reduces the number of iterations required.
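To make the fully supervised approach concrete, the following is a minimal sketch (not the authors' implementation) of an RBF network whose centers, widths, and output weights are all adapted by online gradient descent on a squared-error objective. The toy data, network size, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (assumption: any smooth target serves for illustration)
X = np.linspace(-1, 1, 40).reshape(-1, 1)
y = np.sin(np.pi * X[:, 0])

# RBF network with M Gaussian units; in fully supervised training the
# centers c, widths s, and output weights w are ALL updated by gradient descent.
M = 6
c = rng.uniform(-1, 1, (M, 1))   # centers
s = np.full(M, 0.5)              # widths
w = rng.normal(0, 0.1, M)        # output weights
lr = 0.05                        # learning rate (assumed value)

def forward(x):
    # phi_j(x) = exp(-||x - c_j||^2 / (2 s_j^2)); output is the weighted sum
    d2 = ((x[:, None, :] - c[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2 * s ** 2))
    return phi, d2, phi @ w

def mse():
    _, _, out = forward(X)
    return float(np.mean((out - y) ** 2))

err_before = mse()
for epoch in range(200):
    # Online training: parameters are updated after each pattern,
    # rather than once per epoch as in batch training.
    for i in rng.permutation(len(X)):
        x_i, y_i = X[i:i+1], y[i]
        phi, d2, out = forward(x_i)
        e = out[0] - y_i                     # error on this pattern
        # Gradients of 0.5*e^2 with respect to each parameter set
        w -= lr * e * phi[0]
        diff = x_i[0] - c                    # (M, 1): x - c_j per unit
        common = e * w * phi[0] / (s ** 2)   # shared factor e * w_j * phi_j / s_j^2
        c -= lr * common[:, None] * diff
        s -= lr * common * d2[0] / s
err_after = mse()
```

In batch mode, the per-pattern updates above would instead be accumulated over the whole training set and applied once per epoch; the online variant typically needs fewer epochs to reach a comparable error, which matches the paper's observation.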