
1993 | Original Paper | Book Chapter

A New Min-Max Optimisation Approach for Fast Learning Convergence of Feed-Forward Neural Networks

Authors: A. Chella, A. Gentile, F. Sorbello, A. Tarantino

Published in: Artificial Neural Nets and Genetic Algorithms

Publisher: Springer Vienna


One of the most critical aspects for the wide application of neural networks to real-world problems is the learning process, which is known to be computationally expensive and time consuming. In this paper we propose a new approach to the learning process from an optimisation point of view. The developed algorithm is a min-max method based on a combination of the quasi-Newton and steepest-descent methods; it has previously been applied successfully in other areas and shows a faster convergence rate than the classical learning rules. The optimum point is reached by minimising the maximum of the error functions of the network, without requiring any tuning of internal parameters. Moreover, the proposed algorithm yields useful information about the size of the initial weight values through simple observations on the Gramian matrix associated with the network. The algorithm has been tested on several widespread benchmarks. It is superior to backpropagation both in convergence rate and in ease of use, and its performance is highly competitive with the other learning methods available in the literature. Significant simulation results are also reported in the paper.
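The core idea of the abstract, minimising the maximum of a set of error functions, can be illustrated in isolation. The following sketch is not the authors' algorithm (which combines quasi-Newton and steepest-descent steps on a feed-forward network); it is a minimal hedged example of min-max optimisation on a convex toy problem, where each steepest-descent step follows the gradient of whichever error term is currently worst.

```python
import numpy as np

# Hypothetical toy problem (not from the paper): minimise
# max_i (w - a_i)^2 over a scalar w. The minimiser is the midpoint of
# the two extreme a_i, here w = 1 with worst-case error 4. Each step
# uses the subgradient of the max, i.e. the gradient of the currently
# worst error term, with a diminishing step size.

A = np.array([-1.0, 0.0, 3.0])  # optimum of max_i (w - a_i)^2 is w = 1

w = 5.0
for t in range(2000):
    errs = (w - A) ** 2
    k = int(np.argmax(errs))     # index of the currently worst error term
    grad = 2.0 * (w - A[k])      # gradient of that term w.r.t. w
    w -= 0.5 / (t + 1) * grad    # diminishing step (subgradient method)

print(f"w = {w:.4f}, worst error = {np.max((w - A) ** 2):.4f}")
```

In the paper this min-max principle is applied to the per-pattern error functions of a feed-forward network, so the descent direction is built from the gradients of the worst-case errors rather than from the averaged error that backpropagation minimises.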

Metadata
Title
A New Min-Max Optimisation Approach for Fast Learning Convergence of Feed-Forward Neural Networks
Authors
A. Chella
A. Gentile
F. Sorbello
A. Tarantino
Copyright year
1993
Publisher
Springer Vienna
DOI
https://doi.org/10.1007/978-3-7091-7533-0_11
