1981 | Original Paper | Book Chapter
Ergodic Learning Algorithms
Author: Prof. S. Lakshmivarahan
Published in: Learning Algorithms Theory and Applications
Publisher: Springer New York
Included in: Professional Book Archive
This chapter presents an analysis of general non-linear reward-penalty ergodic (N_{ER-P}) algorithms. The basic property characterizing this class is that all states under these algorithms are non-absorbing. The now classic linear reward-penalty (L_{ER-P}) algorithm is a special case, and it is well known [C1] that the L_{ER-P} algorithm is only expedient. Using the theory of Markov processes that evolve by small steps [N14], a variety of characterizations of the process {p(k)}, k ≥ 0, are given, including the evolution of its mean and variance and, in fact, its actual sample-path behavior. As a by-product, it is proved that there exists a proper choice of parameters and functions for which the N_{ER-P} algorithm is ε-optimal.
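To make the non-absorbing (ergodic) property concrete, the following is a minimal sketch of the two-action linear reward-penalty scheme, the special case mentioned above. The environment's penalty probabilities (here 0.2 and 0.8) and the step sizes are illustrative assumptions, not values from the chapter; because the penalty step b is strictly positive, the pure strategies p = 0 and p = 1 are not absorbing and the chain keeps wandering in the open interval (0, 1).

```python
import random

def lrp_step(p, a, b, c1, c2, rng):
    """One step of the two-action linear reward-penalty scheme.

    p      : current probability of choosing action 1
    a, b   : reward and penalty step sizes in (0, 1)
    c1, c2 : penalty probabilities of actions 1 and 2 (assumed environment)
    """
    action = 1 if rng.random() < p else 2
    penalized = rng.random() < (c1 if action == 1 else c2)
    if action == 1:
        # reward moves p toward 1; penalty shrinks p multiplicatively
        p = (1 - b) * p if penalized else p + a * (1 - p)
    else:
        # penalty to action 2 pushes probability back toward action 1
        p = b + (1 - b) * p if penalized else (1 - a) * p
    return p

rng = random.Random(0)
p = 0.5
a = b = 0.05  # b > 0 makes every state non-absorbing
for _ in range(20000):
    p = lrp_step(p, a, b, c1=0.2, c2=0.8, rng=rng)
print(0.0 < p < 1.0)  # prints True: the chain never locks into a pure strategy
```

Every update maps the open interval (0, 1) into itself, so no sample path can be captured at a boundary; this is exactly the contrast with absorbing reward-penalty schemes, and it is why only distributional statements (mean, variance, sample-path behavior) of {p(k)} are available for the ergodic class.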