1999 | Original Paper | Book Chapter
Universality of Sigmoidal Networks
Author: Hava T. Siegelmann
Published in: Neural Networks and Analog Computation
Publisher: Birkhäuser Boston
Included in: Professional Book Archive
Up to this point we have considered only the saturated linear activation function. In this chapter, we investigate the computational power of networks with sigmoid activation functions, such as those widely used in the neural network literature, e.g., $$ \varrho (x) = \frac{1}{1 + e^{-x}} \quad \text{or} \quad \varrho (x) = \frac{2}{1 + e^{-x}} - 1. \tag{7.1} $$ In Chapter 10 we will see that a large class of activation functions, which includes the sigmoid, yields networks whose computational power is bounded from above by P/poly. In this chapter we obtain a lower bound on the computational power of sigmoidal networks: we prove that there exists a universal architecture of sigmoidal neurons that can compute any recursive function, albeit with exponential slowdown. Our proof techniques apply to a much broader class of “sigmoid-like” activation functions, suggesting that Turing universality is a common property of recurrent neural network models. In conclusion, the computational capabilities of sigmoidal networks lie between those of Turing machines and advice Turing machines.
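As a concrete illustration of the two activation functions in Eq. (7.1), here is a minimal Python sketch (the function names are our own): the second form is simply the logistic sigmoid rescaled from the range (0, 1) to (−1, 1), and it coincides algebraically with tanh(x/2).

```python
import math

def rho_logistic(x):
    """First form of Eq. (7.1): rho(x) = 1 / (1 + e^{-x}), range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def rho_bipolar(x):
    """Second form of Eq. (7.1): rho(x) = 2 / (1 + e^{-x}) - 1.

    Ranges over (-1, 1); algebraically this equals tanh(x / 2).
    """
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

# The bipolar form is a shifted and scaled logistic: 2*rho(x) - 1 = tanh(x/2).
for x in (-3.0, 0.0, 1.5):
    assert abs(rho_bipolar(x) - math.tanh(x / 2)) < 1e-12
    assert abs(rho_bipolar(x) - (2 * rho_logistic(x) - 1)) < 1e-12
```

Both functions are bounded, monotone, and smooth; it is these qualitative "sigmoid-like" properties, rather than the exact formula, that the chapter's universality construction relies on.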