2000 | Original Paper | Book Chapter
On Approximate Learning by Multi-layered Feedforward Circuits
Authors: Bhaskar DasGupta, Barbara Hammer
Published in: Algorithmic Learning Theory
Publisher: Springer Berlin Heidelberg
Included in: Professional Book Archive
We consider the problem of efficient approximate learning by multi-layered feedforward circuits subject to two objective functions.

First, we consider the objective of maximizing the ratio of correctly classified points to the training set size (e.g., see [3], [5]). We show that for single hidden layer threshold circuits with n hidden nodes and varying input dimension, approximating this ratio within a relative error c/n³, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if any sigmoid-like activation function in the hidden layer and ε-separation of the output [19] is considered, or if the semilinear activation function replaces the threshold function.

Next, we consider the objective of minimizing the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within any constant larger than 1 for a multilayered threshold circuit, provided the input biases are zero. Furthermore, even weak approximation of this objective is almost NP-hard.
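To make the two objective functions concrete, the following is a minimal sketch (not from the paper) of a single-hidden-layer threshold circuit together with the two quantities the abstract refers to: the success ratio (correctly classified points over training set size) and the failure ratio (misclassifications of a hypothesis relative to the minimum achievable number of misclassifications, which is assumed to be given here). All function and parameter names are illustrative.

```python
import numpy as np

def threshold_circuit(x, W, b, v, c):
    """Single hidden layer threshold circuit:
    hidden unit i fires iff W[i] . x + b[i] >= 0;
    the output node thresholds v . h + c."""
    h = (W @ np.asarray(x, dtype=float) + b >= 0).astype(int)  # Heaviside hidden layer
    return int(v @ h + c >= 0)                                 # threshold output node

def success_ratio(X, y, W, b, v, c):
    """Ratio of correctly classified points to training set size --
    the quantity whose approximation is shown to be NP-hard."""
    preds = [threshold_circuit(x, W, b, v, c) for x in X]
    return sum(int(p == t) for p, t in zip(preds, y)) / len(y)

def failure_ratio(X, y, W, b, v, c, opt_failures):
    """Number of misclassified points divided by the minimum achievable
    number of misclassifications (opt_failures, assumed known here)."""
    preds = [threshold_circuit(x, W, b, v, c) for x in X]
    failures = sum(int(p != t) for p, t in zip(preds, y))
    return failures / max(opt_failures, 1)
```

For example, with one hidden node computing `x1 + x2 >= 2` and an output node passing it through, the circuit realizes Boolean AND and attains success ratio 1 on the corresponding four training points. The hardness results concern finding weights that (approximately) optimize these ratios, not evaluating them.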