2000 | Original Paper | Book Chapter

On Approximate Learning by Multi-layered Feedforward Circuits

Authors: Bhaskar DasGupta, Barbara Hammer

Published in: Algorithmic Learning Theory

Publisher: Springer Berlin Heidelberg

We consider the problem of efficient approximate learning by multi-layered feedforward circuits subject to two objective functions.

First, we consider the objective of maximizing the ratio of correctly classified points to the size of the training set (e.g., see [3], [5]). We show that, for single-hidden-layer threshold circuits with n hidden nodes and varying input dimension, approximating this ratio within a relative error c/n³, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if any sigmoid-like activation function is used in the hidden layer and ε-separation of the output [19] is required, or if the semilinear activation function is substituted for the threshold function.

Next, we consider the objective of minimizing the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within any constant larger than 1 for multilayered threshold circuits, provided the input biases are zero. Furthermore, even weak approximation of this objective is almost NP-hard.
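To make the two objectives concrete, a standard formulation reads as follows (the notation is ours; the failure-ratio definition follows the usual reading of [2] and may differ in detail from the paper's). Given a training set (x_1, y_1), …, (x_m, y_m) and a circuit h drawn from the fixed architecture under consideration:

\[ \text{success ratio}(h) \;=\; \frac{|\{\, i : h(x_i) = y_i \,\}|}{m} \qquad \text{(to be maximized)}, \]

\[ \text{failure ratio}(h) \;=\; \frac{|\{\, i : h(x_i) \neq y_i \,\}|}{\min_{h'} |\{\, i : h'(x_i) \neq y_i \,\}|} \qquad \text{(to be minimized)}, \]

where h' ranges over circuits of the same architecture and the denominator is assumed nonzero. The ε-separation condition of [19] is, roughly, the additional requirement that the circuit's output be bounded away from the decision threshold by at least ε on every training point.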
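As an illustration of the objects involved (a sketch only; the parameter names, the NumPy encoding, and the toy data are ours, not the paper's), a single-hidden-layer threshold circuit with n hidden nodes and the success ratio it is scored by can be written as:

import numpy as np

def threshold(z):
    # Heaviside threshold activation: 1 where z >= 0, else 0.
    return (z >= 0).astype(int)

def circuit(X, W_hid, b_hid, w_out, b_out):
    # One layer of n hidden threshold gates, then a single threshold output gate.
    H = threshold(X @ W_hid.T + b_hid)   # hidden activations, shape (m, n)
    return threshold(H @ w_out + b_out)  # predicted labels, shape (m,)

def success_ratio(X, y, params):
    # Fraction of the m training points classified correctly -- the quantity
    # whose approximation the first set of results shows to be NP-hard.
    return float(np.mean(circuit(X, *params) == y))

# Toy usage: n = 2 hidden nodes, input dimension d = 3, m = 5 examples.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((5, 3)), rng.integers(0, 2, size=5)
params = (rng.standard_normal((2, 3)), rng.standard_normal(2),
          rng.standard_normal(2), rng.standard_normal())
print(success_ratio(X, y, params))

Replacing threshold in the hidden layer with a sigmoid-like or semilinear function yields the activation variants covered by the two-hidden-node results.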

Metadata

Title: On Approximate Learning by Multi-layered Feedforward Circuits
Authors: Bhaskar DasGupta, Barbara Hammer
Copyright Year: 2000
Publisher: Springer Berlin Heidelberg
DOI: https://doi.org/10.1007/3-540-40992-0_20
