
2002 | Original Paper | Book Chapter

Scaling Large Learning Problems with Hard Parallel Mixtures

Authors: Ronan Collobert, Yoshua Bengio, Samy Bengio

Published in: Pattern Recognition with Support Vector Machines

Publisher: Springer Berlin Heidelberg


A challenge for statistical learning is to deal with large data sets, e.g. in data mining. Popular learning algorithms such as Support Vector Machines have training time that is at least quadratic in the number of examples, making them impractical for problems with a million examples. We propose a "hard parallelizable mixture" methodology which yields significantly reduced training time through modularization and parallelization: the training data is iteratively partitioned by a "gater" model in such a way that it becomes easy to learn an "expert" model separately in each region of the partition. A probabilistic extension and the use of a set of generative models to represent the gater allow all pieces of the model to be trained locally. For SVM experts, training time appears empirically to grow linearly with the number of examples, while generalization performance can even be enhanced. For the probabilistic version, the iterative algorithm provably decreases a cost function that is an upper bound on the negative log-likelihood.
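To make the gater/expert loop described above concrete, here is a minimal sketch of a hard mixture of SVM experts. It is not the chapter's exact algorithm: it assumes scikit-learn `SVC` experts, a logistic-regression stand-in for the gater, and an illustrative reassignment rule that sends each example to the expert giving it the largest correct-class margin; the paper's gater model and update rules may differ.

```python
# Hedged sketch of iterative hard partitioning with local SVM experts.
# Assumptions (not from the source): scikit-learn models, binary labels in {0, 1},
# margin-based hard reassignment in place of the paper's gater-driven rule.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

def train_hard_mixture(X, y, n_experts=4, n_iters=5, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    # Start from a random hard partition of the training set.
    assign = rng.integers(0, n_experts, size=n)
    experts = [None] * n_experts
    gater = None
    signs = np.where(y > 0, 1.0, -1.0)  # +1 for class 1, -1 for class 0
    for _ in range(n_iters):
        # 1) Train each expert only on the examples currently assigned to it
        #    (these fits are independent, hence parallelizable).
        for k in range(n_experts):
            idx = np.flatnonzero(assign == k)
            if len(np.unique(y[idx])) < 2:  # degenerate subset: keep previous expert
                continue
            experts[k] = SVC(kernel="rbf").fit(X[idx], y[idx])
        # 2) Train a simple gater to predict the current expert assignment
        #    (a stand-in for the chapter's gater model).
        gater = LogisticRegression(max_iter=1000).fit(X, assign)
        # 3) Hard reassignment: each example goes to the expert that classifies
        #    it correctly with the largest margin (illustrative rule, an assumption).
        scores = np.stack(
            [e.decision_function(X) * signs if e is not None
             else np.full(n, -np.inf) for e in experts],
            axis=1)
        assign = scores.argmax(axis=1)
    return experts, gater
```

At prediction time one could route a test point through the gater and query only the selected expert, which is what makes the hard (as opposed to soft) mixture cheap at both training and test time.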

Metadata
Title
Scaling Large Learning Problems with Hard Parallel Mixtures
Authors
Ronan Collobert
Yoshua Bengio
Samy Bengio
Copyright Year
2002
Publisher
Springer Berlin Heidelberg
DOI
https://doi.org/10.1007/3-540-45665-1_2
