2010 | Original Paper | Book Chapter
Complexity Bounds for Batch Active Learning in Classification
Authors: Philippe Rolet, Olivier Teytaud
Published in: Machine Learning and Knowledge Discovery in Databases
Publisher: Springer Berlin Heidelberg
Active learning [1] is a branch of Machine Learning in which the learning algorithm, instead of being directly provided with pairs of problem instances and their solutions (their labels), is allowed to choose, from a set of unlabeled data, which instances to query. It is suited to settings where labeling instances is costly. This paper analyzes the speed-up of batch (parallel) active learning compared to sequential active learning (where instances are chosen one by one): how much faster can an algorithm become if it can query λ instances at once?
There are two main contributions: proving lower and upper bounds on the possible gain, and illustrating them experimentally on standard active learning algorithms. Roughly speaking, the speed-up is asymptotically logarithmic in the batch size λ (i.e., as λ → ∞). However, for some classes of functions with finite VC-dimension V, a linear speed-up can be achieved up to a batch size of V. Practically speaking, this means that parallelizing queries on an expensive-to-label problem suited to active learning is highly beneficial up to V simultaneous queries, and less beneficial (though still an improvement) beyond that.