2007 | OriginalPaper | Chapter
Accelerating Kernel Perceptron Learning
Authors: Daniel García, Ana González, José R. Dorronsoro
Published in: Artificial Neural Networks – ICANN 2007
Publisher: Springer Berlin Heidelberg
Recently it has been shown that appropriate perceptron training methods, such as the Schlesinger–Kozinec (SK) algorithm, can provide maximal margin hyperplanes with training costs O(N × T), with N denoting sample size and T the number of training iterations. In this work we shall relate SK training to the classical Rosenblatt rule and show that, when the hyperplane vector is written in dual form, the support vector (SV) coefficients determine how frequently each SV appears during training; in particular, large-coefficient SVs penalize training costs. In this light we shall explore a training acceleration procedure in which large-coefficient and, hence, large-cost SVs are removed from training, and which further allows a stable shrinking of large samples. As we shall see, this results in much faster training while not penalizing test classification accuracy.