
2003 | Original Paper | Chapter

Smooth ε-Insensitive Regression by Loss Symmetrization

Authors: Ofer Dekel, Shai Shalev-Shwartz, Yoram Singer

Published in: Learning Theory and Kernel Machines

Publisher: Springer Berlin Heidelberg


Abstract

We describe a framework for solving regression problems by reduction to classification. Our reduction is based on symmetrization of margin-based loss functions commonly used in boosting algorithms, namely, the logistic loss and the exponential loss. Our construction yields a smooth version of the ε-insensitive hinge loss that is used in support vector regression. A byproduct of this construction is a new simple form of regularization for boosting-based classification and regression algorithms. We present two parametric families of batch learning algorithms for minimizing these losses. The first family employs a log-additive update and is based on recent boosting algorithms, while the second family uses a new form of additive update. We also describe and analyze online gradient descent (GD) and exponentiated gradient (EG) algorithms for the ε-insensitive logistic loss. Our regression framework also has implications for classification algorithms, namely, a new additive batch algorithm for the log-loss and exp-loss used in boosting.
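The symmetrization the abstract describes can be sketched concretely. Assuming the symmetrized logistic loss takes the form log(1 + e^(δ−ε)) + log(1 + e^(−δ−ε)) for the discrepancy δ = ŷ − y (the natural form implied by applying the logistic loss to both signed shifts of δ; the paper's exact parametrization may differ), the Python below computes the loss and its gradient and runs a plain online gradient-descent pass on a toy linear model. The function names, ε value, learning rate, and synthetic data are illustrative and not taken from the paper, and the GD loop is a generic one rather than the specific update the paper analyzes.

```python
import numpy as np

EPS = 0.1  # insensitivity width (illustrative value, not from the paper)

def sym_log_loss(y_pred, y_true, eps=EPS):
    """Symmetrized (epsilon-insensitive) logistic loss -- a sketch.

    Applies the logistic loss log(1 + e^z) to both signed shifts of the
    discrepancy delta = y_pred - y_true, so the loss is near zero inside
    the epsilon-tube and grows roughly linearly outside it: a smooth
    stand-in for the epsilon-insensitive hinge used in SVR.
    """
    delta = y_pred - y_true
    return np.log1p(np.exp(delta - eps)) + np.log1p(np.exp(-delta - eps))

def sym_log_loss_grad(y_pred, y_true, eps=EPS):
    """d(loss)/d(y_pred): the difference of two shifted sigmoids."""
    delta = y_pred - y_true
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sigmoid(delta - eps) - sigmoid(-delta - eps)

# Generic online gradient descent on a linear model f(x) = w @ x,
# stepping against the per-example loss gradient on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.05 * rng.normal(size=200)

w, lr = np.zeros(3), 0.1
for x_i, y_i in zip(X, y):
    w -= lr * sym_log_loss_grad(x_i @ w, y_i) * x_i
```

Note the gradient: as |δ| grows, one sigmoid saturates toward 1 and the other toward 0, so its magnitude stays bounded by 1 while remaining smooth everywhere, which is plausibly what makes GD and EG updates on this loss easier to analyze than on the non-differentiable hinge.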

Metadata
Title: Smooth ε-Insensitive Regression by Loss Symmetrization
Authors: Ofer Dekel, Shai Shalev-Shwartz, Yoram Singer
Copyright Year: 2003
Publisher: Springer Berlin Heidelberg
DOI: https://doi.org/10.1007/978-3-540-45167-9_32
