2008 | OriginalPaper | Chapter
Extension II: A Regression Model from M4
Published in: Machine Learning
Publisher: Springer Berlin Heidelberg
In this chapter, we present a novel regression model directly motivated by the Maxi-Min Margin Machine (M4) model described in Chapter 4. Regression is one of the central problems in supervised learning. The objective is to learn a model from a given dataset, (x_1, y_1), ..., (x_N, y_N), and then, based on the learned model, to make accurate predictions of y for future values of x. Support Vector Regression (SVR), a successful method for this problem, offers good generalization ability [20, 17, 8, 6]. The standard SVR adopts the ℓ2-norm to control the functional complexity and chooses an ε-insensitive loss function with a fixed tube (margin) to measure the empirical risk. By introducing the ℓ2-norm, the optimization problem in SVR can be transformed into a quadratic programming problem. On the other hand, the ε-tube can tolerate noise in the data, and fixing the tube has the advantage of simplicity. These settings are global in nature and effective in common applications, but they lack the flexibility to capture local trends in certain applications. For example, in stock markets the data are highly volatile and the variance of the noise varies over time. In such cases, a fixed tube can neither capture the local trend of the data nor tolerate the noise adaptively.