In this chapter, we present a novel regression model directly motivated by the Maxi-Min Margin Machine (M4) model described in Chapter 4. Regression is one of the fundamental problems in supervised learning: the objective is to learn a model from a given dataset $\{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$ and then, based on the learned model, to make accurate predictions of $y$ for future values of $\mathbf{x}$. Support Vector Regression (SVR), a successful method for this problem, enjoys good generalization ability [ ]. The standard SVR adopts the $\ell_2$-norm to control the functional complexity and chooses an $\epsilon$-insensitive loss function with a fixed tube (margin) to measure the empirical risk. By introducing the $\ell_2$-norm, the optimization problem in SVR can be transformed into a quadratic programming problem. On the other hand, the $\epsilon$-tube is able to tolerate noise in the data, and fixing the tube enjoys the advantage of simplicity. These settings are made in a global fashion: they are effective in common applications, but they lack the ability and the flexibility to capture the local trend of the data. For example, in stock markets, the data are highly volatile and the variance of the noise varies over time. In such cases, a fixed tube can neither capture the local trend of the data nor tolerate the noise adaptively.
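For concreteness, the global settings described above appear explicitly in the standard $\epsilon$-SVR primal problem, sketched here for a linear model $f(\mathbf{x}) = \mathbf{w}^{\top}\mathbf{x} + b$ (the symbols $C$ for the regularization trade-off and $\xi_i, \xi_i^{*}$ for the slack variables follow common convention and are assumptions of this sketch, not notation from the text):

\begin{align*}
\min_{\mathbf{w},\, b,\, \xi_i,\, \xi_i^{*}} \quad & \frac{1}{2}\|\mathbf{w}\|_2^{2} + C \sum_{i=1}^{N} \left( \xi_i + \xi_i^{*} \right) \\
\text{s.t.} \quad & y_i - \left( \mathbf{w}^{\top}\mathbf{x}_i + b \right) \le \epsilon + \xi_i, \\
& \left( \mathbf{w}^{\top}\mathbf{x}_i + b \right) - y_i \le \epsilon + \xi_i^{*}, \\
& \xi_i \ge 0, \quad \xi_i^{*} \ge 0, \qquad i = 1, \dots, N.
\end{align*}

Note that the single scalar $\epsilon$ fixes the tube width uniformly over all data points: points falling inside the tube incur no loss regardless of where they lie in the input space. This is exactly the global, inflexible setting whose limitations motivate the model developed in this chapter.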