Learning from dependent observations

https://doi.org/10.1016/j.jmva.2008.04.001

Abstract

In most papers establishing consistency for learning algorithms, it is assumed that the observations used for training are realizations of an i.i.d. process. In this paper we go far beyond this classical framework by showing that support vector machines (SVMs) require only that the data-generating process satisfies a certain law of large numbers. We then consider the learnability of SVMs for α-mixing (not necessarily stationary) processes for both classification and regression, where for the latter we explicitly allow unbounded noise.
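For the reader's reference, the α-mixing (strong mixing) coefficients referred to above are not defined in this abstract; the following is the standard definition, not a quotation from the paper. For a process $(Z_i)_{i \ge 1}$ on a probability space $(\Omega, \mathcal{A}, P)$, set

$$\alpha(n) := \sup_{k \ge 1} \, \sup \bigl\{ \, |P(A \cap B) - P(A)P(B)| : A \in \sigma(Z_1, \dots, Z_k), \; B \in \sigma(Z_{k+n}, Z_{k+n+1}, \dots) \, \bigr\},$$

and call the process α-mixing if $\alpha(n) \to 0$ as $n \to \infty$. Note that, in line with the abstract, this definition does not require stationarity; i.i.d. processes are the special case $\alpha(n) = 0$ for all $n \ge 1$.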

AMS subject classifications

primary
68T05
secondary
62G08
62H30
62M45
68Q32

Keywords

Support vector machine
Consistency
Non-stationary mixing process
Classification
Regression
