Regular Article
Fat-Shattering and the Learnability of Real-Valued Functions

https://doi.org/10.1006/jcss.1996.0033

Abstract

We consider the problem of learning real-valued functions from random examples when the function values are corrupted with noise. Under mild conditions on independent observation noise, we characterize the learnability of a real-valued function class in terms of a generalization of the Vapnik–Chervonenkis dimension, the fat-shattering function, introduced by Kearns and Schapire. We show that, given some restrictions on the noise, a function class is learnable in our model if and only if its fat-shattering function is finite. With different (also quite mild) restrictions, satisfied for example by Gaussian noise, we show that a function class is learnable from polynomially many examples if and only if its fat-shattering function grows polynomially. We prove analogous results in an agnostic setting, where there is no assumption of an underlying function class.
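For reference, the fat-shattering function mentioned above is not restated in the abstract; the following is a sketch of the standard Kearns–Schapire formulation, with notation assumed rather than taken from this paper. A set $\{x_1,\dots,x_d\}$ is said to be $\gamma$-shattered by a class $F$ of real-valued functions if there exist witnesses $r_1,\dots,r_d \in \mathbb{R}$ such that for every labelling $b \in \{0,1\}^d$ some $f_b \in F$ satisfies $f_b(x_i) \ge r_i + \gamma$ when $b_i = 1$ and $f_b(x_i) \le r_i - \gamma$ when $b_i = 0$. The fat-shattering function is then
$$\mathrm{fat}_F(\gamma) \;=\; \max\bigl\{\, d : \text{some } \{x_1,\dots,x_d\} \text{ is } \gamma\text{-shattered by } F \,\bigr\},$$
so the results above concern whether $\mathrm{fat}_F(\gamma)$ is finite (respectively, polynomially bounded in $1/\gamma$) for every scale $\gamma > 0$.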
