Optimal Rates for Regularization of Statistical Inverse Learning Problems
Communicated by Tomaso Poggio.
This research was supported by the DFG via Research Unit 1735 "Structural Inference in Statistics".
We consider a statistical inverse learning (also called inverse regression) problem, where we observe the image of a function \(f\) through a linear operator \(A\) at i.i.d. random design points \(X_i\), superposed with additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of \(Af\)) and the inverse (estimation of \(f\)) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations \(n\) grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent of \(n\) but also in the explicit dependence of the constant factor on the noise variance and on the radius of the source condition set.
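
To make the model concrete: the observations are \(Y_i = (Af)(X_i) + \varepsilon_i\) with the \(X_i\) drawn i.i.d. from an unknown design distribution. The sketch below simulates this setup and recovers \(f\) via Tikhonov regularization, one member of the spectral regularization family analyzed in the paper; the cosine basis, the diagonal choice of \(A\), and all parameter values are illustrative assumptions, not the paper's construction.

# Minimal simulation of the model Y_i = (Af)(X_i) + eps_i, with f recovered
# by Tikhonov (ridge) regularization. All concrete choices here -- the cosine
# basis, the diagonal operator A, the parameters -- are assumptions made for
# illustration only.
import numpy as np

rng = np.random.default_rng(0)

m = 30                                   # basis size (assumed)
k = np.arange(m)

def basis(x):
    # Cosine basis evaluated at design points x; shape (len(x), m).
    return np.cos(np.pi * np.outer(x, k))

# True coefficients of f, decaying to mimic a source condition (assumed).
theta_true = 1.0 / (1.0 + k) ** 2

# A acts diagonally in this basis with decaying singular values: a simple
# stand-in for a compact operator that makes the problem ill-posed.
sv = 1.0 / (1.0 + k)

# Observe (Af)(X_i) + noise at n i.i.d. uniform design points.
n, sigma = 500, 0.05
X = rng.uniform(0.0, 1.0, size=n)
Phi = basis(X)
Y = Phi @ (sv * theta_true) + sigma * rng.standard_normal(n)

# Tikhonov estimate for the inverse problem: minimize the empirical squared
# loss of (Af)(X_i) against Y_i plus lam * ||f||^2, in coefficient form.
lam = 1e-3                               # regularization parameter (assumed)
B = Phi * sv                             # design columns scaled by singular values
theta_hat = np.linalg.solve(B.T @ B / n + lam * np.eye(m), B.T @ Y / n)

print("inverse error ||f_hat - f||  :", np.linalg.norm(theta_hat - theta_true))
print("direct error  ||Af_hat - Af||:", np.linalg.norm(sv * (theta_hat - theta_true)))

Because \(A\) damps high-frequency components, the direct error is typically much smaller than the inverse error, which is exactly the gap between the two convergence regimes the paper quantifies.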