
2000 | Book | 2nd Edition

The Nature of Statistical Learning Theory

Author: Vladimir N. Vapnik

Publisher: Springer New York

Book series: Information Science and Statistics


About this Book

The aim of this book is to discuss the fundamental ideas that lie behind the statistical theory of learning and generalization. It considers learning as a general problem of function estimation based on empirical data. Omitting proofs and technical details, the author concentrates on discussing the main results of learning theory and their connections to fundamental problems in statistics. These include:

* the setting of learning problems based on the model of minimizing the risk functional from empirical data;
* a comprehensive analysis of the empirical risk minimization principle, including necessary and sufficient conditions for its consistency;
* non-asymptotic bounds for the risk achieved using the empirical risk minimization principle;
* principles for controlling the generalization ability of learning machines using small sample sizes, based on these bounds;
* the Support Vector methods that control the generalization ability when estimating functions from small sample sizes.

The second edition of the book contains three new chapters devoted to further development of learning theory and SVM techniques. These include:

* the theory of direct methods of learning based on solving multidimensional integral equations for density, conditional probability, and conditional density estimation;
* a new inductive principle of learning.

Written in a readable and concise style, the book is intended for statisticians, mathematicians, physicists, and computer scientists. Vladimir N. Vapnik is a Technology Leader at AT&T Labs-Research and a Professor at London University. He is one of the founders of statistical learning theory.

Table of Contents

Frontmatter
Introduction: Four Periods in the Research of the Learning Problem
Abstract
In the history of research on the learning problem one can identify four periods, characterized by four bright events:
(i) constructing the first learning machines,
(ii) constructing the fundamentals of the theory,
(iii) constructing neural networks,
(iv) constructing the alternatives to neural networks.
Vladimir N. Vapnik
Chapter 1. Setting of the Learning Problem
Abstract
In this book we consider the learning problem as a problem of finding a desired dependence using a limited number of observations.
Vladimir N. Vapnik
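For orientation, the risk-minimization setting that this chapter formalizes can be sketched as follows (a standard formulation in the notation of statistical learning theory, paraphrased rather than quoted): given a set of functions f(x, α), α ∈ Λ, and an unknown distribution P(x, y), one seeks the function that minimizes the risk functional

    R(\alpha) = \int L\bigl(y, f(x, \alpha)\bigr)\, dP(x, y),

where L is a loss function, using only an i.i.d. training sample (x_1, y_1), ..., (x_ℓ, y_ℓ) drawn from P(x, y).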
Chapter 2. Consistency of Learning Processes
Abstract
The goal of this part of the theory is to describe the conceptual model for learning processes that are based on the empirical risk minimization inductive principle. This part of the theory has to explain when a learning machine that minimizes empirical risk can achieve a small value of actual risk (can generalize) and when it cannot. In other words, the goal of this part is to describe necessary and sufficient conditions for the consistency of learning processes that minimize the empirical risk.
Vladimir N. Vapnik
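As a brief sketch in standard notation (not quoted from the chapter): the empirical risk minimization (ERM) principle replaces the unknown risk with its empirical estimate

    R_{emp}(\alpha) = \frac{1}{\ell} \sum_{i=1}^{\ell} L\bigl(y_i, f(x_i, \alpha)\bigr),

and ERM is called consistent if both the actual risk R(α_ℓ) and the empirical risk R_emp(α_ℓ) of the functions it selects converge in probability to inf_α R(α) as the sample size ℓ grows.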
Chapter 3. Bounds on the Rate of Convergence of Learning Processes
Abstract
In this chapter we consider bounds on the rate of uniform convergence. We concentrate on upper bounds; lower bounds exist as well (Vapnik and Chervonenkis, 1974), but they are not as important for controlling learning processes as the upper bounds are.
Vladimir N. Vapnik
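A commonly quoted simplified form of this kind of bound (the chapter's own statements are more refined; this version, for bounded loss and a function set of VC dimension h, is an outside simplification): with probability at least 1 − η, simultaneously for all functions in the set,

    R(\alpha) \le R_{emp}(\alpha) + \sqrt{\frac{h\bigl(\ln(2\ell/h) + 1\bigr) - \ln(\eta/4)}{\ell}}.

The second term, the confidence interval, shrinks as the ratio ℓ/h of sample size to VC dimension grows; this ratio is what decides whether a sample counts as small.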
Chapter 4. Controlling the Generalization Ability of Learning Processes
Abstract
The theory for controlling the generalization ability of learning machines is devoted to constructing an inductive principle for minimizing the risk functional using a small sample of training instances.
Vladimir N. Vapnik
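The structural risk minimization (SRM) principle referred to here can be summarized as follows (a standard description, not a quotation from the chapter): one fixes a nested structure of function sets

    S_1 \subset S_2 \subset \cdots \subset S_n \subset \cdots,

whose VC dimensions satisfy h_1 ≤ h_2 ≤ ... ≤ h_n ≤ ..., and for a given sample chooses the element of the structure, and the function within it, for which the sum of the empirical risk and the confidence interval is smallest.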
Chapter 5. Methods of Pattern Recognition
Abstract
To implement the SRM inductive principle in learning algorithms one has to minimize the risk in a given set of functions by controlling two factors: the value of the empirical risk and the value of the confidence interval.
Vladimir N. Vapnik
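As a concrete illustration of the two factors named in this abstract (a minimal sketch, not code from the book; scikit-learn and the parameter values below are outside assumptions), a soft-margin SVM exposes the trade-off directly: its parameter C weighs the empirical risk against the margin term that governs the confidence interval.

    # Minimal sketch (not from the book): a soft-margin SVM classifier
    # in scikit-learn. The kernel and the C values are illustrative
    # assumptions; small C favors a wide margin at the cost of a
    # larger empirical risk, large C does the opposite.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for C in (0.01, 1.0, 100.0):
        clf = SVC(kernel="rbf", C=C, gamma="scale").fit(X_tr, y_tr)
        print(f"C={C}: held-out accuracy {clf.score(X_te, y_te):.3f}")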
Chapter 6. Methods of Function Estimation
Abstract
In this chapter we generalize the results obtained for estimating indicator functions (for the pattern recognition problem) to the problem of estimating real-valued functions (regressions). We introduce a new type of loss function (the so-called ε-insensitive loss function) that makes our estimates not only robust but also sparse. As we will see in this and the next chapter, the sparsity of the solution is very important for estimating dependencies in high-dimensional spaces from a large number of observations.
Vladimir N. Vapnik
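The ε-insensitive loss mentioned here has the following form (the standard definition from support vector regression, which is what the chapter develops):

    L_\varepsilon\bigl(y, f(x)\bigr) = \max\bigl(0,\ |y - f(x)| - \varepsilon\bigr),

so errors smaller than ε incur no loss at all; points inside this ε-tube contribute nothing to the solution, which is what makes the resulting support vector expansion sparse.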
Chapter 7. Direct Methods in Statistical Learning Theory
Abstract
In this chapter we introduce a new approach to the main problems of statistical learning theory: pattern recognition, regression estimation, and density estimation.
Vladimir N. Vapnik
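In the one-dimensional density estimation case, the direct approach amounts to solving an integral equation of the following form (a standard formulation; the multidimensional and conditional cases are treated analogously):

    \int_{-\infty}^{x} p(t)\, dt = F(x),

where the cumulative distribution function F(x) is known only approximately, through the empirical distribution function of the sample, so the equation is ill-posed and must be solved with regularization techniques.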
Chapter 8. The Vicinal Risk Minimization Principle and the SVMs
Abstract
In this chapter we introduce a new principle for minimizing the expected risk, called the vicinal risk minimization (VRM) principle. We use this principle for solving our main problems: pattern recognition, regression estimation, and density estimation.
Vladimir N. Vapnik
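A hedged sketch of the idea, paraphrasing the VRM literature rather than the chapter itself: instead of concentrating all empirical mass at the training points, as ERM does, one minimizes a vicinal risk of the form

    R_{vic}(f) = \frac{1}{\ell} \sum_{i=1}^{\ell} \int L\bigl(f(x), y_i\bigr)\, dP_{x_i}(x),

where P_{x_i} is a vicinity distribution around the point x_i; ERM is recovered in the limit where each vicinity collapses to a point mass at x_i.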
Chapter 9. Conclusion: What Is Important in Learning Theory?
Abstract
At the beginning of this book we postulated (without any discussion) that learning is a problem of function estimation on the basis of empirical data. To solve this problem we used a classical inductive principle, the ERM principle. Later, however, we introduced a new principle, the SRM principle. Nevertheless, the general understanding of the problem remains based on the statistics of large samples: The goal is to derive the rule that possesses the lowest risk. The goal of obtaining the "lowest risk" reflects the philosophy of large sample size statistics: A rule with low risk is good because if we use this rule for a large test set, then with high probability the mean of the losses will be small.
Vladimir N. Vapnik
Backmatter
Metadata
Title
The Nature of Statistical Learning Theory
Author
Vladimir N. Vapnik
Copyright Year
2000
Publisher
Springer New York
Electronic ISBN
978-1-4757-3264-1
Print ISBN
978-1-4419-3160-3
DOI
https://doi.org/10.1007/978-1-4757-3264-1