
1996 | Book

Explanation-Based Neural Network Learning

A Lifelong Learning Approach


About this book

Lifelong learning addresses situations in which a learner faces a series of different learning tasks providing the opportunity for synergy among them. Explanation-based neural network learning (EBNN) is a machine learning algorithm that transfers knowledge across multiple learning tasks. When faced with a new learning task, EBNN exploits domain knowledge accumulated in previous learning tasks to guide generalization in the new one. As a result, EBNN generalizes more accurately from less data than comparable methods. Explanation-Based Neural Network Learning: A Lifelong Learning Approach describes the basic EBNN paradigm and investigates it in the context of supervised learning, reinforcement learning, robotics, and chess.
"The paradigm of lifelong learning - using earlier learned knowledge to improve subsequent learning - is a promising direction for a new generation of machine learning algorithms. Given the need for more accurate learning methods, it is difficult to imagine a future for machine learning that does not include this paradigm."
From the Foreword by Tom M. Mitchell.

Table of Contents

Frontmatter
1. Introduction
Abstract
One of the key features of human learning is the ability to transfer knowledge acquired from past experiences to new tasks. When faced with a new thing to learn, humans often learn and generalize successfully from a remarkably small number of training examples. Sometimes a single training example suffices for reliable generalization to other, similar situations. For example, a single view of a person often suffices to recognize this person reliably even in different poses, from different viewpoints, with different clothes on, and so on. Given the complexity of the real world, the ability of humans to generalize from scarce data is deeply intriguing to psychologists as well as to machine learning researchers, who are interested in making computers learn.
Sebastian Thrun
2. Explanation-Based Neural Network Learning
Abstract
This chapter introduces the major learning approach studied in this book: the explanation-based neural network learning algorithm (EBNN). EBNN approaches the meta-level learning problem by learning a theory of the domain. This domain theory characterizes, for example, the relevance of individual features, their cross-dependencies, or certain invariant properties of the domain that apply to all learning tasks within the domain. When the learner has a model of such regularities, there is an opportunity to generalize more accurately or, alternatively, learn from less training data; without knowledge of these regularities, the learner has to acquire them from scratch, which necessarily requires more training data. EBNN transfers previously learned knowledge by explaining and analyzing training examples in terms of the domain theory. The result of this analysis is a set of domain-specific shape constraints for the function to be learned at the base-level. These constraints guide the base-level learning of new functions in a knowledgeable, domain-specific way.
Sebastian Thrun
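The shape-constraint idea in the abstract above can be illustrated with a deliberately tiny sketch. This is not the book's EBNN implementation: here the base-level learner is a one-parameter linear model, the "domain theory" is reduced to a single (slightly inaccurate) slope estimate, and all names and numbers are illustrative assumptions. The point is only that fitting slopes alongside values lets prior knowledge shape generalization from scarce data.

```python
import numpy as np

# Illustrative sketch, not the book's algorithm: the learner fits
# y = w*x + b, so its slope dy/dx is simply w. Besides matching the
# two training values, the learner is pulled toward a slope estimate
# supplied by a (hypothetical, imperfect) domain theory.
xs = np.array([0.0, 1.0])          # scarce training inputs
ts = 2.0 * xs + 1.0                # true target values, f(x) = 2x + 1
theory_slope = 2.1                 # domain theory's imperfect slope estimate

w, b = 0.0, 0.0
lr, mu = 0.1, 0.5                  # learning rate, slope-constraint weight
for _ in range(500):
    preds = w * xs + b
    gw = np.mean(2 * (preds - ts) * xs)   # value-error gradient w.r.t. w
    gb = np.mean(2 * (preds - ts))        # value-error gradient w.r.t. b
    gw += mu * 2 * (w - theory_slope)     # pull model slope toward theory slope
    w -= lr * gw
    b -= lr * gb
# w settles between the data-implied slope (2.0) and the theory slope (2.1)
```

The converged solution is a compromise: the weight `mu` trades off trust in the scarce data against trust in the transferred slope knowledge.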
3. The Invariance Approach
Abstract
This chapter considers a particular family of lifelong learning problems. The lifelong supervised learning framework applies the idea of lifelong learning in a concrete (and restrictive) context: the learner is assumed to face supervised learning problems of the same type and, moreover, these learning problems must be related by some domain-specific properties (cast as invariances) that are unknown at the beginning of lifelong learning but can be learned. Central to the learning approach taken here is the domain theory. It consists of a single network, called the invariance network, which represents invariances that hold for all target functions. EBNN analyzes training examples using the invariance network in order to guide generalization when learning a new function. As will be illustrated, knowing the invariances of the domain can be most instrumental for successful learning when training data is scarce.
Sebastian Thrun
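One way to read the invariance idea: a function learned on earlier tasks judges how alike two examples are under the domain's invariances, and a new task is then solved by comparing each query against its few labeled examples. The sketch below is an illustrative assumption, not the book's method; the trained invariance network is replaced by a hard-coded similarity function that "knows" (as if from prior tasks) that feature 1 is irrelevant noise. All names and data are hypothetical.

```python
import numpy as np

# Stand-in for a learned invariance network: similarity that ignores
# feature 1, as though prior tasks revealed it to be irrelevant.
def invariance(x, xp):
    return float(np.exp(-abs(x[0] - xp[0])))   # similarity in (0, 1]

# A NEW task with only two labeled examples (scarce data).
support = [(np.array([0.0, 0.7]), 0),
           (np.array([1.0, 0.2]), 1)]

def classify(x):
    # Label of the support example the invariance function deems most similar.
    sims = [(invariance(x, xp), y) for xp, y in support]
    return max(sims)[1]
```

Even with one example per class, the transferred invariance knowledge (ignore feature 1) lets the learner generalize correctly to queries whose noisy feature would otherwise mislead a plain distance comparison.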
4. Reinforcement Learning
Abstract
Up to this point, EBNN has exclusively been studied in the context of supervised learning. In this and the following chapter we present the lifelong control learning framework and describe how EBNN can be used to transfer knowledge across control learning tasks.
Sebastian Thrun
5. Empirical Results
Abstract
Armed with an algorithm for learning from delayed reward, we are now ready to apply EBNN in the context of lifelong control learning. This chapter deals with the application of Q-Learning and EBNN in the context of robot control and chess. The key questions underlying this research are:
1. Can EBNN improve learning control when an accurate domain theory is available?
2. How does EBNN perform if the domain theory is poor? Will the analytical component of EBNN hurt performance if slopes are misleading? How effective is LOB*?
3. How applicable are EBNN and reinforcement learning in the context of control learning problems that involve non-linear target functions and high-dimensional and noisy feature spaces?
Sebastian Thrun
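For readers unfamiliar with the Q-Learning component referenced above, here is a minimal tabular sketch of learning from delayed reward. The environment is a hypothetical toy corridor, not one of the book's robot or chess domains, and EBNN's analytical component (extracting slopes from a learned action model) is omitted entirely.

```python
import numpy as np

# Hedged sketch: plain tabular Q-Learning on a toy 5-state corridor.
# Reward arrives only at the right end, so value must propagate
# backward through the Q-table (the "delayed reward" problem).
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2     # step size, discount, exploration rate
rng = np.random.default_rng(0)

def step(s, a):
    """Deterministic corridor: reward 1 only on reaching the right end."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0), s2 == n_states - 1

for _ in range(500):
    s = int(rng.integers(n_states - 1))   # random non-terminal start state
    done = False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        target = r if done else r + gamma * np.max(Q[s2])
        Q[s, a] += alpha * (target - Q[s, a])   # standard Q-Learning update
        s = s2

greedy = [int(np.argmax(Q[s])) for s in range(n_states - 1)]
# greedy policy moves right everywhere; Q[0, 1] approaches 0.9**3
```

EBNN's contribution, as studied in this chapter, would be to supplement such value updates with analytically derived slope information from a learned domain theory, so that far fewer environment interactions are needed.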
6. Discussion
Abstract
This book has investigated learning in a lifelong context. Lifelong learning addresses cases where a learner faces a whole collection of learning tasks over its entire lifetime. If these tasks are appropriately related, knowledge learned in the first n - 1 tasks can be transferred to the n-th task to boost generalization accuracy. Two special cases of lifelong learning problems have been investigated in this book: lifelong supervised learning and lifelong control learning. In both cases, lifelong learning involves learning at a meta-level, in which whole spaces of appropriate base-level hypotheses are considered. Consequently, learning at the meta-level requires different representations than base-level learning.
Sebastian Thrun
Backmatter
Metadata
Title
Explanation-Based Neural Network Learning
Author
Sebastian Thrun
Copyright Year
1996
Publisher
Springer US
Electronic ISBN
978-1-4613-1381-6
Print ISBN
978-1-4612-8597-7
DOI
https://doi.org/10.1007/978-1-4613-1381-6