
About this book

This book contains a selection of papers presented at the International Workshop on Machine Learning, Meta-Reasoning and Logics, held at the Hotel de Mar in Sesimbra, Portugal, 15-17 February 1988. All the papers were edited afterwards. The Workshop encompassed several fields of Artificial Intelligence: Machine Learning, Belief Revision, Meta-Reasoning and Logics. The objective of this Workshop was not only to address the common issues in these areas, but also to examine how to elaborate cognitive architectures for systems capable of learning from experience, revising their beliefs and reasoning about what they know.

Acknowledgements

The editing of this book has been supported by the COST-13 Project Machine Learning and Knowledge Acquisition, funded by the Commission of the European Communities, which has covered a substantial part of the costs. Other sponsors who have supported this work were the Junta Nacional de Investigação Científica (JNICT), the Instituto Nacional de Investigação Científica (INIC) and the Fundação Calouste Gulbenkian. I wish to express my gratitude to all these institutions. Finally, my special thanks to Paula Pereira and Ana Nogueira for their help in preparing this volume. This work included retyping all the texts and preparing the camera-ready copy.

Introduction

1. Meta-Reasoning and Machine Learning

The first chapter is concerned with the role meta-reasoning plays in intelligent systems capable of learning. As we can see from the papers that appear in this chapter, there are basically two different schools of thought.

Table of contents

Frontmatter

Meta-Reasoning and Machine Learning

Frontmatter

A Metalevel Manifesto

Abstract
Metalevel architectures are gaining widespread use in many domains. This paper examines the metalevel as a system level, describes the type of knowledge embodied in a metalevel, and discusses the characteristics of good metalevel representations.
D. Paul Benjamin

A Sketch of Autonomous Learning using Declarative Bias

Abstract
This paper summarizes progress towards the construction of autonomous learning agents, in particular those that use existing knowledge in the pursuit of new learning goals. To this end, we show that the bias driving a concept-learning program can be expressed as a first-order sentence that reflects knowledge of the domain in question. We then show how the process of learning a concept from examples can be implemented as a derivation of the appropriate bias for the goal concept, followed by a first-order deduction from the bias and the facts describing the instances. Given sufficient background knowledge, the example complexity of learning can be considerably reduced. Shift of bias, certain kinds of “preference-type” bias, and noisy instance data can be handled by moving to a non-monotonic inference system [Grosof & Russell, this volume]. We emphasize that learning can and should be viewed as an interaction between new experiences and existing knowledge.
Stuart J. Russell, Benjamin N. Grosof
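
As a purely illustrative instance of this learning-as-deduction view (the predicates Q, P, R and the constant a below are hypothetical and not taken from the paper), a declarative bias might state that the goal concept has exactly one of two candidate definitions:

\[
\bigl(\forall x\,(Q(x) \leftrightarrow P(x))\bigr) \;\vee\; \bigl(\forall x\,(Q(x) \leftrightarrow R(x))\bigr)
\]

Together with the instance facts \(Q(a)\) and \(\neg P(a)\), which refute the first disjunct, the definition \(\forall x\,(Q(x) \leftrightarrow R(x))\) follows by ordinary first-order deduction, so a single example suffices to identify the concept.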

Shift of Bias as Non-Monotonic Reasoning

Abstract
We show how to express many kinds of inductive leaps, and shifts of bias, as deductions in a non-monotonic logic of prioritized defaults, based on circumscription. This continues our effort to view learning a concept from examples as an inference process, based on a declarative representation of biases, developed in [Russell & Grosof 1987, this volume]. In particular, we demonstrate that “version space” bias can be encoded formally in such a way that it will be weakened when contradicted by observations. Implementation of inference in the non-monotonic logic then enables the principled, automatic modification of the description space employed in a concept learning program, which Bundy et al. (1985) named as “the most urgent problem facing automatic learning”. We also show how to formulate with prioritized defaults two kinds of “preference” biases: maximal specificity and maximal generality. This leads us to a perspective on inductive biases as preferred beliefs (about the external environment). Several open questions remain, including how to implement efficiently the required non-monotonic theorem-proving, and how to provide the epistemological basis for default axioms and priorities among default axioms.
Benjamin N. Grosof, Stuart J. Russell
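
One simple way to picture a bias as a default belief that yields when contradicted (a generic circumscription-style sketch, not necessarily the encoding used in the paper; Ab, Q and C are hypothetical predicates) is to guard the bias with an abnormality predicate that is minimized:

\[
\forall x\,\bigl(\neg Ab(x) \rightarrow (Q(x) \leftrightarrow C(x))\bigr), \qquad \text{circumscribing } Ab.
\]

As long as no observation conflicts with the candidate definition \(C\), minimizing \(Ab\) makes the bias hold everywhere; an observation \(Q(a) \wedge \neg C(a)\) forces \(Ab(a)\), weakening the bias only where it is contradicted. Priorities among several such defaults then determine which bias gives way first.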

Mutual Constraints on Representation and Inference

Abstract
We examine the nature of representations from first principles, with the goal of providing an autonomous resource-limited agent with the ability to construct an appropriate language in which to express its beliefs. The principal cognitive constraints on representations for such agents are correctness of conclusions and efficiency of use. For deductive systems, the choice of formulation can have a great impact on efficiency, and we present a generative technique for automatic reformulation that creates abstractions to improve performance. The competence (ability to solve problems correctly) of non-deductive systems depends strongly on choice of representation. We illustrate this dependence using two simple cases from nonmonotonic reasoning and give some ideas for choosing representations that allow successful analogical reasoning. In sum, we propose a new division of labor between representation choice and inference.
Stuart Russell, Devika Subramanian

Meta-Reasoning: Transcription of an Invited Lecture by Luigia Aiello

Abstract
This text is an edited version of the lecture on Meta-Reasoning, presented by Luigia Aiello at the Workshop in Sesimbra, Portugal. The lecture was recorded and the transcript was edited afterwards.
Luigia Aiello

Backmatter

Reasoning about Proofs and Explanations

Frontmatter

Overgenerality in Explanation-Based Generalization

Abstract
This paper demonstrates how explanation-based generalization (EBG) can create overgeneral rules if relevant knowledge is hidden from the EBG process, and proposes two solutions to this problem. EBG finds the weakest preconditions of a proof to form a generalization covering all cases for which the proof succeeds. However, when knowledge relevant to the success of EBG is hidden from the EBG process, the results of the process can be overgeneral. Two examples of such knowledge are discussed: search control, and theories of operationality. The key idea is that when such relevant knowledge is hidden from the EBG process, the results of EBG are no longer unconditionally true. Additional conditions, namely the conditions under which the extra information would still hold, must be added to the results of EBG to guarantee their correctness. Two methods for generating these additional conditions are presented: explicitly generalizing meta-level reasoning and including the results in the base-level generalization; and rewriting the domain theory to include the additional knowledge so that standard EBG forms correct results.
Haym Hirsh
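
As a reminder of the weakest-precondition step the abstract refers to (the domain theory below is invented for illustration and is not taken from the paper): given the rules

\[
Cup(x) \leftarrow Liftable(x) \wedge OpenVessel(x), \qquad Liftable(x) \leftarrow Light(x) \wedge HasHandle(x),
\]

an explanation of \(Cup(obj_1)\) regresses to the generalization

\[
Cup(x) \leftarrow Light(x) \wedge HasHandle(x) \wedge OpenVessel(x).
\]

If knowledge outside this proof, such as the search control that selected the rules or the operationality theory that makes Light and HasHandle acceptable leaf predicates, only holds under further conditions, then the regressed rule omits those conditions and can apply where the original reasoning would not.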

A Tool for the Management of Incomplete Theories: Reasoning about Explanation

Abstract
Explanation-Based Learning is drawing increasing interest in the Machine Learning community. Many researchers are now interested in the problem of integrating Explanation-Based Learning (EBL) with different techniques of empirical learning. We propose a system that relies on Explanation-Based Generalization (EBG). In our case, the EBG module receives multiple concept instances. The learning mechanism presented in this paper allows incremental modification of the EBG-generated generalization.
When dealing with incomplete theories, we propose to complete proofs that fail using an abduction mechanism. The problem then is to limit the number of possible explanations to be considered. For that purpose, the abduction process is guided by comparison to a reference explanation. We look for an augmented explanation which is analogous to the already known explanation of the concept being studied. We thus propose incremental refinements to the existing rules of the theory.
Béatrice Duval, Yves Kodratoff

A Comparison of Rule and Exemplar-Based Learning Systems

Abstract
Recently, there has been renewed interest in the use of exemplar-based schemes for concept representation and learning. In this paper, we compare systems learning concepts represented in this form with those which learn concepts represented by decision rules, such as the ID3 and AQ rule induction systems. We aim to clarify the distinction between the two representational schemes, and compare how systems based on the different schemes address the problem of learning within finite resources. Our conclusions are that the schemes differ in two important ways: in the different ‘biases’ with which they select between alternative concepts during search, and in whether they generalise before or during a run-time task. We also show that, in addressing the problem of finite resources, important commonalities between implementations based on both representational schemes arise, and by highlighting them we aim to encourage the transfer of techniques between the two paradigms.
Peter Clark

Discovery and Revision via Incremental Hill Climbing

Abstract
An important issue in machine learning concerns how initial discoveries learned in a domain must be revised over time to account for newly learned beliefs. When performing this task, existing truth maintenance systems, such as the TMS and ATMS, tend to keep too many alternative beliefs in memory. When searching for consistency among beliefs, an ATMS typically finds all solutions; a TMS finds only one, but beliefs not part of the current theory are still kept in memory in case they become current at a later time. Neither approach uses heuristics to decide which beliefs should be active, and thus in some domains they do too much work. In this paper I describe Revolver, a discovery system that conducts more reasoned belief revision through an incremental, hill-climbing approach. A main assumption of the program is that not all beliefs should be stored over time; Revolver uses a heuristic evaluation function to decide how beliefs should be revised to account for new information. First, I illustrate the program’s behavior with a detailed example from the domain of chemical discovery. An analysis of the system’s behavior follows, with particular emphasis on issues pertaining to its belief revision process. I then compare Revolver to other belief revision approaches. Ideas are then presented for synthesizing these various methods, followed by a summary and closing remarks.
Donald Rose
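
To make the incremental hill-climbing strategy concrete, here is a minimal, hypothetical Python sketch (a generic illustration of the strategy described above, not the Revolver program; the literals, the scoring function and all names are invented). A single current belief set is kept, and when a new observation conflicts with it, only the best-scoring consistent revision is retained, while the alternatives are discarded rather than stored for later backtracking.

    def negate(lit):
        # "p" <-> "~p"
        return lit[1:] if lit.startswith("~") else "~" + lit

    def consistent(beliefs):
        # A belief set is inconsistent if it contains a literal and its negation.
        return not any(negate(b) in beliefs for b in beliefs)

    def candidate_revisions(beliefs, observation):
        # One-step neighbours: add the observation, dropping a conflicting belief if any.
        conflicting = {b for b in beliefs if negate(b) == observation}
        for b in conflicting or {None}:
            revised = set(beliefs) - ({b} if b else set())
            revised.add(observation)
            if consistent(revised):
                yield frozenset(revised)

    def revise(beliefs, observation, score):
        # Hill climbing: keep only the best-scoring consistent neighbour;
        # alternatives are discarded instead of being kept for backtracking.
        candidates = list(candidate_revisions(beliefs, observation))
        return max(candidates, key=score) if candidates else beliefs

    # Hypothetical usage: prefer larger belief sets as a stand-in heuristic.
    current = frozenset({"reacts(a,b)", "~compound(c)"})
    current = revise(current, "compound(c)", score=len)
    print(sorted(current))   # ['compound(c)', 'reacts(a,b)']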

Learning from Imperfect Data

Abstract
Systems interacting with the real world must address the issues raised by the possible presence of errors in the observations. In this paper we first present a framework for discussing imperfect data and the resulting problems. We distinguish between various categories of errors, such as random errors, systematic errors or errors in teaching. We examine some of the techniques currently used in AI for dealing with random errors and discuss the way the other types of errors could be dealt with.
Pavel Brazdil, Peter Clark

Foundations of AI and Machine Learning

Frontmatter

Knowledge Revision and Multiple Extensions

Abstract
We present a tense logic, ZK, for representing changes within knowledge based systems. ZK has tense operators for the future and for the past, and a modal operator for describing consistency. A knowledge based system consists of a set K of general laws (described by first order formulae) and of a set of states, each described by a first order formula (called descriptive formula). Changes are represented by pairs of formulae (P,R) (precondition, result). A change can occur within a state whenever the preconditions are true. The descriptive formula of the resulting new state is the conjunction of R with the maximal subformula of the descriptive formula of the old state which is consistent with K and R. Generally, a change will yield more than one new state (multiple extensions).
Camilla B. Schwind
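
As a small hypothetical illustration of how multiple extensions arise (reading the maximal subformula as a maximal sub-conjunction of the old descriptive formula; the propositional symbols are invented): let the laws be \(K = \{\neg(r \wedge p \wedge q)\}\), let the old state be described by \(p \wedge q\), and let a change with result \(R = r\) apply. Both \(p\) and \(q\) are maximal parts of the old description consistent with \(K\) and \(R\), so the change yields two new states, described by

\[
r \wedge p \qquad \text{and} \qquad r \wedge q.
\]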

Minimal Change — A Criterion for Choosing between Competing Models

Abstract
Situations are often encountered in which we are forced to make some decision although there is not enough information available. In such situations we normally use common sense to draw conclusions on the basis of incomplete knowledge. This reasoning mechanism can be regarded as meta-reasoning which chooses preferred models from several consistent models. This paper formalizes common sense reasoning of tree-structured inheritance systems and temporal projection in one framework — minimal change model. Both types of reasoning have a common mechanism — to prefer a model which changes minimally in one direction. In inheritance systems, the direction is from superclass to subclass, and in temporal projection, the direction is from earlier state to later state.
Ken Satoh
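
One generic way to state such a preference formally (an illustrative formulation; the paper’s own definitions may differ) is to order models by the set of facts they change along the designated direction:

\[
M_1 \preceq M_2 \;\iff\; \mathrm{Changed}(M_1) \subseteq \mathrm{Changed}(M_2),
\]

where \(\mathrm{Changed}(M)\) collects the properties whose value in \(M\) differs from their value at the predecessor (the superclass in an inheritance hierarchy, or the earlier state in temporal projection), and the preferred models are the \(\preceq\)-minimal ones.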

Hierarchic Autoepistemic Theories for Nonmonotonic Reasoning: Preliminary Report

Abstract
Nonmonotonic logics are meant to be a formalization of nonmonotonic reasoning. However, for the most part they fail to embody two of the most important aspects of such reasoning: the explicit computational nature of nonmonotonic inference, and the assignment of preferences among competing inferences. We propose a method of nonmonotonic reasoning in which the notion of inference from specific bodies of evidence plays a fundamental role. The formalization is based on autoepistemic logic, but introduces additional structure, a hierarchy of evidential spaces. The method offers a natural formalization of many different applications of nonmonotonic reasoning, including reasoning about action, speech acts, belief revision, and various situations involving competing defaults.
Kurt Konolige

Automated Quantified Modal Logic

Abstract
In this paper we present a deduction method for quantified modal logics via extensions of the classical unification algorithm. We show that deductions in modal logics can be formulated as particular problems in unification theory.
L. Fariñas del Cerro, A. Herzig

Backmatter

Further information