
1993 | Book

Machine Learning: ECML-93

European Conference on Machine Learning Vienna, Austria, April 5–7, 1993 Proceedings

Edited by: Pavel B. Brazdil

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This volume contains the proceedings of the European Conference on Machine Learning (ECML-93), continuing the tradition of the five earlier EWSLs (European Working Sessions on Learning). The aim of these conferences is to provide a platform for presenting the latest results in the area of machine learning. The ECML-93 programme included invited talks, selected papers, and the presentation of ongoing work in poster sessions. The programme was complemented by several workshops on specific topics. The volume contains papers related to all these activities. The first chapter of the proceedings contains two invited papers, one by Ross Quinlan and one by Stephen Muggleton on inductive logic programming. The second chapter contains 18 scientific papers accepted for the main sessions of the conference. The third chapter contains 18 shorter position papers. The final chapter includes three overview papers related to the ECML-93 workshops.

Table of contents

Frontmatter
FOIL: A midterm report

FOIL is a learning system that constructs Horn clause programs from examples. This paper summarises the development of FOIL from 1989 up to early 1993 and evaluates its effectiveness on a non-trivial sequence of learning tasks taken from a Prolog programming text. Although many of these tasks are handled reasonably well, the experiment highlights some weaknesses of the current implementation. Areas for further research are identified.

J. R. Quinlan, R. M. Cameron-Jones
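
For orientation, FOIL's greedy construction of each clause is driven by an information-based gain heuristic from Quinlan's earlier FOIL papers (not restated in the abstract above). If a partial clause covers p0 positive and n0 negative tuples, adding literal L leaves p1 positive and n1 negative tuples, and t of the originally covered positive tuples remain covered, the gain is

```latex
\mathrm{Gain}(L) = t \left( \log_2 \frac{p_1}{p_1 + n_1} - \log_2 \frac{p_0}{p_0 + n_0} \right)
```
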
Inductive logic programming: Derivations, successes and shortcomings

Inductive Logic Programming (ILP) is a research area which investigates the construction of quantified definite clause theories from examples and background knowledge. ILP systems have been applied successfully in a number of real-world domains. These include the learning of structure-activity rules for drug design, finite-element mesh design rules, rules for primary-secondary prediction of protein structure and fault diagnosis rules for satellites. There is a well-established tradition of learning-in-the-limit results in ILP. Recently some results within Valiant's PAC-learning framework have also been demonstrated for ILP systems. In this paper it is argued that algorithms can be directly derived from the formal specifications of ILP. This provides a common basis for Inverse Resolution, Explanation-Based Learning, Abduction and Relative Least General Generalisation. A new general-purpose, efficient approach to predicate invention is demonstrated. ILP is underconstrained by its logical specification. Therefore a brief overview of extra-logical constraints used in ILP systems is given. Some present limitations and research directions for the field are identified.

Stephen Muggleton
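
The "formal specifications of ILP" from which the paper derives algorithms are commonly stated as follows (this is the standard normal-semantics formulation, paraphrased here rather than quoted from the paper): given background knowledge B, positive examples E+ and negative examples E-, find a hypothesis H with

```latex
B \wedge H \models E^{+} \quad \text{(sufficiency)}, \qquad
B \wedge H \wedge E^{-} \not\models \Box \quad \text{(consistency)}
```
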
Two methods for improving inductive logic programming systems

In this paper we describe two methods for improving systems that induce disjunctive Horn clause definitions. The first method is the well-known use of argument types during induction. Our novel contribution is an algorithm for extracting type information from the example set mechanically. The second method provides a set of clause heads partitioning the example set into disjuncts according to structural properties. Those heads can be used in top-down inductive inference systems as the starting point of the general-to-specific search, reducing the resulting space of clause bodies.

Irene Stahl, Birgit Tausend, Rüdiger Wirth
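
The abstract does not spell out the type-extraction algorithm; the sketch below is one simple scheme consistent with its description, offered only as an illustration: collect the constants observed at each (predicate, argument-position) slot and merge slots whose value sets overlap, so that each merged group acts as an inferred type.

```python
from itertools import combinations

def extract_types(examples):
    """Illustrative type extraction from ground facts (not the paper's algorithm).

    examples: list of (predicate, args) tuples, e.g. ("father", ("tom", "bob")).
    Two (predicate, position) slots share a type when the sets of constants
    observed there intersect.
    """
    values = {}
    for pred, args in examples:
        for i, a in enumerate(args):
            values.setdefault((pred, i), set()).add(a)

    # Union-find over slots; merge slots with overlapping value sets.
    parent = {slot: slot for slot in values}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s
    for s1, s2 in combinations(values, 2):
        if values[s1] & values[s2]:
            parent[find(s1)] = find(s2)

    groups = {}
    for slot in values:
        groups.setdefault(find(slot), []).append(slot)
    return list(groups.values())

# Both argument positions of parent/2 and the argument of male/1 hold people,
# so overlapping constants merge the three slots into a single type.
facts = [("parent", ("tom", "bob")), ("parent", ("bob", "ann")),
         ("male", ("tom",)), ("male", ("bob",))]
print(extract_types(facts))
```
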
Generalization under implication by using or-introduction

In the area of inductive learning, generalization is a central operation. As early as the beginning of the 1970s, Plotkin described algorithms for computing least general generalizations of clauses under θ-subsumption. However, there is a type of generalization, the so-called roots of clauses, that cannot be found by generalization under θ-subsumption. This incompleteness is important, since almost all inductive learners that use clausal representation perform generalization under θ-subsumption. In this paper a technique that eliminates this incompleteness by reducing generalization under implication to generalization under θ-subsumption is presented. The technique is conceptually simple and is based on an inference rule from natural deduction called or-introduction. The technique is proved to be sound and complete, but unfortunately it suffers from complexity problems.

Peter Idestam-Almquist
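
Or-introduction is the natural-deduction rule that licenses weakening a formula by an arbitrary disjunct:

```latex
\frac{C}{C \vee D} \quad (\vee\text{-introduction})
```

Schematically, the clauses to be generalized are first weakened this way, and Plotkin-style generalization under θ-subsumption is then applied to the weakened clauses, making generalizations under implication (including roots of clauses) reachable; the precise construction is given in the paper.
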
On the proper definition of minimality in specialization and theory revision

A central operation in an incremental learning system is the specialization of an incorrect theory in order to exclude incorrect inferences. In this paper, we discuss which properties should be required of such theory revision operations. In particular, we examine what it should mean for a revision to be minimal. As a surprising result, the seemingly most natural criterion, requiring revisions to produce maximally general correct specializations, leads to a number of serious problems. We therefore propose an alternative interpretation of minimality based on the notion of base revision from theory contraction work, and formally define it as a set of base revision postulates. We then present a revision operator (Mbr) that meets these postulates, and show that it produces the maximally general correct revision satisfying the postulates, i.e., the revisions produced by Mbr are indeed minimal in our sense. The operator is implemented and used in Krt, the knowledge revision tool of the Mobal system.

Stefan Wrobel
Predicate invention in inductive data engineering

By inductive data engineering we mean the (interactive) process of restructuring a knowledge base by means of induction. In this paper we describe INDEX, a system that constructs decompositions of database relations by inducing attribute dependencies. The system employs heuristics to locate exceptions to dependencies satisfied by most of the data, and to avoid the generation of dependencies for which the data don't provide enough support. The system is implemented in a deductive database framework, and can be viewed as an Inductive Logic Programming system with predicate invention capabilities.

Peter A. Flach
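
INDEX's core test, inducing attribute dependencies while tolerating exceptions, can be pictured with the generic check below: count the rows violating a functional dependency X → Y and accept the dependency when the violations stay under a support threshold. This is an illustrative sketch, not Flach's implementation.

```python
def fd_violations(rows, lhs, rhs):
    """Count rows violating the functional dependency lhs -> rhs.

    rows: list of dicts mapping attribute names to values.
    A violation is a row whose lhs-value is already associated with a
    different rhs-value.
    """
    seen, bad = {}, 0
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if key in seen and seen[key] != val:
            bad += 1
        else:
            seen.setdefault(key, val)
    return bad

def holds_mostly(rows, lhs, rhs, max_exceptions=1):
    """Accept a dependency that the data support up to a few exceptions."""
    return fd_violations(rows, lhs, rhs) <= max_exceptions

rows = [{"zip": "1010", "city": "Vienna"},
        {"zip": "1010", "city": "Vienna"},
        {"zip": "1010", "city": "Wien"}]   # one deviating row
print(holds_mostly(rows, ("zip",), ("city",)))  # True: zip -> city, 1 exception
```
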
Subsumption and refinement in model inference

In his famous Model Inference System, Shapiro [10] uses so-called refinement operators to replace too general hypotheses by logically weaker ones. One of these refinement operators works in the search space of reduced first order sentences. In this article we show that, contrary to Shapiro's claim, this operator is not complete for reduced sentences. We investigate the relations between subsumption and refinement as well as the role of a complexity measure. We present an inverse reduction algorithm which is used in a new refinement operator. This operator is complete for reduced sentences. Finally, we relate our new refinement operator to its dual, a generalization operator, and its possible application in model inference using inverse resolution.

Patrick R. J. van der Laag, Shan-Hwei Nienhuys-Cheng
Some lower bounds for the computational complexity of inductive logic programming

The field of Inductive Logic Programming (ILP), which is concerned with the induction of Horn clauses from examples and background knowledge, has recently received increased attention. Some positive results concerning the learnability of restricted logic programs have been published. In this paper we review these restrictions and prove some lower bounds on the computational complexity of learning. In particular, we show that a learning algorithm for i2-determinate Horn clauses (with variable i) could be used to decide the PSPACE-complete problem of Finite State Automata Intersection, and that a learning algorithm for 12-nondeterminate Horn clauses could be used to decide the NP-complete problem of Boolean Clause Satisfiability (SAT). This also shows that these Horn clauses are not PAC-learnable, unless RP=NP=PSPACE.

Jörg-Uwe Kietz
Improving example-guided unfolding

It has been observed that the addition of clauses learned by explanation-based generalization may degrade, rather than improve, the efficiency of a logic program. There are three reasons for the degradation: (i) increased unification cost, (ii) increased inter-clause repetition of goal calls, and (iii) increased redundancy. There have been several approaches to solve (or reduce) these problems. However, previous techniques that solve the redundancy problem in fact aggravate the first two problems. Hence, the benefit of avoiding redundancy might be outweighed by the cost associated with these techniques. A solution to this problem is presented: the algorithm EGU II, which is a reformulation of one of the previous techniques (Example-Guided Unfolding). The algorithm is based upon the application of program transformation rules (definition, unfolding and folding) and is shown to preserve the equivalence of the domain theory. Experimental results are presented showing that the cost of avoiding redundancy is significantly reduced by EGU II, and that even when the redundancy problem is not present, the technique can be superior to adding clauses redundantly.

Henrik Boström
Bayes and pseudo-Bayes estimates of conditional probabilities and their reliability

Various ways of estimating probabilities, mainly within the Bayesian framework, are discussed. Their relevance and application to machine learning are explained, and their relative performance is empirically evaluated. A method of accounting for noisy data is given and also applied. The reliability of estimates is measured by a significance measure, which is also empirically tested. We briefly discuss the use of the likelihood ratio as a significance measure.

James Cussens
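
Two textbook estimators of the kind discussed here are the Laplace estimate and the m-estimate; the sketch below shows both. The paper's specific pseudo-Bayes variants and its significance measure are not reproduced.

```python
def laplace_estimate(n_pos, n, k=2):
    """Laplace estimate of P(class | rule) with k classes: (n_pos + 1) / (n + k)."""
    return (n_pos + 1) / (n + k)

def m_estimate(n_pos, n, prior, m=2.0):
    """m-estimate: shrink the relative frequency towards the class prior,
    as if m virtual examples distributed according to the prior were added."""
    return (n_pos + m * prior) / (n + m)

# 3 of 4 covered examples are positive; the class prior is 0.25.
print(laplace_estimate(3, 4))        # 0.666...
print(m_estimate(3, 4, prior=0.25))  # 0.583...: pulled towards the prior
```
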
Induction of recursive Bayesian classifiers

In this paper, we review the induction of simple Bayesian classifiers, note some of their drawbacks, and describe a recursive algorithm that constructs a hierarchy of probabilistic concept descriptions. We posit that this approach should outperform the simpler scheme in domains that involve disjunctive concepts, since they violate the independence assumption on which the latter relies. To test this hypothesis, we report experimental studies with both natural and artificial domains. The results are mixed, but they are encouraging enough to recommend closer examination of recursive Bayesian classifiers in future work.

Pat Langley
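
The independence assumption at issue is the simple Bayesian classifier's factorization:

```latex
P(C \mid x_1, \ldots, x_n) \;\propto\; P(C) \prod_{i=1}^{n} P(x_i \mid C)
```

Attributes that are independent within each disjunct of a disjunctive concept need not be independent given the concept as a whole, which is why splitting such a concept into subclasses, as the recursive scheme does, can restore the assumption locally.
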
Decision tree pruning as a search in the state space

This paper presents a study of one particular problem of decision tree induction, namely (post-)pruning, with the aim of finding a common framework for the plethora of pruning methods that have appeared in the literature. Given a tree Tmax to prune, a state space is defined as the set of all subtrees of Tmax, to which a single operator, the any-depth branch pruning operator, can be applied in several ways in order to move from one state to another. By introducing an evaluation function f defined on the set of subtrees, the problem of tree pruning can be cast as an optimization problem, and it is also possible to classify each post-pruning method according to both its search strategy and the kind of information exploited by f. Indeed, while some methods use only the training set in order to evaluate the accuracy of a decision tree, other methods exploit an additional pruning set that allows them to get less biased estimates of the predictive accuracy of a pruned tree. The introduction of the state space shows that very simple search strategies are used by the post-pruning methods considered. Finally, some empirical results allow theoretical observations on the strengths and weaknesses of pruning methods to be better understood.

Floriana Esposito, Donato Malerba, Giovanni Semeraro
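
A minimal rendering of the state-space view: states are subtrees of Tmax, the operator replaces an internal node by a leaf, and f is accuracy on a pruning set. The sketch below is a generic reduced-error-pruning-style instance of that framework (bottom-up hill climbing), not code from the paper; the tree encoding is a made-up placeholder.

```python
# A tree is either a class label (leaf) or a tuple
# (attribute, {value: subtree}, majority_label).
def classify(tree, x):
    while isinstance(tree, tuple):
        attr, branches, majority = tree
        tree = branches.get(x.get(attr), majority)
    return tree

def accuracy(tree, data):
    """The evaluation function f: accuracy on a separate pruning set."""
    return sum(classify(tree, x) == y for x, y in data) / len(data)

def prune(tree, data):
    """Bottom-up: prune the subtrees first, then collapse this node to its
    majority-class leaf unless that lowers accuracy on the pruning set."""
    if not isinstance(tree, tuple):
        return tree
    attr, branches, majority = tree
    branches = {v: prune(t, data) for v, t in branches.items()}
    kept = (attr, branches, majority)
    return majority if accuracy(majority, data) >= accuracy(kept, data) else kept

tree = ("outlook", {"sunny": "no",
                    "rain": ("wind", {"strong": "no", "weak": "yes"}, "yes")}, "yes")
pruning_set = [({"outlook": "rain", "wind": "strong"}, "yes"),
               ({"outlook": "rain", "wind": "weak"}, "yes")]
print(prune(tree, pruning_set))  # collapses to the leaf 'yes' on this tiny set
```
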
Controlled redundancy in incremental rule learning

This paper introduces a new concept learning system. Its main features are presented and discussed. The controlled use of redundancy is one of the main characteristics of the program. Redundancy, in this system, is used to deal with several types of uncertainty existing in real domains. The problem of using redundancy is addressed, namely its influence on accuracy and comprehensibility. Extensive experiments were carried out on three real-world domains. These experiments clearly showed the advantages of using redundancy.

Luis Torgo
Getting order independence in incremental learning

It is empirically known that most incremental learning systems are order dependent, i.e. they provide results that depend on the particular order of the data presentation. This paper aims at uncovering the reasons behind this, and at specifying the conditions that would guarantee order independence. It is shown that an optimality criterion and a storage criterion together are sufficient for ensuring order independence. Given that these correspond to very strong requirements, however, it is interesting to study necessary, hopefully less stringent, conditions. The results obtained prove that these necessary conditions are equally difficult to meet in practice. Besides its main outcome, this paper provides an interesting method to transform a history-dependent bias into a history-independent one.

Antoine Cornuéjols
Feature selection using rough sets theory

The paper addresses one aspect of learning from examples, namely learning how to identify the class a given object instance belongs to. A method of generating a sequence of features allowing such identification is presented. In this approach examples are represented in the form of an attribute-value table with binary attribute values. The main assumption is that one feature sequence is determined for all possible object instances, that is, the next feature in the order does not depend on the values of the previous features. An algorithm generating a sequence under these conditions is given. The theoretical background of the proposed method is rough set theory, of which some generalizations are introduced in the paper. Finally, a discussion of the presented approach is provided and results of applying the proposed algorithm are summarized. Directions for further research are also indicated.

Maciej Modrzejewski
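
The rough-set criterion behind such feature sequences can be stated simply: a set of binary attributes suffices when no two objects that agree on all of them belong to different classes. Below is a greedy illustration of building such a sequence; it follows the spirit of the approach but is not Modrzejewski's algorithm, and it assumes the full attribute set is consistent with the classification.

```python
def consistent(objects, attrs):
    """True if no two objects agree on attrs yet differ in class.
    objects: list of (feature_dict, cls) pairs."""
    seen = {}
    for feats, cls in objects:
        key = tuple(feats[a] for a in attrs)
        if seen.setdefault(key, cls) != cls:
            return False
    return True

def greedy_feature_sequence(objects, all_attrs):
    """Greedily append the attribute leaving the fewest indiscernibility
    groups that still mix classes, until the sequence identifies the class."""
    chosen = []
    while not consistent(objects, chosen):
        def mixed_groups(a):
            groups = {}
            for feats, cls in objects:
                groups.setdefault(tuple(feats[x] for x in chosen + [a]), set()).add(cls)
            return sum(len(cs) > 1 for cs in groups.values())
        chosen.append(min((a for a in all_attrs if a not in chosen), key=mixed_groups))
    return chosen

objs = [({"a": 0, "b": 0}, "x"), ({"a": 0, "b": 1}, "y"), ({"a": 1, "b": 0}, "y")]
print(greedy_feature_sequence(objs, ["a", "b"]))  # ['a', 'b']
```
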
Effective learning in dynamic environments by explicit context tracking

Daily experience shows that in the real world, the meaning of many concepts heavily depends on some implicit context, and changes in that context can cause radical changes in the concepts. This paper introduces a method for incremental concept learning in dynamic environments where the target concepts may be context-dependent and may change drastically over time. The method has been implemented in a system called FLORA3. FLORA3 is very flexible in adapting to changes in the target concepts and tracking concept drift. Moreover, by explicitly storing old hypotheses and re-using them to bias learning in new contexts, it possesses the ability to utilize experience from previous learning. This greatly increases the system's effectiveness in environments where contexts can reoccur periodically. The paper describes the various algorithms that constitute the method and reports on several experiments that demonstrate the flexibility of FLORA3 in dynamic environments.

Gerhard Widmer, Miroslav Kubat
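
FLORA3's internal representation and window-adjustment heuristics are not given in the abstract, so the sketch below shows only the generic skeleton such a drift tracker shares: learn from a sliding window of recent examples, shrink the window when drift is suspected, and keep old hypotheses around for reuse when a context recurs. All names here are hypothetical.

```python
class WindowedLearner:
    """Schematic concept-drift tracker in the FLORA family's spirit;
    not FLORA3 itself."""

    def __init__(self, induce, window_size=50):
        self.induce = induce          # batch learner: list of examples -> hypothesis
        self.window_size = window_size
        self.window = []              # recent examples: the current context
        self.store = []               # old hypotheses, reusable on recurring contexts
        self.hypothesis = None

    def observe(self, example, drift_suspected=False):
        self.window.append(example)
        if drift_suspected and self.hypothesis is not None:
            # Remember the outgoing hypothesis and forget aggressively; a real
            # system would also test stored hypotheses against the new window
            # and use a matching one to bias re-learning.
            self.store.append(self.hypothesis)
            self.window = self.window[-self.window_size // 2:]
        self.window = self.window[-self.window_size:]
        self.hypothesis = self.induce(self.window)
        return self.hypothesis
```
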
COBBIT—A control procedure for COBWEB in the presence of concept drift

This paper is concerned with the robustness of concept formation systems in the presence of concept drift. By concept drift is meant that the intension of a concept is not stable during the period of learning, a restriction which is otherwise often imposed. The work is based upon the architecture of Cobweb, an incremental, probabilistic conceptual clustering system. When incrementally and sequentially exposed to the extensions of a set of concepts, Cobweb retains all examples, disregards the age of a concept and may create different conceptual structures depending on the order of examples. These three characteristics make Cobweb sensitive to the effects of concept drift. Six mechanisms that can detect concept drift and adjust the conceptual structure are proposed. A variant of one of these mechanisms, dynamic deletion of old examples, is implemented in a modified Cobweb system called Cobbit. The relative performance of Cobweb and Cobbit in the presence of concept drift is evaluated. In the experiment the error index, i.e. the average ability to predict each attribute, is used as the main instrument. The experiment is performed in a synthetic domain and indicates that Cobbit regains performance faster after a discrete concept shift.

Fredrik Kilander, Carl Gustaf Jansson
Genetic algorithms for protein tertiary structure prediction

This article describes the application of genetic algorithms to the problem of protein tertiary structure prediction. The genetic algorithm is used to search a set of energetically sub-optimal conformations. A hybrid representation of proteins, three operators (MUTATE, SELECT and CROSSOVER) and a fitness function consisting of a simple force field were used. The prototype was applied to the ab initio prediction of Crambin. None of the conformations generated by the genetic algorithm is similar to the native conformation, but all show much lower energy than the native structure on the same force field. This means the genetic algorithm's search was successful but the fitness function was not a good indicator of native structure. In another experiment, the backbone was held constant in the native state and only side chains were allowed to move. For Crambin, this produced an alignment of 1.86 Å r.m.s. from the native structure.

Steffen Schulze-Kremer
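
The search loop behind this (and most genetic algorithms) is the standard generational cycle of SELECT, CROSSOVER and MUTATE over a population scored by a fitness function. The sketch below uses a bit-string representation and a toy energy in place of the paper's hybrid protein representation and force field.

```python
import random

def genetic_algorithm(energy, length=20, pop_size=30, generations=100,
                      p_mut=0.02, seed=0):
    """Generic generational GA minimizing `energy`:
    fitness-proportionate SELECT, one-point CROSSOVER, bit-flip MUTATE."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [1.0 / (1.0 + energy(ind)) for ind in pop]  # lower energy = fitter
        def select():
            return rng.choices(pop, weights=weights, k=1)[0]
        nxt = []
        while len(nxt) < pop_size:
            a, b = select(), select()
            cut = rng.randrange(1, length)                    # one-point crossover
            child = [bit ^ (rng.random() < p_mut)             # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            nxt.append(child)
        pop = nxt
    return min(pop, key=energy)

# Toy "force field": the number of 1-bits; the GA drives it towards all zeros.
best = genetic_algorithm(energy=sum)
print(best, sum(best))
```
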
SIA: A supervised inductive algorithm with genetic search for learning attributes based concepts

This paper describes a genetic learning system called SIA, which learns attribute-based rules from a set of preclassified examples. Examples may be described with a variable number of attributes, which can be numeric or symbolic, and examples may belong to several classes. The SIA algorithm is somewhat similar to the AQ algorithm in that it takes an example as a seed and generalizes it, using a genetic process, to find a rule maximizing a noise-tolerant rule evaluation criterion. The SIA approach to supervised rule learning greatly reduces the possible rule search space when compared to the genetic Michigan and Pitt approaches. SIA is comparable to AQ and decision tree algorithms on two learning tasks. Furthermore, it has been designed for a data analysis task in a large and complex justice domain.

Gilles Venturini
SAMIA: A bottom-up learning method using a simulated annealing algorithm

This paper presents a description and an experimental evaluation of SAMIA, a learning system which induces characteristic concept descriptions from positive instances, negative instances and a background knowledge theory. The resulting concept description is expressed as a disjunction of conjunctive terms in a propositional language. SAMIA works in three steps. The first step consists of an exhaustive use of the theory in order to extend the instance representation. Then the learning component combines a bottom-up induction process with a simulated annealing strategy which performs a search through the concept description space. During the final step, the theory is used again in order to reduce each conjunctive term of the resulting formula to a minimal representation. The paper reports the results of several experiments and compares the performance of SAMIA with two other learning methods, namely ID and CN. Accuracies on test instances and concept description sizes are compared. The experiments indicate that SAMIA's classification accuracy is roughly equivalent to that of the two previous systems. Moreover, as the results of the learning algorithms can be expressed as sets of rules, one can notice that the number of rules in SAMIA's concept descriptions is lower than that of both ID and CN.

Pierre Brézellec, Henri Soldano
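
The simulated annealing strategy in SAMIA's second step follows the usual Metropolis acceptance rule; here is the generic skeleton. The neighbourhood and cost functions over concept descriptions are left as parameters, since SAMIA's own are not given in the abstract.

```python
import math, random

def simulated_annealing(initial, neighbour, cost, t0=1.0, cooling=0.95,
                        steps=1000, seed=0):
    """Standard SA loop: always accept improvements, accept a worsening of
    size delta with probability exp(-delta / t), cool geometrically."""
    rng = random.Random(seed)
    current, best, t = initial, initial, t0
    for _ in range(steps):
        cand = neighbour(current, rng)
        delta = cost(cand) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best
```
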
Predicate invention in ILP — an overview

Inductive Logic Programming (ILP) is a subfield of machine learning dealing with inductive inference in a first order Horn clause framework. A problem in ILP is how to extend the hypothesis language in the case that the vocabulary given initially is insufficient. One way to adapt the vocabulary is to introduce new predicates. In this paper, we give an overview of different approaches to predicate invention in ILP. We discuss theoretical results concerning the introduction of new predicates, and ILP systems capable of inventing predicates.

Irene Stahl
Functional inductive logic programming with queries to the user

The FILP learning system induces functional logic programs from positive examples. For every predicate P, the user is asked to provide a mode (input or output) for each of its arguments, and the system assumes that the mode corresponds to a total function, i.e., for a given input there is one and only one corresponding output that makes the predicate true. Functionality serves two goals: it restricts the hypothesis space and it allows the system to ask existential queries to the user. By means of these queries, missing examples can be added to the ones given initially; this makes the learned programs complete and consistent and the system adequate for learning multiple predicates and recursive clauses in a reliable manner.

F. Bergadano, D. Gunetti
A note on refinement operators

The top-down induction of logic programs is faced with the problem of ensuring that the search space includes all the desired hypotheses. The conventional way of organizing the search space is via refinement of clauses. Within this context the existence of a well-behaved refinement operator complete for Horn clause logic is desirable. We show that there is no natural way to define a complete refinement operator that avoids the production of non-reduced clauses. Consideration is given to subsets of full Horn clause logic for which more efficient refinement operators can be constructed.

Tim Niblett
An iterative and bottom-up procedure for proving-by-example

We give a procedure for generalizing a proof of a concrete instance of a theorem by recovering inductions that have been expanded in the concrete proof. It consists of three operations: introduction, extension and propagation. By iterating these operations in a bottom-up fashion, it can reconstruct nested inductions. We discuss how to use EBG for identifying the induction formula, and how EBG must be modified so that nested inductions can be reconstructed.

Masami Hagiya
Learnability of constrained logic programs

The field of Inductive Logic Programming (ILP) is concerned with inducing logic programs from examples in the presence of background knowledge. This paper defines the ILP problem and describes several syntactic restrictions that are often used in ILP. We then derive some positive results concerning the learnability of these restricted classes of logic programs, by reduction to a standard propositional learning problem. More specifically, k-literal predicate definitions consisting of constrained, function-free, nonrecursive program clauses are polynomially PAC-learnable under arbitrary distributions.

Sašo Džeroski, Stephen Muggleton, Stuart Russell
Complexity dimensions and learnability

In a discussion of the Vapnik-Chervonenkis (VC) dimension ([7]), which is closely related to the learnability of concept classes in Valiant's PAC model ([6]), we give an algorithm to compute it. Furthermore, we place Natarajan's equivalent dimension for well-ordered classes in a more general scheme, by showing that these well-ordered classes satisfy a general condition which makes it possible to construct a number of equivalent dimensions for a class. We give this condition, as well as a relatively efficient algorithm for the calculation of one such dimension for well-ordered classes.

S. H. Nienhuys-Cheng, M. Polman
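
For a finite concept class over a finite domain, the VC dimension can be computed by brute force straight from its definition (the size of the largest set shattered by the class). The code below is an algorithm in that spirit, not the one from the paper:

```python
from itertools import combinations

def shattered(S, concepts):
    """S is shattered if every subset of S equals C & S for some concept C."""
    S = set(S)
    return len({frozenset(C & S) for C in concepts}) == 2 ** len(S)

def vc_dimension(domain, concepts):
    d = 0
    for k in range(1, len(domain) + 1):
        if any(shattered(S, concepts) for S in combinations(domain, k)):
            d = k     # some k-set is shattered; try larger sets
        else:
            break     # subsets of shattered sets are shattered, so stop here
    return d

# Discretized intervals on {1, 2, 3}: {1, 3} cannot be cut out without 2,
# so no 3-set is shattered and the VC dimension is 2.
domain = [1, 2, 3]
concepts = [set(range(a, b + 1)) for a in domain for b in domain if a <= b] + [set()]
print(vc_dimension(domain, concepts))  # 2
```
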
Can complexity theory benefit from Learning Theory?

We show that the results achieved within the framework of Computational Learning Theory are relevant enough to have non-trivial applications in other areas of Computer Science, namely in Complexity Theory. Using known results on efficient query-learnability of some Boolean concept classes, we prove several (co-NP-completeness) results on the complexity of certain decision problems concerning representability of general Boolean functions in special forms.

Tibor Hegedüs
Learning domain theories using abstract background knowledge

Substantial machine learning research has addressed the task of learning new knowledge given a (possibly incomplete or incorrect) domain theory, but leaves open the question of where such domain theories originate. In this paper we address the problem of constructing a domain theory from more general, abstract knowledge which may be available. The basis of our method is to first assume a structure for the target domain theory, and second to view background knowledge as constraints on components of that structure. This enables a focusing of search during learning, and also produces a domain theory which is explainable with respect to the background knowledge. We evaluate an instance of this methodology applied to the domain of economics, where background knowledge is represented as a qualitative model.

Peter Clark, Stan Matwin
Discovering patterns in EEG-signals: Comparative study of a few methods

The objective of this paper is to draw the attention of ML researchers to the domain of data analysis. The issue is illustrated by an attractive case study: automatic classification of non-averaged EEG signals. We applied several approaches and obtained the best results from a combination of an ID3-like program with Bayesian learning.

Miroslav Kubat, Doris Flotzinger, Gert Pfurtscheller
Learning to control dynamic systems with automatic quantization

Reinforcement learning is often used in learning to control dynamic systems, which are described by quantitative state variables. Most previous work that learns qualitative (symbolic) control rules cannot construct the symbols themselves; that is, a correct partition of the state variables, or a correct set of qualitative symbols, is given to the learning program. We do not make this assumption in our work on learning to control dynamic systems. The learning task is divided into two phases. The first phase is to extract symbols from quantitative inputs, a process commonly called quantization. The second phase is to evaluate the symbols obtained in the first phase and to induce the best possible symbolic rules based on those symbols. These two phases interact with each other and thus make the whole learning task very difficult. We demonstrate that our new method, called STAQ (Set Training with Automatic Quantization), can aggressively partition the input variables to a finer resolution until the correct control rules based on these partitions (symbols) are learned. In particular, we use STAQ to solve the well-known cart-pole balancing problem.

Charles X. Ling, Ralph Buchal
Refinement of rule sets with JoJo

In this paper we discuss a new approach to learning classification rules from examples. We sketch out the algorithm JoJo and its extension to a four-step procedure which can be used to incrementally refine a set of classification rules: incorrect rules are refined, the entire rule set is completed, redundant rules are deleted, and the rule set can be minimized. The first two steps are done by applying JoJo, which searches through the lattice of rules by generalization and specialization.

Dieter Fensel, Markus Wiese
Rule combination in inductive learning

This paper describes work on methods for combining rules obtained by machine learning systems. Three methods for obtaining the classification of examples with those rules are compared: selection of the best rule; a PROSPECTOR-like probabilistic approximation for rule combination; and a MYCIN-like approximation. The advantages and disadvantages of each method are discussed and the results obtained on three real-world domains are commented on. The results show significant differences between the methods, indicating that the problem-solving strategy is important for the accuracy of learning systems.

Luis Torgo
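
The two probabilistic combination schemes compared in the paper have well-known textbook cores: MYCIN combines positive certainty factors as cf := cf1 + cf2(1 - cf1), and PROSPECTOR multiplies prior odds by the likelihood ratios of the firing rules. A sketch of both follows (details such as negative evidence are omitted):

```python
from functools import reduce

def combine_mycin(cfs):
    """MYCIN-style combination of positive certainty factors in [0, 1]."""
    return reduce(lambda a, b: a + b * (1 - a), cfs, 0.0)

def combine_prospector(prior_odds, likelihood_ratios):
    """PROSPECTOR-style odds updating, returned as a probability."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

print(combine_mycin([0.6, 0.5]))            # 0.8
print(combine_prospector(1.0, [2.0, 3.0]))  # 6/7 = 0.857...
```
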
Using heuristics to speed up induction on continuous-valued attributes

Induction of decision trees in domains with continuous-valued attributes is computationally expensive due to the evaluation of every possible test on these attributes. As the number of tests to be considered grows linearly with the number of examples, this poses a problem for induction on large databases. Two variants of a heuristic, based on the possible difference of the entropy-minimization selection-criterion between two tests, are proposed and compared to a previously known heuristic. Empirical results with real-world data confirm that the heuristics can reduce the computational effort significantly without any change in the induced decision trees.

Günter Seidelmann
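
The cost the heuristics attack is the baseline computation below: sort by the continuous attribute and evaluate the class entropy of every candidate threshold between adjacent distinct values. The paper's bounding heuristics themselves are not reproduced here.

```python
import math

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def best_threshold(values, labels):
    """Exhaustive entropy-minimizing binary split on one continuous attribute;
    the number of candidate tests grows linearly with the number of examples."""
    pairs = sorted(zip(values, labels))
    left = {c: 0 for c in set(labels)}
    right = {c: labels.count(c) for c in set(labels)}
    n, best = len(pairs), (float("inf"), None)
    for i in range(n - 1):
        left[pairs[i][1]] += 1
        right[pairs[i][1]] -= 1
        if pairs[i][0] == pairs[i + 1][0]:
            continue                      # no cut between equal values
        w = (i + 1) / n
        score = w * entropy(left.values()) + (1 - w) * entropy(right.values())
        if score < best[0]:
            best = (score, (pairs[i][0] + pairs[i + 1][0]) / 2)
    return best

print(best_threshold([1.0, 2.0, 3.0, 4.0], ["a", "a", "b", "b"]))  # (0.0, 2.5)
```
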
Integrating models of knowledge and Machine Learning

We propose a framework allowing a real integration of Machine Learning and Knowledge Acquisition. This paper shows how the input of a Machine Learning system can be mapped to the model of expertise as it is used in the KADS methodology. The notion of learning bias plays a central role: we shall see that parts of it can be identified with what the KADS community calls the inference and task models. Through this conceptual mapping, we give a semantics to most of the inputs of Machine Learning programs in terms of knowledge acquisition models. The ENIGME system, which implements this work, is presented.

Jean-Gabriel Ganascia, Jérôme Thomas, Philippe Laublet
Exploiting context when learning to classify

This paper addresses the problem of classifying observations when features are context-sensitive, specifically when the testing set involves a context that is different from the training set. The paper begins with a precise definition of the problem, then general strategies are presented for enhancing the performance of classification algorithms on this type of problem. These strategies are tested on two domains. The first domain is the diagnosis of gas turbine engines. The problem is to diagnose a faulty engine in one context, such as warm weather, when the fault has previously been seen only in another context, such as cold weather. The second domain is speech recognition. The problem is to recognize words spoken by a new speaker, not represented in the training set. For both domains, exploiting context results in substantially more accurate classification.

Peter D. Turney
IDDD: An inductive, domain dependent decision algorithm

Decision tree induction, as supported by id3, is a well-known approach to heuristic classification. In this paper we introduce mother-child relationships to model dependencies between the attributes used to represent training examples. Such relationships are implemented via iddd, which extends the original id3 algorithm. The application of iddd is demonstrated via a series of concept acquisition experiments using a ‘real-world’ medical domain. Results demonstrate that the application of iddd contributes to the acquisition of more domain-relevant knowledge as compared to knowledge induced by id3 itself.

L. Gaga, V. Moustakis, G. Charissis, S. Orphanoudakis
An application of machine learning in the domain of loan analysis

Deciding whether or not to give financial support to an industrial project is a very common and yet very complex task. Financial institutions need much expertise to deal with the large amount of information that has to be considered in this process. An expert system based approach seems to be an interesting solution to the problems raised by this type of application. Knowledge acquisition is, however, a very time-consuming task. We have used APT, a multistrategy learning system, as a knowledge elicitation tool in the domain of loan decisions. We describe the process of building and refining a knowledge base and compare the results of our approach to a conventional expert system.

José Ferreira, Joaquim Correia, Thomas Jamet, Ernesto Costa
Extraction of knowledge from data using constrained neural networks

This paper deals with two complementary problems: the problem of extracting knowledge from neural networks and the problem of inserting knowledge into neural networks. Our approach to the extraction of knowledge is essentially constraint-based. Local constraints are imposed on the neural network's weights and activities to make neural network units work as logical operators. We have modified two well-known learning algorithms, namely simulated annealing and backpropagation, with respect to the imposed constraints. In the case of a non-empty domain theory, the knowledge insertion technique is used to impose global constraints that determine the neural network's topology and initialization according to a priori knowledge about the problem under study. The knowledge to be inserted can be expressed as a set of propositional rules. We report simulation results obtained by running our algorithms to extract Boolean formulae.

Raqui Kane, Irina Tchoumatchenko, Maurice Milgram
Integrated learning architectures

Research in systems where learning is integrated with other components such as problem solving, vision, or natural language is becoming an important topic for Machine Learning. Situations where learning methods are embedded or integrated into broader systems offer new theoretical challenges to ML and enlarge the potential range of ML applications. In this position paper we propose the research topic of integrated learning architectures as an initial discussion of the role of learning in intelligent systems. We review the current state of the art and characterise several dimensions along which integrated learning architectures may vary. This paper has been prepared as a position paper with the purpose of providing an initial common ground for discussion in the ECML-93 Workshop on Integrated Learning Architectures. The paper has been edited by E. Plaza on the basis of the individual contributions of the authors.

E. Plaza, A. Aamodt, A. Ram, W. van de Velde, M. van Someren
An overview of evolutionary computation

Evolutionary computation uses computational models of evolutionary processes as key elements in the design and implementation of computer-based problem solving systems. In this paper we provide an overview of evolutionary computation and describe several evolutionary algorithms that are currently of interest. Important similarities and differences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research.

William M. Spears, Kenneth A. De Jong, Thomas Bäck, David B. Fogel, Hugo de Garis
ML techniques and text analysis

In this paper text analysis is presented as a special subdiscipline of automated language learning, which in itself is a subdiscipline of machine learning. A formal classification scheme for analysis of language learning algorithms in terms of abstract learners and speaker/authors is introduced. The inductive inference approach of Gold and successors is rejected as being of little practical value. The perspectives of this newly emerging field are discussed in the light of a number of exemplifying research projects.

Pieter Adriaans
Backmatter
Metadata
Title
Machine Learning: ECML-93
Edited by
Pavel B. Brazdil
Copyright year
1993
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-47597-2
Print ISBN
978-3-540-56602-1
DOI
https://doi.org/10.1007/3-540-56602-3