
2005 | Book

Foundations of Learning Classifier Systems


About this book

This volume brings together recent theoretical work in Learning Classifier Systems (LCS), a Machine Learning technique combining Genetic Algorithms and Reinforcement Learning. It includes self-contained background chapters on related fields (reinforcement learning and evolutionary computation) tailored for a classifier systems audience and written by acknowledged authorities in their areas, as well as a historically significant original work by John Holland.

Table of Contents

Frontmatter
Foundations of Learning Classifier Systems: An Introduction
Abstract
[Learning] Classifier systems are a kind of rule-based system with general mechanisms for processing rules in parallel, for adaptive generation of new rules, and for testing the effectiveness of existing rules. These mechanisms make possible performance and learning without the “brittleness” characteristic of most expert systems in AI.
Larry Bull, Tim Kovacs
Population Dynamics of Genetic Algorithms
Abstract
The theory of evolutionary algorithms has developed significantly in the last few years. A variety of techniques and perspectives have been brought to bear on the analysis and understanding of these algorithms. However, it is fair to say that we are still some way from a coherent theory that explains and predicts behaviour, and can give guidance to applied practitioners. Theory has so far developed in a fragmented, piecemeal fashion, with different researchers applying their own perspectives, and using tools with which they are familiar. This is beginning to change, as the research community develops and individual insights become shared. Consequently, the work presented in this chapter is a somewhat biased selection of results. However, I hope that other researchers will appreciate this material, even if they would themselves have concentrated on a different approach. Readers who are interested in a survey of current theory are referred to the books Genetic algorithms: principles and perspectives [15] and Theoretical aspects of evolutionary computation [9].
Jonathan E. Rowe
Approximating Value Functions in Classifier Systems
Abstract
While there has been some attention given recently to the issues of function approximation using learning classifier systems (e.g. [13, 3]), few studies have looked at the quality of the value function approximation computed by a learning classifier system when it solves a reinforcement learning problem [1, 8]. By contrast, considerable attention has been paid to this issue in the reinforcement learning literature [12]. One of the fundamental assumptions underlying algorithms for solving reinforcement learning problems is that states and state-action pairs have well-defined values that can be computed and used to help determine an optimal policy. The quality of those approximations is a critical factor in determining the success of many algorithms in solving reinforcement learning problems.
Lashon B. Booker
Two Simple Learning Classifier Systems
Abstract
Since its introduction Holland’s Learning Classifier System (LCS) [Holland, 1976] has inspired much research into ‘genetics-based’ machine learning [Goldberg, 1989]. Given the complexity of the developed system [Holland, 1986], simplified versions have previously been presented (e.g., [Goldberg, 1989; Wilson, 1994]) to improve both performance and understanding. It has recently been shown that Wilson’s simpler ‘zeroth-level’ system (ZCS) [Wilson, 1994] can perform optimally [Bull & Hurst, 2002] but “it would appear that the interaction between the rate of rule updates and the fitness sharing process is critical” [ibid.]. In this chapter, a simplified version of ZCS is explored, termed a ‘minimal’ classifier system, MCS.
Larry Bull
Computational Complexity of the XCS Classifier System
Abstract
Learning classifier systems (LCSs) are online-generalizing rule-based learning systems that use evolutionary computation techniques to evolve an optimal set of rules, that is, a population of classifiers (1; 2). LCSs tackle both single-step classification problems and multi-step reinforcement learning (RL) problems. Although the LCS proposal dates back over twenty years, there has been hardly any theory regarding convergence, computational effort, problem instances, etc. Successful applications seemed to rely on a “black art” of correct parameter settings, supported by powerful computers, rather than on actual insight.
Martin V. Butz, David E. Goldberg, Pier Luca Lanzi
An Analysis of Continuous-Valued Representations for Learning Classifier Systems
Abstract
Learning Classifier Systems [11] typically use a ternary representation to encode the environmental condition that a classifier matches. However, many real-world problems are not conveniently expressed in terms of a ternary representation and several alternate representations have been suggested to allow Learning Classifier Systems to handle these problems more readily [1, 3, 6, 15].
Christopher Stone, Larry Bull
Reinforcement Learning: A Brief Overview
Abstract
Learning techniques can be usefully grouped by the type of feedback that is available to the learner. A commonly drawn distinction is that between supervised and unsupervised techniques. In supervised learning a teacher gives the learner the correct answers for each input example. The task of the learner is to infer a function which returns the correct answers for these exemplars while generalising well to new data. In unsupervised learning the learner’s task is to capture and summarise regularities present in the input examples. Reinforcement learning (RL) problems fall somewhere between these two by giving not the correct response, but an indication of how good a response is. The learner’s task in this framework is to learn to produce responses that maximise goodness.
Jeremy Wyatt
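The feedback distinction drawn in this abstract can be illustrated with a minimal tabular Q-learning sketch (an illustrative example under common RL conventions, not code from the chapter): the learner is never told the correct action, only a scalar reward, and improves its value estimates from that signal alone. The corridor task and all parameter values below are assumptions chosen for illustration.

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    """step(s, a) -> (next_state, reward, done) is the only feedback channel:
    a scalar reward, never the correct action."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if random.random() < epsilon:
                # occasional exploratory action
                a = random.randrange(n_actions)
            else:
                # greedy action, breaking ties randomly
                best = max(Q[s])
                a = random.choice([x for x in range(n_actions) if Q[s][x] == best])
            s2, r, done = step(s, a)
            # the reward signal drives the update; no target action is given
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy corridor: states 0..4; action 1 moves right, action 0 moves left;
# reward 1 only on reaching state 4 (terminal).
def corridor(s, a):
    s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4
```

After training, the greedy policy derived from Q moves right in every state, even though no example ever stated that “right” was the correct answer.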
A Mathematical Framework for Studying Learning in Classifier Systems
Abstract
Massively parallel, rule-based systems offer both a practical and a theoretical tool for understanding systems that act usefully in complex environments [see, for example, refs 1-4]. However, these systems pose a number of problems of a high order of difficulty - problems that can be broadly characterized as problems in nonlinear dynamics. The difficulties stem from the fact that the systems are designed to act in environments with complex transition functions - environments that, in all circumstances of interest, are far from equilibrium. Interactions with the environment thus face the systems with perpetual novelty, and the usual simplifications involving fixed points, limit cycles, etc., just do not apply.
John H. Holland
Rule Fitness and Pathology in Learning Classifier Systems
Abstract
When applied to reinforcement learning, Learning Classifier Systems (LCS) [5] evolve sets of rules in order to maximise the return they receive from their task environment. They employ a genetic algorithm to generate rules, and to do so must evaluate the fitness of existing rules. In order for the Genetic Algorithm (GA) [4] to produce rules which are better adapted to the task, rule fitness needs somehow to be connected to the rewards received by the system – a credit assignment problem. Precisely how to relate LCS performance to rule fitness has been the subject of much research, and is of great significance because adaptation of rules and LCS alike depends on it.
Tim Kovacs
Learning Classifier Systems: A Reinforcement Learning Perspective
Abstract
Reinforcement learning is defined as the problem of an agent that learns to perform a certain task through trial and error interactions with an unknown environment [27]. Most of the research in reinforcement learning focuses on algorithms that are inspired, in one way or another, by methods of Dynamic Programming (e.g., Watkins’ Q-learning [29]). These algorithms have a strong theoretical framework but assume a tabular representation of the value function; thus, their applicability is limited to problems involving few input states and few actions. Alternatively, these methods can be extended for large applications by using function approximators (e.g., neural networks) to represent the value function [27]. In these cases, the general theoretical framework remains but convergence theorems no longer apply.
Pier Luca Lanzi
Learning Classifier System with Convergence and Generalization
Abstract
Learning Classifier Systems (LCSs) are rule-based systems whose rules are named classifiers. The original LCS was introduced by Holland [1, 2], and was intended to be a framework to study learning in condition-action rules. It included the distinctive features of a generalization mechanism in rule conditions and a rule discovery mechanism using genetic algorithms (GAs) [3]. Later, this original LCS was revised to its “standard form” [4], which produced many variants [5–8].
Atsushi Wada, Keiki Takadama, Katsunori Shimohara, Osamu Katai
On the Classification of Maze Problems
Abstract
A maze is a grid-like two-dimensional area of any size, usually rectangular. A maze consists of cells. A cell is an elementary maze item, a formally bounded space, interpreted as a single site. The maze may contain different obstacles in any quantity. Some may be significant for learning purposes, like virtual food. The agent is randomly placed in the maze on an empty cell. The agent is allowed to move in all directions, but only through empty space. The task is to learn a policy that reaches the food as quickly as possible from any cell. Once the food is reached, the agent is reset to a random position and the task is repeated.
Anthony J. Bagnall, Zhanna V. Zatuchna
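The maze environment described in this abstract can be sketched as a small Python class (an illustrative implementation, not code from the chapter; the grid symbols and the eight-way move set are assumptions chosen for illustration):

```python
import random

# Eight compass moves, since the agent may move "in all directions" on a grid.
MOVES = {'N': (-1, 0), 'S': (1, 0), 'E': (0, 1), 'W': (0, -1),
         'NE': (-1, 1), 'NW': (-1, -1), 'SE': (1, 1), 'SW': (1, -1)}

class Maze:
    """Rectangular grid of cells: '.' empty, '#' obstacle, 'F' food."""

    def __init__(self, grid):
        self.grid = [list(row) for row in grid]
        self.empty = [(r, c) for r, row in enumerate(self.grid)
                      for c, cell in enumerate(row) if cell == '.']
        self.reset()

    def reset(self):
        # the agent starts on a random empty cell
        self.pos = random.choice(self.empty)
        return self.pos

    def step(self, action):
        dr, dc = MOVES[action]
        r, c = self.pos[0] + dr, self.pos[1] + dc
        in_bounds = 0 <= r < len(self.grid) and 0 <= c < len(self.grid[0])
        if in_bounds and self.grid[r][c] != '#':   # move only through empty space
            self.pos = (r, c)
            if self.grid[r][c] == 'F':             # food reached: reward, then reset
                self.reset()
                return self.pos, 1.0, True
        return self.pos, 0.0, False

maze = Maze(["....#",
             ".##.F",
             "....."])
```

Each episode thus ends when the food cell is entered, at which point the agent is dropped onto a fresh random empty cell, matching the reset-and-repeat cycle described above.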
What Makes a Problem Hard for XCS?
Abstract
Two basic questions to ask about any learning system are: To what kinds of problems is it well suited? To what kinds of problems is it poorly suited? Despite two decades of work, Learning Classifier Systems (LCS) researchers have had relatively little to say on the subject. Although this may in part be due to the wide range of systems and problems the LCS paradigm encompasses, it is certainly also a reflection of a deficiency in LCS theory.
Tim Kovacs, Manfred Kerber
Metadata
Title
Foundations of Learning Classifier Systems
Edited by
Larry Bull
Tim Kovacs
Copyright Year
2005
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-32396-9
Print ISBN
978-3-540-25073-9
DOI
https://doi.org/10.1007/b100387
