
2012 | Book

Autonomous Search

Edited by: Youssef Hamadi, Eric Monfroy, Frédéric Saubion

Publisher: Springer Berlin Heidelberg


About this book

Decades of innovation in combinatorial problem solving have produced better and more complex algorithms. These new methods are better because they can solve larger problems and address new application domains. They are also more complex, which means that they are hard to reproduce and often harder to fine-tune to the peculiarities of a given problem. This last point has created a paradox where efficient tools are out of reach of practitioners.

Autonomous search (AS) is a new research field defined precisely to address this challenge. Its major strength and originality lie in the fact that problem solvers can now perform self-improvement operations based on analysis of the performance of the solving process, including short-term reactive reconfiguration and long-term improvement through self-analysis of performance, offline tuning and online control, and adaptive and supervised control. Autonomous search "crosses the chasm" and provides engineers and practitioners with systems that are able to autonomously self-tune their performance while effectively solving problems.

This is the first book dedicated to this topic, and it can be used as a reference for researchers, engineers, and postgraduates in the areas of constraint programming, machine learning, evolutionary computing, and feedback control theory. After the editors' introduction to autonomous search, the chapters are focused on tuning algorithm parameters, autonomous complete (tree-based) constraint solvers, autonomous control in metaheuristics and heuristics, and future autonomous solving paradigms.

Table of Contents

Frontmatter
Chapter 1. An Introduction to Autonomous Search
Abstract
An Autonomous Search system should have the ability to advantageously modify its internal components when exposed to changing external forces and opportunities. It corresponds to a particular case of adaptive systems, with the objective of improving its problem-solving performance by adapting its search strategy to the problem at hand. Internal components correspond to the various algorithms involved in the search process: heuristics, inference mechanisms, etc. External forces correspond to the evolving information collected during this search process: search landscape analysis (quality, diversity, entropy, etc.), external knowledge (prediction models, rules, etc.), and so on.
In 2007, we organized the first workshop on Autonomous Search in Providence, RI, USA, to present relevant work aimed at building more intelligent solvers for the constraint programming community. We then attempted to characterize the concept of Autonomous Search more precisely, starting from the related work described earlier. The purpose of this book is to provide a clear overview of recent advances in autonomous tools for optimization and constraint satisfaction problems. To be as comprehensive as possible while keeping the focus on constrained problem solving, the book includes ten chapters that cover solving techniques from metaheuristics to tree-based search and that illustrate how these techniques can benefit from intelligent tools that improve their efficiency and their adaptability to the problems at hand.
Youssef Hamadi, Eric Monfroy, Frédéric Saubion

Off-line Configuration

Frontmatter
Chapter 2. Evolutionary Algorithm Parameters and Methods to Tune Them
Abstract
In this chapter we discuss the notion of Evolutionary Algorithm (EA) parameters and propose a distinction between EAs and EA instances, based on the type of parameters used to specify their details. Furthermore, we consider the most important aspects of the parameter tuning problem and give an overview of existing parameter tuning methods. Finally, we elaborate on the methodological issues involved and provide recommendations for further development.
A. E. Eiben, S. K. Smit
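The distinction drawn in this chapter, between the qualitative parameters that define an EA and the numeric parameters that define an EA instance, can be pictured with a small sketch. The snippet below is purely illustrative (parameter names, ranges and the utility stub are invented) and uses plain random search as the tuner.

import random

# Qualitative parameters pick the algorithm variant ("the EA") ...
QUALITATIVE = {"crossover": "uniform", "selection": "tournament"}
# ... numeric parameters pick an instance of it ("an EA instance").
NUMERIC_RANGES = {"pop_size": (10, 200), "mutation_rate": (0.001, 0.5)}

def sample_instance():
    return {name: random.uniform(lo, hi) for name, (lo, hi) in NUMERIC_RANGES.items()}

def utility(instance, n_runs=5):
    # Placeholder: in practice, run the EA n_runs times on benchmark problems
    # and aggregate solution quality and speed; here, a fake noisy score.
    return -abs(instance["mutation_rate"] - 0.05) + random.gauss(0.0, 0.01)

def random_search_tuner(budget=100):
    # The simplest possible tuner: sample instances at random, keep the best.
    best, best_score = None, float("-inf")
    for _ in range(budget):
        candidate = sample_instance()
        score = utility(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

print(QUALITATIVE, random_search_tuner())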
Chapter 3. Automated Algorithm Configuration and Parameter Tuning
Abstract
The use of automated configuration and parameter tuning techniques plays an increasingly important role in the design, evaluation and application of high-performance algorithms for difficult computational problems. This chapter provides an introduction to these techniques and gives an overview of three families of procedures for automatically optimising the performance of parameterised algorithms. Racing procedures iteratively evaluate parameter settings on problem instances from a given set and use statistical hypothesis tests to eliminate candidate configurations that are significantly outperformed by other configurations. ParamILS uses a powerful stochastic local search method to search within potentially vast spaces of candidate configurations of a given algorithm. And finally, sequential model-based optimisation (SMBO) methods build a response surface model that relates parameter settings to performance, and use this model to iteratively identify promising settings. We also briefly survey other algorithm configuration and parameter tuning procedures, as well as related approaches, such as instance-based algorithm selection and configuration.
Holger H. Hoos
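As a rough illustration of the racing idea described above, the sketch below evaluates the surviving configurations instance by instance and drops any configuration that a paired Wilcoxon test judges significantly worse than the incumbent. It is a simplified stand-in, not F-Race or ParamILS: the target_algorithm stub, the alpha parameter and the thresholds are invented, and a real racing procedure would typically apply a Friedman test over all configurations first.

import random
from scipy.stats import wilcoxon

def target_algorithm(config, instance):
    # Placeholder for one run of the parameterised algorithm on one instance;
    # returns a cost (lower is better).
    return abs(config["alpha"] - 0.3) + 0.1 * random.random()

def race(configs, instances, significance=0.05, min_instances=5):
    costs = {id(c): [] for c in configs}
    survivors = list(configs)
    for i, inst in enumerate(instances):
        for c in survivors:
            costs[id(c)].append(target_algorithm(c, inst))
        if i + 1 < min_instances:
            continue                        # wait for enough paired observations
        best = min(survivors, key=lambda c: sum(costs[id(c)]))
        kept = [best]
        for c in survivors:
            if c is best:
                continue
            _, p = wilcoxon(costs[id(c)], costs[id(best)])
            if p >= significance:           # not significantly worse: keep racing
                kept.append(c)
        survivors = kept
    return min(survivors, key=lambda c: sum(costs[id(c)]))

configs = [{"alpha": a} for a in (0.1, 0.3, 0.5, 0.9)]
print(race(configs, instances=list(range(20))))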
Chapter 4. Case-Based Reasoning for Autonomous Constraint Solving
Abstract
Humans often reason from experience. Faced with a new problem, we recall our experiences in solving similar problems in the past, and we modify the past solutions to fit the circumstances of the new problem.
Within Artificial Intelligence (AI), the idea that we can solve problems by recalling and reusing the solutions to similar past problems, rather than reasoning ‘from scratch’, underlies Case-Based Reasoning (CBR), which has been the target of active research and development since the late 1980s. CBR is a problem solving and learning strategy: reasoning is remembered (this is learning); and reasoning is remembering (this is problem-solving). CBR can be useful in domains where problem types recur, and where similar problems have similar solutions. Its wide range of application areas — from classification and numeric prediction to configuration, design and planning — and domains — from medicine to law to recommender systems — is testimony to its generality. In this chapter, we review the application of CBR to search and especially to constraint solving. We present CPHYDRA, a recent successful application of CBR to autonomous constraint solving. In CPHYDRA, CBR is used to inform a portfolio approach to constraint problem solving.
Derek Bridge, Eoin O’Mahony, Barry O’Sullivan
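A much-simplified, CBR-flavoured portfolio sketch in the spirit of CPHYDRA (but not the actual system) is shown below: it retrieves the k most similar past cases by Euclidean distance over instance features and picks the portfolio solver that solved most of those neighbours within the time limit. The feature vectors, solver names and case base are invented.

import math

case_base = [
    # (instance feature vector, {solver name: runtime in seconds, or None if unsolved})
    ((120, 0.3), {"solverA": 4.1, "solverB": None}),
    ((110, 0.4), {"solverA": 3.2, "solverB": 25.0}),
    ((900, 0.9), {"solverA": None, "solverB": 8.5}),
    ((850, 0.8), {"solverA": 40.0, "solverB": 6.0}),
]

def distance(f1, f2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def choose_solver(new_features, k=3, time_limit=30.0):
    neighbours = sorted(case_base, key=lambda case: distance(case[0], new_features))[:k]
    solvers = {s for _, runs in neighbours for s in runs}
    def solved_count(s):
        return sum(1 for _, runs in neighbours
                   if runs.get(s) is not None and runs[s] <= time_limit)
    return max(solvers, key=solved_count)

print(choose_solver((880, 0.85)))    # picks solverB on this toy case base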
Chapter 5. Learning a Mixture of Search Heuristics
Abstract
This chapter describes programs that solve constraint satisfaction problems with multiple heuristics. It demonstrates the varied efficacy of individual constraint-solving metrics and the potential power available from a mixture of heuristics that references them. It describes a weighted-mixture decision process, and explains how one autonomous learner constructs its own labeled training examples from its search experience, and then learns a weighted mixture from them. Four new techniques are introduced to manage a large body of conflicting heuristics, and illustrated with empirical results.
Susan L. Epstein, Smiljana Petrovic
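The following sketch shows the shape of a weighted-mixture decision process with two hypothetical variable-ordering heuristics; it is not the authors' learner, and in their work the weights are learned from labelled training examples drawn from search experience rather than fixed by hand.

def min_domain(cands):
    # Prefer variables with small remaining domains.
    return {v: -dom for v, (dom, deg) in cands.items()}

def max_degree(cands):
    # Prefer variables constrained with many other variables.
    return {v: deg for v, (dom, deg) in cands.items()}

HEURISTICS = [min_domain, max_degree]
WEIGHTS = [0.7, 0.3]            # illustrative; learned from experience in the chapter

def weighted_mixture_choice(cands):
    scores = {v: 0.0 for v in cands}
    for w, h in zip(WEIGHTS, HEURISTICS):
        for v, s in h(cands).items():
            scores[v] += w * s
    return max(scores, key=scores.get)

# Candidate variables: name -> (remaining domain size, constraint degree).
candidates = {"x": (3, 5), "y": (2, 1), "z": (4, 6)}
print(weighted_mixture_choice(candidates))    # picks "x" with these weights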

On-line Control

Frontmatter
Chapter 6. An Investigation of Reinforcement Learning for Reactive Search Optimization
Abstract
Reactive Search Optimization advocates the adoption of learning mechanisms as an integral part of a heuristic optimization scheme. This work studies reinforcement learning methods for the online tuning of parameters in stochastic local search algorithms. In particular, the reactive tuning is obtained by learning a (near-)optimal policy in a Markov decision process where the states summarize relevant information about the recent history of the search. The learning process is performed by the Least Squares Policy Iteration (LSPI) method. The proposed framework is applied for tuning the prohibition value in the Reactive Tabu Search, the noise parameter in the Adaptive Walksat, and the smoothing probability in the Reactive Scaling and Probabilistic Smoothing (RSAPS) algorithm. The novel approach is experimentally compared with the original ad hoc reactive schemes.
Roberto Battiti, Paolo Campigotto
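The chapter learns its policy with LSPI over a Markov decision process; the toy sketch below substitutes plain tabular Q-learning simply to show the shape of online parameter control, where the state summarises recent search history, the action nudges the prohibition (tabu tenure) parameter, and the reward is the observed improvement. All names and constants are illustrative.

import random
from collections import defaultdict

ACTIONS = (-5, 0, +5)           # decrease, keep, or increase the tabu tenure
Q = defaultdict(float)           # Q[(state, action)] values
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def choose_action(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Per control step inside the local search loop, one would do roughly:
#   state  = summarise(recent_history)      # e.g. a discretised repetition rate
#   action = choose_action(state)
#   tenure = max(1, tenure + action)
#   ... run the tabu search for a while, observe the reward ...
#   update(state, action, reward, summarise(recent_history))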
Chapter 7. Adaptive Operator Selection and Management in Evolutionary Algorithms
Abstract
One of the settings that most affect the performance of Evolutionary Algorithms is the selection of the variation operators that are efficient for solving the problem at hand. The control of these operators can be handled in an autonomous way, while solving the problem, at two different levels: at the structural level, when deciding which operators should be part of the algorithmic framework, referred to as Adaptive Operator Management (AOM); and at the behavioral level, when selecting which of the available operators should be applied at a given time instant, referred to as Adaptive Operator Selection (AOS). Both controllers guide their choices based on common knowledge about the recent performance of each operator. In this chapter, we present methods for these two complementary aspects of operator control, the ExCoDyMAB AOS and the Blacksmith AOM, providing case studies that highlight the major issues to be considered in the design of more autonomous Evolutionary Algorithms.
Jorge Maturana, Álvaro Fialho, Frédéric Saubion, Marc Schoenauer, Frédéric Lardeux, Michèle Sebag
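ExCoDyMAB combines several credit-assignment and selection ingredients; the sketch below shows only a generic UCB-style bandit over variation operators, as one minimal way to realise Adaptive Operator Selection. Operator names and the credit definition are invented.

import math

operators = ["one_point_crossover", "uniform_mutation", "block_swap"]
counts = {op: 0 for op in operators}
rewards = {op: 0.0 for op in operators}

def select_operator(c=1.4):
    total = sum(counts.values()) + 1
    def ucb(op):
        if counts[op] == 0:
            return float("inf")               # try every operator at least once
        mean = rewards[op] / counts[op]
        return mean + c * math.sqrt(math.log(total) / counts[op])
    return max(operators, key=ucb)

def credit(op, parent_fitness, child_fitness):
    # Credit = positive fitness improvement produced by the operator (minimisation).
    counts[op] += 1
    rewards[op] += max(0.0, parent_fitness - child_fitness)

# In the EA's main loop: op = select_operator(); apply it to produce offspring;
# then call credit(op, parent_fitness, child_fitness) with the observed fitnesses.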
Chapter 8. Parameter Adaptation in Ant Colony Optimization
Abstract
This chapter reviews the approaches that have been studied for the online adaptation of the parameters of ant colony optimization (ACO) algorithms, that is, the variation of parameter settings while solving an instance of a problem. We classify these approaches according to the main classes of online parameter-adaptation techniques. One conclusion of this review is that the available approaches do not exploit an in-depth understanding of the effect of individual parameters on the behavior of ACO algorithms. Therefore, this chapter also presents results of an empirical study of the solution quality over computation time for Ant Colony System and MAX-MIN Ant System, two well-known ACO algorithms. The first part of this study provides insights into the behavior of the algorithms as a function of fixed parameter settings. One conclusion is that the best fixed parameter settings of MAX-MIN Ant System depend strongly on the available computation time. The second part of the study uses these insights to propose simple, pre-scheduled parameter variations. Our experimental results show that such pre-scheduled parameter variations can dramatically improve the anytime performance of MAX-MIN Ant System.
Thomas Stützle, Manuel López-Ibáñez, Paola Pellegrini, Michael Maur, Marco Montes de Oca, Mauro Birattari, Marco Dorigo
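A pre-scheduled parameter variation can be as simple as making a parameter a fixed function of the elapsed iterations instead of a constant, as in the sketch below; the parameter chosen here (the heuristic weight beta) and the breakpoints are illustrative, not the settings studied in the chapter.

def scheduled_beta(iteration, schedule=((0, 5.0), (100, 2.0), (500, 1.0))):
    """Return the beta value in force at the given iteration."""
    value = schedule[0][1]
    for start, beta in schedule:
        if iteration >= start:
            value = beta
    return value

# Inside the ACO main loop, scheduled_beta(t) would be used at iteration t when
# computing the probabilistic choice of the next solution component.
for t in (0, 50, 100, 499, 500):
    print(t, scheduled_beta(t))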

New Directions and Applications

Frontmatter
Chapter 9. Continuous Search in Constraint Programming
Abstract
This work presents the concept of Continuous Search (CS), whose objective is to allow any user to eventually obtain top performance from their constraint solver on their own problems. Continuous Search comes in two modes: the functioning mode solves the user's problem instances using the current heuristics model; the exploration mode reuses these instances to train and improve the heuristics model through Machine Learning during the computer's idle time. Contrasting with previous approaches, Continuous Search thus does not require that the representative instances needed to train a good heuristics model be available beforehand. It achieves lifelong learning, gradually becoming an expert on the user's problem instance distribution. Experimental validation suggests that Continuous Search can design efficient mixed strategies after considering a moderate number of problem instances.
Alejandro Arbelaez, Youssef Hamadi, Michèle Sebag
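A skeleton of the two modes is sketched below; every function body is a stub standing in for the CP solver, the replay step and the learner, so only the control flow reflects the idea, not the authors' implementation.

solved_instances = []
model = []                                    # the current heuristics model (stub)

def solve_with_model(instance, model):
    return "solution(%s)" % instance          # stub for the CP solver

def replay_with_alternatives(instance, model):
    return [(instance, "best_heuristic")]     # stub: labelled training examples

def functioning_mode(instance):
    solution = solve_with_model(instance, model)
    solved_instances.append(instance)         # keep the instance for exploration mode
    return solution

def exploration_mode():
    # Runs during idle time: replay stored instances under alternative heuristic
    # choices and add the resulting labelled examples to the model's training set.
    global model
    for instance in solved_instances:
        model = model + replay_with_alternatives(instance, model)

print(functioning_mode("instance-1"))
exploration_mode()
print(len(model), "training examples gathered")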
Chapter 10. Control-Based Clause Sharing in Parallel SAT Solving
Abstract
Conflict-driven clause learning, one of the most important components of modern SAT solvers, is also recognized as very important in parallel SAT solving. Indeed, it allows clause sharing between multiple processing units working on related (sub-)problems. However, without limitation, sharing clauses might lead to an exponential blow-up in communication or to the sharing of irrelevant clauses. This chapter proposes new policies to dynamically adjust the size of clauses shared between any pair of processing units. The first approach controls the overall number of exchanged clauses, whereas the others additionally exploit the relevance of the shared clauses. Extensive experiments show significant improvements over a state-of-the-art parallel SAT solver.
Youssef Hamadi, Said Jabbour, Lakhdar Sais
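One simple way to realise the dynamic adjustment described above is an AIMD-style controller (additive increase, multiplicative decrease) on the size limit of clauses shared between two solver threads, driven by the number of clauses recently received relative to a throughput target. The sketch below is illustrative; the target and step values are not the chapter's settings.

def adjust_size_limit(size_limit, received_clauses, target=500,
                      increase=1, decrease_factor=0.8, max_limit=32):
    # Too few clauses received: relax the filter and accept larger clauses too.
    if received_clauses < target:
        return min(max_limit, size_limit + increase)
    # Exchange above target: tighten the filter multiplicatively.
    return max(1, int(size_limit * decrease_factor))

limit = 8
for received in (120, 80, 900, 700, 300):
    limit = adjust_size_limit(limit, received)
    print(limit)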
Chapter 11. Learning Feature-Based Heuristic Functions
Abstract
Planning is the process of creating a sequence of actions that achieve some desired goals. Automated planning arguably plays a key role in both developing intelligent systems and solving many practical industrial problems. Typical planning problems are characterized by a structured state space, a set of possible actions, a description of the effects of each action, and an objective measure. In this chapter, we consider planning as an optimization problem, seeking plans that minimize the cost of reaching the goals or some other performance measure.
Marek Petrik, Shlomo Zilberstein
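The object the chapter studies can be illustrated by a heuristic expressed as a weighted combination of state features, where the weights are learned rather than hand-set; in the sketch below the features and weights are purely invented.

def features(state):
    # Hypothetical features of a planning state, e.g. number of unachieved
    # goals, number of misplaced objects, plus a constant term.
    return [state["unachieved_goals"], state["misplaced_objects"], 1.0]

weights = [2.0, 0.5, 0.0]        # would be produced by a learning procedure

def heuristic(state):
    # Estimated cost-to-go: a weighted sum of the state features.
    return sum(w * f for w, f in zip(weights, features(state)))

print(heuristic({"unachieved_goals": 3, "misplaced_objects": 4}))   # 8.0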
Metadata
Title
Autonomous Search
Edited by
Youssef Hamadi
Eric Monfroy
Frédéric Saubion
Copyright year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-21434-9
Print ISBN
978-3-642-21433-2
DOI
https://doi.org/10.1007/978-3-642-21434-9