
2012 | Book

Learning and Intelligent Optimization

6th International Conference, LION 6, Paris, France, January 16-20, 2012, Revised Selected Papers

Edited by: Youssef Hamadi, Marc Schoenauer

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the thoroughly refereed post-conference proceedings of the 6th International Conference on Learning and Intelligent Optimization, LION 6, held in Paris, France, in January 2012. The 23 long and 30 short revised papers were carefully reviewed and selected from a total of 99 submissions. The papers focus on the intersections and uncharted territories between machine learning, artificial intelligence, mathematical programming and algorithms for hard optimization problems. In addition to the paper contributions, the conference also included 3 invited speakers, who presented forefront research results and frontiers, and 3 tutorial talks, which were crucial in bringing together the different components of the LION community.

Table of Contents

Frontmatter

Long Papers

Iterative-Deepening Search with On-Line Tree Size Prediction

The memory requirements of best-first graph search algorithms such as A* often prevent them from solving large problems. The best-known approach for coping with this issue is iterative deepening, which performs a series of bounded depth-first searches. Unfortunately, iterative deepening only performs well when successive cost bounds visit a geometrically increasing number of nodes. While it happens to work acceptably for the classic sliding tile puzzle, IDA* fails for many other domains. In this paper, we present an algorithm that adaptively chooses appropriate cost bounds on-line during search. During each iteration, it learns a model of the search tree that helps it to predict the bound to use next. Our search tree model has three main benefits over previous approaches: 1) it will work in domains with real-valued heuristic estimates, 2) it can be trained on-line, and 3) it is able to make predictions with only a small number of training examples. We demonstrate the power of our improved model by using it to control an iterative-deepening A* search on-line. While our technique has more overhead than previous methods for controlling iterative-deepening A*, it can give more robust performance by using its experience to accurately double the amount of search effort between iterations.

Ethan Burns, Wheeler Ruml
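
For readers unfamiliar with the baseline the paper improves on, the sketch below shows plain IDA*: a loop of cost-bounded depth-first searches where the next bound is the smallest f-value pruned in the previous iteration. Burns and Ruml replace that bound-setting rule with an on-line learned tree-size model; the `h`, `successors`, and `is_goal` callbacks here are hypothetical hooks, and the learned model is not reproduced.

```python
def ida_star(start, h, successors, is_goal):
    bound = h(start)                        # initial bound: root heuristic value
    while True:
        found, next_bound = _bounded_dfs(start, 0.0, bound, h, successors, is_goal)
        if found is not None:
            return found                    # cost of a goal found within the bound
        if next_bound == float("inf"):
            return None                     # space exhausted, no solution
        bound = next_bound                  # classic rule: smallest pruned f-value;
                                            # the paper instead predicts the bound
                                            # expected to double the search effort

def _bounded_dfs(node, g, bound, h, successors, is_goal):
    f = g + h(node)
    if f > bound:
        return None, f                      # pruned: report f for the next bound
    if is_goal(node):
        return g, float("inf")
    next_bound = float("inf")
    for child, cost in successors(node):
        found, nb = _bounded_dfs(child, g + cost, bound, h, successors, is_goal)
        if found is not None:
            return found, float("inf")
        next_bound = min(next_bound, nb)
    return None, next_bound
```

With real-valued heuristics the classic rule can visit only a handful of new nodes per iteration, which is exactly the failure mode the on-line model is designed to avoid.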
A Learning Optimization Algorithm in Graph Theory
Versatile Search for Extremal Graphs Using a Learning Algorithm

The main feature of the system AutoGraphiX is to find extremal or near-extremal graphs, i.e., graphs that minimize or maximize an invariant, using a heuristic optimization module based upon Variable Neighborhood Search (VNS). From the graphs so obtained, conjectures are found either automatically or interactively. Most of the features of the system rely on the optimization, which must be efficient, but the variety of problems handled by the system makes the tuning of the optimizer difficult to achieve. We propose a learning algorithm that is trained during the optimization of the problem and provides better results than all the algorithms previously used for that purpose.

Gilles Caporossi, Pierre Hansen
A Math-Heuristic Dantzig-Wolfe Algorithm for the Capacitated Lot Sizing Problem

The multi-item multi-period capacitated lot sizing problem with setups (CLST) is a well-known optimization problem with wide applicability in real-world production planning problems. Based on a recently proposed Dantzig-Wolfe approach, we present a novel math-heuristic algorithm for the CLST. The major contribution of this paper lies in the presentation of an algorithm that exploits exact techniques (Dantzig-Wolfe) in a metaheuristic fashion, in line with the novel trend of math-heuristic algorithms. To the best of the authors' knowledge, it is the first time that such a technique has been employed within a metaheuristic framework, with the aim of tackling challenging instances in short computational time.

Marco Caserta, Stefan Voß
Application of the Nested Rollout Policy Adaptation Algorithm to the Traveling Salesman Problem with Time Windows

In this paper, we are interested in minimizing the travel cost of the traveling salesman problem with time windows. To perform this minimization we use a Nested Rollout Policy Adaptation (NRPA) algorithm. NRPA has multiple levels and maintains the best tour at each level. It consists of learning a rollout policy at each level. We also show how to improve the original algorithm with a modified rollout policy that helps NRPA avoid time-window violations.

Tristan Cazenave, Fabien Teytaud
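
As a point of reference, here is a compact, generic rendering of NRPA in the form introduced by Rosin (2011); the TSPTW specifics (tour encoding, time-window penalties, the paper's modified rollout policy) are abstracted behind the hypothetical `legal_moves` and `score` callbacks.

```python
import math, random

def nrpa(level, policy, legal_moves, score, n_iter=100, alpha=1.0):
    # Recursive nested search: each level runs n_iter searches of the level
    # below and adapts its policy toward the best sequence found so far.
    if level == 0:
        return playout(policy, legal_moves, score)
    best_score, best_seq = -math.inf, []
    for _ in range(n_iter):
        s, seq = nrpa(level - 1, dict(policy), legal_moves, score, n_iter, alpha)
        if s > best_score:
            best_score, best_seq = s, seq
        policy = adapt(policy, best_seq, legal_moves, alpha)
    return best_score, best_seq

def playout(policy, legal_moves, score):
    # Sample one complete solution under the softmax of the current policy.
    seq = []
    while True:
        moves = legal_moves(seq)
        if not moves:
            return score(seq), seq
        w = [math.exp(policy.get(m, 0.0)) for m in moves]
        seq.append(random.choices(moves, weights=w)[0])

def adapt(policy, seq, legal_moves, alpha):
    # Shift probability mass toward the moves of the best sequence so far.
    state = []
    for m in seq:
        moves = legal_moves(state)
        z = sum(math.exp(policy.get(x, 0.0)) for x in moves)
        probs = {x: math.exp(policy.get(x, 0.0)) / z for x in moves}
        for x in moves:
            policy[x] = policy.get(x, 0.0) - alpha * probs[x]
        policy[m] = policy.get(m, 0.0) + alpha
        state.append(m)
    return policy
```

A call like `nrpa(2, {}, legal_moves, score)` runs a level-2 search; higher levels trade computation time for solution quality.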
Parallel Algorithm Configuration

State-of-the-art algorithms for solving hard computational problems often expose many parameters whose settings critically affect empirical performance. Manually exploring the resulting combinatorial space of parameter settings is often tedious and unsatisfactory. Automated approaches for finding good parameter settings are becoming increasingly prominent and have recently led to substantial improvements in the state of the art for solving a variety of computationally challenging problems. However, running such automated algorithm configuration procedures is typically very costly, involving many thousands of invocations of the algorithm to be configured. Here, we study the extent to which parallel computing can come to the rescue. We compare straightforward parallelization by multiple independent runs with a more sophisticated method of parallelizing the model-based configuration procedure SMAC. Empirical results for configuring the MIP solver CPLEX demonstrate that near-optimal speedups can be obtained with up to 16 parallel workers, and that 64 workers can still accomplish challenging configuration tasks that previously took 2 days in 1–2 hours. Overall, we show that our methods make effective use of large-scale parallel resources and thus substantially expand the practical applicability of algorithm configuration methods.

Frank Hutter, Holger H. Hoos, Kevin Leyton-Brown
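
The "multiple independent runs" baseline is easy to picture: launch k configurator runs with different seeds in parallel and keep the best configuration found. The sketch below uses a toy random-search stand-in for the configurator (SMAC itself is not reproduced); only the select-the-best-run pattern is the point.

```python
from multiprocessing import Pool
import random

def run_configurator(seed, budget=1000):
    # Toy stand-in for one sequential configurator run: random search over a
    # single parameter of a synthetic target ("penalized runtime" to minimize).
    rng = random.Random(seed)
    best = (float("inf"), None)
    for _ in range(budget):
        x = rng.uniform(-10, 10)
        best = min(best, ((x - 3.7) ** 2, x))
    return best

def parallel_independent_runs(n_workers):
    with Pool(n_workers) as pool:
        results = pool.map(run_configurator, range(n_workers))
    return min(results)                  # best (score, config) across all runs

if __name__ == "__main__":
    print(parallel_independent_runs(4))
```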
Community Detection in Social and Biological Networks Using Differential Evolution

Community detection in complex networks is an important problem in many scientific fields, from biology to sociology. This paper proposes a new algorithm, Differential Evolution based Community Detection (DECD), which employs the differential evolution (DE) optimization algorithm for detecting communities in complex networks. DE uses network modularity as the fitness function to search for an optimal partition of a network. Based on the standard DE crossover operator, we design a modified binomial crossover to effectively transmit some important information about the community structure in evolution. Moreover, a biased initialization process and a clean-up operation are employed in DECD to improve the quality of individuals in the population. One of the distinct merits of DECD is that, unlike many other community detection algorithms, DECD does not require any prior knowledge about the community structure, which is particularly useful for its application to real-world complex networks where prior knowledge is usually not available. We evaluate DECD on several artificial and real-world social and biological networks. Experimental results show that DECD has very competitive performance compared with other state-of-the-art community detection algorithms.

Guanbo Jia, Zixing Cai, Mirco Musolesi, Yong Wang, Dan A. Tennant, Ralf J. M. Weber, John K. Heath, Shan He
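
For orientation, the loop below is the classic DE/rand/1/bin scheme that DECD builds on, shown on a continuous toy objective; DECD itself evolves discrete community labels with a modified binomial crossover, biased initialization and a clean-up step, using network modularity as the fitness.

```python
import numpy as np

def de(fitness, dim=10, pop_size=30, F=0.5, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1: three distinct individuals other than i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = a + F * (b - c)                 # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # force one gene from mutant
            trial = np.where(cross, mutant, pop[i])  # binomial crossover
            f = fitness(trial)
            if f >= fit[i]:                          # maximize (like modularity)
                pop[i], fit[i] = trial, f
    return pop[np.argmax(fit)], fit.max()

x, f = de(lambda x: -np.sum(x ** 2))                 # toy: maximize -||x||^2
print(np.round(x, 3), f)
```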
A Study on Large Population MOEA Using Adaptive ε-Box Dominance and Neighborhood Recombination for Many-Objective Optimization

Multi-objective evolutionary algorithms are increasingly being investigated to solve many-objective optimization problems. However, most algorithms recently proposed for many-objective optimization cannot find Pareto optimal solutions with good properties on convergence, spread, and distribution. Often, the algorithms favor one property at the expense of the other. In addition, in some applications it takes a very long time to evaluate solutions, which prohibits running the algorithm for a large number of generations. In this work, to obtain good representations of the Pareto optimal set, we investigate a large population MOEA, which employs adaptive ε-box dominance for selection and neighborhood recombination for variation, using a very small number of generations to evolve the population. We study the performance of the algorithm on some functions of the DTLZ family, showing the importance of using larger populations to search on many-objective problems and the effectiveness of employing adaptive ε-box dominance with neighborhood recombination that take into account the characteristics of many-objective landscapes.

Naoya Kowatari, Akira Oyama, Hernán E. Aguirre, Kiyoshi Tanaka
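
The ε-box dominance relation at the core of the selection step can be stated in a few lines: objective vectors are mapped to boxes of side ε, and ordinary Pareto dominance is applied to the box indices, which thins out the Pareto front at resolution ε. A minimal sketch (minimization assumed; the paper's adaptive control of ε is omitted):

```python
import math

def box_index(f, eps):
    # Map an objective vector to its ε-box (minimization assumed).
    return tuple(math.floor(fi / eps) for fi in f)

def dominates(a, b):
    # Standard Pareto dominance for minimization.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_box_dominates(fa, fb, eps):
    # fa ε-box-dominates fb if fa's box dominates fb's box; ties within the
    # same box are usually broken by distance to the box corner (omitted).
    return dominates(box_index(fa, eps), box_index(fb, eps))

print(eps_box_dominates((0.10, 0.42), (0.35, 0.48), eps=0.25))  # True
```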
A Non-adaptive Stochastic Local Search Algorithm for the CHeSC 2011 Competition

In this work, we present our submission for the Cross-domain Heuristic Search Challenge 2011. We implemented a stochastic local search algorithm that consists of several algorithm schemata that have been offline-tuned on four sample problem domains. The schemata are based on all families of low-level heuristics available in the framework used in the competition with the exception of crossover heuristics. Our algorithm goes through an initial phase that filters dominated low-level heuristics, followed by an algorithm schemata selection implemented in a race. The winning schema is run for the remaining computation time. Our algorithm ranked seventh in the competition results. In this paper, we present the results obtained after a more careful tuning, and a different combination of algorithm schemata included in the final algorithm design. This improved version would rank fourth in the competition.

Franco Mascia, Thomas Stützle
Local Search and the Traveling Salesman Problem: A Feature-Based Characterization of Problem Hardness

With this paper we contribute to the understanding of the success of 2-opt based local search algorithms for solving the traveling salesman problem (TSP). Although 2-opt is widely used in practice, it is hard to understand its success from a theoretical perspective. We take a statistical approach and examine the features of TSP instances that make the problem either hard or easy to solve. As a measure of problem difficulty for 2-opt we use the approximation ratio that it achieves on a given instance. Our investigations point out important features that make TSP instances hard or easy to approximate by 2-opt.

Olaf Mersmann, Bernd Bischl, Jakob Bossek, Heike Trautmann, Markus Wagner, Frank Neumann
Evaluating Tree-Decomposition Based Algorithms for Answer Set Programming

A promising approach to tackle intractable problems is given by a combination of decomposition methods with dynamic algorithms. One such decomposition concept is tree decomposition. However, several heuristics for obtaining a tree decomposition exist and, moreover, also the subsequent dynamic algorithm can be laid out differently. In this paper, we provide an experimental evaluation of this combined approach when applied to reasoning problems in propositional answer set programming. More specifically, we analyze the performance of three different heuristics and two different dynamic algorithms, an existing standard version and a recently proposed algorithm based on a more involved data structure, but which provides better theoretical runtime. The results suggest that a suitable combination of the tree decomposition heuristics and the dynamic algorithm has to be chosen carefully. In particular, we observed that the performance of the dynamic algorithm highly depends on certain features (besides treewidth) of the provided tree decomposition. Based on this observation we apply supervised machine learning techniques to automatically select the dynamic algorithm depending on the features of the input tree decomposition.

Michael Morak, Nysret Musliu, Reinhard Pichler, Stefan Rümmele, Stefan Woltran
High-Dimensional Model-Based Optimization Based on Noisy Evaluations of Computer Games

Most publications on surrogate models have focused either on the prediction quality or on the optimization performance. It is still unclear whether the prediction quality is indeed related to the suitability for optimization. Moreover, most of these studies only employ low-dimensional test cases. There are no results for popular surrogate models, such as kriging, for high-dimensional (n > 10) noisy problems. In this paper, we analyze both aspects by comparing different surrogate models on the noisy 22-dimensional car setup optimization problem, based on both prediction quality and optimization performance. In order not to favor specific properties of the model, we run two conceptually different modern optimization methods on the surrogate models, CMA-ES and BOBYQA. It appears that kriging and random forests are very good modeling techniques with respect to both prediction quality and suitability for optimization algorithms.

Mike Preuss, Tobias Wagner, David Ginsbourger
Pilot, Rollout and Monte Carlo Tree Search Methods for Job Shop Scheduling

Greedy heuristics may be attuned by looking ahead for each possible choice, in an approach called the rollout or Pilot method. These methods may be seen as meta-heuristics that can enhance (any) heuristic solution, by repetitively modifying a master solution: similarly to what is done in game tree search, better choices are identified using lookahead, based on solutions obtained by repeatedly using a greedy heuristic. This paper first illustrates how the Pilot method improves upon some simple well-known dispatch heuristics for the job-shop scheduling problem. The Pilot method is then shown to be a special case of the more recent Monte Carlo Tree Search (MCTS) methods: unlike the Pilot method, MCTS methods use random completion of partial solutions to identify promising branches of the tree. The Pilot method and a simple version of MCTS, using the ε-greedy exploration paradigm, are then compared within the same framework, consisting of 300 scheduling problems of varying sizes with a fixed budget of rollouts. Results demonstrate that MCTS reaches better or equal results compared to the Pilot methods in this context.

Thomas Philip Runarsson, Marc Schoenauer, Michèle Sebag
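
The Pilot/rollout principle described above fits in a dozen lines: each candidate decision is scored by completing the partial solution with the greedy base heuristic, and the best-scoring decision is committed. The `choices`, `apply_choice`, `greedy_complete`, and `cost` hooks are hypothetical placeholders for a concrete problem such as job-shop dispatching.

```python
def pilot_method(state, choices, apply_choice, greedy_complete, cost):
    # At each decision point, look ahead with the base heuristic and commit
    # to the choice whose greedy completion achieves the lowest cost.
    while choices(state):
        best_choice, best_cost = None, float("inf")
        for c in choices(state):
            lookahead = greedy_complete(apply_choice(state, c))  # one rollout
            if cost(lookahead) < best_cost:
                best_choice, best_cost = c, cost(lookahead)
        state = apply_choice(state, best_choice)                 # commit
    return state
```

MCTS generalizes this pattern by replacing the single deterministic greedy completion with randomized completions and a tree of statistics over them.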
Minimizing Time When Applying Bootstrap to Contingency Tables Analysis of Genome-Wide Data

Bootstrap resampling is starting to be frequently applied to contingency tables analysis of Genome-Wide SNP data, to cope with the bias in genetic effect estimates, the large number of false positive associations and the instability of the lists of SNPs associated with a disease. The bootstrap procedure, however, increases the computational complexity by a factor B, where B is the number of bootstrap samples.

In this paper, we study the problem of minimizing time when applying bootstrap to contingency tables analysis and propose two levels of optimization of the procedure. The first level of optimization is based on an alternative representation of bootstrap replicates, bootstrap histograms, which is exploited to avoid unnecessary computations for repeated subjects in each bootstrap replicate. The second level of optimization is based on an ad-hoc data structure, the bootstrap tree, exploited for reusing computations on sets of subjects which are in common across more than one bootstrap replicate. The problem of finding the best bootstrap tree given a set of bootstrap replicates is tackled with best improvement local search. Different constructive procedures and local search operators are proposed to solve it.

The two proposed levels of optimization are tested on a real Genome-Wide SNP dataset and both are proven to significantly decrease computation time.

Francesco Sambo, Barbara Di Camillo
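
The first optimization level has a simple computational core: represent a bootstrap replicate not as a resampled list but as a histogram of subject multiplicities, so each distinct subject is processed once and weighted by its count. A minimal sketch of that representation (the bootstrap tree and its local search are not shown):

```python
from collections import Counter
import random

def bootstrap_histogram(n_subjects, rng):
    # One bootstrap replicate: n draws with replacement, stored as
    # subject index -> multiplicity.
    return Counter(rng.randrange(n_subjects) for _ in range(n_subjects))

def weighted_statistic(hist, per_subject_value):
    # e.g. a contingency-table cell count: each distinct subject contributes
    # once, scaled by its multiplicity, instead of once per repetition.
    return sum(count * per_subject_value(s) for s, count in hist.items())

rng = random.Random(0)
hist = bootstrap_histogram(1000, rng)
print(weighted_statistic(hist, lambda s: s % 2))   # toy per-subject value
```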
Quantifying Homogeneity of Instance Sets for Algorithm Configuration

Automated configuration procedures play an increasingly prominent role in realising the performance potential inherent in highly parametric solvers for a wide range of computationally challenging problems. However, these configuration procedures have difficulties when dealing with inhomogeneous instance sets, where the relative difficulty of problem instances varies between configurations of the given parametric algorithm. In the literature, instance set homogeneity has been assessed using a qualitative, visual criterion based on heat maps. Here, we introduce two quantitative measures of homogeneity and empirically demonstrate these to be consistent with the earlier qualitative criterion. We also show that according to our measures, homogeneity increases when partitioning instance sets by means of clustering based on observed runtimes, and that the performance of a prominent automatic algorithm configurator increases on the resulting, more homogeneous subsets.

Marius Schneider, Holger H. Hoos
Automatically Configuring Algorithms for Scaling Performance

Automated algorithm configurators have been shown to be very effective for finding good configurations of high performance algorithms for a broad range of computationally hard problems. As we show in this work, the standard protocol for using these configurators is not always effective. We propose a simple and computationally inexpensive modification to this protocol and apply it to state-of-the-art solvers for two prominent problems, TSP and computer Go playing, where the standard protocol is unable or unlikely to yield performance improvements, and one problem, mixed integer programming, where the standard protocol is known to be effective. We show that our new protocol is able to find configurations between 4% and 180% better than the standard protocol within the same time budget.

James Styles, Holger H. Hoos, Martin Müller
Upper Confidence Tree-Based Consistent Reactive Planning Application to MineSweeper

Many reactive planning tasks are tackled through myopic optimization-based approaches. Specifically, the problem is simplified by only considering the observations available at the current time step and an estimate of the future system behavior; the optimal decision on the basis of this information is computed and the simplified problem description is updated on the basis of the new observations available in each time step. While this approach does not yield optimal strategies stricto sensu, it indeed gives good results at a reasonable computational cost for highly intractable problems, whenever fast off-the-shelf solvers are available for the simplified problem.

The increase of available computational power, even though the search for optimal strategies remains intractable with brute-force approaches, makes it however possible to go beyond the intrinsic limitations of myopic reactive planning approaches.

A consistent reactive planning approach is proposed in this paper, embedding a solver within an Upper Confidence Tree algorithm. While the solver is used to yield a consistent estimate of the belief state, the UCT exploits this estimate (both in the tree nodes and through the Monte-Carlo simulator) to achieve an asymptotically optimal policy. The paper shows the consistency of the proposed Upper Confidence Tree-based Consistent Reactive Planning algorithm and presents a proof of principle of its performance on a classical success of the myopic approach, the MineSweeper game.

Michèle Sebag, Olivier Teytaud
Bounding the Effectiveness of Hypervolume-Based (μ + λ)-Archiving Algorithms

In this paper, we study bounds for the α-approximate effectiveness of non-decreasing (μ + λ)-archiving algorithms that optimize the hypervolume. A (μ + λ)-archiving algorithm defines how μ individuals are to be selected from a population of μ parents and λ offspring. It is non-decreasing if the μ new individuals never have a lower hypervolume than the μ original parents. An algorithm is α-approximate if for any optimization problem and for any initial population, there exists a sequence of offspring populations for which the algorithm achieves a hypervolume of at least 1/α times the maximum hypervolume.

Bringmann and Friedrich (GECCO 2011, pp. 745–752) have proven that all non-decreasing, locally optimal (μ + 1)-archiving algorithms are (2 + ε)-approximate for any ε > 0. We extend this work and substantially improve the approximation factor by generalizing and tightening it for any choice of λ to α = 2 − (λ − p)/μ with μ = q·λ − p and 0 ≤ p ≤ λ − 1. In addition, we show that 1 + 1/(2λ) − δ, for λ < μ and for any δ > 0, is a lower bound on α, i.e. there are optimization problems where one cannot get closer than a factor of 1/α to the optimal hypervolume.

Tamara Ulrich, Lothar Thiele
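
A quick numeric check of the upper bound may help: for given μ and λ, taking q = ⌈μ/λ⌉ and p = q·λ − μ satisfies μ = q·λ − p and 0 ≤ p ≤ λ − 1, after which α follows directly. The snippet below works through two example parameterizations (illustration only):

```python
import math

def archiving_alpha(mu, lam):
    # α = 2 − (λ − p)/μ with μ = q·λ − p and 0 ≤ p ≤ λ − 1;
    # q = ceil(μ/λ), p = q·λ − μ satisfy both conditions.
    q = math.ceil(mu / lam)
    p = q * lam - mu
    assert 0 <= p <= lam - 1 and mu == q * lam - p
    return 2 - (lam - p) / mu

print(archiving_alpha(10, 4))  # q = 3, p = 2  ->  α = 2 − 2/10 = 1.8
print(archiving_alpha(10, 1))  # λ = 1 gives α = 2 − 1/μ = 1.9, approaching
                               # the earlier factor of 2 for large μ
```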
Optimization by ℓ1-Constrained Markov Fitness Modelling

When the function to be optimized is characterized by a limited and unknown number of interactions among variables, a context that applies to many real-world scenarios, it is possible to design optimization algorithms based on such information. Estimation of Distribution Algorithms learn a set of interactions from a sample of points and encode them in a probabilistic model. The latter is then used to sample new instances. In this paper, we propose a novel approach to estimate the Markov Fitness Model used in DEUM. We combine model selection and model fitting by solving an ℓ1-constrained linear regression problem. Since candidate interactions grow exponentially in the size of the problem, we first reduce this set with a preliminary coarse selection criterion based on Mutual Information. Then, we employ ℓ1-regularization to further enforce sparsity in the model, estimating its parameters at the same time. Our proposal is analyzed against the 3D Ising Spin Glass function, a problem known to be NP-hard, and it outperforms other popular black-box meta-heuristics.

Gabriele Valentini, Luigi Malagò, Matteo Matteucci
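
The two-stage pipeline can be illustrated with standard tools: score candidate pairwise interaction terms by mutual information, keep the strongest ones, and fit their weights with an ℓ1-penalized regression. The sketch below uses scikit-learn on synthetic data and is not the authors' DEUM implementation.

```python
import numpy as np
from itertools import combinations
from sklearn.feature_selection import mutual_info_regression
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 8)).astype(float)        # population of bitstrings
y = X[:, 0] * X[:, 1] - X[:, 2] + rng.normal(0, .1, 200)   # toy fitness

# Stage 1: coarse selection of pairwise interaction terms by MI with fitness.
pairs = list(combinations(range(X.shape[1]), 2))
Z = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])
mi = mutual_info_regression(Z, y)
keep = np.argsort(mi)[-10:]                                # retain the 10 best pairs

# Stage 2: sparse fit of the retained terms (plus main effects) with Lasso.
design = np.column_stack([X, Z[:, keep]])
model = Lasso(alpha=0.01).fit(design, y)
print(np.round(model.coef_, 2))                            # most weights shrink to 0
```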
Vehicle Routing and Adaptive Iterated Local Search within the HyFlex Hyper-heuristic Framework

HyFlex (Hyper-heuristic Flexible framework) [15] is a software framework enabling the development of domain-independent search heuristics (hyper-heuristics), and testing across multiple problem domains. This framework was used as a base for the first Cross-domain Heuristic Search Challenge, a research competition that attracted significant international attention. In this paper, we present one of the problems that was used as a hidden domain in the competition, namely, the capacitated vehicle routing problem with time windows. The domain implements a data structure and objective function for the vehicle routing problem, as well as many state-of-the-art low-level heuristics (search operators) of several types. The domain is tested using two adaptive variants of a multiple-neighborhood iterated local search algorithm that operate in a domain-independent fashion, and therefore can be considered as hyper-heuristics. Our results confirm that adding adaptation mechanisms improves the performance of hyper-heuristics. It is our hope that this new and challenging problem domain can be used to promote research within hyper-heuristics, adaptive operator selection, adaptive multi-meme algorithms and autonomous control for search algorithms.

James D. Walker, Gabriela Ochoa, Michel Gendreau, Edmund K. Burke
Quasi-elementary Landscapes and Superpositions of Elementary Landscapes

There exist local search landscapes where the evaluation function is an eigenfunction of the graph Laplacian that corresponds to the neighborhood structure of the search space. Problems that display this structure are called "Elementary Landscapes" and they have a number of special mathematical properties. The term "Quasi-elementary landscapes" is introduced to describe landscapes that are "almost" elementary; in quasi-elementary landscapes there exists some efficiently computed "correction" that captures those parts of the neighborhood structure that deviate from the normal structure found in elementary landscapes. Both the "shift" operator and the "3-opt" operator for Traveling Salesman Problem landscapes induce quasi-elementary landscapes. A local search neighborhood for the Maximal Clique problem is also quasi-elementary. Finally, we show that landscapes which are a superposition of elementary landscapes can be quasi-elementary in structure.

Darrell Whitley, Francisco Chicano
Fast Permutation Learning

Permutations occur in a great variety of optimization problems, such as routing, scheduling and assignment problems. The present paper introduces the use of learning automata for the online learning of good-quality permutations. Several centralized and decentralized methods using individual and common rewards are presented. The performance, memory requirements and scalability of the presented methods are analyzed. Results on well-known benchmark problems show interesting properties. It is also demonstrated how these techniques are successfully applied to multi-project scheduling problems.

Tony Wauters, Katja Verbeeck, Patrick De Causmaecker, Greet Vanden Berghe
Parameter-Optimized Simulated Annealing for Application Mapping on Networks-on-Chip

Application mapping is an important issue in designing systems based on many-core networks-on-chip (NoCs). Simulated Annealing (SA) has often been used to search for optimized solutions of the application mapping problem. The parameters applied in the SA algorithm jointly control the annealing schedule and have great impact on the runtime and the quality of the final solution of the SA algorithm. The optimized parameters should be selected in a systematic way for each particular mapping problem, instead of using an identical set of empirical parameters for all problems. In this work, we apply an optimization method, the Nelder-Mead simplex method, to obtain optimized parameters for SA. Experiments show that with optimized parameters we obtain an average 237× speedup of the SA algorithm compared to the work where empirical values are used for setting the parameters. For the set of benchmarks, the proposed parameter-optimized SA algorithm achieves comparable communication energy consumption using less than 1% of the iterations used in the reference work.

Bo Yang, Liang Guang, Tero Säntti, Juha Plosila
Learning Algorithm Portfolios for Parallel Execution

Portfolio-based solvers are both effective and robust, but their promise for parallel execution with constraint satisfaction solvers has received relatively little attention. This paper proposes an approach that constructs algorithm portfolios intended for parallel execution based on a combination of case-based reasoning, a greedy algorithm, and three heuristics. Empirical results show that this method is efficient, and can significantly improve performance with only a few additional processors. On problems from solver competitions, the resultant algorithm portfolios perform nearly as well as an oracle.

Xi Yun, Susan L. Epstein
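
One ingredient of such constructions, the greedy step, is easy to sketch: given a matrix of observed runtimes, repeatedly add the solver that most reduces the portfolio's total runtime, where a parallel portfolio pays only its fastest member on each instance. The paper's case-based reasoning and its three heuristics are not modeled here.

```python
import numpy as np

def greedy_portfolio(runtimes, k):
    # runtimes[s, i] = runtime of solver s on instance i; pick k solvers.
    chosen = []
    for _ in range(k):
        best_s, best_total = None, np.inf
        for s in range(runtimes.shape[0]):
            if s in chosen:
                continue
            # Parallel portfolio cost: per instance, the fastest member wins.
            total = runtimes[chosen + [s]].min(axis=0).sum()
            if total < best_total:
                best_s, best_total = s, total
        chosen.append(best_s)
    return chosen

rt = np.array([[1., 9., 9.], [9., 1., 9.], [9., 9., 1.], [3., 3., 3.]])
print(greedy_portfolio(rt, 2))   # picks the all-round solver first, then a specialist
```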

Short Papers

Bayesian Optimization Using Sequential Monte Carlo

We consider the problem of optimizing a real-valued continuous function f using a Bayesian approach, where the evaluations of f are chosen sequentially by combining prior information about f, which is described by a random process model, and past evaluation results. The main difficulty with this approach is to be able to compute the posterior distributions of quantities of interest which are used to choose evaluation points. In this article, we decide to use a Sequential Monte Carlo (SMC) approach.

Romain Benassi, Julien Bect, Emmanuel Vazquez
Influence of the Migration Period in Parallel Distributed GAs for Dynamic Optimization

Dynamic optimization problems (DOP) challenge the performance of the standard Genetic Algorithm (GA) due to its panmictic population strategy. Several approaches have been proposed to tackle this limitation. However, one of the barely studied domains has been the parallel distributed GA (dGA), characterized by decentralizing the population in islands communicating through migrations of individuals. In this article, we analyze the influence of the migration period in dGAs for DOPs. Results show how to adjust this parameter for addressing different change severities in a comprehensive set of dynamic test-bed functions.

Yesnier Bravo, Gabriel Luque, Enrique Alba
A Hyper-Heuristic Inspired by Pearl Hunting

Pearl hunting is a traditional way of diving to retrieve pearls from pearl oysters or to hunt some other sea creatures. In some areas, hunters need to dive and search the seafloor repeatedly at several meters' depth for pearl oysters. From a search perspective, pearl hunting consists of repeated diversification (surfacing and changing the target area) and intensification (diving and finding pearl oysters). The Pearl Hunter (PHunter) hyper-heuristic is inspired by pearl hunting, as shown in Fig. 1. Given a problem domain and some low-level heuristics (LLHs), PHunter can group, test, select and organize LLHs for the domain by imitating a rational diver.

C. Y. Chan, Fan Xue, W. H. Ip, C. F. Cheung
Five Phase and Genetic Hive Hyper-Heuristics for the Cross-Domain Search

In this paper we present two hyper-heuristics: Five Phase Approach (5Ph) and Genetic Hive (GH), developed for the Cross-Domain Heuristic Search Challenge held in 2011. Performance of both methods is studied. Experience gained in construction of the hyper-heuristics is presented. Conclusions and recommendations for the future advancement of hyper-heuristic methodologies are discussed.

Tomasz Cichowicz, Maciej Drozdowski, Michał Frankiewicz, Grzegorz Pawlak, Filip Rytwiński, Jacek Wasilewski
Implicit Model Selection Based on Variable Transformations in Estimation of Distribution

In this paper we address the problem of model selection in Estimation of Distribution Algorithms from a novel perspective. We perform an implicit model selection by transforming the variables and choosing a low-dimensional model in the new variable space. We apply such a paradigm in EDAs and introduce a novel algorithm called I-FCA, which makes use of the independence model in the transformed space, yet is able to recover higher-order interactions among the original variables. We evaluated the performance of the algorithm on well-known benchmark functions in a black-box context and compared it with other popular EDAs.

Emanuele Corsano, Davide Cucci, Luigi Malagò, Matteo Matteucci
Improving the Exploration in Upper Confidence Trees

In the standard version of the UCT algorithm, in the case of a continuous set of decisions, the exploration of new decisions is done through blind search. This can lead to very inefficient exploration, particularly in the case of large dimension problems, which often happens in energy management problems, for instance. In an attempt to use the information gathered through past simulations to better explore new decisions, we propose a method named Blind Value (BV). It only requires the access to a function that randomly draws feasible decisions. We also implement it and compare it to the original version of continuous UCT. Our results show that it gives a significant increase in convergence speed, in dimensions 12 and 80.

Adrien Couëtoux, Hassen Doghmen, Olivier Teytaud
Parallel GPU Implementation of Iterated Local Search for the Travelling Salesman Problem

The purpose of this paper is to propose effective parallelization strategies for the Iterated Local Search (ILS) metaheuristic on Graphics Processing Units (GPU). We consider the decomposition of the 3-opt Local Search procedure on the GPU processing hardware and memory structure. Two resulting algorithms are evaluated and compared on both speedup and solution quality on a state-of-the-art Fermi GPU architecture. We report speedups of up to 6.02 with solution quality similar to the original sequential implementation on instances of the Travelling Salesman Problem ranging from 100 to 3038 cities.

Audrey Delévacq, Pierre Delisle, Michaël Krajecki
Constraint-Based Local Search for the Costas Array Problem

The Costas Array Problem is a highly combinatorial problem linked to radar applications. We present in this paper its detailed modeling and solving by Adaptive Search, a constraint-based local search method. Experiments have been done on both sequential and parallel hardware up to several hundreds of cores. Performance evaluation of the sequential version shows results outperforming previous implementations, while the parallel version shows nearly linear speedups w.r.t. the sequential one, for instance 120 for 128 cores and 230 for 256 cores.

Daniel Diaz, Florian Richoux, Philippe Codognet, Yves Caniou, Salvador Abreu
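
The combinatorial structure being modeled is compact: a permutation is a Costas array exactly when all displacement vectors (j − i, π(j) − π(i)) for i < j are distinct, so a local-search cost can simply count duplicated vectors. A minimal sketch of such a cost function (Adaptive Search's per-variable error machinery is not reproduced):

```python
from collections import Counter
from itertools import permutations

def costas_cost(perm):
    # Count duplicated displacement vectors; 0 means a valid Costas array.
    vecs = Counter((j - i, perm[j] - perm[i])
                   for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return sum(c - 1 for c in vecs.values() if c > 1)

# Exhaustive check for n = 4: prints every order-4 Costas array.
for p in permutations(range(4)):
    if costas_cost(p) == 0:
        print(p)
```

A local-search solver would minimize this cost over permutations, e.g. by swapping the pair of positions that most reduces the number of duplicated vectors.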
Evaluation of a Family of Reinforcement Learning Cross-Domain Optimization Heuristics

In our participation in the Cross-Domain Heuristic Search Challenge (CHeSC 2011) [1] we developed an approach based on Reinforcement Learning for the automatic, on-line selection of low-level heuristics across different problem domains. We tested different memory models and learning techniques to improve the results of the algorithm. In this paper we report our design choices and a comparison of the different algorithms we developed.

Luca Di Gaspero, Tommaso Urli
Autonomous Local Search Algorithms with Island Representation

The aim of this work is to use a dynamic island model to autonomously select local search operators within a classical evolutionary algorithm. In order to assess the relevance of this approach, we use the model considering a population-based local search algorithm, with no crossover and where each island is associated with a particular local search operator. Here, contrary to recent works [6], the goal is not to forecast the most promising crossovers between individuals as in classical island models, but to detect at each time of the search the most relevant LS operators. This application constitutes an original approach to defining autonomous algorithms.

Adrien Goëffon, Frédéric Lardeux
An Approach to Instantly Use Single-Objective Results for Multi-objective Evolutionary Combinatorial Optimization

Standard dominance-based multi-objective evolutionary algorithms hardly allow problem knowledge to be integrated without redesigning the approach as a whole. We present a flexible alternative approach based on an abstraction of predator-prey interplay. For parallel machine scheduling problems, we find that the combination of problem knowledge principally leads to better trade-off approximations compared to the standard class of algorithms, especially NSGA-2. Further, we show that the incremental integration of existing problem knowledge gradually improves the algorithm's performance.

Christian Grimme, Joachim Lepping
Lower Bounds and Upper Bounds for MaxSAT

This paper presents several ways to compute lower and upper bounds for MaxSAT based on calling a complete SAT solver. Preliminary results indicate that (i) the bounds are of high quality, (ii) the bounds can boost the search of MaxSAT solvers on some benchmarks, and (iii) the upper bounds computed by a Stochastic Local Search procedure (SLS) can be substantially improved when its search is initialized with an assignment provided by a complete SAT solver.

Federico Heras, Antonio Morgado, Joao Marques-Silva
Determining the Characteristic of Difficult Job Shop Scheduling Instances for a Heuristic Solution Method

Many heuristic methods have been proposed for the job-shop scheduling problem. Different solution methodologies outperform others depending on the particular problem instance under consideration. Therefore, one is interested in knowing how the instances differ in structure, in determining when a particular heuristic solution is likely to fail, and in exploring the causes in further detail. In order to achieve this, we seek to characterise features for different difficulties. Preliminary experiments show there are different significant features that distinguish between easy and hard JSSP problems, and that they vary throughout the scheduling process. The insight attained by investigating the relationship between problem structure and heuristic performance can undoubtedly lead to better heuristic design that is tailored to the data distribution under consideration.

Helga Ingimundardottir, Thomas Philip Runarsson
Expected Improvements for the Asynchronous Parallel Global Optimization of Expensive Functions: Potentials and Challenges

Sequential sampling strategies based on Gaussian processes are now widely used for the optimization of problems involving costly simulations. But Gaussian processes can also generate parallel optimization strategies. We focus here on a new, parameter-free, parallel expected improvement criterion for asynchronous optimization. An estimation of the criterion, which mixes Monte Carlo sampling and analytical bounds, is proposed. Logarithmic speed-ups are measured on 1- and 9-dimensional functions.

Janis Janusevskis, Rodolphe Le Riche, David Ginsbourger, Ramunas Girdziusas
Effect of SMS-EMOA Parameterizations on Hypervolume Decreases

It is possible for the (μ + 1)-SMS-EMOA to decrease in dominated hypervolume w.r.t. a global reference point. We study the influence of SMS-EMOA parameter settings on the number and amount of the observed decreases. We show that the number of decreases drops and the number of increases rises with a higher population size. In addition, a positive correlation between mean increase and mean decrease can be observed. Our findings further indicate a substantial impact of the mutation operators on the number and amount of decreases.

Leonard Judt, Olaf Mersmann, Boris Naujoks
Effects of Speciation on Evolution of Neural Networks in Highly Dynamic Environments

Using genetic algorithms for solving dynamic optimization problems is an important area of current research. In this work, we investigate effects of speciation in NeuroEvolution of Augmenting Topologies (NEAT), a well-known method for evolving neural network topologies, on problems with dynamic fitness function. NEAT uses speciation as a method of maintaining diversity in the population and protecting new solutions against competition. We show that NEAT outperforms non-speciated genetic algorithm (GA) not only on problems with static fitness function, but also on problems with gradually moving optimum. We also demonstrate that NEAT fails to achieve better performance on problems where the optimum moves rapidly. We propose a novel method called DynNEAT, which extends NEAT by changing the size of each species based on its historical performance. We demonstrate that DynNEAT outperforms both NEAT and non-speciated GA on problems with rapidly moving optimum, while it achieves performance similar to NEAT on problems with static or slowly moving optimum.

Peter Krčah
Natural Max-SAT Encoding of Min-SAT

We show that there exists a natural encoding which transforms Min-SAT instances into Max-SAT instances. Unlike previous encodings, this natural encoding keeps the same variables, and the optimal assignment for the Min-SAT instance is identical to the optimal assignment of the corresponding Max-SAT instance. In addition, the encoding can be generalized to the Min-SAT variants with clause weights and hard clauses. We conducted experiments which give evidence that our encoding is practically relevant, as Min-2-SAT instances can be solved much faster by transforming them to Max-SAT and using a Max-SAT solver than by using the best Min-SAT solver directly.

Adrian Kügel
A New Hyperheuristic Algorithm for Cross-Domain Search Problems

This paper describes a new hyperheuristic algorithm that performs well over a variety of different problem classes. A novel method for switching between working on a single solution and a pool of solutions is proposed. This method is combined with an adaptive strategy that guides the selection of the underlying low-level heuristics throughout the search. The algorithm was implemented based on the HyFlex framework and was submitted as a candidate for the Cross-Domain Heuristic Search Challenge 2011.

Andreas Lehrbaum, Nysret Musliu
Brain Cine-MRI Sequences Registration Using B-Spline Free-Form Deformations and MLSDO Dynamic Optimization Algorithm

In this paper, a dynamic optimization algorithm is used to assess the deformations of the wall of the third cerebral ventricle in brain cine-MR imaging. In this method, a nonrigid registration process is applied to a 2D+t cine-MRI sequence of a region of interest. We propose to use a B-spline Free-Form deformation model. The registration process consists of optimizing an objective function that can be considered as a dynamic function. Thus, a dynamic optimization algorithm, called MLSDO, is used to accomplish this task. The obtained results are compared to those of several well-known static optimization algorithms. This comparison shows the relevance of using a dynamic optimization algorithm to solve problems of this kind, and the efficiency of MLSDO.

Julien Lepagnot, Amir Nakib, Hamouche Oulhadj, Patrick Siarry
Global Optimization for Algebraic Geometry – Computing Runge–Kutta Methods

This research work presents a new evolutionary optimization algorithm, Evo-Runge-Kutta, in theoretical mathematics with applications in scientific computing. We illustrate the application of Evo-Runge-Kutta, a two-phase optimization algorithm, to a problem of pure algebra, the study of the parameterization of an algebraic variety, an open problem in algebra. Results show the design and optimization of particular algebraic varieties, the Runge-Kutta methods of order q. The mapping between algebraic geometry and evolutionary optimization is direct, and we expect that many open problems in pure algebra will be modelled as constrained global optimization problems.

Ivan Martino, Giuseppe Nicosia
Clause Sharing in Parallel MaxSAT

In parallel MaxSAT solving, sharing learned clauses is expected to help to further prune the search space and boost the performance of a parallel solver. However, not all learned clauses should be shared, since doing so could lead to an exponential blow-up in memory and to sharing many irrelevant clauses. The main question is which learned clauses should be shared among the different threads. This paper reviews the existing heuristics for sharing learned clauses, namely, static and dynamic heuristics. Moreover, a new heuristic for clause sharing is presented based on freezing shared clauses. Shared clauses are only incorporated into the solver when they are expected to be useful in the near future. Experimental results show the importance of clause sharing and that the freezing heuristic outperforms other clause sharing heuristics.

Ruben Martins, Vasco Manquinho, Inês Lynce
An Intelligent Hyper-Heuristic Framework for CHeSC 2011

The present study proposes a new selection hyper-heuristic providing several adaptive features to cope with the requirements of managing different heuristic sets. The suggested approach provides an intelligent way of selecting heuristics, determines effective heuristic pairs and adapts the parameters of certain heuristics online. In addition, an adaptive list-based threshold accepting mechanism has been developed. It enables deciding whether or not to accept the solutions generated by the selected heuristics. The resulting approach won the first Cross-Domain Heuristic Search Challenge against 19 high-level algorithms.

Mustafa Mısır, Katja Verbeeck, Patrick De Causmaecker, Greet Vanden Berghe
An Efficient Meta-heuristic Based on Self-control Dominance Concept for a Bi-objective Re-entrant Scheduling Problem with Outsourcing

We study a two-machine re-entrant flowshop scheduling problem in which the jobs have strict due dates. In order to satisfy all customers and avoid any tardiness, the scheduler decides which jobs shall be outsourced and finds the best sequence for the in-house jobs. Two objective functions are considered: minimizing the total completion time for in-house jobs and minimizing the outsourcing cost for the others. Since the problem is NP-hard, an efficient genetic algorithm based on a modified self-control dominance concept with adaptive generation size is proposed. Non-dominated solutions are compared with those of the classical NSGA-II with regard to different metrics. The results indicate the ability of our proposed algorithm to find a good approximation of the middle part of the Pareto front.

Atefeh Moghaddam, Farouk Yalaoui, Lionel Amodeo
A Tree Search Approach to Sparse Coding

Sparse coding is an important optimization problem with numerous applications. In this paper, we describe the problem and the commonly used pursuit methods, and propose a best-first tree search algorithm employing multiple queues for unexplored tree nodes. We assess the effectiveness of our method in an extensive computational experiment, showing its superiority over other methods even for modest computational time.

Rui Rei, João P. Pedroso, Hideitsu Hino, Noboru Murata
Adaptive Control of the Number of Crossed Genes in Many-Objective Evolutionary Optimization

To realize effective genetic operation in evolutionary many-objective optimization, crossover controlling the number of crossed genes (CCG) has been proposed. CCG controls the number of crossed genes by using a user-defined parameter α. CCG with small α significantly improves the search performance of the multi-objective evolutionary algorithm in many-objective optimization by keeping the number of crossed genes small. However, to achieve high search performance by using CCG, we have to find an appropriate parameter α by conducting many experiments. To avoid parameter tuning and automatically find an appropriate α in a single run of the algorithm, in this work we propose an adaptive CCG which adapts the parameter α during the search. Simulation results show that the values of α controlled by the proposed method converge to an appropriate value even when the adaptation is started from any initial value. We also show that the adaptive CCG achieves more than 80% of the maximum search performance of the static CCG using an optimal α* in a single run of the algorithm.

Hiroyuki Sato, Carlos A. Coello Coello, Hernán E. Aguirre, Kiyoshi Tanaka
Counter Implication Restart for Parallel SAT Solvers

A portfolio approach has become the mainstream for parallel SAT solvers, making diversification of the search for each process more important. In the SAT Competition 2011, we proposed a novel restart method called counter implication restart (CIR), for sequential solvers and won gold and silver medals with CIR. CIR enables SAT solvers to change the search spaces drastically after a restart. In this paper, we propose an adaptation of CIR to parallel SAT solvers to provide better diversification. Experimental results indicate that CIR provides good diversification and its overall performance is very competitive with state-of-the-art parallel solvers.

Tomohiro Sonobe, Mary Inaba
Learning the Neighborhood with the Linkage Tree Genetic Algorithm

We discuss the use of online learning of the local search neighborhood. Specifically, we consider the Linkage Tree Genetic Algorithm (LTGA), a population-based, stochastic local search algorithm that learns the neighborhood by identifying the problem variables that have a high mutual information in a population of good solutions. Each generation, the LTGA builds a linkage tree using a hierarchical clustering algorithm. This bottom-up hierarchical clustering is computationally very efficient and runs in O(n²). Each node in the tree represents a specific cluster of problem variables. When generating new solutions, these linked variables specify the neighborhood where the LTGA searches for better solutions by sampling values for these problem variables from the current population. To demonstrate the use of learning the neighborhood we experimentally compare iterated local search (ILS) with the LTGA on a hard discrete problem, the nearest-neighbor NK-landscape problem with maximal overlap. Results show that the LTGA is significantly superior to the ILS, proving that learning the neighborhood during the search can lead to a considerable gain in search performance.

Dirk Thierens, Peter A. N. Bosman
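
The neighborhood-learning step can be pictured with standard tools: estimate pairwise mutual information between variables from the current population and cluster the variables hierarchically with an MI-derived distance. The sketch below uses SciPy's agglomerative clustering on synthetic bitstrings; LTGA's own O(n²) tree construction and its sampling of new solutions are not shown.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def pairwise_mi(pop):
    # Empirical mutual information between every pair of binary variables.
    n = pop.shape[1]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            for a in (0, 1):
                for b in (0, 1):
                    p = np.mean((pop[:, i] == a) & (pop[:, j] == b))
                    pi, pj = np.mean(pop[:, i] == a), np.mean(pop[:, j] == b)
                    if p > 0:
                        mi[i, j] += p * np.log(p / (pi * pj))
            mi[j, i] = mi[i, j]
    return mi

rng = np.random.default_rng(1)
pop = rng.integers(0, 2, size=(100, 6))
pop[:, 3] = pop[:, 2]                           # plant one strongly linked pair
dist = 1.0 / (1.0 + pairwise_mi(pop))           # high MI -> small distance
tree = linkage(dist[np.triu_indices(6, k=1)], method="average")
print(tree)                                     # merge order: variables 2 and 3 join first
```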
A Comparison of Operator Utility Measures for On-Line Operator Selection in Local Search

This paper investigates the adaptive selection of operators in the context of Local Search. The utility of each operator is computed from the solution quality and distance of the candidate solution from the search trajectory. A number of utility measures based on the Pareto dominance relationship and the relative distances between the operators are proposed and evaluated on QAP instances using an implied or static target balance between exploitation and exploration. A refined algorithm with an adaptive target balance is then examined.

Nadarajen Veerapen, Jorge Maturana, Frédéric Saubion
Monte Carlo Methods for Preference Learning

Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query the users about their preferences and give recommendations based on the system’s belief about the utility function.

Critical to these applications are the acquisition of a prior distribution over the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.

Paolo Viappiani
Hybridizing Reactive Tabu Search with Simulated Annealing

Reactive tabu search (RTS) aims at the automatic adaptation of the tabu list length. The idea is to increase the tabu list length when the tabu memory indicates that the search is revisiting formerly traversed solutions. Once too many repetitions are encountered, an escape mechanism constituting a random walk is an essential part of the method. We propose to replace this random walk by a controlled simulated annealing (SA). Excellent results are presented for various combinatorial optimization problems.

Stefan Voß, Andreas Fink
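
The hybrid's control flow can be outlined directly: grow the tabu tenure when previously seen solutions recur, and once repetitions exceed a threshold, escape with a short simulated-annealing walk rather than a plain random walk. This is a hedged outline, not the authors' implementation; the `neighbors` and `cost` hooks are hypothetical, and solutions are assumed hashable.

```python
import math, random

def rts_sa(x, neighbors, cost, iters=10000, max_reps=3, t0=1.0):
    tenure, tabu, seen, reps = 5, {}, set(), 0
    best = x
    for it in range(iters):
        # Best admissible (non-tabu) neighbor; fall back if all are tabu.
        cand = [n for n in neighbors(x) if tabu.get(n, -1) < it] or list(neighbors(x))
        x = min(cand, key=cost)
        tabu[x] = it + tenure
        if x in seen:                        # repetition detected: react
            tenure += 1
            reps += 1
        seen.add(x)
        if reps > max_reps:                  # escape via a controlled SA walk
            t = t0
            for _ in range(100):
                y = random.choice(list(neighbors(x)))
                if cost(y) < cost(x) or random.random() < math.exp((cost(x) - cost(y)) / t):
                    x = y
                t *= 0.95                    # geometric cooling
            reps, tenure = 0, max(5, tenure // 2)
        if cost(x) < cost(best):
            best = x
    return best
```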
Backmatter
Metadata
Title
Learning and Intelligent Optimization
Edited by
Youssef Hamadi
Marc Schoenauer
Copyright year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-34413-8
Print ISBN
978-3-642-34412-1
DOI
https://doi.org/10.1007/978-3-642-34413-8
