
2013 | Book

Learning and Intelligent Optimization

7th International Conference, LION 7, Catania, Italy, January 7-11, 2013, Revised Selected Papers

Edited by: Giuseppe Nicosia, Panos Pardalos

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 7th International Conference on Learning and Intelligent Optimization, LION 7, which was held in Catania, Italy, in January 2013. The 49 contributions presented in this volume were carefully reviewed and selected from 101 submissions. They explore the intersections and uncharted territories between machine learning, artificial intelligence, mathematical programming and algorithms for hard optimization problems.

Table of Contents

Frontmatter
Interleaving Innovization with Evolutionary Multi-Objective Optimization in Production System Simulation for Faster Convergence

This paper introduces a novel methodology for the optimization, analysis and decision support in production systems engineering. The methodology is based on the innovization procedure, originally introduced to unveil new and innovative design principles in engineering design problems. The innovization procedure stretches beyond an optimization task and attempts to discover new design/operational rules/principles relating to decision variables and objectives, so that a deeper understanding of the underlying problem can be obtained. By integrating the concept of innovization with simulation and data mining techniques, a new set of powerful tools can be developed for general systems analysis. The uniqueness of the approach introduced in this paper lies in that decision rules extracted from the multi-objective optimization using data mining are used to modify the original optimization. Hence, faster convergence to the desired solution of the decision-maker can be achieved. In other words, faster convergence and deeper knowledge of the relationships between the key decision variables and objectives can be obtained by interleaving the multi-objective optimization and data mining process. In this paper, such an interleaved approach is illustrated through a set of experiments carried out on a simulation model developed for a real-world production system analysis problem.

Amos H. C. Ng, Catarina Dudas, Henrik Boström, Kalyanmoy Deb
Intelligent Optimization for the Minimum Labelling Spanning Tree Problem

Given a connected, undirected graph whose edges are labelled (or coloured), the minimum labelling spanning tree (MLST) problem seeks a spanning tree whose edges have the smallest number of distinct labels (or colours). In recent work, the MLST problem has been shown to be NP-hard and some effective heuristics have been proposed and analysed. In this paper we present preliminary results of a currently on-going project regarding the implementation of an intelligent optimization algorithm to solve the MLST problem. This algorithm is obtained by the basic Variable Neighbourhood Search heuristic with the integration of other complements from machine learning, statistics and experimental algorithmics, in order to produce high-quality performance and to completely automate the resulting optimization strategy.

Sergio Consoli, José Andrés Moreno Pérez, Nenad Mladenović
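For readers new to the MLST setting described above, the sketch below shows the classical greedy construction (in the spirit of the MVCA heuristic) that metaheuristics such as the authors' Variable Neighbourhood Search typically refine: labels are added one at a time so as to reduce the number of connected components fastest. The toy instance, the union-find helper and all identifiers are illustrative assumptions, not the authors' implementation.

```python
# Greedy MVCA-style construction for the Minimum Labelling Spanning Tree problem:
# repeatedly add the label whose edges reduce the number of connected components most,
# until the subgraph induced by the chosen labels is connected.
# Illustrative baseline only, not the VNS-based algorithm of the paper.

def components_with(labels, n_vertices, edges_by_label):
    """Number of connected components of the subgraph using only `labels`."""
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for lab in labels:
        for u, v in edges_by_label[lab]:
            union(u, v)
    return len({find(v) for v in range(n_vertices)})

def greedy_mlst_labels(n_vertices, edges_by_label):
    chosen = set()
    while components_with(chosen, n_vertices, edges_by_label) > 1:
        best = min(edges_by_label,
                   key=lambda lab: components_with(chosen | {lab},
                                                   n_vertices, edges_by_label))
        chosen.add(best)
    return chosen

# Toy instance: 4 vertices, edges grouped by colour label (assumed connected graph).
edges_by_label = {"red": [(0, 1), (1, 2)], "blue": [(2, 3)], "green": [(0, 3)]}
print(greedy_mlst_labels(4, edges_by_label))  # e.g. {'red', 'blue'}
```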
A Constraint Satisfaction Approach to Tractable Theory Induction

A novel framework for combining logical constraints with theory induction in Inductive Logic Programming is presented. The constraints are solved using a Boolean satisfiability (SAT) solver to obtain a candidate solution. This speeds up induction by avoiding the generation of unnecessary candidates with respect to the constraints. Moreover, using a complete SAT solver, search space exhaustion is always detectable, leading to faster small clause/base case induction. We run benchmarks using two constraints: input-output specification and search space pruning. The benchmarks suggest our constraint satisfaction approach can speed up theory induction by four orders of magnitude or more, making certain intractable problems tractable.

John Ahlgren, Shiu Yin Yuen
Features for Exploiting Black-Box Optimization Problem Structure

Black-box optimization (BBO) problems arise in numerous scientific and engineering applications and are characterized by computationally intensive objective functions, which severely limit the number of evaluations that can be performed. We present a robust set of features that analyze the fitness landscape of BBO problems and show how an algorithm portfolio approach can exploit these general, problem independent, features and outperform the utilization of any single minimization search strategy. We test our methodology on data from the GECCO Workshop on BBO Benchmarking 2012, which contains 21 state-of-the-art solvers run on 24 well-established functions.

Tinus Abell, Yuri Malitsky, Kevin Tierney
MOCA-I: Discovering Rules and Guiding Decision Maker in the Context of Partial Classification in Large and Imbalanced Datasets

This paper focuses on the modeling and implementation, as a multi-objective optimization problem, of a Pittsburgh classification rule mining algorithm adapted to large and imbalanced datasets, as encountered in hospital data. We associate with this algorithm an original post-processing method based on the ROC curve to help the decision maker choose the most interesting rules. After an introduction to the problems raised by hospital data, such as class imbalance, data volume or inconsistency, we present MOCA-I, a Pittsburgh-style formulation adapted to this kind of problem. We propose its implementation as a dominance-based local search, in contrast to existing multi-objective approaches based on genetic algorithms. Then we introduce the post-processing method to sort and filter the obtained classifiers. Our approach is compared to state-of-the-art classification rule mining algorithms, giving as good or better results while using fewer parameters. It is then compared to C4.5 and C4.5-CS on hospital data with a larger set of attributes, giving the best results.

Julie Jacques, Julien Taillard, David Delerue, Laetitia Jourdan, Clarisse Dhaenens
Sharing Information in Parallel Search with Search Space Partitioning

In this paper we propose a new approach to sharing information among the computation units of an iterative search partitioning parallel SAT solver by approximating validity. Experimental results show the strength of the approach against both existing sharing techniques and the absence of sharing. With the improved clause sharing, out of 600 instances we could solve 13 more than with previous sharing techniques.

Davide Lanti, Norbert Manthey
Fast Computation of the Multi-Points Expected Improvement with Applications in Batch Selection

The Multi-points Expected Improvement criterion (or $q$-EI) has recently been studied in batch-sequential Bayesian Optimization. This paper deals with a new way of computing $q$-EI, without using Monte-Carlo simulations, through a closed-form formula. The latter allows a very fast computation of $q$-EI for reasonably low values of $q$ (typically, less than 10). New parallel kriging-based optimization strategies, tested on different toy examples, show promising results.

Clément Chevalier, David Ginsbourger
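As background for the abstract above, the sketch below recalls the well-known closed-form single-point Expected Improvement (the q = 1 case) under a Gaussian process posterior; the multi-point q-EI formula derived in the paper generalizes this quantity to a batch of candidates and is not reproduced here. The function name and the numeric example are illustrative.

```python
# Closed-form single-point Expected Improvement (minimization convention):
#   EI(x) = (f_min - mu) * Phi(z) + sigma * phi(z),  with  z = (f_min - mu) / sigma,
# where (mu, sigma) are the Gaussian-process posterior mean and std at x.
# The q-EI studied in the paper extends this to a batch of q candidate points.
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

print(expected_improvement(mu=0.8, sigma=0.3, f_min=1.0))  # approx. 0.245
```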
R2-EMOA: Focused Multiobjective Search Using R2-Indicator-Based Selection

An indicator-based evolutionary multiobjective optimization algorithm (EMOA) is introduced which incorporates the contribution to the unary R2-indicator as the secondary selection criterion. First experiments indicate that the R2-EMOA accurately approximates the Pareto front of the considered continuous multiobjective optimization problems. Furthermore, decision makers’ preferences can be included by adjusting the weight vector distributions of the indicator which results in a focused search behavior.

Heike Trautmann, Tobias Wagner, Dimo Brockhoff
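To make the selection criterion above concrete, here is a minimal sketch of the unary R2 indicator with weighted Tchebycheff utilities, the quantity whose contribution R2-EMOA uses as its secondary selection criterion. The weight vectors, reference point and minimization convention are illustrative assumptions; implementations differ in these details.

```python
# Unary R2 indicator of an approximation set (minimization, smaller is better):
#   R2(A) = (1/|W|) * sum over weight vectors w of
#           min over points p in A of  max_i  w_i * |p_i - z*_i|
# with a utopian reference point z*, here set to the origin for illustration.
def r2_indicator(points, weights, ref):
    total = 0.0
    for w in weights:
        best = min(max(wi * abs(pi - zi) for wi, pi, zi in zip(w, p, ref))
                   for p in points)
        total += best
    return total / len(weights)

front = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
weights = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
print(r2_indicator(front, weights, ref=(0.0, 0.0)))  # 0.15 for this toy front
```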
A Heuristic Algorithm for the Set Multicover Problem with Generalized Upper Bound Constraints

We consider an extension of the set covering problem (SCP) introducing (i) multicover and (ii) generalized upper bound (GUB) constraints that arise in many real applications of SCP. For this problem, we develop a 2-flip neighborhood local search algorithm with a heuristic size reduction algorithm, in which a new evaluation scheme of variables is introduced taking account of GUB constraints. According to computational comparison with the latest version of a mixed integer programming solver, our algorithm performs quite effectively for various types of instances, especially for very large-scale instances.

Shunji Umetani, Masanao Arakawa, Mutsunori Yagiura
A Genetic Algorithm Approach for the Multidimensional Two-Way Number Partitioning Problem

This paper addresses the problem of partitioning a set of vectors into two subsets such that the sums per coordinate are exactly or approximately equal. This problem, introduced by Kojic [8], is called the multidimensional two-way number partitioning problem (MDTWNPP) and generalizes the classical two-way number partitioning problem. We propose an efficient genetic algorithm based heuristic for solving the MDTWNPP. The performance of our genetic algorithm has been compared with existing numerical results obtained by CPLEX based on an integer linear programming formulation of the problem. The preliminary results obtained on medium and large instances reveal that our proposed methodology performs very well, in terms of both solution quality and computational time, compared with the previous method of solving the MDTWNPP.

P. C. Pop, O. Matei
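As a minimal illustration of the MDTWNPP objective that a genetic algorithm would minimize, the sketch below evaluates a 0/1 partition vector by the largest per-coordinate imbalance between the two subsets, and pairs it with a crude random-search baseline. This is one common statement of the objective under assumed conventions, not the authors' formulation or algorithm.

```python
# Multidimensional two-way number partitioning: split vectors into two subsets so that
# the per-coordinate sums are as close as possible. A candidate solution is a 0/1
# assignment vector; its cost is the largest absolute per-coordinate difference.
import random

def mdtwnpp_cost(vectors, assignment):
    dim = len(vectors[0])
    diff = [0.0] * dim
    for vec, side in zip(vectors, assignment):
        for i, x in enumerate(vec):
            diff[i] += x if side == 0 else -x
    return max(abs(d) for d in diff)

vectors = [(3, 1), (2, 4), (5, 2), (1, 3)]
best = min((tuple(random.randint(0, 1) for _ in vectors) for _ in range(200)),
           key=lambda a: mdtwnpp_cost(vectors, a))   # crude random-search baseline
print(best, mdtwnpp_cost(vectors, best))
```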
Adaptive Dynamic Load Balancing in Heterogeneous Multiple GPUs-CPUs Distributed Setting: Case Study of B&B Tree Search

The emergence of new hybrid and heterogeneous multi-GPU multi-CPU large-scale platforms offers new opportunities and poses new challenges when solving difficult optimization problems. This paper targets irregular tree search algorithms in which the workload is unpredictable. We propose an adaptive distributed approach allowing the load to be distributed dynamically at runtime while taking into account the computing abilities of either GPUs or CPUs. Using Branch-and-Bound and FlowShop as a case study, we deployed our approach using up to 20 GPUs and 128 CPUs. Through extensive experiments in different system configurations, we report near-optimal speedups, thus providing new insights into how to take full advantage of both GPU and CPU power in modern computing platforms.

Trong-Tuan Vu, Bilel Derbel, Nouredine Melab
Multi-Objective Optimization for Relevant Sub-graph Extraction

In recent years, graph clustering methods have rapidly emerged to mine latent knowledge and functions in networks. Most sub-graph extraction methods introduced so far fall into the category of graph clustering. In this paper, the problem of extracting relevant sub-graphs is instead cast as a multi-objective optimization problem. Genetic Algorithms (GAs) and Simulated Annealing (SA) were then used to solve the problem, applied to biological networks. Comparisons between GAs, SA and the Markov Cluster Algorithm (MCL) were carried out and the results showed that the proposed approach is superior.

Mohamed Elati, Cuong To, Rémy Nicolle
PROGRESS: Progressive Reinforcement-Learning-Based Surrogate Selection

In most engineering problems, experiments for evaluating the performance of different setups are time consuming, expensive, or even both. Therefore, sequential experimental designs have become an indispensable technique for optimizing the objective functions of these problems. In this context, most of the problems can be considered as a black-box. Specifically, no function properties are known a priori to select the best suited surrogate model class. Therefore, we propose a new ensemble-based approach, which is capable of identifying the best surrogate model during the optimization process by using reinforcement learning techniques. The procedure is general and can be applied to arbitrary ensembles of surrogate models. Results are provided on 24 well-known black-box functions to show that the progressive procedure is capable of selecting suitable models from the ensemble and that it can compete with state-of-the-art methods for sequential optimization.

Stefan Hess, Tobias Wagner, Bernd Bischl
Neutrality in the Graph Coloring Problem

In this paper, the neutrality of some hard instances of the graph coloring problem (GCP) is quantified. This neutrality property has to be detected as it impacts the search process: local optima may belong to plateaus that represent a barrier for local search methods. We also aim to point out the benefit of exploiting neutrality during the search. To this end, a generic local search dedicated to neutral problems, NILS, is run on several hard instances.

Marie-Eléonore Marmion, Aymeric Blot, Laetitia Jourdan, Clarisse Dhaenens
Kernel Multi Label Vector Optimization (kMLVO): A Unified Multi-Label Classification Formalism

We propose here the kMLVO (kernel Multi-Label Vector Optimization) framework, designed to handle the common case in binary classification problems where the observations, at least in part, are not given as an explicit class label but rather as several scores related to the binary classification. Rather than handling each of the scores and the labeling data as separate problems, the kMLVO framework seeks a classifier which satisfies all the corresponding constraints simultaneously. The framework can naturally handle problems where each of the scores is related differently to the classification problem, jointly optimizing the classification, the regressions and the transformations into the different scores. Results from simulations and a protein docking problem in immunology are discussed, and the suggested method is shown to outperform both the corresponding SVM and SVR.

Gilad Liberman, Tal Vider-Shalit, Yoram Louzoun
Robust Benchmark Set Selection for Boolean Constraint Solvers

We investigate the composition of representative benchmark sets for evaluating and improving the performance of robust Boolean constraint solvers in the context of satisfiability testing and answer set programming. Starting from an analysis of current practice, we isolate a set of desiderata for guiding the development of a parametrized benchmark selection algorithm. Our algorithm samples a benchmark set from a larger base set (or distribution) comprising a large variety of instances. This is done fully automatically, in a way that carefully calibrates instance hardness and instance similarity. We demonstrate the usefulness of this approach by means of empirical results showing that optimizing solvers on the benchmark sets produced by our method leads to better configurations than obtained based on the much larger, original sets.

Holger H. Hoos, B. Kaufmann, T. Schaub, M. Schneider
Boosting Sequential Solver Portfolios: Knowledge Sharing and Accuracy Prediction

Sequential algorithm portfolios for satisfiability testing (SAT), such as SATzilla and 3S, have enjoyed much success in the last decade. By leveraging the differing strengths of individual SAT solvers, portfolios employing older solvers have often fared as well as or better than newly designed ones in several categories of the annual SAT Competitions and Races. We propose two simple yet powerful techniques to further boost the performance of sequential portfolios: a generic way of knowledge sharing suitable for sequential SAT solver schedules, which is commonly employed in parallel SAT solvers, and a meta-level guardian classifier for judging whether to switch the main solver suggested by the portfolio with a recourse action solver. With these additions, we show that the performance of the sequential portfolio solver 3S, which dominated other sequential categories but was ranked 10th in the application category of the 2011 SAT Competition, can be boosted significantly, bringing it just one instance short of matching the performance of the winning application track solver, while still outperforming all other solvers submitted to the crafted and random categories.

Yuri Malitsky, Ashish Sabharwal, Horst Samulowitz, Meinolf Sellmann
A Fast and Adaptive Local Search Algorithm for Multi-Objective Optimization

Although population-based algorithms are robust in solving Multi-objective Optimization Problems (MOP), they often require a large number of function evaluations. In contrast, individual-solution based algorithms are fast but can get stuck in local minima. To address these problems, we introduce a fast and adaptive local search algorithm for MOP. Our algorithm is an individual-solution algorithm with a flexible mechanism for switching between the exploration and exploitation phases to escape from local minima. The experimental results on the DTLZ benchmark show that our algorithm significantly outperforms the popular evolutionary algorithm NSGA-II and three other simulated annealing algorithms for MOP.

Duy Tin Truong
An Analysis of Hall-of-Fame Strategies in Competitive Coevolutionary Algorithms for Self-Learning in RTS Games

This paper explores the use of a Hall-of-Fame (HoF) in the application of competitive coevolution for finding winning strategies in RobotWars, a two-player real-time strategy (RTS) game developed at the University of Malaga for research purposes. The main goal is to test different approaches for implementing the concept of HoF as part of the self-learning mechanism in competitive coevolutionary algorithms. Five approaches were designed and tested, differing in whether HoF is implemented as a long- or short-term memory mechanism. Specifically, they differ in the policy followed to keep members in the champions' memory during an updating process that deletes the weakest individuals, so that only the robust members are considered in the evaluation phase. It is shown how strategies based on a periodical update of the HoF set on the basis of quality and diversity provide globally better results.

Mariela Nogueira, Carlos Cotta, Antonio J. Fernández-Leiva
Resources Optimization in (Video) Games: A Novel Approach to Teach Applied Mathematics?

In spite of the efficacy of Operations Research (OR), its tools are still underused, due to the difficulties that people experience when describing a problem through a mathematical model. For this reason, teaching how to approach and model complex problems is still an open issue. A strong relation exists between (video) games and learning: for this reason we explore to what extent (real-time) simulation video games could be an innovative, stimulating and compelling approach to teaching OR techniques.

Dario Maggiorini, Simone Previti, Laura Anna Ripamonti, Marco Trubian
CMF: A Combinatorial Tool to Find Composite Motifs

Controlling the differential expression of many thousands of genes at any given time is a fundamental task of metazoan organisms, and this complex orchestration is controlled by the so-called regulatory genome encoding complex regulatory networks. Cis-Regulatory Modules are fundamental units of such networks. To detect Cis-Regulatory Modules in silico, a key step is the discovery of recurrent clusters of DNA binding sites for sets of cooperating Transcription Factors. Composite motif is the term often adopted to refer to these clusters of sites. In this paper we describe CMF, a new efficient combinatorial method for the problem of detecting composite motifs, given as input a description of the binding affinities for a set of transcription factors. Testing on known benchmark data, we attain statistically significantly better performance than nine state-of-the-art competing methods.

Mauro Leoncini, Manuela Montangero, Marco Pellegrini, Karina Panucia Tillán
Hill-Climbing Behavior on Quantized NK-Landscapes

This paper provides guidelines for designing hill-climbers according to the shape of the landscape under study. In particular, we aim to compare best-improvement and first-improvement strategies, as well as to evaluate the behavior of different neutral move policies. Conclusions are drawn from an empirical analysis on non-neutral (NK-) and neutral (quantized NK-) landscapes. Experiments show the ability of first improvement to explore rugged landscapes, as well as the interest of accepting neutral moves at each step of the search.

Matthieu Basseur, Adrien Goëffon
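The sketch below contrasts the two pivot rules discussed above on a generic bit-string landscape: best improvement scans the whole one-flip neighbourhood, whereas first improvement accepts the first improving (and optionally one neutral) flip it finds. The memoized random objective is a stand-in for an NK-landscape, and all names are illustrative.

```python
# Best-improvement vs first-improvement hill climbing on a bit-string landscape.
# `fitness` is a stand-in objective; a real study would plug in (quantized) NK-landscapes.
import random

def flip(sol, i):
    return sol[:i] + (1 - sol[i],) + sol[i + 1:]

def best_improvement(sol, fitness):
    while True:
        neighbours = [flip(sol, i) for i in range(len(sol))]
        best = max(neighbours, key=fitness)
        if fitness(best) <= fitness(sol):
            return sol                      # local optimum reached
        sol = best

def first_improvement(sol, fitness, accept_neutral=False):
    improved = True
    while improved:
        improved = False
        for i in random.sample(range(len(sol)), len(sol)):
            cand = flip(sol, i)
            delta = fitness(cand) - fitness(sol)
            if delta > 0 or (accept_neutral and delta == 0):
                # a neutral step may be taken, but only strict improvements keep looping
                sol, improved = cand, delta > 0
                break
    return sol

random.seed(0)
table = {}
fitness = lambda s: table.setdefault(s, random.random())   # memoized random landscape
start = tuple(random.randint(0, 1) for _ in range(12))
print(fitness(best_improvement(start, fitness)),
      fitness(first_improvement(start, fitness)))
```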
Neighborhood Specification for Game Strategy Evolution in a Spatial Iterated Prisoner’s Dilemma Game

The prisoner’s dilemma is a two-player non-zero-sum game. Its iterated version has been frequently used to examine game strategy evolution in the literature. In this paper, we discuss the setting of neighborhood structures in its spatial iterated version. The main characteristic feature of our spatial iterated prisoner’s dilemma game model is that each cell has a different scheme to represent game strategies. In our computational experiments, one of four representation schemes is randomly assigned to each cell in a two-dimensional grid-world. An agent at each cell has a game strategy encoded by the assigned representation scheme. In this situation, an agent may have no neighbors with the same representation scheme as the agent’s scheme. The existence of such an agent has a negative effect on the evolution of cooperative behavior. This is because strategies with different representation schemes cannot be recombined. When no neighbors have the same representation scheme as the agent’s scheme, no recombination can be used for generating a new strategy for the agent. In our former study, we used a larger neighborhood structure for such an agent. As a result, each agent has a different neighborhood structure and a different number of neighbors. This makes it difficult to discuss the effect of the neighborhood size on the evolution of cooperative behavior. In this paper, we propose the use of the following setting: Each agent has the same number of neighbors with the same representation scheme as the agent’s scheme. This means that each agent has the same number of qualified neighbors as its mates. We also examine a different spatial model where the location of each agent is randomly specified as a point in a two-dimensional continuous space instead of a grid-world.

Hisao Ishibuchi, Koichiro Hoshino, Yusuke Nojima
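For context, the sketch below plays an iterated prisoner's dilemma between two textbook strategies using the customary payoff values T=5, R=3, P=1, S=0. The strategies and payoffs are illustrative background only and are unrelated to the representation schemes and neighbourhood structures studied in the paper.

```python
# Iterated prisoner's dilemma between two strategies ('C' = cooperate, 'D' = defect),
# using the customary payoffs T=5, R=3, P=1, S=0 (first entry = first player's payoff).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(own_history, opp_history):
    return opp_history[-1] if opp_history else 'C'

def always_defect(own_history, opp_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))   # (9, 14): TFT loses only the first round
```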
A Study on the Specification of a Scalarizing Function in MOEA/D for Many-Objective Knapsack Problems

In recent studies on evolutionary multiobjective optimization, MOEA/D has been frequently used due to its simplicity, high computational efficiency, and high search ability. A multiobjective problem in MOEA/D is decomposed into a number of single-objective problems, which are defined by a single scalarizing function with evenly specified weight vectors. The number of the single-objective problems is the same as the number of weight vectors. The population size is also the same as the number of weight vectors. Multiobjective search for a variety of Pareto optimal solutions is realized by single-objective optimization of a scalarizing function in various directions. In this paper, we examine the dependency of the performance of MOEA/D on the specification of a scalarizing function. MOEA/D is applied to knapsack problems with 2-10 objectives. As a scalarizing function, we examine the weighted sum, the weighted Tchebycheff, and the PBI (penalty-based boundary intersection) function with a wide range of penalty parameter values. Experimental results show that the weighted Tchebycheff and the PBI function with an appropriate penalty parameter value outperformed the weighted sum and the PBI function with no penalty parameter in computational experiments on two-objective problems. However, better results were obtained from the weighted sum and the PBI function with no penalty parameter for many-objective problems with 6-10 objectives. We discuss the reason for these observations using the contour line of each scalarizing function. We also suggest potential usefulness of the PBI function with a negative penalty parameter value for many-objective problems.

Hisao Ishibuchi, Naoya Akedo, Yusuke Nojima
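The three scalarizing functions compared in the abstract have simple closed forms; the sketch below computes them for a single weight vector under a minimization convention with an ideal point z*. The reference point, penalty value theta and example numbers are illustrative assumptions.

```python
# Scalarizing functions used in MOEA/D (minimization convention, ideal point z*):
#   weighted sum:         g = sum_i w_i * f_i
#   weighted Tchebycheff: g = max_i w_i * |f_i - z*_i|
#   PBI:                  g = d1 + theta * d2  (projection onto / distance to the weight ray)
import math

def weighted_sum(f, w):
    return sum(wi * fi for wi, fi in zip(w, f))

def tchebycheff(f, w, z_star):
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z_star))

def pbi(f, w, z_star, theta=5.0):
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    diff = [fi - zi for fi, zi in zip(f, z_star)]
    d1 = abs(sum(di * wi for di, wi in zip(diff, w))) / norm_w
    d2 = math.sqrt(sum((di - d1 * wi / norm_w) ** 2 for di, wi in zip(diff, w)))
    return d1 + theta * d2

f, w, z = (0.6, 0.3), (0.5, 0.5), (0.0, 0.0)
print(weighted_sum(f, w), tchebycheff(f, w, z), pbi(f, w, z, theta=5.0))
```

In MOEA/D each weight vector defines one such single-objective subproblem, which is why the choice among these functions shapes the multiobjective search, as the abstract above discusses.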
Portfolio with Block Branching for Parallel SAT Solvers

A portfolio approach has become a widespread method for parallelizing SAT solvers. In comparison with a divide-and-conquer approach, an important feature of the portfolio approach is that there is no need to conduct load-balancing between workers. Instead, the portfolio diversifies the workers by differentiating their search parameters. However, it is difficult to achieve effective diversification in a massively parallel environment because the number of combinations of the search parameters is limited. Thus, many overlaps of the search spaces between the workers can occur in such an environment. In order to prevent these overlaps, we propose a novel diversification method, called block branching, for the portfolio approach. Preliminary experimental results show that our approach works well, even in a small parallel setting (sixteen processes), and shows potential for a massively parallel environment.

Tomohiro Sonobe, Mary Inaba
Parameter Setting with Dynamic Island Models

In this paper we propose the use of a dynamic island model which aims at adapting parameter settings dynamically. Since each island corresponds to a specific parameter setting, measuring the evolution of island populations sheds light on the most efficient parameter settings throughout the search. This model can be viewed as an alternative adaptive operator selection technique for classic steady-state genetic algorithms. Empirical studies provide competitive results with respect to other methods such as automatic tuning tools. Moreover, this model could ease the parallelization of evolutionary algorithms and can be used in a synchronous or asynchronous way.

Caner Candan, Adrien Goëffon, Frédéric Lardeux, Frédéric Saubion
A Simulated Annealing Algorithm for the Vehicle Routing Problem with Time Windows and Synchronization Constraints

This paper focuses on solving a variant of the vehicle routing problem (VRP) in which a time window is associated with each customer service and some services require simultaneous visits from different vehicles to be accomplished. The problem is therefore called the VRP with time windows and synchronization constraints (VRPTWSyn). We present a simulated annealing algorithm (SA) that incorporates several local search techniques to deal with this problem. Experiments on the instances from the literature show that our SA is fast and outperforms the existing approaches. To the best of our knowledge, this is the first time that dedicated local search methods have been proposed and evaluated on this variant of VRP.

Sohaib Afifi, Duc-Cuong Dang, Aziz Moukrim
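As a generic reminder of the metaheuristic underlying the paper, here is a plain simulated annealing skeleton with Metropolis acceptance and geometric cooling; the neighbourhood move, cost function and cooling parameters are placeholders rather than the VRPTWSyn-specific local search techniques the authors incorporate.

```python
# Generic simulated annealing skeleton with Metropolis acceptance and geometric cooling.
# `neighbour` and `cost` are problem-specific placeholders (for VRPTWSyn they would be
# route moves respecting time windows and synchronisation); here they are toy stand-ins.
import math
import random

def simulated_annealing(initial, cost, neighbour, t0=10.0, alpha=0.95, iters=2000):
    current, best = initial, initial
    temperature = t0
    for _ in range(iters):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
            if cost(current) < cost(best):
                best = current
        temperature *= alpha
    return best

# Toy usage: minimise a 1-D function by perturbing x.
random.seed(1)
cost = lambda x: (x - 3.0) ** 2 + math.sin(5 * x)
neighbour = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(0.0, cost, neighbour))
```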
Solution of the Maximum $k$-Balanced Subgraph Problem

A signed graph $G=(V,E,s)$ is $k$-balanced if $V$ can be partitioned into at most $k$ sets in such a way that positive edges are found only within the sets and negative edges go between sets. We study the problem of finding a subgraph of $G$ that is $k$-balanced and maximum according to the number of vertices. This problem has applications in clustering problems appearing in collaborative vs. conflicting environments. We describe a 0-1 linear programming formulation for the problem and implement a first version of a branch-and-cut algorithm based on it. GRASP metaheuristics are used to implement the separation routines in the branch-and-cut. We also propose GRASP and ILS-VND procedures to solve the problem heuristically.

Rosa Figueiredo, Yuri Frota, Martine Labbé
Racing with a Fixed Budget and a Self-Adaptive Significance Level

F-Race is an offline parameter tuning method which efficiently allocates samples in order to identify, out of a given set of parameter settings, the one with the best expected performance. Using non-parametric statistical tests, F-Race discards parameter settings which perform significantly worse than the current best, allowing the surviving parameter settings to be tested on more instances and hence obtaining better estimates of their performance. The statistical tests require setting significance levels, which directly affect the algorithm's ability to detect the best parameter setting, and the total runtime. In this paper, we show that it is not straightforward to set the significance level and propose a simple modification to automatically adapt the significance level such that the failure rate is minimized. This is tested empirically using data drawn from probability distributions with pre-defined characteristics. Results indicate that, under a strict computational budget, F-Race with online adaptation performs significantly better than its counterpart with even the best fixed value.

Juergen Branke, Jawad Elomari
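To ground the discussion, the sketch below shows one stripped-down racing elimination step built around the Friedman test, the non-parametric test classically associated with F-Race. Real F-Race follows the omnibus test with post-hoc pairwise comparisons, and the paper's contribution, the self-adaptive significance level, is not reproduced; the data, candidate names, alpha and rank margin are illustrative assumptions.

```python
# One simplified F-Race-style elimination step: run the Friedman test over the results
# collected so far (rows = instances, columns = candidate configurations) and, if the
# test is significant at level alpha, drop candidates whose mean rank is clearly worse
# than the best one. This is a sketch; F-Race proper uses post-hoc pairwise comparisons.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def race_step(results, candidates, alpha=0.05, rank_margin=1.0):
    stat, p_value = friedmanchisquare(*results.T)       # one sample per candidate
    if p_value >= alpha:
        return candidates                                # no significant difference yet
    ranks = np.apply_along_axis(rankdata, 1, results)    # rank candidates per instance
    mean_ranks = ranks.mean(axis=0)
    best = mean_ranks.min()
    return [c for c, r in zip(candidates, mean_ranks) if r <= best + rank_margin]

rng = np.random.default_rng(0)
results = rng.normal(loc=[1.0, 1.1, 2.0], scale=0.1, size=(12, 3))  # lower = better
print(race_step(results, ["cfg-A", "cfg-B", "cfg-C"]))
```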
An Efficient Best Response Heuristic for a Non-preemptive Strictly Periodic Scheduling Problem

We propose an enhanced version of a heuristic first proposed in [1, 2] to solve an NP-hard strictly periodic scheduling problem. Inspired by game theory, the heuristic reaches an equilibrium by iteratively solving best response problems. Our contribution is to greatly improve its efficiency by taking advantage of the two-dimensionality of the best response problem. The results show that the new heuristic compares favorably with MILP solutions.

Clément Pira, Christian Artigues
Finding an Evolutionary Solution to the Game of Mastermind with Good Scaling Behavior

There are two main research issues in the game of Mastermind: one of them is minimizing the number of turns needed to find the solution, and the other is finding methods that scale well when the size of the search space is increased. In this paper we present a method that uses evolutionary algorithms to find fast solutions to the game of Mastermind that scale better with problem size than previously described methods; this is obtained by fixing just one parameter.

Juan Julian Merelo, Antonio M. Mora, Carlos Cotta, Antonio J. Fernández-Leiva
A Fast Local Search Approach for Multiobjective Problems

In this article, we present a new local search method for multiobjective problems. It is an extension of local search algorithms for the single-objective case, with specific mechanisms used to build the Pareto set. The performance of the local search algorithm is illustrated by experimental results on a real problem with three objectives. The problem arises from an electric car-sharing service developed with a car manufacturer partner. Compared to the Multiobjective Pareto Local Search (PLS) well known in the scientific literature [1], the proposed model aims to improve both the solution quality and the computing time.

Laurent Moalic, Alexandre Caminada, Sid Lamrous
Generating Customized Landscapes in Permutation-Based Combinatorial Optimization Problems

Designing customized optimization problem instances is a key issue in optimization. They can be used to tune and evaluate new algorithms, to compare several optimization algorithms, or to evaluate techniques that estimate the number of local optima of an instance. Given this relevance, several methods have been proposed to design customized optimization problems in the field of evolutionary computation for continuous as well as binary domains. However, these proposals have not been extended to permutation spaces. In this paper we provide a method to generate customized landscapes in permutation-based combinatorial optimization problems. Based on a probabilistic model for permutations, called the Mallows model, we generate instances with specific characteristics regarding the number of local optima or the sizes of the attraction basins.

Leticia Hernando, Alexander Mendiburu, Jose A. Lozano
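To give a flavour of the probabilistic model named above, the sketch below evaluates Mallows model probabilities, which decay exponentially with the Kendall tau distance from a central permutation; the spread parameter theta controls how concentrated the distribution is, which is the kind of knob used when shaping instance characteristics. The brute-force normalisation and the toy permutation size are illustrative simplifications, not the authors' instance generator.

```python
# Mallows model over permutations: P(sigma) is proportional to exp(-theta * d(sigma, sigma0)),
# where d is the Kendall tau distance to a central permutation sigma0 and theta >= 0
# controls how concentrated the distribution is.
import itertools
import math

def kendall_tau_distance(p, q):
    pos_q = {item: i for i, item in enumerate(q)}
    mapped = [pos_q[item] for item in p]
    return sum(1 for i in range(len(mapped)) for j in range(i + 1, len(mapped))
               if mapped[i] > mapped[j])            # number of pairwise disagreements

def mallows_probabilities(sigma0, theta):
    perms = list(itertools.permutations(sigma0))
    weights = [math.exp(-theta * kendall_tau_distance(p, sigma0)) for p in perms]
    z = sum(weights)                                 # brute-force normalisation (small n only)
    return {p: w / z for p, w in zip(perms, weights)}

probs = mallows_probabilities((0, 1, 2, 3), theta=1.0)
print(max(probs, key=probs.get), probs[(0, 1, 2, 3)])  # the central permutation is the mode
```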
Multiobjective Evolution of Mixed Nash Equilibria

In a mixed strategy equilibrium, players randomize between their actions according to a very specific probability distribution, even though, with regard to the game payoff, they are indifferent between their actions. Currently, there is no compelling model explaining why and how agents may randomize their decisions in such a way in real-world scenarios.

We experiment with a model for two player games, where the goal of the players is to find robust strategies for which the uncertainty in the outcome of the opponent is reduced as much as possible. We show that in an evolutionary setting, the proposed model converges to mixed strategy profiles, if these exist. The results suggest that only local knowledge of the game is sufficient to attain the adaptive convergence.

David Iclănzan, Noémi Gaskó, Réka Nagy, D. Dumitrescu
Hybridizing Constraint Programming and Monte-Carlo Tree Search: Application to the Job Shop Problem

Constraint Programming (CP) solvers classically explore the solution space using tree search-based heuristics. Monte-Carlo Tree-Search (MCTS), a tree-search based method aimed at sequential decision making under uncertainty, simultaneously estimates the reward associated to the sub-trees, and gradually biases the exploration toward the most promising regions. This paper examines the tight combination of MCTS and CP on the job shop problem (JSP). The contribution is twofold. Firstly, a reward function compliant with the CP setting is proposed. Secondly, a biased MCTS node-selection rule based on this reward is proposed, that is suitable in a multiple-restarts context. Its integration within the Gecode constraint solver is shown to compete with JSP-specific CP approaches on difficult JSP instances.

Manuel Loth, Michéle Sebag, Youssef Hamadi, Marc Schoenauer, Christian Schulte
From Grammars to Parameters: Automatic Iterated Greedy Design for the Permutation Flow-Shop Problem with Weighted Tardiness

Recent advances in automatic algorithm configuration have made it possible to configure very flexible algorithmic frameworks in order to fine-tune them for particular problems. This is often done by the use of automatic methods to set the values of algorithm parameters. A rather different approach uses grammatical evolution, where the possible algorithms are implicitly defined by a context-free grammar. Possible algorithms may then be instantiated by repeated applications of the rules in the grammar. Through grammatical evolution, such an approach has been shown to be able to generate heuristic algorithms. In this paper we show that the process of instantiating such a grammar can be described in terms of parameters. The number of parameters increases with the maximum number of applications of the grammar rules. Therefore, this approach is only practical if the number of rules and the depth of the derivation tree are bounded and relatively small. This is often the case in the heuristic-generating grammars proposed in the literature, and, in such cases, we show that the parametric representation may lead to superior performance with respect to the representation used in grammatical evolution. In particular, we first propose a grammar that generates iterated greedy (IG) algorithms for the permutation flow-shop problem with weighted tardiness minimization. Next, we show how this grammar can be represented in terms of parameters. Finally, we compare the quality of the IG algorithms generated by an automatic configuration tool using the parametric representation versus the codon-based representation of grammatical evolution. In our scenario, the parametric approach leads to significantly better IG algorithms.

Franco Mascia, Manuel López-Ibáñez, Jérémie Dubois-Lacoste, Thomas Stützle
Architecture for Monitoring Learning Processes Using Video Games

PLAGER-VG is an architecture used to design, execute, analyze and adapt educational processes supported by video games, especially those that include collaborative activities and which use collaborative learning techniques. In this paper, we have focused on the monitoring and adaptive processes in order to customize activities both within the game and the learning process to improve the results obtained from using these collaborative video games. To perform these processes we propose a mechanism based on the use of a set of specialized agents included in this architecture to collect relevant information and to process it in order to obtain the necessary adaptation actions.

N. Padilla-Zea, J. R. Lopez-Arcos, F. L. Gutiérrez-Vela, P. Paderewski, N. Medina-Medina
Quality Measures of Parameter Tuning for Aggregated Multi-Objective Temporal Planning

Parameter tuning is recognized today as a crucial ingredient when tackling an optimization problem. Several meta-optimization methods have been proposed to find the best parameter set for a given optimization algorithm and (set of) problem instances. When the objective of the optimization is some scalar quality of the solution given by the target algorithm, this quality is also used as the basis for the quality of parameter sets. But in the case of multi-objective optimization by aggregation, the set of solutions is given by several single-objective runs with different weights on the objectives, and it turns out that the hypervolume of the final population of each single-objective run might be a better indicator of the global performance of the aggregation method than the best fitness in its population. This paper discusses this issue on a case study in multi-objective temporal planning using the evolutionary planner DaE$_{\text{YAHSP}}$ and the meta-optimizer ParamILS. The results clearly show how ParamILS makes a difference between both approaches, and demonstrate that indeed, in this context, using the hypervolume indicator as ParamILS target is the best choice. Other issues pertaining to parameter tuning in the proposed context are also discussed.

M. R. Khouadjia, M. Schoenauer, V. Vidal, J. Dréo, P. Savéant
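Since the hypervolume indicator is central to the argument above, here is a small sketch of its computation for a two-objective minimization front: the area dominated by the front and bounded by a reference point, accumulated as a sum of rectangles. The front and reference point are illustrative; higher-dimensional hypervolume needs more involved algorithms.

```python
# 2-D hypervolume (minimization): area dominated by the non-dominated front and bounded
# by a reference point. Points are sorted by the first objective and the dominated area
# is accumulated as a sum of rectangles.
def hypervolume_2d(front, ref):
    pts = sorted(front)                       # ascending in the first objective
    volume, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                      # skip dominated points
            volume += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return volume

front = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]
print(hypervolume_2d(front, ref=(1.0, 1.0)))  # 0.16 + 0.15 + 0.06 = 0.37
```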
Evolutionary FSM-Based Agents for Playing Super Mario Game

Most game development over the years has focused on the technical part (graphics and sound), leaving artificial intelligence aside. However, computational intelligence is becoming more significant, leading to much research on how to provide non-playing characters with adapted and unpredictable behaviour so as to afford users a better gaming experience. This work applies strategies based on Genetic Algorithms mixed with behavioural models to obtain an agent (or bot) capable of autonomously completing different scenarios in a simulator of the Super Mario Bros. game. Specifically, the agent follows the rules of the Gameplay track of the Mario AI Championship. Different approaches have been analysed, combining Genetic Algorithms with Finite State Machines, yielding agents which can complete levels of different difficulties, playing much better than an expert human player.

R. M. Hidalgo-Bermúdez, M. S. Rodríguez-Domingo, A. M. Mora, P. García-Sánchez, Juan Julian Merelo, Antonio J. Fernández-Leiva
Identifying Key Algorithm Parameters and Instance Features Using Forward Selection

Most state-of-the-art algorithms for large-scale optimization problems expose free parameters, giving rise to combinatorial spaces of possible configurations. Typically, these spaces are hard for humans to understand. In this work, we study a model-based approach for identifying a small set of both algorithm parameters and instance features that suffices for predicting empirical algorithm performance well. Our empirical analyses on a wide variety of hard combinatorial problem benchmarks (spanning SAT, MIP, and TSP) show that—for parameter configurations sampled uniformly at random—very good performance predictions can typically be obtained based on just two key parameters, and that similarly, few instance features and algorithm parameters suffice to predict the most salient algorithm performance characteristics in the combined configuration/feature space. We also use these models to identify settings of these key parameters that are predicted to achieve the best overall performance, both on average across instances and in an instance-specific way. This serves as a further way of evaluating model quality and also provides a tool for further understanding the parameter space. We provide software for carrying out this analysis on arbitrary problem domains and hope that it will help algorithm developers gain insights into the key parameters of their algorithms, the key features of their instances, and their interactions.

Frank Hutter, Holger H. Hoos, Kevin Leyton-Brown
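As a simplified illustration of the forward-selection idea above, the sketch below greedily adds the input column (algorithm parameter or instance feature) that most improves the cross-validated fit of a random-forest performance model. The synthetic data, the model choice and all names are assumptions for illustration, not the authors' experimental setup.

```python
# Greedy forward selection: starting from an empty set, repeatedly add the input column
# that most improves cross-validated R^2 of a random-forest model predicting algorithm
# performance. Synthetic data stands in for real configuration/feature/runtime data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def forward_selection(X, y, max_features=3, cv=5):
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = {j: cross_val_score(RandomForestRegressor(n_estimators=50, random_state=0),
                                     X[:, selected + [j]], y, cv=cv).mean()
                  for j in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 6))
y = 5 * X[:, 2] + 2 * X[:, 4] ** 2 + 0.1 * rng.normal(size=200)  # only columns 2 and 4 matter
print(forward_selection(X, y))   # typically starts with [2, 4, ...]
```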
Using Racing to Automatically Configure Algorithms for Scaling Performance

Automated algorithm configuration has proven to be an effective approach for achieving improved performance of solvers for many computationally hard problems. Following our previous work, we consider the challenging situation where the kind of problem instances for which we desire optimised performance are too difficult to be used during the configuration process. In this work, we propose a novel combination of racing techniques with existing algorithm configurators to meet this challenge. We demonstrate that the resulting algorithm configuration protocol achieves better results than previous approaches and in many cases closely matches the bound on performance obtained using an oracle selector. An extended version of this paper can be found at www.cs.ubc.ca/labs/beta/Projects/Config4Scaling.

James Styles, Holger H. Hoos
Algorithm Selection for the Graph Coloring Problem

We present an automated algorithm selection method based on machine learning for the graph coloring problem (GCP). For this purpose, we identify 78 features for this problem and evaluate the performance of six state-of-the-art (meta)heuristics for the GCP. We use the obtained data to train several classification algorithms that are applied to predict, for a new instance, the algorithm with the highest expected performance. To achieve better performance for the machine learning algorithms, we investigate the impact of parameters, and evaluate different data discretization and feature selection methods. Finally, we evaluate our approach, which exploits the existing GCP techniques and the automated algorithm selection, and compare it with existing heuristic algorithms. Experimental results show that the GCP solver based on machine learning outperforms previous methods on benchmark instances.

Nysret Musliu, Martin Schwengerer
Batched Mode Hyper-heuristics

A primary role for hyper-heuristics is to control search processes based on moves generated by neighbourhood operators. Studies have shown that such hyper-heuristics can be effectively used, without modification, for solving unseen problem instances, not only from a particular domain but also from different problem domains. They hence provide a general-purpose software component that helps reduce the implementation time needed for effective search methods. However, hyper-heuristic studies have generally used time-contract algorithms (i.e. a fixed execution time) and also solved each problem instance independently. We consider the potential gains and challenges of a hyper-heuristic being able to treat a set of instances as a batch, to be completed within an overall joint execution time. In batched mode, the hyper-heuristic can freely divide the computational effort between the individual instances, and also exploit what it learns on one instance to help solve other instances.

Shahriar Asta, Ender Özcan, Andrew J. Parkes
Tuning Algorithms for Tackling Large Instances: An Experimental Protocol

Tuning stochastic local search algorithms for tackling large instances is difficult due to the large amount of CPU-time that testing algorithm configurations requires on such large instances. We define an experimental protocol that allows tuning an algorithm on small tuning instances and extrapolating from the obtained configurations a parameter setting that is suited for tackling large instances. The key element of our experimental protocol is that both the algorithm parameters that need to be scaled to large instances and the stopping time that is employed for the tuning instances are treated as free parameters. The scaling law of parameter values, and the computation time limits on the small instances are then derived through the minimization of a loss function. As a proof of concept, we tune an iterated local search algorithm and a robust tabu search algorithm for the quadratic assignment problem.

Franco Mascia, Mauro Birattari, Thomas Stützle
Automated Parameter Tuning Framework for Heterogeneous and Large Instances: Case Study in Quadratic Assignment Problem

This paper is concerned with the automated tuning of algorithm parameters to handle heterogeneous and large instances. We propose an automated parameter tuning framework with the capability of providing instance-specific parameter configurations. We report preliminary results on the Quadratic Assignment Problem (QAP) and show that our framework provides a significant improvement in solution quality with much smaller tuning computation time.

Lindawati, Zhi Yuan, Hoong Chuin Lau, Feida Zhu
Practically Desirable Solutions Search on Multi-Objective Optimization

This work investigates a method to search for practically desirable solutions by expanding the objective space with additional fitness functions associated with particular decision variables. The aim is to find solutions around preferred values of the chosen variables while searching for optimal solutions in the original objective space. To be practically desirable, solutions are constrained to lie within a certain distance from the instantaneous Pareto optimal set computed in the original objective space. Our experimental results show that the proposed method can effectively find practically desirable solutions.

Natsuki Kusuno, Hernán Aguirre, Kiyoshi Tanaka, Masataka Koishi
Oversized Populations and Cooperative Selection: Dealing with Massive Resources in Parallel Infrastructures

This paper proposes a new selection scheme for Evolutionary Algorithms (EAs) based on altruistic cooperation between individuals. Cooperation takes place every time an individual undergoes selection: the individual decreases its own fitness in order to improve the mating chances of worse individuals. On the one hand, the selection scheme guarantees that the genetic material of fitter individuals passes to subsequent generations, since individuals have to be selected first before they can decrease their fitnesses. On the other hand, the scheme restricts the number of times an individual can be selected so that it does not take over the entire population. We conduct an empirical study for a parallel EA version in which the cooperative selection scheme is shown to outperform binary tournament: both selection schemes yield the same solution quality, but cooperative selection always improves the time to solution.

Juan Luis Jiménez Laredo, Bernabe Dorronsoro, Carlos Fernandes, Juan Julian Merelo, Pascal Bouvry
Effects of Population Size on Selection and Scalability in Evolutionary Many-Objective Optimization

In this work we study the population size as a fraction of the size of the true Pareto optimal set and analyze its effects on selection and on the performance scalability of a conventional multi-objective evolutionary algorithm applied to many-objective optimization of small MNK-landscapes.

Hernán Aguirre, Arnaud Liefooghe, Sébastien Verel, Kiyoshi Tanaka
A Novel Feature Selection Method for Classification Using a Fuzzy Criterion

Although many classification methods take advantage of fuzzy set theory, the same cannot be said for feature reduction methods. In this paper we explore ideas related to the use of fuzzy sets and propose a novel fuzzy feature selection method tailored to the Regularized Generalized Eigenvalue Classifier (ReGEC). The method provides small and robust subsets of features that can be used for supervised classification. We show, using real-world datasets, that the performance of the ReGEC classifier on the selected features compares well with that obtained using all of them.

Maria Brigida Ferraro, Antonio Irpino, Rosanna Verde, Mario Rosario Guarracino
Backmatter
Metadata
Title
Learning and Intelligent Optimization
Edited by
Giuseppe Nicosia
Panos Pardalos
Copyright year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-44973-4
Print ISBN
978-3-642-44972-7
DOI
https://doi.org/10.1007/978-3-642-44973-4