
2009 | Book

Tuning Metaheuristics

A Machine Learning Perspective


Table of Contents

Frontmatter
Introduction
Abstract
Aiming at the best is one of the most fundamental traits of intelligence. In all activities, human beings tend to maximize benefit or, equivalently, to minimize inconvenience in some context-dependent sense. The pursuit of the best appears so connatural with the human mind that when we do not recognize it in somebody’s behavior we readily qualify him/her as irrational.
Quite naturally, over the centuries man has devoted much effort to the development of formal tools that help in spotting optimality. Mathematical optimization, as we know it nowadays, is the result of this effort: A collection of powerful methods and algorithms for tackling a large variety of different problems. In the contemporary world, optimization plays an extremely important role in all branches of engineering and therefore has huge economic relevance. During the last two decades in particular, the ever-increasing availability of computing power has further enlarged the set of optimization problems that can be effectively handled.
Mauro Birattari
Background and State-of-the-Art
Abstract
Although the problem of tuning metaheuristics is highly relevant in practical applications and notwithstanding the general acknowledgment of this fact, no well-defined and established methodology exists for tackling it. Indeed, the most striking element emerging from the literature is that the problem even lacks a clear and explicit definition. In our opinion, this indicates that a full understanding of its very nature is still missing. As a consequence, the analysis of the state of the art that we propose in this chapter will unavoidably be “in the negative”: It will be characterized more by what is missing from the literature than by what is present.
Mauro Birattari
Statement of the Tuning Problem
Abstract
Metaheuristics are general algorithmic templates whose components need to be instantiated and properly tuned in order to yield a fully functioning algorithm. We call any possible instantiation of this template a configuration of the metaheuristic, while by the expression tuning problem we denote the problem of properly instantiating the template, that is, of choosing among the set of possible components and assigning specific values to all free parameters.
As practitioners know, metaheuristics are in general quite sensitive to the values of their parameters, and careful fine-tuning typically improves performance significantly. This chapter is devoted to the definition of a formal framework in which the tuning problem can be cast and solved. Before entering into the technical details, an informal description of a typical situation in which the tuning problem arises is given in Section 3.1. In Section 3.2 we give the formal statement of the problem, and in Section 3.3 we discuss some possible extensions and variants. Section 3.4 summarizes and discusses the results presented in the chapter.
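To make the notion of a configuration concrete, the following is a minimal Python sketch of a finite candidate set obtained by discretizing a template's component choices and parameter ranges. The component and parameter names used here are purely hypothetical illustrations, not taken from the book.

from itertools import product

# Hypothetical metaheuristic template: one categorical component choice
# and two discretized numerical parameters.
template = {
    "local_search": ["first_improvement", "best_improvement"],  # component choice
    "tabu_tenure": [5, 10, 20],                                 # integer parameter
    "restart_prob": [0.0, 0.05, 0.1],                           # real-valued parameter
}

# A finite candidate set, such as the set Theta considered later in the book,
# can for instance be the full factorial combination of these values.
candidates = [dict(zip(template, values)) for values in product(*template.values())]
print(len(candidates), "candidate configurations")  # 2 * 3 * 3 = 18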
Mauro Birattari
F-Race for Tuning Metaheuristics
Abstract
This chapter is devoted to the definition of a number of algorithms for solving the tuning problem as posed in Section 3.2.
In the tuning problem, formally described by a tuple \(\langle{\Theta, I, P_I, P_C, t, {\mathcal C}, T}\rangle\), a finite set \(\Theta\) of candidate configurations is given together with a class \(I\) of instances. The instances appear sequentially, and at each step an instance \(i\) is generated with a probability defined by the measure \(P_I\). The cost of the best solution of \(i\) found by a candidate \(\theta\) in a time \(t(i)\) is a stochastic quantity described by the conditional measure \(P_C\). The tuning problem consists in finding, within a time \(T\), the best configuration according to the criterion \(\mathcal C\), when the measures \(P_I\) and \(P_C\) are unknown but a sample of instances can be obtained on which to test the candidate configurations.
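The following Python sketch illustrates the racing idea in the spirit of F-Race under stated assumptions: it presumes a user-supplied evaluate(theta, instance) function returning the cost obtained by a configuration on an instance within its time budget, and it replaces the book's rank-based elimination procedure with a simplified rule (a Friedman test followed by pairwise Wilcoxon comparisons against the current best). It is an illustration of the racing scheme, not the exact F-Race algorithm.

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

def race(candidates, instance_stream, evaluate, budget, min_instances=5, alpha=0.05):
    """Sequentially evaluate candidates on instances, discarding poor ones early."""
    surviving = list(candidates)
    results = {theta: [] for theta in surviving}  # observed costs per candidate
    evaluations = 0
    for instance in instance_stream:
        # Every surviving candidate is run on the newly drawn instance.
        for theta in surviving:
            results[theta].append(evaluate(theta, instance))
            evaluations += 1
        if evaluations >= budget or len(surviving) == 1:
            break
        n_seen = len(results[surviving[0]])
        if n_seen < min_instances or len(surviving) < 3:
            continue  # not enough evidence (or too few candidates) to test yet
        # Friedman test: can the candidates be distinguished at all?
        _, p = friedmanchisquare(*(results[t] for t in surviving))
        if p >= alpha:
            continue
        # Drop candidates that differ significantly from the current best.
        best = min(surviving, key=lambda t: np.mean(results[t]))
        keep = [best]
        for theta in surviving:
            if theta is best:
                continue
            _, p_pair = wilcoxon(results[theta], results[best], zero_method="zsplit")
            if p_pair >= alpha:
                keep.append(theta)
        surviving = keep
    return min(surviving, key=lambda t: np.mean(results[t]))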
Mauro Birattari
Experiments and Applications
Abstract
In this chapter we provide some composite evidence of the effectiveness of the racing approach, and in particular of the F-Race algorithm, for tuning metaheuristics and, more generally, stochastic algorithms.
In particular, Section 5.1 proposes a formal empirical analysis of F-Race on the problem of tuning metaheuristics. In this analysis, the algorithm is compared against other racing algorithms and against the brute-force approach. The latter, being the most straightforward approach to the tuning problem defined in Chapter 3, can be considered the natural baseline for assessing the performance of tuning methods. In order to assess F-Race, two tuning problems are considered. In the first, proposed in Section 5.1.1, the metaheuristic to be tuned is iterated local search and the optimization problem considered is the quadratic assignment problem. In the second, proposed in Section 5.1.2, the algorithm is ant colony optimization and the problem is the traveling salesman problem. On these problems, a formal empirical analysis is carried out: a number of algorithms, including F-Race itself, are compared under controlled conditions, the differences in their performance are assessed through appropriate statistical tests of significance, and a number of graphs are proposed that help visualize and understand the characteristics of the algorithms under study. The experimental methodology followed in the analysis is heavily influenced by the one commonly adopted within the machine learning community. A discussion of the implications and appropriateness of adopting this methodology is given in Chapter 6. A particularly valuable tool that has been employed is a so-called re-sampling method. This method, originally presented in the literature on nonparametric statistics, is rather well known among machine learning practitioners but, to the best of our knowledge, had never before been adopted in the empirical analysis of metaheuristics: its first application in the field is described in Birattari et al. (2002).
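Purely as an illustration of the general re-sampling idea (the exact procedure used in the book is the one described in Birattari et al. (2002)), a generic bootstrap-style sketch in Python might look as follows; the array layout and the helper name are hypothetical.

import numpy as np

def resample_expected_cost(costs, selected_idx, n_resamples=1000, seed=0):
    """Bootstrap estimate of the variability of a tuned configuration's expected cost.

    costs: array of shape (n_configs, n_instances) with the measured cost of
           each configuration on each test instance (hypothetical layout).
    selected_idx: index of the configuration returned by the tuning method.
    """
    rng = np.random.default_rng(seed)
    n_instances = costs.shape[1]
    estimates = []
    for _ in range(n_resamples):
        # Re-sample test instances with replacement and average the costs.
        instance_sample = rng.integers(0, n_instances, size=n_instances)
        estimates.append(costs[selected_idx, instance_sample].mean())
    return np.array(estimates)  # bootstrap distribution of the expected cost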
Mauro Birattari
Some Considerations on the Experimental Methodology
Abstract
Throughout this book, a number of issues are raised that cast a shadow on the experimental methodology currently adopted in the vast majority of works proposing an empirical evaluation of metaheuristics. Indeed, in the combinatorial optimization field it is common to encounter works in which some dubious procedure is adopted for assessing the algorithms under analysis and in which no clear statement is made on how the values of the parameters were obtained. Apparently, the need for a thorough revision of the research methodology in the optimization field is shared by many members of the research community, as testified by the interest raised in recent years by a number of methodological articles that have appeared in the main journals of the community: the works by Barr et al. (1995), Hooker (1995), and Rardin & Uzsoy (2001) published in the Journal of Heuristics are just a representative sample. In this chapter, we intend to complement the existing literature with the analysis of some fundamental issues that have so far remained under-explored. In particular, we focus on the problems connected with the tuning of metaheuristics. Indeed, tuning is a particularly critical element when metaheuristics are assessed and compared, since it involves many pitfalls that might invalidate the results. Among them, we study the risks of adopting the same set of instances both for tuning a metaheuristic and for then assessing its performance. In the context of this study, we introduce the concept of over-tuning, which is akin to over-fitting in supervised learning.
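A minimal sketch of the protocol implied here, with hypothetical helper names: tune on one set of instances and assess on a disjoint set, so that the reported performance is not inflated by over-tuning.

import random

def split_instances(instances, tuning_fraction=0.5, seed=0):
    """Randomly partition instances into a tuning set and a disjoint test set."""
    shuffled = list(instances)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * tuning_fraction)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical usage, reusing the race() and evaluate() sketches above:
# tuning_set, test_set = split_instances(all_instances)
# best_theta = race(candidates, tuning_set, evaluate, budget=1000)   # tune
# test_costs = [evaluate(best_theta, i) for i in test_set]           # assess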
Mauro Birattari
Conclusions
Abstract
Metaheuristics are a relatively new but already established approach to tackling combinatorial optimization problems. In the last decade, metaheuristics have been the focus of a significant amount of academic research and have been successfully employed in an increasing number of practical applications. In particular, more and more companies rely on metaheuristics for solving complex optimization problems in various domains. Indeed, the increasing availability of low-cost computing power is convincing companies of all sizes that many aspects of their activities, including design, planning, manufacturing, inventory, distribution, and management, would greatly profit from proper optimization. In this context, metaheuristics are a particularly appealing solution since their development costs are typically much lower than those of alternative approaches such as, for example, ad hoc heuristics.
Mauro Birattari
Backmatter
Metadata
Title
Tuning Metaheuristics
Author
Mauro Birattari
Copyright Year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-00483-4
Print ISBN
978-3-642-00482-7
DOI
https://doi.org/10.1007/978-3-642-00483-4
