
This book is the first comprehensive tutorial on matheuristics. Matheuristics are based on mathematical extensions of previously known heuristics, mainly metaheuristics, and on original, area-specific approaches. This tutorial provides a detailed discussion of both types of contribution, presenting the pseudocode of over 40 algorithms, abundant literature references, and, for each case, a step-by-step description of a sample run on a common Generalized Assignment Problem example. C++ source code for all algorithms is available in an associated software repository.

### Chapter 1. The Generalized Assignment Problem

Abstract
The generalized assignment problem (GAP) asks to assign n clients to m servers in such a way that the assignment cost is minimized, provided that every client is assigned to a server and that the capacity of no server is exceeded. It is a problem that appears, by itself or as a subproblem, in a very large number of practical applications and has therefore been intensively studied. We use this problem as a test case for all algorithms presented in the text. This section reviews the state of the art of research on the GAP and shows the application of several mathematical programming techniques to GAP instances.
Vittorio Maniezzo, Marco Antonio Boschetti, Thomas Stützle
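The verbal definition in the abstract corresponds to the standard integer linear programming model of the GAP, with binary variables $x_{ij} = 1$ iff client $j$ is assigned to server $i$, assignment costs $c_{ij}$, resource requests $a_{ij}$, and server capacities $b_i$ (the usual notation in the GAP literature):

```latex
\begin{align*}
\min \quad & \sum_{i=1}^{m}\sum_{j=1}^{n} c_{ij}\, x_{ij} \\
\text{s.t.} \quad & \sum_{i=1}^{m} x_{ij} = 1, && j = 1,\dots,n && \text{(each client gets one server)}\\
& \sum_{j=1}^{n} a_{ij}\, x_{ij} \le b_i, && i = 1,\dots,m && \text{(server capacities)}\\
& x_{ij} \in \{0,1\}, && i = 1,\dots,m, && j = 1,\dots,n.
\end{align*}
```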

### Chapter 2. Automatic Design for Matheuristics

Abstract
Matheuristics have become widespread and effective methods for tackling the generalized assignment problem (GAP) and many other NP-hard problems. This book presents many such methods, ranging from metaheuristics and mathematical programming techniques to, mainly, true matheuristics. Although parameter settings receive little attention in those presentations, all of these methods ultimately rely on a good parameter setting. In this chapter we examine what constitutes a good parameter setting. After introducing parameters and the parameter configuration problem, we turn to automatic algorithm configuration and present three methods: ParamILS, SMAC, and irace. We then discuss what these methods are good for, covering the automated configuration of existing algorithms, their integration into an algorithm engineering process, and the design of, in part, completely new algorithms. The chapter ends with a small example of how irace can be used to automatically configure a simple iterated local search method for the GAP.
Vittorio Maniezzo, Marco Antonio Boschetti, Thomas Stützle
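The configuration problem itself is easy to state in code. The sketch below (all names, parameter ranges, and the synthetic cost surface are ours, purely illustrative) shows the naive baseline that ParamILS, SMAC, and irace improve upon: plain random search over a parameter space under a small evaluation budget. The real tools add racing over instances, local search in configuration space, or surrogate models.

```cpp
#include <cassert>
#include <limits>
#include <random>

// Hypothetical tunable parameters of some target algorithm.
struct Config { double coolingRate; int neighborhoodSize; };

// Stand-in for "run the target algorithm on a training instance and
// report its solution cost": a synthetic noisy response surface whose
// best setting is near coolingRate = 0.9, neighborhoodSize = 20.
double targetCost(const Config& c, std::mt19937& rng) {
    std::normal_distribution<double> noise(0.0, 0.05);
    double dr = c.coolingRate - 0.9;
    double dn = (c.neighborhoodSize - 20) / 20.0;
    return dr * dr + dn * dn + noise(rng);
}

// Naive random search over the configuration space: sample candidate
// settings, average a few noisy evaluations each, keep the best.
Config randomSearchConfigure(int budget, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> cr(0.5, 0.999);
    std::uniform_int_distribution<int> ns(1, 50);
    Config best{0.5, 1};
    double bestAvg = std::numeric_limits<double>::max();
    for (int s = 0; s < budget; ++s) {
        Config c{cr(rng), ns(rng)};
        double avg = 0.0;
        for (int r = 0; r < 5; ++r) avg += targetCost(c, rng);
        avg /= 5.0;
        if (avg < bestAvg) { bestAvg = avg; best = c; }
    }
    return best;
}
```

Averaging several evaluations per candidate is the crudest answer to the stochasticity of the target algorithm; racing methods instead stop evaluating clearly inferior candidates early.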

### Chapter 3. Single Solution Metaheuristics

Abstract
Metaheuristic approaches can be classified according to different criteria, one being the number of solutions evolved at each stage of the algorithm: a single solution or more than one. This chapter deals with metaheuristic algorithms that evolve one single solution; they are all enhancements of a basic local search procedure. Many different approaches could be included here; we choose six of them, namely Simulated Annealing, Tabu Search, GRASP, Iterated Local Search, Variable Neighborhood Search, and Ejection Chains, as representative of the class and as the most widely used in the matheuristic literature. Matheuristic extensions have in fact been reported for each of them, as well as for several of the approaches not covered here. The techniques used to include mathematical components in the basic metaheuristic structure tend to be general and independent of the specific single-solution metaheuristic they have been reported for.
Vittorio Maniezzo, Marco Antonio Boschetti, Thomas Stützle
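As a concrete illustration of the first of these schemes, here is a minimal simulated annealing for the GAP on a toy instance. The instance, parameter values, and function names are ours, not taken from the book (whose reference codes are in its repository):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Toy GAP instance (ours, 3 servers x 5 clients): cost[i][j] = cost of
// client j on server i, req[i][j] = resource it would use there,
// cap[i] = capacity of server i.
const int M = 3, N = 5;
const int cost[M][N] = {{6,2,5,4,3},{3,7,4,5,6},{5,4,2,3,7}};
const int req[M][N]  = {{3,2,4,3,2},{2,3,3,4,3},{4,2,2,3,4}};
const int cap[M]     = {7, 8, 7};

// Penalized objective: capacity overflow costs far more than any
// assignment, so feasible solutions always dominate infeasible ones.
double evalGap(const std::vector<int>& asg) {
    int c = 0, used[M] = {0, 0, 0};
    for (int j = 0; j < N; ++j) { c += cost[asg[j]][j]; used[asg[j]] += req[asg[j]][j]; }
    int over = 0;
    for (int i = 0; i < M; ++i) over += std::max(0, used[i] - cap[i]);
    return c + 100.0 * over;
}

// Plain simulated annealing: one move reassigns a random client to a
// random server; worsening moves are accepted with prob. exp(-delta/T).
std::vector<int> annealGap(unsigned seed, int iters = 20000) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> pickClient(0, N - 1), pickServer(0, M - 1);
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    std::vector<int> cur = {0, 1, 2, 0, 1};   // deterministic feasible start
    std::vector<int> best = cur;
    double T = 10.0, curCost = evalGap(cur), bestCost = curCost;
    for (int t = 0; t < iters; ++t) {
        int j = pickClient(rng), old = cur[j];
        cur[j] = pickServer(rng);
        double nc = evalGap(cur);
        if (nc <= curCost || u01(rng) < std::exp((curCost - nc) / T)) {
            curCost = nc;
            if (nc < bestCost) { bestCost = nc; best = cur; }
        } else {
            cur[j] = old;                      // reject: undo the move
        }
        T *= 0.9995;                           // geometric cooling
    }
    return best;
}
```

The penalty term lets the search cross infeasible regions while still preferring feasible solutions, and the run is reproducible for a given seed.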

### Chapter 4. Population-Based Metaheuristics

Abstract
A specialized thread of metaheuristic research, bordering and often overlapping with Artificial Intelligence, has studied heuristics that evolve whole sets of candidate solutions, often called “populations” of solutions. Genetic algorithms were among the first results, and following their success it became common to draw inspiration from natural phenomena when designing heuristics. This chapter considers three representative population-based metaheuristics, namely genetic algorithms, ant colony optimization, and scatter search (with path relinking), and shows how they have been complemented with mathematical programming modules to achieve better performance.
Vittorio Maniezzo, Marco Antonio Boschetti, Thomas Stützle
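As a sketch of the first of the three, a minimal genetic algorithm for the GAP might look as follows. The toy instance and all parameter choices are ours; the population is seeded with one simple feasible round-robin solution so that elitism guarantees a feasible result:

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <vector>

// Toy GAP instance (ours): costs, resource requests, capacities.
const int M = 3, N = 5;
const int cost[M][N] = {{6,2,5,4,3},{3,7,4,5,6},{5,4,2,3,7}};
const int req[M][N]  = {{3,2,4,3,2},{2,3,3,4,3},{4,2,2,3,4}};
const int cap[M]     = {7, 8, 7};

// Penalized cost: capacity overflow is penalized so heavily that
// feasible individuals always dominate infeasible ones.
double fitnessCost(const std::vector<int>& asg) {
    int c = 0, used[M] = {0, 0, 0};
    for (int j = 0; j < N; ++j) { c += cost[asg[j]][j]; used[asg[j]] += req[asg[j]][j]; }
    int over = 0;
    for (int i = 0; i < M; ++i) over += std::max(0, used[i] - cap[i]);
    return c + 100.0 * over;
}

// Generational GA: binary tournament selection, one-point crossover,
// per-gene mutation, and elitism (the best individual always survives).
std::vector<int> geneticGap(unsigned seed, int gens = 200, int popSize = 30) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> gene(0, M - 1), cut(1, N - 1), pick(0, popSize - 1);
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    auto byCost = [](const std::vector<int>& a, const std::vector<int>& b) {
        return fitnessCost(a) < fitnessCost(b);
    };
    std::vector<std::vector<int>> pop(popSize, std::vector<int>(N));
    for (auto& ind : pop)
        for (int& v : ind) v = gene(rng);
    pop[0] = {0, 1, 2, 0, 1};               // one feasible round-robin seed
    for (int g = 0; g < gens; ++g) {
        std::vector<std::vector<int>> next;
        next.push_back(*std::min_element(pop.begin(), pop.end(), byCost)); // elitism
        while ((int)next.size() < popSize) {
            auto tourney = [&]() -> const std::vector<int>& {  // binary tournament
                const auto& a = pop[pick(rng)];
                const auto& b = pop[pick(rng)];
                return fitnessCost(a) < fitnessCost(b) ? a : b;
            };
            std::vector<int> child = tourney();
            const std::vector<int>& p2 = tourney();
            for (int j = cut(rng); j < N; ++j) child[j] = p2[j]; // one-point crossover
            for (int j = 0; j < N; ++j)
                if (u01(rng) < 0.1) child[j] = gene(rng);        // mutation
            next.push_back(std::move(child));
        }
        pop = std::move(next);
    }
    return *std::min_element(pop.begin(), pop.end(), byCost);
}
```

A matheuristic variant would replace, for instance, the mutation or repair step with an exact reassignment of a few clients by a MIP module.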

### Chapter 5. Diving Heuristics

Abstract
Diving heuristics are methods that progressively enlarge a partial solution up to its possible completion, thus “jumping” into a solution with no way back. While this is common to all constructive heuristics, diving heuristics are usually characterized by working on the mathematical formulation of the problem to solve. Some contributions of this type proved remarkably effective and are included as standard components of general MIP solvers. This chapter reviews two well-known diving heuristics, namely Relaxation Induced Neighborhood Search (RINS) and Local Branching, and outlines a possible specialization to the Generalized Assignment Problem.
Vittorio Maniezzo, Marco Antonio Boschetti, Thomas Stützle
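The local branching idea can be made concrete for the GAP. Given an incumbent $\bar{x}$, the local branching constraint limits the Hamming distance from it; for a pure binary model it is

```latex
\Delta(x,\bar{x}) \;=\; \sum_{(i,j):\,\bar{x}_{ij}=1} \left(1 - x_{ij}\right) \;+\; \sum_{(i,j):\,\bar{x}_{ij}=0} x_{ij} \;\le\; k .
```

Since every feasible GAP solution assigns each client exactly once, the first sum alone already counts the clients that change server, so $\sum_{(i,j):\,\bar{x}_{ij}=1}(1-x_{ij}) \le k$ suffices: the solver may move at most $k$ clients away from their incumbent server.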

### Chapter 6. Very Large-Scale Neighborhood Search

Abstract
Very Large-Scale Neighborhood Search is not an algorithm or a class of algorithms, but rather a conceptual framework which can be used for solving combinatorial optimization problems. The approach “concentrates on neighborhood search algorithms where the size of the neighborhood is ‘very large’ with respect to the size of the input data.” Typically, the searched neighborhoods are exponentially large, but the term has come to be used more generally to describe algorithms working on neighborhoods that are too large to search explicitly in practice. The search of the large neighborhood has often been carried out by dedicated mathematical programming modules derived from different techniques. This chapter reports on the rich literature following this approach and proposes a possible implementation for the GAP.
Vittorio Maniezzo, Marco Antonio Boschetti, Thomas Stützle
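A minimal illustration of the idea on a toy GAP instance (ours): the neighborhood “reassign a chosen subset of K clients” has M^K members, exponential in K. Here it is small enough to search exhaustively; beyond enumeration, a mathematical programming module would search it instead:

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <vector>

// Toy GAP instance (ours): costs, resource requests, capacities.
const int M = 3, N = 5;
const int cost[M][N] = {{6,2,5,4,3},{3,7,4,5,6},{5,4,2,3,7}};
const int req[M][N]  = {{3,2,4,3,2},{2,3,3,4,3},{4,2,2,3,4}};
const int cap[M]     = {7, 8, 7};

// Penalized objective, as in the other sketches: overflow dominates.
double evalGap(const std::vector<int>& asg) {
    int c = 0, used[M] = {0, 0, 0};
    for (int j = 0; j < N; ++j) { c += cost[asg[j]][j]; used[asg[j]] += req[asg[j]][j]; }
    int over = 0;
    for (int i = 0; i < M; ++i) over += std::max(0, used[i] - cap[i]);
    return c + 100.0 * over;
}

// Repeatedly pick K random clients and search the "reassign these K
// clients" neighborhood exhaustively; the candidate set includes the
// current assignment, so the cost can only decrease.
std::vector<int> vlsnGap(std::vector<int> cur, unsigned seed, int rounds = 200) {
    const int K = 3, COMBOS = 27;            // 27 = M^K joint reassignments
    std::mt19937 rng(seed);
    std::vector<int> clients(N);
    for (int j = 0; j < N; ++j) clients[j] = j;
    for (int r = 0; r < rounds; ++r) {
        std::shuffle(clients.begin(), clients.end(), rng);  // random K-subset
        std::vector<int> best = cur;
        for (int code = 0; code < COMBOS; ++code) {
            std::vector<int> cand = cur;
            int c = code;                    // decode code as K base-M digits
            for (int t = 0; t < K; ++t) { cand[clients[t]] = c % M; c /= M; }
            if (evalGap(cand) < evalGap(best)) best = cand;
        }
        cur = best;                          // monotone improvement
    }
    return cur;
}
```

With K clients freed, the subproblem is itself a small GAP; in a real VLSN implementation this is where the exact module replaces the brute-force loop.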

### Chapter 7. Decomposition-Based Heuristics

Abstract
Decompositions are methods derived from the “divide et impera” principle, which dictates breaking a difficult problem up into smaller ones, solving each of the smaller problems separately, and ultimately recomposing the individual solutions into an overall one. Decompositions have long been applied to solve optimization problems, and they come in many different flavors, ranging from constraint programming to logical decomposition and from dynamic programming to linear decompositions, among many others. In this text, we are interested in heuristic approaches. We present three related mathematical decomposition techniques that have been used as seeds for classes of heuristic algorithms that clearly qualify as matheuristics, namely Lagrangian, Dantzig–Wolfe, and Benders decompositions. A detailed presentation and a run trace are provided for a Lagrangian heuristic.
Vittorio Maniezzo, Marco Antonio Boschetti, Thomas Stützle
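As a concrete instance of the Lagrangian case, with the usual GAP notation, relaxing the semi-assignment constraints $\sum_i x_{ij} = 1$ with multipliers $\lambda_j$ gives

```latex
L(\lambda) \;=\; \sum_{j=1}^{n} \lambda_j
\;+\; \sum_{i=1}^{m} \min\left\{ \sum_{j=1}^{n} \left(c_{ij}-\lambda_j\right) x_{ij}
\;:\; \sum_{j=1}^{n} a_{ij}\, x_{ij} \le b_i,\;\; x_{ij} \in \{0,1\} \right\},
```

which decomposes into $m$ independent 0-1 knapsack problems, one per server. For every choice of $\lambda$, $L(\lambda)$ is a lower bound on the optimal GAP value, and the multipliers are typically improved by subgradient optimization; a Lagrangian heuristic additionally repairs the knapsack solutions into feasible assignments.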

### Chapter 8. Corridor Method

Abstract
The corridor method is a general method originally proposed as a way to gain efficiency in dynamic programming search, possibly at the cost of optimality. It was later extended beyond DP to other exact optimization methods. The basic idea is to apply the exact method to successive restricted portions of the solution space of the given problem. The restriction is obtained by imposing exogenous constraints, which define local neighborhoods around points of interest. The corridor method is very general: the basic structure of the neighborhoods to explore is determined by the needs and requirements of the optimization method used to search them. The intrinsic reliance on mathematical programming makes this approach a prototypical case of matheuristic. A possible application to the GAP is detailed.
Vittorio Maniezzo, Marco Antonio Boschetti, Thomas Stützle
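One simple way to build such exogenous constraints for the GAP (our illustrative choice, not necessarily the one detailed in the chapter) is a variable-fixing corridor around the incumbent $\bar{x}$: pick a window $J$ of clients and add

```latex
x_{ij} = \bar{x}_{ij} \qquad \text{for all } i \in \{1,\dots,m\},\; j \notin J ,
```

so the exact method optimizes only over reassignments of the clients in $J$; iterating over different windows $J$ moves the corridor through the solution space.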

### Chapter 9. Kernel Search

Abstract
Kernel search is a purely matheuristic method that leverages MIP solvers to obtain heuristic, or possibly optimal, solutions of instances encoded as (mixed-)integer linear programming problems. It was first presented as a method for mixed-integer linear problems defined on binary variables modeling item selection, together with other integer or continuous variables related to the selected items, and was later extended to problems that do not involve a selection stage. The central idea of kernel search is to use some method, for example the LP relaxation, to identify a subset (named the kernel) of promising decision variables and then to partition the remaining ones into buckets, which are concatenated one at a time to the kernel in order to check whether improving solutions can be found. An example along these lines is proposed in this chapter for the GAP.
Vittorio Maniezzo, Marco Antonio Boschetti, Thomas Stützle
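The kernel/bucket loop can be sketched on a toy instance (ours). Lacking an LP solver in a few lines, the kernel is seeded with each client's cheapest server instead of LP-relaxation information, and the restricted problems are solved by enumeration rather than by a MIP solver; the overall control flow is what matters:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>

// Toy GAP instance (ours): costs, resource requests, capacities.
const int M = 3, N = 5;
const int cost[M][N] = {{6,2,5,4,3},{3,7,4,5,6},{5,4,2,3,7}};
const int req[M][N]  = {{3,2,4,3,2},{2,3,3,4,3},{4,2,2,3,4}};
const int cap[M]     = {7, 8, 7};

double evalGap(const std::vector<int>& asg) {   // penalized objective
    int c = 0, used[M] = {0, 0, 0};
    for (int j = 0; j < N; ++j) { c += cost[asg[j]][j]; used[asg[j]] += req[asg[j]][j]; }
    int over = 0;
    for (int i = 0; i < M; ++i) over += std::max(0, used[i] - cap[i]);
    return c + 100.0 * over;
}

struct Var { int i, j; };                       // one x_ij variable

// "Solve" the restricted problem in which client j may only use the
// servers in allowed[j]: here by enumeration, in real kernel search
// by a MIP solver on the restricted model.
void solveRestricted(const std::vector<std::vector<int>>& allowed,
                     std::vector<int>& best, double& bestCost) {
    std::vector<int> asg(N, 0);
    std::function<void(int)> rec = [&](int j) {
        if (j == N) {
            double c = evalGap(asg);
            if (c < bestCost) { bestCost = c; best = asg; }
            return;
        }
        for (int s : allowed[j]) { asg[j] = s; rec(j + 1); }
    };
    rec(0);
}

std::vector<int> kernelSearchGap() {
    // Kernel: the cheapest server of each client (our stand-in for the
    // LP-relaxation information used by real kernel search).
    std::vector<std::vector<int>> allowed(N);
    for (int j = 0; j < N; ++j) {
        int bi = 0;
        for (int i = 1; i < M; ++i) if (cost[i][j] < cost[bi][j]) bi = i;
        allowed[j].push_back(bi);
    }
    // Remaining variables, sorted by cost and split into buckets of 3.
    std::vector<Var> rest;
    for (int i = 0; i < M; ++i)
        for (int j = 0; j < N; ++j)
            if (allowed[j][0] != i) rest.push_back({i, j});
    std::sort(rest.begin(), rest.end(),
              [](const Var& a, const Var& b) { return cost[a.i][a.j] < cost[b.i][b.j]; });
    std::vector<int> best;
    double bestCost = 1e18;
    solveRestricted(allowed, best, bestCost);   // kernel-only solve
    for (std::size_t k = 0; k < rest.size(); k += 3) {
        // Concatenate one bucket to the kernel and re-solve; on an
        // improvement we keep the whole bucket (real kernel search keeps
        // only the variables the improving solution actually uses).
        std::vector<std::vector<int>> trial = allowed;
        for (std::size_t t = k; t < std::min(k + 3, rest.size()); ++t)
            trial[rest[t].j].push_back(rest[t].i);
        std::vector<int> sol;
        double c = 1e18;
        solveRestricted(trial, sol, c);
        if (c < bestCost) { bestCost = c; best = sol; allowed = trial; }
    }
    return best;
}
```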

### Chapter 10. Fore-and-Back

Abstract
Fore-and-Back, previously also denoted as Forward&Backward or simply F&B (though the algorithm presented in this chapter differs in some details from the previously published ones), is an extension of beam search that can improve its effectiveness and that, when run with no limits on computational resources, becomes an exact solution method. By design, however, Fore-and-Back is mainly concerned with heuristic solving, trying to quickly obtain high-quality solutions with little attention paid to optimality proofs. A significant characteristic of this method is that, despite being a primal-only method, it is able to compute bounds on the cost of completing partial solutions, and therefore to discard partial solutions from expansion, ultimately reducing the search space.
Vittorio Maniezzo, Marco Antonio Boschetti, Thomas Stützle
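A single forward pass of such a bound-pruned beam search can be sketched on a toy GAP instance (ours; the book's Fore-and-Back alternates forward and backward passes and computes its bounds differently):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Toy GAP instance (ours): costs, resource requests, capacities.
const int M = 3, N = 5;
const int cost[M][N] = {{6,2,5,4,3},{3,7,4,5,6},{5,4,2,3,7}};
const int req[M][N]  = {{3,2,4,3,2},{2,3,3,4,3},{4,2,2,3,4}};
const int cap[M]     = {7, 8, 7};

// Optimistic completion bound: the cheapest possible cost of every
// still-unassigned client, ignoring capacities, so it never
// overestimates the true completion cost.
double completionBound(int fromClient) {
    double b = 0.0;
    for (int j = fromClient; j < N; ++j) {
        int m = cost[0][j];
        for (int i = 1; i < M; ++i) m = std::min(m, cost[i][j]);
        b += m;
    }
    return b;
}

struct Node { std::vector<int> asg; int used[M]; double cost; };

// One forward (beam) pass: clients are assigned in index order; a node
// whose cost plus completion bound cannot beat the incumbent is
// discarded, and at most `width` nodes survive per level.
double beamSearchGap(int width) {
    double incumbent = 1e18;
    Node root; root.cost = 0.0;
    for (int i = 0; i < M; ++i) root.used[i] = 0;
    std::vector<Node> beam{root};
    for (int j = 0; j < N; ++j) {
        std::vector<Node> next;
        for (const Node& nd : beam)
            for (int i = 0; i < M; ++i) {
                if (nd.used[i] + req[i][j] > cap[i]) continue;  // infeasible
                Node ch = nd;
                ch.asg.push_back(i);
                ch.used[i] += req[i][j];
                ch.cost += cost[i][j];
                if (ch.cost + completionBound(j + 1) >= incumbent) continue; // prune
                if (j == N - 1) incumbent = std::min(incumbent, ch.cost);
                next.push_back(std::move(ch));
            }
        // Our bound is identical for all nodes of a level, so ranking by
        // cost is equivalent; a state-dependent bound would rank nodes
        // differently and prune more.
        std::sort(next.begin(), next.end(),
                  [](const Node& a, const Node& b) { return a.cost < b.cost; });
        if ((int)next.size() > width) next.resize(width);
        beam = std::move(next);
    }
    return incumbent;
}
```

With an unbounded width the pass degenerates into an exact bound-pruned enumeration, mirroring the abstract's remark that the method becomes exact when computational resources are not limited.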