
2014 | Book

Search Methodologies

Introductory Tutorials in Optimization and Decision Support Techniques

Edited by: Edmund K. Burke, Graham Kendall

Publisher: Springer US


About this book

The first edition of Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques was originally put together to offer a basic introduction to the various search and optimization techniques that students might need to use during their research, and this new edition continues this tradition. Search Methodologies has been expanded and brought completely up to date, including new chapters covering scatter search, GRASP, and very large neighborhood search. The chapter authors are drawn from across Computer Science and Operations Research and include some of the world’s leading authorities in their field. The book provides useful guidelines for implementing the methods and frameworks described and offers valuable tutorials to students and researchers in the field.

“As I embarked on the pleasant journey of reading through the chapters of this book, I became convinced that this is one of the best sources of introductory material on the search methodologies topic to be found. The book’s subtitle, “Introductory Tutorials in Optimization and Decision Support Techniques”, aptly describes its aim, and the editors and contributors to this volume have achieved this aim with remarkable success. The chapters in this book are exemplary in giving useful guidelines for implementing the methods and frameworks described.”
Fred Glover, Leeds School of Business, University of Colorado Boulder, USA

“[The book] aims to present a series of well written tutorials by the leading experts in their fields. Moreover, it does this by covering practically the whole possible range of topics in the discipline. It enables students and practitioners to study and appreciate the beauty and the power of some of the computational search techniques that are able to effectively navigate through search spaces that are sometimes inconceivably large. I am convinced that this second edition will build on the success of the first edition and that it will prove to be just as popular.”
Jacek Blazewicz, Institute of Computing Science, Poznan University of Technology and Institute of Bioorganic Chemistry, Polish Academy of Sciences

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
Search and optimization technologies underpin the development of decision support systems in a wide variety of applications across industry, commerce, science and government. There is a significant level of diversity among optimization and computational search applications. This can be evidenced by noting that a small selection of applications includes transport scheduling, bioinformatics optimization, personnel rostering, medical decision support and timetabling. Later in this introduction we present some recent survey papers for some of these areas and more examples of relevant applications are available in Pardalos and Resende (2002) and Leung (2004).
Edmund K. Burke, Graham Kendall
Chapter 2. Classical Techniques
Abstract
The purpose of this chapter is to provide an introduction to three classical search techniques—branch and bound, dynamic programming and network flow programming—all of which have a well established record in the solution of both classical and practical problems. All three have their origins in, or prior to, the 1950s and were the result of a surge in interest in the use of mathematical techniques for the solution of practical problems. The timing was in part due to developments in Operations Research in World War II, but was also spurred by increasing competition in the industrial sector and the promise of readily accessible computing power in the foreseeable future.
Kathryn A. Dowsland
Chapter 3. Integer Programming
Abstract
Over the last 25 years, the combination of faster computers, more reliable data and improved algorithms has resulted in the near-routine solution of many integer programs of practical interest. Integer programming models are used in a wide variety of applications, including scheduling, resource assignment, planning, supply chain design, auction design, and many, many others. In this tutorial, we outline some of the major themes involved in creating and solving integer programming models.
Robert Bosch, Michael Trick
Chapter 4. Genetic Algorithms
Abstract
Genetic algorithms (GAs) are search methods based on principles of natural selection and genetics (Fraser 1957; Bremermann 1958; Holland 1975). We start with a brief introduction to simple GAs and the associated terminology. GAs encode the decision variables of a search problem into finite-length strings of alphabets of certain cardinality. The strings, which are candidate solutions to the search problem, are referred to as chromosomes; the alphabets are referred to as genes, and the values of genes are called alleles. For example, in a problem such as the traveling salesman problem (TSP), a chromosome represents a route and a gene may represent a city. In contrast to traditional optimization techniques, GAs work with a coding of the parameters rather than the parameters themselves.
Kumara Sastry, David E. Goldberg, Graham Kendall
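The encoding idea in this abstract is easy to make concrete. The following is a minimal sketch, not taken from the chapter, of a binary-coded GA with tournament selection, one-point crossover and bit-flip mutation on the toy "one-max" problem; the population size, rates and fitness function are illustrative assumptions.

```python
import random

def one_max(chromosome):
    """Toy fitness: count the 1-alleles in a binary chromosome."""
    return sum(chromosome)

def tournament(pop, fitness, k=2):
    """Pick the fitter of k randomly chosen chromosomes."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """One-point crossover of two parent chromosomes."""
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def mutate(chromosome, rate=0.01):
    """Flip each gene independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in chromosome]

def simple_ga(length=30, pop_size=40, generations=50):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop, one_max),
                                tournament(pop, one_max)))
               for _ in range(pop_size)]
    return max(pop, key=one_max)

print(one_max(simple_ga()))  # best fitness found; at most 30 for this toy problem
```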
Chapter 5. Scatter Search
Abstract
Scatter search (SS) is an evolutionary approach for optimization. It has been applied to problems with continuous and discrete variables and with one or multiple objectives. The success of SS as an optimization technique is well documented in a constantly growing number of journal articles, book chapters (Glover et al. 2003a, 2003b, 2004; Laguna 2002) and a book (Laguna and Martí 2003).
Manuel Laguna
Chapter 6. Genetic Programming
Abstract
The goal of getting computers to automatically solve problems is central to artificial intelligence, machine learning, and the broad area encompassed by what Turing called machine intelligence (Turing 1948, 1950).
Riccardo Poli, John Koza
Chapter 7. Artificial Immune Systems
Abstract
The biological immune system is a robust, complex, adaptive system that defends the body from foreign pathogens. It is able to categorize all cells (or molecules) within the body as self or nonself substances. It does this with the help of a distributed task force that has the intelligence to take action from a local and also a global perspective using its network of chemical messengers for communication. There are two major branches of the immune system. The innate immune system is an unchanging mechanism that detects and destroys certain invading organisms, whilst the adaptive immune system responds to previously unknown foreign cells and builds a response to them that can remain in the body over a long period of time. This remarkable information processing biological system has caught the attention of computer science in recent years.
Uwe Aickelin, Dipankar Dasgupta, Feng Gu
Chapter 8. Swarm Intelligence
Abstract
The complex and often coordinated behavior of swarms fascinates not only biologists but also computer scientists. Bird flocking and fish schooling are impressive examples of coordinated behavior that emerges without central control. Social insect colonies show complex problem-solving skills arising from the actions and interactions of nonsophisticated individuals.
Daniel Merkle, Martin Middendorf
Chapter 9. Tabu Search
Abstract
This chapter is an introductory tutorial on tabu search. It emphasizes the basic mechanisms of this search method and illustrates their application on two classical combinatorial problems. Some more advanced concepts, like diversification and intensification, are also introduced. The chapter ends with useful tips for designing a successful tabu search implementation.
Michel Gendreau, Jean-Yves Potvin
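As a companion to the abstract, here is a minimal sketch, not the chapter's code, of the basic tabu mechanism: a short-term memory forbids recently used moves, with a simple aspiration criterion that overrides the memory when a move improves on the best solution so far. The bit-flip move, tabu tenure and toy objective are assumptions for illustration only.

```python
import random

def tabu_search(cost, n_bits=20, tenure=5, iterations=200):
    """Minimal tabu search over bit strings: a move flips one bit, and a
    recently flipped position stays tabu for `tenure` iterations."""
    current = [random.randint(0, 1) for _ in range(n_bits)]
    best, tabu = list(current), {}
    for it in range(iterations):
        candidates = []
        for i in range(n_bits):
            neighbour = list(current)
            neighbour[i] = 1 - neighbour[i]
            # Non-tabu moves, plus tabu moves that beat the best (aspiration).
            if tabu.get(i, 0) <= it or cost(neighbour) < cost(best):
                candidates.append((cost(neighbour), i, neighbour))
        c, i, current = min(candidates)   # best admissible move, even if worsening
        tabu[i] = it + tenure             # forbid flipping bit i again for a while
        if c < cost(best):
            best = list(current)
        # Diversification and intensification would be layered on top in practice.
    return best

# Toy objective: number of positions differing from a hidden target pattern.
target = [random.randint(0, 1) for _ in range(20)]
print(tabu_search(lambda s: sum(a != b for a, b in zip(s, target))))
```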
Chapter 10. Simulated Annealing
Abstract
Many problems in engineering, planning and manufacturing can be modeled as that of minimizing or maximizing a cost function over a finite set of discrete variables. This class of so-called combinatorial optimization problems has received much attention over the years, and major achievements have been made in its analysis (Ausiello et al.).
Emile Aarts, Jan Korst, Wil Michiels
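To illustrate the kind of combinatorial minimization described here, the following sketch, which is not from the chapter, applies simulated annealing with a geometric cooling schedule to a toy bit-string problem; the parameters and the objective are assumptions.

```python
import math
import random

def simulated_annealing(cost, n_bits=30, t0=2.0, alpha=0.95, iters=5000):
    """Minimize `cost` over binary strings with a geometric cooling schedule."""
    current = [random.randint(0, 1) for _ in range(n_bits)]
    best, temp = list(current), t0
    for _ in range(iters):
        i = random.randrange(n_bits)            # propose a single-bit flip
        neighbour = list(current)
        neighbour[i] = 1 - neighbour[i]
        delta = cost(neighbour) - cost(current)
        # Accept improving moves always; worsening moves with probability e^(-delta/T).
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = neighbour
            if cost(current) < cost(best):
                best = list(current)
        temp *= alpha                           # cool down
    return best

# Toy cost: number of positions differing from a hidden target string.
target = [random.randint(0, 1) for _ in range(30)]
print(simulated_annealing(lambda s: sum(a != b for a, b in zip(s, target))))
```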
Chapter 11. GRASP: Greedy Randomized Adaptive Search Procedures
Abstract
GRASP is a multi-start metaheuristic for combinatorial optimization problems, in which each iteration consists basically of two phases: construction and local search. The construction phase builds a feasible solution, whose neighborhood is investigated until a local minimum is found during the local search phase. The best overall solution is kept as the result. An intensification strategy based on path-relinking is frequently used to improve solution quality and to reduce computation times by exploring elite solutions previously found along the search. This chapter describes the basic components of GRASP, successful implementation strategies, and effective hybridizations with path-relinking and other metaheuristics. We also list some tricks to be used in the quest for good implementations. The bibliography is enriched by an account of relevant applications and by links to surveys, software, and additional sources of material.
Mauricio G. C. Resende, Celso C. Ribeiro
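Below is a minimal sketch of the two GRASP phases described above, greedy randomized construction via a restricted candidate list followed by local search, applied to an illustrative knapsack-style instance. The RCL rule, the swap-based local search and the data are assumptions rather than the authors' implementation, and path-relinking is omitted.

```python
import random

ITEMS = [(random.randint(1, 10), random.randint(1, 10)) for _ in range(20)]  # (value, weight)
CAPACITY = 40

def weight(sol):
    return sum(ITEMS[i][1] for i in sol)

def value(sol):
    return sum(ITEMS[i][0] for i in sol)

def construct(alpha=0.3):
    """Construction phase: repeatedly pick a random item from the restricted
    candidate list (the top alpha-fraction by value/weight ratio)."""
    solution, remaining = set(), set(range(len(ITEMS)))
    while True:
        feasible = [i for i in remaining if weight(solution) + ITEMS[i][1] <= CAPACITY]
        if not feasible:
            return solution
        feasible.sort(key=lambda i: ITEMS[i][0] / ITEMS[i][1], reverse=True)
        rcl = feasible[:max(1, int(alpha * len(feasible)))]
        chosen = random.choice(rcl)
        solution.add(chosen)
        remaining.remove(chosen)

def local_search(solution):
    """Local search phase: swap one chosen item for one unchosen item while it improves."""
    improved = True
    while improved:
        improved = False
        for i in list(solution):
            for j in set(range(len(ITEMS))) - solution:
                candidate = (solution - {i}) | {j}
                if weight(candidate) <= CAPACITY and value(candidate) > value(solution):
                    solution, improved = candidate, True
    return solution

def grasp(iterations=30):
    best = set()
    for _ in range(iterations):
        sol = local_search(construct())   # phase 1: construct, phase 2: improve
        if value(sol) > value(best):
            best = sol
    return best

best = grasp()
print(value(best), sorted(best))
```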
Chapter 12. Variable Neighborhood Search
Abstract
Variable neighborhood search (VNS) is a metaheuristic, or framework for building heuristics, which systematically exploits the idea of neighborhood change, both in the descent to local minima and in the escape from the valleys that contain them. In this tutorial we first present the ingredients of VNS, i.e. variable neighborhood descent (VND) and Reduced VNS (RVNS), followed by the basic and then the general scheme of VNS itself, which contain both of them. Extensions are presented, in particular Skewed VNS, which enhances the exploration of faraway valleys, and variable neighborhood decomposition search (VNDS), a two-level scheme for the solution of large instances of various problems. In each case, we present the scheme, some illustrative examples and the questions to be addressed in order to obtain an efficient implementation.
Pierre Hansen, Nenad Mladenović
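The basic VNS scheme sketched below follows the shake/descend/move-or-enlarge pattern described in the abstract; the bit-flip neighborhoods, the first-improvement descent and the toy objective are illustrative assumptions rather than the chapter's code.

```python
import random

def shake(solution, k):
    """k-th neighborhood: flip k randomly chosen bits."""
    s = list(solution)
    for i in random.sample(range(len(s)), k):
        s[i] = 1 - s[i]
    return s

def local_descent(solution, cost):
    """First-improvement descent over single-bit flips."""
    improved = True
    while improved:
        improved = False
        for i in range(len(solution)):
            neighbour = list(solution)
            neighbour[i] = 1 - neighbour[i]
            if cost(neighbour) < cost(solution):
                solution, improved = neighbour, True
    return solution

def basic_vns(cost, n_bits=30, k_max=5, iterations=100):
    x = [random.randint(0, 1) for _ in range(n_bits)]
    for _ in range(iterations):
        k = 1
        while k <= k_max:
            x_new = local_descent(shake(x, k), cost)  # shake, then descend
            if cost(x_new) < cost(x):
                x, k = x_new, 1   # move, and restart from the first neighborhood
            else:
                k += 1            # otherwise try a larger neighborhood
    return x

target = [random.randint(0, 1) for _ in range(30)]
print(basic_vns(lambda s: sum(a != b for a, b in zip(s, target))))
```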
Chapter 13. Very Large-Scale Neighborhood Search
Abstract
One of the central issues in developing neighborhood search techniques is defining the neighborhood. As a rule of thumb, larger neighborhoods contain higher-quality locally optimal solutions than smaller neighborhoods. However, larger neighborhoods also typically require more time to search than smaller neighborhoods. A neighborhood search algorithm is not practical if the neighborhoods cannot be searched efficiently. Thus, a rapid search algorithm is needed to make efficient use of large neighborhoods.
Douglas S. Altner, Ravindra K. Ahuja, Özlem Ergun, James B. Orlin
Chapter 14. Constraint Programming
Abstract
Constraint satisfaction problems are ubiquitous. A simple example that we will use throughout the first half of this chapter is the following scheduling problem: Choose employees A or B for each of three tasks, X, Y, Z, subject to the work rules that the same employee cannot carry out both tasks X and Y, the same employee cannot carry out both tasks Y and Z, and only employee B is allowed to carry out task Z. (Many readers will recognize this as a simple coloring problem.)
Eugene C. Freuder, Mark Wallace
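The toy scheduling problem in the abstract can be made executable in a few lines. The sketch below simply enumerates all assignments and checks the three work rules; a constraint programming solver would instead propagate constraints and backtrack, but the feasible assignment it finds is the same.

```python
from itertools import product

EMPLOYEES = ["A", "B"]
TASKS = ["X", "Y", "Z"]

def satisfies(assignment):
    """The three work rules from the example."""
    return (assignment["X"] != assignment["Y"]      # same employee cannot do both X and Y
            and assignment["Y"] != assignment["Z"]  # same employee cannot do both Y and Z
            and assignment["Z"] == "B")             # only employee B may carry out Z

solutions = [dict(zip(TASKS, combo))
             for combo in product(EMPLOYEES, repeat=len(TASKS))
             if satisfies(dict(zip(TASKS, combo)))]
print(solutions)   # the single feasible assignment: [{'X': 'B', 'Y': 'A', 'Z': 'B'}]
```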
Chapter 15. Multi-objective Optimization
Abstract
Multi-objective optimization is an integral part of optimization activities and has a tremendous practical importance, since almost all real-world optimization problems are ideally suited to be modeled using multiple conflicting objectives. The classical means of solving such problems were primarily focused on scalarizing multiple objectives into a single objective, whereas the evolutionary means have been to solve a multi-objective optimization problem as it is. In this chapter, we discuss the fundamental principles of multi-objective optimization, the differences between multi-objective optimization and single-objective optimization, and describe a few well-known classical and evolutionary algorithms for multi-objective optimization. Two application case studies reveal the importance of multi-objective optimization in practice. A number of research challenges are then highlighted. The chapter concludes by suggesting a few tricks of the trade and mentioning some key resources to the field of multi-objective optimization.
Kalyanmoy Deb
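To illustrate the classical scalarization approach mentioned in the abstract, the sketch below minimizes a weighted sum of two conflicting objectives over a small, made-up set of candidate designs and sweeps the weight; the data are assumptions, and the weighted-sum method is only one of the classical techniques the chapter covers.

```python
# Hypothetical discrete design space with two conflicting objectives to minimize:
# cost f1 and weight f2 (made-up numbers).
designs = {"d1": (1.0, 9.0), "d2": (3.0, 4.0), "d3": (5.0, 2.5),
           "d4": (8.0, 1.0), "d5": (6.0, 6.0)}

def weighted_sum(w):
    """Classical scalarization: minimize w*f1 + (1-w)*f2 for a fixed weight w."""
    return min(designs, key=lambda d: w * designs[d][0] + (1 - w) * designs[d][1])

# Sweeping the weight recovers several (not necessarily all) Pareto-optimal designs,
# one single-objective problem at a time.
print({round(w, 1): weighted_sum(w) for w in [0.1 * i for i in range(11)]})
```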
Chapter 16. Sharpened and Focused No Free Lunch and Complexity Theory
Abstract
This tutorial reviews basic concepts in complexity theory, as well as various No Free Lunch results and how these results relate to computational complexity. The tutorial explains the material in an informal fashion that illuminates the key ideas. The "No Free Lunch" theorems for search can be summarized informally as follows: averaged over all possible problems, no search algorithm outperforms any other.
Darrell Whitley
Chapter 17. Machine Learning
Abstract
Machine learning is a very active sub-field of artificial intelligence concerned with the development of computational models of learning. Machine learning is inspired by work in several disciplines: cognitive sciences, computer science, statistics, computational complexity, information theory, control theory, philosophy and biology. Simply speaking, machine learning is learning performed by a machine. From a computational point of view, machine learning refers to the ability of a machine to improve its performance based on previous results. From a biological point of view, machine learning is the study of how to create computers that learn from experience and modify their activity based on that learning, as opposed to traditional computers, whose activity will not change unless the programmer explicitly changes it.
Xin Yao, Yong Liu
Chapter 18. Fuzzy Reasoning
Abstract
The derivation of mathematical models that can efficiently describe real-world problems is generally an overwhelming or even impossible task, due to the complexity and inherent ambiguity of characteristics that these problems can possess. As L. A. Zadeh (1973), the founder of the theory of fuzzy sets, puts it
Costas P. Pappis, Constantinos I. Siettos
Chapter 19. Rough-Set-Based Decision Support
Abstract
In this chapter, we are concerned with the discovery of knowledge from data describing a decision situation. A decision situation is characterized by a set of states or examples, which relate the input with the output of the situation. The aim is to find concise knowledge patterns that summarize a decision situation, and that are useful for explanation of this situation, as well as for the prediction of future similar situations. They are particularly useful in such decision problems as technical or medical diagnostics, performance evaluation and risk assessment. A decision situation is described by a set of attributes, which we might also call properties, features, characteristics, etc. Such attributes may be concerned with either the input or output of a situation or, in other words, with either conditions or decisions. Within this chapter, we will refer to states or examples of a decision situation as objects. The goal of the chapter is to present a knowledge discovery paradigm for multi-attribute and multicriteria decision making, which is based upon the concept of rough sets. Rough set theory was introduced by Pawlak (1982, 1991). Since then, it has often proved to be an excellent mathematical tool for the analysis of a vague description of objects. The adjective vague (referring to the quality of information) is concerned with inconsistency or ambiguity. The rough set philosophy is based on the assumption that with every object of the universe U there is associated a certain amount of information (data, knowledge). This information can be expressed by means of a number of attributes. The attributes describe the object. Objects which have the same description are said to be indiscernible (similar) with respect to the available information. The indiscernibility relation thus generated constitutes the mathematical basis of rough set theory. It induces a partition of the universe into blocks of indiscernible objects, called elementary sets, which can then be used to build knowledge about a real or abstract world. The use of the indiscernibility relation results in information granulation.
Roman Słowiński, Salvatore Greco, Benedetto Matarazzo
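The indiscernibility relation described above is straightforward to compute. The following sketch, using a made-up decision table, partitions a small universe of objects into elementary sets by grouping objects with identical values on the chosen condition attributes; the attribute names and values are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical decision table: each object is described by condition attributes.
objects = {
    "o1": {"temperature": "high",   "headache": "yes"},
    "o2": {"temperature": "high",   "headache": "yes"},
    "o3": {"temperature": "normal", "headache": "no"},
    "o4": {"temperature": "high",   "headache": "no"},
}

def elementary_sets(objs, attributes):
    """Partition the universe into blocks of objects that are indiscernible
    (i.e. identical) with respect to the chosen attributes."""
    blocks = defaultdict(set)
    for name, description in objs.items():
        key = tuple(description[a] for a in attributes)
        blocks[key].add(name)
    return list(blocks.values())

print(elementary_sets(objects, ["temperature", "headache"]))
# e.g. [{'o1', 'o2'}, {'o3'}, {'o4'}] -- o1 and o2 are indiscernible
```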
Chapter 20. Hyper-heuristics
Abstract
Many practical problems are awkward to solve computationally. Whether you are trying to find any solution at all, or perhaps to find a solution that is optimal or close to optimal according to some criteria, exact methods can be unfeasibly expensive. In such cases it is common to resort to heuristic methods, which are typically derived from experience but are inexact or incomplete. For example, in packing and cutting problems a very simple heuristic might be to try to pack the items in some standardized way starting with the largest remaining one first, on the reasonable grounds that the big ones tend to cause the most trouble. But such heuristics can easily lead to suboptimal answers.
Peter Ross
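The packing heuristic mentioned in the abstract, and the idea of choosing among such low-level heuristics rather than searching the solution space directly, can be sketched as follows. The instance data, the two orderings used as low-level heuristics and the "pick whichever uses fewer bins" rule are illustrative assumptions; real hyper-heuristics select and adapt heuristics during the search rather than simply comparing them once.

```python
def first_fit(items, capacity):
    """Place each item, in the given order, into the first bin it fits in."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])      # open a new bin
    return bins

# Two simple low-level heuristics: different orderings fed to first-fit.
LOW_LEVEL_HEURISTICS = {
    "largest_first": lambda items: sorted(items, reverse=True),
    "as_given": lambda items: list(items),
}

def pick_best_heuristic(items, capacity):
    """A very simple selection hyper-heuristic: run each low-level heuristic
    and keep the one that uses the fewest bins on this instance."""
    results = {name: first_fit(order(items), capacity)
               for name, order in LOW_LEVEL_HEURISTICS.items()}
    return min(results.items(), key=lambda kv: len(kv[1]))

name, bins = pick_best_heuristic([4, 8, 1, 4, 2, 1], capacity=10)
print(name, bins)
```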
Chapter 21. Approximations and Randomization
Abstract
In response to the apparent intractability of NP-hard problems, approximation algorithms were introduced as a way to provide strong guarantees about the quality of solutions, without requiring exponential time to obtain them. The study of randomized algorithms, procedures that “flip coins” and are allowed to err with some probability, arose alongside approximation algorithms as a possible resource for circumventing intractability. We outline some of the various types of approximation algorithms that have been proposed, with a special focus on ones using randomization, and suggest further research directions in this area.
Carla P. Gomes, Ryan Williams
Chapter 22. Fitness Landscapes
Abstract
The idea of a fitness landscape is a common metaphor for understanding the process of local search methods for optimization. This chapter provides an introduction to some of the theory, and describes some practical applications derived from the landscape concept.
Colin R. Reeves
Backmatter
Metadata
Title
Search Methodologies
Edited by
Edmund K. Burke
Graham Kendall
Copyright year
2014
Publisher
Springer US
Electronic ISBN
978-1-4614-6940-7
Print ISBN
978-1-4614-6939-1
DOI
https://doi.org/10.1007/978-1-4614-6940-7