
2005 | Book

Search Methodologies

Introductory Tutorials in Optimization and Decision Support Techniques

Edited by: Edmund K. Burke, Graham Kendall

Publisher: Springer US


About this Book

Search Methodologies is a tutorial survey of the methodologies that are at the confluence of several fields: Computer Science, Mathematics and Operations Research. It is a carefully structured and integrated treatment of the major technologies in optimization and search methodology. The book is made up of 19 chapters. The chapter authors are drawn from across Computer Science and Operations Research and include some of the world’s leading authorities in their field.

The result is a major state-of-the-art tutorial text on the main optimization and search methodologies available to researchers, students and practitioners across the applied sciences. It can be used as a textbook or as a reference work for learning and applying these methodologies to a wide range of today’s problems, and it is written by some of the best-known authors in the field.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
The investigation of search and optimization technologies underpins the development of decision support systems in a wide variety of applications across industry, commerce, science and government. There is a significant level of diversity among optimization and computational search applications. This can be evidenced by noting that a very small selection of such applications includes transport scheduling, bioinformatics optimization, personnel rostering, medical decision support and timetabling. More examples of relevant applications can be seen in (2002), (2004) and (1997). The exploration of decision support methodologies is a crucially important research area. The potential impact of more effective and more efficient decision support methodologies is enormous and can be illustrated by considering just a few of the potential benefits: more efficient production scheduling can lead to significant financial savings; higher quality personnel rosters lead to a more contented workforce; more efficient healthcare scheduling will lead to faster treatment (which could save lives); more effective cutting/packing systems can reduce waste; better delivery schedules can reduce fuel emissions.
Edmund K. Burke, Graham Kendall
Chapter 2. Classical Techniques
Abstract
The purpose of this chapter is to provide an introduction to three classical search techniques, branch and bound, dynamic programming and network flow programming, all of which have a well-established record in the solution of both classical and practical problems. All three have their origins in, or prior to, the 1950s and were the result of a surge in interest in the use of mathematical techniques for the solution of practical problems. The timing was in part due to developments in Operations Research in World War II, but was also spurred on by increasing competition in the industrial sector and the promise of readily accessible computing power in the foreseeable future. A fourth technique belonging to this class, that of Integer Programming, is covered in Chapter 3. Given their age, it is not surprising that they no longer generate the same level of excitement as the more modern approaches covered elsewhere in this volume, and as a result they are frequently overlooked. This effect is reinforced as many comparable texts omit them, presumably because they have already been covered by a range of sources aimed at a wide variety of different abilities and backgrounds. In this volume we provide an introduction to these well-established classics alongside their more modern counterparts. Although they have shortcomings, many of which the more recent approaches were designed to address, they still have a role to play both as stand-alone techniques and as important ingredients in hybridized solution methods.
Kathryn A. Dowsland
Chapter 3. Integer Programming
Abstract
Over the last 20 years, the combination of faster computers, more reliable data, and improved algorithms has resulted in the near-routine solution of many integer programs of practical interest. Integer programming models are used in a wide variety of applications, including scheduling, resource assignment, planning, supply chain design, auction design, and many, many others. In this tutorial, we outline some of the major themes involved in creating and solving integer programming models.
Robert Bosch, Michael Trick
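
As an illustration (not from the chapter), the sketch below states a tiny 0/1 integer program, a knapsack model with invented data, and solves it by brute-force enumeration. A real instance would go to a dedicated IP solver, but the model's ingredients (binary variables, a linear constraint, a linear objective) are all visible:

    # A tiny 0/1 integer program (a knapsack model with invented data),
    # solved by exhaustive enumeration of the binary variables x_i.
    from itertools import product

    values = [10, 13, 7, 11]    # objective coefficients c_i
    weights = [4, 6, 3, 5]      # constraint coefficients a_i
    capacity = 10               # right-hand side b

    best_x, best_val = None, float("-inf")
    for x in product((0, 1), repeat=len(values)):                 # x_i in {0, 1}
        if sum(w * xi for w, xi in zip(weights, x)) <= capacity:  # a.x <= b
            val = sum(v * xi for v, xi in zip(values, x))         # maximize c.x
            if val > best_val:
                best_x, best_val = x, val

    print(best_x, best_val)     # (1, 1, 0, 0) with value 23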
Chapter 4. Genetic Algorithms
Abstract
Genetic algorithms (GAs) are search methods based on principles of natural selection and genetics (Fraser, 1957; Bremermann, 1958; Holland, 1975). We start with a brief introduction to simple genetic algorithms and associated terminology.
Kumara Sastry, David Goldberg, Graham Kendall
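
As an illustration (not from the chapter), here is a minimal generational GA for the toy "onemax" problem, maximizing the number of 1s in a bitstring; the population size, operator rates and the problem itself are illustrative assumptions:

    # A minimal generational GA for "onemax": maximize the number of 1s.
    import random

    N, POP, GENS, PMUT = 30, 40, 60, 1 / 30

    def fitness(ind):
        return sum(ind)

    def tournament(pop, k=3):                # tournament selection
        return max(random.sample(pop, k), key=fitness)

    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
    for gen in range(GENS):
        nxt = []
        while len(nxt) < POP:
            p1, p2 = tournament(pop), tournament(pop)
            cut = random.randrange(1, N)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < PMUT) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt

    print(max(map(fitness, pop)))            # typically at or near the optimum, 30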
Chapter 5. Genetic Programming
Abstract
The goal of getting computers to automatically solve problems is central to artificial intelligence, machine learning, and the broad area encompassed by what Turing called “machine intelligence” (Turing, 1948, 1950). In his talk entitled AI: Where It Has Been and Where It Is Going, machine learning pioneer Arthur Samuel stated the main goal of the fields of machine learning and artificial intelligence: “[T]he aim [is] ... to get machines to exhibit behavior, which if done by humans, would be assumed to involve the use of intelligence” (Samuel, 1983). Genetic programming is a systematic method for getting computers to automatically solve a problem starting from a high-level statement of what needs to be done. Genetic programming is a domain-independent method that genetically breeds a population of computer programs to solve a problem. Specifically, genetic programming iteratively transforms a population of computer programs into a new generation of programs by applying analogs of naturally occurring genetic operations. This process is illustrated in Figure 5.1.
John R. Koza, Riccardo Poli
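
As an illustration (not from the chapter), the sketch below breeds a population of expression trees toward the target function x^2 + x using truncation selection and subtree crossover. The function set, parameters and target are illustrative assumptions; real GP systems add mutation, size limits and richer primitives:

    # A minimal tree-based GP sketch for symbolic regression; the target
    # is x**2 + x, trees are nested lists, fitness is total absolute error.
    import copy, random

    FUNCS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

    def rand_tree(depth=3):                  # grow a random expression tree
        if depth == 0 or random.random() < 0.3:
            return random.choice(['x', 1.0])
        return [random.choice(list(FUNCS)), rand_tree(depth - 1), rand_tree(depth - 1)]

    def evaluate(t, x):
        if t == 'x':
            return x
        if isinstance(t, float):
            return t
        return FUNCS[t[0]](evaluate(t[1], x), evaluate(t[2], x))

    def slots(t):                            # (parent, index) of every subtree slot
        out = []
        if isinstance(t, list):
            for i in (1, 2):
                out.append((t, i))
                out.extend(slots(t[i]))
        return out

    def subtrees(t):
        yield t
        if isinstance(t, list):
            yield from subtrees(t[1])
            yield from subtrees(t[2])

    def crossover(a, b):                     # graft a random subtree of b
        a = copy.deepcopy(a)                 # into a random slot of a copy of a
        if slots(a):
            parent, i = random.choice(slots(a))
            parent[i] = copy.deepcopy(random.choice(list(subtrees(b))))
        return a

    def error(t):
        return sum(abs(evaluate(t, x) - (x * x + x)) for x in range(-5, 6))

    pop = [rand_tree() for _ in range(300)]
    for gen in range(30):
        pop.sort(key=error)                  # truncation selection
        pop = pop[:60] + [crossover(random.choice(pop[:60]), random.choice(pop[:60]))
                          for _ in range(240)]
    print(min(map(error, pop)))              # often reaches 0.0, rediscovering the target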
Chapter 6. Tabu Search
Abstract
Over the last 15 years, hundreds of papers presenting applications of tabu search, a heuristic method originally proposed by Glover (1986), to various combinatorial problems have appeared in the operations research literature: see, for example, (1997), (1993b), (1996), (2002), (2002) and (1999). In several cases, the methods described provide solutions very close to optimality and are among the most effective, if not the best, to tackle the difficult problems at hand. These successes have made tabu search extremely popular among those interested in finding good solutions to the large combinatorial problems encountered in many practical settings. Several papers, book chapters, special issues and books have surveyed the rich tabu search literature (a list of some of the most important references is provided at the end of this chapter). In spite of this abundant literature, there still seem to be many researchers who, while they are eager to apply tabu search to new problem settings, find it difficult to properly grasp the fundamental concepts of the method, its strengths and its limitations, and to come up with effective implementations. The purpose of this chapter is thus to focus on the fundamental concepts of tabu search.
Michel Gendreau, Jean-Yves Potvin
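
As an illustration (not from the chapter), here is a minimal tabu search sketch: a one-flip neighborhood over bitstrings, a short-term tabu list of recently flipped positions, and an aspiration criterion that overrides tabu status for a new best solution. The toy objective and tenure are illustrative assumptions:

    # Tabu search on bitstrings: best admissible one-flip neighbor each step,
    # even if worsening; recently flipped positions are tabu for TENURE steps.
    import random

    N, TENURE, ITERS = 20, 5, 200

    def f(x):                        # toy objective to maximize
        ones = sum(x)
        return ones + 10 if ones == N else ones   # bonus at the all-ones point

    x = [random.randint(0, 1) for _ in range(N)]
    best, best_val = x[:], f(x)
    tabu = {}                        # position -> iteration until which it is tabu

    for it in range(ITERS):
        candidates = []
        for i in range(N):
            y = x[:]
            y[i] ^= 1
            val = f(y)
            # admissible if not tabu, or if it beats the best found (aspiration)
            if tabu.get(i, -1) < it or val > best_val:
                candidates.append((val, i, y))
        val, i, x = max(candidates)  # move to the best admissible neighbor
        tabu[i] = it + TENURE        # forbid flipping position i back for a while
        if val > best_val:
            best, best_val = x[:], val

    print(best_val)                  # 30 once the all-ones solution is found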
Chapter 7. Simulated Annealing
Abstract
Many problems in engineering, planning and manufacturing can be modeled as that of minimizing or maximizing a cost function over a finite set of discrete variables. This class of so-called combinatorial optimization problems has received much attention over the last two decades and major achievements have been made in its analysis (Papadimitriou and Steiglitz, 1982). One of these achievements is the separation of this class into two subclasses. The first one contains the problems that can be solved efficiently, i.e. problems for which algorithms are known that solve each instance to optimality in polynomial time. Examples are linear programming, matching and network problems. The second subclass contains the problems that are notoriously hard, formally referred to as NP-hard (see Chapter 11 for more details). For an NP-hard problem it is generally believed that no algorithm exists that solves each instance in polynomial time. Consequently, there are instances that require superpolynomial or exponential time to be solved to optimality. Many known problems belong to this class and probably the best known example is the traveling salesman problem. The above-mentioned distinction is supported by a general framework in computer science called complexity theory; for a detailed introduction and an extensive listing of provably hard problems see Garey and Johnson (1979) and Ausiello et al. (1999).
Emile Aarts, Jan Korst, Wil Michiels
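
As an illustration (not from the chapter), and picking up the abstract's mention of the traveling salesman problem, here is a minimal simulated annealing sketch on a small random TSP instance; the 2-opt move, cooling schedule and parameters are illustrative assumptions:

    # Simulated annealing on a random 20-city TSP: 2-opt moves, geometric
    # cooling, and the Metropolis acceptance rule exp(-delta / T).
    import math, random

    pts = [(random.random(), random.random()) for _ in range(20)]

    def length(tour):
        return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    tour = list(range(len(pts)))
    cur = length(tour)
    T = 1.0
    while T > 1e-3:
        i, j = sorted(random.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse a segment
        delta = length(cand) - cur
        # always accept improvements; accept worsenings with prob. exp(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / T):
            tour, cur = cand, cur + delta
        T *= 0.999                                             # geometric cooling

    print(round(cur, 3))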
Chapter 8. Variable Neighborhood Search
Abstract
Variable Neighborhood Search (VNS) is a recent metaheuristic, or framework for building heuristics, which systematically exploits the idea of neighborhood change, both in the descent to local minima and in the escape from the valleys which contain them. In this tutorial we first present the ingredients of VNS, i.e. Variable Neighborhood Descent (VND) and Reduced VNS (RVNS), followed by the basic and then the general scheme of VNS itself, which contains both of them. Extensions are presented, in particular Skewed VNS (SVNS), which enhances exploration of far-away valleys, and Variable Neighborhood Decomposition Search (VNDS), a two-level scheme for the solution of large instances of various problems. In each case, we present the scheme, some illustrative examples and questions to be addressed in order to obtain an efficient implementation.
Pierre Hansen, Nenad Mladenović
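
As an illustration (not from the chapter), the sketch below follows the basic VNS scheme on a toy bitstring problem: shake the incumbent in neighborhood N_k (k random flips), apply local descent, then move and reset k on improvement, or otherwise try a larger neighborhood. The problem and sizes are illustrative assumptions:

    # Basic VNS on a toy bitstring problem: shake in N_k (k random flips),
    # descend with one-flip moves, reset k on improvement, else enlarge k.
    import random

    N, KMAX = 30, 5

    def f(x):                          # toy objective to maximize
        return sum(x)

    def descent(x):                    # simple one-flip improvement descent
        improved = True
        while improved:
            improved = False
            for i in range(N):
                y = x[:]
                y[i] ^= 1
                if f(y) > f(x):
                    x, improved = y, True
        return x

    x = descent([random.randint(0, 1) for _ in range(N)])
    k = 1
    for _ in range(100):
        y = x[:]
        for i in random.sample(range(N), k):   # shaking in neighborhood N_k
            y[i] ^= 1
        y = descent(y)
        if f(y) > f(x):
            x, k = y, 1                # move there and return to the smallest N_k
        else:
            k = k % KMAX + 1           # try the next, larger neighborhood

    print(f(x))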
Chapter 9. Constraint Programming
Abstract
Constraint satisfaction problems are ubiquitous. A simple example that we will use throughout the first half of this chapter is the following scheduling problem: Choose employees A or B for each of three tasks, X, Y, Z, subject to the work rules that the same employee cannot carry out both tasks X and Y, the same employee cannot carry out both tasks Y and Z, and only employee B is allowed to carry out task Z. (Many readers will recognize this as a simple coloring problem.)
Eugene C. Freuder, Mark Wallace
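
As an illustration (not from the chapter), the abstract's scheduling problem is small enough to solve by plain enumeration; a real constraint programming system would instead interleave search with constraint propagation, pruning values as each constraint is posted:

    # The abstract's toy CSP by plain enumeration: employees 'A'/'B' for
    # tasks X, Y, Z under the three work rules.
    from itertools import product

    solutions = [
        (x, y, z)
        for x, y, z in product('AB', repeat=3)
        if x != y           # the same employee cannot do both X and Y
        and y != z          # the same employee cannot do both Y and Z
        and z == 'B'        # only employee B may carry out task Z
    ]
    print(solutions)        # [('B', 'A', 'B')], the unique feasible assignment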
Chapter 10. Multi-Objective Optimization
Abstract
Many real-world search and optimization problems are naturally posed as non-linear programming problems having multiple objectives. Due to the lack of suitable solution techniques, such problems were artificially converted into single-objective problems and solved. The difficulty arose because such problems give rise to a set of trade-off optimal solutions (known as Pareto-optimal solutions), instead of a single optimum solution. It then becomes important to find not just one Pareto-optimal solution, but as many of them as possible. This is because any two such solutions constitute a trade-off among the objectives, and users would be in a better position to make a choice when many such trade-off solutions are unveiled.
Kalyanmoy Deb
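
As an illustration (not from the chapter), the sketch below makes Pareto dominance concrete for a two-objective minimization problem, filtering an invented set of candidate points down to its non-dominated (trade-off) set:

    # Pareto dominance for two objectives (both minimized): keep the points
    # that no other point dominates.
    def dominates(a, b):    # a is no worse everywhere and strictly better somewhere
        return all(p <= q for p, q in zip(a, b)) and any(p < q for p, q in zip(a, b))

    points = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (7, 5), (9, 1)]
    pareto = [p for p in points if not any(dominates(q, p) for q in points)]
    print(pareto)           # [(1, 9), (2, 7), (4, 4), (6, 3), (9, 1)]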
Chapter 11. Complexity Theory and the No Free Lunch Theorem
Abstract
This tutorial reviews basic concepts in complexity theory, as well as various No Free Lunch results and how these results relate to computational complexity. The tutorial explains the basic concepts in an informal fashion that illuminates the key ideas. No Free Lunch theorems for search can be summarized by the following result: For all possible performance measures, no search algorithm is better than another when its performance is averaged over all possible discrete functions.
Darrell Whitley, Jean Paul Watson
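
As an illustration (not from the chapter), the No Free Lunch statement can be checked exhaustively on a tiny search space: averaged over all eight functions f: {0,1,2} -> {0,1}, two different fixed search orders need exactly the same mean number of evaluations to first see a function's maximum. The performance measure and the two orders are illustrative assumptions:

    # Average two fixed search orders over ALL functions f: {0,1,2} -> {0,1};
    # the measure is evaluations until the function's maximum is first seen.
    from itertools import product

    order_a = [0, 1, 2]              # algorithm A enumerates left to right
    order_b = [2, 0, 1]              # algorithm B uses a different fixed order

    def evals_to_max(order, f):
        target = max(f)
        for step, point in enumerate(order, start=1):
            if f[point] == target:
                return step

    for order in (order_a, order_b):
        total = sum(evals_to_max(order, f) for f in product((0, 1), repeat=3))
        print(total / 8)             # 1.5 for both: neither order is better on average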
Chapter 12. Machine Learning
Abstract
Machine learning is a very active sub-field of artificial intelligence concerned with the development of computational models of learning. Machine learning is inspired by work in several disciplines: cognitive sciences, computer science, statistics, computational complexity, information theory, control theory, philosophy, and biology. Simply speaking, machine learning is learning by a machine. From a computational point of view, machine learning refers to the ability of a machine to improve its performance based on previous results. From a biological point of view, machine learning is the study of how to create computers that will learn from experience and modify their activity based on that learning, as opposed to traditional computers, whose activity will not change unless the programmer explicitly changes it.
Xin Yao, Yong Liu
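
As an illustration (not from the chapter) of a machine improving its performance from previous results, here is a classic perceptron learning the linearly separable AND function; its per-epoch error count falls to zero as it adjusts its weights after each mistake. The data and learning rate are toys:

    # A perceptron learning AND: the per-epoch error count shrinks to zero
    # as the weights are corrected after each mistake.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b, lr = [0.0, 0.0], 0.0, 0.1

    for epoch in range(10):
        errors = 0
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            if err:
                errors += 1
                w[0] += lr * err * x1    # classic perceptron update rule
                w[1] += lr * err * x2
                b += lr * err
        print(epoch, errors)             # error count reaches 0 within a few epochs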
Chapter 13. Artificial Immune Systems
Abstract
The biological immune system is a robust, complex, adaptive system that defends the body from foreign pathogens. It is able to categorize all cells (or molecules) within the body as self or nonself cells. It does this with the help of a distributed task force that has the intelligence to take action from a local and also a global perspective using its network of chemical messengers for communication. There are two major branches of the immune system. The innate immune system is an unchanging mechanism that detects and destroys certain invading organisms, whilst the adaptive immune system responds to previously unknown foreign cells and builds a response to them that can remain in the body over a long period of time. This remarkable information processing biological system has caught the attention of computer science in recent years.
Uwe Aickelin, Dipankar Dasgupta
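
As an illustration (not from the chapter), here is a sketch of negative selection, one of the classic artificial immune algorithms built on the self/nonself idea above: random detectors that match any "self" string are censored, and the survivors flag anomalous strings. The bitstrings, Hamming-distance matching rule and sizes are illustrative assumptions:

    # Negative selection on bitstrings: censor random detectors that match
    # "self" (within Hamming distance RADIUS); survivors detect nonself.
    import random

    L, RADIUS, N_DETECTORS = 10, 1, 50

    def hamming(a, b):
        return sum(p != q for p, q in zip(a, b))

    def rand_string():
        return tuple(random.randint(0, 1) for _ in range(L))

    self_set = {rand_string() for _ in range(8)}       # the protected "self"

    detectors = []
    while len(detectors) < N_DETECTORS:
        d = rand_string()
        if all(hamming(d, s) > RADIUS for s in self_set):  # censoring phase
            detectors.append(d)

    def is_nonself(x):                                 # detection phase
        return any(hamming(d, x) <= RADIUS for d in detectors)

    probe = rand_string()
    print(probe in self_set, is_nonself(probe))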
Chapter 14. Swarm Intelligence
Abstract
The complex and often coordinated behavior of swarms fascinates not only biologists but also computer scientists. Bird flocking and fish schooling are impressive examples of coordinated behavior that emerges without central control. Social insect colonies show complex problem-solving skills arising from the actions and interactions of nonsophisticated individuals.
Daniel Merkle, Martin Middendorf
Chapter 15. Fuzzy Reasoning
Abstract
The derivation of mathematical models that can efficiently describe real-world problems is most of the time an overwhelming or even impossible task due to the complexity and the inherent ambiguity of characteristics that these problems may possess. As Zadeh (1973), the founder of the theory of fuzzy sets, puts it, as the complexity of a system increases, our ability to make precise and yet significant statements about its behavior diminishes until a threshold is reached beyond which precision and significance (or relevance) become almost mutually exclusive characteristics.
Costas P. Pappis, Constantinos I. Siettos
Chapter 16. Rough Set Based Decision Support
Abstract
In this chapter, we are concerned with discovering knowledge from data. The aim is to find concise classification patterns that agree with situations that are described by the data. Such patterns are useful for explanation of the data and for the prediction of future situations. They are particularly useful in such decision problems as technical diagnostics, performance evaluation and risk assessment. The situations are described by a set of attributes, which we might also call properties, features, characteristics, etc. Such attributes may be concerned with either the input or output of a situation. These situations may refer to states, examples, etc. Within this chapter, we will refer to them as objects. The goal of the chapter is to present a knowledge discovery paradigm for multi-attribute and multi-criteria decision making, which is based upon the concept of rough sets. Rough set theory was introduced by Pawlak (1982, 1991). Since then, it has often proved to be an excellent mathematical tool for the analysis of a vague description of objects. The adjective vague (referring to the quality of information) is concerned with inconsistency or ambiguity. The rough set philosophy is based on the assumption that with every object of the universe U there is associated a certain amount of information (data, knowledge). This information can be expressed by means of a number of attributes. The attributes describe the object. Objects which have the same description are said to be indiscernible (similar) with respect to the available information.
Roman Slowinski, Salvatore Greco, Benedetto Matarazzo
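
As an illustration (not from the chapter), the sketch below computes indiscernibility classes from a tiny invented information table, then the rough lower and upper approximations of a target set X: the lower approximation unions the classes fully contained in X, the upper those merely intersecting it:

    # Indiscernibility classes from a tiny information table, then the rough
    # lower and upper approximations of a target set X.
    from collections import defaultdict

    table = {1: ('high', 'yes'), 2: ('high', 'yes'), 3: ('low', 'no'),
             4: ('low', 'yes'), 5: ('low', 'yes'), 6: ('high', 'no')}
    X = {1, 2, 4}                      # the concept to approximate

    classes = defaultdict(set)         # same description => same class
    for obj, desc in table.items():
        classes[desc].add(obj)

    lower = {o for c in classes.values() if c <= X for o in c}   # surely in X
    upper = {o for c in classes.values() if c & X for o in c}    # possibly in X
    print(sorted(lower), sorted(upper))   # [1, 2] and [1, 2, 4, 5]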
Chapter 17. Hyper-Heuristics
Abstract
The term “hyper-heuristics” is fairly new, although the notion has been hinted at in papers from time to time since the 1960s (e.g. Crowston et al., 1963). The key idea is to devise new algorithms for solving problems by combining known heuristics in ways that allow each to compensate, to some extent, for the weaknesses of others. They might be thought of as heuristics to choose heuristics. They are methods which work with a search space of heuristics. In this sense, they differ from most applications of metaheuristics (see Glover and Kochenberger, 2003) which usually work with search spaces of solutions. One of the main goals of research in this area is to devise algorithms that are fast and exhibit good performance across a whole family of problems, presumably because the algorithms address some shared features of the whole set of problems.
Peter Ross
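
As an illustration (not from the chapter), here is a minimal selection hyper-heuristic: the high-level loop searches over a set of low-level heuristics, choosing among them in proportion to adaptive scores that reward recent improvement. The bitstring problem and the three low-level heuristics are illustrative assumptions:

    # A selection hyper-heuristic: pick a low-level heuristic in proportion
    # to adaptive scores, apply it, reward improvement, discount failure.
    import random

    N = 40

    def f(x):                             # toy objective to maximize
        return sum(x)

    def flip_random(x):
        y = x[:]; y[random.randrange(N)] ^= 1; return y

    def flip_first_zero(x):
        y = x[:]
        if 0 in y:
            y[y.index(0)] = 1
        return y

    def shuffle_segment(x):
        y = x[:]; i = random.randrange(N - 5)
        seg = y[i:i + 5]; random.shuffle(seg); y[i:i + 5] = seg
        return y

    heuristics = [flip_random, flip_first_zero, shuffle_segment]
    scores = [1.0] * len(heuristics)      # the high-level search state

    x = [random.randint(0, 1) for _ in range(N)]
    for _ in range(300):
        i = random.choices(range(len(heuristics)), weights=scores)[0]
        y = heuristics[i](x)
        if f(y) >= f(x):                  # accept non-worsening moves
            if f(y) > f(x):
                scores[i] += 1.0          # credit the improving heuristic
            x = y
        else:
            scores[i] = max(0.1, scores[i] * 0.9)   # gently discount failures

    print(f(x), [round(s, 1) for s in scores])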
Chapter 18. Approximation Algorithms
Abstract
Most interesting real-world optimization problems are very challenging from a computational point of view. In fact, quite often, finding an optimal or even a near-optimal solution to a large-scale optimization problem may require computational resources far beyond what is practically available. There is a substantial body of literature exploring the computational properties of optimization problems by considering how the computational demands of a solution method grow with the size of the problem instance to be solved (see e.g. Chapter 11 or Aho et al., 1979). A key distinction is made between problems that require computational resources that grow polynomially with problem size versus those for which the required resources grow exponentially. Problems in the former category are called efficiently solvable, whereas problems in the latter category are deemed intractable, because the exponential growth in required computational resources renders all but the smallest instances of such problems unsolvable.
Carla P. Gomes, Ryan Williams
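
As an illustration (not from the chapter), a classic example of trading optimality for tractability is the maximal-matching heuristic for vertex cover: repeatedly take both endpoints of an uncovered edge. Since any cover must contain at least one endpoint of each chosen edge, the result is provably within a factor of 2 of optimal; the example graph is invented:

    # Maximal-matching 2-approximation for vertex cover: for each edge not
    # yet covered, take BOTH endpoints; the result is at most twice optimal.
    def vertex_cover_2approx(edges):
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover |= {u, v}
        return cover

    edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
    print(vertex_cover_2approx(edges))   # {1, 2, 3, 4}; an optimal cover is {1, 4}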
Chapter 19. Fitness Landscapes
Abstract
One of the most commonly used metaphors to describe the process of heuristic methods such as local search in solving a combinatorial optimization problem is that of a “fitness landscape”. However, describing exactly what we mean by such a term is not as easy as might be assumed. Indeed, many cases of its usage in both the biological and optimization literature reveal a rather serious lack of understanding.
Colin R. Reeves
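
As an illustration (not from the chapter) of why the term needs care, the sketch below counts the local optima of one fixed function under two different move operators: the counts differ, so the landscape is a property of the function together with the neighborhood, not of the function alone. The function and sizes are toys:

    # The same function, two neighborhoods, different numbers of local optima:
    # a landscape is defined by the function AND the move operator.
    from itertools import product

    N = 6

    def f(x):                              # an arbitrary fixed function
        return sum(x) % 4

    def one_flip(x):                       # neighbors at Hamming distance 1
        for i in range(N):
            y = list(x); y[i] ^= 1; yield tuple(y)

    def two_flip(x):                       # neighbors at Hamming distance 2
        for i in range(N):
            for j in range(i + 1, N):
                y = list(x); y[i] ^= 1; y[j] ^= 1; yield tuple(y)

    for nbhd in (one_flip, two_flip):
        optima = sum(1 for x in product((0, 1), repeat=N)
                     if all(f(y) <= f(x) for y in nbhd(x)))
        print(nbhd.__name__, optima)       # prints 21 and 36: different landscapes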
Backmatter
Metadata
Title
Search Methodologies
Edited by
Edmund K. Burke
Graham Kendall
Copyright Year
2005
Publisher
Springer US
Electronic ISBN
978-0-387-28356-2
Print ISBN
978-0-387-23460-1
DOI
https://doi.org/10.1007/0-387-28356-0
