
About this book

This book is an updated effort to summarize the trending topics and new research lines in solving dynamic problems with metaheuristics. An analysis of the current state of the art in solving complex problems quickly draws a clear picture: problems that change over time, and that contain noise and uncertainties in their definition, are becoming increasingly important. The tools to face these problems are still to be built, since existing techniques are either slow or inefficient at tracking the many global optima that such problems present to the solver.

Thus, this book is devoted to several of the most important advances in solving dynamic problems. Metaheuristics are the most popular tools to this end, and the book shows how to best use genetic algorithms, particle swarm optimization, ant colonies, immune systems, variable neighborhood search, and many other bio-inspired techniques. Neural network solutions are also considered.

Both theory and practice are addressed in the chapters of the book. The mathematical background and methodological tools for solving this new class of problems are included, together with applications. From the applications point of view, not only academic benchmarks are dealt with, but also real-world applications in logistics and bioinformatics are discussed here. The book thus covers theory and practice, as well as discrete and continuous dynamic optimization, with the aim of creating a fresh and comprehensive volume. It is targeted at both beginners and experienced practitioners in dynamic optimization, since the chapters were devised so that a wide audience can profit from their contents. We hope to offer a single source of up-to-date information on dynamic optimization, an inspiring and attractive research domain that has emerged in recent years and is here to stay.

Table of contents

Frontmatter

Performance Analysis of Dynamic Optimization Algorithms

In recent years, dynamic optimization problems have attracted growing interest from the stochastic optimization community, and several approaches have been developed to address them. The goal of this chapter is to present the different tools and benchmarks used to evaluate the performance of the proposed algorithms. Indeed, testing and comparing the performance of a new algorithm against competing algorithms is an important and difficult step in the development process. The existence of benchmarks facilitates this step; however, the success of these benchmarks depends on their adoption by the community. In this chapter, we cite many test problems (focusing only on the continuous case) and present in detail the most widely used one, the moving peaks benchmark, and the most recently proposed one, the generalized approach to constructing benchmark problems for dynamic optimization (also called the GDBG benchmark).
Amir Nakib, Patrick Siarry
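To make the benchmark concrete, the following is a minimal sketch of a moving-peaks-style landscape built from cone-shaped peaks, where the fitness of a point is the value of the highest peak at that point. The class name, parameter values and the simplified change rule are illustrative assumptions, not the exact scenarios used in the chapter.

```python
import numpy as np

class MovingPeaks:
    """Minimal sketch of a moving-peaks-style landscape (cone peaks)."""

    def __init__(self, n_peaks=10, dim=5, bounds=(0.0, 100.0),
                 shift_severity=1.0, rng=None):
        self.rng = rng or np.random.default_rng(0)
        lo, hi = bounds
        self.bounds = bounds
        self.shift_severity = shift_severity
        self.pos = self.rng.uniform(lo, hi, size=(n_peaks, dim))
        self.height = self.rng.uniform(30.0, 70.0, size=n_peaks)
        self.width = self.rng.uniform(1.0, 12.0, size=n_peaks)

    def evaluate(self, x):
        # Cone-shaped peaks: fitness is the value of the highest peak at x.
        d = np.linalg.norm(self.pos - np.asarray(x, dtype=float), axis=1)
        return float(np.max(self.height - self.width * d))

    def change(self):
        # Move every peak by a random vector of length `shift_severity`
        # and jitter heights and widths (simplified dynamics).
        n_peaks, dim = self.pos.shape
        v = self.rng.normal(size=(n_peaks, dim))
        v = self.shift_severity * v / np.linalg.norm(v, axis=1, keepdims=True)
        self.pos = np.clip(self.pos + v, *self.bounds)
        self.height += self.rng.normal(scale=7.0, size=n_peaks)
        self.width = np.clip(self.width + self.rng.normal(scale=1.0, size=n_peaks), 1.0, None)
```

A change between evaluation periods is simulated by calling `change()`, which shifts every peak and perturbs its height and width.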

Quantitative Performance Measures for Dynamic Optimization Problems

Measuring the performance of algorithms on dynamic optimization problems (DOPs) presents some important differences compared to static ones. One of the main problems is the loss of solution quality as the optimization process advances in time. The objective in DOPs is to track the optima as the landscape changes; however, the algorithm may drift progressively further from the optimum after several changes have occurred. The main goal of this chapter is to present some difficulties that may appear while reporting results on DOPs, and to introduce two new performance tools to overcome these problems. We propose a measure based on linear regression to quantify fitness performance degradation, and analyze our results on the moving peaks problem using several measures existing in the literature as well as our degradation measure. We also propose a second measure based on the area below the curve defined by some population attribute at each generation (e.g., the best-of-generation fitness), which is analyzed in order to see how it can help in understanding the algorithmic search behavior.
Briseida Sarasola, Enrique Alba
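The two ideas described above can be sketched as follows; the function names, the per-period input and the rectangle-rule area are assumptions for illustration, not the chapter's exact formulations.

```python
import numpy as np

def degradation_slope(best_per_period):
    """Fit a line to the best fitness achieved in each change period;
    a negative slope (for maximisation) suggests the algorithm is
    progressively losing track of the optimum."""
    periods = np.arange(1, len(best_per_period) + 1)
    slope, _intercept = np.polyfit(periods, best_per_period, deg=1)
    return slope

def area_below_curve(best_of_generation):
    """Area below the best-of-generation curve, approximated with the
    rectangle rule (unit spacing between generations); an aggregate view
    of the search behaviour over the whole run."""
    return float(np.sum(best_of_generation))
```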

Dynamic Function Optimization: The Moving Peaks Benchmark

Many practical, real-world applications have dynamic features. If the changes in the fitness function of an optimization problem are moderate, a complete restart of the optimization algorithm may not be warranted. In those cases, it is meaningful to apply optimization algorithms that can accommodate change. In the recent past, many researchers have contributed algorithms suited for dynamic problems. To facilitate the comparison between different approaches, the Moving Peaks (MP) function was devised. This chapter reviews all known optimization algorithms that have been tested on the dynamic MP problem. The majority of these approaches are nature-inspired. The results of the best-performing solutions based on the MP benchmark are directly compared and discussed. In the concluding remarks, the main characteristics of good approaches for dynamic optimization are summarised.
Irene Moser, Raymond Chiong

SRCS: A Technique for Comparing Multiple Algorithms under Several Factors in Dynamic Optimization Problems

Performance comparison among several algorithms is an essential task. This is already a difficult process when dealing with stationary problems, where the researcher usually tests many algorithms, with several parameter settings, on different problems. The situation is even more complex when dynamic optimization problems are considered, since additional dynamism-specific configurations (e.g. severity, frequency and type of the changes) should also be analyzed. In this work, we present a technique to compact those results in a visual way, improving their understanding and providing an easy way to detect algorithms' behavioral patterns. However, as with every form of compression, it implies the loss of part of the information. The pros and cons of this technique are explained, with special emphasis on some statistical issues that commonly arise when dealing with algorithms of a stochastic nature.
Ignacio G. del Amo, David A. Pelta

Dynamic Combinatorial Optimization Problems: A Fitness Landscape Analysis

The role of representations and variation operators in evolutionary computation is relatively well understood for the case of static optimization problems thanks to a variety of empirical studies as well as some theoretical results. In the field of evolutionary dynamic optimization very few studies exist to date that explicitly analyse the impact of these elements on the algorithm’s performance. In this chapter we utilise the fitness landscape metaphor to review previous work on evolutionary dynamic combinatorial optimization. This review highlights some of the properties unique to dynamic combinatorial optimization problems and paves the way for future research related to these important issues.
Philipp Rohlfshagen, Xin Yao

Two Approaches for Single and Multi-Objective Dynamic Optimization

Many real-world optimization problems involve objectives, constraints, and parameters which constantly change with time. However, to avoid complications, such problems are usually treated as static optimization problems, demanding a priori knowledge of the pattern of change. If the problem is optimized in its totality for the entire duration of the application, the procedure can be computationally expensive, involving a large number of variables. Despite some studies on the use of evolutionary algorithms in solving single-objective dynamic optimization problems, there has been only lukewarm interest in solving dynamic multi-objective optimization problems. In this paper, we discuss two different approaches to dynamic optimization for single- as well as multi-objective problems. Both methods are discussed and their working principles are illustrated by applying them to different practical optimization problems. The off-line optimization approach, which builds a knowledge base that can then be used for on-line applications, is applicable when the change in the problem is significant. On the other hand, an off-line approach that determines a minimal time window for treating the problem in a static manner is more appropriate for problems having a slow change. Further approaches and applications of these two techniques remain important future work in making the on-line optimization task a reality in the coming years.
Kalyanmoy Deb

Self-Adaptive Differential Evolution for Dynamic Environments with Fluctuating Numbers of Optima

In this chapter, we introduce SADynPopDE, a self-adaptive multi-population DE-based optimization algorithm aimed at dynamic optimization problems in which the number of optima in the environment fluctuates over time. We compare the performance of SADynPopDE to that of the two algorithms upon which it is based: DynDE and DynPopDE. DynDE extends DE for dynamic environments by utilizing multiple sub-populations which are encouraged to converge to distinct optima by means of exclusion. DynPopDE extends DynDE by using competitive population evaluation to selectively evolve sub-populations, using a midpoint check during exclusion to determine whether sub-populations are indeed converging to the same optimum, dynamically spawning and removing sub-populations, and using a penalty factor to aid the stagnation detection process. Introducing self-adaptive control parameters into DynPopDE results in a more effective algorithm and removes the need to fine-tune the DE crossover and scale factors.
Mathys C. du Plessis, Andries P. Engelbrecht
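The exclusion mechanism inherited from DynDE can be sketched as follows. This is a simplified illustration assuming maximisation; the midpoint check, the spawning and removal of sub-populations, and the penalty factor mentioned above are omitted, and all names are assumptions.

```python
import numpy as np

def apply_exclusion(subpops, fitness_fn, exclusion_radius, bounds, rng):
    """If the best members of two sub-populations lie within
    `exclusion_radius` of each other, reinitialise the sub-population
    whose best fitness is worse, so that different sub-populations
    track different optima. `subpops` is a list of (pop_size, dim) arrays."""
    lo, hi = bounds
    bests = []
    for sp in subpops:
        fits = np.array([fitness_fn(ind) for ind in sp])
        i_best = int(np.argmax(fits))
        bests.append((sp[i_best], float(fits[i_best])))
    reinitialised = set()
    for i in range(len(subpops)):
        for j in range(i + 1, len(subpops)):
            if i in reinitialised or j in reinitialised:
                continue  # skip pairs whose stored best is no longer valid
            (xi, fi), (xj, fj) = bests[i], bests[j]
            if np.linalg.norm(xi - xj) < exclusion_radius:
                worse = i if fi < fj else j
                # Reinitialise the worse sub-population at random.
                subpops[worse] = rng.uniform(lo, hi, size=subpops[worse].shape)
                reinitialised.add(worse)
    return subpops
```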

Dynamic Multi-Objective Optimization Using PSO

Dynamic multi-objective optimization problems (DMOOPs) occur in many real-world situations. These optimization problems do not have a single goal to solve, but many goals that are in conflict with one another: improvement in one goal leads to deterioration of another. Therefore, when solving a DMOOP, an algorithm attempts to find the set of optimal trade-off solutions, referred to as the Pareto-optimal front. Each DMOOP also has a number of boundary constraints that limit the search space. When the particles of a particle swarm optimization (PSO) algorithm move outside the search space, an approach should be followed to manage the violation of the boundary constraints. This chapter investigates the effect of various approaches to manage boundary constraint violations on the performance of the dynamic Vector Evaluated Particle Swarm Optimization (DVEPSO) algorithm when solving DMOOPs. Furthermore, the performance of DVEPSO is compared against that of three other state-of-the-art dynamic multi-objective optimization algorithms.
Mardé Helbig, Andries P. Engelbrecht
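For illustration, the sketch below shows three boundary-handling strategies commonly discussed for PSO (clamping, random re-sampling and reflection); these are generic examples and not necessarily the exact approaches compared in the chapter.

```python
import numpy as np

def repair_position(x, lo, hi, strategy="clamp", rng=None):
    """Repair a particle position that violates scalar per-dimension
    bounds [lo, hi], using one of several common strategies."""
    rng = rng or np.random.default_rng()
    x = np.array(x, dtype=float)  # copy so the caller's array is untouched
    if strategy == "clamp":
        # Snap violating components onto the nearest boundary.
        return np.clip(x, lo, hi)
    if strategy == "random":
        # Re-sample violating components uniformly inside the bounds.
        out = (x < lo) | (x > hi)
        x[out] = rng.uniform(lo, hi, size=int(out.sum()))
        return x
    if strategy == "reflect":
        # Mirror violating components back into the feasible region.
        x = np.where(x < lo, 2 * lo - x, x)
        x = np.where(x > hi, 2 * hi - x, x)
        return np.clip(x, lo, hi)  # guard against very large violations
    raise ValueError(f"unknown strategy: {strategy}")
```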

Ant Colony Based Algorithms for Dynamic Optimization Problems

The use of metaheuristic approaches to deal with dynamic optimization problems has been widely studied, with evolutionary techniques being the most widely used and assessed. Nevertheless, successful applications of other nature-inspired metaheuristics, e.g., ant algorithms, have also shown their applicability to dynamic optimization problems, but have received limited attention until now. Unlike perturbative techniques, ant algorithms use a set of agents that evolve in an environment, each constructing a solution. They cooperate by means of asynchronous communication based on numerical information laid on the environment. This environment is often modeled by a graph, a formalism with great expressiveness that is especially well suited for dynamic optimization problems. A solution can be a structure such as a subgraph, a route, a path, a spanning tree, or even a partition of vertices. In this chapter we present a general overview of the most relevant works regarding the application of ant colony based algorithms to dynamic optimization problems. We also highlight the mechanisms used in different implementations found in the literature, and thus show the potential of this kind of algorithm for research in this area.
Guillermo Leguizamón, Enrique Alba
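One reaction mechanism frequently described in the literature on ant algorithms for dynamic problems is to pull the pheromone values back towards their initial level when a change is detected, keeping part of the old information while restoring exploration. The sketch below is a generic illustration under that assumption, not a specific algorithm from the chapter.

```python
import numpy as np

def react_to_change(pheromone, tau0, reset_strength=0.5):
    """Partially reset a pheromone matrix towards its initial value tau0
    after a detected change. reset_strength in [0, 1]: 0 keeps all old
    information, 1 amounts to a full pheromone restart."""
    return (1.0 - reset_strength) * np.asarray(pheromone) + reset_strength * tau0
```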

Elastic Registration of Brain Cine-MRI Sequences Using MLSDO Dynamic Optimization Algorithm

In this chapter, we propose to use a dynamic optimization algorithm to assess the deformations of the wall of the third cerebral ventricle in brain cine-MR imaging. In this method, an elastic registration process is applied to a 2D+t cine-MRI sequence of a region of interest (i.e., the lamina terminalis). This registration process consists in optimizing an objective function that can be considered dynamic. Thus, a dynamic optimization algorithm based on multiple local searches, called MLSDO, is used to accomplish this task. The obtained results are compared to those of several well-known static optimization algorithms. This comparison shows the efficiency of MLSDO and the relevance of using a dynamic optimization algorithm to solve this kind of problem.
Julien Lepagnot, Amir Nakib, Hamouche Oulhadj, Patrick Siarry

Artificial Immune System for Solving Dynamic Constrained Optimization Problems

In this chapter, we analyze the behavior of an adaptive immune system when solving dynamic constrained optimization problems (DCOPs). Our proposed approach, called Dynamic Constrained T-Cell (DCTC), is an adaptation of an existing algorithm which was originally designed to solve static constrained problems. Here, this approach is extended to deal with problems which change over time and whose solutions are subject to constraints. Our proposed DCTC is validated on eleven dynamic constrained problems which involve the following scenarios: dynamic objective function with static constraints, static objective function with dynamic constraints, and dynamic objective function with dynamic constraints. The performance of the proposed approach is compared with that of another algorithm that was originally designed to solve static constrained problems (SMES) and which is adapted here to solve DCOPs. In addition, the performance of our proposed DCTC is compared with that of two approaches which have been used to solve dynamic constrained optimization problems (RIGA and dRepairRIGA). A statistical analysis is performed in order to gain some insight into the effect that the dynamic features of the problems have on the behavior of the proposed algorithm.
Victoria S. Aragón, Susana C. Esquivel, Carlos A. Coello

Metaheuristics for Dynamic Vehicle Routing

Combinatorial optimization problems are usually modeled in a static fashion. In such problems, all data are known in advance, i.e., before the optimization process has started. In practice, however, many problems are dynamic and change while the optimization is in progress. For example, in the Dynamic Vehicle Routing Problem (DVRP), one of the most challenging combinatorial optimization tasks, the aim is to design the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. However, new customer orders arrive while the working day plan is in progress. In this case, routes must be reconfigured dynamically while executing the current simulation. The DVRP is an extension of the conventional routing problem, and its main interest lies in its connection to many real-world applications (repair services, courier mail services, dial-a-ride services, etc.). In this chapter, the DVRP is examined, and a survey of solving methods such as population-based and trajectory-based metaheuristics is presented. The dynamic performance of different metaheuristics is assessed using indicators dedicated to dynamic environments.
Mostepha R. Khouadjia, Briseida Sarasola, Enrique Alba, El-Ghazali Talbi, Laetitia Jourdan

Low-Level Hybridization of Scatter Search and Particle Filter for Dynamic TSP Solving

This work presents the application of the Scatter Search Particle Filter (SSPF) algorithm to the Dynamic Travelling Salesman Problem (DTSP). SSPF combines sequential estimation and combinatorial optimization methods to efficiently address dynamic optimization problems. SSPF obtains high-quality solutions at each time step by taking advantage of the best solutions obtained in the previous ones. To demonstrate the performance of the proposed algorithm, we conduct experiments using two different benchmarks. The first one was generated by us and contains instances of 25, 50, 75 and 100 cities, and the second one consists of dynamic versions of TSPLIB benchmarks. Experimental results show that the performance of SSPF on the DTSP is significantly better than that of other population-based metaheuristics, such as Evolutionary Algorithms or Scatter Search. Our proposal appreciably reduces the execution time without affecting the quality of the obtained results.
Juan José Pantrigo, Abraham Duarte
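The reuse of information between time steps that SSPF exploits can be illustrated with the generic seeding sketch below; the function name and the keep ratio are assumptions, and the actual SSPF combines particle-filter estimation with scatter search rather than this simple scheme.

```python
def seed_population(previous_elites, random_solutions, pop_size, keep_ratio=0.5):
    """Build the population for the new time step from a fraction of the
    best solutions found in the previous step plus fresh random solutions."""
    n_keep = min(int(pop_size * keep_ratio), len(previous_elites))
    return list(previous_elites[:n_keep]) + list(random_solutions[:pop_size - n_keep])
```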

From the TSP to the Dynamic VRP: An Application of Neural Networks in Population Based Metaheuristic

In this paper, we consider the standard dynamic and stochastic vehicle routing problem (dynamic VRP), where new requests are received over time and must be incorporated into an evolving schedule in real time. We identify the key features which make the dynamic problem different from the static one. The approach presented to address the problem is a hybrid method, called memetic SOM, which uses the self-organizing map (SOM) neural network in a way similar to a local search inside a population-based memetic algorithm. The approach illustrates how the concept of intermediate structure provided by the original SOM algorithm can naturally operate in a dynamic and real-time vehicle routing setting. A set of operators derived from the SOM algorithm structure is customized in order to perform massive and distributed insertions of transport demands located in the plane. The goal is to simultaneously minimize the route lengths and the customer waiting time. The experiments show that the approach outperforms the operations research heuristics that have already been applied to the Kilby et al. benchmark of 22 problems with up to 385 customers, which is one of the very few benchmark sets commonly shared for this dynamic problem. Our approach appears to be roughly 100 times faster than the ant colony algorithm MACS-VRPTW, and at least 10 times faster than a genetic algorithm also applied to the dynamic VRP, while providing better solution quality.
Amir Hajjam, Jean-Charles Créput, Abderrafiãa Koukam

Insect Swarm Algorithms for Dynamic MAX-SAT Problems

The satisfiability (SAT) problem and the maximum satisfiability problem (MAX-SAT) were among the first problems proven to be \(\mathcal{NP}\)-complete. While only a limited number of theoretical and real-world problems arise directly as instances of SAT or MAX-SAT, many combinatorial problems can be encoded into them. This puts the study of MAX-SAT, and the development of adequate algorithms to address it, in an important position in the field of computer science. Among the most frequently used optimization methods for the MAX-SAT problem are variations of the greedy hill climbing algorithm. This chapter studies the application to dynamic MAX-SAT (i.e., MAX-SAT problems whose structure changes over time) of the swarm-based metaheuristics ant colony optimization and wasp swarm optimization, which are based on the real-life behavior of ants and wasps, respectively. The algorithms are applied to several sets of static and dynamic MAX-SAT instances and are shown to outperform the greedy hill climbing and simulated annealing algorithms used as benchmarks.
Pedro C. Pinto, Thomas A. Runkler, João M. C. Sousa
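For reference, the MAX-SAT objective optimized by such metaheuristics simply counts satisfied clauses. A minimal sketch using a DIMACS-style integer clause encoding is shown below; the representation chosen here is an assumption for illustration.

```python
def maxsat_fitness(assignment, clauses):
    """Count satisfied clauses. `clauses` uses DIMACS-style integers:
    literal k > 0 means variable k must be True, k < 0 means it must be
    False; `assignment` maps variable index -> bool."""
    satisfied = 0
    for clause in clauses:
        if any((lit > 0) == assignment[abs(lit)] for lit in clause):
            satisfied += 1
    return satisfied

# Example: (x1 OR NOT x2) AND (x2 OR x3), with x1=True, x2=False, x3=True
clauses = [[1, -2], [2, 3]]
assignment = {1: True, 2: False, 3: True}
print(maxsat_fitness(assignment, clauses))  # -> 2 (both clauses satisfied)
```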

Dynamic Time-Linkage Evolutionary Optimization: Definitions and Potential Solutions

Dynamic time-linkage optimization problems (DTPs) are special dynamic optimization problems (DOPs) in which the solutions chosen by the solver at the current time can influence how the problem might change in the future. Although DTPs are very common in real-world applications (e.g. online scheduling, online vehicle routing, and online optimal control problems), they have received very little attention from the evolutionary dynamic optimization (EDO) research community. Due to this lack of research, there are still many open questions about DTPs: how should we define and classify DTPs in detail; are there characteristics of DTPs that we are not yet aware of; given these characteristics, are DTPs still solvable; and what is the appropriate strategy to solve them? In this chapter these issues will be partially addressed. First, we will propose a detailed definition framework to help characterise DOPs and DTPs. Second, we will identify a new and challenging class of DTPs for which it might not be possible to solve the problems using traditional methods. Third, an approach to solve this class of problems under certain circumstances will be suggested, and experiments to verify the hypothesis will be carried out. Two test problems will be proposed to simulate the properties of this new class of DTPs, and discussions of real-world applications will be introduced.
Trung Thanh Nguyen, Xin Yao

Backmatter
