
2009 | Book

Foundations of Computational Intelligence Volume 3

Global Optimization

Edited by: Ajith Abraham, Aboul-Ella Hassanien, Patrick Siarry, Andries Engelbrecht

Publisher: Springer Berlin Heidelberg

Book series: Studies in Computational Intelligence


About this book

Global optimization is a branch of applied mathematics and numerical analysis that deals with the task of finding the absolutely best set of admissible conditions to satisfy certain criteria / objective function(s), formulated in mathematical terms. Global optimization includes nonlinear, stochastic and combinatorial programming, multiobjective programming, control, games, geometry, approximation, algorithms for parallel architectures and so on. Due to its wide usage and applications, it has gained the attention of researchers and practitioners from a plethora of scientific domains. Typical practical examples of global optimization applications include: the traveling salesman problem and electrical circuit design (minimizing path length); safety engineering (building and mechanical structures); mathematical problems (the Kepler conjecture); protein structure prediction (minimizing the energy function); etc.

Global optimization algorithms may be categorized into several types: deterministic (example: branch and bound methods), stochastic (example: simulated annealing), and heuristic or meta-heuristic (example: evolutionary algorithms). Recently there has been growing interest in combining global and local search strategies to solve more complicated optimization problems.
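As a minimal illustration of the stochastic category and of the global/local hybridization mentioned above, the sketch below (not taken from the book) combines a simulated annealing loop with an occasional greedy coordinate refinement; the test function, parameters, and cooling schedule are arbitrary choices for demonstration only.

```python
import math
import random

def hybrid_sa(f, x0, n_iter=5000, t0=1.0, cooling=0.999, step=0.5):
    """Simulated annealing (global search) with a small greedy
    coordinate refinement every 100 iterations (local search)."""
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for it in range(n_iter):
        # global move: random perturbation, accepted by the Metropolis rule
        cand = [xi + random.gauss(0.0, step) for xi in x]
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
        # local move: occasional coordinate-wise hill climb around the incumbent
        if it % 100 == 0:
            for i in range(len(x)):
                for d in (-0.1, 0.1):
                    trial = list(x)
                    trial[i] += d
                    ft = f(trial)
                    if ft < fx:
                        x, fx = trial, ft
        if fx < fbest:
            best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# usage: minimise the 2-D sphere function
print(hybrid_sa(lambda v: sum(vi * vi for vi in v), [3.0, -4.0]))
```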

This edited volume comprises 17 chapters, including several overview chapters, and provides up-to-date, state-of-the-art research covering the theory and algorithms of global optimization. Besides research articles and expository papers on the theory and algorithms of global optimization, papers on numerical experiments and on real-world applications were also encouraged. The book is divided into two main parts.

Table of Contents

Frontmatter

Global Optimization Algorithms: Theoretical Foundations and Perspectives

Frontmatter
Genetic Algorithms for the Use in Combinatorial Problems
Abstract
Turbo code interleaver optimization is an NP-hard combinatorial optimization problem attractive for its complexity and variety of real-world applications. In this paper, we investigate the usage and performance of a recent variant of genetic algorithms, higher level chromosome genetic algorithms, on the turbo code optimization task. The problem, as well as higher level chromosome genetic algorithms, which can be used for combinatorial optimization problems in general, is introduced and experimentally evaluated.
Václav Snášel, Jan Platoš, Pavel Krömer, Nabil Ouddane
Bacterial Foraging Optimization Algorithm: Theoretical Foundations, Analysis, and Applications
Abstract
Bacterial foraging optimization algorithm (BFOA) has been widely accepted as a global optimization algorithm of current interest for distributed optimization and control. BFOA is inspired by the social foraging behavior of Escherichia coli. BFOA has already drawn the attention of researchers because of its efficiency in solving real-world optimization problems arising in several application domains. The underlying biology behind the foraging strategy of E. coli is emulated in an extraordinary manner and used as a simple optimization algorithm. This chapter starts with a lucid outline of the classical BFOA. It then analyses the dynamics of the simulated chemotaxis step in BFOA with the help of a simple mathematical model. Taking a cue from the analysis, it presents a new adaptive variant of BFOA, where the chemotactic step size is adjusted on the run according to the current fitness of a virtual bacterium. Next, an analysis of the dynamics of the reproduction operator in BFOA is also discussed. The chapter discusses the hybridization of BFOA with other optimization techniques and also provides an account of most of the significant applications of BFOA to date.
Swagatam Das, Arijit Biswas, Sambarta Dasgupta, Ajith Abraham
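The chapter derives its own adaptation law for the chemotactic step size; the sketch below is only a rough illustration of a chemotaxis loop whose step size shrinks as a bacterium's cost improves. The specific rule |J|/(|J|+lam), the constant lam, and all parameter values are assumptions for demonstration, not the chapter's formulation.

```python
import random

def adaptive_chemotaxis(f, bacteria, steps=50, lam=10.0, swim_len=4):
    """One chemotactic pass over a population of 'bacteria' (points in R^n).
    The step size of each bacterium shrinks as its cost approaches zero,
    loosely mimicking the adaptive idea described above (illustrative rule)."""
    dim = len(bacteria[0])
    for _ in range(steps):
        for b in range(len(bacteria)):
            cost = f(bacteria[b])
            # illustrative adaptive step size: small in good (low-cost) regions
            step = abs(cost) / (abs(cost) + lam)
            # tumble: pick a random unit direction
            d = [random.uniform(-1, 1) for _ in range(dim)]
            norm = sum(di * di for di in d) ** 0.5 or 1.0
            d = [di / norm for di in d]
            # swim while the move keeps improving the cost
            for _ in range(swim_len):
                trial = [xi + step * di for xi, di in zip(bacteria[b], d)]
                if f(trial) < cost:
                    bacteria[b], cost = trial, f(trial)
                else:
                    break
    return min(bacteria, key=f)

# usage: 5 bacteria minimising the sphere function in 3-D
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(5)]
print(adaptive_chemotaxis(lambda v: sum(x * x for x in v), pop))
```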
Global Optimization Using Harmony Search: Theoretical Foundations and Applications
Abstract
This chapter presents the theoretical foundations of the music-phenomenon-mimicking optimization technique, the harmony search (HS) algorithm. The HS algorithm mimics music improvisation, where musicians try to find better harmonies based on randomness or their experiences, which can be expressed as a novel stochastic derivative rather than a calculus-based gradient derivative. The chapter also presents three applications that demonstrate the global optimization feature of the HS algorithm. For the water network design, HS reached the global optimum faster than other algorithms such as the genetic algorithm, simulated annealing, the shuffled frog leaping algorithm, the cross entropy algorithm, and scatter search; for the multiple dam scheduling, HS reached five different global optima with identical benefit while the genetic algorithm found a near-optimum under the same number of function evaluations; and HS reached the global optimum for the tree-like network layout while the genetic algorithm and an evolutionary algorithm found near-optimal solutions.
Zong Woo Geem
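For readers unfamiliar with HS, the following is a minimal sketch of the standard improvisation step (harmony memory consideration, pitch adjustment, and random selection) on a continuous test function. Parameter values are arbitrary, and none of the chapter's application models (water networks, dam scheduling, network layout) are reproduced here.

```python
import random

def harmony_search(f, dim, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Minimal continuous harmony search: improvise new harmonies from
    memory (HMCR), pitch-adjust them (PAR, bandwidth bw) or draw them
    at random, and keep them if they beat the worst stored harmony."""
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j in range(dim):
            if random.random() < hmcr:
                # take the j-th decision value from a random memory member
                value = random.choice(memory)[j]
                if random.random() < par:
                    value += random.uniform(-bw, bw) * (hi - lo)  # pitch adjustment
            else:
                value = random.uniform(lo, hi)                    # pure randomness
            new.append(min(max(value, lo), hi))
        worst = max(memory, key=f)
        if f(new) < f(worst):
            memory[memory.index(worst)] = new
    return min(memory, key=f)

# usage: 5-D sphere function on [-10, 10]
print(harmony_search(lambda v: sum(x * x for x in v), dim=5, bounds=(-10, 10)))
```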
Hybrid GRASP Heuristics
Abstract
Experience has shown that a crafted combination of concepts of different metaheuristics can result in robust combinatorial optimization schemes and produce higher solution quality than the individual metaheuristics themselves, especially when solving difficult real-world combinatorial optimization problems. This chapter gives an overview of different ways to hybridize GRASP (Greedy Randomized Adaptive Search Procedures) to create new and more effective metaheuristics. Several types of hybridizations are considered, involving different constructive procedures, enhanced local search algorithms, and memory structures.
Paola Festa, Mauricio G. C. Resende
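A plain, non-hybridized GRASP iteration (greedy randomized construction with a restricted candidate list, followed by local search) can be sketched as below; the 0/1 knapsack objective, the alpha parameter, and the swap neighborhood are illustrative assumptions, not an example taken from the chapter.

```python
import random

def grasp_knapsack(values, weights, capacity, alpha=0.3, iters=200):
    """Plain GRASP for a 0/1 knapsack: greedy randomized construction using a
    restricted candidate list (RCL), then a simple swap local search."""
    n = len(values)

    def total(sol):
        w = sum(weights[i] for i in sol)
        return sum(values[i] for i in sol) if w <= capacity else -1

    def construct():
        sol, remaining = set(), set(range(n))
        while True:
            feas = [i for i in remaining
                    if weights[i] + sum(weights[j] for j in sol) <= capacity]
            if not feas:
                return sol
            ratios = {i: values[i] / weights[i] for i in feas}
            hi, lo = max(ratios.values()), min(ratios.values())
            rcl = [i for i in feas if ratios[i] >= hi - alpha * (hi - lo)]
            pick = random.choice(rcl)          # randomized greedy choice
            sol.add(pick)
            remaining.remove(pick)

    def local_search(sol):
        improved = True
        while improved:
            improved = False
            for i in list(sol):
                for j in set(range(n)) - sol:
                    cand = (sol - {i}) | {j}   # swap move
                    if total(cand) > total(sol):
                        sol, improved = cand, True
        return sol

    best = set()
    for _ in range(iters):
        s = local_search(construct())
        if total(s) > total(best):
            best = s
    return best

# usage
vals, wts = [10, 7, 4, 9, 3], [5, 4, 2, 6, 1]
print(grasp_knapsack(vals, wts, capacity=10))
```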
Particle Swarm Optimization: Performance Tuning and Empirical Analysis
Abstract
This chapter presents some of the recent modified variants of Particle Swarm Optimization (PSO). The main focus is on the design and implementation of modified PSO based on diversity, mutation, crossover, and efficient initialization using different distributions and low-discrepancy sequences. These algorithms are applied to various benchmark problems, including unimodal, multimodal and noisy functions, as well as real-life applications in engineering fields. The effectiveness of the algorithms is discussed.
Millie Pant, Radha Thangaraj, Ajith Abraham
Tabu Search to Solve Real-Life Combinatorial Optimization Problems: A Case of Study
Summary
Tabu Search (TS) is a very powerful metaheuristic that has shown its efficiency in solving various combinatorial optimization problems, ranging from academic to real-life ones. The idea behind TS is to guide the search with adaptive memory and learning. We present in this paper a short overview of Tabu Search and detail its application to successfully solve a real-life constrained optimization problem.
Djamal Habet
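The following is a minimal sketch of the adaptive-memory idea behind TS on an unconstrained bit-string problem: recently flipped bits are kept tabu for a fixed tenure, and an aspiration criterion overrides the tabu status when a move improves the best known solution. The objective, tenure, and move set are illustrative assumptions rather than the real-life case studied in the chapter.

```python
import random

def tabu_search(f, n_bits, tenure=7, iters=500):
    """Minimal tabu search over bit strings: take the best non-tabu single-bit
    flip each iteration, keep recently flipped bits tabu for `tenure` moves,
    and allow a tabu move anyway if it improves the best solution (aspiration)."""
    current = [random.randint(0, 1) for _ in range(n_bits)]
    best, fbest = list(current), f(current)
    tabu_until = [0] * n_bits          # iteration until which each bit is tabu
    for it in range(1, iters + 1):
        chosen, chosen_val = None, None
        for i in range(n_bits):
            neighbor = list(current)
            neighbor[i] ^= 1
            val = f(neighbor)
            allowed = it >= tabu_until[i] or val > fbest   # aspiration criterion
            if allowed and (chosen_val is None or val > chosen_val):
                chosen, chosen_val = i, val
        if chosen is None:
            break
        current[chosen] ^= 1
        tabu_until[chosen] = it + tenure   # recency-based adaptive memory
        if chosen_val > fbest:
            best, fbest = list(current), chosen_val
    return best, fbest

# usage: maximise the number of ones (trivial objective just to run the sketch)
print(tabu_search(lambda bits: sum(bits), n_bits=20))
```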
Reformulations in Mathematical Programming: A Computational Approach
Abstract
Mathematical programming is a language for describing optimization problems; it is based on parameters, decision variables, objective function(s) subject to various types of constraints. The present treatment is concerned with the case when objective(s) and constraints are algebraic mathematical expressions of the parameters and decision variables, and therefore excludes optimization of black-box functions. A reformulation of a mathematical program P is a mathematical program Q obtained from P via symbolic transformations applied to the sets of variables, objectives and constraints. We present a survey of existing reformulations interpreted along these lines, some example applications, and describe the implementation of a software framework for reformulation and optimization.
Leo Liberti, Sonia Cafieri, Fabien Tarissan
Graph-Based Local Elimination Algorithms in Discrete Optimization
Abstract
The aim of this chapter is to provide a review of structural decomposition methods in discrete optimization and to give a unified framework in the form of local elimination algorithms (LEA). This chapter is organized as follows. Local elimination algorithms for discrete optimization (DO) problems (DOPs) with constraints are considered, and a classification of dynamic programming computational procedures is given. We introduce the Elimination Game and the elimination tree. The application of the bucket elimination algorithm from constraint satisfaction (CS) to solving DOPs is described. We consider different local elimination schemes and related notions. Clustering, which merges several variables into a single meta-variable, is a promising approach to solving DOPs: it allows one to create a quotient (condensed) graph and apply a local block elimination algorithm. In order to describe a block elimination process, we introduce the Block Elimination Game. We discuss the connection between the aforementioned local elimination algorithmic schemes and a way of transforming the directed acyclic graph (DAG) of the computational LEA procedure into a tree decomposition.
Oleg Shcherbina
Evolutionary Approach to Solving Non-stationary Dynamic Multi-Objective Problems
Abstract
This chapter presents the general problem of decision making in an unknown, complex, or changing environment as an extension of the static multiobjective optimization problem. A general optimization problem is defined which encompasses not just dynamics, but also change in the optimization problem itself, with a focus on a changing number of objectives used to evaluate potential solutions.

In order to solve the defined problem, a variant of multi-objective genetic algorithm (MOGA) was used. Since the chapter does not focus on the performance of the algorithm used for solving the problem, but rather aims to demonstrate the approach, experimental results produced by tests with MOGA are presented. These experimental results clearly demonstrate that MOGA successfully led the population of potential solutions to the problem for different test cases, such as homogeneous and non-homogeneous problems and the problem with a changing number of objectives. Decision making based on ranking of the potential solutions has also been demonstrated.
Zikrija Avdagić, Samim Konjicija, Samir Omanović
Turbulent Particle Swarm Optimization Using Fuzzy Parameter Tuning
Abstract
The Particle Swarm Optimization (PSO) algorithm is a stochastic search technique which has exhibited good performance across a wide range of applications. However, for multi-modal, high-dimensional problems the algorithm very often tends to suffer from premature convergence. Premature convergence can make it very difficult for the PSO algorithm to arrive at the global optimum or even a local optimum. Analysis of the behavior of the particle swarm model reveals that such premature convergence is mainly due to the decrease of the velocity of particles in the search space, which leads to a total implosion and ultimately fitness stagnation of the swarm. This paper introduces turbulence into the Particle Swarm Optimization algorithm (TPSO) to overcome the problem of stagnation. The algorithm uses a minimum velocity threshold to control the velocity of the particles. The TPSO mechanism is similar to a turbulence pump, which supplies power to the swarm system so that it explores new neighborhoods for better solutions. The algorithm also avoids clustering of particles and at the same time attempts to maintain diversity of the population. We attempt to show theoretically that the algorithm converges with probability 1 towards the global optimum. The minimum velocity threshold of the particles is tuned adaptively by a fuzzy logic controller embedded in the TPSO algorithm, which is then referred to as Fuzzy Adaptive TPSO (FATPSO). We evaluated the performance of FATPSO and compared it with Standard PSO (SPSO), a Genetic Algorithm (GA), and Simulated Annealing (SA). The comparison was performed on a suite of 20 widely used benchmark problems. Empirical results illustrate that FATPSO prevents premature convergence very effectively. It clearly outperforms the considered methods, especially for high-dimensional multi-modal optimization problems.
Ajith Abraham, Hongbo Liu
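A minimal sketch of the turbulence idea follows: any velocity component whose magnitude falls below a minimum threshold is re-energised with a random value. The fuzzy adaptation of the threshold described in the chapter is omitted; the fixed threshold v_min, the clamping scheme, and all parameters are assumptions for illustration.

```python
import random

def turbulent_pso(f, dim, bounds, n_particles=30, iters=1000,
                  w=0.7, c1=1.5, c2=1.5, v_min=1e-3):
    """Minimal PSO with a 'turbulence' rule: a velocity component whose
    magnitude drops below v_min is re-energised with a random value, so the
    swarm cannot simply collapse and stagnate."""
    lo, hi = bounds
    v_max = 0.2 * (hi - lo)
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[random.uniform(-v_max, v_max) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for p in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[p][d] = (w * vs[p][d]
                            + c1 * r1 * (pbest[p][d] - xs[p][d])
                            + c2 * r2 * (gbest[d] - xs[p][d]))
                if abs(vs[p][d]) < v_min:                     # turbulence kick
                    vs[p][d] = random.uniform(-v_max, v_max)
                vs[p][d] = max(-v_max, min(v_max, vs[p][d]))  # velocity clamp
                xs[p][d] = min(max(xs[p][d] + vs[p][d], lo), hi)
            if f(xs[p]) < f(pbest[p]):
                pbest[p] = list(xs[p])
        gbest = min(pbest, key=f)
    return gbest, f(gbest)

# usage: 10-D sphere function
print(turbulent_pso(lambda v: sum(x * x for x in v), dim=10, bounds=(-5, 5)))
```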

Global Optimization Algorithms: Applications

Frontmatter
An Evolutionary Approximation for the Coefficients of Decision Functions within a Support Vector Machine Learning Strategy
Abstract
Support vector machines represent a state-of-the-art paradigm, which has nevertheless been tackled by a number of other approaches with a view to developing a superior hybridized technique. It is also the proposal of the present chapter to bring support vector machines together with evolutionary computation, with the aim of offering a simplified solving version for the central optimization problem of determining the equation of the hyperplane deriving from support vector learning. The evolutionary approach suggested in this chapter resolves the complexity of the optimizer, opens the 'black box' of support vector training and breaks the limits of the canonical solving component.
Ruxandra Stoean, Mike Preuss, Catalin Stoean, Elia El-Darzi, D. Dumitrescu
Evolutionary Computing in Statistical Data Analysis
Abstract
Evolutionary computing methods are being used in a wide range of domains with increasing confidence and encouraging outcomes. We want to illustrate how these new techniques have influenced statistical theory and practice concerned with multivariate data analysis, time series model building, and optimization methods for computing statistical estimates and performing inference in complex systems. The distinctive features all these topics have in common are the large number of alternatives for model choice, parametrization over high-dimensional discrete spaces, and the lack of convenient properties that may be assumed to hold, at least approximately, for the data generating process. Evolutionary computing has proved able to offer a valuable framework for dealing with complicated problems in statistical data analysis and time series analysis, and we shall draw up a wide though by no means exhaustive list of topics of interest in statistics that have been successfully handled by evolutionary computing procedures. Specific issues concern variable selection in linear regression models, nonlinear regression, time series model identification and estimation, detection of outlying observations in time series as regards both location and type identification, and cluster analysis and grouping problems, including clusters of directional data and clusters of time series. Simulated examples and applications to real data are used for illustration throughout the chapter.
Roberto Baragona, Francesco Battaglia
Meta-heuristics for System Design Engineering
Abstract
Industries have to design and produce high-performing, reliable systems. Nevertheless, designers suffer from the diversity of methods, which are not really adequate to their needs. The authors highlight the need for close interaction between product and project design, often treated either independently or sequentially, which is necessary to improve system design and logistics in this context. Strengthening the links between product design and project management processes is an ongoing challenge, and this relies on perfect control of methods, tools and know-how, on the technical as well as the organizational side. The aim of our work is to facilitate the project manager's decision making, allowing him to define, follow and adapt a working plan while still considering various organizational options. From these options, the project manager chooses the scheme that best encompasses the project's objectives with respect to costs, delay and risks, without neglecting performance and safety. To encourage the project manager to explore various possibilities, we developed and tested a heuristic based on ant colony optimization and an evolutionary algorithm adapted for multi-objective problems. Its hybridization with a tabu search and a greedy algorithm was performed in order to accelerate convergence of the search and to reduce the cost engendered by the evaluation process. The experiments carried out reveal that it is possible to offer the decision maker a reduced number of solutions that he can evaluate more accurately in order to choose one according to technical, economic and financial criteria.
Rachid Chelouah, Claude Baron, Marc Zholghadri, Citlalih Gutierrez
Transgenetic Algorithm: A New Endosymbiotic Approach for Evolutionary Algorithms
Summary
This chapter introduces a class of evolutionary algorithms whose inspiration comes from living processes where cooperation is the main evolutionary strategy. The proposed technique is called Transgenetic Algorithms and is based on two recognized driving forces of evolution: horizontal gene transfer and endosymbiosis. These algorithms perform a stochastic search simulating endosymbiotic interactions between a host and a population of endosymbionts. The information exchange between the host and endosymbionts is mediated by agents, called transgenetic vectors, which are inspired by natural mechanisms of horizontal gene transfer. The proposed approach is described, and a didactic example with the well-known Traveling Salesman Problem illustrates its basic components. Applications of the proposed technique are reported for two NP-hard combinatorial problems: the Traveling Purchaser Problem and the Bi-objective Minimum Spanning Tree Problem.
Elizabeth F. Gouvêa Goldbarg, Marco C. Goldbarg
Multi-objective Team Forming Optimization for Integrated Product Development Projects
Summary
Integrated product development (IPD) is a holistic approach that helps to overcome problems that arise in complex product development environments. This paper presents a model that aims to support the optimal formation and assignment of multi-functional teams in IPD organizations, or any project-based organization. The model accounts for limited availability of personnel, required skills, and team homogeneity, and, further, maximizes the organization's payoff by forming and assigning teams to projects with higher expected payoffs. A Pareto multi-objective particle swarm optimization approach was used to solve the model. It allows personnel to work on several concurrent projects and considers both person-job and person-team fit.
Hisham M. E. Abdelsalam
Genetic Algorithms for Task Scheduling Problem
Abstract
The scheduling and mapping of a precedence-constrained task graph to processors is considered one of the most crucial NP-complete problems in parallel and distributed computing systems. Several genetic algorithms have been developed to solve this problem; the primary distinction among most of them is the chromosomal representation used for a schedule. However, these existing algorithms are monolithic, as they attempt to scan the entire solution space without considering how to reduce the complexity of the optimization. In this chapter, two genetic algorithms have been developed and implemented. The developed algorithms are genetic algorithms to which some heuristic principles have been added to improve performance. In the first developed genetic algorithm, two fitness functions are applied one after another: the first is concerned with minimizing the total execution time (schedule length), and the second with load balance satisfaction. The second developed genetic algorithm is based on a task duplication technique to overcome the communication overhead. Our proposed algorithms have been implemented and evaluated using benchmarks. According to the evaluation results, our algorithms consistently outperform the traditional algorithms.
Fatma A. Omara, Mona M. Arafa
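As a toy illustration of the two-stage fitness idea (schedule length first, then load balance), the sketch below evaluates chromosomes that map independent tasks to processors and ranks them lexicographically; the chapter's actual algorithms handle precedence constraints, communication costs, and task duplication, none of which appear here.

```python
import random

# A chromosome maps each task to a processor: chromosome[task] = processor.
# Precedence and communication costs are deliberately omitted in this toy model.

def schedule_length(chrom, durations, n_proc):
    """First fitness criterion: makespan (total execution time), to be minimised."""
    loads = [0.0] * n_proc
    for task, proc in enumerate(chrom):
        loads[proc] += durations[task]
    return max(loads)

def load_imbalance(chrom, durations, n_proc):
    """Second fitness criterion: deviation from perfectly balanced processor loads."""
    loads = [0.0] * n_proc
    for task, proc in enumerate(chrom):
        loads[proc] += durations[task]
    avg = sum(loads) / n_proc
    return max(abs(l - avg) for l in loads)

# usage: rank a random population first by makespan, then by balance
durations, n_proc = [3, 1, 4, 2, 5, 2], 3
pop = [[random.randrange(n_proc) for _ in durations] for _ in range(20)]
pop.sort(key=lambda c: (schedule_length(c, durations, n_proc),
                        load_imbalance(c, durations, n_proc)))
print(pop[0])
```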
PSO_Bounds: A New Hybridization Technique of PSO and EDAs
Abstract
Particle Swarm Optimization (PSO) is a nature-inspired population-based approach successfully used as an optimization tool in many applications. Estimation of distribution algorithms (EDAs) are evolutionary algorithms that try to estimate the probability distribution of the good individuals in the population. In this work, we present a new PSO algorithm that borrows ideas from EDAs. This algorithm is implemented and compared to previous PSO and EDA hybridization approaches using a suite of well-known benchmark optimization functions.
Mohammed El-Abd, Mohamed S. Kamel
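The chapter's specific PSO_Bounds scheme is not reproduced here; the sketch below only illustrates the generic flavour of a PSO/EDA hybrid, in which some moves follow the usual PSO velocity update while others resample a particle from a Gaussian model estimated over the best personal bests. The sampling probability, elite fraction, and all parameters are assumptions.

```python
import random
import statistics

def pso_eda_hybrid(f, dim, bounds, n_particles=30, iters=500,
                   w=0.7, c1=1.5, c2=1.5, sample_prob=0.3, top_frac=0.3):
    """Generic PSO/EDA hybrid sketch: most moves follow the usual PSO update,
    but with probability sample_prob a particle is instead resampled from a
    Gaussian estimated over the current best personal bests, as EDAs do."""
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        elite = sorted(pbest, key=f)[: max(2, int(top_frac * n_particles))]
        mu = [statistics.mean(e[d] for e in elite) for d in range(dim)]
        sd = [statistics.pstdev([e[d] for e in elite]) + 1e-9 for d in range(dim)]
        for p in range(n_particles):
            if random.random() < sample_prob:
                # EDA-style move: sample from the model of good individuals
                xs[p] = [min(max(random.gauss(mu[d], sd[d]), lo), hi)
                         for d in range(dim)]
            else:
                # standard PSO move
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vs[p][d] = (w * vs[p][d] + c1 * r1 * (pbest[p][d] - xs[p][d])
                                + c2 * r2 * (gbest[d] - xs[p][d]))
                    xs[p][d] = min(max(xs[p][d] + vs[p][d], lo), hi)
            if f(xs[p]) < f(pbest[p]):
                pbest[p] = list(xs[p])
        gbest = min(pbest, key=f)
    return gbest, f(gbest)

# usage: 10-D sphere function
print(pso_eda_hybrid(lambda v: sum(x * x for x in v), dim=10, bounds=(-5, 5)))
```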
Backmatter
Metadata
Title
Foundations of Computational Intelligence Volume 3
Edited by
Ajith Abraham
Aboul-Ella Hassanien
Patrick Siarry
Andries Engelbrecht
Copyright year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-01085-9
Print ISBN
978-3-642-01084-2
DOI
https://doi.org/10.1007/978-3-642-01085-9
