
2010 | Book

Computational Intelligence in Optimization

Applications and Implementations

Edited by: Yoel Tenne, Chi-Keong Goh

Publisher: Springer Berlin Heidelberg

Book series: Adaptation, Learning, and Optimization


Table of Contents

Frontmatter
New Hybrid Intelligent Systems to Solve Linear and Quadratic Optimization Problems and Increase Guaranteed Optimal Convergence Speed of Recurrent ANN
Abstract
This chapter studies artificial neural networks (ANNs) and Heuristic Rules (HR) for solving optimization problems. ANNs were investigated as optimization tools for large-scale problems because the technique has great potential for hardware VLSI implementation, in which it may be more efficient than traditional optimization techniques. However, implementations of the computational algorithm have shown that, although effective, the proposed technique is slow compared with traditional mathematical methods. To make it fast, we show two ways to increase the convergence speed of the computational algorithm. Three test cases are solved for analysis and comparison. The chapter considers recurrent ANNs for solving linear and quadratic programming problems. These networks are based on the solution of a set of differential equations obtained from a transformation of an augmented Lagrangian energy function. The proposed hybrid systems combining recurrent ANNs and HR required less computational effort than the systems using the recurrent ANN alone.
Otoni Nóbrega Neto, Ronaldo R. B. de Aquino, Milde M. S. Lira
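The core mechanism here, integrating differential equations obtained from an augmented Lagrangian, can be sketched compactly. The following is a minimal illustration, not the authors' network: a forward-Euler integration of a primal-dual gradient flow on a toy equality-constrained QP (the problem data, penalty weight rho, and step dt are illustrative choices, not from the chapter).

```python
import numpy as np

# Toy equality-constrained QP: minimize 0.5 x'Qx + c'x subject to Ax = b.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

rho = 10.0   # augmented-Lagrangian penalty weight (illustrative)
dt = 1e-3    # Euler step: a crude stand-in for integrating the ODEs

x = np.zeros(2)    # primal state of the "network"
lam = np.zeros(1)  # dual state (Lagrange multiplier)

for _ in range(20000):
    r = A @ x - b                                  # constraint residual
    # dx/dt = -grad_x L_rho(x, lam);  dlam/dt = +grad_lam L_rho(x, lam)
    x = x - dt * (Q @ x + c + A.T @ (lam + rho * r))
    lam = lam + dt * r

print("x =", x, "residual =", A @ x - b)           # expect x close to [0, 1]
```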
A Novel Optimization Algorithm Based on Reinforcement Learning
Abstract
In this chapter, an efficient optimization algorithm is presented for problems with hard-to-evaluate objective functions. It uses the reinforcement learning principle to determine the particle moves in the search for the optimum. A model of successful actions is built, and future actions are based on past experience. The step increment combines exploitation of the known search path with exploration for an improved search direction. The algorithm requires neither prior knowledge of the objective function nor any of its characteristics. It is simple, intuitive, and easy to implement and tune. The algorithm was tested on several multi-variable functions and compared with other widely used random search optimization algorithms. Furthermore, the training of a multi-layer perceptron to find a set of optimized weights is treated as an optimization problem. The optimized multi-layer perceptron was applied to classification of the Iris database. Finally, the algorithm is used in image recognition to find a familiar object with retina sampling and micro-saccades.
Janusz A. Starzyk, Yinyin Liu, Sebastian Batog
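A minimal sketch of the reinforcement idea described above, not the authors' algorithm: a running average of moves that improved the objective biases future steps (exploitation), perturbed by Gaussian noise (exploration). All parameter names and values are illustrative.

```python
import numpy as np

def rl_search(f, x0, iters=2000, step=0.5, beta=0.7, seed=0):
    """Toy reinforcement-style search: a memory of successful moves
    biases future steps (exploitation), perturbed by noise (exploration).
    An illustration of the principle, not the chapter's algorithm."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    fx = f(x)
    memory = np.zeros_like(x)          # "model of successful actions"
    for _ in range(iters):
        move = beta * memory + (1 - beta) * rng.normal(size=x.size) * step
        cand = x + move
        fc = f(cand)
        if fc < fx:                    # reward: remember what worked
            memory = beta * memory + (1 - beta) * move
            x, fx = cand, fc
        else:                          # punish: forget a little
            memory *= beta
    return x, fx

# Example: minimize the sphere function from a fixed start.
print(rl_search(lambda v: float(np.sum(v ** 2)), [3.0, -2.0]))
```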
The Use of Opposition for Decreasing Function Evaluations in Population-Based Search
Abstract
This chapter discusses the application of opposition-based computing to reducing the number of function calls required by population-based optimization. We provide motivation and comparison to similar but distinct approaches, including antithetic variates and quasi-random/low-discrepancy sequences. We employ differential evolution and population-based incremental learning as optimization methods for image thresholding. Our results confirm reductions in required function calls and support the oppositional principles used to attain them.
Mario Ventresca, Shahryar Rahnamayan, Hamid Reza Tizhoosh
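The opposition principle is simple to state: for a candidate x in a box [lo, hi], its opposite is lo + hi - x, and evaluating both lets the better one survive. A generic sketch of opposition-based population initialization follows; the objective and population size are illustrative, and the chapter's actual experiments use differential evolution and PBIL on image thresholding.

```python
import numpy as np

def opposition_init(f, lo, hi, pop_size, seed=0):
    """Opposition-based initialization: for each random candidate x in
    [lo, hi], also evaluate its opposite lo + hi - x, then keep the
    better half of the combined pool. A sketch of the principle only."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    opp = lo + hi - pop                        # element-wise opposites
    pool = np.vstack([pop, opp])
    fitness = np.apply_along_axis(f, 1, pool)
    best = np.argsort(fitness)[:pop_size]      # better half survives
    return pool[best], fitness[best]

lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
pop, fit = opposition_init(lambda x: float(np.sum(x ** 2)), lo, hi, 10)
print("best initial fitness:", fit[0])
```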
Search Procedure Exploiting Locally Regularized Objective Approximation. A Convergence Theorem for Direct Search Algorithms
Abstract
The Search Procedure Exploiting Locally Regularized Objective Approximation is a method to speed up local optimization processes in which objective function evaluation is expensive. It was introduced in [1] and further developed in [2]. In this chapter we present the convergence theorem of the method. The theorem is proved for the EXTREM [6] algorithm but applies to any Gauss-Seidel algorithm that uses sequential quadratic interpolation (SQI) as a line search method. After some extension it can also be applied to conjugate direction algorithms. The proof is based on the Zangwill theory of closed transformations. This method of proof was chosen over the sufficient-decrease approach because the crucial element of the presented proof is an extension of the SQI convergence proof from [14], which is based on that approach.
Marek Bazan
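For readers unfamiliar with the line search at the heart of the theorem: sequential quadratic interpolation fits a parabola through three trial step lengths and jumps to its vertex. The bare, unsafeguarded sketch below illustrates the idea only; EXTREM and the regularized procedure analyzed in the chapter involve considerably more machinery.

```python
def sqi_line_search(phi, x1, x2, x3, iters=20, tol=1e-12):
    """Sequential quadratic interpolation (SQI): fit a parabola through
    three trial step lengths, jump to its vertex, keep the best three
    points, repeat. No safeguards (bracketing, fallback bisection)."""
    pts = [x1, x2, x3]
    for _ in range(iters):
        (a, fa), (b, fb), (c, fc) = [(p, phi(p)) for p in pts]
        denom = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if abs(denom) < tol:           # degenerate parabola: stop
            break
        # Vertex of the interpolating parabola through the three points.
        vertex = b - 0.5 * ((b - a) ** 2 * (fb - fc)
                            - (b - c) ** 2 * (fb - fa)) / denom
        pts = sorted(pts + [vertex], key=phi)[:3]
    return min(pts, key=phi)

# Example: phi(t) = (t - 1.3)**4 has its minimum at t = 1.3.
print(sqi_line_search(lambda t: (t - 1.3) ** 4, 0.0, 0.5, 2.0))
```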
Optimization Problems with Cardinality Constraints
Abstract
In this chapter we review several hybrid techniques that can be used to solve large optimization problems with cardinality constraints accurately and efficiently. Exact methods, such as branch-and-bound, require lengthy computations and are for this reason infeasible in practice. As an alternative, this study focuses on approximate techniques that can identify near-optimal solutions at a reduced computational cost. Most of the methods considered encode candidate solutions as sets. This representation, used in conjunction with specially devised search operators, is especially suited to problems whose solution involves selecting optimal subsets of specified cardinality. The performance of these techniques is illustrated on optimization problems of practical interest arising in machine learning (pruning of ensembles of classifiers), quantitative finance (portfolio selection), time-series modeling (index tracking), and statistical data analysis (sparse principal component analysis).
Rubén Ruiz-Torrubiano, Sergio García-Moratilla, Alberto Suárez
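Set encodings pair naturally with operators that preserve cardinality. As a hedged illustration (these are typical set-based operators, not necessarily the chapter's exact ones): a mutation may swap one member for a non-member, and a crossover may keep the parents' common items and fill up to cardinality k from their symmetric difference.

```python
import random

def mutate(subset, universe, rng):
    """Swap one member out and one non-member in: cardinality preserved."""
    s = set(subset)
    s.remove(rng.choice(sorted(s)))
    s.add(rng.choice(sorted(set(universe) - s)))
    return s

def crossover(a, b, k, universe, rng):
    """Keep the parents' common items, then fill up to cardinality k,
    preferring items from the symmetric difference."""
    child = set(a) & set(b)
    candidates = list((set(a) | set(b)) - child)
    rng.shuffle(candidates)
    while len(child) < k:
        child.add(candidates.pop() if candidates
                  else rng.choice(sorted(set(universe) - child)))
    return child

rng = random.Random(0)
universe = range(10)
print(crossover({0, 1, 2}, {2, 3, 4}, 3, universe, rng),
      mutate({0, 1, 2}, universe, rng))
```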
Learning Global Optimization Through a Support Vector Machine Based Adaptive Multistart Strategy
Abstract
We propose a global optimization algorithm called GOSAM (Global Optimization using Support vector regression based Adaptive Multistart) that applies statistical machine learning, namely Support Vector Regression (SVR), to adaptively direct iterative search in large-scale global optimization. At each iteration, GOSAM builds a training set of the objective function's local minima discovered up to the current iteration and applies SVR to construct a regressor that learns the structure of the local minima. In the next iteration, the local search is started from the minimum of this regressor. The idea is that the regressor generalizes well to local minima not yet encountered in the search, so its minimum is a crude approximation to the global minimum. This approximation improves over time, leading the search towards regions that yield better local minima and eventually the global minimum. Simulation results on well-known benchmark problems show that GOSAM requires significantly fewer function evaluations to reach the global optimum than methods such as Particle Swarm Optimization and Genetic Algorithms, and it becomes relatively more efficient as the number of design variables (the dimension) increases. GOSAM requires no explicit knowledge of the objective function and assumes no specific properties. We also discuss some real-world applications of GOSAM involving constrained and design optimization problems.
Jayadeva, Sameena Shah, Suresh Chandra
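The loop described above is easy to sketch with off-the-shelf tools. The toy below is not GOSAM itself: it uses scikit-learn's SVR and a Nelder-Mead local search, and approximates the regressor's minimizer by random sampling rather than an exact inner search; the objective, kernel, and budget are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.svm import SVR

def objective(x):
    """Rastrigin function: many local minima, global minimum at 0."""
    x = np.atleast_1d(x)
    return float(np.sum(x ** 2) + 10 * np.sum(1 - np.cos(2 * np.pi * x)))

rng = np.random.default_rng(0)
dim, lo, hi = 2, -5.0, 5.0
X, y = [], []                          # training set: local minima so far
start = rng.uniform(lo, hi, dim)

for it in range(15):
    res = minimize(objective, start, method="Nelder-Mead")  # local search
    X.append(res.x)
    y.append(res.fun)
    if len(X) >= 3:
        # Learn the structure of the local minima found so far.
        svr = SVR(kernel="rbf").fit(np.array(X), np.array(y))
        # Next start: approximate minimizer of the regressor by sampling.
        cand = rng.uniform(lo, hi, size=(2000, dim))
        start = cand[np.argmin(svr.predict(cand))]
    else:
        start = rng.uniform(lo, hi, dim)

best = int(np.argmin(y))
print("best local minimum found:", X[best], y[best])
```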
Multi-Objective Optimization Using Surrogates
Abstract
Until recently, optimization was regarded as a discipline of rather theoretical interest with limited real-life applicability, owing to the computational or experimental expense involved. Practical multiobjective optimization was considered almost a utopia even in academic studies, since that expense is multiplied across objectives. This chapter discusses the idea of using surrogate models for multiobjective optimization. With recent advances in grid and parallel computing, more companies are buying inexpensive computing clusters that work in parallel. This allows, for example, the efficient fusion of surrogates and finite element models in a multiobjective optimization cycle. The research presented here demonstrates the idea using several response surface methods on a pre-selected set of test functions. We aim to show that a number of techniques can be used to tackle difficult problems, and we demonstrate that a careful choice of response surface method is important when carrying out surrogate-assisted multiobjective search.
Ivan Voutchkov, Andy Keane
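The basic surrogate-assisted pattern is: evaluate the expensive objectives at a few designs, fit one response surface per objective, and search the cheap surrogates for nondominated candidates. A minimal sketch, assuming Gaussian-process response surfaces and a random candidate search (both illustrative choices, not the chapter's experimental setup):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def f1(x): return float(np.sum(x ** 2))            # "expensive" objective 1
def f2(x): return float(np.sum((x - 2.0) ** 2))    # "expensive" objective 2

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 3.0, size=(20, 2))           # initial expensive runs
Y = np.array([[f1(x), f2(x)] for x in X])

# One response surface per objective.
models = [GaussianProcessRegressor().fit(X, Y[:, j]) for j in range(2)]

# Search the cheap surrogates instead of the expensive functions.
cand = rng.uniform(-1.0, 3.0, size=(500, 2))
pred = np.column_stack([m.predict(cand) for m in models])

def nondominated(P):
    """Indices of points not dominated by any other point (minimization)."""
    return [i for i, p in enumerate(P)
            if not any(np.all(q <= p) and np.any(q < p) for q in P)]

front = nondominated(pred)
print(len(front), "surrogate-predicted Pareto candidates out of", len(cand))
# In a full loop, a few of these would be evaluated exactly and the
# models refitted, repeating until the evaluation budget is exhausted.
```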
A Review of Agent-Based Co-Evolutionary Algorithms for Multi-Objective Optimization
Abstract
Agent-based evolutionary algorithms result from mixing two paradigms: multi-agent systems and evolutionary algorithms. Agent-based co-evolutionary algorithms allow many species and sexes of agents to exist within the system and permit the definition of co-evolutionary interactions between species and sexes. Algorithms based on the co-evolutionary multi-agent system model have already been applied in many domains, such as multi-modal optimization, generation of investment strategies, portfolio optimization, and multi-objective optimization. In this chapter we present an overview of selected agent-based co-evolutionary algorithms, their formal models, and the results of experiments with standard test problems and a financial problem, aimed at comparing agent-based algorithms with "classical" state-of-the-art multi-objective algorithms. The results show that, depending on the problem being solved, agent-based algorithms obtain results comparable to, and sometimes better than, "classical" algorithms, although they are of course not a universal solver for all multi-objective optimization problems.
Rafał Dreżewski, Leszek Siwik
A Game Theory-Based Multi-Agent System for Expensive Optimisation Problems
Abstract
This chapter is concerned with the development of a novel approach to solving expensive optimisation problems. The approach relies on game theory and a multi-agent framework in which a number of existing algorithms, cast as agents, are deployed with the aim of solving the problem at hand as efficiently as possible. The key factor for the success of this approach is a dynamic resource allocation biased toward algorithms that show promise on the given problem. This is achieved by letting the agents play a cooperative-competitive game whose outcomes decide which algorithms, if any, drop out of the list of solver-agents and which remain in use. A successful implementation of this framework results in the algorithm(s) best suited to the given problem being predominantly used on the available computing platform. In other words, it guarantees the best use of the resources, both algorithms and hardware, with the by-product being the best approximate solution to the problem given the available resources. The resulting Game Theory-based Multi-Agent System (GTMAS) is tested on a standard collection of TSP problems, and the results are included.
Abdellah Salhi, Özgun Töreyen
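The resource-allocation game can be sketched in miniature: several solver-agents share an evaluation budget, the budget is reallocated in proportion to each agent's recent payoff (objective improvement), and agents whose share hits zero drop out. The toy below is our own reading of that mechanism, not the GTMAS implementation; the agents, objective, and payoff rule are all illustrative.

```python
import random

rng = random.Random(0)
f = lambda x: abs(x)            # toy objective (stand-in for, e.g., TSP cost)

def hill(x): return x + rng.gauss(0, 0.3)     # local-perturbation agent
def jump(x): return rng.uniform(-10, 10)      # random-restart agent
def lazy(x): return x                          # useless agent (gets dropped)

agents = {"hill": hill, "jump": jump, "lazy": lazy}
best = 8.0
budget = {name: 10 for name in agents}         # evaluations per round

for rnd in range(10):
    gains = {}
    for name, move in agents.items():
        g = 0.0
        for _ in range(budget[name]):          # spend this agent's share
            cand = move(best)
            if f(cand) < f(best):
                g += f(best) - f(cand)         # payoff = improvement
                best = cand
        gains[name] = g
    total = sum(gains.values()) or 1.0
    # Cooperative-competitive reallocation: budget follows payoff.
    budget = {n: int(30 * gains[n] / total) for n in agents}
    agents = {n: m for n, m in agents.items() if budget[n] > 0}  # drop-outs

print("best:", best, "surviving agents:", list(agents))
```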
Optimization with Clifford Support Vector Machines and Applications
Abstract
This chapter introduces a generalization of real- and complex-valued SVMs using Clifford algebra. In this framework we handle the design of kernels involving the geometric product for linear and nonlinear classification and regression. The major advantage of our approach is that we redefine the optimization variables as multivectors. This allows the output to be a multivector, so we can represent multiple classes according to the dimension of the geometric algebra in which we work. By using the CSVM with one Clifford kernel we greatly reduce the computational complexity, thanks to the Clifford product, which performs the direct product between the spaces of different grade involved in the optimization problem. We compare the CSVM with the most widely used approaches to multi-class classification to show that ours is more suitable for practical use on certain types of problems. The chapter includes several experiments applying the CSVM to classification and regression problems, as well as 3D object recognition for visually guided robotics. In addition, we show the design of a recurrent system in which an LSTM network is connected to a CSVM, and we study the performance of this system in time series experiments and robot navigation using reinforcement learning.
N. Arana-Daniel, C. López-Franco, E. Bayro-Corrochano
A Classification Method Based on Principal Component Analysis and Differential Evolution Algorithm Applied for Prediction Diagnosis from Clinical EMR Heart Data Sets
Abstract
In this chapter we study a classification method that first preprocesses the data using principal component analysis and then uses the compressed data in the actual classification process, which is based on the differential evolution algorithm, an evolutionary optimization algorithm. The method is applied to prediction diagnosis from clinical data sets for patients with a chief complaint of chest pain, using classical Electronic Medical Record (EMR) heart data sets. For experimentation we used five frequently applied benchmark data sets: the Cleveland, Hungarian, Long Beach, Switzerland, and Statlog data sets. These data sets contain demographic properties, clinical symptoms, clinical findings, laboratory test results, specific electrocardiography (ECG) results, and findings pertaining to angina and coronary infarction; in other words, classical EMR data for the evaluation of a chest pain patient and for ruling out angina and/or Coronary Artery Disease (CAD). The prediction diagnosis results with the proposed classification approach were promisingly accurate. For example, the Switzerland data set was classified with 94.5% ± 0.4% accuracy, and combining all the data sets resulted in a classification accuracy of 82.0% ± 0.5%. We compared these results with those of other methods reported in the literature that have demonstrated relatively high classification performance on this problem. Depending on the case, the results of the proposed method were on a par with the best compared methods or clearly outperformed them in classification accuracy. In general, the results suggest that the proposed method has potential for this task.
Pasi Luukka, Jouni Lampinen
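A hedged sketch of the two-stage idea: compress with PCA, then let differential evolution optimize the classifier's free parameters. Here the classifier is a simple nearest-prototype rule on the Iris data, an illustrative stand-in, not the chapter's exact classifier or its EMR data sets.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Stage 1: PCA compression of the raw features.
data = load_iris()
X = PCA(n_components=2).fit_transform(data.data)
y = data.target
n_classes, dim = 3, 2

def error(theta):
    """Training error when each class is represented by a prototype
    vector; DE searches the prototype coordinates."""
    protos = theta.reshape(n_classes, dim)
    dists = ((X[:, None, :] - protos[None]) ** 2).sum(-1)
    pred = np.argmin(dists, axis=1)
    return float(np.mean(pred != y))

# Stage 2: differential evolution over the prototype coordinates.
bounds = [(X.min(), X.max())] * (n_classes * dim)
result = differential_evolution(error, bounds, seed=0, maxiter=100)
print("training accuracy:", 1 - result.fun)
```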
An Integrated Approach to Speed Up GA-SVM Feature Selection Model
Abstract
Significant information or features are often overshadowed by noise, resulting in poor classification performance. Feature selection methods such as GA-SVM are desirable for filtering out irrelevant features and thus improving accuracy; the selection itself may also offer critical insights into the problem. However, the high computational cost greatly discourages the application of GA-SVM, especially to large-scale datasets. In this chapter, an HPC-enabled GA-SVM (HGA-SVM) is proposed and implemented by integrating data parallelization, multithreading, and heuristic techniques, with the goal of maintaining robustness while lowering computational cost. The proposed model comprises four improvement strategies: 1) GA parallelization, 2) SVM parallelization, 3) neighbor search, and 4) evaluation caching. Each strategy improves a different aspect of the feature selection algorithm, and together they contribute to higher computational throughput.
Tianyou Zhang, Xiuju Fu, Rick Siow Mong Goh, Chee Keong Kwoh, Gary Kee Khoon Lee
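Of the four strategies, evaluation caching is the easiest to illustrate: fitness values are memoized with the feature mask as key, so a GA that revisits a chromosome pays no SVM training cost. A minimal sketch follows; the dataset, SVM settings, and cross-validation folds are illustrative, and the parallelization strategies are not shown.

```python
from functools import lru_cache

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

data = load_breast_cancer()
X, y = data.data, data.target

@lru_cache(maxsize=None)
def fitness(mask):
    """SVM cross-validation accuracy for a feature subset. The mask
    (a tuple of 0/1 bits, hence hashable) is the cache key, so a GA
    that revisits a chromosome gets the value for free."""
    cols = [i for i, bit in enumerate(mask) if bit]
    if not cols:
        return 0.0
    return cross_val_score(SVC(), X[:, cols], y, cv=3).mean()

rng = np.random.default_rng(0)
mask = tuple(int(b) for b in rng.integers(0, 2, X.shape[1]))
print(fitness(mask))   # computed: trains and cross-validates the SVM
print(fitness(mask))   # served from the cache: no SVM training at all
```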
Computation in Complex Environments: Optimizing Railway Timetable Problems with Symbiotic Networks
Abstract
The title of this contribution balances on two concepts. The first is 'complex environments'. 'Complex' should not be read in the colloquial sense of the word; complexity addresses, amongst other things, non-linear, contingent and 'chaotic' phenomena ([10], [11]). Many thinkers on complexity consider such characteristics, sometimes called organized complexity, to demarcate a transition point where analytical approaches are no longer feasible ([26]:18).
Kees Pieters
Project Scheduling: Time-Cost Tradeoff Problems
Abstract
We design and implement new methods to solve multiobjective time-cost tradeoff (TCT) problems in project scheduling using an evolutionary algorithm and its hybrid variants with fuzzy logic and artificial neural networks. We deal with a wide variety of TCT problems encountered in real-world engineering projects, including (i) nonlinear time-cost relationships of project activities, (ii) the presence of a constrained resource in addition to precedence constraints, and (iii) project uncertainties. We also present a hybrid metaheuristic (HMH) combining a genetic algorithm with simulated annealing to solve the discrete version of the multiobjective TCT problem; the HMH is employed to solve two TCT test cases.
Sanjay Srivastava, Bhupendra Pathak, Kamal Srivastava
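A discrete TCT instance assigns each activity a set of (duration, cost) modes under precedence constraints. As a hedged stand-in for the chapter's GA+SA hybrid, the toy below anneals over mode choices for a three-activity project, scalarizing duration and cost with a weight w (all data and parameters are illustrative):

```python
import math
import random

# Toy discrete time-cost tradeoff: each activity has (duration, cost) modes
# and precedence forms a DAG. All data are illustrative.
modes = {
    "A": [(4, 100), (2, 180)],   # activity: [normal mode, crashed mode]
    "B": [(3, 80), (1, 160)],
    "C": [(5, 120), (3, 200)],
}
preds = {"A": [], "B": ["A"], "C": ["A"]}   # B and C both follow A

def makespan(choice):
    finish = {}
    for act in ("A", "B", "C"):             # topological order
        start = max((finish[p] for p in preds[act]), default=0)
        finish[act] = start + modes[act][choice[act]][0]
    return max(finish.values())

def cost(choice):
    return sum(modes[a][choice[a]][1] for a in modes)

rng = random.Random(0)
w = 50.0                                    # scalarization: cost per time unit
choice = {a: 0 for a in modes}
energy = w * makespan(choice) + cost(choice)
T = 50.0
for _ in range(500):                        # simulated-annealing loop
    a = rng.choice(list(modes))
    cand = dict(choice)
    cand[a] = 1 - cand[a]                   # flip one activity's mode
    e = w * makespan(cand) + cost(cand)
    if e < energy or rng.random() < math.exp(-(e - energy) / T):
        choice, energy = cand, e
    T *= 0.99
print(choice, "makespan:", makespan(choice), "cost:", cost(choice))
```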
Systolic VLSI and FPGA Realization of Artificial Neural Networks
Abstract
Systolic architectures are established as a widely popular class of VLSI structures for repetitive and computation-intensive applications due to the simplicity of their processing elements (PEs), modularity of design, regular nearest-neighbor interconnections between the PEs, high level of pipelinability, small chip area, and low power consumption. In systolic arrays, data are pumped rhythmically at regular intervals across the PEs to yield high throughput by fully pipelined processing. Systolic architectures significantly reduce the I/O bottleneck by feeding data at the chip boundary and pipelining them across the structure. The extensive reuse of data within the array allows a large volume of computation to be executed with only a modest increase in bandwidth. Since FPGA devices consist of regularly placed, interconnected logic blocks, they closely resemble the layout of systolic processors; systolic computation within the PEs can therefore easily be mapped to the configurable logic blocks of an FPGA device. Interestingly, artificial neural network (ANN) algorithms are also quite suitable for systolic implementation due to their repetitive multiply-accumulate behaviour. Several variations of one-dimensional and two-dimensional systolic arrays have therefore been reported in the literature for the implementation of different types of neural networks. Special-purpose systolic designs for various ANN-based applications relating to pattern recognition and classification, adaptive filtering and channel equalization, vector quantization, image compression, and general signal/image processing have been reported in the last two decades. We devote this chapter to systolic architectures for the implementation of ANN algorithms on custom VLSI and FPGA platforms. The key techniques used for the design of the basic systolic building blocks of ANN algorithms are discussed in detail. Moreover, the mapping of fully connected unconstrained ANNs, as well as multilayer ANN algorithms, onto fully pipelined systolic architectures is described with a generalized dependence-graph formulation. A brief overview of systolic architectures for advanced ANN algorithms for different applications is presented at the end.
Pramod Kumar Meher
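The "rhythmic pumping" is concrete enough to simulate. The sketch below is a cycle-accurate toy model, not a hardware design: a 1-D systolic array computes y = Wx (the neuron multiply-accumulate) with each PE holding one stationary element of x, partial sums moving one PE per cycle, and row i of W entering PE j at cycle i + j.

```python
import numpy as np

def systolic_matvec(W, x):
    """Toy cycle-by-cycle simulation of a 1-D systolic array computing
    y = W @ x. PE j holds x[j]; the partial sum for row i reaches PE j
    at cycle i + j, so finished results leave the array one per cycle."""
    M, N = W.shape
    pipeline = [0.0] * (N + 1)      # pipeline[j] = partial sum entering PE j
    results = []
    for cycle in range(M + N - 1):
        nxt = [0.0] * (N + 1)
        for j in range(N):
            i = cycle - j           # which row's weight is at PE j now
            if 0 <= i < M:
                nxt[j + 1] = pipeline[j] + W[i, j] * x[j]
        if cycle >= N - 1:          # a completed y_i leaves the last PE
            results.append(nxt[N])
        pipeline = nxt
    return np.array(results)

W = np.arange(12, dtype=float).reshape(3, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])
print(systolic_matvec(W, x), "vs", W @ x)   # both give [20. 60. 100.]
```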
Application of Coarse-Coding Techniques for Evolvable Multirobot Controllers
Abstract
Robots, in their most general embodiment, can be complex systems trying to negotiate and manipulate an unstructured environment. They ideally require an 'intelligence' that reflects our own. Artificial evolutionary algorithms are often used to generate high-level controllers for single- and multi-robot scenarios, but evolutionary algorithms, for all their advantages, can be very computationally intensive. It is therefore very desirable to minimize the number of generations required to reach a solution. In this chapter, we combine the Artificial Neural Tissue (ANT) approach to robot control from previous work with a novel Sensory Coarse Coding (SCC) model. This model is able to exploit regularity in the sensor data of the environment. Determining how the sensor suite of a robot should be configured and utilized is critical for the robot's operation, and much as nature evolves body and brain simultaneously, we should expect improved performance from artificially evolving the controller and sensor configuration in unison. Simulation results on an example task, resource gathering, show that the ANT+SCC system is capable of finding fitter solutions in fewer generations. We also report on hardware experiments for the same task that show complex behaviors emerging through self-organized task decomposition.
Jekanthan Thangavelautham, Paul Grouchy, Gabriele M. T. D’Eleuterio
Metadata
Title
Computational Intelligence in Optimization
Edited by
Yoel Tenne
Chi-Keong Goh
Copyright year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-12775-5
Print ISBN
978-3-642-12774-8
DOI
https://doi.org/10.1007/978-3-642-12775-5
