
About this Book

The two-volume set LNCS 6593 and 6594 constitutes the refereed proceedings of the 10th International Conference on Adaptive and Natural Computing Algorithms, ICANNGA 2011, held in Ljubljana, Slovenia, in April 2011. The 83 revised full papers presented were carefully reviewed and selected from a total of 144 submissions. The first volume includes 42 papers and a plenary lecture and is organized in topical sections on neural networks and evolutionary computation.

Table of Contents

Frontmatter

Plenary Session

Autonomous Discovery of Abstract Concepts by a Robot

In this paper we look at the discovery of abstract concepts by a robot autonomously exploring its environment and learning the laws of the environment. By abstract concepts we mean concepts that are not explicitly observable in the measured data, such as the notions of obstacle, stability or a tool. We consider mechanisms of machine learning that enable the discovery of abstract concepts. Such mechanisms are provided by the logic-based approach to machine learning called Inductive Logic Programming (ILP). The feature of predicate invention in ILP is particularly relevant. Examples of abstract concepts actually discovered in experiments are described.

Ivan Bratko

Neural Networks

Kernel Networks with Fixed and Variable Widths

The role of width in kernel models and radial-basis function networks is investigated with a special emphasis on the Gaussian case. Quantitative bounds are given on kernel-based regularization showing the effect of changing the width. These bounds are shown to be d-th powers of width ratios, and so they are exponential in the dimension of input data.

Věra Kůrková, Paul C. Kainen

Evaluating Reliability of Single Classifications of Neural Networks

Current machine learning algorithms perform well on many problem domains, but in risk-sensitive decision making, for example in medicine and finance, common evaluation methods that give overall assessments of models fail to gain trust among experts, as they do not provide any information about single predictions. We continue the previous work on approaches for evaluating the reliability of single classifications where we focus on methods that are model independent. These methods have been shown to be successful in their narrow fields of application, so we constructed a testing methodology to evaluate these methods in straightforward, general-use test cases. For the evaluation, we had to derive a statistical reference function, which enables comparison between the reliability estimators and the model’s own predictions. We compare five different approaches and evaluate them on a simple neural network with several artificial and real-world domains. The results indicate that reliability estimators CNK and LCV can be used to improve the model’s predictions.

Darko Pevec, Erik Štrumbelj, Igor Kononenko

Nonlinear Predictive Control Based on Multivariable Neural Wiener Models

This paper describes a nonlinear Model Predictive Control (MPC) scheme in which a neural Wiener model of a multivariable process is used. The model consists of a linear dynamic part in series with a steady-state nonlinear part represented by neural networks. A linear approximation of the model is calculated on-line and used for prediction. Thanks to this, the control policy is calculated by solving a quadratic programming problem. Good control accuracy and computational efficiency of the discussed algorithm are shown in the control system of a chemical reactor for which the classical MPC strategy based on a linear model is unstable.

Maciej Ławryńczuk
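
As an illustration of the Wiener structure described in the abstract above, the following Python sketch chains a linear dynamic block with a small static neural nonlinearity. All weights and dimensions are invented for illustration; the paper's model is multivariable and its nonlinear part is trained on process data.

```python
import numpy as np

# Rough sketch of a single-input neural Wiener model: a linear dynamic block
# followed by a static neural nonlinearity (illustrative parameters only).
A = np.array([0.8])          # linear dynamics: x[k+1] = 0.8 x[k] + 0.2 u[k]
B = np.array([0.2])
W1, b1 = np.array([[1.5], [-0.7]]), np.array([0.1, 0.0])   # hidden layer (made up)
W2, b2 = np.array([0.9, 1.1]), 0.05                        # output layer (made up)

def simulate(u_seq):
    x, outputs = np.zeros(1), []
    for u in u_seq:
        x = A * x + B * u                     # linear dynamic part
        h = np.tanh(W1 @ x + b1)              # static neural part
        outputs.append(float(W2 @ h + b2))
    return outputs

print(simulate([1.0, 1.0, 0.5, 0.0]))
```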

Methods of Integration of Ensemble of Neural Predictors of Time Series - Comparative Analysis

It is a well-known fact that organizing different predictors into an ensemble increases the accuracy of time-series prediction. This paper discusses different methods of integrating predictors cooperating in an ensemble. The considered methods include ordinary averaging, weighted averaging, application of principal component analysis to the data, blind source separation, as well as application of an additional neural predictor as an integrator. The proposed methods are verified on the example of 24-hour-ahead load pattern prediction in the power system, as well as next-day prediction of environmental pollution.

Stanislaw Osowski, Krzysztof Siwek
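
The two simplest integration schemes mentioned above, ordinary and weighted averaging, can be sketched in a few lines of Python. The forecasts and the weighting rule below are illustrative assumptions, not the authors' data or method.

```python
import numpy as np

# Hypothetical forecasts from three predictors for a 24-hour load pattern
# (synthetic values, not the paper's power-system data).
rng = np.random.default_rng(0)
true_load = 100 + 10 * np.sin(np.linspace(0, 2 * np.pi, 24))
forecasts = np.stack([true_load + rng.normal(0, s, 24) for s in (2.0, 3.0, 5.0)])

# Ordinary averaging: every predictor gets the same weight.
simple_avg = forecasts.mean(axis=0)

# Weighted averaging: weights inversely proportional to each predictor's error
# (here estimated on the same series for simplicity).
errors = np.mean((forecasts - true_load) ** 2, axis=1)
weights = (1.0 / errors) / np.sum(1.0 / errors)
weighted_avg = weights @ forecasts

print("MSE simple  :", np.mean((simple_avg - true_load) ** 2))
print("MSE weighted:", np.mean((weighted_avg - true_load) ** 2))
```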

A Rejection Option for the Multilayer Perceptron Using Hyperplanes

Currently, a growing number of Artificial Intelligence tasks demand high efficiency from classification systems (classifiers); making an error in the classification of an object or event can cause serious problems. This is worrying when classifiers confront tasks where the classes are not linearly separable, since their efficiency diminishes considerably. One solution for decreasing this complication is the Rejection Option. In several circumstances it is advantageous not to take a decision and to wait for additional information instead of making an error.

This work describes a novel reject procedure whose purpose is to identify elements with a high risk of being misclassified, like those in an overlap zone. For this, the location of the object under evaluation is calculated with regard to two hyperplanes that emulate the classifier's decision boundary. The area between these hyperplanes is named the overlap region. If the element is located in this area, it is rejected.

Experiments conducted with the Multilayer Perceptron artificial neural network, trained with the Backpropagation algorithm, show that between 12.0% and 91.4% of the objects in question would have been misclassified if they had not been rejected.

Eduardo Gasca A., Sergio Saldaña T., José S. Sánchez G., Valentín Velásquez G., Eréndira Rendón L., Itzel M. Abundez B., Rosa M. Valdovinos R., Rafael Cruz R.
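
The following Python sketch illustrates the rejection idea on a plain linear score: an input falling between two parallel hyperplanes is rejected. The paper derives its hyperplanes from a trained Multilayer Perceptron's decision boundary; the weights and margin here are hypothetical.

```python
import numpy as np

def reject_or_classify(x, w, b, margin=0.5):
    """Classify x with a linear score, but reject it when it falls in the
    overlap region between the hyperplanes w.x + b = +margin and
    w.x + b = -margin (a simplification of the MLP-based boundary)."""
    score = float(np.dot(w, x) + b)
    if abs(score) < margin:
        return "reject"          # high risk of misclassification
    return 1 if score > 0 else 0

# Illustrative usage with made-up weights.
w, b = np.array([1.0, -2.0]), 0.1
print(reject_or_classify(np.array([0.2, 0.1]), w, b))   # likely "reject"
print(reject_or_classify(np.array([3.0, -1.0]), w, b))  # class 1
```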

Parallelization of Algorithms with Recurrent Neural Networks

Neural networks can be used to describe symbolic algorithms like those specified in high-level programming languages. This article shows how to translate these network descriptions of algorithms into a more suitable format in order to feed an arbitrary number of parallel processors and speed up the computation of sequential and parallel algorithms.

João Pedro Neto, Fernando Silva

Parallel Training of Artificial Neural Networks Using Multithreaded and Multicore CPUs

This paper reports on methods for the parallelization of artificial neural network algorithms using multithreaded and multicore CPUs in order to speed up the training process. The developed algorithms were implemented in two common parallel programming paradigms and their performance is assessed using four datasets with diverse amounts of patterns and with different neural network architectures. All results show a significant increase in computation speed, with training time reduced nearly linearly with the number of cores for problems with very large training datasets.

Olena Schuessler, Diego Loyola

Supporting Diagnostics of Coronary Artery Disease with Neural Networks

Coronary artery disease is one of the most important causes of early mortality in the Western world. Therefore, clinicians seek to improve diagnostic procedures in order to reach reliable early diagnoses. In the clinical setting, coronary artery disease diagnostics is often performed in a sequential manner, where the four diagnostic steps typically consist of evaluation of (1) signs and symptoms of the disease and electrocardiogram (ECG) at rest, (2) sequential ECG testing during controlled exercise, (3) myocardial perfusion scintigraphy, and (4) finally coronary angiography, which is considered the “gold standard” reference method. Our study focuses on improving the diagnostic and probabilistic interpretation of scintigraphic images obtained from the penultimate step. We use automatic image parameterization on multiple resolutions, based on spatial association rules. Extracted image parameters are combined into more informative composite parameters by means of principal component analysis, and finally used to build automatic classifiers with neural networks and naive Bayes learning methods. Experiments show that our approach significantly increases diagnostic accuracy, specificity and sensitivity with respect to clinical results.

Matjaž Kukar, Ciril Grošelj

The Right Delay: Detecting Specific Spike Patterns with STDP and Axonal Conduction Delays

Axonal conduction delays should not be ignored in simulations of spiking neural networks. Here it is shown that by using axonal conduction delays, neurons can display sensitivity to a specific spatio-temporal spike pattern. By using delays that complement the firing times in a pattern, spikes can arrive simultaneously at an output neuron, giving it a high chance of firing in response to that pattern. An unsupervised learning mechanism called spike-timing-dependent plasticity then increases the weights for connections used in the pattern, and decreases the others. This allows for an attunement of output neurons to specific activity patterns, based on temporal aspects of axonal conductivity.

Arvind Datadien, Pim Haselager, Ida Sprinkhuizen-Kuyper
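
A minimal sketch of the delay-matching idea from the abstract: given the firing times of a target pattern, axonal delays are chosen so that all spikes reach the output neuron at the same instant. The numbers below are invented for illustration.

```python
# Firing times (ms) of input neurons within a hypothetical target pattern.
firing_times = {"n1": 2.0, "n2": 5.0, "n3": 9.0}

# Pick a common arrival instant and set each delay to complement the firing time.
arrival_time = max(firing_times.values()) + 1.0
delays = {n: arrival_time - t for n, t in firing_times.items()}

for neuron, d in delays.items():
    print(f"axonal delay for {neuron}: {d:.1f} ms "
          f"(spike at {firing_times[neuron]:.1f} ms arrives at {arrival_time:.1f} ms)")
```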

New Measure of Boolean Factor Analysis Quality

Learning of objects from complex patterns is a long-term challenge in philosophy, neuroscience, machine learning, data mining, and statistics. There are several approaches in the literature that try to solve this difficult task of discovering the hidden structure of high-dimensional binary data, and one of them is Boolean factor analysis. However, there is no expert-independent measure for evaluating this method in terms of the quality of the solutions obtained when analyzing unknown data. Here we propose information gain, a model-based measure of the rate of success of individual methods. This measure presupposes that observed signals arise as a Boolean superposition of base signals with noise. For the case where a method does not provide the parameters necessary for the information gain calculation, we introduce a procedure for their estimation. Using an extended version of the “Bars Problem” to generate typical synthetic data for such a task, we show that our measure is sensitive to all types of data model parameters and attains its maximum when the best fit is achieved.

Alexander A. Frolov, Dusan Husek, Pavel Yu. Polyakov
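
The measure proposed above presupposes that observed signals are Boolean superpositions of base signals with noise; the short sketch below generates data under that model. Sizes, sparsity and noise level are arbitrary assumptions, not the paper's “Bars Problem” settings.

```python
import numpy as np

# Generate binary patterns as a Boolean (OR) superposition of a few sparse
# base signals ("factors"), corrupted by random bit-flip noise.
rng = np.random.default_rng(4)
n_factors, n_bits, n_patterns, noise = 8, 64, 100, 0.02
factors = rng.random((n_factors, n_bits)) < 0.15           # sparse base signals

loadings = rng.random((n_patterns, n_factors)) < 0.3       # which factors are active
observed = (loadings.astype(int) @ factors.astype(int)) > 0  # Boolean OR superposition
observed ^= rng.random(observed.shape) < noise              # bit-flip noise

print("mean pattern density:", observed.mean())
```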

Mechanisms of Adaptive Spatial Integration in a Neural Model of Cortical Motion Processing

In the visual cortex, information is processed along a cascade of neural mechanisms that pool activations from the surround with spatially increasing receptive fields. Watching a scene with multiple moving objects leads to object boundaries on the retina defined by discontinuities in feature domains such as luminance or velocity. Spatial integration across the boundaries mixes distinct sources of input signals and leads to unreliable measurements. Previous work [6] proposed a luminance-gated motion integration mechanism, which does not account for the presence of discontinuities in other feature domains. Here, we propose a biologically inspired model that utilizes the low and intermediate stages of cortical motion processing, namely V1, MT and MSTl, to detect motion by locally adapting spatial integration fields depending on motion contrast. This mechanism generalizes the concept of bilateral filtering proposed for anisotropic smoothing in image restoration in computer vision.

Stefan Ringbauer, Stephan Tschechne, Heiko Neumann

Self-organized Short-Term Memory Mechanism in Spiking Neural Network

The paper is devoted to the implementation and exploration of the evolutionary development of a short-term memory mechanism in spiking neural networks (SNN), starting from an initial chaotic state. Short-term memory is defined here as a network's ability to store information about recent stimuli in the form of specific neuron activity patterns. Stable appearance of this effect was demonstrated for the so-called stabilizing SNN, a network model proposed by the author. In order to show the desired evolutionary behavior, the network should have a specific topology determined by “horizontal” layers and “vertical” columns.

Mikhail Kiselev

Approximation of Functions by Multivariable Hermite Basis: A Hybrid Method

In this paper an approximation of multivariable functions by a Hermite basis is presented and discussed. The basis considered here is constructed as a product of one-variable Hermite functions with adjustable scaling parameters. The approximation is calculated via a hybrid method: the expansion coefficients are obtained using an explicit, non-search formula, while the scaling parameters are determined via a search algorithm. A set with an excessive number of Hermite functions is initially calculated. To constitute the approximation basis, only those functions are taken which ensure the fastest error decrease down to a desired level. Working examples are presented, demonstrating a very good generalization property of this method.

Bartlomiej Beliczynski
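
To make the hybrid scheme concrete, the sketch below builds scaled one-variable Hermite functions and fits the expansion coefficients by least squares for a one-dimensional target. The scaling-parameter search and the function selection step from the paper are omitted, and all settings are illustrative.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_function(n, x, a=1.0):
    """n-th one-variable Hermite function with a scaling parameter a
    (a rough sketch of the basis elements described in the abstract)."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    norm = 1.0 / sqrt(2**n * factorial(n) * sqrt(pi))
    return norm * sqrt(a) * hermval(a * x, coeffs) * np.exp(-(a * x) ** 2 / 2)

# Fit expansion coefficients by least squares for an illustrative 1-D target.
x = np.linspace(-3, 3, 200)
target = np.sin(2 * x) * np.exp(-0.3 * x**2)
basis = np.stack([hermite_function(n, x) for n in range(8)], axis=1)
coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
print("approximation error:", np.linalg.norm(basis @ coeffs - target))
```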

Using Pattern Recognition to Predict Driver Intent

Advanced Driver Assistance Systems (ADAS) should correctly infer the intentions of the driver from what is implied by the incoming data available to them. Gaze behaviour has been found to be an indicator of information gathering, and therefore could be used to derive information about the driver's next planned objective in order to identify intended manoeuvres without relying solely on car data. Previous work has shown that significantly distinct gaze patterns precede each of the driving manoeuvres analysed, indicating that eye movement data might be used as input to ADAS, supplementing sensors such as CAN-Bus, laser, or radar in order to recognise intended driving manoeuvres. Drivers' gaze behaviour was measured prior to and during the execution of different driving manoeuvres performed in a dynamic driving simulator. The efficacy of Artificial Neural Network models in learning to predict the occurrence of certain driving manoeuvres using both car and gaze data was investigated, which could successfully be demonstrated with real traffic data [1]. Issues considered included the amount of data prior to the manoeuvre to use, the relative difficulty of predicting different manoeuvres, and the accuracy of the models at different pre-manoeuvre times.

Firas Lethaus, Martin R. K. Baumann, Frank Köster, Karsten Lemmer

Neural Networks Committee for Improvement of Metal’s Mechanical Properties Estimates

In this paper we discuss the problem of estimating a metal's mechanical characteristics on the basis of indentation curves. The solution of this problem makes it possible to unify computational and experimental control methods of elastic properties of materials at all stages of the equipment life cycle (manufacturing, maintenance, reparation). Preliminary experiments based on data obtained with the finite element analysis method have proved this problem to be ill-posed and impossible to solve with a single multilayer perceptron at the required precision level. To improve the accuracy of the estimates we propose to use a special neural net structure for the neural networks committee decision making. Experimental results have shown accuracy improvement for estimates produced by the neural networks committee and confirmed their stability.

Olga A. Mishulina, Igor A. Kruglov, Murat B. Bakirov

Logarithmic Multiplier in Hardware Implementation of Neural Networks

Neural networks on chip have found some niche areas of application, ranging from mass consumer products requiring low costs to real-time systems requiring real-time response. As for the latter, iterative logarithmic multipliers show a great potential for increasing the performance of hardware neural networks. By relatively reducing the size of the multiplication circuit, the concurrency and consequently the speed of the model can be greatly improved. The proposed hardware implementation of the multilayer perceptron with on-chip learning ability confirms the potential of the concept. The experiments performed on a Proben1 benchmark dataset show that the adaptive nature of the proposed neural network model enables the compensation of the errors caused by inexact calculations while simultaneously increasing its performance and reducing power consumption.

Uroš Lotrič, Patricio Bulić
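
As background for the hardware idea above, the following sketch implements Mitchell's basic logarithmic multiplication, the approximation underlying iterative logarithmic multipliers. The paper's circuit additionally applies iterative error-correction terms, which are not shown here.

```python
def mitchell_multiply(a: int, b: int) -> int:
    """Approximate a*b with Mitchell's logarithmic multiplication: take the
    positions of the leading ones as integer logs, add the mantissa remainders,
    and recombine with shifts and additions only (no true multiplication)."""
    if a == 0 or b == 0:
        return 0
    k1, k2 = a.bit_length() - 1, b.bit_length() - 1   # leading-one positions
    x1, x2 = a - (1 << k1), b - (1 << k2)             # mantissa remainders
    return (1 << (k1 + k2)) + (x1 << k2) + (x2 << k1)

print(mitchell_multiply(13, 9), "vs exact", 13 * 9)   # 112 vs exact 117
```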

Efficiently Explaining Decisions of Probabilistic RBF Classification Networks

For many important practical applications, model transparency is an important requirement. A probabilistic radial basis function (PRBF) network is an effective non-linear classifier, but similarly to most other neural network models it is not straightforward to obtain explanations for its decisions. Recently, two general methods for explaining a model's decisions for individual instances have been introduced, which are based on the decomposition of a model's prediction into contributions of each attribute. By exploiting the marginalization property of the Gaussian distribution, we show that the PRBF is especially suitable for these explanation techniques. By explaining the PRBF's decisions for new unlabeled cases we demonstrate the resulting methods and accompany the presentation with a visualization technique that works both for single instances and for attributes and their values, thus providing a valuable tool for inspection of otherwise opaque models.

Marko Robnik-Šikonja, Aristidis Likas, Constantinos Constantinopoulos, Igor Kononenko, Erik Štrumbelj

Evolving Sum and Composite Kernel Functions for Regularization Networks

In this paper we propose a novel evolutionary algorithm for regularization networks. The main drawback of regularization networks in practical applications is the presence of meta-parameters, including the type and parameters of kernel functions. Our learning algorithm provides a solution to this problem by searching through a space of different kernel functions, including sum and composite kernels. Thus, an optimal combination of kernel functions with parameters is evolved for a given task specified by training data. Comparisons of composite kernels, single kernels, and traditional Gaussians are provided in several experiments.

Petra Vidnerová, Roman Neruda

Optimisation of Concentrating Solar Thermal Power Plants with Neural Networks

The exploitation of solar power for energy supply is of increasing importance. While technical development mainly takes place in the engineering disciplines, computer science offers adequate techniques for simulation, optimisation and controller synthesis.

In this paper we describe a work from this interdisciplinary area. We introduce our tool for the optimisation of parameterised solar thermal power plants, and report on the employment of genetic algorithms and neural networks for parameter synthesis. Experimental results show the applicability of our approach.

Pascal Richter, Erika Ábrahám, Gabriel Morin

Emergence of Attention Focus in a Biologically-Based Bidirectionally-Connected Hierarchical Network

We present a computational model for visual processing where attentional focus emerges from fundamental mechanisms inherent to human vision. Through detailed analysis of activation development in the network we demonstrate how normal interaction between top-down and bottom-up processing and intrinsic mutual competition within processing units can give rise to attentional focus. The model includes both spatial and object-based attention, which are computed simultaneously and can mutually reinforce each other. We show how a non-salient location and a corresponding non-salient feature set that are at first weakly activated by visual input can be reinforced by top-down feedback signals (centrally controlled attention), and instigate a change in attentional focus to the weak object. One application of this model is to highlight a task-relevant object in a cluttered visual environment, even when this object is non-salient (non-conspicuous).

Mohammad Saifullah, Rita Kovordányi

Visualizing Multidimensional Data through Multilayer Perceptron Maps

Visualization of high-dimensional data is a major task in data mining. The main idea of visualization is to map data from the high-dimensional space onto a certain position in a low-dimensional space. Of all mappings, only those that lead to maps that are good approximations of the data distribution observed in the high-dimensional space are of interest. Here, we present a mapping scheme based on multilayer perceptrons that forms a two-dimensional representation of high-dimensional data. The core idea is that the system maps all vectors to a certain position in the two-dimensional space. We then measure how much this map resembles the distribution in the original high-dimensional space, which leads to an error measure. Based on this error, we apply reinforcement learning to multilayer perceptrons to find good maps. We present here the description of the model as well as some results on well-known benchmarks. We conclude that the multilayer perceptron is a good tool to visualize high-dimensional data.

Antonio Neme, Antonio Nido

Input Separability in Living Liquid State Machines

To further understand computation in living neuronal networks (LNNs) and improve artificial neural networks (NNs), we seek to create a hybrid liquid state machine (LSM) that relies on an LNN for the reservoir. This study embarks on a crucial first step, establishing effective methods for finding large numbers of separable input stimulation patterns in LNNs. The separation property is essential for information transfer to LSMs and therefore necessary for computation in our hybrid system. In order to successfully transfer information to the reservoir, it must be encoded into stimuli that reliably evoke separable responses. Candidate spatio-temporal patterns are delivered to LNNs via microelectrode arrays (MEAs), and the separability of their corresponding responses is assessed. Support vector machine (SVM) classifiers assess separability and a genetic algorithm-based method identifies subsets of maximally separable patterns. The tradeoff between symbol set size and separability is evaluated.

Robert L. Ortman, Kumar Venayagamoorthy, Steve M. Potter

Predictive Control of a Distillation Column Using a Control-Oriented Neural Model

This paper describes a special neural model developed with the specific aim of being used in nonlinear Model Predictive Control (MPC). The model consists of two neural networks. The model structure strictly mirrors its role in a suboptimal (linearisation-based) MPC algorithm: the first network is used to calculate on-line the influence of the past, while the second network directly estimates the time-varying step response of the locally linearised neural model, without explicit on-line linearisation. Advantages of MPC based on the described model structure (high control accuracy, computational efficiency and ease of development) are demonstrated in the control system of a distillation column.

Maciej Ławryńczuk

Neural Prediction of Product Quality Based on Pilot Paper Machine Process Measurements

We describe a multilayer perceptron model to predict the laboratory measurements of paper quality using the instantaneous state of the papermaking production process. Actual industrial data from a pilot paper machine was used. The final model met its goal accuracy 95.7% of the time at best (tensile index quality) and 66.7% at worst (beta formation). We anticipate usage possibilities in lowering machine prototyping expenses, and possibly in quality control at production sites.

Paavo Nieminen, Tommi Kärkkäinen, Kari Luostarinen, Jukka Muhonen

A Robotic Scenario for Programmable Fixed-Weight Neural Networks Exhibiting Multiple Behaviors

Artificial neural network architectures are systems which usually exhibit a unique/special behavior on the basis of a fixed structure expressed in terms of parameters computed by a training phase. In contrast with this approach, we present a robotic scenario in which an artificial neural network architecture, the Multiple Behavior Network (MBN), is proposed as a robotic controller in a simulated environment. MBN is composed of two Continuous-Time Recurrent Neural Networks (CTRNNs), and is organized in a hierarchical way: an Interpreter Module (IM) and a Program Module (PM). IM is a fixed-weight CTRNN designed in such a way as to behave as an interpreter of the signals coming from PM, thus being able to switch among different behaviors in response to the PM output programs. We suggest how such an MBN architecture can be incrementally trained in order to show and even acquire new behaviors by letting PM learn new programs, without modifying the IM structure.

Guglielmo Montone, Francesco Donnarumma, Roberto Prevete

Self-Organising Maps in Document Classification: A Comparison with Six Machine Learning Methods

This paper focuses on the use of self-organising maps, also known as Kohonen maps, for the classification of text documents. The aim is to effectively and automatically classify documents into separate classes based on their topics. Classification with the self-organising map was tested with three data sets and the results were then compared to those of six well-known baseline methods: k-means clustering, Ward's clustering, k nearest neighbour searching, discriminant analysis, the Naïve Bayes classifier and the classification tree. The self-organising map proved to yield the highest accuracies of the tested unsupervised methods in classification of the Reuters news collection and the Spanish CLEF 2003 news collection, and comparable accuracies against some of the supervised methods in all three data sets.

Jyri Saarikoski, Jorma Laurikkala, Kalervo Järvelin, Martti Juhola

Analysis and Short-Term Forecasting of Highway Traffic Flow in Slovenia

Analysis and short-term forecasting of traffic flow data for several locations of the Slovenian highway network are presented. Daily and weekly seasonal components of the data are analysed and several features are extracted to support the forecasting. Various short-term forecasting models are developed for one-hour-ahead forecasting of the traffic flow. The models include benchmark models (random walk, seasonal random walk, naive model), AR and ARMA models, and various configurations of feedforward neural networks. Results show that the best forecasting results (correlation coefficient R > 0.99) are obtained by a feedforward neural network with a selected set of inputs, but this sophisticated model surprisingly only slightly surpasses the accuracy of a simple naive model.

Primož Potočnik, Edvard Govekar
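
The benchmark models named in the abstract are simple to state; the sketch below shows one-hour-ahead naive and seasonal-random-walk forecasts on a synthetic hourly series. The data and season length are assumptions, not the Slovenian measurements.

```python
import numpy as np

def naive_forecast(series):
    """One-hour-ahead naive forecast: predict the last observed value."""
    return series[-1]

def seasonal_naive_forecast(series, season=24):
    """Seasonal random walk: predict the value observed one season (24 h) ago."""
    return series[-season]

# Illustrative hourly traffic-flow series (synthetic, not real highway data).
rng = np.random.default_rng(1)
hours = np.arange(24 * 14)
flow = 500 + 300 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 30, hours.size)

print("naive         :", naive_forecast(flow))
print("seasonal naive:", seasonal_naive_forecast(flow))
```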

Evolutionary Computation

A New Method of EEG Classification for BCI with Feature Extraction Based on Higher Order Statistics of Wavelet Components and Selection with Genetic Algorithms

A new method of feature extraction and selection of the EEG signal for brain-computer interface design is presented. The proposed feature extraction method is based on higher order statistics (HOS) calculated for the details of the discrete wavelet transform (DWT) of the EEG signal. A genetic algorithm is then used for feature selection. During the experiment, classification is conducted on single trials of EEG signals. The proposed novel method of feature extraction using HOS and DWT gives more accurate results than an algorithm based on the discrete Fourier transform (DFT).

Marcin Kołodziej, Andrzej Majkowski, Remigiusz J. Rak
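
A rough sketch of the feature-extraction step described above: higher order statistics (skewness and kurtosis) computed on the detail coefficients of a wavelet decomposition. A plain Haar transform and a random surrogate signal stand in for the paper's DWT and EEG data, and the genetic-algorithm selection stage is not shown.

```python
import numpy as np

def haar_details(signal, levels=3):
    """Detail coefficients of a plain Haar DWT (a stand-in for the DWT used in the paper)."""
    s, details = np.asarray(signal, dtype=float), []
    for _ in range(levels):
        s = s[: len(s) // 2 * 2]
        approx = (s[0::2] + s[1::2]) / np.sqrt(2)
        details.append((s[0::2] - s[1::2]) / np.sqrt(2))
        s = approx
    return details

def hos_features(signal):
    """Skewness and kurtosis of each detail level, as candidate HOS features."""
    feats = []
    for d in haar_details(signal):
        m, sd = d.mean(), d.std() + 1e-12
        feats += [np.mean(((d - m) / sd) ** 3), np.mean(((d - m) / sd) ** 4)]
    return np.array(feats)

# Illustrative surrogate EEG segment (random noise, not real BCI data).
rng = np.random.default_rng(3)
print(hos_features(rng.normal(0, 1, 256)))
```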

Regressor Survival Rate Estimation for Enhanced Crossover Configuration

In the framework of nonlinear systems identification by means of multiobjective genetic programming, the paper introduces a customized crossover operator, guided by fuzzy controlled regressor encapsulation. The approach is aimed at achieving a balance between exploration and exploitation by protecting well adapted subtrees from division during recombination. To reveal the benefits of the suggested genetic operator, the authors introduce a novel mathematical formalism which extends the Schema Theory for cut point crossover operating on trees encoding regressor based models. This general framework is afterwards used for monitoring the survival rates of fit encapsulated structural blocks. Other contributions are proposed in answer to the specific requirements of the identification problem, such as a customized tree building mechanism, enhanced elite processing and the hybridization with a local optimization procedure. The practical potential of the suggested algorithm is demonstrated in the context of an industrial application involving the identification of a subsection within the sugar factory of Lublin, Poland.

Alina Patelli, Lavinia Ferariu

A Study on Population’s Diversity for Dynamic Environments

The use of mechanisms that generate and maintain diversity in the population has always been seen as fundamental to helping Evolutionary Algorithms achieve better performance when dealing with dynamic environments. In recent years, several studies have shown that this is not always true and that, in some situations, too much diversity can hinder the performance of Evolutionary Algorithms dealing with dynamic environments. In order to gain more insight into this important issue, we tested the performance of four types of Evolutionary Algorithms using different methods for promoting diversity. All the algorithms were tested in cyclic and random dynamic environments using two different benchmark problems. We measured the diversity of the population and the performance obtained by the algorithms, and drew important conclusions.

Anabela Simões, Rui Carvalho, João Campos, Ernesto Costa

Effect of the Block Occupancy in GPGPU over the Performance of Particle Swarm Algorithm

Diverse technologies have been used to accelerate the execution of Evolutionary Algorithms. Nowadays, GPGPU cards have demonstrated high efficiency in improving execution times in a wide range of scientific problems, including some excellent examples with diverse categories of Evolutionary Algorithms. Nevertheless, in-depth studies of the efficiency of each of these technologies, and of how they affect the final performance, are still scarce. These studies are relevant in order to reduce the execution time budget and therefore tackle higher-dimensional problems. In this work, the improvement of the speed-up with respect to the percentage of threads used per block in the GPGPU card is analysed. The results conclude that a correct choice of the occupancy (the number of threads per block) contributes an additional speed-up.

Miguel Cárdenas-Montes, Miguel A. Vega-Rodríguez, Juan José Rodríguez-Vázquez, Antonio Gómez-Iglesias

Two Improvement Strategies for Logistic Dynamic Particle Swarm Optimization

A new variant of particle swarm optimization, Logistic Dynamic Particle Swarm Optimization (LDPSO), is introduced in this paper. LDPSO is constructed on a new population generation method that exploits historical information about the particles, and it has a better searching capability in comparison to the canonical method. Furthermore, according to the characteristics of LDPSO, two improvement strategies are designed: a mutation strategy is employed to prevent premature convergence of particles, and a selection strategy is adopted to maintain the diversity of particles. Experimental results demonstrate the efficiency of LDPSO and the effectiveness of the two improvement strategies.

Qingjian Ni, Jianming Deng

Digital Watermarking Enhancement Using Wavelet Filter Parametrization

In this paper a genetic-based enhancement of digital image watermarking in the Discrete Wavelet Transform domain is presented. The proposed method is based on adaptive synthesis of a mother wavelet used for image decomposition. Wavelet synthesis is performed using a parametrization based on an orthogonal lattice structure. A genetic algorithm is applied as an optimization method to synthesize a wavelet that provides the best watermarking quality with respect to the given optimality criteria. The effectiveness of the proposed method is demonstrated by comparing watermarking results using synthesized wavelets and the most commonly used Daubechies wavelets. Experiments demonstrate that mother wavelet selection is an important part of the watermark embedding process and can influence watermarking robustness, separability and fidelity.

Piotr Lipiński, Jan Stolarek

CellularDE: A Cellular Based Differential Evolution for Dynamic Optimization Problems

In real life we are often confronted with dynamic optimization problems whose optima change over time. These problems challenge traditional optimization methods as well as conventional evolutionary optimization algorithms. In this paper, we propose an evolutionary model that combines the differential evolution algorithm with cellular automata to address dynamic optimization problems. In the proposed model, called CellularDE, a cellular automaton partitions the search space into cells. Individuals in each cell, which implicitly create a subpopulation, are evolved by the differential evolution algorithm to find the local optimum in the cell neighborhood. Experimental results on the moving peaks benchmark show that CellularDE outperforms DynDE, cellular PSO, FMSO, and mQSO in most tested dynamic environments.

Vahid Noroozi, Ali B. Hashemi, Mohammad Reza Meybodi

Optimization of Topological Active Nets with Differential Evolution

The Topological Active Net model for image segmentation is a deformable model that integrates features of region-based and boundary-based segmentation techniques. The segmentation process turns into a minimization task of the energy functions which control the model deformation. We used Differential Evolution as an alternative evolutionary method that minimizes the decisions required of the designer with respect to other evolutionary methods such as genetic algorithms. Moreover, we hybridized Differential Evolution with a greedy search to integrate the advantages of global and local searches while also improving the segmentation speed.

Jorge Novo Buján, José Santos, Manuel G. Penedo

Study on the Effects of Pseudorandom Generation Quality on the Performance of Differential Evolution

Experience in the field of Monte Carlo methods indicates that the quality of the random number generator is exceedingly significant for obtaining good results. This has not been demonstrated in the field of evolutionary optimization, and many practitioners in the field assume that the choice of the generator is superfluous and fail to document this aspect of their algorithm. In this paper, we demonstrate empirically that the requirement of a high-quality generator does not hold in the case of Differential Evolution.

Ville Tirronen, Sami Äyrämö, Matthieu Weber

Sensitiveness of Evolutionary Algorithms to the Random Number Generator

This article presents an empirical study of the impact of changing the Random Number Generator on the performance of four Evolutionary Algorithms: Particle Swarm Optimisation, Differential Evolution, the Genetic Algorithm and the Firefly Algorithm. Random Number Generators are a key piece in the production of e-science, including optimisation problems solved by Evolutionary Algorithms. However, the Random Number Generator ought to be carefully selected, taking into account the quality of the generator. In order to analyse the impact of the change of Random Number Generator on the performance of an evolutionary algorithm, a huge production of simulated data is necessary, as well as the use of statistical techniques to extract relevant information from large data sets. To support this production, a grid computing infrastructure has been employed. In this study, the most frequently employed high-quality Random Number Generators and Evolutionary Algorithms are coupled in order to cover the widest portfolio of cases. As a consequence of this study, an evaluation of the impact of using different Random Number Generators on the final performance of the Evolutionary Algorithms is stated.

Miguel Cárdenas-Montes, Miguel A. Vega-Rodríguez, Antonio Gómez-Iglesias

New Efficient Techniques for Dynamic Detection of Likely Invariants

Invariants can be defined as prominent relations among program variables. The Daikon software implements a practical algorithm for invariant detection, and there are several other dynamic approaches to the problem. Compared with other dynamic invariant detection methods, Daikon is considered the best software developed for dynamic invariant detection. However, this method has some problems: its time complexity is high, which makes it of limited use in practice, and the bottleneck of the algorithm is predicate checking. In this paper, two new techniques are presented to improve the performance of the Daikon algorithm. Experimental results show that, with these amendments, the runtime of dynamic invariant detection is much better than that of the original method.

Saeed Parsa, Behrouz Minaei, Mojtaba Daryabari, Hamid Parvin

Classification Ensemble by Genetic Algorithms

Different classifiers with different characteristics and methodologies can complement each other and cover their internal weaknesses; thus, classifier ensembles are an important approach to handling this drawback. If an automatic and fast method can be obtained to approximate the accuracies of different classifiers on a typical dataset, learning can be converted into an optimization problem, and a genetic algorithm is a suitable approach for this. We propose a selection method for classification ensembles that applies a GA to improve classification performance. CEGA is examined on several datasets and shows considerable improvements.

Hamid Parvin, Behrouz Minaei, Akram Beigi, Hoda Helmi

Simulated Evolution (SimE) Based Embedded System Synthesis Algorithm for Electric Circuit Units (ECUs)

An ECU (Electric Circuit Unit) is a type of embedded system that is used in automobiles to perform different functions. The synthesis process of an ECU requires that the hardware be optimized for cost and power consumption and provide fault tolerance, as many applications are related to car safety systems. This paper presents a Simulated Evolution (SimE) based multiobjective optimization algorithm to perform ECU synthesis. The optimization objectives are hardware cost, power consumption, and tolerance of single faults. The performance of the proposed algorithm is measured and compared with Parallel Re-combinative Simulated Annealing (PRSA) and a Genetic Algorithm (GA). The comparison results show that the proposed algorithm has an execution time that is 5.19 and 1.15 times lower, and a synthesized hardware cost that is 3.35 and 2.73 times lower, than PRSA and GA, respectively. The power consumption of PRSA and GA (without fault tolerance) is 0.94 and 0.68 times that of the proposed algorithm with fault tolerance.

Umair F. Siddiqi, Yoichi Shiraishi, Mona A. El-Dahb, Sadiq M. Sait

Taxi Pick-Ups Route Optimization Using Genetic Algorithms

This paper presents a case study of a taxi company whose problem is to pick up passengers more efficiently in order to save time and fuel. The taxi company's journeys start and end at two nearby locations, which can be treated as the same location for problem-solving purposes, turning this into a typical Travelling Salesman Problem where the goal is, given a set of cities and roads, to find the best route by which to visit every city and return home. The result of the study is a user-friendly software tool that allows the selection on a map of the pick-up locations of the taxi passengers and afterwards presents on the same map the best route, computed using a genetic algorithm. The taxi company is currently using the developed software.

Jorge Nunes, Luís Matos, António Trigo
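
To illustrate the Travelling Salesman formulation used above, here is a small evolutionary search over pick-up orders with a common start/end depot. It uses truncation selection and swap mutation only, so it is a simplified stand-in for the paper's genetic algorithm, and the coordinates are invented.

```python
import random, math

# Illustrative pick-up locations and depot (not the company's real data).
random.seed(0)
points = [(random.random(), random.random()) for _ in range(10)]
depot = (0.5, 0.5)

def tour_length(order):
    """Length of the closed route depot -> pick-ups in 'order' -> depot."""
    route = [depot] + [points[i] for i in order] + [depot]
    return sum(math.dist(route[i], route[i + 1]) for i in range(len(route) - 1))

def mutate(order):
    """Swap two pick-ups to produce a neighbouring tour."""
    a, b = random.sample(range(len(order)), 2)
    child = order[:]
    child[a], child[b] = child[b], child[a]
    return child

population = [random.sample(range(len(points)), len(points)) for _ in range(50)]
for _ in range(200):
    population.sort(key=tour_length)
    parents = population[:10]                     # simple truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = min(population, key=tour_length)
print("best order:", best, "length:", round(tour_length(best), 3))
```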

Optimization of Gaussian Process Models with Evolutionary Algorithms

Gaussian process (GP) models are non-parametric, black-box models that represent a new method for system identification. The optimization of GP models, due to their probabilistic nature, is based on maximization of the probability of the model. This probability can be calculated by the marginal likelihood. Commonly used approaches for maximizing the marginal likelihood of GP models are deterministic optimization methods. However, their success critically depends on the initial values. In addition, the marginal likelihood function often has many local minima in which a deterministic method can be trapped. Therefore, stochastic optimization methods can be considered as an alternative approach. In this paper we test their applicability in GP model optimization. We performed a comparative study of three stochastic algorithms: the genetic algorithm, differential evolution, and particle swarm optimization. Empirical tests were carried out on a benchmark problem of modeling the concentration of CO2 in the atmosphere. The results indicate that with proper tuning, differential evolution and particle swarm optimization significantly outperform the conjugate gradient method.

Dejan Petelin, Bogdan Filipič, Juš Kocijan
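
The objective that the evolutionary algorithms maximise is the GP log marginal likelihood; the sketch below evaluates it for a zero-mean GP with a squared-exponential kernel on synthetic one-dimensional data. The kernel choice, hyperparameters and data are assumptions, not the paper's CO2 benchmark.

```python
import numpy as np

def log_marginal_likelihood(theta, X, y):
    """Log marginal likelihood of a zero-mean GP with a squared-exponential
    kernel; theta = (signal variance, length scale, noise variance)."""
    sf2, ell, sn2 = theta
    d2 = (X[:, None] - X[None, :]) ** 2
    K = sf2 * np.exp(-0.5 * d2 / ell**2) + sn2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(X) * np.log(2 * np.pi)

# Illustrative 1-D data; a stochastic optimizer would search over theta.
rng = np.random.default_rng(2)
X = np.linspace(0, 5, 30)
y = np.sin(X) + rng.normal(0, 0.1, X.size)
print(log_marginal_likelihood((1.0, 1.0, 0.01), X, y))
```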

Backmatter
