2005 | Book

Adaptive and Natural Computing Algorithms

Proceedings of the International Conference in Coimbra, Portugal, 2005

Editors: Dr. Bernardete Ribeiro, Dr. Rudolf F. Albrecht, Dr. Andrej Dobnikar, Dr. David W. Pearson, Dr. Nigel C. Steele

Publisher: Springer Vienna

About this book

The ICANNGA series of Conferences has been organised since 1993 and has a long history of promoting the principles and understanding of computational intelligence paradigms within the scientific community; it is a reference for established workers in this area. Starting in Innsbruck, Austria (1993), the series moved to Alès in France (1995), Norwich in England (1997), Portorož in Slovenia (1999), Prague in the Czech Republic (2001) and finally Roanne, France (2003), establishing itself among experienced workers in the field. The series has also been of value to young researchers wishing both to extend their knowledge and experience and to meet internationally renowned experts. The 2005 Conference, the seventh in the ICANNGA series, will take place at the University of Coimbra in Portugal, drawing on the experience of previous events and following the same general model, combining technical sessions, including plenary lectures by renowned scientists, with tutorials.

Table of Contents

Frontmatter

Neural Networks

ADFUNN: An Adaptive Function Neural Network

An adaptive function neural network (ADFUNN) is introduced. It is based on a linear piecewise artificial neuron activation function that is modified by a novel gradient descent supervised learning algorithm. This Δf process is carried out in parallel with the traditional Δw process. Linearly inseparable problems can be learned with ADFUNN, rapidly and without hidden neurons. The Iris dataset classification problem is learned as an example. An additional benefit of ADFUNN is that the learned functions can support intelligent data analysis.

Dominic Palmer-Brown, Miao Kang
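
For readers unfamiliar with the two parallel updates mentioned in the ADFUNN abstract above, the following Python fragment is a minimal, hypothetical sketch of an adaptive-function neuron: the activation function is stored as a piecewise-linear table and adjusted alongside the weights. It illustrates the general idea only, not the authors' exact ADFUNN rule.

import numpy as np

class AdaptiveFunctionNeuron:
    """Illustrative only: piecewise-linear activation adapted in parallel with the weights."""
    def __init__(self, n_inputs, n_points=21, lr_w=0.01, lr_f=0.01):
        self.w = 0.1 * np.random.randn(n_inputs)      # synaptic weights
        self.grid = np.linspace(-5.0, 5.0, n_points)  # sample points of f
        self.f_vals = np.tanh(self.grid)              # initial activation shape
        self.lr_w, self.lr_f = lr_w, lr_f

    def forward(self, x):
        a = float(self.w @ x)                           # net activation
        return a, np.interp(a, self.grid, self.f_vals)  # piecewise-linear f(a)

    def update(self, x, target):
        a, y = self.forward(x)
        err = target - y
        # Delta-w step: gradient descent on the weights, using a finite-difference slope of f.
        slope = np.interp(a, self.grid, np.gradient(self.f_vals, self.grid))
        self.w += self.lr_w * err * slope * x
        # Delta-f step (in parallel): nudge the nearest table point of f toward the target.
        k = int(np.argmin(np.abs(self.grid - a)))
        self.f_vals[k] += self.lr_f * err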
Certain comments on data preparation for neural networks based modelling

The process of data preparation for neural-network-based modelling is examined. We discuss sampling, preprocessing and decimation, finally arguing for orthonormal input preprocessing.

Bartlomiej Beliczynski
A simple method for selection of inputs and structure of feedforward neural networks

When feedforward neural networks of multi-layer perceptron (MLP) type are used as black-box models of complex processes, a common problem is how to select relevant inputs from a large set of potential variables that affect the outputs to be modeled. If, furthermore, the observations of the input-output tuples are scarce, the degrees of freedom may not allow for the use of a fully connected layer between the inputs and the hidden nodes. This paper presents a systematic method for selection of both input variables and a constrained connectivity of the lower-layer weights in MLPs. The method, which can also be used as a means to provide initial guesses for the weights prior to the final training phase of the MLPs, is illustrated on a class of test problems.

H. Saxén, F. Pettersson
The Concept and Properties of Sigma-if Neural Network

Our recent work on artificial neural networks points to the possibility of extending the activation function of a standard artificial neuron model using the conditional signal accumulation technique, thus significantly enhancing the capabilities of neural networks. We present a new artificial neuron model, called Sigma-if, with the ability to dynamically tune the size of the decision space under consideration, resulting from a novel activation function. The paper discusses the construction of the proposed neuron as well as the training of Sigma-if feedforward neural networks on well-known sample classification problems.

M. Huk, H. Kwasnicka
Beta wavelet networks for function approximation

Wavelet neural networks (WNN) have recently attracted great interest, because of their advantages over radial basis function networks (RBFN) as they are universal approximators. In this paper we present a novel wavelet neural network, based on Beta wavelets, for 1-D and 2-D function approximation. Our purpose is to approximate an unknown function f: R^n → R from scattered samples (x_i, y_i = f(x_i)), i = 1, …, n, where:

we have little a priori knowledge of the unknown function f, which lives in some infinite-dimensional smooth function space;

the function approximation process is performed iteratively: each new measurement of the function (x_i, f(x_i)) is used to compute a new estimate $$\hat f$$ as an approximation of the function f.

Simulation results are demonstrated to validate the generalization ability and efficiency of the proposed Beta wavelet network.

Wajdi Bellil, Chokri Ben Amar, Adel M. Alimi
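
As background for the abstract above, the display below shows the standard single-hidden-layer wavelet network approximant; the specific Beta wavelet ψ and its parametrisation are defined in the paper itself, so the translations t_j, dilations s_j and output weights w_j are generic placeholders.

$$\hat f(x) \;=\; \sum_{j=1}^{N} w_j\, \psi\!\left(\frac{x - t_j}{s_j}\right), \qquad x \in \mathbb{R}^n .$$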
Speeding up backpropagation with Multiplicative Batch Update Step

Updating steps in a backpropagation neural network with multiplicative factors u > 1 and d < 1 have been presented by several authors. The statistics field of Stochastic Approximation has a close relation with backpropagation algorithms. Recent theoretical results in this field show that for functions of one variable, different values of u and d can produce very different results: fast convergence at the cost of a poor solution, slow convergence with a better solution, or a fast move towards a solution but without converging. To speed up backpropagation in a simple manner we propose a batch step adaptation technique for the online backpropagation algorithm based on theoretical results on simple cases.

Pedro Cruz
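
The following Python sketch illustrates one plausible form of a multiplicative batch step adaptation with factors u > 1 and d < 1, as discussed in the abstract above; the function names and the acceptance rule are assumptions for illustration, not the paper's exact scheme.

import numpy as np

def train_with_multiplicative_step(grad, loss, w, step=0.1, u=1.2, d=0.5, epochs=100):
    """Batch gradient descent whose step size is multiplied by u after an
    error-decreasing batch and by d after an error-increasing one."""
    prev_loss = loss(w)
    for _ in range(epochs):
        w_new = w - step * grad(w)
        new_loss = loss(w_new)
        if new_loss < prev_loss:
            w, prev_loss = w_new, new_loss
            step *= u          # successful batch: enlarge the step
        else:
            step *= d          # unsuccessful batch: shrink the step
    return w

# Toy usage: minimise ||w||^2.
w_opt = train_with_multiplicative_step(grad=lambda w: 2 * w,
                                       loss=lambda w: float(w @ w),
                                       w=np.array([3.0, -2.0]))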
Generating Sequential Triangle Strips by Using Hopfield Nets

The important task of generating the minimum number of sequential triangle strips (tristrips) for a given triangulated surface model is motivated by applications in computer graphics. This hard combinatorial optimization problem is reduced to the minimum energy problem in Hopfield nets by a linear-size construction. The Hopfield network powered by simulated annealing (i.e. a Boltzmann machine), implemented in the program HTGEN, can be used for computing semi-optimal stripifications. Practical experiments confirm that one can obtain much better results with HTGEN than with the leading stripification program FTSG, although the running time of simulated annealing grows rapidly near the global optimum.

Jiří Šíma
The Linear Approximation Method to the Modified Hopfield Neural Network Parameters Analysis

The dynamics of a Hopfield network are usually described by a system of differential equations. Our idea is to modify the Hopfield network so that its behavior can be described by a system of transcendental exponential equations solvable analytically by the Special Trans Function Theory (STFT). Furthermore, the linear approximation method for the system of transcendental exponential equations describing the modified Hopfield network, based upon the STFT, is discussed in some detail.

S. I. Bauk, S. M. Perovich, A. Lompar
The Analytical Analysis of Hopfield Neuron Parameters by the Application of Special Trans Function Theory

The subject of the theoretical analysis presented in the paper is a modification of the Hopfield neuron electronic model, based upon replacing the capacitor with an inversely polarized diode. The modified neuron parameters have been analytically analyzed by application of the Special Trans Function Theory (STFT). The obtained results are presented numerically and graphically.

S. M. Perovich, S. I. Bauk, N. Konjevic
Time-Oriented Hierarchical Method for Computation of Minor Components

This paper proposes a general method that transforms known neural network MSA algorithms into MCA algorithms. The method uses two distinct time scales. A given MSA algorithm is responsible, on a faster time scale, for the “behavior” of all output neurons; on this scale the minor subspace is obtained. On a slower time scale, output neurons compete to fulfill their “own interests”; on this scale, basis vectors in the minor subspace are rotated toward the minor eigenvectors. In effect, a time-oriented hierarchical method is proposed. Some simplified mathematical analysis, as well as simulation results, are presented.

M. Jankovic, H. Ogawa
Evolution versus Learning in Temporal Neural Networks

In this paper, we study the difference between two ways of setting synaptic weights in a “temporal” neural network. Used as the controller of a simulated mobile robot, the neural network is alternatively evolved through an evolutionary algorithm or trained via a Hebbian reinforcement learning rule. We compare both approaches and argue that, in the last instance, only the learning paradigm is able to exploit meaningfully the temporal features of the neural network.

Hédi Soula, Guillaume Beslon, Joël Favrel
Minimization of empirical error over perceptron networks

Supervised learning by perceptron networks is investigated as minimization of an empirical error functional. Input/output functions minimizing this functional require the same number m of hidden units as the size of the training set. Upper bounds on rates of convergence to zero of infima over networks with n hidden units (where n is smaller than m) are derived in terms of a variational norm. It is shown that fast rates are guaranteed when the sample of data defining the empirical error can be interpolated by a function which may have a rather large Sobolev-type seminorm. Fast convergence is possible even when the seminorm depends exponentially on the input dimension.

Věra Kůrková
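
For orientation, the empirical error functional referred to in the abstract above has the standard squared-error form shown below for a training sample of size m, with the infimum taken over networks with n < m hidden units.

$$\mathcal{E}_m(f) \;=\; \frac{1}{m}\sum_{i=1}^{m}\bigl(f(x_i) - y_i\bigr)^2, \qquad \inf_{f \in \mathrm{span}_n G}\, \mathcal{E}_m(f),$$

where span_n G denotes the input/output functions computable by networks with n hidden units drawn from a dictionary G.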
Interval Basis Neural Networks

The paper introduces a new type of ontogenic neural networks called Interval Basis Neural Networks (IBNNs). The IBNN configures its whole topology and computes all weights from a priori knowledge collected from the training data. After statistical analysis of the training data, the training patterns are grouped together, producing intervals separately for all input features of each class. This feature of IBNNs makes it possible to compute all network parameters without training. Moreover, the IBNN takes into account the distances between patterns of the same classes and builds a well-approximating model, especially on the borders between the classes. Furthermore, IBNNs are insensitive to differences in the number of patterns representing each class. IBNNs always classify the training data correctly and generalize very well to other data.

A. Horzyk
Learning from Randomly-Distributed Inaccurate Measurements

Traditional measurement systems are designed with tight control over the time and place of measurement of the device or environment under test. This is true whether the measurement system uses a centralized or a distributed architecture. Currently there is considerable interest in using mobile consumer devices as measurement platforms for testing large dispersed systems. There is also growing activity in developing concepts of ubiquitous measurement, such as “smart dust.” Under these conditions the times and places of measurement are random, which raises the question of the validity and interpretation of the acquired data. This paper presents a mathematical analysis that shows it is possible under certain conditions to establish dependence between error bounds and confidence probability on models built using data acquired in this manner.

John Eidson, Bruce Hamilton, Valery Kanevsky
Combining Topological and Cardinal Directional Relation Information in Qualitative Spatial Reasoning

Combining different knowledge representation languages is one of the main topics in Qualitative Spatial Reasoning (QSR). In this paper, we combine well known RCC8 calculus (RCC8) and cardinal direction calculus (CDC) based on regions and give the interaction tables for the two calculi. The interaction tables can be used as a tool in solving constraint satisfaction problems (CSP) and consistency checking procedure of QSR for combined spatial knowledge.

Haibin Sun, Wenhui Li
An Evidence Theoretic Ensemble Design Technique

Ensemble design techniques based on resampling the training set are successfully used to improve the classification accuracies of the base classifiers. In the Boosting technique, each training set is obtained by drawing samples with replacement from the available training set according to a weighted distribution which is iteratively updated for generating new classifiers for the ensemble. The resultant classifiers are accurate in different parts of the input space, mainly specified by the sample weights. In this study, a dynamic integration of boosting-based ensembles is proposed so as to take into account the heterogeneity of the input sets. In this approach, a Dempster-Shafer theory based framework is developed to consider the training sample distribution in the restricted input space of each test sample. The effectiveness of the proposed technique is compared to the AdaBoost algorithm using a nearest-mean type base classifier.

H. Altinçay
Cortical Modulation of Synaptic Efficacies through Norepinephrine

I propose a norepinephrine (NE) neuromodulatory system, which I call the “enhanced-excitatory and enhanced-inhibitory (E-E/E-I) system”. The E-E/E-I system enhanced excitatory and inhibitory synaptic connections between cortical cells, modified their ongoing background activity, and influenced subsequent cognitive neuronal processing. When stimulated with sensory features, the cognitive performance of neurons, measured as signal-to-noise (S/N) ratio, was greatly enhanced, for which one of three possible S/N enhancement schemes operated under the E-E/E-I system, namely: i) signal enhancement greater than noise increase, ii) signal enhancement and noise reduction, and iii) noise reduction greater than signal decrease. When a weaker (or subthreshold) stimulus was presented, scheme (ii) effectively enhanced the S/N ratio, whereas scheme (iii) was effective for enhancing stronger stimuli. I suggest that a release of NE into cortical areas may modify their background neuronal activity, whereby cortical neurons can effectively respond to a variety of external sensory stimuli.

O. Hoshino
Associative Memories with Small World Connectivity

In this paper we report experiments designed to find the relationship between the different parameters of sparsely connected networks of perceptrons with small world connectivity patterns, acting as associative memories.

Neil Davey, Lee Calcraft, Bruce Christianson, Rod Adams
A Memory-Based Reinforcement Learning Model Utilizing Macro-Actions

One of the difficulties in reinforcement learning (RL) is that an optimal policy is acquired through enormous trials. As a solution to reduce waste explorations in learning, recently the exploitation of macro-actions has been focused. In this paper, we propose a memory-based reinforcement learning model in which macro-actions are generated and exploited effectively. Through the experiments for two standard tasks, we confirmed that our proposed method could decrease waste explorations especially in the early training stage. This property contributes to enhancing training efficiency in RL tasks.

Makoto Murata, Seiichi Ozawa
A Biologically Motivated Classifier that Preserves Implicit Relationship Information in Layered Networks

A fundamental problem with layered neural networks is the loss of information about the relationships among features in the input space and relationships inferred by higher order classifiers. Information about these relationships is required to solve problems such as discrimination of simultaneously presented objects and discrimination of feature components. We propose a biologically motivated model for a classifier that preserves this information. When composed into classification networks, we show that the classifier propagates and aggregates information about feature relationships. We discuss how the model should be capable of segregating this information for the purpose of object discrimination and aggregating multiple feature components for the purpose of feature component discrimination.

Charles C. Peck, James Kozloski, Guillermo A. Cecchi, A. Ravishankar Rao
Large Scale Hetero-Associative Networks with Very High Classification Ability and Attractor Discrimination Consisting of Cumulative-Learned 3-Layer Neural Networks

Auto-Associative neural networks have limited memory capacity and no classification capability. We propose a hetero-associative network consisting of a cumulative-learned forward 3-layer neural network and a backward 3-layer neural network, and a hetero-tandem associative network. The hetero-tandem associative network has a spindle type single cyclic-associative network with cumulative learning and is connected in tandem with the subsequent hetero-associative network. These hetero-associative networks with classification ability have high recognition performance as well as rapid attractor absorption.

Consecutive codification of outputs in the forward network was found to produce no spurious attractors, and coarse codification of converged attractors can be easily identified as training or spurious attractors.

Cumulative learning with prototypes and additive training data adjacent to prototypes can also drastically improve associative performance of both the spindle type single cyclic- and hetero-associative networks, allowing them to effectively be connected in tandem.

Yohtaro Yatsuzuka, Yo Ho
Crack width prediction of RC structures by Artificial Neural Networks

This paper proposes the use of Artificial Neural Networks (ANN) for the prediction of the maximum surface crack width of precast reinforced concrete beams joined by steel coupler connectors and anchor bars (jointed beams). Two different training algorithms are used in this study and their performances are compared. The first approach uses back-propagation (BPANN) and the second includes Genetic Algorithms (GANN) during the training process. Input and output vectors are designed on the basis of empirical equations available in the literature for estimating crack widths in common reinforced concrete (RC) structures, and of parameters that characterize the mechanical behavior of RC beams with overlapped reinforcement. Two well-defined points of loading are considered in this study to demonstrate the suitability of this approach in both a linear and a highly nonlinear stage of the mechanical response of this type of structure. Remarkable results were obtained; in all cases the combined Genetic Artificial Neural Network approach resulted in better prediction performance than networks trained by error back-propagation.

Carlos Avila, Yukikazu Tsuji, Yoichi Shiraishi
A neural network system for modelling of coagulant dosage used in drinking water treatment

This paper presents the elaboration and validation of a “soft sensor” using neural networks for on-line estimation of the coagulation dose from raw water characteristics. The main parameters influencing the coagulant dosage are first determined via a PCA. A brief description of the methodology used for the synthesis of the neural model is given and experimental results are included. The training of the neural network is performed using Weight Decay regularization in combination with the Levenberg-Marquardt method. The performance of this soft sensor is illustrated with real data.

B. Lamrini, A. Benhammou, A. Karama, M-V. Le Lann
ANN modeling applied to NOx reduction with octane. ANN future in personal vehicles

A silver/alumina catalyst was tested for its NOx reduction activity during oxygen-rich conditions and during variation in the input parameters (nitric oxide, octane and oxygen). A multi-bed approach was tested where the initial bed was divided into four beds acting in different temperature ranges. The experimental data were investigated by means of artificial neural networks, which were demonstrated to be able to model the process.

Mats Rönnholm, Kalle Arve, Kari Eränen, Fredrik Klingstedt, Tapio Salmi, Henrik Saxén
A method for detecting cause-effects in data from complex processes

When models are developed to aid the decision making in the operation of industrial processes, lack of understanding of the underlying mechanisms can make a first-principles modeling approach infeasible. An alternative is to develop a black-box model on the basis of historical data, and neural networks can be used for this purpose to cope with nonlinearities. Since numerous factors may influence the variables to be modeled, and all potential inputs cannot be considered, one may instead solely focus on occasions where the (input or output) variables exhibit larger changes. The paper describes a modeling method by which historical data can be interpreted with respect to changes in key variables, yielding a model that is well suited for analysis of how changes in the input variables affect the outputs.

M. Helle, H. Saxén
Predictive data mining on rubber compound database

Neural network based predictive data mining techniques are used to find relationships between rubber compound parameters obtained by rheological and mechanical tests. The preprocessing methods appropriate to the problem are also introduced. Good prediction of different rubber compound parameters clearly indicates that the majority of rubber compounds’ mechanical properties can be derived from rheological measurements of the cross-linking process.

M. Trebar, U. Lotrič
Applying Neural Network to Inverse Kinematic Problem for 6R Robot Manipulator with Offset Wrist

An Artificial Neural Network (ANN) using the backpropagation algorithm is applied to solve inverse kinematics problems of an industrial robot manipulator. A 6R robot manipulator with offset wrist was chosen because the geometric features of this robot do not allow the inverse kinematics problem to be solved analytically; in other words, there is no closed-form solution for this problem. As the number of neurons in the hidden layer is varied between 4 and 32, the robot joint angles (θ1, θ2, …, θ6) were predicted with average errors of 8.9°, 7.8°, 8.3°, 13°, 8.5°, and 10.5° for the 1st to 6th joints, respectively.

Z. Bingul, H. M. Ertunc, C. Oysu
Local Cluster Neural Network Chip for Control

The local cluster neural network (LCNN) is an alternative to RBF networks that performs well in digital simulation. The LCNN is suitable for an analog VLSI implementation, which is attractive for a wide range of embedded neural net applications. In this paper, we present the input-output characterisation of the LCNN analog chip. The effect of manufacturing variations on the chip’s function is investigated and analyzed.

Liang Zhang, Joaquin Sitte, Ulrich Rueckert
A Switching Controller for Nonlinear Systems via Fuzzy Models

A Lyapunov-based switching control design method for nonlinear systems using fuzzy models is proposed. The switching controller consists of several linear state feedback controllers; only one of the linear controllers is employed at each moment according to a switching scheme. The gains of the linear state feedback controllers are derived based on Lyapunov stability theory. The fuzzy design model is represented as a set of uncertain linear subsystems, and sufficient conditions for the system to be globally stabilisable by the switching controller are given. The proposed design method is illustrated through numerical simulations on the chaotic Lorenz system.

M. Boumehraz, K. Benmahammed
Competitive Decentralized Autonomous Neural Net Controllers

A simple and effective method is proposed for controlling a system consisting of small processes. Each process is controlled by a decentralized autonomous neural network controller. These controllers compete with each other in order to increase their performance. As a result of the competition, the performance of the whole system is kept at a suboptimal level. Control of an example system consisting of many processes is performed.

Takehiro Ohba, Masaru Ishida
Improved hierarchical fuzzy control scheme

New modifications in the mapping hierarchical fuzzy control scheme are proposed to obtain effective and optimized control. The scheme was developed so that one can easily understand and modify fuzzy rules at different levels of the hierarchy. The presented approach ensures the universal approximation of functions in a compact domain. To validate this conceptual approach, we consider a textile database as a nonlinear, multivariable and dynamic system.

Taher M. Jelleli, Adel M. Alimi
On-line Inference of Finite Automata in Noisy Environments

The most common type of noise in continuous systems of the real world is Gaussian noise, whereas discrete environments are usually subject to noise of a discrete type. The established, original solution for on-line inference of finite automata that is based on generalized recurrent neural networks is evaluated in the presence of noise of both types. It showed quite good performance and robustness.

Ivan Gabrijel, Andrej Dobnikar
Improved Clustering by Rotation of Cluster Centres

In this paper we present a method that leads to the improvement of a subtractive clustering model by modifying the centres. In order to keep within certain bounds, a centre is modified by rotating it.

D. W. Pearson, M. Batton-Hubert
Hierarchical Growing Neural Gas

This paper describes TreeGNG, a top-down unsupervised learning method that produces hierarchical classification schemes. TreeGNG is an extension to the Growing Neural Gas algorithm that maintains a time history of the learned topological mapping. TreeGNG is able to correct poor decisions made during the early phases of the construction of the tree, and provides the novel ability to influence the general shape and form of the learned hierarchy.

K.A.J. Doherty, R.G. Adams, N. Davey
A Clustering Algorithm using Cellular Learning Automata based Evolutionary Algorithm

In this paper, a new clustering algorithm based on CLA-EC is proposed. The CLA-EC is a model obtained by combining the concepts of cellular learning automata and evolutionary algorithms. The CLA-EC is used to search for cluster centers in such a way that minimizes the squared-error criterion. The simulation results indicate that the proposed algorithm produces clusters with acceptable quality with respect to squared-error criteria and provides a performance that is significantly superior to that of the K-means algorithm.

R. Rastegar, M. Rahmati, M. R. Meybodi
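
The squared-error criterion minimised by the search described above is the usual one; the short Python function below computes it for a candidate set of cluster centres (only the criterion is shown, not the CLA-EC search itself).

import numpy as np

def squared_error(points, centres):
    """Sum of squared distances from each point to its nearest centre.
    points: (N, d) array; centres: (C, d) array of candidate cluster centres."""
    d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).sum())

# Lower is better; an evolutionary search over `centres` can use this value as fitness.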
Estimating the number of clusters from distributional results of partitioning a given data set

When estimating the optimal value of the number of clusters, C, of a given data set, one typically uses, for each candidate value of C, a single (final) result of the clustering algorithm. If distributional data of size T are used, these data come from T data sets obtained, e.g., by a bootstrapping technique. Here a new approach is introduced that utilizes distributional data generated by clustering the original data T times in the framework of cost function optimization and cluster validity indices. Results of this method are reported for model data (100 realizations) and gene expression data. The probability of correctly estimating the number of clusters was often higher compared to recently published results of several classical methods and a new statistical approach (Clest).

U. Möller
AUDyC Neural Network using a new Gaussian Densities Merge Mechanism

In the context of evolutionary data classification, dynamical modeling techniques are useful for continuously learning cluster models. Dedicated to on-line clustering, the AUDyC (Auto-adaptive and Dynamical Clustering) algorithm is an unsupervised neural network with auto-adaptive abilities in nonstationary environments. These abilities are based on specific learning rules developed in three stages: “Classification”, “Evaluation” and “Fusion”. In this paper, we propose a new densities-merge mechanism to improve the “Fusion” stage in order to avoid some local-optima drawbacks of Gaussian fitting. The novelty of our approach is to use an ambiguity rule of fuzzy modelling with new merge-acceptance criteria. Our approach can be generalized to any type of fuzzy classification method using Gaussian models. Some experiments are presented to show the efficiency of our approach in circumventing the AUDyC NN local-optima problems.

Habiboulaye Amadou Boubacar, Stéphane Lecoeuche, Salah Maouche
The Growing Hierarchical Self-Organizing Feature Maps And Genetic Algorithms for Large Scale Power System Security

This paper proposes a new methodology which combines supervised learning, unsupervised learning and genetic algorithms for evaluating power system dynamic security. Based on the concept of stability margin, pre-fault power system conditions are assigned to the output neurons on the two-dimensional grid with the growing hierarchical self-organizing map technique (GHSOM) via supervised ANNs which perform an estimation of the post-fault power system state. The technique estimates the dynamic stability index that corresponds to the most critical value of the synchronizing and damping torques of multimachine power systems. ANN-based pattern recognition is carried out with the growing hierarchical self-organizing feature mapping in order to provide an adaptive neural net architecture during its unsupervised training process. Numerical tests, carried out on an IEEE 9-bus power system, are presented and discussed. The analysis using this method provides accurate results and improves the effectiveness of system security evaluation.

M. Boudour, A. Hellal
3D Self-organizing Convex Neural Network Architectures

Surface modeling and structure representation from unorganized sample points are key problems in many applications, in which neural networks have recently begun a gradual breakthrough. Our purpose is the development of an innovative self-organizing neural network architecture for surface modeling.

We propose an original neural architecture and algorithm inspired by Kohonen’s self-organizing maps, based on dynamic neighborhood propagation along with an adaptive learning and repulsion process applied to a generalized mesh structure that will lead to a topological definition of the surface given as an input.

F. Boudjemaï, P. Biela Enberg, J. G. Postaire
Novel Learning Algorithm Aiming at Generating a Unique Units Distribution in Standard SOM

Self-organizing maps (SOMs) are a data visualization technique developed to reduce the dimensions of data through the use of self-organizing neural networks. However, one of the limitations of the SOM algorithm is that every SOM is different and finds different similarities among the sample vectors each time the initial conditions are changed.

In this paper, we propose a modification of the basic SOM algorithm in order to make the resulting mapping invariant to the initial conditions. We extend the neighborhood concept to processing units selected in a specific manner, other than those commonly selected relative to the immediate surroundings of the best matching unit. We also introduce a new learning function for the newly introduced neighbors.

The modified algorithm was tested on a color classification application and performed very well in comparison with the traditional SOM.

Kirmene Marzouki, Takeshi Yamakawa
SOM-Based Estimation of Meteorological Profiles

The task of estimating the meteorological profile of any location of interest within a specified area is undertaken. Assuming that the meteorological profiles of a sufficient number of representative reference locations within the specified area are available, the proposed methodology is based on (a) the organisation of the meteorological profiles of the reference locations employing a self-organising map (SOM) and (b) the classification of the most salient morphological characteristics of the reference locations. Subsequently, the meteorological profile of any novel location of interest is approximated by a weighted average of the meteorological profiles represented on the SOM for those reference locations whose morphological characteristics most closely match the morphological characteristics of the location of interest. The proposed methodology is evaluated by comparing the accuracy of meteorological profile estimation with that of existing estimation techniques as well as with the actual meteorological profiles of the locations of interest.

T. Tambouratzis
An Efficient Heuristic for the Traveling Salesman Problem Based on a Growing SOM-like Algorithm

A growing self-organizing (SOM) neural network, enhanced with a local search heuristic, is proposed as an efficient traveling salesman problem solver. A ring structure of processing units is evolved in time with a Kohonen-type adaptation dynamics together with a simple growing rule for the number of processing units. The result is a neural network heuristic for the TSP with a computational complexity of O(n^2), comparable to other reported SOM-like networks. The tour emerging from the SOM network is enhanced by the application of a simple greedy 2-Opt local search. Experiments over a broad set of TSP instances are carried out. The experimental results show a solution accuracy equivalent to that of the best SOM-based heuristics reported in the literature.

Cristina García, José Alí Moreno
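
The greedy 2-Opt refinement mentioned in the abstract above is a standard tour-improvement step; a plain Python sketch is given below (the SOM ring construction itself is omitted, and `dist` is assumed to be a precomputed distance matrix).

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse a tour segment whenever doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour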

Evolutionary Computation

Evolutionary Design and Evaluation of Modeling System for Forecasting Urban Airborne Maximum Pollutant Concentrations

In this paper, an integrated modeling system based on a multi-layer perceptron model is developed and evaluated for the forecasting of urban airborne maximum pollutant concentrations. In the first phase, the multi-objective genetic algorithm (MOGA) and sensitivity analysis are used in combination for identifying feasible system inputs. In the second phase, the final evaluation of the developed system is performed for the concentrations of pollutants measured at an urban air quality station in central Helsinki, Finland. This study showed that the evolutionary design of neural network inputs is an efficient tool, which can help to improve the accuracy of the model. The evaluation work itself showed that the developed modeling system is capable of producing fairly good operational forecasts.

H. Niska, T. Hiltunen, A. Karppinen, M. Kolehmainen
Evolving Evolvability: Evolving both representations and operators

The behavior of an evolutionary system incorporating both an evolving genetic representation (a learning mechanism) and an evolving genetic operator (mutation) is explored. Simulations demonstrate the evolution of evolvability through the co-adaptation of these two mechanisms. It is also shown that this co-adaptation produces a transmission function that becomes more conservative as the strength of the learning mechanism increases.

Grant W. Braught
A Multi-Objective Evolutionary Algorithm for Solving Traveling Salesman Problems: Application to the Design of Polymer Extruders

A Multi-Objective Evolutionary Algorithm (MOEA) for solving Traveling Salesman Problems (TSP) was developed and used in the design of screws for twin-screw polymer extrusion. Although MOEAs for the TSP have already been developed, this paper constitutes an important and original contribution, since in this case they are applied to the design of machines. The Twin-Screw Configuration Problem (TSCP) can be formulated as a TSP. A different MOEA is developed in order to take into account the discrete nature of the TSCP. The proposed algorithm was applied to some case studies where the practical usefulness of this approach was demonstrated. Finally, the computational results are confronted with experimental data, showing the validity of the proposed approach.

A. Gaspar-Cunha
The Pareto-Box Problem for the Modelling of Evolutionary Multiobjective Optimization Algorithms

This paper presents the Pareto-Box problem for modelling evolutionary multi-objective search. The problem is to find the Pareto set of randomly selected points in the unit hypercube. While the Pareto set itself comprises only the origin, this problem allows for a complete analysis of random search and demonstrates that, with an increasing number of objectives, the probability of finding a dominated vector decreases exponentially. Since most current evolutionary multi-objective optimization algorithms rely on the existence of dominated individuals, they show poor performance on this problem. However, the fuzzification of Pareto dominance is an example of an approach that does not need dominated individuals, and thus it is able to solve the Pareto-Box problem even for a higher number of objectives.

Mario Köppen, Raul Vicente-Garcia, Bertram Nickolay
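
The Pareto-dominance test underlying the problem is the standard one; for k independent uniformly distributed objectives, the probability that a given pair of random points is comparable (one dominates the other) is 2^(1-k), which is why dominated vectors become exponentially rare as k grows. A minimal Python check (minimisation assumed):

def dominates(u, v):
    """u dominates v if it is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))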
Implementation and Experimental Validation of the Population Learning Algorithm Applied to Solving QAP Instances

The paper proposes an implementation of the population learning algorithm designed to solve instances of the quadratic assignment problem. A short overview of the population learning algorithm and a more detailed presentation of the proposed implementation are followed by the results of the computational experiments carried out. Particular attention is given to investigating the performance characteristics and convergence of the PLA. Experiments have focused on identifying the probability distribution of the solution time to a sub-optimal target value.

J. Jedrzejowicz, P. Jedrzejowicz
Estimating the distribution in an EDA

This paper presents an extension to our work on estimating the probability distribution by using a Markov Random Field (MRF) model in an Estimation of Distribution Algorithm (EDA) [1]. We propose a method that directly samples an MRF model to generate the new population. We also present a new EDA, called Distribution Estimation Using MRF with direct sampling (DEUMd), that uses this method and iteratively refines the probability distribution to generate better solutions. Our experiments show that the direct sampling of an MRF model as estimation of distribution provides a significant advantage over other techniques on problems where a univariate EDA is typically used.

S. Shakya, J. McCall, D. F. Brown
Modelling Chlorine Decay in Water Networks with Genetic Programming

The disinfection of water supplies for domestic consumption is often achieved with the use of chlorine. Aqueous chlorine reacts with many harmful micro-organisms and other aqueous constituents when added to the water supply, which causes the chlorine concentration to decay over time. Up to a certain extent, this decay can be modelled using various decay models that have been developed over the last 50+ years. Assuming an accurate prediction of the chlorine concentration over time, a measured deviation from the values provided by such a decay model could be used as an indicator of harmful (intentional) contamination. However, most current chlorine decay models have been based on assumptions that do not allow the modelling of another species, i.e. the species with which chlorine is reacting, thereby limiting their use for modelling the effect of a contaminant on chlorine. This paper investigates the use of genetic programming as a method for developing a mixed second-order chlorine decay model.

Philip Jonkergouw, Ed Keedwell, Soon-Thiam Khu
Evolving Blackjack Strategies Using Cultural Learning

This paper presents a new approach to the evolution of blackjack strategies, that of cultural learning. Populations of neural network agents are evolved using a genetic algorithm and at each generation the best performing agents are selected as teachers. Cultural learning is implemented through a hidden layer in each teacher’s neural network that is used to produce utterances which are imitated by its pupils during many games of blackjack. Results show that the cultural learning approach outperforms previous work and equals the best known non-card counting human approaches.

Dara Curran, Colm O’Riordan
Two-Criterion Optimization in State Assignment for Synchronous Finite State Machines using NSGA-II

One of the challenging problems in circuit implementation is finding the best state assignment for implementing a synchronous sequential circuit, which can also be represented as a Finite State Machine. This problem, commonly known as the State Assignment Problem (SAP), has been studied extensively because of its importance in reducing the cost of implementing circuits. Previous work on this problem assumes the number of coding bits to be constant, making it a single-objective problem with the only objective being to reduce the cumulative cost of transitions between connected states.

In this work, we add another dimension to this optimization problem by introducing a second objective of minimizing the number of bits used for the assignment. This is desirable to reduce the complexity and cost of a circuit. The two objectives are conflicting and thus the optimal solution requires a tradeoff. We present an evolutionary algorithm based approach to solve this multi-dimensional optimization problem. We compare the results from two algorithms, and find that an NSGA-II based approach, with some modifications to constraint handling, gives better results and running time than NSGA. We also gain some insights about the shape of the efficient frontier.

Nitin Gupta, Vivek Kumar Agrawal
Offspring Selection: A New Self-Adaptive Selection Scheme for Genetic Algorithms

In terms of goal orientedness, selection is the driving force of Genetic Algorithms (GAs). In contrast to crossover and mutation, selection is completely generic, i.e. independent of the actually employed problem and its representation. GA selection is usually implemented as selection for reproduction (parent selection). In this paper we propose a second selection step after reproduction which is also completely problem independent. This self-adaptive selection mechanism, which will be referred to as offspring selection, is closely related to the general selection model of population genetics. As the problem- and representation-specific implementation of reproduction in GAs (crossover) is often critical in terms of preserving essential genetic information, offspring selection has proven to be very well suited for improving the global solution quality and robustness of GAs with respect to parameter settings and operators in various fields of application. The experimental part of the paper discusses the potential of the new selection model using standardized real-valued test functions in high dimensions as examples.

M. Affenzeller, S. Wagner
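
A hedged sketch of the second, post-reproduction selection step described above is given below in Python: an offspring enters the next generation only if it is at least as fit as its better parent. The acceptance rule and parameter names are illustrative assumptions; the paper's self-adaptive criterion is more elaborate.

import random

def offspring_selection(parents, fitness, crossover, mutate, pop_size, max_trials=10000):
    """Build the next generation from offspring that outperform their parents
    (fitness maximisation assumed); max_trials bounds the effort."""
    next_gen, trials = [], 0
    while len(next_gen) < pop_size and trials < max_trials:
        trials += 1
        p1, p2 = random.sample(parents, 2)
        child = mutate(crossover(p1, p2))
        if fitness(child) >= max(fitness(p1), fitness(p2)):  # success criterion
            next_gen.append(child)
    return next_gen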
Using Genetic Algorithms with Real-coded Binary Representation for Solving Non-stationary Problems

This paper presents genetic algorithms with real-coded binary representation, a novel approach to improving the performance of genetic algorithms. The algorithm is capable of maintaining the diversity of the evolved population during the whole run, which protects it from premature convergence. This is achieved by using a special encoding scheme, introducing high redundancy, which is further supported by the so-called gene-strength adaptation mechanism for controlling the diversity. This mechanism for population diversity self-regulation increases the robustness of the algorithm when solving non-stationary problems, as was empirically shown on two test cases. The achieved results show the competitiveness of the proposed algorithm with other techniques designed for solving non-stationary problems.

Jiří Kubalík
Dynamics in Proportionate Selection.

This paper proposes a new selection method for Genetic Algorithms. The motivation behind the proposed method is to investigate the effect of different selection methods on the rate of convergence. The new method, the Dynamic Selection Method (DSM), is based on proportionate selection. DSM functions by continuously changing the criteria for parent selection (hence “dynamic”) based on the total number of generations in a run and the current generation. Results show that using DSM to maintain diversity in a population gives slower convergence but better overall performance. A relationship between slower convergence in GA runs and better solutions has been identified.

Abhishek Agrawal, Ian Mitchell, Peter Passmore, Ivan Litovski
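
For reference, the fitness-proportionate (roulette-wheel) selection that DSM builds on can be written as the short Python routine below; DSM's generation-dependent modification of the selection criterion is not reproduced here.

import numpy as np

def proportionate_selection(population, fitness, rng=None):
    """Sample a new parent pool with probability proportional to (non-negative) fitness."""
    rng = rng or np.random.default_rng()
    p = np.asarray(fitness, dtype=float)
    p = p / p.sum()
    idx = rng.choice(len(population), size=len(population), p=p)
    return [population[i] for i in idx]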
Generating grammatical plant models with genetic algorithms

A method for synthesizing grammatical models of natural plants is presented. It is an attempt at solving the inverse problem of generating the model that best describes a plant growth process, presented in a set of 2D pictures. A geometric study is undertaken before translating it into grammatical meaning; a genetic algorithm, coupled with a deterministic rule generation algorithm, is then applied for navigating through the space of possible solutions. Preliminary results together with a detailed description of the method are presented.

Luis E. Da Costa, Jacques-André Landry
Golomb Rulers: Experiments with Marks Representation

We present a set of experiments regarding an evolutionary algorithm designed to efficiently search for Optimal Golomb Rulers. The approach uses a binary representation to codify the marks contained in a ruler and relies on standard genetic operators. Furthermore, during evaluation, an insertion and correction procedure is implemented in order to improve the algorithm performance. This method is successful in quickly identifying good solutions, outperforming previous evolutionary approaches by discovering optimal and near-optimal Golomb Rulers.

Jorge Tavares, Francisco B. Pereira, Ernesto Costa
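
The defining property checked when evaluating candidate rulers is that all pairwise differences between marks are distinct; a minimal Python check follows (illustration only, independent of the paper's representation and operators).

from itertools import combinations

def is_golomb_ruler(marks):
    """True if every pairwise difference between marks occurs only once."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

# Example: [0, 1, 4, 6] is an optimal 4-mark ruler; [0, 1, 2, 4] repeats the difference 1.
assert is_golomb_ruler([0, 1, 4, 6]) and not is_golomb_ruler([0, 1, 2, 4])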
Evolving Segments Length in Golomb Rulers

An evolutionary algorithm based on Random Keys to represent Golomb Ruler segments has been found to be a reliable option for finding Optimal Golomb Rulers in a short amount of time, compared with standard methods. This paper presents a modified version of this evolutionary algorithm where the maximum segment length of a Golomb Ruler is also part of the evolutionary process. The experimental results attained show that this alteration does not seem to provide significant benefits over the static version of the algorithm.

Jorge Tavares, Tiago Leitão, Francisco B. Pereira, Ernesto Costa
Resource-Limited Genetic Programming: Replacing Tree Depth Limits

We propose replacing the traditional tree depth limit in Genetic Programming by a single limit on the amount of resources available to the whole population, where resources are the tree nodes. The resource-limited technique removes the disadvantages of using depth limits at the individual level, while introducing automatic population resizing, a natural side-effect of using an approach at the population level. The results show that the replacement of individual depth limits by a population resource limit can be done without impairing performance, thus validating this first and important step towards a new approach to improving the efficiency of GP.

Sara Silva, Pedro J.N. Silva, Ernesto Costa
Treating Some Constraints as Hard Speeds up the ESG Local Search Algorithm

Local search (LS) methods for solving constraint satisfaction problems (CSP), such as GSAT, WalkSAT and DLM, start the search for a solution from a random assignment. LS then examines the neighbours of this assignment, using the penalty function to determine a better neighbouring valuation to move to. It repeats this process until it finds a solution that satisfies all constraints. ICM considers some of the constraints as hard constraints that are always satisfied. In this way, the hard constraints reduce the possible neighbours in each move and hence the overall search space. We choose the hard constraints in such a way that the space of valuations satisfying them is connected, in order to guarantee that a local search can reach any solution from any valuation in this space. In this paper, we incorporate ICM into one of the most recent local search algorithms, ESG, and we show the improvement of the new algorithm.

Y. Kilani, A. MohdZin
Applications of PSO Algorithm and OIF Elman Neural Network to Assessment and Forecasting for Atmospheric Quality

The assessment and forecasting of atmospheric quality have become a key problem in the study of the quality of the atmospheric environment. In order to evaluate the grade of atmospheric pollution, a model based on the particle swarm optimization (PSO) algorithm is proposed in this paper. Experimental results show the advantages of the proposed models, such as a clear principle with a physical explanation, simplified formulas and low computational complexity. In addition, an improved Elman neural network, namely the output-input feedback Elman (OIF Elman) neural network, is also applied to forecast atmospheric quality. Simulations show that the OIF Elman neural network has great potential in the field of forecasting atmospheric quality.

L.M. Wang, X.H. Shi, M. Li, G.J. Chen, H.W. Ge, H.P. Lee, Y.C. Liang
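
As a reference point for the PSO-based model above, the canonical global-best velocity and position update is sketched in Python below; the inertia and acceleration coefficients are typical textbook values, and the paper's problem-specific assessment terms are not included.

import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO update. x, v, pbest: (n_particles, dim); gbest: (dim,)."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v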
A hybrid particle swarm optimization model for the traveling salesman problem

This work presents a new hybrid model, based on Particle Swarm Optimization, Genetic Algorithms and Fast Local Search, for the symmetric blind traveling salesman problem. A detailed description of the model is provided. The implemented system was tested with instances from 76 to 2103 cities. For instances of up to 439 cities, results were, on average, less than or around 1% in excess of the known optima. When considering all instances, results were on average 2.1498% in excess. These excellent results encourage further research and improvement of the hybrid model.

Thiago R. Machado, Heitor S. Lopes
Perceptive Particle Swarm Optimisation

Conventional particle swarm optimisation relies on exchanging information through social interaction among individuals. However for real-world problems involving control of physical agents (i.e., robot control), such detailed social interaction is not always possible. In this study, we propose the Perceptive Particle Swarm Optimisation algorithm, in which both social interaction and environmental interaction are increased to mimic behaviours of social animals more closely.

Boonserm Kaewkamnerdpong, Peter J. Bentley
Wasp swarm optimization of logistic systems

In this paper, we present the optimization of logistic processes in supply chains using the meta-heuristic algorithm known as wasp swarm, which draws parallels between the process to optimize and the way individuals in wasp colonies interact and allocate tasks to meet the demands of the nest.

Pedro Pinto, Thomas A. Runkler, João M. Sousa
A Parallel Vector-Based Particle Swarm Optimizer

Several techniques have been employed to adapt particle swarm optimization to find multiple optimal solutions in a problem domain. Niching algorithms have to identify good candidate solutions among a population of particles in order to split the space into regions where an optimal solution may be found. Subsequently the swarm must be optimized so that particles contained inside the niches will converge on multiple optimal solutions.

This paper presents an improved vector-based particle swarm optimizer where subswarms contained in niches are optimized in parallel.

I. L Schoeman, A. P. Engelbrecht
Numerical Simulations of a Possible Hypercomputational Quantum Algorithm

Hypercomputers compute functions or numbers, or more generally solve problems or carry out tasks, that cannot be computed or solved by a Turing machine. Several numerical simulations of a possible hypercomputational algorithm based on quantum computations, previously constructed by the authors, are presented. The hypercomputability of our algorithm rests on the fact that it could solve a classically non-computable decision problem, Hilbert’s tenth problem. The numerical simulations were realized for three types of Diophantine equations: with and without solutions in non-negative integers, and without solutions by way of various traditional mathematical packages.

Andrés Sicard, Juan Ospina, Mario Vélez
Simulated Fault Injection in Quantum Circuits with the Bubble Bit Technique

The simulation of quantum circuits is usually exponential. The Hardware Description Languages methodology is able to isolate the entanglement as source of simulation complexity. However, it was shown that this methodology is not efficient unless the bubble bit technique is employed [1]. In this paper, we present an extension of the HDL-bubble bit simulation methodology, which provides means for simulated fault injection — at the unitary level — in quantum circuits. The purpose is, just like in classical computer hardware design, to be able to verify the effectiveness of the considered quantum circuit fault tolerance methodologies.

M. Udrescu, L. Prodan, M. VlĂaduţiu
Redundant Quantum Arithmetic

Redundant number systems have been widely used in the speedup of classical digital arithmetic. This work introduces the concept of redundancy in the quantum computation field. We show that a constant depth quantum adder circuit is attainable under this new framework.

António Pereira, Rosália Rodrigues
A Special Class of Additive Cyclic Codes for DNA Computing

In this paper, we study a special class of nonbinary additive cyclic codes over GF(4), which we call reversible complement cyclic codes. Such codes are suitable for constructing codewords for DNA computing. We develop the theory behind constructing the set of generator polynomials for these codes. We study, as an example, all length-7 codes over GF(4) and list those that have the largest minimum Hamming distance and the largest number of codewords.

Taher Abualrub, Ali Ghrayeb, Xiang Nian Zeng

Adaptive and Natural Computing

Evolutionary Algorithms for Static and Dynamic Optimization of Fed-batch Fermentation Processes

In this work, Evolutionary Algorithms (EAs) are used to control a recombinant bacterial fed-batch fermentation process that aims to produce a bio-pharmaceutical product. Initially, a novel EA was used to optimize the process, prior to its run, by simultaneously adjusting the feeding trajectory, the duration of the fermentation and the initial conditions of the process. Finally, dynamic optimization is proposed, where the EA runs simultaneously with the fermentation process, receiving information regarding the process, updating its internal model and reaching new solutions that will be used for online control.

Miguel Rocha, José Neves, Ana C.A. Veloso, Eugénio C. Ferreira, Isabel Rocha
Benchmark testing of simulated annealing, adaptive random search and genetic algorithms for the global optimization of bioprocesses

This paper studies the global optimisation of bioprocesses employing model-based dynamic programming schemes. Three stochastic optimisation algorithms were tested: simulated annealing, adaptive random search and genetic algorithms. The methods were employed for optimising two challenging optimal control problems of fed-batch bioreactors. The main results show that adaptive random search and genetic algorithms are superior to the simulated annealing based method at solving these problems, both in accuracy and in the number of function evaluations.

R. Oliveira, R. Salcedo
Dynamic modelling and optimisation of a mammalian cells process using hybrid grey-box systems

In this work a model-based optimisation study of fed-batch BHK-21 cultures expressing the human fusion glycoprotein IgGl-IL2 was performed. Due to the complexity of the BHK metabolism it is rather difficult to develop an accurate kinetic model that could be used for optimisation studies. Many kinetic expressions and parameters are involved resulting in a complex identification problem. For this reason an alternative more cost-effective methodology was adopted, based on hybrid grey-box models. It was concluded that modulation particularities of BHK cultures were effectively captured by the hybrid model, this being of crucial importance for the successful optimisation of the process operation. From the optimisation study it was concluded that the glutamine and glucose concentrations should be maintained at low levels during the exponential growth phase and then glutamine feeding should be increased. In this way it is expected that both the cell density and final product titre can be considerably increased.

A. Teixeira, A. Cunha, J. Clemente, P.M. Alves, M. J. T Carrondo, R. Oliveira
Adaptive DO-based control of substrate feeding in high cell density cultures operated under oxygen transfer limitation

The carbon source feeding strategy is crucial for the productivity of many biochemical processes. In high density and shear sensitive cultures the feed of the carbon source is frequently constrained by the bioreactor maximum oxygen transfer capacity. In order to maximise the product formation, these processes should be operated at low dissolved oxygen (DO) concentrations close to the limitation level. This operating strategy may be realised with a closed-loop controller that regulates the DO concentration through the manipulation of the carbon source feed rate. The performance of this controller may have a significant influence on the final product production and should be as accurate as possible. In this work we study the application of adaptive control for solving this problem, focusing not only on stability but also on accuracy. Whenever possible the convergence trajectories to the set point are characterised mathematically. Concerning the instrumentation, two situations are covered: i) only the DO Tension (DOT) is measured; ii) both DOT and off-gas composition are measured on-line. The controllers are tested in a pilot plant recombinant Pichia pastoris process.

R. Oliveira, A. Cunha, J. Clemente, M. J. T. Carrondo
Evolutionary Design of Neural Networks for Classification and Regression

Multilayer Perceptrons (MLPs) are the most popular class of Neural Networks. When applying MLPs, the search for the ideal architecture is a crucial task, since it should be complex enough to learn the input/output mapping without overfitting the training data. In this context, Evolutionary Computation is a promising global search approach for model selection. On the other hand, ensembles (combinations of models) have been boosting the performance of several Machine Learning (ML) algorithms. In this work, a novel evolutionary technique for MLP design is presented, together with an ensemble based approach. A set of real-world classification and regression tasks was used to test this strategy, comparing it with a heuristic model selection as well as with other ML algorithms. The results favour the evolutionary MLP ensemble method.

Miguel Rocha, Paulo Cortez, José Neves
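
As an illustration of the general idea (not the authors' method), the following minimal Python sketch evolves only the hidden-layer size of an MLP, selects by validation accuracy, and averages the predicted probabilities of the best evolved networks into a small ensemble. The dataset, mutation scheme and all parameters are assumptions made for the example.

```python
# Minimal sketch of evolutionary MLP architecture search plus an ensemble of
# the best individuals.  Illustrative only: dataset, mutation scheme and
# hyper-parameters are assumptions, not the method from the paper.
import random
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(hidden):
    """Validation accuracy of an MLP with `hidden` units in one hidden layer."""
    net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)
    return net.score(X_val, y_val), net

population = [random.randint(2, 20) for _ in range(8)]   # genome = hidden-layer size
for generation in range(5):
    scored = sorted((fitness(h)[0], h) for h in population)[::-1]
    parents = [h for _, h in scored[:4]]                  # truncation selection
    children = [max(2, p + random.choice([-2, -1, 1, 2])) for p in parents]
    population = parents + children                       # elitism + mutation

# Ensemble: average class probabilities of the best evolved architectures.
best = sorted(set(population), key=lambda h: fitness(h)[0])[-3:]
models = [fitness(h)[1] for h in best]
probs = np.mean([m.predict_proba(X_val) for m in models], axis=0)
print("ensemble accuracy:", np.mean(probs.argmax(axis=1) == y_val))
```

In a full approach the genome would also encode connectivity and learning parameters rather than a single integer; this toy version only shows the evolve-select-ensemble loop.
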
Pelican — Protein-structure Alignment using Cellular Automaton models

With more than 23,000 protein structures deposited in the Protein Data Bank (PDB) and more structures being discovered with each passing day, the experimental determination of the 3-dimensional structure of proteins is just the beginning of a journey in silico. For a structural biologist, this enormous surge of structural data carries with it far greater computational challenges: compare, align, classify, and categorize the structures under the families, domains and functionally similar proteins already discovered. Pelican provides the structural biologist with a strong and easy technique that will help in facing these challenges. Pelican is a rapid way to align the backbones of two protein structures using 2-dimensional Cellular Automaton (CA) models. Breaking down the protein structure into distance matrices comprising 5 peptide units, Pelican uses the differences of these matrices to construct the 2-dimensional CA grid. Starting from an initial unaligned state, the CA evolves through several generations according to a defined set of local rules. As the CA evolves through successive generations, the emergent patterns made by the live cells are the ones that contribute to the alignment.

Pelican is also an example of a system exhibiting emergent behavior. Each cell behaves in a strictly microscopic way, but each individual cell’s behavior leads to a macroscopic long-range behavior exhibited by the entire system, which collectively gives the alignment.

Deepak K Gangadhar
An Efficient Algorithm for De Novo Peptide Sequencing

In this paper we propose a new algorithm for the de novo peptide sequencing problem. This problem reconstructs a peptide sequence from given tandem mass spectra data containing n peaks. We first build a directed acyclic graph G = (V, E) in O(n log n) time, where each v ∈ V is a spectrum mass ion or a complementary mass to a spectrum ion. The solutions of this problem are then given by the paths in the graph between two designated vertices. Unlike previous approaches, the proposed algorithm does not use dynamic programming, but it builds the graph in a progressive fashion using a priority queue, thus obtaining an improvement over other methods [1,2].

S. Brunetti, D. Dutta, S. Liberatori, E. Mori, D. Varrazzo
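
A minimal sketch of the underlying spectrum-graph idea: vertices are candidate prefix masses, edges connect masses that differ by an amino-acid residue mass, and candidate peptides are paths from the empty prefix to the full peptide mass. The residue table, tolerance and the simple DFS enumeration are illustrative assumptions; the paper's contribution (progressive construction with a priority queue, without dynamic programming) is not reproduced here.

```python
# Toy spectrum graph for de novo sequencing.  Monoisotopic residue masses for a
# few amino acids only; tolerance and input peaks are made up for illustration.
RESIDUES = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841}
TOL = 0.02

def build_graph(prefix_masses):
    """Edge u -> v labelled with residue r whenever v - u matches mass(r)."""
    edges = {m: [] for m in prefix_masses}
    for u in prefix_masses:
        for v in prefix_masses:
            for r, mass in RESIDUES.items():
                if abs((v - u) - mass) < TOL:
                    edges[u].append((v, r))
    return edges

def peptides(edges, source, sink, path=""):
    """Enumerate all residue strings spelled by source-to-sink paths (DFS)."""
    if abs(source - sink) < TOL:
        yield path
        return
    for v, r in edges[source]:
        yield from peptides(edges, v, sink, path + r)

# Prefix masses consistent with the peptide "GAS" (0, G, GA, GAS).
masses = [0.0, 57.02146, 128.05857, 215.0906]
g = build_graph(masses)
print(list(peptides(g, 0.0, masses[-1])))   # -> ['GAS']
```
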
Emergent Behavior of Interacting Groups of Communicative Agents

This paper presents a simulation of the behavior of different species of birds which share the same habitat but manage to use different times of the day to sing their songs. In this way they avoid vocal competition and improve the conditions for finding a mate. Communicative agents are used to model the birds and their behavior. A simple set of rules is used to decide when and how to change the time of the search for a mate. By incorporating damping and amplifying feedback loops, the collective behavior of each species led the system to a solution which was favorable to all agents.

Alexander Bisler
Integrating binding site predictions using meta classification methods

Currently the best algorithms for transcription factor binding site prediction are severely limited in accuracy. There is good reason to believe that predictions from these different classes of algorithms could be used in conjunction to improve the quality of predictions. In this paper, we apply single layer networks and support vector machines to predictions from 12 key algorithms. Furthermore, we use a ‘window’ of consecutive results for the input vectors in order to contextualise the neighbouring results. Moreover, we improve the classification result with the aid of under- and over-sampling techniques. We find that by integrating 12 base algorithms, support vector machines and single layer networks can give better binding site predictions.

Y. Sun, M. Robinson, R. Adams, A. G. Rust, P. Kaye, N. Davey
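
A hedged sketch of the meta-classification step described above: per-position scores from several base algorithms are concatenated over a window of neighbouring positions and fed to an SVM meta-classifier. The arrays, window size and the use of class weighting (standing in for the paper's under-/over-sampling) are illustrative assumptions.

```python
# Sketch of windowed meta-classification over base-algorithm predictions.
# `base_scores` (positions x algorithms) and `labels` are hypothetical inputs.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pos, n_algs, half_win = 200, 12, 2           # 12 base algorithms, window of 5
labels = rng.integers(0, 2, size=n_pos)        # 1 = binding site, 0 = background
base_scores = labels[:, None] * 0.4 + rng.random((n_pos, n_algs)) * 0.6

def windowed(scores, half):
    """Concatenate each position's scores with those of its neighbours."""
    padded = np.vstack([scores[:half][::-1], scores, scores[-half:][::-1]])
    return np.hstack([padded[i:i + len(scores)] for i in range(2 * half + 1)])

X = windowed(base_scores, half_win)            # shape: (n_pos, n_algs * window)
meta = SVC(kernel="rbf", class_weight="balanced")  # balancing vs. resampling
meta.fit(X[:150], labels[:150])
print("held-out accuracy:", meta.score(X[150:], labels[150:]))
```
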
Reverse engineering gene networks with artificial neural networks

Temporal gene expression data is of particular interest to researchers as it can be used to create regulatory gene networks. Such gene networks represent the regulatory relationships between genes over time and provide insight into how genes up- and down-regulate each other from one time-point to the next (the Biological Motherboard). Reverse engineering gene networks from temporal gene expression data is considered an important step in the study of complex biological systems. This paper introduces sensitivity analysis of trained perceptrons to reverse engineer gene networks from temporal gene expression data. It is shown that a trained neural network, with pruning (gene silencing), can also be described as a gene network with minimal re-interpretation, where the sensitivity between nodes reflects the probability of one gene affecting another gene in time. The methodology is known as the Neural Network System Biology Approach with Gene Silencing Simulations (NNSBAGSS). The methodology was applied to artificial temporal data and rat CNS development time-course data.

A. Krishna, A. Narayanan, E.C. Keedwell
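
The following is a minimal sketch of the sensitivity-analysis idea under illustrative assumptions: a network is trained to map expression at one time point to the next, and the influence of gene j on gene i is estimated by perturbing input j and measuring the change in output i. The synthetic data, model choice and perturbation size are placeholders; the paper's pruning and gene-silencing procedure is not reproduced.

```python
# Sketch: estimate a gene-influence matrix by sensitivity analysis of a network
# trained on consecutive expression time points.  Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_genes, n_times = 5, 60
expr = rng.random((n_times, n_genes))
expr[1:, 0] = 0.8 * expr[:-1, 2]                # toy regulation: gene 2 -> gene 0

X, Y = expr[:-1], expr[1:]                      # map expression(t) -> expression(t+1)
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0).fit(X, Y)

eps = 0.05
sensitivity = np.zeros((n_genes, n_genes))      # [i, j] = effect of gene j on gene i
for j in range(n_genes):
    X_up = X.copy()
    X_up[:, j] += eps                           # small perturbation of regulator j
    sensitivity[:, j] = np.abs(net.predict(X_up) - net.predict(X)).mean(axis=0) / eps

print(np.round(sensitivity, 2))                 # entry [0, 2] should tend to dominate
```
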
A Meta-Level Architecture for Adaptive Applications

The goal of this work is to investigate meta-level architectures for adaptive systems. The main application area is user modeling for mobile and digital television systems. The results of a set of experiments performed on the proposed architecture showed that it is possible to reuse the components responsible for user modeling if they are designed as meta-level components.

Fabrício J. Barth, Edson S. Gomi
Adaptive Finite State Automata and Genetic Algorithms: Merging Individual Adaptation and Population Evolution

This paper presents adaptive finite state automata as an alternative formalism to model individuals in a genetic algorithm environment. Adaptive finite automata, which are basically finite state automata that can change their internal structures during operation, have proven to be an attractive way to represent simple learning strategies. We argue that the merging of adaptive finite state automata and GA results in an elegant and appropriate environment to explore the impact of individual adaptation, during lifetime, on population evolution.

H. Pistori, P. S. Martins, A. A. de Castro Jr.
Modeling a tool for the generation of programming environments for adaptive formalisms

This paper aims to present the logical model that makes up the structure of a tool for the definition of environments for rule-driven adaptive formalisms.

A.R. Camolesi
Adaptive Decision Tables: A Case Study of their Application to Decision-Taking Problems

Decision tables have been traditionally used for solving problems involving decision-taking tasks. In this paper, adaptive devices based on decision tables are used for the solution of decision-taking problems. The resulting adaptive decision tables have proved to be effective due to their generality and flexibility. They are helpful tools for automatically choosing an applicable alternative among the several available at each stage of the decision-taking process. An illustrative example as well as an overall comparative evaluation in the business management field is presented.

T. C. Pedrazzi, A. H. Tchemra, R. L. A. Rocha
Robotic mapping and navigation in unknown environments using adaptive automata

Real mobile robots should be able to build a virtual representation of the physical environment in order to navigate and work in such an environment. This paper presents an adaptive way to build such a representation without any a priori information about the environment. The proposed system allows the robot to explore the entire environment and acquire the information coming from its sensors while it travels and, due to the adaptability of the mapping method, the system is able to increase the memory usage according to the area already mapped. The map, built using the adaptive technique, is useful for providing navigation information to the robot, allowing it to move about the environment.

Miguel Angelo de Abreu de Sousa, André Riyuiti Hirakawa
An Adaptive Framework for the Design of Software Specification Languages

Software has been specified as domain theories. A useful strategy for building specifications is the incremental extension of an initial theory, in which increments add new terms and notions not considered in previous extensions. Given an increment, the corresponding theory is stated in a corresponding specification language. The next increment (or extension of the theory) typically requires a related language extension, which has been specified in a variety of ways, e.g. meta-computations, rewriting systems, etc. Adaptive devices naturally support such a scheme, whose instances should reflect the impact of extension variations on the specification language. This paper describes an adaptive framework for the design of a class of software specification languages supporting the incremental process of elaborating software specifications.

João J. Neto, Paulo S. Muniz Silva
Swarm Intelligence Clustering Algorithm based on Attractor

The behavior of ant colonies and their self-organizing capabilities have been widely studied, and various swarm intelligence models and clustering algorithms have been proposed. Unfortunately, the cluster number is often too high and convergence is also slow. We put forward a novel structure, the attractor, which actively attracts and guides the ants’ behavior, and implement an efficient strategy to adaptively control the clustering behavior. Our experiments show that the swarm intelligence clustering algorithm based on attractor (SICBA for short) greatly improves convergence speed and clustering quality compared with LF, and also has many notable virtues such as flexibility and decentralization.

Qingyong Li, Zhiping Shi, Zhongzhi Shi
Ant-based distributed optimization for supply chain management

Multi-agent systems are the best approach for efficient supply chain management. However, the control of each sub-system in a supply chain is a complex optimization problem, and therefore the agents have to include powerful optimization resources along with their communication capacities. This paper presents a new methodology for supply-chain management, distributed optimization based on ant colony optimization, in which the concepts of multi-agent systems and meta-heuristics are merged. A simulation example, with the logistics and distribution sub-systems of a supply chain, shows how the distributed optimization outperforms a centralized approach.

Carlos. A. Silva, J. M. Sousa, T. Runkler, J.M.G da Sá Costa
Comparison of nature inspired and deterministic scheduling heuristics considering optimal schedules

We report on a performance evaluation of nature-inspired stochastic vs. conventional deterministic scheduling algorithms. By means of a comprehensive test bench that comprises task graphs with diverse properties, we determined the absolute performance of those algorithms with respect to the optimal solutions. Surprisingly, the nature-inspired stochastic algorithms outperformed all the investigated deterministic algorithms.

Udo Hönig, Wolfram Schiffmann
An External Memory Supported ACO for the Frequency Assignment Problem

An ant colony optimization algorithm is integrated with an external memory for the purpose of improving its efficiency in the solution of a well-known hard combinatorial optimization problem. The external memory keeps variable-size solution segments extracted from promising solutions of previous iterations. Each solution segment is associated with its parent’s fitness value. In the construction of a solution, each ant retrieves a segment from the memory using tournament selection and constructs a complete solution by filling in the absent components. The proposed approach is used for the solution of the minimum span frequency assignment problem, for which very promising results are obtained on provably difficult benchmark test problems that could not be handled by any other ACO-based approach so far.

Adnan Acan, Akin Günay
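
A schematic sketch of the external-memory mechanism, under stated assumptions: segments of previous solutions are stored together with their parent's fitness, retrieved by tournament selection, and completed into new solutions. The toy assignment problem and the random filling step (in place of pheromone-guided construction) are placeholders, not the paper's algorithm.

```python
# Sketch of an external memory of solution segments with tournament retrieval.
# Toy problem: assign values 0..4 to 10 variables to match a hidden target.
import random

N_VARS, N_VALS, MEM_SIZE = 10, 5, 20
target = [random.randrange(N_VALS) for _ in range(N_VARS)]

def fitness(sol):                         # matches with the hidden target (maximise)
    return sum(s == t for s, t in zip(sol, target))

memory = []                               # entries: (parent_fitness, {position: value})

def store_segment(solution):
    """Extract a random variable-size segment and tag it with the parent fitness."""
    i = random.randrange(N_VARS - 1)
    j = random.randrange(i + 1, N_VARS)
    memory.append((fitness(solution), {p: solution[p] for p in range(i, j + 1)}))
    memory.sort(key=lambda e: e[0], reverse=True)
    del memory[MEM_SIZE:]                 # keep only the most promising segments

def construct():
    """Tournament-select a segment, then fill the absent components (randomly here)."""
    if memory:
        entry = max(random.sample(memory, min(3, len(memory))), key=lambda e: e[0])
        segment = entry[1]
    else:
        segment = {}
    return [segment.get(p, random.randrange(N_VALS)) for p in range(N_VARS)]

best = None
for it in range(200):
    ant = construct()
    if best is None or fitness(ant) > fitness(best):
        best = ant
    store_segment(ant)                    # the paper stores only promising ants
print(fitness(best), "of", N_VARS)
```
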

Soft Computing Applications

Neural Networks for Extraction of Fuzzy Logic Rules with Application to EEG Data

The extraction of logical rules from data is a key application of artificial neural networks (ANNs) in data mining. However, most of the ANN-based rule extraction methods rely primarily on heuristics, and their underlying theoretical principles are not very deep. That is especially true for methods extracting fuzzy logic rules, which usually allow different logical connectives to be mixed in such a way that the extracted rules cannot be correctly evaluated in any particular fuzzy logic model. This paper shows that mixing of connectives is not needed. A method for fuzzy rule extraction in which the evaluation of the extracted rules in a single model is the basic principle is outlined and illustrated on a case study with EEG data.

Martin Holeňa
A New Neuro-Based Method for Short Term Load Forecasting of Iran National Power System

This paper presents a new neuro-based method for short term load forecasting of the Iran national power system (INPS). A MultiLayer Perceptron (MLP) based Neural Network (NN) toolbox has been developed to forecast 168 hours ahead. The proposed MLP has one hidden layer with 5 neurons. The effective inputs were selected through an investigation of historical data released from the INPS. To adjust the parameters of the MLP, the Levenberg-Marquardt Back Propagation (LMBP) training algorithm has been employed because of its remarkably fast convergence. Most papers dealing with 168-hour forecasting employ a hierarchical method, in the sense of monthly or seasonal grouping, provided that there are enough data. In the absence of rich data, the forecasting error would increase. To remedy this problem, the proposed neuro-based approach uses only the weekly group data of concern, while an extra input is added to indicate the month. In other words, for each weekly group a unique MLP based neural network is designed for the purpose of load forecasting.

R. Barzamini, M. B. Menhaj, Sh. Kamalvand, M. A. Fasihi
Approximating the Algebraic Solution of System of Interval Linear Equations with Use of Neural Networks

A new approach to approximating the algebraic solution of systems of interval linear equations (SILE) is proposed in this paper. The original SILE problem is first transformed into an optimization problem, which is in turn solved with the use of artificial neural networks and gradient-based optimization techniques.

Nguyen Hoang Viet, Michał Kleiber
Comparing Diversity and Training Accuracy in Classifier Selection for Plurality Voting Based Fusion

Selection of an optimal subset of classifiers in designing classifier ensembles is an important problem. The search algorithms used for this purpose maximize an objective function which may be the combined training accuracy or diversity of the selected classifiers. Taking into account the fact that there is no benefit in using multiple copies of the same classifier, it is generally argued that the classifiers should be diverse and several measures of diversity are proposed for this purpose. In this paper, the relative strengths of combined training accuracy and diversity based approaches are investigated for the plurality voting based combination rule. Moreover, we propose a diversity measure where the difference in classification behavior exploited by the plurality voting combination rule is taken into account.

H. Altinçay
Visualization of Meta-Reasoning in Multi-Agent Systems

This paper describes the advances of our research on visualization of multi-agent systems (MAS) for purposes of analysis, monitoring and debugging. As MAS are becoming more complex and widely used, such analysis tools are highly beneficial in order to achieve a better understanding of agents’ behaviour. Our solution is based on our originally offline visualization tool suite, which now uses a new realtime data acquisition framework. In this case we have focused on agent meta-reasoning in a MAS for planning of humanitarian relief operations. Previous tools were unable to deal with the complex characteristics of these simulations. This paper describes our new approach, declares conditions and proposes visualization methods which fulfil them.

D. Řehoř, J. Tožička, P. Slavík
Intelligent Agent-Inspired Genetic Algorithm

This paper presents an intelligent agent-inspired genetic algorithm (IAGA). Analogous to the intelligent agent, each individual in IAGA has its own properties, including crossover probability, mutation probability, etc. Numerical simulations demonstrate that, compared with the standard GA where all individuals in a population share the same crossover and mutation probabilities, the proposed algorithm is more flexible, efficient and effective.

C. G. Wu, Y.C. Liang, H.P. Lee, C. Lu
Combining Lazy Learning, Racing and Subsampling for Effective Feature Selection

This paper presents a wrapper method for feature selection that combines Lazy Learning, racing and subsampling techniques. Lazy Learning (LL) is a local learning technique that, once a query is received, extracts a prediction by locally interpolating the neighboring examples of the query which are considered relevant according to a distance measure. Local learning techniques are often criticized for their limitations in dealing with problems with a high number of features and large samples. Similarly, wrapper methods are considered prohibitive for large numbers of features, due to the high cost of the evaluation step. The paper aims to show that a wrapper feature selection method based on LL can take advantage of two effective strategies: racing and subsampling. While the idea of racing was already proposed by Maron and Moore, this paper goes a step further by (i) proposing a multiple testing technique for less conservative racing and (ii) combining racing with subsampling techniques.

Gianluca Bontempi, Mauro Birattari, Patrick E. Meyer
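
A simplified sketch of the racing idea with a lazy learner: candidate feature subsets are scored on progressively more subsamples, and candidates whose running mean is clearly behind the leader are eliminated before the full evaluation cost is paid. The synthetic data, the k-NN learner and the fixed elimination margin (standing in for the paper's multiple-testing procedure) are assumptions.

```python
# Sketch of racing candidate feature subsets with a lazy (k-NN) learner.
# Synthetic data; a fixed elimination margin replaces a statistical test.
import numpy as np
from itertools import combinations
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + X[:, 2] > 0).astype(int)          # only features 0 and 2 matter

candidates = {c: [] for c in combinations(range(6), 2)}   # subset -> scores so far
knn = KNeighborsClassifier(n_neighbors=5)

for stage in range(5):                            # each stage adds one more subsample
    for subset in list(candidates):
        idx = rng.choice(len(X), size=100, replace=False)
        score = cross_val_score(knn, X[idx][:, list(subset)], y[idx], cv=3).mean()
        candidates[subset].append(score)
    best = max(np.mean(s) for s in candidates.values())
    # Racing: drop candidates clearly behind the leader before spending more effort.
    candidates = {c: s for c, s in candidates.items() if np.mean(s) > best - 0.05}

print("surviving subsets:", sorted(candidates))   # (0, 2) should survive
```
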
Personalized News Access

PENA (Personalized News Access) is an adaptive system for personalized access to news. The aims of the system are to collect news from predefined news sites, to select the sections and news items on the server that are most relevant for each user, and to present the selected news. This paper describes the news collection process, the techniques adopted for structuring the news archive, the creation, maintenance and update of the user model, and the generation of the personalized web pages. This is preliminary work that builds on the system described in [1].

D. G. Kaklamanos, K. G. Margaritis
A More Accurate Text Classifier for Positive and Unlabeled data

Almost all LPU (learning from positive and unlabeled examples) algorithms rely heavily on two steps: extracting a reliable negative dataset and supplementing the positive dataset. For these two steps, this paper proposes a two-step approach, CoTrain-Active. The first step, employing the CoTrain algorithm, iterates to purify the unlabeled set with two individual SVM base classifiers. The second step, adopting an active-learning algorithm, further expands the positive set effectively by requesting the true label for the "suspect positive" examples. Comprehensive experiments demonstrate that our approach is superior to Biased-SVM, which is reported to be the previous best. Moreover, CoTrain-Active is especially suitable for situations where the given positive dataset P is extremely insufficient.

Rur Ming Xin, Wan li Zuo
Efficiency Aspects of Neural Network Architecture Evolution Using Direct and Indirect Encoding

Using a GA as a NN design tool involves many aspects. We must decide, among other things, on the coding schema, the evaluation function, the genetic operators, the genetic parameters, etc. This paper focuses on the efficiency of NN architecture evolution. We use two main approaches for representing neural networks in the form of chromosomes: direct and indirect encoding. The presented research is part of our wider study of this problem [1, 2]. We present the influence of the coding schemata on the possibility of evolving an optimal neural network.

H. Kwasnicka, M. Paradowski
Genetic Algorithm Optimization of an Artificial Neural Network for Financial Applications

Model discovery and performance surface optimization with a genetic algorithm demonstrate profitability improvement with an inconclusive effect on statistical criteria. The examination of relationships between the statistics used for evaluating economic forecasts and the profitability of investment decisions reveals that only the ‘degree of improvement over efficient prediction’ shows robust links with profitability. If profits are not observable, this measure is proposed as an evaluation criterion for an economic prediction. Also, combined with directional accuracy, it could be used in an estimation technique for economic behavior, as an alternative to conventional least squares.

Serge Hayward
A Method to Improve Generalization of Neural Networks: Application to the Problem of Bankruptcy Prediction

The Hidden Layer Learning Vector Quantization is used to correct the prediction of multilayer perceptrons in classification of high-dimensional data. Corrections are significant for problems with insufficient training data to constrain learning. Our method, HLVQ-C, allows the inclusion of a large number of attributes without compromising the generalization capabilities of the network. The method is applied to the problem of bankruptcy prediction with excellent results.

Armando Vieira, João C. Neves, Bernardete Ribeiro
An Adaptive Neural System for Financial Time Series Tracking

In this paper, we present a neural network based system to generate an adaptive model for financial time series tracking. This kind of data is quite relevant for data quality monitoring in large databases. The proposed system uses the past samples of the series to indicate its future trend and to generate a corridor inside which the future samples should lie. This corridor is derived from an adaptive forecasting model, which makes use of the walk-forward method to take into account the most recent observations of the series and bring the values of the neural model parameters up to date. The model can also serve to handle other time series characteristics, such as the detection of irregularities.

A. C. H. Dantas, J. M. Seixas
Probabilistic Artificial Neural Networks for Malignant Melanoma Prognosis

Artificial Neural Networks (ANNs) have found applications in a wide variety of medical problems and have proved successful for non-linear regression and classification. This paper details a novel and flexible probabilistic non-linear ANN model for the prediction of the conditional survival probability of malignant melanoma patients. Hazard and probability density functions are also estimated. The model is trained using the log-likelihood function, and generalisation has been addressed. Unrestricted by assumptions that are unrealistic or parametric forms that are difficult to justify, the model thereby attains an advantage over traditional statistical models. Furthermore, an estimate of the variance-covariance matrix is obtained using the asymptotic Fisher information matrix. Implemented in an Excel® spreadsheet, the model’s user-friendly design further adds to its flexibility, with much potential for use by statisticians as well as researchers.

R. Joshi, C. Reeves, C. Johnston
Boosting Kernel Discriminant Analysis with Adaptive Kernel Selection

In this paper, we present a new method to enhance classification performance based on Boosting by introducing nonlinear discriminant analysis as feature selection. To reduce the dependency between hypotheses, each hypothesis is constructed in a different feature space formed by Kernel Discriminant Analysis (KDA). These hypotheses are then integrated based on AdaBoost. To conduct KDA in each Boosting iteration within realistic time, a new method of kernel selection is also proposed. Several experiments are carried out on the blood cell data and thyroid data to evaluate the proposed method. The results show that the performance is almost the same as the best performance of a Support Vector Machine, without any time-consuming parameter search.

Shinji Kita, Satoshi Maekawa, Seiichi Ozawa, Shigeo Abe
Product Kernel Regularization Networks

We study approximation problems formulated as regularized minimization problems with kernel-based stabilizers. These approximation schemas admit an easy derivation of the solution to the problem in the form of a linear combination of kernel functions (one-hidden-layer feed-forward neural network schemas). We prove uniqueness and existence of the solution to the problem. We exploit the article by N. Aronszajn [1] on reproducing kernels and use his formulation of the product of kernels and the resulting kernel space to derive a new approximation schema, the Product Kernel Regularization Network (PKRN). We present a concrete application of PKRN, compare it to the classical Regularization Network, and show that PKRN exhibits better approximation properties.

Kudová Petra, Šámalová Terezie
Statistical Correlations and Machine Learning for Steganalysis

In this paper, we present a scheme for steganalysis based on statistical correlations and machine learning. In general, digital images are highly correlated in the spatial domain and the wavelet domain; hiding data in images affects these correlations. Different correlation features are chosen based on ANOVA (analysis of variance) for different steganographic systems. Several machine learning methods are applied to classify the extracted feature vectors. Experimental results indicate that our scheme is highly effective in detecting the presence of hidden messages in several steganographic systems.

Qingzhong Liu, Andrew H. Sung, Bernardete M. Ribeiro
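
A minimal sketch of the pipeline the abstract outlines, using synthetic stand-ins for the image correlation features: ANOVA-based feature selection followed by a machine-learning classifier. Extraction of the actual spatial- and wavelet-domain correlation features is not shown, and the classifier and parameters are illustrative.

```python
# Sketch: ANOVA (f_classif) feature selection followed by an SVM classifier,
# on synthetic stand-ins for cover/stego correlation features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, size=n)                    # 0 = cover image, 1 = stego image
features = rng.normal(size=(n, 30))               # "correlation features"
features[:, :4] += 0.8 * y[:, None]               # embedding perturbs a few of them

detector = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=8),                  # keep features ranked by ANOVA
    SVC(kernel="rbf"),
)
print("CV accuracy:", cross_val_score(detector, features, y, cv=5).mean())
```
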
The Use of Multi-Criteria in Feature Selection to Enhance Text Categorization

Feature selection has been an interesting issue in text categorization up to now. Previous works on feature selection often used a filter model in which features, after being ranked by a measure, are selected based on a given threshold. In this paper, we present a novel approach to feature selection based on multiple criteria for each feature. Instead of only one criterion, multiple criteria per feature are used, and a selection procedure based on a threshold for each criterion is proposed. This framework seems to be suitable for text data and is applied to text categorization. Experimental results on the Reuters-21578 benchmark data show that our approach is promising and enhances the performance of a text categorization system.

Son Doan, Susumu Horiguchi
Text Classification from Partially Labeled Distributed Data

One of the main problems with text classification systems is the lack of labeled data, as well as the cost of labeling unlabeled data [1]. Thus, there is a growing interest in exploring the combination of labeled and unlabeled data, i.e., partially labeled data [2], as a way to improve classification performance in text classification. The ready availability of this kind of data in most applications makes it an appealing source of information.

The distributed nature of the data, which is usually available online, makes this a very interesting problem well suited to distributed computing tools delivered by emerging GRID computing environments.

We evaluate the advantages obtained by blending supervised and unsupervised learning in a support vector machine automatic text classifier. We further evaluate the possibility of learning actively and propose a method for choosing the samples to be learned.

Catarina Silva, Bernardete Ribeiro
Recovering the Cyclic-Code of Generated Polynomial by Using Evolutionary Computation

Data integrity in computer security is a key component of what we call trustworthy computing, and one of the most important issues in data integrity is detecting and correcting error codes, which is also a crucial step in software and hardware design. Numerous methods have recently been proposed to find the legal codes of the cyclic-code generated polynomial g(x). We think that a better approach for this purpose is to find the legal codes by finding the roots of the cyclic-code generated polynomial. However, as is well known, finding roots of polynomials of high degree in the modulo-q space GF(q) is very difficult. In this paper we propose a method to find the roots of the cyclic-code generated polynomial using evolutionary computation, which makes use of a randomized search method inspired by biological natural selection and natural genetic systems.

Kangshun Li, Yuanxiang Li, Haifang Mo
Intrusion Detection System Based on a Cooperative Topology Preserving Method

This work describes ongoing multidisciplinary research which aims to analyse and apply connectionist architectures to the field of computer security. In this paper, we present a novel approach for Intrusion Detection Systems (IDS) based on an unsupervised connectionist model used as a method for classifying data. In this particular case, it is used to analyse the traffic travelling along the network under study, detecting anomalous traffic patterns related to SNMP (Simple Network Management Protocol). Once the data has been collected and pre-processed, we use a novel connectionist topology preserving model to analyse the traffic data. It is an extension of the negative feedback network characterised by the use of lateral connections on the output layer. These lateral connections have been derived from the Rectified Gaussian distribution.

Emilio Corchado, Álvaro Herrero, Bruno Baruque, José Manuel Sáiz
Model Selection for Kernel Based Intrusion Detection Systems

This paper describes results concerning the robustness and generalization capabilities of a supervised machine learning method in detecting intrusions using network audit trails. We also evaluate the impact of kernel type and parameter values on the accuracy with which a support vector machine (SVM) performs intrusion classification. We show that classification accuracy varies with the kernel type and the parameter values; thus, with appropriately chosen parameter values, intrusions can be detected by SVMs with higher accuracy and lower rates of false alarms.

Feature selection is as important for intrusion detection as it is for many other problems. We present a support vector decision feature selection method for intrusion detection. It is demonstrated that, with appropriately chosen features, intrusions can be detected in real time or near real time.

Srinivas Mukkamala, A. H. Sung, B. M. Ribeiro
A Comparison of Three Genetic Algorithms for Locking-Cache Contents Selection in Real-Time Systems

Locking caches, providing full determinism and good performance, are a very interesting solution for replacing conventional caches in real-time systems. In such systems, temporal correctness must be guaranteed. The use of predictable components, like locking caches, helps the system designer to determine whether all the tasks will meet their deadlines. However, when locking caches are used in a static manner, the system performance depends on the instructions loaded and locked in the cache. The selection of these instructions may be accomplished through a genetic algorithm.

This paper shows the impact of the fitness function on the final performance provided by the real-time system. Three fitness functions have been evaluated, showing differences in the utilisation and performance obtained.

E. Tamura, J.V. Busquets-Mataix, J. J. Serrano Martín, A. Martín Campoy
A Binary Digital Watermarking Scheme Based On The Orthogonal Vector And ICA-SCS Denoising

This paper proposes a new perceptual digital watermarking scheme based on ICA, SCS, the human visual system (HVS), the discrete wavelet transform (DWT) and orthogonal vectors. The original gray-scale image is first divided into 8×8 blocks, which are then permuted. A 1-level DWT is applied to each 8×8 block. Each watermark bit is modulated by an orthogonal vector, and the watermark is then added to the original image. Finally the IDWT is performed to form the watermarked image. In the watermark detection process, the independent component analysis (ICA)-based sparse code shrinkage (SCS) technique is employed for denoising, making use of the orthogonality of the vectors. By hypothesis testing, the watermark can be extracted exactly. The experimental results show that the proposed technique successfully survives image processing operations, image cropping, noise addition and JPEG lossy compression. In particular, the scheme is robust towards image sharpening and image enhancement.

Han dongfeng, Li wenhui

Computer Vision and Pattern Recognition

Simulating binocular eye movements based on 3-D short-term memory image in reading

We simulate binocular eye movements in reading. We introduce 3-D edge features reconstructed from binocular foveated vision to determine the next fixation point in reading. The next fixation point is determined statistically from the feature points in the 3-D short-term memory edge image. We show the effectiveness of simulating eye movements based on a 3-D short-term memory image in order to realize humanlike robots.

Satoru Morita
An Algorithm For Face Pose Adjustment Based On Gray-scale Static Image

Face pose adjustment, as one step of human face location, is very important in computer face recognition. In this paper, we present a new approach to automatic face pose adjustment on gray-scale static images containing a single face. In the first stage, candidate sub-images are generated and the sub-image containing the two eyes is selected according to a matching degree; horizontal gray-level projection is then applied over a small region to locate the nose and mouth, and the selected sub-image is adjusted accordingly. In the second stage, based on the location and symmetry of the eyes, the inclination angle is calculated and the face position is redressed. Experiments show that the algorithm performs very well both in terms of rate and of efficiency. Moreover, due to the precise location of the eyes, the pupils are also detected.

Wenming Cao, Shoujue Wang
Learning Image Filtering from a Gold Sample Based on Genetic Optimization of Morphological Processing

This paper deals with the design of a semi-automated noise filtering approach, which receives just the original noisy image and the corresponding gold (user-manipulated) image to learn the filtering task. It tries to generate an optimized mathematical morphology procedure for image filtering by applying a genetic algorithm as an optimizer. After training and generating a morphological procedure, the approach is ready to apply the learned procedure to new noisy images. The main advantage of this approach is that it takes just one gold sample to learn the filtering and does not need any prior context knowledge. Using morphological operators makes the filtering procedure robust, effective, and computationally efficient. Furthermore, the proposed filter shows little distortion on the noise-free parts of an image and it can extract objects from heavily noisy environments. The architecture of the system and details of the implementation are presented. The feasibility of the approach is tested on well-prepared synthetic noisy images, and the results are given and discussed.

S. Rahnamayan, H.R. Tizhoosh, M.M.A. Salama
Associative Memories and Diagnostic Classification of EMG Signals

In this work, associative memories are used for diagnostic classification of needle EMG signals. Vectors containing 44 autoregressive coefficients represent each signal and are presented as stimuli to associative memories. As the number of training stimuli increases, the method recursively updates associative memories. The obtained classification results are equivalent to the ones provided by the traditional Fisher’s discriminant, indicating the feasibility of the proposed method.

C. Shirota, M. Y. Barretto, C. Itiki
Discretization of Series of Communication Signals in Noisy Environment by Reinforcement Learning

Considering the “Symbol Grounding Problem” and the brain structure of living things, the author believes that the best solution for generating communication in robot-like systems is to use a neural network trained by reinforcement learning. As a first step in the research on symbol emergence using neural networks, it was examined how parallel analog communication signals are binarized to some degree by noise addition in reinforcement-learning-based communication acquisition. In this paper, it is shown that two consecutive analog communication signals are binarized by noise addition using recurrent neural networks. Furthermore, as the noise ratio becomes larger, the degree of binarization increases.

Katsunari Shibata
The Research of Speaker-Independent Continuous Mandarin Digits Speech Recognition Based on the Dynamic Search Method of High-Dimension Space Vertex Cover

In this paper, we present a novel algorithm for speaker-independent continuous Mandarin digit speech recognition, based on the dynamic search method of high-dimension space vertex cover. It does not need endpoint detection or segmentation. We first construct a coverage area for every digit class, then map each numeric string into these coverage areas, and the numeric string is recognized directly by the dynamic search method. In the experiments, 32 speakers (16 female and 16 male) produced 256 digits in total, none of which were used for training. 218 digits were recognized correctly and 26 incorrectly, giving a correct recognition rate of 85%.

Wenming Cao, Xiaoxia Pan, Shoujue Wang
A Connectionist Model of Finding Partial Groups in Music Recordings with Application to Music Transcription

In this paper, we present a technique for tracking groups of partials in musical signals, based on networks of adaptive oscillators. We show how synchronization of adaptive oscillators can be utilized to detect periodic patterns in outputs of a human auditory model and thus track stable frequency components (partials) in musical signals. We present the integration of the partial tracking model into a connectionist system for transcription of polyphonic piano music. We provide a short overview of our transcription system and present its performance on transcriptions of several real piano recordings.

Matija Marolt
Adaptive ICA Algorithm Based on Asymmetric Generalized Gaussian Density Model

A novel Independent Component Analysis (ICA) algorithm is presented which is able to separate mixtures of symmetric and asymmetric sources using self-adaptive nonlinear score functions. It is derived by using the parameterized asymmetric generalized Gaussian density (AGGD) model. Compared with conventional ICA algorithms, the proposed AGGD-ICA method can separate a wide range of signals, including skewed sources. Simulations confirm the effectiveness and performance of the approach.

Fasong Wang, Hongwei Li

Hybrid Methods and Tools

Toward an On-Line Handwriting Recognition System Based on Visual Coding and Genetic Algorithm

One of the most promising methods of interacting with small portable computing devices, such as personal digital assistants, is the use of handwriting. In order to make this communication method more natural, we propose to visually observe the writing process on ordinary paper and to automatically recover the pen trajectory from numerical tablet sequences. On the basis of this work we developed a handwriting recognition system based on visual coding and a genetic algorithm. The system was applied to Arabic script. In this paper we present the different steps of the handwriting recognition system, focusing our contribution on the genetic algorithm method.

M. Kherallah, F. Bouri, A.M. Alimi
Multi-objective genetic algorithm applied to the structure selection of RBFNN temperature estimators

Temperature modelling of a homogeneous medium, when this medium is radiated by therapeutic ultrasound, is a fundamental step in order to analyse the performance of estimators for in-vivo modelling. In this paper punctual and invasive temperature estimation in a homogeneous medium is employed. Radial Basis Function Neural Networks (RBFNNs) are used as estimators. The best fitted RBFNNs are selected using a Multi-objective Genetic Algorithm (MOGA). An absolute average error of 0.0084°C was attained with these estimators.

C. A. Teixeira, W. C. A. Pereira, A. E. Ruano, M. Graça Ruano
Assessing the Reliability of Complex Networks through Hybrid Intelligent Systems

This paper describes the application of Hybrid Intelligent Systems in a new domain: the reliability of complex networks. The reliability is assessed by employing two algorithms (TREPAN and Adaptive Neuro-Fuzzy Inference Systems (ANFIS)), both belonging to the Hybrid Intelligent Systems paradigm. TREPAN is a technique to extract linguistic rules from a trained Neural Network, whereas ANFIS is a method that combines fuzzy inference systems and neural networks. In the experiment presented, the structure function of the complex network analyzed is properly emulated by training both models on a subset of possible system configurations, generated by a Monte Carlo simulation and an appropriate Evaluation Function. Both approaches are able to successfully describe the network status through a set of rules, which allows the reliability assessment.

D.E. D. Torres, C.M. S. Rocco
Autonomous Behavior of Computational Agents

In this paper we present an architecture for decision making of software agents that allows the agent to behave autonomously. Our target area is computational agents, encapsulating various neural networks, genetic algorithms, and similar methods, that are expected to solve problems of different nature within an environment of a hybrid computational multi-agent system. The architecture is based on the vertically-layered and belief-desire-intention architectures. Several experiments with computational agents were conducted to demonstrate the benefits of the architecture.

Roman Vaculín, Roman Neruda
Neural Network Generating Hidden Markov Chain

In this paper we introduce a technique by which a neural network can generate a Hidden Markov Chain. We use a neural network called the Temporal Information Categorizing and Learning Map. The network is an enhanced version of the standard Categorizing and Learning Module (CALM). Our modifications include the use of a Euclidean metric instead of the weighted sum formerly used for categorization of the input space. Construction of the Hidden Markov Chain is provided by turning steady-weight internal synapses into associative learning synapses. Results obtained from testing on simple artificial data promise applicability in a real problem domain. We present a visualization technique for the obtained Hidden Markov Chain and a method by which the results can be validated. Further experiments are being performed.

J. Koutník, M. Šnorek
Datamining in Grid Environment

The paper deals with assessing performance improvements and some implementation issues of two well-known data mining algorithms, Apriori and FP-growth, in the Alchemi grid environment. We compare the execution times and speed-up of two parallel implementations, pure Apriori and a hybrid FP-growth/Apriori version, on a grid with one to six processors. As expected, the latter shows superior performance. We also discuss the effects of database characteristics on overall performance, and give directions for the proper choice of execution parameters and a suitable number of executors.

M. Ciglarič, M. Pančur, B. Šter, A. Dobnikar
Parallel Placement Procedure based on Distributed Genetic Algorithms

This paper discusses a novel performance driven placement technique based on distributed Genetic Algorithms, and focuses particularly on the following points: (1) The algorithm has a two-level hierarchical structure consisting of outline placement and detail placement. (2) For selection control, which is one of the genetic operations, new multi-objective functions are introduced. (3) In order to reduce the computation time, parallel processing is introduced. Results show improvements of 22.5% for worst path delay, 11.7% for power consumption, 15.9% for wire congestion and 10.7% for chip area.

Masaya Yoshikawa, Takeshi Fujino, Hidekazu Terai
Massive parallelization of the compact genetic algorithm

This paper presents an architecture which is suitable for a massive parallelization of the compact genetic algorithm. The resulting scheme has three major advantages. First, it has low synchronization costs. Second, it is fault tolerant, and third, it is scalable.

The paper argues that the benefits that can be obtained with the proposed approach are potentially higher than those obtained with traditional parallel genetic algorithms.

Fernando G. Lobo, Cláudio F. Lima, Hugo Mártires
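
For context, a short sketch of the (sequential) compact GA on OneMax: the population is represented only by a probability vector, and each tournament shifts that vector toward the winner, which is why the per-node state and the messages a parallel version must exchange are so small. The problem and parameters are illustrative, not those of the paper.

```python
# Sequential compact GA on OneMax.  The whole "population" is the vector p,
# which is what makes the algorithm cheap to replicate across compute nodes.
import random

L, N = 32, 50                        # chromosome length, simulated population size
p = [0.5] * L                        # probability that each bit is 1
fitness = sum                        # OneMax: number of 1 bits

def sample():
    return [1 if random.random() < pi else 0 for pi in p]

for step in range(20000):
    if all(pi < 0.01 or pi > 0.99 for pi in p):   # vector has converged
        break
    a, b = sample(), sample()
    winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
    for i in range(L):
        if winner[i] != loser[i]:                 # shift p toward the winner
            p[i] += (1.0 / N) if winner[i] == 1 else (-1.0 / N)
            p[i] = min(1.0, max(0.0, p[i]))

solution = [round(pi) for pi in p]
print("converged after", step, "tournaments; fitness =", fitness(solution))
```
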
Parallel implementations of feed-forward neural network using MPI and C# on .NET platform

The parallelization of the gradient descent training algorithm with momentum and of the Levenberg-Marquardt algorithm is implemented using C# and the Message Passing Interface (MPI) on the .NET platform. The turnaround times of both algorithms are analyzed on a cluster of homogeneous computers. It is shown that the optimal number of cluster nodes is a compromise between the decrease in computational time due to parallelization and the corresponding increase in the time needed for communication.

U. Lotrič, A. Dobnikar
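
The paper's implementation uses C# and MPI on .NET; the following is a Python/mpi4py sketch of the same data-parallel pattern for a much simpler model (linear least squares): every rank computes a gradient on its own data shard, the gradients are averaged with an all-reduce, and all ranks apply the identical update. Data, model and learning rate are illustrative assumptions.

```python
# Minimal data-parallel gradient descent with mpi4py
# (run with, e.g., mpiexec -n 4 python this_script.py).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(rank)                 # each rank holds its own shard
X = rng.normal(size=(250, 10))
true_w = np.arange(10, dtype=float)
y = X @ true_w + 0.01 * rng.normal(size=250)

w = np.zeros(10)
lr = 0.05
for epoch in range(200):
    grad_local = X.T @ (X @ w - y) / len(y)       # local least-squares gradient
    grad = comm.allreduce(grad_local, op=MPI.SUM) / size   # average across ranks
    w -= lr * grad                                # identical update on every rank

if rank == 0:
    print("max weight error:", np.abs(w - true_w).max())
```

The communication per step is one all-reduce of a vector the size of the parameter set, which is the term that grows with the number of nodes and eventually offsets the computational speed-up.
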
HeuristicLab: A Generic and Extensible Optimization Environment

Today numerous variants of heuristic optimization algorithms are used to solve different kinds of optimization problems. This huge variety makes it very difficult to reuse already implemented algorithms or problems. In this paper the authors describe a generic, extensible, and paradigm-independent optimization environment that strongly abstracts the process of heuristic optimization. By providing a well organized and strictly separated class structure and by introducing a generic operator concept for the interaction between algorithms and problems, HeuristicLab makes it possible to reuse an algorithm implementation for attacking many different kinds of problems, and vice versa. Consequently HeuristicLab is very well suited for rapid prototyping of new algorithms and is also useful for educational support due to its state-of-the-art user interface, its self-explanatory API and the use of modern programming concepts.

S. Wagner, M. Affenzeller
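
As a toy illustration of this kind of separation (not HeuristicLab's actual class structure or API): the search loop below talks to a problem only through a small operator interface, so the same algorithm can be reused on any problem that implements it.

```python
# Toy separation of algorithm and problem behind a small operator interface.
# This mimics the idea of paradigm-independent reuse, not HeuristicLab's API.
import random
from abc import ABC, abstractmethod

class Problem(ABC):
    @abstractmethod
    def create(self): ...            # produce a random solution
    @abstractmethod
    def evaluate(self, s): ...       # quality to maximise
    @abstractmethod
    def perturb(self, s): ...        # small modification of a solution

class SphereProblem(Problem):
    """Minimise (x - target)^2, phrased as maximisation of its negative."""
    def __init__(self, target): self.target = target
    def create(self): return random.uniform(-10, 10)
    def evaluate(self, s): return -(s - self.target) ** 2
    def perturb(self, s): return s + random.gauss(0, 0.1)

def hill_climb(problem: Problem, steps: int = 1000):
    """Generic search loop: knows nothing about the concrete problem."""
    best = problem.create()
    for _ in range(steps):
        cand = problem.perturb(best)
        if problem.evaluate(cand) >= problem.evaluate(best):
            best = cand
    return best

print(round(hill_climb(SphereProblem(3.14)), 2))   # the loop accepts any Problem
```
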
The Satellite List: A Reversible Doubly-Linked List

Subpath reversals are common operations on graph-based structures arising in a wide range of applications in combinatorial optimization. We describe the satellite list, a variation on the doubly-linked list that is symmetric, efficient, and can be reversed, or have subsections reversed, in constant time.

C. Osterman, C. Rego, D. Gamboa
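
A small Python sketch of the key idea rather than of the satellite structure itself: if every node stores its two neighbours symmetrically, with no notion of 'next' and 'previous', traversal only needs to remember where it came from, and reversing a subpath reduces to re-linking the four boundary nodes, independent of the subpath's length.

```python
# Orientation-free doubly-linked nodes: traversal carries the previous node, and
# reversing the subpath b..c needs only four local re-links (O(1)).
class Node:
    def __init__(self, value):
        self.value = value
        self.nbrs = [None, None]            # two symmetric neighbour slots

    def other(self, came_from):
        """The neighbour that is not the node we arrived from."""
        return self.nbrs[1] if self.nbrs[0] is came_from else self.nbrs[0]

    def replace(self, old, new):
        self.nbrs[self.nbrs.index(old)] = new

def link(u, v):
    u.nbrs[u.nbrs.index(None)] = v
    v.nbrs[v.nbrs.index(None)] = u

def traverse(start, first):
    prev, cur, out = start, first, [start.value]
    while cur is not None:
        out.append(cur.value)
        prev, cur = cur, cur.other(prev)
    return out

def reverse_segment(a, b, c, d):
    """List ... a-b-...-c-d ...: reverse b..c by re-linking only the endpoints."""
    a.replace(b, c)
    c.replace(d, a)
    d.replace(c, b)
    b.replace(a, d)

nodes = [Node(i) for i in range(6)]          # 0-1-2-3-4-5
for u, v in zip(nodes, nodes[1:]):
    link(u, v)
print(traverse(nodes[0], nodes[1]))          # [0, 1, 2, 3, 4, 5]
reverse_segment(nodes[1], nodes[2], nodes[4], nodes[5])
print(traverse(nodes[0], nodes[1]))          # [0, 1, 4, 3, 2, 5]
```

Because no interior node carries a direction, nothing inside the reversed segment has to be touched, which is the property the satellite list exploits.
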
Backmatter
Metadata
Title
Adaptive and Natural Computing Algorithms
Editors
Dr. Bernardete Ribeiro
Dr. Rudolf F. Albrecht
Dr. Andrej Dobnikar
Dr. David W. Pearson
Dr. Nigel C. Steele
Copyright Year
2005
Publisher
Springer Vienna
Electronic ISBN
978-3-211-27389-0
Print ISBN
978-3-211-24934-5
DOI
https://doi.org/10.1007/b138998
