1993 | Book

Artificial Neural Nets and Genetic Algorithms

Proceedings of the International Conference in Innsbruck, Austria, 1993

Editors: Dr. Rudolf F. Albrecht, Dr. Colin R. Reeves, Dr. Nigel C. Steele

Publisher: Springer Vienna

About this book

Artificial neural networks and genetic algorithms are both areas of research which have their origins in mathematical models constructed in order to gain understanding of important natural processes. By focussing on the process models rather than the processes themselves, significant new computational techniques have evolved which have found application in a large number of diverse fields. This diversity is reflected in the topics which are the subjects of contributions to this volume. There are contributions reporting theoretical developments in the design of neural networks, and in the management of their learning. A number of contributions report applications to speech recognition tasks, the control of industrial processes, credit scoring, and so on. Regarding genetic algorithms, several methodological papers consider how genetic algorithms can be improved using an experimental approach, as well as by hybridizing with other useful techniques such as tabu search. The closely related area of classifier systems also receives a significant amount of coverage, aimed at better ways of implementing them. Further, while there are many contributions which explore ways in which genetic algorithms can be applied to real problems, nearly all involve some understanding of the context in order to apply the genetic algorithm paradigm more successfully. That this can indeed be done is evidenced by the range of applications covered in this volume.

Table of Contents

Frontmatter

Workshop Summary

Workshop Summary

Recognising that the Conference would attract participants from two distinct areas, initial arrangements were made to hold a pre-conference workshop to serve as a general introduction to both of the Conference themes.

Nigel Steele, Kevin Warwick, Carsten Petersen, Colin Reeves

Artificial Neural Networks

The class of refractory neural nets

We introduce absolute refractory behaviour into the formal neuron model. While a probabilistic approach to such a refractory model has been attempted before, in this paper a deterministic analysis is carried out. A first result is an inexpensive algorithm to transform each refractory net into an equivalent non-refractory one. This result is then exploited to obtain an upper bound on the computational complexity of two classical problems: the reachability and stabilization problems. These find their principal motivation in control and learning theories, whenever the need arises to determine a priori the length of both transients and limit cycles. Finally, we prove that, when the connection matrices of the nets are symmetric, the complementary problem of stabilization is NP-complete and reachability is P-complete.

A. Clementi, M. Di Ianni, P. Mentrasti
The Boltzmann ECE Neural Network: A Learning Machine for Estimating Unknown Probability Distributions

This paper treats the following problem: consider an ergodic signal source S, and suppose that at each time instant the source transmits a multidimensional signal x according to an unknown ergodic probability distribution with density p(x). The problem is then to estimate the unknown density p(x). The problem is solved via a Recurrent High-Order Neural Network (RHONN) and is based on the Energy Coordinates Equivalence (ECE) principle proposed by the authors. In the proposed method the signals are considered to be the states of a stochastic gradient dynamical system (Langevin s.d.e.) after it converges (in a stochastic manner). Then the (unknown) system is identified using ECE neural networks. After the learning procedure converges, the energy function of the ECE neural network is the estimate of the unknown probability distribution.
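The link used here is the standard one between a stochastic gradient system and its stationary density: a Langevin diffusion driven by an energy E(x) has stationary density proportional to exp(-E(x)). The abstract gives no implementation details, so the Python fragment below is only a generic, hedged illustration of that link; the step size, iteration count and the quadratic toy energy are assumptions, not taken from the paper.

import numpy as np

def langevin_sample(grad_E, dim, n_steps=10_000, eta=1e-2, rng=None):
    """Discretized Langevin dynamics: x <- x - eta*grad_E(x) + sqrt(2*eta)*noise.
    After many steps the sample is approximately distributed as p(x) ~ exp(-E(x))."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(dim)
    for _ in range(n_steps):
        x = x - eta * grad_E(x) + np.sqrt(2.0 * eta) * rng.standard_normal(dim)
    return x

# Toy energy E(x) = 0.5*||x||^2 (an assumption for illustration): the stationary
# density is then the standard Gaussian, so repeated calls yield Gaussian samples.
sample = langevin_sample(lambda x: x, dim=3)

In the ECE setting described above, the learned energy function of the network would play the role of E, so that exp(-E) yields the density estimate.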

Elias B. Kosmatopoulos, Manolis A. Christodoulou
The Functional Intricacy of Neural Networks: A Mathematical Study

The motive of the essay is to provide awareness of the functional complexity of our life structure and to emphasize the incomprehensibility of this polymorphic mightiness. The consideration is based on the assumption that neurons with axons, dendrites, and synapses form closed functional loops and that axons spread via multiple dendrites and synapses to other neurons, forming in this way highly entangled, constantly operating networks. Two different generalized patterns of neural structures are investigated. The generalization is needed for the purpose of mathematical formalization. In one form of interaction each axon spreads to all other neurons; in the second form there are two bilateral paths of interaction between individual neuron loops. In both cases a mathematical formula allows us to illustrate the clear limits of our perception of the functional behaviour of networks, although the investigation herein considers only the architectural structure. Illustrated examples point to the similarity between the structure of the brain of mammals and the structure of the information networks of technical multi-controlled installations. Thus, it might seem that the basic structure of handling information is universal in nature.

Rudolf Starkermann
Evolving Neural Feedforward Networks

For many practical problem domains the use of neural networks has led to very satisfactory results. Nevertheless, the choice of an appropriate, problem-specific network architecture still remains a very poorly understood task. Given an actual problem, one can choose a few different architectures, train the chosen architectures a few times and finally select the architecture with the best behaviour. But, of course, there may exist totally different and much better suited topologies. In this paper we present a genetic algorithm driven network generator that evolves neural feedforward network architectures for specific problems. Our system ENZO optimizes both the network topology and the connection weights at the same time, thereby saving an order of magnitude in necessary learning time. Together with our new concept for solving the crucial neural network problem of permuted internal representations, this approach provides an efficient and successful crossover operator. This makes ENZO very appropriate for managing the large networks needed in application oriented domains. In experiments with three different applications our system generated very successful networks. The generated topologies show distinct improvements in network size, learning time, and generalization ability.

Heinrich Braun, Joachim Weisbrod
An Example of Neural Code: Neural Trees Implemented by LRAAMs

In this paper we discuss a general method which allows a Neural Tree to be implemented in a high-order recurrent network, the Executor, using an extension of the RAAM model, the Labeling RAAM model (LRAAM). Neural Trees and LRAAMs are briefly reviewed and the Executor is defined. A Neural Tree is encoded by a LRAAM, and the decoding part of the LRAAM is used to control the dynamics of the Executor. The main aspect of this kind of technique is that the weights of the LRAAM can be considered as a neural code implementing the Neural Tree on the Executor. An example of the method is presented for the 8-bit parity problem.

Alessandro Sperduti, Antonina Starita
Kolmogorov’s Theorem: From Algebraic Equations and Nomography to Neural Networks

We trace the developments around Hilbert’s thirteenth problem back to questions concerning algebraic equations. Its solution, namely Kolmogorov’s superposition theorem of 1956, is stated in an elaborate form and its relation with neural nets is explained. A detailed proof allows us to initiate discussions concerning implementability. We address individuals interested in forming an opinion about the hotly debated applicability of the superposition theorem, but also philosophically inclined readers who want to learn the background of a mathematical problem with an eventful history and who, by studying its proof, will get a sense of the difference between construction and existence in mathematics.
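For orientation, the superposition theorem referred to here is usually stated as follows (a standard formulation, not a quotation from the chapter): every continuous function f on [0,1]^n can be written as

f(x_1, \ldots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\Big( \sum_{p=1}^{n} \phi_{q,p}(x_p) \Big),

with continuous one-variable functions \Phi_q (which depend on f) and \phi_{q,p} (which do not). Read as a network, the inner sums form a fixed hidden layer of univariate maps and the \Phi_q act as output units, which is the reading that connects the theorem to neural nets.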

Alexander Kovačec, Bernardete Ribeiro
Output Zeroing within a Hopfield Network

In this paper we investigate a specific problem from the control theory literature, that of zeroing a system output, from the point of view of a neural network. For this we consider functions of the neural network states as defining a system output. In particular we are concerned with a continuous network of Hopfield type which could, in theory, be manufactured with available electrical components. Our aim is to impose a specific dynamics on a network by calculating the synaptic weights directly, without requiring training. Hence when a network is initialised in certain states we can be confident that the functions defining the output will remain sufficiently close to zero. We use (nonlinear) geometrical methods in our analysis and reliable numerical methods for our computations.

David William Pearson
Evolving Recurrent Neural Networks

An evolutionary algorithm which allows entities to increase and decrease in complexity during the evolutionary process is applied to recurrent neural networks. Recognition of various regular languages provides a suitable set of test problems.

Kristian Lindgren, Anders Nilsson, Mats G. Nordahl, Ingrid Råde
Speeding Up Back Propagation by Partial Evaluation

We automatically specialize a general Back Propagation learning algorithm to a particular network topology, obtaining a specialized learning algorithm which is faster than the general one. The automatic specialization is done by a partial evaluator for a subset of the imperative programming language C.

Henrik Friborg Jacobsen, Carsten Krogh Gomard, Peter Sestoft
A New Min-Max Optimisation Approach for Fast Learning Convergence of Feed-Forward Neural Networks

One of the most critical aspects for a wide use of neural networks on real world problems is the learning process, which is known to be computationally expensive and time consuming. In this paper we propose a new approach to the learning process from an optimisation point of view. The developed algorithm is a minimax method based on a combination of the quasi-Newton and steepest descent methods: it has previously been applied successfully in other areas and shows a faster convergence rate when compared with the classical learning rules. The optimum point is reached by minimising the maximum of the error functions of the network without requiring any tuning of internal parameters. Moreover, the proposed algorithm allows useful information about the size of the initial values of the weights to be obtained by simple observations on the Gramian matrix associated with the network. The algorithm has been tested on several widespread benchmarks. The proposed algorithm shows superior properties to backpropagation both in terms of convergence rate and ease of use; its performance is highly competitive with the other learning methods available in the literature. Significant simulation results are also reported in the paper.
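In generic terms (the chapter's own notation is not reproduced here), a min-max training formulation replaces the usual sum-of-squared-errors objective by the worst-case pattern error,

\min_{w} \; \max_{1 \le i \le P} \; E_i(w),

where w denotes the network weights and E_i(w) the error on the i-th of the P training patterns; the combined quasi-Newton/steepest-descent scheme is then applied to this objective.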

A. Chella, A. Gentile, F. Sorbello, A. Tarantino
Evolution of neural net architectures by a hierarchical grammar-based genetic system

We present a hierarchically structured system for the evolution of connectionist systems. Our approach is exemplified by evolution paradigms for neural network topologies and weights. Our descriptions of a network’s connectivity are based on context-free grammars which are used to characterize signal flow from input to output neurons. Evolution of a simple control task gives a first impression about the capabilities of this approach.

Christian Jacob, Jan Rehder
Interactive Classification through Neural Networks

In this paper, we present a simple and fast way to provide the operator with a plane representation of multidimensional data through neural networks for interactive classification. The superiority of humans over automatic clustering procedures comes from their ability to recognise cluster structures in a two dimensional space, even in the presence of outliers between the clusters, of bridging clusters and of all kinds of irrelevant details in the distribution of the data points. By giving the operator interactive means to isolate clusters of two dimensional points, this visualisation becomes the basis of a clustering procedure in which the operator does not lose his grip on the data he is analysing.

Mohamed Daoudi, Denis Hamad, Jack-Gérard Postaire
Visualization of Neural Network Operation for Improving the Performance Optimization Process

This paper considers the implementation of a neural network development environment, and describes a prototype that has been produced. The prototype has been targeted at pattern classifier applications, where human support can be beneficial. The initial aim of the system is to provide support for generating a neural network solution to a particular pattern classification problem. This takes the form of supporting the supervised training of a neural network. The support is designed to speed up the training process, by providing the operator with rapid indications of network performance, for the current configuration. The development environment currently supports the Neocognitron neural network paradigm, which is a versatile pattern classifier. The operation of the layers of this network can be presented visually to the operator, as an aid to setting up. A major consideration is the use of Graphical User Interfaces, and the use of X Windows and Motif for the development environment. In conclusion, the results of the work are considered, and future developments outlined.

Adrian G. Williamson, Richard D. Thomas
Identification of Nonlinear Systems Using Dynamical Neural Networks: A Singular Perturbation Approach

In this paper, we employ singular perturbation analysis to examine the stability and robustness properties of a dynamical neural network identifier. Various cases that lead to modeling errors are taken into consideration. Input-output stability theory is used to assure convergence and internal stability of the identifier. Not all the plant states are assumed to be available for measurement.

George A. Rovithakis, Manolis A. Christodoulou
The Polynomial Method Augmented by Supervised Training for Hand-Printed Character Recognition

We present a pattern recognition algorithm for handprinted and machine-printed characters, based on a combination of the classical least squares method and a neural-network-type supervised training algorithm. Characters are mapped, nonlinearly, to feature vectors using selected quadratic polynomials of the given pixels. We use a method for extracting an equidistributed subsample of all possible quadratic features. This method creates pattern classifiers with accuracy competitive with feed-forward systems trained using back propagation; however, back propagation training takes ten to fifty times longer. (This makes our system particularly attractive for experimentation with other forms of feature representation, other character sets, etc.) The resulting classifier runs much faster in use than the back propagation trained systems, because all arithmetic is done using bit and integer operations.
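As a rough sketch of the general scheme (the pair selection, one-hot targets and function names below are illustrative assumptions, not details taken from the paper), quadratic features are products of selected pixel pairs and a linear discriminant is fitted to them by ordinary least squares:

import numpy as np

def quadratic_features(images, pixel_pairs):
    """Map images (n_samples x n_pixels) to products of selected pixel pairs."""
    return np.stack([images[:, i] * images[:, j] for i, j in pixel_pairs], axis=1)

def train_least_squares(features, labels, n_classes):
    """Classical least-squares fit of one linear discriminant per class."""
    targets = np.eye(n_classes)[labels]                    # one-hot class targets
    weights, *_ = np.linalg.lstsq(features, targets, rcond=None)
    return weights

def classify(features, weights):
    return np.argmax(features @ weights, axis=1)           # highest discriminant wins

Since the features are binary products, classification itself reduces to bit and integer operations, which is where the speed advantage mentioned above comes from.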

Peter G. Anderson, Roger S. Gaborski
Analysis of Electronic Nose Data Using Logical Neurons

The object of this study was to evaluate the performance of networks of logical neurons in analysing data from the Warwick Electronic Nose. The results are compared to those previously obtained from a back-propagation network on the same data. The Warwick Electronic Nose consists of an array of twelve different tin oxide gas sensors, the response to an odour being determined from the conductivity changes of these sensors. Five different alcohol vapours were presented to the nose and the conductivities recorded in each case. The 'standard' McCulloch and Pitts node used in most topologies, in particular back propagation, has proved successful in many applications. However, implementing nodes of this type in hardware is non-trivial due to the need to store, modify and multiply analogue variables. Logical or Boolean Neurons have their inputs and outputs limited to the set {0,1}. This allows easy implementation in hardware using RAM lookup tables. It is for this reason that Logical Neurons were selected as the basis for this study. In general, the logical neurons were less successful in classifying the alcohol data than a back-propagation technique. However, a 6 layer ω-state PLN performed almost as well, with a 94% success rate compared to 100% for the MLP. Further work on logical neurons may lead to improvement by fully exploiting their capacity for generalisation.
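The defining property of such Boolean nodes is that training and recall reduce to writing and reading a lookup table addressed by the binary input vector. The minimal Python sketch below illustrates only this RAM-lookup idea; it does not model the probabilistic states of the PLN variant used in the paper.

class RAMNeuron:
    """A logical (Boolean) neuron: the binary inputs form an address into a RAM table."""
    def __init__(self, n_inputs):
        self.table = [0] * (2 ** n_inputs)       # one stored bit per input combination

    def _address(self, bits):
        addr = 0
        for b in bits:
            addr = (addr << 1) | int(b)
        return addr

    def write(self, bits, value):
        self.table[self._address(bits)] = value  # 'training' is just writing the RAM

    def read(self, bits):
        return self.table[self._address(bits)]   # recall is a single table lookup

neuron = RAMNeuron(3)
neuron.write([1, 0, 1], 1)
assert neuron.read([1, 0, 1]) == 1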

J. D. Mason, E. L. Hines, J. W. Gardner
Neural Tree Network Based Electronic Nose

The training of a multi-layer perceptron using the well known back-propagation algorithm usually requires a priori knowledge of the network architecture. In this paper, results are presented on two practical classification problems which use neural tree classifiers to determine automatically the optimal number of neurons required. The data-set comes from the response of the Warwick Electronic Nose to a set of simple and complex odours.

Adhanom A. Fekadu, Evor L. Hines, Julian W. Gardner
Lime Kiln Process Identification and Control: A Neural Network Approach

Complex systems exhibiting strong nonlinearities and time delays, such as chemical processes, are very demanding in control requirements. In this paper we present a neural network approach for multivariable non-linear kiln process identification and control. Neural networks are attractive in control theory because of their powerful capability, as recent research has shown, to approximate nonlinear functions within a specified approximation error. They can be used to synthesise non-linear controllers for non-linear processes, and it is expected that better results can be obtained compared to more conventional methods. The main objective of this work is to train the neural network kiln controller to provide suitable control inputs that produce a desired kiln response. If the neural network plant model is capable of approximating the highly non-linear calcination process in the lime kiln with sufficient accuracy, then it may be used within a model based control strategy. Firstly, the lime kiln process identification is achieved using a feedforward Artificial Neural Network (ANN), namely the plant model. It learns the kiln dynamics through a training process that drives the error between the plant output and the network output to a minimum. The nonlinear mapping from control inputs to plant outputs is achieved through the use of the backpropagation learning paradigm. The specifications of the neural network needed to provide the desired system representation are given. Secondly, a neural controller was designed to adaptively control the non-linear plant. The neural network topology was selected according to commonly used performance criteria. Simulation results of non-linear kiln process identification as well as non-linear adaptive control are presented to illustrate the neural network approach. Analysis of the neural networks' performance is emphasised.

B. Ribeiro, A. Dourado, E. Costa
Application of Neural Networks to Automated Brain Maturation Study

An application of Neural Networks (NN) to brain maturation prediction is presented. The problem consists of, given a pattern extracted from electroencephalographic (EEG) signals, stating the degree of brain development (low or normal/high). To that end, a population of subjects with their EEG assessed by a neurologist is available. A Backpropagation (BP) neural network is used for this supervised classification task, and a comparison with standard statistical classifiers is made. The effect on performance of several preprocessing techniques such as Principal Components Analysis (PCA), normalization and scaling is investigated. Better performance is found with the NN approach, both in terms of efficiency and consistency.

L. Moreno, J. D. Piñeiro, J. L. Sánchez, S. Mañas, J. Merino, L. Acosta, A. Hamilton
A Neuron Model for Centroid Estimation in Clustering Problems

In this paper a new model of neuron unit, able to find cluster centroids without competition, as in Unsupervised Competitive Learning (UCL) neural networks, is presented. The drawbacks of classical unsupervised learning laws are investigated by analyzing the basic mechanisms of the clustering operation, and the paradigms of an alternative clustering algorithm are derived. Starting from these new paradigms, a new neural model, in which the basic neural element is more complex, is obtained. Two simple neural elements are locally interconnected in non-linear feedback and the pair is arranged to form a stand-alone neural unit. The properties of the “neural couple” in combination with the proposed learning law are investigated, and it is shown that the proposed neuron unit is able, during the learning stage, to perform an automatic switch of learning strategy. A first learning strategy is adopted to find the barycentre of the whole input space; a second strategy then allows the centroid of the most populated cluster to be found. Simulation on a test set shows that, given an input space in which different clusters are “well” placed, the synaptic vector of the neural unit converges to the centroid of the most populated class at least as fast as other popular learning laws based on competitive learning strategies.

G. Acciani, E. Chiarantoni
Systolic Pattern Recognition Based on Neural Network Algorithm

The paper presents a solution for pattern classification which uses distributed processing both for computing the matching score and for selecting the class with the maximum score. The proposed architecture belongs to the class of systolic arrays, being a generalization of the classical priority queue. A detailed description of the elementary processors (EPs) reveals that the algorithm implemented by each EP (which is based on computing the Hamming distance) is also common in neural networks. The overall result is an O(M) execution time for M classes (i.e. linear), and O(1) execution time with respect to n (the size of the patterns). For testing the ideas, a simulator has been developed. It has been built starting from a set of C functions for simulating parallel processes. A short description of these functions supports our claim about the improvement of efficiency when developing a simulator starting from these functions. Several results are briefly discussed. Conclusions and further directions of research end the paper.
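The matching score each elementary processor computes is based on the Hamming distance between the input pattern and a stored class template. Below is a plain, sequential Python version of that classification step, for illustration only; the systolic, pipelined organisation described in the paper is not reproduced here.

import numpy as np

def classify_by_hamming(pattern, prototypes):
    """pattern: binary vector of length n; prototypes: (M, n) binary class templates.
    The matching score of a class is n minus its Hamming distance to the pattern,
    so the best class is the one at minimum Hamming distance."""
    distances = np.sum(prototypes != pattern, axis=1)   # Hamming distance to each of the M classes
    return int(np.argmin(distances))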

D. O. Creteanu, V. Beiu, J. A. Peperstraete, R. Lauwereins
Neural Networks versus Image Pyramids

Neural networks and image pyramids are massively parallel processing structures. In this paper we exploit the similarities as well as the differences between these structures. The general goal is to exchange knowledge between the two fields. After introducing the basic concepts of neural networks and image pyramids we give a translation table between the vocabulary used in image pyramids and that used in neural networks. Image pyramids which store and process numerical information (e.g. grey values of pixels) are very similar to neural networks. Therefore we concentrate on “symbolic pyramids”. The main idea is to replace a cell of the pyramid by a small neural network, in order to represent and process symbolic information. We will consider local as well as distributed representations for symbolic information. In particular we present a neural implementation of the 2 x 2/2 curve pyramid. We derive some general rules for implementing symbolic pyramids by neural networks. Finally we briefly discuss the role of learning in image pyramids.

Horst Bischof, Walter G. Kropatsch
Application of Neural Networks to Gradient Search Techniques in Cluster Analysis

In this paper, we propose an approach to cluster analysis based on two levels: a neural network and a heuristic clustering algorithm. The neural network provides data compression such that the probability density function (p.d.f.) of the weights is as close as possible to the p.d.f. of the data. During the clustering step, which is based on an estimation of the gradient of the p.d.f., each available weight vector, or prototype, is moved in the direction of the p.d.f. gradient approximation. The process is iterated until the normalised local gradient is equal to zero, which results in prototypes located near the modes of the p.d.f.

Stephane Delsert, Denis Hamad, Mohamed Daoudi, Jack-Gérard Postaire
LVQ-Based On-line EEG Classification

In this paper the problems of classifying EEG in both its spatial and its temporal aspects are described. The task was to identify the intended side of hand movement (left or right) on the basis of EEG recorded on two channels during one second before movement onset. This is part of a larger project aiming to build a “Brain-Computer Interface” (BCI) which should enable handicapped persons to communicate with their surroundings using their EEG. Several solutions based on Learning Vector Quantizers (LVQs) are presented, ranging from classification of the whole EEG segment to combinations of classifications of several small EEG segments. Two methods of combining these classifications are presented: one using direct classification and majority voting, and the other using a class activation function. The impact of the size of the segments and other parameters on the system performance is discussed. Results of experiments (correct classification rates of 85–90%) with user-dependent LVQs are reported.
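The LVQ classifiers referred to here follow Kohonen's scheme, whose core is a simple prototype update. The fragment below is a bare-bones LVQ1 step for illustration; the learning rate and the details specific to the authors' LVQ variants are not taken from the paper.

import numpy as np

def lvq1_step(x, y, prototypes, proto_labels, lr=0.05):
    """Move the nearest prototype towards x if its label matches y, away from x otherwise."""
    k = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))   # best-matching prototype
    sign = 1.0 if proto_labels[k] == y else -1.0
    prototypes[k] += sign * lr * (x - prototypes[k])
    return prototypes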

Doris Flotzinger, Joachim Kalcher, Gert Pfurtscheller
A Scalable Neural Architecture Combining Unsupervised and Suggestive Learning

Multi-layered perceptrons are, in theory, capable of solving a wide range of problems. However, as the scale of many problems is increased, or requirements change, multi-layered perceptrons fail to learn or become impractical to implement. Self-organizing networks are not so limited by scale, but require a priori information, typically in the form of preset weights or suitable control parameters, to achieve a good categorization of a data set. Based on research into the behaviour of biological neurons during learning, a new self-organizing neural network has been devised. Moving away from the traditional McCulloch and Pitts model, each neuron stores several independent patterns, each capable of initiating a neuron output. By structuring such neurons into a network, a rapid and equal distribution of data across competitive nodes is possible. This paper introduces the new network, known as a Master-Slave architecture, and its learning paradigm. By using competitive and suggestive learning, inputs are distributed across all available classification units, without the need for a priori knowledge. Two experiments are described, highlighting the potential of the master-slave architecture as a building block for larger networks.

R. B. Lambert, W. P. Cockshott, R. J. Fryer
Stability Analysis of the Separation of Sources Algorithm: Application to an Analogue Hardware

We present in this paper a stability analysis of the two-neuron Hérault-Jutten network for the separation of two sources from a linear and instantaneous mixture. The non-linear functions used by the learning rule are chosen to be of the form f(x)=sinh(x/α) and g(x)=tanh(x/β), as we can easily implement them on analogue hardware. Based on simulation results and theoretical considerations, the influence of the non-linear functions and of the statistical nature of the source signals on the stability of the algorithm is pointed out. In order to determine the properties and the limitations of our analogue hardware, the behaviour of the algorithm is finally derived for g(x)=sign(x), as it is actually implemented in our circuit.
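The Hérault-Jutten adaptation rule is commonly written as an anti-Hebbian update of the cross-coupling weights driven by two different odd non-linearities. The Python fragment below is a textbook-style sketch using the functions named in the abstract; the learning rate, sign conventions and all circuit-level details are assumptions, not a reproduction of the authors' analogue implementation.

import numpy as np

def hj_step(x, C, mu=1e-3, alpha=1.0, beta=1.0):
    """One adaptation step of a two-neuron Herault-Jutten network.
    x: one sample of the two mixed signals, shape (2,);
    C: 2x2 cross-coupling matrix with zero diagonal."""
    y = np.linalg.solve(np.eye(2) + C, x)      # network output solves y = x - C y
    f = lambda v: np.sinh(v / alpha)           # non-linearities from the abstract
    g = lambda v: np.tanh(v / beta)
    C[0, 1] += mu * f(y[0]) * g(y[1])          # anti-Hebbian updates of the cross-weights
    C[1, 0] += mu * f(y[1]) * g(y[0])
    return y, C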

Didier Achvar
Learning with Mappings and Input-Orderings using Random Access Memory-based Neural Networks

Random Access Memory (RAM)-based systems have been studied for several years. A recent paper by Gera and Sperduti demonstrates how one such RAM-based network can be used to produce a real-number ordering for each member of a training set. The coding reflects the relative similarity of input patterns in a highly concise way that is also very economical in its memory requirements. In fact, only one neuron or discriminator is required. However, the larger the training set, the closer some codes become. This in turn means that more decimal places are needed for the codes. One possible solution to this problem is described in this paper. A two-stage learning method is used. In the first stage, a Kohonen-like RAM network divides the training set into groupings of similar patterns. Each grouping is then associated with its own Gera/Sperduti discriminator. Since the input patterns to each such discriminator are similar, the discriminator itself can be pruned to remove redundant information and maximise output variance.

Michael H. Gera
New Preprocessing Methods for Holographic Neural Networks

We propose two new methods for preprocessing of stimulus data in holographic neural networks. The first method achieves optimal data symmetrization, and it is based on estimating the actual distributions of elements within a stimulus vector. The second method serves for data expansion, and it relies on sine and cosine functions to produce dummy stimulus elements. We describe an implementation of our methods and report on some experiments. Our proposals improve the applicability of holographic networks. Namely, symmetrization assures accuracy in reproducing learned stimulus-response associations, while expansion increases learning capacity.

Robert Manger, Branko Souček
A Solution for the Processor Allocation Problem: Topology Conserving Graph Mapping by Self-Organization

We consider the problem of how to allocate the tasks of a parallel program to the processor elements of a multicomputer system such that the communication overhead of the program is minimized. This problem basically amounts to a graph embedding problem, since both the program and the multicomputer can be modeled as graphs. The embedding problem can be characterized as the search for a topology conserving mapping of the source graph to the target graph. To find such mappings we apply the Kohonen self-organization process. Our main concern in this article is to show how this graph embedding problem can be fitted to the Kohonen technique. To that end, we introduce feature vectors for each node of the source graph to provide topological information that goes beyond the direct neighborhood and takes larger surroundings into account. The particular optimization goal of the task mapping problem is related to known results on the Kohonen process. The behavior and performance of our approach is illustrated by some sample graphs.
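Stripped to its essentials, fitting a Kohonen map here means repeatedly taking the feature vector of a source-graph node, finding the best-matching target-graph node, and pulling that node's topological neighbourhood towards the vector. The sketch below uses an abstract neighbourhood function and illustrative parameters; it is not the authors' concrete formulation.

import numpy as np

def kohonen_step(x, weights, neighbourhood, lr=0.1):
    """x: feature vector of a source-graph node; weights[j]: reference vector of target node j;
    neighbourhood(j, bmu) in [0, 1] encodes how close j and bmu are in the target topology."""
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))   # best-matching unit
    for j in range(len(weights)):
        weights[j] += lr * neighbourhood(j, bmu) * (x - weights[j])
    return bmu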

M. Dormanns, H.-U. Heiss
Using A Synergetic Computer in an Industrial Classification Problem

Synergetic Computers (SCs) represent a class of new algorithms which can be used for different pattern recognition tasks. Due to their strong mathematical similarity with self-organized phenomena of a physical nature, they are promising candidates for hardware realizations of classification systems. Until now there has been a lack of investigations concerning the importance of synergetic algorithms in the field of pattern recognition, as well as concerning their practical performance. One of these synergetic algorithms (SCAP) is examined in this paper with respect to its pattern recognition capabilities. Its capacity for identifying wheels in an industrial environment is discussed. We show that with adequate preprocessing the SCAP reaches recognition rates of 99.3% under variable illumination conditions and even 100% with constant illumination. In addition to this, we try to position the SCAP with respect to established pattern identification algorithms.

T. Wagner, F. G. Boebel, U. Haßler, H. Haken, D. Seitzer
Connectionist Unifying Prolog

We introduce a connectionist approach to unification using a local and a distributed representation. A Prolog system using these unification strategies has been built. Prolog is a logic programming language which utilizes unification. We introduce an uncertainty measure into unification. This measure is based on the structuring abilities of the chosen representations. The strategy using a local representation, called ℓ-CUP, utilizes a self-organizing feature map (FM-net) to determine similarities between terms and induces the representation for a relaxation network (relax-net). The strategy using a distributed representation, called d-CUP, embeds a similarity measure through its recurrent representation. It has the advantage that similar terms have a similar representation. The unification itself is done by a backpropagation network (BP-net). We have proven the system's adequacy for unification, its efficient computation, and its ability to do extended unification.

Volker Weber
Plausible Self-Organizing Maps for Speech Recognition

A major problem in connectionist phonetic acoustic decoding is the way the acoustic signal is presented to the network. Neurobiological data about the inner ear and the primary auditory cortex could be very helpful but are rare. On the other hand, other biological work has shown that the structure and functioning of the visual cortex, which have been extensively studied, are very close to the structure and functioning of the auditory cortex. It has been shown that a simple two-layered network of linear neurons can organize itself to extract the complete information contained in a set of presented patterns. This model has been applied to visual information and has revealed orientation and spatial frequency selective cells. Such principles have been applied here to speech recognition, and we have designed two kinds of maps. The first kind is able to represent frequency characteristics of the signal (e.g. formantic structure). The second kind takes into account dynamic aspects of the signal (e.g. formantic transition). First, these results have been analyzed both from a phonetic and a signal processing point of view and show a very interesting representation of the signal. Second, these representations can be used as the input map of a dynamic connectionist network for speech recognition. The input maps have a selective activity with regard to phonemic structures, and enable dynamic networks to differentiate the phonemes.

Lionel Beaugé, Stéphane Durand, Frédéric Alexandre
Application of Neural Networks to Fault Diagnosis for HVDC Systems

This paper describes a neural network design and its simulation results for fault diagnosis of thyristor converters and the HVDC power system. Fault diagnosis is carried out by mapping input data patterns, which represent the behaviour of the system, to one or more fault conditions. The behaviour of the converters is described in terms of the time varying patterns of conducting thyristors, pulse zone periods, voltage zone periods and AC and DC fault characteristics. A three-layer neural network consisting of 24 input nodes, 12 hidden nodes and 13 output nodes is used. 13 different faults were considered; although a lot of research still needs to be done, the neural network approach shows great potential as a more effective strategy for fault diagnosis.

K. S. Swarup, H. S. Chandrasekharaiah, L. L. Lai, F. Ndeh-Che
The Hopfield and Hamming Networks Applied to the Automatic Speech Recognition of the Five Spanish Vowels

In the present work, two trained classifiers based on Neural Networks, specifically the Hopfield network and the Hamming network, were applied to a problem in speech recognition and the results were compared. The problem is that of the automatic recognition of the five Spanish vowels in a multi-speaker environment. There is thus a twofold purpose to the present note: to give an application of two NN algorithms and to give two different, although similar, methods for recognizing those vowels. The results showed that Neural Networks have a good prediction capability. The Hamming network has a series of evident advantages over the Hopfield network, although the results were quite similar. The main advantage of the Hamming network, in our case, is that it has far fewer connections than the Hopfield one; while the Hamming network needs Np connections, the Hopfield network needs N². Models of unsupervised neural networks with the use of back-propagation algorithms will be developed in forthcoming papers. Although the success rate of the Hopfield and Hamming networks is somewhat less than that of the multi-layer perceptron network, the present models of neural networks are readily installable and easy to use, since they do not require the high level of training necessary for the multilayer perceptron. The algorithms described in this paper have been implemented on Macintosh computers, with a microphone/digitizer arrangement which provides sampling up to 22 kHz. The system was developed with a mathematical signal laboratory using parameters in the frequency spectrum domain.

Gustavo Santos-García
Speaker-Independent Word Recognition with Backpropagation Networks

This paper presents a system that recognizes a limited vocabulary of spoken words in a speaker-independent manner. The system requires only minimal hardware support for acoustic preprocessing. In contrast to other approaches to word-level recognition, it reduces the information content of the speech signals by a compression algorithm before presenting them as inputs to a standard 3-layer backpropagation network. The network learns to recognize the utterances of the speakers in the training set, and the trained network is then used to recognize the spoken words of unknown speakers. Recognition rates of up to 91% were obtained for unknown speakers of the same sex and up to 72% for a mix of both male and female speakers. Since the training times are fast and the system is very cost effective, the approach is practically feasible for a variety of applications.

Bernd Freisleben, Christian-Arved Bohn
A Neural Learning Framework for Advisory Dialogue Systems

A domain independent neural learning framework for advisory dialogue systems (ADS) is suggested. A connectionist view of user and task modeling is introduced that can be implemented in a neural knowledge network. It implicitly interprets man-computer interaction and causes adaptive task support. Adaptive inference is drawn by modifying the causal connections during interaction. The interpretation of the network gives insights into the user’s knowledge and preferences. Reasons for misconceptions can be estimated and interpreted by users, designers and rules for network modification. Neural ADS learn empirically in real-time to raise future system performance but can also be programmed by experts. Additionally, the network can be used for predicting the behaviour of the whole system or its parts. Advantages are constant retrieval time for associated information, extendability, and variability. Implementing the framework does not require special hardware or neural simulators. To demonstrate the applicability, two prototypical spreadsheet applications are introduced.

Hans-Günter Lindner, Freimut Bodendorf
Symbolic Learning in Connectionist Production Systems

The paper discusses an approach to combine classical artificial intelligence with connectionism. The crucial point with this intention is the implementation of symbols in neural networks. Several proposals are mentioned in [2]. The next important thing to be examined is the increase or modification of knowledge in a connectionist system. In this paper two types of learning are introduced: subsymbolic learning by experience and symbolic learning from linguistic rules. Symbolic learning requires two regions of processing, the language region and the signal region, both of them being coupled with associative links. Symbolic rules are processed sequentially in several cycles, and affect mainly the language region by also influencing the signal region, whereas subsymbolic rules are local to each of the regions and their consequences can be concluded in parallel. In a combined system with both kinds of rules subsymbolic learning and subsymbolic rules are the basis for symbolic learning and the application of linguistic rules. First results with a prototype system are introduced.

Wolfgang Eppler
An Evaluation of Different Network Models in Machine Vision Applications

Recently texture segmentation with neural networks has received much interest in fields like remote sensing, medical imaging and autonomous vehicles. We propose to use this approach to improve state-of-the-art machine vision systems. In this paper we present new experimental data to evaluate the performance of different features as well as different neural network models in a segmentation task. We compare features calculated from first-order statistics with features calculated from second-order statistics (co-occurrence features). We investigate different neural network classifiers, i.e. the multilayered perceptron and the restricted Coulomb energy model, as well as different conventional classifiers, i.e. minimum distance and nearest neighbour. A real world example of texture segmentation is the detection of defects on industrially treated surfaces.

U. Schramm, K. Spinnler
Combined Application of Neural Network and Artificial Intelligence Methods to Automatic Speech Recognition in a Continuous Utterance

A very efficient approach to using an artificial supervised neural network for Automatic Speech Recognition in the case of speaker dependent continuous utterances is presented in this paper; it has been tested on the Italian language but is in principle not limited to it. Automatic segmentation of the digitized signal and a minimum of human intervention were applied to obtain a phoneme recognition efficiency of 98% on Italian phrases constructed with a limited number of 11 alphabetic classes, defining 20 phonetic subclasses. The efficiency we observed is due to the combined effect of four factors:

• the differential of the detected signal of the utterance was digitized;
• during the parametrization of segments through the Fast Fourier Transform (FFT) and critical band definition, the effect of a second derivative was simulated, thereby reproducing the higher sensitivity of the ear complex in the higher frequency range;
• the proper input pattern to be used in the early stages of the training of the neural network was selected by a very sensitive similitude algorithm;
• a dynamic and repetitive training procedure was applied, through which the generalization shown by the network after training was used to modify and select the input patterns as well as to control the number of output nodes used in successive training.

U. Emiliani, P. Podini, F. Sani
A Neural Network Based Control of a Simulated Biochemical Process

This paper describes an application of feedforward neural networks for inverse plant control of a process with highly non-linear characteristics. A biochemical process was considered where the microorganism, Saccharomyces cerevisiae, a yeast, grows in a chemostat on a glucose substrate and produces ethanol as a product of primary energy metabolism. In this process, which is of immense interest to industries worldwide, three state variables were considered: microbial, substrate and product concentrations. The last one is the controlled variable, and the dilution rate is the manipulated variable. In the study, the quality of the control is analyzed for the case where all states are assumed to be measurable and the case where only the product concentration is available.

Abhay B. Bulsari, Henrik Saxén
Applications of Neural Networks for Filtering

This paper investigates the applicability of feedforward neural networks for filtering purposes. A biochemical process was simulated to generate the data for training and testing the networks. Delayed measurements of one or more state variables were used as inputs to the networks, which were trained to provide filtered values of the state variables as outputs. The results of filtering were quite accurate. In most cases, linear models, i.e. networks with linear activation functions, were found to be adequate. This is due to the short sampling time and the fact that the non-linearities of the process are not very strong in the region of state space considered.

Abhay B. Bulsari, Henrik Saxén
A Recurrent Neural Network for Time-series Modelling

This paper describes an architecture for discrete time feedback neural networks. Some analytical results for networks with linear nodal activation functions are derived, while simulations demonstrate the performance of recurrent networks with nonlinear (sigmoidal) activation functions. The need for models with a capacity to consider correlated noise sequences is pointed out, and it is shown that the recurrent networks can perform state estimation and entertain models with coloured noise.

Abhay B. Bulsari, Henrik Saxén
The Application of Neural Sensors to Fermentation Processes

This paper describes the training and implementation of Artificial Neural Network models for the real-time, on-line estimation of key biological variables in a fed-batch yeast fermentation process. The neural networks are generic nonlinear models that are configured and trained to act as software sensors for biomass concentration. In order to successfully customise a neural sensor it is necessary to give considerable attention to the choice of network inputs. The task of choosing these inputs is somewhat eased by knowledge of the yeast’s metabolism, supplemented where necessary by a statistical analysis of the available inputs and outputs. In this study, the inputs that were chosen reflected both the yeast’s metabolic activity and its rate of growth. The experimental investigation was performed using a Ferranti Process Control Computer connected to a pilot-scale fermentation suite. The neural sensor module was written such that the customised neural network (sensor), configured and trained offline, could be implemented on-line using the Ferranti PML sequence language. The training and configuration stages of the neural sensor were conducted using software written in-house and designed to be compatible with the Ferranti based neural sensor module.

S. J. Rawling, D. Peel, S. M. Keith, B. Buxton
An Application of Unsupervised Neural Networks Based Condition Monitoring System

Integrated condition monitoring for fault identification and maintenance planning is increasingly becoming an indispensable activity in today’s industrial environment. Expert systems and neural networks are emerging as the latest tools to be applied to condition monitoring. This paper briefly reviews these techniques and describes applications of artificial neural networks in diagnosing the health of various systems. The application of neural networks discussed here aims to devise an intelligent, self-adaptive monitoring module which can be employed in a wide range of industrial environments. The paper describes a general purpose unsupervised neural network based monitoring system which categorises the operational routines within the individual application environments of a wide range of industrial machinery. The monitor classifies the sensed data into its respective clusters and demonstrates its potential diagnostic capabilities.

M. A. Javed, A. D. Hope
A Report of the Practical Application of a Neural Network in Financial Service Decision Making

Marks & Spencer Financial Services (MSFS) provides three credit products for its customers: Charge Card, Budget Card and Personal Loans (see Figure 1).

Graham Bazley
Performance Evaluation of Neural Networks Applied to Queueing Allocation Problem

In this paper we consider the dynamic allocation of customers to queues and study the performance of neural networks applied to the problem. The queueing system consists of N parallel distinct servers, each of which has its own queue with infinite capacity. A controller allocates each arriving customer to one of the servers at the arrival epoch, so as to maximize the probability of the customer starting service at the earliest possible time. A neural network is incorporated into the controller, so that the neural controller can make allocation decisions adaptively in changing situations. We present a simple training method for the neural controller. We consider two types of neural networks (BP and LVQ3) and compare their performance in numerical examples.

Junichi Takinami, Yutaka Matsumoto, Norio Okino
Real-data-based Car-following with Adaptive Neural Control

Our goal is to train a neural network to control the longitudinal dynamics of a real car. The input signals to the neural controller are its own speed, its own acceleration, and the distance and change of distance to the car ahead. The net outputs are control signals for throttle and brake. The acceleration of our own car and the distance to the car ahead are computed in discrete time-steps, so all networks were trained with discrete-time data. We performed experiments with several architectures including a Jordan net, an Elman net, a fully recurrent network and a recurrent network with a slightly different architecture (also using ‘normal’ hidden units without recurrent connections) to analyze the different behaviour and the efficiency of these neural network types. We used standard back-propagation to train the Jordan- and Elman-style networks and the backpropagation through time algorithm to train the recurrent nets. Currently, we are working on a two-level predictor hierarchy, where a higher-level network is intended to solve problems the lower-level net cannot. This is achieved by restricting the high-level network's input to those data items in time for which the low-level net had difficulty producing a matching output. The most promising results have been achieved using an algorithm which is a kind of backpropagation through time in which the time to look back is restricted to a specific number of time steps. We call this backpropagation through truncated time. Throughout our experiments we have observed that the way of encoding data has an important influence on the performance of a network. Several runs confirmed that this has a stronger impact than all of the well-known parameters (e.g. learning rate, momentum) of backpropagation. It could be shown that backpropagation through truncated time and our way of encoding data both lead to far better results than conventional approaches.

Walter Weber, Jost Bernasch

Genetic Algorithms

An Adaptive Plan

This paper introduces a novel form of adaptive plan that is significantly different from current ones. The plan presented does not rely on the generation of expected numbers of solutions and as such can use a new means of sampling the solutions which are used as parents in the crossover process. A mathematical analysis of some parts of the adaptive plan is presented, followed by a discussion of some of the plan's main advantages. Finally, the successful use of the adaptive plan in a genetic algorithm applied to a problem in shape representation is discussed.

R. Garigliano, A. Purvis, P. A. Giles, D. J. Nettleton
Mapping Parallel Genetic Algorithms on WK-Recursive Topologies

In this paper a parallel simulator of Genetic Algorithms is described. The target machine is a parallel distributed-memory system whose processors have been configured in a WK-Recursive topology. A diffusion mechanism of useful local information among processors has been carried out. Specifically, simulations of genetic processes have been conducted using the Travelling Salesman Problem as an artificial environment. The experimental results are presented and discussed. Furthermore, performance with respect to well-known problems taken from literature is shown.

Diversity and Diversification in Genetic Algorithms: Some Connections with Tabu Search

Genetic Algorithms (GAs) have been used very successfully to solve a variety of optimisation problems, but despite their successes, there are a number of outstanding problems in their implementation. One of the most pervasive problems is that of premature convergence of the process, usually associated with a loss of diversity in the population of chromosomes. In this paper, we will first review some of the existing solutions to the problem of preserving diversity. These all use the basic GA framework; there are also extreme solutions such as the invariant GA which dispenses with the fundamental selection process altogether. We argue that an underlying (and largely unaddressed) problem is the GA's lack of memory: it is here that some connections with the concept of Tabu Search (TS) may prove fruitful. In TS the search is characterised in terms of the twin concepts of intensification and diversification. Intensification relates to the ability of the search strategy to focus on a particular area (or particular areas) of the search space in order to find improved solutions. In this sense, a GA as customarily conceived is clearly an intensifying process. Diversification is achieved by the incorporation of memory into its basic structures; that is, structures are devised which can record the history of the search. We will describe these mechanisms in more detail, with particular reference in the context of this paper to their diversifying effects. From this standpoint we will then suggest some ways in which these mechanisms can be adapted so as to offer a systematic and coherent framework for diversification within the Genetic Algorithm paradigm.

Clique Partitioning Problem and Genetic Algorithms

In this paper we show how critical the coding can be for the Clique Partitioning Problem solved with a Genetic Algorithm (GA). Three chromosomic coding techniques are compared. We improve the classical linear coding with a new label renumbering technique and propose a new tree-structured coding. We also show the limitations of a hybrid approach that combines the GA with dynamic programming. Experimental results are presented for 30 and 150 vertex graphs.

Self-Organization of Communication in Distributed Learning Classifier Systems

In this paper, an application of learning classifier systems is presented. An artificial multi-agent environment has been designed. A mate-finding problem, a learning task inspired by nature, is considered, which requires cooperation between two distinct agents to achieve the goal. The main feature of our system is the existence of two parallel learning subsystems which have to agree on a common communication protocol to succeed in accomplishing the task. Apart from the standard learning algorithms, a unification mechanism has been introduced to encourage coordinated behavior among agents belonging to the same class. Experimental results are presented which demonstrate the effectiveness of this mechanism and the learning capabilities of classifier systems.

Design of Digital Filters with Evolutionary Algorithms

A recursive digital filter with infinite impulse response (IIR) is characterized by its recursive and non-recursive filter coefficients and the corresponding filter structure. In the well-known filter design algorithms the structure generally has to be chosen beforehand, together with the maximum allowed filter length. Speed of computation can be an important aspect in determining the maximum permissible filter length, and therewith the filter structure, in an application. Specialized 'slim' IIR filter (SIIR) structures are proposed as a generalized class of recursive filters including the classical forms. In SIIR structures a large number of coefficients is set to zero, thereby reducing the total number of non-zero coefficients and thus the computational cost.
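For orientation, a direct-form recursive filter computes y[n] = Σ_k b_k·x[n-k] - Σ_k a_k·y[n-k], and a 'slim' (SIIR) structure simply forces most of the b_k and a_k to zero. The Python sketch below illustrates that idea with arbitrarily chosen coefficients; it is not the design procedure proposed in the paper.

def iir_filter(x, b, a):
    """Direct-form IIR filter: y[n] = sum_k b[k]*x[n-k] - sum_k a[k]*y[n-1-k].
    A 'slim' structure is obtained by leaving most entries of b and a equal to zero."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - 1 - k] for k in range(len(a)) if n - 1 - k >= 0)
        y.append(acc)
    return y

# Example: a sparse filter with only two non-zero coefficients (values chosen arbitrarily).
out = iir_filter([1.0, 0.0, 0.0, 0.0], b=[0.5], a=[0.0, -0.25])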

An Empirical Study of Population and Non-population Based Search Strategies for Optimizing a Combinatorial Problem

The performance of several algorithms for optimizing a combinatorial problem is compared empirically. The following questions are studied:

• Can population based search strategies find good solutions faster than non-population ones?
• Are strategies that recombine two individuals to form new solutions better than those which use one individual only?
• Does the size of the population affect the performance?
• Does the way individuals are selected from the population affect the performance?

The test problem used in these experiments is the knapsack problem. 5000 variations of the test problem were solved. The number of test function evaluations was recorded in the simulations. Statistical data was obtained in the form of cumulative frequencies and average values. Based on these simulations, the short answers to the four questions above are: yes; not necessarily; yes, but it depends on the selection strategy; and yes.
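For concreteness, a population-based strategy of the kind compared in the study can be sketched as a plain generational GA for the 0/1 knapsack problem. All parameter choices below (population size, tournament selection, one-point crossover, bit-flip mutation, zero fitness for infeasible solutions) are illustrative assumptions, not the configurations examined in the paper.

import random

def ga_knapsack(values, weights, capacity, pop_size=50, generations=200, p_mut=0.02, seed=0):
    """A minimal generational GA for the 0/1 knapsack problem."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(chrom):
        w = sum(wi for wi, bit in zip(weights, chrom) if bit)
        v = sum(vi for vi, bit in zip(values, chrom) if bit)
        return v if w <= capacity else 0          # infeasible solutions get zero fitness

    def tournament(pop):
        a, b = rng.sample(pop, 2)                 # binary tournament selection
        return a if fitness(a) >= fitness(b) else b

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(pop), tournament(pop)
            cut = rng.randrange(1, n)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - bit if rng.random() < p_mut else bit for bit in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

A non-population strategy in the same comparison would repeatedly mutate and accept/reject a single current solution instead of maintaining and recombining a whole population.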

The Parallel Genetic Cellular Automata: Application to Global Function Optimization

This work describes massively parallel genetic algorithms inspired by cellular automata models and by large, spatially distributed populations of individuals, as suggested by biological analogies. Models with strict locality and a variety of pseudo-diffusion models are presented. The models are applied to the global optimization of multiextremal, multimodal functions and are tested on a suite of hard standard test functions. Results are then discussed for the various models, taking into account the unusual population sizes, their diversity and the role of individual migration.
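
A minimal sketch of the strict-locality idea follows: each individual sits on a cell of a toroidal grid and may only mate within its von Neumann neighbourhood. The grid size, objective and replacement rule are assumptions of this illustration, not the paper's models.

```python
import random

SIZE = 8                                    # an 8x8 toroidal grid of individuals (assumed size)
N_BITS = 20

def fitness(bits):                          # toy objective: maximise the number of ones
    return sum(bits)

def neighbours(x, y):
    # von Neumann neighbourhood on a torus: strict locality as in cellular-automata models
    return [((x - 1) % SIZE, y), ((x + 1) % SIZE, y),
            (x, (y - 1) % SIZE), (x, (y + 1) % SIZE)]

grid = [[[random.randint(0, 1) for _ in range(N_BITS)]
         for _ in range(SIZE)] for _ in range(SIZE)]

for _ in range(50):
    new_grid = [[None] * SIZE for _ in range(SIZE)]
    for x in range(SIZE):
        for y in range(SIZE):
            # each cell picks the best neighbour as a mate (local selection only)
            mate = max((grid[i][j] for i, j in neighbours(x, y)), key=fitness)
            cut = random.randrange(1, N_BITS)
            child = grid[x][y][:cut] + mate[cut:]
            child = [1 - b if random.random() < 1.0 / N_BITS else b for b in child]
            # replace the resident only if the child is at least as fit
            new_grid[x][y] = child if fitness(child) >= fitness(grid[x][y]) else grid[x][y]
    grid = new_grid

print(max(fitness(grid[x][y]) for x in range(SIZE) for y in range(SIZE)))
```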

Optimization of Genetic Algorithms by Genetic Algorithms

This paper presents an approach to determining the optimal Genetic Algorithm (GA), i.e. the most preferable type of genetic operators and their parameter settings, for a given problem. The basic idea is to consider the search for the best GA as an optimization problem and to use another GA to solve it. As a consequence, a primary GA operates on a population of secondary GAs which in turn solve the problem under discussion. The paper describes how to encode the relevant information about GAs in gene strings and analyzes the impact of the individual genes on the results produced. The feasibility of the approach is demonstrated by a parallel implementation on a multi-transputer system. Performance results are presented for finding the best GA for the problem of optimal weight assignment in feedforward neural networks.
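
The nested structure can be illustrated with a small sketch: a primary GA whose individuals are the operator settings of a secondary GA, scored by how well that secondary GA solves a toy problem. The encoding, parameter ranges and toy objective below are assumptions, not the paper's actual gene-string format.

```python
import random

def run_inner_ga(mutation_rate, crossover_rate, pop_size, generations=20, n_bits=20):
    """The 'secondary' GA: solves a toy bit-counting problem with the given settings."""
    fit = lambda bits: sum(bits)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = random.sample(pop, 2)
            return a if fit(a) >= fit(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            child = p1[:]
            if random.random() < crossover_rate:
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            nxt.append([1 - b if random.random() < mutation_rate else b for b in child])
        pop = nxt
    return max(fit(ind) for ind in pop)

def random_settings():
    return {"mutation_rate": random.uniform(0.001, 0.2),
            "crossover_rate": random.uniform(0.3, 1.0),
            "pop_size": random.randint(4, 30)}

def meta_ga(meta_pop=8, meta_generations=5):
    """The 'primary' GA: each individual encodes the settings of a secondary GA,
    and its fitness is the quality of the solutions that secondary GA finds."""
    pop = [random_settings() for _ in range(meta_pop)]
    for _ in range(meta_generations):
        pop.sort(key=lambda s: run_inner_ga(**s), reverse=True)
        parents = pop[: meta_pop // 2]
        children = []
        for _ in range(meta_pop - len(parents)):
            a, b = random.sample(parents, 2)
            child = {k: random.choice([a[k], b[k]]) for k in a}   # uniform crossover on settings
            if random.random() < 0.3:                             # mutate one parameter
                k = random.choice(list(child))
                child[k] = random_settings()[k]
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda s: run_inner_ga(**s))

if __name__ == "__main__":
    print(meta_ga())
```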

Achieving Self-Stabilization in a Distributed System Using Evolutionary Strategies

In this paper we present a genetic self-stabilization protocol for the canonical distributed problem of leader election. A self-stabilizing distributed system is one that can be started in any global state and, during its execution, will eventually reach a legitimate global state and henceforth remain in legitimate states, maintaining its integrity without any kind of outside intervention. Current self-stabilizing systems either program the stabilizing feature into their protocols, or they use randomized protocols and special processors to stabilize the system. We believe that self-stabilization should be an emergent property of a distributed system, and, by transforming a distributed problem into a model of evolution, which is inherently self-stabilizing, we demonstrate how the emergence of self-stabilization can be achieved. We attempt to achieve more than just solving a problem with a distributed genetic algorithm: we take a distributed problem and show how analogies from evolution can be used to solve it.

Improving Simple Classifier Systems to alleviate the problems of Duplication, Subsumption and Equivalence of Rules

The search performed by a classifier system’s genetic algorithm for new, potentially improved rules is guided by the relative strength of the rules in the extant rule base. This paper identifies three general types of rule whose presence in a plan can affect the relative strength of rules in a rule base and thereby has the potential to compromise the effectiveness of the genetic algorithm. The nature and extent of this relative-strength distortion is investigated, and a method to combat the distortion, involving an adaptation of the standard bucket brigade algorithm, is proposed.

Genetic Algorithm Selection of Features for Hand-printed Character Identification

We have constructed a linear discriminator for hand-printed character recognition that uses a (binary) vector of 1,500 features based on an equidistributed collection of products of pixel pairs. This classifier is competitive with other techniques, but faster to train and to run for classification. However, the 1,500-member feature set clearly contains many redundant (overlapping or useless) members, and a significantly smaller set would be very desirable (e.g., for faster training, a faster and smaller application program, and a smaller system suitable for hardware implementation). A system using the smaller set of features should also generalize better, since fewer features are less likely to allow a system to “memorize noise in the training data.” We tried several genetic algorithm approaches to searching for effective small subsets of features, and we have successfully found a 300-element feature set and built a classifier whose performance on our testing set is as good as that of the system using the full feature set.
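
A hedged sketch of the feature-subset search is given below: chromosomes are fixed-size feature masks, crossover keeps the features both parents agree on, and mutation swaps features in and out. The stand-in accuracy function, subset size and operator details are assumptions of this sketch; in the paper the fitness would come from retraining and testing the linear discriminator.

```python
import random

def subset_ga(n_features=1500, subset_size=300, accuracy_fn=None,
              pop_size=30, generations=40):
    """Search for a fixed-size feature subset that keeps classification accuracy high.
    `accuracy_fn(mask)` is assumed to retrain/evaluate the classifier on the masked features."""
    if accuracy_fn is None:
        # stand-in objective so the sketch runs: pretend low-index features are the useful ones
        accuracy_fn = lambda mask: sum(1.0 / (1 + i) for i in mask)

    def random_mask():
        return set(random.sample(range(n_features), subset_size))

    def crossover(a, b):
        # keep the features the parents agree on, fill the rest from their union
        child = set(a & b)
        pool = list((a | b) - child)
        random.shuffle(pool)
        child.update(pool[: subset_size - len(child)])
        return child

    def mutate(mask, swaps=3):
        mask = set(mask)
        for _ in range(swaps):                      # swap a chosen feature for an unchosen one
            mask.remove(random.choice(tuple(mask)))
            mask.add(random.randrange(n_features))
        while len(mask) < subset_size:              # repair in case the new feature was a duplicate
            mask.add(random.randrange(n_features))
        return mask

    pop = [random_mask() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=accuracy_fn, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=accuracy_fn)

if __name__ == "__main__":
    best = subset_ga(n_features=100, subset_size=20, generations=10)
    print(sorted(best)[:10])
```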

Analysis and Comparison of different Genetic Models for the Clustering problem in Image Analysis

This paper presents several genetic approaches to the problem of clustering N elements in an n-dimensional feature space. The process has been applied in an Image Analysis context in order to divide a set of objects into a fixed number of groups, depending on their characteristics. The partitioning models are based on very general principles, so they can be used in many different clustering applications, including the grouping of real objects. The genetic paradigm has been chosen because the cluster solution space has to be explored without any a priori or heuristic knowledge, and also because the parallel search performed can avoid the considerable number of local minima encountered in the solution optimisation. Different cluster models and genetic operators have been analysed in order to exploit the power of the genetic algorithm in an Image Analysis environment. A performance comparison between solutions is shown, using several chromosome codes and genetic operators.

Evolving Recurrent Dynamical Networks for Robot Control

This paper describes aspects of our research into the development of artificial evolution techniques for the creation of control systems for autonomous mobile robots operating in complex, noisy and generally hostile environments. At the heart of our method is an extended genetic algorithm which allows the open-ended formation of control architectures based on artificial neural networks. After outlining the rationale for our work, and giving the background to our techniques and experimental method, results are presented from experiments in which we contrast the behaviours of robots evolved under the same evaluation function but with different sensory capabilities. One set of robots has tactile sensing only, whereas the other has both tactile and primitive visual sensors. Although the evaluation task does not explicitly rely on vision, results show conclusively that evolution is able to exploit visual input to produce very successful controllers, far better than those for robots without vision.

Genetic Algorithms for On-Line System Identification

When required for batch processing, system identification can be used in a number of ways to find a good fit in terms of modelling the system structure, time delay and characteristic parameters of a plant. The best structure and delay selection can often be found by means of a computationally simple model-order testing procedure, with a range of different candidate models being considered. Due to the need for computational efficiency, on-line identification is often restricted to a more straightforward parameter estimation exercise, the structure being selected as a fixed term, usually of low order. Any structural tuning is then carried out in terms of sampling-period variation, which can mean that vital plant information does not appear in the identified model. This paper presents an on-line technique for system identification, making use of genetic algorithms for structure and time-delay estimation. The identification scheme is multi-level, the bottom level consisting of the more usual parameter estimation exercise, with the upper level carrying out structure identification. The first of these can update in real time, whilst the second operates in its own time. At any instant, one model is selected as “best” in terms of both structure and parameters, and this is the model employed for on-line identification.

EVOLVABLE HARDWARE Genetic Programming of a Darwin Machine

For the past three years, the author has been dreaming of the possibility of building machines which are capable of evolution, called “Darwin Machines”. As a result of several brainstorming sessions with colleagues in electrical engineering, the author now realizes that hardware devices are on the market today, using “software configurable hardware” technologies, which the author believes can be used to build Darwin Machines within a year or two. This paper suggests there are at least two approaches to be taken. The first approach uses “software configurable hardware” chips, e.g. FPGAs (Field Programmable Gate Arrays), HDPLDs (High Density Programmable Logic Devices), or possibly a new generation of chips based on the ideas that FPGAs and similar devices embody. The second approach uses a special hardware device called a “hardware accelerator”, which accelerates the software simulation of digital hardware devices containing up to several hundred thousand gates. Darwin Machines will be essential if artificial nervous systems are to be evolved for biots (i.e. biological robots) which consist of thousands of evolved neural network modules (called GenNets). The evolution time of 1000-GenNet biots will need to be reduced by many orders of magnitude if they are to be built at all. It is for this reason that Darwin Machines may prove to be a breakthrough in biotic design. When molecular-scale technologies come on line in the late 1990s, the Darwin Machine approach will probably be the only way to build self-assembling, self-testing molecular-scale devices.

A Fast Genetic Algorithm with Sharing Scheme Using Cluster Analysis Methods in Multimodal Function Optimization

Genetic algorithms with sharing are well known for tackling multimodal function optimization problems. In this paper, a sharing scheme using a clustering methodology is introduced and compared with the classical sharing scheme. It is shown, through simulations on test functions and on a practical problem, that the proposed scheme proceeds faster than the classical scheme while its performance remains as good as that of the classical one. In addition, the proposed scheme reveals unknown multimodal function structure when a priori knowledge about the function is poor. Finally, the introduction of a mating restriction inside the proposed scheme is investigated and shown to increase the optimization quality without requiring additional computational effort.
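
To make the idea concrete, here is a rough sketch of cluster-based sharing: the population is grouped by a cheap one-pass clustering step and each individual's fitness is divided by the size of its cluster, avoiding the O(N^2) pairwise sharing sum. The clustering rule, radius and test function are this sketch's assumptions, not the paper's method.

```python
import math
import random

def multimodal(x):                       # a simple multimodal test function on [0, 1]
    return abs(math.sin(5 * math.pi * x))

def cluster(population, radius=0.05):
    """Greedy one-pass clustering (a stand-in for the paper's cluster-analysis step):
    each individual joins the first cluster whose centroid lies within `radius`."""
    clusters = []                         # each cluster: [centroid, [members]]
    for x in population:
        for c in clusters:
            if abs(x - c[0]) < radius:
                c[1].append(x)
                c[0] = sum(c[1]) / len(c[1])
                break
        else:
            clusters.append([x, [x]])
    return clusters

def shared_fitness(population):
    """Cluster-based sharing: divide raw fitness by the size of the individual's cluster."""
    share = {}
    for centroid, members in cluster(population):
        for x in members:
            share[x] = multimodal(x) / len(members)
    return share

population = [random.random() for _ in range(50)]
for _ in range(100):
    fit = shared_fitness(population)
    new = []
    for _ in range(len(population)):
        a, b = random.sample(population, 2)          # binary tournament on shared fitness
        parent = a if fit[a] >= fit[b] else b
        new.append(min(1.0, max(0.0, parent + random.gauss(0, 0.01))))  # Gaussian mutation
    population = new

print(sorted(set(round(x, 2) for x in population)))  # individuals should spread over several peaks
```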

An Interactive Genetic Algorithm for Controller Parameter Optimization

Genetic algorithms are stochastic search algorithms inspired by the biological phenomena of genetic recombination and natural selection. They simulate the evolution of string individuals encoding candidate solutions to a given problem. Genetic algorithms have proved robust and efficient in finding near-optimal solutions in complex problem spaces. They are usually exploited as an optimization method, suitable for both continuous and discrete optimization tasks. In this paper, genetic algorithms are investigated as an engineering optimization tool. The work focuses on tuning parameters in control system design. This domain has already been approached with genetic algorithms, but most of the experiments have been done using computer simulations of the devices to be controlled. We present the Interactive Genetic Algorithm (IGA) for controller parameter optimization. IGA carries out the evolution of parameter values in the traditional manner, but differs from a conventional genetic algorithm in that it allows interaction with the real-world environment. The algorithm suggests the trials to be performed to explore the parameter space, and accepts the results of the trials from the environment. The paper describes IGA and its application to tuning the parameters of a PID regulator operating on a laboratory device.

Robustness and Evolution in an Adaptive System Application on Classification Task

In this paper, we propose an approach to a single-step Classifier System in which a useful population is built by progressively specializing classifiers. It has been applied to a classification task in a medical domain. To permit the system to explore alternatives without making decisions too early in the learning stages, all the classifiers that might be selected are triggered and receive the reward corresponding to their action. The payoff function involves the classifier’s performance, its specificity and the system’s performance (its robustness). Genetic operators are activated with a probability which depends on the system’s robustness. During the test stages, no further learning takes place and the system’s performance is measured by the percentage of correct classifications made on a second set of examples. When this measure of performance is at its highest, the population is stabilized and contains the correct classifiers (the payoff function and genetic operators no longer affect the classifiers). This approach achieves convergence more quickly and makes it possible to obtain a final accurate population without over-specializing.

On Robot Navigation Using a Genetic Algorithm

In this work we have studied the possibility of using genetic algorithms in mobile robot navigation. The autonomous robot has a map of the room it moves within and some simulated sensors, including range sensors to measure the distance between the robot and the other objects in the room. The location estimation method is based on using a genetic algorithm to minimize a fitness function that depends on the measured data and the environment model. The potential benefits of genetic algorithms in this application area include robustness, parallel nature, generality, flexibility, incrementality, and simplicity. The obvious drawbacks include slow and stochastic processing. This work is a preliminary step in examining the applicability of genetic algorithms to computational problems of industrial and autonomous robots. In general, the proposed method of finding the vector that best fits the model used can be applied to many other calibration-type problems, in robotics as well as in many other fields.

Genetic Algorithms and Classifier Systems in simulating a Cooperative behavior

Genetic Algorithms and Classifier Systems are often used in simulations of biological and evolutionary behaviours. The basic example is Wilson’s Woods7 world. In this environment it is interesting to study several questions: Can cooperative behaviours evolve among the organisms present in the world? How and when do these behaviours evolve? Some preliminary results show the conditions under which cooperative behaviour rules develop rapidly. In particular, we have pointed out the following likely observations: (a) cooperative behaviour develops more easily if the initial population starts from the same point; (b) there exist thresholds below which cooperative behaviour cannot evolve, and these thresholds depend on the population size.

Dynamic Management of the Specificity in Classifier Systems

The estimation of rule usefulness in a classifier system faces the credit-apportionment problem. Usually, the apportioning of payoffs is performed by the bucket brigade algorithm. However, some works have shown that this algorithm presents certain difficulties. Generally, the condition part of a rule is defined over an alphabet containing a “don’t care” symbol, which is why the same rule can be fired in different contexts. Under such conditions, overly general classifiers cannot be used, because their strength management becomes incoherent. The solution we propose here can solve this problem: general classifiers belonging to a sequence that ends in success are dynamically specialized. In order not to store all the actions of the sequence, the bucket brigade algorithm is applied to the specificity of the newly created rules. Thus, the closer a classifier is to the end of the solution sequence, the more specific it is. This new algorithm is presented here and applied to an autonomous mobile robot which must learn how to move in an environment with obstacles.
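
A toy sketch of the ingredients is shown below: a standard bucket brigade pass over a success-ending firing sequence, followed by a specialization step that replaces '#' symbols with the bits of the matched messages, more aggressively towards the end of the chain. The class layout, bid formula and specialization schedule are assumptions for illustration only, not the paper's exact algorithm.

```python
# Classifiers: a condition over {'0', '1', '#'}, an action, and a strength.
class Classifier:
    def __init__(self, condition, action, strength=10.0):
        self.condition = condition
        self.action = action
        self.strength = strength

    def matches(self, message):
        return all(c == '#' or c == m for c, m in zip(self.condition, message))

    def specificity(self):
        return sum(c != '#' for c in self.condition) / len(self.condition)

def bucket_brigade(chain, reward, bid_ratio=0.1):
    """Standard bucket brigade over one success-ending firing sequence: each classifier
    pays a bid to its predecessor (the first bid would go to the previous cycle in a
    full system); the last classifier collects the environmental reward."""
    for i, cl in enumerate(chain):
        bid = bid_ratio * cl.strength * (0.5 + 0.5 * cl.specificity())
        cl.strength -= bid
        if i > 0:
            chain[i - 1].strength += bid
    chain[-1].strength += reward

def specialize(chain, message_trace):
    """Sketch of the dynamic specialization idea: after a successful sequence, replace
    some '#'s in each fired rule with the bits of the message it actually matched,
    more aggressively near the end of the chain, so later rules become more specific."""
    new_rules = []
    for depth, (cl, msg) in enumerate(zip(chain, message_trace), start=1):
        n_fix = max(1, round(depth / len(chain) * cl.condition.count('#')))
        cond = list(cl.condition)
        for i in [k for k, c in enumerate(cond) if c == '#'][:n_fix]:
            cond[i] = msg[i]
        new_rules.append(Classifier(''.join(cond), cl.action, cl.strength))
    return new_rules

# Toy run: a two-step firing chain that ends in success.
chain = [Classifier('##1#', 'A'), Classifier('#1##', 'B')]
trace = ['0110', '1101']
assert all(cl.matches(msg) for cl, msg in zip(chain, trace))
bucket_brigade(chain, reward=5.0)
for rule in specialize(chain, trace):
    print(rule.condition, rule.action, round(rule.strength, 2))
```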

Dynamic Sequencing of A Multi-Processor System: A Genetic Algorithm Approach

The dynamic sequencing of jobs through a multiprocessor system is a problem to which little attention has been paid, in contrast to the extensive literature on static sequencing problems. Yet in many real problems it is hardly realistic to assume, as the static model does, that we know about all the jobs that will arrive in the course of a processing cycle. Existing solutions usually adopt a queueing-theoretic orientation rather than an optimization one, in which the decision as to which job should be processed is made on the basis of some simple selection criterion, such as First-Come First-Served or Shortest Processing Time. Here we investigate the use of a Genetic Algorithm (GA) to solve the successive sequencing problems generated by finding a near-optimal sequence for those jobs available just before successive event times—the times at which the job being processed on the first processor completes its processing. Some comparisons are made between the GA approach and some simple rules.

Genetic Algorithms versus Tabu Search for Instruction Scheduling

Most scheduling problems require either exponential time or space to generate an optimal answer [7]. Instruction scheduling is an instance of a general scheduling problem, and Dewitt [8] uses this fact to show that instruction scheduling is an NP-complete problem. This paper applies Genetic Algorithms, Tabu Search, and list scheduling to the instruction scheduling problem and compares the results obtained by each.

A Massively Parallel Genetic Algorithm on the MasPar MP-1

This contribution describes the implementation of a fine-grained parallel genetic algorithm, ‘MPGA’, on the MasPar MP-1, a massively parallel mesh-connected array processor with a global router and 1024 (up to 16384) 4-bit processing elements. The implementation uses object-oriented methods to provide a large set of standard strategies which can be adapted for a given application. Report modules support the investigation of the performance of the GA. The implementation shows good performance compared to other implementations on parallel hardware.

Standard Cell Routing Optimization Using A Genetic Algorithm

A typical approach to cell-based analog design involving resistors is to maintain the standard cell height by using an array of standard resistor legs. As the array becomes larger and more taps are placed on the resistor, random orientation of the resistor legs can lead to routing problems as signals cross back and forth across the cell routing channel. Heuristic optimization of the resistor leg orientation to reduce signal crossing becomes a complex search through a space of 2^(n-1) configurations, where n is the number of legs to be oriented.
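
As an illustration of how a GA might search this orientation space, the sketch below encodes each leg's orientation as one bit and minimises a stand-in crossing count. The net list and the cost model are invented placeholders; a real implementation would count channel crossings from the actual cell netlist.

```python
import random

# Stand-in data: each net connects a resistor leg to a terminal that sits on the
# top (0) or bottom (1) side of the routing channel.  (Purely illustrative numbers.)
NETS = [(leg, random.randint(0, 1)) for leg in range(12)]

def crossings(orientation):
    """Toy cost model: a net crosses the channel whenever its leg's orientation
    does not match the side of the terminal it must reach."""
    return sum(1 for leg, side in NETS if orientation[leg] != side)

def ga_orient(n_legs=12, pop_size=20, generations=60):
    pop = [[random.randint(0, 1) for _ in range(n_legs)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=crossings)                   # fewer crossings = better
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_legs)
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:
                i = random.randrange(n_legs)
                child[i] = 1 - child[i]           # flip one leg's orientation
            children.append(child)
        pop = parents + children
    return min(pop, key=crossings)

best = ga_orient()
print(best, "crossings:", crossings(best))
```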

Efficient Parallel Learning in Classifier Systems

Classifier systems are simple production systems working on binary messages of a fixed length. Genetic algorithms are employed in classifier systems in order to discover new classifiers. We use methods of the computational complexity theory in order to analyse the inherent difficulty of learning in classifier systems. Hence our results do not depend on special (possibly genetic) learning algorithms. The paper formalises this rule discovery or learning problem for classifier systems which has been proved to be hard in general. It will be proved that restrictions on two distinct learning problems lead to problems in NC, i.e. problems which are efficiently solvable in parallel.

Co-Evolving Communicating Classifier Systems for Tracking

In this paper we suggest a general approach to using the genetic algorithm (GA) [1] to evolve complex control systems. It has been shown [2] that although the GA may be used to evolve simple controllers, it is not able to cope with the evolution of controllers for more complex problems. We present an architecture of co-evolving communicating classifier systems [3] as a general solution to this, where the only restriction is that each classifier system is responsible for one simple behaviour. Thus the ecology of subproblems evolves its own organisational structure at the same time as its constituents evolve their solutions. Whether this structure ends up as a democratic soup, a hierarchy, or something in between is determined by co-evolution rather than prescribed a priori by a creator. We use the trail-following “tracker task” to compare the performance of a single classifier system, responsible for the control of the whole system and evolved for this task, with the performance of a co-evolved controller using our approach. The resulting interactions of the classifier systems are also examined.

On Finding Optimal Potential Customers from a Large Marketing Database — a Genetic Algorithm Approach

In this work we have studied the problem of finding potential customers from large marketing databases using a genetic algorithm. The problem is that it is far from clear, in advance, what constitutes a good potential customer in the database in question, even if criteria for how a customer should be graded can be formulated. The genetic algorithm approach uses this grading as the basis for a fitness function, crucial to the genetic algorithm, and effectively applies the genetic algorithm to classify the database accordingly. As a consequence, the result directly tells us how well the grading succeeds in finding good customers and also gives us the set of optimal customers that were found.

Structural Design for Enhanced Noise Performance Using Genetic Algorithm and Other Optimization Techniques

The control of structural vibration in aeroplanes and ships is of great importance in achieving low noise targets. Currently, such control is effected using viscoelastic coating materials, although much current research is concerned with active, anti-noise based control measures. Recent studies using Genetic Algorithm (GA) optimization methods in the field of Statistical Energy Analysis (SEA) suggest that it may be possible to design passive noise filtration characteristics into such structures. This paper reports initial work in this field: it compares GAs with more classical optimization methods and shows how improvements in noise performance can be obtained for the simple structures considered.

The Concrete Arch Dam
An Evolutionary Model of the Design Process

The Plymouth Engineering Design Centre (PEDC) is one of seven Science and Engineering Research Council funded Design Centres that have been established at various academic institutions within the UK. The initial aim of the PEDC is to carry out fundamental research into the application of adaptive search techniques to engineering design. The following paper describes the application of the adaptive search technique known as the Genetic Algorithm (GA) to the multi-variable problems associated with the design of a double-curvature Concrete Arch Dam. The evolutionary characteristics of the Algorithm and their relevance to the designer are briefly described, and the main features of the Arch Dam design geometry are defined. The development of an interactive designer/GA design tool and problems concerning the integration of related design aspects are discussed. The current status of the project is considered, and further work at the Centre involving the application of the Genetic Algorithm to other engineering design domains is outlined.

The Genie Project — A Genetic Algorithm Application to a Sequencing Problem in the Biological Domain.

This paper describes the current development and implementation of a form of genetic algorithm (GA) suitable for tackling a complex sequencing problem in the biological domain — the building of restriction “maps” from the results of partial digest experiments. Building restriction maps is a time-consuming and lengthy activity which relies on human judgement of inexact data. The paper is organised into the following sections. The procedure for building restriction enzyme maps is described in section 2. There are several aspects of map assembly which make it a relevant problem to study from a GA point of view, and these are outlined in section 3. GENIE is an ongoing project; the way in which the problem is being tackled by developing a GA, together with the implementation issues, is discussed in section 4. Preliminary results are shown in section 5, the paper is summarised in section 6, and section 7 highlights future developments.

Hybrid Genetic Algorithms for the Traveling Salesman Problem

A comparative analysis of four different crossover operators is performed on an experimental basis. In order to exploit the benefits of the different operators, a new one (called Mixed Crossover) is introduced, trading off CPU time requirements against the quality of the results obtained. A further operator is then proposed, whose goal is to include in the genetic mechanism some heuristic knowledge drawn from previously proposed local-optimization techniques. The performance of the new operator is discussed.

Using a Genetic Algorithm to Investigate Taxation Induced Interactions in Capital Budgeting

The capital budgeting problem, as analysed here, involves selecting a combination of projects from the set of all possible combinations so as to maximise the net present value of cash flows. The traditional investment appraisal approach of carrying out a project-by-project analysis and selecting accordingly is invalid because of features of many current taxation systems, which introduce non-linear interdependencies between the projects. Using a serial implementation of the genetic algorithm toolkit GAmeter, we investigate this effect using aspects of the UK taxation system on a set of standard capital budgeting problems and compare the results with those obtained using a more traditional approach and a mixed integer programming approach. The capital budgeting model developed includes features such as “carry forward” of capital allowances, multiple rates of corporation tax and capital rationing constraints. The model can be extended to include other tax features found in many economies.

Fast Sequential and Parallel Implementation of Genetic Algorithms using the GAmeter Toolkit

A general search paradigm is formulated using a higher-order function and, in this context, we discuss the properties which characterize genetic algorithms, tabu search, simulated annealing, etc. From the specification of this general search algorithm, we develop a formal specification of a class of genetic algorithms. By suitable settings of input parameters, we show that a wide variety of (genetic) algorithms can then be instantiated. We have developed kernel software to implement this specification on different architectures. In particular, we have versions running sequentially on Macintosh computers and under Unix, and a parallel version on a Meiko transputer rack. The user interface is identical, and the user of the system needs no architecture-dependent knowledge to use this software. We briefly describe these implementations and show how simple it is to port genetic algorithms from one architecture to the other. The toolkit implementing the kernel together with its associated interface is called GAmeter, and we report on the use of this software on a number of case studies in combinatorial optimisation. We have encouraging results on a range of examples including the undirected and directed Steiner tree problems. The main strengths of our software are its excellent graphics interface and its wide applicability. However, it also has a number of novel features, including dynamic control of parameters such as population size and crossover/mutation probabilities, thus giving additional control to the user.
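
The flavour of the higher-order formulation can be suggested with a small sketch: a generic search function parameterised by initialisation, neighbourhood, acceptance, selection and stopping criteria, instantiated here as a tiny steady-state GA. The names and decomposition are this sketch's assumptions and are not drawn from GAmeter's actual kernel.

```python
import random

def general_search(initial, neighbours, accept, select, stop, evaluate):
    """A general search paradigm as a higher-order function: different choices of
    `neighbours`, `accept` and `select` instantiate hill climbing, simulated annealing,
    a genetic algorithm, and so on."""
    state = initial()
    step = 0
    while not stop(step, state):
        candidates = neighbours(state)
        state = select(state, [c for c in candidates if accept(state, c, step)])
        step += 1
    return max(state, key=evaluate)

# Instantiating it as a tiny steady-state GA on bit strings:
N = 20
fitness = lambda bits: sum(bits)

def ga_instance():
    def initial():
        return [[random.randint(0, 1) for _ in range(N)] for _ in range(10)]
    def neighbours(pop):                      # one offspring via crossover + mutation
        a, b = random.sample(pop, 2)
        cut = random.randrange(1, N)
        child = [1 - g if random.random() < 1.0 / N else g for g in a[:cut] + b[cut:]]
        return [child]
    def accept(pop, child, step):             # always accept offspring
        return True
    def select(pop, accepted):                # replace the worst member of the population
        if accepted:
            worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
            pop = pop[:worst] + accepted[:1] + pop[worst + 1:]
        return pop
    def stop(step, pop):
        return step >= 500
    return general_search(initial, neighbours, accept, select, stop, fitness)

print(fitness(ga_instance()))
```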

Locating Pressure Control Elements for Leakage Minimisation in Water Supply Networks by Genetic Algorithms

The complex problem of choosing the types of pressure-control elements, locating them and determining their settings in order to minimise leakage in water supply networks is addressed. The difficulty stems from the discrete nature of the choice/location variables, as well as from the fact that the problem is invariably large-scale, due to the size of the networks involved, in addition to being nonlinear and non-convex due to the nature of the head-flow relationships. The paper describes the design of a genetic algorithm to solve the problem and compares the results obtained with those of a mixed-integer programming model developed earlier by the authors.

A “Noise Gene” for Econets

Genetically controlled noise is applied to the weights of neural networks trained with a genetic algorithm. The networks simulate simple organisms living in an environment. Reproduction is based on the ability of each network, during its life, to respond to sensory information from the environment with appropriate motor action. Each network has an amount of noise which is genetically inherited (in the ‘noise gene’) with mutations and which varies between individuals. Noise modifies the value of a weight differently for each spreading of activation through the network. Such noise has a positive effect on the evolutionary increase in fitness and makes fitness less dependent on the initial choice of a random population. Evolutionarily, whatever its initial amount, noise reaches an intermediate level during the first third of the evolutionary process and then approaches zero.
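
A minimal sketch of a 'noise gene' is given below: each genome carries a noise level that is inherited and mutated like any other gene, and every forward pass perturbs each weight with fresh Gaussian noise of that amplitude. The network sizes, mutation rates and activation function are assumptions of this illustration, not the paper's settings.

```python
import math
import random

class NoisyNet:
    """A small feedforward net whose genome is (weights, noise level); the noise level
    is inherited and mutated like any other gene and perturbs every weight afresh on
    each forward pass."""
    def __init__(self, n_in=3, n_hidden=4, n_out=2, noise=0.1):
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
        self.w2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
        self.noise = noise                                # the 'noise gene'

    def _layer(self, weights, x):
        out = []
        for row in weights:
            # the noise term is re-drawn at every spreading of activation
            s = sum((w + random.gauss(0, self.noise)) * xi for w, xi in zip(row, x))
            out.append(math.tanh(s))
        return out

    def forward(self, x):
        return self._layer(self.w2, self._layer(self.w1, x))

    def offspring(self, weight_mut=0.05, noise_mut=0.02):
        child = NoisyNet(noise=max(0.0, self.noise + random.gauss(0, noise_mut)))
        child.w1 = [[w + random.gauss(0, weight_mut) for w in row] for row in self.w1]
        child.w2 = [[w + random.gauss(0, weight_mut) for w in row] for row in self.w2]
        return child

# Evolution would score each organism's behaviour over its lifetime; here we just show
# that the same genome gives different responses on repeated presentations of one input.
net = NoisyNet()
print(net.forward([1.0, 0.0, 0.5]))
print(net.forward([1.0, 0.0, 0.5]))
print("inherited noise level of a child:", net.offspring().noise)
```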

A Solution to Global Illumination by Genetic Algorithms

A new approach to optimizing the computer simulation of radiant light transfer by means of evolutionary techniques for the generation of photorealistic images is introduced. The formulation of radiant light transfer in a model leads to a system of complex integral equations, which have so far been solved by Monte Carlo methods. One of the major problems in Monte Carlo sampling is determining the location and density of sample points in order to reduce the variance of the estimates. Here a solution is provided by applying evolution strategies to the calculation of global illumination. The search space, i.e. the hemisphere of incident radiation at a point on a surface, is thus explored very efficiently by maintaining populations of rays and applying self-adaptive genetic recombination operators. The simulation process becomes self-organizing, and the transition from one state to another is no longer independent of previous states, which allows the system to adjust optimally to a particular lighting situation.

NeXTGene: A graphical user-interface for GENESIS under NeXTStep

Experimenting with genetic algorithm (GA) systems can be substantially supported by graphical user interface (GUI) frontends to GA kernels. We briefly describe our ideas about what a GUI for evolutionary systems might look like. Our first results in designing an interface with the help of NeXTStep’s Interface Builder are presented.

Using Genetic Algorithms in Economic Modelling: The Many-Agents Approach

The theory of economies with many agents was developed in the sixties, stimulated by two papers of Aumann (Aumann 1964, Aumann 1966). The mathematical properties of this approach have been analysed by Dierker (Dierker 1974). According to Dierker, the theory of economies with many agents allows both the convexity and the rationality hypotheses in economics to be removed. The idea of replacing the rationality hypothesis by the notion of “bounded rationality” has been at the centre of more recent work in economics (Schelling 1978, Arthur 1991). As it is possible to map the framework of the theory of economies with many agents directly onto PBE automata, Principle-Based Engineering offers interesting routes to explore in the area of economics, especially in economic modelling. Principle-Based Engineering (PBE) is the process of designing a system through the definition of interacting elements where each element is an example of a principle (Addis 1991). Higher-order systems can be constructed from principles that have, in their turn, been engineered as principle-based sub-systems. The behaviour of a principle-based system (PBS) is “emergent” through the resulting interaction of the elements with each other and with an ‘environment’. Using the PBE approach, economic theorists might directly model the behaviour of many economic agents with partially conflicting interests and thereby gain insights into the consequences of the interdependencies of these agents for complete markets or market segments. This type of work is currently being actively researched at Brainware in the framework of the ESPRIT project PAPAGENA.

GIRS: A genetic approach to Information Retrieval

The fundamental problem in information retrieval (IR) is to distinguish the relevant documents from the non-relevant ones according to a particular user’s request. This paper describes a genetic approach to probabilistic information retrieval for improving the recall and precision of an information retrieval system. The presented prototype model — GIRS: Genetic Information Retrieval System — consists of the following parts:
1. a Genetic Algorithm (GA) is used for (a) the re-description of documents (indexing component) and (b) the clustering of co-relevant documents depending on user relevance judgements;
2. a Classifier System (CS): the system evolves sets of rules containing clustered descriptions of similar documents to improve the recall and precision of the system.

Classifier System in Traffic Management

Systems for controlling and improving traffic flow have been studied for several years now. The usefulness of these systems is that they can modify and change the signal phases of traffic lights. It is not enough to intervene when the situation has reached a critical point such as a traffic jam; the system has to anticipate how the traffic will flow. The ideal solution would be a system that works out and foresees the situation on the roads based on a model of motorists’ behaviour. This research shows how best to utilise classifier systems so that it is possible to create a model that is similar to that of the real world.

Artificial Neural Networks & Genetic Algorithms

Genetic Search for Optimal Representations in Neural Networks

An approach to learning in feed-forward neural networks is put forward that combines gradual synaptic modification at the output layer with genetic adaptation in the lower layer(s). In this “GA-delta” technique, the alleles are linear threshold units (a set of weights and a threshold); a chromosome is a collection of such units, and hence defines a mapping from the input layer to a hidden layer. The fitness is evaluated by measuring the error after a small number of delta rule iterations on the hidden-output weights. Genetic operators are defined on these chromosomes to facilitate search for a mapping that renders the task solvable by a single layer of weights. The performance of GA-delta is presented on several tasks, and the effects of the various operators are analyzed.
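
A rough sketch of the GA-delta loop is shown below, under assumptions about the constants and the XOR toy task (neither taken from the paper): each chromosome is a set of linear threshold hidden units, and its fitness is the error remaining after a few delta-rule updates of the hidden-to-output weights.

```python
import random

def step(x):                              # linear threshold unit output
    return 1.0 if x >= 0 else 0.0

def hidden_outputs(chromosome, x):
    """A chromosome is a list of (weights, threshold) alleles, i.e. threshold units
    mapping the input layer to a hidden layer."""
    return [step(sum(w * xi for w, xi in zip(weights, x)) - theta)
            for weights, theta in chromosome]

def fitness(chromosome, data, delta_iters=20, lr=0.3):
    """Evaluate a chromosome by training only the hidden-to-output weights with a few
    delta-rule iterations and measuring the remaining error (the GA-delta idea)."""
    w_out = [0.0] * len(chromosome)
    bias = 0.0
    for _ in range(delta_iters):
        for x, target in data:
            h = hidden_outputs(chromosome, x)
            y = step(sum(w * hi for w, hi in zip(w_out, h)) + bias)
            err = target - y
            w_out = [w + lr * err * hi for w, hi in zip(w_out, h)]
            bias += lr * err
    errors = 0
    for x, target in data:
        h = hidden_outputs(chromosome, x)
        errors += abs(target - step(sum(w * hi for w, hi in zip(w_out, h)) + bias))
    return -errors                         # fewer output errors = higher fitness

XOR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def random_chromosome(n_hidden=2, n_in=2):
    return [([random.uniform(-1, 1) for _ in range(n_in)], random.uniform(-1, 1))
            for _ in range(n_hidden)]

pop = [random_chromosome() for _ in range(30)]
for _ in range(40):
    pop.sort(key=lambda c: fitness(c, XOR), reverse=True)
    parents = pop[:15]
    children = []
    for _ in range(15):
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]   # an allele is a whole hidden unit
        if random.random() < 0.3:                             # mutate one unit's weights
            i = random.randrange(len(child))
            ws, th = child[i]
            child[i] = ([w + random.gauss(0, 0.3) for w in ws], th + random.gauss(0, 0.3))
        children.append(child)
    pop = parents + children

print("negative error of best chromosome:", fitness(max(pop, key=lambda c: fitness(c, XOR)), XOR))
```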

Searching among Search Spaces: Hastening the genetic evolution of feedforward neural networks

The paper introduces a genetic paradigm for evolving feedforward neural networks, where both the network topology and the weight distribution are coded in the individuals. The resulting binary-coded strings are too long for efficient evolution, so two novel techniques are employed. The first is a coding procedure that allows the genetic algorithm to evolve the length of the coding string along with its content; this is called the granularity evolution procedure. The second technique is inspired by linear programming and yields a fast and accurate fine-tuning of the solutions. These ideas have been implemented in a running system. Computational results show that during evolution the genetic algorithm uses several coding lengths, and thus autonomously identifies the search spaces that lead to more promising solutions.

Representation and Evolution of Neural Networks

An evolutionary approach for developing improved neural network architectures is presented. It is shown that genetic algorithms can be used for the construction of backpropagation networks for real-world tasks. To this end, a network representation with suitable properties is developed. Results for various applications are presented.

Using a Genetic Algorithm to tune Potts Neural Networks

This paper describes the successful application of a Genetic Algorithm to tuning the parameters of a Potts Neural Network. Potts Neural Networks can be used to solve optimization problems. First the problem is mapped onto the network by generating neurons and connections depending on the instance of the problem at hand. The stable state found by simulation represents a solution to the problem. There are several parameters that influence the correctness and quality of the solutions found, and these parameters depend on the problem to be solved. Another optimization technique inspired by nature, the Genetic Algorithm, is used to find parameter settings that give the best performance of the network. The approach has been tested on two applications of Potts Neural Networks; in both cases good results were achieved.

Genetic Weight Optimization of a Feedforward Neural Network Controller

The optimization of the weights of a feedforward neural network with a genetic algorithm is discussed. The search by the recombination operator is hampered by the existence of two functional-equivalence symmetries in feedforward neural networks. To sidestep these representation redundancies, we reorder the hidden neurons in the genotype before recombination according to a weight-sign matching criterion, and flip the weight signs of a hidden neuron’s connections whenever there are more inhibitory than excitatory incoming and outgoing links. As an example we optimize a feedforward neural network that implements a nonlinear optimal control law. The neural controller has to swing the inverted pendulum up from its lower equilibrium point to its upper equilibrium point and stabilize it there. Finding the weights of the network represents a nonlinear optimization problem which is solved by the genetic algorithm.
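
The two normalisation steps can be sketched as follows, under the assumption of odd (e.g. tanh) hidden activations so that sign flipping preserves the mapping: hidden neurons are sign-canonicalised and then greedily re-paired by weight-sign similarity before a neuron-wise crossover. The genotype layout and the greedy matching are simplifications of this sketch, not the paper's exact criterion.

```python
import random

# A genotype: for each hidden neuron, its incoming weights and its outgoing weights.
def random_genotype(n_in=3, n_hidden=4, n_out=1):
    return [{"w_in":  [random.uniform(-1, 1) for _ in range(n_in)],
             "w_out": [random.uniform(-1, 1) for _ in range(n_out)]}
            for _ in range(n_hidden)]

def normalize_signs(genotype):
    """Remove the sign-flip symmetry: for odd activation functions, negating all of a hidden
    neuron's incoming and outgoing weights leaves the mapping unchanged, so each neuron is
    canonicalized to have no more inhibitory than excitatory links."""
    for neuron in genotype:
        links = neuron["w_in"] + neuron["w_out"]
        if sum(1 for w in links if w < 0) > len(links) / 2:
            neuron["w_in"] = [-w for w in neuron["w_in"]]
            neuron["w_out"] = [-w for w in neuron["w_out"]]
    return genotype

def sign_pattern(neuron):
    return tuple(w >= 0 for w in neuron["w_in"] + neuron["w_out"])

def align(parent_a, parent_b):
    """Remove the permutation symmetry: reorder parent B's hidden neurons so that each is
    paired with the parent-A neuron whose weight signs it matches best (a greedy stand-in
    for a weight-sign matching criterion)."""
    remaining = list(parent_b)
    ordered = []
    for na in parent_a:
        best = max(remaining, key=lambda nb: sum(x == y for x, y in
                                                 zip(sign_pattern(na), sign_pattern(nb))))
        remaining.remove(best)
        ordered.append(best)
    return ordered

def crossover(parent_a, parent_b):
    parent_a = normalize_signs(parent_a)
    parent_b = align(parent_a, normalize_signs(parent_b))
    # uniform crossover on whole hidden neurons, now that the genotypes are aligned
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

child = crossover(random_genotype(), random_genotype())
print([sign_pattern(n) for n in child])
```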

Using a genetic algorithm to find the rules of a neural network

Neural networks can be used in various applications, and there are many forms of neural network. However, all networks have two common factors: they can learn from examples and they can generalise; that is, they can respond suitably both to the data taught and to similar data. Once trained, the network can be used for such tasks as recognition and classification. One criticism of neural networks is that they are black boxes: appropriate outputs are generated for each set of inputs, but the user has no understanding of how such outputs are generated. The aim of the work here is to generate the rules which determine how a taught network responds. This is achieved using a genetic algorithm which suggests possible rules for the network. The network itself is then used to evaluate the performance of each rule, and this performance measure is used to determine new rules. This paper outlines the problem in more detail, describes the proposed technique and discusses the preliminary results obtained so far.

M. L. P. Optimal Topology via Genetic Algorithms

In this paper a Genetic Algorithm is adopted in order to select the optimal topology of a Multi-Layer Perceptron. Two different problems are considered. The first is to select the optimal number of neurons in a structure with one hidden layer. The second is the choice of the number of layers into which a fixed number of neurons has to be arranged in order to solve a given problem. To this aim, a suitable set of genetic operators has been introduced.

Application of Genetic Algorithms to the Construction of Topologies for Multilayer Perceptrons

In this paper we present a new approach to the automatic topology optimization of backpropagation networks. It is based on a genetic algorithm. In contrast to other approaches, it allows two networks with different numbers of units to be crossed into a new valid “child” network. We applied this algorithm to a medical classification task which is extremely difficult to solve. The results confirm that the optimization is worthwhile, because the generated networks outperform all fixed topologies.

Genetic Algorithms as Heuristics for Optimizing ANN Design

The problem of ANN design is usually thought of as residing in solving the training problem for some predefined ANN structure and connectivity. Training methods are very problem- and ANN-dependent. They are sometimes very accurate procedures, but they work in narrow and restrictive domains, so the designer is faced with a wide diversity of different training mechanisms. We have selected Genetic Algorithms because of their robustness and their potential extension to train any ANN type. Furthermore, we have addressed the connectivity and structure definition problems in order to accomplish a full genetic ANN design. These three levels of design can work in parallel, thus achieving multilevel relationships to build better ANNs. GRIAL is the tool used to test several new and known genetic techniques and operators. PARLOG is the concurrent logic language used for the implementation, in order to introduce new models for the genetic work, to attain an intra-level distributed search, and to parallelize the ANN management and the genetic operations.

Genetic Algorithm Design of Neural Net Based Electronic Nose

The training of a multi-layer perceptron using the well known back-propagation algorithm normally takes place after the neural network architecture and the initial values of various network parameters have been defined. Since the success of the training process, in terms of a fast rate of convergence and good generalisation, can be affected by the choice of the architecture and the initial network parameters, much time is spent in searching for the optimal neural paradigm. In this paper, results are presented on the use of Genetic Algorithms to determine automatically a suitable network architecture and a set of parameters from a restricted region of design space. The data-set comes from the response of the Warwick Electronic Nose to a set of simple and complex odours.

Adhanom A. Fekadu, Evor L. Hines, Julian W. Gardner
Circuits of Production Rule GenNets
The Genetic Programming of Artificial Nervous Systems

A year ago, the author evolved a simulated artificial creature (a biot, i.e. biological robot), called LIZZY, which consisted of fully connected neural network modules (called GenNets) whose weights were evolved such that each GenNet performed some desired behavior, such as making the biot walk straight ahead, turn clockwise or anticlockwise, peck, or mate [de Garis 1990, 1991, 1993]. Other GenNets were evolved to detect sinusoidal frequencies, signal strengths, signal strength differences, etc. However, the middle layer, between the detector and the motion GenNets, was implemented with traditional symbolic production rules, for reasons of computer simulation speed. Every time a GenNet was added to the system, simulation speed dropped. This paper “completes” the above work by proposing a model which shows how a circuit of production-rule GenNets (i.e. GenNets which behave like production rules) can be evolved to implement the middle or “decisional” layer, which takes the signals output from the detector GenNets and then decides which motion GenNet should be switched on. This work takes a first step towards the evolution of whole nervous systems, in which circuits of GenNet modules (of appropriate types) are evolved to give a biot a complete behavioral repertoire.

Hugo de Garis
Neuromimetic Algorithms Processing: Tools for Design of Dedicated Architectures

The aim of this research is to derive a new parallel computer architecture dedicated to connectionist algorithms, and especially to neuromimetic algorithms. The study of this problem led us to construct a simulation model. This model highlights the difficulties of implementing connectionist algorithms in parallel on a multiprocessor architecture, i.e. of projecting application graphs onto the graph of the target machine in order to achieve optimal performance. Implementation is difficult because data transfers between processors induce mapping and scheduling problems. Several mapping strategies and optimization heuristics are therefore used, such as node swapping, simulated annealing or genetic algorithms. The simulation model allows several parameters of the studied machine to be varied: the number of processors, the architecture, the parallel mode (SIMD, SPMD, ...), the interconnection network (static or dynamic) and the communication mode. It also makes it possible to seek a trade-off between processing loads and communication loads. In this way, some basic specifications of the dedicated architecture will be chosen from the comparison of a set of alternatives.

Laurent Kwiatkowski, Jean-Paul Stromboni
Towards the Development of Cognitive Maps in Classifier Systems

Classifier systems are well tested vehicles for implementing genetic algorithms in machine learning environments. This paper presents a novel system architecture that transforms a classifier system’s knowledge representation from message-based structures to self-organizing neural networks. These networks have been integrated with a classifier system to produce a Hybrid Learning System (HLS) that exhibits adaptive behaviour when driven by low level environmental feedback. Problems are represented within HLS as objects characterized by environmental features. Objects controlled by the system have preset goals set against a subset of their features and the system has to achieve these goals by developing a behavioural repertoire that efficiently explores and exploits the problem environment. Three types of knowledge structures evolve during this adaptive process: a cognitive map of useful regularities within the environment (encoded in a single self-organizing network); classifier behaviour calibrated against feature states and targets (encoded in a set of self-organizing feature maps); a population of complex behaviours (evolved from a gene pool supplied as part of the initial problem specification).

N. R. Ball
Genetic Optimisation of Neural Network Architectures for Colour Recipe Prediction

Colour control systems based on spectrophotometers and microprocessors are finding increased use in production environments. One of the most important aspects of quality control in manufacturing processes is the maintenance of colour in the product. This involves selecting a recipe of appropriate dyes or pigments which, when applied at a specific concentration to the product, will render the required colour. This process is known as recipe prediction and is traditionally carried out by trained colourists who achieve a colour match via a combination of experience and trial and error. Instrumental recipe prediction was introduced commercially in the 1960s and has become one of the most important industrial applications of colorimetry. The model that is almost exclusively used is known as the Kubelka-Munk theory; however, its behaviour in certain areas of coloration is such as to warrant an alternative approach. The purpose of this paper is to investigate the performance of a genetically optimised neural network applied to this recipe prediction task.

J. M. Bishop, M. J. Bushnell, A. Usher, S. Westland
Use of Genetic Algorithms for Optimal Topology Determination in Back Propagation Neural Networks

A genetic algorithm is applied to evolve neural network topologies suitable for given problem domains. Certain concepts, from the fields of statistics and genetics, are considered with a view to possible future improvements to the genetic algorithm.

Philip Robbins, Alan Soper, Keith Rennolls
Counting and Naming Connection Islands on a Grid of Conductors

Consider a grid of T horizontal and T vertical information conductors. At each crossing point it is possible either to connect a horizontal conductor to a vertical one or to leave the crossing open. When connected at a crossing point, both conductors share the same information state throughout their lengths. When several conductors are mutually connected at their crossing points, an island of connections is formed. A requirement for an island is that there be representative conductors of both kinds. All conductors associated with an island share the same information state. In the case of several islands, each island’s conductors are mutually exclusive of any other island’s conductors; in fact, an island can be uniquely named by its conductors. As a further requirement, the grid is assumed to be completely utilized, with no idle conductors. Under the above conditions we develop the general combinatorial equations for counting the number of ways s sets of islands can exist on a T×T grid of conductors. The constraints of the problem introduce interesting combinatorial aspects. We also discuss methods of counting connections and naming sets of islands.

Daniel C. Fielder, Cecil O. Alford
Backmatter
Metadata
Title
Artificial Neural Nets and Genetic Algorithms
Editors
Dr. Rudolf F. Albrecht
Dr. Colin R. Reeves
Dr. Nigel C. Steele
Copyright Year
1993
Publisher
Springer Vienna
Electronic ISBN
978-3-7091-7533-0
Print ISBN
978-3-211-82459-7
DOI
https://doi.org/10.1007/978-3-7091-7533-0