
About this book

This volume presents the set of final accepted papers for the tenth edition of the IWANN conference, the "International Work-Conference on Artificial Neural Networks", held in Salamanca (Spain) during June 10–12, 2009. IWANN is a biennial conference focusing on the foundations, theory, models and applications of systems inspired by nature (mainly neural networks, evolutionary and soft-computing systems). Since the first edition in Granada (LNCS 540, 1991), the conference has evolved and matured. The list of topics in the successive Calls for Papers has also evolved, resulting in the following list for the present edition:

1. Mathematical and theoretical methods in computational intelligence. Complex and social systems. Evolutionary and genetic algorithms. Fuzzy logic. Mathematics for neural networks. RBF structures. Self-organizing networks and methods. Support vector machines.
2. Neurocomputational formulations. Single-neuron modelling. Perceptual modelling. System-level neural modelling. Spiking neurons. Models of biological learning.
3. Learning and adaptation. Adaptive systems. Imitation learning. Reconfigurable systems. Supervised, non-supervised, reinforcement and statistical algorithms.
4. Emulation of cognitive functions. Decision making. Multi-agent systems. Sensor mesh. Natural language. Pattern recognition. Perceptual and motor functions (visual, auditory, tactile, virtual reality, etc.). Robotics. Planning motor control.
5. Bio-inspired systems and neuro-engineering. Embedded intelligent systems. Evolvable computing. Evolving hardware. Microelectronics for neural, fuzzy and bio-inspired systems. Neural prostheses. Retinomorphic systems. Brain-computer interfaces (BCI). Nanosystems. Nanocognitive systems.

Table of Contents


Theoretical Foundations and Models

Lower Bounds for Approximation of Some Classes of Lebesgue Measurable Functions by Sigmoidal Neural Networks

We propose a general method for estimating the distance between a compact subspace $K$ of the space $L_q([0,1]^d)$ of Lebesgue measurable functions defined on the hypercube $[0,1]^d$, and the class of functions computed by artificial neural networks using a single hidden layer, each unit evaluating a sigmoidal activation function. Our lower bounds are stated in terms of an invariant that measures the oscillations of functions of the space $K$ around the origin. As an application we estimate the minimal number of neurons required to approximate bounded functions satisfying uniform Lipschitz conditions of order $\alpha$ with accuracy $\varepsilon$.
José L. Montaña, Cruz E. Borges

A Wavelet Based Method for Detecting Multiple Encoding Rhythms in Neural Networks

In this work we propose the use of the discrete wavelet transform for the detection of multiple encoding rhythms that appear, for example, in spatio-temporal patterns generated by neuronal activity in a set of coupled neurons. The method presented here allows a quantitative characterization of spatio-temporal patterns and is based on the behavior of a compression-like scheme. The wavelet-based method is faster than two-dimensional spectral methods for finding different rhythms in spatio-temporal patterns, as its computational complexity is linear in the size of each 2D frame of the spatio-temporal pattern. The method also provides an easy way to classify different qualitative behaviors of the patterns.

Carlos Aguirre, Pedro Pascual
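The scale-separation idea behind this abstract can be illustrated with a minimal Haar wavelet decomposition — a sketch of the general technique (the function names and the energy-profile summary are mine, not the authors' implementation):

```python
import math

def haar_dwt(signal):
    """One full Haar wavelet decomposition of a length-2^k signal.
    Returns the list of detail coefficient arrays, finest level first."""
    s = list(signal)
    details = []
    while len(s) > 1:
        approx, detail = [], []
        for i in range(0, len(s), 2):
            approx.append((s[i] + s[i + 1]) / math.sqrt(2))
            detail.append((s[i] - s[i + 1]) / math.sqrt(2))
        details.append(detail)
        s = approx
    return details

def energy_per_scale(signal):
    """Relative energy of each detail level; a rhythm at a given time
    scale concentrates its energy at the matching level."""
    details = haar_dwt(signal)
    energies = [sum(c * c for c in d) for d in details]
    total = sum(energies) or 1.0
    return [e / total for e in energies]

# A fast oscillation (period 2) puts its energy in the first detail level.
fast = [1.0 if i % 2 == 0 else -1.0 for i in range(64)]
profile = energy_per_scale(fast)
print(profile[0])  # close to 1.0
```

Since each decomposition level halves the signal, the whole transform costs a number of operations linear in the input size, which is the source of the complexity advantage mentioned in the abstract.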

Switching Dynamics of Neural Systems in the Presence of Multiplicative Colored Noise

We study the dynamics of a simple bistable system driven by multiplicative correlated noise. Such system mimics the dynamics of classical attractor neural networks with an additional source of noise associated, for instance, with the stochasticity of synaptic transmission. We found that the multiplicative noise, which performs as a fluctuating barrier separating the stable solutions, strongly influences the behaviour of the system, giving rise to complex time series and scale-free distributions for the escape times of the system. This finding may be of interest to understand nonlinear phenomena observed in real neural systems and to design bio-inspired artificial neural networks with convenient complex characteristics.

Jorge F. Mejias, Joaquin J. Torres, Samuel Johnson, Hilbert J. Kappen

Gradient Like Behavior and High Gain Design of KWTA Neural Networks

We consider the static and dynamic analysis of an analog electrical circuit having the structure of the Hopfield neural network: the KWTA (K-Winners-Take-All) network. The mathematics of circuit design and operation is discussed using two basic tools: the Liapunov function ensuring the gradient-like behavior, and the rational choice of the weights that stands for network training to ensure order-preserving trajectories. Dynamics and behavior at equilibria are considered in their natural interaction, and some connections to the ideas in general dynamical systems of convolution type are suggested.

Daniela Danciu, Vladimir Răsvan

Fast Evaluation of Connectionist Language Models

Connectionist language models offer many advantages over their statistical counterparts, but they also have some drawbacks, like a much more expensive computational cost. This paper describes a novel method to overcome this problem. A set of normalization values associated with the most frequent $n$-grams is pre-computed, and the model is smoothed with lower-order $n$-gram connectionist or statistical models. The proposed approach compares favourably to standard connectionist language models and to statistical back-off language models.

F. Zamora-Martínez, M. J. Castro-Bleda, S. España-Boquera
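The pre-computation idea can be sketched as caching softmax normalization constants for frequent contexts, so that a probability lookup avoids summing over the whole output vocabulary — a hypothetical illustration (the function and cache layout are mine, not the paper's):

```python
import math

def softmax_prob(logits, word_idx, z_cache=None, context=None):
    """P(word | context) from output-layer logits. If the context's
    normalization constant Z was pre-computed, reuse it instead of
    summing exp(.) over the whole vocabulary."""
    if z_cache is not None and context in z_cache:
        z = z_cache[context]
    else:
        z = sum(math.exp(x) for x in logits)
    return math.exp(logits[word_idx]) / z

# Pre-compute Z only for the most frequent contexts (one toy context here).
logits = [0.2, 1.5, -0.3, 0.0]
cache = {("the",): sum(math.exp(x) for x in logits)}
p_cached = softmax_prob(logits, 1, cache, ("the",))
p_exact = softmax_prob(logits, 1)
print(abs(p_cached - p_exact) < 1e-12)  # same probability, cheaper lookup
```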

Improving the Consistency of AHP Matrices Using a Multi-layer Perceptron-Based Model

The Analytic Hierarchy Process (AHP) uses hierarchical structures to arrange comparison criteria and alternatives in order to support decision-making tasks. The comparisons are performed using pairwise matrices, which are filled in according to the decision maker's criterion. Then, matrix consistency is tested and the priorities of the alternatives are obtained. If a pairwise matrix is incomplete, two procedures must be carried out: first, completing the matrix with adequate values for the missing entries and, second, improving the matrix consistency to an acceptable level. In this paper a model based on Multi-layer Perceptron (MLP) neural networks is presented. This model is capable of completing missing values in AHP pairwise matrices and improving their consistency at the same time.

Jose Antonio Gomez-Ruiz, Marcelo Karanik, José Ignacio Peláez

Global and Local Modelling in Radial Basis Functions Networks

In the problem of modelling input/output data using neuro-fuzzy systems, the performance of the global model is normally the only objective optimized, and this might cause misleading performance of the local models. This work presents a modified radial basis function network that maintains the optimization properties of the local sub-models while the model is globally optimized, thanks to a special partitioning of the input space in the hidden layer performed to carry out those objectives. The advantage of the proposed methodology is that, due to those properties, the global and the local models are both directly optimized. A learning methodology adapted to the proposed model is used in the simulations, consisting of a clustering algorithm for the initialization of the centers and a local search technique.

L. J. Herrera, H. Pomares, I. Rojas, A. Guillén, G. Rubio, J. Urquiza

A Preliminary Analysis of CO2RBFN in Imbalanced Problems

In many real classification problems the data are imbalanced, i.e., the number of instances of some classes is much higher than that of the other classes. Solving a classification task using such an imbalanced data-set is difficult due to the bias of the training towards the majority classes. The aim of this contribution is to analyse the performance of CO2RBFN, a cooperative-competitive evolutionary model for the design of RBFNs applied to classification problems on imbalanced domains, and to study the cooperation of a well-known preprocessing method, the "Synthetic Minority Over-sampling Technique" (SMOTE), with our algorithm. The good performance of CO2RBFN is shown through an experimental study carried out over a large collection of imbalanced data-sets.

M. D. Pérez-Godoy, A. J. Rivera, A. Fernández, M. J. del Jesus, F. Herrera
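SMOTE itself is simple enough to sketch: each synthetic minority sample is an interpolation between a minority point and one of its nearest minority neighbours. The following is a minimal sketch of that idea, not the reference implementation:

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples: pick a random minority
    point, choose one of its k nearest minority neighbours, and place a
    new sample at a random position on the segment between them."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote(minority, n_new=4)
print(len(new_points))  # 4 synthetic samples inside the minority region
```

Because each new point lies between two existing minority points, the oversampled region stays inside the convex hull of the minority class.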

Feature Selection in Survival Least Squares Support Vector Machines with Maximal Variation Constraints

This work proposes the use of maximal variation analysis for feature selection within least squares support vector machines for survival analysis. Instead of selecting a subset of variables with forward or backward feature selection procedures, we modify the loss function in such a way that the maximal variation of each covariate is minimized, resulting in models which have sparse dependence on the features. Experiments on artificial data illustrate the ability of the maximal variation method to recover relevant variables from the given ones. A real-life study concentrates on a breast cancer dataset containing clinical variables. The results indicate a better performance for the proposed method compared to Cox regression with an $\ell_1$ regularization scheme.

V. Van Belle, K. Pelckmans, J. A. K. Suykens, S. Van Huffel

A Simple Maximum Gain Algorithm for Support Vector Regression


Shevade et al.'s Modification 2 is one of the most widely used algorithms to build Support Vector Regression (SVR) models. It selects as a size-2 working set the index pair giving the maximum KKT violation and combines it with the updating heuristics of Smola and Schölkopf, enforcing at each training iteration an $\alpha_i \alpha^*_i = 0$ condition. In this work we present an alternative, much simpler procedure that selects the updating indices as those giving a maximum gain in the SVR dual function. While we do not try to enforce the $\alpha_i \alpha^*_i = 0$ condition, we show that it will hold at each iteration provided it does so at the starting multipliers. We show numerically that the proposed procedure requires essentially the same number of iterations as Modification 2, thus having the same time performance while being much simpler to code.

Álvaro Barbero, José R. Dorronsoro

Domains of Competence of Artificial Neural Networks Using Measures of Separability of Classes

In this work we analyse the behaviour of two classic Artificial Neural Network models with respect to a set of data complexity measures. In particular, we consider a Radial Basis Function Network and a Multi-Layer Perceptron. We examine the data complexity metrics known as Measures of Separability of Classes over a wide range of data sets built from real data, and try to extract behaviour patterns from the results. We obtain rules that describe both the good and the bad behaviours of the Artificial Neural Networks mentioned.

With the obtained rules, we try to predict the behaviour of the methods from the data set complexity metrics prior to their application, and thereby establish their domains of competence.

Julián Luengo, Francisco Herrera

Self-estimation of Data and Approximation Reliability through Neural Networks

This paper presents a method to estimate the reliability of the output of a (possibly neuro-fuzzy) model by means of an additional neural network. The proposed technique is most effective when the reliability of the model varies significantly in different areas of the input space, as often happens in many real-world problems, allowing the user to predict how reliable a given model is for each specific situation. Alternatively, the proposed technique can be used to analyze particular anomalies of the input data set, such as outliers.

Leonardo M. Reyneri, Valentina Colla, Mirko Sgarbi, Marco Vannucci

FPGA Implementations Comparison of Neuro-cortical Inspired Convolution Processors for Spiking Systems

Image convolution operations in digital computer systems are usually very expensive in terms of resource consumption (processor resources and processing time) for efficient real-time applications. In these scenarios the visual information is divided into frames, and each one has to be completely processed before the next frame arrives. Recently, a new method for computing convolutions based on the neuro-inspired philosophy of spiking systems (Address-Event-Representation systems, AER) has been achieving high performance. In this paper we present two FPGA implementations of AER-based convolution processors that are able to work with 64x64 images and programmable kernels of up to 11x11 elements. The main difference is the use of RAM for integrators in one solution, and the absence of integrators in the second solution, which is based on mapping operations. The maximum equivalent operation rate is 163.51 MOPS for 11x11 kernels, in a Xilinx Spartan 3 400 FPGA with a 50MHz clock. Formulations, hardware architecture, operation examples and a performance comparison with frame-based convolution processors are presented and discussed.

A. Linares-Barranco, R. Paz, F. Gómez-Rodríguez, A. Jiménez, M. Rivas, G. Jiménez, A. Civit

Learning and Adaptation

Nonparametric Location Estimation for Probability Density Function Learning

We present a method to estimate the probability density function of multivariate distributions. Standard Parzen window approaches use the sample mean and the sample covariance matrix around every input vector. This choice yields poor robustness for real input datasets. We propose to use the L1-median to estimate the local mean and covariance matrix with a low sensitivity to outliers. In addition, a smoothing phase is considered, which improves the estimation by integrating the information from several local clusters. Hence, a specific mixture component is learned for each local cluster. This allows our method to outperform other proposals where the local kernel is not as robust and/or there is no smoothing strategy, like the manifold Parzen windows.

Ezequiel López-Rubio, Juan Miguel Ortiz-de-Lazcano-Lobato, María Carmen Vargas-González
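The L1-median (geometric median) mentioned above is commonly computed with Weiszfeld's iteration; the following is a minimal sketch of that standard algorithm (not the paper's code), showing its robustness to an outlier:

```python
import math

def l1_median(points, iters=200, eps=1e-9):
    """Weiszfeld's algorithm for the L1- (geometric) median: iteratively
    re-weight points by the inverse of their distance to the current
    estimate. Far outliers get small weights, hence the robustness."""
    m = [sum(c) / len(points) for c in zip(*points)]  # start at the mean
    for _ in range(iters):
        num = [0.0] * len(m)
        den = 0.0
        for p in points:
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(p, m))) or eps
            w = 1.0 / d
            den += w
            for i, a in enumerate(p):
                num[i] += w * a
        m = [x / den for x in num]
    return m

# Four cluster points plus one far outlier: the mean is dragged to about
# (20.4, 20.4), while the L1-median stays inside the unit cluster.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (100.0, 100.0)]
print(l1_median(pts))  # stays within the unit square
```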

An Awareness-Based Artificial Neural Network for Cooperative Distributed Environments

For cooperative tasks to be successful in Collaborative Distributed Environments (CDE), users need to know beforehand which users in the system are the most suitable to cooperate with, as well as which tools are needed to achieve the common goal in a cooperative way. On the other hand, awareness allows users to be aware of others' activities at each and every moment. Information about others' activities, combined with their intentions and purposes, could be used to improve cooperation in CDEs. This paper focuses on associating the concept of awareness with vector quantization techniques. It presents a novel unsupervised Artificial Neural Network (ANN) based model for CDE, named CAwANN, which uses information about the awareness collaborations occurring in the environment in order to achieve the most suitable awareness-based collaboration / cooperation in the environment.

Mauricio Paletta, Pilar Herrero

Improving Classification under Changes in Class and Within-Class Distributions

The fundamental assumption that training and operational data come from the same probability distribution, which is the basis of most learning algorithms, is often not satisfied in practice. Several algorithms have been proposed to cope with classification problems where the class priors may change after training, but they can show a poor performance when the class conditional data densities also change. In this paper, we propose a re-estimation algorithm that makes use of unlabeled operational data to adapt the classifier behavior to changing scenarios. We assume that (a) the classes may be decomposed into several (unknown) subclasses, and (b) the prior subclass probabilities may change after training. Experimental results with practical applications show an improvement over an adaptive method based on class priors, while preserving a similar performance when there are no subclass changes.

Rocío Alaiz-Rodríguez, Alicia Guerrero-Curieses, Jesús Cid-Sueiro

Improving Training in the Vicinity of Temporary Minima

An important problem in learning using gradient descent algorithms (such as backprop) is the slowdown incurred by temporary minima (TM). We consider this problem for an artificial neural network trained to solve the XOR problem. The network is transformed into the equivalent

all permutations fuzzy rule-base

which provides a symbolic representation of the knowledge embedded in the network. We develop a mathematical model for the evolution of the fuzzy rule-base parameters during learning in the vicinity of TM. We show that the rule-base becomes singular and tends to remain singular in the vicinity of TM.

Our analysis suggests a simple remedy for overcoming the slowdown in the learning process incurred by TM. This is based on slightly perturbing the values of the training examples, so that they are no longer symmetric. Simulations demonstrate the usefulness of this approach.

Ido Roth, Michael Margaliot

Convergence in an Adaptive Neural Network: The Influence of Noise Inputs Correlation

This paper presents a study of convergence modalities in a small adaptive network of conductance-based neurons receiving input patterns with different degrees of correlation. The models for the neurons, synapses and plasticity rules (STDP) have a common biophysical basis. The neural network is simulated using a mixed analog-digital platform, which performs real-time simulations. We describe the study context and the models for the neurons and for the adaptation functions. Then we present the simulation platform, including analog integrated circuits to simulate the neurons and real-time software to simulate the plasticity. We also detail the analysis tools used to evaluate the final state of the network by way of its post-adaptation synaptic weights. Finally, we present experimental results, with a systematic exploration of the network convergence when varying the input correlation, the initial weights and the distribution of hardware neurons used to simulate the biological variability.

Adel Daouzli, Sylvain Saïghi, Michelle Rudolph, Alain Destexhe, Sylvie Renaud

Adaptive Resonance Theory Fuzzy Networks Parallel Computation Using CUDA

Programming of Graphics Processing Units (GPUs) has evolved in such a way that they can be used to address and speed up the computation of algorithms that fit data-parallel models. In this paper the parallelization of a Fuzzy ART algorithm is described and a detailed explanation of its implementation under CUDA is given. Experimental results show the algorithm runs up to 52 times faster on the GPU than on the CPU for testing and 18 times faster for training, under specific conditions.

M. Martínez-Zarzuela, F. J. Díaz Pernas, A. Tejero de Pablos, M. Antón Rodríguez, J. F. Díez Higuera, D. Boto Giralda, D. González Ortega
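The per-category computations that a GPU can evaluate in parallel are easy to see in a minimal sequential Fuzzy ART sketch (parameter names follow the usual ART literature; this is not the paper's CUDA code):

```python
def fuzzy_art(patterns, rho=0.75, alpha=0.001, beta=1.0):
    """Minimal sequential Fuzzy ART with complement coding. For each
    input: rank categories by the choice function, accept the first one
    passing the vigilance test, otherwise create a new category. The
    per-category choice/match values are what a GPU evaluates at once."""
    def choice(I, w):
        return sum(min(a, b) for a, b in zip(I, w)) / (alpha + sum(w))

    cats, labels = [], []
    for p in patterns:
        I = list(p) + [1.0 - x for x in p]      # complement coding
        for j in sorted(range(len(cats)), key=lambda j: -choice(I, cats[j])):
            and_ = [min(a, b) for a, b in zip(I, cats[j])]
            if sum(and_) / sum(I) >= rho:       # vigilance (resonance) test
                cats[j] = [beta * m + (1 - beta) * w
                           for m, w in zip(and_, cats[j])]
                labels.append(j)
                break
        else:                                   # no resonance: new category
            cats.append(I)
            labels.append(len(cats) - 1)
    return cats, labels

data = [(0.10, 0.10), (0.12, 0.09), (0.90, 0.90), (0.88, 0.91)]
cats, labels = fuzzy_art(data)
print(labels)  # [0, 0, 1, 1]: two categories emerge
```

Raising the vigilance parameter rho makes the match test stricter and therefore produces more, finer-grained categories.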

A Supervised Learning Method for Neural Networks Based on Sensitivity Analysis with Automatic Regularization

The Sensitivity-Based Linear Learning Method (SBLLM) is a learning method for two-layer feedforward neural networks based on sensitivity analysis that calculates the weights by solving a linear system of equations. Therefore, there is an important saving in computational time which significantly enhances the behavior of this method compared to other learning algorithms. In this paper a generalization of the SBLLM that includes a regularization term in the cost function is presented. The estimation of the regularization parameter is made by means of an automatic technique. The theoretical basis for the method is given and its performance is illustrated by comparing the results obtained by the automatic technique with those obtained manually by cross-validation.

Beatriz Pérez-Sánchez, Oscar Fontenla-Romero, Bertha Guijarro-Berdiñas

Ensemble Methods for Boosting Visualization Models

Topology preserving mappings are great tools for data visualization and inspection in large datasets. This research presents a study of the combination of different ensemble training techniques with a novel summarization algorithm for ensembles of topology preserving models. The aim of these techniques is to increase the truthfulness of the visualization of the dataset obtained by this kind of algorithm and, by extension, the stability conditions of the former. A study and comparison of the performance of some novel and classical ensemble techniques, using well-known datasets from the UCI repository (Iris and Wine), are presented to test their suitability for the fields of data visualization and topology preservation when combined with one of the most widespread models of that kind, the Self-Organizing Map.

Bruno Baruque, Emilio Corchado, Aitor Mata, Juan M. Corchado

New Artificial Metaplasticity MLP Results on Standard Data Base

This paper tests a novel improvement in neural network training by implementing Metaplasticity Multilayer Perceptron (MMLP) Neural Networks (NNs), which are based on the biological property of metaplasticity. Artificial Metaplasticity bases its efficiency on giving more relevance to the less frequent patterns and subtracting relevance from the more frequent ones. The statistical distribution of training patterns is used to quantify how frequent a pattern is. We model this interpretation in the NNs' training phase. The Wisconsin breast cancer database (WBCD) was used to train and test the MMLP. Our results were compared to recent research results on the same database, proving to be superior or at least an interesting alternative.

Alexis Marcano-Cedeño, Aleksandar Jevtić, Antonio Álvarez-Vellisco, Diego Andina

Self-organizing Networks, Methods and Applications

Probabilistic Self-Organizing Graphs

Self-organizing neural networks are usually focused on prototype learning, while the topology is held fixed during the learning process. Here we propose a method to adapt the topology of the network so that it reflects the internal structure of the input distribution. This leads to a self-organizing graph, where each unit is a mixture component of a Mixture of Gaussians (MoG). The corresponding update equations are derived from the stochastic approximation framework. Experimental results are presented to show the self-organization ability of our proposal and its performance when used with multivariate datasets.

Ezequiel López-Rubio, Juan Miguel Ortiz-de-Lazcano-Lobato, María Carmen Vargas-González

Spicules for Unsupervised Learning

We present a new model of unsupervised competitive neural network based on spicules. This model is capable of detecting topological information of an input space, determining its orientation and, in most cases, its skeleton.

J. A. Gómez-Ruiz, J. Muñoz-Perez, M. A. García-Bernal

Topology Preserving Visualization Methods for Growing Self-Organizing Maps

The self-organizing map (SOM) is a neural network model widely used in high dimensional data visualization processes. A trained SOM provides a simplified data model as well as a projection of the multidimensional input data onto a bi-dimensional plane that reflects the relationships involving the training patterns. Visualization methods based on SOM explore different characteristics related to the data learned by the network. It is necessary to find methods to determine the goodness of a trained network in order to evaluate the quality of the high dimensional data visualizations generated using the SOM simplified model. The degree of topology preservation is the most common concept used to implement this measure. Several qualitative and quantitative methods have been proposed for measuring the degree of SOM topology preservation, in particular for the Kohonen model. In this work, two methods for measuring topology preservation in the Growing Cell Structures (GCS) model are proposed: the topographic function and the topology preserving map.

Soledad Delgado, Consuelo Gonzalo, Estibaliz Martinez, Agueda Arquero

Making Standard SOM Invariant to the Initial Conditions

In data clustering, the assessment of learning properties with respect to the data is important for reliable classification. However, in the standard Self-Organizing Map (SOM), weight vector initialization is done randomly, leading to a different final feature map each time the initial conditions are changed. To cope with this issue, in this paper we present a behavioral study of the first iterations of the learning process in the standard SOM. After establishing the mathematical foundations of the first passage of input vectors, we show how to derive a better initialization relative to the data set, leading to the generation of a unique feature map.

Soukeina Ben Chikha, Kirmene Marzouki

The Complexity of the Batch Neural Gas Extended to Local PCA

The adaptation of a neural gas algorithm to local principal component analysis (NG-LPCA) is a useful technique in data compression, pattern recognition, classification or even data estimation. However, the batch NG-LPCA becomes unfeasible when dealing with high dimensional data. In this paper, a regularization method is described in detail to prevent the batch NG-LPCA approach from instability. The proposed method is tested and the results seem to prove that it is a suitable tool for classification tasks, avoiding instability with high dimensional datasets.

Iván Machón-González, Hilario López-García, José Luís Calvo-Rolle

Self Organized Dynamic Tree Neural Network

Cluster analysis is a technique used in a variety of fields. There are currently various algorithms used for grouping elements that are based on different methods including partitional, hierarchical, density studies, probabilistic, etc. This article will present the SODTNN, which can perform clustering by integrating hierarchical and density-based methods. The network incorporates the behavior of self-organizing maps and does not specify the number of existing clusters in order to create the various groups.

Juan F. De Paz, Sara Rodríguez, Javier Bajo, Juan M. Corchado, Vivian López

Development of Neural Network Structure with Biological Mechanisms

We present an evolving neural network model in which synapses appear and disappear stochastically according to bio-inspired probabilities. These are in general nonlinear functions of the local fields felt by neurons—akin to electrical stimulation—and of the global average field—representing total energy consumption. We find that initial degree distributions then evolve towards stationary states which can either be fairly homogeneous or highly heterogeneous, depending on parameters. The critical cases—which can result in scale-free distributions—are shown to correspond, under a mean-field approximation, to nonlinear drift-diffusion equations. We show how appropriate choices of parameters yield good quantitative agreement with published experimental data concerning synaptic densities during brain development (synaptic pruning).

Samuel Johnson, Joaquín Marro, Jorge F. Mejias, Joaquín J. Torres

Fuzzy Systems

Fuzzy Logic, Soft Computing, and Applications

We survey the theoretical and practical developments of the theory of fuzzy logic and soft computing. Specifically, we briefly review the history and main milestones of fuzzy logic (in the wide sense) and the more recent development of soft computing, and finalise by presenting a panoramic view of applications: from the most abstract to the most practical ones.

Inma P. Cabrera, Pablo Cordero, Manuel Ojeda-Aciego

A Similarity-Based WAM for Bousi~Prolog




Bousi~Prolog is an extension of the standard Prolog language with an operational semantics which is an adaptation of the SLD resolution principle, where classical unification has been replaced by a fuzzy unification algorithm based on proximity relations defined on a syntactic domain. In this paper we present the structure and main features of a low level implementation for Bousi~Prolog. It consists of a compiler and an extension of the Warren Abstract Machine (WAM) able to incorporate fuzzy unification.

Pascual Julián-Iranzo, Clemente Rubio-Manzano

On the Declarative Semantics of Multi-Adjoint Logic Programs

The notion of least Herbrand model has been traditionally accepted as the declarative semantics for programs in the context of pure logic programming. Some adaptations of this concept, using model theory, have been made in recent years for a small number of fuzzy logic programming frameworks. Unfortunately, this is not the case for multi-adjoint logic programming, one of the most expressive and powerful approaches to fuzzifying logic programming. To fill this gap, in this paper we propose a declarative semantics for such fuzzy logic programs based on the so-called least fuzzy Herbrand model. We prove an important "minimality" property of our construction which cannot be trivially inherited from pure logic programming. Moreover, apart from relating our notion to other existing procedural and fixpoint semantics (which is also instrumental in proving its properties), we provide evident cases where our construction exists even when the rest of the aforementioned fuzzy semantics remain undefined.

P. Julián, G. Moreno, J. Penabad

A Complete Logic for Fuzzy Functional Dependencies over Domains with Similarity Relations

An axiomatic system for fuzzy functional dependencies is introduced. The main novelty of the system is that it is not based on the transitivity rule like all the others, but it is built around a simplification rule which allows the removal of redundancy. The axiomatic system presented here is shown to be sound and complete.

P. Cordero, M. Enciso, A. Mora, I. P. de Guzmán

RFuzzy: An Expressive Simple Fuzzy Compiler

Fuzzy reasoning is a very productive research field that during the last years has provided a number of theoretical approaches and practical implementation prototypes. Nevertheless, the classical implementations, like Fril, are not adapted to the latest formal approaches, like multi-adjoint logic semantics.

Some promising implementations, like Fuzzy Prolog, are so general that the regular user/programmer does not feel comfortable because either the representation of fuzzy concepts is complex or the results of the fuzzy queries are difficult to interpret.

In this paper we present a modern framework, RFuzzy, that models multi-adjoint logic in a practical way. It provides some extensions, such as default values (to represent missing information), partial default values (for a subset of data) and typed variables. RFuzzy represents the truth value of predicates using facts and rules, and can also define fuzzy predicates as continuous functions. Queries are answered with direct results (instead of providing complex constraints), so it is easy to use for anyone who wants to represent a problem using fuzzy reasoning in a simple way (just using the classical fuzzy representation with real numbers). The most promising characteristic of RFuzzy is that the user can obtain constructive answers to queries that restrict the truth value.

Susana Munoz-Hernandez, Victor Pablos Ceruelo, Hannes Strass

Overcoming Non-commutativity in Multi-adjoint Concept Lattices

Formal concept analysis has become an important and appealing research topic. In this paper, we present the t-concept lattice as a set of triples associated to graded tabular information interpreted in a non-commutative fuzzy logic, in order to “soften” the non-commutativity character. Moreover, we show that the common information to both (sided) concept lattices can be seen as a sublattice of the Cartesian product of both concept lattices.

Jesús Medina

Evolutionary Fuzzy Scheduler for Grid Computing

In the last few years, the Grid community has been growing very rapidly and many new components have been proposed. In this sense, the scheduler represents a very relevant element that decisively influences grid system performance. The scheduling of a set of heterogeneous, dynamically changing resources is a complex problem. Several scheduling systems have already been implemented; however, they still provide only "ad hoc" solutions to manage scheduling resources in a grid system. This paper presents a fuzzy scheduler obtained by means of evolving a previous fuzzy scheduler using the Pittsburgh approach. This new evolutionary fuzzy scheduler improves the performance of the classical scheduling system.

R. P. Prado, S. García Galán, A. J. Yuste, J. E. Muñoz Expósito, A. J. Sánchez Santiago, S. Bruque

Improving the Performance of Fuzzy Rule Based Classification Systems for Highly Imbalanced Data-Sets Using an Evolutionary Adaptive Inference System

In this contribution, we study the influence of an Evolutionary Adaptive Inference System with parametric conjunction operators on Fuzzy Rule Based Classification Systems. Specifically, we work in the context of highly imbalanced data-sets, a common scenario in real applications, since the number of examples representing one of the classes of the data-set (usually the concept of interest) is much lower than that of the other classes.

Our experimental study shows empirically that the use of the parametric conjunction operators enables simple Fuzzy Rule Based Classification Systems to enhance their performance for data-sets with a high imbalance ratio.

Alberto Fernández, María José del Jesus, Francisco Herrera

A t-Norm Based Approach to Edge Detection

In this paper we study a modification of the original method by Genyun Sun et al. [1] for edge detection based on the Law of Universal Gravitation. We analyze the effect of substituting the product by other t-norms in the calculation of the gravitational forces. We construct a fuzzy set where memberships to the edges are extracted from the magnitude of the resulting force on each pixel. Finally, we show experimentally that the features of each t-norm determine the kind of edges to be detected.

C. Lopez-Molina, H. Bustince, J. Fernández, E. Barrenechea, P. Couto, B. De Baets
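The gravity-based scheme described above can be illustrated with a minimal Python sketch (our own toy version, not the authors' implementation), assuming grayscale intensities in [0, 1] and an 8-neighbourhood, with the t-norm passed in as a parameter so the product can be swapped for, e.g., the minimum t-norm:

```python
import math

def tnorm_edge_force(img, x, y, tnorm=min):
    """Magnitude of the resultant 'gravitational' force on pixel (x, y)
    from its 8 neighbours, with the product of intensities replaced by
    a generic t-norm (minimum by default). The magnitude is then used
    as an edge membership degree; intensities are assumed in [0, 1]."""
    fx = fy = 0.0
    g = img[y][x]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= ny < len(img) and 0 <= nx < len(img[0]):
                r2 = dx * dx + dy * dy          # squared distance to neighbour
                mag = tnorm(g, img[ny][nx]) / r2  # force magnitude
                r = math.sqrt(r2)
                fx += mag * dx / r              # unit-vector components
                fy += mag * dy / r
    return math.hypot(fx, fy)
```

Swapping `tnorm=min` for `lambda a, b: a * b` recovers the product case; per the abstract, the choice of t-norm changes which edges receive high membership. In a flat region the neighbour forces cancel, while across an intensity step a net force remains.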

Evolutionary Computation and Genetic Algorithms

Applying Evolutionary Computation Methods to Formal Testing and Model Checking

Formal Methods (FM) provide mathematically founded algorithms for analyzing the correctness of systems. Though FM have been successfully applied to many industrial problems, they typically face the practical problem that the number of states to be systematically analyzed grows exponentially with the size of the system under analysis. Thus, exhaustive techniques to find system faults are typically replaced by heuristic strategies that focus the search for potential faults on suspicious or critical configurations. In particular, Evolutionary Computation (EC) methods provide efficient generic strategies to search for good solutions in large solution spaces, which fits the kind of problems appearing in FM. This paper summarizes several works where EC techniques have been applied to FM problems.

Pablo Rabanal, Ismael Rodríguez, Fernando Rubio

Applying Evolutionary Techniques to Debug Functional Programs

Selecting an appropriate test suite to detect faults in a program is a difficult task. In the case of functional languages, although there are some additional difficulties (due to the lack of state, laziness and their higher-order nature), we can also take advantage of higher-order programming to define generic ways of obtaining tests. In this paper we present a genetic algorithm to automatically select appropriate criteria to generate tests for Haskell programs. As a case study, we apply the algorithm to a program using red-black trees.

Alberto de la Encina, Mercedes Hidalgo-Herrero, Pablo Rabanal, Fernando Rubio

Aiding Test Case Generation in Temporally Constrained State Based Systems Using Genetic Algorithms

Generating test data is computationally expensive. This paper improves a framework that addresses this issue by representing the test data generation problem as an optimisation problem and uses heuristics to help generate test cases. The paper considers the temporal constraints and behaviour of a certain class of (timed) finite state machines. A very simple fitness function is defined that can be used with several evolutionary search techniques and automated test case generation tools.

Karnig Derderian, Mercedes G. Merayo, Robert M. Hierons, Manuel Núñez

Creation of Specific-to-Problem Kernel Functions for Function Approximation

Although there is a large diversity in the literature related to kernel methods, there are only a few works which do not use kernels based on Radial Basis Functions (RBF) for regression problems. The reason is that RBF kernels offer very good generalization capabilities and smooth interpolation. This paper studies an initial framework to create specific-to-problem kernels for application to regression models. The kernels are created without prior knowledge about the data to be approximated, by means of a Genetic Programming algorithm. The quality of a kernel is evaluated independently of a particular model, using a modified version of a nonparametric noise estimator. For a particular problem, the performance of the generated kernels is tested against common ones using weighted k-nn in the kernel space. Results show that the presented method produces specific-to-problem kernels that outperform the common ones for this particular case. Parallel programming is used to deal with the large computational cost.

Ginés Rubio, Héctor Pomares, Ignacio Rojas, Alberto Guillén

Combining Genetic Algorithms and Mutation Testing to Generate Test Sequences

The goal of this paper is to provide a method to generate efficient and short test suites for Finite State Machines (FSMs) by combining Genetic Algorithm (GA) techniques and mutation testing. In our framework, mutation testing is used in various ways. First, we use it to produce (faulty) systems for the GAs to learn. Second, it is used to sort the intermediate tests with respect to the number of mutants killed. Finally, it is used to measure the fitness of our tests, thereby reducing redundancy. We present an experiment to show how our approach outperforms other approaches.

Carlos Molinero, Manuel Núñez, César Andrés
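The combination just described can be sketched with a toy example (the FSMs, mutants and GA parameters below are our own illustrative assumptions, not the authors' benchmark): a GA evolves an input sequence whose fitness is the number of mutants it kills.

```python
import random

def run_fsm(delta, seq, s0=0):
    """Run a Mealy-style FSM: delta[state][input] = (next_state, output).
    Returns the output word produced by the input sequence."""
    s, out = s0, []
    for i in seq:
        s, o = delta[s][i]
        out.append(o)
    return out

def kills(seq, spec, mutant):
    """A test sequence kills a mutant if the observable outputs differ."""
    return run_fsm(spec, seq) != run_fsm(mutant, seq)

def ga_test_gen(spec, mutants, length=6, pop=20, gens=30, seed=3):
    """Toy GA: evolve a binary input sequence maximizing killed mutants."""
    rng = random.Random(seed)

    def fitness(seq):
        return sum(kills(seq, spec, m) for m in mutants)

    P = [[rng.randrange(2) for _ in range(length)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        elite = P[: pop // 2]            # keep the best half
        children = []
        for _ in range(pop - len(elite)):
            a, b = rng.sample(elite, 2)  # one-point crossover of two elites
            cut = rng.randrange(1, length)
            c = a[:cut] + b[cut:]
            if rng.random() < 0.3:       # occasional bit-flip mutation
                c[rng.randrange(length)] ^= 1
            children.append(c)
        P = elite + children
    return max(P, key=fitness)
```

Mutants here are copies of the specification with a single transition's output or target state altered, mirroring the role mutation testing plays in the paper's framework.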

Testing Restorable Systems by Using RFD

Given a finite state machine denoting the specification of a system, finding short interaction sequences capable of reaching some/all states or transitions of this machine is a typical goal in testing methods. We study the problem of finding such sequences in the case where configurations previously traversed can be restored (at some cost). Finding optimal sequences for this case is an NP-hard problem. We propose a heuristic method to approximately solve this problem based on an evolutionary computation approach, in particular River Formation Dynamics. Some experimental results are reported.

Pablo Rabanal, Ismael Rodríguez

RCGA-S/RCGA-SP Methods to Minimize the Delta Test for Regression Tasks

Frequently, the number of input variables (features) involved in a problem becomes too large to be easily handled by conventional machine-learning models. This paper introduces a combined strategy that uses a real-coded genetic algorithm to find the optimal scaling (RCGA-S) or scaling + projection (RCGA-SP) factors that minimize the Delta Test criterion for variable selection when applied to the input variables. These two methods are evaluated on five different regression datasets and their results are compared. The results confirm the effectiveness of both methods, although RCGA-SP performs clearly better than RCGA-S because it adds the possibility of projecting the input variables onto a lower-dimensional space.

Fernando Mateo, Dus̆an Sovilj, Rafael Gadea, Amaury Lendasse
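The Delta Test minimized above is a nearest-neighbour noise-variance estimator. A minimal pure-Python sketch (illustrative only; the helper names and toy data are our own, and practical implementations accelerate the neighbour search with k-d trees):

```python
def delta_test(X, y):
    """Delta Test: 0.5 * mean squared difference between each output
    and the output of its nearest neighbour in input space."""
    n = len(X)
    total = 0.0
    for i in range(n):
        best, best_d = None, float("inf")
        for j in range(n):               # brute-force nearest neighbour
            if j == i:
                continue
            d = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
            if d < best_d:
                best_d, best = d, j
        total += (y[i] - y[best]) ** 2
    return 0.5 * total / n

def scaled(X, s):
    """Apply per-variable scaling factors (the RCGA-S genome)."""
    return [[si * xi for si, xi in zip(s, row)] for row in X]
```

A genetic algorithm such as RCGA-S would evolve the scaling vector `s` to minimize `delta_test(scaled(X, s), y)`; a scaling that suppresses an irrelevant input should not increase the estimate.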

An Evolutionary Hierarchical Clustering Method with a Visual Validation Tool

In this paper, we propose a novel hierarchical clustering method based on evolutionary strategies. The method is applied to gene expression data analysis, and its effectiveness with regard to other clustering methods is shown through cluster validity measures on the results. Additionally, a novel interactive visual validation tool is provided to carry out visual analytics among clusters of a dendrogram. This interactive tool is an alternative to the validity measures used. The method introduced here attempts to solve some of the problems faced by other hierarchical methods. Finally, the results of the experiments show that the method can be very effective in cluster analysis on DNA microarray data.

José A. Castellanos-Garzón, Carlos Armando García, Luis A. Miguel-Quintales

An Adaptive Parameter Control for the Differential Evolution Algorithm

Differential Evolution is a floating-point evolutionary algorithm that has demonstrated good performance at locating the global optimum in a wide variety of problems and applications. It has mainly three tuning parameters, and their choice is fundamental to ensure good quality solutions. Because of this, adaptive and self-adaptive parameter control have been the object of research. We present a novel scheme for controlling two parameters of Differential Evolution using fitness information of the population in each generation. The algorithm shows outstanding performance on a well-known set of benchmark functions, improving on standard DE and comparing well with similar algorithms.

Gilberto Reynoso-Meza, Javier Sanchis, Xavier Blasco
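For reference, the standard DE/rand/1/bin loop the paper builds on can be sketched as follows (a generic textbook version on the sphere function, not the authors' adaptive scheme; an adaptive variant would recompute `f` and `cr` each generation from population fitness statistics):

```python
import random

def de_sphere(dim=5, pop_size=20, gens=200, f=0.6, cr=0.9, seed=1):
    """Minimal DE/rand/1/bin minimizing the sphere function, with the
    three classic control parameters (pop_size, F, CR) exposed."""
    rng = random.Random(seed)
    sphere = lambda x: sum(v * v for v in x)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [sphere(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: three distinct individuals, none equal to i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [pop[a][d] + f * (pop[b][d] - pop[c][d])
                     if (rng.random() < cr or d == jrand) else pop[i][d]
                     for d in range(dim)]
            tf = sphere(trial)
            if tf <= fit[i]:            # greedy one-to-one selection
                pop[i], fit[i] = trial, tf
    return min(fit)
```

With these (assumed) settings the best fitness drops far below the initial random values, illustrating why the choice of F and CR is worth controlling adaptively.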

Parallelizing the Design of Radial Basis Function Neural Networks by Means of Evolutionary Meta-algorithms

This work introduces SymbPar, a parallel meta-evolutionary algorithm designed to build Radial Basis Function Networks while minimizing the number of parameters that need to be set by hand. Parallelization is implemented using independent agents to evaluate every individual. Experiments on classification problems show that the new method drastically reduces the time taken by sequential algorithms, while maintaining the generalization capabilities and sizes of the nets it builds.

M. G. Arenas, E. Parras-Gutiérrez, V. M. Rivas, P. A. Castillo, M. J. Del Jesus, J. J. Merelo

A Genetic Algorithm for ANN Design, Training and Simplification

This paper proposes a new evolutionary method for generating ANNs. In this method, a simple real-number string is used to codify both the architecture and the weights of the networks. Therefore, a simple GA can be used to evolve ANNs. One of the most interesting features of the technique presented here is that the networks obtained have been optimised, having a low number of neurons and connections. This technique has been applied to one of the most widely used benchmark problems, and results show that it can obtain better results than other automatic ANN development techniques.

Daniel Rivero, Julian Dorado, Enrique Fernández-Blanco, Alejandro Pazos

Pattern Recognition

Graph-Based Representations in Pattern Recognition and Computational Intelligence

Graph theory, which used to be a purely academic discipline, is now increasingly becoming an essential part of different areas of research. This paper briefly presents new perspectives on graph-based representations applied in emerging fields such as computer vision and image processing, robotics, network analysis, web mining, chemistry, bioinformatics, sensor networks, biomedical engineering and evolutionary computation.

R. Marfil, F. Escolano, A. Bandera

Kernelization of Softassign and Motzkin-Strauss Algorithms

This paper reviews two continuous methods for graph matching: Softassign and Replicator Dynamics. These methods can be applied to non-attributed graphs, but considering only structural information results in a higher ambiguity in the possible matching solutions. In order to reduce this ambiguity, we propose to extract attributes from non-attributed graphs and embed them in the graph-matching cost function, to be used as a similarity measure between the nodes in the graphs. Then, we evaluate their performance within the reviewed graph-matching algorithms.

M. A. Lozano, F. Escolano

Connectivity Forests for Homological Analysis of Digital Volumes

In this paper, we provide a graph-based representation of the homology (information related to the different “holes” the object has) of a binary digital volume. We analyze the digital volume AT-model representation [8] from this point of view, and the cellular version of the AT-model [5] is precisely described here as three forests (connectivity forests), from which, for instance, we can straightforwardly determine representative curves of “tunnels” and “holes”, classify cycles in the complex, and compute higher (co)homology operations. Depending on the order in which we gradually construct these trees, tools as important in Computer Vision and Digital Image Processing as Reeb graphs and topological skeletons appear as results of pruning these graphs.

Pedro Real

Energy-Based Perceptual Segmentation Using an Irregular Pyramid

This paper implements a fast bottom-up approach for perceptual grouping. The proposal consists of two main stages: firstly, it detects the homogeneous blobs of the input image using a color-based distance and then, it hierarchically merges these blobs according to a set of energy functions. Both stages are performed over the Bounded Irregular Pyramid. The performance of the proposed algorithm has been quantitatively evaluated with respect to ground–truth segmentation data.

R. Marfil, F. Sandoval

Hierarchical Graphs for Data Clustering

The self-organizing map (SOM) has been used in multiple areas and constitutes an excellent tool for data mining. However, SOM has two main drawbacks: the static architecture and the lack of representation of hierarchical relations among input data. The growing hierarchical SOM (GHSOM) was proposed in order to face these difficulties. The network architecture is adapted during the learning process and provides an intuitive representation of the hierarchical relations of the data. Some limitations of this model are the static topology of the maps (2-D grids) and the large number of neurons created unnecessarily. A growing hierarchical self-organizing graph (GHSOG) based on the GHSOM is presented. The maps are graphs instead of 2-D rectangular grids, where the neurons are considered the vertices and each edge represents a neighborhood relation between neurons. This new approach provides greater plasticity and a more flexible architecture, where the arrangement of neurons is not restricted to a fixed topology, achieving a more faithful data representation. The proposed neural model has been used to build an Intrusion Detection System (IDS), where experimental results confirm its good performance.

E. J. Palomo, J. M. Ortiz-de-Lazcano-Lobato, Domingo López-Rodríguez, R. M. Luque

Real Adaboost Ensembles with Emphasized Subsampling

Multi-Net systems in general, and the Real Adaboost algorithm in particular, offer a very interesting way of designing very powerful classifiers. However, one drawback of these schemes is the large computational burden required for their construction. In this paper, we propose a new Boosting scheme which incorporates subsampling mechanisms to speed up the training of base learners and, therefore, the setup of the ensemble network. Furthermore, subsampling the training data provides additional diversity among the constituent learners, according to some of the principles exploited by Bagging approaches. Experimental results show that our method is in fact able to improve both Boosting and Bagging schemes in terms of recognition rates, while allowing significant training time reductions.

Sergio Muñoz-Romero, Jerónimo Arenas-García, Vanessa Gómez-Verdejo

Using the Negentropy Increment to Determine the Number of Clusters

We introduce a new validity index for crisp clustering that is based on the average normality of the clusters. A normal cluster is optimal in the sense of maximum uncertainty, or minimum structure, and so performing further partitions on it will not reveal additional substructures. To characterize the normality of a cluster we use the negentropy, a standard measure of distance to normality which evaluates the difference between the cluster’s entropy and the entropy of a normal distribution with the same covariance matrix. Although the definition of the negentropy involves the differential entropy, we show that it is possible to avoid its explicit computation by considering only negentropy increments with respect to the initial data distribution. The resulting negentropy increment validity index only requires the computation of determinants of covariance matrices. We have applied the index to randomly generated problems, and show that it provides better results than other indices for the assessment of the number of clusters.

Luis F. Lago-Fernández, Fernando Corbacho
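One common way to write the negentropy increment is $\Delta = \frac{1}{2}\sum_k p_k \ln|\Sigma_k| - \frac{1}{2}\ln|\Sigma_0| - \sum_k p_k \ln p_k$, where $\Sigma_k$ are the cluster covariances, $\Sigma_0$ the covariance of the whole data set, and $p_k$ the cluster proportions. A minimal 2-D Python sketch under this assumed formulation (lower values indicate a better partition, and a single cluster gives exactly zero):

```python
import math

def cov_det2(pts):
    """Determinant of the 2x2 covariance matrix of 2-D points."""
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts) / n
    syy = sum((p[1] - my) ** 2 for p in pts) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
    return sxx * syy - sxy * sxy

def negentropy_increment(partition):
    """Delta = 0.5*sum_k p_k ln|S_k| - 0.5*ln|S_0| - sum_k p_k ln p_k,
    computed from determinants of covariance matrices only."""
    data = [p for cluster in partition for p in cluster]
    n = len(data)
    delta = -0.5 * math.log(cov_det2(data))
    for cluster in partition:
        p = len(cluster) / n
        delta += 0.5 * p * math.log(cov_det2(cluster)) - p * math.log(p)
    return delta
```

Splitting two well-separated tight blobs into two clusters yields a negative increment, i.e. a better score than leaving them as one cluster.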

A Wrapper Method for Feature Selection in Multiple Classes Datasets

Feature selection algorithms should remove irrelevant and redundant features while maintaining or even improving performance, thus contributing to enhance generalization in learning models. Feature selection methods can be mainly grouped into filters and wrappers. Most of the models built can deal more or less adequately with binary problems, but often underperform on multi-class tasks. In this article, a new wrapper method called IAFN-FS (Incremental ANOVA and Functional Networks-Feature Selection) is described in its version for dealing with multiclass problems. In order to carry out the multiclass approach, two different alternatives were tried: (a) treating the multiclass problem directly; (b) dividing the original multiclass problem into several binary problems. To evaluate the performance of both approaches, a comparative study was carried out using several benchmark datasets, our two methods, and other wrappers based on classical algorithms such as C4.5 and Naive Bayes.

Noelia Sánchez-Maroño, Amparo Alonso-Betanzos, Rosa M. Calvo-Estévez

Formal Languages in Linguistics

New Challenges in the Application of Non-classical Formal Languages to Linguistics

We provide a short overview of the application of non-classical formal languages to linguistics. The paper focuses on the main achievements in formal language theory that have some impact on the description and explanation of natural language, points out the lack of mathematical tools to deal with some general branches of linguistics, and suggests several computational theories that can help to construct a computationally friendly approach to language structure.

Gemma Bel-Enguix, M. Dolores Jiménez-López

PNEPs, NEPs for Context Free Parsing: Application to Natural Language Processing

This work tests the suitability of NEPs for parsing languages. We propose PNEP, a simple extension to NEP, and a procedure to translate a grammar into a PNEP that recognizes the same language. These parsers based on NEPs do not impose any additional constraint on the structure of the grammar, which can contain all kinds of recursive, lambda or ambiguous rules. This flexibility makes the procedure especially suited for Natural Language Processing (NLP). In a first test with a simplified English grammar, we obtained a performance (linear time complexity) similar to that of the most popular syntactic parsers in the NLP area (Earley and its derivatives). All the possible derivations for ambiguous grammars were generated.

Alfonso Ortega, Emilio del Rosal, Diana Pérez, Robert Mercaş, Alexander Perekrestenko, Manuel Alfonseca

A Hyprolog Parsing Methodology for Property Grammars

Property Grammars, or PGs, belong to a new family of linguistic formalisms which view a grammar as a set of linguistic constraints, and parsing as a constraint satisfaction problem. Rigid hierarchical parsing gives way to flexible mechanisms which can handle incomplete, ambiguous or erroneous text, and are thus more adequate for new applications such as speech recognition, internet mining, controlled languages and biomedical information. The present work contributes a) a new parsing methodology for PGs in terms of Hyprolog – an extension of Prolog with linear and intuitionistic logic and with abduction; and b) a customisable extension of PGs that lets us model also concepts and relations to some degree. We exemplify within the domain of extracting concepts from biomedical text.

Veronica Dahl, Baohua Gu, Erez Maharshak

Adaptable Grammars for Non-Context-Free Languages

We consider, as an alternative to traditional approaches for describing non-context-free languages, the use of grammars in which the application of grammar rules themselves controls the creation or modification of grammar rules. This principle is shown to capture, in a concise way, standard example languages that are considered prototype representatives of non-context-free phenomena in natural languages. We define a grammar formalism with these characteristics and show how it can be implemented in logic programming in a surprisingly straightforward way given its expressive power. It is also shown how such adaptable grammars can be applied to describe meta-level architectures that include their own explicit meta-languages for defining new syntax.

Henning Christiansen

β-Reduction and Antecedent–Anaphora Relations in the Language of Acyclic Recursion

In this paper, I briefly introduce the formal language of Acyclic Recursion $L^{\lambda}_{ar}$ (see Moschovakis). Then I present the β-reduction rule of the calculus of referential synonymy in $L^{\lambda}_{ar}$ and some arguments for its restricted manifestation with respect to preserving the algorithmic sense of $L^{\lambda}_{ar}$-terms. This paper demonstrates further evidence for the need for restricted β-reduction. It also contributes to the exploration of the inherent potential of $L^{\lambda}_{ar}$ for underspecified semantic representations. I show that restricted β-reduction, in combination with “underspecified” terms, gives rise to various distinctions between antecedent–anaphora and co-denotation relations in NL.

Roussanka Loukanova

Permutation Languages in Formal Linguistics

Derivations using branch-interchanging and the language family ($\textbf{L}_{perm}$) obtained by context-free and interchange ($AB \rightarrow BA$) rules are analysed. This family is not closed under intersection with regular sets; therefore the obtained family of languages $\textbf{L}_{perm \cap reg}$ is also interesting. Some non-trivial properties are shown. Important languages of mildly context-sensitive classes are shown to belong to $\textbf{L}_{perm \cap reg}$. Closure properties and other properties are detailed. Relations to partial and semi-commutations and to parallel processes are shown.

Benedek Nagy

Agents and Multi-agent on Intelligent Systems

Thomas: Practical Applications of Agents and Multiagent Systems

This paper presents a brief summary of the contents of the special session on practical applications held in the framework of IWANN 2009. The special session has been supported by the THOMAS (TIN2006-14630-C03-03) project and aims at presenting the results obtained in the project, as well as at exchanging experience with other researchers in this field.

Javier Bajo, Juan M. Corchado

INGENIAS Development Process Assisted with Chains of Transformations

This paper presents a chain of model transformations to guide and support the application of the INGENIAS development process. The tool generates these transformations with a Model Transformation By-Example approach, that is, automatically from pairs of model prototypes. It has the advantage over similar approaches and tools of being able to generate many-to-many transformation rules between non-connected graphs of elements. The work in this paper sets the foundation for future research on software processes aided by integrated standard transformations. Two case studies illustrate the generation of transformations with the tool and its use in the process. They also show the applicability of the approach to different application domains.

Iván García-Magariño, Rubén Fuentes-Fernández, Jorge J. Gómez-Sanz

A Secure Group-Oriented Model for Multiagent Systems

In this paper, a secure group-oriented model for Multiagent Systems is presented. This model is provided as a support from the Multiagent Platform level. The benefits introduced by the model are shown applying it to an existing MAS-based application.

Jose M. Such, Juan M. Alberola, Antonio Barella, Agustin Espinosa, Ana Garcia-Fornes

Interactive Animation of Agent Formation Based on Hopfield Neural Networks

Formation of agents is of recent interest in computer science, robotics and control systems. Several goal formation strategies may be of interest according to the sensory capabilities of the agents. This paper addresses the formation of mobile agents in absolute positioning without order. A control system based on Hopfield neural networks is proposed. The paper summarizes the control system and describes a JAVA-based application developed to visualize its behavior. This interactive animation tool improves the understanding of, and intuition for, a number of aspects of agent formation, such as agent dynamics. The tool allows direct manipulation of graphical representations of the system, such as the initial configuration of the agents, with instant feedback on the effects.

Rafael Kelly, Carmen Monroy

The INGENIAS Development Kit: A Practical Application for Crisis-Management

The INGENIAS Development Kit (IDK) supports the development of fully functional Multi-agent Systems (MASs) from specification models, following a model-driven approach. This paper presents a practical crisis-management application in order to provide a full example of the application of the IDK tool; consequently, the specification and the code of this system are included in the IDK 2.8 distribution. The presented application manages a crisis situation in a city in which a poisonous material is released and the central services are not enough to heal all the affected people. The software engineering process of the presented MAS application covers the following phases: specification, model design, implementation, and testing. Both the number of interactions and the number of participants are minimized in order to increase network efficiency, given the real-time requirements of the MAS application.

Iván García-Magariño, Celia Gutiérrez, Rubén Fuentes-Fernández

The Delphi Process Applied to African Traditional Medicine

African Traditional Medicine (ATM) has been applied over the years among the ethnic groups of the African continent. The experts of this area have preserved this knowledge by oral transmission. This knowledge varies from one ethnic group to another, and is thus distributed among them. In this context, this work proposes a multi-agent system to manage this distributed knowledge. The presented approach uses the Delphi process, in which several agents represent ATM healers and participate in several rounds of questionnaires in order to reach a consensus on a treatment for a patient.

Ghislain Atemezing, Iván García-Magariño, Juan Pavón

Composing and Ensuring Time-Bounded Agent Services

There are situations where an agent needs to compose several services to achieve its goals. Moreover, if these goals must be fulfilled before a deadline, the service composition problem becomes more complex. In this paper a multi-agent framework is presented that deals with service composition considering service execution time, taking into account the availability and the workload of the agent that offers the service.

Martí Navarro, Elena del Val, Miguel Rebollo, Vicente Julián

An Organisation-Based Multiagent System for Medical Emergency Assistance

In this paper we present a demonstrator application for a real-world m-Health scenario: mobile medical emergency management in an urban area. Medical emergencies have a high priority given the potential life risk to patients, and the use of advanced applications that support the different actors involved can improve the usage of appropriate resources within an acceptable response time. The demonstrator is implemented using the THOMAS approach to open multiagent systems, based on an organisational metaphor. This metaphor is very suitable for capturing the nature and complexity of the mobile health domain and thus provides an appropriate mechanism for developing next-generation m-Health applications.

Roberto Centeno, Moser Fagundes, Holger Billhardt, Sascha Ossowski, Juan Manuel Corchado, Vicente Julian, Alberto Fernandez

TEMMAS: The Electricity Market Multi-Agent Simulator

This paper presents the multi-agent TEMMAS simulator of an artificial electric power market populated with learning agents. The simulator facilitates the integration of two modeling constructs: i) the specification of the environmental physical market properties, and ii) the modeling of the decision-making (deliberative) and reactive agents.

Paulo Trigo, Paulo Marques, Helder Coelho

Two Steps Reinforcement Learning in Continuous Reinforcement Learning Tasks

Two Steps Reinforcement Learning is a technique that combines an iterative refinement of a Q function estimator, which can be used to obtain a state space discretization, with classical reinforcement learning algorithms like Q-learning or Sarsa. However, the method requires a discrete reward function that permits learning an approximation of the Q function using classification algorithms, while many domains have continuous reward functions that can only be tackled by discretizing the rewards. In this paper we propose solutions to this problem using discretization and regression methods. We demonstrate the usefulness of the resulting approach to improve the learning process in the Keepaway domain. We compare the obtained results with other techniques like VQQL and CMAC.

Iván López-Bueno, Javier García, Fernando Fernández
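The interplay between a state-space discretization and a classical learner like Q-learning can be illustrated on a toy 1-D task (an assumed stand-in for Keepaway, with a fixed uniform discretization playing the role of the learned, Q-function-based discretization the paper refines):

```python
import random

def discretize(x, bins=10):
    """Map a continuous state in [0, 1) to one of `bins` intervals."""
    return min(int(x * bins), bins - 1)

def q_learning(episodes=3000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a 1-D walk: start near the left end,
    actions move +/-0.1, reward 1 on crossing the right end. The
    continuous state is handled entirely through discretize()."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(10)]  # q[state][action]: 0=left, 1=right
    for _ in range(episodes):
        x = 0.05
        for _ in range(100):
            s = discretize(x)
            # epsilon-greedy action selection (ties broken toward right)
            a = rng.randrange(2) if rng.random() < eps else (0 if q[s][0] > q[s][1] else 1)
            x2 = max(0.0, x + (0.1 if a == 1 else -0.1))
            if x2 >= 0.999:              # terminal: goal reached, reward 1
                q[s][a] += alpha * (1.0 - q[s][a])
                break
            s2 = discretize(x2)
            q[s][a] += alpha * (gamma * max(q[s2]) - q[s][a])
            x = x2
    return q
```

After training, the greedy policy in every discretized state prefers the action that moves toward the rewarded end, showing that the discrete Q-table suffices for this continuous-state task.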

Multi-Agent System Theory for Modelling a Home Automation System

A paradigm for modelling and analysing Home Automation Systems is introduced, based on Multi-Agent System theory. A rich and versatile environment for Home Automation System simulation is constructed for investigating the performances of the system. The exploitation of limited resources (electricity, gas, hot water) depends on behavioural parameters of the individual appliances. In order to deal with the problem of developing systematic design and validation procedures for control strategies, global performance indices for the system are introduced. Different strategies for allocating resources and establishing priorities in their use can therefore be tested and compared.

G. Morganti, A. M. Perdon, G. Conte, D. Scaradozzi

THOMAS-MALL: A Multiagent System for Shopping and Guidance in Malls

This article presents a case study in which the THOMAS architecture is applied in order to obtain a multi-agent system (MAS) that can provide recommendations and guidance in a shopping mall. THOMAS is made up of a group of related modules that are well-suited for developing systems in other highly volatile environments similar to a shopping mall. Because the development of this type of system is complex, it is essential to thoroughly analyze the intrinsic characteristics of typical environment applications, and to design all of the system components at a very high level of abstraction.

S. Rodríguez, A. Fernández, V. Julián, J. M. Corchado, S. Ossowski, V. Botti

Multiagent-Based Educational Environment for Dependents

This paper presents a multiagent architecture that facilitates active learning in educational environments for dependents. The multiagent architecture incorporates agents that can be executed on mobile devices and that facilitate language learning in a ubiquitous way. Moreover, the agents are specifically designed to provide advanced interfaces for elderly and dependent people. The architecture has been tested in a real environment and the preliminary results obtained are presented within this paper.

Antonia Macarro, Alberto Pedrero, Juan A. Fraile

Social and Cognitive System for Learning Negotiation Strategies with Incomplete Information

Finding an adequate negotiation strategy (one leading to solutions acceptable to both parties) with incomplete information for autonomous agents, even in one-to-one negotiation, is a complex problem. The first part of this paper aims to develop negotiation strategies for autonomous agents with incomplete information, where negotiation behaviors based on time-dependent behaviors are suggested to be used in combination (inspired by empirical human negotiation research). The suggested combination allows agents to improve the negotiation process in terms of agent utilities, the number of rounds to reach an agreement, and the percentage of agreements. The second part aims to develop a social and cognitive system for learning negotiation strategies from interaction, where conciliatory, neutral, or aggressive characters are suggested to be integrated in negotiation behaviors (inspired by works analyzing human behavior and by social negotiation psychology). The suggested strategy displays the ability to provide agents, through a basic buying strategy, with a first level of intelligence in a social and cognitive system to learn from interaction (human-agent or agent-agent).

Amine Chohra, Arash Bahrammirzaee, Kurosh Madani

Evaluation of Multi-Agent System Communication in INGENIAS

This paper proposes a corpus of metrics to evaluate the balance of communications in multi-agent systems (MASs). The hypothesis of this paper is that these metrics are strongly related to the quality of service of MASs. In addition, some classification rules are provided to classify agents according to the metrics, so that an origin of low quality of service in a MAS can be detected. Detecting this origin is the first step for debugging and improving communication in MASs. The experimentation in this work uses the INGENIAS Development Kit, because it supports the development of fully functional MASs from specification models. As a proof of concept, this work measures two variants of a MAS for managing a crisis situation in a city and shows the relationship between the proposed metrics and the quality of service.

Celia Gutiérez, Iván García-Magariño, Jorge J. Gómez-Sanz

Agents Jumping in the Air: Dream or Reality?

Mobile agent technology has traditionally been recognized as a very useful approach to build applications for mobile computing and wireless environments. However, only a few studies report practical experiences with mobile agents in a wireless medium. This leads us to the following question: is mobile agent technology ready to be used in this environment?

In this paper, we study existing mobile agent platforms by analyzing if they could be used effectively in a wireless medium. We identify some key missing features in the platforms and highlight the requirements and challenges that lie ahead. With this work, we expose existing problems and hope to motivate further research in the area.

Oscar Urra, Sergio Ilarri, Eduardo Mena

Using Scenarios to Draft the Support of Intelligent Tools for Frail Elders in the SHARE-it Approach

A scenario is an example narrative description of typical interactions of users with the system. Scenarios are concrete, informal descriptions, and are not intended to describe all possible interactions. This document describes the scenarios used in the SHARE-it project. These scenarios have several purposes. The main purpose of scenarios is to provide a communication tool between the user experts and the technology experts. As scenarios describe the actual deployment of the system, they encapsulate both knowledge about how the system can be useful and what is technically feasible. Each scenario in this document has the following items: Purpose of the scenario, User Description, Narrative Scenario, Structure of the scenario, Roles of the Agents, Role of Communication.

R. Annicchiarico, F. Campana, A. Federici, C. Barrué, U. Cortés, A. Villar, C. Caltagirone

On the Road to an Abstract Architecture for Open Virtual Organizations

The paper presents an abstract architecture specifically addressed to the design of open multi-agent systems (MAS) and virtual organizations. The proposal takes into account all the components needed to cover the characteristics and needs of systems of this kind. This architecture has been called THOMAS (MeTHods, Techniques and Tools for Open Multi-Agent Systems) and consists of a set of modular services.

M. Rebollo, A. Giret, E. Argente, C. Carrascosa, J. M. Corchado, A. Fernandez, V. Julian

Brain-Computer Interface (BCI)

Using Rest Class and Control Paradigms for Brain Computer Interfacing

The use of electroencephalography (EEG) for brain-computer interfacing provides a cost-efficient, safe, portable and easy-to-use BCI for both healthy users and the disabled. This paper first briefly reviews some of the current challenges in BCI research and then discusses two of them in more detail, namely modeling the “no command” (rest) state and the use of control paradigms in BCI. For effective prosthetic control of a BCI system, or when employing a BCI as an additional control channel for gaming or other generic man-machine interfacing, a user should not be required to be continuously in an active state, as is current practice. In our approach, the signals are first transduced by computing Gaussian probability distributions of signal features for each mental state; a prior distribution of the idle state is then inferred and subsequently adapted during use of the BCI. We furthermore investigate the effectiveness of introducing an intermediary state between state probabilities and interface command, driven by a dynamic control law, and outline the strategies used by two subjects to achieve idle-state BCI control.

Siamac Fazli, Márton Danóczy, Florin Popescu, Benjamin Blankertz, Klaus-Robert Müller

The Training Issue in Brain-Computer Interface: A Multi-disciplinary Field

A tough question to address is who can be considered the inventor of the brain-computer interface. Many researchers have contributed to the evolution of this concept, from the basic conception of neurons, synaptic connections and rudimentary EEG measurement systems up to the use of virtual reality environments with modern wireless devices. In between stand generations of devoted people with the honourable aim of providing a better quality of life to disabled patients. In the course of this evolution a key point is the use of efficient techniques of mutual training for both the patient and the system. In this paper we introduce the multi-disciplinary nature of brain-computer interfaces, and later focus on the training techniques used in those based on the sensorimotor rhythm.

Ricardo Ron-Angevin, Miguel Angel Lopez, Francisco Pelayo

A Maxmin Approach to Optimize Spatial Filters for EEG Single-Trial Classification

Electroencephalographic single-trial analysis requires methods that are robust with respect to noise, artifacts and non-stationarity among other problems. This work contributes by developing a maxmin approach to robustify the common spatial patterns (CSP) algorithm. By optimizing the worst-case objective function within a prefixed set of the covariance matrices, we can transform the respective complex mathematical program into a simple generalized eigenvalue problem and thus obtain robust spatial filters very efficiently. We test our maxmin CSP method with real world brain-computer interface (BCI) data sets in which we expect substantial fluctuations caused by day-to-day or paradigm-to-paradigm variability or different forms of stimuli. The results clearly show that the proposed method significantly improves the classical CSP approach in multiple BCI scenarios.

Motoaki Kawanabe, Carmen Vidaurre, Benjamin Blankertz, Klaus-Robert Müller
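The maxmin robustification itself is beyond a short sketch, but the classical CSP step it builds on reduces to a generalized eigenvalue problem on two class covariance matrices. A minimal, illustrative NumPy version (the function, parameters and toy data are ours, not the authors' implementation):

```python
import numpy as np

def csp_filters(cov_a, cov_b, n_filters=2):
    """Classical CSP as a generalized eigenvalue problem.

    Finds spatial filters w maximizing w' Ca w / w' (Ca + Cb) w by
    whitening with the composite covariance and eigendecomposing.
    """
    evals, evecs = np.linalg.eigh(cov_a + cov_b)
    whitener = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, v = np.linalg.eigh(whitener @ cov_a @ whitener.T)
    w = whitener @ v              # back-project filters to sensor space
    # Eigenvectors at both ends of the spectrum discriminate best.
    return np.hstack([w[:, :n_filters], w[:, -n_filters:]])

# Toy symmetric positive-definite "class covariances" (8 channels).
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8))
cov_a = A @ A.T + 8 * np.eye(8)
cov_b = B @ B.T + 8 * np.eye(8)
W = csp_filters(cov_a, cov_b)
print(W.shape)  # (8, 4)
```

The maxmin variant of the paper replaces the fixed pair (cov_a, cov_b) by a worst-case optimization over a set of covariance matrices, but still ends in a generalized eigenvalue problem of this form.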

Multiple AM Modulated Visual Stimuli in Brain-Computer Interface

Steady-state visual evoked potential (SSVEP) based brain-computer interfaces (BCIs) use the spectral power at the flickering frequencies of the stimuli as the feature for classification in an Attend/Ignore multi-class paradigm. The performance of a BCI based on this principle increases with the number of stimuli. However, the number of usable flickering frequencies is limited due to several factors. Besides, the frequency response of SSVEPs is not uniform over the full range of frequencies and varies among individuals, making it difficult to establish accurate decision boundaries. In this paper we propose a new technique to overcome this limitation, based on AM modulation of the flickering stimuli, which reuses the same modulating frequencies with different phases.

M. -A. Lopez, H. Pomares, A. Prieto, F. Pelayo

A Brain-Computer Interface Based on Steady State Visual Evoked Potentials for Controlling a Robot

In this paper a brain-computer interface (BCI) based on steady-state visual evoked potentials (SSVEPs) is presented. For stimulation, a box equipped with LEDs (for forward, backward, left and right commands) is used that flicker at different frequencies (10, 11, 12, 13 Hz) to induce the SSVEPs. Eight channels of EEG were derived, mostly over the visual cortex, for the experiment with three subjects. To calculate features and to classify the EEG data, Minimum Energy and the Fast Fourier Transform with linear discriminant analysis were used. Finally, the change rate (fluctuation of the classification result) and the majority weight were calculated to increase robustness and to provide a null classification. As feedback, a tiny robot was used that moved forward, backward, to the left and to the right, and stopped if the subject did not look at the stimulation LEDs.

Robert Prueckl, Christoph Guger
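As a rough illustration of the FFT-based feature step described above (omitting the Minimum Energy, LDA, change-rate and majority-weight stages; the sampling rate and toy signal are our assumptions, not the paper's setup):

```python
import numpy as np

STIM_FREQS = [10.0, 11.0, 12.0, 13.0]  # Hz, as in the paper
FS = 256.0                             # sampling rate (illustrative)

def ssvep_classify(eeg, fs=FS, freqs=STIM_FREQS):
    """Pick the stimulation frequency with the highest spectral power.

    eeg: (n_samples, n_channels) array; power is averaged over channels.
    """
    spectrum = np.abs(np.fft.rfft(eeg, axis=0)) ** 2
    bins = np.fft.rfftfreq(eeg.shape[0], d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(bins - f))].mean() for f in freqs]
    return int(np.argmax(powers))

# Synthetic check: a 12 Hz sinusoid plus noise should yield class index 2.
t = np.arange(0, 2.0, 1.0 / FS)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 12.0 * t)[:, None] + 0.1 * rng.standard_normal((t.size, 8))
print(ssvep_classify(eeg))  # 2
```

A real system would add the robustness stages above before mapping the class index to a robot command (forward, backward, left, right, or stop).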

Asynchronous Brain-Computer Interface to Navigate in Virtual Environments Using One Motor Imagery

A Brain-Computer Interface (BCI) application focused on the control of a wheelchair must consider the danger which a wrong command would involve in a real situation. Virtual reality is a suitable tool to provide subjects with the opportunity to train and test the application before using it under real conditions. Recent studies aimed at such control let the subject decide the timing of the interaction, those are the so-called asynchronous BCI. One way to reduce the probability of misclassification is to achieve control with only two different mental tasks. The system presented in this paper combines the mentioned advantages in a paradigm that enables the control of a virtual wheelchair with three commands: move forward, turn left and turn right. The results obtained over three subjects support the viability of the proposed system.

Francisco Velasco-Álvarez, Ricardo Ron-Angevin

Impact of Frequency Selection on LCD Screens for SSVEP Based Brain-Computer Interfaces

In this work, the high impact of an appropriate selection of visual stimuli on liquid crystal displays (LCDs) used for brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs) has been confirmed. The number of suitable frequencies on a standard LCD monitor is limited by the vertical refresh rate of 60 Hz and the number of simultaneously used stimuli. Two sets of frequencies have been compared with each other during an on-line spelling task with the Bremen-BCI system in a study with 10 healthy subjects. This work is meaningful for the practical design of LCD-based BCIs. In this study, appropriate selection of visual stimuli results in a 40% change in BCI literacy under otherwise equal conditions.

Ivan Volosyak, Hubert Cecotti, Axel Gräser
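The refresh-rate constraint mentioned above can be made concrete: on a 60 Hz LCD, only flicker frequencies whose half-period spans a whole number of frames can be rendered without timing errors. A small enumeration (assuming a symmetric on/off pattern; this sketch is ours, not the paper's method):

```python
REFRESH_HZ = 60

def realizable_freqs(refresh=REFRESH_HZ, max_half_period=10):
    """Flicker frequencies renderable without frame dropping.

    A stimulus shown for k frames and hidden for k frames flickers at
    refresh / (2 * k) Hz, so only these values are exactly realizable.
    """
    return [refresh / (2 * k) for k in range(1, max_half_period + 1)]

print(realizable_freqs()[:6])  # [30.0, 15.0, 10.0, 7.5, 6.0, 5.0]
```

This is why the set of exactly realizable frequencies on a 60 Hz display is so small, and why the choice among them matters for SSVEP performance.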

Multiobjective Optimization

Multiobjective Evolutionary Algorithms: Applications in Real Problems

The concept of optimization refers to the process of finding one or more feasible solutions of a problem which correspond to the extreme values (either maximum or minimum) of one or more objective functions. Initial approaches to optimization focused on solving problems involving only one objective. However, as most real-world optimization problems involve many objectives, research in this area has rapidly broadened its attention to encompass what has been called multi-objective optimization.

Antonio Berlanga, Jesús García Herrero, José Manuel Molina
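The core notion underlying the multi-objective formulation described above is Pareto dominance. A minimal, self-contained illustration (minimization; the toy points are ours):

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (3, 3) is dominated by (2, 2); the others trade off the two objectives.
pts = [(1, 5), (2, 2), (4, 1), (3, 3)]
print(pareto_front(pts))  # [(1, 5), (2, 2), (4, 1)]
```

Multi-objective evolutionary algorithms such as those surveyed in this paper return an approximation of this non-dominated set rather than a single optimum.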

Evolutionary Genetic Algorithms in a Constraint Satisfaction Problem: Puzzle Eternity II

This paper evaluates a genetic algorithm and a multiobjective evolutionary algorithm in a constraint satisfaction problem (CSP). The problem that has been chosen is the Eternity II puzzle (E2), an edge-matching puzzle. The objective is to analyze the results and the convergence of both algorithms in a problem that is not purely multiobjective but that can be split into multiple related objectives. For the genetic algorithm two different fitness functions will be used, the first one as the score of the puzzle and the second one as a combination of the multiobjective algorithm objectives.

Jorge Muñoz, German Gutierrez, Araceli Sanchis

Multiobjective Algorithms Hybridization to Optimize Broadcasting Parameters in Mobile Ad-Hoc Networks

The aim of this paper is to study the hybridization of two multi-objective algorithms in the context of a real problem, the MANET broadcasting problem. The algorithms studied are Multi-Objective Particle Swarm Optimization (MOPSO) and a new multiobjective algorithm based on the combination of NSGA-II with Evolution Strategies (ESN). This work analyzes the improvement in the Pareto fronts produced by hybridization compared with the non-hybridized algorithms. The purpose of this work is to validate how the hybridization of two evolutionary algorithms of different families may help to solve certain problems in the context of MANETs. The hybridization used in this work consists of a sequential execution of the two algorithms, using the final population of the first algorithm as the initial population of the second one.

Sandra García, Cristóbal Luque, Alejandro Cervantes, Inés M. Galván

Application Synthesis for MPSoCs Implementation Using Multiobjective Optimization

Networks-on-chip (NoCs) are considered the next generation of communication infrastructure for multiprocessor systems-on-chip (MPSoCs). In the platform-based methodology, an application is implemented by a set of collaborating intellectual property (IP) blocks. In this paper, we use NSGA-II and microGA to yield efficient topological mappings of pre-selected sets of IPs onto the tiles of a mesh-based NoC. Each IP is associated with the processor that best implements its functionality. The IP mapping optimization is driven by the area occupied, execution time and power consumption.

Marcus Vinícius Carvalho da Silva, Nadia Nedjah, Luiza de Macedo Mourelle

Multi Objective Optimization Algorithm Based on Neural Networks Inversion

A new neural network–based multiobjective optimization approach is presented, which approximates the direct problem by means of a neural network and solves the inverse problem by inverting the neural network itself, namely by imposing the value of the desired objective functions and searching for the corresponding values of the design parameters. The search for the Pareto front can be performed directly in the objective space, rather than in the design-parameter space, which allows both uniformly sampling the Pareto front and limiting the computational load. Inverting a neural network corresponds to finding the intersection of non-convex domains. The proposed inversion algorithm exploits the algorithms available for linear domains by iteratively evaluating linear approximations of the non-linear domains, improving the convergence properties. To demonstrate the procedure and the performance of the neural network–based approach, the problem of the optimal configuration of an electromagnetic device is selected for analysis and discussion.

Sara Carcangiu, Alessandra Fanni, Augusto Montisci

EMORBFN: An Evolutionary Multiobjective Optimization Algorithm for RBFN Design

In this paper a multiobjective optimization algorithm for the design of Radial Basis Function Networks (RBFNs) is proposed. The goal of the design algorithm is to obtain networks with a good tradeoff between accuracy and complexity, overcoming the drawbacks of traditional single-objective evolutionary algorithms. The main features of EMORBFN are a selection mechanism based on NSGA-II and specialized operators. To test the behavior of EMORBFN, a similar mono-objective optimization algorithm for RBFN design has been developed. C4.5, a Multilayer Perceptron network and an incremental method for RBFN design have also been included in the comparison. Experimental results on six UCI datasets show that EMORBFN obtains networks with high accuracy and low complexity, outperforming other more mature methods.

Pedro L. López, Antonio J. Rivera, M. Dolores Pérez-Godoy, María J. del Jesus, Cristóbal Carmona

Performance Measures for Dynamic Multi-Objective Optimization

As most of the performance measures proposed in the literature for dynamic optimization algorithms are only for single-objective problems, we propose new measures for dynamic multi-objective problems. Specifically, we give new measures for those problems in which the Pareto fronts are unknown. As these problems are the most common in industry, our proposed measures constitute an important contribution to promoting further research on these problems.

Mario Cámara, Julio Ortega, Francisco de Toro


Methods for Artificial Evolution of Truly Cooperative Robots

Cooperation applies to situations where two or more individuals obtain a net benefit by working together. Cooperation is widespread in nature and takes several forms, ranging from behavioral coordination to the sacrifice of one's own life for the benefit of the group. This latter form of cooperation is known as “true cooperation”, or “altruism”, and is found in only a few cases. Truly cooperative robots would be very useful in conditions where unpredictable events may require costly actions by individual robots for the success of the mission. However, the interactions among robots sharing the same environment can affect the behavior of individual robots in unexpected ways, making it very difficult to design rules that produce stable cooperative behavior. It is thus interesting to examine under which conditions truly cooperative behavior evolves in nature and how those conditions can be translated into evolutionary algorithms applicable to a wide range of robotic situations.

Dario Floreano, Laurent Keller

Social Robot Paradigms: An Overview

This article overviews the current and diverse paradigms of the social robot. We discuss and contextualize key aspects of the mechanisms used by this type of robot to relate with humans, other robots and its surrounding social environment: interaction, communication, physical and environmental embodiment, and autonomy. We enrich current definitions and discuss the crucial role of the purpose of this type of robot within present-day society.

Sergi del Moral, Diego Pardo, Cecilio Angulo

A Dual Graph Pyramid Approach to Grid-Based and Topological Maps Integration for Mobile Robotics

A pyramid is a hierarchy of successively reduced graphs which represents the contents of a base graph at multiple levels of abstraction. The efficiency of the pyramid in representing information is strongly influenced by the graph selected to encode the information within each pyramid level (data structure) and the scheme used to build one graph from the graph below (decimation process). In this paper, the dual graph data structure and the maximal independent edge set (MIES) decimation process are applied in the context of robot navigation. The aim is to integrate the grid-based and topological paradigms for map building. In this proposal, dual graphs make it possible to correctly represent the embedding of the topological map into the metric one.

J. M. Pérez-Lorenzo, R. Vázquez-Martín, E. Antúnez, A. Bandera

Integrating Graph-Based Vision Perception to Spoken Conversation in Human-Robot Interaction

In this paper we present the integration of graph-based visual perception with spoken conversation in human-robot interaction. The proposed architecture has a dialogue manager as the central component for the multimodal interaction, which directs the robot's behavior in terms of the intentions and actions associated with the conversational situations. We tested these ideas on a mobile robot programmed to act as a visitor's guide to our department of computer science.

Wendy Aguilar, Luis A. Pineda

From Vision Sensor to Actuators, Spike Based Robot Control through Address-Event-Representation

One field of neuroscience is neuroinformatics, whose aim is to develop auto-reconfigurable systems that mimic the human body and brain. In this paper we present a neuro-inspired, spike-based mobile robot, ranging from cheap commercial vision sensors whose output is converted into spike information, through spike filtering for object recognition, to spike-based motor control models. A two-wheel mobile robot powered by DC motors can be autonomously controlled to follow a line drawn on the floor. This spike system has been developed around the well-known Address-Event-Representation (AER) mechanism to communicate between the different neuro-inspired layers of the system. The RTC lab has developed all the components presented in this work, from the vision sensor to the robot platform and the FPGA-based platforms for AER processing.

A. Jimenez-Fernandez, C. Lujan-Martinez, R. Paz-Vicente, A. Linares-Barranco, G. Jimenez, A. Civit

Automatic Generation of Biped Walk Behavior Using Genetic Algorithms

Controlling a biped robot with several degrees of freedom is a challenging task that has attracted the attention of researchers in the fields of biology, physics, electronics, computer science and mechanics. For a humanoid robot to perform in complex environments, fast, stable and adaptive behaviors are required. This paper proposes a solution for the automatic generation of a walking gait using genetic algorithms (GAs). A method based on partial Fourier series was developed for joint trajectory planning. GAs were then used for offline generation of the parameters that define the gait. GAs proved to be a powerful method for the automatic generation of humanoid behaviors, resulting in a forward walking velocity of 0.51 m/s, which is a good result compared with those of the three best teams of the RoboCup 3D simulation league for the same movement.

Hugo Picado, Marcos Gestal, Nuno Lau, Luis P. Reis, Ana M. Tomé
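The joint-trajectory parameterization described above can be sketched as a truncated Fourier series whose coefficients a GA would evolve per joint (the function signature and all numbers below are illustrative only, not the paper's values):

```python
import math

def joint_angle(t, a0, coeffs, period=1.0):
    """Partial Fourier series for a periodic joint trajectory:
    theta(t) = a0 + sum_n [a_n sin(2*pi*n*t/T) + b_n cos(2*pi*n*t/T)].
    A GA would tune a0, the (a_n, b_n) pairs and the period T for each joint.
    """
    w = 2.0 * math.pi / period
    return a0 + sum(a * math.sin(n * w * t) + b * math.cos(n * w * t)
                    for n, (a, b) in enumerate(coeffs, start=1))

# A two-term series for one hypothetical hip joint, sampled at t = 0.25 s.
print(round(joint_angle(0.25, 10.0, [(20.0, 0.0), (5.0, 0.0)]), 3))  # 30.0
```

The GA fitness would then score the gait (e.g., distance walked before falling in simulation), closing the loop between the parameter vector and the resulting behavior.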

Motion Planning of a Non-holonomic Vehicle in a Real Environment by Reinforcement Learning*

In this work, we present a new algorithm that obtains a minimum-time solution in real time to the optimal motion planning of a non-holonomic vehicle. The new algorithm is based on the combination of Cell-Mapping and reinforcement learning techniques. While the algorithm is performed on the vehicle, it learns the vehicle's kinematics and dynamics from experience, with no need for a mathematical model. The algorithm uses a transformation of the cell-to-cell transitions in order to reduce the time spent learning the vehicle's parameters. The presented results have been obtained by executing the algorithm with the real vehicle and generating different trajectories to specific goals.

M. Gómez, L. Gayarre, T. Martínez-Marín, S. Sánchez, D. Meziat


Applications in Bio-informatics and Biomedical Engineering

In this paper, an overview of the main topics presented in the special session on bioinformatics and biomedical engineering is given. Bioinformatics consists of two subfields: the development of computational tools and databases, and the application of these tools and databases to generating biological knowledge to better understand living systems, the main subjects being genomics and proteomics. A closely related area comprises problems in medicine and biomedical engineering that require the participation of computer technologies and intelligent systems. The evolution of both disciplines, analyzed through the number of publications in the bibliography during the last twenty years, is presented.

I. Rojas, H. Pomares, O. Valenzuela, J. L. Bernier

Large-Scale Genomics Studies Conducted with Batch-Learning SOM Utilizing High-Performance Supercomputers

The Self-Organizing Map (SOM) developed by Kohonen's group is an effective tool for clustering and visualizing high-dimensional complex data on a two-dimensional map. We previously adapted the conventional SOM to genome informatics, making the learning process and resulting map independent of the order of data input. The resulting BLSOM, developed on the basis of batch-learning SOM, became suitable for high-performance parallel computing on supercomputers. BLSOM revealed phylotype-specific characteristics of the oligonucleotide frequencies occurring in genome sequences and thus permitted clustering (self-organization) of genome fragments (e.g., 10 kb) according to phylotype, without phylogenetic information during the BLSOM learning. Using the high-performance supercomputer “the Earth Simulator”, almost all prokaryotic, eukaryotic and viral sequences currently available could be classified according to phylotype on a single map. Using this large-scale BLSOM, the phylotypes of a large number of genomic fragments obtained by metagenome analyses of environmental samples could be predicted.

Takashi Abe, Yuta Hamano, Shigehiko Kanaya, Kennosuke Wada, Toshimichi Ikemura
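A minimal sketch of the batch-learning update that makes BLSOM independent of input order (1-D map, Gaussian neighborhood; purely illustrative, not the authors' parallel implementation):

```python
import numpy as np

def blsom_epoch(weights, data, sigma=1.0):
    """One batch-learning SOM epoch: order-independent weight update.

    Every sample is first assigned to its best-matching unit (BMU);
    the weights are then replaced by neighborhood-weighted means of the
    assigned samples, so the result does not depend on the order in
    which the data are presented.
    """
    n_units = weights.shape[0]
    # BMU index for every sample (squared Euclidean distance).
    bmu = np.argmin(((data[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
    grid = np.arange(n_units)                      # 1-D map for simplicity
    h = np.exp(-((grid[None, :] - bmu[:, None]) ** 2) / (2 * sigma ** 2))
    return (h.T @ data) / h.sum(axis=0)[:, None]   # (n_units, dim)

rng = np.random.default_rng(2)
data = rng.standard_normal((100, 4))
w = blsom_epoch(rng.standard_normal((10, 4)), data)
print(w.shape)  # (10, 4)
```

Because each epoch is a single reduction over all samples, the update parallelizes naturally, which is what makes the approach suitable for supercomputers such as the Earth Simulator.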

Clustering Method to Identify Gene Sets with Similar Expression Profiles in Adjacent Chromosomal Regions

The analysis of transcriptional data accounting for the chromosomal locations of genes can be applied to detecting gene sets sharing similar expression profiles in an adjacent chromosomal region. In this paper, we propose a new distance measure that integrates expression profiles with chromosomal locations. The performance of the proposed distance measure is evaluated via a bootstrap resampling procedure. We applied the proposed method to microarray data from the Drosophila genome and identified the set of genes of the Toll and Imd pathways in adjacent chromosomal regions. Not only does the proposed method give stronger biological meaning to the clustering result, it also provides biologically meaningful gene sets.

Min A. Jhun, Taesung Park

On Selecting the Best Pre-processing Method for Affymetrix Genechips

Affymetrix high-density oligonucleotide expression arrays, also known as Affymetrix GeneChips, are widely used for the high-throughput assessment of the expression of thousands of genes simultaneously. Although disputed by several authors, there are non-biological variations and systematic biases that must be removed as much as possible before an absolute expression level for every gene is assessed. Several pre-processing methods are available in the literature; five common ones (RMA, GCRMA, MAS5, dChip and VSN) and two customized Loess methods are benchmarked in terms of data variability, similarity of data distributions and correlation coefficients among replicated slides in a variety of real examples. In addition, we examine how variant and invariant genes can influence pre-processing performance.

J. P. Florido, H. Pomares, I. Rojas, J. C. Calvo, J. M. Urquiza, M. Gonzalo Claros

Method for Prediction of Protein-Protein Interactions in Yeast Using Genomics/Proteomics Information and Feature Selection

Protein-protein interaction (PPI) prediction is one of the main goals of current proteomics. This work presents a method for the prediction of protein-protein interactions using a classification technique known as Support Vector Machines (SVMs). The dataset considered is a set of positive and negative examples taken from a high-reliability source, from which we extracted a set of genomic features, proposing a similarity measure. Feature selection was performed to obtain the most relevant variables, through a modified method derived from other feature selection methods for classification. Using the selected subset of features, we constructed a support vector classifier that obtains values of specificity and sensitivity higher than 90% in the prediction of PPIs, and also provides a confidence score for the interaction prediction of each pair of proteins.

J. M. Urquiza, I. Rojas, H. Pomares, J. P. Florido, G. Rubio, L. J. Herrera, J. C. Calvo, J. Ortega

Protein Structure Prediction by Evolutionary Multi-objective Optimization: Search Space Reduction by Using Rotamers

The protein structure prediction (PSP) problem is considered an open problem, as there is no recognized “best” procedure for finding solutions. Moreover, this problem presents a vast search space, and the analysis of each protein conformation requires a significant amount of computing time. We propose a reduction of the search space by using a dependent rotamer library. This work also introduces new heuristics to improve the multi-objective optimization approach to this problem.

J. C. Calvo, J. Ortega, M. Anguita, J. M. Urquiza, J. P. Florido

Using Efficient RBF Networks to Classify Transport Proteins Based on PSSM Profiles and Biochemical Properties

Transport proteins are difficult to understand by biological experiments due to the difficulty in obtaining crystals suitable for X-ray diffraction. Therefore, the use of computational techniques is a powerful approach to annotate the function of proteins.

In this work, we propose a method based on PSSM profiles and other biochemical properties for classifying three major classes of transport proteins. Our method shows a 5-fold cross validation accuracy of 75.4% in a set of 1146 transport proteins with less than 20% mutual sequence identity.

Yu-Yen Ou, Shu-An Chen

Artificial Neural Network Based Algorithm for Biomolecular Interactions Modeling

With the advent of new genomic platforms there is the potential for data mining of genomic profiles associated with specific subclasses of disease. Many groups have focused on the identification of genes associated with these subclasses. Fewer groups have taken this analysis a stage further to identify potential associations between biomolecules to determine hypothetical inferred biological interaction networks (e.g., gene regulatory networks) associated with a given condition (termed the interactome). Here we present an artificial neural network based approach using the back-propagation algorithm to explore associations between genes in hypothetical inferred pathways, by iteratively predicting the level of expression of each gene from the others, with respect to the genes associated with metastatic risk in breast cancer, based on the publicly available van’t Veer data set [1]. We demonstrate that we can identify a subset of genes that is strongly associated with others within the metastatic system. Many of these interactions are strongly representative of likely biological interactions, and the interacting genes are known to be associated with metastatic disease.

Christophe Lemetre, Lee J. Lancashire, Robert C. Rees, Graham R. Ball

Biomedical Applications

Modelling Dengue Epidemics with Autoregressive Switching Markov Models (AR-HMM)

This work presents the autoregressive switching-Markov model (AR-HMM) as a technique for modelling time series that are controlled by an unobserved process and finite time lags. Our objective is to bring to light the potential of this method to give valuable information about how an efficient control strategy can be designed. As a case study, we apply the method to the dengue fever (DF) epidemic of 2001 in Havana. For this time series, a first experiment with real data is performed in order to characterize the different stages of the epidemic.

Madalina Olteanu, Esther García-Garaluz, Miguel Atencia, Gonzalo Joya

A Theoretical Model for the Dengue Epidemic Using Delayed Differential Equations: Numerical Approaches

We formulate an autonomous dynamical system to model an epidemic outbreak of Dengue fever in which the population of mosquitoes is not directly present; in its place, we consider delayed differential equations. Our model is mainly based on vertical transmission. We found the equilibrium points, studied their stability and gave some possible interpretations of the results. Numerical work is also presented, as we try to fit the parameters with data from a real epidemic.

Andrés Sánchez Pérez, Héctor de Arazoza Rodríguez, Teresita Noriega Sánchez, Jorge Barrios, Aymee Marrero Severo

System Identification of Dengue Fever Epidemics in Cuba

The objective of the work described in this paper is twofold. On the one hand, the aim is to present and validate a model of Dengue fever for the Cuban case which is defined by a delay differential system. Such a model includes time-varying parameters, which are estimated by means of a method based upon Hopfield Neural Networks. This method has been successfully applied in both robotic and epidemiological models described by Ordinary Differential Equations. Therefore, on the other hand, an additional aim of this work is to assess the behaviour of this neural estimation technique with a delay differential system. Experimental results show the ability of the estimator to deal with systems with delays, as well as plausible parameter estimations, which lead to predictions that are coherent with actual data.

Esther García-Garaluz, Miguel Atencia, Francisco García-Lagos, Gonzalo Joya, Francisco Sandoval

HIV Model Described by Differential Inclusions

Estimating the size of the infected population is a common problem in HIV/AIDS epidemic analysis and the most important aspect for planning appropriate care and prevention policies. Some Ordinary Differential Equation models of the HIV epidemic in Cuba considering the Contact Tracing strategy have been described in previous works. In this paper we present an HIV/AIDS model described by Differential Inclusions. We also establish a mathematical framework allowing us to make suitable predictions of the size of the HIV-infected population at a future time.

Jorge Barrios, Alain Piétrus, Aymée Marrero, Héctor de Arazoza

Data Mining in Complex Diseases Using Evolutionary Computation

A new algorithm is presented for finding genotype-phenotype association rules in data related to complex diseases. The algorithm is based on Genetic Algorithms, a technique of Evolutionary Computation. It was compared to several traditional data mining techniques and shown to obtain similar classification scores while finding more rules in artificially generated data. In this paper it is assumed that several groups of SNPs have an impact on the predisposition to develop a complex disease such as schizophrenia. Validation on real data is expected in the short term.

Vanessa Aguiar, Jose A. Seoane, Ana Freire, Cristian R. Munteanu

Using UN/CEFACT’S Modelling Methodology (UMM) in e-Health Projects

UN/CEFACT’s Modelling Methodology (UMM) is a methodology created to capture the business requirements of inter-organizational business processes, regardless of the underlying technology. An example of how to apply UMM to an inter-enterprise e-health project is presented in this paper.

P. García-Sánchez, J. González, P. A. Castillo, A. Prieto

Matrix Metric Adaptation Linear Discriminant Analysis of Biomedical Data

A structurally simple, yet powerful, formalism is presented for adapting attribute combinations in high-dimensional data, given categorical data class labels. The rank-1 Mahalanobis distance is optimized in a way that maximizes between-class variability while minimizing within-class variability. This optimization target resembles Fisher’s linear discriminant analysis (LDA), but the proposed formulation is more general and yields improved class separation, as demonstrated for spectrum data and gene expression data.
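For reference, the classical two-class Fisher criterion that the paper generalizes can be sketched directly: the discriminant direction maximizes between-class scatter relative to within-class scatter, giving w proportional to Sw⁻¹(m₁ − m₀). This is only the classical baseline, not the paper's rank-1 metric adaptation.

```python
# Classical Fisher LDA direction for two classes (baseline the paper extends).
import numpy as np

def fisher_direction(X, y):
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    # small ridge term keeps Sw invertible in high dimensions
    w = np.linalg.solve(Sw + 1e-8 * np.eye(X.shape[1]), m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(2, 1, (40, 3))])
y = np.array([0] * 40 + [1] * 40)
w = fisher_direction(X, y)
p0, p1 = X[y == 0] @ w, X[y == 1] @ w   # 1-D projections of the two classes
```

Projecting onto `w` separates the class means; the matrix metric adaptation of the paper optimizes an analogous criterion over a learned distance rather than a fixed projection.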

M. Strickert, J. Keilwagen, F. -M. Schleif, T. Villmann, M. Biehl

SPECT Image Classification Techniques for Computer Aided Diagnosis of the Alzheimer Disease

Alzheimer disease (AD) is a progressive neurodegenerative disorder that first affects memory functions and then gradually all cognitive functions, with behavioural impairments. As the number of AD patients has increased, early diagnosis has received more attention for both social and medical reasons. Currently, accuracy in early AD diagnosis is below 70%, so that AD often does not receive suitable treatment. Functional brain imaging, including single-photon emission computed tomography (SPECT), is commonly used to guide the clinician’s diagnosis. However, conventional evaluation of SPECT scans often relies on manual reorientation, visual reading and semiquantitative analysis of certain regions of the brain. This paper evaluates different pattern classifiers for the development of a computer aided diagnosis (CAD) system for improving early AD detection. Discriminant template-based normalized mean square error (NMSE) features of several coronal slices of interest (SOI) were used. The proposed system, yielding 97% AD diagnosis accuracy, reports clear improvements over existing techniques such as voxel-as-features (VAF), which yields just 78% classification accuracy.

J. Ramírez, R. Chaves, J. M. Górriz, M. López, D. Salas-Gonzalez, I. Álvarez, F. Segovia

Automatic System for Alzheimer’s Disease Diagnosis Using Eigenbrains and Bayesian Classification Rules

Alzheimer’s Disease (AD) is a progressive neurologic disease of the brain that leads to the irreversible loss of neurons and dementia. The brain imaging techniques PET (Positron Emission Tomography) and SPECT (Single Photon Emission Computed Tomography) provide functional information about brain activity and have been widely used in the AD diagnosis process. However, the diagnosis currently relies on manual image reorientation, visual evaluation and other subjective, time consuming steps. In this work, a complete computer aided diagnosis (CAD) system is developed to assist clinicians in the AD diagnosis process. It is based on Bayesian classifiers built from previously extracted features. The small sample size problem, namely having a number of available samples much lower than the dimension of the feature space, is addressed by applying Principal Component Analysis (PCA) to the features. This approach provides higher accuracy than previous approaches, yielding 91.21% and 98.33% accuracy for SPECT and PET images, respectively.
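The PCA step that addresses the small sample size problem can be sketched via an economy SVD of the centred data matrix: with n samples and d ≫ n features, at most n − 1 informative components exist, so the feature space collapses to a dimension the classifier can handle. Sizes below are hypothetical, not those of the paper's image data.

```python
# PCA via SVD, as used to shrink a d-dimensional feature space when n << d.
import numpy as np

def pca_fit(X, k):
    mean = X.mean(axis=0)
    Xc = X - mean
    # economy SVD: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]

def pca_transform(X, mean, components):
    return (X - mean) @ components.T

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 500))       # 20 "images", 500 features each
mean, comps = pca_fit(X, k=10)
Z = pca_transform(X, mean, comps)    # 20 samples in a 10-dim feature space
```

A Bayesian classifier (or any other) is then trained on `Z` instead of the raw features.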

M. López, J. Ramírez, J. M. Górriz, I. Álvarez, D. Salas-Gonzalez, F. Segovia, C. G. Puntonet

On the Use of Morphometry Based Features for Alzheimer’s Disease Detection on MRI

We have studied feature extraction processes for the detection of Alzheimer’s disease on brain Magnetic Resonance Imaging (MRI), based on Voxel-Based Morphometry (VBM). The clusters of voxel locations detected by VBM were applied to select the voxel intensity values upon which the classification features were computed. We have explored the use of data from both the original MRI volumes and the GM segmentation volumes. In this paper, we apply the Support Vector Machine (SVM) algorithm to classify patients with mild Alzheimer’s disease vs. control subjects. The study has been performed on MRI volumes of 98 females, after careful demographic selection from the Open Access Series of Imaging Studies (OASIS) database, which is a large number of subjects compared to currently reported studies.

Maite García-Sebastián, Alexandre Savio, Manuel Graña, Jorge Villanúa

Selecting Regions of Interest for the Diagnosis of Alzheimer’s Disease in Brain SPECT Images Using Welch’s t-Test

This paper presents a computer-aided diagnosis technique for Alzheimer-type dementia. The proposed methodology is based on selecting the voxels whose Welch’s t-test statistic between the two classes, Normal and Alzheimer images, is greater than a given threshold. The mean and standard deviation of the intensity values of the selected voxels are chosen as feature vectors for two different classifiers: support vector machines with linear kernel and classification trees. The proposed methodology reaches an accuracy greater than 98% in the classification task.
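The voxel-selection step described above can be sketched directly: compute Welch's t statistic per voxel between the two groups, keep the voxels whose |t| exceeds a threshold, then summarize each image by the mean and standard deviation of its selected intensities. The data and threshold below are synthetic and illustrative.

```python
# Welch's t-test voxel selection, pure-Python sketch on synthetic "images".
import math, random

def welch_t(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def select_voxels(group_a, group_b, threshold):
    """group_a, group_b: lists of images (each a list of voxel intensities)."""
    picked = []
    for v in range(len(group_a[0])):
        t = welch_t([img[v] for img in group_a], [img[v] for img in group_b])
        if abs(t) > threshold:
            picked.append(v)
    return picked

def features(image, voxels):
    vals = [image[v] for v in voxels]
    m = sum(vals) / len(vals)
    sd = math.sqrt(sum((x - m) ** 2 for x in vals) / (len(vals) - 1))
    return m, sd          # the 2-D feature vector fed to the classifier

random.seed(0)
# 10 synthetic images per class, 100 voxels; only voxels 0-9 differ by class
normal = [[random.gauss(1.0 if v < 10 else 0.0, 0.2) for v in range(100)]
          for _ in range(10)]
alzh = [[random.gauss(0.0, 0.2) for v in range(100)] for _ in range(10)]
picked = select_voxels(normal, alzh, threshold=4.0)
```

The discriminative voxels (0–9 here) dominate `picked`, and `features` reduces each whole image to the two numbers used by the SVM or classification tree.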

D. Salas-Gonzalez, J. M. Górriz, J. Ramírez, M. López, I. Álvarez, F. Segovia, C. G. Puntonet

Alzheimer’s Diagnosis Using Eigenbrains and Support Vector Machines

An accurate and early diagnosis of Alzheimer’s Disease (AD) is of fundamental importance for the patient’s medical treatment. Single Photon Emission Computed Tomography (SPECT) images are commonly used by physicians to assist the diagnosis, rating them by visual evaluation. In this work we present a computer-assisted diagnosis tool based on a Principal Component Analysis (PCA) dimensional reduction of the feature space and a Support Vector Machine (SVM) classification method for improving AD diagnosis accuracy by means of SPECT images. The most relevant image features were selected under a PCA compression, which diagonalizes the covariance matrix, and the extracted information was used to train an SVM classifier which could classify new subjects in an unsupervised manner.

I. Álvarez, J. M. Górriz, J. Ramírez, D. Salas-Gonzalez, M. López, F. Segovia, C. G. Puntonet, B. Prieto

Artificial Intelligent Systems Based on Supervised HUMANN for Differential Diagnosis of Cognitive Impairment: Towards a 4P-HCDS

Differential and early diagnosis of cognitive impairment (CI) remains one of the crucial challenges facing clinical medicine at every level of care, and a significant public health concern. This work proposes new CI diagnostic tools based on a data fusion scheme, artificial neural networks and ensemble systems. Concretely, we have designed a supervised HUMANN [1] with the capacity to process missing data (HUMANN-S) and a HUMANN-S ensemble system. These intelligent diagnostic systems are part of EDEVITALZH, a clinical virtual environment to assist the diagnosis and prognosis of CI, Alzheimer’s disease and other dementias. Our proposal is a personalized, predictive, preventive, and participatory healthcare delivery system (4P-HCDS) and an optimal solution for an e-health framework. We explore its ability, presenting preliminary results on differential diagnosis of CI using neuropsychological tests from 267 consultations on 30 patients by the Alzheimer’s Patient Association of Gran Canaria.

Patricio García Báez, Miguel Angel Pérez del Pino, Carlos Fernández Viadero, Carmen Paz Suárez Araujo

Stratification Methodologies for Neural Networks Models of Survival

Clinical management often relies on stratification of patients by outcome. The application of flexible non-linear time-to-event models to the stratification of patient populations into distinct and clinically meaningful risk groups is currently an important area of research. This paper proposes a definition of a prognostic index for neural network models of survival. This index underpins different stratification strategies including k-means clustering, regression trees and recursive application of the log-rank test. It was obtained with multiple imputation applied to a neural network model of survival fitted to a substantial breast cancer data set (n=931) and was evaluated with a large out-of-sample data set (n=4,083). It was found that the constraint imposed by regression trees on the form of the permitted rules makes them less specific than stratifying directly from the prognostic index and deriving unconstrained low-order rules with Orthogonal Search Rule Extraction.

Ana S. Fernandes, Ian H. Jarman, Terence A. Etchells, José M. Fonseca, Elia Biganzoli, Chris Bajdik, Paulo J. G. Lisboa

Model Comparison for the Detection of EEG Arousals in Sleep Apnea Patients

Sleep Apnea/Hypopnea Syndrome (SAHS) is a very common sleep disorder characterized by the repeated occurrence of involuntary breathing pauses during sleep. Cessation of breathing often causes Electroencephalographic (EEG) arousals as a response; detection of arousals is therefore important, since they provide evidence for the localization of apneic events and their number is directly related to SAHS severity. Arousals result in fragmented sleep, making them one of the most important causes of daytime sleepiness. In this paper we present an approach to detect these arousals in polysomnographic recordings based on the machine learning paradigm. First, a signal processing technique is proposed for the construction of learning patterns. Subsequently, classifiers based on Fisher’s linear and quadratic discriminants, Support Vector Machines (SVM) and Artificial Neural Networks (ANN) are compared for the learning process. The most suitable model was finally chosen, showing an accuracy of 0.92.

D. Álvarez-Estévez, V. Moret-Bonillo

Ranking of Brain Tumour Classifiers Using a Bayesian Approach

This study presents a ranking for classifiers using a Bayesian perspective. This ranking framework is able to evaluate the performance of the models to be compared when they are inferred from different sets of data. It also takes into account the performance obtained on samples not used during the training of the classifiers. Besides, this ranking assigns a prior to each model based on a measure of similarity of the training data to a test case. An evaluation consisting of ranking brain tumour classifiers is presented. These multilayer perceptron classifiers are trained with ¹H magnetic resonance spectroscopy (MRS) signals following a multiproject multicentre evaluation approach. We demonstrate that such a framework can be effectively applied to the real problem of selecting classifiers for brain tumour classification.

Javier Vicente, Juan Miguel García-Gómez, Salvador Tortajada, Alfredo T. Navarro, Franklyn A. Howe, Andrew C. Peet, Margarida Julià-Sapé, Bernardo Celda, Pieter Wesseling, Magí Lluch-Ariet, Montserrat Robles

Feature Selection with Single-Layer Perceptrons for a Multicentre 1H-MRS Brain Tumour Database

A Feature Selection process with Single-Layer Perceptrons is shown to provide optimum discrimination of an international, multi-centre ¹H-MRS database of brain tumours at reasonable computational cost. Results are both intuitively interpretable and very accurate. The method remains simple enough to allow its easy integration into existing medical decision support systems.

Enrique Romero, Alfredo Vellido, Josep María Sopena

Weakly-Supervised Classification with Mixture Models for Cervical Cancer Detection

Human supervision is required nowadays in many scientific applications but, due to increasing data complexity, this kind of supervision has become too difficult or expensive and is no longer tenable. This paper therefore focuses on weakly-supervised classification, which uses contextual information to label the learning observations and to build a supervised classifier. This new kind of classification is treated in this work with a mixture model approach. For this, the problem of weakly-supervised classification is recast as a problem of supervised classification with uncertain labels. The proposed approach is applied to cervical cancer detection, for which human supervision is very difficult, and promising results are observed.

Charles Bouveyron

Edges Detection of Clusters of Microcalcifications with SOM and Coordinate Logic Filters

Breast cancer is one of the leading causes of mortality among women in the world. Clusters of Microcalcifications (MCCs) in mammograms can be an important early sign of breast cancer, and their detection is important to prevent and treat the disease. Coordinate Logic Filters (CLF) are very efficient in digital signal processing applications, such as noise removal, magnification, opening, closing, skeletonization and coding, as well as in edge detection, feature extraction and fractal modelling. This paper presents an edge detector of MCCs in Regions of Interest (ROI) from mammograms using a novel combination of image enhancement by an adaptive histogram technique, a Self Organizing Map (SOM) Neural Network and CLF. The experimental results show that the proposed method can locate MCC edges. Moreover, the proposed method is quantitatively evaluated by Pratt’s figure of merit together with two widely used edge detectors and visually compared, achieving the best results.

J. Quintanilla-Domínguez, B. Ojeda-Magaña, J. Seijas, A. Vega-Corona, D. Andina

A New Methodology for Feature Selection Based on Machine Learning Methods Applied to Glaucoma

In this paper we present a new methodology based on machine learning methods that selects, from the available features that define a problem, a subset of the most discriminant ones in order to improve classification. As an application, we have used it to select, from the attributes of the optic nerve obtained by the Heidelberg Retina Tomograph II, the most informative ones to discriminate between glaucoma and non-glaucoma. Applying this methodology we have identified 7 of the original 103 attributes, improving the ROC area by 2.38%. These attributes largely match the most informative ones according to the ophthalmologist’s clinical experience as well as the literature.

Diego García-Morate, Arancha Simón-Hurtado, Carlos Vivaracho-Pascual, Alfonso Antón-López

Tissue Recognition Approach to Pressure Ulcer Area Estimation with Neural Networks

Pressure ulcers are a clinical pathology of localized damage to the skin and underlying tissue, with high prevalence rates among aged people. Diagnosis and treatment of pressure ulcers involve high costs for health systems. Accurate wound-state evaluation is a critical task for optimizing the effectiveness of treatments. A reliable trace of wound-state evolution can be obtained by precisely registering the wound area, yet clinicians estimate it with often subjective and imprecise manual methods. This article presents a computer-vision approach based on hybrid machine-learning techniques for the precise automatic estimation of wound dimensions on real pressure ulcer images taken under non-controlled illumination conditions. The system combines neural networks and Bayesian classifiers to effectively recognize and separate skin and healing regions from the wound-tissue regions to be measured. This tissue-recognition approach to wound area estimation gives high performance rates and operates better than a widespread clinical method when approximating real wound areas of variable size.

Francisco J. Veredas, Héctor Mesa, Laura Morente

Classification of Schistosomiasis Prevalence Using Fuzzy Case-Based Reasoning

In this work we propose the use of a similarity-based fuzzy CBR approach to classify the prevalence of Schistosomiasis in the state of Minas Gerais in Brazil.

Flávia T. Martins-Bedé, Lluís Godo, Sandra Sandri, Luciano V. Dutra, Corina C. Freitas, Omar S. Carvalho, Ricardo J. P. S. Guimarães, Ronaldo S. Amaral

BAC Overlap Identification Based on Bit-Vectors

No existing software accurately calculates the overlap of two BACs fast enough for application to thousands of cases in turn. The problems include the unacceptably low speed of dynamic programming algorithms for sequences of the size considered, and the failure of faster local alignment methods to identify complete sequence overlaps. Lower sequence quality at both BAC ends and internal difference blocks, small enough not to significantly increase relative error rates but large enough to terminate local alignments, cause the output of multiple overlapping local matches which do not extend to both sequence ends. Based on Myers’ bit-vector algorithm for fast edit distance calculation, we developed the program BACOLAP, which identifies overlapping BACs as sensitively as global dynamic programming alignment and as fast as heuristic local alignment.
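The core of Myers' bit-vector method encodes a column of the dynamic-programming matrix in the bits of machine words, updating it with a handful of logical operations per text character. Below is a minimal pure-Python sketch of the plain edit-distance variant (Python's unbounded integers, masked to m bits, stand in for w-bit words); BACOLAP itself adds the overlap-specific handling described above.

```python
def myers_distance(pattern, text):
    """Edit distance via Myers' bit-vector algorithm (CACM 1999 variant)."""
    m = len(pattern)
    if m == 0:
        return len(text)
    # peq[c]: bitmask of positions where character c occurs in the pattern
    peq = {}
    for i, c in enumerate(pattern):
        peq[c] = peq.get(c, 0) | (1 << i)
    full = (1 << m) - 1          # mask keeping exactly m bits
    high = 1 << (m - 1)          # bit tracking the bottom matrix row
    pv, mv, score = full, 0, m   # positive/negative vertical delta vectors
    for c in text:
        eq = peq.get(c, 0)
        xv = eq | mv
        xh = (((eq & pv) + pv) ^ pv) | eq
        ph = mv | (~(xh | pv) & full)
        mh = pv & xh
        if ph & high:
            score += 1
        if mh & high:
            score -= 1
        ph = ((ph << 1) | 1) & full
        mh = (mh << 1) & full
        pv = mh | (~(xv | ph) & full)
        mv = ph & xv
    return score

d = myers_distance("kitten", "sitting")   # classic example: distance 3
```

Each text character costs a constant number of word operations, which is why the approach reaches heuristic-alignment speed while computing an exact global distance.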

Jens-Uwe Krause, Jürgen Kleffe

Ambient Assisted Living (AAL) and Ambient Intelligence (AI)

AAL and the Mainstream of Digital Home

The technological standards for the Digital Home play a key role in providing entertainment services, but also Ambient Assisted Living (AAL) services for the elderly. Both kinds of services are multimedia and share the Digital Home infrastructure, or Smart Home Ecosystem. This Ecosystem is made up of a growing number of devices that communicate with other local or remote devices. AAL services cannot be developed for the Digital Home under the “a technical solution for every problem” model and have to rely on existing and widespread technologies provided by this technology mainstream.

Esteban Pérez-Castrejón, Juan J. Andrés-Gutiérrez

Legal Concerns Regarding AmI Assisted Living in the Elderly, Worldwide and in Romania

The concepts recently developed to meet the specific technology-related needs of older people are “Ambient Intelligence” (AmI) and “Ambient Assisted Living”. The major aim of AmI is to prolong the time older people can live decently in their own homes with increased autonomy and self-confidence. One category of patients that will necessarily require intelligent ambient assistance devices in the future is the elderly, characterized by a high incidence of co-morbid conditions and diseases, cognitive and/or physical impairment, and functional loss from multiple disabilities, leading to impaired self-dependency. US and EU law and research programmes aim to support the research, development and implementation of such devices, able to reduce the social burden and to improve geriatric healthcare systems.

Luiza Spiru, Lucian Stefan, Ileana Turcu, Camelia Ghita, Ioana Ioancio, Costin Nuta, Mona Blaciotti, Mariana Martin, Ulises Cortes, Roberta Annicchiarico

Construction and Debugging of a Multi-Agent Based Simulation to Study Ambient Intelligence Applications

This paper introduces the development of an infrastructure to study highly complex applications of Ambient Intelligence (AmI) which involve a large number of users. The key ideas behind the development of a multi-agent based simulation (MABS) for such purposes, Ubik, are given. The paper also extrapolates effective technologies from the development of multi-agent systems (MAS) to the field of MABS. In particular, the basis is set up for the use of forensic analysis as a method to assist the analysis, understanding and debugging of Ubik in particular and of MABS in general.

Emilio Serrano, Juan A. Botia, Jose M. Cadenas

Easing the Smart Home: Translating Human Hierarchies to Intelligent Environments

Ubiquitous computing research has extended traditional environments into the so-called Intelligent Environments. All of them use their capabilities to pursue their inhabitants’ satisfaction, but the ways of achieving it are often unclear and frequently not shared among different users. This problem becomes patent in shared environments in which users with different preferences live together. This article presents a solution translating human hierarchies to the Ubicomp domain, in a continuing effort to leverage the control capabilities of the inhabitants over their increasingly capable environments. This mechanism, a natural Ubicomp extension of the coordination mechanism used daily by humans, has been implemented in a real environment, a living room equipped with ambient intelligence capabilities, and installed in two more: an intelligent classroom and an intelligent secure room.

Manuel García–Herranz, Pablo A. Haya, Xavier Alamán

Wireless Sensor Networks in Home Care

Ambient Intelligence has acquired great importance in recent years and requires the development of new innovative solutions. This paper presents a novel architecture which integrates a service-oriented approach into Wireless Sensor Networks to optimize the construction of Ambient Intelligence environments. The architecture proposes a new and easier method to develop distributed intelligent ubiquitous systems, where devices can communicate in a distributed way, independent of time and location restrictions. The architecture is easily adapted to changing environments. A prototype system has been proposed to test this architecture, aimed at improving health care and assistance for dependent persons in their homes. Preliminary results are presented in this paper.

Dante I. Tapia, Juan A. Fraile, Sara Rodríguez, Juan F. de Paz, Javier Bajo

Indoor Localization Based on Neural Networks for Non-dedicated ZigBee Networks in AAL

Indoor localization is one of the most appealing technologies in Ambient Assisted Living (AAL) applications, providing support for diverse services such as personal security, guidance or innovative interfaces. Dedicated systems can be deployed to provide that information, but it is also possible to take advantage of already available elements to compute a location without requiring additional hardware. In this paper, a ZigBee network designed for a home control application is extended with a localization functionality based on neural networks, achieving room-level accuracy without introducing additional infrastructure constraints to the original application.

Rubén Blasco, Álvaro Marco, Roberto Casas, Alejando Ibarz, Victorián Coarasa, Ángel Asensio

Managing Ambient Intelligence Sensor Network Systems, an Agent Based Approach

Smart homes and ubiquitous computing solutions are attracting special interest in the healthcare context, as they can help extend the time people can remain in their preferred environments. This paper presents an agent-based implementation approach for heterogeneous, flexible sensor networks in the ambient intelligence context.

Guillermo Bosch, Cristian Barrué

Ambulatory Mobility Characterization Using Body Inertial Systems: An Application to Fall Detection

The aim of this paper is to study the use of a prototype wearable device for long-term monitoring of gait and balance using inertial sensors. First, we focus on the design of the device, which can be worn all day during the patient’s daily life activities because it is small, usable and non-invasive. Secondly, we present the system calibration performed to ensure the quality of the sensor data. Afterwards, we focus on the experimental methodology for harvesting data from an extensive range of fall types. Finally, a statistical analysis allows us to determine the discriminant information for detecting falls.

Marc Torrent, Alan Bourke, Xavier Parra, Andreu Català

User Daily Activity Classification from Accelerometry Using Feature Selection and SVM

User daily activity monitoring is useful for physicians in geriatrics and rehabilitation as an indicator of user health and mobility. Real-time activity recognition by means of a processing node including a triaxial accelerometer sensor situated on the user’s chest is the main goal of the presented experimental work. A two-phase procedure implementing feature extraction from the raw signal and SVM-based classification has been designed for real-time monitoring. The designed procedure showed an overall accuracy of 92% when recognizing activities performed in daily conditions.
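The feature-extraction phase of such a pipeline can be sketched as follows: the raw triaxial signal is cut into fixed-length windows and each window is summarized (per-axis mean and standard deviation, plus mean acceleration magnitude); the resulting vectors then feed the SVM classifier. The window length and the features chosen here are illustrative, not those of the paper.

```python
# Windowed feature extraction from a triaxial accelerometer stream.
import math

def window_features(samples, width=50):
    """samples: list of (x, y, z) accelerometer readings.
    Returns one 7-dim feature vector per non-overlapping window."""
    out = []
    for start in range(0, len(samples) - width + 1, width):
        win = samples[start:start + width]
        feats = []
        for axis in range(3):
            vals = [s[axis] for s in win]
            m = sum(vals) / width
            sd = math.sqrt(sum((v - m) ** 2 for v in vals) / width)
            feats += [m, sd]                       # per-axis mean and std
        # mean magnitude of the acceleration vector over the window
        feats.append(sum(math.sqrt(x*x + y*y + z*z) for x, y, z in win) / width)
        out.append(feats)
    return out

signal = [(0.0, 0.0, 1.0)] * 100 + [(0.5, 0.2, 0.8)] * 100  # rest, then movement
feature_vectors = window_features(signal)
```

Each element of `feature_vectors` is one training or classification pattern; a standard SVM library would consume them directly.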

Jordi Parera, Cecilio Angulo, A. Rodríguez-Molinero, Joan Cabestany

A Metrics Review for Performance Evaluation on Assisted Wheelchair Navigation

In today’s aging society, many people require assistance for non-pedestrian mobility. In some cases, assistive devices require a certain degree of autonomy when the person’s disabilities hinder manual control. In this field, it is important to rate user performance to check how much help he/she needs. This paper presents an overview of common metrics for wheelchair navigation, plus some proposed by the authors to take into account new approaches to wheelchair control. We present an example of the proposed metrics on a robotized Meyra wheelchair at Santa Lucia Hospedale in Rome to prove their meaning and adequacy.

Cristina Urdiales, Jose M. Peula, Ulises Cortés, Christian Barrué, Blanca Fernández-Espejo, Roberta Annicchiarico, Francisco Sandoval, Carlo Caltagirone

Conventional Joystick vs. Wiimote for Holonomic Wheelchair Control

In today’s aging society, power wheelchairs provide assistance for non-pedestrian mobility, but inside narrow indoor spaces holonomic ones are required. While they adapt well to complex environments, it is harder to control them via a conventional joystick; thus, extra buttons and/or knobs are included to decide what to do. To make control more intuitive, we propose to use a Wiimote for holonomic wheelchair control. Experiments in a narrow environment have been successful and prove that the Wiimote requires less interaction to achieve the same results as a conventional joystick. This has been reported to reduce mental workload and, hence, allow more relaxed interaction with the wheelchair.

L. Duran, M. Fernandez-Carmona, C. Urdiales, J. M Peula, F. Sandoval

Normal versus Pathological Cognitive Aging: Variability as a Constraint of Patients Profiling for AmI Design

The development of supportive environments and Assistive Technology (AT) is a priority recommended by the International Plan of Action on Ageing (Madrid 2002). The first essential question related to the creation and implementation of a given Ambient Intelligence (AmI) system is to know for whom it has to be designed, in order to match the user’s profile and needs with the functional capabilities of the intelligent and semi-autonomous assistive device, and to identify the best modes of human-machine interaction. The multiple dimensions of cognitive variability in the elderly make these tasks very difficult. A comparative overview of the cognitive changes in normal and pathological aging could be useful for dealing with the cognitive variability constraint when establishing user profiles and target populations.

Luiza Spiru, Camelia Ghita, Ileana Turcu, Lucian Stefan, Ioana Ioancio, Costin Nuta, Mona Blaciotti, Mariana Martin, Ulises Cortes, Roberta Annicchiarico

Other Applications

Estimating the Embedding Dimension Distribution of Time Series with SOMOS

The paper proposes a new method to estimate the distribution of the embedding dimension associated with a time series, using the Self Organizing Map decision taken in Output Space (SOMOS) dimensionality reduction neural network. It is shown that SOMOS, besides estimating the embedding dimension, also provides an approximation of the overall distribution of such dimension for the set where the time series evolves. This estimate can be employed to select a proper window size in different predictor schemes; it can also provide a measure of future predictability at a given instant of time. The results are illustrated via the analysis of time series generated from both the chaotic Hénon map and the Lorenz system.

Pedro J. Zufiria, Pascual Campoy

Training Methods and Analysis of Composite, Evolved, On-Line Networks for Time Series Prediction

New results are presented for online prediction using predictive networks composed of smaller prediction units. Strategies for choosing training signals across a range of signal types are discussed. Composite networks are shown to generalise across a wide range of test signals. The best network found by genetic evolution is presented, simplified, and analysed.

Russell Y. Webb

Special Time Series Prediction: Creep of Concrete

This paper presents an algorithm, different from classical time series techniques, specialised in extracting knowledge from time series. The algorithm, based on Genetic Programming, enables the dynamic introduction of non-terminal operators shaped as mathematical expressions (operator-expressions) that act as a single node for the purposes of the genetic operations (crossover and mutation). A new characteristic of this algorithm is the possibility of expanding the individuals, which, besides yielding a better global fitness, enables breaking up the expressions (operator-expressions) into basic operators in order to achieve expression recombination. The performance of the implemented algorithm was demonstrated by applying it to the creep of structural concrete, a specific case in Construction Engineering where a better fit to the current regulatory codes was subsequently achieved.

Juan L. Pérez, Fernando Martínez Abella, Alba Catoira, Javier Berrocal

Artificial Neural Networks in Urban Runoff Forecast

One of the applications of Data Mining is the extraction of knowledge from time series [1][2]. Artificial Neural Networks (ANNs), one of the techniques of Artificial Intelligence (AI), have proved suitable in Data Mining for handling this type of series. This paper presents the use of ANNs and Genetic Algorithms (GA) on a time series in the field of Civil Engineering where the predictive structure does not follow the classic paradigms. In this specific case, the AI technique is applied to a phenomenon that models the process by which, over a specific area, fallen rain concentrates and flows over the surface.

Mónica Miguélez, Jerónimo Puertas, Juan Ramón Rabuñal

A Secret Sharing Scheme for Digital Images Based on Cellular Automata and Boolean Functions

In this paper an efficient probabilistic secret sharing scheme for black and white digital images is introduced. It is an extension of the algorithm originally proposed in [11]. Specifically, it is a threshold scheme based on a simple Boolean matrix function which involves the Hadamard matrix product modulo 2 and addition modulo 2. Moreover, an algorithm based on cellular automata is used to enhance the recovered image. The algorithm introduced is shown to be secure, and its expansion factor is equal to 1.
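The abstract does not give the scheme's construction; as a generic, hypothetical illustration of sharing a black and white image with pixel-wise operations modulo 2 while keeping an expansion factor of 1 (the paper's actual Boolean matrix function differs), an (n, n) XOR scheme can be sketched as:

```python
import random

def make_shares(image, n):
    """Split a binary (0/1) image into n same-sized shares
    (expansion factor 1); all n shares are needed for recovery."""
    h, w = len(image), len(image[0])
    shares = [[[random.randint(0, 1) for _ in range(w)] for _ in range(h)]
              for _ in range(n - 1)]
    # Final share: pixel-wise addition modulo 2 (XOR) of the image
    # with every random share, so that all randomness cancels on recovery.
    last = [[image[i][j] for j in range(w)] for i in range(h)]
    for s in shares:
        for i in range(h):
            for j in range(w):
                last[i][j] ^= s[i][j]
    return shares + [last]

def recover(shares):
    """XOR all shares pixel-wise to reconstruct the original image."""
    h, w = len(shares[0]), len(shares[0][0])
    out = [[0] * w for _ in range(h)]
    for s in shares:
        for i in range(h):
            for j in range(w):
                out[i][j] ^= s[i][j]
    return out
```

Any subset of fewer than n shares is uniformly random, which is what makes this family of schemes information-theoretically secure.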

Ángel Martín del Rey, Gerardo Rodríguez Sánchez

Shapes Description by a Segments-Based Neural Network

Skeletonization is the process of transforming a shape in a digital image, composed of many pixels, into a line-based shape that preserves the topological properties of the original. The resulting shape is called a skeleton. Such skeletons are useful in the recognition of elongated objects, for example character patterns, chromosome patterns, etc. The skeleton provides an abstraction of the geometrical and topological features of the object, so skeletonization can be viewed as data compression. In this paper, a model of competitive neural network based on segments is proposed. This model is suitable for obtaining the skeleton of elongated shapes with an unsupervised method.

J. A. Gómez-Ruiz, J. Muñoz-Perez, M. A. García-Bernal

Protecting DCT Templates for a Face Verification System by Means of Pseudo-random Permutations

Biometric template security and privacy are a great concern in biometric systems because, unlike passwords and tokens, compromised biometric templates cannot be revoked and reissued. In this paper we present a protection scheme for an identity verification system based on biometric face recognition, using a user-dependent pseudo-random ordering of the DCT template coefficients and MLP and RBF Neural Networks for classification. Because a hacker can hardly match a fake biometric sample without knowing the pseudo-random ordering, the scheme also increases the biometric recognition performance in addition to enhancing privacy.
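The key-dependent reordering idea can be stated concretely. This is an illustrative sketch only (the seed derivation and coefficient layout are assumptions, not the paper's implementation): a permutation of the template vector derived from a user-specific key, invertible only by the key holder.

```python
import random

def permute_template(coeffs, user_key):
    """Reorder a template vector (e.g. DCT coefficients) with a
    pseudo-random permutation seeded by a user-specific key."""
    rng = random.Random(user_key)      # deterministic per-user permutation
    order = list(range(len(coeffs)))
    rng.shuffle(order)
    return [coeffs[i] for i in order]

def unpermute_template(protected, user_key):
    """Invert the permutation; without the key the ordering is unknown,
    so a stolen template cannot be matched against the raw features."""
    rng = random.Random(user_key)
    order = list(range(len(protected)))
    rng.shuffle(order)
    out = [0.0] * len(protected)
    for pos, i in enumerate(order):
        out[i] = protected[pos]
    return out
```

If a protected template is compromised, it can be "revoked" simply by assigning the user a new key and re-permuting, which is the revocability that raw biometric templates lack.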

Marco Grassi, Marcos Faundez-Zanuy

Efficient Parallel Feature Selection for Steganography Problems

The steganography problem consists of identifying images that hide a secret message which cannot be seen by visual inspection. This problem is becoming more and more important nowadays, since the World Wide Web contains a large number of images, any of which may be carrying a secret message. The task is therefore to design a classifier able to separate the genuine images from the non-genuine ones. However, the main obstacle is the large number of variables extracted from each image; this high dimensionality makes feature selection mandatory in order to design an accurate classifier. This paper presents a new efficient parallel feature selection algorithm based on the Forward-Backward Selection algorithm. The results show how the parallel implementation obtains better subsets of features, allowing the classifiers to be more accurate.
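The serial algorithm the paper parallelises can be sketched generically. In this minimal version (an assumption-laden sketch, not the paper's code) `score` stands for any wrapper criterion such as a classifier's validation accuracy on the chosen feature subset:

```python
def forward_backward_selection(n_features, score):
    """Greedy Forward-Backward Selection: alternately add the single
    feature that most improves score(subset) and drop any feature whose
    removal improves it, until neither step helps."""
    selected = set()
    best = score(selected)
    while True:
        improved = False
        # Forward step: try every unused feature, keep the best addition.
        candidates = [(score(selected | {f}), f)
                      for f in range(n_features) if f not in selected]
        if candidates:
            s, f = max(candidates)
            if s > best:
                selected.add(f); best = s; improved = True
        # Backward step: try removing each selected feature.
        candidates = [(score(selected - {f}), f) for f in selected]
        if candidates:
            s, f = max(candidates)
            if s > best:
                selected.remove(f); best = s; improved = True
        if not improved:
            return selected, best

# Toy criterion: features 0 and 2 are informative, the rest add noise.
def toy_score(subset):
    return len(subset & {0, 2}) - 0.1 * len(subset - {0, 2})
```

The parallelisation opportunity is visible in the two `candidates` list comprehensions: each candidate subset can be scored independently on a separate worker.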

Alberto Guillén, Antti Sorjamaa, Yoan Miche, Amaury Lendasse, Ignacio Rojas

Mobile Applications: MovilPIU and Mobiblio

Mobile devices are a new platform offering many possibilities for studies and projects, and one that is particularly interesting for innovative applications.

In this article we present two real mobile application projects that use some of the most popular mobile services. The first, MovilPIU, provides access to academic services from a mobile phone, a PDA or a computer. The second, Mobiblio, applies mobile technology to the library world.

Roberto Berjón Gallinas, M. Encarnación Beato Gutiérrez, Montserrat Mateos Sánchez, Miguel Ángel Sánchez Vidales, Ana Fermoso García

A Case Study of a Pull WAP Location-Based Service Incorporating Maps Services

Progress in the Internet and in network technologies, together with the rapidly growing number of mobile devices, means that both companies and individuals now demand information and services in mobile environments; moreover, they expect these services to be tailored to their position or location context. Services and applications with these requirements can be developed as Location-Based Services (LBS).

In this paper an example of an LBS application that uses the Google Maps services is presented. The application is set in a university environment. Both the proposed architecture and the technologies used to develop the application are explained.

Montserrat Mateos Sanchez, Roberto Berjon Gallinas, Miguel Angel Sanchez Vidales, Encarnacion Beato Gutierrez, Ana Fermoso Garcia

A Mobile Tourist Decision Support System for Small Footprint Devices

This paper presents a mobile tourist decision support system that suggests personal trips, tailored to the user’s interests and context. The system enables planning a customised trip that maximises the interest of the tourist, while taking into account the opening hours of the points of interest (POIs) and the available time. The planning problem is modelled as an orienteering problem with time windows, which is a hard combinatorial optimisation problem. It is solved by an iterated local search metaheuristic, resulting in a personal trip. This procedure is implemented and tested on a mobile phone. Despite the limited computational resources of a small footprint device, the system successfully solves instances of up to 50 POIs in an acceptable execution time. No more than 1% of the solution quality turned out to be sacrificed in order to keep the worst-case execution time under 5 seconds.
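The full planner uses iterated local search over an orienteering model with time windows; as a drastically simplified, hypothetical sketch of only the selection step (travel times and opening hours omitted), a greedy score-per-time heuristic for filling a time budget looks like:

```python
def greedy_trip(pois, time_budget):
    """pois: list of (name, interest_score, visit_time) tuples.
    Greedily insert the POI with the best score-per-time ratio
    while the trip still fits in the available time budget."""
    ordered = sorted(pois, key=lambda p: p[1] / p[2], reverse=True)
    trip, used, total_score = [], 0.0, 0.0
    for name, score, visit_time in ordered:
        if used + visit_time <= time_budget:
            trip.append(name)
            used += visit_time
            total_score += score
    return trip, total_score
```

A metaheuristic such as iterated local search would then repeatedly perturb and locally re-optimise a solution like this one, which is why the approach scales to 50 POIs even on a phone.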

Wouter Souffiau, Joris Maervoet, Pieter Vansteenwegen, Greet Vanden Berghe, Dirk Van Oudheusden

Stereo-MAS: Multi-Agent System for Image Stereo Processing

This article presents a distributed agent-based architecture that can process the visual information obtained by stereoscopic cameras. The system is embedded within a global project whose objective is to develop an intelligent environment for location and identification within dependent environments, merged with other types of technologies. Vision algorithms are computationally costly and slow to respond, which is highly inconvenient given that many applications require action to be taken in real time. An agent architecture can automate the process of analyzing the images obtained by the cameras and optimize the procedure.

Sara Rodríguez, Juan F. De Paz, Javier Bajo, Dante I. Tapia, Belén Pérez

Participatory EHPR: A Watermarking Solution

The Electronic Patient Health Record (EPHR), as the basis for future e-healthcare systems, suffers from public suspicion and doubt regarding its security and exploitation.

In this paper a mechanism permitting patient empowerment to increase public acceptance of the EPHR is introduced, which also has the benefit of making its use more secure. We argue that using steganography (information hiding) to embed distributed personal patient details within the patient’s medical data increases the personalised and participatory aspects. It also empowers patients to have more control over the use of their own personal data. We discuss a hierarchical watermarking scheme for embedding multiple watermarks to provide different levels of security under patient control.

David Lowe, B. R. Matam

Bus Network Scheduling Problem: GRASP + EAs with PISA * Simulation

In this work a memetic algorithm for the Bus Network Scheduling Problem (BNSP) is presented. The algorithm comprises two stages: the first calculates the distances among all pairs of bus stops, and the second is a MOEA that uses a novel simulation procedure for the calculation of the fitness function. This simulation method was specially developed for the BNSP. The EA used in the second stage was selected from among IBEA, NSGA-II and SPEA2 by means of some PISA tools. As a result of this experimentation, SPEA2 was preferred since it produces the most widely spread solution set.

Ana C. Olivera, Mariano Frutos, Jessica A. Carballido, Ignacio Ponzoni, Nélida B. Brignole

Wine Classification with Gas Sensors Combined with Independent Component Analysis and Neural Networks

The aim of this work is to demonstrate the viability of Independent Component Analysis (ICA) as a dimensionality reduction technique combined with Artificial Neural Networks (ANNs) for wine classification with an electronic nose. ICA has been used to reduce the dimension of the data, both to show in two variables the discrimination capability of the gas sensor array and as a preprocessing tool for further classification with ANNs.

Jesús Lozano, Antonio García, Carlos J. García, Fernándo Alvarez, Ramón Gallardo

Experiments and Reference Models in Training Neural Networks for Short-Term Wind Power Forecasting in Electricity Markets

Many published studies on wind power forecasting based on Neural Networks report performance factors based on error criteria. According to the standard protocol for forecasting, published results must report improvement over the persistence or reference models for the same site. Persistence forecasting is the easiest form of time series prediction, but the first-order Wiener predictive filter is an enhancement of the pure persistence model that has been adopted as the reference model for wind power forecasting. Even enhanced persistence is simple yet hard to beat in short-term prediction. This paper reports experiments performed by applying the standard protocols with Feed Forward and Recurrent Neural Network architectures, against the background of the requirements of Open Electricity Markets.
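The two reference models named above are easy to make concrete. This is an illustrative sketch (fitted on the same series for brevity; the paper's exact protocol is not given here): pure persistence, and a one-tap Wiener-style filter whose single coefficient is chosen by least squares.

```python
def persistence_forecast(series):
    """Persistence reference model: the forecast for t+1 is the value at t."""
    return series[:-1]                     # predictions for series[1:]

def wiener1_forecast(series):
    """First-order enhancement: y_hat(t+1) = a * y(t), with the single
    coefficient a = sum(y(t+1)*y(t)) / sum(y(t)^2), i.e. the least-squares
    (lag-1 correlation / power) solution."""
    num = sum(series[t + 1] * series[t] for t in range(len(series) - 1))
    den = sum(x * x for x in series[:-1])
    a = num / den
    return [a * x for x in series[:-1]]

def mse(pred, target):
    """Mean squared error used to compare a model against the reference."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
```

A neural forecaster only counts as an improvement when its error is below that of these baselines on the same site, which is the "hard to beat" bar the abstract refers to.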

Juan Méndez, Javier Lorenzo, Mario Hernández

Intrusion Detection Method Using Neural Networks Based on the Reduction of Characteristics

The application of techniques based on Artificial Intelligence to intrusion detection systems (IDS), mostly artificial neural networks (ANN), is becoming mainstream as well as an extremely effective approach to some of the current problems in this area. Nevertheless, the selection criteria for the features to be used as inputs for the ANNs remain a problematic issue, which can be put, in a nutshell, as follows: the wider the detection spectrum of the selected features, the lower the performance efficiency of the process, and vice versa. This paper proposes a compromise between both ends of the scale: a model based on Principal Component Analysis (PCA) as the algorithm for reducing characteristics, in order to maintain efficiency without hindering the capacity for detection. PCA uses a data model to diminish the size of the ANN’s input vectors, ensuring a minimum loss of information, and consequently reducing the complexity of the neural classifier as well as keeping training times stable. A test scenario was developed for validation purposes, using an ANN-based IDS. The test results demonstrate the validity of the proposal.
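The PCA reduction step can be sketched without any library dependencies. This is a hypothetical illustration (the paper's pipeline and retained component count are not stated here): power iteration on the sample covariance matrix yields the leading principal direction; projecting traffic feature vectors onto the first few such directions shrinks the ANN's input size.

```python
def top_principal_component(data, iters=200):
    """Return the unit-norm direction of maximum variance of `data`
    (a list of equal-length feature rows) via power iteration on the
    sample covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Sample covariance matrix C = X^T X / n on the centered data.
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    # Power iteration: repeated multiplication converges to the
    # eigenvector of the largest eigenvalue (the first PCA direction).
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Deflating the covariance matrix and repeating yields further components, and the dot product of an input row with each component gives the reduced vector fed to the classifier.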

Iren Lorenzo-Fonseca, Francisco Maciá-Pérez, Francisco José Mora-Gimeno, Rogelio Lau-Fernández, Juan Antonio Gil-Martínez-Abarca, Diego Marcos-Jorquera

Evaluating the Performance of the Multilayer Perceptron as a Data Editing Tool

Usually, the knowledge discovery process is developed using data sets which contain errors in the form of inconsistent values. The activity aimed at detecting and correcting logical inconsistencies in data sets is known as data editing. Traditional tools for this task, such as the Fellegi-Holt methodology, require heavy intervention by subject matter experts. This paper discusses a methodological framework for the development of an automated data editing process that can be carried out by a general nonlinear approximation model, such as an artificial neural network. We have performed an empirical evaluation of the performance of this approach on eight data sets, considering several hidden layer sizes and seven learning algorithms for the multilayer perceptron. The results obtained suggest that this approach performs well, providing a promising data cleaning tool.

Ma-Dolores Cubiles-de-la-Vega, Esther-Lydia Silva-Ramírez, Rafael Pino-Mejías, Manuel López-Coello

A.N.N. Based Approach to Mass Biometry Taking Advantage from Modularity

In recent years, the public security tendency to fit public areas with biometric devices has raised new requirements in biometric recognition, dealing with what we call here “mass biometry”. If the goal in “individual biometry” is to authenticate and/or identify an individual within a set of known people, the aim in “mass biometry” is to classify a suspect individual or behaviour within a flow of mass customary information. In this case, the ability to handle relatively poor information and the capacity for high-speed processing become chief requirements. These antagonistic demands make “mass biometry” and related applications among the most challenging settings. In this paper we present an ANN-based system in a “mass biometry” context using facial biometric features. The proposed system takes advantage of a kernel-function ANN model and IBM ZISC-based hardware. Experimental results validating our system are presented and discussed.

Kurosh Madani, Abdennasser Chebira, Véronique Amarger

Thresholded Neural Networks for Sensitive Industrial Classification Tasks

In this paper a novel classification method for real-world classification tasks is proposed. The method was designed to overcome the difficulties encountered by traditional methods when coping with real-world problems where the key issue is the detection of particular situations - such as machine faults or anomalies - which in some frameworks are hard to recognize due to interacting factors that are analyzed in the paper. The method is described and tested on two industrial problems, which demonstrate the effectiveness of the proposed approach and encourage its use in industrial environments.

Marco Vannucci, Valentina Colla, Mirko Sgarbi, Orlando Toscanelli

ANN Based Solutions: It Is Time to Defeat Real-World and Industrial Dilemmas

Over the past decades, the Artificial Neural Network (ANN) area has been the focal point of an ever-increasing number of research works and a very active pivot of interdisciplinary research activity. It is now time to ask whether ANNs are ready to defeat today’s real-world and industrial challenges. The main goal of this paper is to present, through some of the main ANN models and related techniques, their capability to solve real-world industrial dilemmas. Examples of real-world and industrial applications are presented and discussed.

Kurosh Madani, Véronique Amarger, Christophe Sabourin

Pollution Alarm System in Mexico

Air pollution is one of the most important environmental problems. The prediction of air pollutant concentrations would allow preventive measures to be taken, such as reducing pollutant emissions to the atmosphere. This paper presents a pollution alarm system used to predict air pollution concentrations in Salamanca, Mexico. The work focuses on the daily maximum concentration of the monitored pollutant. A Feed Forward Neural Network has been used to make the prediction. The database used to train the Neural Network corresponds to historical time series of meteorological variables (wind speed, wind direction, temperature and relative humidity) and air pollutant concentrations over one year. Our experiments with the proposed system show the importance of this set of meteorological variables in predicting pollutant concentrations, as well as the neural network’s efficiency. Performance is estimated using the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE).
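The two error measures named at the end of the abstract are standard and can be stated directly in code:

```python
import math

def rmse(predicted, observed):
    """Root Mean Square Error: penalises large deviations more heavily."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

def mae(predicted, observed):
    """Mean Absolute Error: average magnitude of the forecast errors."""
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)
```

Because RMSE squares the residuals before averaging, RMSE >= MAE always holds, and a large gap between the two signals occasional large prediction errors - exactly the events an alarm system must not miss.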

M. G. Cortina-Januchs, J. M. Barrón-Adame, A. Vega-Corona, D. Andina

