2011 | Book

New Challenges on Bioinspired Applications

4th International Work-conference on the Interplay Between Natural and Artificial Computation, IWINAC 2011, La Palma, Canary Islands, Spain, May 30 - June 3, 2011. Proceedings, Part II

Edited by: José Manuel Ferrández, José Ramón Álvarez Sánchez, Félix de la Paz, F. Javier Toledo

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

The two volumes, LNCS 6686 and LNCS 6687, constitute the refereed proceedings of the 4th International Work-Conference on the Interplay between Natural and Artificial Computation, IWINAC 2011, held in La Palma, Canary Islands, Spain, in May/June 2011. The 108 revised full papers presented in LNCS 6686 and LNCS 6687 were carefully reviewed and selected from numerous submissions. The first part, LNCS 6686, entitled "Foundations on Natural and Artificial Computation", includes all the contributions mainly related to the methodological, conceptual, formal, and experimental developments in the fields of neurophysiology and cognitive science. The second part, LNCS 6687, entitled "New Challenges on Bioinspired Applications", contains the papers related to bioinspired programming strategies and all the contributions related to computational solutions to engineering problems in different application domains, especially health applications, including the papers of the CYTED "Artificial and Natural Computation for Health" (CANS) research network.

Table of Contents

Frontmatter
Neuromorphic Detection of Vowel Representation Spaces

In this paper, a layered architecture to spot and characterize vowel segments in running speech is presented. The detection process is based on neuromorphic principles, namely the use of Hebbian units in layers to implement lateral inhibition, band probability estimation and mutual exclusion. Results are presented showing how the association between the acoustic set of patterns and the phonologic set of symbols may be created. Possible applications of this methodology are to be found in speech event spotting, in the study of pathological voice and in speaker biometric characterization, among others.

Pedro Gómez-Vilda, José Manuel Ferrández-Vicente, Victoria Rodellar-Biarge, Agustín Álvarez-Marquina, Luis Miguel Mazaira-Fernández, Rafael Martínez-Olalla, Cristina Muñoz-Mulas
Speaker Recognition Based on a Bio-inspired Auditory Model: Influence of Its Components, Sound Pressure and Noise Level

In the present work, an assessment was made of the influence of the different components that form a bio-inspired auditory model on speaker recognition performance by means of neural networks, at different sound pressure levels and levels of Gaussian white noise in the voice signal. The speaker's voice is processed through three variants of an auditory model. From its output, a set of psychophysical parameters is extracted, with which neural networks for speaker recognition are trained. Furthermore, the aim is to compare three parameter standardization methods. In conclusion, we observe that the psychophysical parameters characterize the speaker with acceptable recognition rates, and that the type of auditory model has an influence on speaker recognition.

Ernesto A. Martínez–Rams, Vicente Garcerán–Hernández
Inner-Hair Cells Parameterized-Hardware Implementation for Personalized Auditory Nerve Stimulation

In this paper the hardware implementation of an inner hair cell model is presented. The main features of the design are the use of Meddis' transduction structure and the methodology of Design with Reusability, which allows future migration to new hardware and design refinements for speech processing and custom-made hearing aids.

Miguel A. Sacristán-Martínez, José M. Ferrández-Vicente, Vicente Garcerán-Hernández, Victoria Rodellar-Biarge, Pedro Gómez-Vilda
Semiautomatic Segmentation of the Medial Temporal Lobe Anatomical Structures

The medial temporal lobe (MTL) is a region of the brain involved in processing and consolidating declarative memory. Structural changes in this region are directly related to Alzheimer’s disease and other dementias. Manual delimitation of these structures is very time consuming and error prone. Automatic methods are needed in order to solve these problems and make the delimitation available in clinical practice. Unfortunately, automatic methods are not yet robust enough. The use of semiautomatic methods provides an intermediate solution, with the advantages of automatic methods under the supervision of the expert. This paper proposes two semiautomatic methods aimed at making the delineation of the MTL structures easy, robust and fast.

M. Rincón, E. Díaz-López, F. Alfaro, A. Díez-Peña, T. García-Saiz, M. Bachiller, A. Insausti, R. Insausti
Analysis of Spect Brain Images Using Wilcoxon and Relative Entropy Criteria and Quadratic Multivariate Classifiers for the Diagnosis of Alzheimer’s Disease

This paper presents a computer-aided diagnosis technique for improving the accuracy of the early diagnosis of Alzheimer’s disease. 97 SPECT brain images from the “Virgen de las Nieves” Hospital in Granada are studied. The proposed method is based on two different classifiers that use two different separability criteria and a posterior reduction of the feature dimension using factor analysis. Factor loadings are used as features of two multivariate classifiers with quadratic discriminant functions. The results of these two classifiers are combined to reach the final decision. An accuracy of up to 92.78% is obtained with the proposed methodology when NC and AD subjects are considered.

F. J. Martínez, D. Salas-González, J. M. Górriz, J. Ramírez, C. G. Puntonet, M. Gómez-Río
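
The pipeline sketched in this abstract (separability-based voxel selection, factor analysis for dimension reduction, quadratic multivariate classification) can be approximated with standard tools. The following is a minimal, hypothetical sketch on synthetic data: it uses a Wilcoxon rank-sum filter as one of the two separability criteria named in the title, and scikit-learn's FactorAnalysis and QuadraticDiscriminantAnalysis in place of the authors' implementation.

```python
# Hypothetical sketch of the described pipeline: Wilcoxon-based voxel ranking,
# factor analysis on the retained voxels, and a quadratic discriminant classifier.
# X (subjects x voxels) and y (0 = NC, 1 = AD) are synthetic, not the paper's data.
import numpy as np
from scipy.stats import ranksums
from sklearn.decomposition import FactorAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(97, 500))          # placeholder for 97 SPECT feature vectors
y = rng.integers(0, 2, size=97)         # placeholder NC/AD labels

# 1) Rank voxels by Wilcoxon rank-sum separability between the two classes.
#    (In a real study this selection should be nested inside the cross-validation.)
stats = np.array([abs(ranksums(X[y == 0, j], X[y == 1, j]).statistic)
                  for j in range(X.shape[1])])
selected = np.argsort(stats)[-100:]     # keep the 100 most separable voxels

# 2) Factor analysis for dimension reduction, 3) quadratic discriminant classifier.
clf = Pipeline([("fa", FactorAnalysis(n_components=10, random_state=0)),
                ("qda", QuadraticDiscriminantAnalysis())])
acc = cross_val_score(clf, X[:, selected], y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.3f}")
```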
MRI Brain Image Segmentation with Supervised SOM and Probability-Based Clustering Method

Nowadays, the improvements in Magnetic Resonance Imaging (MRI) systems provide new and additional ways to diagnose some brain disorders such as schizophrenia or Alzheimer’s disease. One way to identify these disorders from an MRI is through image segmentation. Image segmentation consists in partitioning an image into different regions. These regions determine the different tissues present in the image. This results in a very interesting tool for neuroanatomical analyses. Thus, the diagnosis of some brain disorders can be carried out by analyzing the segmented image. In this paper we present a segmentation method based on a supervised version of the Self-Organizing Map (SOM). Moreover, a probability-based clustering method is presented in order to improve the resolution of the segmented image. Comparisons with other methods, carried out using the IBSR database, show that our method outperforms other algorithms.

Andres Ortiz, Juan M. Gorriz, Javier Ramirez, Diego Salas-Gonzalez
Effective Diagnosis of Alzheimer’s Disease by Means of Distance Metric Learning and Random Forest

In this paper we present a novel classification method of SPECT images for the development of a computer aided diagnosis (CAD) system aiming to improve the early detection of Alzheimer’s Disease (AD). The system first combines template-based normalized mean square error (NMSE) features of t-test-selected three-dimensional Regions of Interest (ROIs) with Kernel Principal Component Analysis (KPCA) to find the main features. Then, distance metric learning methods, namely Mahalanobis and Euclidean distances, are used to separate examples from different classes (controls and AD) by means of a Large Margin Nearest Neighbors (LMNN) technique. Finally, the proposed system evaluates a Random Forest (RF) classifier, yielding a 98.97% AD diagnosis accuracy, which represents a clear improvement over existing techniques, for instance Principal Component Analysis (PCA) or Normalized Minimum Squared Error (NMSE) evaluated with RF.

R. Chaves, J. Ramírez, J. M. Górriz, I. Illán, F. Segovia, A. Olivares
Distance Metric Learning as Feature Reduction Technique for the Alzheimer’s Disease Diagnosis

In this paper we present a novel classification method of SPECT images for the development of a computer aided diagnosis (CAD) system aiming to improve the early detection of Alzheimer’s Disease (AD). The system first combines template-based normalized mean square error (NMSE) features of t-test-selected three-dimensional Regions of Interest (ROIs) with Large Margin Nearest Neighbors (LMNN), a distance metric learning technique that aims to separate examples from different classes (controls and AD) by a large margin. LMNN uses a rectangular matrix (called RECT-LMNN) as an effective feature reduction technique. Moreover, the proposed system evaluates a Support Vector Machine (SVM) classifier, yielding a 97.93% AD diagnosis accuracy, which represents a clear improvement over existing techniques, for instance Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) or Normalized Minimum Squared Error (NMSE) evaluated with SVM.

R. Chaves, J. Ramírez, J. M. Górriz, D. Salas-Gonzalez, M. López
Brain Status Data Analysis by Sliding EMD

Biomedical signals are in general non-linear and non-stationary, which renders them difficult to analyze with classical time series analysis techniques. Empirical Mode Decomposition (EMD), in conjunction with a Hilbert spectral transform, together called the Hilbert-Huang Transform, is ideally suited to extract informative components which are characteristic of underlying biological or physiological processes. The method is fully adaptive and generates a complete set of orthogonal basis functions, called Intrinsic Mode Functions (IMFs), in a purely data-driven manner. Amplitude and frequency of IMFs may vary over time, which renders them different from conventional basis systems and ideally suited to study non-linear and non-stationary time series. However, biomedical time series are often recorded over long time periods. This generates the need for efficient EMD algorithms which can analyze the data in real time. No such algorithms yet exist which are robust, efficient and easy to implement. This contribution briefly reviews the technique of EMD and related algorithms and develops an on-line variant, called slidingEMD, which is shown to perform well on large-scale biomedical time series recorded during neuromonitoring.

A. Zeiler, R. Faltermeier, A. Brawanski, A. M. Tomé, C. G. Puntonet, J. M. Górriz, E. W. Lang
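
As a rough illustration of the on-line idea behind slidingEMD, the sketch below applies standard EMD to overlapping windows of a long signal and keeps only the central part of each window. It relies on the third-party PyEMD package ("EMD-signal" on PyPI), and the window length, step and edge-trimming strategy are assumptions, not the authors' algorithm.

```python
# Illustrative sliding-window EMD, not the authors' slidingEMD algorithm.
# Requires the third-party package "EMD-signal" (imported as PyEMD).
import numpy as np
from PyEMD import EMD

def sliding_emd(signal, win=1024, step=512, n_imfs=4):
    """Decompose overlapping windows and return the per-window IMFs.

    Only the central `step` samples of each window are kept, which reduces
    (but does not remove) the boundary artefacts of plain EMD.
    """
    emd = EMD()
    pieces = []
    for start in range(0, len(signal) - win + 1, step):
        window = signal[start:start + win]
        imfs = emd.emd(window, max_imf=n_imfs)          # rows = IMFs of this window
        centre = slice((win - step) // 2, (win - step) // 2 + step)
        pieces.append(imfs[:n_imfs, centre])
    return pieces

t = np.linspace(0, 10, 8192)
x = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 12.0 * t)
chunks = sliding_emd(x)
print(len(chunks), "windows; first window IMF block shape:", chunks[0].shape)
```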
A Quantitative Study on Acupuncture Effects for Fighting Migraine Using SPECT Images

The aim of this paper is to quantitatively determine whether acupuncture, applied under real conditions of clinical practice in the area of primary healthcare, is effective for fighting migraine. This is done by evaluating SPECT images of migraine patients’ brains in an image classification context. Two different groups of patients were randomly collected and received verum and sham acupuncture, respectively. In order to make the image processing computationally efficient and solve the small sample size problem, an initial feature extraction step based on Principal Component Analysis is performed on the images. Differences among features extracted from pre- and post-acupuncture scans are quantified by means of Support Vector Machines for the verum and sham modalities, and statistically reinforced by carrying out a t-test. The conclusions of this work point to acupuncture as an effective method to fight migraine.

M. López, J. Ramírez, J. M. Górriz, R. Chaves, M. Gómez-Río
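
A minimal sketch of the analysis chain described here (PCA features from pre/post scans, an SVM separating the two conditions, and a paired t-test) is given below on synthetic stand-in data rather than the study's SPECT images.

```python
# Sketch of the described analysis on synthetic placeholders: PCA features,
# an SVM separating pre from post scans, and a paired t-test on the feature shift.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
pre = rng.normal(size=(20, 4096))                  # flattened pre-acupuncture scans
post = pre + rng.normal(0.2, 0.5, size=pre.shape)  # flattened post-acupuncture scans

# PCA fitted on all scans to handle the small-sample-size problem.
pca = PCA(n_components=10).fit(np.vstack([pre, post]))
f_pre, f_post = pca.transform(pre), pca.transform(post)

# SVM separating pre from post in PCA space (labels: 0 = pre, 1 = post).
X = np.vstack([f_pre, f_post])
y = np.r_[np.zeros(len(f_pre)), np.ones(len(f_post))]
print("SVM accuracy:", cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())

# Paired t-test on the first principal component before vs after treatment.
t_stat, p_val = ttest_rel(f_pre[:, 0], f_post[:, 0])
print(f"paired t-test: t = {t_stat:.2f}, p = {p_val:.3f}")
```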
High Resolution Segmentation of CSF on Phase Contrast MRI

Dynamic velocity-encoded phase-contrast MRI (PC-MRI) techniques are being used increasingly to quantify pulsatile flows for a variety of clinical flow applications. A method for high-resolution segmentation of cerebrospinal fluid (CSF) velocity is described. The method works on PC-MRI with high temporal and spatial resolution. In this paper it has been applied to the CSF flow at the Aqueduct of Sylvius (AS). The approach first selects the regions with high flow by applying a threshold on the coefficient of variation of the image pixels' velocity profiles. The AS corresponds to the most central detected region. We perform a lattice independent component analysis (LICA) on this small region, so that the image abundances provide the high-resolution segmentation of the CSF flow at the AS. The long-term goal of our work is to use this detection and segmentation to take measurements and evaluate the changes in patients with suspected idiopathic normal pressure hydrocephalus (iNPH).

Elsa Fernández, Manuel Graña, Jorge Villanúa
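
The region-selection step described above, thresholding the coefficient of variation of each pixel's velocity profile, is straightforward to illustrate. The sketch below uses a synthetic velocity stack and an assumed percentile threshold, and omits the LICA unmixing stage.

```python
# Coefficient-of-variation thresholding on a synthetic (time, rows, cols)
# PC-MRI velocity stack; pixels with strongly pulsatile profiles are flagged.
import numpy as np

rng = np.random.default_rng(2)
velocity = rng.normal(0.1, 0.02, size=(32, 64, 64))   # quiet background pixels
pulse = 2.0 * np.sin(np.linspace(0, 2 * np.pi, 32))
velocity[:, 30:34, 30:34] += pulse[:, None, None]      # small pulsatile region

mean = velocity.mean(axis=0)
std = velocity.std(axis=0)
cv = std / (np.abs(mean) + 1e-9)          # coefficient of variation per pixel

mask = cv > np.percentile(cv, 99)          # assumed threshold: top 1% of CV values
rows, cols = np.nonzero(mask)
print("candidate high-pulsatility pixels:", list(zip(rows, cols))[:5], "...")
```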
Exploration of LICA Detections in Resting State fMRI

The Lattice Independent Component Analysis (LICA) approach consists of the detection of lattice independent vectors (endmembers) that are used as a basis for a linear decomposition of the data (unmixing). In this paper we explore the network detections obtained with LICA in resting-state fMRI data from healthy controls and schizophrenic patients. We compare them with the findings of a standard Independent Component Analysis (ICA) algorithm. We do not find agreement between LICA and ICA. When comparing findings on a control versus a schizophrenic patient, the results from LICA show greater negative correlations than ICA, pointing to a greater potential for discrimination and construction of specific classifiers.

Darya Chyzhyk, Ann K. Shinn, Manuel Graña
FreeSurfer Automatic Brain Segmentation Adaptation to Medial Temporal Lobe Structures: Volumetric Assessment and Diagnosis of Mild Cognitive Impairment

Alzheimer’s disease is a prevalent and progressive neurodegenerative disease that often starts clinically as a memory deficit. Specifically, the hippocampal formation (HF) and the medial part of the temporal lobe (MTL) are severely affected. These structures are at the core of the neural system responsible for encoding and retrieval of the memory for facts and events (episodic memory). Clinical lesions as well as experimental evidence indicate that the HF (hippocampus plus entorhinal cortex) and the adjacent cortex in the MTL are the regions critical for normal episodic memory function. Structural MRI studies can be processed with FreeSurfer to obtain an automatic segmentation of many brain structures. We wanted to explore the advantages of complementing the automatic segmentation of FreeSurfer with a manual segmentation of the HF and MTL to obtain a more accurate evaluation of these memory centers.

We examined a library of cases in which neuroanatomical delimitation of the extent of the HF and MTL was made in 48 control and 16 AD brains, and the knowledge provided was applied to 7 cases (2 controls and 5 MCI patients) in which 3T MRI scans were obtained at two time points, one and a half years apart. Our results show that volumetric values were preserved in controls as well as in non-amnestic MCI patients, while the amnestic type (the one most likely to develop full AD) showed a volume decrease in the HF and MTL structures. The methodology still needs further development towards full automation, but it seems promising enough for early detection of volume changes in patients at risk of developing AD.

R. Insausti, M. Rincón, E. Díaz-López, E. Artacho-Pérula, F. Mansilla, J. Florensa, C. González-Moreno, J. Álvarez-Linera, S. García, H. Peraita, E. Pais, A. M. Insausti
Alzheimer Disease Classification on Diffusion Weighted Imaging Features

An on-going study in Hospital de Santiago Apostol collects anatomical T1-weighted MRI volumes and Diffusion Weighted Imaging (DWI) data of control and Alzheimer’s Disease patients. The aim of this paper is to obtain discriminant features from scalar measures of DWI data, the Fractional Anisotropy (FA) and Mean Diffusivity (MD) volumes, and to train and test classifiers able to discriminate AD patients from controls on the basis of features selected from the FA or MD volumes. In this study, separate classifiers were trained and tested on FA and MD data. Feature selection is done according to Pearson’s correlation between voxel values across subjects and the control variable giving the subject class (1 for AD patients, 0 for controls). Some of the tested classifiers reach very high accuracy with this simple feature selection process. These results point to the validity of DWI data as an image marker for AD.

M. Termenon, A. Besga, J. Echeveste, A. Gonzalez-Pinto, M. Graña
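
The feature selection rule described in this abstract is simple enough to sketch directly: rank voxels by the absolute Pearson correlation between their values across subjects and the class label. The code below runs on random placeholder data; the nested cross-validation a real study would need is only noted in a comment.

```python
# Pearson-correlation voxel selection on placeholder FA data, followed by an SVM.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
fa = rng.normal(size=(40, 2000))        # 40 subjects x 2000 FA voxels (placeholder)
y = rng.integers(0, 2, size=40)         # 1 = AD patient, 0 = control

corr = np.array([abs(pearsonr(fa[:, j], y)[0]) for j in range(fa.shape[1])])
top = np.argsort(corr)[-50:]            # keep the 50 most correlated voxels

# Any classifier can then be trained on the selected voxels; SVC is one choice.
# (In practice, the selection must be nested inside the cross-validation loop.)
print("accuracy:", cross_val_score(SVC(), fa[:, top], y, cv=5).mean())
```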
Future Applications with Diffusion Tensor Imaging

In this paper we present the operations that all programs working with Diffusion Tensor Imaging (DTI) should execute to transform the initial set of images and calculate the ellipsoids used for the definition of the fiber tracks. At this stage we are not concerned with checking the quality of the results of the different programs; our interest is in seeking different options for using the information calculated in the intermediate steps, since current programs only use that information for the computation of fiber tracks.

The second idea of the paper is the comparison of DTI with Magnetic Resonance Imaging (MRI).

T. García-Saiz, M. Rincón, A. Lundervold
Monitoring Neurological Disease in Phonation

It is well known that many neurological diseases leave a fingerprint in voice and speech production. The dramatic impact of these pathologies on quality of life is a growing concern. Many techniques have been designed for the detection, diagnosis and monitoring of neurological disease. Most of them are costly or difficult to extend to primary care services. The present paper shows that some neurological diseases can be traced at the level of voice production. The detection procedure would be based on a simple voice test. The availability of advanced tools and methodologies to monitor the organic pathology of voice would facilitate the implementation of these tests. The paper hypothesizes some of the underlying mechanisms affecting the production of voice and presents a general description of the methodological foundations of a voice analysis system which can estimate correlates of the neurological disease. A case study on spasmodic dysphonia is presented to illustrate the possibilities of the methodology to monitor other neurological problems as well.

Pedro Gómez-Vilda, Roberto Fernández-Baíllo, José Manuel Ferrández-Vicente, Victoria Rodellar-Biarge, Agustín Álvarez-Marquina, Luis Miguel Mazaira-Fernández, Rafael Martínez-Olalla, Cristina Muñoz-Mulas
Group Formation for Minimizing Bullying Probability. A Proposal Based on Genetic Algorithms

Bullying is a problem that needs to be considered in the early stages of group formation. Unfortunately, as far as we are aware, there is no known procedure to help teachers cope with this problem. It has been established that, in a given group, the specific configuration of the student distribution affects the behavior among students. Based on this fact, we propose the use of genetic algorithms to help distribute students in a classroom, taking into account elements such as leadership traits, among other features. The sociogram is a technique that teachers have been using for years to support group formation. A sociogram is a sociometric diagram representing the pattern of relationships among individuals in a group, usually expressed in terms of which persons they prefer to associate with. This work combines the concepts of genetic algorithms and sociograms, which can be easily represented by means of relationship graphs. A set of tests is applied to the students to collect relevant data, and the results can be validated with the help of specialists.

L. Pedro Salcedo, M. Angélica Pinninghoff J., A. Ricardo Contreras
A Speaker Recognition System Based on an Auditory Model and Neural Nets: Performance at Different Levels of Sound Pressure and of Gaussian White Noise

This paper performs the assessment of an auditory model based on a human nonlinear cochlear filter-bank and on Neural Nets. The efficiency of this system in speaker recognition tasks has been tested at different levels of voice pressure and different levels of noise. The auditory model yields five psychophysical parameters with which a neural network is trained. We used a number of Spanish words from the ’Ahumada’ database as uttered by native male speakers.

Ernesto A. Martínez–Rams, Vicente Garcerán–Hernández
Automatic Detection of Hypernasality in Children

Automatic hypernasality detection in children with Cleft Lip and Palate is performed considering five Spanish vowels. Characterization is carried out by means of several acoustic and noise features, building a representation space with high dimensionality. The most relevant features are selected using Principal Component Analysis and linear correlation in order to enable clinical interpretation of the results and to achieve spaces with lower dimensions per vowel. Using a Linear-Bayes classifier, success rates between 80% and 90% are reached, beating the success rates achieved in similar recently reported studies.

S. Murillo Rendón, J. R. Orozco Arroyave, J. F. Vargas Bonilla, J. D. Arias Londoño, C. G. Castellanos Domínguez
Characterization of Focal Seizures in Scalp Electroencephalograms Based on Energy of Signal and Time-Frequency Analysis

This work presents a method for the characterization of focal seizures from scalp digital electroencephalograms (EEG) obtained at the Central League Against Epilepsy in Bogotá, which were acquired from patients between 13 and 53 years old. This characterization was performed in segments of 500 ms with presence of focal seizures that had been initially identified and labeled by a specialist during visual examination. After selection of the segments and channels for analysis, the energy of the signals was calculated with the idea that the energy of focal seizures could be larger than that of the surrounding waves in the segments. This procedure produced peaks of energy corresponding to the seizures and, sometimes, to noise and artifacts. In order to identify the energy peaks of the seizures, an analysis with the continuous wavelet transform was performed. It was found that the mother wavelet ‘bior2.2’ allowed easier identification of such seizures from the seventh scale of the analysis. The method allowed the identification of 65% of the seizures labeled by the specialist.

Alexander Cerquera, Laura V. Guío, Elías Buitrago, Rafael M. Gutiérrez, Carlos Medina
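
A hedged sketch of the two processing steps described above, windowed signal energy followed by a continuous wavelet transform, is given below on a synthetic EEG trace. PyWavelets' CWT does not accept the discrete 'bior2.2' family used in the paper, so a Morlet wavelet is used here as a stand-in; the sampling rate is also an assumption.

```python
# Windowed energy of a synthetic EEG trace, then a CWT of the peak-energy segment.
# Morlet is used instead of the paper's 'bior2.2', which is a discrete wavelet family.
import numpy as np
import pywt

fs = 256                                       # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) * (t > 4) * (t < 5) + 0.2 * np.random.randn(t.size)

win = int(0.5 * fs)                            # 500 ms segments
energy = np.array([np.sum(eeg[i:i + win] ** 2)
                   for i in range(0, eeg.size - win + 1, win)])
peak_segment = int(np.argmax(energy))          # candidate seizure segment

segment = eeg[peak_segment * win:(peak_segment + 1) * win]
coeffs, freqs = pywt.cwt(segment, scales=np.arange(1, 32), wavelet="morl",
                         sampling_period=1 / fs)
print("peak-energy segment:", peak_segment, "CWT coefficient shape:", coeffs.shape)
```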
An Optimized Framework to Model Vertebrate Retinas

The retina is a very complex neural structure, which contains many different types of neurons interconnected with great precision, enabling sophisticated conditioning and coding of the visual information before it is passed via the optic nerve to higher visual centers. Therefore the retina performs spatial, temporal, and chromatic processing on visual information and converts it into a compact ’digital’ format composed of neural impulses. However, how groups of retinal ganglion cells encode a broad range of visual information is still a challenging and unresolved question. The main objective of this work is to design and develop a new functional tool able to describe, simulate and validate custom retina models. The whole system is optimized for visual neuroprosthesis and can be accelerated by using FPGAs, COTS microprocessors or GP-GPU based systems.

Andrés Olmedo-Payá, Antonio Martínez-Álvarez, Sergio Cuenca-Asensi, Jose M. Ferrández, Eduardo Fernández
An Expandable Hardware Platform for Implementation of CNN-Based Applications

This paper proposes a standalone system for real-time processing of video streams using CNNs. The computing platform is easily expandable and customizable for any application. This is achieved by using a modular approach both for the CNN architecture itself and for its hardware implementation. Several FPGA-based processing modules can be cascaded together with a video acquisition stage and an output interface to a framegrabber for video output storage, all sharing a common communication interface. The pre-verified CNN components, the modular architecture, and the expandable hardware platform provide an excellent workbench for fast and confident developing of CNN applications.

J. Javier Martínez-Álvarez, F. Javier Garrigós-Guerrero, F. Javier Toledo-Moreo, J. Manuel Ferrández-Vicente
Classification of Welding Defects in Radiographic Images Using an Adaptive-Network-Based Fuzzy System

In this paper, we describe an automatic system for radiographic inspection of welds. An important stage in the construction of this system is the classification of defects. In this stage, an adaptive-network-based fuzzy inference system (ANFIS) for weld defect classification was used. The results were compared with the aim of identifying the features that allow the best classification. The correlation coefficients were determined, obtaining a minimum value of 0.84. The accuracy, i.e. the proportion of the total number of predictions that were correct, was 82.6%.

Rafael Vilar, Juan Zapata, Ramón Ruiz
Reinforcement Learning Techniques for the Control of WasteWater Treatment Plants

Since water pollution is one of the most serious environmental problems today, control of wastewater treatment plants (WWTPs) is a crucial issue, and stricter standards for the operation of WWTPs have been imposed by authorities. One of the main problems in the automation of the control of WWTPs appears when the control system does not respond as it should because of changes in influent load or flow. Thus, the development of autonomous systems that learn from interaction with a WWTP and that can operate taking into account changing environmental circumstances is desirable. In this paper we present an intelligent agent using reinforcement learning for oxygen control in the N-ammonia removal process in the well-known Benchmark Simulation Model no. 1 (BSM1). The aim of the approach presented in this paper is to minimize the operation cost by changing the set-points of the control system autonomously.

Felix Hernandez-del-Olmo, Elena Gaudioso
Genetic Programming for Prediction of Water Flow and Transport of Solids in a Basin

One of the applications of Data Mining is the extraction of knowledge from time series [1][2]. Evolutionary Computation (EC) and Artificial Neural Networks (ANNs) have proved to be suitable in Data Mining for handling this type of series [3][4]. This paper presents the use of Genetic Programming (GP) for the prediction of time series in the field of Civil Engineering, where the predictive structure does not follow the classic paradigms. In this specific case, the GP technique is applied to two phenomena that model the process by which, for a specific area, fallen rain concentrates and flows on the surface, and the transport of solids is later predicted from the water flow. Good results are achieved both in the water flow prediction and in the solids transport prediction.

Juan R. Rabuñal, Jerónimo Puertas, Daniel Rivero, Ignacio Fraga, Luis Cea, Marta Garrido
Comparing Elastic Alignment Algorithms for the Off-Line Signature Verification Problem

This paper systematically compares two elastic graph matching methods for off-line signature verification: shape-memory snakes and parallel segment matching. Since in many practical applications (e.g. those related to banking environments) the number of sample signatures available to train the system must be very small, we selected these two methods, which have that property. Both methods also share some other similarities, since they use graph models to perform the verification task and require a registration pre-processing step. Experimental results on the same database and using the same evaluation metrics have shown that the shape-memory snakes clearly outperformed the parallel segment matching approach on the same signature dataset (9% EER compared to 24% EER, respectively).

J. F. Vélez, A. Sánchez, A. B. Moreno, L. Morillo-Velarde
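
The EER figures quoted above can be reproduced from any set of verification scores with a standard ROC analysis; the sketch below uses random stand-in scores, not the paper's data.

```python
# Generic equal error rate (EER) computation from genuine/forgery verification
# scores; the scores are random placeholders for illustration only.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)
genuine = rng.normal(1.0, 0.5, 200)     # higher score = more likely genuine
forgery = rng.normal(0.0, 0.5, 200)

scores = np.r_[genuine, forgery]
labels = np.r_[np.ones(genuine.size), np.zeros(forgery.size)]

fpr, tpr, _ = roc_curve(labels, scores)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fnr - fpr))]   # operating point where FAR and FRR meet
print(f"EER is approximately {eer:.2%}")
```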
A Fuzzy Cognitive Maps Modeling, Learning and Simulation Framework for Studying Complex System

This paper presents Fuzzy Cognitive Maps as an approach to modeling the behavior and operation of complex systems; they combine aspects of fuzzy logic, neural networks, semantic networks, expert systems, and nonlinear dynamical systems. They are fuzzy weighted directed graphs with feedback that create models emulating the behavior of complex decision processes using fuzzy causal relations. First, the description and methodology that this theory suggests are examined; later, some ideas for using this approach in the control process area are discussed. A learning method for this technique inspired by particle swarm optimization is proposed, and then the implementation of a tool based on Fuzzy Cognitive Maps is described. The application of this theory might contribute to the progress of more intelligent and independent systems. Fuzzy Cognitive Maps have been fruitfully used in decision making and in the simulation and analysis of complex situations. At the end, a case study about Travel Behavior is analyzed and the results are assessed.

Maikel León, Gonzalo Nápoles, Ciro Rodriguez, María M. García, Rafael Bello, Koen Vanhoof
Study of Strength Tests with Computer Vision Techniques

Knowing the strain response of materials in strength tests is one of the main issues in the construction and engineering fields. In these tests, information about displacements and strains is usually obtained using physical devices attached to the material.

In this paper, the suitability of Computer Vision techniques to analyse strength tests without interfering with the assay is discussed and a new technique is proposed.

This technique measures displacements and deformations from a video sequence of the assay.

For this purpose, a Block-Matching Optical Flow algorithm is integrated with a calibration process to extract the vector field of the displacements in the material.

To evaluate the proposed technique, a synthetic image set and a real sequence from a strength test were analysed.

Alvaro Rodriguez, Juan R. Rabuñal, Juan L. Perez, Fernando Martinez-Abella
Scaling Effects in Crossmodal Improvement of Visual Perception

Inspired by the work of J. Gonzalo [Dinámica Cerebral. Publ. Red Comput. Natural y Artificial, Univ. Santiago de Compostela, Spain 2010] on multisensory effects and crossmodal facilitation of perception in patients with cerebral cortical lesions, we have observed and modelled weaker but similar effects in normal subjects: moderate and static muscular effort improves visual vernier acuity in ten tested normal subjects, and a scaling power law describes the improvement with the intensity of the effort. This suggests that the mechanism of activation of unspecific (or multispecific) neural mass in the facilitation phenomena in the damaged brain is also involved in the normal brain, and that the power law reflects a basic biological scaling with the activated neural mass, inherent to natural networks.

Isabel Gonzalo-Fonrodona, Miguel A. Porras
Pattern Recognition Using a Recurrent Neural Network Inspired on the Olfactory Bulb

The olfactory system is a remarkable system capable of discriminating very similar odorant mixtures. This is achieved in part via spatio-temporal activity patterns generated in mitral cells, the principal cells of the olfactory bulb, during odor presentation. In this work, we present a spiking neural network model of the olfactory bulb and evaluate its performance as a pattern recognition system with datasets taken from both artificial and real pattern databases. Our results show that the dynamic activity patterns produced in the mitral cells of the olfactory bulb model by the pattern attributes presented to it have a pattern separation capability. This capability can be exploited in the construction of high-performance pattern recognition systems.

Lucas Baggio Figueira, Antonio Carlos Roque
Experiments on Lattice Independent Component Analysis for Face Recognition

In previous works we have proposed Lattice Independent Component Analysis (LICA) for a variety of image processing tasks. The first step of LICA is to identify strong lattice independent components from the data. The set of strong lattice independent vectors is used for linear unmixing of the data, obtaining a vector of abundance coefficients. In this paper we propose to use the resulting abundance values as features for classification, specifically for face recognition. We report results on two well-known benchmark databases.

Ion Marqués, Manuel Graña
A Hyperheuristic Approach for Dynamic Enumeration Strategy Selection in Constraint Satisfaction

In this work we show a framework for guiding the classical constraint programming resolution process. Such a framework allows one to measure the state of the resolution process in order to perform an “on the fly” replacement of strategies exhibiting poor performance. The replacement is performed depending on a quality rank, which is computed by means of a choice function. The choice function determines the performance of a given strategy in a given amount of time through a set of indicators and control parameters. The goal is to select promising strategies to achieve efficient resolution processes. The main novelty of our approach is that we reconfigure the search based solely on performance data gathered while solving the current problem. We report encouraging results where our combination of strategies outperforms the use of individual strategies.

Broderick Crawford, Ricardo Soto, Carlos Castro, Eric Monfroy
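
A choice function of the kind described here is typically a weighted sum of performance indicators collected while a strategy runs; the indicator names and weights in the sketch below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical choice function for dynamic enumeration-strategy selection:
# each strategy is ranked by a weighted sum of indicators gathered on the fly.
from dataclasses import dataclass

@dataclass
class Indicators:
    backtracks: int        # search effort spent so far
    depth: int             # maximum depth reached
    fixed_vars: int        # variables instantiated so far

def choice_function(ind: Indicators, w1=1.0, w2=0.5, w3=-2.0) -> float:
    """Higher is better; the weights w1-w3 act as control parameters."""
    return w1 * ind.fixed_vars + w2 * ind.depth + w3 * ind.backtracks

# Rank the available enumeration strategies and switch to the most promising one.
observed = {
    "min-domain":       Indicators(backtracks=40, depth=25, fixed_vars=60),
    "most-constrained": Indicators(backtracks=10, depth=30, fixed_vars=55),
    "lexicographic":    Indicators(backtracks=90, depth=12, fixed_vars=30),
}
best = max(observed, key=lambda name: choice_function(observed[name]))
print("strategy selected for the next step:", best)
```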
Genetic Algorithm for Job-Shop Scheduling with Operators

We face the job-shop scheduling problem with operators. To solve this problem we propose a new approach that combines a genetic algorithm with a new schedule generation scheme. We report results from an experimental study across conventional benchmark instances showing that our approach outperforms some current state-of-the-art methods.

Raúl Mencía, María R. Sierra, Carlos Mencía, Ramiro Varela
Bio-inspired System in Automatic Speaker Recognition

Automatic speaker recognition determines, from a person's voiceprint, whether he/she is who he/she claims to be, without the intervention of another human being. In recent years, different methods of voice analysis have been developed to fulfil this objective. At present, the development of computational auditory models that emulate the physiology and function of the inner ear has added a new tool to the field of speaker recognition, including the Triple Resonance Nonlinear filter. This paper studies the behavior of a bio-inspired model of the inner ear applied to the analysis of speakers' voices. Its ability in speaker recognition tasks was statistically analyzed. Given the results obtained, we conclude that this system is an excellent tool, showing high sensitivity and specificity in speaker recognition.

Lina Rosique–López, Vicente Garcerán–Hernández
Independent Component Analysis: A Low-Complexity Technique

This paper presents a new algorithm to solve the Independent Component Analysis (ICA) problem that has a very low computational complexity. The most remarkable feature of the proposed algorithm is that it does not need to compute higher-order statistics (HOS). In fact, the algorithm is based on trying to guess the sign of the independent components, after which it approximates the rest of the values.

Rubén Martín-Clemente, Susana Hornillo-Mellado, José Luis Camargo-Olivares
AdaBoost Face Detection on the GPU Using Haar-Like Features

Face detection is a time-consuming task in computer vision applications. In this article, an approach for AdaBoost face detection using Haar-like features on the GPU is proposed. The GPU-adapted version of the algorithm manages to speed up the detection process when compared with the detection performance of the CPU using a well-known computer vision library. An overall speed-up of 3.3× is obtained on the GPU for a video resolution of 640×480 px when compared with the CPU implementation. Moreover, since the CPU is idle during face detection, it can be used simultaneously for other computer vision tasks.

M. Martínez-Zarzuela, F. J. Díaz-Pernas, M. Antón-Rodríguez, F. Perozo-Rondón, D. González-Ortega
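
For context, the CPU baseline such GPU implementations are usually compared against is OpenCV's Haar cascade detector; a minimal usage sketch (with a hypothetical input frame path) is shown below.

```python
# CPU baseline for AdaBoost face detection with Haar-like features, using the
# cascade bundled with OpenCV; the input frame path is a hypothetical example.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

frame = cv2.imread("frame_640x480.png")          # hypothetical 640x480 video frame
if frame is None:
    raise SystemExit("place a test frame next to this script to run the sketch")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                  minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"{len(faces)} face(s) detected")
```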
Fuzzy ARTMAP Based Neural Networks on the GPU for High-Performance Pattern Recognition

In this paper we introduce, to the best of our knowledge, the first adaptation of the Fuzzy ARTMAP neural network for execution on a GPU, together with a self-designed neural network based on ART models called SOON. The full VisTex database, containing 167 texture images, is shown to be classified in a very short time using these GPU-based neural networks. The Fuzzy ARTMAP neural network implemented on the GPU performs up to 100× faster than the equivalent CPU version, while the SOON neural network is sped up by 70×. Using the same texture patterns, the Fuzzy ARTMAP neural network obtains a success rate of 48% and SOON a rate of 82% for texture classification.

M. Martínez-Zarzuela, F. J. Díaz-Pernas, A. Tejero de Pablos, F. Perozo-Rondón, M. Antón-Rodríguez, D. González-Ortega
Bio-inspired Color Image Segmentation on the GPU (BioSPCIS)

In this paper we introduce a neural architecture for multiple scale color image segmentation on a Graphics Processing Unit (GPU): the BioSPCIS (Bio-Inspired Stream Processing Color Image Segmentation) architecture. BioSPCIS has been designed according to the physiological organization of the cells in the mammalian visual system and psychophysical studies about the interaction of these cells for image segmentation. Quality of the segmentation was measured against hand-labelled segmentations from the Berkeley Segmentation Dataset. Using a stream processing model and hardware suitable for its execution, we are able to compute the activity of several neurons in the visual pathway simultaneously. All 100 test images in the Berkeley database can be processed in 5 minutes using this architecture.

M. Martínez-Zarzuela, F. J. Díaz-Pernas, M. Antón-Rodríguez, F. Perozo-Rondón, D. González-Ortega
Simulating a Rock-Scissors-Paper Bacterial Game with a Discrete Cellular Automaton

This paper describes some of the results obtained after the design and implementation of a discrete cellular automaton simulating the generation, degradation and diffusion of particles in a two-dimensional grid where different colonies of bacteria coexist and interact. This lattice-based simulator uses a random-walk-based algorithm to diffuse particles in a 2D discrete lattice. As first results, we analyze and show the oscillatory dynamical behavior of 3 colonies of bacteria competing in a non-transitive relationship analogous to a Rock-Scissors-Paper game (Rock bacteria beat Scissors bacteria, which beat Paper bacteria; and Paper bacteria beat Rock bacteria). The interaction and communication between bacteria is carried out through the quorum sensing process, via the generation and diffusion of three small molecules called autoinducers. These are the first results obtained from the first version of a general simulator able to model some of the complex molecular information processing and rich communication processes in synthetic bacterial ecosystems.

Pablo Gómez Esteban, Alfonso Rodríguez-Patón
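
The non-transitive competition itself can be reproduced with a very small lattice model; the toy update rule below shows the cyclic dominance but deliberately omits the autoinducer generation, degradation and diffusion described in the abstract.

```python
# Toy rock-scissors-paper cellular automaton on a 2D lattice: species 0 beats 1,
# 1 beats 2, and 2 beats 0. A cell is invaded when a randomly chosen neighbour
# dominates it. Autoinducer diffusion from the paper is not modelled here.
import numpy as np

rng = np.random.default_rng(5)
L = 64
grid = rng.integers(0, 3, size=(L, L))
neighbourhood = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(grid):
    new = grid.copy()
    for i in range(L):
        for j in range(L):
            di, dj = neighbourhood[rng.integers(4)]
            n = grid[(i + di) % L, (j + dj) % L]
            if (n + 1) % 3 == grid[i, j]:     # neighbour species beats this cell
                new[i, j] = n
    return new

for _ in range(50):
    grid = step(grid)
print("species populations after 50 steps:", np.bincount(grid.ravel(), minlength=3))
```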
Mobile Robot Localization through Identifying Spatial Relations from Detected Corners

In this paper, the Harris corner detection algorithm is applied to images captured by a time-of-flight (ToF) camera. In this case, the ToF camera mounted on a mobile robot is exploited as a gray-scale camera for localization purposes. Indeed, the gray-scale image represents distances, for the purpose of finding good features to be tracked. These features, which are actually points in space, form the basis of the spatial relations used in the localization algorithm. The approach to the localization problem is based on the computation of the spatial relations existing among the detected corners. The current spatial relations are matched with the relations obtained during previous navigation.

Sergio Almansa-Valverde, José Carlos Castillo, Antonio Fernández-Caballero, José Manuel Cuadra Troncoso, Javier Acevedo-Rodríguez
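
The corner-extraction step described above can be sketched with OpenCV's Harris detector; the depth image below is a synthetic stand-in for a ToF frame, and the response threshold is an assumption.

```python
# Minimal Harris corner extraction on a gray-scale (distance) image, standing in
# for the feature step described above; the depth frame here is synthetic.
import cv2
import numpy as np

depth = np.full((120, 160), 3.0, dtype=np.float32)   # background at 3 m
depth[40:80, 60:110] = 1.5                           # a box closer to the camera

# Rescale distances to a gray-scale image, as the ToF frame is used in the paper.
gray = ((depth - depth.min()) / (depth.max() - depth.min()) * 255).astype(np.float32)

# Harris response map: blockSize=2, Sobel aperture=3, Harris parameter k=0.04.
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Keep the strongest responses as corner points (threshold is an assumption).
ys, xs = np.nonzero(response > 0.01 * response.max())
corners = np.column_stack([xs, ys])
print(f"{len(corners)} corner pixels detected; spatial relations are built from these")
```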
Improving the Accuracy of a Two-Stage Algorithm in Evolutionary Product Unit Neural Networks for Classification by Means of Feature Selection

This paper introduces a methodology that improves the accuracy of a two-stage algorithm in evolutionary product unit neural networks for classification tasks by means of feature selection. A couple of filters have been taken into consideration to try out the proposal. The experimentation has been carried out on seven data sets from the UCI repository that report test mean accuracy error rates of about twenty percent or above with reference classifiers such as C4.5 or 1-NN. The study includes an overall empirical comparison between the models obtained with and without feature selection. Several classifiers have also been tested in order to illustrate the performance of the different filters considered. The results have been contrasted with nonparametric statistical tests and show that our proposal significantly improves the test accuracy of the previous models for the considered data sets. Moreover, the current proposal is much more efficient than a previous methodology developed by us; lastly, the reduction in the number of inputs is above fifty-five percent on average.

Antonio J. Tallón-Ballesteros, César Hervás-Martínez, José C. Riquelme, Roberto Ruiz
Knowledge Living on the Web (KLW)

The amount of information currently available on the Internet is absolutely huge. The absence of semantic organization in the web resources hinders the access and use of the information stored and in particular the access to the computational systems in an autonomous manner.

Ontologies make use of formal structures, mainly based on logic, to define and allow reuse of the knowledge stored on the Internet. This article presents a different line of work which does not provide an alternative to already existing ontologies but is intended to complement them by coexisting on the Internet.

The foundations of our proposal lie in the idea that the nature of knowledge, which is evolutionary, emergent and self-generative, is closer to the characteristics associated with life than to those pertaining to logic or mathematics.

We attempt to use the Internet to generate a virtual world where knowledge can grow and evolve according to artificial life rules utilizing users as replicators, as well as the use they make of such knowledge on the Internet.

We have developed a representation scheme called KDL (Knowledge Description Language), a DKS server application (Domain Knowledge Server) and a KEE user interface (Knowledge Explorer and Editor), which allow Internet users to register, search, use and reference the knowledge housed on the KLW Web.

Miguel A. Fernandez, Juan Miguel Ruiz, Olvido Arraez Jarque, Margarita Carrion Varela
Local Context Discrimination in Signature Neural Networks

Bio-inspiration in traditional artificial neural networks (ANN) relies on knowledge about the nervous system that was available more than 60 years ago. Recent findings from neuroscience research provide novel elements of inspiration for ANN paradigms. We have recently proposed a Signature Neural Network that uses: (i) neural signatures to identify each unit in the network, (ii) local discrimination of input information during the processing, and (iii) a multicoding mechanism for information propagation regarding the who and the what of the information. The local discrimination implies a distinct processing as a function of the neural signature recognition and a local transient memory. In this paper we further analyze the role of this local context memory to efficiently solve jigsaw puzzles.

Roberto Latorre, Francisco B. Rodríguez, Pablo Varona
Spider Recognition by Biometric Web Analysis

Saving Earth’s biodiversity for future generations is an important global task. Spiders are creatures with fascinating behaviour, above all in the way they build their webs. For this reason, this work proposes a novel problem: the use of spider webs as a source of information for species recognition. To do so, biometric techniques such as image processing tools, Principal Component Analysis and Support Vector Machines have been used to build a spider web identification system. With a database built from images of spider webs of three species, the system reached a best performance of 95.44% in a 10-fold cross-validation procedure.

Jaime R. Ticay-Rivas, Marcos del Pozo-Baños, William G. Eberhard, Jesús B. Alonso, Carlos M. Travieso
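
The recognition pipeline described here (image features, PCA, SVM, 10-fold cross-validation) maps directly onto standard scikit-learn components; the sketch below uses random placeholder data in place of the three-species web image database.

```python
# Generic sketch of the described pipeline: flattened web images -> PCA features
# -> SVM, scored with 10-fold cross-validation. Data are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.normal(size=(90, 64 * 64))      # 90 web images of 64x64 px, flattened
y = np.repeat([0, 1, 2], 30)            # three spider species

model = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```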
A Prediction Model to Diabetes Using Artificial Metaplasticity

Diabetes is one of the most common diseases nowadays, across all populations and age groups. Different artificial intelligence techniques have been applied to the diabetes problem. This research proposes artificial metaplasticity on a multilayer perceptron (AMMLP) as a prediction model for diabetes. The Pima Indians diabetes database was used to test the proposed AMMLP model. The results obtained by AMMLP were compared with other algorithms, recently proposed by other researchers, that were applied to the same database. The best result obtained so far with the AMMLP algorithm is 89.93%.

Alexis Marcano-Cedeño, Joaquín Torres, Diego Andina
Band Correction in Random Amplified Polymorphism DNA Images Using Hybrid Genetic Algorithms with Multilevel Thresholding

This paper describes an approach for correcting bands in RAPD images that involves a multilevel thresholding technique and hybridized genetic algorithms. Multilevel thresholding is applied for detecting bands, and genetic algorithms are combined with Tabu Search and with Simulated Annealing as a mechanism for correcting bands. RAPD images are affected by various factors, among them noise and distortion, which impact the quality of the images and, subsequently, the accuracy of data interpretation. This work proposes hybrid methods that use genetic algorithms to deal with the highly combinatorial nature of this problem, and Tabu Search and Simulated Annealing to deal with local optima. The results obtained by using them in this particular problem show an improvement in the fitness of individuals.

Carolina Gárate O., M. Angélica Pinninghoff J., Ricardo Contreras A.
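
The band-detection stage can be illustrated with a multilevel Otsu threshold; the sketch below runs on a synthetic gel-like lane and leaves out the GA/Tabu Search/Simulated Annealing correction stage entirely.

```python
# Band detection only: multilevel (Otsu) thresholding of a synthetic gel lane,
# as a stand-in for the paper's detection stage; band correction is not sketched.
import numpy as np
from skimage.filters import threshold_multiotsu

rng = np.random.default_rng(7)
lane = rng.normal(40, 5, size=(200, 60))              # background intensity
for row in (30, 80, 140):                             # three synthetic bands
    lane[row:row + 6, :] += 120 + rng.normal(0, 5, size=(6, 60))

thresholds = threshold_multiotsu(lane, classes=3)     # two thresholds -> 3 classes
regions = np.digitize(lane, bins=thresholds)
band_rows = np.nonzero((regions == 2).any(axis=1))[0]
print("thresholds:", np.round(thresholds, 1), "rows flagged as bands:", band_rows)
```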
Discrimination of Epileptic Events Using EEG Rhythm Decomposition

The use of time series decomposition into frequency sub-bands to capture the oscillation modes of non-stationary signals is proposed. Specifically, EEG signals are decomposed into frequency sub-bands, and the most relevant of them are employed for the detection of epileptic seizures. Since the computation of the oscillation modes is carried out based on Time-Variant Autoregressive (TVAR) model parameters, two approaches for searching an optimal order are studied: estimation over the entire database, and estimation over each database recording. The feature set comprises the parametric power spectral density in each frequency band of the TVAR models. The developed dimension reduction approach for the high-dimensional spectral space, based on principal component analysis, searches for the frequency bands with the highest relevance in terms of detection accuracy. The outcomes attained for a k-nn classifier over 29 epilepsy patients reach an accuracy as high as 95%. As a result, the proposed methodology provides higher performance when an optimal order is used for each signal. The advantage of the proposed methodology is the interpretability it brings to the data, since each oscillation mode can be associated with one of the EEG rhythms.

L. Duque-Muñoz, L. D Avendaño-Valencia, G. Castellanos-Domínguez
Methodology for Attention Deficit/Hyperactivity Disorder Detection by Means of Event-Related Potentials

Event-related potentials (ERPs) are voltage fluctuations in the electroencephalogram that allow the examination of electrical representations of the underlying sensory and cognitive processes occurring in the brain in response to stimuli. These waveforms contain characteristic peaks and troughs, which can correspond to certain underlying processes. The determination of the functional significance of a particular ERP component involves simultaneous consideration of its eliciting conditions, polarity, latency and scalp distribution. The evaluation of these parameters by medical specialists leads to the diagnosis of important psychiatric disorders such as attention deficit/hyperactivity disorder (ADHD). However, the measurement of these parameters is usually susceptible to the subjectivity of medical judgment. This work presents a comparison between two methodologies that consider characterization and feature extraction/selection of ERP signals, in order to distinguish normal subjects from ADHD patients on a feature set formed by morphological, frequency and wavelet characteristics. Moreover, tests are made on the raw signals looking for informative events that could provide an increase in classification accuracy.

Paola Castro-Cabrera, Jorge Gómez-García, Francia Restrepo, Oscar Moscoso, German Castellanos-Dominguez
Methodology for Epileptic Episode Detection Using Complexity-Based Features

Epilepsy is a neurological disease with a high prevalence in human beings, for which an accurate diagnosis remains an essential step for medical treatment. Making use of pattern recognition tools, it is possible to design accurate automatic detection systems capable of assisting medical diagnosis. The present work presents an automatic epileptic episode detection methodology based on complexity analysis, where three classical features based on nonlinear dynamics are used in conjunction with three regularity measures. k-nn and Support Vector Machines are used for classification. Results above 98% confirm the discriminative ability of the presented methodology for epilepsy detection.

Jorge Andrés Gómez García, Carolina Ospina Aguirre, Edilson Delgado Trejos, Germán Castellanos Dominguez
Segmentation of the Carotid Artery in Ultrasound Images Using Neural Networks

Atherosclerosis is a cardiovascular disease that is very widespread in the population. The intima-media thickness (IMT) is a reliable early indicator of this pathology. The IMT is measured by the doctor using images acquired with a B-scan ultrasound, and this fact presents several problems. Image segmentation can detect the IMT throughout the artery length in an automatic way. This paper presents an effective segmentation method based on the use of a neural network ensemble. The results obtained show the ability of the method to extract the IMT contour in ultrasound images.

Rosa-María Menchón-Lara, M-Consuelo Bastida-Jumilla, Juan Morales-Sánchez, Rafael Verdú-Monedero, Jorge Larrey-Ruiz, José Luis Sancho-Gómez
Tools for Controlled Experiments and Calibration on Living Tissues Cultures

In recent years, numerous studies have attempted to demonstrate the feasibility of using live cell cultures as units of information processing. In this context, it is necessary to develop both hardware and software tools to facilitate this task. The latter is the aim of this paper. It presents a complete software suite to design, develop, test, perform and record experiments on culture-based biological processes of living cells on multi-electrode arrays in a reliable, easy and efficient way.

Daniel de Santos, José Manuel Cuadra, Félix de la Paz, Víctor Lorente, José Ramón Álvarez-Sánchez, José Manuel Ferrández
Backmatter
Metadata
Title
New Challenges on Bioinspired Applications
Edited by
José Manuel Ferrández
José Ramón Álvarez Sánchez
Félix de la Paz
F. Javier Toledo
Copyright Year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-21326-7
Print ISBN
978-3-642-21325-0
DOI
https://doi.org/10.1007/978-3-642-21326-7