
2011 | Book

Advances in Soft Computing

10th Mexican International Conference on Artificial Intelligence, MICAI 2011, Puebla, Mexico, November 26 - December 4, 2011, Proceedings, Part II

Editors: Ildar Batyrshin, Grigori Sidorov

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

The two-volume set LNAI 7094 and 7095 constitutes the refereed proceedings of the 10th Mexican International Conference on Artificial Intelligence, MICAI 2011, held in Puebla, Mexico, in November/December 2011. The 96 revised papers presented were carefully selected from XXX submissions. The second volume contains 46 papers focusing on soft computing. The papers are organized in the following topical sections: fuzzy logic, uncertainty and probabilistic reasoning; evolutionary algorithms and other naturally-inspired algorithms; data mining; neural networks and hybrid intelligent systems; and computer vision and image processing.

Table of Contents

Frontmatter

Fuzzy Logic, Uncertainty and Probabilistic Reasoning

Intelligent Control of Nonlinear Dynamic Plants Using a Hierarchical Modular Approach and Type-2 Fuzzy Logic

In this paper we present the simulation results obtained so far with a new approach for the intelligent control of non-linear dynamic plants. First we present the proposed approach for intelligent control using a hierarchical modular architecture, with type-2 fuzzy logic used to combine the outputs of the modules. Then, the approach is illustrated with two cases, aircraft control and shower control, and for each problem we explain its behavior. Simulation results of the two cases show that the proposed approach has potential for solving complex control problems.

Leticia Cervantes, Oscar Castillo, Patricia Melin
No-Free-Lunch Result for Interval and Fuzzy Computing: When Bounds Are Unusually Good, Their Computation Is Unusually Slow

On several examples from interval and fuzzy computations and from related areas, we show that when the results of data processing are unusually good, their computation is unusually complex. This makes us think that there should be an analog of Heisenberg’s uncertainty principle, well known in quantum mechanics: when we have an unusually beneficial situation in terms of results, it is not as perfect in terms of the computations leading to these results. In short, nothing is perfect.

Martine Ceberio, Vladik Kreinovich
Intelligent Robust Control of Dynamic Systems with Partial Unstable Generalized Coordinates Based on Quantum Fuzzy Inference

This article describes a new method for the quality control of dynamically unstable objects based on quantum computing. This method enables control of an object in unpredicted situations with incomplete information about the structure of the control object. Its efficiency over other methods of intelligent control is shown on a benchmark with partially unstable generalized coordinates: a stroboscopic robotic manipulator.

Andrey Mishin, Sergey Ulyanov
Type-2 Neuro-Fuzzy Modeling for a Batch Biotechnological Process

In this paper we develop a Type-2 Fuzzy Logic System (T2FLS) in order to model a batch biotechnological process. Type-2 fuzzy logic systems are suitable for handling uncertainty like that arising from process measurements. The developed model is contrasted with a usual type-1 fuzzy model driven by the same uncertain data. Model development is conducted mainly with experimental data comprising thirteen data sets obtained from different runs of the process, each presenting a different level of uncertainty. The parameters of the models are tuned with the gradient-descent rule, a technique from the neural networks field.

Pablo Hernández Torres, María Angélica Espejel Rivera, Luis Enrique Ramos Velasco, Julio Cesar Ramos Fernández, Julio Waissman Vilanova
Assessment of Uncertainty in the Projective Tree Test Using an ANFIS Learning Approach

In psychology, projective tests are interpretative and subjective, obtaining results that depend on the eye of the beholder; they are widely used because they yield rich, unique, and very useful data. Because measurements of drawing attributes have a degree of uncertainty, it is possible to explore a fuzzy model approach to better assess interpretative results. This paper presents a study of the projective tree test applied to software development teams as part of RAMSET (Role Assignment Methodology for Software Engineering Teams), a methodology for assigning specific roles within a team. Using a Takagi-Sugeno-Kang (TSK) Fuzzy Inference System (FIS), and training data with an ANFIS model on our case studies, we have obtained an application that can support the role assignment decision process by recommending the roles best suited to performance in software engineering teams.

Luis G. Martínez, Juan R. Castro, Guillermo Licea, Antonio Rodríguez-Díaz
ACO-Tuning of a Fuzzy Controller for the Ball and Beam Problem

We describe the use of Ant Colony Optimization (ACO) for the ball and beam control problem, in particular for tuning a fuzzy controller of the Sugeno type. In our case study the controller has four inputs, each with two membership functions; we consider the intersection point of every pair of membership functions as the main parameter and their individual shapes as secondary ones in order to tune the fuzzy controller using an ACO algorithm. Simulation results show that using ACO and coding the problem with just three parameters instead of six allows us to find an optimal set of membership function parameters for the fuzzy control system with less computational effort.

Enrique Naredo, Oscar Castillo
Estimating Probability of Failure of a Complex System Based on Inexact Information about Subsystems and Components, with Potential Applications to Aircraft Maintenance

In many real-life applications (e.g., in aircraft maintenance), we need to estimate the probability of failure of a complex system (such as an aircraft as a whole or one of its subsystems). Complex systems are usually built with redundancy allowing them to withstand the failure of a small number of components. In this paper, we assume that we know the structure of the system, and, as a result, for each possible set of failed components, we can tell whether this set will lead to a system failure. For each component A, we know the probability P(A) of its failure with some uncertainty: e.g., we know the lower and upper bounds $\underline P(A)$ and $\overline P(A)$ for this probability. Usually, it is assumed that failures of different components are independent events. Our objective is to use all this information to estimate the probability of failure of the entire complex system. In this paper, we describe a new efficient method for such estimation based on Cauchy deviates.

Vladik Kreinovich, Christelle Jacob, Didier Dubois, Janette Cardoso, Martine Ceberio, Ildar Batyrshin
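The problem setup in this abstract can be sketched with a small illustration (this is not the authors' Cauchy-deviate method, and the 2-out-of-3 structure function is a hypothetical example): for independent components and a monotone structure, the system failure probability can be enumerated exactly, and because it increases monotonically in each component probability, the interval bounds are attained at the interval endpoints.

```python
from itertools import product

def system_fails(failed):
    # Hypothetical structure function: a 2-out-of-3 redundant
    # subsystem fails when at least two components fail.
    return sum(failed) >= 2

def failure_probability(p):
    # Exact enumeration over all component-failure patterns,
    # assuming independent component failures.
    total = 0.0
    for failed in product((0, 1), repeat=len(p)):
        prob = 1.0
        for f, p_i in zip(failed, p):
            prob *= p_i if f else (1.0 - p_i)
        if system_fails(failed):
            total += prob
    return total

# Because the structure is monotone, the bounds on the system
# failure probability are attained at the interval endpoints.
p_low = failure_probability([0.01, 0.01, 0.01])
p_high = failure_probability([0.02, 0.02, 0.02])
```

The exponential enumeration above is exactly what makes naive interval estimation impractical for large systems, which motivates efficient approximations such as the Cauchy-deviate technique described in the paper.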
Two Steps Individuals Travel Behavior Modeling through Fuzzy Cognitive Maps Pre-definition and Learning

Modeling of transport management and behavior takes place in developed societies because of the benefits it brings to all social and economic processes. Applying advanced computer science techniques such as Artificial Intelligence in this field is highly relevant from the scientific, economic, and social points of view. This paper uses Fuzzy Cognitive Maps as an approach to representing the behavior and operation of such complex systems. Two steps are presented: first, an initial modeling through automatic knowledge engineering and formalizing; and second, a readjustment of parameters with a learning method inspired by Particle Swarm Optimization. The theoretical results stem from the needs of a real case study, which is also presented, showing the practical side of the proposal: new insights were obtained and real problems were solved.

Maikel León, Gonzalo Nápoles, María M. García, Rafael Bello, Koen Vanhoof
Evaluating Probabilistic Models Learned from Data

Several learning algorithms have been proposed to construct probabilistic models from data using the Bayesian network mechanism. Some of them permit the participation of human experts in order to create a knowledge representation of the domain. However, multiple different models may result for the same problem using the same data set. This paper presents our experiences in the construction of a probabilistic model that forms a viscosity virtual sensor. Several experiments have been conducted and several different models have been obtained. This paper describes the evaluation of all the models under different criteria, together with the analysis of the models and the conclusions drawn.

Pablo H. Ibargüengoytia, Miguel A. Delgadillo, Uriel A. García

Evolutionary Algorithms and Other Naturally-Inspired Algorithms

A Mutation-Selection Algorithm for the Problem of Minimum Brauer Chains

This paper addresses the problem of finding Brauer Chains (BC) of minimum length by using a Mutation-Selection (MS) algorithm and a representation based on the Factorial Number System (FNS). We explain our MS strategy and report experimental results on a benchmark considered difficult, showing that this approach is a viable alternative: it obtains the shortest BCs reported in the literature in reasonable time. A fine-tuning process for the MS algorithm was also carried out with the help of Covering Arrays (CA) and the solutions of a Diophantine Equation (DE).

Arturo Rodriguez-Cristerna, José Torres-Jiménez, Ivan Rivera-Islas, Cindy G. Hernandez-Morales, Hillel Romero-Monsivais, Adan Jose-Garcia
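For reference, a Brauer chain is an addition chain 1 = a_0, a_1, ..., a_r = n in which every element is the sum of the immediately preceding element and some earlier element. A minimal validity checker (illustrative only, not the paper's MS algorithm) can be sketched as:

```python
def is_brauer_chain(chain, n):
    # A Brauer chain starts at 1, ends at n, and each element is
    # the previous element plus some (not necessarily distinct)
    # earlier element of the chain.
    if not chain or chain[0] != 1 or chain[-1] != n:
        return False
    for i in range(1, len(chain)):
        if not any(chain[i] == chain[i - 1] + chain[j] for j in range(i)):
            return False
    return True

# A valid Brauer chain for n = 31:
# 2=1+1, 3=2+1, 6=3+3, 12=6+6, 24=12+12, 30=24+6, 31=30+1.
print(is_brauer_chain([1, 2, 3, 6, 12, 24, 30, 31], 31))
```

Minimizing the length of such chains is the hard combinatorial problem that the MS algorithm in the paper searches over.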
Hyperheuristic for the Parameter Tuning of a Bio-Inspired Algorithm of Query Routing in P2P Networks

The computational optimization field defines the parameter tuning problem as the correct selection of parameter values in order to stabilize the behavior of an algorithm. This paper deals with parameter tuning under dynamic and large-scale conditions for an algorithm that solves the Semantic Query Routing Problem (SQRP) in peer-to-peer networks. To solve SQRP, the HH_AdaNAS algorithm is proposed: an ant colony algorithm that deals synchronously with two processes. The first process generates a SQRP solution; the second adjusts the Time To Live parameter of each ant through a hyperheuristic. HH_AdaNAS performs adaptive control through the hyperheuristic, considering SQRP local conditions. The experimental results show that HH_AdaNAS, incorporating parameter tuning with hyperheuristics, increases its performance by 2.42% compared with the algorithms for SQRP found in the literature.

Paula Hernández, Claudia Gómez, Laura Cruz, Alberto Ochoa, Norberto Castillo, Gilberto Rivera
Bio-Inspired Optimization Methods for Minimization of Complex Mathematical Functions

This paper describes a hybrid approach for optimization that combines Particle Swarm Optimization (PSO) and Genetic Algorithms (GAs), using Fuzzy Logic to integrate the results; the proposed method is called FPSO+FGA. The new hybrid FPSO+FGA approach is compared with the Simulated Annealing (SA), PSO, GA, and Pattern Search (PS) methods on a set of benchmark mathematical functions.

Fevrier Valdez, Patricia Melin, Oscar Castillo
Fundamental Features of Metabolic Computing

The cell is the basic unit of life and can be interpreted as a chemical machine. The present knowledge of molecular biology allows the characterization of the metabolism as a processing unit/concept. This concept is an evolutionary biochemical product, which has been developed over millions of years. In this paper we will present and discuss the analyzed features of metabolism, which represent the fundamental features of the metabolic computing process. Furthermore, we will compare this molecular computing method with methods which are defined and discussed in computer science. Finally, we will formalize the metabolic processing method.

Ralf Hofestädt
Clustering Ensemble Framework via Ant Colony

Ensemble-based learning is a very promising option for reaching a robust partition. Because they cover each other's faults, the classifiers in an ensemble can perform the classification task jointly more reliably than any of them alone. Generating a set of primary partitions that differ from each other, and then aggregating the partitions via a consensus function to generate the final partition, is the common policy of ensembles. Another alternative in ensemble learning is the fusion of data from originally different sources. Swarm intelligence is also a new topic, in which simple agents work in such a way that complex behavior emerges. The ant colony algorithm is a powerful example of swarm intelligence. In this paper we introduce a new ensemble learning method based on the ant colony clustering algorithm. Experimental results on some real-world datasets demonstrate the effectiveness of the proposed method in generating the final partition.

Hamid Parvin, Akram Beigi
Global Optimization with the Gaussian Polytree EDA

This paper introduces the Gaussian polytree estimation of distribution algorithm, a new construction method, and its application to estimation of distribution algorithms in continuous variables. The variables are assumed to be Gaussian. The construction of the tree and the edge orientation algorithm are based on information-theoretic concepts such as mutual information and conditional mutual information. The proposed Gaussian polytree estimation of distribution algorithm is applied to a set of benchmark functions. The experimental results show that the approach is robust; comparisons are provided.

Ignacio Segovia Domínguez, Arturo Hernández Aguirre, Enrique Villa Diharce
Comparative Study of BSO and GA for the Optimizing Energy in Ambient Intelligence

One of the concerns of humanity today is developing strategies for saving energy, because we need to reduce energy costs and promote economic, political, and environmental sustainability; energy management has become one of the main priorities in recent times. The goal of this project is to develop a system able to find optimal configurations for energy savings through light management. In this paper a comparison between Genetic Algorithms (GA) and Bee Swarm Optimization (BSO) is made. These two strategies focus on light management as the main scenario, taking into account the activity of the users, the size of the area, the number of lights, and their power. It was found that the GA provides an optimal configuration (according to the user’s needs), and this result was consistent with Wilcoxon’s test.

Wendoly J. Gpe. Romero-Rodríguez, Victor Manuel Zamudio Rodríguez, Rosario Baltazar Flores, Marco Aurelio Sotelo-Figueroa, Jorge Alberto Soria Alcaraz
Modeling Prey-Predator Dynamics via Particle Swarm Optimization and Cellular Automata

Through the years, several methods have been used to model the movement of organisms within an ecosystem modeled with cellular automata, from simple algorithms that change cell states according to some pre-defined heuristic, to diffusion algorithms based on the one-dimensional Navier-Stokes equation or lattice gases. In this work we present a novel approach in which the predator dynamics evolve through Particle Swarm Optimization.

Mario Martínez-Molina, Marco A. Moreno-Armendáriz, Nareli Cruz-Cortés, Juan Carlos Seck Tuoh Mora

Data Mining

Topic Mining Based on Graph Local Clustering

This paper introduces an approach for discovering thematically related document groups (a topic mining task) in massive document collections with the aid of graph local clustering. This can be achieved by viewing a document collection as a directed graph where vertices represent documents and arcs represent connections among these (e.g. hyperlinks). Because a document is likely to have more connections to documents of the same theme, we have assumed that topics have the structure of a graph cluster, i.e. a group of vertices with more arcs to the inside of the group and fewer arcs to the outside of it. So, topics could be discovered by clustering the document graph; we use a local approach to cope with scalability. We also extract properties (keywords and most representative documents) from clusters to provide a summary of the topic. This approach was tested over the Wikipedia collection and we observed that the resulting clusters in fact correspond to topics, which shows that topic mining can be treated as a graph clustering problem.

Sara Elena Garza Villarreal, Ramón F. Brena
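The notion of a graph cluster used here (more arcs inside the group than leaving it) can be made concrete with a short sketch; the relative-density measure below is a generic illustration, not necessarily the exact objective optimized in the paper.

```python
def relative_density(graph, group):
    # graph: dict mapping each vertex to an iterable of successors (arcs).
    # group: set of vertices whose cluster quality we measure.
    internal = external = 0
    for v in group:
        for w in graph.get(v, ()):
            if w in group:
                internal += 1
            else:
                external += 1
    total = internal + external
    # Fraction of the group's outgoing arcs that stay inside it;
    # close to 1 for a good graph cluster.
    return internal / total if total else 0.0

# Vertices 1-3 link mostly to each other, so they form a plausible cluster.
graph = {1: [2, 3], 2: [1], 3: [4], 4: []}
print(relative_density(graph, {1, 2, 3}))
```

A local clustering algorithm would greedily grow a group around a seed vertex so as to increase a measure of this kind, which is what makes the approach scalable to collections like Wikipedia.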
SC Spectra: A Linear-Time Soft Cardinality Approximation for Text Comparison

Soft cardinality (SC) is a softened version of the classical cardinality of set theory. However, given its prohibitive computational cost (exponential order), an approximation that is quadratic in the number of terms in the text has been proposed in the past. SC Spectra is a new linear-time approximation method for text strings, which divides text strings into consecutive substrings (i.e., q-grams) of different sizes. SC in combination with resemblance coefficients allows the construction of a family of similarity functions for text comparison. Such similarity measures have been used in the past to address an entity resolution (name matching) problem, outperforming the SoftTFIDF measure. The SC Spectra method improves on those previous results, using less time and obtaining better performance. This allows the new method to be used with relatively large documents such as those included in classic information retrieval collections. SC Spectra exceeded the SoftTFIDF and cosine tf-idf baselines with an approach that requires no term weighting.

Sergio Jiménez Vargas, Alexander Gelbukh
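The q-gram decomposition that SC Spectra relies on can be illustrated with a simplified sketch: character q-grams of several sizes combined with a Jaccard-style resemblance coefficient. The actual soft-cardinality approximation in the paper is more refined; this is only the general flavor of q-gram spectra.

```python
def qgrams(text, q):
    # Set of consecutive character substrings (q-grams) of length q.
    return {text[i:i + q] for i in range(len(text) - q + 1)}

def spectra_similarity(a, b, sizes=(2, 3)):
    # Average a Jaccard resemblance coefficient over several q-gram sizes.
    scores = []
    for q in sizes:
        ga, gb = qgrams(a, q), qgrams(b, q)
        if ga or gb:
            scores.append(len(ga & gb) / len(ga | gb))
    return sum(scores) / len(scores) if scores else 0.0
```

Extracting the q-grams of a string is linear in its length, which is the key to making this family of measures usable on full-size documents rather than only short names.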
Time Series Discretization Using Evolutionary Programming

In this work, we present a novel algorithm for time series discretization. Our approach includes the optimization of the word size and the alphabet as one parameter. Using evolutionary programming, the search for a good discretization scheme is guided by a cost function which considers three criteria: the entropy regarding the classification, the complexity measured as the number of different strings needed to represent the complete data set, and the compression rate assessed as the length of the discrete representation. Our proposal is compared with some of the most representative algorithms found in the specialized literature, tested in a well-known benchmark of time series data sets. The statistical analysis of the classification accuracy shows that the overall performance of our algorithm is highly competitive.

Fernando Rechy-Ramírez, Héctor-Gabriel Acosta Mesa, Efrén Mezura-Montes, Nicandro Cruz-Ramírez
Clustering of Heterogeneously Typed Data with Soft Computing - A Case Study

The problem of finding clusters in arbitrary sets of data has been attempted using different approaches. In most cases, the use of metrics to determine the adequateness of the clusters is assumed: the criteria yielding a measure of cluster quality depend on the distance between the elements of each cluster. Typically, one considers a cluster to be adequately characterized if the elements within it are close to one another while, simultaneously, they appear to be far from those of different clusters. This intuitive approach fails if the variables of the elements of a cluster are not amenable to distance measurements, i.e., if the vectors of such elements cannot be quantified. This case arises frequently in real-world applications where several variables (if not most of them) correspond to categories. The usual tendency is to assign arbitrary numbers to every category, i.e., to encode the categories. This, however, may result in spurious patterns: relationships between the variables which are not really there at the outset. Evidently, there is no truly valid assignment which may ensure a universally valid numerical value for this kind of variable, but there is a strategy which guarantees that the encoding will, in general, not bias the results. In this paper we explore such a strategy. We discuss the theoretical foundations of our approach and prove that it is the best strategy in terms of the statistical behavior of the sampled data. We also show that, when applied to a complex real-world problem, it allows us to generalize soft computing methods to find the number and characteristics of a set of clusters. We contrast the characteristics of the clusters obtained by the automated method with those identified by the experts.

Angel Kuri-Morales, Daniel Trejo-Baños, Luis Enrique Cortes-Berrueco
Regional Flood Frequency Estimation for the Mexican Mixteca Region by Clustering Techniques

Regionalization methods can help to transfer information from gauged catchments to ungauged river basins. Finding homogeneous regions is crucial for regional flood frequency estimation at ungauged sites, as is the case for the Mexican Mixteca region, where only one gauging station is working at present. One way to delineate these homogeneous watersheds into natural groups is by clustering techniques. In this paper, two different clustering approaches are used and compared for the delineation of homogeneous regions. The first one is the hierarchical clustering approach, which is widely used in regionalization studies. The second one is the Fuzzy C-Means technique, which allows a station to belong, to different degrees, to several regions. The optimal number of regions is based on fuzzy cluster validation measures. The experimental results of both approaches are similar, which confirms the delineated homogeneous region for this case study. Finally, a stepwise regression model using the forward selection approach is applied for flood frequency estimation in each homogeneous region found.

Felix Emilio Luis-Pérez, Raúl Cruz-Barbosa, Gabriela Álvarez-Olguin
Border Samples Detection for Data Mining Applications Using Non Convex Hulls

Border points are those instances located at the outer margin of dense clusters of samples. Their detection is important in many areas such as data mining, image processing, robotics, geographic information systems, and pattern recognition. In this paper we propose a novel method to detect border samples. The proposed method makes use of a discretization and works on partitions of the set of points; the border samples are then detected by applying an algorithm similar to the one presented in [8] to the sides of convex hulls. We apply the novel algorithm to the classification task of data mining; experimental results show the effectiveness of our method.

Asdrúbal López Chau, Xiaoou Li, Wen Yu, Jair Cervantes, Pedro Mejía-Álvarez
An Active System for Dynamic Vertical Partitioning of Relational Databases

Vertical partitioning is a well-known technique to improve query response time in relational databases. It consists of dividing a table into a set of fragments of attributes according to the queries run against the table. In dynamic systems the queries tend to change with time, so a dynamic vertical partitioning technique is needed that adapts the fragments to the changes in query patterns in order to avoid long query response times. In this paper, we propose an active system for dynamic vertical partitioning of relational databases, called DYVEP (DYnamic VErtical Partitioning). DYVEP uses active rules to vertically fragment and refragment a database without the intervention of a database administrator (DBA), maintaining acceptable query response times even when the query patterns in the database change. Experiments with the TPC-H benchmark demonstrate efficient query response times.

Lisbeth Rodríguez, Xiaoou Li, Pedro Mejía-Alvarez
Efficiency Analysis in Content Based Image Retrieval Using RDF Annotations

Nowadays it is common to combine low-level and semantic data for image retrieval. The images are stored in databases and computer graphics algorithms are employed to retrieve the pictures. Most works consider both aspects separately. In this work, using the capabilities of a commercial ORDBMS, a reference architecture was implemented for recovering images, and a performance analysis was then carried out using several index types to search for specific semantic data stored in the database via RDF triples. The experiments analyzed the mean retrieval time of triples in tables holding hundreds of thousands to millions of triples. The performance obtained using Bitmap, B-Tree, and Hash Partitioned indexes is analyzed. The results of these experiments are applied in the reference architecture in order to speed up the pattern search.

Carlos Alvez, Aldo Vecchietti
Automatic Identification of Web Query Interfaces

The amount of information contained in databases on the Web has grown explosively in recent years. This information, known as the Deep Web, is dynamically obtained from specific queries to these databases through Web Query Interfaces (WQIs). The problem of finding and accessing databases on the Web is a great challenge because Web sites are very dynamic and the existing information is heterogeneous. Therefore, it is necessary to create efficient mechanisms to access, extract, and integrate the information contained in Web databases. Since WQIs are the only means of access to these databases, the automatic identification of WQIs plays an important role, enabling traditional search engines to increase their coverage and to access interesting information not available on the indexable Web. In this paper we present a strategy for the automatic identification of WQIs using supervised learning, making an adequate selection and extraction of HTML elements in the WQIs to form the training set. We present two experimental tests over a corpus of HTML forms containing positive and negative examples. Our proposed strategy achieves better accuracy than previous works reported in the literature.

Heidy M. Marin-Castro, Victor J. Sosa-Sosa, Ivan Lopez-Arevalo

Neural Networks and Hybrid Intelligent Systems

A GRASP with Strategic Oscillation for a Commercial Territory Design Problem with a Routing Budget Constraint

This paper addresses a commercial districting problem arising in the bottled beverage distribution industry. The problem consists of grouping a set of city blocks into territories so as to maximize territory compactness. As planning requirements, the grouping seeks to balance both the number of customers and the product demand across territories, maintain the connectivity of territories, and limit the total cost of routing. A combinatorial optimization model for this problem is introduced. Work on commercial territory design has particularly focused on design decisions. This work is, to the best of our knowledge, the first to address both design and routing decisions simultaneously by considering a budget constraint on the total routing cost in commercial territory design. A greedy randomized adaptive search procedure that incorporates advanced features such as adaptive memory and strategic oscillation is developed. Empirical evidence over a wide set of randomly generated instances based on real-world data shows a very positive impact of these advanced components. Solution quality is significantly improved as well.

Roger Z. Ríos-Mercado, Juan C. Salazar-Acosta
Hybrid Intelligent Speed Control of Induction Machines Using Direct Torque Control

This paper presents a novel hybrid adaptive fuzzy controller for speed regulation of induction machines with direct torque control. The controller is based on a fuzzy system and PID control with decoupled gains. Genetic programming techniques are used for offline optimization of the normalization constants of the fuzzy membership function ranges. Fuzzy cluster means is introduced for online optimization of the limits of the triangular fuzzy membership functions. Finally, simulations in LabVIEW are presented, validating the response of the controller with and without load on the machine; results and conclusions are discussed.

Fernando David Ramirez Figueroa, Alfredo Victor Mantilla Caeiros
A New Model of Modular Neural Networks with Fuzzy Granularity for Pattern Recognition and Its Optimization with Hierarchical Genetic Algorithms

In this paper we propose a new model of a Modular Neural Network (MNN) with fuzzy integration based on granular computing. The topology and parameters of the model are optimized with a Hierarchical Genetic Algorithm (HGA). The model was applied to the case of human recognition to illustrate its applicability. The proposed method is able to divide the data automatically into sub-modules, to work with a percentage of the images, and to select which images will be used for training. To test the method, we considered the problem of human recognition based on the ear, using a database of 77 persons with 4 images per person.

Daniela Sánchez, Patricia Melin, Oscar Castillo
Crawling to Improve Multimodal Emotion Detection

This paper demonstrates multimodal fusion of emotion sensory data in realistic scenarios of relatively long human-machine interactions. Fusion, combining voice and facial expressions, has been enhanced with semantic information retrieved from Internet social networks, resulting in more accurate determination of the conveyed emotion.

Diego R. Cueva, Rafael A. M. Gonçalves, Fábio Cozman, Marcos R. Pereira-Barretto
Improving the MLP Learning by Using a Method to Calculate the Initial Weights of the Network Based on the Quality of Similarity Measure

This work presents a technique that integrates the backpropagation learning method with a method to calculate the initial weights in order to train the Multilayer Perceptron model. The method to calculate the initial weights of the MLP is based on the quality of similarity measure proposed in the framework of the extended Rough Set Theory. Experimental results show that the proposed initialization method performs better than other methods used to calculate the weights of the features, making it an interesting alternative to conventional random initialization.

Yaima Filiberto Cabrera, Rafael Bello Pérez, Yailé Caballero Mota, Gonzalo Ramos Jimenez
Modular Neural Networks with Type-2 Fuzzy Integration for Pattern Recognition of Iris Biometric Measure

This paper presents a new modular neural network architecture used to build a system for pattern recognition based on the iris biometric measurement of persons. In this system, the iris images in the database are enhanced with image processing methods, and the coordinates of the center and the radius of the iris are obtained in order to cut out the area of interest, removing the noise around the iris. The inputs to the modular neural network are the processed iris images and the output is the number of the identified person. The integration of the modules was done with a type-2 fuzzy integrator at the level of the sub-modules, and with a gating network at the level of the modules.

Fernando Gaxiola, Patricia Melin, Fevrier Valdez, Oscar Castillo
Wavelet Neural Network Algorithms with Applications in Approximation Signals

In this paper we present adaptive algorithms based on neural networks and wavelet series for building wavenet function approximators. Results are shown in numerical simulations of two wavenet approximator architectures: the first is based on a wavenet that approximates the signals under study, with the parameters of the neural network adjusted online; the other uses an approximation scheme with an IIR filter at the output of the wavenet, which helps reduce the convergence time to a desired minimum.

Carlos Roberto Domínguez Mayorga, María Angélica Espejel Rivera, Luis Enrique Ramos Velasco, Julio Cesar Ramos Fernández, Enrique Escamilla Hernández

Computer Vision and Image Processing

Similar Image Recognition Inspired by Visual Cortex

The paper presents a method of image recognition inspired by research on the visual cortex. The architecture of our model, called CaNN, is similar to those proposed in the neocognitron, LeNet, or HMAX networks. It is composed of many consecutive layers with varying numbers of planes (receptive fields). Units in the corresponding positions of the planes in one layer receive input from the same region of the preceding layer. Each plane is sensitive to one pattern. The method assumes that pattern recognition is based on edges, which are found in the input image using the Canny detector. Then, the image is processed by the network. The novelty of our method lies in the way information is processed in each layer and in the application of a clustering module in the last layer, where the patterns are recognized. The transformations performed by the CaNN model derive their own representation of the training patterns. The method is evaluated experimentally and the results are presented.

Urszula Markowska-Kaczmar, Adam Puchalski
Regularization with Adaptive Neighborhood Condition for Image Denoising

Image denoising by minimizing a neighborhood-similarity-based cost function is presented. This cost function consists of two parts: one related to data fidelity, and a structure-preserving smoothing term. The latter is controlled by a weight coefficient that measures the neighborhood similarity between two pixels, to which an additional penalty term is attached. Unlike most work in the noise removal area, the weight of each pixel within the neighborhood is not defined by a Gaussian function. The obtained results show the good performance of our proposal compared with some state-of-the-art algorithms.

Felix Calderon, Carlos A. Júnez–Ferreira
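A minimal sketch of this kind of neighborhood-weighted regularization, assuming a simple intensity-difference similarity weight and a fixed-point iteration (the paper's actual weighting and minimization scheme differ):

```python
import numpy as np

def denoise(f, lam=1.0, n_iter=50):
    """Minimize sum_i (u_i - f_i)^2 + lam * sum_{j in N(i)} w_ij (u_i - u_j)^2
    by fixed-point iteration.  The weight w_ij decays with the intensity
    difference in the noisy image, so smoothing is weak across edges
    (an assumed similarity measure, not the paper's)."""
    u = f.copy()
    h, w = f.shape
    for _ in range(n_iter):
        new = u.copy()
        for y in range(h):
            for x in range(w):
                num, den = f[y, x], 1.0
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        wij = 1.0 / (1.0 + abs(f[y, x] - f[ny, nx]))
                        num += lam * wij * u[ny, nx]
                        den += lam * wij
                new[y, x] = num / den
        u = new
    return u

# Two flat regions separated by a sharp edge: noise is smoothed,
# but the jump between the left block and the right column survives.
noisy = np.array([[1.0, 1.2, 5.0],
                  [0.9, 1.1, 5.1],
                  [1.0, 1.0, 4.9]])
print(denoise(noisy))
```

The fidelity term keeps each pixel anchored to its observation, while the small weight across large intensity differences prevents the edge from being averaged away.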
Multiple Target Tracking with Motion Priors

This paper presents a particle filter-based approach for multiple target tracking in video streams from single static cameras. We aim in particular to handle moderately dense crowd situations where, although tracking is possible, it is complicated by frequent occlusions among targets and with scene clutter. Moreover, the appearance of targets is sometimes very similar, which often makes standard trackers switch target identities. Our contribution is two-fold: (1) we first propose an estimation scheme for motion priors in the camera field of view that integrates sparse optical flow data and regularizes the corresponding discrete distribution fields over velocity directions and magnitudes; (2) we use these motion priors in a hybrid motion model for a particle filter tracker. Through several results on video-surveillance datasets, we show the pertinence of this approach.

Francisco Madrigal, Mariano Rivera, Jean-Bernard Hayet
Control of a Service Robot Using the Mexican Sign Language

This paper presents the results of our research on automatic recognition of the Mexican Sign Language (MSL) alphabet as a control element for a service robot. The technique of active contours was used for image segmentation in order to recognize the signs. Once segmented, we proceeded to obtain the signature of the corresponding sign and trained a neural network for its recognition. Every symbol of the MSL was assigned to a task that the robotic system had to perform; we defined eight different tasks. The system was validated using a simulation environment and a real system. For the real case, we used a mobile platform (Powerbot) equipped with a manipulator with 6 degrees of freedom (PowerCube). For simulation of the mobile platforms, RoboWorks was used as the simulation environment. On both the simulated and real platforms, tests were performed with images different from those learned by the system, obtaining in both cases a recognition rate of 95.8%.

Felix Emilio Luis-Pérez, Felipe Trujillo-Romero, Wilebaldo Martínez-Velazco
Analysis of Human Skin Hyper-Spectral Images by Non-negative Matrix Factorization

This article presents the use of Non-negative Matrix Factorization, a blind source separation algorithm, for the decomposition of human skin absorption spectra into its main pigments: melanin and hemoglobin. The evaluated spectra come from a hyper-spectral image, which is the result of processing a multi-spectral image with a neural network-based algorithm. The implemented source separation algorithm is based on multiplicative coefficient updates. The goal is to represent a given spectrum as the weighted sum of two spectral components; the resulting weighting coefficients are used to quantify melanin and hemoglobin content in the given spectra. Results present a degree of correlation higher than 90% compared to theoretical hemoglobin and melanin spectra. This methodology is validated on 35 melasma lesions from a population of 10 subjects.

July Galeano, Romuald Jolivot, Franck Marzani
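As a rough illustration of a multiplicative-update NMF of the kind the abstract describes, the classic Lee-Seung updates can be sketched as follows (an assumption: the authors' exact update rule, spectra, and dimensions are not given here):

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    """Factor a non-negative matrix V (m x n) into W (m x r) and H (r x n)
    using Lee-Seung multiplicative updates, which keep every entry
    non-negative at each step."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update mixing coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis "spectra"
    return W, H

# Toy example: mix two known source "spectra" and recover a rank-2 factorization.
S = np.array([[1.0, 0.2, 0.0, 0.1],
              [0.0, 0.3, 1.0, 0.6]])                 # 2 source spectra
A = np.array([[0.8, 0.2], [0.3, 0.7], [0.5, 0.5]])   # mixing weights
V = A @ S                                            # observed spectra
W, H = nmf(V, r=2)
print(np.linalg.norm(V - W @ H))                     # reconstruction error
```

In the skin-analysis setting, the rows of H play the role of the two pigment components and the columns of W give the per-pixel melanin and hemoglobin weights.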
Similarity Metric Behavior for Image Retrieval Modeling in the Context of Spline Radial Basis Function

In this paper, an analysis of similarity metrics used for performance evaluation of image retrieval frameworks is provided. Image retrieval based on similarity metrics obtains remarkable results in comparison with robust discrimination methods. The similarity metrics are used in the matching process between a visual query from the user and the descriptors of images in a preprocessed collection; in contrast, discrimination methods usually compare feature vectors by computing distances between the visual query and the images in the collection. In this research, the behavior of a spline radial basis function used as a metric for image similarity measurement is proposed and evaluated, comparing it with discrimination methods, particularly the general principal component analysis algorithm (GPCA). The spline radial basis function has been tested in image retrieval using standard image collections, such as COIL-100. The obtained results using the spline radial basis function report 88% correct image retrieval while avoiding the classification phase required by other well-known methods. The discussion of tests with the designed Image Data Segmentation with Spline (IDSS) framework illustrates the optimization and improvement of the image retrieval process.

Leticia Flores-Pulido, Oleg Starostenko, Gustavo Rodríguez-Gómez, Alberto Portilla-Flores, Marva Angelica Mora-Lumbreras, Francisco Javier Albores-Velasco, Marlon Luna Sánchez, Patrick Hernández Cuamatzi
A Comparative Review of Two-Pass Connected Component Labeling Algorithms

In this paper, we present a comparative review of Connected Component Labeling (CCL) methods, focused on two-pass variants, including their elements and implementation issues. We analyze the main elements used by these CCL algorithms and their importance for the performance of the methods using them. We present experiments using a complex image set, evaluating the performance of each algorithm under analysis.

Uriel H. Hernandez-Belmonte, Victor Ayala-Ramirez, Raul E. Sanchez-Yanez
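A minimal two-pass CCL sketch, using a union-find structure to record label equivalences (4-connectivity assumed; the algorithms surveyed in the paper differ mainly in how they scan pixels and resolve these equivalences):

```python
def label_components(img):
    """Two-pass connected component labeling with 4-connectivity.
    Pass 1 assigns provisional labels and records equivalences in a
    union-find structure; pass 2 relabels each pixel with the
    representative of its equivalence class."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]                            # parent[0] is background

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    next_label = 1
    for y in range(h):                      # first pass
        for x in range(w):
            if not img[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                labels[y][x] = min(find(up), find(left))
                union(up, left)
            elif up or left:
                labels[y][x] = up or left
            else:
                labels[y][x] = next_label
                parent.append(next_label)
                next_label += 1
    for y in range(h):                      # second pass: resolve equivalences
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [1, 0, 0, 1]]
print(label_components(img))   # three components: labels 1, 2, 3
```

The choice of equivalence-resolution data structure (here, union-find with path halving) is one of the "main elements" whose impact on performance such reviews compare.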
A Modification of the Mumford-Shah Functional for Segmentation of Digital Images with Fractal Objects

In this paper we revisit the Mumford-Shah functional, one of the most studied variational approaches to image segmentation. The contribution of this work is a modification of the Mumford-Shah functional that includes fractal analysis to improve the segmentation of images with fractal or semi-fractal objects. We show how the fractal dimension is calculated and embedded in the functional minimization computation to drive the algorithm to use both changes in image intensities and the fractal characteristics of the objects to obtain a more suitable segmentation. Experimental results confirm that the proposed modification improves the quality of segmentation in images with fractal or semi-fractal objects, such as medical images.

Carlos Guillén Galván, Daniel Valdés Amaro, Jesus Uriarte Adrián
Robust RML Estimator - Fuzzy C-Means Clustering Algorithms for Noisy Image Segmentation

Image segmentation is a key step for many image analysis applications. So far, no general method exists that suitably segments all images, regardless of whether they are corrupted or noise-free. In this paper, we propose to modify the Fuzzy C-means clustering algorithm and its FCM_S1 variant by using the RML-estimator. The idea of our method is to obtain robust clustering algorithms able to segment images with different types and levels of noise. The performance of the proposed algorithms is tested on synthetic and real images. Experimental results show that the proposed algorithms are more robust to the presence of noise and more effective than the comparative algorithms.

Dante Mújica-Vargas, Francisco Javier Gallegos-Funes, Alberto J. Rosales-Silva, Rene Cruz-Santiago
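For reference, the standard (non-robust) Fuzzy C-means baseline that the paper modifies can be sketched as below; the RML-estimator variant itself is not reproduced here, and fuzzifier m=2 and a fixed iteration count are assumed details:

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Standard Fuzzy C-means: alternate between centroid updates and
    membership updates.  X: (n_samples, n_features).  Returns cluster
    centers and the (c x n_samples) fuzzy membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # columns sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        U = d ** (-2.0 / (m - 1))            # closer points get higher membership
        U /= U.sum(axis=0)
    return centers, U

# Two well-separated groups of 1-D "pixel intensities"
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fcm(X, c=2)
print(sorted(centers.ravel()))
```

For image segmentation, each pixel intensity (or feature vector) is a row of X, and the membership matrix U gives the soft assignment of pixels to regions; the robust variants replace the squared-distance term to reduce the influence of noisy pixels.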
Processing and Classification of Multichannel Remote Sensing Data

Several practical tasks important for effective pre-processing of multichannel remote sensing (RS) images are considered, in order to reliably retrieve useful information from them and to make the data available to potential users. First, possible strategies of data processing are discussed. It is shown that one problem is the use of more adequate models to describe the noise present in real images. Another problem is the automation of all, or at least several, stages of data processing, such as determination of the noise type and its statistical characteristics, noise filtering, and image compression, before applying classification at the final stage. Second, some approaches that are effective and able to perform well enough within automatic or semi-automatic frameworks for multichannel images are described and analyzed. The applicability of the proposed methods is demonstrated on particular examples of real RS data classification.

Vladimir Lukin, Nikolay Ponomarenko, Andrey Kurekin, Oleksiy Pogrebnyak
Iris Image Evaluation for Non-cooperative Biometric Iris Recognition System

During video acquisition in an automatic non-cooperative biometric iris recognition system, not all the iris images obtained from the video sequence are suitable for recognition. Hence, it is important to acquire high quality iris images and quickly identify them in order to eliminate the poor quality ones (mostly defocused images) before subsequent processing. In this paper, we present the results of a comparative analysis of four methods for iris image quality assessment used to select clear images from the video sequence. The goal is to provide a solid analytic ground to underscore the strengths and weaknesses of the most widely implemented methods for iris image quality assessment. The methods are compared based on their robustness to different types of iris images and the computational effort they require. Experiments with the built database (100 videos from MBGC v2) demonstrate that the best performance scores are generated by the kernel proposed by Kang & Park, with an FAR of 1.6% and an FRR of 2.3%.

Juan M. Colores, Mireya García-Vázquez, Alejandro Ramírez-Acosta, Héctor Pérez-Meana
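A kernel-based defocus score of the general kind compared here can be sketched by convolving the image with a high-pass kernel and averaging the absolute response: sharp irises have strong high-frequency content, defocused ones do not. The 3x3 Laplacian below is a stand-in assumption, not the actual Kang & Park kernel:

```python
import numpy as np

def focus_score(img, kernel=None):
    """Average absolute high-pass response over the image (valid region).
    Higher scores indicate sharper, better-focused images."""
    if kernel is None:
        kernel = np.array([[-1, -1, -1],
                           [-1,  8, -1],
                           [-1, -1, -1]], dtype=float)   # generic Laplacian
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(img[y:y+3, x:x+3] * kernel)
    return np.abs(out).mean()

rng = np.random.default_rng(1)
sharp = rng.random((32, 32))                  # high-frequency texture
# Crude "defocus": running mean over the top-left submatrix at each pixel
blurred = (np.cumsum(np.cumsum(sharp, 0), 1)
           / np.outer(np.arange(1, 33), np.arange(1, 33)))
print(focus_score(sharp) > focus_score(blurred))
```

Thresholding such a score is what allows the poor-quality frames of a video sequence to be discarded cheaply before segmentation and matching.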
Optimization of Parameterized Compactly Supported Orthogonal Wavelets for Data Compression

In this work we review the parameterization of the filter coefficients of compactly supported orthogonal wavelets used to implement the discrete wavelet transform. We also present the design of wavelet-based filters as a constrained optimization problem in which a genetic algorithm is used to improve the compression ratio on grayscale images by minimizing their entropy, and we develop a quasi-perfect reconstruction scheme for images. Our experimental results report a significant improvement over previous works and motivate us to explore other kinds of perfect reconstruction filters based on parameterized tight frames.

Oscar Herrera Alcántara, Miguel González Mendoza
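The 4-tap case of such a parameterization (the Pollen/lattice single-angle form) can be sketched as follows. A brute-force grid search over the angle stands in for the paper's genetic algorithm, and the quantization step used in the entropy estimate is an assumed detail:

```python
import numpy as np

def filters(alpha):
    """Single-angle (Pollen) parameterization covering all 4-tap
    compactly supported orthonormal scaling filters; alpha = pi/2
    gives Haar, alpha = pi/3 gives a Daubechies-4 variant."""
    c, s = np.cos(alpha), np.sin(alpha)
    h = np.array([1 + c - s, 1 + c + s, 1 - c + s, 1 - c - s]) / (2 * np.sqrt(2))
    g = h[::-1] * np.array([1, -1, 1, -1])   # quadrature mirror high-pass
    return h, g

def analysis(x, h, g):
    """One DWT level with periodic extension (even-length signals)."""
    n = len(x)
    xe = np.concatenate([x, x[:2]])          # wrap for the 4-tap support
    lo = np.array([xe[2*k:2*k+4] @ h for k in range(n // 2)])
    hi = np.array([xe[2*k:2*k+4] @ g for k in range(n // 2)])
    return lo, hi

def entropy(coeffs, step=0.05):
    """First-order entropy of uniformly quantized coefficients (bits)."""
    q = np.round(coeffs / step).astype(int)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

# Pick the angle whose transform coefficients have minimum entropy,
# i.e. the filter best suited to compressing this particular signal.
x = np.sin(np.linspace(0, 4 * np.pi, 64))
best = min(np.linspace(0, np.pi, 90),
           key=lambda a: entropy(np.concatenate(analysis(x, *filters(a)))))
print(best)
```

The parameterization guarantees orthonormality for every angle, so the optimizer (GA in the paper, grid search here) can search the angle freely without violating the perfect-reconstruction constraints.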
Efficient Pattern Recalling Using a Non Iterative Hopfield Associative Memory

Associative memories have proven to be useful in the pattern processing field. The Hopfield model is an autoassociative memory that has problems in the recall phase, among them the convergence time, or non-convergence in certain cases with badly recovered patterns. In this paper, a new algorithm for the Hopfield associative memory eliminates the iterative process, reducing computing time and uncertainty in pattern recall. This algorithm is implemented using a corrective vector, which is computed from the Hopfield memory and adjusts misclassifications in the recalled output patterns. Results show a good performance of the proposed algorithm, providing an alternative tool for the pattern recognition field.

José Juan Carbajal Hernández, Luis Pastor Sánchez Fernández
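For context, the conventional Hebbian Hopfield memory with a single synchronous update can be sketched as below. This is not the paper's corrective-vector algorithm, only the standard recall step whose iterative convergence issues it aims to eliminate:

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian storage: W is the sum of outer products of the +/-1
    patterns, with a zero diagonal (no self-connections)."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P
    np.fill_diagonal(W, 0)
    return W

def recall_one_shot(W, x):
    """A single synchronous update instead of iterating to convergence."""
    return np.sign(W @ x + 1e-12)    # tiny bias avoids sign(0) = 0

patterns = [[1, -1, 1, -1, 1, -1],
            [1, 1, 1, -1, -1, -1]]
W = hopfield_train(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1], dtype=float)  # pattern 0, last bit flipped
print(recall_one_shot(W, noisy))
```

With few, well-separated stored patterns a single update already corrects the flipped bit; the classical difficulty is that in general the synchronous dynamics may cycle or converge slowly, which motivates non-iterative schemes like the one proposed here.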
Backmatter
Metadata
Title
Advances in Soft Computing
Editors
Ildar Batyrshin
Grigori Sidorov
Copyright Year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-25330-0
Print ISBN
978-3-642-25329-4
DOI
https://doi.org/10.1007/978-3-642-25330-0
