About this Book

This book constitutes the refereed proceedings of the 16th Iberoamerican Congress on Pattern Recognition, CIARP 2011, held in Pucón, Chile, in November 2011. The 81 revised full papers, presented together with 3 keynotes, were carefully reviewed and selected from numerous submissions. The topics covered are image processing, restoration and segmentation; computer vision; clustering and artificial intelligence; pattern recognition and classification; and applications of pattern recognition; the volume also includes contributions from the Chilean Workshop on Pattern Recognition.

Table of Contents

Frontmatter

Keynote Lectures

The Dissimilarity Representation for Structural Pattern Recognition

The patterns in collections of real world objects are often not based on a limited set of isolated properties such as features. Instead, the totality of their appearance constitutes the basis of the human recognition of patterns. Structural pattern recognition aims to find explicit procedures that mimic the learning and classification made by human experts in well-defined and restricted areas of application. This is often done by defining dissimilarity measures between objects and measuring them between training examples and new objects to be recognized.

The dissimilarity representation offers the possibility to apply the tools developed in machine learning and statistical pattern recognition to learn from structural object representations such as graphs and strings. These procedures are also applicable to the recognition of histograms, spectra, images and time sequences taking into account the connectivity of samples (bins, wavelengths, pixels or time samples).

The topic of dissimilarity representation is related to the field of non-Mercer kernels in machine learning, but it covers a wider set of classifiers and applications. Recently, much progress has been made in this area and many interesting applications have been studied in medical diagnosis, seismic and hyperspectral imaging, chemometrics and computer vision. This review paper offers an introduction to this field and presents a number of real-world applications.
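
The following minimal sketch illustrates the core idea behind dissimilarity representations: describe every object by its dissimilarities to a set of prototype objects, then train an ordinary vector-space classifier on those vectors. All names are illustrative, and the Euclidean distance is only a stand-in for a structural dissimilarity such as a graph edit distance.

```python
# Minimal sketch of a dissimilarity representation: each object is described
# by its dissimilarities to a set of prototypes, after which any vector-space
# classifier applies. Euclidean distance stands in for a structural measure.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

prototypes = X_tr[:50]                      # representation set R

def dissimilarity_matrix(A, R):
    """D[i, j] = dissimilarity between object A[i] and prototype R[j]."""
    return np.linalg.norm(A[:, None, :] - R[None, :, :], axis=2)

D_tr = dissimilarity_matrix(X_tr, prototypes)
D_te = dissimilarity_matrix(X_te, prototypes)

clf = LinearDiscriminantAnalysis().fit(D_tr, y_tr)  # classifier in dissimilarity space
print("accuracy:", clf.score(D_te, y_te))
```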

Robert P. W. Duin, Elżbieta Pȩkalska

Describing When and Where in Vision

In contrast to the what and where pathways in the organization of the visual system, we address representations that describe dynamic visual events in a unified way.

Representations are an essential tool for any kind of process that operates on data, as they provide a language to describe, store and retrieve that data. They define the possible properties and aspects that are stored, and govern the levels of abstraction at which the respective properties are described. In the case of visual computing (computer vision, image processing), a representation is used to describe information obtained from visual input (e.g. an image or image sequence and the objects it may contain) as well as related prior knowledge (experience).

The ultimate goal, to make applications of visual computing part of our daily life, requires that vision systems operate reliably, nearly anytime and anywhere. Therefore, the research community aims to solve increasingly complex scenarios. Vision, both in humans and computers, is a dynamic process, thus variations (change) always appear in the spatial and the temporal dimensions. Nowadays significant research efforts are undertaken to represent variable shape and appearance; however, the joint representation and processing of the spatial and temporal domains is not yet a well-investigated topic. Visual computing tasks are mostly solved by a two-stage approach of frame-based processing and subsequent temporal processing. Unfortunately, this approach reaches its limits in scenes with high complexity or difficult tasks, e.g. action recognition. Therefore, we focus our research on representations that jointly describe information in space and time and allow processing data of space-time volumes (several consecutive frames).

In this keynote we relate our own experience and motivations to the current state of the art of representations of shape, appearance, structure, and motion. Challenges for such representations lie in applications like multiple object tracking, tracking non-rigid objects and human action recognition.

Walter G. Kropatsch, Adrian Ion, Nicole M. Artner

Applications of Multilevel Thresholding Algorithms to Transcriptomics Data

Microarrays are one of the methods for analyzing the expression levels of genes in a massive and parallel way. Since any errors in early stages of the analysis affect subsequent stages, leading to possibly erroneous biological conclusions, finding the correct location of the spots in the images is extremely important for subsequent steps that include segmentation, quantification, normalization and clustering. On the other hand, genome-wide profiling of DNA-binding proteins using ChIP-seq and RNA-seq has emerged as an alternative to ChIP-chip methods. Due to the large amounts of data produced by next generation sequencing technology, ChIP-seq and RNA-seq offer much higher resolution, less noise and greater coverage than their predecessor, the ChIP-chip array.

Multilevel thresholding algorithms have been applied to many problems in image and signal processing. We show that these algorithms can be used for transcriptomics and genomics data analysis, such as sub-grid and spot detection in DNA microarrays, and also for detecting significant regions based on next generation sequencing data. We discuss the advantages and disadvantages of using multilevel thresholding and other algorithms in these two applications, and give an overview of numerical and visual results used to validate the power of the thresholding methods on previously published data.
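
As a concrete reference point, the sketch below implements the classical multilevel (two-threshold) Otsu criterion by brute force on a 1-D histogram, maximizing the between-class variance over all threshold pairs. It is a generic illustration of multilevel thresholding, not the authors' optimized algorithm.

```python
# Brute-force multilevel (two-threshold) Otsu on a 1-D histogram: pick the
# thresholds that maximize the between-class variance sum_k w_k*(mu_k - mu)^2.
import itertools
import numpy as np

def multi_otsu(hist, n_thresholds=2):
    """Return the thresholds maximizing between-class variance of `hist`."""
    levels = np.arange(len(hist))
    p = hist / hist.sum()
    total_mean = (p * levels).sum()
    best, best_score = None, -1.0
    for ts in itertools.combinations(range(1, len(hist)), n_thresholds):
        score, lo = 0.0, 0
        for hi in list(ts) + [len(hist)]:
            w = p[lo:hi].sum()                   # class weight
            if w > 0:
                mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                score += w * (mu - total_mean) ** 2
            lo = hi
        if score > best_score:
            best, best_score = ts, score
    return best

# Three intensity populations, e.g. background / weak spots / strong spots.
samples = np.concatenate([np.random.normal(m, 5, 500) for m in (20, 60, 100)])
hist, _ = np.histogram(samples, bins=128)
print(multi_otsu(hist))
```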

Luis Rueda, Iman Rezaeian

Image Processing, Restoration and Segmentation

Unsupervised Fingerprint Segmentation Based on Multiscale Directional Information

The segmentation task is an important step in automatic fingerprint classification and recognition. In this context, the term refers to splitting the image into two regions, namely, foreground and background. In this paper, we introduce a novel segmentation approach designed to deal with fingerprint images originating from different sensors. The method considers a multiscale directional operator and a scale-space toggle mapping used to estimate the image background information. We evaluate our approach on images of different databases, and show its improvements when compared against other well-known state-of-the-art segmentation methods discussed in the literature.

Raoni F. S. Teixeira, Neucimar J. Leite

Thermal Noise Estimation and Removal in MRI: A Noise Cancellation Approach

In this work a closed-form, maximum-likelihood (ML) estimator for the variance of the thermal noise in magnetic resonance imaging (MRI) systems has been developed. The ML estimator was, in turn, used as a priori information for devising a single dimensional noise-cancellation-based image restoration algorithm. The performance of the estimator was assessed theoretically by means of the Cramér-Rao lower bound, and the effect of selecting an appropriate set of no-signal pixels on estimating the noise variance was also investigated. The effectiveness of the noise-cancellation-based image restoration algorithm in compensating for the thermal noise in MRI was also evaluated. Actual MRI data from the LONI database were employed to assess the performance of both the ML estimator and the image restoration algorithm.
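
For background, magnitude MR images are commonly modeled so that no-signal pixels follow a Rayleigh distribution, for which the ML variance estimate has the well-known closed form sigma^2 = sum(M_i^2) / (2N). The sketch below illustrates this generic estimator; the paper's exact derivation may differ.

```python
# Generic sketch: under the standard Rayleigh model for no-signal magnitude
# pixels, the closed-form ML estimate of the thermal-noise variance from N
# background pixels M_i is sigma^2 = sum(M_i^2) / (2N).
import numpy as np

def ml_noise_variance(background_pixels):
    m = np.asarray(background_pixels, dtype=float)
    return np.sum(m ** 2) / (2.0 * m.size)

rng = np.random.default_rng(0)
sigma = 3.0
noise = rng.normal(0, sigma, (2, 10000))     # real/imaginary channels
magnitude = np.hypot(noise[0], noise[1])     # Rayleigh-distributed background
print(ml_noise_variance(magnitude))          # close to sigma**2 = 9.0
```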

Miguel E. Soto, Jorge E. Pezoa, Sergio N. Torres

Spectral Model for Fixed-Pattern-Noise in Infrared Focal-Plane Arrays

In this paper a novel and more realistic analytical model for the fixed-pattern noise present in infrared focal plane arrays is developed. The model captures, in the frequency domain, the spatial structure of the fixed-pattern noise, yielding a suitable input/output representation for an infrared focal plane array. The theoretical and practical applicability of the model is illustrated both by synthesizing fixed-pattern noise from three different infrared cameras and by improving the performance of a previously reported fixed-pattern noise compensation algorithm.

Jorge E. Pezoa, Osvaldo J. Medina

Blotch Detection for Film Restoration

Blotches are one of the most common film degradations that must be detected and corrected in the process of film restoration. In this work we address the problem of blotch detection in the context of digital film restoration. Although there are several methods for blotch detection, in the literature their evaluation is usually subjective. In this work we propose a new method for blotch detection and an objective methodology to evaluate its performance. We show that the proposed method outperforms other existing methods under this objective metric.

Alvaro Pardo

Rapid Cut Detection on Compressed Video

The temporal segmentation of a video sequence is one of the most important aspects of video processing, analysis, indexing, and retrieval. Most existing techniques that address the problem of identifying the boundary between consecutive shots have focused on the uncompressed domain. However, decoding and analyzing a video sequence are two extremely time-consuming tasks. Since video data are usually available in compressed form, it is desirable to process the video material directly, without decoding. In this paper, we present a novel approach for video cut detection that works in the compressed domain. The proposed method is based both on exploiting visual features extracted from the video stream and on using a simple and fast algorithm to detect the video transitions. Experiments on a real-world video dataset spanning several genres show that our approach achieves accuracy comparable to state-of-the-art solutions, in a computational time that makes it suitable for online usage.

Jurandy Almeida, Neucimar J. Leite, Ricardo da S. Torres

Local Quality Method for the Iris Image Pattern

Recent research on iris recognition without user cooperation has introduced video-based iris capture approaches. Indeed, video provides more information and more flexibility in the image acquisition stage of noncooperative iris recognition systems. However, a video sequence can contain images with different levels of quality. Therefore, it is necessary to select the highest quality images from each video to improve iris recognition performance. In this paper, we propose, as part of a video quality assessment module, a new local iris image quality method based on spectral energy analysis. Unlike most existing approaches, this method does not require iris region segmentation to determine the quality of the image. In contrast to other methods, the proposed algorithm uses a significant portion of the iris region to measure the quality in that area. The method evaluates the energy of 1000 images extracted from 200 iris videos of the MBGC NIR video database. The results show that the proposed method is very effective at assessing the quality of the iris information: it selects the two highest-energy images as the best two images from each video in 226 milliseconds.

Luis Miguel Zamudio-Fuentes, Mireya S. García-Vázquez, Alejandro Alvaro Ramírez-Acosta

Assessment of SAR Image Filtering Using Adaptive Stack Filters

Stack filters are a special case of non-linear filters. They have a good performance for filtering images with different types of noise while preserving edges and details. A stack filter decomposes an input image into several binary images according to a set of thresholds. Each binary image is then filtered by a Boolean function, which characterizes the filter. Adaptive stack filters can be designed to be optimal; they are computed from a pair of images consisting of an ideal noiseless image and its noisy version. In this work we study the performance of adaptive stack filters when they are applied to Synthetic Aperture Radar (SAR) images. This is done by evaluating the quality of the filtered images through the use of suitable image quality indexes and by measuring the classification accuracy of the resulting images.
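
The threshold-decomposition principle described above fits in a few lines of code. The sketch below decomposes a 1-D signal into binary slices, filters each slice with a positive Boolean function (a 3-sample majority vote, which stacks back up to the median filter), and sums the slices; it is a textbook illustration, not the paper's adaptive design procedure.

```python
# Sketch of the stack-filter principle: threshold-decompose a signal into
# binary slices, filter each slice with a positive Boolean function (here a
# 3-sample majority, which stacks back to the median filter), and sum.
import numpy as np

def stack_filter(signal, levels=256):
    x = np.asarray(signal)
    out = np.zeros_like(x)
    for t in range(1, levels):
        b = (x >= t).astype(int)                 # binary slice at threshold t
        padded = np.pad(b, 1, mode='edge')
        # Boolean function: 3-point majority vote on each sliding window
        filtered = ((padded[:-2] + padded[1:-1] + padded[2:]) >= 2).astype(int)
        out += filtered                          # stack the slices back up
    return out

noisy = np.array([10, 12, 200, 11, 13, 12, 0, 14], dtype=int)
print(stack_filter(noisy))   # impulses suppressed, edges preserved
```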

María E. Buemi, Marta Mejail, Julio Jacobo, Alejandro C. Frery, Heitor S. Ramos

Subcutaneous Adipose Tissue Segmentation in Whole-Body MRI of Children

In this paper, we propose a new method to segment the subcutaneous adipose tissue (SAT) in whole-body (WB) magnetic resonance images of children. The method is based on an automated learning of radiometric characteristics, which is adaptive for each individual case, a decomposition of the body according to its main parts, and a minimal surface approach. The method aims at contributing to the creation of WB anatomical models of children, for applications such as numerical dosimetry simulations or medical applications such as obesity follow-up. Promising results are obtained on data from 20 children at various ages. Segmentations are validated with 4 manual segmentations.

Geoffroy Fouquier, Jérémie Anquez, Isabelle Bloch, Céline Falip, Catherine Adamsbaum

Infrared Focal Plane Array Imaging System Characterization by Means of a Blackbody Radiator

Infrared (IR) focal plane array (IRFPA) cameras are nowadays both more accessible and available in a broad variety of detector designs. In many cases, the IRFPA characterization is not completely given by the manufacturer. In this paper a long wave 8-12 [μm] microbolometer IRFPA is characterized by means of calculating the Noise Equivalent Temperature Difference (NETD) and the Correctability performance parameters. The Correctability parameter has been evaluated by using a blackbody radiator and the two-point calibration technique. Also, the transfer function of the microbolometer IR camera has been experimentally obtained, as well as the NETD, by the evaluation of radiometric data from a blackbody radiator. The obtained parameters are key to any successful application of IR imaging to pattern recognition.

Francisca Parra, Pablo Meza, Carlos Toro, Sergio Torres

An Adaptive Color Similarity Function for Color Image Segmentation

In this paper an interactive, semiautomatic image segmentation method is presented which processes the color information of each pixel as a unit, thus avoiding color information scattering. The process has only two steps: 1) the manual selection of a few sample pixels of the color to be segmented in the image; and 2) the automatic generation of the so-called Color Similarity Image (CSI), which is just a gray level image with all the tonalities of the selected colors. The color information of every pixel is integrated in the segmented image by an adaptive color similarity function designed for direct color comparisons. The color integrating technique is direct, simple, and computationally inexpensive, and it also performs well on gray level and low contrast images.

Rodolfo Alvarado-Cervantes, Edgardo M. Felipe-Riveron

Computer Vision

A New Prior Shape Model for Level Set Segmentation

Level set methods are effective for image segmentation problems. However, they suffer from limitations such as slow convergence and leaking problems. As such, over the past two decades, the original level set method has evolved in many directions, including the integration of prior shape models into the segmentation framework. In this paper, we introduce a new prior shape model for level set segmentation. With a shape model represented implicitly by a signed distance function, we incorporate a local shape parameter into the shape model. This parameter helps to regulate the model fitting process. Based on this local parameter of the shape model, we define a shape energy to drive the level set evolution for image segmentation. The shape energy is coupled with a Gaussian kernel, which acts as a weight distribution on the shape model. This Gaussian effect not only allows the evolving level set to deform around the shape model, but also provides a smoothing effect along the edges. Our approach presents a new dimension: extracting a local shape parameter directly from the shape model, in contrast to previous work that focused on indirect feature extraction. Experimental results on synthetic, optical and MR images demonstrate the feasibility of this new shape model and shape energy.

Poay Hoon Lim, Ulas Bagci, Li Bai

Efficient 3D Curve Skeleton Extraction from Large Objects

Curve skeletons are used for the linear representation of 3D objects in a wide variety of engineering and medical applications. The outstandingly robust and flexible curve skeleton extraction algorithm based on generalized potential fields suffers from a seriously heavy computational burden. In this paper we propose and evaluate a hierarchical formulation of the algorithm, which reduces the space where the skeleton is searched by excluding areas that are unlikely to contain relevant skeleton branches. The algorithm was evaluated using dozens of object volumes. Tests revealed that the computational load of the skeleton extraction can be reduced up to 100 times without relevant loss of accuracy.

László Szilágyi, Sándor Miklós Szilágyi, David Iclănzan, Lehel Szabó

Improving Tracking Algorithms Using Saliency

One of the challenges of computer vision is to improve automatic systems for the recognition and tracking of objects in a set of images. One approach that has recently gained importance is based on extracting descriptors, such as the covariance descriptor, because they remain invariant in the regions of these images despite changes of translation, rotation and scale. In this work we propose, using the covariance descriptor, a novel saliency system able to find the most relevant regions in an image, which can be used for recognizing and tracking objects. Our method is based on the amount of information at each point in the image, and allows us to adapt the regions to maximize the difference of information between a region and its environment. The results show that this tool can boost tracker precision up to 90% (from an initial precision of 50%) without compromising recall.
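
For readers unfamiliar with the region covariance descriptor (Tuzel et al.), the sketch below shows its basic construction: map each pixel to a small feature vector and describe the region by the covariance matrix of those vectors. The feature set chosen here is the common minimal one and is only illustrative.

```python
# Sketch of the region covariance descriptor: each pixel of a region is
# mapped to a feature vector (x, y, intensity, |dI/dx|, |dI/dy|), and the
# region is described by the covariance matrix of these vectors.
import numpy as np

def covariance_descriptor(patch):
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    F = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                  np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(F)            # 5x5 symmetric positive semi-definite matrix

patch = np.random.rand(32, 32)  # stand-in for an image region
C = covariance_descriptor(patch)
print(C.shape)                  # (5, 5)
```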

Cristobal Undurraga, Domingo Mery

Using Adaptive Run Length Smoothing Algorithm for Accurate Text Localization in Images

Text information in images and videos is frequently a key factor for information indexing and retrieval systems. However, text detection in images is a difficult task since text is often embedded in complex backgrounds. In this paper, we propose an accurate text detection and localization method for images based on stroke information and the Adaptive Run Length Smoothing Algorithm. Experimental results show that the proposed approach is accurate, has high recall, and is robust to various text sizes, fonts, colors and languages.
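
The basic (non-adaptive) Run Length Smoothing Algorithm underlying this method is easy to state: in a binary row, background runs shorter than a threshold are filled so that nearby foreground components merge into text blocks. The sketch below illustrates this classic building block; the adaptive variant in the paper derives the threshold from local stroke statistics.

```python
# Classic RLSA on one binary row: fill background (0) runs no longer than a
# threshold with foreground (1), merging close components into blocks.
import numpy as np

def rlsa_row(row, threshold):
    row = row.copy()
    run_start, n = None, len(row)
    for i in range(n + 1):
        val = row[i] if i < n else 1          # sentinel closes a trailing run
        if val == 0 and run_start is None:
            run_start = i
        elif val == 1 and run_start is not None:
            if i - run_start <= threshold:
                row[run_start:i] = 1          # fill short background run
            run_start = None
    return row

line = np.array([1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1])
print(rlsa_row(line, threshold=3))  # -> [1 1 1 1 0 0 0 0 0 1 1]
```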

Martin Rais, Norberto A. Goussies, Marta Mejail

Fast Rotation-Invariant Video Caption Detection Based on Visual Rhythm

Text detection in images has been studied and improved for decades. Many works extend the existing methods to the analysis of videos; however, few of them create or adapt approaches that consider inherent characteristics of videos, such as temporal information. This work proposes a very fast method for identifying video frames that contain text through a special data structure called the visual rhythm. The method is robust in detecting video captions with respect to font styles, color intensity, and text orientation. A dataset was built for our experiments to compare and evaluate the effectiveness of the proposed method.

Felipe Braunger Valio, Helio Pedrini, Neucimar Jeronimo Leite

Morphology Based Spatial Relationships between Local Primitives in Line Drawings

Local primitives and their spatial relationships are useful in the analysis, recognition and retrieval of document and patent binary images. In this paper, a morphology based approach is proposed to establish the connections between the local primitives found at the optimally detected junction points and end points. The grayscale geodesic dilation is employed as the basic technique by taking a marker image with gray values at the local primitives and the skeleton of the original image as the mask image. The geodesic paths along the skeleton between the local primitives are traversed and their points of contact are protected by updating the mask image after each geodesic dilation iteration. By scanning the final marker image for the contact points of the traversed geodesic paths, connections between the local primitives are established. The proposed approach is robust and scale invariant.
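
Grayscale geodesic dilation, the basic operation used above, is morphological reconstruction: repeatedly dilate the marker and clip it by the mask until stability. The sketch below shows this generic operation with SciPy; the marker/mask setup mimics a gray value planted at a primitive spreading along a skeleton, not the paper's full traversal procedure.

```python
# Grayscale geodesic dilation (reconstruction by dilation): dilate the
# marker, clip it by the mask, and repeat until the result stabilizes.
import numpy as np
from scipy import ndimage

def geodesic_dilation(marker, mask):
    prev = np.zeros_like(marker)
    current = marker.copy()
    while not np.array_equal(prev, current):
        prev = current
        dilated = ndimage.grey_dilation(current, size=(3, 3))
        current = np.minimum(dilated, mask)   # stay under the mask (skeleton)
    return current

mask = np.zeros((5, 7), dtype=int)
mask[2, 1:6] = 100                 # a one-pixel-wide "skeleton" path
marker = np.zeros_like(mask)
marker[2, 1] = 100                 # gray value planted at a local primitive
print(geodesic_dilation(marker, mask))   # the value propagates along the path
```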

Naeem A. Bhatti, Allan Hanbury

Fully Automatic Methodology for Human Action Recognition Incorporating Dynamic Information

In this paper, a star-skeleton-based methodology is described for analyzing the motion of a human target in a video sequence. The star skeleton is a fast skeletonization technique that connects the centroid of the target object to its contour extremes. We represent the skeleton as a five-dimensional vector, which includes information about the positions of the head and four limbs of a human shape in a given frame. In this manner, an action is composed of a sequence of star skeletons. In order to use an HMM to model the actions, a posture codebook is built that integrates star skeleton and motion information; this additional information allows better discrimination between actions. Supervised (manual) and unsupervised (clustering-based) methods have been used to create the posture codebook. The codebook depends on the actions to be represented (we chose four actions as examples: walk, jump, wave and jack). The results show, firstly, that including motion information is important for correctly differentiating between actions and, secondly, that using a clustering methodology to create the codebook yields a substantial improvement in results.

Ana González, Marcos Ortega Hortas, Manuel G. Penedo

Local Response Context Applied to Pedestrian Detection

Appearing as an important task in computer vision, pedestrian detection has been widely investigated in recent years. To design a robust detector, we propose a feature descriptor called Local Response Context (LRC). This descriptor captures discriminative information regarding the surroundings of the person's location by sampling the response map obtained by a generic sliding window detector. A partial least squares regression model using LRC descriptors is learned and employed as a second classification stage (after the execution of the generic detector to obtain the response map). Experiments based on the ETHZ pedestrian dataset show that the proposed approach significantly improves the results achieved by the generic detector alone and is comparable to state-of-the-art methods.

William Robson Schwartz, Larry S. Davis, Helio Pedrini

Fast Finsler Active Contours and Shape Prior Descriptor

In this paper we propose a new segmentation method based on Fast Finsler Active Contours (FFAC). The FFAC is formulated in the Total Variation (TV) framework, incorporating both region and shape descriptors. In the Finsler metric, the anisotropic boundary descriptor favors strong edge locations and suitable directions aligned with dark-to-bright image gradients; strong edges are not required everywhere along the boundary. We prove the existence of a solution to the new binary Finsler active contours model and propose a fast and easy algorithm in the characteristic function framework. Finally, we show results on some challenging MR images to illustrate its accuracy.

Foued Derraz, Abdelmalik Taleb-Ahmed, Laurent Peyrodie, Gerard Forzy, Christina Boydev

NURBS Skeleton: A New Shape Representation Scheme Using Skeletonization and NURBS Curves Modeling

The representation and description of shapes or regions that have been segmented out of an image are early steps in the operation of most computer vision systems; they serve as a precursor to several possible higher level tasks such as object/character recognition. In this context, skeletons have good properties for data reduction and representation. In this paper we present a novel shape representation scheme, named "NURBS Skeleton", based on the thinning medial axis method, a pruning process, and Non Uniform Rational B-Spline (NURBS) curve approximation for the modeling step.

Mohamed Naouai, Atef Hammouda, Sawssen Jalel, Christiane Weber

Multiple Manifold Learning by Nonlinear Dimensionality Reduction

Methods for nonlinear dimensionality reduction have been widely used for different purposes, but they are constrained to single manifold datasets. Considering that in real world applications, like video and image analysis, datasets with multiple manifolds are common, we propose a framework to find a low-dimensional embedding for data lying on multiple manifolds. Our approach is inspired by the manifold learning algorithm Laplacian Eigenmaps (LEM), computing the relationships among samples of different datasets based on an intra-manifold comparison to properly unfold the underlying structure of the data. According to the results, our approach produces meaningful embeddings that outperform those obtained by the conventional LEM algorithm and by a closely related previous work that analyzes multiple manifolds.
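
For context, the conventional LEM baseline the paper extends works as sketched below: build a kNN graph over the samples, form the graph Laplacian, and embed using the low eigenvectors of the generalized problem L v = lambda D v. Parameter values are illustrative.

```python
# Minimal sketch of conventional Laplacian Eigenmaps (LEM): kNN graph ->
# graph Laplacian L = D - W -> eigenvectors of L v = lambda D v.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmaps(X, n_components=2, k=10):
    W = kneighbors_graph(X, n_neighbors=k, mode='connectivity')
    W = (0.5 * (W + W.T)).toarray()          # symmetrize the adjacency
    D = np.diag(W.sum(axis=1))               # degree matrix
    L = D - W                                # unnormalized graph Laplacian
    vals, vecs = eigh(L, D)                  # generalized eigenproblem
    return vecs[:, 1:n_components + 1]       # skip the trivial constant vector

X = np.random.rand(200, 10)                  # stand-in for a single manifold
Y = laplacian_eigenmaps(X)
print(Y.shape)                               # (200, 2)
```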

Juliana Valencia-Aguirre, Andrés Álvarez-Meza, Genaro Daza-Santacoloma, Carlos Acosta-Medina, César Germán Castellanos-Domínguez

Modeling Distance Nonlinearity in ToF Cameras and Correction Based on Integration Time Offsets

Time of Flight (ToF) cameras capture the depth images based on a new sensor technology allowing them to process the whole 3D scenario at once. These cameras deliver the intensity as well as the amplitude information. Due to difference in travel time of the rays reaching the sensor array, the captured distance information is affected by non linearities. In this paper, the authors propose three models (the monostatic, bistatic and optimized) for correcting the distance non linearity. The thermal characteristic of the sensor is studied in real time and analysis for integration time offsets for different reflectivity boards are carried out. The correction results are demonstrated for different reflectivity targets based on our models and analyzed integration offsets.

Claudio Uriarte, Bernd Scholz-Reiter, Sheshu Kalaparambathu Ramanandan, Dieter Kraus

A Measure for Accuracy Disparity Maps Evaluation

The quantitative evaluation of disparity maps is based on error measures. Among the existing measures, the percentage of Bad Matched Pixels (BMP) is widely adopted. Nevertheless, the BMP does not consider the magnitude of the errors and the inherent error of stereo systems, in regard to the inverse relation between depth and disparity. Consequently, different disparity maps, with quite similar percentages of BMP, may produce 3D reconstructions of largely different qualities. In this paper, a ground-truth based measure of errors in estimated disparity maps is presented. It offers advantages over the BMP, since it takes into account the magnitude of the errors and the inverse relation between depth and disparity. Experimental validations of the proposed measure are conducted by using two state-of-the-art quantitative evaluation methodologies. Obtained results show that the proposed measure is more suited than BMP to evaluate the depth accuracy of the estimated disparity map.
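
The BMP baseline the paper improves upon is simple to state: the fraction of pixels whose disparity error exceeds a tolerance. The sketch below computes it; note that an error barely above the tolerance counts exactly as much as a huge one, which is the magnitude-blindness discussed above.

```python
# Bad Matched Pixels (BMP), as used in standard disparity-map evaluation:
# the percentage of pixels with |estimated - ground truth| > delta.
import numpy as np

def bad_matched_pixels(d_est, d_gt, delta=1.0):
    err = np.abs(d_est.astype(float) - d_gt.astype(float))
    return 100.0 * np.mean(err > delta)

d_gt = np.full((4, 4), 10.0)                       # toy ground-truth disparities
d_est = d_gt + np.random.normal(0, 1.5, d_gt.shape)
print(bad_matched_pixels(d_est, d_gt))
```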

Ivan Cabezas, Victor Padilla, Maria Trujillo

Mixing Hierarchical Contexts for Object Recognition

Robust category-level object recognition is currently a major goal for the computer vision community. Intra-class and pose variations, as well as background clutter and partial occlusions, are some of the main difficulties in achieving this goal. Contextual information in the form of object co-occurrences and spatial constraints has been successfully applied to reduce the inherent uncertainty of the visual world. Recently, Choi et al. [5] proposed the use of a tree-structured graphical model to capture contextual relations among objects. Under this model there is only one possible fixed contextual relation among subsets of objects. In this work we extend the approach of Choi et al. by using a mixture model to consider the case in which contextual relations among objects depend on the scene type. Our experiments highlight the advantages of our proposal, showing that the adaptive specialization of contextual relations improves object recognition and object detection performance.

Billy Peralta, Alvaro Soto

Encoding Spatial Arrangement of Visual Words

This paper presents a new approach to encode spatial-relationship information of visual words in the well-known visual dictionary model. The currently most popular approach to describing images based on visual words is by means of bags-of-words, which do not encode any spatial information. We propose a graceful way to capture spatial-relationship information that encodes the spatial arrangement of every visual word in an image. Our experiments show the importance of the spatial information of visual words for image classification and show the gain in classification accuracy when using the new method. The proposed approach creates opportunities for further improvements in image description under the visual dictionary model.

Otávio A. B. Penatti, Eduardo Valle, Ricardo da S. Torres

Color-Aware Local Spatiotemporal Features for Action Recognition

Despite the recent developments in spatiotemporal local features for action recognition in video sequences, local color information has so far been ignored. However, color has proved to be an important element in the success of automated recognition of objects and scenes. In this paper we extend the space-time interest point descriptor STIP to take into account the color information in the features' neighborhood. We compare the performance of our color-aware version of STIP (which we have called HueSTIP) with the original one.

Fillipe Souza, Eduardo Valle, Guillermo Chávez, Arnaldo de A. Araújo

On the Flame Spectrum Recovery by Using a Low-Spectral Resolution Sensor

In this paper, the Maloney-Wandell and Imai-Berns spectrum-recovery techniques are evaluated for extracting the continuous flame spectrum, using three principal components from training matrices constructed from a flame spectrum database. Six different sizes of training matrices were considered in the evaluation. To simulate the Maloney-Wandell and Imai-Berns methods, a commercial camera sensitivity was used as a basis in the extraction process. The GFC (goodness-of-fit coefficient) and RMSE (root-mean-square error) quality metrics were used to compare the performance of the recovery process. The simulation results show better performance for the Maloney-Wandell method, with small training matrices. The achieved results make spectral-recovery techniques very attractive tools for designing advanced monitoring strategies for combustion processes.

Luis Arias, Sergio Torres

On the Importance of Multi-dimensional Information in Gender Estimation from Face Images

Estimating human face demography from images is a problem that has recently been extensively studied because of its relevant applications. We review state-of-the-art approaches to gender classification and confirm that their performance drops significantly when classifying young or elderly faces. We hypothesize that this is caused by the existence of dependencies among the demographic variables that were not considered in traditional gender classifiers. In the paper we confirm experimentally the existence of such dependencies between age and gender variables. We also prove that the performance of gender classifiers can be improved by considering the dependencies with age in a multi-dimensional approach. The performance improvement is most prominent for young and elderly faces.

Juan Bekios-Calfa, José M. Buenaposada, Luis Baumela

Clustering and Artificial Intelligence

Pattern Classification Using Radial Basis Function Neural Networks Enhanced with the Rvachev Function Method

The proposed method for classifying clusters of patterns in complex non-convex, disconnected domains using Radial Basis Function Neural Networks (RBFNNs) enhanced with the Rvachev Function Method (RFM) is presented with numerical examples. R-functions are used to construct complex pattern cluster domains, whose parameters are applied to RBFNNs to establish boundaries for classification. The error functional is convex quadratic with respect to the weight functions, which take weight values on the discrete connectors between neurons. The activation function of the RBFNN neurons is the sgn(·) function and, therefore, the error function is non-smooth. The delta learning rule is applied during the training phase. The sub-gradient of the discretized error function is used rather than its gradient, because the latter is not smooth. The application of the RFM allows for the creation, implementation, and resolution of large heterogeneous NNs capable of solving diverse sets of classification problems with greater accuracy.

Mark S. Varvak

Micro-Doppler Classification for Ground Surveillance Radar Using Speech Recognition Tools

Among the applications of a radar system, target classification for ground surveillance is one of the most widely used. This paper deals with micro-Doppler signature (μ-DS) based radar Automatic Target Recognition (ATR). The main goal of performing μ-DS classification using speech processing tools was to investigate whether automatic speech recognition (ASR) techniques are suitable methods for radar ATR. In this work, features extracted from micro-Doppler echo signals using MFCC, LPC and LPCC are used to estimate models for target classification. In the classification stage, two parametric models based on the Gaussian Mixture Model (GMM) and the Greedy GMM were successively investigated for echo target modeling. Maximum a posteriori (MAP) and majority-voting post-processing (MV) decision schemes are applied. Thus, ASR techniques based on GMM and Greedy GMM classifiers have been successfully used to distinguish different classes of target echoes (humans, truck, vehicle and clutter) recorded by a low-resolution ground surveillance Doppler radar. Experimental results show that MV post-processing improves target recognition, with performance reaching 99.08% correct classification on the testing set.
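
The MFCC-plus-GMM pattern described above is standard in speech processing and is sketched below: fit one mixture per class on MFCC frames and assign a test signal to the class with the highest total log-likelihood (a MAP decision under equal priors). Feature settings and the toy signals are illustrative, not the paper's.

```python
# Sketch of GMM-based classification on MFCC features: one mixture per class,
# decision by maximum total log-likelihood over the frames of a test signal.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def extract_mfcc(signal, sr=8000, n_mfcc=13):
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T  # frames x coeffs

def train_models(signals_by_class, n_components=2):
    models = {}
    for label, signals in signals_by_class.items():
        frames = np.vstack([extract_mfcc(s) for s in signals])
        models[label] = GaussianMixture(n_components, covariance_type='diag',
                                        random_state=0).fit(frames)
    return models

def classify(signal, models):
    frames = extract_mfcc(signal)
    scores = {label: m.score_samples(frames).sum() for label, m in models.items()}
    return max(scores, key=scores.get)   # MAP decision under equal priors

rng = np.random.default_rng(0)
toy = {"clutter": [rng.normal(size=8000)],
       "target": [np.sin(np.linspace(0, 900.0, 8000))]}
models = train_models(toy)
print(classify(toy["target"][0], models))   # -> "target"
```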

Dalila Yessad, Abderrahmane Amrouche, Mohamed Debyeche, Mustapha Djeddou

Semantic Integration of Heterogeneous Recognition Systems

Computer perception of real-life situations is performed using a variety of recognition techniques, including video-based computer vision, biometric systems, RFID devices and others. The proliferation of recognition modules enables development of complex systems by integration of existing components, analogously to the Service Oriented Architecture technology. In the paper, we propose a method that enables integration of information from existing modules to calculate results that are more accurate and complete. The method uses semantic description of concepts and reasoning to manage syntactic differences between information returned by modules. The semantic description is based on existing real-world concepts in video recognition and ubiquitous systems. We propose helper functionalities such as: module credibility rating, confidence level declaration and selection of communication protocol. Two integration modes are defined: voting of matching concepts and aggregation of complementing concepts.

Paweł L. Kaczmarek, Piotr Raszkowski

A New Distributed Approach for Range Image Segmentation

In this paper we introduce a new distributed approach for image segmentation based on multi-agent systems. Several agents are placed randomly in the image, and each of them starts a region growing around its position. Since several agents can end up within the same homogeneous region, they must exchange information to better label the pixels they reach. Every labeled pixel is smoothed by replacing its parameters with those of the pixel at the center of the region seed. A set of real range images from the ABW image base was used to evaluate the proposed approach. Experimental results show the potential of the approach to provide an accurate and efficient image segmentation.

Smaine Mazouzi, Zahia Guessoum

Embedded Feature Selection for Support Vector Machines: State-of-the-Art and Future Challenges

Recently, databases have grown in size in all areas of knowledge, in both the number of instances and the number of attributes. Current data sets may contain hundreds of thousands of variables with a high level of redundancy and/or irrelevancy. This amount of data may cause several problems for many data mining algorithms in terms of performance and scalability. In this work we present the state of the art of embedded feature selection using the Support Vector Machine (SVM) classification method, presenting two additional works that can handle the new challenges in this area, such as simultaneous feature and model selection and highly imbalanced binary classification. We compare our approaches with other state-of-the-art algorithms to demonstrate their effectiveness and efficiency.

Sebastián Maldonado, Richard Weber

An Efficient Approach to Intensity Inhomogeneity Compensation Using c-Means Clustering Models

Intensity inhomogeneity or intensity non-uniformity (INU) is an undesired phenomenon that represents the main obstacle for magnetic resonance (MR) image segmentation and registration methods. Various techniques have been proposed to eliminate or compensate for the INU, most of which are embedded into clustering algorithms, and they generally have difficulties when the INU reaches high amplitudes. This study reformulates the design of c-means clustering based INU compensation techniques by identifying and separating those globally working, computationally costly operations that can be applied to gray intensity levels instead of individual pixels. The theoretical assumptions are demonstrated using the fuzzy c-means algorithm, but the proposed modification is compatible with a wide range of c-means clustering based techniques. Experiments using synthetic phantoms and real MR images indicate that the proposed approach produces practically the same segmentation accuracy as the conventional formulation, but 20-30 times faster.

László Szilágyi, David Iclănzan, Lehel Crăciun, Sándor Miklós Szilágyi

A New Asymmetric Criterion for Cluster Validation

In this paper a new criterion for cluster validation is proposed. Many stability measures to validate a cluster have been proposed, such as Normalized Mutual Information. The drawback of this common approach is discussed in this paper, and a new asymmetric criterion is then proposed to assess the association between a cluster and a partition, called the Alizadeh-Parvin-Minaei (APM) criterion. The APM criterion compensates for the drawback of the common Normalized Mutual Information (NMI) measure. We then employ this criterion to select the most robust clusters for the final ensemble. We also propose a new method, named Extended Evidence Accumulation Clustering (EEAC), to construct the similarity matrix from these selected clusters. Finally, we apply a hierarchical method over the obtained matrix to extract the final partition. The empirical studies show that the proposed method outperforms other ones.
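
For reference, the symmetric NMI measure whose drawback motivates the asymmetric APM criterion is sketched below from its definition, NMI = 2·I(A;B) / (H(A) + H(B)).

```python
# Normalized Mutual Information between two labelings, computed directly from
# its definition: NMI = 2 * I(A;B) / (H(A) + H(B)).
import numpy as np

def nmi(labels_a, labels_b):
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    ua, ub = np.unique(a), np.unique(b)
    pa = np.array([np.mean(a == i) for i in ua])
    pb = np.array([np.mean(b == j) for j in ub])
    h_a = -np.sum(pa * np.log(pa))            # entropy of partition A
    h_b = -np.sum(pb * np.log(pb))            # entropy of partition B
    mi = 0.0
    for i, p_i in zip(ua, pa):
        for j, p_j in zip(ub, pb):
            p_ij = np.mean((a == i) & (b == j))
            if p_ij > 0:
                mi += p_ij * np.log(p_ij / (p_i * p_j))
    return 2.0 * mi / (h_a + h_b)

print(nmi([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0: same partition up to relabeling
```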

Hosein Alizadeh, Behrouz Minaei-Bidgoli, Hamid Parvin

Semi-supervised Classification by Probabilistic Relaxation

In this paper, a semi-supervised approach based on probabilistic relaxation theory is presented. It combines two desirable properties; firstly, a very small number of labelled samples is needed and, secondly, the assignment of labels is consistently performed according to our contextual information constraints. The proposed technique has been successfully applied to pattern recognition problems, obtaining promising preliminary results in database classification and image segmentation. Our methodology has also been evaluated against a recent state-of-the-art algorithm for semi-supervised learning, obtaining generally comparable or better results.

Adolfo Martínez-Usó, Filiberto Pla, José Martínez Sotoca, Henry Anaya-Sánchez

Identification of the Root Canal from Dental Micro-CT Records

This paper presents a novel semi-automated image processing procedure dedicated to the identification and characterization of the dental root canal, based on high-resolution micro-CT records. After the necessary image enhancement, parallel slices are individually segmented via histogram based quick fuzzy c-means clustering. The 3D model of root canal is built up from the segmented cross sections using the reconstruction of the inner surface, and the medial line is extracted by a 3D curve skeletonization algorithm. The central line of the root canal can finally be approximated as a 3D spline curve. The proposed procedure may support the planning of several kinds of endodontic interventions.

László Szilágyi, Csaba Dobó-Nagy, Balázs Benyó

Semi-supervised Constrained Clustering with Cluster Outlier Filtering

Constrained clustering addresses the problem of creating minimum variance clusters with the added complexity that there is a set of constraints that must be fulfilled by the elements in the cluster. Research in this area has focused on "must-link" and "cannot-link" constraints, in which pairs of elements must be in the same or in different clusters, respectively. In this work we present a heuristic procedure to perform clustering into two classes when the restrictions affect all the elements of the two clusters in such a way that they depend on the elements present in each cluster. This problem is highly susceptible to outliers in each cluster (extreme values that create infeasible solutions), so the procedure eliminates elements with extreme values in both clusters while achieving adequate performance measures at the same time. The experiments performed on a company database reveal a great deal of information, with results that are more readily interpretable than those of classical k-means clustering.

Cristián Bravo, Richard Weber

Pattern Recognition and Classification

New Results on Minimum Error Entropy Decision Trees

We present new results on the performance of Minimum Error Entropy (MEE) decision trees, which use a novel node split criterion. The results were obtained in a comparative study with popular alternative algorithms on 42 real-world datasets. Careful validation and statistical methods were used. The evidence gathered from this body of results shows that the error performance of MEE trees compares well with that of alternative algorithms. An important aspect to emphasize is that MEE trees generalize better on average without sacrificing error performance.

J. P. Marques de Sá, Raquel Sebastião, João Gama, Tânia Fontes

Section-Wise Similarities for Classification of Subjective-Data on Time Series

The aim of this paper is to present a novel methodology to develop similarity measures for the classification of time series. First, a linear segmentation algorithm to obtain a section-wise representation of the series is presented. Then, two similarity measures are defined from the differences between the behavior of the series and the level of the series, respectively. The method is applied to subjective data on time series generated through evaluations of driving risk by a group of traffic safety experts. These series are classified using the proposed similarities as kernels for the training of a Support Vector Machine. The results are compared with other classifiers using our similarities, their linear combination, and the raw data. The proposed methodology has been successfully evaluated on several databases.

Isaac Martín de Diego, Oscar S. Siordia, Cristina Conde, Enrique Cabello

Some Imputation Algorithms for Restoration of Missing Data

The problem of reconstructing the feature values in samples of objects given in terms of numerical features is considered. Three approaches, which do not involve the use of probability models or a priori information, are considered. The first approach is based on organizing an iterative procedure for the successive refinement of the missing attribute values; in this case, an analysis of local information for each object with missing data is carried out. The second approach is based on solving an optimization problem: we calculate those previously unknown feature values for which there is maximum correspondence between the metric relations among objects in the subspaces of known partial values and in the completed full descriptions. The third approach is based on solving a series of recognition tasks for each missing value. Comparisons of these approaches on simulated and real problems are presented.

Vladimir Ryazanov

A Scalable Heuristic Classifier for Huge Datasets: A Theoretical Approach

This paper proposes a heuristic classifier ensemble to improve the performance of learning in multiclass problems. Although a more accurate classifier leads to better performance, an alternative approach is to use many inaccurate classifiers, each specialized for a small part of the problem space, and to use their consensus vote as the classification. In this paper, several ensembles of classifiers are first created. The classifiers in each ensemble work jointly using weighted majority votes, and the results of these ensembles are combined to decide the final vote in a weighted manner. Finally, the outputs of these ensembles are heuristically aggregated. The proposed framework is evaluated on a very large scale Persian handwritten digit dataset, and the experimental results show the effectiveness of the algorithm.

Hamid Parvin, Behrouz Minaei-Bidgoli, Sajad Parvin

Improving Persian Text Classification Using Persian Thesaurus

This paper proposes an innovative approach to improve the performance of Persian text classification. The proposed method uses a thesaurus as a source of helpful knowledge to obtain the real frequencies of words in the corpus. Three types of relationships are considered in our thesaurus. This is the first attempt to use a Persian thesaurus in the field of Persian information retrieval. Experimental results show a significant improvement when employing the Persian thesaurus rather than common methods.

Hamid Parvin, Behrouz Minaei-Bidgoli, Atousa Dahbashi

An Accumulative Points/Votes Based Approach for Feature Selection

This paper proposes an ensemble based approach for feature selection. We aim at overcoming the problem of parameter sensitivity of feature selection approaches, and to do this we employ an ensemble method. Our algorithm automatically obtains results for the different possible threshold values; for each threshold value, we get a subset of features, and we give a score to each feature in these subsets. Finally, by use of the ensemble method, we select the features with the highest scores. The method is thus not parameter sensitive, and it has been shown that basing it on fuzzy entropy results in more reliably selected features than previous methods. Empirical results show that, while the efficacy of the method is not considerably decreased in most cases, the method becomes free of any parameter setting.

Hamid Parvin, Behrouz Minaei-Bidgoli, Sajad Parvin

Sentiment-Preserving Reduction for Social Media Analysis

In this paper, we address the problem of opinion analysis using a probabilistic approach to the underlying structure of different types of opinions or sentiments around a certain object. In our approach, an opinion is partitioned according to whether there is direct relevance to a latent topic or sentiment. Opinions are then expressed as a mixture of sentiment-related parameters, and the noise is regarded as data stream errors or spam. We propose an entropy-based approach using a value-weighted matrix for word relevance matching, which is also used to compute document scores. By using a bootstrap technique with sampling proportions given by the word scores, we show that a lower dimensionality matrix can be achieved. The resulting noise-reduced data is regarded as a sentiment-preserving reduction layer, where terms of direct relevance to the initial parameter values are stored.

Sergio Hernández, Philip Sallis

A Minority Class Feature Selection Method

In many classification problems, and in particular in medical domains, it is common to have an unbalanced class distribution. This poses problems for classifiers as they tend to perform poorly on the minority class, which is often the class of interest. One commonly used strategy to improve the classification performance is to select a subset of relevant features. Feature selection algorithms, however, have not been designed to favour the classification performance of the minority class. In this paper, we present a novel filter feature selection algorithm, called FSMC, for unbalanced data sets. FSMC selects attributes whose minority class distributions are significantly different from their majority class distributions. FSMC is fast and simple, selects a small number of features, and in most cases outperforms other feature selection algorithms in terms of global accuracy and in terms of performance measures for the minority class such as precision, recall, F-measure and ROC values.
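
To make the filtering idea concrete, the sketch below scores each feature by how significantly its minority-class distribution differs from the majority-class one, here using Welch's t statistic. This is only an illustration in the spirit of such filters, not the authors' FSMC test.

```python
# Illustrative minority-class feature filter (NOT the authors' FSMC): rank
# features by how strongly the minority-class distribution differs from the
# majority-class one, using the absolute Welch t statistic per feature.
import numpy as np
from scipy import stats

def minority_feature_scores(X, y, minority_label):
    minority = X[y == minority_label]
    majority = X[y != minority_label]
    t, _ = stats.ttest_ind(minority, majority, equal_var=False)  # per column
    return np.abs(t)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = np.r_[np.zeros(180, int), np.ones(20, int)]   # 10% minority class
X[y == 1, 2] += 2.0                               # feature 2 separates the minority
scores = minority_feature_scores(X, y, minority_label=1)
print(scores.argsort()[::-1][:3])                 # feature 2 ranks first
```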

German Cuaya, Angélica Muñoz-Meléndez, Eduardo F. Morales

Dissimilarity-Based Classifications in Eigenspaces

This paper presents an empirical evaluation of a dissimilarity measure strategy by which dissimilarity-based classifications (DBCs) [10] can be efficiently implemented. In DBCs, classifiers are not based on the feature measurements of individual objects, but rather on a suitable dissimilarity measure among the objects. In image classification tasks, however, one of the most intractable problems in measuring the dissimilarity is the distortion and lack of information caused by differences in illumination and direction, as well as outlier data. To overcome this problem, we study a new way of performing DBCs in eigenspaces spanned, one for each class, by the subset of principal eigenvectors extracted from the training data set through a principal component analysis. Our experimental results, obtained with well-known benchmark databases, demonstrate that when the dimensionality of the eigenspaces is appropriately chosen, the DBCs can be improved in terms of classification accuracy.

Sang-Woon Kim, Robert P. W. Duin

Dynamic Signature Recognition Based on Fisher Discriminant

Biometric technologies are the primary tools for certifying the identity of individuals, but the cost of sensing hardware plus the degree of physical invasion required to obtain reasonable success are considered major drawbacks. Nevertheless, the signature is generally accepted as one means of identification. We present an approach to signature recognition that uses face recognition algorithms to obtain class descriptors and then a simple classifier to recognize signatures. We also present an algorithm to store the writing direction of a signature, applying a linear transformation to encode this data as a gray scale pattern in the image. The signatures are processed applying Principal Component Analysis and Linear Discriminant Analysis, creating descriptors that can be identified using a KNN classifier. Results revealed an accuracy rate of 97.47% under cross-validation over binary images, improved to 98.60% by encoding simulated dynamic parameters. The encoding of real dynamic data boosted the performance rate from 90.21% to 94.70%, showing that this technique can be a serious contender to other signature recognition methods.
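
The PCA-then-LDA-then-KNN pipeline named above is a standard appearance-based recipe; the sketch below assembles it with scikit-learn on a stand-in image dataset (component counts are illustrative, and the digits data merely substitutes for binarized signature images).

```python
# Sketch of the appearance-based pipeline: PCA to compress the images, LDA
# for class separation, and a KNN classifier on the resulting descriptors.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)       # stand-in for binarized signatures
pipeline = make_pipeline(PCA(n_components=40),
                         LinearDiscriminantAnalysis(n_components=9),
                         KNeighborsClassifier(n_neighbors=3))
print(cross_val_score(pipeline, X, y, cv=5).mean())
```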

Teodoro Schmidt, Vladimir Riffo, Domingo Mery

A Multi-style License Plate Recognition System Based on Tree of Shapes for Character Segmentation

The aim of this work is to develop a multi-style license plate recognition (LPR) system. Most of the LPR systems are country-dependent and take advantage of it. Here, a new character extraction algorithm is proposed, based on the tree of shapes of the image. This method is well adapted to work with different styles of license plates, does not require skew or rotation correction and is parameterless. Also, it has invariance under changes in scale, contrast, or affine changes in illumination. We tested our LPR system on two different datasets and achieved high performance rates: above 90 % in license plate detection and character recognition steps, and up to 98.17 % in the character segmentation step.

Francisco Gómez Fernández, Pablo Negri, Marta Mejail, Julio Jacobo

Feature and Dissimilarity Representations for the Sound-Based Recognition of Bird Species

Pattern recognition and digital signal processing techniques allow the design of automated systems for avian monitoring. They are a non-intrusive and cost-effective way to perform surveys of bird populations and assessments of biological diversity. In this study, a number of representation approaches for bird sounds are compared, namely feature and dissimilarity representations. In order to take into account the non-stationary nature of the audio signals and to build robust dissimilarity representations, the application of the Earth Mover's Distance (EMD) to time-varying measurements is proposed. Measures of the leave-one-out 1-NN performance are used as comparison criteria. Results show that, overall, the Mel-cepstrum coefficients are the best alternative, especially when computed by frames and used in combination with EMD to generate dissimilarity representations.

José Francisco Ruiz-Muñoz, Mauricio Orozco-Alzate, César Germán Castellanos-Domínguez

Environmental Sounds Classification Based on Visual Features

This paper presents a method aimed at classifying environmental sounds in the visual domain, using scale and translation invariance. We present a new approach that extracts visual features from sound spectrograms, and we apply support vector machines (SVMs) to address the sound classification. In the proposed method we treat sound spectrograms as texture images and extract the time-frequency structures by using a translation-invariant wavelet transform and a patch transform, alternated with local and global maxima, to pursue scale and translation invariance. We illustrate the performance of this method on an audio database composed of 10 sound classes. The obtained recognition rate is 91.82% with the One-Against-One multiclass decomposition method.

Sameh Souli, Zied Lachiri

Quaternion Correlation Filters for Illumination Invariant Face Recognition

Illumination variation is one of the factors that cause the degradation of face recognition performance. Representing face image features using the structure of quaternion numbers is a novel way to alleviate illumination effects on face images. In this paper a comparison of different quaternion representations, based on verification and identification experiments, is presented. Four different face feature approaches are used to construct the quaternion representations. A quaternion correlation filter is used as the similarity measure, allowing all the information encapsulated in the quaternion components to be processed together. The experimental results confirm that using quaternion algebra together with existing face recognition techniques yields more discriminative and illumination-invariant methods.

Dayron Rizo-Rodriguez, Heydi Méndez-Vázquez, Edel García, César San Martín, Pablo Meza

Language Modelization and Categorization for Voice-Activated QA

Interest in incorporating voice interfaces into Question Answering systems has increased in recent years. In this work, we present an approach to the Automatic Speech Recognition component of a Voice-Activated Question Answering system, focusing on building a language model able to include as many relevant words from the document repository as possible while also representing the general syntactic structure of typical questions. We have applied this technique to the recognition of questions from the CLEF QA 2003-2006 contests.

Joan Pastor, Lluís-F. Hurtado, Encarna Segarra, Emilio Sanchis

Applications of Pattern Recognition

On the Computation of the Geodesic Distance with an Application to Dimensionality Reduction in a Neuro-Oncology Problem

Manifold learning models attempt to parsimoniously describe multivariate data through a low-dimensional manifold embedded in data space. Similarities between points along this manifold are often expressed as Euclidean distances, although previous research has shown that they are better expressed as geodesic distances. Some problems concerning the computation of geodesic distances along the manifold have to do with time and storage restrictions related to the graph representation of the manifold. This paper provides different approaches to the computation of the geodesic distance and to the implementation of Dijkstra’s shortest-path algorithm, comparing their performance. The optimized procedures are bundled into a software module that is embedded in a dimensionality reduction method, which is applied to MRS data from human brain tumours. The experimental results show that the proposed implementation explains a high proportion of the data variance with a very small number of extracted features, which should ease the medical interpretation of subsequent results obtained from the reduced datasets.

Raúl Cruz-Barbosa, David Bautista-Villavicencio, Alfredo Vellido
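
The core computation can be sketched in a few lines: build a k-nearest-neighbour graph over the data and run Dijkstra’s algorithm on it, so that shortest-path lengths approximate geodesic distances along the manifold. The toy curve below is a stand-in for the MRS data, and the graph parameters are illustrative only.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import dijkstra

# Sample points from a noisy sine curve embedded in the plane.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 3 * np.pi, 200))
X = np.column_stack([t, np.sin(t)]) + 0.05 * rng.standard_normal((200, 2))

# Build a k-nearest-neighbour graph weighted by Euclidean distance, then
# approximate geodesic distances with shortest paths (Dijkstra).
graph = kneighbors_graph(X, n_neighbors=8, mode="distance")
geodesic = dijkstra(graph, directed=False)

print("Euclidean endpoint distance:", np.linalg.norm(X[0] - X[-1]))
print("geodesic endpoint distance: ", geodesic[0, -1])
```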

Multimodal Schizophrenia Detection by Multiclassification Analysis

We propose a multiclassification analysis to evaluate the relevance of different factors in schizophrenia detection. Several Magnetic Resonance Imaging (MRI) scans of brains are acquired from two sensors: morphological and diffusion MRI. Moreover, 14 Regions Of Interest (ROIs) are available to focus the analysis on specific brain subparts. All information is combined to train three types of classifiers to distinguish between healthy and unhealthy subjects. Our contribution is threefold: (i) the classification accuracy improves when multiple factors are taken into account; (ii) the proposed procedure allows the selection of a reduced subset of ROIs and highlights the synergy between the two modalities; (iii) a correlation analysis is performed for every ROI and modality to measure the information overlap, using the correlation coefficient, in the context of schizophrenia classification. We achieve 85.96% accuracy when we combine classifiers from both modalities, whereas the highest performance of a single modality is 78.95%.

Aydın Ulaş, Umberto Castellani, Pasquale Mirtuono, Manuele Bicego, Vittorio Murino, Stefania Cerruti, Marcella Bellani, Manfredo Atzori, Gianluca Rambaldelli, Michele Tansella, Paolo Brambilla

Online Signature Verification Method Based on the Acceleration Signals of Handwriting Samples

Here we present a method for online signature verification treated as a two-class pattern recognition problem. The method is based on the acceleration signals obtained from signing sessions using a special pen device. We applied dynamic time warping (DTW) to measure the dissimilarity between the acceleration signals and represented our results in terms of a distance metric.

Horst Bunke, János Csirik, Zoltán Gingl, Erika Griechisch
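
A minimal DTW sketch using the textbook dynamic-programming recurrence (not necessarily the authors’ exact variant), applied to two synthetic acceleration-like signals, one of which is locally time-warped:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance
    between two 1-D signals, with the usual three step moves."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two signals with the same shape, one locally time-warped.
t = np.linspace(0, 1, 120)
ref = np.sin(2 * np.pi * 3 * t)
warped = np.sin(2 * np.pi * 3 * t**1.3)
print("DTW(ref, warped):", dtw_distance(ref, warped))
print("DTW(ref, ref):   ", dtw_distance(ref, ref))
```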

Dynamic Zoning Selection for Handwritten Character Recognition

This paper presents a two-level character recognition method in which the dynamic selection of the most promising zoning scheme for feature extraction yields interesting results for character recognition. The first level consists of a conventional neural network and a look-up table used to suggest the best zoning scheme for a given unknown character. The information provided by the first level drives the second level in the selection of the appropriate feature extraction method and the corresponding class-modular neural network. The experimental protocol has shown significant recognition rates for handwritten characters (from 80.82% to 88.13%).

Luciane Y. Hirabara, Simone B. K. Aires, Cinthia O. A. Freitas, Alceu S. Britto, Robert Sabourin

Forecasting Cash Demand in ATM Using Neural Networks and Least Square Support Vector Machine

In this work we forecast daily ATM cash demand using dynamic models of the Nonlinear Autoregressive with Exogenous Inputs (NARX) and Nonlinear Autoregressive Moving Average with Exogenous Inputs (NARMAX) types, implemented with Neural Networks (NN) and the Least Square Support Vector Machine (LS-SVM) and used for one-step-ahead (OSA) or multi-step (MPO) prediction. The aim is to compare which model performs best. We found that the Multilayer Perceptron NN presented the best index of agreement, with an average of 0.87 in NARX-OSA and 0.85 in NARX-MPO. Next, the Radial Basis Function NN reached 0.82 in both cases. Finally, LS-SVM obtained the worst results, with 0.78 for NARX-OSA and 0.70 for NARX-MPO. No significant differences between the NARX and NARMAX structures were found. Our contribution would have obtained the 2nd place in the NN5 competition of computational methods.

Cristián Ramírez, Gonzalo Acuña
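
A minimal one-step-ahead NARX-style sketch, assuming a synthetic daily series with weekly seasonality in place of real ATM data, a single numeric exogenous input, and an MLP regressor over lagged outputs and inputs:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged_matrix(y, u, lags=7):
    """Design matrix of past outputs y and past exogenous inputs u (NARX style)."""
    rows = [np.concatenate([y[t - lags:t], u[t - lags:t]])
            for t in range(lags, len(y))]
    return np.array(rows), y[lags:]

# Toy daily series: weekly seasonality plus noise; the exogenous input is a
# simple day-of-week indicator.
rng = np.random.default_rng(3)
n = 400
u = (np.arange(n) % 7).astype(float)
y = 100 + 20 * np.sin(2 * np.pi * u / 7) + 5 * rng.standard_normal(n)

X, target = lagged_matrix(y, u, lags=7)
split = 300
mu, sigma = X[:split].mean(axis=0), X[:split].std(axis=0) + 1e-9
Xs = (X - mu) / sigma                           # standardize for the MLP

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(Xs[:split], target[:split])
pred = model.predict(Xs[split:])
print("one-step-ahead RMSE:", np.sqrt(np.mean((pred - target[split:]) ** 2)))
```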

Deep Learning Networks for Off-Line Handwritten Signature Recognition

Reliable identification and verification of off-line handwritten signatures from images is a difficult problem with many practical applications. It is a hard vision problem within the field of biometrics because a signature may change depending on psychological factors of the individual. Motivated by advances in brain science describing how objects are represented in the visual cortex, deep neural networks have been shown to work reliably on large image data sets. In this paper, we present a deep learning model for off-line handwritten signature recognition which is able to extract high-level representations. We also propose a two-step hybrid model for signature identification and verification that improves the misclassification rate on the well-known GPDS database.

Bernardete Ribeiro, Ivo Gonçalves, Sérgio Santos, Alexander Kovacec

A Study on Automatic Methods Based on Mathematical Morphology for Martian Dust Devil Tracks Detection

This paper presents three methods for the automatic detection of dust devil tracks in images of Mars. The methods are mainly based on Mathematical Morphology, and their performance is analyzed and compared. A dataset of 21 images from the surface of Mars, representative of the diversity of those track features, was used for developing, testing and evaluating our methods, comparing their outputs with manually produced ground-truth images. Methods 1 and 3, based on the closing top-hat and the path-closing top-hat, respectively, showed similar mean accuracies of around 90%, but the processing time was much greater for method 1 than for method 3. Method 2, based on radial closing, was the fastest but showed worse mean accuracy; processing time was thus the tiebreaking factor.

Thiago Statella, Pedro Pina, Erivaldo Antônio da Silva

An Ensemble Method for Incremental Classification in Stationary and Non-stationary Environments

We present a model for incremental classification based on an ensemble of base classifiers combined using weighted majority voting. Defining such voting weights becomes even more critical in non-stationary environments, where the patterns underlying the observations change over time. Given an instance to classify, we propose to define each voting weight as a function that takes into account the location of the instance in the different class-specific feature spaces, the prior probability of those classes given the knowledge represented by the classifier, and the classifier’s overall performance in learning its training examples. This approach can improve the generalization performance and the ability to control the stability/plasticity tradeoff in stationary and non-stationary environments. Experiments were carried out on several real classification problems previously used to test incremental algorithms in stationary as well as non-stationary environments.

Ricardo Ñanculef, Erick López, Héctor Allende, Héctor Allende-Cid
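
The combination rule itself is compact; a minimal sketch with hypothetical fixed weights (in the paper each weight is an instance- and class-dependent function):

```python
import numpy as np

def weighted_majority_vote(predictions, weights, n_classes):
    """Combine hard predictions from an ensemble: each base classifier casts
    a vote for its predicted class, scaled by its (possibly instance-specific)
    weight; the class with the largest total wins."""
    scores = np.zeros(n_classes)
    for pred, w in zip(predictions, weights):
        scores[pred] += w
    return int(scores.argmax())

# Three base classifiers disagree on an instance; the weights settle the vote.
preds = [0, 1, 1]
weights = [0.9, 0.4, 0.3]      # one strong classifier outweighs two weak ones
print("combined decision:", weighted_majority_vote(preds, weights, n_classes=2))
```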

Teaching a Robot to Perform Tasks through Imitation and On-line Feedback

Service robots are becoming increasingly available, and it is expected that they will be part of many human activities in the near future. It is desirable for these robots to adapt themselves to the user’s needs, so non-expert users will have to teach them how to perform new tasks in natural ways. In this paper, a new teaching-by-demonstration algorithm is described. It uses a Kinect® sensor to track the movements of a user, eliminating the need for special sensors or environment conditions; it represents tasks relationally, which eases the correspondence problem between the user and the robot arm and allows tasks to be learned in a more general form; it uses reinforcement learning to improve over the initial sequences provided by the user; and it incorporates on-line feedback from the user during the learning process, creating a novel dynamic reward-shaping mechanism that converges faster to an optimal policy. We demonstrate the approach by learning simple manipulation tasks with a robot arm and show its superiority over more traditional reinforcement learning algorithms.

Adrián León, Eduardo F. Morales, Leopoldo Altamirano, Jaime R. Ruiz

Improvements on Automatic Speech Segmentation at the Phonetic Level

In this paper, we present some recent improvements to our automatic speech segmentation system, which needs only the speech signal and the phonetic sequence of each sentence of a corpus to be trained. It estimates a GMM using all the sentences of the training subcorpus, where each Gaussian distribution represents an acoustic class whose probability densities are combined with a set of conditional probabilities in order to estimate the probability densities of the states of each phonetic unit. The initial values of the conditional probabilities are obtained from a segmentation of each sentence that assigns the same number of frames to each phonetic unit. A DTW algorithm fixes the phonetic boundaries using the known phonetic sequence. This DTW step is part of an iterative process which aims to segment the corpus and re-estimate the conditional probabilities. The results presented here demonstrate that the system has a good capacity to learn how to identify phonetic boundaries.

Jon Ander Gómez, Marcos Calvo

An Active Learning Approach for Statistical Spoken Language Understanding

In general, a large amount of segmented and labeled data is needed to estimate statistical language understanding systems. In recent years, different approaches have been proposed to reduce the segmentation and labeling effort by means of unsupervised or semi-supervised learning techniques. We propose an active learning approach to the estimation of statistical language understanding models that involves the transcription, labeling and segmentation of a small amount of data, along with the use of raw data. We use this approach to learn the understanding component of a Spoken Dialog System. Some experiments that show the appropriateness of our approach are also presented.

Fernando García, Lluís-F. Hurtado, Emilio Sanchis, Encarna Segarra

Virus Texture Analysis Using Local Binary Patterns and Radial Density Profiles

We investigate the discriminant power of two local and two global texture measures on virus images. The viruses are imaged using negative-stain transmission electron microscopy. Local binary patterns and a multi-scale extension are compared to radial density profiles in the spatial domain and in the Fourier domain. To assess the discriminant potential of the texture measures, a Random Forest classifier is used. Our analysis shows that the multi-scale extension performs better than the standard local binary patterns, and that radial density profiles are, in comparison, a rather poor virus texture discriminating measure. Furthermore, we show that the multi-scale extension and the profiles in the Fourier domain are both good texture measures and that they complement each other well; that is, they seem to detect different texture properties. Combining the two hence improves the discrimination between virus textures.

Gustaf Kylberg, Mats Uppström, Ida-Maria Sintorn
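
A minimal sketch of the local branch of this comparison, assuming synthetic textures in place of the virus micrographs: histograms of uniform LBP codes fed to a Random Forest.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(image, P=8, R=1):
    """Histogram of uniform LBP codes, a compact texture descriptor."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Toy textures: correlated noise vs. stripes standing in for two classes.
rng = np.random.default_rng(4)
X, y = [], []
for _ in range(20):
    smooth = rng.normal(size=(64, 64)).cumsum(axis=1)
    stripes = np.sin(np.arange(64) / 2.0)[None, :] + 0.3 * rng.normal(size=(64, 64))
    X += [lbp_histogram(smooth), lbp_histogram(stripes)]
    y += [0, 1]
X, y = np.array(X), np.array(y)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```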

A Markov Random Field Model for Combining Optimum-Path Forest Classifiers Using Decision Graphs and Game Strategy Approach

Research on multiple classifier systems includes the creation of an ensemble of classifiers and the proper combination of their decisions. To combine the decisions given by classifiers, methods based on fixed rules and decision templates are often used; as a consequence, the influence of and relationships between classifier decisions are often not considered in the combination scheme. In this paper we propose a framework that combines classifiers using a decision graph under a random field model and a game strategy approach to obtain the final decision. The results of combining Optimum-Path Forest (OPF) classifiers with the proposed model are reported, showing good performance in experiments using simulated and real data sets. The results encourage both the combination of OPF ensembles and the use of the framework to design multiple classifier systems.

Moacir P. Ponti, João Paulo Papa, Alexandre L. M. Levada

Selected Topics of Chilean Workshop on Pattern Recognition

A New Approach for Wet Blue Leather Defect Segmentation

In the processing plants where cattle hides are processed, leather classification is done manually: an expert visually inspects each leather sheet and classifies it based on, among other factors, the different types of defects found on the surface. In this study, an automatic method for defect classification of Wet Blue leather is proposed. A considerable number of descriptors are computed from the grayscale image and from the RGB and HSV color models. Features were chosen with the Sequential Forward Selection method, which allows a strong reduction in the number of descriptors. Finally, the classification is implemented using a supervised Neural Network. The problem formulation is adequate, allowing a high success rate and yielding a method with a wide range of implementation possibilities.

Patricio Villar, Marco Mora, Paulo Gonzalez

Objective Comparison of Contour Detection in Noisy Images

The constant appearance of new contour detection methods makes it necessary to have accurate ways of assessing their performance. This paper proposes an evaluation method for contour detectors on noisy images. The method considers the computation of the optimal threshold, i.e., the one that produces the closest approximation to the ground truth, together with the effect produced by the noise. These two dimensions of analysis allow objective comparisons of the performance of contour detectors.

Rodrigo Pavez, Marco Mora, Paulo Gonzalez
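
A minimal sketch of the optimal-threshold computation, assuming a pixel-wise F-measure as the agreement score with the ground truth (the paper’s exact criterion may differ):

```python
import numpy as np

def f_measure(detection, ground_truth):
    """Pixel-wise F-measure between a binary detection map and ground truth."""
    tp = np.logical_and(detection, ground_truth).sum()
    fp = np.logical_and(detection, ~ground_truth).sum()
    fn = np.logical_and(~detection, ground_truth).sum()
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(soft_map, ground_truth, n_steps=100):
    """Sweep thresholds over a soft contour map and keep the one whose
    binarization best matches the ground truth."""
    scores = [(f_measure(soft_map >= t, ground_truth), t)
              for t in np.linspace(0, 1, n_steps)]
    return max(scores)          # (best F-measure, optimal threshold)

# Noisy soft contour map around a ground-truth square outline.
rng = np.random.default_rng(5)
gt = np.zeros((64, 64), dtype=bool)
gt[16:48, 16] = gt[16:48, 47] = gt[16, 16:48] = gt[47, 16:48] = True
soft = 0.8 * gt + 0.2 * rng.uniform(size=gt.shape)
print("best F-measure, threshold:", best_threshold(soft, gt))
```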

Automatic Search of Nursing Diagnoses

Nursing documentation is all the information that nurses register regarding the clinical assessment and care of a patient. Currently, these records are written manually in a narrative style; consequently, their quality and completeness largely depend on the nurse’s expertise. This paper presents an algorithm based on standardized nursing language that searches for nursing diagnoses and sorts them by relevance in a ranking. Diagnosis identification is performed by searching for and matching patterns between a set of patient needs or symptoms and the international NANDA standard of nursing diagnoses. Three sorting methods were evaluated using six use cases. The results suggest that TF-IDF (83.43% accuracy) and assignment of weights by hit (80.73% accuracy) are the two best alternatives for implementing the ranking of diagnoses.

Matías A. Morales, Rosa L. Figueroa, Jael E. Cabrera
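
The TF-IDF ranking step can be sketched directly with scikit-learn; the diagnosis texts below are hypothetical stand-ins for NANDA entries.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini-catalog of diagnosis labels (stand-ins for NANDA entries).
diagnoses = [
    "acute pain related to surgical incision",
    "impaired gas exchange related to alveolar damage",
    "risk of infection related to invasive procedure",
    "anxiety related to hospitalization",
]
symptoms = "patient reports pain at the incision site after surgery"

# Rank diagnoses by cosine similarity between TF-IDF vectors of the
# symptom description and each diagnosis text.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(diagnoses + [symptoms])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for score, dx in sorted(zip(scores, diagnoses), reverse=True):
    print(f"{score:.3f}  {dx}")
```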

“De-Ghosting” Artifact in Scene-Based Nonuniformity Correction of Infrared Image Sequences

In this paper we present a new technique, based on the constant statistics (CS) method, to improve convergence and reduce ghosting artifacts. We propose to reduce ghosting artifacts and speed up convergence by enhancing the constant statistics method with a motion threshold. The key advantage of the method is its capacity to estimate the detector parameters and then compensate for fixed-pattern noise on a frame-by-frame basis. The ability of the method to compensate for nonuniformity while reducing ghosting artifacts is demonstrated on simulated video sequences and on several infrared video sequences obtained with two infrared cameras.

Anselmo Jara, Flavio Torres
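
A minimal sketch of constant-statistics correction with a motion gate, assuming a global mean-absolute-difference motion measure (the paper’s threshold may be defined differently): per-pixel running statistics estimate the fixed pattern, and they are updated only on frames with enough motion, which is what suppresses ghosting from static scene content.

```python
import numpy as np

def cs_nuc(frames, motion_threshold=0.05, eps=1e-6):
    """Constant-statistics nonuniformity correction with a motion gate."""
    mean = frames[0].astype(float)
    var = np.ones_like(mean)
    prev = frames[0].astype(float)
    corrected, n = [], 1
    for frame in frames[1:]:
        frame = frame.astype(float)
        motion = np.abs(frame - prev).mean()
        if motion > motion_threshold:        # update statistics only on motion
            n += 1
            delta = frame - mean
            mean += delta / n
            var += (delta * (frame - mean) - var) / n   # Welford-style update
        corrected.append((frame - mean) / np.sqrt(var + eps))
        prev = frame
    return corrected

# Simulated sequence: moving scene plus a fixed per-pixel offset pattern.
rng = np.random.default_rng(6)
offset = rng.normal(scale=0.5, size=(32, 32))           # fixed-pattern noise
frames = [np.roll(rng.uniform(size=(32, 32)), k, axis=1) + offset
          for k in range(50)]
out = cs_nuc(frames)
print("residual frame std:", np.std(out[-1]))
```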

Reliable Atrial Activity Extraction from ECG Atrial Fibrillation Signals

Atrial fibrillation (AF) is the most common arrhythmia encountered in clinical research, with a prevalence of 0.4% to 1% of the population. The study of AF is therefore an important research field that can provide great treatment improvements. In this paper we apply independent component analysis to a 12-lead electrocardiogram, obtaining a set of 12 sources. We apply to this set three different atrial activity (AA) selection methods, based on kurtosis, on the correlation of the sources with lead V1, and on spectral analysis. We then propose a reliable AA extraction based on the consensus of the three methods, in order to reduce the effect of anatomical and physiological variabilities. The extracted AA signal will be used in a future stage for AF classification.

Felipe Donoso, Eduardo Lecannelier, Esteban Pino, Alejandro Rojas
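
The kurtosis-based selection step can be sketched as follows, assuming a synthetic three-source mixture in place of the 12-lead ECG; the sawtooth-like atrial surrogate has the lowest (sub-Gaussian) kurtosis, while QRS-like spikes are strongly super-Gaussian.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

# Synthetic mixture: a sawtooth "atrial" source, a spiky "ventricular"
# source, and noise, mixed linearly as a stand-in for multi-lead ECG.
rng = np.random.default_rng(7)
t = np.linspace(0, 10, 2000)
atrial = (t * 6) % 1.0 - 0.5                       # sawtooth-like AA surrogate
ventricular = np.where((t % 1.0) < 0.05, 5.0, 0.0) # sparse spikes (QRS-like)
noise = 0.2 * rng.standard_normal(t.size)
S = np.column_stack([atrial, ventricular, noise])
X = S @ rng.normal(size=(3, 4))                    # observed "leads"

# Separate sources with ICA, then pick the atrial candidate by kurtosis:
# the sub-Gaussian sawtooth has the lowest excess kurtosis of the three.
ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(X)
k = kurtosis(sources, axis=0)
print("excess kurtosis per source:", np.round(k, 2))
print("atrial candidate index:", int(np.argmin(k)))
```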

Gray Box Model with an SVM to Represent the Influence of PaCO2 on the Cerebral Blood Flow Autoregulation

Since the appearance of methods based on machine learning, they have been presented as an alternative to classical phenomenological modeling, and there are few initiatives that attempt to integrate the two. This paper presents a hybrid paradigm called gray box that blends a phenomenological description (a differential equation) with a Support Vector Machine (SVM) to model a relevant problem in the field of cerebral hemodynamics. The results show that with this type of paradigm it is possible to exceed the results obtained with phenomenological models, and also with models based purely on learning, while contributing to the description of the modelled phenomenon.

Max Chacón, Mariela Severino, Ronney Panerai

A New Clustering Algorithm Based on K-Means Using a Line Segment as Prototype

This work presents the development of a new clustering algorithm, based on k-means, which addresses its problems with clusters of different variances. The new algorithm uses a line segment as prototype, capturing the axis along which the cluster shows the largest variance. The line segment iteratively adjusts its length and direction as the data are classified. To perform the classification, a border region that approximately determines the limits of the cluster is built from a geometric model which depends on the central line segment. The data are then classified according to their proximity to the different border regions. The process is repeated until the parameters of all border regions associated with each cluster remain constant.

Juan Carlos Rojas Thomas
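
A minimal sketch of the two geometric ingredients, fitting a segment along a cluster’s principal axis and measuring point-to-segment distance; the paper’s full border-region construction is not reproduced here.

```python
import numpy as np

def fit_segment(points, length_scale=2.0):
    """Fit a line-segment prototype: centroid plus the principal axis,
    extended to +/- length_scale standard deviations along that axis."""
    center = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - center)
    axis = vt[0]                               # direction of largest variance
    spread = np.std((points - center) @ axis)
    return (center - length_scale * spread * axis,
            center + length_scale * spread * axis)

def dist_to_segment(p, a, b):
    """Euclidean distance from point p to the segment [a, b]."""
    ab = b - a
    s = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + s * ab))

# Elongated cluster: the segment prototype follows its major axis.
rng = np.random.default_rng(8)
pts = rng.normal(size=(200, 2)) * np.array([4.0, 0.3])
a, b = fit_segment(pts)
print("segment endpoints:", np.round(a, 2), np.round(b, 2))
print("distance of (0, 2) to prototype:",
      round(dist_to_segment(np.array([0.0, 2.0]), a, b), 3))
```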

A New Method for Olive Fruits Recognition

A model for recognizing the diameter of olives is presented. The information regarding the size of olive fruits is intended for estimating the best harvesting time for olive trees. The recognition is performed by analyzing RGB images obtained from pictures of olive trees.

C. Gabriel Gatica, S. Stanley Best, José Ceroni, Gaston Lefranc

Wavelet Autoregressive Model for Monthly Sardines Catches Forecasting Off Central Southern Chile

In this paper, we combine a multi-scale stationary wavelet decomposition technique with a linear autoregressive model for one-month-ahead forecasting of monthly sardine catches off central southern Chile. The monthly sardine catch data were collected from the database of the National Marine Fisheries Service for the period between 1 January 1964 and 30 December 2008. The proposed forecasting strategy is to decompose the raw sardine catch data set into a trend component and a residual component by using the multi-scale stationary wavelet transform. In the wavelet domain, the trend component and the residual component are each predicted independently using a linear autoregressive model; the proposed forecast is then the co-addition of the two predicted components. We find that the proposed forecasting method explains 99% of the data variance with a parsimonious model and high accuracy.

Nibaldo Rodriguez, Jose Rubio, Eleuterio Yañez

A Multi-level Thresholding-Based Method to Learn Fuzzy Membership Functions from Data Warehouse

Automatically learning fuzzy membership functions for the characterization and operation of fuzzy measures in a Data Warehouse is a problem of recent interest. This paper presents a new method to learn the membership functions of the linguistic labels of fuzzy measures from a Data Warehouse. We propose a multi-level thresholding-based method with clustering validation indices in order to obtain the optimal number of labels and the parameters of the membership functions. Validation is performed by comparing the proposal against a supervised learning approach based on clustering and genetic algorithms, including an application to answering queries in a Data Warehouse with fuzzy measures.

Dario Rojas, Carolina Zambrano, Marcela Varas, Angelica Urrutia

A Probabilistic Iterative Local Search Algorithm Applied to Full Model Selection

Currently, there is no solution to the problem of choosing preprocessing methods, feature selection algorithms and classifiers for a supervised learning problem that does not require a high runtime. In this paper we present a method for efficiently finding a combination of algorithms and parameters that effectively describes a dataset. Furthermore, we present an optimization technique, based on ParamILS, which can be used in other contexts where each evaluation of the objective function is highly time-consuming but an estimate of this function is possible. We present our algorithm and an initial validation of it on real and synthetic data. In this validation, our proposal demonstrates a significant reduction in runtime compared to ParamILS while solving problems with these characteristics.

Esteban Cortazar, Domingo Mery

Face Recognition Using TOF, LBP and SVM in Thermal Infrared Images

In this work, Local Binary Patterns (LBP), Support Vector Machines (SVM) and the Trade-off (TOF) correlation filter are evaluated in face recognition tasks using thermal infrared imagery. Infrared technology suffers from a particular kind of noise called nonuniformity, a fixed-pattern noise superimposed on the input image that degrades the quality of the scene. Nonuniformity varies very slowly over time and, in many applications, depending on the technology used, can be assumed constant for at least several hours. Additionally, additive Gaussian noise (variable over time) is generated by the associated electronics. Both kinds of noise affect the performance of classifiers in face recognition applications using infrared technology and must be considered. Comparing the performance of each method under fixed and time-varying noise leads to the conclusion that SVM is the most robust under both kinds of noise.

Ramiro Donoso Floody, César San Martín, Heydi Méndez-Vázquez

Hybrid Algorithm for Fingerprint Matching Using Delaunay Triangulation and Local Binary Patterns

This paper proposes a hybrid algorithm for fingerprint matching that uses geometric structures based on Delaunay triangles formed by the minutiae. For the minutiae triangles that are candidates for fingerprint matching, texture information is extracted from the original raw image region enclosed by the triangle using Local Binary Pattern (LBP) techniques. Preliminary results have shown that the merging technique is fairly robust for genuine fingerprint matching discrimination, thus reducing the FRR and FAR error rates and the comparison time between fingerprints in the verification and/or identification process. The experimental results show that the proposed algorithm is effective and reliable. Tests were conducted on databases BD1 and BD2 of the FVC2002 competition, obtaining EERs of 6.18% and 3.17%, respectively.

Alejandro Chau Chau, Carlos Pon Soto
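
The geometric stage can be sketched with SciPy; the minutiae coordinates below are hypothetical, and sorted side lengths stand in for whatever triangle signature the matcher actually uses before the LBP texture check.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical minutiae coordinates extracted from a fingerprint image.
rng = np.random.default_rng(9)
minutiae = rng.uniform(0, 256, size=(25, 2))

# Delaunay triangulation yields a sparse, perturbation-tolerant set of local
# triangles; sorted side lengths give a simple rotation-invariant signature
# for shortlisting candidate triangles.
tri = Delaunay(minutiae)
for simplex in tri.simplices[:3]:
    a, b, c = minutiae[simplex]
    sides = sorted([np.linalg.norm(a - b),
                    np.linalg.norm(b - c),
                    np.linalg.norm(c - a)])
    print("triangle", simplex, "sides:", np.round(sides, 1))
```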

Segmentation of Short Association Bundles in Massive Tractography Datasets Using a Multi-subject Bundle Atlas

This paper presents a method for automatic segmentation of some short association fiber bundles from massive dMRI tractography datasets. The method is based on a multi-subject bundle atlas derived from a two-level intra-subject and inter-subject clustering strategy. Each atlas bundle corresponds to one or more inter-subject clusters, presenting similar shapes. An atlas bundle is represented by the multi-subject list of the centroids of all intra-subject clusters in order to get a good sampling of the shape and localization variability. An atlas of 47 bundles is inferred from a first database of 12 brains, and used to segment the same bundles in a second database of 10 brains.

Pamela Guevara, Delphine Duclap, Cyril Poupon, Linda Marrakchi-Kacem, Josselin Houenou, Marion Leboyer, Jean-François Mangin

Classifying Execution Times in Parallel Computing Systems: A Classical Hypothesis Testing Approach

In this paper, two classifiers are derived in order to determine whether identical computer tasks have been executed at different processors. The classifiers were developed analytically following a classical hypothesis testing approach. The main assumption of this work is that the probability density functions (pdfs) of the random times taken by the processors to serve tasks are known; this assumption was fulfilled by empirically characterizing the pdfs of such random times. The performance of the classifiers developed here was assessed using traces from real processors, and is compared against heuristic classifiers, linear discriminants and non-linear discriminants, among other classifiers.

Hugo Pacheco, Jonathan Pino, Julio Santana, Pablo Ulloa, Jorge E. Pezoa
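
A minimal sketch of the underlying idea, assuming gamma-distributed service times and histogram-based pdf estimates: a log-likelihood-ratio test assigns a batch of observed execution times to the processor whose empirical pdf makes them more likely.

```python
import numpy as np

def empirical_pdf(samples, bins):
    """Histogram-based estimate of a service-time pdf on fixed bins."""
    hist, _ = np.histogram(samples, bins=bins, density=True)
    return hist + 1e-12                      # avoid log(0) in the ratio

# Training traces: execution times of the same task on two processors.
rng = np.random.default_rng(10)
times_p1 = rng.gamma(shape=4.0, scale=1.0, size=5000)   # faster processor
times_p2 = rng.gamma(shape=4.0, scale=1.5, size=5000)   # slower processor
bins = np.linspace(0, 20, 41)
pdf1, pdf2 = empirical_pdf(times_p1, bins), empirical_pdf(times_p2, bins)

def classify(observed):
    """Log-likelihood-ratio test: assign the batch of observed execution
    times to the processor whose empirical pdf makes them more likely."""
    idx = np.clip(np.digitize(observed, bins) - 1, 0, len(pdf1) - 1)
    llr = np.sum(np.log(pdf1[idx]) - np.log(pdf2[idx]))
    return 1 if llr > 0 else 2

test_batch = rng.gamma(shape=4.0, scale=1.5, size=30)   # truly from processor 2
print("classified as processor:", classify(test_batch))
```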

Backmatter
