
2004 | Book

MICAI 2004: Advances in Artificial Intelligence

Third Mexican International Conference on Artificial Intelligence, Mexico City, Mexico, April 26-30, 2004. Proceedings

Edited by: Raúl Monroy, Gustavo Arroyo-Figueroa, Luis Enrique Sucar, Humberto Sossa

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Applications

Pattern-Based Data Compression

Most modern lossless data compression techniques are based on dictionaries. If a string in the data being compressed matches a portion seen previously, that string is added to the dictionary and a reference to it is emitted every time it appears. A possible generalization of this scheme is to consider not only strings of consecutive symbols, but more general patterns with gaps between their symbols. The main problems with this approach are the complexity of pattern discovery algorithms and the complexity of selecting a good subset of patterns. In this paper we address the latter problem. We demonstrate that it is NP-complete, and we provide some preliminary results on heuristics that point toward its solution.

Categories and Subject Descriptors: E.4 [Coding and Information Theory]: data compaction and compression; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Problems; I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search: heuristic methods. General Terms: Algorithms, Theory. Additional Key Words and Phrases: Genetic algorithms, optimization, NP-hardness.

Ángel Kuri, José Galaviz
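
For context, the consecutive-string dictionary scheme that the abstract generalizes can be illustrated with a minimal LZ78-style coder. This is a sketch of the classical baseline, not the authors' gapped-pattern algorithm:

```python
def lz78_compress(data: str):
    """Minimal LZ78-style dictionary coder: each output token is
    (index of the longest previously seen phrase, next symbol)."""
    dictionary = {"": 0}
    tokens = []
    phrase = ""
    for ch in data:
        if phrase + ch in dictionary:
            phrase += ch          # extend the current match
        else:
            tokens.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                    # flush a trailing partial match
        tokens.append((dictionary[phrase[:-1]], phrase[-1]))
    return tokens


def lz78_decompress(tokens):
    """Rebuild the phrase list while decoding."""
    phrases = [""]
    out = []
    for idx, ch in tokens:
        phrase = phrases[idx] + ch
        phrases.append(phrase)
        out.append(phrase)
    return "".join(out)
```

Repetitive inputs compress because later tokens reference ever-longer dictionary phrases; the paper's generalization allows those phrases to contain gaps, which makes choosing a good phrase subset NP-complete.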
Using Simulated Annealing for Paper Cutting Optimization

This article presents the use of the Simulated Annealing algorithm to solve the waste minimization problem in roll cutting programming, in this case for paper. Client orders, which vary in weight, width, and external and internal diameter, are fully satisfied, and no additional cuts to inventory are generated unless they are explicitly specified. Once an optimal cutting program is obtained, the algorithm is applied again to minimize cutting blade movements. Several tests were performed with real data from a paper company, obtaining an average waste reduction of 30% and a 100% reduction in production to inventory compared with the previous procedure. Actual savings amount to about $5,200,000 USD over four months with 4 cutting machines.

Horacio Martínez-Alfaro, Manuel Valenzuela-Rendón
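
The core annealing loop is generic; a minimal sketch applied to a toy roll-cutting objective (first-fit packing of order widths into fixed-width rolls, with the cut sequence as the state) might look like this. The objective and neighbor move are illustrative assumptions, not the paper's formulation:

```python
import math
import random


def simulated_annealing(cost, neighbor, state, t0=10.0, cooling=0.99,
                        steps=3000, seed=1):
    """Generic annealing loop: always accept improvements, accept a worse
    candidate with probability exp(-delta / T), and cool T geometrically."""
    rng = random.Random(seed)
    cur, cur_cost = state, cost(state)
    best, best_cost = cur, cur_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        delta = cost(cand) - cur_cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cost(cand)
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        t *= cooling
    return best, best_cost


def first_fit_waste(order, widths, roll_width=10):
    """Toy cutting objective: pack order widths into rolls first-fit,
    in the sequence given by `order`; waste = unused roll width."""
    rolls = []
    for w in (widths[i] for i in order):
        for r, used in enumerate(rolls):
            if used + w <= roll_width:
                rolls[r] += w
                break
        else:
            rolls.append(w)
    return roll_width * len(rolls) - sum(widths)
```

A neighbor that swaps two random positions in the cut sequence is enough for this toy instance; the paper's second annealing pass over blade movements would use the same loop with a different cost function.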
Extracting Temporal Patterns from Time Series Data Bases for Prediction of Electrical Demand

In this paper we present a technique for predicting electrical demand based on multiple models. The multiple models are composed of several local models, each describing a region of the system's behavior called an operation regime. The multiple-models approach developed in this work is applied to predict electrical load 24 hours ahead. Electrical load data from the state of California, covering a period of approximately two years, was used as a case study. The concept of multiple models implemented in this work is also characterized by the combination of several techniques; two important ones are applied in the construction of the models: regularization and Knowledge Discovery in Databases (KDD). KDD is used to identify the operation regimes of the electrical load time series.

J. Jesus Rico Melgoza, Juan J. Flores, Constantino Sotomane, Félix Calderón
The Synergy between Classical and Soft-Computing Techniques for Time Series Prediction

A new method for extracting valuable process information from input-output data is presented in this paper, using a pseudo-Gaussian basis function neural network with regression weights. The proposed methodology produces a dynamic radial basis function network, able to modify the number of neurons within the hidden layer. Another important characteristic of the proposed neural system is that the activation of the hidden neurons is normalized, which, as described in the literature, provides better performance than non-normalized activation. The effectiveness of the method is illustrated through the development of dynamical models for a very well known benchmark, the synthetic Mackey-Glass time series.

Ignacio Rojas, Fernando Rojas, Héctor Pomares, Luis Javier Herrera, Jesús González, Olga Valenzuela
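
The normalization of hidden-neuron activations mentioned in the abstract can be sketched for a one-dimensional input with Gaussian basis functions. This is a minimal illustration of normalized RBF evaluation, not the paper's pseudo-Gaussian network with regression weights:

```python
import math


def normalized_rbf(x, centers, widths, weights):
    """Evaluate a normalized RBF network: Gaussian activations are
    divided by their sum, so the output is a weighted average of the
    consequent weights rather than a raw sum."""
    acts = [math.exp(-((x - c) ** 2) / (2.0 * s ** 2))
            for c, s in zip(centers, widths)]
    total = sum(acts)
    return sum(w * a for w, a in zip(weights, acts)) / total
```

Normalization makes the output interpolate between the weights (with one neuron the output equals its weight everywhere), which is one intuition for why normalized networks often behave better far from the centers.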

Intelligent Interfaces and Speech Processing

A Naïve Geography Analyst System with Cognitive Support of Imagery Exploitation

This paper is concerned with the development of a naïve geography analyst system, which provides analysts with image exploitation techniques based on naïve geography and commonsense spatial reasoning. In this approach, naïve geography information is acquired and represented jointly with imagery to form a cognitively oriented, interactive 3-D visualization and analysis space, and formal representations are generated by inferring a set of distributed graphical depictions representing naïve (commonsense) geographical space knowledge. The graphical representation of naïve geography information is functional in the sense that analysts can interact with it in ways that are analogous to corresponding interactions with real-world entities and settings in spatial environments.

Sung Baik, Jerzy Bala, Ali Hadjarian, Peter Pachowicz
Don’t You Escape! I’ll Tell You My Story

This paper makes two contributions to increasing the engagement of users in virtual heritage environments by adding virtual living creatures. The work is carried out in the context of models of the Mayan cities of Palenque and Calakmul. Firstly, it proposes a virtual guide who navigates a virtual world and tells stories about the locations within it, bringing its personality and role to them. Secondly, it develops an architecture for adding autonomous animals to virtual heritage. It develops an affective component for these animal agents in order to increase the realism of their flocking behaviour, and adds a mechanism for transmitting emotion between animals via virtual pheromones, modelled as particles in a free-expansion gas.

Jesús Ibáñez, Carlos Delgado-Mata, Ruth Aylett, Rocio Ruiz-Rodarte
Querying Virtual Worlds. A Fuzzy Approach

In this paper we describe a querying model that allows users to find virtual worlds and objects in these worlds, using as a base a new virtual worlds representation model and a fuzzy approach to solve the queries. The system has been developed and checked in two different kinds of worlds. Both the design and current implementation of the system are described.

Jesús Ibáñez, Antonio F. Gómez-Skarmeta, Josep Blat
Infant Cry Classification to Identify Hypoacoustics and Asphyxia with Neural Networks

This work presents the development of an automatic infant cry recognizer, with the objective of classifying three kinds of cry: normal, hypoacoustic, and asphyxia. We use acoustic feature extraction techniques such as LPC and MFCC for the acoustic processing of the cry's sound wave, and a feed-forward input-delay neural network trained with gradient descent with adaptive back-propagation. We describe the whole process, and we also show the results of some experiments, in which we obtain up to 98.67% precision.

Orion Fausto Reyes Galaviz, Carlos Alberto Reyes Garcia
N-Best List Rescoring Using Syntactic Trigrams

This paper demonstrates the usefulness of syntactic trigrams in improving the performance of a speech recognizer for the Spanish language. The technique is applied as a post-processing stage that uses syntactic information to rescore the N-best hypothesis list, in order to increase the score of the most syntactically correct hypothesis. The basic idea is to build a syntactic model from training data, capturing syntactic dependencies between adjacent words in a probabilistic way, rather than resorting to a rule-based system. Syntactic trigrams are used because of their power to express relevant statistics about the short-distance syntactic relationships between the words of a whole sentence. For this work we used a standardized tagging scheme known as the EAGLES tag definition, due to its ease of use and its broad coverage of all grammatical classes for Spanish. The relative improvement for the speech recognizer is 5.16%, which is statistically significant at the 10% level, for a task of 22,398 words (HUB-4 Spanish Broadcast News).

Luis R. Salgado-Garza, Richard M. Stern, Juan A. Nolazco F.
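
The rescoring stage described above can be sketched as a reranking that combines each hypothesis' recognizer score with a trigram log-probability. The toy scoring function and the 0.5 interpolation weight below are illustrative assumptions, not the paper's trained syntactic model:

```python
def rescore_nbest(hypotheses, trigram_logprob, weight=0.5):
    """Re-rank an N-best list. Each hypothesis is (words, recognizer_score);
    its final score adds a weighted trigram log-probability over the
    sentence, padded with start/end markers."""
    def lm_score(words):
        padded = ["<s>", "<s>"] + words + ["</s>"]
        return sum(trigram_logprob(padded[i - 2], padded[i - 1], padded[i])
                   for i in range(2, len(padded)))
    return sorted(hypotheses,
                  key=lambda h: h[1] + weight * lm_score(h[0]),
                  reverse=True)
```

With syntactic trigrams, the words would first be mapped to EAGLES-style part-of-speech tags and the trigram model applied over the tag sequence instead of the word sequence.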
Continuants Based Neural Speaker Verification System

Among the techniques that protect private information by adopting biometrics, speaker verification is widely used due to its natural usage and inexpensive implementation cost. Speaker verification should achieve a high degree of reliability in the verification score, flexibility in speech text usage, and efficiency in the complexity of the verification system. Continuants have excellent speaker-discriminant power while occupying only a modest number of phonemes in the phonemic inventory. Multilayer perceptrons (MLPs) have superior recognition ability and fast operation speed. In consequence, these two elements provide viable ways for a speaker verification system to obtain the above properties: reliability, flexibility and efficiency. This paper presents the implementation of a system to which continuants and MLPs are applied, and evaluates the system using a Korean speech database. The evaluation results show that continuants and MLPs enable the system to acquire the three properties.

Tae-Seung Lee, Byong-Won Hwang

Knowledge Representation

Agent Protocols as Executable Ontologies

Agent protocols are difficult to specify, implement, share and reuse. In addition, there are no well-developed tools or methodologies for doing so. Current efforts are focused on creating protocol languages with which it is possible to have formal diagrammatic representations of protocols. What we propose is a framework that uses ontology technology to specify protocols, in order to make them shareable and reusable.

Manuel José Contreras M., Alan Smaill
Evaluation of RDF(S) and DAML+OIL Import/Export Services within Ontology Platforms

Evaluation of both ontology content and ontology building tools plays an important role before using ontologies in Semantic Web applications. In this paper we assess the ontology evaluation functionalities of the following ontology platforms: OilEd, OntoEdit, Protégé-2000, and WebODE. The goal of this paper is to analyze whether these platforms prevent the ontologist from making knowledge representation mistakes in concept taxonomies during RDF(S) and DAML+OIL ontology import, during ontology building, and during ontology export to RDF(S) and DAML+OIL. Our study reveals that most of these platforms detect only a few mistakes in concept taxonomies when importing RDF(S) and DAML+OIL ontologies, that most of them detect only some mistakes during ontology building, and that they do not detect any taxonomic mistake when exporting ontologies to these languages.

Asunción Gómez-Pérez, M. Carmen Suárez-Figueroa
Graduated Errors in Approximate Queries Using Hierarchies and Ordered Sets

Often, qualitative values have an ordering, such as (very-short, short, medium-height, tall) or a hierarchical level, such as (The-World, Europe, Spain, Madrid), which are used by people to interpret mistakes and approximations among these values. Confusing Paris with Madrid yields an error smaller than confusing Paris with Australia, or Paris with Abraham Lincoln. And the “difference” between very cold and cold is smaller than that between very cold and warm. Methods are provided to measure such confusion, and to answer approximate queries in an “intuitive” manner. Examples are given. Hierarchies are a simpler version of ontologies, albeit very useful. Queries have a blend of errors by order and errors by hierarchy level, such as “what is the error in confusing very cold with tall?” or “give me all people who are somewhat like (John (plays baseball) (travels-by water-vehicle) (lives-in North-America)).” Thus, retrieval of approximate objects is possible, as illustrated here.

Adolfo Guzman-Arenas, Serguei Levachkine
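
One simple way to realize the hierarchy-based part of such an error measure is to count the links from the retrieved value up to its closest ancestor that also subsumes the intended value, so that confusing Paris with Madrid costs less than confusing Paris with Australia. This is a toy asymmetric measure for illustration, not necessarily the paper's exact definition:

```python
def ancestors(node, parent):
    """Chain from a node up to the hierarchy root (inclusive)."""
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain


def confusion(a, b, parent):
    """Steps from `a` up to its closest ancestor that is also an
    ancestor-or-self of `b`; 0 when a == b."""
    b_anc = set(ancestors(b, parent))
    for i, n in enumerate(ancestors(a, parent)):
        if n in b_anc:
            return i
    return None  # values lie in disjoint hierarchies
```

On the abstract's example hierarchy (The-World > Europe > Spain > Madrid, etc.), the measure reproduces the intuition that nearby values yield smaller errors.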
Finding the Most Similar Concepts in Two Different Ontologies

A concise way to send information from agent A to agent B is to use phrases constructed with the concepts of A: to use the concepts as the atomic tokens to be transmitted. Unfortunately, tokens from A are not understood by (they do not map into) the ontology of B, since in general each ontology has its own address space. Instead, A and B need to use a common communication language, such as English: the transmission tokens are English words. An algorithm is presented that finds the concept CB in OB (the ontology of B) most closely resembling a given concept CA. That is, given a concept from ontology OA, a method is provided to find the most similar concept in OB, as well as the similarity sim between both concepts. Examples are given.

Adolfo Guzman-Arenas, Jesus M. Olivares-Ceja
ADM: An Active Deductive XML Database System

As XML becomes widely accepted as a means of storing, searching and extracting information, a larger number of Web applications will require conceptual models and administrative tools to organize their collections of documents. Recently, event-condition-action (ECA) rules have been proposed to provide reactive functionality in XML document databases. However, logical inference mechanisms to deliver multiagent-based applications remain unconsidered in those models. In this paper, we introduce ADM, an active deductive XML database model that extends XML with logical variables, logical procedures and ECA rules. ADM has been partially implemented in an open distributed coordination architecture written in Java. Besides coupling the rational and reactive behavioral aspects into a simple and uniform model, a major contribution of this work is the introduction of sequential and parallel rule composition as an effective strategy for scheduling rule selection and execution.

Oscar Olmedo-Aguirre, Karina Escobar-Vázquez, Giner Alor-Hernández, Guillermo Morales-Luna
Web-Based Intelligent Service for Coalition Operation Support in Networked Organizations

Nowadays, organizations must continually adapt to market and organizational changes to achieve their most important goals. The migration to business services and service-oriented architectures provides a valuable opportunity to attain organizational objectives, causing evolution in both organizational structure and technology and enabling businesses to dynamically change vendors or services. The paper proposes a view integrating the concepts of networked organizations, Web intelligence, and Web services into a collaboration environment for a networked organization. An approach to the knowledge logistics problem, based on the concepts of Web intelligence and Web services in the networked intelligent organization environment, is described. The applicability of the approach is illustrated through a "Binni scenario"-based case study of portable hospital configuration as an e-business and e-government coalition operation.

Alexander Smirnov, Mikhail Pashkin, Nikolai Chilov, Tatiana Levashova, Andrew Krizhanovsky
Knowledge Acquisition and Management System for an Intelligent Planner

Intelligent planning helps solve a great number of problems by combining efficient algorithms with human knowledge about a specific application. However, knowledge acquisition is always a difficult and tedious process. This paper presents a knowledge acquisition and management system for an intelligent planning system. The planner is designed to assist an operator of a power plant in difficult maneuvers, such as those in the presence of faults. The architecture of the planner is briefly described, as well as its representation language. This language is adequate for representing process knowledge, but it is difficult for an expert operator to capture his or her knowledge in the correct format. This paper presents the design of a knowledge acquisition and management system and describes a case study where it is being utilized.

Elí Bulmaro Zenteno, Pablo H. Ibargüengoytia

Logic and Constraint Programming

Stability Analysis for Dynamic Constraint Satisfaction Problems

Problems from various domains can be modeled as dynamic constraint satisfaction problems (DCSPs), where the constraints, the variables or the variable domains change over time. The aim when solving this kind of problem is to decrease the number of variables whose assignment changes between consecutive problems, a concept known as distance or stability. The stability problem has previously been studied, but only for variations in the constraints of a given problem. This paper describes a wider analysis of the stability problem, covering modifications of variables, domains, constraints and combinations of these elements for the resource allocation problem, modeled as a DCSP. Experiments and results are presented relating to efficiency, distance and a new parameter called global stability, for several techniques such as solution reuse, reasoning reuse and a combination of both. Additionally, the results show that the distance behavior is linear with respect to the variations.

Manuel Iván Angles-Domínguez, Hugo Terashima-Marín
Evaluation-Based Semiring Meta-constraints

Classical constraint satisfaction problems (CSPs) provide an expressive formalism for describing and solving many real-world problems. However, classical CSPs prove to be restrictive in situations where uncertainty, fuzziness, probability or optimisation are intrinsic. Soft constraints alleviate many of the restrictions which classical constraint satisfaction imposes; in particular, soft constraints provide a basis for capturing notions such as vagueness, uncertainty and cost in the CSP model. We focus on the semiring-based approach to soft constraints. In this paper we present a new evaluation-based scheme for implementing meta-constraints, which can be applied to any existing implementation to improve its run-time performance.

Jerome Kelleher, Barry O’Sullivan
Invariant Patterns for Program Reasoning

We address the problem of integrating standard techniques for automatic invariant generation within the context of program reasoning. We propose the use of invariant patterns which enable us to associate common patterns of program code and specifications with invariant schemas. This allows crucial decisions relating to the development of invariants to be delayed until a proof is attempted. Moreover, it allows patterns within the program to be exploited in patching failed proof attempts.

Andrew Ireland, Bill J. Ellis, Tommy Ingulfsen
Closing the Gap between the Stable Semantics and Extensions of WFS

In order to really understand all aspects of logic-based program development under different semantics, it would be useful to have a common solid logical foundation. The stable semantics already has one, based on intuitionistic logic I and the notion of completions. Since S4 expresses I, the stable semantics can be fully represented in S4. We propose the same approach to define extensions of the WFS semantics. We distinguish a particular semantics, which we call AS-WFS, that is defined over general propositional theories and can be defined via completions using S4. Interestingly, AS-WFS seems to satisfy most of the principles of a well-behaved semantics. Our general goal is to propose S4 and completions as tools to study the formal behavior of different semantics.

Mauricio Osorio, Veronica Borja, Jose Arrazola
A Framework for Agent-Based Brokering of Reasoning Services

Many applications have shown that the combination of specialized reasoning systems, such as deduction and computation systems, can lead to synergetic effects. Often, a clever combination of different reasoning systems can solve problems that are beyond the problem solving horizon of single, stand-alone systems. Current platforms for the integration of reasoning systems typically lack abstraction, robustness, and automatic coordination of reasoners. We are currently developing a new framework for reasoning agents to solve these problems. Our framework builds on the FIPA specifications for multi-agent systems, formal service descriptions, and a central brokering mechanism. In this paper we present the architecture of our framework and our progress with the integration of automated theorem provers.

Jürgen Zimmer

Machine Learning and Data Mining

Faster Proximity Searching in Metric Data

A number of problems in computer science can be solved efficiently with so-called memory-based or kernel methods. Among these problems (relevant to the AI community) are multimedia indexing, clustering, unsupervised learning and recommendation systems. The common ground of these problems is satisfying proximity queries over an abstract metric database. In this paper we introduce a new technique for building practical indexes for metric range queries. The technique improves on existing algorithms based on pivots and signatures, and introduces a new data structure, the Fixed Queries Trie, to speed up metric range queries. The result is an index with O(n) construction time and query complexity O(n^α), α ≤ 1. The indexing algorithm uses only a few bits of storage for each database element.

Edgar Chávez, Karina Figueroa
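
The pivot idea underlying such indexes can be sketched directly: precompute distances from every element to a few pivots, then use the triangle inequality to discard elements without computing their distance to the query. This is a generic pivot filter for context, not the Fixed Queries Trie itself:

```python
def build_pivot_table(db, pivots, dist):
    """Precompute d(x, p) for every database element and pivot."""
    return {x: [dist(x, p) for p in pivots] for x in db}


def range_query(q, r, db, pivots, dist, table):
    """Pivot filtering: by the triangle inequality,
    |d(q,p) - d(x,p)| > r for some pivot p implies d(q,x) > r,
    so x can be discarded without evaluating d(q,x)."""
    dq = [dist(q, p) for p in pivots]
    out = []
    for x in db:
        if any(abs(dq[i] - table[x][i]) > r for i in range(len(pivots))):
            continue              # pruned by a pivot
        if dist(q, x) <= r:       # exact check on the survivors
            out.append(x)
    return out
```

The filter is safe (it never discards a true answer) and trades a few precomputed distances per element for far fewer distance evaluations at query time; the trie structure in the paper further compacts these precomputed values into a few bits each.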
Data Mining with Decision Trees and Neural Networks for Calcification Detection in Mammograms

One of the best prevention measures against breast cancer is the early detection of calcifications through mammograms. Detecting calcifications in mammograms is a difficult task because of their size and the high content of similar patterns in the image. This brings the necessity of creating automatic tools to determine whether or not a mammogram presents calcifications. In this paper we introduce a combination of machine vision and data-mining techniques to detect calcifications (including micro-calcifications) in mammograms that achieves an accuracy of 92.6% with decision trees and 94.3% with a back-propagation neural network. We also focus on the data-mining task with decision trees to generate descriptive patterns based on a set of characteristics selected by our domain expert. We found that these patterns can be used to support the radiologist in confirming a diagnosis, or in detecting micro-calcifications that might otherwise be missed because of their reduced size.

Beatriz A. Flores, Jesus A. Gonzalez
An Optimization Algorithm Based on Active and Instance-Based Learning

We present an optimization algorithm that combines active learning and locally-weighted regression to find extreme points of noisy and complex functions. We apply our algorithm to the problem of interferogram analysis, an important problem in optical engineering that is not solvable using traditional optimization schemes and that has received recent attention in the research community. Experimental results show that our method is faster than others previously presented in the literature and that it is very accurate for the case of noiseless interferograms, as well as for interferograms with two types of noise: white noise and intensity gradients, which are due to slight misalignments in the system.

Olac Fuentes, Thamar Solorio
MultiGrid-Based Fuzzy Systems for Function Approximation

In this paper we make use of a modified Grid Based Fuzzy System architecture, which may provide an exponential reduction in the number of rules needed. We also introduce an algorithm that automatically, from a set of given I/O training points, is able to determine the pseudo-optimal architecture proposed as well as the optimal parameters needed (number and position of membership functions and fuzzy rule consequents). The suitability of the algorithm and the improvement in both performance and efficiency obtained are shown in an example.

Luis Javier Herrera, Héctor Pomares, Ignacio Rojas, Olga Valenzuela, Mohammed Awad
Inducing Classification Rules from Highly-Structured Data with Composition

This paper elaborates on two techniques, deconstruction and composition, to handle complex data in order to learn from it. We propose typed higher-order logic as a suitable representation formalism for domains with complex structured data. Both techniques derive naturally from such framework. A naive sequential covering algorithm which uses both techniques is applied on well known learning datasets (simple and structured) to test them with good results. A further experiment on the change of knowledge representation is presented to showcase the robustness of our approach.

René MacKinney-Romero, Christophe Giraud-Carrier
Comparing Techniques for Multiclass Classification Using Binary SVM Predictors

Multiclass classification using machine learning techniques consists of inducing a function f(x) from a training set composed of pairs (x_i, y_i) where y_i ∈ {1, 2, ..., k}. Some learning methods are inherently binary, able to perform classifications only where k = 2. Among these one can mention Support Vector Machines. This paper presents a comparison of methods for multiclass classification using SVMs. The techniques investigated use strategies that divide the multiclass problem into binary subproblems, and can be extended to other learning techniques. Results indicate that the use of Directed Acyclic Graphs is an efficient approach for generating multiclass SVM classifiers.

Ana Carolina Lorena, André Carlos P. L. F. de Carvalho
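
The one-vs-one decomposition these comparisons build on can be sketched with any binary learner. Here a nearest-centroid classifier stands in for the SVM (an assumption for self-containment), with one binary model per class pair and majority voting at prediction time:

```python
from itertools import combinations


class NearestCentroidBinary:
    """Stand-in binary learner (the paper uses SVMs here)."""
    def fit(self, X, y):
        self.cent = {}
        for c in set(y):
            pts = [x for x, yi in zip(X, y) if yi == c]
            self.cent[c] = [sum(v) / len(pts) for v in zip(*pts)]
        return self

    def predict(self, x):
        d2 = lambda c: sum((a - b) ** 2 for a, b in zip(x, self.cent[c]))
        return min(self.cent, key=d2)


def one_vs_one_fit(X, y):
    """Train one binary model per unordered pair of classes."""
    models = {}
    for a, b in combinations(sorted(set(y)), 2):
        pairs = [(x, yi) for x, yi in zip(X, y) if yi in (a, b)]
        Xi, Yi = [p[0] for p in pairs], [p[1] for p in pairs]
        models[(a, b)] = NearestCentroidBinary().fit(Xi, Yi)
    return models


def one_vs_one_predict(models, x):
    """Each pairwise model votes; the most-voted class wins."""
    votes = {}
    for m in models.values():
        w = m.predict(x)
        votes[w] = votes.get(w, 0) + 1
    return max(votes, key=votes.get)
```

The Directed Acyclic Graph variant favored in the paper trains the same k(k-1)/2 pairwise models but evaluates only k-1 of them at prediction time, following a path that eliminates one class per node.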
Analysing Spectroscopic Data Using Hierarchical Cooperative Maximum Likelihood Hebbian Learning

A novel approach to feature selection is presented in this paper, in which the aim is to visualize and extract information from complex, high dimensional spectroscopic data. The model proposed is a mixture of factor analysis and exploratory projection pursuit based on a family of cost functions proposed by Fyfe and MacDonald [12] which maximizes the likelihood of identifying a specific distribution in the data while minimizing the effect of outliers [9,12]. It employs cooperative lateral connections derived from the Rectified Gaussian Distribution [8,14] to enforce a more sparse representation in each weight vector. We also demonstrate a hierarchical extension to this method which provides an interactive method for identifying possibly hidden structure in the dataset.

Donald MacDonald, Emilio Corchado, Colin Fyfe
Feature Selection-Ranking Methods in a Very Large Electric Database

Feature selection is a crucial activity when knowledge discovery is applied to very large databases, as it reduces dimensionality and therefore the complexity of the problem. Its main objective is to eliminate attributes to obtain a computationally tractable problem, without affecting the quality of the solution. To perform feature selection, several methods have been proposed, some of them tested over small academic datasets. In this paper we evaluate different feature selection-ranking methods over a very large real world database related with a Mexican electric energy client-invoice system. Most of the research on feature selection methods only evaluates accuracy and processing time; here we also report on the amount of discovered knowledge and stress the issue around the boundary that separates relevant and irrelevant features. The evaluation was done using Elvira and Weka tools, which integrate and implement state of the art data mining algorithms. Finally, we propose a promising feature selection heuristic based on the experiments performed.

Manuel Mejía-Lavalle, Guillermo Rodríguez-Ortiz, Gustavo Arroyo-Figueroa, Eduardo F. Morales
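
A representative ranking criterion of the kind evaluated in such studies is information gain, which scores each feature by how much it reduces class entropy. This is a generic sketch of one common ranking method, not the heuristic the paper proposes:

```python
import math
from collections import Counter


def entropy(labels):
    """Shannon entropy H(Y) of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())


def information_gain(column, labels):
    """H(Y) - H(Y | feature): higher means the feature separates
    the classes better. Assumes a discrete-valued feature column."""
    n = len(labels)
    cond = 0.0
    for v in set(column):
        sub = [y for x, y in zip(column, labels) if x == v]
        cond += len(sub) / n * entropy(sub)
    return entropy(labels) - cond


def rank_features(rows, labels):
    """Return feature indices ordered by decreasing information gain."""
    cols = list(zip(*rows))
    gains = [(information_gain(c, labels), i) for i, c in enumerate(cols)]
    return [i for _, i in sorted(gains, reverse=True)]
```

The practical question the paper stresses, where to cut the ranked list between relevant and irrelevant features, is not answered by the scores themselves; a threshold or wrapper evaluation is still needed.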
Automatic Case Adaptation with a Hybrid Committee Approach

When Case Based Reasoning systems are applied to real-world problems, the retrieved solutions in general require adaptations in order to be useful in new contexts. Therefore, case adaptation is a desirable capability of Case Based Reasoning systems. However, case adaptation is still a challenge for this research area. In general, the acquisition of knowledge for adaptation is more complex than the acquisition of cases. This paper explores the use of a hybrid committee of Machine Learning techniques for automatic case adaptation.

Claudio A. Policastro, André Carlos P. L. F. de Carvalho, Alexandre C. B. Delbem
Class Imbalances versus Class Overlapping: An Analysis of a Learning System Behavior

Several works point out class imbalance as an obstacle on applying machine learning algorithms to real world domains. However, in some cases, learning algorithms perform well on several imbalanced domains. Thus, it does not seem fair to directly correlate class imbalance to the loss of performance of learning algorithms. In this work, we develop a systematic study aiming to question whether class imbalances are truly to blame for the loss of performance of learning systems or whether the class imbalances are not a problem by themselves. Our experiments suggest that the problem is not directly caused by class imbalances, but is also related to the degree of overlapping among the classes.

Ronaldo C. Prati, Gustavo E. A. P. A. Batista, Maria Carolina Monard
Advanced Clustering Technique for Medical Data Using Semantic Information

MEDLINE is a representative collection of medical documents supplied with original full-text natural-language abstracts as well as with representative keywords (called MeSH-terms) manually selected by the expert annotators from a pre-defined ontology and structured according to their relation to the document. We show how the structured manually assigned semantic descriptions can be combined with the original full-text abstracts to improve quality of clustering the documents into a small number of clusters. As a baseline, we compare our results with clustering using only abstracts or only MeSH-terms. Our experiments show 36% to 47% higher cluster coherence, as well as more refined keywords for the produced clusters.

Kwangcheol Shin, Sang-Yong Han, Alexander Gelbukh

Multiagent Systems and Distributed AI

Strict Valued Preference Relations and Choice Functions in Decision-Making Procedures

Fuzzy (valued) preference relations (FPRs) make it possible to take into account the intensity of preference between alternatives. The refinement of crisp (non-valued) preference relations by replacing them with valued preference relations often transforms crisp preference relations with cycles into acyclic FPRs, making it possible to reach decisions in situations where crisp models do not work. Different models of the rationality of strict FPRs, defined by levels of transitivity or acyclicity of these relations, are considered. The choice of the best alternatives based on a given strict FPR is defined by a fuzzy choice function (FCF) ordering the alternatives in a given subset. The relationships between the rationality of strict FPRs and the rationality of FCFs are studied. Several valued generalizations of crisp group decision-making procedures are proposed. As shown with examples of group decision-making in multiagent systems, taking preference values into account makes it possible to avoid some problems typical of crisp procedures.

Ildar Batyrshin, Natalja Shajdullina, Leonid Sheremetov
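
One classical way to turn a valued preference relation into a choice is the nondominance choice function: derive strict preference degrees and rank each alternative by how weakly it is dominated. This sketch follows the standard Orlovsky-style construction as an illustration, not necessarily the exact FCF studied in the paper:

```python
def nondominated_degrees(R):
    """Given a fuzzy preference matrix R (R[i][j] = degree to which
    alternative i is preferred to j), compute strict preference
    P[i][j] = max(R[i][j] - R[j][i], 0) and return for each i its
    nondominance degree 1 - max_j P[j][i]."""
    n = len(R)
    P = [[max(R[i][j] - R[j][i], 0.0) for j in range(n)] for i in range(n)]
    return [1.0 - max(P[j][i] for j in range(n)) for i in range(n)]
```

An alternative with nondominance degree 1.0 is not strictly dominated by anything; choosing the maximizer gives a decision even when the underlying crisp relation would contain cycles.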
Adaptation in the Presence of Exogeneous Information in an Artificial Financial Market

In recent years, agent-based computational models have been used to study financial markets. One of the most interesting elements involved in these studies is the process of learning, in which market participants try to obtain information from the market in order to improve their strategies and hence increase their profits. While other papers have shown how this learning process is determined by factors such as the adaptation period, the composition of the market and the intensity of the signals that an agent can perceive, in this paper we discuss the effect of external information on the learning process in an artificial financial market (AFM). In particular, we analyze the case where external information forces all participants to randomly revise their expectations of the future. Even though AFMs usually use sophisticated artificial intelligence techniques, in this study we show how interesting results can be obtained using a quite elementary genetic algorithm.

José L. Gordillo, Juan Pablo Pardo-Guerra, Christopher R. Stephens
Design Patterns for Multiagent Systems Design

Capitalizing on and diffusing experience with multiagent systems are two key mechanisms that the classical approach of methods and tools cannot address. Our hypothesis is that, among the available techniques for collecting and formalising experience, design patterns are the technique best able to express agent concepts and to adapt to the various problems of MAS development. In this paper, we present several agent-oriented patterns [1] in order to demonstrate the feasibility of supporting MAS analysis and design through design patterns. Our agent patterns cover all the development stages, from analysis to implementation, including re-engineering (through antipatterns [2]).

Sylvain Sauvage
A Faster Optimal Allocation Algorithm in Combinatorial Auctions

In combinatorial auctions, a bidder may bid for arbitrary combinations of items, so combinatorial auctions can be applied to resource and task allocation in multiagent systems. However, determining the winners of a combinatorial auction so as to maximize the auctioneer's profit is known to be NP-complete. A branch-and-bound method is one efficient approach to winner determination. In this paper, we propose a faster winner determination algorithm for combinatorial auctions. The proposed algorithm combines a branch-and-bound method with Linear Programming, and we present a new heuristic bid selection method for it. In addition, upper bounds are reused to reduce the running time of the algorithm in some specific cases. We evaluate the performance of the proposed algorithm by comparing it with CPLEX and a known method. The experiments were conducted with six datasets, each with a different distribution. The proposed algorithm showed superior efficiency on three datasets and similar efficiency on the rest.
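The branch-and-bound skeleton of winner determination can be sketched as below. To keep the example self-contained, the paper's LP-relaxation bound is replaced by a simpler admissible bound (current revenue plus the prices of all remaining item-compatible bids); the bid data are invented.

```python
def winner_determination(bids):
    """Branch-and-bound winner determination (sketch).

    bids: list of (frozenset_of_items, price).
    Returns (best_value, list of winning bid indices)."""
    best = [0, []]

    def search(i, taken_items, value, chosen):
        if value > best[0]:
            best[0], best[1] = value, chosen[:]
        if i == len(bids):
            return
        # Admissible upper bound: assume every remaining compatible bid wins.
        bound = value + sum(p for s, p in bids[i:] if not (s & taken_items))
        if bound <= best[0]:
            return  # prune this branch
        items, price = bids[i]
        if not (items & taken_items):                    # branch 1: accept bid i
            chosen.append(i)
            search(i + 1, taken_items | items, value + price, chosen)
            chosen.pop()
        search(i + 1, taken_items, value, chosen)        # branch 2: reject bid i

    search(0, frozenset(), 0, [])
    return best[0], best[1]

bids = [(frozenset("AB"), 10), (frozenset("C"), 7),
        (frozenset("BC"), 9), (frozenset("A"), 4)]
value, winners = winner_determination(bids)   # best allocation: bids 0 and 1, revenue 17
```

An LP relaxation at each node, as in the paper, would give a tighter bound and hence more pruning; reusing upper bounds between sibling nodes avoids recomputing the relaxation.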

Jin-Woo Song, Sung-Bong Yang
An Agent Based Framework for Modelling Neuronal Regulators of the Biological Systems

The neuronal regulators of biological systems are very difficult to deal with since they present nonstructured problems. The agent paradigm can analyze this type of system in a simple way. In this paper, a formal agent-based framework that incorporates aspects such as modularity, flexibility and scalability is presented. Moreover, it enables the modeling of systems that present distribution and emergence characteristics. The proposed framework provides a definition of a model for the neuronal regulator of the lower urinary tract. Several experiments have been carried out using the model as presented, and the results have been validated by comparing them with real data. The developed simulator can be used by specialists in research tasks, in hospitals and in the field of education.

Antonio Soriano Payá, Juan Manuel García Chamizo, Francisco Maciá Pérez
Possibilistic Reasoning and Privacy/Efficiency Tradeoffs in Multi-agent Systems

In cooperative problem solving, while some communication is necessary, privacy issues can limit the amount of information transmitted. We study this problem in the context of meeting scheduling. Agents propose meetings consistent with their schedules while responding to other proposals by accepting or rejecting them. The information in their responses is either a simple accept/reject or an account of meetings in conflict with the proposal. The major mechanism of inference involves an extension of CSP technology, which uses information about possible values in an unknown CSP. Agents store such information within ‘views’ of other agents. We show that this kind of possibilistic information in combination with arc consistency processing can speed up search under conditions of limited communication. This entails an important privacy/efficiency tradeoff, in that this form of reasoning requires a modicum of actual private information to be maximally effective. If links between derived possibilistic information and events that gave rise to these deductions are maintained, actual (meeting) information can be deduced without any meetings being communicated. Such information can also be used heuristically to find solutions before such discoveries can occur.

Richard J. Wallace, Eugene C. Freuder, Marius Minca

Natural Language

A Biologically Motivated and Computationally Efficient Natural Language Processor

Conventional artificial neural network models lack many physiological properties of the neuron. Current learning algorithms are more concerned with computational performance than with biological credibility. For a natural language processing application, thematic role assignment (the semantic relations between words in a sentence), the purpose of the proposed system is to compare two different connectionist modules on the same task: (1) the usual simple recurrent network trained with the backpropagation learning algorithm, and (2) a biologically inspired module, which employs a bi-directional architecture and a learning algorithm better adjusted to the physiological attributes of the cerebral cortex. Identical sets of sentences are used to train the modules. After training, the achieved output data show that the physiologically plausible module displays higher accuracy for expectable thematic roles than the traditional one.

João Luís Garcia Rosa
A Question-Answering System Using Argumentation

This paper presents a novel approach to question answering: the use of argumentation techniques. Our question answering system deals with argumentation in student essays: it sees an essay as an answer to a question and gauges its quality on the basis of the argumentation found in it. Thus, the system looks for expected types of argumentation in essays (i.e. the expectation is that the kind of argumentation in an essay is correlated to the type of question). Another key feature of our work is our proposed categorisation for argumentation in student essays, as opposed to categorisation of argumentation in research papers, where – unlike the case of student essays – it is relatively well-known which kind of argumentation can be found in specific sections.

Emanuela Moreale, Maria Vargas-Vera
Automatic Building of a Machine Translation Bilingual Dictionary Using Recursive Chain-Link-Type Learning from a Parallel Corpus

Numerous methods have been developed for generating a machine translation (MT) bilingual dictionary from a parallel text corpus. Such methods extract bilingual collocations from sentence pairs of source and target language sentences. Then those collocations are registered in an MT bilingual dictionary. Bilingual collocations are lexically corresponding pairs of parts extracted from sentence pairs. This paper describes a new method for automatic extraction of bilingual collocations from a parallel text corpus using no linguistic knowledge. We use Recursive Chain-link-type Learning (RCL), which is a learning algorithm, to extract bilingual collocations. Our method offers two main advantages. One benefit is that this RCL system requires no linguistic knowledge. The other advantage is that it can extract many bilingual collocations, even if the frequency of appearance of the bilingual collocations is very low. Experimental results verify that our system extracts bilingual collocations efficiently. The extraction rate of bilingual collocations was 74.9% for all bilingual collocations that corresponded to nouns in the parallel corpus.

Hiroshi Echizen-ya, Kenji Araki, Yoshio Momouchi, Koji Tochinai
Recognition of Named Entities in Spanish Texts

Proper name recognition is a subtask of Named Entity Recognition in the Message Understanding Conferences. For our corpus annotation, proper name recognition is a crucial task since proper names appear in more than 50% of the sentences of the electronic texts that we collected for this purpose. Our work is focused on composite proper names (names with coordinated constituents, names with several prepositional phrases, and names of songs, books, movies, etc.). We describe a method based on heterogeneous knowledge and simple resources, and the preliminary results obtained.

Sofía N. Galicia-Haro, Alexander Gelbukh, Igor A. Bolshakov
Automatic Enrichment of Very Large Dictionary of Word Combinations on the Basis of Dependency Formalism

The paper presents a method of automatic enrichment of a very large dictionary of word combinations. The method is based on results of automatic syntactic analysis (parsing) of sentences. The dependency formalism is used for representation of syntactic trees that allows for easier treatment of information about syntactic compatibility. Evaluation of the method is presented for the Spanish language based on comparison of the automatically generated results with manually marked word combinations.

Alexander Gelbukh, Grigori Sidorov, Sang-Yong Han, Erika Hernández-Rubio
Towards an Efficient Evolutionary Decoding Algorithm for Statistical Machine Translation

In a statistical machine translation system (SMTS), decoding is the process of finding the most likely translation under a statistical model according to previously learned parameters. This paper proposes a new approach based on evolutionary hybrid algorithms to translate sentences in a specific technical context. The tests are carried out on Spanish sentences translated into English. The experimental results validate the performance of our method.

Eridan Otto, María Cristina Riff
The Role of Imperatives in Inference, Agents, and Actions

The aim of this paper is to present a model for the interpretation of imperative sentences in which reasoning agents play the roles of speakers and hearers. A requirement is associated both with the person who gives and with the person who receives the order, which prevents the hearer from coming to inappropriate conclusions about the actions s/he has been commanded to do. By relating imperatives to the actions they prescribe, the dynamic aspect of imperatives is captured, and by using the idea of encapsulation, it is possible to distinguish what is demanded from what is not. These two ingredients provide agents with the tools to avoid inferential problems in interpretation.

Miguel Pérez-Ramírez, Chris Fox
Automatic Multilinguality for Time Expression Resolution

In this paper, a semiautomatic extension of our monolingual (Spanish) TERSEO system to a multilingual level is presented. TERSEO implements a method of event ordering based on temporal expression recognition and resolution. TERSEO consists of two modules: the first is based on a set of rules for recognizing temporal expressions in Spanish; the second is based on a set of rules for resolving these expressions (that is, transforming them into a concrete date, a concrete interval or a fuzzy interval). Both sets of rules were defined through an empirical study of a training corpus. The extension that makes the system able to work with multilingual texts has been made in five stages. First, the Spanish temporal expressions in our knowledge database are translated directly into the target language (English, Italian, French, Catalan, etc.), and each expression in the target language is linked to the same resolution rule used in the source language. The second stage is a search in Google for each expression, eliminating all those expressions for which no exact instances are found. The third stage obtains a set of keywords in the target language, which are used to look for new temporal expressions in that language, learning new rules automatically. Finally, every new rule is linked to its resolution. In addition, we present two different evaluations: one measures the reliability of the system used for the automatic extraction of rules for new languages; the other presents precision and recall results for the recognition and resolution of Spanish temporal expressions.
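The recognition-rule-linked-to-resolution-rule scheme described above can be illustrated with a toy rule table. The patterns, rule bodies, and reference date below are invented for the sketch; they are not TERSEO's actual rules.

```python
import datetime
import re

# Each recognition pattern is linked to a resolution rule that turns the
# matched expression into a concrete date relative to a reference date.
RULES = [
    (re.compile(r"\btomorrow\b"),
     lambda m, ref: ref + datetime.timedelta(days=1)),
    (re.compile(r"\byesterday\b"),
     lambda m, ref: ref - datetime.timedelta(days=1)),
    (re.compile(r"\bin (\d+) days\b"),
     lambda m, ref: ref + datetime.timedelta(days=int(m.group(1)))),
]

def resolve(text, ref):
    """Return (expression, concrete date) pairs found in text."""
    out = []
    for pattern, rule in RULES:
        for m in pattern.finditer(text):
            out.append((m.group(0), rule(m, ref)))
    return out

ref = datetime.date(2004, 4, 26)
found = resolve("the deadline is in 3 days, not tomorrow", ref)
```

Porting to a new language, in the spirit of the paper, would translate the pattern strings while keeping each one linked to the same resolution lambda.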

Estela Saquete, P. Martínez-Barco, R. Muñoz
AQUA – Ontology-Based Question Answering System

This paper describes AQUA, an experimental question answering system. AQUA combines Natural Language Processing (NLP), ontologies, logic, and Information Retrieval technologies in a uniform framework. AQUA makes intensive use of an ontology in several parts of the question answering process. The ontology is used in the refinement of the initial query, in the reasoning process, and in the novel similarity algorithm. The similarity algorithm is a key feature of AQUA: it is used to find similarities between relations used in the translated query and relations in the ontological structures.

Maria Vargas-Vera, Enrico Motta
Phrase Chunking for Efficient Parsing in Machine Translation System

Phrase chunking can be an effective way to enhance the performance of an existing parser in a machine translation system. This paper presents a Chinese phrase chunker implemented with a transformation-based learning algorithm, and an interface devised to convey the dependency information found by the chunker to the parser. The chunker operates as a preprocessor to the parser in a Chinese-to-Korean machine translation system currently under active development. By introducing the chunking module, some of the unlikely dependencies can be ruled out in advance, resulting in noticeable improvements in the parser's performance.

Jaehyung Yang

Uncertainty Reasoning

Motion Planning Based on Geometric Features

This paper describes the foundations and algorithms of a new alternative for improving the connectivity of the configuration space in probabilistic roadmap methods (PRM). The main idea is to use some geometric features of the obstacles and the robot in the workspace to obtain useful configurations close to the obstacles for finding collision-free paths. In order to improve the performance of these planners, we use the "straightness" feature and propose a new heuristic that allows us to solve narrow-corridor problems. We apply this technique to motion planning problems for a specific kind of robot, "free-flying objects" with six degrees of freedom (dof): three for position and three for orientation. We have implemented the method in three-dimensional space, and we show results confirming that geometric features of the workspace can be used to improve the connectivity of the configuration space and to solve narrow-corridor problems.
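The basic PRM loop that the paper builds on can be sketched in 2-D. This is a deliberately minimal version with invented data: it samples uniformly (where the paper biases samples toward obstacle surfaces), connects nodes within a fixed radius, and searches the roadmap with BFS.

```python
import math
import random
from collections import deque

def prm_path(start, goal, obstacles, n_samples=200, radius=0.3, seed=1):
    """Minimal 2-D PRM: sample free configurations in the unit square,
    connect nearby pairs whose straight segment is collision-free,
    then BFS from start (node 0) to goal (node 1).
    obstacles: list of ((cx, cy), r) circles."""
    rng = random.Random(seed)

    def free(p):
        return all(math.dist(p, c) > r for c, r in obstacles)

    def segment_free(a, b, steps=10):
        return all(free(((a[0] * (steps - t) + b[0] * t) / steps,
                         (a[1] * (steps - t) + b[1] * t) / steps))
                   for t in range(steps + 1))

    nodes = [start, goal] + [p for p in ((rng.random(), rng.random())
                                         for _ in range(n_samples)) if free(p)]
    adj = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if math.dist(nodes[i], nodes[j]) < radius and segment_free(nodes[i], nodes[j]):
                adj[i].append(j)
                adj[j].append(i)
    prev, q = {0: None}, deque([0])
    while q:
        u = q.popleft()
        if u == 1:                      # reached goal: rebuild path
            path = []
            while u is not None:
                path.append(nodes[u])
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

path = prm_path((0.05, 0.05), (0.95, 0.95), obstacles=[((0.5, 0.5), 0.2)])
```

The paper's contribution would replace the uniform sampler with one that places configurations near obstacle surfaces using the "straightness" feature, which is what helps in narrow corridors.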

Antonio Benitez, Daniel Vallejo
Comparative Evaluation of Temporal Nodes Bayesian Networks and Networks of Probabilistic Events in Discrete Time

Temporal Nodes Bayesian Networks (TNBNs) and Networks of Probabilistic Events in Discrete Time (NPEDTs) are two different types of Bayesian networks (BNs) for temporal reasoning. Arroyo-Figueroa and Sucar applied TNBNs to an industrial domain: the diagnosis and prediction of the temporal faults that may occur in the steam generator of a fossil power plant. We have recently developed an NPEDT for the same domain. In this paper, we present a comparative evaluation of these two systems. The results show that, in this domain, NPEDTs perform better than TNBNs. The ultimate reason for that seems to be the finer time granularity used in the NPEDT with respect to that of the TNBN. Since families of nodes in a TNBN interact through the general model, only a small number of states can be defined for each node; this limitation is overcome in an NPEDT through the use of temporal noisy gates.

Severino F. Galán, Gustavo Arroyo-Figueroa, F. Javier Díez, Luis Enrique Sucar
Function Approximation through Fuzzy Systems Using Taylor Series Expansion-Based Rules: Interpretability and Parameter Tuning

In this paper we present a new approach to the problem of approximating a function from a training set of I/O points using fuzzy logic and fuzzy systems. This approach provides a number of advantages compared with other, more limited systems. Among these advantages, we may highlight the considerable reduction in the number of rules needed to model the underlying function of this set of data and, from another point of view, the possibility of giving an interpretation to the rules of the obtained system using the Taylor series concept. This work is reinforced by an algorithm able to obtain the pseudo-optimal polynomial consequents of the rules. Finally, the performance of our approach and of the associated algorithm is shown through a significant example.

Luis Javier Herrera, Héctor Pomares, Ignacio Rojas, Jesús González, Olga Valenzuela
Causal Identification in Design Networks

When planning and designing a policy intervention and evaluation, the policy maker will have to define a strategy which will define the (conditional independence) structure of the available data. Here, Dawid’s extended influence diagrams are augmented by including ‘experimental design’ decisions nodes within the set of intervention strategies to provide semantics to discuss how a ‘design’ decision strategy (such as randomisation) might assist the systematic identification of intervention causal effects. By introducing design decision nodes into the framework, the experimental design underlying the data available is made explicit. We show how influence diagrams might be used to discuss the efficacy of different designs and conditions under which one can identify ‘causal’ effects of a future policy intervention. The approach of this paper lies primarily within probabilistic decision theory.

Ana Maria Madrigal, Jim Q. Smith
Bayes-N: An Algorithm for Learning Bayesian Networks from Data Using Local Measures of Information Gain Applied to Classification Problems

Bayes-N is an algorithm for learning Bayesian networks from data based on local measures of information gain. It is applied to problems in which there is a given dependent (class) variable and a set of independent (explanatory) variables from which we want to predict the class variable on new cases. Given this setting, Bayes-N induces an ancestral ordering of all the variables, generating a directed acyclic graph in which the class variable is a sink, with a subset of the explanatory variables as its parents. It is shown that classification using these variables as predictors performs better than the naive Bayes classifier, and at least as well as other algorithms that learn Bayesian networks, such as K2, PC and Bayes-9. It is also shown that the MDL measure of the networks generated by Bayes-N is comparable to those obtained by these other algorithms.
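The local measure underlying such algorithms, information gain of the class given one explanatory variable, can be computed as below. This is a generic sketch of the measure, not the Bayes-N scoring procedure itself; the toy data are invented.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a list of discrete values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def info_gain(class_col, attr_col):
    """Information gain of the class variable given one explanatory variable:
    H(class) - H(class | attribute)."""
    n = len(class_col)
    cond = 0.0
    for a in set(attr_col):
        subset = [c for c, x in zip(class_col, attr_col) if x == a]
        cond += len(subset) / n * entropy(subset)
    return entropy(class_col) - cond

# Toy data: x1 determines the class exactly, x2 is pure noise.
cls = [0, 0, 1, 1]
x1 = [0, 0, 1, 1]
x2 = [0, 1, 0, 1]
```

Ranking explanatory variables by this gain gives a natural ancestral ordering: variables with high gain toward the class are strong parent candidates, those with zero gain can be excluded.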

Manuel Martínez-Morales, Nicandro Cruz-Ramírez, José Luis Jiménez-Andrade, Ramiro Garza-Domínguez
Methodology for Handling Uncertainty by Using Interval Type-2 Fuzzy Logic Systems

This paper proposes a methodology that is useful for handling uncertainty in non-linear systems by using type-2 Fuzzy Logic (FL). This methodology works under a training scheme from numerical data, using type-2 Fuzzy Logic Systems (FLS). Different training methods can be applied while working with it, as well as different training approaches. One of the training methods used here is also a proposal —the One-Pass method for interval type-2 FLS. We accomplished several experiments forecasting a chaotic time-series with an additive noise and obtained better performance with interval type-2 FLSs than with conventional ones. In addition, we used the designed FLSs to forecast the time-series with different initial conditions, and it did not affect their performance.

Germán Montalvo, Rogelio Soto
Online Diagnosis Using Influence Diagrams

This paper presents the use of influence diagrams in the diagnosis of industrial processes. Diagnosis in this context means the early detection of abnormal behavior and the selection of the best recommendation for the operator in order to correct the problem or minimize its effects. A software architecture is presented, based on the Elvira package, including the connection with industrial control systems. A simple experiment is presented together with the acquisition and representation of the knowledge.

Baldramino Morales Sánchez, Pablo H. Ibargüengoytia
Toward a New Approach for Online Fault Diagnosis Combining Particle Filtering and Parametric Identification

This paper proposes a new approach for online fault diagnosis in dynamic systems, combining a Particle Filtering (PF) algorithm with a classic Fault Detection and Isolation (FDI) framework. Of the two methods, FDI provides deeper insight into a process; however, it cannot normally be computed online. Our approach uses a preliminary PF step to reduce the potential solution space, resulting in an online algorithm with the advantages of both methods. The PF step computes a posterior probability density to diagnose the most probable fault. If the desired confidence is not obtained, the classic FDI framework is invoked. The FDI framework uses recursive parametric estimation for the residual generation block and hypothesis testing and Statistical Process Control (SPC) criteria for the decision making block. We tested the individual methods with an industrial dryer.
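The preliminary PF step can be illustrated with a bootstrap filter over discrete fault modes. Everything below is a toy stand-in for the paper's industrial model: the two modes, the transition probability, and the sensor likelihoods are invented numbers.

```python
import random

def particle_filter_step(particles, transition, likelihood, observation, rng):
    """One bootstrap-filter step: propagate each particle through the
    transition model, weight by the observation likelihood, resample."""
    moved = [transition(p, rng) for p in particles]
    weights = [likelihood(observation, p) for p in moved]
    total = sum(weights)
    if total == 0:
        return moved
    return rng.choices(moved, weights=weights, k=len(moved))

# Toy model: modes 'ok' and 'fault'; a faulty process makes the sensor read high.
def transition(mode, rng):
    return 'fault' if mode == 'fault' or rng.random() < 0.01 else 'ok'

def likelihood(obs, mode):
    return 0.9 if (obs == 'high') == (mode == 'fault') else 0.1

rng = random.Random(0)
particles = ['ok'] * 1000
for _ in range(5):                      # five consecutive high readings
    particles = particle_filter_step(particles, transition, likelihood, 'high', rng)
p_fault = particles.count('fault') / len(particles)
```

In the proposed scheme, if the posterior mass on the most probable fault (here `p_fault`) does not reach the desired confidence, the classic FDI machinery with parametric identification is invoked on the reduced solution space.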

Rubén Morales-Menéndez, Ricardo Ramírez-Mendoza, Jim Mutch, Federico Guedea-Elizalde
Power Plant Operator Assistant: An Industrial Application of Factored MDPs

Markov decision processes (MDPs) provide a powerful framework for solving planning problems under uncertainty. However, it is difficult to apply them to real world domains due to complexity and representation problems: (i) the state space grows exponentially with the number of variables; (ii) a reward function must be specified for each state-action pair. In this work we tackle both problems and apply MDPs for a complex real world domain -combined cycle power plant operation. For reducing the state space complexity we use a factored representation based on a two–stage dynamic Bayesian network [13]. The reward function is represented based on the recommended optimal operation curve for the power plant. The model has been implemented and tested with a power plant simulator with promising results.

Alberto Reyes, Pablo H. Ibargüengoytia, Luis Enrique Sucar

Vision

Scene Modeling by ICA and Color Segmentation

In this paper, a method is proposed for the interpretation of outdoor natural images. It rapidly constructs a basic 2-D scene model that can be used by the visual systems on board autonomous vehicles. It is composed of several processes: color image segmentation, principal area detection, classification and verification of the final model. The regions provided by the segmentation phase are characterized by their color and texture. These features are compared and classified into predefined classes using the Support Vector Machines (SVM) algorithm. Independent Component Analysis (ICA) is used to reduce redundancy in the database and improve the recognition stage. Finally, a global scene model is obtained by merging the small regions belonging to the same class. The extraction of entities useful for navigation (like roads) from the final model is straightforward. This system has been intensively tested through experiments on sequences of color images of countryside scenes.

Juan Gabriel Aviña-Cervantes, Michel Devy
Non–parametric Registration as a Way to Obtain an Accurate Camera Calibration

We present SSD–ARC, a non–parametric registration technique, as an accurate way to calibrate a camera, and compare it with some parametric techniques. In the parametric case we obtain a set of thirteen parameters modelling the projective and distortion transformations of the camera; in the non–parametric case we obtain the displacement between pixel correspondences. We found the non–parametric camera calibration to be more accurate than the parametric techniques. Finally, we introduce a parametrization of the pixel correspondences obtained by the SSD–ARC algorithm and present an experimental comparison with some parametric calibration methods.

Félix Calderón, Leonardo Romero
A Probability-Based Flow Analysis Using MV Information in Compressed Domain

In this paper, we propose a method that utilizes the motion vectors (MVs) in an MPEG sequence as motion depicters for representing video content. We convert the MVs to a uniform MV set, independent of the frame type and the direction of prediction, and then use them as the motion depicter of each frame. To obtain such a uniform MV set, we propose a new motion analysis method using a Bi-directional Prediction-Independent Framework (BPIF). Our approach enables a frame-type-independent representation that normalizes temporal features including frame type, MB encoding and MVs, and it operates directly on the MPEG bitstream after VLC decoding. Experimental results show that our method has good performance, high validity, and low time consumption.

Nacwoo W. Kim, Taeyong Y. Kim, Jongsoo S. Choi
How Does the Hue Contribute to Construct Better Colour Features?

We explore the impact of including hue in a feature construction algorithm for colour target detection. Hue has a long-standing record as a good attribute in colour segmentation, so it was expected to strengthen features generated from RGB alone; moreover, it might open the door to inferring compact feature maps for skin detection. However, contrary to our expectations, the new features in which hue participates tend to be poor in terms of recall or precision. This result shows that (i) better features can be constructed without the costly hue, and (ii) unfortunately, a good feature map for skin detection remains elusive.
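For reference, the hue attribute in question is the standard angular component of the HSV transform of RGB, which is what makes it costlier than raw channel values. A minimal computation, using the standard library rather than the paper's own code:

```python
import colorsys

def hue_feature(r, g, b):
    """Hue in degrees [0, 360) from 8-bit RGB values.

    Undefined (returned as 0) for achromatic pixels, one of the
    practical drawbacks of hue as a segmentation attribute."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0
```

A feature constructor would add `hue_feature` as a candidate attribute alongside R, G and B and let the search combine them.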

Giovani Gomez Estrada, Eduardo Morales, Huajian Gao
Morphological Contrast Measure and Contrast Mappings: One Application to the Segmentation of Brain MRI

In this paper, the use of morphological contrast mappings and a method to quantify the contrast for segmenting magnetic resonance images (MRI) of the brain was investigated. In particular, contrast transformations were employed for detecting white matter in a frontal lobule of the brain. Since contrast mappings depend on several parameters (size, contrast, proximity criterion), a morphological method to quantify the contrast was proposed in order to compute the optimal parameter values. The contrast quantifying method, that employs the gradient luminance concept, enabled us to obtain an output image associated with a good visual contrast. Because the contrast mappings introduced in this article were defined under partitions generated by the flat zone notion, these transformations are connected. Therefore, the degradation of the output images by the formation of new contours was avoided. Finally, the ratio between white and grey matter was calculated and compared with manual segmentations.

Jorge D. Mendiola-Santibañez, Iván R. Terol-Villalobos, Antonio Fernández-Bouzas
Gaze Detection by Dual Camera and Dual IR-LED Illuminators

Gaze detection is the task of locating the position on a monitor where a user is looking. This paper presents a new and practical method for detecting that position. In general, the user tends to move both the head and the eyes in order to gaze at a certain monitor position. Previous research uses one wide-view camera, which can capture the user's whole face; however, the image resolution of such a camera is too low, and the fine movements of the user's eyes cannot be detected exactly. So we implement a gaze detection system with a dual camera system (a wide-view and a narrow-view camera). In order to locate the user's eye position accurately, the narrow-view camera has auto focusing/pan/tilt functionality based on the 3D facial feature positions detected by the wide-view camera. In addition, we use IR-LED illuminators in order to detect facial features and especially eye features. To overcome the problem of specular reflection on glasses, we use dual IR-LED illuminators and detect the accurate eye position while avoiding the specular reflection. Experimental results show that our real-time gaze detection system achieves an accuracy between the computed gaze positions and the real ones of about 3.44 cm RMS error.

Kang Ryoung Park, Jaihie Kim
Image Processing and Neural Networks for Early Detection of Histological Changes

A novel methodology for the characterisation of histological images taken from the microscopic analysis of cervix biopsies is outlined. First, the fundamentals of the malignancy process are reviewed in order to understand which parameters are significant. Then, the analysis methodology using equalisation and artificial neural networks is described, and the output images of each analysis step are shown. Finally, the results of the proposed analysis applied to example images are discussed.

José Ramírez-Niño, Miguel Ángel Flores, Carlos Ramírez, Victor Manuel Castaño
An Improved ICP Algorithm Based on the Sensor Projection for Automatic 3D Registration

Three-dimensional (3D) registration is the process of aligning range data sets from different views in a common coordinate system. In order to generate a complete 3D model, the data sets need to be refined after coarse registration. One of the most popular refinement techniques is the iterative closest point (ICP) algorithm, which starts with pre-estimated overlapping regions. This paper presents an improved ICP algorithm that can automatically register multiple 3D data sets from unknown viewpoints. The sensor projection, which represents the mapping of the 3D data into its associated range image, and a cross projection are used to determine the overlapping region of two range data sets. By combining the ICP algorithm with the sensor projection, we can perform automatic registration of multiple 3D sets without error-prone pre-procedures and without any mechanical positioning device or manual assistance. The experimental results demonstrate that the proposed method achieves more precise 3D registration of a pair of 3D data sets than previous methods.
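The ICP refinement loop that the paper improves upon can be sketched in its simplest form. This toy version works in 2-D, estimates translation only, and pairs every source point with its nearest target point; the paper's method additionally handles rotation and uses the sensor projection to restrict pairing to the overlapping region. The point sets are invented.

```python
import math

def icp_translation(source, target, iterations=10):
    """Stripped-down ICP: repeatedly pair each source point with its
    nearest target point and apply the mean offset of the pairs."""
    src = list(source)
    for _ in range(iterations):
        pairs = [(p, min(target, key=lambda q: math.dist(p, q))) for p in src]
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        src = [(x + dx, y + dy) for x, y in src]
    return src

target = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
source = [(0.3, 0.2), (1.3, 0.2), (0.3, 1.2)]   # target shifted by (0.3, 0.2)
aligned = icp_translation(source, target)
```

Because plain nearest-neighbour pairing fails when the views overlap only partially, restricting candidates via the sensor and cross projections, as the paper does, is what removes the need for pre-estimated overlaps.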

Sang-Hoon Kim, Yong-Ho Hwang, Hyun-Ki Hong, Min-Hyung Choi
Structure and Motion Recovery Using Two Step Sampling for 3D Match Move

Camera pose and scene geometry estimation is a fundamental requirement for match move, the insertion of synthetic 3D objects into real scenes. In order to automate this process, auto-calibration, which estimates the camera motion without prior calibration information, is needed. Most auto-calibration methods for multiple views involve bundle adjustment or non-linear minimization, a complex and difficult process. This paper presents two methods for recovering structure and motion from handheld image sequences: one is key-frame selection; the other rejects frames with large errors among the key-frames in absolute quadric estimation by LMedS (Least Median of Squares). The experimental results show that the proposed method achieves precise camera pose and scene geometry estimation without bundle adjustment.

Jung-Kak Seo, Yong-Ho Hwang, Hyun-Ki Hong
Multiscale Image Enhancement and Segmentation Based on Morphological Connected Contrast Mappings

This work presents a multiscale image approach for contrast enhancement and segmentation based on a composition of contrast operators. The contrast operators are built by means of the opening and closing by reconstruction. The operator that works on bright regions uses the opening and the identity as primitives, while the one working on the dark zones uses the closing and the identity as primitives. To select the primitives, a contrast criterion given by the connected tophat transformation is proposed. This choice enables us to introduce a well-defined contrast in the output image. By applying these operators by composition according to the scale parameter, the output image not only preserves a well-defined contrast at each scale, but also increases the contrast at finer scales. Because of the use of connected transformations to build these operators, the principal edges of the input image are preserved and enhanced in the output image. Finally, these operators are improved by applying an anamorphosis to the regions verifying the criterion.

Iván R. Terol-Villalobos
Active Object Recognition Using Mutual Information

In this paper, we present the development of an active object recognition system. Our system uses a mutual information framework to choose an optimal sensor configuration for recognizing an unknown object. The system builds a database of conditional probability density functions for a number of observed features over a discrete set of sensor configurations for a set of objects of interest. Using a sequential decision-making process, our system determines an optimal action (sensor configuration) that improves discrimination between the objects in the database. This procedure is iterated until a decision about the class of the unknown object can be made. Actions consist of pan, tilt and zoom values for an active camera; features include the mean color of a patch over a region of the image. We tested the system on a set of 8 different soda bottles and obtained a recognition rate of about 95%. On average, a sequence of 4 actions was needed before a decision could be made.
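The action-selection step can be sketched as follows; this is a toy version of the idea, not the authors' system. `best_action` (an illustrative name) picks the sensor configuration that minimizes the expected posterior entropy of the class, which is equivalent to maximizing the mutual information between class and observation:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def best_action(prior, likelihoods):
    """Return the action index that maximises I(C; O | a).

    likelihoods[a][c][o] = P(observation o | class c, action a).
    Minimising expected posterior entropy over observations is the same
    as maximising mutual information, since the prior entropy is fixed.
    """
    def expected_posterior_entropy(a):
        h = 0.0
        for o in range(len(likelihoods[a][0])):
            p_o = sum(prior[c] * likelihoods[a][c][o] for c in range(len(prior)))
            if p_o == 0:
                continue
            post = [prior[c] * likelihoods[a][c][o] / p_o for c in range(len(prior))]
            h += p_o * entropy(post)  # weight by probability of seeing o
        return h
    return min(range(len(likelihoods)), key=expected_posterior_entropy)

# Two classes, two actions. Action 0 is uninformative; action 1 separates them.
prior = [0.5, 0.5]
lik = [
    [[0.5, 0.5], [0.5, 0.5]],   # action 0: same observation model for both classes
    [[0.9, 0.1], [0.1, 0.9]],   # action 1: discriminative
]
print(best_action(prior, lik))  # → 1
```

In the full system the posterior after the chosen observation becomes the prior for the next round, until one class dominates.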

Felipe Trujillo-Romero, Victor Ayala-Ramírez, Antonio Marín-Hernández, Michel Devy
An Approach to Automatic Morphing of Face Images in Frontal View

Image metamorphosis, commonly known as morphing, is a powerful tool for visual effects that consists of the fluid transformation of one digital image into another. There are many techniques for image metamorphosis, but in all of them there is a need for a person to supply the correspondence between the features in the source image and target image. In this paper we describe a method to perform the metamorphosis of face images in frontal view with uniform illumination automatically, using a generic model of a face and evolution strategies to find the features in both face images.

Vittorio Zanella, Olac Fuentes

Evolutionary Computation

A Study of the Parallelization of a Coevolutionary Multi-objective Evolutionary Algorithm

In this paper, we present a parallel version of a multi-objective evolutionary algorithm that incorporates some coevolutionary concepts. The algorithm was previously developed by the authors. Two approaches, both based on a master-slave scheme, were adopted to parallelize the algorithm: one uses Pthreads (shared memory) and the other uses MPI (distributed memory). We conduct a small comparative study to analyze the impact of parallelization on performance. Our results indicate that both parallel versions substantially improve the execution times of the algorithm with respect to the serial version, while preserving the quality of the results obtained.
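The master-slave structure shared by both parallel versions can be sketched as follows; the helper names and toy objectives are illustrative, with worker threads standing in for Pthreads workers or MPI slaves:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_population(population, objectives, workers=4):
    """Master-slave sketch: the master farms out objective evaluations
    to workers and gathers the fitness vectors in population order."""
    def fitness(x):
        return tuple(f(x) for f in objectives)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))

# Two objectives of a tiny bi-objective toy problem.
f1 = lambda x: x ** 2
f2 = lambda x: (x - 2) ** 2
print(evaluate_population([0.0, 1.0, 2.0], [f1, f2]))
# → [(0.0, 4.0), (1.0, 1.0), (4.0, 0.0)]
```

Objective evaluation is usually the dominant cost in evolutionary algorithms, which is why this simple farming scheme pays off.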

Carlos A. Coello Coello, Margarita Reyes Sierra
Reactive Agents to Improve a Parallel Genetic Algorithm Solution

One of the distributed artificial intelligence objectives is the decentralization of control. Multi-agent architectures distribute the control among two or more agents, which will be in charge of different events. In this paper the design of a parallel Multi-agent architecture for genetic algorithms is described, using a bottom-up behavior design and reactive agents. Such design tries to achieve the improvement solution of parallel genetic algorithms. The purpose of incorporating a reactive behavior in the parallel genetic algorithms is to improve the overall performance by up-dating the sub-populations according to the general behavior of the algorithm avoiding getting stuck in local minima. Two kinds of experiments were conducted for each one of the algorithms and the results obtained with both experiments are shown.

Ana Lilia Laureano-Cruces, José Manuel de la Cruz-González, Javier Ramírez-Rodríguez, Julio Solano-González
Simple Feasibility Rules and Differential Evolution for Constrained Optimization

In this paper, we propose a differential evolution algorithm to solve constrained optimization problems. Our approach uses three simple selection criteria based on feasibility to guide the search toward the feasible region. The proposed approach does not require any extra parameters other than those normally adopted by the Differential Evolution algorithm. The approach was validated using test functions from a well-known benchmark commonly adopted to evaluate constraint-handling techniques used with evolutionary algorithms. The results obtained are very competitive with respect to other constraint-handling techniques representative of the state of the art in the area.
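Feasibility-based selection criteria of this kind are commonly formulated as: a feasible point beats an infeasible one; between feasible points the better objective wins; between infeasible points the smaller violation wins. A minimal sketch of differential evolution combined with such rules might look like this (toy problem and illustrative parameter values, not the paper's benchmark):

```python
import random

def de_feasibility(f, g, bounds, pop_size=20, gens=200, F=0.7, CR=0.9, seed=1):
    """DE/rand/1/bin with feasibility rules (a sketch).
    g(x) returns the total constraint violation (0 when feasible)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    def better(a, b):
        va, vb = g(a), g(b)
        if va == 0 and vb == 0:
            return f(a) <= f(b)    # both feasible: compare objectives
        if va == 0 or vb == 0:
            return va == 0         # feasible beats infeasible
        return va <= vb            # both infeasible: smaller violation wins

    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clamp to bounds
            if better(trial, pop[i]):
                pop[i] = trial
    feas = [x for x in pop if g(x) == 0]
    return min(feas, key=f) if feas else min(pop, key=g)

# Minimise x^2 + y^2 subject to x + y >= 1 (optimum at x = y = 0.5, f = 0.5).
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: max(0.0, 1.0 - (x[0] + x[1]))
best = de_feasibility(f, g, [(-2, 2), (-2, 2)])
print(round(f(best), 2))  # → 0.5
```

Note that no penalty weights are needed, which matches the paper's claim of requiring no extra parameters beyond DE's own.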

Efrén Mezura-Montes, Carlos A. Coello Coello, Edy I. Tun-Morales
Symbolic Regression Problems by Genetic Programming with Multi-branches

This work explores symbolic regression problems by means of Genetic Programming (GP). Symbolic regression is a widely used method for mathematical function approximation. Previous GP-based works have dealt with this problem, but using Koza's GP approach. This paper introduces a novel GP encoding based on multiple branches. To illustrate the proposed multi-branch representation, a set of test equations has been selected. The results presented show the advantages of this novel multi-branch version of GP.
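A multi-branch individual's evaluation could be sketched as below; the (coefficient, function) encoding is a deliberate simplification for illustration, since the paper's branches would be evolved expression trees:

```python
def eval_individual(branches, x):
    """Evaluate a multi-branch GP individual: each branch is evaluated
    independently and the branch outputs are linearly combined, in
    contrast with Koza-style single-tree GP."""
    return sum(c * f(x) for c, f in branches)

# Represents x^2 + 2x with two branches.
ind = [(1.0, lambda x: x * x), (2.0, lambda x: x)]
print([eval_individual(ind, x) for x in (0, 1, 2)])  # → [0.0, 3.0, 8.0]
```

Splitting the model into branches lets crossover and mutation act on one sub-expression without disturbing the others.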

Carlos Oliver Morales, Katya Rodríguez Vázquez
A Preprocessing That Combines Heuristic and Surrogate Constraint Analysis to Fix Variables in TSP

A preprocessing procedure for the traveling salesman problem is presented. It uses a guided local search, defined in terms of a neighborhood structure, to obtain a feasible solution (UB), together with the surrogate constraint and constraint pairing techniques of Osorio and Glover [18], [20]. The surrogate constraint is obtained by weighting the original problem constraints by their associated dual values in the linear relaxation of the problem. The objective function is turned into a constraint bounded above by the feasible solution (UB). The surrogate constraint is paired with this constraint to obtain a combined equation in which negative variables are replaced by complemented variables; the resulting constraint is used to fix variables to zero or one before solving the problem.
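The final variable-fixing step can be illustrated with a toy combined constraint. Assuming, as the text describes, that negative coefficients have already been replaced via complemented variables (the function name is illustrative):

```python
def fix_binary_vars(coeffs, rhs):
    """Given a combined constraint sum_j c_j * y_j <= rhs with all
    c_j >= 0 and binary y_j, any variable whose coefficient alone
    exceeds the right-hand side is forced to 0."""
    return {j: 0 for j, c in enumerate(coeffs) if c > rhs}

# Combined constraint 3*y0 + 9*y1 + 2*y2 + 15*y3 <= 8 forces y1 = y3 = 0.
print(fix_binary_vars([3, 9, 2, 15], 8))  # → {1: 0, 3: 0}
```

Fixing a complemented variable to 0 fixes the original variable to 1, which is how the procedure fixes variables at both bounds.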

M. Lama, D. Pinto
An Evolutionary Algorithm for Automatic Spatial Partitioning in Reconfigurable Environments

This paper introduces a CAD tool, ASPIRE (Automatic Spatial Partitioning In Reconfigurable Environments), for the spatial partitioning problem on multi-FPGA architectures. The tool takes as input an HDL (Hardware Description Language) model of the application along with user-specified constraints, automatically generates a task graph G, partitions G based on the user-specified constraints, and maps the blocks of the partitions onto the different FPGAs (Field Programmable Gate Arrays) of the given multi-FPGA architecture, all in a single shot. ASPIRE uses an evolutionary approach for the partitioning step. It handles the major part of the partitioning at the behavioral HDL level, making it scalable to larger, more complex designs. ASPIRE was successfully employed to spatially partition a reasonably large cryptographic application involving a 1024-bit modular exponentiation, and to map it onto a network of nine ACEX1K-based Altera EP1K30QC208-1 FPGAs.

P. Pratibha, B. S. N. Rao, A. Muthukaruppan, S. Suresh, V. Kamakoti
GA with Exaptation: New Algorithms to Tackle Dynamic Problems

We propose new evolutionary algorithms with exaptive properties to tackle dynamic problems. Exaptation is a new theory with two implicit procedures: the retention and the reuse of old solutions. Retaining a solution involves some kind of memory, and reusing a solution implies adapting it to the new problem. The first proposed algorithm uses seeding techniques to reuse solutions; the second uses memory together with seeding techniques to retain and reuse solutions, respectively. Both algorithms are compared with a simple genetic algorithm (SGA) and with an SGA with two populations, where the first population is a memory of solutions and the second searches for new solutions. The Moving Peaks Benchmark (MPB) was used to test every algorithm.
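The retain-and-reuse mechanism can be sketched as a reseeding step triggered when the environment changes; function and parameter names here are illustrative, not the paper's algorithms:

```python
def reseed(population, memory, fitness, elite=3):
    """Exaptation-style reuse (sketch): when the environment changes,
    the best stored solutions are re-inserted ("seeded") into the
    population, replacing its worst members, to be re-adapted by the GA."""
    pop = sorted(population, key=fitness, reverse=True)       # best first
    seeds = sorted(memory, key=fitness, reverse=True)[:elite]  # best memories
    return seeds + pop[: len(pop) - len(seeds)]

# After a change, fitness is distance-to-7; memory holds old optima.
fit = lambda x: -abs(x - 7)
pop = [0, 1, 2, 3]
mem = [6, 9, 7]
print(sorted(reseed(pop, mem, fit)))  # → [3, 6, 7, 9]
```

The key point is that stored solutions are evaluated under the *new* fitness before seeding, so only memories still useful after the change are reused.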

Luis Torres-T

Modeling and Intelligent Control

Intelligent Control Algorithm for Steam Temperature Regulation of Thermal Power Plants

Artificial intelligence techniques have been developed through extensive practical implementations in industry in the form of intelligent control. One of the most successful expert-system techniques, applied to a wide range of control applications, has been fuzzy logic. This paper shows the implementation of a fuzzy logic controller (FLC) to regulate the steam temperature of a 300 MW thermal power plant. The proposed FLC was applied to regulate both superheated and reheated steam temperatures. The results show that the fuzzy controller performs better than advanced model-based controllers, such as Dynamic Matrix Control (DMC), or a conventional PID controller. The main benefits are a reduction of the overshoot and tighter regulation of the steam temperatures. Fuzzy logic controllers can achieve good results for complex nonlinear processes with dynamic variation or long delay times.

Alfredo Sanchez-Lopez, Gustavo Arroyo-Figueroa, Alejandro Villavicencio-Ramirez
A Fed-Batch Fermentation Process Identification and Direct Adaptive Neural Control with Integral Term

A nonlinear mathematical model of a fed-batch fermentation process of Bacillus thuringiensis (Bt.) is derived, and the model obtained is validated with experimental data. Identification and direct adaptive neural control systems, with and without an integral term, are proposed. The system contains a neural identifier and a neural controller, both based on the recurrent trainable neural network model. The applicability of both the proportional and the integral-term direct adaptive neural control schemes is confirmed by comparative simulation results, also with respect to λ-tracking control; all exhibit good convergence, but the I-term control can compensate for a constant offset while the proportional controls cannot.

Ieroham S. Baruch, Josefina Barrera-Cortés, Luis Alberto Hernández
A Fuzzy-Neural Multi-model for Mechanical Systems Identification and Control

The paper proposes a new fuzzy-neural recurrent multi-model for the identification and state estimation of complex nonlinear mechanical plants with friction. The parameters and states of the local recurrent neural network models are used to design local direct and indirect adaptive trajectory-tracking control systems. The designed local control laws are coordinated by a fuzzy rule-based control system. The applicability of the proposed intelligent control system is confirmed by simulation and comparative experimental results, in which good convergence results are obtained.

Ieroham S. Baruch, Rafael Beltran L, Jose-Luis Olivares, Ruben Garrido
Tuning of Fuzzy Controllers

Fuzzy controllers could be broadly used in control processes thanks to their good performance. One disadvantage, however, is the problem of tuning them, which involves handling a great number of variables: the ranges of the membership functions, their shape, the percentage of overlap among them, their number, and the design of the rule base, among others. The problem worsens for multivariable systems, since the number of parameters grows exponentially with the number of variables. Solving the tuning problem means obtaining a fuzzy system that decreases the settling time of the process to which it is applied. In this work a very simple algorithm is presented for tuning fuzzy controllers using only one variable to adjust the performance of the system. The results are obtained by considering the relationship that exists between the membership functions and the settling time.
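A single-variable tuning scheme of the kind described can be sketched as follows, where one parameter `k` scales the supports of all membership functions at once (an illustrative toy controller, not the paper's algorithm):

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error, k):
    """One-parameter fuzzy P-like controller (illustrative sketch).

    The single tuning variable k stretches every membership function's
    support: wider sets give a gentler response (longer settling time),
    narrower sets a more aggressive one."""
    # Sets NEG / ZERO / POS over the error universe, all scaled by k.
    sets = {"NEG": (-2 * k, -k, 0.0), "ZERO": (-k, 0.0, k), "POS": (0.0, k, 2 * k)}
    out = {"NEG": -1.0, "ZERO": 0.0, "POS": 1.0}   # rule consequents (singletons)
    num = den = 0.0
    for name, (a, b, c) in sets.items():
        w = tri(error, a, b, c)          # rule firing strength
        num += w * out[name]
        den += w
    return num / den if den else 0.0     # weighted-average defuzzification

# Narrower sets (smaller k) react more aggressively to the same error.
print(fuzzy_control(0.5, k=1.0), fuzzy_control(0.5, k=2.0))  # → 0.5 0.25
```

Tuning then reduces to a one-dimensional search over `k` against the measured settling time, instead of adjusting every membership parameter separately.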

Eduardo Gómez-Ramírez, Armando Chávez-Plascencia
Predictive Control of a Solar Power Plant with Neuro-Fuzzy Identification and Evolutionary Programming Optimization

The paper presents an intelligent predictive controller to govern the dynamics of a solar power plant. This plant is a highly nonlinear process; therefore, a nonlinear predictive method, e.g., neuro-fuzzy predictive control, can be a better match for governing its dynamics. In the proposed method, a neuro-fuzzy model predicts the future behavior of the system over a certain prediction horizon, while an optimizer based on evolutionary programming (EP) determines the input sequence; the first value of this sequence is applied to the plant. Using the proposed intelligent predictive controller, the outlet temperature tracking problem in a solar power plant is investigated. Simulation results demonstrate the effectiveness and superiority of the proposed approach.

Mahdi Jalili-Kharaajoo
Modeling of a Coupled Industrial Tank System with ANFIS

Since liquid tank systems are commonly used in industrial applications, system-related requirements result in many modeling and control problems because of their interaction with other process control elements. Modeling is one of the most important stages in the design of a control system. Although nonlinear tank problems have been widely addressed in classical system dynamics, when designing intelligent control systems the model used for simulation should reflect the whole behavior of the real system to be controlled. In this study, a coupled, interacting, nonlinear liquid-level tank system is modeled using ANFIS (Adaptive-Network-Based Fuzzy Inference System); the model will later be used to design and apply a fuzzy-PID controller to this system. First, a mathematical model of the system is established; then, data gathered from this model are employed to create an ANFIS model of the system. The mathematical and ANFIS models are compared, model consistency is discussed, and the flexibility of ANFIS modeling is shown.

S. N. Engin, J. Kuvulmaz, V. E. Ömürlü

Neural Networks

Robust Bootstrapping Neural Networks

Artificial neural networks (ANN) have been used as predictive systems in a variety of application domains such as science, engineering and finance; it is therefore very important to be able to estimate the reliability of a given model. Bootstrap is a computer-intensive method for estimating the distribution of a statistical estimator based on an imitation of the probabilistic structure of the data-generating process and the information contained in a given set of random observations. Bootstrap plans can be used to estimate the uncertainty associated with a value predicted by a feedforward neural network. The available bootstrap methods for ANN assume independent random samples that are free of outliers. Unfortunately, the existence of outliers in a sample has serious effects: some resamples may have a higher contamination level than the initial sample, and the model is affected because it is sensitive to these deviations, resulting in poor performance. In this paper we investigate a robust bootstrap method for ANN that is resistant to the presence of outliers and is computationally simple. We illustrate our technique on synthetic and real datasets, and report confidence intervals for neural network predictions.
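The core idea — make the statistic inside the bootstrap loop robust so contaminated resamples do not blow up the interval — can be sketched on a plain statistic (the paper applies this to neural network predictions; the percentile-interval setup below is illustrative):

```python
import random, statistics

def bootstrap_ci(data, estimator, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for any estimator."""
    rng = random.Random(seed)
    stats = sorted(
        estimator([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Clean data centred at 10 with one gross outlier.
data = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9, 500.0]

mean_lo, mean_hi = bootstrap_ci(data, statistics.mean)
med_lo, med_hi = bootstrap_ci(data, statistics.median)
# The mean's interval is dragged far from 10 by resamples that repeat
# the outlier; the median's interval stays tight around 10.
print(med_hi - med_lo < mean_hi - mean_lo)  # → True
```

Resamples that include several copies of the outlier devastate the non-robust estimator, which is exactly the contamination-amplification effect the abstract describes.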

Héctor Allende, Ricardo Ñanculef, Rodrigo Salas
A Kernel Method for Classification

Kernel Maximum Likelihood Hebbian Learning Scale Invariant Maps is a novel technique developed to cluster complex data effectively and efficiently, and is characterised by converging remarkably quickly. The combination of the Maximum Likelihood Hebbian Learning Scale Invariant Map and the kernel space provides a very smooth, scale-invariant quantisation which can be used as a clustering technique. The method has been applied to the analysis of an oceanographic problem.

Donald MacDonald, Jos Koetsier, Emilio Corchado, Colin Fyfe, Juan Corchado
Applying Genetic and Symbolic Learning Algorithms to Extract Rules from Artificial Neural Networks

Several research works have shown that Artificial Neural Networks — ANNs — have an appropriate inductive bias for several domains, since they can learn any input-output mapping, i.e., ANNs have the universal approximation property. Although symbolic learning algorithms have a less flexible inductive bias than ANNs, they are needed when a good understanding of the decision process is essential, since symbolic ML algorithms express the induced knowledge using symbolic structures that can be interpreted and understood by humans. ANNs, on the other hand, lack the capability to explain their decisions, since the knowledge is encoded as real-valued weights and biases of the network, an encoding that is difficult for humans to interpret. Aiming to take advantage of both approaches, this work proposes a method that extracts symbolic knowledge, expressed as decision rules, from ANNs. The proposed method combines the knowledge induced by several symbolic ML algorithms through the application of a Genetic Algorithm — GA. The method is experimentally analyzed in a number of application domains. Results show that it is able to extract symbolic knowledge with high fidelity to the trained ANNs. The proposed method is also compared with TREPAN, another method for extracting knowledge from ANNs, showing promising results.

Claudia R. Milaré, Gustavo E. A. P. A. Batista, André Carlos P. L. F. de Carvalho, Maria C. Monard
Combining MLP and RBF Neural Networks for Novelty Detection in Short Time Series

Novelty detection in time series is an important problem with applications in different domains such as machine failure detection, fraud detection and auditing. In many problems, short time series occur frequently. In previous works we proposed a novelty detection approach for short time series that uses RBF neural networks to classify time series windows as normal or novel; additionally, both normal and novelty random patterns are added to the training sets to improve classification performance. In this work we consider the use of MLP networks as classifiers. We then analyze the impact of (a) the generation of validation and training sets, and (b) the training method. We carried out a number of experiments using four real-world time series, whose results show that, with a good selection of these alternatives, MLPs perform better than RBFs. Finally, we discuss the use of MLP and MLP/RBF committee machines in conjunction with our previous method. Experimental results show that these committee classifiers outperform single MLP and RBF classifiers.

A. L. I. Oliveira, F. B. L. Neto, S. R. L. Meira
Treatment of Gradual Knowledge Using Sigma-Pi Neural Networks

This work belongs to the field of hybrid systems for Artificial Intelligence (AI). It concerns the study of "gradual" rules, which make it possible to represent correlations and modulation relations between variables. We propose a set of characteristics to identify these gradual rules, and a classification of them into "direct" rules and "modulation" rules. In neurobiology, pre-synaptic neuronal connections lead to the gradual processing and modulation of cognitive information. Taking such neurobiological data as a starting point, we propose, in the field of connectionism, the use of "Sigma-Pi" connections to allow gradual processing in AI systems. In order to represent the modulation processes between the inputs of a network as faithfully as possible, we have created a new type of connection, "Asymmetric Sigma-Pi" (ASP) units. These models have been implemented within a pre-existing hybrid neuro-symbolic system, INSS, based on connectionist networks of the "Cascade Correlation" type. The resulting hybrid system, INSS-Gradual, allows learning from example bases containing gradual modulation relations. ASP units facilitate the extraction of gradual rules from a neural network.

Gerardo Reyes Salgado, Bernard Amy

Robotics

A Method to Obtain Sensing Requirements in Robotic Assemblies by Geometric Reasoning

This paper presents a method for determining the sensing requirements of robotic assemblies from a geometrical analysis of the critical contact-state transitions produced among mating parts during the execution of nominal assembly plans. The goal is to support the reduction of real-life uncertainty through the recognition of assembly tasks that require force and visual feedback operations. The assembly tasks are decomposed into assembly skill primitives based on transitions described by a taxonomy of contact relations. Force feedback operations are described as a set of force compliance skills that are systematically associated with the assembly skill primitives. To determine the visual feedback operations and the type of visual information needed, a backward propagation process over geometrical constraints is used. This process derives new visual feedback requirements for the tasks from the discovery of direct and indirect insertion and contact dependencies among the mating parts. A computational implementation of the method was developed and validated with test cases containing assembly tasks covering all combinations of sensing requirements. The program behaved as expected in every case.

Santiago E. Conant-Pablos, Katsushi Ikeuchi, Horacio Martínez-Alfaro
Intelligent Task Level Planning for Robotic Assembly: Issues and Experiments

Today's industrial robots use programming languages that do not allow learning and task-knowledge acquisition, and this is probably one of the reasons for their restricted use in complex tasks in unstructured environments. In this paper, results on the implementation of a novel task planner using a 6-DOF industrial robot, as an alternative to overcome this limitation, are presented. Different Artificial Neural Network (ANN) models were first assessed to evaluate their learning capabilities, stability and feasibility of implementation in the planner. Simulations showed that Adaptive Resonance Theory (ART) outperformed the other connectionist models during the tests, and this model was therefore chosen. This work describes initial results on the implementation of the planner, showing that the manipulator can acquire the manipulative skills needed to assemble mechanical components using only a few clues.

Jorge Corona Castuera, Ismael Lopez-Juarez
Wrapper Components for Distributed Robotic Systems

Nowadays, there is a plethora of robotic systems from different vendors, with different characteristics, that work on specific tasks. Unfortunately, most robotic operating systems come with a closed control architecture. This makes it a challenge to integrate these systems with other robotic components, such as vision systems or other types of robots. In this paper, we propose an integration methodology to create or enhance robotic systems by combining tools from the computer vision, planning systems and distributed computing areas. In particular, we propose using the CORBA specification to create Wrapper Components: object-oriented modules that create an abstract interface for a specific class of hardware or software components. Furthermore, they have several connectivity and communication properties that make them easy to interconnect with each other.

Federico Guedea-Elizalde, Rubén Morales-Menéndez, Rogelio Soto, Insop Song, Fakhri Karray
Fuzzy Sliding Mode Control of Robotic Manipulators Based on Genetic Algorithms

In this paper, a fuzzy sliding mode controller based on genetic algorithms is designed to govern the dynamics of rigid robot manipulators. When a fuzzy sliding mode controller is designed, there is no criterion for reaching an optimal design. We therefore formulate the design of a fuzzy sliding mode controller for general nonlinear control systems as an optimization problem, and apply optimal search algorithms and genetic algorithms to find the optimal rules and membership functions of the controller. The proposed approach has the merit of determining the optimal structure and the inference rules of the fuzzy sliding mode controller simultaneously. Using the proposed approach, the tracking problem of a two-degree-of-freedom rigid robot manipulator is studied. Simulation results for the closed-loop system with the proposed controller show its effectiveness.

Mahdi Jalili-Kharaajoo, Hossein Rouhani
Collective Behavior as Assembling of Spatial Puzzles

This paper describes how collective behavior can be achieved using simple mechanisms based on local information and low-level cognition. Collective behavior is modeled and analyzed from the spatial point of view. Robots have a set of internal tendencies, such as association and repulsion, that enable them to interact with other robots. Each robot has a space around its body that represents a piece of the puzzle. The robots’ goal is to find other pieces of the puzzle, associate with them and remain associated for as long as possible. Experiments on queuing using this puzzle-like mechanism are analyzed.

Angélica Muñoz Meléndez, Alexis Drogoul, Pierre-Emmanuel Viel
Towards Derandomizing PRM Planners

Probabilistic roadmap methods (PRM) have been successfully applied to motion planning for robots with many degrees of freedom. Many recent PRM approaches have demonstrated improved performance by concentrating samples in a nonuniform way. This work replaces random sampling with deterministic sampling. We present several implementations of PRM-based planners (multiple-query, single-query and Lazy PRM) and lattice-based roadmaps. Deterministic sampling can be used in the same way as random sampling. Our work can be seen as an important part of the research on uniform sampling. Experimental results show the performance advantages of our approach.
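A standard way to derandomize a PRM sampler is a low-discrepancy sequence such as Halton's; whether this matches the paper's exact generator is an assumption, but it illustrates how deterministic samples drop in for uniform random configurations:

```python
def halton(index, base):
    """Radical inverse of `index` in `base` — the van der Corput
    sequence, the 1-D building block of the Halton sequence."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_points(n, bases=(2, 3)):
    """First n deterministic 2-D samples in [0, 1)^2; a drop-in
    replacement for the uniform random configurations a PRM draws
    (one coprime base per dimension)."""
    return [tuple(halton(i, b) for b in bases) for i in range(1, n + 1)]

pts = halton_points(4)
print([(round(x, 3), round(y, 3)) for x, y in pts])
# → [(0.5, 0.333), (0.25, 0.667), (0.75, 0.111), (0.125, 0.444)]
```

Unlike random sampling, the sequence is reproducible and fills the configuration space evenly at every prefix length, which is what makes deterministic PRM variants attractive.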

Abraham Sánchez, René Zapata
Backmatter
Metadata
Title
MICAI 2004: Advances in Artificial Intelligence
Edited by
Raúl Monroy
Gustavo Arroyo-Figueroa
Luis Enrique Sucar
Humberto Sossa
Copyright Year
2004
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-24694-7
Print ISBN
978-3-540-21459-5
DOI
https://doi.org/10.1007/b96521