
2011 | Book

Advances in Artificial Intelligence

10th Mexican International Conference on Artificial Intelligence, MICAI 2011, Puebla, Mexico, November 26 - December 4, 2011, Proceedings, Part I

Edited by: Ildar Batyrshin, Grigori Sidorov

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The two-volume set LNAI 7094 and LNAI 7095 constitutes the refereed proceedings of the 10th Mexican International Conference on Artificial Intelligence, MICAI 2011, held in Puebla, Mexico, in November/December 2011. The 96 revised papers presented were carefully reviewed and selected from numerous submissions. The first volume includes 50 papers representing the current main topics of interest for the AI community and their applications. The papers are organized in the following topical sections: automated reasoning and multi-agent systems; problem solving and machine learning; natural language processing; robotics, planning and scheduling; and medical applications of artificial intelligence.

Table of Contents

Frontmatter

Automated Reasoning and Multi-Agent Systems

Case Studies on Invariant Generation Using a Saturation Theorem Prover

Automatic understanding of the intended meaning of computer programs is a very hard problem, requiring intelligence and reasoning. In this paper we evaluate a program analysis method, called symbol elimination, that uses first-order theorem proving techniques to automatically discover non-trivial program properties. We discuss implementation details of the method, present experimental results, and discuss the relation of the program properties obtained by our implementation and the intended meaning of the programs used in the experiments.

Kryštof Hoder, Laura Kovács, Andrei Voronkov
Characterization of Argumentation Semantics in Terms of the MM^r Semantics

Argumentation theory studies the fundamental mechanism humans use in argumentation and explores ways to implement this mechanism on computers. Dung's approach, presented in [9], is a unifying framework which has played an influential role in argumentation research. In this paper, we show that a logic programming semantics, called MM^r, can be used to characterize the preferred argumentation semantics defined by Dung in [9]. The MM^r semantics [12] is based on the minimal model semantics. The characterization of the preferred argumentation semantics by the MM^r semantics suggests a new perception of it in terms of logic foundations.

Mauricio Osorio, José Luis Carballido, Claudia Zepeda, Zenaida Cruz
Learning Probabilistic Description Logics: A Framework and Algorithms

Description logics have become a prominent paradigm in knowledge representation (particularly for the Semantic Web), but they typically do not include explicit representation of uncertainty. In this paper, we propose a framework for automatically learning a Probabilistic Description Logic from data. We argue that one must learn both concept definitions and probabilistic assignments. We also propose algorithms that do so and evaluate these algorithms on real data.

José Eduardo Ochoa-Luna, Kate Revoredo, Fábio Gagliardi Cozman
Belief Merging Using Normal Forms

Belief merging aims to conciliate multiple, possibly inconsistent, belief bases into a consistent common belief base. To handle inconsistency, several operators have been proposed, but most of them do not consider inconsistent bases. PS-Merge is an alternative merging method that uses the notion of Partial Satisfiability and allows us to take inconsistent bases into account. PS-Merge requires the bases to be represented as DNF formulas; nevertheless, many practical problems are more easily represented in CNF. The aim of this paper is to extend the notion of Partial Satisfiability in order to consider bases represented as CNF formulas. Moreover, we consider prime normal forms in order to define a method that allows us to implement PS-Merge for difficult theories. We also show that once the belief bases are represented as sets of normal forms, PS-Merge is polynomial.

Pilar Pozos-Parra, Laurent Perrussel, Jean Marc Thevenin
Toward Justifying Actions with Logically and Socially Acceptable Reasons

This paper formalizes argument-based reasoning for actions supported by believable reasons in terms of nonmonotonic consequences and desirable reasons in terms of Pareto optimality and maximizing social welfare functions. Our unified approach gives a four-layer practical argumentation framework structured with a propositional modal language with defaults and defeasible inference rules associated with practical reasoning. We show that the unified argument-based reasoning justifies an argument whose conclusion is supported by Pareto optimal, social welfare maximizing and nonmonotonic consequence reasons. Our formalization contributes to extend argument-based reasoning so that it can formally combine reasoning about logical believability and social desirability by benefiting from economic notions.

Hiroyuki Kido, Katsumi Nitta
A Complex Social System Simulation Using Type-2 Fuzzy Logic and Multiagent System

The need for better representations of complex systems, such as social systems, has led to the growing acceptance of new simulation techniques; one such technique is multi-agent systems. Fuzzy logic, and particularly type-2 fuzzy logic, is likewise being adopted to represent the uncertainty these systems require. A system with three different types of agents is presented as a case study. Each agent is assigned a role with specific goals to be achieved both individually and as a team; success or failure is determined by group performance rather than individual achievement. The environment, or context, is also taken into account as another type of agent. Fuzzy inference systems are defined for each of the agents to represent the interpretation of concepts.

Dora-Luz Flores, Manuel Castañón-Puga, Carelia Gaxiola-Pacheco
Computing Mobile Agent Routes with Node-Wise Constraints in Distributed Communication Systems

A basic problem in the quality-of-service (QoS) analysis of multiagent distributed systems is to find optimal routes for the mobile agents that incrementally fuse the data as they visit hosts in the distributed system. The system is modeled as a directed acyclic graph in which the nodes represent hosts and the edges represent links between them. Each edge is assigned a cost (or benefit) and weights that represent link delay, reliability, or other QoS parameters. The agent scheduling problem is viewed as a constrained routing problem in which a maximum-benefit (or minimum-cost) route connecting the source and the destination subject to QoS constraints is to be found. We study approximation algorithms called 'fully polynomial time approximation schemes' (FPTAS) for solving the problem. We suggest an accelerating technique that improves known FPTASs, e.g., the algorithms of Hassin (1992), Camponogara & Shima (2010), and Elalouf et al. (2011), and present new FPTASs.
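
For concreteness, the following is a toy sketch of the exact pseudo-polynomial dynamic program that FPTASs of this kind scale and round: maximize benefit from source to sink in a DAG subject to a delay budget. The graph, benefits, delays, and budget are illustrative assumptions; the cited algorithms add the scaling/rounding and the acceleration.

```python
from math import inf

# node -> list of (next_node, benefit, delay); node ids are topologically ordered
edges = {0: [(1, 4, 2), (2, 1, 1)],
         1: [(3, 3, 2)],
         2: [(3, 5, 3)],
         3: []}
BUDGET = 5

# best[v][d] = max benefit of a path from node 0 to v with total delay <= d
best = {v: [-inf] * (BUDGET + 1) for v in edges}
best[0] = [0] * (BUDGET + 1)

for u in sorted(edges):                   # process nodes in topological order
    for v, b, dl in edges[u]:
        for d in range(dl, BUDGET + 1):
            best[v][d] = max(best[v][d], best[u][d - dl] + b)

print(best[3][BUDGET])                    # -> 7, via the route 0 -> 1 -> 3
```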

Amir Elalouf, Eugene Levner, T. C. Edwin Cheng
Collaborative Redundant Agents: Modeling the Dependences in the Diversity of the Agents’ Errors

As computing becomes pervasive, there are increasing opportunities for building collaborative multiagent systems that make use of multiple sources of knowledge and functionality for validation and reliability improvement purposes. However, there is no established method to combine the agents' contributions synergistically. Independence is usually assumed when integrating contributions from different sources. In this paper, we present a domain-independent model for representing dependences among agents. We discuss the influence that dependence-based confidence determination might have on the results provided by a group of collaborative agents. We show that it is theoretically possible to obtain higher accuracy than that obtained under the assumption of independence among the agents. We empirically evaluate the effectiveness of a collaborative multiagent system in the presence of dependences among the agents, and analyze the effects of incorrect confidence integration assumptions.

Laura Zavala, Michael Huhns, Angélica García-Vega
Strategy Patterns Prediction Model (SPPM)

Multi-agent systems are broadly known for being able to simulate real-life situations which require the interaction and cooperation of individuals. Opponent modeling can be used along with multi-agent systems to model complex situations such as competitions like soccer games. In this paper, a model for predicting opponent moves is presented. The model is built around an offline step (learning phase) and an online one (execution phase). The offline step gathers and analyses previous experiences, while the online step uses the data generated by the offline analysis to predict opponent moves. This model is illustrated by an experiment with the RoboCup 2D Soccer Simulator.

Aram B. González, Jorge A. Ramírez Uresti
Fuzzy Case-Based Reasoning for Managing Strategic and Tactical Reasoning in StarCraft

We present the combination of Fuzzy sets and Case-Based Reasoning (FCBR) to deal with strategic and tactical management in the real-time strategy environment of StarCraft. Case-based reasoning is a problem-solving AI approach that uses past experience to deal with new problems. Fuzzy set theory is used in case representation to provide a characterization of imprecise and uncertain information. The results revealed that our system can successfully reason about strategies and tactics, defeating the built-in AI of StarCraft. The principal conclusion was that FCBR can reason with abstract information and a large space of actions. Moreover, the resulting system shows its potential to incorporate human knowledge and can effectively adapt to varying conditions of the map.

Pedro Cadena, Leonardo Garrido

Problem Solving and Machine Learning

Variable and Value Ordering Decision Matrix Hyper-heuristics: A Local Improvement Approach

Constraint Satisfaction Problems (CSP) represent an important topic of study because of their many applications in different areas of artificial intelligence and operational research. When solving a CSP, the order in which the variables are selected to be instantiated and the order of the corresponding values to be tried affect the complexity of the search. Hyper-heuristics are flexible methods that provide generality when solving different problems and, within CSP, they can be used to determine the next variable and value to try. They select from a set of low-level heuristics and decide which one to apply at each decision point according to the problem state. This study explores a hyper-heuristic model for variable and value ordering within CSP based on a decision matrix that is constructed by means of a local improvement method which changes small portions of the matrix. The results suggest that the approach is able to combine the strengths of different low-level heuristics to perform well on a wide range of instances and compensate for their weaknesses on specific instances.

José Carlos Ortiz-Bayliss, Hugo Terashima-Marín, Ender Özcan, Andrew J. Parkes, Santiago Enrique Conant-Pablos
Improving the Performance of Heuristic Algorithms Based on Causal Inference

Causal inference can be used to construct models that explain the performance of heuristic algorithms for NP-hard problems. In this paper, we show the application of causal inference to the algorithmic optimization process through an experimental analysis to assess the impact of the parameters that control the behavior of a heuristic algorithm. As a case study we present an analysis of the main parameters of a state-of-the-art procedure for the Bin Packing Problem (BPP). The studies confirm the importance of causal reasoning as a guide for improving the performance of the algorithms.

Marcela Quiroz Castellanos, Laura Cruz Reyes, José Torres-Jiménez, Claudia Gómez Santillán, Mario César López Locés, Jesús Eduardo Carrillo Ibarra, Guadalupe Castilla Valdez
Fuzzified Tree Search in Real Domain Games

The fuzzified game tree search algorithm is based on the idea that exact game tree evaluation is not required to find the best move. Therefore, pruning techniques may be applied earlier, resulting in faster search and greater performance. Applied to an abstract domain, it outperforms existing algorithms such as Alpha-Beta, PVS, Negascout, NegaC*, SSS*/Dual* and MTD(f). In this paper we present experimental results in real domain games, where the proposed algorithm demonstrated a 10 percent performance increase over the existing algorithms.

Dmitrijs Rutko
On Generating Templates for Hypothesis in Inductive Logic Programming

Inductive logic programming is a subfield of machine learning that uses first-order logic as a uniform representation for examples and hypotheses. In its core form, it deals with the problem of finding a hypothesis that covers all positive examples and excludes all negative examples. The coverage test and the method to obtain a hypothesis from a given template have been efficiently implemented using constraint satisfaction techniques. In this paper we suggest a method to efficiently generate the template by remembering a history of generated templates and using this history when adding predicates to a new candidate template. This method significantly outperforms the existing method based on brute-force incremental extension of the template.

Andrej Chovanec, Roman Barták
Towards Building a Masquerade Detection Method Based on User File System Navigation

Given that information is an extremely valuable asset, it is vital to timely detect whether one's computer (session) is being illegally seized by a masquerader. Masquerade detection has been actively studied for more than a decade, especially after the seminal work of Schonlau's group, who suggested that, to profile a user, one should model the history of the commands she would enter into a UNIX session. Schonlau's group produced a masquerade dataset, which has become the standard for comparing masquerade detection methods. However, the performance of these methods is not conclusive, and, as a result, research on masquerade detection has resorted to other sources of information for profiling user behaviour. In this paper, we show how to build an accurate user profile by looking into how the user structures her own file system and how she navigates such a structure. While preliminary, our results are encouraging and suggest a number of ways in which new methods can be constructed.

Benito Camiña, Raúl Monroy, Luis A. Trejo, Erika Sánchez
A Fast SVM Training Algorithm Based on a Decision Tree Data Filter

In this paper we present a new algorithm to speed up the training time of Support Vector Machines (SVM). SVM has some important properties, such as a solid mathematical background and a better generalization capability than other machines, for example neural networks. On the other hand, the major drawback of SVM occurs in its training phase, which is computationally expensive and highly dependent on the size of the input data set. The proposed algorithm uses a data filter to reduce the input data set used to train an SVM. The data filter is based on an induction tree which effectively reduces the training data set for the SVM, producing a very fast and highly accurate algorithm. According to the results, the algorithm produces results faster than existing SVM implementations (SMO, LIBSVM and Simple-SVM) with similar accuracy.
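
A hedged sketch of the general idea (not the authors' exact filter): use a shallow decision tree to keep only the training points that fall in impure leaves, i.e., near class boundaries, and train the SVM on that reduced subset. The tree depth and the "impure leaf" criterion are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
leaf_of = tree.apply(X)                        # leaf index of each sample

keep = np.zeros(len(X), dtype=bool)
for leaf in np.unique(leaf_of):
    members = leaf_of == leaf
    if len(np.unique(y[members])) > 1:         # both classes present in leaf
        keep |= members                        # keep boundary-region samples

svm = SVC(kernel="rbf").fit(X[keep], y[keep])  # train on the filtered set
print(f"kept {keep.sum()} of {len(X)} training samples")
```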

Jair Cervantes, Asdrúbal López, Farid García, Adrián Trueba
Optimal Shortening of Covering Arrays

A Covering Array (CA), denoted by CA(N; t, k, v), is a matrix of size N × k with entries from the set {0, 1, 2, ..., v − 1}, in which every submatrix of size N × t contains each of the v^t possible combinations of symbols at least once. CAs are combinatorial structures that have applications in software testing. This paper defines the Problem of Optimal Shortening of Covering ARrays (OSCAR), gives its NP-completeness proof, and presents an exact and a greedy algorithm to solve it. The OSCAR problem is an optimization problem that, for a given matrix M, consists in finding a submatrix M′ that is close to being a CA. An algorithm that solves the OSCAR problem has application as an initialization function of a metaheuristic algorithm that constructs CAs. Our algorithms were tested on a benchmark formed by 20 instances of the OSCAR problem, derived from 12 CAs taken from the scientific literature. From the solutions of the 20 instances of the OSCAR problem, 12 were transformed into CAs through a previously reported metaheuristic algorithm for the construction of CAs.
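
The covering property in the definition above can be verified directly by brute force; the following is a minimal sketch (fine for small arrays, since the check is exponential in t by design):

```python
from itertools import combinations, product

def is_covering_array(M, t, v):
    """M is a list of N rows, each a list of k symbols in {0, ..., v-1}."""
    k = len(M[0])
    all_tuples = set(product(range(v), repeat=t))
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in M}
        if seen != all_tuples:
            return False      # some t-tuple is missing for this column set
    return True

# CA(4; 2, 3, 2): 4 rows cover all 2**2 pairs in every pair of columns.
ca = [[0, 0, 0],
      [0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]
print(is_covering_array(ca, t=2, v=2))  # True
```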

Oscar Carrizales-Turrubiates, Nelson Rangel-Valdez, José Torres-Jiménez
An Exact Approach to Maximize the Number of Wild Cards in a Covering Array

Covering Arrays CA(N; t, k, v) are combinatorial structures that can be used to define adequate test suites for software testing. The smaller a CA is, the smaller the number of test cases that will be needed to test the functionality of a software component in order to identify possible failures. Because the construction of CAs of optimal size is a highly combinatorial problem, several approximate strategies have been developed. Some constructions produced by these strategies can be further improved through a post-optimization process. For example, the wild-card profile of a CA is the set of symbols that can be modified without changing the properties that define the CA. It has been shown that some CAs can be reduced by merging rows that contain wild cards. This paper presents a Branch and Bound (B&B) strategy that maximizes the number of wild cards in the profile of an already constructed CA. We identify such profiles in 106 CAs of strength t = 2 and alphabets v from 6 to 25. It is also shown that for a specific CA(42; 2, 8, 6) different profiles can be obtained; such profiles vary in the number of wild cards and their distribution in the CA.

Loreto Gonzalez-Hernandez, José Torres-Jiménez, Nelson Rangel-Valdez
Intelligent Learning System Based on SCORM Learning Objects

This paper presents the creation of adaptive SCORM sequencing models that take advantage of the latest developments in the artificial intelligence field to provide the best choice to the student, based on learning objects and using a tutor model for self-learning. The tutor uses decision networks, also called influence diagrams, to improve the development of resources and learning materials in a learning content management system, and to offer students the best pedagogical decision according to their performance. The intelligent learning system is validated in an online environment. The results of the evaluation process in undergraduate engineering courses are encouraging, since they show improvements in the learning of the students who used this approach compared to those who did not. The paper also shows the potential application of this learning approach for power system operators.

Liliana Argotte, Julieta Noguez, Gustavo Arroyo

Natural Language Processing

A Weighted Profile Intersection Measure for Profile-Based Authorship Attribution

This paper introduces a new similarity measure called weighted profile intersection (WPI) for profile-based authorship attribution (PBAA). Authorship attribution (AA) is the task of determining which, from a set of candidate authors, wrote a given document. Under PBAA, an author's profile is created by combining information extracted from sample documents written by the author of interest. An unseen document is associated with the author whose profile is most similar to the document. Although competitive performance has been obtained with PBAA, the method is limited in that the most used similarity measure only accounts for the number of overlapping terms among test documents and authors' profiles. We propose a new measure for PBAA, WPI, which takes into account an inter-author term penalization factor besides the number of overlapping terms. Intuitively, in WPI we rely more on those terms that are (frequently) used by the author of interest and not (frequently) used by other authors when computing the similarity of the author's profile and a test document. We evaluate the proposed method in several AA data sets, including many data subsets from Twitter. Experimental results show that the proposed technique outperforms the standard PBAA method in all of the considered data sets, although the baseline method proved very effective. Further, the proposed method achieves performance comparable to classifier-based AA methods (e.g., methods based on SVMs), which often obtain better classification results at the expense of limited interpretability and a higher computational cost.
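
The exact WPI formula is not given in this abstract, so the following is only a hedged sketch of the idea it describes: score an author by the terms a document shares with her profile, damping each term by how much other authors' profiles also use it. The toy profiles and the particular weighting are illustrative assumptions, not the authors' definition.

```python
from collections import Counter

def profile(docs):
    """Relative term frequencies over all sample docs of one author."""
    counts = Counter(w for d in docs for w in d.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def wpi_score(doc, author, profiles):
    others = [p for a, p in profiles.items() if a != author]
    score = 0.0
    for w in set(doc.lower().split()) & profiles[author].keys():
        penalty = max((p.get(w, 0.0) for p in others), default=0.0)
        score += profiles[author][w] * (1.0 - penalty)  # damp shared terms
    return score

profiles = {"a1": profile(["the cat sat", "cat naps daily"]),
            "a2": profile(["stock markets fell", "the markets rallied"])}
test = "the cat rallied"
print(max(profiles, key=lambda a: wpi_score(test, a, profiles)))  # -> 'a1'
```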

Hugo Jair Escalante, Manuel Montes-y-Gómez, Thamar Solorio
A New General Grammar Formalism for Parsing

We introduce Probabilistic Constrained W-grammars (PCW-grammars), a two-level formalism capable of capturing grammatical frameworks used in three different state-of-the-art grammar formalisms, namely Bilexical Grammars, Markov Rules, and Stochastic Tree Substitution Grammars. For each of them we provide an embedding into PCW-grammars, which allows us to derive properties about their expressive power and consistency, and relations between the formalisms studied.

Gabriel Infante-Lopez, Martín Ariel Domínguez
Contextual Semantic Processing for a Spanish Dialogue System Using Markov Logic

Semantic processing is vital for the language understanding stage of a dialogue system. Recent approaches to semantic processing rely on machine learning methods to perform the task, and these are more robust to errors from the speech recogniser. Although these approaches are built on the domain of the dialogue system, they do not incorporate the contextual information available in the dialogue system. In this paper, we explore the use of contextual information, in the form of the expectations of a dialogue system, to perform semantic processing in a spoken dialogue system. We show the benefits of doing so, and propose a Markov Logic model which incorporates such information.

Aldo Fabian, Manuel Hernandez, Luis Pineda, Ivan Meza
A Statistics-Based Semantic Textual Entailment System

We present a Textual Entailment (TE) recognition system that uses semantic features based on the Universal Networking Language (UNL). The proposed TE system compares the UNL relations in both the text and the hypothesis to arrive at the two-way entailment decision. The system has been separately trained on each development corpus released as part of the Recognizing Textual Entailment (RTE) competitions RTE-1, RTE-2, RTE-3 and RTE-5 and tested on the respective RTE test sets.

Partha Pakray, Utsab Barman, Sivaji Bandyopadhyay, Alexander Gelbukh
Semantic Model for Improving the Performance of Natural Language Interfaces to Databases

Although many Natural Language Interfaces to Databases (NLIDBs) have been developed since the late 60s, many problems persist that prevent the translation process from natural language to SQL from being totally successful. Some of the main problems that have been encountered relate to 1) achieving domain independence, 2) the use of words or phrases of different syntactic categories for referring to tables and columns, and 3) semantic ellipsis. This paper introduces a new method for modeling databases that includes relevant information for improving the performance of NLIDBs. This method will be useful for solving many problems found in the translation from natural language to SQL, using a database model that contains linguistic information providing more semantic information than that found in conventional database models (such as the extended entity-relationship model) and those used in previous NLIDBs.

Rodolfo A. Pazos R., Juan J. González B., Marco A. Aguirre L.
Modular Natural Language Processing Using Declarative Attribute Grammars

A system based on a general top-down parsing algorithm has been developed which allows language processors to be created as executable specifications of arbitrary attribute grammars. Declarative notation of attribute grammars allows modular construction of executable language definitions. Syntax is defined through general context-free grammar rules, and meaning is defined by associated semantic rules with arbitrary dependencies. An innovative technique allows parses to be pruned by arbitrary semantic constraints. This new technique is useful in modelling natural-language phenomena by imposing unification-like restrictions, and accommodating long-distance and cross-serial dependencies, which cannot be handled by context-free rules alone.

Rahmatullah Hafiz, Richard A. Frost
EM Clustering Algorithm for Automatic Text Summarization

Automatic text summarization has emerged as a technique for accessing only the useful information. To assess the quality of the automatic summaries produced by a system, DUC 2002 (Document Understanding Conference) developed a gold collection of standard human summaries for 567 single-news documents. In that conference, only five systems could outperform the baseline heuristic in the single-document extractive summarization task. So far, some approaches have obtained good results by combining different strategies with language-dependent knowledge. In this paper, we present a competitive method based on an EM clustering algorithm that improves the quality of automatic summaries using practically no language-dependent knowledge. A comparison of this method with three text models is also presented.
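
A minimal sketch of the extractive idea (not the paper's exact pipeline): fit a Gaussian mixture by EM on TF-IDF sentence vectors and extract the most central sentence of each cluster. Sentence splitting on periods and the number of clusters are simplifying assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.mixture import GaussianMixture

def em_summary(text, n_clusters=2):
    sents = [s.strip() for s in text.split(".") if s.strip()]
    X = TfidfVectorizer().fit_transform(sents).toarray()
    gmm = GaussianMixture(n_components=n_clusters,
                          covariance_type="diag", random_state=0).fit(X)
    picked = []
    for mean in gmm.means_:
        idx = int(np.argmin(np.linalg.norm(X - mean, axis=1)))
        picked.append(sents[idx])     # most central sentence of each cluster
    return ". ".join(dict.fromkeys(picked)) + "."

text = ("The economy grew this quarter. Markets reacted positively. "
        "The team won the final. Fans celebrated in the streets.")
print(em_summary(text))
```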

Yulia Ledeneva, René García Hernández, Romyna Montiel Soto, Rafael Cruz Reyes, Alexander Gelbukh
Discourse Segmentation for Sentence Compression

Earlier studies have raised the possibility of summarizing at the level of the sentence. This simplification should help in adapting textual content to a limited space. Sentence compression is therefore an important resource for automatic summarization systems. However, there are few studies that consider sentence-level discourse segmentation for the compression task; to our knowledge, none in Spanish. In this paper, we study the relationship between discourse segmentation and compression for sentences in Spanish. We use a discourse segmenter and observe to what extent the passages deleted by annotators fit the discourse structures detected by the system. The main idea is to verify whether automatic discourse segmentation can serve as a basis for identifying the segments to be eliminated in the sentence compression task. We show that discourse segmentation could be a first solid step towards a sentence compression system.

Alejandro Molina, Juan-Manuel Torres-Moreno, Eric SanJuan, Iria da Cunha, Gerardo Sierra, Patricia Velázquez-Morales
Heuristic Algorithm for Extraction of Facts Using Relational Model and Syntactic Data

From a semantic point of view, information is usually contained in small units, called facts, that are usually smaller than sentences. Identification of these facts in a text is not a trivial task. We present a heuristic algorithm for the extraction of facts from sentences using a simple representation based on a relational data model. We focus our study on texts that by their nature contain many facts: structured textbooks. The algorithm is based on data obtained from a syntactic analyzer. The obtained facts can be useful for information retrieval tasks, automatic summarization, etc. Our experiments were conducted for the Spanish language. We obtained better results than similar methods.

Grigori Sidorov, Juve Andrea Herrera-de-la-Cruz, Sofía N. Galicia-Haro, Juan Pablo Posadas-Durán, Liliana Chanona-Hernandez
MFSRank: An Unsupervised Method to Extract Keyphrases Using Semantic Information

This paper presents an unsupervised graph-based method to extract keyphrases using semantic information. The proposed method has two stages. In the first one, we extract MFSs (Maximal Frequent Sequences) and build the nodes of a graph with them. The weight of the connection between two nodes is established according to common statistical information and semantic relatedness. In the second stage, we rank the MFSs with the traditional PageRank algorithm, but we include ConceptNet; this external resource adds an extra weight value between two MFSs. The experimental results are competitive with traditional approaches developed in this area. MFSRank outperforms the baseline for the top 5 keyphrases in precision, recall and F-score.
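
A minimal sketch of the graph-ranking stage only, under strong assumptions: real MFS mining and the ConceptNet relatedness weights are replaced here by simple co-occurrence counts, so this merely illustrates weighted PageRank over candidate nodes.

```python
import itertools
import networkx as nx

sentences = [["neural", "network", "training"],
             ["network", "training", "data"],
             ["neural", "network", "data"]]

G = nx.Graph()
for sent in sentences:
    for a, b in itertools.combinations(set(sent), 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)        # edge weight = co-occurrence count

scores = nx.pagerank(G, weight="weight")  # weighted PageRank over candidates
print(sorted(scores, key=scores.get, reverse=True)[:2])  # top candidates
```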

Roque Enrique López, Dennis Barreda, Javier Tejada, Ernesto Cuadros
Content Determination through Planning for Flexible Game Tutorials

The goal of this work is to design and implement an agent which generates hints for a player in a first-person shooter game. The agent is a computer-controlled character which collaborates with the player to achieve the goal of the game. Such an agent uses state-of-the-art reasoning techniques from the area of artificial intelligence planning in order to come up with the content of the instructions. Moreover, it applies techniques from the area of natural language generation to generate the hints. As a result, the instructions are both causally appropriate at the point at which they are uttered and relevant to the goal of the game.

Luciana Benotti, Nicolás Bertoa
Instance Selection in Text Classification Using the Silhouette Coefficient Measure

The paper proposes the use of the Silhouette Coefficient (SC) as a ranking measure to perform instance selection in text classification. Our selection criterion was to keep instances with mid-range SC values while removing the instances with high and low SC values. We evaluated our hypothesis across three well-known datasets and various machine learning algorithms. The results show that our method helps to achieve the best trade-off between classification accuracy and training time.
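
A minimal sketch of the selection criterion, with assumptions: the class labels serve as the clustering for the silhouette computation, and the mid-range band [0.2, 0.8] is an illustrative choice rather than the paper's tuned thresholds.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import silhouette_samples

X, y = make_classification(n_samples=1000, random_state=0)

sc = silhouette_samples(X, y)        # one silhouette coefficient per instance
keep = (sc > 0.2) & (sc < 0.8)       # keep mid-range, drop low and high SC
X_sel, y_sel = X[keep], y[keep]      # reduced training set
print(f"kept {keep.sum()} of {len(X)} instances")
```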

Debangana Dey, Thamar Solorio, Manuel Montes y Gómez, Hugo Jair Escalante
Age-Related Temporal Phrases in Spanish and French

This paper reports research on temporal expressions. The analyzed phrases include a common temporal expression for a period of years reinforced by an adverb of time. We found that some of those phrases are age-related expressions. We analyzed samples obtained from the Internet for Spanish and French to determine appropriate annotations for marking up text and possible translations. We present the results for a group of selected classes.

Sofía N. Galicia-Haro, Alexander Gelbukh
Sentiment Analysis of Urdu Language: Handling Phrase-Level Negation

The paper investigates and proposes a treatment of the effect of phrase-level negation on the sentiment analysis of Urdu text-based reviews. Negation acts as a valence shifter and flips or switches the inherent sentiment of the subjective terms in opinionated sentences. The presented approach focuses on subjective phrases, called SentiUnits, which are made up of subjective terms (adjectives), their modifiers, conjunctions, and negation. The final effect of these phrases is computed according to the given model. The analyzer takes one sentence from the given review, extracts the constituent SentiUnits, computes their overall effect (polarity), and then calculates the final sentence polarity. Using this approach, the effect of negation is handled within these subjective phrases. The main contribution of the research is to deal with a morphologically rich and resource-poor language; despite being a pioneering effort in handling negation for the sentiment analysis of Urdu text, the results of the experiments are quite encouraging.

Afraz Zahra Syed, Muhammad Aslam, Ana Maria Martinez-Enriquez
Unsupervised Identification of Persian Compound Verbs

One of the main tasks related to multiword expressions (MWEs) is compound verb identification. There have been many works on the unsupervised identification of multiword verbs in many languages, but there has not been any conspicuous work on the Persian language yet. Persian multiword verbs (known as compound verbs) are a kind of light verb construction (LVC) with syntactic flexibility, such as an unrestricted word distance between the light verb and the nonverbal element. Furthermore, the nonverbal element can be inflected. These characteristics have made the task very difficult in Persian. In this paper, two different unsupervised methods are proposed to automatically detect compound verbs in Persian. In the first method, extending the concept of the pointwise mutual information (PMI) measure, a bootstrapping method is applied. In the second approach, the K-means clustering algorithm is used. Our experiments show that the proposed approaches achieve results superior to the baseline, which uses the PMI measure as its association metric.
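
As context for the association measure mentioned above, here is a minimal PMI computation over a toy token stream; the corpus, tokenization, and transliterated words are illustrative assumptions, not the paper's data.

```python
import math
from collections import Counter

tokens = "dust zadan dust dashtan harf zadan dust dashtan".split()
uni = Counter(tokens)
bi = Counter(zip(tokens, tokens[1:]))
N = len(tokens)

def pmi(w1, w2):
    """Pointwise mutual information of an adjacent word pair."""
    p_xy = bi[(w1, w2)] / (N - 1)
    p_x, p_y = uni[w1] / N, uni[w2] / N
    return math.log2(p_xy / (p_x * p_y)) if p_xy > 0 else float("-inf")

print(round(pmi("dust", "dashtan"), 3))  # high PMI -> compound-verb candidate
```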

Mohammad Sadegh Rasooli, Heshaam Faili, Behrouz Minaei-Bidgoli

Robotics, Planning and Scheduling

Testing a Theory of Perceptual Mapping Using Robots

This paper describes the implementation of a new approach to mapping using a mobile robot that is based upon a theory of human perceptual mapping. Its key feature is that it does not need to continuously track the robot's position as the robot moves through the environment, and it does not perform correction of errors arising from the robot's sensors. It still manages to produce an approximate map of the environment that is adequate for orienting oneself and for knowing where things are located. Arguably, it is the kind of map that humans seem to have.

Md. Zulfikar Hossain, Wai Yeap, Olaf Diegel
A POMDP Model for Guiding Taxi Cruising in a Congested Urban City

We consider a partially observable Markov decision process (POMDP) model for improving a taxi agent's cruising decisions in a congested urban city. Using real-world data provided by a large taxi company in Singapore as a guide, we derive the state transition function of the POMDP. Specifically, we model the cruising behavior of the drivers as continuous-time Markov chains. We then apply a dynamic programming algorithm to find the optimal policy of the driver agent. Using a simulation, we show that this policy is significantly better than a greedy policy in a congested road network.

Lucas Agussurja, Hoong Chuin Lau
Next-Best-View Planning for 3D Object Reconstruction under Positioning Error

To acquire a 3D model of an object, it is necessary to plan a set of locations, called views, where a range sensor will be placed. The problem is solved in a greedy manner, by iteratively selecting next-best-views. When a mobile robot is used, we have to take positioning errors into account, given that they can affect the quality and efficiency of the plan. We propose a method to plan "safe views" which are successful even when there is positioning error. The method is based on a reevaluation of the candidate views according to their neighbors, so view points which are safer against positioning error are preferred. The method was tested in simulation with objects of different complexities. Experimental results show that the proposed method achieves results similar to the ideal case without error, reducing the number of views required compared with the standard approach that does not consider positioning error.

Juan Irving Vásquez, L. Enrique Sucar
Stochastic Learning Automata for Self-coordination in Heterogeneous Multi-Tasks Selection in Multi-Robot Systems

This paper focuses on the general problem of coordinating multiple robots. More specifically, it addresses the self-election of heterogeneous specialized tasks by autonomous robots, as opposed to the usual multi-task allocation problem in multi-robot systems, in which an external controller distributes the existing tasks among the individual robots. In this work we consider a distributed, decentralized approach in which the robots themselves, autonomously and individually, are responsible for selecting a particular task so that all the existing tasks are optimally distributed and executed. In this regard, we have established an experimental scenario, and we propose a solution through a learning automata-based probabilistic algorithm to solve the corresponding multi-task distribution problem. The paper ends with a critical discussion of the experimental results.
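
As an illustration of the kind of stochastic learning automaton involved, the following sketch implements a standard linear reward-inaction (L_RI) update on a toy task-selection problem; the fixed per-task success probabilities are a demonstration assumption, not the paper's scenario.

```python
import random

def l_ri(success_prob, n_steps=5000, lr=0.05):
    """Linear reward-inaction automaton over len(success_prob) actions."""
    p = [1.0 / len(success_prob)] * len(success_prob)  # action probabilities
    for _ in range(n_steps):
        a = random.choices(range(len(p)), weights=p)[0]
        if random.random() < success_prob[a]:          # reward: reinforce a
            p = [pi + lr * (1 - pi) if i == a else pi * (1 - lr)
                 for i, pi in enumerate(p)]
        # inaction on penalty: probabilities remain unchanged
    return p

print([round(x, 2) for x in l_ri([0.2, 0.8, 0.5])])  # converges toward task 1
```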

Yadira Quiñonez, Darío Maravall, Javier de Lope
Stochastic Abstract Policies for Knowledge Transfer in Robotic Navigation Tasks

Most work on navigation approaches for mobile robots does not take into account existing solutions to similar problems when learning a policy to solve a new problem, and consequently solves the current navigation problem from scratch. In this article we investigate a knowledge transfer technique that enables the use of a previously known policy from one or more related source tasks in a new task. Here we represent the knowledge learned as a stochastic abstract policy, which can be induced from a training set given by a set of navigation examples of state-action sequences executed successfully by a robot to achieve a specific goal in a given environment. We propose both a probabilistic and a nondeterministic abstract policy, in order to preserve the occurrence of all actions identified in the inductive process. Experiments carried out attest to the effectiveness and efficiency of our proposal.

Tiago Matos, Yannick Plaino Bergamo, Valdinei Freire da Silva, Anna Helena Reali Costa
The Evolution of Signal Communication for the e-puck Robot

In this paper we report our experiments with the e-puck robots for developing a communication system using evolutionary robotics. To do so, we follow the evolutionary approach, using neural networks and genetic algorithms. The robots develop a communication scheme for solving tasks such as locating food areas, avoiding obstacles, approaching light sources, and locating sound sources (other robots emitting sounds). The Evorobot* and Webots simulators are used as tools for computing the evolutionary process and optimizing the weights of the neural controllers. As a consequence, two different kinds of neural controllers emerge: one controller is used for robot movement, while the second controller processes sound signals.

Fernando Montes-Gonzalez, Fernando Aldana-Franco
An Hybrid Expert Model to Support Tutoring Services in Robotic Arm Manipulations

To build an intelligent tutoring system, a key task is to define an expertise model that can support appropriate tutoring services. However, for some ill-defined domains, classical approaches for representing expertise do not work well. To address this issue, we illustrate in this paper a novel approach: combining several approaches into a hybrid model to support tutoring services in procedural and ill-defined domains. We illustrate this idea in a tutoring system for operating Canadarm2, a robotic arm installed on the International Space Station. To support tutoring services in this ill-defined domain, we have developed a model combining three approaches: (1) a data mining approach for automatically building a task model from user solutions, (2) a cognitive model to cover well-defined parts of the task and spatial reasoning, and (3) a 3D path-planner to cover all other aspects of the task. Experimental results show that the hybrid model provides assistance to learners that is much richer than what could be offered by each individual approach.

Philippe Fournier-Viger, Roger Nkambou, André Mayers, Engelbert Mephu Nguifo, Usef Faghihi
Inverse Kinematics Solution for Robotic Manipulators Using a CUDA-Based Parallel Genetic Algorithm

Inverse kinematics is one of the most basic problems that needs to be solved when using robot manipulators in a work environment. A closed-form solution is heavily dependent on the geometry of the manipulator. A solution may not be possible for certain robots. On the other hand, there may be an infinite number of solutions, as is the case of highly redundant manipulators. We propose a Genetic Algorithm (GA) to approximate a solution to the inverse kinematics problem for both the position and orientation. This algorithm can be applied to different kinds of manipulators. Since typical GAs may take a considerable time to find a solution, a parallel implementation of the same algorithm (PGA) was developed for its execution on a CUDA-based architecture. A computational model of a PUMA 500 robot was used as a test subject for the GA. Results show that the parallel implementation of the algorithm was able to reduce the execution time of the serial GA significantly while also obtaining the solution within the specified margin of error.
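
For a flavor of the serial GA (the paper's PGA runs on CUDA and targets a PUMA 500 with orientation as well as position), here is a toy sketch for a 2-link planar arm; the arm geometry, genetic operators, and parameters are illustrative assumptions.

```python
import math
import random

L1, L2 = 1.0, 1.0          # link lengths of the toy planar arm
TARGET = (1.2, 0.8)        # desired end-effector position

def fitness(th):
    """Distance from the end effector to the target (lower is better)."""
    x = L1 * math.cos(th[0]) + L2 * math.cos(th[0] + th[1])
    y = L1 * math.sin(th[0]) + L2 * math.sin(th[0] + th[1])
    return math.hypot(x - TARGET[0], y - TARGET[1])

pop = [[random.uniform(-math.pi, math.pi) for _ in range(2)]
       for _ in range(100)]
for _ in range(200):
    pop.sort(key=fitness)
    elite = pop[:20]                               # truncation selection
    pop = elite + [[g + random.gauss(0, 0.1) for g in random.choice(elite)]
                   for _ in range(80)]             # Gaussian mutation
best = min(pop, key=fitness)
print([round(t, 3) for t in best], round(fitness(best), 4))
```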

Omar Alejandro Aguilar, Joel Carlos Huegel

Medical Applications of Artificial Intelligence

MFCA: Matched Filters with Cellular Automata for Retinal Vessel Detection

Blood vessel extraction is an important step for abnormality detection and for obtaining a good diabetic retinopathy diagnosis in digital retinal images. The use of filter banks has been shown to be a powerful technique for detecting blood vessels. In particular, the Matched Filter is appropriate and efficient for this task, and in combination with other methods the blood vessel detection can be improved. We propose a combination of the Matched Filter with a segmentation strategy based on a Cellular Automaton. The strategy presented here is very efficient and experimentally yields competitive results compared with other state-of-the-art methods.
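
A minimal sketch of the matched-filter stage alone (the cellular-automaton segmentation is omitted), assuming the classical inverted-Gaussian vessel cross-section and a small bank of rotated kernels; the sizes, sigma, and number of orientations are illustrative choices.

```python
import numpy as np
from scipy.ndimage import convolve, rotate

def gaussian_matched_kernel(sigma=2.0, length=9, width=15):
    x = np.arange(width) - width // 2
    profile = -np.exp(-x**2 / (2 * sigma**2))      # dark-vessel profile
    kernel = np.tile(profile, (length, 1))         # extend along the vessel
    return kernel - kernel.mean()                  # zero-mean matched filter

def vessel_response(img, n_angles=12):
    base = gaussian_matched_kernel()
    responses = [convolve(img, rotate(base, a, reshape=False))
                 for a in np.linspace(0, 180, n_angles, endpoint=False)]
    return np.max(responses, axis=0)               # best orientation per pixel

img = np.zeros((64, 64))
img[30:33, :] = 1.0                                # a toy horizontal "vessel"
resp = vessel_response(img)                        # peaks along the vessel
```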

Oscar Dalmau, Teresa Alarcon
Computer Assisted Diagnosis of Microcalcifications in Mammograms: A Scale-Space Approach

Computer Assisted Diagnosis (CAD) is rapidly reaching worldwide acceptance in different fields of medicine. In particular, CAD has found one of its main applications in breast cancer diagnosis, where the detection of microcalcifications in women's breasts is typically associated with the presence of cancer. In this paper, a method for automatic breast contour detection is presented as a pre-processing step for microcalcification detection. Then, a combination of scale-space algorithms is used to locate candidate regions of microcalcifications, and a significant percentage of false positives is finally discriminated via thresholding. Regions detected using this method have been found to describe 91.6% of the microcalcifications from the MIAS database with an average specificity of 97.30%.

Alberto Pastrana Palma, Juan Francisco Reyes Muñoz, Luis Rodrigo Valencia Pérez, Juan Manuel Peña Aguilar, Alberto Lamadrid Álvarez
Diagnosis in Sonogram of Gall Bladder

This paper describes the development and testing of a diagnostic system using sonograms. In far-flung areas of developing countries, where the availability of specialists is a problem and sometimes not even a possibility, it is highly beneficial to have an on-site computer diagnosis application to support medical staff. Diagnosing sonograms to identify the affected part in offline scenarios is not always easy. Besides, the lack of infrastructure does not permit online solutions to be a practical option. We implemented a system named Intelligent-Eye (I-Eye), which employs imaging and diagnostic techniques to support gallbladder diagnosis, saving time and the cost of operating medical procedures. The implemented algorithm is highly capable of being used on different diagnostic ultrasonography machines, generating accurate information reports and diagnoses.

Saad Tanveer, Omer Jamshaid, Abdul Mannan, Muhammad Aslam, Ana Maria Martinez-Enriquez, Afraz Zahra Syed, Gonzalo Escalada-Imaz
Genetic Selection of Fuzzy Model for Acute Leukemia Classification

Leukemia is a disease characterized by an abnormal increase of white blood cells. This disease is divided into two types, lymphoblastic and myeloid, each of which is divided into subtypes. Differentiating the type and subtype of acute leukemia is important in order to determine the correct type of treatment to be assigned to the affected person. Diagnostic tests available today, such as those based on cell morphology, have a high error rate; others, such as those based on cytometry or microarrays, are expensive. In order to avoid those drawbacks, this paper proposes the automatic selection of a fuzzy model for the accurate classification of types and subtypes of acute leukemia based on cell morphology. Our experimental results reach up to 93.52% in the classification of acute leukemia types, 87.36% in lymphoblastic subtypes and 94.42% in myeloid subtypes. Our results show a significant improvement compared with classifiers whose parameters were manually tuned using the same data set. Details of the proposed method, as well as the experiments and results, are presented.

Alejandro Rosales-Pérez, Carlos A. Reyes-García, Pilar Gómez-Gil, Jesus A. Gonzalez, Leopoldo Altamirano
An Ontology for Computer-Based Decision Support in Rehabilitation

Although functionality and disease classifications are available thanks to initiatives such as the "international classification of functioning, disability and health", the "systematized nomenclature of medicine - clinical terms" and the "international classification of diseases", a formal model of rehabilitation interventions has not been defined yet. Such a model can have a fundamental role in the design of computer-based decision support in rehabilitation. Some initiatives, such as the "international classification of health interventions", are in development, but their scope is too general to cope with the specificities that characterize rehabilitation. The aim of this work is to represent knowledge in order to carry out diagnosis and personalization of activities in cases of people with functional diversity. To define the diagnosis and activity personalization, a methodology has been developed to extract standardized concepts from clinical scales and the literature.

Laia Subirats, Luigi Ceccaroni
Heuristic Search of Cut-Off Points for Clinical Parameters: Defining the Limits of Obesity

We studied the variability of obesity in a sample of 4,164 young Mexicans (17-24 years old), measured through the waist circumference. According to the American Heart Association, obesity is one of the five clinical alterations that define the Metabolic Syndrome (MS); the other four are low levels of HDL cholesterol and high values of triglycerides, glucose, and blood pressure. A cut-off point of 80 cm for women and 90 cm for men has been proposed to define a normal or altered value of waist circumference for Mexicans. We assume that the waist circumference in a healthy population has a normal distribution, so a monolithic cut-off point is only an upper limit for normal values. The objective of this work is to estimate the underlying normal distribution of the waist circumference of healthy people, involving in this analysis the other four components of the MS, and approaching the problem as a combinatorial one. We defined a combination of cut-off points for the other four components of the MS; considering a set of 50 candidate cut-off points for each of the five parameters results in a search space of 50^5 (more than 300 million combinations). Each particular combination of cut-off points (excluding waist circumference) defines a subpopulation in which the parameter values fall into the normal ranges so defined; for each subpopulation we then calculated the histogram of the waist circumference values. Using a heuristic function based on the symmetry (skewness) of the histogram, we applied a best-first search over the combinations of cut-off points. We found the combination of cut-off point values that generates the most symmetrical histogram, and we propose it as a useful criterion to set the cut-off points of MS parameters for Mexicans. Finally, the obtained histogram is proposed as the normal distribution of the healthy population, representing the variability of the waist circumference of non-obese young Mexicans.
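
A toy sketch of the search idea, with strong simplifications: synthetic data, two parameters instead of four, and plain hill-climbing in place of the best-first search; only the skewness-based score comes from the abstract.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
data = rng.normal(size=(4000, 3))      # cols: waist, glucose, triglycerides
grid = np.linspace(-1.0, 1.0, 11)      # candidate cut-offs per parameter

def score(cuts):
    """Abs. skewness of the waist histogram of the selected subpopulation."""
    mask = (data[:, 1] < cuts[0]) & (data[:, 2] < cuts[1])
    return abs(skew(data[mask, 0])) if mask.sum() > 30 else np.inf

best = (5, 5)                          # start at the middle of the grid
while True:
    neighbors = [(best[0] + di, best[1] + dj)
                 for di in (-1, 0, 1) for dj in (-1, 0, 1)
                 if 0 <= best[0] + di < 11 and 0 <= best[1] + dj < 11]
    nxt = min(neighbors, key=lambda c: score((grid[c[0]], grid[c[1]])))
    if nxt == best:
        break                          # local optimum: most symmetric found
    best = nxt
print(grid[best[0]], grid[best[1]])
```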

Miguel Murguía-Romero, Rafael Villalobos-Molina, René Méndez-Cruz, Rafael Jiménez-Flores
Development of a System of Electrodes for Reading Conscious Activity of an Amputated Leg (above the knee) and Its Prosthesis Application

We report the design of electrodes, standardized and positioned based on a study of the anatomy and motor units identified by the nerve branches of an amputated leg (above the knee) [1], obtaining myoelectric signals with a maximum amplitude of 20 μV [2]. The initial identification of the optimal electrode position was characterized with myoelectrography in order to determine the movements that the patient makes consciously. The myoelectric signal acquisition was accomplished by taking into account the electrochemical schema of the cellular membrane [3], based on the fields and frequencies involved, with circuit structural materials that provide sufficient resistivity against spurious frequencies [4], isolating each motor unit [5]. The results show that it is possible to obtain a clean signal which describes the movement that the patient consciously desires, and its application in a prosthesis.

Emilio Soto, Jorge Antonio Ascencio, Manuel Gonzalez, Jorge Arturo Hernandez
Predicting the Behavior of the Interaction of Acetylthiocholine, pH and Temperature of an Acetylcholinesterase Sensor

The steady-state current response of a second-generation acetylcholinesterase electrochemical sensor, which results from the interaction of substrate concentration, pH and temperature, was evaluated to improve the biosensor's analytical characteristics using computational learning models. Artificial Neural Network and Support Vector Machine models demonstrated excellent results, despite the limited number of samples. The predictions provided by both models were compared in order to determine which of them provides a better approximation of the response generated by the sensor signal.

Edwin R. García, Larysa Burtseva, Margarita Stoytcheva, Félix F. González
Backmatter
Metadata
Title
Advances in Artificial Intelligence
Edited by
Ildar Batyrshin
Grigori Sidorov
Copyright year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-25324-9
Print ISBN
978-3-642-25323-2
DOI
https://doi.org/10.1007/978-3-642-25324-9