
About this Book

This book contains a selection of reviewed, higher-quality papers from the 15th Portuguese Conference on Artificial Intelligence, EPIA 2011, held in Lisbon, Portugal, in October 2011.

The 50 revised full papers presented were carefully reviewed and selected from a total of 203 submissions. The papers are organized in topical sections on affective computing, ambient intelligence environments, artificial intelligence methodologies for games, artificial intelligence in transportation systems, artificial life and evolutionary algorithms, computational logic with applications, general artificial intelligence, intelligent robotics, knowledge discovery and business intelligence, multi-agent systems: theory and applications, social simulation and modeling, text mining and applications, and the doctoral symposium on artificial intelligence.



Affective Computing

Sentiment Analysis of News Titles

The Role of Entities and a New Affective Lexicon

The growth of content on the web has been followed by increasing interest in opinion mining. This field of research relies on the accurate recognition of emotion in textual data. Sentiment analysis has received much attention recently, but research has traditionally depended on linguistic corpora or common-sense knowledge bases to provide extra dimensions of information to the text being analyzed. Previous research has not yet explored a fully automatic method to evaluate how events associated with certain entities may affect each individual's sentiment perception. This project presents a method to assign valence ratings to entities, using information from their Wikipedia pages and considering user preferences gathered from the user's Facebook profile. Furthermore, a new affective lexicon is compiled entirely from existing corpora, without any intervention from the coders.

Daniel Loureiro, Goreti Marreiros, José Neves

Ambient Intelligence Environments

Providing Location Everywhere

The ability to locate an individual is an essential part of many applications, especially mobile ones. Obtaining this location in an open environment is relatively simple through GPS (Global Positioning System), but indoors or in dense environments this type of location system does not provide good accuracy. Systems already exist that try to overcome these limitations, but most of them require a structured environment to work. Since Inertial Navigation Systems (INS) avoid the need for a structured environment, we propose an INS based on Micro-Electro-Mechanical Systems (MEMS) that is capable of computing the position of an individual everywhere, in real time.

Ricardo Anacleto, Lino Figueiredo, Paulo Novais, Ana Almeida

Modeling Context-Awareness in Agents for Ambient Intelligence: An Aspect-Oriented Approach

Ambient Intelligence (AmI) systems are inherently context aware, since they should be able to react to, adapt to and even anticipate user actions or events occurring in the environment in a manner consistent with the current context. Software agents, and especially the BDI architecture, are considered a promising approach to AmI systems development. However, current agent models do not offer proper support for developing AmI systems: they do not model explicitly the interaction between the agent, context sources and effectors, and the context-awareness features are scattered throughout the system model. To solve these problems we propose an aspect-oriented agent metamodel for AmI systems, which encourages modularity in the description of context-aware features. This metamodel achieves better results than other metamodels in separation of concerns, size, coupling and cohesion.

Inmaculada Ayala, Mercedes Amor Pinilla, Lidia Fuentes

Developing Dynamic Conflict Resolution Models Based on the Interpretation of Personal Conflict Styles

Conflict resolution is a classic field of Social Science research. However, with conflicts now also emerging in virtual environments, a new field of research has been developing, to which Artificial Intelligence, and particularly Ambient Intelligence, can make interesting contributions. As a result, the field of Online Dispute Resolution emerged, covering the use (in part or entirely) of technological tools to settle disputes. In this paper we focus on developing conflict resolution models that are able to adapt strategies in real time according to changes in the personal conflict styles of the parties. To do so, we follow a novel approach in which an intelligent environment supports the lifecycle of the conflict resolution model with the provision of important context knowledge. The presented framework is able to react to important changes in the context of interaction, resulting in a conflict resolution approach that is able to perceive the parties and consequently achieve better outcomes.

Davide Carneiro, Marco Gomes, Paulo Novais, José Neves

Organizations of Agents in Information Fusion Environments

Information fusion in a context-aware system is understood as a process that assembles assessments of the environment based on its goals. Advantages of intelligent approaches such as Multi-Agent Systems (MAS) and the use of Wireless Sensor Networks (WSN) within the information fusion process are emerging, especially in context-aware scenarios. However, it has become critical to propose improved and efficient ways to handle the enormous quantity of data provided by these approaches. Agents are a suitable option because they can represent autonomous entities by modeling their capabilities, expertise and intentions. In this sense, virtual organizations of agents are an interesting possibility because they can provide the necessary capacity to handle open and heterogeneous systems such as those normally found in the information fusion process. This paper presents a new framework that defines a method for creating a virtual organization of software and hardware agents. This approach facilitates the inclusion of context-aware capabilities when developing intelligent and adaptable systems, where functionalities can communicate in a distributed and collaborative way. Several tests have been performed to evaluate this framework, and preliminary results and conclusions are presented.

Dante I. Tapia, Fernando de la Prieta, Sara Rodríguez González, Javier Bajo, Juan M. Corchado

Artificial Intelligence Methodologies for Games

Wasp-Like Agents for Scheduling Production in Real-Time Strategy Games

In this paper, we propose an algorithm inspired by the social intelligence of wasps for scheduling production in real-time strategy games, and evaluate its performance in a scenario developed as a modification of the game Warcraft III: The Frozen Throne. Results show that such an approach is well suited to the highly dynamic nature of the environment in this game genre. We also believe such an approach may allow the exploration of new paradigms of gameplay, and provide some examples in the explored scenarios.

Marco Santos, Carlos Martinho

Artificial Intelligence in Transportation Systems

Operational Problems Recovery in Airlines – A Specialized Methodologies Approach

Disruption management is one of the most important scheduling problems in the airline industry because of the high costs involved; however, it is a relatively new research area compared, for example, with fleet and tail assignment. The main goal in solving this kind of problem is to reach a feasible solution for the airline company that minimizes the several costs involved while respecting time constraints. We present an approach to solving the operational problems that cause disruptions, using different specialized methodologies for aircraft and crewmember problems, including flight-graph-based methods with meta-heuristic optimization algorithms. These approaches were built to fit into a multi-agent system in which specialist agents solve disruptions. A comparative analysis of the algorithms is also presented. Using a real dataset covering a complete month, we demonstrate how the system handles disruption events. The resulting application is able to solve disruption events while optimizing costs and respecting operational constraints.

Bruno Aguiar, José Torres, António J. M. Castro

Solving Heterogeneous Fleet Multiple Depot Vehicle Scheduling Problem as an Asymmetric Traveling Salesman Problem

The Vehicle Scheduling Problem is a well-known combinatorial optimization problem that emerges in the mobility and transportation sectors. The heterogeneous-fleet, multiple-depot extension arises in major urban public transportation companies due to varying demand throughout the day and restrictions on the use of different vehicle types. This extension adds complexity to the problem and makes the known deterministic methods unable to solve it efficiently. This paper describes an approach to create a comprehensive model that represents the Multiple Depot Vehicle Scheduling Problem as an Asymmetric Traveling Salesman Problem (A-TSP). To solve the A-TSP, an Ant Colony based meta-heuristic was developed. The results achieved on problems from a major Portuguese public transportation planning database show the usefulness of the proposed approach.

Jorge Alpedrinha Ramos, Luís Paulo Reis, Dulce Pedrosa

Artificial Life and Evolutionary Algorithms

Evolving Numerical Constants in Grammatical Evolution with the Ephemeral Constant Method

This paper assesses the new numerical-constant generation method called ephemeral constant, which can be seen as a translation of classical genetic programming's ephemeral random constant to the grammatical evolution framework. Its most distinctive feature is that it decouples the number of bits used to encode the grammar's production rules from the number of bits used to represent a constant. This makes it possible to increase the method's representational power without incurring an overly redundant encoding scheme. We present experiments comparing the ephemeral constant with the three most popular approaches for constant handling: the traditional approach, digit concatenation, and the persistent random constant. By varying the number of bits used to represent a constant, we can increase the numerical precision to the desired level of accuracy, overcoming the other approaches by a large margin.
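The decoupling idea can be illustrated with a small sketch (the function name and interval are our assumptions, not the paper's): a constant is decoded from a dedicated bit field whose width is independent of the codon size used for production rules, so precision grows with the field width.

```python
# Hypothetical sketch: constants are read from their own k-bit field,
# decoupled from the codons that select grammar production rules, so
# numerical precision can grow without enlarging the codons.

def decode_constant(bits, lo=-1.0, hi=1.0):
    """Map a bit string of any length to a real value in [lo, hi]."""
    k = len(bits)
    value = int(bits, 2)                      # integer in [0, 2^k - 1]
    return lo + (hi - lo) * value / (2 ** k - 1)

# More bits give a finer resolution over the same interval.
coarse = decode_constant("10", 0.0, 1.0)         # 2-bit field
fine = decode_constant("1000000000", 0.0, 1.0)   # 10-bit field
```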

Douglas A. Augusto, Helio J. C. Barbosa, André M. S. Barreto, Heder S. Bernardino

The Evolution of Foraging in an Open-Ended Simulation Environment

Throughout the last decades, Darwin's theory of natural selection has fueled a vast amount of research in the field of computer science, and more specifically in artificial intelligence. The majority of this work has focused on artificial selection rather than on natural selection. In this paper we study the evolution of agents' controllers in an open-ended scenario. To that end, we set up a multi-agent simulation inspired by the ant foraging task, and evolve the agents' brain (a rule list) without any explicit fitness function. We show that the agents do evolve sustainable foraging behaviors in this environment, and discuss some evolutionary conditions that seem to be important for achieving these results.

Tiago Baptista, Ernesto Costa

A Method to Reuse Old Populations in Genetic Algorithms

In this paper a method to increase the optimization ability of genetic algorithms (GAs) is proposed. To promote population diversity, a fraction of the worst individuals of the current population is replaced by individuals from an older population. To experimentally validate the approach we have used a set of well-known benchmark problems of tunable difficulty for GAs. A standard GA with and without elitism and a steady-state GA have been augmented with the proposed method. The obtained results show that the augmented algorithms perform at least as well as, and often better than, the non-augmented algorithms. Furthermore, the proposed method depends on two parameters: one regulates the size of the fraction of the population replaced, and the other selects the "age" of the population used for the replacement. Experimental results indicate that better performance has been achieved with high values of the former parameter and low values of the latter.
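The replacement scheme described above might be sketched as follows (function and parameter names are ours, not the paper's; the archive of old populations is assumed to be kept by the caller):

```python
import random

# Illustrative sketch: replace a fraction p of the worst individuals in
# the current population with individuals drawn at random from the
# population stored `age` generations ago.

def reuse_old_population(current, history, fitness, p=0.3, age=1):
    """Return a new population where the worst p-fraction of `current`
    is replaced by random individuals from history[-age]."""
    if len(history) < age:
        return list(current)
    old = history[-age]
    ranked = sorted(current, key=fitness, reverse=True)  # best first
    n_replace = int(p * len(current))
    survivors = ranked[:len(current) - n_replace]
    migrants = random.sample(old, n_replace)
    return survivors + migrants

fitness = lambda x: x            # toy fitness: larger is better
current = list(range(1, 11))
history = [[100] * 10]           # an older population kept in an archive
new_pop = reuse_old_population(current, history, fitness, p=0.3, age=1)
```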

Mauro Castelli, Luca Manzoni, Leonardo Vanneschi

Towards Artificial Evolution of Complex Behaviors Observed in Insect Colonies

Studies on social insects have demonstrated that complex, adaptive and self-organized behavior can arise at the macroscopic level from relatively simple rules at the microscopic level. Several past studies in robotics and artificial life have focused on the evolution and understanding of the rules that give rise to a specific macroscopic behavior such as task allocation, communication or synchronization. In this study, we demonstrate how colonies of embodied agents can be evolved to display multiple complex macroscopic behaviors at the same time. In our evolutionary model, we incorporate key features present in many natural systems, namely energy consumption, birth, death and a long evaluation time. We use a generic foraging scenario in which agents spend energy while they move and must periodically recharge in the nest to avoid death. New robots are added (born) when the colony has foraged a sufficient number of prey. We perform an analysis of the evolved behaviors and demonstrate that several colonies display multiple complex and scalable macroscopic behaviors.

Miguel Duarte, Anders Lyhne Christensen, Sancho Oliveira

Network Regularity and the Influence of Asynchronism on the Evolution of Cooperation

In a population of interacting agents, the update dynamics defines the temporal relation between the moments at which agents update the strategies they use when they interact with other agents. The update dynamics is said to be synchronous if this process occurs simultaneously for all the agents and asynchronous if this is not the case. On the other hand, the network of contacts defines who may interact with whom. In this paper, we investigate the features of the network of contacts that play an important role in the influence of the update dynamics on the evolution of cooperative behaviors in a population of agents. First we show that asynchronous dynamics is detrimental to cooperation only when 1) the network of contacts is highly regular and 2) there is no noise in the strategy update process. We then show that, among the different features of the network of contacts, network regularity plays indeed a major role in the influence of the update dynamics, in combination with the temporal scale at which clusters of cooperator agents grow.

Carlos Grilo, Luís Correia

The Squares Problem and a Neutrality Analysis with ReNCoDe

Evolutionary Algorithms (EAs) are stochastic search algorithms inspired by the principles of selection and variation posited by the theory of evolution, mimicking those mechanisms in a simplified way. In particular, EAs treat the genotype-phenotype relationship differently from nature, and this difference is a recurrent issue among researchers. Moreover, in spite of some performance improvements, biological knowledge has advanced faster than our ability to incorporate novel biological ideas into EAs. Recently, some researchers have started to explore computationally our new understanding of the multitude of regulatory mechanisms that are fundamental to both inheritance and development in natural systems, trying to include those mechanisms in EAs. One of the first successful proposals is the Artificial Regulatory Network (ARN) model, by Wolfgang Banzhaf. Soon after, variants of the ARN with increased capabilities were tested. In this paper, we further explore the capabilities of one of those variants, the Regulatory Network Computational Device (ReNCoDe), empowering it with feedback connections. The efficacy and efficiency of this alternative are tested experimentally using a typical benchmark problem for recurrent and developmental systems. To gain a better understanding of the reasons for the improved quality of the results, we undertake a preliminary study of the role of neutral mutations during the evolutionary process.

Rui L. Lopes, Ernesto Costa

Particle Swarm Optimization for Gantry Control: A Teaching Experiment

The particle swarm optimization algorithm is proposed as a tool to solve the Posicast input command shaping problem. The design technique is addressed in the context of a simulated teaching experiment aimed at illustrating feedforward control of second-order systems. The selected experiment is the well-known suspended-load, or gantry, problem, which is relevant to crane control. Preliminary simulation results for a quarter-cycle Posicast shaper designed with the particle swarm algorithm are presented, together with illustrative figures, extracted from an animation of a gantry example, that validate the Posicast design.
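A generic particle swarm optimizer of the kind the abstract applies to Posicast shaper design can be sketched as follows (all parameter values are illustrative assumptions; the shaper design problem itself is not reproduced here):

```python
import random

# Minimal global-best PSO sketch: each particle is pulled toward its own
# best position (pbest) and the swarm's best position (gbest).

def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Minimizing a simple quadratic converges near its optimum at x = 2.
best = pso(lambda x: (x[0] - 2.0) ** 2, dim=1)
```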

Paulo B. de Moura Oliveira, Eduardo J. Solteiro Pires, José Boaventura Cunha

Evolving Reaction-Diffusion Systems on GPU

Reaction-diffusion systems contribute to various morphogenetic processes, and can also be used as computation models in real and artificial chemistries. Evolving reaction-diffusion solutions automatically is interesting because it is otherwise difficult to engineer them to achieve a target pattern or perform a desired task. However, most of the existing work focuses on optimizing the parameters of a fixed reaction network. In this paper we extend this state of the art by also exploring the space of alternative reaction networks, with the help of GPU hardware. We compare parameter optimization and reaction network optimization on the evolution of reaction-diffusion solutions leading to simple spot patterns. Our results indicate that these two optimization modes tend to exhibit qualitatively different evolutionary dynamics: in the former, the fitness tends to improve continuously in gentle slopes, while the latter tends to exhibit large periods of stagnation followed by sudden jumps, a sign of punctuated equilibria.

Lidia Yamamoto, Wolfgang Banzhaf, Pierre Collet

Computational Logic with Applications

Optimal Divide and Query

Algorithmic debugging is a semi-automatic debugging technique that allows the programmer to precisely identify the location of bugs without the need to inspect the source code. The technique has been successfully adapted to all paradigms, and mature implementations have been released for languages such as Haskell, Prolog and Java. For three decades, the algorithm introduced by Shapiro and later improved by Hirunkitti was thought to be optimal. In this paper we first show that this algorithm is not optimal and, moreover, that in some situations it is unable to find all possible solutions and is thus incomplete. We then present a new version of the algorithm that is proven optimal, and we introduce equations that allow the algorithm to identify all optimal solutions.
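The classic divide-and-query strategy discussed above can be sketched as follows (the tree representation and names are ours; the paper's improved, provably optimal variant is not reproduced): the debugger asks about the node whose subtree weight is closest to half the execution tree's weight, roughly halving the search space with each answer.

```python
# Minimal sketch of Shapiro-style divide and query on an execution tree.

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def weight(self):
        """Number of nodes in this subtree."""
        return 1 + sum(c.weight() for c in self.children)

def best_query(root):
    """Return the descendant whose subtree weight is closest to w/2."""
    target = root.weight() / 2
    best, best_dist = None, float("inf")
    stack = list(root.children)
    while stack:
        n = stack.pop()
        d = abs(n.weight() - target)
        if d < best_dist:
            best, best_dist = n, d
        stack.extend(n.children)
    return best

# In a call chain a -> b -> c -> d (total weight 4), the node whose
# subtree weight is closest to 2 is queried first.
tree = Node("a", [Node("b", [Node("c", [Node("d")])])])
```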

David Insa, Josep Silva

A Subterm-Based Global Trie for Tabled Evaluation of Logic Programs

Tabling is an implementation technique that overcomes some limitations of traditional Prolog systems in dealing with redundant sub-computations and recursion. A critical component in the implementation of an efficient tabling system is the design of the table space. The most popular and successful data structure for representing tables is based on a two-level trie, where one trie level stores the tabled subgoal calls and the other stores the computed answers. The Global Trie (GT) is an alternative table space organization designed to reduce the tables' memory usage, namely by storing terms in a global trie, thus preventing repeated representations of the same term in different trie data structures. In this paper, we propose an extension to the GT organization, named Global Trie for Subterms (GT-ST), where compound subterms in term arguments are represented as unique entries in the GT. Experimental results using the YapTab tabling system show that GT-ST support has the potential to achieve significant reductions in memory usage for programs with repeated compound subterms in term arguments, without compromising execution time for other programs.

João Raimundo, Ricardo Rocha

General Artificial Intelligence

Intention-Based Decision Making with Evolution Prospection

We explore a coherent combination, for decision making, of two Logic Programming based implemented systems, Evolution Prospection and Intention Recognition. The Evolution Prospection system has proven to be a powerful system for decision making, designing and implementing several kinds of preferences and useful environment-triggering constructs. It is here enhanced with an ability to recognize intentions of other agents—an important aspect not explored so far. The usage and usefulness of the combined system is illustrated with several extended examples.

The Anh Han, Luís Moniz Pereira

Unsupervised Music Genre Classification with a Model-Based Approach

New music genres emerge constantly, resulting from the influence of existing genres and other factors. In this paper we propose a data-driven approach that is able to cluster and classify music samples according to their type/category. The clustering method uses no previous knowledge of the genre of the individual samples or of the number of genres present in the dataset. This way, the music categorization is not imposed by the users' subjective knowledge about music genres, which may also be outdated. The method follows a model-based approach to group music samples into different clusters based only on their audio features, achieving perfect clustering accuracy (100%) when tested with 4 music genres. Once the clusters are learned, the classification method can categorize new music samples according to the previously learned groups. By using the Mahalanobis distance, this method is not restricted to spherical clusters, achieving a promising classification rate of 82%.
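The role of the Mahalanobis distance mentioned above can be illustrated with a minimal sketch (the diagonal-covariance simplification and all numbers are ours, not the paper's): it scales each dimension by the cluster's variance, so elongated (elliptical) clusters are handled naturally.

```python
# Sketch: Mahalanobis distance with a diagonal covariance (variances).

def mahalanobis_diag(x, mean, var):
    """Distance of point x from a cluster with the given mean/variances."""
    return sum((xi - mi) ** 2 / vi
               for xi, mi, vi in zip(x, mean, var)) ** 0.5

# A point 2 units away along a high-variance axis is "closer" to the
# cluster than a point 2 units away along a low-variance axis.
d_wide = mahalanobis_diag([2.0, 0.0], [0.0, 0.0], [4.0, 1.0])
d_narrow = mahalanobis_diag([0.0, 2.0], [0.0, 0.0], [4.0, 1.0])
```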

Luís Barreira, Sofia Cavaco, Joaquim Ferreira da Silva

Constrained Sequential Pattern Knowledge in Multi-relational Learning

In this work we present a multi-relational framework suitable for exploring the temporal patterns available in multi-relational databases. Its main idea consists of exploiting frequent sequence mining, using an efficient and direct method to learn temporal patterns in the form of sequences. Grounded on a coding methodology and on the efficiency of sequential miners, we find the most interesting sequential patterns available and then map these findings into a new table, which encodes the multi-relational timed data using sequential patterns. In the last step of our framework, we use an ILP algorithm to learn a theory on the enlarged relational database, which consists of the original multi-relational database and the new sequence relation.

We evaluate our framework on three classification problems, mapping each of three different types of sequential patterns: frequent sequences, closed sequences, and maximal sequences.

Carlos Abreu Ferreira, João Gama, Vítor Santos Costa

Summarizing Frequent Itemsets via Pignistic Transformation

Since the proposal of the well-known Apriori algorithm and the subsequent establishment of the area known as Frequent Itemset Mining, most of the scientific contributions in the data mining area have focused on methods that improve its efficiency and its applicability in new domains. The interest in the extraction of this sort of patterns lies in their expressiveness and syntactic simplicity. However, due to the large quantity of frequent patterns that are generally obtained, the evaluation process, necessary for obtaining useful knowledge, is difficult to carry out in practice. In this paper we present a formal method to summarize the whole set of mined frequent patterns into a single probability distribution in the framework of the Transferable Belief Model (TBM). The probability function is obtained by applying the Pignistic Transformation to the patterns, yielding a compact model that synthesizes the regularities present in the dataset and serves as a basis for the knowledge discovery and decision making processes.

In this work, we also present a real case study describing an application of our proposal in the field of Neuroscience. In particular, our main goal is the behavioral characterization, via pignistic distributions on attentional cognitive variables, of a group of children pre-diagnosed with one of the three types of ADHD (Attention Deficit Hyperactivity Disorder).
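The pignistic transformation the abstract relies on can be sketched in a few lines (the itemset masses below are invented for illustration): each mass assigned to a set of items is split equally among the set's members, yielding a single probability distribution.

```python
# Sketch of the pignistic transformation from the Transferable Belief
# Model: BetP(x) = sum over sets A containing x of m(A) / |A|.

def pignistic(masses):
    """masses: dict mapping frozenset itemsets to masses summing to 1."""
    betp = {}
    for itemset, m in masses.items():
        share = m / len(itemset)
        for item in itemset:
            betp[item] = betp.get(item, 0.0) + share
    return betp

masses = {frozenset({"a"}): 0.5,
          frozenset({"a", "b"}): 0.3,
          frozenset({"b", "c"}): 0.2}
betp = pignistic(masses)
```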

Francisco Guil-Reyes, María Teresa Daza-Gonzalez

A Simulated Annealing Algorithm for the Problem of Minimal Addition Chains

Cryptosystems require the computation of modular exponentiations, an operation related to the problem of finding a minimal addition chain. However, obtaining the shortest addition chain for a given exponent is an NP-complete problem whose search space grows factorially with the chain length. This paper introduces a novel approach to the minimal addition chain problem through an implementation of a Simulated Annealing (SA) algorithm. The representation used in our SA is based on the Factorial Number System (FNS). We use a fine-tuning process to get the best performance out of the SA, using a Covering Array (CA), Diophantine Equation (DE) solutions and Neighborhood Functions (NF). We present a parallel implementation of the fine-tuning process using the Message Passing Interface (MPI) and the Single Program Multiple Data (SPMD) model. These features allowed us to calculate minimal addition chains for benchmarks considered difficult in a very short time; the experimental results show that this approach is a viable alternative for solving the minimal addition chain problem.
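For readers unfamiliar with the object being optimized, a minimal validity check can be sketched (names are ours; the paper's SA and FNS encoding are not reproduced): an addition chain for n is a sequence starting at 1 and ending at n in which every element is the sum of two, possibly equal, earlier elements, and its length is what the optimization minimizes.

```python
# Sketch: check that `chain` is a valid addition chain for n.

def is_addition_chain(chain, n):
    if not chain or chain[0] != 1 or chain[-1] != n:
        return False
    for i in range(1, len(chain)):
        if not any(chain[i] == chain[j] + chain[k]
                   for j in range(i) for k in range(j, i)):
            return False
    return True

# A shortest chain for 15 has 5 additions: 1, 2, 3, 6, 12, 15.
ok = is_addition_chain([1, 2, 3, 6, 12, 15], 15)
```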

Adan Jose-Garcia, Hillel Romero-Monsivais, Cindy G. Hernandez-Morales, Arturo Rodriguez-Cristerna, Ivan Rivera-Islas, Jose Torres-Jimenez

Novelty Detection Using Graphical Models for Semantic Room Classification

This paper presents an approach to the problem of novelty detection in the context of semantic room categorization. The ability to assign semantic labels to areas in the environment is crucial for autonomous agents aiming to perform complex human-like tasks and human interaction. However, in order to be robust and naturally learn the semantics from the human user, the agent must be able to identify gaps in its own knowledge. To this end, we propose a method based on graphical models to identify novel input which does not match any of the previously learnt semantic descriptions. The method employs a novelty threshold defined in terms of conditional and unconditional probabilities. The novelty threshold is then optimized using an unconditional probability density model trained from unlabelled data.

André Susano Pinto, Andrzej Pronobis, Luis Paulo Reis

Intelligent Robotics

A Reinforcement Learning Based Method for Optimizing the Process of Decision Making in Fire Brigade Agents

Decision making in complex, multi-agent and dynamic environments such as disaster spaces is a challenging problem in Artificial Intelligence. Uncertainty, noisy input data and stochastic behavior, which are common characteristics of such environments, make real-time decision making more complicated. In this paper, an approach based on reinforcement learning is presented to address the bottleneck caused by the dynamicity and variety of conditions in such situations. The method is applied to the decision-making process of the RoboCup Rescue Simulation fire brigade agent, which learned a good strategy for saving civilians and the city from fire. The method increases the speed of learning and has very low memory usage. Its effectiveness is shown through simulation results.

Abbas Abdolmaleki, Mostafa Movahedi, Sajjad Salehi, Nuno Lau, Luís Paulo Reis

Humanoid Behaviors: From Simulation to a Real Robot

This paper presents the modifications needed to adapt a humanoid agent architecture and its behaviors from simulation to a real robot. The experiments were conducted using the Aldebaran Nao robot. The agent architecture was adapted from the RoboCup 3D Simulation League to the Standard Platform League with as few changes as possible. The modifications were motivated by small differences in the dimensions and dynamics of the simulated and real robots, and by the fact that the simulator does not create an exact copy of the real environment. In addition, the real robot's API differs from the simulated robot's API and imposes a few more restrictions on the allowed joint configurations. The general approach for reusing behaviors developed in simulation on the real robot was to: first, if necessary, make the simulated behavior compliant with the real robot's restrictions; second, apply the simulated behavior to the real robot at reduced velocity; and finally, increase the velocity, while adapting the behavior parameters, until the behavior becomes unstable or inefficient. This paper also presents an algorithm to calculate the three hip angles that produce a desired vertical hip rotation, since the Nao robot does not have a vertical hip joint. All simulation behaviors described in this paper were successfully adapted to the real robot.

Edgar Domingues, Nuno Lau, Bruno Pimentel, Nima Shafii, Luís Paulo Reis, António J. R. Neves

Market-Based Dynamic Task Allocation Using Heuristically Accelerated Reinforcement Learning

This paper presents a Multi-Robot Task Allocation (MRTA) system, implemented on a RoboCup Small Size League team, where robots participate in auctions for the available roles, such as attacker or defender, and use Heuristically Accelerated Reinforcement Learning to evaluate, in real time, their aptitude to perform these roles given the situation of the team.

The performance of the task allocation mechanism is evaluated and compared across different implementation variants, and the results show that the proposed MRTA system significantly increases team performance when compared to pre-programmed team behavior algorithms.

José Angelo Gurzoni, Flavio Tonidandel, Reinaldo A. C. Bianchi

Shop Floor Scheduling in a Mobile Robotic Environment

Nowadays, mobile robots are far more common in industry, driven by the need to reach new levels of productivity and increase profits by reducing production costs. Management and task scheduling are crucial for companies that constantly seek to improve their processes, increase their efficiency, reduce their production time and capitalize on their infrastructure by increasing and improving production.

However, faced with constantly shrinking production cycles, management algorithms can no longer focus solely on managing the available resources; they must optimize every interaction between them to achieve maximum efficiency for each production resource.

In this paper we present the new competition called Robot@Factory, its environment and its main objectives, paying special attention to the scheduling algorithm developed for this specific case study. The findings from the simulation approach allow us to conclude that mobile robot path planning, together with the scheduling of the associated tasks, is a complex problem with a strong impact on the efficiency of the entire production process.

Andry Maykol Pinto, Luís F. Rocha, António Paulo Moreira, Paulo G. Costa

Humanized Robot Dancing: Humanoid Motion Retargeting Based in a Metrical Representation of Human Dance Styles

Expressiveness and naturalness in robotic motions and behaviors can be replicated by using captured human movements. Considering dance a complex and expressive type of motion, in this paper we propose a method for generating humanoid dance motions transferred from human motion capture (MoCap) data. Motion data of samba dance was synchronized to samba music and manually annotated by experts, in order to build a spatiotemporal representation, with variability, of the dance movement in relation to the respective musical temporal structure (the musical meter). This enabled the determination and generation of variable dance key-poses according to the captured human body model. In order to retarget these key-poses from the original human model to the considered humanoid morphology, we propose methods for resizing and adapting the original trajectories to the robot joints, overcoming their varied kinematic constraints. Finally, a method for generating the angles for each robot joint is presented, enabling the reproduction of the desired poses on a simulated NAO humanoid robot. The achieved results validate our approach, suggesting that our method can generate poses from motion capture and reproduce them on a humanoid robot with a good degree of similarity.

Paulo Sousa, João L. Oliveira, Luis Paulo Reis, Fabien Gouyon

Knowledge Discovery and Business Intelligence

Bankruptcy Trajectory Analysis on French Companies Using Self-Organizing Map

As one of the major business problems, corporate bankruptcy has been extensively studied using a large variety of statistical and machine learning approaches. However, the trajectory of bankruptcy behavior is seldom explored in the literature. In this paper, we use self-organizing map neural networks to analyze the changes in the financial situation of companies over several consecutive years through a two-step clustering process. Firstly, the bankruptcy risk is characterized by a feature map, and the temporal sequence is thereby converted to a trajectory vector projected on the map. Afterwards, the trajectory map clusters the trajectory vectors into a number of evolution patterns. The approach is applied to a large database of French companies containing financial ratios spanning a period of four years. Typical behaviors, such as the deterioration and amelioration associated with bankruptcy risk, as well as the influence of the financial ratios, can be revealed by means of visual interpretation.

Ning Chen, Bernardete Ribeiro, Armando S. Vieira

Network Node Label Acquisition and Tracking

Complex networks are ubiquitous in the real world and represent a multitude of natural and artificial systems. Some of these networks are inherently dynamic and their structure changes over time, but only recently has the research community begun to better characterize them. In this paper we propose a novel general methodology to characterize time-evolving networks, analyzing the dynamics of their structure by labeling the nodes and tracking how these labels evolve. Node labeling is formulated as a clustering task that assigns a classification to each node according to its local properties. Association rule mining is then applied to sequences of node labels to extract useful rules that best describe changes in the network. We evaluate our method on two different networks, a real-world network of annual world trade and a synthetic scale-free network, in order to uncover evolution patterns. The results show that our approach is valid and gives insights into the dynamics of the network. As an example, the rules derived for the scale-free network capture the properties of preferential node attachment.
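The idea of mining rules from per-node label sequences can be illustrated with a minimal sketch. The following is not the paper's method: it only counts consecutive label transitions and keeps those passing illustrative support and confidence thresholds.

```python
from collections import Counter

def mine_label_rules(sequences, min_support=2, min_conf=0.6):
    """Mine simple sequential rules "x -> y" from per-node label sequences.

    sequences: one label sequence per node over time, e.g. ["A", "A", "B"].
    A rule (x, y) is kept if the transition x -> y occurs at least
    min_support times and with confidence count(x -> y) / count(x -> *)
    of at least min_conf.
    """
    transitions = Counter()   # counts of consecutive pairs (x, y)
    antecedents = Counter()   # counts of x appearing as a transition source
    for seq in sequences:
        for x, y in zip(seq, seq[1:]):
            transitions[(x, y)] += 1
            antecedents[x] += 1
    rules = {}
    for (x, y), n in transitions.items():
        conf = n / antecedents[x]
        if n >= min_support and conf >= min_conf:
            rules[(x, y)] = conf
    return rules
```

For example, with sequences `[["A","B"], ["A","B"], ["A","C"]]` the transition A→B passes both thresholds while A→C does not.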

Sarvenaz Choobdar, Fernando Silva, Pedro Ribeiro

Learning to Rank for Expert Search in Digital Libraries of Academic Publications

The task of expert finding has been getting increasing attention in the information retrieval literature. However, the current state of the art still lacks principled approaches for combining different sources of evidence in an optimal way. This paper explores the use of learning-to-rank methods as a principled approach for combining multiple estimators of expertise, derived from the textual contents, from the graph structure given by the citation patterns of the community of experts, and from profile information about the experts. Experiments over a dataset of academic publications in the area of Computer Science attest to the adequacy of the proposed approaches.

Catarina Moreira, Pável Calado, Bruno Martins

Thematic Fuzzy Clusters with an Additive Spectral Approach

This paper introduces an additive fuzzy clustering model for similarity data, oriented towards the representation and visualization of the activities of research organizations in a hierarchical taxonomy of the field. We propose a one-by-one cluster extracting strategy which leads to a version of the spectral clustering approach for similarity data. The derived fuzzy clustering method, FADDIS, is experimentally verified both on the research activity data and in comparison with two state-of-the-art fuzzy clustering methods. Two simulated data generators, for affinity data of Gaussian clusters and for genuine additive similarity data, are described, and a comparison of the results over these data is reported.

Susana Nascimento, Rui Felizardo, Boris Mirkin

Automatically Enriching a Thesaurus with Information from Dictionaries

Given that information in broad-coverage knowledge bases, such as thesauri, is usually incomplete, merging information from different sources is a good way to increase coverage. We propose a method for enriching a thesaurus with information acquired automatically from dictionaries: pairs of synonyms are assigned to candidate synsets, and the pairs whose elements are not in the thesaurus are clustered to identify new synsets. This method was used in the enrichment of a Brazilian Portuguese thesaurus with synonyms from a European Portuguese dictionary, and resulted in a larger and broader thesaurus with new words and new concepts. The assignments and the obtained synsets were manually evaluated and yielded correctness scores higher than 71% and 85%, respectively.

Hugo Gonçalo Oliveira, Paulo Gomes

Visualizing the Evolution of Social Networks

In recent years we have witnessed impressive advances in the social networks field, which has become a "hot" topic and a focus of considerable attention. The development of methods that focus on analyzing and understanding the evolution of data is also gaining momentum. In this paper we present an approach to visualize the evolution of dynamic social networks by using Tucker decomposition and the concept of trajectory. Our visualization strategy is based on trajectories of the network's actors in a two-dimensional space that preserves the network's structural properties. Furthermore, this approach can be used to identify similar actors by comparing the shape and position of their trajectories. To illustrate the proposed approach we conduct a case study using a set of temporal friendship networks.

Márcia Oliveira, João Gama

Using Data Mining Techniques to Predict Deformability Properties of Jet Grouting Laboratory Formulations over Time

Jet Grouting (JG) technology is one of the most used soft-soil improvement methods. Compared with other methods, JG is more versatile, since it can be applied to several soil types (ranging from coarse- to fine-grained soils) and create elements with different geometric shapes (e.g. columns, panels). In geotechnical works where serviceability limit state design criteria are required, the deformability properties of the improved soil need to be quantified. However, due to the heterogeneity of the soils and the high number of variables involved in the JG process, such design is a very complex and hard task. Thus, in order to achieve a more rational design of JG technology, this paper proposes and compares three data mining techniques for estimating the different moduli that can be defined in an unconfined compression test of JG Laboratory Formulations (JGLF). In particular, we analyze and discuss the predictive capabilities of Artificial Neural Networks, Support Vector Machines and Functional Networks. Furthermore, the key parameters in modulus estimation are identified by performing a 1-D sensitivity analysis procedure. We also analyze the effect of such variables on JGLF behavior.

Joaquim Tinoco, António Gomes Correia, Paulo Cortez

Multi-Agent Systems: Theory and Applications

Doubtful Deviations and Farsighted Play

Nash equilibrium is based on the idea that a strategy profile is stable if no player can benefit from a unilateral deviation. We observe that some locally rational deviations in a strategic form game may no longer be profitable if one takes into account the possibility of further deviations by the other players. As a solution, we propose the concept of farsighted pre-equilibrium, which takes into account only deviations that do not lead to a decrease of the player's outcome even if some other deviations follow. While Nash equilibria are taken to include plays that are certainly rational, our pre-equilibrium is supposed to rule out plays that are certainly irrational. We prove that positional strategies are sufficient to define the concept, study its computational complexity, and show that pre-equilibria correspond to subgame-perfect Nash equilibria in a meta-game obtained by using the original payoff matrix as arena and the deviations as moves.
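As background for the unilateral deviations discussed above, the textbook Nash equilibrium check (not the paper's farsighted pre-equilibrium) can be sketched for a strategic-form game as follows:

```python
def is_nash_equilibrium(payoffs, profile):
    """Check whether no player has a profitable unilateral deviation.

    payoffs: dict mapping a strategy profile (tuple, one strategy per
    player) to a tuple of payoffs, one per player.
    profile: the candidate strategy profile to test.
    """
    n = len(profile)
    # each player's strategy set, recovered from the payoff table keys
    strategies = [sorted({p[i] for p in payoffs}) for i in range(n)]
    for i in range(n):
        for s in strategies[i]:
            deviation = profile[:i] + (s,) + profile[i + 1:]
            if payoffs[deviation][i] > payoffs[profile][i]:
                return False  # player i benefits from deviating to s
    return True
```

On the Prisoner's Dilemma, for instance, this accepts (D, D) and rejects (C, C), since each player gains by unilaterally switching from C to D.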

Wojciech Jamroga, Matthijs Melissen

Uncertainty and Novelty-Based Selective Attention in the Collaborative Exploration of Unknown Environments

We propose a multi-agent approach to the problem of exploring unknown environments that relies on providing the agents with a measure of interest for the viewpoints of the surrounding environment. This measure of interest takes into account the expected decrease in uncertainty obtained by acquiring information about the objects seen from a viewpoint, as well as the novelty of the potential class labels of those objects. This allows the agents to selectively visit the objects that populate the environment. This single-agent exploration strategy is combined with a multi-agent exploration strategy relying on a brokering system that coordinates the agent team according to the agents' personal interests and their distance to the viewpoints. The advantages of these forms of selective attention, together with those of the collaborative multi-agent exploration strategy, are tested in several scenarios, comparing our approach against classical ones.

Luis Macedo, Miguel Tavares, Pedro Gaspar, Amílcar Cardoso

A Dynamic Agents’ Behavior Model for Computational Trust

The development of computational trust models is attracting growing attention in the multi-agent systems community, and these models are currently seen as extremely important in social networks, electronic business and grid computing, among other areas. However, one of the biggest limitations in validating existing computational trust models is the absence of realistic models of the behavior of agents. In fact, most of the work done in this area assumes that agents behave following simple and static probabilistic models. In this paper, we present a formal model of the behavior of business agents that engage in inter-organizational exchanges, based on diverse literature on socio-economic theories. With this model, we empirically show that some of the computational trust approaches most cited in the literature are not able to capture the temporal dynamics in the behavior of business agents. Based on the results obtained from this study, we enumerate different properties that must be present in computational trust models in order to cope with realistic agents' behavior.

Joana Urbano, Ana Paula Rocha, Eugénio Oliveira

The BMC Method for the Existential Part of RTCTLK and Interleaved Interpreted Systems

In the paper, we focus on the formal verification of multi-agent systems – modelled by interleaved interpreted systems – by means of the bounded model checking (BMC) method, where specifications are expressed in the existential fragment of the Real-Time Computation Tree Logic augmented to include standard epistemic operators (RTECTLK). In particular, we define an improved SAT-based BMC for RTECTLK, and present a performance evaluation of our newly developed BMC method by means of the well-known train controller and generic pipeline systems.

Bożena Woźna-Szcześniak, Agnieszka Zbrzezny, Andrzej Zbrzezny

Social Simulation and Modeling

Building Spatiotemporal Emotional Maps for Social Systems

In this paper we present a system that collects, from a society of agents, their emotional arousal and self-rated motivations, as well as their location, in order to plot a map of a city or geographical region with information about the motivational and emotional state of the agents that inhabit it. This tool provides a visualization of the agents' reactions to the external world, as well as the ultimate reasons behind them, namely their goals and beliefs, making it possible to know, for instance, where a society feels stressed and excited and which desires are predominant in specific parts of a geographical region. With a careful analysis of this information, further conclusions may be reached about particular aspects of the agent society.

Pedro Catré, Luis Cardoso, Luis Macedo, Amílcar Cardoso

Text Mining and Applications

An Exploratory Study on the Impact of Temporal Features on the Classification and Clustering of Future-Related Web Documents

In the last few years, a huge amount of temporal written information has become widely available on the Internet with the advent of forums, blogs and social networks. This gave rise to a new challenging problem called future retrieval, which consists of extracting future temporal information, that is known in advance, from web sources in order to answer queries that combine text of a future temporal nature. This paper aims to confirm whether web snippets can be used to form an intelligent web that can detect future expected events when their dates are already known. Moreover, the objective is to identify the nature of future texts and understand how these temporal features affect the classification and clustering of the different types of future-related texts: informative texts, scheduled texts and rumor texts. We have conducted a set of comprehensive experiments and the results show that web documents are a valuable source of future data that can be particularly useful in identifying and understanding the future temporal nature of a given implicit temporal query.

Ricardo Campos, Gaël Dias, Alípio Jorge

Using the Web to Validate Lexico-Semantic Relations

The evaluation of semantic relations acquired automatically from text is a challenging task, which generally ends up being done by humans. Despite being less prone to errors, manual evaluation is hardly repeatable, time-consuming and sometimes subjective. In this paper, we evaluate relational triples automatically, exploiting popular similarity measures on the Web. After using these measures to quantify triples according to the co-occurrence of their arguments and textual patterns denoting their relation, some scores revealed themselves to be highly correlated with the correctness rate of the triples. The measures were also used to select correct triples in a set, with best scores around 96%.

Hernani Pereira Costa, Hugo Gonçalo Oliveira, Paulo Gomes

A Resource-Based Method for Named Entity Extraction and Classification

We propose a resource-based Named Entity Classification (NEC) system, which combines named entity extraction with simple language-independent heuristics. Large lists (gazetteers) of named entities are automatically extracted making use of semi-structured information from Wikipedia, namely infoboxes and category trees. Language-independent heuristics are used to disambiguate and classify entities that have already been identified (or recognized) in text. We compare the performance of our resource-based system with that of a supervised NEC module implemented for the FreeLing suite, which was the winning system in the CoNLL-2002 competition. Experiments were performed over Portuguese text corpora, taking into account several domains and genres.

Pablo Gamallo, Marcos Garcia

Measuring Spelling Similarity for Cognate Identification

The most commonly used measures of string similarity, such as the Longest Common Subsequence Ratio (LCSR) and those based on Edit Distance, only take into account the number of matched and mismatched characters. However, we observe that cognates belonging to a pair of languages exhibit recurrent spelling differences such as “ph” and “f” in English-Portuguese cognates “phase” and “fase”. Those differences are attributable to the evolution of the spelling rules of each language over time, and thus they should not be penalized in the same way as arbitrary differences found in non-cognate words, if we are using word similarity as an indicator of cognaticity.

This paper describes SpSim, a new spelling similarity measure for cognate identification that is tolerant towards characteristic spelling differences, which are automatically extracted from a set of cognates known a priori. Compared to LCSR and EdSim (Edit Distance-based similarity), SpSim yields an F-measure 10% higher when used for cognate identification on five different language pairs.
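For reference, the LCSR baseline mentioned above is simply the length of the longest common subsequence divided by the length of the longer word. A minimal sketch (not SpSim itself) illustrates how it penalizes the characteristic "ph"/"f" difference in "phase"/"fase":

```python
def lcs_length(a, b):
    """Longest common subsequence length, via dynamic programming."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def lcsr(a, b):
    """Longest Common Subsequence Ratio of two words."""
    return lcs_length(a, b) / max(len(a), len(b))
```

Here `lcsr("phase", "fase")` is 3/5 = 0.6: the recurrent "ph"→"f" spelling difference is penalized just as heavily as an arbitrary mismatch, which is the behavior SpSim is designed to avoid.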

Luís Gomes, José Gabriel Pereira Lopes

Identifying Automatic Posting Systems in Microblogs

In this paper we study the problem of identifying systems that automatically inject non-personal messages into micro-blogging message streams, thus potentially biasing the results of certain information extraction procedures, such as opinion mining and trend analysis. We also study several classes of features, namely features based on the time of posting, the client used to post, the presence of links, the user interaction and the writing style. This last class of features, which we introduce here for the first time, proves to be a top performer, achieving accuracy near 90%, on par with the best features previously used for this task.

Gustavo Laboreiro, Luís Sarmento, Eugénio Oliveira

Determining the Polarity of Words through a Common Online Dictionary

Considerable attention has been given to the polarity of words and the creation of large polarity lexicons. Most approaches rely on advanced tools like part-of-speech taggers and rich lexical resources such as WordNet. In this paper we examine the viability of creating a moderate-sized polarity lexicon using only a common online dictionary, five positive and five negative seed words, a set of highly accurate extraction rules, and a simple yet effective polarity propagation algorithm. The algorithm evaluation results show an accuracy of 84.86% for a lexicon of 3034 words.
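The core idea of seed-based polarity propagation can be sketched as a breadth-first traversal of a dictionary graph. The graph structure and the "syn"/"ant" relation labels below are illustrative assumptions, not the paper's actual extraction rules:

```python
from collections import deque

def propagate_polarity(graph, positive_seeds, negative_seeds):
    """Propagate word polarity (+1 / -1) from seed words over a graph.

    graph: word -> list of (neighbor, relation), where relation is
    "syn" (neighbor keeps the same polarity) or "ant" (polarity flips).
    Each word receives the polarity implied by the first path reaching it.
    """
    polarity = {w: +1 for w in positive_seeds}
    polarity.update({w: -1 for w in negative_seeds})
    queue = deque(polarity)
    while queue:
        word = queue.popleft()
        for neighbor, relation in graph.get(word, []):
            if neighbor in polarity:
                continue  # already labeled by an earlier (shorter) path
            polarity[neighbor] = polarity[word] * (1 if relation == "syn" else -1)
            queue.append(neighbor)
    return polarity
```

Starting from the seed "good", a synonym link labels "great" positive, while an antonym link labels "bad" (and, transitively, its synonym "awful") negative.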

António Paulo-Santos, Carlos Ramos, Nuno C. Marques

A Bootstrapping Approach for Training a NER with Conditional Random Fields

In this paper we present a bootstrapping approach for training a Named Entity Recognition (NER) system. Our method starts by annotating persons' names on a dataset of 50,000 news items. This is performed using a simple dictionary-based approach. Using this training set, we build a classification model based on Conditional Random Fields (CRF). We then use the inferred classification model to perform additional annotations of the initial seed corpus, which is then used for training a new classification model. This cycle is repeated until the NER model stabilizes. We evaluate each bootstrapping iteration by calculating: (i) the precision and recall of the NER model in annotating a small gold-standard collection (HAREM); (ii) the precision and recall of the CRF bootstrapping annotation method over a small sample of news; and (iii) the correctness and the number of new names identified. Additionally, we compare the NER model with a dictionary-based approach, our baseline method. Results show that our bootstrapping approach stabilizes after 7 iterations, achieving high values of precision (83%) and recall (68%).
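The bootstrapping cycle described above can be sketched as a generic loop. The callables `annotate_seed`, `train` and `annotate` are placeholders for the dictionary annotator and the CRF model, which the abstract does not specify in detail:

```python
def bootstrap_ner(corpus, annotate_seed, train, annotate, max_iters=20):
    """Generic bootstrapping loop for training an annotator.

    annotate_seed(corpus)      -> initial annotations (e.g. dictionary-based)
    train(corpus, annotations) -> a model fitted on current annotations
    annotate(model, corpus)    -> new annotations produced by the model

    The loop re-trains and re-annotates until the annotations stop
    changing (the model has stabilized) or max_iters is reached.
    """
    annotations = annotate_seed(corpus)
    model = None
    for _ in range(max_iters):
        model = train(corpus, annotations)
        new_annotations = annotate(model, corpus)
        if new_annotations == annotations:
            break  # fixed point reached: the model stabilized
        annotations = new_annotations
    return model, annotations
```

In practice the `train`/`annotate` pair would wrap a CRF toolkit; here they are left abstract so the control flow of the bootstrap itself is the only claim being made.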

Jorge Teixeira, Luís Sarmento, Eugénio Oliveira

Doctoral Symposium on Artificial Intelligence

Domain-Splitting Generalized Nogoods from Restarts

The use of restart techniques associated with learning nogoods in solving Constraint Satisfaction Problems (CSPs) is starting to be considered of major importance for backtrack search algorithms. Recent developments show how to learn nogoods from restarts and that those nogoods are essential when using restarts. Using a backtracking search algorithm with 2-way branching, generalized nogoods are learned from the last branch of the search tree, immediately before the restart occurs. In this paper we further generalize the learned nogoods, now using domain-splitting branching and set branching. We believe that the use of restarts and the learning of domain-splitting generalized nogoods will improve backtrack search algorithms for certain classes of problems.

Luís Baptista, Francisco Azevedo

A Proposal for Transactions in the Semantic Web

The success of the Semantic Web project has triggered the emergence of new challenges for the research community. Among them is the ability to evolve the web by means of actions and updates, in accordance with standard proposals such as RIF or SPARQL-Update. However, once actions and updates are possible, the need emerges to ensure properties regarding the outcome of performing such actions. Moreover, this need also leaves open the specification of the properties and requirements that an intended solution should comply with.

In this paper we motivate the need for employing transactional properties in this new Web and delineate a proposal for the requirements that such a solution should meet. Afterwards, we develop a logic, based on the well-known Transaction Logic, that partially achieves these requirements, as a first step of an ongoing work.

Ana Sofia Gomes, José Júlio Alferes

