
2015 | Book

Agents and Artificial Intelligence

6th International Conference, ICAART 2014, Angers, France, March 6-8, 2014, Revised Selected Papers

Editors: Béatrice Duval, Jaap van den Herik, Stephane Loiseau, Joaquim Filipe

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the thoroughly refereed post-conference proceedings of the 6th International Conference on Agents and Artificial Intelligence, ICAART 2014, held in Angers, France, in March 2014.
The 21 revised full papers presented together with one invited paper were carefully reviewed and selected from 225 submissions. The papers are organized in two topical sections on agents and on artificial intelligence.

Table of Contents

Frontmatter

Invited Paper

Frontmatter
The New Era of High-Functionality Interfaces
Abstract
Traditional user interface design works best for applications that only have a relatively small number of operations for the user to choose from. These applications achieve usability by maintaining a simple correspondence between user goals and interface elements such as menu items or icons. But we are entering an era of high-functionality applications, where there may be hundreds or thousands of possible operations. In contexts like mobile phones, even if each individual application is simple, the combined functionality represented by the entire phone constitutes such a high-functionality command set. How are we going to manage the continued growth of high-functionality computing? Artificial Intelligence promises some strategies for dealing with high-functionality situations. Interfaces can be oriented around the goals of the user rather than the features of the hardware or software. New interaction modalities like natural language, speech, gesture, vision, and multi-modal interfaces can extend the interface vocabulary. End-user programming can rethink the computer as a collection of capabilities to be composed on the fly rather than a set of predefined “applications”. We also need new ways of teaching users about what kind of capabilities are available to them, and how to interact in high-functionality situations.
Henry Lieberman, Christopher Fry, Elizabeth Rosenzweig

Agents

Frontmatter
Performance of Communicating Cognitive Agents in Cooperative Robot Teams
Abstract
In this work, we investigate the effectiveness of communication strategies in the coordination of cooperative robot teams. The robots are required to perform search and retrieval tasks, in which they need to search for targets of interest in the environment and deliver them back to a home base. To study communication strategies in robot teams, we first discuss a case without communication, considered as the baseline, and analyse various coordination strategies for robots to explore and deliver the targets in such a setting. We then analyse three communication cases, in which the robots can exchange their beliefs and/or goals with one another. Using the communicated information, the robots can develop more sophisticated protocols to coordinate their activities. We use the Blocks World for Teams (BW4T) simulator to carry out experiments; robots in the BW4T are controlled by cognitive agents. The team goal of the robots is to search for and retrieve a sequence of colored blocks from the environment. In terms of cooperative teamwork, we study two main variations: a variant where all blocks to be retrieved have the same color (no ordering constraints on the team goal) and a variant where blocks of various colors need to be retrieved in a particular order (with ordering constraints). The experimental results show that communication is particularly helpful in enhancing team performance for the second variant, and that exchanging more information does not always yield better team performance.
Changyun Wei, Koen V. Hindriks, Catholijn M. Jonker
Agent-Based Simulation of Interconnected Wholesale Electricity Markets: An Application to the German and French Market Area
Abstract
In the context of wholesale electricity markets, agent-based models have proven to be an appealing approach. Given their ability to adequately model interrelated markets with their main players while considering detailed data, agent-based models have already provided valuable insights into related research questions, e.g. about adequate regulatory frameworks. In this paper, an agent-based model with an application to the German and French market areas is presented. The model is able to analyze short-term as well as long-term effects in electricity markets. It simulates the hourly day-ahead market with limited interconnection capacities between the regarded market areas and determines the market outcome as well as the power plant dispatch. Each year, every agent has the possibility to invest in new conventional capacities, which e.g. allows assessing questions related to security of supply in future years. Furthermore, the model can be used for participatory simulations in which humans take the place of the model's agents. In order to adapt to the ongoing changes in electricity markets, e.g. due to the rise of renewable energies and the integration of European electricity markets, the model is continuously developed further. Future extensions include, among others, the implementation of an intraday market as well as the integration of additional market areas.
Andreas Bublitz, Philipp Ringler, Massimo Genoese, Wolf Fichtner
Cooperative Transportation Using Pheromone Agents
Abstract
This paper presents an algorithm for cooperatively transporting objects by multiple robots without any initial knowledge. The robots are connected by communication networks, and the controlling algorithm is based on the pheromone communication of social insects such as ants. Unlike traditional pheromone-based cooperative transportation, we implement the pheromone as mobile software agents that control the mobile robots corresponding to the ants. A pheromone agent carries a vector value pointing to its birth location, which is used to guide a robot to that location. Since a pheromone agent can diffuse by migrating between robots, in the same manner as a physical pheromone, it can attract other robots scattered in the work field to its birth location. Once a robot finds an object, it briefly pushes the object while measuring the degree of inclination of the object. The robot then generates a pheromone agent whose vector value points to a pushing point suitable for suppressing the object's inclination. This process of pushes and pheromone-agent generation enables efficient transportation of the object. We have implemented a simulator that follows our algorithm and conducted experiments to demonstrate the feasibility of our approach.
Ryo Takahashi, Munehiro Takimoto, Yasushi Kambayashi
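The pheromone-agent idea in this abstract can be caricatured in a few lines. The sketch below is a hypothetical minimal model, not the authors' implementation: an agent carries a vector to its birth location, guides its hosting robot toward it, and evaporates a little on each migration between robots. The class name, decay constant, and death threshold are illustrative assumptions.

```python
import numpy as np

class PheromoneAgent:
    """Mobile software agent standing in for a chemical pheromone: it carries
    a vector pointing back to its birth location and weakens as it migrates.
    All parameter values here are illustrative, not taken from the paper."""

    def __init__(self, birth_pos, strength=1.0, decay=0.5):
        self.birth_pos = np.asarray(birth_pos, dtype=float)
        self.strength = strength
        self.decay = decay

    def migrate(self):
        """Hop to a neighbouring robot; evaporate on each hop.
        Returns False once the agent is too weak to survive."""
        self.strength *= self.decay
        return self.strength > 0.05

    def guidance(self, host_pos):
        """Unit vector guiding the hosting robot toward the birth location."""
        v = self.birth_pos - np.asarray(host_pos, dtype=float)
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
```

A hosting robot would simply add `guidance(...)` (scaled by `strength`) to its motion vector, so stronger, fresher pheromones attract more strongly.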
Design and Simulation of a Low-Resource Processing Platform for Mobile Multi-agent Systems in Distributed Heterogeneous Networks
Abstract
The design and simulation of an agent processing platform suitable for distributed computing in heterogeneous sensor networks consisting of low-resource nodes is presented, providing a unique distributed programming model and enhanced robustness of the entire heterogeneous environment in the presence of node, sensor, link, data processing, and communication failures. In this work, multi-agent systems with mobile activity-based agents are used for sensor data processing in unreliable mesh-like networks of nodes, each consisting of a single microchip with limited computational resources, which can be integrated into materials and technical structures. The agent behaviour, based on an activity-transition graph model, as well as agent interaction and mobility, can be efficiently integrated on the microchip using a configurable pipelined multi-process architecture based on the Petri net model and token-based processing. A new sub-state partitioning of activities simplifies and optimizes the processing platform significantly. Additionally, software implementations and simulation models with equal functional behaviour can be derived from the same program source. Hardware, software, and simulation platforms can be directly connected in heterogeneous networks. Agent interaction and communication is provided by a simple tuple-space database, and a reconfiguration mechanism of the agent processing system allows activity-graph changes at run time. The suitability of the agent processing platform in large-scale networks is demonstrated by agent-based simulation of the platform architecture at the process level with hundreds of nodes.
Stefan Bosse
Behavior Clustering and Explicitation for the Study of Agents’ Credibility: Application to a Virtual Driver Simulation
Abstract
The aim of this article is to provide a method for evaluating the credibility of agents' behaviors in immersive multi-agent simulations. It is based on quantitative data collected from both human and agent simulation logs during an experiment. These data allow us to semi-automatically extract behavior clusters. In order to obtain explicit information about the behaviors, we analyze questionnaires filled in by the users and annotations provided by a second set of participants. This enables us to draw user categories related to their behavior in the context of the simulation or of their real-life habits. We then study the similarities between behavior clusters, user categories, and participants' annotations. Afterwards, we evaluate the agents' credibility and make their behaviors explicit by comparing human behaviors to agent ones according to user categories and annotations. Our method is applied to the study of a virtual driver simulation through an immersive driving simulator.
Kévin Darty, Julien Saunier, Nicolas Sabouret

Artificial Intelligence

Frontmatter
Knowledge Gradient for Online Reinforcement Learning
Abstract
The most interesting challenge for a reinforcement learning agent is to learn online in a large, unknown discrete or continuous stochastic model. The agent not only has to trade off exploration against exploitation, but also has to find a good set of basis functions to approximate the value function. We extend offline kernel-based LSPI (least squares policy iteration) to online learning. Online kernel-based LSPI combines features of offline kernel-based LSPI and online LSPI: it uses the knowledge gradient policy as an exploration policy to trade off exploration and exploitation, and the approximate linear dependency (ALD) based kernel sparsification method to select basis functions automatically. We compare online kernel-based LSPI with online LSPI on 5 discrete Markov decision problems, where online kernel-based LSPI outperforms online LSPI in terms of optimal policy performance.
Saba Yahyaa, Bernard Manderick
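The approximate-linear-dependency (ALD) sparsification mentioned in the abstract admits a compact sketch. The following is a generic illustration of ALD-based dictionary construction with a Gaussian kernel, not the authors' code; the threshold `nu` and kernel width `sigma` are assumed parameters.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def ald_sparsify(samples, nu=0.1, sigma=1.0):
    """Build a dictionary of basis samples via the ALD test: a new sample is
    added only if it cannot be approximated (residual > nu) as a linear
    combination of the current dictionary in the kernel feature space."""
    dictionary = [samples[0]]
    for x in samples[1:]:
        K = np.array([[gaussian_kernel(a, b, sigma) for b in dictionary]
                      for a in dictionary])
        k = np.array([gaussian_kernel(a, x, sigma) for a in dictionary])
        # coefficients of the best approximation of x by the dictionary
        a = np.linalg.solve(K + 1e-9 * np.eye(len(dictionary)), k)
        delta = gaussian_kernel(x, x, sigma) - k @ a   # approximation residual
        if delta > nu:
            dictionary.append(x)
    return dictionary
```

Each retained dictionary element then serves as one basis function for the value-function approximation, keeping the basis compact as samples stream in.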
Statistical Response Method and Learning Data Acquisition using Gamified Crowdsourcing for a Non-task-oriented Dialogue Agent
Abstract
This paper proposes a construction method for non-task-oriented dialogue agents (chatbots) based on a statistical response method. The method prepares candidate utterances in advance and, from the data, learns which utterances are suitable for a given context. A dialogue agent constructed using our method therefore automatically selects a suitable utterance from the candidates depending on the context. This paper also proposes a low-cost, quality-assured method of acquiring learning data for the proposed response method; it uses crowdsourcing and brings game mechanics into data acquisition.
Results of an experiment using learning data obtained with the proposed acquisition method demonstrate that the appropriate utterance is selected with high accuracy.
Michimasa Inaba, Naoyuki Iwata, Fujio Toriumi, Takatsugu Hirayama, Yu Enokibori, Kenichi Takahashi, Kenji Mase
A Method for Binarization of Document Images from a Live Camera Stream
Abstract
This paper describes a method for binarization of document images from a live camera stream. The method is based on histogram matching over partial images (referred to as tiles). A method developed previously has been applied successfully to images with artificially added noise. Here, an improved method is presented, in which the user has more direct control over the specification of the binarizer. The resulting system is then taken a step further by considering the more difficult case of binarizing live camera images. It is demonstrated that the improved method works well in this case, even when the image stream is obtained using a (slightly modified) low-cost web camera with low resolution. For typical images obtained this way, a standard OCR reader is capable of reading the binarized images, detecting around 87.5% of all words without any error, and with mostly minor, correctable errors for the remaining words.
Mattias Wahde
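The tile-based idea can be illustrated with a toy binarizer. This is a hypothetical sketch of per-tile thresholding (here against a biased local mean), not the histogram-matching method of the paper; `tile` and `bias` are illustrative parameters.

```python
import numpy as np

def binarize_tiles(img, tile=8, bias=0.9):
    """Binarize a grayscale image tile by tile, thresholding each tile at a
    biased fraction of its local mean intensity. Working per tile makes the
    result robust to uneven illumination across the page."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = img[y:y + tile, x:x + tile]
            thresh = bias * block.mean()
            out[y:y + tile, x:x + tile] = (block > thresh).astype(np.uint8) * 255
    return out
```

Dark glyphs fall below the local threshold and map to 0, while the paper background maps to 255, which is the input format most OCR readers expect.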
A Probabilistic Semantics for Cognitive Maps
Abstract
Cognitive maps are a graphical knowledge representation model that describes influences between concepts, each influence being quantified by a value. Most cognitive map models use values the semantics of which is not formally defined. This paper introduces the probabilistic cognitive maps, a new cognitive map model where the influence values are assumed to be probabilities. We formally define this model and redefine the propagated influence, an operation that computes the global influence between two concepts in the map, to be in accordance with this semantics. To prove the soundness of our model, we propose a method to represent any probabilistic cognitive map as a Bayesian network.
Aymeric Le Dorze, Béatrice Duval, Laurent Garcia, David Genest, Philippe Leray, Stéphane Loiseau
New Techniques for Checking Dynamic Controllability of Simple Temporal Networks with Uncertainty
Abstract
A Simple Temporal Network with Uncertainty (STNU) is a structure for representing time-points, temporal constraints, and temporal intervals with uncertain—but bounded—durations. The most important property of an STNU is whether it is dynamically controllable (DC)—that is, whether there exists a strategy for executing its time-points such that all constraints will necessarily be satisfied no matter how the uncertain durations turn out. Algorithms for checking from scratch whether STNUs are dynamically controllable are called (full) DC-checking algorithms. Algorithms for checking whether the insertion of one new constraint into an STNU preserves its dynamic controllability are called incremental DC-checking algorithms. This paper introduces novel techniques for speeding up both full and incremental DC checking. The first technique, called rotating Dijkstra, enables constraints generated by propagation to be immediately incorporated into the network. The second uses novel heuristics that exploit the nesting structure of certain paths in STNU graphs to determine good orders in which to propagate constraints. The third technique, which only applies to incremental DC checking, maintains information acquired from previous invocations to reduce redundant computation. The most important contribution of the paper is the incremental algorithm, called Inky, that results from using these techniques. Like its fastest known competitors, Inky is a cubic-time algorithm. However, a comparative empirical evaluation of the top incremental algorithms, all of which have only very recently appeared in the literature, must be left to future work.
Luke Hunsberger
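For readers new to the area, the non-uncertain special case is instructive: an ordinary Simple Temporal Network (no uncertain durations) is consistent iff its distance graph contains no negative cycle, which Bellman-Ford detects. The sketch below shows only that baseline check; it is far simpler than DC checking for STNUs and is not the Inky algorithm.

```python
def stn_consistent(num_points, edges):
    """Bellman-Ford consistency check for a Simple Temporal Network.

    Each edge (u, v, w) encodes the constraint t_v - t_u <= w. The STN is
    consistent iff its distance graph contains no negative cycle, detected
    by relaxing from a virtual source at distance 0 to every time-point."""
    dist = [0.0] * num_points
    for _ in range(num_points):        # enough passes for n nodes + source
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # any further possible relaxation implies a negative cycle: inconsistent
    return all(dist[u] + w >= dist[v] for u, v, w in edges)
```

For example, the pair of constraints 2 ≤ t1 − t0 ≤ 5 is consistent, whereas requiring both t1 − t0 ≤ 5 and t1 − t0 ≥ 6 creates a negative cycle.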
Adaptive Neural Topology Based on Vapnik-Chervonenkis Dimension
Abstract
In many applications, learning algorithms act in dynamic environments where data flow continuously. In such situations, a learning algorithm should be able to work in real time, adjusting its controlling parameters, and even its structure, as new knowledge arrives. In a previous work, the authors proposed an online learning method for two-layer feedforward neural networks that is able to incorporate new hidden neurons during learning without losing the previously acquired knowledge. In this paper, we present an extension of this learning algorithm that includes a mechanism, based on the Vapnik-Chervonenkis dimension, to adapt the network topology automatically. The experimental study confirms that the proposed method is able to check whether a modification of the topology is necessary according to the needs of the learning process.
Beatriz Pérez-Sánchez, Oscar Fontenla-Romero, Bertha Guijarro-Berdiñas
Towards the Identification of Outliers in Satellite Telemetry Data by Using Fourier Coefficients
Abstract
Spacecraft provide a large set of information about on-board components, such as their temperature, power, and pressure. This information is constantly monitored by engineers, who capture the outliers and determine whether the situation is abnormal or not. However, due to the large quantity of information, only a small part of the data is processed or used for early anomaly detection. A commonly accepted research concept for anomaly prediction, as described in the literature, relies on projections, based on probabilities estimated from patterns learned on past data [6], and on data mining methods that enhance the conventional diagnosis approach [14]. Most conclude on the need to build a pattern identity chart. We propose an algorithm for efficient outlier detection that builds an identity chart of the patterns in past data based on their curve-fitting information. It detects the functional units of the patterns without a priori knowledge, with the intent to learn their structure and to reconstruct the sequence of events described by the signal. On top of statistical elements, each pattern is allotted a characteristics chart. This pattern identity enables fast pattern matching across the data, and the extracted features allow classification with standard methods such as support vector machines (SVM). The algorithm has been tested and evaluated using real satellite telemetry data. The outcome and performance show promising results for faster anomaly prediction.
Fabien Bouleau, Christoph Schommer
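A Fourier-coefficient signature of the kind the title alludes to can be sketched generically. This is an illustrative toy, not the authors' algorithm: the first few FFT magnitudes form an amplitude-invariant signature of a telemetry segment, and a pattern far from all reference signatures is flagged as an outlier (`n_coeffs` and `tol` are assumed parameters).

```python
import numpy as np

def fourier_signature(signal, n_coeffs=8):
    """Compact signature: magnitudes of the first n_coeffs FFT coefficients,
    normalized so the signature is invariant to the signal's amplitude."""
    spectrum = np.abs(np.fft.rfft(signal))[:n_coeffs]
    norm = np.linalg.norm(spectrum)
    return spectrum / norm if norm > 0 else spectrum

def is_outlier(signal, reference_sigs, tol=0.2):
    """Flag a segment whose signature is far from every known pattern."""
    sig = fourier_signature(signal)
    return all(np.linalg.norm(sig - r) > tol for r in reference_sigs)
```

Because the signature is low-dimensional, matching a new segment against a chart of known patterns is cheap, which is the point of building the identity chart up front.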
A Probabilistic Approach to Represent Emotions Intensity into BDI Agents
Abstract
The BDI (Belief-Desire-Intention) model is a well-known reasoning architecture for intelligent agents. According to the original BDI approach, an agent is able to deliberate about which action to take next using only three main mental states: beliefs, desires, and intentions. A BDI agent should be able to choose the most rational action with bounded resources and incomplete knowledge in an acceptable time. Just as humans need emotions to make immediate decisions with incomplete information, some recent works have extended the BDI architecture to integrate emotions. However, since they use only logic to represent emotions, they are not able to define the intensity of those emotions. In this paper we present an implementation of the appraisal process of emotions in BDI agents using a BDI language that integrates logic and probabilistic reasoning. Our emotional BDI implementation thus makes it possible to differentiate between emotions and affective reactions. This is an important aspect because emotions tend to generate a stronger response, and the intensity of an emotion also determines the intensity of an individual's reaction. In particular, we implement the event-based emotions with consequences for self, based on the OCC cognitive psychological theory of emotions. We also present an illustrative scenario and its implementation.
João Carlos Gluz, Patricia Augustin Jaques
Revisiting Classical Dynamic Controllability: A Tighter Complexity Analysis
Abstract
Simple Temporal Networks with Uncertainty (STNUs) allow the representation of temporal problems where some durations are uncontrollable (determined by nature), as is often the case for actions in planning. It is essential to verify that such networks are dynamically controllable (DC) – executable regardless of the outcomes of uncontrollable durations – and to convert them to an executable form. We use insights from incremental DC verification algorithms to re-analyze the original, classical, verification algorithm. This algorithm is the entry level algorithm for DC verification, based on a less complex and more intuitive theory than subsequent algorithms. We show that with a small modification the algorithm is transformed from pseudo-polynomial to \(O(n^4)\) which makes it still useful. We also discuss a change reducing the amount of work performed by the algorithm.
Mikael Nilsson, Jonas Kvarnström, Patrick Doherty
ReCon: An Online Task ReConfiguration Approach for Robust Plan Execution
Abstract
The paper presents an approach for robust plan execution in the presence of consumable and continuous resources. Plan execution is a critical activity, since a number of unexpected situations could prevent the tasks from being accomplished; at the same time, many robotic scenarios (e.g. in space exploration) prevent robotic systems from deviating significantly from the original plan. In order to both (i) preserve the “stability” of the current plan and (ii) provide the system with a reasonable level of autonomy in handling unexpected situations, an innovative approach based on task reconfiguration is presented. Exploiting an enriched action formulation grounded in the notion of execution modalities, ReCon replaces the replanning mechanism with a novel reconfiguration mechanism handled by means of a CSP solver. The paper studies the system on a typical planetary rover mission and provides a rich experimental analysis showing that, when the anomalies concern unexpected resource consumption, reconfiguration is not only more efficient but also more effective than a plan adaptation mechanism. The experiments evaluate the recovery performance under constraints on computational costs.
Enrico Scala, Roberto Micalizio, Pietro Torasso
Combining Semantic Query Disambiguation and Expansion to Improve Intelligent Information Retrieval
Abstract
We show in this paper how Semantic Query Disambiguation (SQD) combined with Semantic Query Expansion (SQE) can improve the effectiveness of intelligent information retrieval. Firstly, we propose and assess a possibilistic approach mixing SQD and SQE. This approach is based on corpus analysis using co-occurrence graphs modeled by possibilistic networks; our model for relevance judgment uses possibility theory to take advantage of a double measure (possibility and necessity). Secondly, we propose and evaluate a probabilistic circuit-based approach combining SQD and SQE in an intelligent information retrieval context. In this approach, both the SQD and SQE tasks are based on a graph data model in which circuits between nodes (words) represent the probabilistic scores of their semantic proximities. To compare the performance of these two approaches, we perform experiments using the standard ROMANSEVAL test collection for the SQD task and the CLEF-2003 benchmark for the SQE process in French monolingual information retrieval evaluation. The results show the impact of SQD on SQE, based on the standard recall/precision metrics, for both the possibilistic and the probabilistic circuit-based approaches. Moreover, the possibilistic approach outperforms the probabilistic one, since it takes imprecision into account.
Bilel Elayeb, Ibrahim Bounhas, Oussama Ben Khiroun, Narjès Bellamine Ben Saoud
Full-Reference Predictive Modeling of Subjective Image Quality Assessment with ANFIS
Abstract
Digital images often undergo various processing steps and distortions that subsequently impact the perceived image quality. Predicting image quality can be a crucial step in tuning parameters for designing more effective acquisition, transmission, and storage multimedia systems. With the huge number of images captured and exchanged every day, automatic prediction of image quality that correlates well with human judgment is steadily gaining importance. In this paper, we investigate the performance of three combinations of objective metrics for image quality prediction with an adaptive neuro-fuzzy inference system (ANFIS). Images are processed to extract various attributes, which are then used to build a predictive model that estimates a differential mean opinion score for different types of distortions. Using a publicly available, subjectively rated image database, the proposed method is evaluated and compared to individual metrics and to an existing technique, based on correlation and error measures. The results show that the proposed method is a promising approach for predicting the subjective quality of images.
El-Sayed M. El-Alfy, Mohammed Rehan Riaz
A Heuristic Automatic Clustering Method Based on Hierarchical Clustering
Abstract
We propose a clustering method which produces valid results while automatically determining an optimal number of clusters. The proposed method achieves this with minimal user input, none of which pertains to the number of clusters. Our method's effectiveness in clustering, including its ability to produce valid results on data sets presenting nested or interlocking shapes, is demonstrated via cluster validity analysis and compared both to methods that are provided with a known optimal number of clusters and to other automatic clustering methods. Depending on the particularities of the data set used, our method produces results which are roughly equivalent to or better than those of the compared methods.
François LaPlante, Nabil Belacel, Mustapha Kardouchi
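The idea of choosing the number of clusters automatically from a hierarchical merge sequence can be sketched with naive single-linkage agglomeration. This is a generic illustration, not the authors' method: merges proceed greedily, and the partition just before the largest jump in merge distance is returned.

```python
import numpy as np

def auto_cluster(points):
    """Naive single-linkage agglomeration; the number of clusters is chosen
    automatically at the largest gap in successive merge distances."""
    pts = np.asarray(points, dtype=float)
    clusters = [[i] for i in range(len(pts))]
    dist = lambda a, b: min(np.linalg.norm(pts[i] - pts[j])
                            for i in a for j in b)
    merges = []   # (merge distance, partition snapshot before that merge)
    while len(clusters) > 1:
        d, i, j = min((dist(clusters[i], clusters[j]), i, j)
                      for i in range(len(clusters))
                      for j in range(i + 1, len(clusters)))
        merges.append((d, [c[:] for c in clusters]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    # the largest jump in merge distance marks the natural cut level
    ds = [m[0] for m in merges]
    gaps = [ds[t + 1] - ds[t] for t in range(len(ds) - 1)]
    cut = gaps.index(max(gaps)) + 1 if gaps else 0
    return merges[cut][1]   # partition just before the big-gap merge
```

On two well-separated groups of points, the intra-group merges are cheap, the final inter-group merge is expensive, and the gap heuristic recovers exactly two clusters without being told their number.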
Robust Signature Discovery for Affymetrix GeneChip® Cancer Classification
Abstract
Phenotype prediction is one of the central issues in genetics and medical sciences research. Due to the advent of high-throughput screening technologies, microarray-based cancer classification has become a standard procedure to identify cancer-related gene signatures. Since gene expression profiling of the transcriptome is of high dimensionality, it is a challenging task to discover a biologically functional signature across different cell lines. In this article, we present an innovative framework for finding a small set of discriminative genes for a specific disease phenotype classification by using information theory. The framework is a data-driven approach and considers feature relevance, redundancy, and interdependence in the context of feature pairs. Its effectiveness has been validated using a brain cancer benchmark, where the gene expression profiling matrix is derived from the Affymetrix Human Genome U95Av2 GeneChip®. Three multivariate filters based on information theory have also been used for comparison. To show the strengths of the framework, three performance measures, two sets of enrichment analysis, and a stability index have been used in our experiments. The results show that the framework is robust and able to discover a gene signature that achieves a high level of classification performance and statistically more significant enrichment.
Hung-Ming Lai, Andreas Albrecht, Kathleen Steinhöfel
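The information-theoretic relevance filtering described above rests on the mutual information between a discrete feature and the class label. A minimal plug-in estimator (in bits) might look like the sketch below; it covers only the relevance term and ignores the pairwise redundancy and interdependence terms the framework also uses.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in mutual information (in bits) between two discrete vectors,
    e.g. a binarized gene-expression feature and a phenotype label."""
    n = len(x)
    joint = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        # p(a,b) * log2( p(a,b) / (p(a) * p(b)) )
        mi += p_ab * np.log2(p_ab * n * n / (px[a] * py[b]))
    return mi
```

A feature that perfectly predicts a balanced binary label scores 1 bit; an independent feature scores 0, so ranking genes by this quantity is a simple relevance filter.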
Influence of the Interest Operators in the Detection of Spontaneous Reactions to the Sound
Abstract
Hearing plays a key role in our social participation and daily activities. Hearing loss is one of the most common health conditions, so its diagnosis and monitoring are highly important. The standard test for the evaluation of hearing is pure-tone audiometry, a behavioral test that requires proper interaction and communication between the patient and the audiologist. This need for understanding makes the test unworkable with patients with severe cognitive decline or other communication disorders. In these particular cases, the audiologist bases the evaluation on the detection of spontaneous facial reactions that may indicate auditory perception. With the aim of supporting the audiologist, a screening method was previously proposed that analyzes video sequences and looks for gestural eye reactions. In this paper, a comprehensive survey of the interest operators used in one of the crucial steps of that methodology is presented. The survey determines the optimal configuration for each operator and evaluates in detail their combination with different classification techniques. The obtained results provide a global view of the suitability of the different interest operators.
A. Fernández, J. Marey, M. Ortega, M. G. Penedo
Confidence Measure for Experimental Automatic Face Recognition System
Abstract
This paper deals with automatic face recognition and proposes and implements an experimental face recognition system. The system will be used to automatically annotate photographs taken in a completely uncontrolled environment. The recognition accuracy of such a system can be improved by identifying incorrectly classified samples in a post-processing step; however, this step is usually missing in current systems. In this work, we address this issue by proposing and integrating a confidence measure module to identify incorrectly classified examples. We propose a novel confidence measure approach which combines four partial measures with a multi-layer perceptron: two measures are based on the posterior probability, and the other two use the predictor features. The experimental results show that the proposed system is very efficient, since almost all erroneous examples are successfully detected.
Pavel Král, Ladislav Lenc
Backmatter
Metadata
Copyright Year
2015
Electronic ISBN
978-3-319-25210-0
Print ISBN
978-3-319-25209-4
DOI
https://doi.org/10.1007/978-3-319-25210-0
