
2007 | Book

AI*IA 2007: Artificial Intelligence and Human-Oriented Computing

10th Congress of the Italian Association for Artificial Intelligence, Rome, Italy, September 10-13, 2007. Proceedings

Edited by: Roberto Basili, Maria Teresa Pazienza

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Invited Talks

Learning to Select Team Strategies in Finite-Timed Zero-Sum Games

Games, by definition, offer the challenge of the presence of an opponent, to which a playing strategy should respond. In finite-timed zero-sum games, the strategy should enable the team to win the game within a limited playing time. Motivated by robot soccer, in this talk, we will present several approaches towards learning to select team strategies in such finite-timed zero-sum games. We will introduce an adaptive playbook approach with implicit opponent modeling, in which multiple team strategies are represented as variable weighted plays. We will discuss different plays as a function of different game situations and opponents. In conclusion, we will present an MDP-based learning algorithm to reason in particular about the current score and the game time left. Through extensive simulated empirical studies, we will demonstrate the effectiveness of the learning approach. In addition, the talk will include illustrative examples from robot soccer. The major part of this work was carried out jointly with my PhD student Colin McMillen.

Manuela Veloso
Expressive Intelligence: Artificial Intelligence, Games and New Media

Artificial intelligence methods open up new possibilities in art and entertainment, enabling the creation of believable characters with rich personalities and emotions, interactive story systems that incorporate player interaction into the construction of dynamic plots, and interactive installations and sculptural works that are able to perceive and respond to the human environment. At the same time as AI opens up new fields of artistic expression, AI-based art itself becomes a fundamental research agenda, posing and answering novel research questions which would not arise without doing AI research in the context of art and entertainment. I call this agenda, in which AI research and art mutually inform each other, Expressive AI. These ideas will be illustrated by looking at several current and past projects, including the interactive drama Facade. As a new game genre, interactive drama involves socially and emotionally charged interaction with characters in the context of a dynamically evolving plot.

Michael Mateas
Artificial Ontologies and Real Thoughts: Populating the Semantic Web?

Corpus linguistic methods are discussed in the context of the automatic extraction of a candidate terminology of a specialist domain of knowledge. Collocation analysis of the candidate terms leads to some insight into the ontological commitment of the domain community or collective. The candidate terminology and ontology can be easily verified and validated and subsequently may be used in the construction of information extraction systems and of knowledge-based systems. The use of the methods is illustrated by an investigation of the ontological commitment of four major collectives: nuclear physics, cell biology, linguistics and anthropology. An analysis of a diachronic corpus allows an insight into changes in basic concepts within a specialism; an analysis of a corpus comprising texts published during a short and fixed time period – a synchronic corpus – shows how different sub-specialisms within a collective commit themselves to an ontology.

Khurshid Ahmad

Knowledge Representation and Reasoning

Model-Based Diagnosability Analysis for Web Services

In this paper we deal with the problem of model-based diagnosability analysis for Web Services. The goal of diagnosability analysis is to determine whether the information one can observe during service execution is sufficient to precisely locate (by means of diagnostic reasoning) the source of the problem. The major difficulty in the context of Web Services is that models are distributed and no single entity has a global view of the complete model. In the paper we propose an approach that computes diagnosability for the decentralized diagnostic framework described in [1], based on a Supervisor coordinating several Local Diagnosers. We also show that diagnosability analysis can be performed without requiring of the Local Diagnosers operations different from those needed for diagnosis. The proposed approach is incremental: each fault is first analyzed independently of the occurrence of other faults, then the results are used to analyze combinations of behavioral modes, avoiding in most cases an exhaustive check of all combinations.

Stefano Bocconi, Claudia Picardi, Xavier Pucel, Daniele Theseider Dupré, Louise Travé-Massuyès
Finite Model Reasoning on UML Class Diagrams Via Constraint Programming

Finite model reasoning in UML class diagrams is an important task for assessing the quality of the analysis phase in the development of software applications in which it is assumed that the number of objects of the domain is finite. In this paper, we show how to encode finite model reasoning in UML class diagrams as a constraint satisfaction problem (CSP), exploiting techniques developed in description logics. In doing so we set up and solve an intermediate CSP problem to deal with the explosion of “class combinations” arising in the encoding. To solve the resulting CSP problems we rely on the use of off-the-shelf tools for constraint modeling and programming. As a result, we obtain, to the best of our knowledge, the first implemented system that performs finite model reasoning on UML class diagrams.

Marco Cadoli, Diego Calvanese, Giuseppe De Giacomo, Toni Mancini
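The cardinality reasoning underlying such an encoding can be illustrated on a single binary association. The following is a minimal, hypothetical sketch (not the authors' encoding; it simplifies by counting links as a multiset, so a finite model exists exactly when some total link count satisfies the multiplicity bounds of both association ends):

```python
def finite_model_exists(n, m, a_min, a_max, b_min, b_max):
    """Toy finite-model check for one binary UML association with
    |A| = n and |B| = m, where every A-instance must link to between
    a_min and a_max B-instances and every B-instance to between
    b_min and b_max A-instances.  Under the multigraph simplification,
    a model exists iff some total link count t satisfies both
    per-class bounds, i.e. the two intervals overlap."""
    lo = max(a_min * n, b_min * m)   # smallest total link count both sides allow
    hi = min(a_max * n, b_max * m)   # largest total link count both sides allow
    return lo <= hi

# Every A needs exactly 2 B's, every B exactly 3 A's:
# with |A| = 3, |B| = 2 both ends demand 6 links, so a model exists.
print(finite_model_exists(3, 2, 2, 2, 3, 3))  # True
# With |A| = |B| = 1 the demands (2 vs 3 links) cannot be reconciled.
print(finite_model_exists(1, 1, 2, 2, 3, 3))  # False
```

In the paper's actual setting the unknowns (the class cardinalities themselves) are CSP variables, but the same interval-overlap arithmetic is what the multiplicity constraints compile down to.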
Model Checking and Preprocessing

Temporal Logic Model Checking is a verification method having many industrial applications. This method describes a system as a formal structure called a model; some properties, expressed in a temporal logic formula, can then be checked over this model. In order to improve performance, some tools allow preprocessing the model so that a set of properties can be verified by reusing the same preprocessed model. In this article, we prove that this preprocessing cannot possibly reduce complexity, if its result is bound to be of size polynomial in the size of the input. This result also holds if the formula is the part of the data that is preprocessed, which has similar practical implications.

Andrea Ferrara, Paolo Liberatore, Marco Schaerf
Some Issues About Cognitive Modelling and Functionalism

The aim of this paper is to introduce some methodological issues concerning the cognitive explanatory power of AI systems. We use the new concept of mesoscopic functionalism, which is based on links between computational complexity theory and functionalism. This functionalism tries to introduce a unique intermediate, mesoscopic, descriptive level based on the key role of heuristics. The enforcement of constraints at this level can ensure a cognitive explanatory power which is not guaranteed by the mere selection of a modelling technique. We therefore reconsider the discussions about the empirical underdetermination of AI systems, raised especially for classical systems, and about the search for the “right and unique” technique for cognitive modelling. This allows us to consider the several mainstreams of cognitive artificial intelligence as different attempts to resolve underdetermination and thus, in a way, to unify them as a manifestation of scientific pluralism.

Francesco Gagliardi
Understanding the Environment Through Wireless Sensor Networks

This paper presents a new cognitive architecture for extracting meaningful, high-level information from the environment, starting from the raw data collected by a Wireless Sensor Network. The proposed framework is capable of building rich internal representation of the sensed environment by means of intelligent data processing and correlation. Furthermore, our approach aims at integrating the connectionist, data-driven model with the symbolic one, that uses a high-level knowledge about the domain to drive the environment interpretation. To this aim, the framework exploits the notion of conceptual spaces, adopting a conceptual layer between the subsymbolic one, that processes sensory data, and the symbolic one, that describes the environment by means of a high level language; this intermediate layer plays the key role of anchoring the upper layer symbols. In order to highlight the characteristics of the proposed framework, we also describe a sample application, aiming at monitoring a forest through a Wireless Sensor Network, in order to timely detect the presence of fire.

Salvatore Gaglio, Luca Gatani, Giuseppe Lo Re, Marco Ortolani
An Implementation of a Free-Variable Tableaux for KLM Preferential Logic P of Nonmonotonic Reasoning: The Theorem Prover FreeP 1.0

We present FreeP 1.0, a theorem prover for the KLM preferential logic P of nonmonotonic reasoning. FreeP 1.0 is a SICStus Prolog implementation of a free-variable, labelled tableau calculus for P, obtained by introducing suitable modalities to interpret conditional assertions. The performance of FreeP 1.0 is promising. FreeP 1.0 can be downloaded at http://www.di.unito.it/~pozzato/FreeP1.0.

Laura Giordano, Valentina Gliozzi, Nicola Olivetti, Gian Luca Pozzato
Ranking and Reputation Systems in the QBF Competition

Systems competitions play a fundamental role in the advancement of the state of the art in several automated reasoning fields. The goal of such events is to answer the question: “Which system should I buy?”. In this paper, we consider voting systems as an alternative to other procedures which are well established in automated reasoning contests. Our research aims to compare methods that are customary in the context of social choice with methods that are targeted to artificial settings, including a new hybrid method that we introduce.

Massimo Narizzano, Luca Pulina, Armando Tacchella
A Top Down Interpreter for LPAD and CP-Logic

Logic Programs with Annotated Disjunctions and CP-logic are two different but related languages for expressing probabilistic information in logic programming. The paper presents a top down interpreter for computing the probability of a query from a program in one of these two languages. The algorithm is based on the one available for ProbLog. The performance of the algorithm is compared with that of a Bayesian reasoner and of the ProbLog interpreter. On programs that have a small grounding, the Bayesian reasoner is more scalable, but programs with a large grounding require the top down interpreter. The comparison with ProbLog shows that the added expressiveness effectively requires more computation resources.

Fabrizio Riguzzi
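The kind of computation such an interpreter ultimately performs can be illustrated on the propositional core of the problem. A minimal sketch (not the paper's algorithm; facts and probabilities are invented): given independent probabilistic facts and the explanations of a query (each explanation a set of facts that together entail it), the query probability follows by inclusion–exclusion.

```python
from itertools import combinations

def query_probability(prob, explanations):
    """Probability that at least one explanation (a set of mutually
    independent probabilistic facts) is entirely true, computed by
    inclusion-exclusion.  `prob` maps each fact to its probability;
    `explanations` is a list of fact sets."""
    total = 0.0
    for r in range(1, len(explanations) + 1):
        for subset in combinations(explanations, r):
            facts = set().union(*subset)   # facts needed by all chosen explanations
            p = 1.0
            for f in facts:
                p *= prob[f]               # independence: probabilities multiply
            total += (-1) ** (r + 1) * p
    return total

# Two explanations sharing fact 'a':
# P((a&b) or (a&c)) = 0.30 + 0.20 - P(a&b&c) = 0.50 - 0.12 = 0.38
probs = {'a': 0.5, 'b': 0.6, 'c': 0.4}
print(query_probability(probs, [{'a', 'b'}, {'a', 'c'}]))  # 0.38
```

Real interpreters avoid this exponential enumeration (e.g. via BDDs), which is precisely where the scalability trade-offs discussed in the abstract arise.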

Multiagent Systems, Distributed AI

A Multi-layered General Agent Model

We propose a layered representation of general agent models with a base layer, composed of basic agent features including control, and a higher layer, consisting of a meta-control with the task of tuning, supervising and modifying the base layer. This provides higher flexibility in how an agent is built and may evolve.

Stefania Costantini, Arianna Tocchio, Francesca Toni, Panagiota Tsintza
Goal Generation with Ordered Beliefs

A rational agent adopts (or changes) its desires/goals when new information becomes available or its “desires” (e.g., tasks it is supposed to carry out) change. In conventional approaches to goal generation, a desire is adopted if and only if all conditions leading to its generation are satisfied. The fact that certain beliefs might be differently relevant in the process of desire/goal generation is not considered. As a matter of fact, a belief could be crucial for adopting a given goal but less crucial for adopting another, and a belief could be more influential than another in the generation of a particular goal.

We propose an approach which takes into account the relevance of beliefs (more or less useful and more or less prejudicial) in the desire/goal generation process. More precisely, we propose a logical framework to represent changes in the mental state of an agent depending on the acquisition of new information and/or on the arising of new desires, by taking into account the fact that some beliefs may help the generation of a goal while others may prevent it.

We compare this logical framework with one where the relevance of beliefs is not accounted for, and we show that the novel framework favors the adoption of a broader set of goals, exhibiting a behavior which imitates more faithfully how goals are generated/adopted in real life.

Célia da Costa Pereira, Andrea G. B. Tettamanzi
Verifying Agent Conformance with Protocols Specified in a Temporal Action Logic

The paper addresses the problem of agent compatibility and conformance to protocols. We assume that the specification of protocols is given in an action theory by means of temporal constraints and, in particular, communicative actions are defined in terms of their effects and preconditions on the social state of the protocol. We show that the problem of verifying the conformance of an agent with a protocol can be solved by making use of an automata-based approach, and that the conformance of a set of agents with a protocol guarantees that their interaction cannot produce deadlock situations and gives rise only to runs of the protocol.

Laura Giordano, Alberto Martelli

Knowledge Engineering, Ontologies and the Semantic Web

Harvesting Relational and Structured Knowledge for Ontology Building in the WPro Architecture

We present two algorithms for supporting semi-automatic ontology building, integrated in WPro, a new architecture for ontology learning from Web documents. The first algorithm automatically extracts ontological entities from tables, by using specific heuristics and WordNet-based analysis. The second algorithm harvests semantic relations from unstructured texts using Natural Language Processing techniques. The integration in WPro allows a friendly interaction with the user for validating and modifying the extracted knowledge, and for uploading it into an existing ontology. Both algorithms show promising performance in the extraction process, and offer a practical means to speed up the overall ontology building process.

Daniele Bagni, Marco Cappella, Maria Teresa Pazienza, Marco Pennacchiotti, Armando Stellato
English Querying over Ontologies: E-QuOnto

Relational database (DB) management systems provide the standard means for structuring and querying large amounts of data. However, to access such data the exact structure of the DB must be known, and such a structure might be far from a human being's conceptualization of the stored information. Ontologies help to bridge this gap, by providing a high level conceptual view of the information stored in a DB in a cognitively more natural way. Even in this setting, casual end users might not be familiar with the formal languages required to query ontologies. In this paper we address this issue and study the problem of ontology-based data access by means of natural language questions instead of queries expressed in some formal language. Specifically, we analyze how complex real-life questions are and how far they are from the query languages accepted by ontology-based data access systems, how we can obtain the formal query representing a given natural language question, and how we can handle those questions which are too complex w.r.t. the accepted query language.

Raffaella Bernardi, Francesca Bonin, Diego Calvanese, Domenico Carbotta, Camilo Thorne
Use of Ontologies in Practical NL Query Interpretation

This paper describes how a domain ontology has been used in a practical system for query interpretation. It presents a general methodology for building a semantic and language-independent representation of the meaning of a query on the basis of the contents of the ontology. The basic idea is to look for paths in the ontology connecting concepts related to words appearing in the NL query. The final result is what has been called an Ontological Query, i.e. a semantic description of the user’s target. Since the domain is restricted, the problem of semantic ambiguity is not as relevant as in unrestricted applications, but some hints about how to obtain an unambiguous representation will be given.

Leonardo Lesmo, Livio Robaldo

Machine Learning

Evolving Complex Neural Networks

Complex networks like the scale-free model proposed by Barabasi-Albert are observed in many biological systems, and the application of this topology to artificial neural networks leads to interesting considerations. In this paper, we present a preliminary study on how to evolve neural networks with complex topologies. This approach is applied to the problem of modeling a chemical process in the presence of unknown inputs (disturbance). The evolutionary algorithm we use considers an initial population of individuals with different scale-free networks in the genotype, and at the end of the algorithm we observe and analyze the topology of the networks with the best performance. Experimentation on modeling a complex chemical process shows that the performance of networks with complex topology is similar to that of feed-forward ones, but the analysis of the topology of the best-performing networks leads to the conclusion that the distribution of input node information affects the network performance (modeling capability).

Mauro Annunziato, Ilaria Bertini, Matteo De Felice, Stefano Pizzuti
Discovering Relational Emerging Patterns

The discovery of emerging patterns (EPs) is a descriptive data mining task defined for pre-classified data. It aims at detecting patterns which contrast two classes and has been extensively investigated for attribute-value representations. In this work we propose a method, named Mr-EP, which discovers EPs from data scattered in multiple tables of a relational database. Generated EPs can capture the differences between objects of two classes which involve properties possibly spanning separate data tables. We implemented Mr-EP in a pre-existing multi-relational data mining system which is tightly integrated with a relational DBMS, and then we tested it on two sets of geo-referenced data.

Annalisa Appice, Michelangelo Ceci, Carlo Malgieri, Donato Malerba
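The underlying notion of emerging pattern, as defined in the attribute-value setting this work generalizes, can be sketched as follows (an illustrative toy with invented data, not Mr-EP itself): a pattern emerges toward a class when its support grows by at least a chosen factor when moving from the contrasting class to that class.

```python
def support(pattern, rows):
    """Fraction of rows (each a set of items) containing every item of `pattern`."""
    hits = sum(1 for r in rows if pattern <= r)
    return hits / len(rows)

def is_emerging(pattern, rows_from, rows_to, rho=2.0):
    """`pattern` is an emerging pattern for `rows_to` if its support grows
    by at least the factor `rho` when moving from `rows_from` to `rows_to`."""
    s_from = support(pattern, rows_from)
    s_to = support(pattern, rows_to)
    if s_from == 0:
        return s_to > 0            # infinite growth rate ("jumping" EP)
    return s_to / s_from >= rho

class_a = [{'x', 'y'}, {'x'}, {'y'}, {'z'}]
class_b = [{'x', 'y'}, {'x', 'y', 'z'}, {'x', 'y'}, {'y'}]
print(is_emerging({'x', 'y'}, class_a, class_b))  # True: support 0.25 -> 0.75
print(is_emerging({'z'}, class_a, class_b))       # False: support unchanged
```

Mr-EP's contribution is precisely that the items of a pattern need not live in one table: the relational patterns it mines span joins across the database.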
Advanced Tree-Based Kernels for Protein Classification

One of the aims of modern Bioinformatics is to discover the molecular mechanisms that rule the protein operation. This would allow us to understand the complex processes involved in living systems and possibly correct dysfunctions. The first step in this direction is the identification of the functional sites of proteins.

In this paper, we propose new kernels for automatic protein active site classification. In particular, we devise innovative attribute-value and tree substructure representations to model biological and spatial information of proteins in Support Vector Machines. We experimented with such models on the Protein Data Bank, adequately pre-processed to make the active site information explicit. Our results show that structural kernels used in combination with polynomial kernels can be effectively applied to discriminate an active site from other regions of a protein. This finding is very important since it shows, for the first time, a successful identification of catalytic sites for a very large family of proteins belonging to a broad class of enzymes.

Elisa Cilia, Alessandro Moschitti
A Genetic Approach to the Automatic Generation of Fuzzy Control Systems from Numerical Controllers

Control systems are small components that control the behavior of larger systems. In recent years, sophisticated controllers have been widely used in the hardware/software embedded systems contained in a growing number of everyday products and appliances. Therefore, the problem of the automatic synthesis of controllers is extremely important. To this aim, several techniques have been applied, like cell-to-cell mapping, dynamic programming and, more recently, model checking. The controllers generated using these techniques are typically numerical controllers that, however, often have a huge size and insufficient robustness. In this paper we present an automatic iterative process, based on genetic algorithms, that can be used to compress the huge amount of information contained in such numerical controllers into smaller and more robust fuzzy control systems.

Giuseppe Della Penna, Francesca Fallucchi, Benedetto Intrigila, Daniele Magazzeni
Trip Around the HMPerceptron Algorithm: Empirical Findings and Theoretical Tenets

In a recent work we introduced CarpeDiem, a novel algorithm for the fast evaluation of Supervised Sequential Learning (SSL) classifiers. In this paper we point out some interesting, unexpected aspects of the learning behavior of the HMPerceptron algorithm that affect CarpeDiem performance. This observation is the starting point of an investigation into the internal working of the HMPerceptron, which unveils crucial details of its learning strategy. The understanding of these details augments the comprehension of the algorithm while suggesting further enhancements.

Roberto Esposito, Daniele P. Radicioni
Instance-Based Query Answering with Semantic Knowledge Bases

A procedure founded in instance-based learning is presented, for performing a form of analogical reasoning on knowledge bases expressed in a wide range of ontology languages. The procedure exploits a novel semi-distance measure for individuals, based on their semantics w.r.t. a number of dimensions corresponding to a committee of features represented by concept descriptions. The procedure can answer class-membership queries by analogy, on the grounds of the classification of a number of training instances (the nearest ones w.r.t. the semi-distance measure). In particular, it may also predict assertions that are not logically entailed by the knowledge base. In the experimentation, where we compare the procedure to a logical reasoner, we show that it can be quite accurate and can augment the scope of its applicability, outperforming previous prototypes that adopted other semantic measures.

Nicola Fanizzi, Claudia d’Amato, Floriana Esposito
A Hierarchical Clustering Procedure for Semantically Annotated Resources

A clustering method is presented which can be applied to relational knowledge bases. It can be used to discover interesting groupings of resources through their (semantic) annotations expressed in the standard languages employed for modeling concepts in the Semantic Web. The method exploits a simple (yet effective and language-independent) semi-distance measure for individuals, based on the resource semantics w.r.t. a number of dimensions corresponding to a committee of features represented by a group of concept descriptions (discriminating features). The algorithm is a fusion of the classic Bisecting k-Means with approaches based on medoids, since it is intended to be applied to relational representations. We discuss its complexity and the potential applications to a variety of important tasks.

Nicola Fanizzi, Claudia d’Amato, Floriana Esposito
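The flavor of such a bisecting, medoid-based procedure can be sketched as follows (an illustrative toy on numeric data with an exhaustive medoid search, not the authors' implementation; any semi-distance over individuals could be plugged in as `dist`):

```python
from itertools import combinations

def bisecting_medoids(points, dist, k):
    """Bisecting clustering with medoids: repeatedly split the largest
    cluster in two, choosing as medoids the pair of members that
    minimizes the total distance of members to their nearest medoid.
    Assumes every cluster to be split has at least two members."""
    clusters = [list(points)]
    while len(clusters) < k:
        clusters.sort(key=len)
        target = clusters.pop()              # largest cluster
        best = None
        for m1, m2 in combinations(target, 2):
            cost = sum(min(dist(p, m1), dist(p, m2)) for p in target)
            if best is None or cost < best[0]:
                best = (cost, m1, m2)
        _, m1, m2 = best
        c1 = [p for p in target if dist(p, m1) <= dist(p, m2)]
        c2 = [p for p in target if dist(p, m1) > dist(p, m2)]
        clusters += [c1, c2]
    return clusters

pts = [0.0, 0.1, 0.2, 5.0, 5.1, 9.0]
out = bisecting_medoids(pts, lambda a, b: abs(a - b), 3)
print(sorted(sorted(c) for c in out))  # [[0.0, 0.1, 0.2], [5.0, 5.1], [9.0]]
```

Using medoids rather than means is what makes the scheme applicable to relational representations, where a distance between individuals exists but an "average individual" does not.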
Similarity-Guided Clause Generalization

Few works are available in the literature to define similarity criteria between First-Order Logic formulæ, where the presence of relations causes various portions of one description to be possibly mapped in different ways onto another description, which poses serious computational problems. Hence the need arises for a set of general criteria able to support the comparison between formulæ. This could have many applications; this paper tackles the case of two descriptions (e.g., a definition and an observation) to be generalized, where the similarity criteria could help in focussing on the subparts of the descriptions that are more similar and hence more likely to correspond to each other, based only on their syntactic structure. Experiments on real-world datasets prove the effectiveness of the proposal, and the efficiency of the corresponding implementation in a generalization procedure.

S. Ferilli, T. M. A. Basile, N. Di Mauro, M. Biba, F. Esposito
Structured Hidden Markov Model: A General Framework for Modeling Complex Sequences

Structured Hidden Markov Model (S-HMM) is a variant of the Hierarchical Hidden Markov Model that shows interesting capabilities of extracting knowledge from symbolic sequences. In fact, the S-HMM structure provides an abstraction mechanism allowing a high level symbolic description of the knowledge embedded in an S-HMM to be easily obtained. The paper provides a theoretical analysis of the complexity of the matching and training algorithms on S-HMMs. More specifically, it is shown that the Baum-Welch algorithm benefits from the so-called locality property, which allows specific components to be modified and retrained without doing so for the full model. The problem of modeling duration and of extracting (embedding) readable knowledge from (into) an S-HMM is also discussed.

Ugo Galassi, Attilio Giordana, Lorenza Saitta
Nearest Local Hyperplane Rules for Pattern Classification

Predicting the class of an observation from its nearest neighbors is one of the earliest approaches in pattern recognition. In addition to their simplicity, nearest neighbor rules have appealing theoretical properties, e.g. the asymptotic error probability of the plain 1-nearest-neighbor (NN) rule is at most twice the Bayes bound, which means zero asymptotic risk in the separable case. But given only a finite number of training examples, NN classifiers are often outperformed in practice. A possible modification of the NN rule to handle separable problems better is the nearest local hyperplane (NLH) approach. In this paper we introduce a new way of NLH classification that has two advantages over the original NLH algorithm. First, our method preserves the zero asymptotic risk property of NN classifiers in the separable case. Second, it usually provides better finite sample performance.

Gábor Takács, Béla Pataki
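The general NLH idea can be sketched in a toy form, using the 1-D affine hull (a line) through the two nearest same-class neighbors as the "local hyperplane" (an illustration of the general approach only, not the authors' algorithm; it assumes at least two distinct training points per class):

```python
def nlh_classify(query, train, k=2):
    """Toy nearest-local-hyperplane rule: for each class, take the k = 2
    nearest training points and measure the distance from the query to
    the line (1-D affine hull) through them; predict the closest class.
    `train` is a list of (point, label) pairs with points as tuples."""
    def sub(u, v): return [a - b for a, b in zip(u, v)]
    def dot(u, v): return sum(a * b for a, b in zip(u, v))

    best = None
    for c in {label for _, label in train}:
        pts = sorted((p for p, lbl in train if lbl == c),
                     key=lambda p: dot(sub(query, p), sub(query, p)))[:k]
        a, b = pts[0], pts[1]
        d = sub(b, a)
        t = dot(sub(query, a), d) / dot(d, d)     # orthogonal projection onto the line
        proj = [ai + t * di for ai, di in zip(a, d)]
        dist2 = dot(sub(query, proj), sub(query, proj))
        if best is None or dist2 < best[0]:
            best = (dist2, c)
    return best[1]

train = [((0.0, 0.0), 'A'), ((1.0, 0.0), 'A'),
         ((0.0, 2.0), 'B'), ((1.0, 2.0), 'B')]
print(nlh_classify((0.5, 0.5), train))  # 'A': closer to the class-A line y = 0
```

The point of the local hyperplane is that it "fills in" the gaps between sparse training points of a class, which is why NLH can outperform plain NN on separable problems with few samples.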

Natural Language Processing

The JIGSAW Algorithm for Word Sense Disambiguation and Semantic Indexing of Documents

Word Sense Disambiguation (WSD) is traditionally considered an AI-hard problem. In fact, a breakthrough in this field would have a significant impact on many relevant fields, such as information retrieval and information extraction. This paper describes JIGSAW, a knowledge-based WSD algorithm that attempts to disambiguate all words in a text by exploiting WordNet senses. The main assumption is that a Part-Of-Speech (POS)-dependent strategy for WSD can turn out to be more effective than a single strategy. Semantics provided by WSD gives an added value to applications centred on humans as users. Two empirical evaluations are described in the paper. First, we evaluated the accuracy of JIGSAW on Task 1 of the SEMEVAL-1 competition, which measures the effectiveness of a WSD algorithm in an Information Retrieval System. For the second evaluation, we used semantically indexed documents obtained through a WSD process to train a naïve Bayes learner that infers “semantic” sense-based user profiles as binary text classifiers. The goal of the second empirical evaluation has been to measure the accuracy of the user profiles in selecting relevant documents to be recommended within a document collection.

P. Basile, M. Degemmis, A. L. Gentile, P. Lops, G. Semeraro
Data-Driven Dialogue for Interactive Question Answering

In this paper, a light framework for dialogue-based interactive question answering is presented. The resulting architecture is called REQUIRE (Robust Empirical QUestion answering for Intelligent Retrieval), and represents a flexible and adaptive platform for domain-specific dialogue. REQUIRE is characterized as a domain-driven dialogue system, whose aim is to support the specific tasks evoked by interactive question answering scenarios. Among its benefits we mention its modularity and portability across different domains, its robustness through adaptive models of speech act recognition and planning, and its adherence to knowledge representation standards. The framework will be exemplified through its application within a sexual health information service tailored to young people.

Roberto Basili, Diego De Cao, Cristina Giannone, Paolo Marocco
GlossExtractor: A Web Application to Automatically Create a Domain Glossary

We describe a web application, GlossExtractor, that receives as input the output of a terminology extraction web application, TermExtractor, or a user-provided terminology, and then searches several repositories (on-line glossaries, web documents, user-specified web pages) for sentences that are candidate definitions for each of the input terms. Candidate definitions are then filtered using statistical indicators and machine-learned regular patterns. Finally, the user can inspect the acquired definitions and perform an individual or group validation. The validated glossary can then be downloaded in one of several formats.

Roberto Navigli, Paola Velardi
A Tree Kernel-Based Shallow Semantic Parser for Thematic Role Extraction

We present a simple, two-step supervised strategy for the identification and classification of thematic roles in natural language texts. We employ no external source of information but automatic parse trees of the input sentences. We use a few attribute-value features and tree kernel functions applied to specialized structured features. Different configurations of our thematic role labeling system took part in two tasks of the SemEval 2007 evaluation campaign, namely the closed tasks on semantic role labeling for the English and the Arabic languages. In this paper we present and discuss the system configuration that participated in the English semantic role labeling task and present new results obtained after the end of the evaluation campaign.

Daniele Pighin, Alessandro Moschitti
Inferring Coreferences Among Person Names in a Large Corpus of News Collections

We present a probabilistic framework for inferring coreference relations among person names in a news collection. The approach does not assume any prior knowledge about persons (e.g. an ontology) mentioned in the collection and requires basic linguistic processing (named entity recognition) and resources (a dictionary of person names). The system parameters have been estimated on a 5K corpus of Italian news documents. Evaluation, over a sample of four days of news documents, shows that the error rate of the system (1.4%) is well below a baseline (5.4%) for the task. Finally, we discuss alternative approaches for evaluation.

Octavian Popescu, Bernardo Magnini
Dependency Tree Semantics: Branching Quantification in Underspecification

Dependency Tree Semantics (DTS) is a formalism that allows quantifier scope ambiguities to be underspecified. This paper provides an introduction to DTS and highlights its linguistic and computational advantages. From a linguistic point of view, DTS is able to represent the so-called Branching Quantifier readings, i.e. those readings in which two or more quantifiers have to be evaluated in parallel. From a computational point of view, DTS features an easy syntax–semantics interface w.r.t. a Dependency Grammar and allows for incremental disambiguation.

Livio Robaldo

Information Retrieval and Extraction

User Modelling for Personalized Question Answering

In this paper, we address the problem of personalization in question answering (QA). We describe the personalization component of YourQA, our web-based QA system, which creates individual models of users based on their reading level and interests.

First, we explain how user models are dynamically created, saved and updated to filter and re-rank the answers. Then, we focus on how the user's interests are used in YourQA. Finally, we introduce a methodology for user-centered evaluation of personalized QA. Our results show a significant improvement in users' satisfaction when their profiles are used to personalize answers.

Silvia Quarteroni, Suresh Manandhar
A Comparison of Genetic Algorithms for Optimizing Linguistically Informed IR in Question Answering

In this paper we compare four selection strategies in the evolutionary optimization of information retrieval (IR) in a question answering setting. The IR index has been augmented with linguistic features to improve the retrieval performance of potential answer passages using queries generated from natural language questions. We use a genetic algorithm to optimize the selection of features and their weights when querying the IR database. Our experiments show that the genetic algorithm applied is robust to changes in the strategy used for selecting individuals. All experiments yield query settings with improved retrieval performance when applied to unseen data. However, we observe significant runtime differences among the various selection approaches, which should be considered when choosing one of them.
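As background for readers unfamiliar with selection strategies in genetic algorithms, two standard schemes (tournament and roulette-wheel selection) can be sketched as follows. This is a generic illustration, not the authors' implementation; the function names and toy fitness function are hypothetical.

```python
import random

def tournament_select(population, fitness, k=3):
    """Pick the fittest of k randomly drawn individuals."""
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)

def roulette_select(population, fitness):
    """Pick an individual with probability proportional to its fitness."""
    weights = [fitness(ind) for ind in population]
    return random.choices(population, weights=weights, k=1)[0]

if __name__ == "__main__":
    random.seed(0)
    pop = list(range(10))        # toy individuals 0..9
    fit = lambda x: x + 1        # toy fitness: higher is fitter
    picks = [tournament_select(pop, fit) for _ in range(500)]
    # Both schemes are biased towards fitter individuals, so the mean
    # of the selected individuals exceeds the population mean (4.5).
    print(sum(picks) / len(picks) > 4.5)
```

Which scheme converges faster, and at what runtime cost, is exactly the kind of trade-off the paper evaluates empirically.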

Jörg Tiedemann
A Variant of N-Gram Based Language Classification

Rapid classification of documents is of high importance in many multilingual settings (such as international institutions or Internet search engines). This has been, for years, a well-known problem, addressed by different techniques with excellent results. We address this problem with a simple n-gram-based technique, a variation on techniques of this family. Our n-gram-based classification is very robust and successful, even for 20-fold classification and even for short text strings. We give a detailed study for different lengths of strings and sizes of n-grams, and explore which classification parameters give the best performance. There is no requirement for vocabularies, only for a few training documents. As a main corpus, we used an EU set of documents in 20 languages. Experimental comparison shows that our approach gives better results than four other popular approaches.
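The n-gram family of techniques the abstract refers to can be illustrated with a minimal sketch of the classic Cavnar–Trenkle out-of-place profile distance; the exact variant studied in the paper differs, and the tiny training samples below are hypothetical.

```python
from collections import Counter

def ngram_profile(text, n=3, top=300):
    """Ranked list of the most frequent character n-grams of a text."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return [g for g, _ in grams.most_common(top)]

def out_of_place(lang_profile, doc_profile):
    """Sum of rank differences; unseen n-grams get the maximum penalty."""
    rank = {g: i for i, g in enumerate(lang_profile)}
    return sum(rank.get(g, len(lang_profile)) for g in doc_profile)

def classify(text, training):
    """Return the language whose profile is closest to the text's profile."""
    doc = ngram_profile(text)
    return min(training, key=lambda lang: out_of_place(training[lang], doc))

# Toy training data (real systems use a few full documents per language).
training = {lang: ngram_profile(sample) for lang, sample in {
    "en": "the quick brown fox jumps over the lazy dog and then the dog",
    "it": "la volpe veloce salta sopra il cane pigro e poi il cane dorme",
}.items()}
print(classify("the dog sleeps over there", training))  # → en
```

No vocabulary is needed, only training text, which matches the property highlighted in the abstract.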

Andrija Tomović, Predrag Janičić

Planning and Scheduling

SAT-Based Planning with Minimal-#actions Plans and “soft” Goals

Planning as Satisfiability (SAT) is the best approach for optimally solving classical planning problems. The SAT-based planner satplan was the winner in the deterministic track for optimal planners in the 4th International Planning Competition (IPC-4) and the co-winner in the last, 5th IPC (together with another SAT-based planner). Given a planning problem Π, satplan works by (i) generating a SAT formula Π_n with a fixed "makespan" n, and (ii) checking Π_n for satisfiability. The algorithm stops if Π_n is satisfiable, and thus a plan has been found; otherwise n is increased.

Despite its efficiency, and the optimality of the makespan, satplan has significant deficiencies related in particular to "plan quality", e.g., the number of actions in the returned plan, and to the possibility of expressing and reasoning on "soft" goals.

In this paper, we present satplan≺, a modification of satplan which makes a significant step towards the elimination of satplan's limitations. Given the optimal makespan, satplan≺ returns plans with a minimal number of actions and a maximal number of satisfied "soft" goals, with respect to both cardinality and subset inclusion. We selected several benchmarks from different domains across all the IPCs: on these benchmarks we show that the plan quality returned by satplan≺ is often significantly higher than that returned by satplan.

Quite surprisingly, this is often achieved without sacrificing efficiency, while obtaining results that are competitive with the winning system of the "SimplePreferences" domain in the satisficing track of the last IPC.
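The makespan-iteration scheme described in the abstract can be sketched generically. Here `encode` and `solve` are stand-ins for the planner's real SAT encoding and SAT solver (hypothetical names, not satplan's actual API); the toy closures at the bottom simulate a problem first solvable at makespan 4.

```python
def satplan_loop(encode, solve, max_makespan=50):
    """Iteratively deepen the makespan n until the SAT encoding of the
    planning problem becomes satisfiable:
      step (i)  : build the formula Pi_n for makespan n,
      step (ii) : check Pi_n for satisfiability.
    Returns the first (hence optimal) makespan and its model."""
    for n in range(1, max_makespan + 1):
        formula = encode(n)       # SAT formula Pi_n with makespan n
        model = solve(formula)    # None when Pi_n is unsatisfiable
        if model is not None:
            return n, model
    return None                   # no plan within the makespan bound

# Toy illustration: the "problem" becomes satisfiable at n = 4.
encode = lambda n: n
solve = lambda f: ["plan"] if f >= 4 else None
print(satplan_loop(encode, solve))  # → (4, ['plan'])
```

satplan≺ keeps this outer loop, then optimizes action count and soft-goal satisfaction at the optimal makespan found.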

Enrico Giunchiglia, Marco Maratea
Plan Diagnosis and Agent Diagnosis in Multi-agent Systems

The paper discusses a distributed approach for monitoring and diagnosing the execution of a plan where concurrent actions are performed by a team of cooperating agents.

The paper extends the notion of plan diagnosis (introduced by Roos et al. for the execution of a multi-agent plan) with the notion of agent diagnosis. While plan diagnosis is able to capture the distinction between primary and secondary failures, agent diagnosis makes apparent the actual health status of the agents.

The paper presents a mechanism of failure propagation which captures the interplay between agent diagnosis and plan diagnosis; this mechanism plays a critical role in understanding to what extent a fault affecting the functionalities of an agent also affects the global plan. A relational formalism is adopted for modeling both the nominal and the abnormal execution of the actions.

Roberto Micalizio, Pietro Torasso
Boosting the Performance of Iterative Flattening Search

Iterative Flattening search is a local search schema introduced for solving scheduling problems with a makespan minimization objective. It is an iterative two-step procedure, where on each cycle of the search a subset of ordering decisions on the critical path in the current solution are randomly retracted and then recomputed to produce a new solution. Since its introduction, other variations have been explored and shown to yield substantial performance improvements over the original formulation. In this spirit, we propose and experimentally evaluate further improvements to this basic local search schema. Specifically, we examine the utility of operating with a more flexible solution representation, and of integrating iterative flattening search with a complementary tabu search procedure. We evaluate these extensions on large benchmark instances of the Multi-Capacity Job-Shop Scheduling Problem (mcjssp) which have been used in previous studies of iterative flattening search procedures.

Angelo Oddi, Nicola Policella, Amedeo Cesta, Stephen F. Smith
Real-Time Trajectory Generation for Mobile Robots

This paper presents a computationally efficient trajectory generation algorithm for omni-directional mobile robots. The method uses the Voronoi diagram to find a rough path that keeps away from obstacles, and then smooths this path with a novel use of Bezier curves. It determines the velocity magnitude of the robot along the curved path so as to meet optimality conditions and dynamic constraints, using the Newton method. The proposed algorithm has been implemented on real robots, and experimental results in different environments are presented.
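The Bezier smoothing step the abstract mentions rests on evaluating a Bezier curve over a few control points; a minimal sketch using de Casteljau's algorithm is shown below. It is a generic illustration of Bezier evaluation, not the paper's specific smoothing scheme, and the corner coordinates are hypothetical.

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] using
    de Casteljau's algorithm (repeated linear interpolation)."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Smooth a sharp corner of a piecewise-linear path with a quadratic Bezier
# whose middle control point is the corner itself.
corner = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
curve = [bezier_point(corner, i / 10) for i in range(11)]
print(curve[0], curve[-1])  # endpoints coincide with the original path
```

The resulting curve stays inside the convex hull of the control points, which is why Bezier segments are convenient for keeping a smoothed path close to the obstacle-free corridor found via the Voronoi diagram.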

Alireza Sahraei, Mohammad Taghi Manzuri, Mohammad Reza Razvan, Masoud Tajfard, Saman Khoshbakht

AI and Applications

Curricula Modeling and Checking

In this work, we present a constraint-based representation for specifying the goals of "course design", which we call a curricula model, and introduce a graphical language, grounded in Linear Time Logic, to design curricula models which include knowledge of proficiency levels. Based on this representation, we show how model checking techniques can be used to verify that a curriculum satisfies the user's learning goal, that a curriculum is compliant with a curricula model, and that competence gaps are avoided.

Matteo Baldoni, Cristina Baroglio, Elisa Marengo
Case–Based Support to Small–Medium Enterprises: The Symphony Project

This paper presents Symphony, an IMS (Intelligent Manufacturing System) developed in the context of an interregional project supported by the European Commission. Symphony was a three-year project that aimed at developing an integrated set of tools for the management of enterprises, in order to facilitate the continuous creation, exploration and exploitation of business opportunities through strategic networking. In particular, Symphony is devoted to supporting human resource managers of Small–Medium Enterprises in their decision making about searching for newcomers and/or assigning people to jobs. On this topic, the paper focuses on SymMemory, a case-based module of the main system that has been developed to identify which features are necessary to evaluate a person, aggregate them into a suitable case structure representing a person or job profile, and compare profiles according to a specific similarity algorithm.

Stefania Bandini, Paolo Mereghetti, Esther Merino, Fabio Sartori
Synthesizing Proactive Assistance with Heterogeneous Agents

This paper describes the outcome of a project aimed at creating an integrated environment endowed with heterogeneous software and robotic agents to actively assist an elderly person at home. Specifically, a proactive environment for continuous daily activity monitoring has been created, in which an autonomous robot acts as the main interactor with the person. This paper describes how the synergy of different technologies guarantees an overall intelligent behavior capable of personalized and contextualized interaction with the assisted person.

Amedeo Cesta, Gabriella Cortellessa, Federico Pecora, Riccardo Rasconi
Robust Color-Based Skin Detection for an Interactive Robot

Detection of human skin in an arbitrary image is generally hard. Most color-based skin detection algorithms are based on a static color model of the skin. However, a static model cannot cope with the huge variability of scenes, illuminants and skin types. This is not suitable for an interacting robot that has to find people in different rooms with its camera, without any a priori knowledge about the environment or the lighting.

In this paper we present a new color-based algorithm called VR filter. The core of the algorithm is a statistical model of pixel colors that generates a dynamic boundary for the skin pixels in the color space. The motivation behind the development of the algorithm was to correctly classify skin pixels in low-definition images with moving objects, such as the images grabbed by the omnidirectional camera mounted on the robot. However, our algorithm was designed to correctly recognize skin pixels with any type of camera and without exploiting any information about the camera.

In the paper we present the advantages and the limitations of our algorithm, and compare its performance with the principal existing skin detection algorithms on standard perspective images.

Alvise Lastra, Alberto Pretto, Stefano Tonello, Emanuele Menegatti
Building Quality-Based Views of the Web

Due to the fast growth of the information available on the Web, the retrieval of relevant content is increasingly hard. The complexity of the task concerns both the semantics of contents and the filtering of quality-based sources. A recent strategy for addressing the overwhelming amount of information is to focus the search on a snapshot of the Internet, namely a Web view. In this paper, we present a system supporting the creation of a quality-based view of the Web. We give a brief overview of the software and of its functional architecture. More emphasis is on the role of AI in supporting the organization of Web resources in a hierarchical structure of categories. We survey our recent works on document classifiers dealing with a twofold challenge. On one side, the task is to recommend classifications of Web resources when the taxonomy does not provide examples of classification, which usually happens when taxonomies are built from scratch. On the other side, even when taxonomies are populated, classifiers are trained with few examples, since usually when a category reaches a certain amount of Web resources the organization policy suggests a refinement of the taxonomy. The paper includes a short description of a couple of case studies in which the system has been deployed for real-world applications.

Enrico Triolo, Nicola Polettini, Diego Sona, Paolo Avesani

Special Track: AI and Robotics

Reinforcement Learning in Complex Environments Through Multiple Adaptive Partitions

The application of Reinforcement Learning (RL) algorithms to learn tasks for robots is often limited by the large dimension of the state space, which may make their application on a tabular model prohibitive. In this paper, we describe LEAP (Learning Entities Adaptive Partitioning), a model-free learning algorithm that uses overlapping partitions which are dynamically modified to learn near-optimal policies with a small number of parameters. Starting from a coarse aggregation of the state space, LEAP generates refined partitions whenever it detects an incoherence between the current action values and the actual rewards from the environment. Since in highly stochastic problems the adaptive process can lead to over-refinement, we introduce a mechanism that prunes the macrostates without affecting the learned policy. Through refinement and pruning, LEAP builds a multi-resolution state representation specialized only where it is actually needed. In the last section, we present an experimental evaluation on a grid world and a complex simulated robotic soccer task.

Andrea Bonarini, Alessandro Lazaric, Marcello Restelli
Uses of Contextual Knowledge in Mobile Robots

In this paper, we analyze work on mobile robotics with the goal of highlighting the uses of contextual knowledge aiming at a flexible and robust performance of the system. In particular, we analyze different robotic tasks, ranging from robot behavior to perception, and then propose to characterize “contextualization” as a design pattern. As a result, we argue that many different tasks indeed can exploit contextual information and, therefore, a single explicit representation of knowledge about context may lead to significant advantages both in the design and in the performance of mobile robots.

D. Calisi, A. Farinelli, G. Grisetti, L. Iocchi, D. Nardi, S. Pellegrini, D. Tipaldi, V. A. Ziparo
Natural Landmark Detection for Visually-Guided Robot Navigation

The main difficulty in attaining fully autonomous robot navigation outdoors is the fast detection of reliable visual references, and their subsequent characterization as landmarks for immediate and unambiguous recognition. Aimed at speed, our strategy has been to track salient regions along image streams by performing on-line pixel sampling only. Persistent regions are considered good candidates for landmarks, which are then characterized by a set of subregions with given color and normalized shape. They are stored in a database for subsequent recognition during the navigation process. Some experimental results showing landmark-based navigation of the legged robot Lauron III in an outdoor setting are provided.

Enric Celaya, Jose-Luis Albarral, Pablo Jiménez, Carme Torras
Real-Time Visual Grasp Synthesis Using Genetic Algorithms and Neural Networks

This paper addresses the problem of automatic grasp synthesis of unknown planar objects. In other words, we must compute points on the object's boundary to be reached by the robotic fingers such that the resulting grasp, among infinite possibilities, optimizes some given criteria. Objects to be grasped are represented as superellipses, a family of deformable 2D parametric functions. They can model a large variety of shapes occurring often in practice by changing a small number of parameters. The space of possible grasp configurations is analyzed using genetic algorithms. Several quality criteria from the existing literature are considered, together with kinematical and mechanical considerations. However, genetic algorithms are not suitable for applications where time is a critical issue. In order to achieve real-time performance, neural networks are used: a huge training set is collected off-line using genetic algorithms, and a feedforward network is trained on these values. We demonstrate the usefulness of this approach in the process of grasp synthesis, and show the results achieved on an anthropomorphic arm/hand robot.

Antonio Chella, Haris Dindo, Francesco Matraxia, Roberto Pirrone
Attention-Based Environment Perception in Autonomous Robotics

This paper describes a robotic architecture that uses visual attention mechanisms for autonomous navigation in unknown indoor environments. A foveation mechanism based on classical bottom-up gaze shifts allows the robot to autonomously select landmarks, defined as salient points in the camera images. Landmarks are memorized in a behavioral fashion, coupling sensing and acting to achieve a view- and scale-independent representation. Selected landmarks are stored in a topological map; during navigation, a top-down mechanism controls the attention system to achieve robot localization. Experiments and results show that our system is robust to noise and odometric errors, while being adaptable to different environments and acting conditions.

Antonio Chella, Irene Macaluso, Lorenzo Riano
A 3D Virtual Model of the Knee Driven by EMG Signals

A 3D virtual model of the human lower extremity has been developed for the purpose of examining how the neuromuscular system controls the muscles and generates the desired movement. Our virtual knee currently incorporates the major muscles spanning the knee joint and is used to estimate the knee joint moment. Besides that, we developed a graphical interface that allows the user to visualize the skeletal geometry and the movements imparted to it. The purpose of this paper is to describe the design objectives and the implementation of our EMG-driven virtual knee. Finally, we compared the virtual knee's behavior with the torque exerted by the test subject in order to obtain a qualitative validation of our model. In the near future our aim is to develop a real-time EMG-driven exoskeleton for knee rehabilitation.

Massimo Sartori, Gaetano Chemello, Enrico Pagello

Special Track: AI and Expressive Media

‘O Francesca, ma che sei grulla?’ Emotions and Irony in Persuasion Dialogues

In this paper we investigate the interaction between emotional and non-emotional aspects of persuasion dialogues, from the viewpoint of both the system (the Persuader), when reasoning on the persuasion attempt, and the user (the Receiver), when reacting to it. We are working on an Embodied Conversational Agent (ECA) which applies natural argumentation techniques to persuade users to improve their behaviour in the healthy eating domain. The ECA observes the user’s attitude during the dialogue, so as to select an appropriate persuasion strategy or to respond intelligently to user’s reactions to suggestions received. We grounded our work on the analysis of two corpora: a corpus of ‘natural’ persuasion examples and a corpus of user’s reactions to persuasion attempts.

Irene Mazzotta, Nicole Novielli, Vincenzo Silvestri, Fiorella de Rosis
Music Expression Understanding Based on a Joint Semantic Space

A paradigm for music expression understanding based on a joint semantic space, described by both affective and sensorial adjectives, is presented. Machine learning techniques were employed to select and validate relevant low level features, and an interpretation of the clustered organization based on action and physical analogy is proposed.

Luca Mion, Giovanni De Poli
Towards Automated Game Design

Game generation systems perform automated, intelligent design of games (i.e. videogames, boardgames), reasoning about both the abstract rule system of the game and the visual realization of these rules. Although, as an instance of the problem of creative design, game generation shares some common research themes with other creative AI systems such as story and art generators, game generation extends such work by having to reason about dynamic, playable artifacts. Like AI work on creativity in other domains, work on game generation sheds light on the human game design process, offering opportunities to make explicit the tacit knowledge involved in game design and to test game design theories. Finally, game generation enables new game genres which are radically customized to specific players or situations; notable examples are cell phone games customized for particular users and newsgames providing commentary on current events. We describe an approach to formalizing game mechanics and generating games using those mechanics, using WordNet and ConceptNet to assist in performing common-sense reasoning about game verbs and nouns. Finally, we demonstrate and describe in detail a prototype that designs micro-games in the style of Nintendo's WarioWare series.

Mark J. Nelson, Michael Mateas
Tonal Harmony Analysis: A Supervised Sequential Learning Approach

We have recently presented CarpeDiem, an algorithm that can be used for speeding up the evaluation of Supervised Sequential Learning (SSL) classifiers. CarpeDiem provides an impressive time performance gain over the state-of-the-art Viterbi algorithm when applied to the tonal harmony analysis task. Along with interesting computational features, the algorithm reveals some properties that are of interest to Cognitive Science and Computer Music. To explore whether and to what extent the implemented system is suitable for cognitive modeling, we first elaborate on its design principles, and then assess the quality of the analyses produced. A threefold experimentation reviews the learned weights, the classification errors, and the search space in comparison to the actual problem space; data about these points are reported and discussed.
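For readers unfamiliar with the baseline that CarpeDiem is compared against, standard Viterbi decoding can be sketched as follows. This is the textbook dynamic-programming algorithm, not CarpeDiem itself, and the two-state toy model below is hypothetical.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence,
    computed by standard dynamic-programming Viterbi decoding."""
    # Column 0: start probability times emission of the first observation.
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            # Best predecessor for state s at this position.
            prob, path = max(
                (V[-2][prev][0] * trans_p[prev][s] * emit_p[s][o],
                 V[-2][prev][1] + [s])
                for prev in states)
            V[-1][s] = (prob, path)
    return max(V[-1].values())[1]

# Toy two-chord model (hypothetical probabilities, in the spirit of
# labeling a note sequence with harmonic states).
states = ("C", "G")
start = {"C": 0.6, "G": 0.4}
trans = {"C": {"C": 0.7, "G": 0.3}, "G": {"C": 0.4, "G": 0.6}}
emit = {"C": {"c": 0.8, "g": 0.2}, "G": {"c": 0.3, "g": 0.7}}
print(viterbi(("c", "g", "g"), states, start, trans, emit))  # → ['C', 'G', 'G']
```

Viterbi visits every state at every position; CarpeDiem's reported speed-up comes from avoiding part of that work while returning the same optimal sequence.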

Daniele P. Radicioni, Roberto Esposito
Words Not Cast in Stone

An advertising message induces in the recipient a positive (or negative) attitude toward the advertised subject, for example through the evocation of an appropriate emotion. This paper is about the use of text processing techniques for proposing solutions to advertising professionals, opening up the way to a full automation of the whole process. The system operates in two steps: (i) the creative variation of familiar expressions, taking into account the affective content of the produced text; (ii) the automatic animation (semantically consistent with the affective text content) of the resulting headline, using kinetic typography techniques.

Carlo Strapparava, Alessandro Valitutti, Oliviero Stock

Special Track: Intelligent Access to Multimedia Information

Annotations as a Tool for Disclosing Hidden Relationships Between Illuminated Manuscripts

Image digital archives of illuminated manuscripts can become a useful tool for researchers in different disciplines. To this aim, we propose to provide them with tools for annotating images, so as to disclose hidden relationships between illustrations belonging to different works. Relationships can be modeled as typed links, which induce a hypertext over the archive. In this paper we present a formal model for annotations, which is the basis for building methods that automatically process existing relationships among link types and exploit the properties of the graph which models the hypertext.

Maristella Agosti, Nicola Ferro, Nicola Orio
Mining Web Data for Image Semantic Annotation

In this paper, an unsupervised image classification technique combining features from different media levels is proposed. In particular geometrical models of visual features are here integrated with textual descriptions derived through Information Extraction processes from Web pages. While the higher expressivity of the combined individual descriptions increases the complexity of the adopted clustering algorithms, methods for dimensionality reduction (i.e. LSA) are applied effectively. The evaluation on an image classification task confirms that the proposed Web mining model outperforms other methods acting on the individual levels for cost-effective annotation.

Roberto Basili, Riccardo Petitti, Dario Saracino
Content Aware Image Enhancement

We present our approach, integrating imaging and vision, for content-aware enhancement and processing of digital photographs. The overall quality of images is improved by a modular procedure automatically driven by the image class and content.

Gianluigi Ciocca, Claudio Cusano, Francesca Gasparini, Raimondo Schettini
Semantic Annotation of Complex Human Scenes for Multimedia Surveillance

A Multimedia Surveillance System (MSS) is considered for automatically retrieving semantic content from complex outdoor scenes, involving both human behavior and traffic domains. To characterize the dynamic information attached to detected objects, we consider a deterministic modeling of spatio-temporal features based on abstraction processes towards a fuzzy logic formalism. A situational analysis over the conceptualized information allows us not only to describe human actions within a scene, but also to suggest possible interpretations of the behaviors perceived, such as situations involving theft or the danger of being run over. Towards this end, the different levels of semantic knowledge implied throughout the process are also classified into a proposed taxonomy.

Carles Fernández, Pau Baiget, Xavier Roca, Jordi Gonzàlez
Synthesis of Hypermedia Using OWL and Jess

A hypermedia is a spatio-temporal hypertext, namely a collection of media connected by synchronization and linking relations. HyperJessSyn is a tool which uses logic programming and Semantic Web technologies to synthesize hypermedia according to descriptions of discourse structure and presentation layout. A rule-based system in Jess applies inference rules to interpret semantic and navigation relations among media instances from an OWL graph, and production rules to turn media content descriptions in XML/MPEG-7 format into an XMT-A/MPEG-4 hypermedia script via XSL transformations. The system has been used to produce a hyper-guide for virtually visiting a museum.

Alberto Machì, Antonino Lo Bue
NaviTexte, a Text Navigation Tool

In this paper, we describe NaviTexte, a software tool devoted to text navigation. First, we explain our conception of text navigation, which exploits linguistic information in texts to offer dynamic reading paths to a reader. Second, we describe a text representation specially defined to support our approach. Then we present a language for modeling navigation knowledge and its implementation framework. Finally, two experiments are presented: one aiming at teaching French to Danish students, and another proposing text navigation as an alternative to summarization based on sentence extraction.

Javier Couto, Jean-Luc Minel
TV Genre Classification Using Multimodal Information and Multilayer Perceptrons

Multimedia content annotation is a key issue in the current convergence of audiovisual entertainment and information media. In this context, automatic genre classification (AGC) provides a simple and effective solution for describing video contents in a structured and easily understandable way. In this paper a method for classifying the genre of broadcast TV programmes is presented. In our approach, we consider four groups of features, which include both low-level visual descriptors and higher-level semantic information. For each of these feature types we derive a characteristic vector and use it as input data for a multilayer perceptron (MLP). Then, we use a linear combination of the outputs of the four MLPs to perform genre classification of TV programmes. Experimental results on more than 100 hours of broadcast material showed the effectiveness of our approach, achieving a classification accuracy of ~92%.
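The final fusion step, a linear combination of per-genre classifier outputs, can be sketched generically. This is an illustration of late fusion with two classifiers and made-up scores, not the paper's four trained MLPs or its learned weights.

```python
def fuse_genre(scores_per_model, weights):
    """Late fusion: linearly combine per-genre scores produced by
    several classifiers and return the best-scoring genre."""
    genres = scores_per_model[0].keys()
    combined = {g: sum(w * s[g] for w, s in zip(weights, scores_per_model))
                for g in genres}
    return max(combined, key=combined.get)

# Hypothetical scores from a visual-feature model and a semantic model.
visual = {"news": 0.2, "sport": 0.7, "movie": 0.1}
semantic = {"news": 0.6, "sport": 0.3, "movie": 0.1}
print(fuse_genre([visual, semantic], [0.5, 0.5]))  # → sport
```

With equal weights, the genre with the highest averaged score wins; in the paper the combination covers four MLPs, one per feature group.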

Maurizio Montagnuolo, Alberto Messina

Posters

Hierarchical Text Categorization Through a Vertical Composition of Classifiers

In this paper we present a hierarchical approach to text categorization aimed at improving the performance of the corresponding tasks. The proposed approach is explicitly devoted to coping with the problem of the imbalance between relevant and non-relevant inputs. The technique has been implemented and tested by resorting to a multiagent system aimed at performing information retrieval tasks.

Andrea Addis, Giuliano Armano, Francesco Mascia, Eloisa Vargiu
Text Categorization in Non-linear Semantic Space

Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed by using a set of manually classified documents, i.e. a training collection. Term-based representations of documents have found widespread use in TC. However, one of the main shortcomings of such methods is that they largely disregard lexical semantics and, as a consequence, are not sufficiently robust with respect to variations in word usage. In this paper we design, implement, and evaluate a new text classification technique. Our main idea consists in finding a series of projections of the training data by using a new, modified LSI algorithm, projecting all training instances onto the low-dimensional subspace found in the previous step, and finally inducing a binary search on the projected low-dimensional data. Our conclusion is that, for all its simplicity and efficiency, our approach is comparable to SVM in classification accuracy.

Claudio Biancalana, Alessandro Micarelli
A System Supporting Users of Cultural Resource Management Semantic Portals

Cultural Resource Management (CRM) represents an interesting application domain for innovative approaches, models and technologies developed by computer science researchers. This paper presents NavEditOW, a system for navigating, querying and updating ontologies through the web, as a tool providing suitable functionalities for the design and development of semantic portals in the CRM area. NavEditOW supports ontology maintainers, content editors, and end-users who have little or no specific knowledge of Semantic Web technologies and related formal tools. A description of the application of the tool to the representation and management of archaeological knowledge for describing publications in an e-library is also provided.

Andrea Bonomi, Glauco Mantegari, Alessandro Mosca, Matteo Palmonari, Giuseppe Vizzari
Interactive Analysis of Time in Film Stories

In this work we propose some principles that regulate the temporal anchorages by means of which the spectator places the events of a story on the temporal axis of the fabula, the structure in which the causality and the order of the events of a story are recorded. The approach adopted for segmenting the story of a film uses both syntactic elements, such as the scene and the sequence, and semantic elements, such as the events of the story, defined as actions that occur in a determined (diegetic) interval of time. The base representation chosen for the analysis of time in a story is a particular formulation of spectator beliefs, in which (explicit) time is present both as the element that determines the variation of the spectator's beliefs during the viewing of the film, and as an object of belief. We also propose a system that provides interactive aid (for example, to a cinema expert) for annotating the events of the story of a film. This system additionally helps in the analysis of flashbacks, flashforwards and repetitions of events, and may apply temporal reasoning rules to order, both partially and totally, the events of the story.

Francesco Mele, Antonio Calabrese, Roberta Marseglia
Towards MKDA: A Knowledge Discovery Assistant for Researches in Medicine

Nowadays doctors generate a huge amount of raw data. These data, analyzed with data mining techniques, could be sources of new knowledge. Unfortunately, such tasks need skilled data analysts, and few medical researchers are also data mining experts. In this paper we present a web-based system for knowledge discovery assistance in Medicine, able to advise a medical researcher in this kind of task. The user must only define the experiment specifications in a formal language we have defined; the system GUI helps users in their composition. The system then plans a Knowledge Discovery Process (KDP) on the basis of rules in a knowledge base. Finally, the system executes the KDP and produces a model as a result. The system works through the cooperation of different web services specialized in different tasks, and is still under development.

Vincenzo Cannella, Giuseppe Russo, Daniele Peri, Roberto Pirrone, Edoardo Ardizzone
Mobile Robots and Intelligent Environments

This paper deals with a knowledge representation architecture for distributed systems. The aim is to adopt a common framework for dealing with an “intelligent space”, i.e., an ecosystem composed of artificial entities that cooperate to perform intelligent multi-source data fusion. The fused information is used to coordinate the behavior of mobile robots and intelligent appliances. The experimental results discuss this interaction with respect to the fulfillment of complex service tasks.

Francesco Capezio, Fulvio Mastrogiovanni, Antonio Sgorbissa, Renato Zaccaria
Multi-robot Interacting Through Wireless Sensor Networks

This paper addresses the issue of coordinating the operations of multiple robots in an indoor environment. The framework presented here uses a composite networking architecture: a hybrid wireless network combining commonly available WiFi devices with the more recently developed wireless sensor networks. This architecture enables robots to enhance their perceptive capabilities and to exchange information so as to coordinate their actions toward a global common goal. The proposed framework is described with reference to an experimental setup that extends a previously developed robotic tour guide application to a multi-robot setting.

Antonio Chella, Giuseppe Lo Re, Irene Macaluso, Marco Ortolani, Daniele Peri
Design of a Multiagent Solution for Demand-Responsive Transportation

Mobility patterns in large cities have changed over the last decades, making traditional fixed-line public transportation too inefficient to tackle the increasing complexity. Demand-responsive transportation emerges as an alternative in which routes, departure times, vehicles, and even operators can be matched to the identified demand, allowing a more user-oriented and cost-effective approach to service provision. In this context, the design of a multiagent system is presented following the agent-oriented software engineering (AOSE) methodology PASSI.

Claudio Cubillos, Sandra Gaete, Franco Guidi-Polanco, Claudio Demartini
Planning the Behaviour of a Social Robot Acting as a Majordomo in Public Environments

In this paper we propose the use of a social robot as a majordomo interface between users and Smart Environments. We focus, in particular, on the need for the robot to plan its behaviour by taking into account factors that are relevant in public environments, in which the robot has a “social” role. In this context, once the situation has been recognized, the robot uses a probabilistic model to trigger social attitudes towards the user, which in turn influence the activation of its high-level goals. According to the most probable goals, the behaviour plan is computed by applying a utility-based approach that selects the most suitable actions for that situation.

Berardina De Carolis, Giovanni Cozzolongo
Enhancing Comprehension of Ontologies and Conceptual Models Through Abstractions

In addition to the Database Comprehension Problem, where diagrammatic conceptual data models are too large for a modeller or domain expert to comprehend or manage, an Ontology Comprehension Problem is emerging. Formal ontologies are, however, more amenable to automated abstractions to improve understandability. Three ways of abstraction are defined with 11 abstraction functions that use foundational ontology categories. Usability of the abstraction functions is enhanced by associating the functions with a basic framework of levels and abstraction hierarchy, thereby facilitating querying and visualizing ontologies.

C. Maria Keet
Recognizing Chinese Proper Nouns with Transformation-Based Learning and Ontology

This paper proposes an approach based on an ontology and transformation-based error-driven learning (TBL) to recognize Chinese proper nouns. First, our approach redefines the label set and tags Chinese words according to the usage of proper nouns and their context; it then extracts Characteristic Information (CI) of proper nouns from the text and merges it based on the ontology. Second, it tags the training corpus following the new definition of Multi-dimension Attribute Points (MAP), and then extracts rules using the TBL approach. Finally, it recognizes proper nouns by utilizing the rule set and the ontology. Experimental results in our open test show a precision of 92.5% and a recall of 86.3%.

Peifeng Li, Qiaoming Zhu, Lei Wang
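The abstract gives no rule templates; as a minimal sketch of the TBL loop the approach builds on (the toy romanized corpus, the two tags, and the single prev-word rule template are invented for illustration), one could write:

```python
from collections import Counter, defaultdict

# Toy romanized corpus of (word, gold_tag) pairs; "PN" = proper noun.
corpus = [("mr", "O"), ("wang", "PN"), ("said", "O"),
          ("wang", "O"), ("met", "O"), ("wang", "O"),
          ("mr", "O"), ("li", "PN"), ("left", "O"),
          ("li", "O"), ("said", "O"), ("li", "O")]
words = [w for w, _ in corpus]
gold = [t for _, t in corpus]

# Initial-state tagger: most frequent gold tag of each word.
freq = defaultdict(Counter)
for w, t in corpus:
    freq[w][t] += 1
tags = [freq[w].most_common(1)[0][0] for w in words]

def errors(t):
    return sum(a != b for a, b in zip(t, gold))

def apply_rule(rule, t):
    """Rule template: (prev_word, from_tag, to_tag)."""
    prev, frm, to = rule
    return [to if i > 0 and words[i - 1] == prev and tag == frm else tag
            for i, tag in enumerate(t)]

# TBL loop: greedily pick the rule with the largest net error reduction,
# generating candidates only from positions the current tagger gets wrong.
learned = []
while True:
    cands = {(words[i - 1], tags[i], gold[i])
             for i in range(1, len(words)) if tags[i] != gold[i]}
    best = min(cands, key=lambda r: errors(apply_rule(r, tags)), default=None)
    if best is None or errors(apply_rule(best, tags)) >= errors(tags):
        break
    tags = apply_rule(best, tags)
    learned.append(best)
```

On this toy data the baseline tags every word "O"; the loop learns the single rule ("mr", "O", "PN"), which corrects both errors without introducing new ones.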
Toward Image-Based Localization for AIBO Using Wavelet Transform

This paper describes a similarity measure for images to be used in image-based localization for autonomous robots with low computational resources. We propose a novel signature to be extracted from each image and stored in memory. The proposed signature allows both memory saving and fast similarity calculation. It is based on the 2D Haar Wavelet Transform of the gray-level image. We present experiments showing the effectiveness of the proposed image similarity measure. The images used were collected with the AIBO ERS-7 robots of the RoboCup Team Araibo of the University of Tokyo on a RoboCup field; however, the proposed image similarity measure does not use any information on the structure of the environment and does not exploit the peculiar features of the RoboCup environment.

Alberto Pretto, Emanuele Menegatti, Enrico Pagello, Yoshiaki Jitsukawa, Ryuichi Ueda, Tamio Arai
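The exact signature layout is not given in the abstract; the sketch below illustrates the general idea only — compute a few levels of the 2D Haar transform, keep the positions and signs of the largest-magnitude coefficients as a compact signature, and compare signatures as sets. The keep-k scheme, averaging normalization, and function names are assumptions, not the authors' implementation:

```python
import numpy as np

def haar2d(img, levels=3):
    """Averaging-form 2D Haar transform, recursing on the low-pass quadrant."""
    out = img.astype(float)
    n = out.shape[0]
    for _ in range(levels):
        block = out[:n, :n]
        # Transform rows: averages (lo) and differences (hi) of adjacent pairs.
        block = np.hstack([(block[:, 0::2] + block[:, 1::2]) / 2.0,
                           (block[:, 0::2] - block[:, 1::2]) / 2.0])
        # Transform columns the same way.
        out[:n, :n] = np.vstack([(block[0::2, :] + block[1::2, :]) / 2.0,
                                 (block[0::2, :] - block[1::2, :]) / 2.0])
        n //= 2
    return out

def signature(img, k=20):
    """Keep the (position, sign) of the k largest-magnitude coefficients."""
    flat = haar2d(img).ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return {(int(i), int(np.sign(flat[i]))) for i in idx}

def similarity(sig_a, sig_b):
    """Jaccard overlap of two signatures: shared entries / total entries."""
    return len(sig_a & sig_b) / max(len(sig_a | sig_b), 1)
```

Storing only k (position, sign) pairs per image keeps memory small, and comparing two signatures is a set intersection — both properties the abstract highlights for low-resource robots.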
Crosslingual Retrieval in an eLearning Environment

In this paper we report on LT4eL (Language Technology for eLearning), an ongoing project aiming at improving the effectiveness of retrieval and the accessibility of learning objects within a learning management system. We describe the process of building the domain ontology and present the multilingual support offered to the application.

Cristina Vertan, Kiril Simov, Petya Osenova, Lothar Lemnitzer, Alex Killing, Diane Evans, Paola Monachesi
Constraint-Based School Timetabling Using Hybrid Genetic Algorithms

In this paper, a hybrid genetic algorithm (HGA) is developed to solve the constraint-based school timetabling problem (CB-STTP). In addition to the standard crossover and mutation operators, the HGA has a new operator called the repair operator. A timetabling tool based on the HGA has been developed to solve CB-STTP and has been tested extensively using real-world data obtained from Technical and Vocational High Schools in Turkey. Experimental results show that the HGA outperforms a standard GA.

Tuncay Yigit
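The abstract does not detail the repair operator; the sketch below (the toy instance, the greedy repair heuristic, and all GA parameters are invented for illustration) shows the general shape of a genetic algorithm whose offspring are repaired toward constraint satisfaction before entering the population:

```python
import random

# Toy instance: each lesson is (teacher, class); a chromosome assigns
# every lesson a timeslot in 0..SLOTS-1.
LESSONS = [("T1", "A"), ("T1", "B"), ("T2", "A"), ("T2", "B"), ("T3", "A")]
SLOTS = 3

def conflicts(chrom):
    """Hard-constraint violations: a teacher or a class booked twice in one slot."""
    c = 0
    for i in range(len(LESSONS)):
        for j in range(i + 1, len(LESSONS)):
            if chrom[i] == chrom[j]:
                if LESSONS[i][0] == LESSONS[j][0] or LESSONS[i][1] == LESSONS[j][1]:
                    c += 1
    return c

def repair(chrom):
    """Repair operator: greedily move each lesson to its least-conflicting slot."""
    chrom = list(chrom)
    for i in range(len(chrom)):
        chrom[i] = min(range(SLOTS),
                       key=lambda s: conflicts(chrom[:i] + [s] + chrom[i + 1:]))
    return chrom

def hga(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(SLOTS) for _ in LESSONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=conflicts)
        nxt = pop[:2]                                   # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:10], 2)              # parents from the better half
            cut = rng.randrange(1, len(LESSONS))
            child = a[:cut] + b[cut:]                   # one-point crossover
            if rng.random() < 0.2:                      # mutation
                child[rng.randrange(len(child))] = rng.randrange(SLOTS)
            nxt.append(repair(child))                   # repair before insertion
        pop = nxt
    return min(pop, key=conflicts)
```

The design choice the paper's abstract hints at is visible here: crossover and mutation explore freely, while the repair operator pulls each offspring back toward feasibility, so the population spends its time in the feasible region rather than rediscovering it.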
Backmatter
Metadata
Title
AI*IA 2007: Artificial Intelligence and Human-Oriented Computing
edited by
Roberto Basili
Maria Teresa Pazienza
Copyright year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-74782-6
Print ISBN
978-3-540-74781-9
DOI
https://doi.org/10.1007/978-3-540-74782-6