
2009 | Book

KI 2009: Advances in Artificial Intelligence

32nd Annual German Conference on AI, Paderborn, Germany, September 15-18, 2009. Proceedings

Editors: Bärbel Mertsching, Marcus Hund, Zaheer Aziz

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

The 32nd Annual German Conference on Artificial Intelligence, KI 2009 (KI being the German acronym for AI), was held at the University of Paderborn, Germany on September 15–18, 2009, continuing a series of successful events. Starting back in 1975 as a national meeting, the conference now gathers researchers and developers from academic fields and industries worldwide to share their research results covering all aspects of artificial intelligence. This year we received submissions from 23 countries and 4 continents. Besides the international orientation, we made a major effort to include as many branches of AI as possible under the roof of the KI conference. A total of 21 area chairs representing different communities within the field of AI selected further members of the program committee and helped the local organizers to acquire papers. The new approach appealed to the AI community: we had 126 submissions, which constituted an increase of more than 50%, and which resulted in 14 parallel sessions on the following topics: agents and intelligent virtual environments; AI and engineering; automated reasoning; cognition; evolutionary computation; robotics; experience and knowledge management; history and philosophical foundations; knowledge representation and reasoning; machine learning and mining; natural language processing; planning and scheduling; spatial and temporal reasoning; vision and perception, offering cutting-edge presentations and discussions with leading experts. Thirty-one percent of the contributions came from outside German-speaking countries.

Table of Contents

Frontmatter

Planning and Scheduling

Solving Fully-Observable Non-deterministic Planning Problems via Translation into a General Game

In this paper, we propose a symbolic planner based on BDDs, which calculates strong and strong cyclic plans for a given non-deterministic input. The efficiency of the planning approach is based on a translation of the non-deterministic planning problems into a two-player turn-taking game, with a set of actions selected by the solver and a set of actions taken by the environment.

The formalism we use is a PDDL-like planning domain definition language that has been derived to parse and instantiate general games. This conversion makes it possible to derive a concise description of planning domains with a minimized state vector, thereby exploiting existing static analysis tools for deterministic planning.

Peter Kissmann, Stefan Edelkamp
Planning with h+ in Theory and Practice

Many heuristic estimators for classical planning are based on the so-called delete relaxation, which ignores negative effects of planning operators. Ideally, such heuristics would compute the actual goal distance in the delete relaxation, i.e., the cost of an optimal relaxed plan, denoted by h+. However, current delete relaxation heuristics only provide (often inadmissible) estimates of h+ because computing the correct value is an NP-hard problem.

In this work, we consider the approach of planning with the actual h+ heuristic from a theoretical and computational perspective. In particular, we provide domain-dependent complexity results that classify some standard benchmark domains into ones where h+ can be computed efficiently and ones where computing h+ is NP-hard. Moreover, we study domain-dependent implementations of h+ which show that the h+ heuristic provides very informative heuristic estimates compared to other state-of-the-art heuristics.

Christoph Betz, Malte Helmert
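The delete relaxation referred to above can be made concrete with a small sketch. The toy domain, action names, and the greedy extraction below are our own illustration; the count it returns is a crude upper bound on h+ (which, as the paper notes, is NP-hard to compute exactly in general), not the paper's method.

```python
# A minimal sketch of the delete relaxation: deletes are ignored, so
# facts only accumulate. The greedy count is an upper bound on h+.

def relaxed_plan_cost(init, goal, actions):
    """actions: list of (name, preconditions, add_effects) triples.
    Returns the number of actions greedily applied until the goal is
    relaxed-reachable, or None if it never becomes reachable."""
    reached = set(init)
    plan = []
    changed = True
    # Fixpoint: repeatedly apply any action whose preconditions hold
    # and that still adds something new.
    while changed and not goal <= reached:
        changed = False
        for name, pre, add in actions:
            if pre <= reached and not add <= reached:
                reached |= add
                plan.append(name)
                changed = True
    return len(plan) if goal <= reached else None

# Hypothetical two-step domain: pick up a key, then open a door.
acts = [
    ("pickup", {"at-key"}, {"has-key"}),
    ("open",   {"has-key"}, {"door-open"}),
]
print(relaxed_plan_cost({"at-key"}, {"door-open"}, acts))  # 2
```

Real relaxed-plan heuristics (e.g., hFF-style extraction) work backward from the goal to avoid counting irrelevant actions; this forward sketch only shows the relaxation itself.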
A Framework for Interactive Hybrid Planning

Hybrid planning provides a powerful mechanism to solve real-world planning problems. We present a domain-independent, mixed-initiative approach to plan generation that is based on a formal concept of hybrid planning. It allows for any interaction modalities and models of initiative while preserving the soundness of the planning process. Adequately involving the decision competences of end-users in this way will improve the application potential as well as the acceptance of the technology.

Bernd Schattenberg, Julien Bidot, Sascha Geßler, Susanne Biundo
A Memory-Efficient Search Strategy for Multiobjective Shortest Path Problems

The paper develops vector frontier search, a new multiobjective search strategy that achieves an important reduction in space requirements over previous proposals. The complexity of a resulting multiobjective frontier search algorithm is analyzed and its performance is evaluated over a set of random problems.

L. Mandow, J. L. Pérez de la Cruz
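The Pareto-dominance test that underlies any multiobjective frontier search can be sketched as follows. The cost vectors are invented for illustration; the paper's actual contribution, the memory-efficient vector frontier strategy, is not reproduced here.

```python
# Sketch: pruning a set of cost vectors to its Pareto-optimal frontier,
# the core comparison in multiobjective shortest path search.

def dominates(a, b):
    """a dominates b if it is no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_frontier(vectors):
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors if u != v)]

costs = [(3, 5), (4, 4), (5, 3), (4, 6), (6, 6)]
print(pareto_frontier(costs))  # [(3, 5), (4, 4), (5, 3)]
```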
Perfect Hashing for State Spaces in BDD Representation

In this paper we design minimum perfect hash functions on the basis of BDDs that represent all reachable states S ⊆ {0,1}^n. These functions are one-to-one on S and can be evaluated quite efficiently. Such hash functions are useful to perform search in a bitvector representation of the state space. The time to compute the hash value with standard operations on the BDD G is O(n|G|), the time to compute the inverse is O(n^2|G|). When investing O(n) bits per node, we arrive at O(|G|) preprocessing time and optimal time O(n) for ranking and unranking.

Martin Dietzfelbinger, Stefan Edelkamp
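The ranking/unranking idea can be illustrated on a toy decision diagram. This sketch assumes a quasi-reduced diagram (every level-i node points to level i+1) with hand-built nodes; the paper's scheme reaches the stated O(n|G|) and O(n) bounds with more refined preprocessing than the naive counting used here.

```python
# Sketch of BDD-based perfect hashing: rank(s) = number of states in S
# that precede s lexicographically; unrank inverts it via model counts.

class Node:
    def __init__(self, lo=None, hi=None, terminal=None):
        self.lo, self.hi, self.terminal = lo, hi, terminal

def count(node, memo=None):
    """Number of accepting paths below this node."""
    if memo is None:
        memo = {}
    if node.terminal is not None:
        return 1 if node.terminal else 0
    if id(node) not in memo:
        memo[id(node)] = count(node.lo, memo) + count(node.hi, memo)
    return memo[id(node)]

def rank(node, bits):
    r = 0
    for b in bits:
        if b:                    # taking the 1-branch skips all 0-branch states
            r += count(node.lo)
            node = node.hi
        else:
            node = node.lo
    return r

def unrank(node, r, n):
    bits = []
    for _ in range(n):
        c0 = count(node.lo)
        if r < c0:
            bits.append(0); node = node.lo
        else:
            bits.append(1); r -= c0; node = node.hi
    return bits

# S = {00, 01, 11} over two bits (10 excluded), built by hand.
T, F = Node(terminal=True), Node(terminal=False)
n01 = Node(lo=T, hi=T)    # after first bit 0: both second bits allowed
n11 = Node(lo=F, hi=T)    # after first bit 1: only second bit 1
root = Node(lo=n01, hi=n11)
print([rank(root, s) for s in ([0, 0], [0, 1], [1, 1])])  # [0, 1, 2]
print(unrank(root, 2, 2))  # [1, 1]
```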
On the Benefit of Fusing DL-Reasoning with HTN-Planning

Keeping planning problems as small as possible is a must in order to cope with complex tasks and environments. Earlier, we have described a method for cascading Description Logic (DL) representation and reasoning on the one hand, and Hierarchical Task Network (HTN) action planning on the other. The planning domain description as well as the fundamental HTN planning concepts are represented in DL and can therefore be subject to DL reasoning. From these representations, concise planning problems are generated for HTN planners. Through a case study covering a robot navigation domain and the blocks world domain, we show that this method yields significantly smaller planning problem descriptions than regular HTN representations do, and we present the benefits of the approach in comparison with a pure HTN planning approach.

Ronny Hartanto, Joachim Hertzberg
Flexible Timeline-Based Plan Verification

Flexible temporal planning is a general technique that has demonstrated wide application possibilities in heterogeneous domains. A key problem for widening the applicability of these techniques is the robust connection between plan generation and execution. This paper describes how a model-checking verification tool based on UPPAAL-TIGA is suitable for verifying flexible temporal plans. Moreover, we investigate a particular perspective, namely verifying dynamic controllability before actual plan execution.

A. Cesta, A. Finzi, S. Fratini, A. Orlandini, E. Tronci
Solving Non-deterministic Planning Problems with Pattern Database Heuristics

Non-determinism arises naturally in many real-world applications of action planning. Strong plans for this type of problem can be found using AO* search guided by an appropriate heuristic function. Most domain-independent heuristics considered in this context so far are based on the idea of ignoring delete lists and do not properly take the non-determinism into account. Therefore, we investigate the applicability of pattern database (PDB) heuristics to non-deterministic planning. PDB heuristics have emerged as rather informative in a deterministic context. Our empirical results suggest that PDB heuristics can also perform reasonably well in non-deterministic planning. Additionally, we present a generalization of the pattern additivity criterion known from classical planning to the non-deterministic setting.

Pascal Bercher, Robert Mattmüller
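A pattern database in the classical (deterministic) sense can be sketched as follows. The toy task, variable names, and projection are our own assumptions for illustration; the paper's extension to non-deterministic planning and AO* search is not shown.

```python
# Sketch: build a PDB by projecting a toy task onto a variable subset
# (the "pattern") and computing abstract goal distances by backward BFS.

from collections import deque

def build_pdb(states, succ, is_goal, pattern):
    """states: list of dicts var->val; succ(s): successor states;
    pattern: set of variables kept in the abstraction."""
    def project(s):
        return tuple(sorted((v, s[v]) for v in pattern))
    preds = {}                       # reversed abstract transition graph
    for s in states:
        a = project(s)
        preds.setdefault(a, set())
        for t in succ(s):
            preds.setdefault(project(t), set()).add(a)
    dist, frontier = {}, deque()
    for s in states:                 # abstract goal states get distance 0
        if is_goal(s) and project(s) not in dist:
            dist[project(s)] = 0
            frontier.append(project(s))
    while frontier:                  # backward BFS over abstract edges
        a = frontier.popleft()
        for p in preds.get(a, ()):
            if p not in dist:
                dist[p] = dist[a] + 1
                frontier.append(p)
    return project, dist

# Toy task: two binary variables, actions set a 0-variable to 1,
# the goal is x = y = 1; the pattern keeps only x.
states = [{"x": a, "y": b} for a in (0, 1) for b in (0, 1)]
def succ(s):
    out = []
    for v in ("x", "y"):
        if s[v] == 0:
            t = dict(s)
            t[v] = 1
            out.append(t)
    return out
project, dist = build_pdb(states, succ,
                          lambda s: s["x"] == 1 and s["y"] == 1, {"x"})
print(dist[project({"x": 0, "y": 0})])  # 1
```

Looking a state up via `dist[project(s)]` gives an admissible estimate, since abstract distances can only underestimate concrete ones.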
An Exploitative Monte-Carlo Poker Agent

We describe the poker agent AKI-RealBot, which participated in the 6-player Limit Competition of the third Annual AAAI Computer Poker Challenge in 2008. It finished in second place, its performance being mostly due to its superior ability to exploit weaker bots. This paper describes the architecture of the program and the Monte-Carlo decision-tree-based decision engine that was used to make the bot's decisions. It focuses on the modifications which made the bot successful in exploiting weaker bots.

Immanuel Schweizer, Kamill Panitzek, Sang-Hyeun Park, Johannes Fürnkranz
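The Monte-Carlo decision principle behind such agents can be illustrated generically: estimate each action's expected payoff by random rollouts and pick the best. The payoff model below is entirely invented and has nothing to do with actual poker hand evaluation or the bot's decision tree.

```python
# Generic Monte-Carlo action selection sketch (not AKI-RealBot's engine).

import random

def monte_carlo_decision(actions, simulate, n=2000, rng=None):
    """Average n simulated payoffs per action; return the best action."""
    rng = rng or random.Random(0)
    avg = {a: sum(simulate(a, rng) for _ in range(n)) / n for a in actions}
    return max(avg, key=avg.get), avg

# Hypothetical payoffs: "fold" loses a small blind for sure,
# "call" is a noisy gamble with slightly positive expectation.
def simulate(action, rng):
    if action == "fold":
        return -1.0
    return rng.gauss(0.5, 4.0)

best, estimates = monte_carlo_decision(["fold", "call"], simulate)
print(best)  # call
```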

Vision and Perception

Interaction of Control and Knowledge in a Structural Recognition System

This contribution treats knowledge-based image understanding. The knowledge is coded declaratively in a production system. Applying this knowledge to a large set of primitives may lead to high computational effort. A particular accumulating parsing scheme trades soundness for feasibility. By default this utilizes a bottom-up control based on the quality assessment of the object instances. The contribution of this work lies in the description of top-down control rationales that accelerate the search dramatically. Top-down strategies are distinguished into two types: (i) global control and (ii) localized focus-of-attention and inhibition methods. These are discussed and empirically compared using a particular landmark recognition system and representative aerial image data from Google Earth.

Eckart Michaelsen, Michael Arens, Leo Doktorski
Attention Speeds Up Visual Information Processing: Selection for Perception or Selection for Action?

Attention speeds up information processing. Although this finding has a long history in experimental psychology, it has found less regard in computational models of visual attention. In psychological research, two frameworks explain the function of attention. Selection for perception emphasizes that perception- or consciousness-related processing presupposes selection of relevant information, whereas selection for action emphasizes that action constraints make selection necessary. In the present study, we ask whether or how far attention, as measured by the speed-up of information processing, is based on selection for perception or selection for action. The accelerating effect was primarily based on selection for perception, but there was also a substantial effect of selection for action.

Katharina Weiß, Ingrid Scharlau
Real-Time Scan-Line Segment Based Stereo Vision for the Estimation of Biologically Motivated Classifier Cells

In this paper we present a real-time scan-line segment based stereo vision method for the estimation of biologically motivated classifier cells in an active vision system. The system is challenged to overcome several problems in autonomous mobile robotic vision, such as detecting incoming moving objects and estimating their 3D motion parameters in a dynamic environment. The proposed algorithm employs a modified optimization module within the scan-line framework to achieve a valuable reduction in the computation time needed for generating a real-time depth map. Moreover, the experimental results show high robustness against noise and unbalanced lighting conditions in the input data.

M. Salah E. -N. Shafik, Bärbel Mertsching
Occlusion as a Monocular Depth Cue Derived from Illusory Contour Perception

When a three-dimensional scene is projected onto the two-dimensional receptive field of a camera or a biological vision system, all depth information is lost. Even without a knowledge base, i.e., without knowing what object can be seen, it is possible to reconstruct the depth information. Besides stereoscopic depth cues, a number of monocular depth cues can also be used. One of the most important monocular depth cues is the occlusion of object boundaries. Therefore, one of the elaborate tasks for the low-level image processing stage of a vision system is the completion of cluttered or occluded object boundaries and the depth assignment of overlapped boundaries. We describe a method for depth ordering and figure-ground segregation from monocular depth cues, namely the arrangement of so-called illusory contours at junctions in the edge map of an image. To this end, a computational approach to the perception of illusory contours, based on the tensor voting technique, is introduced and compared with an alternative contour completion realized by spline interpolation. While most approaches assume that the positions of junctions and the orientations of associated contours are already known, we also consider the preprocessing steps that are necessary for a robust perception task. This implies the anisotropic diffusion of the input image in order to simplify the image contents while preserving the edge information.

Marcus Hund, Bärbel Mertsching
Fast Hand Detection Using Posture Invariant Constraints

The biggest challenge in hand detection and tracking is the high dimensionality of the hand's kinematic configuration space of about 30 degrees of freedom, which leads to a huge variance in its projections. This makes it difficult to arrive at a tractable model of the hand as a whole. To overcome this problem, we suggest concentrating on posture-invariant local constraints that exist on finger appearances. We show that, besides skin color, there are a number of additional geometric and photometric invariants. This paper presents a novel approach to real-time hand detection and tracking by selecting local regions that comply with these posture invariants. While most existing methods for hand tracking rely on a color-based segmentation as a first preprocessing step, we integrate color cues at the end of our processing chain in a robust manner. We show experimentally that our approach still performs robustly against cluttered backgrounds when using extremely low-quality skin color information. With this we can avoid a user- and lighting-specific calibration of skin color before tracking.

Nils Petersen, Didier Stricker
A Novel and Efficient Method to Extract Features and Vector Creation in Iris Recognition System

The selection of an optimal feature subset and the classification have become important issues in the field of iris recognition. In this paper we propose several methods for iris feature subset selection and vector creation, including a new feature extraction method based on the contourlet transform, which captures the intrinsic geometrical structure of the iris image. To reduce the feature vector dimensions, we extract only the significant bits and information from normalized iris images, ignoring fragile bits. Finally, the feature vector is created by two methods: co-occurrence matrix properties and contourlet coefficients. To analyze the performance of our proposed method, we use the CASIA dataset, which comprises 108 classes with 7 images in each class, each class representing a person. Experimental results show that the proposed method increases the classification accuracy and that the iris feature vector length is much smaller than with other methods.

Amir Azizi, Hamid Reza Pourreza
What You See Is What You Set – The Position of Moving Objects

Human observers consistently misjudge the position of moving objects towards the direction of motion. This so-called flash-lag effect is supposed to be related to very basic processes such as processing latencies in the human brain. In our study we show that this effect can be reversed by changing the task set of the observer. A top-down change of the observer's attentional set leads to a different perception of otherwise identical scenes. Cognitive theories regard the misperception of the moving object as an important feature of attention-mediated processing, because it reflects the prioritized processing of important objects.

Heinz-Werner Priess, Ingrid Scharlau
Parameter Evolution: A Design Pattern for Active Vision

In current robot applications the developer has to deal with changing environments, making a one-time calibration of algorithm parameters for the vision system impossible. A design pattern dealing with this problem by incorporating evolutionary strategies is presented and demonstrated on an example. The example shows that it is possible for the vision system to adjust its parameters automatically and to achieve results close to the optimum.

Michael Müller, Dennis Senftleben, Josef Pauli

Machine Learning and Data Mining

Clustering Objects from Multiple Collections

Clustering methods cluster objects on the basis of a similarity measure between the objects. In clustering tasks where the objects come from more than one collection, part of the similarity often results from features that are related to the collections rather than features that are relevant for the clustering task. For example, when clustering pages from various web sites by topic, pages from the same web site often contain similar terms. The collection-related part of the similarity hinders clustering as it causes the creation of clusters that correspond to collections instead of topics. In this paper we present two methods to restrict clustering to the part of the similarity that is not associated with membership of a collection. Both methods can be used on top of standard clustering methods. Experiments on data sets with objects from multiple collections show that our methods result in better clusters than methods that do not take collection information into account.

Vera Hollink, Maarten van Someren, Viktor de Boer
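One way to realize the idea of discounting collection-related similarity (our own construction, not necessarily either of the paper's two methods) is to center each object's feature vector on its collection mean before measuring similarity.

```python
# Sketch: remove collection-specific structure by per-collection centering.
# After centering, similarity reflects the remaining (topic) dimensions.

def center_by_collection(vectors, collections):
    """vectors: list of equal-length lists; collections: parallel labels."""
    groups = {}
    for v, c in zip(vectors, collections):
        groups.setdefault(c, []).append(v)
    means = {c: [sum(col) / len(vs) for col in zip(*vs)]
             for c, vs in groups.items()}
    return [[x - m for x, m in zip(v, means[c])]
            for v, c in zip(vectors, collections)]

# Toy data: the first coordinate is site-specific, the second is topical.
vecs = [[5, 1], [5, 3], [0, 1], [0, 3]]
out = center_by_collection(vecs, ["siteA", "siteA", "siteB", "siteB"])
print(out)  # [[0.0, -1.0], [0.0, 1.0], [0.0, -1.0], [0.0, 1.0]]
```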
Generalized Clustering via Kernel Embeddings

We generalize traditional goals of clustering towards distinguishing components in a non-parametric mixture model. The clusters are not necessarily based on point locations, but on higher order criteria. This framework can be implemented by embedding probability distributions in a Hilbert space. The corresponding clustering objective is very general and relates to a range of common clustering concepts.

Stefanie Jegelka, Arthur Gretton, Bernhard Schölkopf, Bharath K. Sriperumbudur, Ulrike von Luxburg
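The Hilbert-space embedding of distributions mentioned above can be illustrated with the (biased) maximum mean discrepancy between two samples; the Gaussian kernel and the data are our assumptions, and the paper's clustering objective itself is not reproduced.

```python
# Sketch: compare two sample distributions via their kernel mean
# embeddings using the biased MMD^2 estimate.

import math

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-(x - y) ** 2 / (2 * sigma ** 2))

def mmd2(xs, ys, sigma=1.0):
    def ksum(a, b):
        return sum(gaussian_kernel(i, j, sigma) for i in a for j in b)
    m, n = len(xs), len(ys)
    return ksum(xs, xs) / m**2 + ksum(ys, ys) / n**2 - 2 * ksum(xs, ys) / (m * n)

same = mmd2([0.0, 0.1, -0.1], [0.05, -0.05, 0.0])
far = mmd2([0.0, 0.1, -0.1], [5.0, 5.1, 4.9])
print(same < far)  # True: distant samples have a larger discrepancy
```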
Context-Based Clustering of Image Search Results

In this work we propose to cluster image search results based on the textual contents of the referring webpages. The natural ambiguity and context-dependence of human languages lead to problems that plague modern image search engines: a user formulating a query usually has in mind just one topic, while the results produced to satisfy this query may (and usually do) belong to different topics. Therefore, only part of the search results are relevant for the user. One possible way to improve the user's experience is to cluster the results according to the topics they belong to and present the clustered results to the user. As opposed to clustering based on visual features, an approach utilising the text information in the webpages containing the image is less computationally intensive and provides the resulting clusters with semantically meaningful names.

Hongqi Wang, Olana Missura, Thomas Gärtner, Stefan Wrobel
Variational Bayes for Generic Topic Models

The article contributes a derivation of variational Bayes for a large class of topic models by generalising from the well-known model of latent Dirichlet allocation. For an abstraction of these models as systems of interconnected mixtures, variational update equations are obtained, leading to inference algorithms for models that so far have used Gibbs sampling exclusively.

Gregor Heinrich, Michael Goesele

Evolutionary Computation

Surrogate Constraint Functions for CMA Evolution Strategies

Many practical optimization problems are constrained black boxes. Covariance Matrix Adaptation Evolution Strategies (CMA-ES) belong to the most successful black box optimization methods. Up to now no sophisticated constraint handling method for Covariance Matrix Adaptation optimizers has been proposed. In our novel approach we learn a meta-model of the constraint function and use this surrogate model to adapt the covariance matrix during the search in the vicinity of the constraint boundary. The meta-model can be used for various purposes, e.g., rotation of the mutation ellipsoid, checking the feasibility of candidate solutions, or repairing infeasible mutations by projecting them onto the constraint surrogate function. Experimental results show the potential of the proposed approach.

Oliver Kramer, André Barthelmes, Günter Rudolph
Rake Selection: A Novel Evolutionary Multi-Objective Optimization Algorithm

The optimization of multiple conflicting objectives at the same time is a hard problem. In most cases, a uniform distribution of solutions on the Pareto front is the main objective. We propose a novel evolutionary multi-objective algorithm that is based on selection with regard to equidistant lines in the objective space. The so-called rakes can be computed efficiently in high-dimensional objective spaces and guide the evolutionary search among the set of Pareto optimal solutions. First experimental results reveal that the new approach delivers a good approximation of the Pareto front with uniformly distributed solutions. As the algorithm is based on a (μ + λ)-Evolution Strategy with birth surplus, it can use σ-self-adaptation. Furthermore, the approach yields deeper insights into the number of solutions that are necessary for a uniform distribution of solutions in high-dimensional objective spaces.

Oliver Kramer, Patrick Koch
A Comparison of Neighbourhood Topologies for Staff Scheduling with Particle Swarm Optimisation

The current paper uses a real-life scenario from logistics to compare various forms of neighbourhood topologies within particle swarm optimisation (PSO). Overall, gbest (all particles are connected with each other and exchange information) outperforms other well-known topologies, which is in contrast to some other results in the literature that associate gbest with premature convergence. However, the advantage of gbest is less pronounced on simpler versions of the application. This suggests a relationship between the complexity of instances from an identical class of problems and the effectiveness of PSO neighbourhood topologies.

Maik Günther, Volker Nissen
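The gbest topology that the study found strongest can be sketched with a textbook PSO on a toy objective; the sphere function stands in for the staff-scheduling problem, and all parameter values are conventional choices, not the paper's.

```python
# Minimal gbest PSO sketch: every particle sees the single global best
# (the fully connected topology), in contrast to ring or von Neumann
# neighbourhoods where information spreads locally.

import random

def pso_gbest(f, dim=2, swarm=20, iters=200, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]              # personal bests
    g = min(P, key=f)[:]               # global best, shared by all particles
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.4 * rng.random() * (P[i][d] - X[i][d])
                           + 1.4 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

sphere = lambda x: sum(v * v for v in x)
best = pso_gbest(sphere)
print(sphere(best) < 1e-2)  # the swarm converges close to the origin
```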
Controlling a Four Degree of Freedom Arm in 3D Using the XCSF Learning Classifier System

This paper shows for the first time that a Learning Classifier System, namely XCSF, can learn to control a realistic arm model with four degrees of freedom in a three-dimensional workspace. XCSF learns a locally linear approximation of the Jacobian of the arm kinematics, that is, it learns linear predictions of hand location changes given joint angle changes, where the predictions are conditioned on current joint angles. To control the arm, the linear mappings are inverted, deriving appropriate motor commands given desired hand movement directions. Due to the locally linear model, the inversely desired joint angle changes can be easily derived, while effectively resolving kinematic redundancies on the fly. Adaptive PD controllers are used to finally translate the desired joint angle changes into appropriate motor commands. This paper shows that XCSF scales to three-dimensional workspaces: it reliably learns to control a four degree of freedom arm accurately and effectively while flexibly incorporating additional task constraints.

Patrick O. Stalph, Martin V. Butz, Gerulf K. M. Pedersen
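The inversion of a locally linear kinematics model can be illustrated on a simpler 2-link planar arm. In the paper, XCSF learns the local linear models; here the analytic Jacobian plays that role, and damped least squares does the inversion. Link lengths, gains, and the target are our assumptions.

```python
# Sketch: turn desired hand displacements into joint-angle changes by
# inverting the arm's Jacobian with damped least squares.

import math

L1, L2 = 1.0, 1.0   # assumed link lengths

def hand(q):
    a, b = q
    return (L1 * math.cos(a) + L2 * math.cos(a + b),
            L1 * math.sin(a) + L2 * math.sin(a + b))

def jacobian(q):
    a, b = q
    return [[-L1 * math.sin(a) - L2 * math.sin(a + b), -L2 * math.sin(a + b)],
            [ L1 * math.cos(a) + L2 * math.cos(a + b),  L2 * math.cos(a + b)]]

def step(q, target, gain=0.5, damp=1e-4):
    """One damped-least-squares update: dq = J^T (J J^T + damp*I)^-1 dx."""
    x = hand(q)
    dx = (gain * (target[0] - x[0]), gain * (target[1] - x[1]))
    J = jacobian(q)
    # Solve the 2x2 system (J J^T + damp*I) y = dx by Cramer's rule.
    A = [[sum(J[i][k] * J[j][k] for k in range(2)) + (damp if i == j else 0)
          for j in range(2)] for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    y = ((dx[0] * A[1][1] - A[0][1] * dx[1]) / det,
         (A[0][0] * dx[1] - A[1][0] * dx[0]) / det)
    dq = tuple(sum(J[k][i] * y[k] for k in range(2)) for i in range(2))
    return (q[0] + dq[0], q[1] + dq[1])

q, target = (0.3, 0.6), (0.5, 1.2)
for _ in range(100):
    q = step(q, target)
print(all(abs(h - t) < 1e-3 for h, t in zip(hand(q), target)))  # True
```

In the paper's setting the same inversion would use the learned local models instead of the analytic Jacobian, with adaptive PD controllers turning the resulting joint-angle changes into motor commands.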
An Evolutionary Graph Transformation System as a Modelling Framework for Evolutionary Algorithms

In this paper a heuristic method for solving complex optimization problems is presented which is inspired equally by genetic algorithms and graph transformation. In short, it can be described as a genetic algorithm where the individuals (encoding solutions of the given problem) are always graphs and the operators to create new individuals are provided by graph transformation. As a case study this method is used to solve the independent set problem.

Hauke Tönnies

Natural Language Processing

Semi-automatic Creation of Resources for Spoken Dialog Systems

The increasing number of spoken dialog systems calls for efficient approaches for their development and testing. Our goal is the minimization of hand-crafted resources to maximize the portability of this evaluation environment across spoken dialog systems and domains. In this paper we discuss the user simulation technique which allows us to learn general user strategies from a new corpus. We present this corpus, the VOICE Awards human-machine dialog corpus, and show how it is used to semi-automatically extract the resources and knowledge bases necessary in spoken dialog systems, e.g., the ASR grammar, the dialog classifier, the templates for generation, etc.

Tatjana Scheffler, Roland Roller, Norbert Reithinger
Correlating Natural Language Parser Performance with Statistical Measures of the Text

Natural language parsing, as one of the central tasks in natural language processing, is widely used in many AI fields. In this paper, we address the issue of parser performance evaluation, particularly its variation across datasets. We propose three simple statistical measures to characterize the datasets and evaluate their correlation with parser performance. The results clearly show that different parsers have different performance variation and sensitivity to these measures. The method can be used to guide the choice of natural language parsers for new domain applications, as well as their systematic combination for better parsing accuracy.

Yi Zhang, Rui Wang
Comparing Two Approaches for the Recognition of Temporal Expressions

Temporal expressions are important structures in natural language. In order to understand text, temporal expressions have to be extracted and normalized. In this paper we present and compare two approaches for the automatic recognition of temporal expressions, based on a supervised machine learning approach and trained on TimeBank. The first approach performs a token-by-token classification and the second one does a binary constituent-based classification of chunk phrases. Our experiments demonstrate that on the TimeBank corpus constituent-based classification performs better than the token-based one. It achieves F1-measure values of 0.852 for the detection task and 0.828 when an exact match is required, which is better than the state-of-the-art results for temporal expression recognition on TimeBank.

Oleksandr Kolomiyets, Marie-Francine Moens
Meta-level Information Extraction

This paper presents a novel approach for meta-level information extraction (IE). The common IE process model is extended by utilizing transfer knowledge and meta-features that are created according to already extracted information. We present two real-world case studies demonstrating the applicability and benefit of the approach and directly show how the proposed method improves the accuracy of the applied information extraction technique.

Peter Kluegl, Martin Atzmueller, Frank Puppe
Robust Processing of Situated Spoken Dialogue

Spoken dialogue is notoriously hard to process with standard language processing technologies. Dialogue systems must indeed meet two major challenges. First, natural spoken dialogue is replete with disfluent, partial, elided or ungrammatical utterances. Second, speech recognition remains a highly error-prone task, especially for complex, open-ended domains. We present an integrated approach for addressing these two issues, based on a robust incremental parser. The parser takes word lattices as input and is able to handle ill-formed and misrecognised utterances by selectively relaxing its set of grammatical rules. The choice of the most relevant interpretation is then realised via a discriminative model augmented with contextual information. The approach is fully implemented in a dialogue system for autonomous robots. Evaluation results on a Wizard of Oz test suite demonstrate very significant improvements in accuracy and robustness compared to the baseline.

Pierre Lison, Geert-Jan M. Kruijff
iDocument: Using Ontologies for Extracting and Annotating Information from Unstructured Text

Due to the huge amount of text data in the WWW, annotating unstructured text with semantic markup is a crucial topic in Semantic Web research. This work formally analyzes the incorporation of domain ontologies into information extraction tasks in iDocument. Ontology-based information extraction exploits domain ontologies with formalized and structured domain knowledge for extracting domain-relevant information from un-annotated and unstructured text. iDocument provides a pipeline architecture, an extraction template interface and the ability to exchange domain ontologies for performing information extraction tasks. This work outlines iDocument's ontology-based architecture, the use of SPARQL queries as extraction templates and an evaluation of iDocument in an automatic document annotation scenario.

Benjamin Adrian, Jörn Hees, Ludger van Elst, Andreas Dengel
Behaviorally Flexible Spatial Communication: Robotic Demonstrations of a Neurodynamic Framework

Human spatial cognitive processes provide a model for developing artificial spatial communication systems that fluidly interact with human users. To this end, we develop a neurodynamic model of human spatial language combining spatial and color terms with neurally-grounded scene representations. Tests of this model implemented on a robotic platform support its viability as a theoretical framework for flexible spatial language behaviors in artificial agents.

John Lipinski, Yulia Sandamirskaya, Gregor Schöner
SceneMaker: Automatic Visualisation of Screenplays

Our proposed software system,

SceneMaker

, aims to facilitate the production of plays, films or animations by automatically interpreting natural language film scripts and generating multimodal, animated scenes from them. During the generation of the story content, SceneMaker will give particular attention to emotional aspects and their reflection in fluency and manner of actions, body posture, facial expressions, speech, scene composition, timing, lighting, music and camera work. Related literature and software on Natural Language Processing, in particular textual affect sensing, affective embodied agents, visualisation of 3D scenes and digital cinematography are reviewed. In relation to other work, SceneMaker will present a genre-specific text-to-animation methodology which combines all relevant expressive modalities. In conclusion, SceneMaker will enhance the communication of creative ideas providing quick pre-visualisations of scenes.

Eva Hanser, Paul Mc Kevitt, Tom Lunney, Joan Condell

Knowledge Representation and Reasoning

A Conceptual Agent Model Based on a Uniform Approach to Various Belief Operations

Intelligent agents equipped with epistemic capabilities are expected to carry out quite different belief operations, such as answering queries and performing diagnosis, or revising and updating their own state of belief in the light of new information. In this paper, we present an approach that realizes such belief operations by making use of so-called c-change operations as a uniform core methodology. The key idea is to apply the binary c-change operator in various ways to create various belief operations. Ordinal conditional functions (OCF) serve as representations of epistemic states, providing qualitative semantical structures rich enough to validate conditionals in a semi-quantitative way. In particular, we show how iterated revision is possible in an OCF environment in a constructive manner, and how to distinguish clearly between revision and update.

Christoph Beierle, Gabriele Kern-Isberner
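The OCF idea behind this entry is simple to state concretely: a ranking function assigns each possible world a non-negative rank, rank 0 meaning maximally plausible, and a proposition is believed iff it holds in every minimal-rank world. The Python sketch below illustrates this general idea with a plain conditionalization step; it is an illustration only, not the paper's c-change operator, and the penalty scheme in `revise` is an assumption for the example.

```python
# Minimal sketch of an ordinal conditional function (OCF) over worlds,
# with a simple revision step. Illustrative only -- the paper's c-change
# operators are more general than this plain conditionalization.

def beliefs(kappa):
    """The belief set: all worlds of minimal rank."""
    m = min(kappa.values())
    return {w for w, r in kappa.items() if r == m}

def revise(kappa, accepted_worlds, strength=1):
    """Shift ranks so the most plausible accepted world gets rank 0,
    while every non-accepted world is pushed up by `strength`
    (a hypothetical penalty scheme chosen for this sketch)."""
    in_min = min(kappa[w] for w in accepted_worlds)
    return {w: (r - in_min if w in accepted_worlds else r + strength)
            for w, r in kappa.items()}

# Worlds over two atoms (p, q); ranks encode prior plausibility.
kappa = {"pq": 0, "p~q": 1, "~pq": 2, "~p~q": 3}

# Revise by "not p": the worlds where p is false become accepted.
kappa2 = revise(kappa, {"~pq", "~p~q"})
```

Before revision the agent believes p and q; after revising by ¬p, the most plausible ¬p-world takes over the belief set.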
External Sources of Axioms in Automated Theorem Proving

In recent years there has been a growing demand for Automated Theorem Proving (ATP) in large theories, which often have more axioms than can be handled effectively as normal internal axioms. This work addresses the issues of accessing external sources of axioms from a first-order logic ATP system, and presents an implemented ATP system that retrieves external axioms asynchronously, on demand.

Martin Suda, Geoff Sutcliffe, Patrick Wischnewski, Manuel Lamotte-Schubert, Gerard de Melo
Presenting Proofs with Adapted Granularity

When mathematicians present proofs they usually adapt their explanations to their didactic goals and to the (assumed) knowledge of their addressees. Modern automated theorem provers, in contrast, present proofs usually at a fixed level of detail (also called granularity). Often these presentations are neither intended nor suitable for human use. A challenge therefore is to develop user- and goal-adaptive proof presentation techniques that obey common mathematical practice. We present a flexible and adaptive approach to proof presentation based on classification. Expert knowledge for the classification task can be hand-authored or extracted from annotated proof examples via machine learning techniques. The obtained models are employed for the automated generation of further proofs at an adapted level of granularity.

Marvin Schiller, Christoph Benzmüller
On Defaults in Action Theories

We study the integration of two prominent fields of logic-based AI: action formalisms and non-monotonic reasoning. The resulting framework allows an agent employing an action theory as internal world model to make useful default assumptions. We show that the mechanism behaves properly in the sense that all intuitively possible conclusions can be drawn and no implausible inferences arise. In particular, it suffices to make default assumptions only once (in the initial state) to solve projection problems.

Hannes Strass, Michael Thielscher
Analogy, Paralogy and Reverse Analogy: Postulates and Inferences

Analogy plays a very important role in human reasoning. In this paper, we study a restricted form of it based on analogical proportions, i.e. statements of the form "a is to b as c is to d". We first investigate the constitutive notions of analogy, and besides the analogical proportion we highlight the existence of two noticeable companion relations: one that simply reverses the change from c to d w.r.t. the one from a to b, while the other, called the paralogical proportion, expresses that what a and b have in common, c and d have also. Characteristic postulates are identified for the three types of relations, allowing set-theoretic and Boolean logic interpretations to be provided in a natural way. Finally, the solving of proportion equations as a basis for inference is discussed, again emphasizing the differences between analogy, reverse analogy, and paralogy, in particular in a three-valued setting, which is also briefly presented.

Henri Prade, Gilles Richard
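In the Boolean interpretation, the three proportions can be checked componentwise on 0/1 values, and a proportion equation a : b :: c : x may or may not have a Boolean solution. The sketch below follows the standard Boolean reading described in the abstract; it is an illustration, not the authors' formalization.

```python
# Boolean reading of analogy, reverse analogy and paralogy on 0/1 values.
# Illustrative sketch; the paper's postulates are stated more generally.

def analogy(a, b, c, d):
    """a is to b as c is to d: a differs from b the way c differs from d."""
    return (a - b) == (c - d)

def reverse_analogy(a, b, c, d):
    """The change from c to d is the reverse of the change from a to b."""
    return (a - b) == (d - c)

def paralogy(a, b, c, d):
    """What a and b have in common (positively or negatively), c and d share."""
    return (a & b) == (c & d) and (a | b) == (c | d)

def solve_analogy(a, b, c):
    """Solve the proportion equation a : b :: c : x over {0, 1};
    return None when no Boolean solution exists."""
    sols = [x for x in (0, 1) if analogy(a, b, c, x)]
    return sols[0] if sols else None
```

Note that, e.g., the equation 1 : 0 :: 0 : x has no Boolean solution, which is one of the differences between the three relations the paper discusses.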

Cognition

Early Clustering Approach towards Modeling of Bottom-Up Visual Attention

A region-based approach towards modelling of bottom-up visual attention is proposed, with the objective of accelerating the internal processes of attention and making its output usable by high-level vision procedures to facilitate intelligent decision making during pattern analysis and vision-based learning. A memory-based inhibition of return is introduced in order to handle the dynamic scenarios of mobile vision systems. Performance of the proposed model is evaluated on different categories of visual input and compared with human attention response and other existing models of attention. Results show the success of the proposed model and its advantages over existing techniques in certain aspects.

Muhammad Zaheer Aziz, Bärbel Mertsching
A Formal Cognitive Model of Mathematical Metaphors

Starting from the observation by Lakoff and Núñez (2000) that the process for mathematical discoveries is essentially one of creating metaphors, we show how Information Flow theory (Barwise & Seligman, 1997) can be used to formalise the basic metaphors for arithmetic that ground the basic concepts in the human embodied nature.

Markus Guhe, Alan Smaill, Alison Pease
Hierarchical Clustering of Sensorimotor Features

In this paper a method for clustering patterns represented by sets of sensorimotor features is introduced. Sensorimotor features, a biologically inspired representation, have proved to work well for recognition tasks, but a method for the unsupervised learning of classes from a set of patterns has been missing so far. By using Self-Organizing Maps as an intermediate step, a hierarchy can be built with standard agglomerative clustering methods.

Konrad Gadzicki
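The two-stage idea — first quantize patterns into a small set of prototypes (the SOM step), then build a hierarchy over the prototypes — can be sketched compactly. Below, 2-D points stand in for SOM prototype vectors and a plain single-linkage agglomerative pass builds the clusters; this is an illustrative sketch under those assumptions, not the paper's implementation.

```python
# Single-linkage agglomerative clustering over prototype vectors, as the
# second stage of a SOM-then-cluster pipeline. Illustrative sketch only.
import math

def agglomerate(points, n_clusters):
    """Repeatedly merge the two closest clusters (single linkage)
    until n_clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(math.dist(p, q)
                        for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# Prototypes from two well-separated groups (stand-ins for SOM units).
protos = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
groups = agglomerate(protos, 2)
```

Recording the merge order (rather than stopping at a fixed count) yields the full hierarchy the abstract refers to.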
P300 Detection Based on Feature Extraction in On-line Brain-Computer Interface

We propose a new EEG-based wireless brain computer interface (BCI) with which subjects can “mind-type” text on a computer screen. The application is based on detecting P300 event-related potentials in EEG signals recorded on the scalp of the subject. The BCI uses a simple classifier which relies on a linear feature extraction approach. The accuracy of the presented system is comparable to the state-of-the-art for on-line P300 detection, but with the additional benefit that its much simpler design supports a power-efficient on-chip implementation.

Nikolay Chumerin, Nikolay V. Manyakov, Adrien Combaz, Johan A. K. Suykens, Refet Firat Yazicioglu, Tom Torfs, Patrick Merken, Herc P. Neves, Chris Van Hoof, Marc M. Van Hulle
Human Perception Based Counterfeit Detection for Automated Teller Machines

A robust vision system for the counterfeit detection of bank ATM keyboards is presented. The approach is based on the continuous inspection of a keyboard surface by the authenticity verification of coded covert surface features. For the surface coding, suitable visual patterns on the keyboard are selected, taking into account constraints on visual imperceptibility and robustness as well as the geometrical disturbances to be expected from aging effects. The system’s robustness against varying camera-keyboard distances, lighting conditions and dirt-and-scratches effects is investigated. Finally, a demonstrator working in real-time is developed in order to publicly demonstrate the surface authentication process.

Taswar Iqbal, Volker Lohweg, Dinh Khoi Le, Michael Nolte

History and Philosophical Foundations

Variations of the Turing Test in the Age of Internet and Virtual Reality
(Extended Abstract)

Inspired by Hofstadter’s Coffee-House Conversation (1982) and by the science fiction short story SAM by Schattschneider (1988), we propose and discuss criteria for non-mechanical intelligence. We emphasize the practical requirements for such tests in view of massively multiuser online role-playing games (MMORPGs) and virtual reality systems like Second Life. In response to these new needs, two variations (Chomsky-Turing and Hofstadter-Turing) of the original Turing Test are defined and reviewed. The first one is proven undecidable to a deterministic interrogator. Concerning the second variant, we demonstrate Second Life as a useful framework for implementing (some iterations of) that test.

Florentin Neumann, Andrea Reichenberger, Martin Ziegler
A Structuralistic Approach to Ontologies

It is still an open question how the relation between ontologies and their domains can be fixed. We try to give an account of semantic and pragmatic aspects of formal knowledge by describing ontologies in terms of a particular school in philosophy of science, namely structuralism. We reconstruct ontologies as empirical theories and interpret expressions of an ontology language by semantic structures of a theory. It turns out that there are relevant aspects of theories which cannot as yet be taken into consideration in knowledge representation. We thus provide the basis for extending the concept of ontology to a theory of the use of a language in a community.

Christian Schäufler, Stefan Artmann, Clemens Beckstein
AI Viewed as a “Science of the Culture”

Over the last twenty years, many researchers have tried to improve AI systems by making computer models more faithful to reality. This paper shows that this tendency has no real justification, because it does not solve the observed limitations of AI. It proposes another view, which is to extend the notion of the “Sciences of the Artificial”, introduced by Herbert Simon, into a new “Science of the Culture”. After an introduction and a description of some of the causes of the present AI limitations, the paper recalls what the “Sciences of the Artificial” are and presents the “Sciences of the Culture”. The last part explains the possible consequences of such an extension of AI.

Jean-Gabriel Ganascia
Beyond Public Announcement Logic: An Alternative Approach to Some AI Puzzles

In the paper we present a dynamic model of knowledge. The model is inspired by public announcement logic and by an approach, based on that logic, to a puzzle concerning knowledge and communication. Using the notions of situation and epistemic state as foundations, the model generalizes structures usually used as a semantics for epistemic logics in both static and dynamic aspects. A computer program implementing the model and automatically solving the considered puzzle is built.

Paweł Garbacz, Piotr Kulicki, Marek Lechniak, Robert Trypuz
Behavioural Congruence in Turing Test-Like Human-Computer Interaction

Intensions of higher-order intentional predicates must be observed in a system if intelligence is to be ascribed to it. To give an operational definition of such predicates, the Turing Test is changed into the McCarthy Test. This transformation can be used to distinguish degrees of behavioural congruence of systems engaged in conversational interaction.

Stefan Artmann

AI and Engineering

Machine Learning Techniques for Selforganizing Combustion Control

This paper presents the overall system of a learning, selforganizing, and adaptive controller used to optimize the combustion process in a hard-coal fired power plant. The system itself identifies relevant channels from the available measurements, classical process data and flame image information, and selects the most suitable ones to learn a control strategy based on observed data. Due to the shifting nature of the process, the ability to re-adapt the whole system automatically is essential. The operation in a real power plant demonstrates the impact of this intelligent control system and its ability to increase efficiency and to reduce emissions of greenhouse gases much better than any previous control system.

Erik Schaffernicht, Volker Stephan, Klaus Debes, Horst-Michael Gross
Constraint-Based Integration of Plan Tracking and Prognosis for Autonomous Production

Today’s complex production systems allow different products to be built simultaneously, following individual production plans. Such plans may fail due to component faults or unforeseen behavior, resulting in flawed products. In this paper, we propose a method to integrate diagnosis with plan assessment to prevent plan failure, and to gain diagnostic information when needed. In our setting, plans are generated by a planner before being executed on the system. If the underlying system drifts due to component faults or unforeseen behavior, plans that are ready for execution or already being executed are uncertain to succeed or fail. Therefore, our approach tracks plan execution using probabilistic hierarchical constraint automata (PHCA) models of the system. This makes it possible to explain past system behavior, such as observed discrepancies, while at the same time predicting a plan’s remaining chance of success or failure. We propose a formulation of this combined diagnosis/assessment problem as a constraint optimization problem, and present a fast solution algorithm that estimates success or failure probabilities by considering only a limited number k of system trajectories.

Paul Maier, Martin Sachenbacher, Thomas Rühr, Lukas Kuhn
Fault Detection in Discrete Event Based Distributed Systems by Forecasting Message Sequences with Neural Networks

In reliable systems fault detection is essential for ensuring correct behavior. Today’s automotive electronic systems consist of 30 to 80 electronic control units which provide up to 2,500 atomic functions. Because of the growing dependencies between the different functions, very complex interactions between the software functions often take place.

Within this paper the diagnosability of the behavior of distributed embedded software systems is addressed. In contrast to conventional fault detection, the main target is to set up a self-learning mechanism based on artificial neural networks (ANNs). To reach this goal, three basic characteristics have been identified which describe the observed network traffic within defined constraints. With a new extension of the Reber grammar, we show how the challenges of diagnosability can be covered with ANNs.

Falk Langer, Dirk Eilers, Rudi Knorr
Fuzzy Numerical Schemes for Hyperbolic Differential Equations

The numerical solution of hyperbolic partial differential equations (PDEs) is an important topic in natural sciences and engineering. One of the main difficulties in the task stems from the need to employ several basic types of approximations that are blended in a nonlinear way. In this paper we show that fuzzy logic can be used to construct novel nonlinear blending functions. After introducing the set-up, we show by numerical experiments that the fuzzy-based schemes outperform methods based on conventional blending functions. To the knowledge of the authors, this paper represents the first work where fuzzy logic is applied for the construction of simulation schemes for PDEs.

Michael Breuss, Dominik Dietrich
Model-Based Test Prioritizing – A Comparative Soft-Computing Approach and Case Studies

Man-machine systems have many features that must be considered simultaneously. Their validation often leads to a large number of tests which, due to time and cost constraints, cannot be run exhaustively. It is then essential to prioritize the test subsets in accordance with their importance for relevant features. This paper applies soft-computing techniques to the prioritizing problem and proposes a graph model-based approach where preference degrees are indirectly determined. Events, which imply the relevant system behavior, are classified, and test cases are clustered using (i) unsupervised neural network clustering and (ii) the fuzzy c-means clustering algorithm. Two industrial case studies validate the approach and compare the applied techniques.

Fevzi Belli, Mubariz Eminov, Nida Gokce

Automated Reasoning

Comparing Unification Algorithms in First-Order Theorem Proving

Unification is one of the key procedures in first-order theorem provers. Most first-order theorem provers use the Robinson unification algorithm. Although its complexity is in the worst case exponential, the algorithm is easy to implement and examples on which it may show exponential behaviour are believed to be atypical. More sophisticated algorithms, such as the Martelli and Montanari algorithm, offer polynomial complexity but are harder to implement.

Very little is known about the practical performance of unification algorithms in theorem provers: previous case studies have been conducted on small numbers of artificially chosen problems and compared term-to-term unification, while the best theorem provers perform set-of-terms-to-term unification using term indexing.

To evaluate the performance of unification in the context of term indexing, we carried out large-scale experiments over the TPTP library, containing thousands of problems, using the COMPIT methodology. Our results confirm that the Robinson algorithm is the most efficient one in practice. They also reveal the main sources of inefficiency in the other algorithms. We present these results and discuss various modifications of unification algorithms.

Kryštof Hoder, Andrei Voronkov
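Robinson's algorithm, the baseline compared here, builds a substitution by recursively decomposing the two terms and performing an occurs check before binding a variable. A textbook-style Python sketch follows, using strings starting with an uppercase letter for variables and tuples for compound terms (representation choices made for this illustration); real provers, as the abstract notes, unify against term indexes rather than term pairs.

```python
# Textbook Robinson unification. Variables are strings starting with an
# uppercase letter; compound terms are tuples ('f', arg1, ..., argn).
# Illustrative sketch -- not the indexed unification used in provers.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst=None):
    """Return a most general unifier as a dict, or None on failure."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return unify(t, s, subst)
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# f(X, g(X)) unifies with f(a, Y) under {X -> a, Y -> g(X)}.
mgu = unify(('f', 'X', ('g', 'X')), ('f', 'a', 'Y'))
```

The occurs check in `unify` is exactly the step whose cost the more sophisticated algorithms (e.g. Martelli-Montanari) reorganize to achieve polynomial worst-case complexity.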
Atomic Metadeduction

We present an extension of the first-order logic sequent calculus SK that allows us to systematically add inference rules derived from arbitrary axioms, definitions, theorems, as well as local hypotheses – collectively called assertions. Each derived deduction rule represents a pattern of larger SK-derivations corresponding to the use of that assertion. The idea of metadeduction is to get shorter and more concise formal proofs by allowing the replacement of any assertion in the antecedent of a sequent by derived deduction rules that are available locally for proving that sequent. We prove soundness and completeness for atomic metadeduction, which builds upon a permutability property for the underlying sequent calculus SK with liberalized δ⁺⁺-rule.

Serge Autexier, Dominik Dietrich

Spatial and Temporal Reasoning

Toward Heterogeneous Cardinal Direction Calculus

Cardinal direction relations are binary spatial relations determined under an extrinsically-defined direction system (e.g., north of). We already have point-based and region-based cardinal direction calculi, but for the relations between other combinations of objects we have only a model. We are, therefore, developing a heterogeneous cardinal direction calculus, which allows reasoning on cardinal direction relations without regard to object types. In this initial report, we reformulate the definition of cardinal direction relations, identify the sets of relations between various pairs of objects, and develop methods for deriving upper approximations of converse and composition.

Yohei Kurata, Hui Shi
The Scared Robot: Motivations in a Simulated Robot Arm

This paper investigates potential effects of a motivational module on a robotic arm, which is controlled based on the biologically inspired SURE_REACH system. The motivational module implements two conflicting drives: a goal-location drive and a characteristic-based drive. We investigate the interactions and scaling of these partially competing drives and show how they can be properly integrated into the SURE_REACH system. The aim of this paper is two-fold. From a biological perspective, this paper studies how motivation-like mechanisms may be involved in behavioral decision making and control. From an engineering perspective, the paper strives for the generation of integrated, self-motivated, life-like artificial creatures, which can generate self-induced, goal-oriented behaviors while safely and smartly interacting with humans.

Martin V. Butz, Gerulf K. M. Pedersen
Right-of-Way Rules as Use Case for Integrating GOLOG and Qualitative Reasoning

Agents interacting in a dynamically changing spatial environment often need to access the same spatial resources. A typical example is given by moving vehicles that meet at an intersection in a street network. In such situations right-of-way rules regulate the actions the vehicles involved may perform. For this application scenario we show how the Golog framework for reasoning about action and change can be enhanced by external reasoning services that implement techniques known from the domain of Qualitative Spatial Reasoning.

Florian Pommerening, Stefan Wölfl, Matthias Westphal
Assessing the Strength of Structural Changes in Cooccurrence Graphs

We propose a heuristic for assessing the strength of changes that can be observed in a sequence of cooccurrence graphs from one graph to the next one. We represent every graph by its bandwidth-minimized adjacency matrix. The permutation that describes this minimization is applied to the matrices of the respective following graph. We use a repair count measure to assess the quality of the approximation that is then used to determine whether time frames shall be merged.

Matthias Steinbrecher, Rudolf Kruse
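Bandwidth minimization, the representation step above, reorders a matrix's rows and columns so that nonzeros cluster near the diagonal; the classic heuristic for it is Cuthill-McKee, a breadth-first traversal that visits neighbors in order of increasing degree. The stdlib sketch below illustrates that heuristic on an adjacency-list graph; the paper's repair-count measure for comparing consecutive graphs is not reproduced here.

```python
# Cuthill-McKee ordering: a common bandwidth-reduction heuristic.
# Illustrative sketch on an undirected adjacency-list graph.
from collections import deque

def cuthill_mckee(adj):
    """Return a vertex ordering that tends to reduce matrix bandwidth."""
    order, seen = [], set()
    for start in sorted(adj, key=lambda v: len(adj[v])):
        if start in seen:
            continue
        seen.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in sorted(adj[v], key=lambda u: len(adj[u])):
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return order

def bandwidth(adj, order):
    """Largest index distance between two adjacent vertices."""
    pos = {v: i for i, v in enumerate(order)}
    return max((abs(pos[v] - pos[w]) for v in adj for w in adj[v]),
               default=0)

# A path graph labeled so that the sorted order has large bandwidth.
adj = {0: [4], 4: [0, 1], 1: [4, 3], 3: [1, 2], 2: [3]}
order = cuthill_mckee(adj)
```

Applying the permutation found for one graph to the next graph's matrix, and seeing how much the bandwidth degrades, is the kind of comparison the heuristic above enables.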
Maximum a Posteriori Estimation of Dynamically Changing Distributions

This paper presents a sequential state estimation method with arbitrary probabilistic models expressing the system’s belief. Probabilistic models can be estimated by maximum a posteriori (MAP) estimators, which fail if the state is dynamic or the model contains hidden variables. The latter typically requires iterative methods like expectation maximization (EM). The proposed approximative technique extends message passing algorithms in factor graphs to realize online state estimation despite hidden parameters. In addition, no conjugate priors or hyperparameter transition models have to be specified. For evaluation, we show the relation to EM and discuss the transition model in detail.

Michael Volkhardt, Sören Kalesse, Steffen Müller, Horst-Michael Gross

Agents and Intelligent Virtual Environments

Kinesthetic Bootstrapping: Teaching Motor Skills to Humanoid Robots through Physical Interaction

Programming of complex motor skills for humanoid robots can be a time-intensive task, particularly within conventional textual or GUI-driven programming paradigms. Addressing this drawback, we propose a new programming-by-demonstration method called Kinesthetic Bootstrapping for teaching motor skills to humanoid robots by means of intuitive physical interactions. Here, “programming” simply consists of manually moving the robot’s joints so as to demonstrate the skill in mind. The bootstrapping algorithm then generates a low-dimensional model of the demonstrated postures. To find a trajectory through this posture space that corresponds to a robust robot motion, a learning phase takes place in a physics-based virtual environment. The virtual robot’s motion is optimized via a genetic algorithm and the result is transferred back to the physical robot. The method has been successfully applied to the learning of various complex motor skills such as walking and standing up.

Heni Ben Amor, Erik Berger, David Vogt, Bernhard Jung
To See and to Be Seen in the Virtual Beer Garden - A Gaze Behavior System for Intelligent Virtual Agents in a 3D Environment

Aiming to increase the believability of intelligent virtual agents, this paper describes the implementation of a parameterizable gaze behavior system based on psychological notions of human gaze. The resulting gaze patterns and agent behaviors cover a wide range of possible uses due to this parametrization. We also show how we integrated and used the system within a virtual environment.

Michael Wissner, Nikolaus Bee, Julian Kienberger, Elisabeth André
Requirements and Building Blocks for Sociable Embodied Agents

To be sociable, embodied interactive agents like virtual characters or humanoid robots need to be able to engage in mutual coordination of behaviors, beliefs, and relationships with their human interlocutors. We argue that this requires them to be capable of flexible multimodal expressiveness, incremental perception of other’s behaviors, and the integration and interaction of these models in unified sensorimotor structures. We present work on probabilistic models for these three requirements with a focus on gestural behavior.

Stefan Kopp, Kirsten Bergmann, Hendrik Buschmeier, Amir Sadeghipour
Modeling Peripersonal Action Space for Virtual Humans by Learning a Tactile Body Schema

We propose a computational model for building a tactile body schema for a virtual human. The learned body structure of the agent can enable it to acquire a perception of the space surrounding its body, namely its peripersonal space. The model uses tactile and proprioceptive information and relies on an algorithm which was originally applied with visual and proprioceptive sensor data. As the model is motivated not only technically but also by applications of peripersonal action space, an interaction example with a virtual agent is described and the idea of extending the reaching space to a lean-forward space is presented.

Nhung Nguyen, Ipke Wachsmuth
Hybrid Control for Embodied Agents Applications

Embodied agents can be a powerful interface for natural human-computer interaction. While graphical realism is steadily increasing, the complexity of believable behavior is still hard to create and maintain. We propose a hybrid and modular approach to modeling the agent’s control, combining state charts and rule processing. This allows us to choose the most appropriate method for each of the various behavioral processes, e.g. state charts for deliberative processes and rules for reactive behaviors. Our long-term goal is to architect a framework where the overall control is split into modules and submodules employing appropriate control methods, such as state-based or rule-based technology, so that complex yet maintainable behavior can be modeled.

Jan Miksatko, Michael Kipp
Towards System Optimum: Finding Optimal Routing Strategies in Time-Dependent Networks for Large-Scale Evacuation Problems

Evacuation planning crucially depends on good routing strategies. This article compares two different routing strategies in a multi-agent simulation of a large real-world evacuation scenario. The first approach approximates a Nash equilibrium, where every evacuee adopts an individually optimal routing strategy regardless of what this solution imposes on others. The second approach approximately minimizes the total travel time in the system, which requires enforcing cooperative behavior of the evacuees. Both approaches are analyzed in terms of the global evacuation dynamics and on a detailed geographic level.

Gregor Lämmel, Gunnar Flötteröd
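The gap between the two strategies is already visible in Pigou's classic two-route example: one road has fixed travel time 1, the other has travel time equal to the fraction x of travelers using it. Selfish routing sends everyone onto the variable road, while the system optimum splits the flow. The numbers below are that textbook example, not the paper's evacuation scenario.

```python
# Pigou's two-route example: average travel time under selfish (Nash)
# routing vs. the system optimum. Illustrative toy numbers only.

def avg_time(x):
    """Average travel time when fraction x takes the congestible road
    (cost x per traveler) and 1 - x takes the constant road (cost 1)."""
    return x * x + (1 - x) * 1

# Nash equilibrium: the congestible road never costs more than 1, so
# every traveler takes it (x = 1) and everyone needs time 1.
nash = avg_time(1.0)

# System optimum: minimize x^2 + (1 - x); 2x - 1 = 0 gives x = 1/2.
optimum = avg_time(0.5)
```

The ratio nash / optimum (here 4/3) is the "price of anarchy" that cooperative routing, as in the second approach above, tries to avoid paying.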
Formalizing Joint Attention in Cooperative Interaction with a Virtual Human

Crucial for action coordination of cooperating agents, joint attention concerns the alignment of attention to a target as a consequence of attending to each other’s attentional states. We describe a formal model which specifies the conditions and cognitive processes leading to the establishment of joint attention. This model provides a theoretical framework for cooperative interaction with a virtual human and is specified in an extended belief-desire-intention modal logic.

Nadine Pfeiffer-Leßmann, Ipke Wachsmuth
Towards Determining Cooperation Based on Multiple Criteria

Selfish agents in a multiagent system often behave suboptimally because they only intend to maximize their own profit. Therefore a dilemma occurs between the local optimum and the global optimum (i.e. optimizing the social welfare). We deal with a decision process based on multiple criteria to determine with whom to cooperate. We propose an imitation-based algorithm that works locally on the agents and produces high levels of global cooperation. Without global knowledge, the social welfare is maximized with the help of local decisions. These decisions are influenced by different meme values which are transmitted through the population.

Markus Eberling

Experience and Knowledge Management

The SEASALT Architecture and Its Realization within the docQuery Project

SEASALT (Sharing Experience using an Agent-based System Architecture LayouT) presents an instantiation of the Collaborating Multi-Expert Systems (CoMES) approach [1]. It offers an application-independent architecture that features knowledge acquisition from a web-community, knowledge modularization, and agent-based knowledge maintenance. The paper introduces an application domain which applies SEASALT and describes each part of the novel architecture for extracting, analyzing, sharing and providing community experiences in an individualized way.

Meike Reichle, Kerstin Bach, Klaus-Dieter Althoff
Case Retrieval in Ontology-Based CBR Systems

This paper presents COBRA, our knowledge-intensive Case-Based Reasoning platform for diagnosis. It integrates domain knowledge along with cases in an ontological structure. COBRA allows users to describe cases using any concept or instance of a domain ontology, which leads to a heterogeneous case base. Case heterogeneity complicates retrieval, since correspondences must be identified between query and case attributes. We present in this paper our system architecture and the case retrieval phase. Then, we introduce the notions of similarity regions and attribute roles used to overcome case heterogeneity problems.

Amjad Abou Assali, Dominique Lenne, Bruno Debray
Behaviour Monitoring and Interpretation
A Computational Approach to Ethology

Behaviour Monitoring and Interpretation (BMI) is a new field that has developed gradually and inconspicuously over the last decades. With technological advances made at the sensory level and the introduction of ubiquitous computing technologies in the nineties, this field has been pushed to a new level. A common methodology of many different research projects and applications can be identified. This methodology is outlined as a framework in this paper and supported by recent work. As a result, it shows how BMI automates ethology. Moreover, by bringing in sophisticated AI techniques, it shows how BMI replaces a simple behaviour-interpretation mapping with computational levels between observed behaviours and their interpretations. This is similar to the way functionalism and the cognitive sciences enabled new explanation models and provided an alternative to behaviourism in the early days of AI. Finally, first research results are given which back up the usefulness of BMI.

Björn Gottfried
Automatic Recognition and Interpretation of Pen- and Paper-Based Document Annotations

In this paper we present a system which recognizes handwritten annotations on printed text documents and interprets their semantic meaning. The system proceeds in three steps. In the first step, document analysis methods are applied to identify possible gesture and text regions. In the second step, the text and gestures are recognized using several state-of-the-art recognition methods. In the third step, the actual marked text is identified. Finally, the recognized information is sent to the Semantic Desktop, the personal Semantic Web on the desktop computer, which supports users in their information management. In order to assess the performance of the system, we have performed an experimental study in which we evaluated the different stages of the system and measured the overall performance.

Marcus Liwicki, Markus Weber, Andreas Dengel

Robotics

Self-emerging Action Gestalts for Task Segmentation

Task segmentation from user demonstrations is an often neglected component of robot programming by demonstration (PbD) systems. This paper presents an approach to the segmentation problem motivated by psychological findings of gestalt theory. It assumes the existence of certain “action gestalts” that correspond to basic actions a human performs. Unlike other approaches, the set of elementary actions is not prespecified, but is learned in a self-organized way by the system.

Michael Pardowitz, Jan Steffen, Helge Ritter
Prediction and Classification of Motion Trajectories Using Spatio-Temporal NMF

This paper presents a new approach for decomposing motion trajectories. The proposed algorithm is based on non-negative matrix factorization, which is applied to a grid-like representation of the trajectories. From a set of training samples a number of basis primitives is generated. These basis primitives are applied to reconstruct an observed trajectory. The reconstruction information can afterwards be used for classification. An extension of the reconstruction approach furthermore makes it possible to predict the observed movement into the future. The proposed algorithm goes beyond the standard methods for tracking, since it does not use an explicit motion model but is able to adapt to the observed situation. In experiments we used real movement data to evaluate several aspects of the proposed approach.

Julian P. Eggert, Sven Hellbach, Alexander Kolarow, Edgar Körner, Horst-Michael Gross
A Manifold Representation as Common Basis for Action Production and Recognition

In this paper, we first review our previous work in the domain of dextrous manipulation, where we introduced Manipulation Manifolds – a highly structured manifold representation of hand postures which lends itself to simple and robust manipulation control schemes. Coming from this scenario, we then present our idea of how this generative system can be naturally extended to the recognition and segmentation of the represented movements, providing the core representation for a combined system for action production and recognition.

Jan Steffen, Michael Pardowitz, Helge Ritter

Posters

An Argumentation-Based Approach to Handling Inconsistencies in DL-Lite

As a tractable description logic, DL-Lite provides a good compromise between expressive power and computational complexity of inference. Since classical description logics break down in the presence of inconsistent knowledge bases, it is important to study ways of handling inconsistencies in tractable DL-Lite based ontologies. In this paper, we present an argumentation-based approach to dealing with inconsistent DL-Lite based ontologies. In particular, we develop a graph-based algorithm to implement paraconsistent reasoning in DL-Lite.

Xiaowang Zhang, Zuoquan Lin
Thresholding for Segmentation and Extraction of Extensive Objects on Digital Images

The threshold setting problem is investigated for the segmentation and extraction of extensive objects in digital images. An image processing structure is considered which includes thresholding for binarization. A new method for dynamic threshold setting and control is proposed, based on the analysis of the isolated fragments to be extracted during segmentation. Extensive objects are extracted by sequential erosion of isolated fragments in the image; the number of deleted points is used for dynamic thresholding. The proposed method has optimality properties.

Vladimir Volkov
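The coupling between erosion and threshold control can be sketched in a dependency-free way; the specific criterion, step size and stopping rule below are our own illustrative choices, not the paper's:

```python
import numpy as np

def erode(binary):
    """One step of 4-neighbour binary erosion (border treated as background)."""
    p = np.pad(binary, 1)
    return binary & p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]

def dynamic_threshold(image, t0=128, step=8, max_deleted_frac=0.5):
    """Illustrative threshold control: raise the threshold until a single
    erosion deletes at most a given fraction of the foreground, i.e. until
    the surviving fragments are extensive objects rather than noise."""
    t = t0
    while t < 255:
        fg = image > t
        if fg.sum() == 0:
            break
        deleted = fg.sum() - erode(fg).sum()
        if deleted / fg.sum() <= max_deleted_frac:
            return t
        t += step
    return t
```

A solid blob survives erosion almost intact, so the threshold settles immediately, whereas scattered noise pixels vanish under one erosion and drive the threshold upward.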
Agent-Based Pedestrian Simulation of Train Evacuation Integrating Environmental Data

Simulating evacuation processes is an established way of evaluating layouts, routing strategies, or sign placement. A variety of simulation projects using different microscopic modeling and simulation paradigms has been performed. In this contribution, we present a particular simulation project that evaluates different emergency system layouts for a planned train tunnel. The particularly interesting aspect of this project is the integration of realistic dynamic environmental data about temperature and smoke propagation and its effect on the agents, which are equipped with high-level abilities.

Franziska Klügl, Georg Klubertanz, Guido Rindsfüser
An Intelligent Fuzzy Agent for Spatial Reasoning in GIS

In this paper, an intelligent fuzzy agent is presented that identifies risk values and the environmental damage caused by smoke plumes. As smoke plumes move, a data extractor extracts the fuzzy areas from NOAA satellite images, a spatial decision support system updates information from the database, and a topological simulator computes the strength and type of topological relationships and sends the extracted information to a designed knowledge-based system. A fuzzy inference subagent combines the information provided by the data extractor subagent, the topological simulator subagent, and the knowledge base, and sends the results back to the spatial decision support subsystem. The risk amounts for the pixel elements of the forest area are computed, and dangerous sites are specified based on the spatial decision support system in a GIS environment. Then, a genetic learning agent generates and tunes the spatial knowledge bases for the next risk calculation. Experimental results show that the designed system provides flexibility, efficiency and robustness for air pollution monitoring.

Rouzbeh Shad, Mohammad Saadi Mesgari, Hamid Ebadi, Abbas Alimohammadi, Aliakbar Abkar, Alireza Vafaeenezhad
Learning Parametrised RoboCup Rescue Agent Behaviour Using an Evolutionary Algorithm

Although various methods have already been utilised in the RoboCup Rescue simulation project, we investigated a new approach and implemented self-organising agents without any central instance. Coordinated behaviour is achieved by using a task allocation system, which supports an adjustable evaluation function that gives the agents options on their behaviour. Weights for each evaluation function were evolved using an evolutionary algorithm, and we additionally investigated different settings for the learning algorithm. We obtained extraordinarily high scores on deterministic simulation runs with reasonably acting agents.

Michael Kruse, Michael Baumann, Tobias Knieper, Christoph Seipel, Lial Khaluf, Nico Lehmann, Alex Lermontow, Christian Messinger, Simon Richter, Thomas Schmidt, Daniel Swars
Heuristics for Resolution in Propositional Logic

One of the reasons for the efficiency of automated theorem proving systems is the use of good heuristics. There are semantic heuristics, such as set of support, which make use of additional knowledge about the problem at hand; other widely employed heuristics work well without making any additional assumptions. A heuristic which seems to be generally useful is to “keep things simple,” such as preferring small clause sets over big ones. For the simple case of propositional logic with three variables, we examine this heuristic and compare it to a heuristic which takes the structure of the clause set into consideration. The study takes into account the class of all possible problems.

Manfred Kerber
Context-Aware Service Discovery Using Case-Based Reasoning Methods

This paper presents an architecture for accessing distributed services with embedded systems using message oriented middleware. For the service discovery a recommendation system using case-based reasoning methods is utilized. The main idea is to take the context of each user into consideration in order to suggest appropriate services. We define our context and discuss how its attributes are compared.

The presented prototype was implemented for the Ricoh & Sun Developer Challenge; the client software was therefore restricted to Ricoh’s Multi Functional Product as an embedded system. The similarity functions were designed and tested using myCBR, and the service recommender application is based on the jCOLIBRI CBR framework.

Markus Weber, Thomas Roth-Berghofer, Volker Hudlet, Heiko Maus, Andreas Dengel
Forward Chaining Algorithm for Solving the Shortest Path Problem in Arbitrary Deterministic Environment in Linear Time - Applied for the Tower of Hanoi Problem

The paper presents an application of an algorithm which solves the shortest path problem in an arbitrary deterministic environment in linear time, using the OFF ROUTE acting method and an emotional agent architecture. In general, the complexity of the algorithm does not depend on the number of states n, but only on the length of the shortest path; in the worst case the complexity is at most O(n). The algorithm is applied to the Tower of Hanoi problem.

Silvana Petruseva
Early Top-Down Influences in Control of Attention: Evidence from the Attentional Blink

The relevance of top-down information in the deployment of attention has been increasingly emphasized in cognitive psychology. We present recent findings about the dynamics of these processes and demonstrate that task relevance can be adjusted rapidly by incoming bottom-up information. This adjustment substantially increases performance in a subsequent task. Implications for artificial visual models are discussed.

Frederic Hilkenmeier, Jan Tünnermann, Ingrid Scharlau
Probabilistic Models for the Verification of Human-Computer Interaction

In this paper, we present a method for the formalization of probabilistic models of human-computer interaction (HCI), including user behavior. These models can then be used for the analysis and verification of HCI systems with the support of model checking tools. The method allows answering probabilistic questions such as “what is the probability that the user will unintentionally send confidential information to unauthorized recipients?” It also allows computing average interaction costs, answering questions such as “how much time does a user on average need to send an email?”

Bernhard Beckert, Markus Wagner
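To make this kind of question concrete: reachability probabilities in a discrete-time Markov chain can be computed with a linear solve over the transient states, which is essentially what a probabilistic model checker automates. The three-state email model below is entirely hypothetical:

```python
import numpy as np

# Hypothetical user model: state 0 = composing, state 1 = sent correctly,
# state 2 = sent to a wrong recipient (states 1 and 2 are absorbing).
P = np.array([
    [0.90, 0.08, 0.02],   # from "composing"
    [0.00, 1.00, 0.00],   # from "sent correctly"
    [0.00, 0.00, 1.00],   # from "wrong recipient"
])

# Probability of eventually reaching the error state 2 from each transient
# state: solve (I - P_tt) x = b, where b holds one-step error probabilities.
transient = [0]
A = np.eye(len(transient)) - P[np.ix_(transient, transient)]
b = P[np.ix_(transient, [2])].sum(axis=1)
x = np.linalg.solve(A, b)
```

Here the answer is 0.02 / (0.02 + 0.08) = 0.2: conditioned on ever leaving the composing state, one in five exits goes to the wrong recipient.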
HieroMate: A Graphical Tool for Specification and Verification of Hierarchical Hybrid Automata

In previous works, Hierarchical Hybrid Automata (HHA) have been proposed as a combination of UML state machine diagrams and hybrid automata to model complex, and in particular multi-agent, systems. This approach enables formal system specification on different levels of abstraction and expresses real-time system behavior. A prototype was implemented using constraint logic programming (CLP) as a framework to specify and verify these HHA. However, it still requires the user to write a CLP program in order to specify the HHA, which is tedious and error-prone. Therefore, this paper aims at simplifying the verification process by introducing a tool environment in which a HHA model, together with its requirements, is entered graphically and the verification is carried out automatically.

Ammar Mohammed, Christian Schwarz
Building Geospatial Data Collections with Location-Based Games

The traditional, expert-based process of knowledge acquisition is known to be both slow and costly. With the advent of the Web 2.0, community-based approaches have appeared. These promise a similar or even higher level of information quantity by using the collaborative work of voluntary contributors. Yet the community-driven approach yields new problems of its own, most prominently contributor motivation and data quality. Our former work [1] has shown that the issue of contributor motivation can be solved by embedding the data collection activity into a gaming scenario. Additionally, good games are designed to be replayable and are thus well suited to generating redundant datasets. In this paper we propose semantic view area clustering as a novel approach to aggregating semantically tagged objects in order to achieve higher overall data quality. We also introduce the concept of semantic barriers as a method to account for the interaction between spatial and semantic data. Finally, we successfully evaluate our algorithm against a traditional clustering method.

Sebastian Matyas, Peter Wullinger, Christian Matyas
Design Principles for Embodied Interaction: The Case of Ubiquitous Computing

Designing user interfaces for ubiquitous computing applications is a challenging task. In this paper we discuss how to build intelligent interfaces. The foundations are usability principles that are valid on very general levels, and we present a number of established methods for the design process that can help to meet these principled requirements. Participatory and iterative, so-called human-centered, approaches are particularly important for interfaces in ubiquitous computing. The question of how to make interactional interfaces more intelligent is not trivial, and there are multiple approaches to enhance either the intelligence of the system or that of the user. The novel interface approaches presented herein follow the idea of embodied interaction and put particular emphasis on the situated use of a system and the mental models humans develop in context.

Rainer Malaka, Robert Porzel
Multidisciplinary Design of Air-Launched Space Launch Vehicle Using Simulated Annealing

The design of a space launch vehicle is a complex problem that must balance competing objectives and constraints. In this paper we present a novel approach to the multidisciplinary design and optimization of an Air-Launched Space Launch Vehicle (ASLV) using simulated annealing as the optimizer. Vehicle performance modeling requires that analyses from the mass, propulsion, aerodynamics and trajectory modules be integrated into the design optimization process. Simulated annealing is used as a global optimizer to achieve minimum gross launch mass while remaining within certain mission constraints and performance objectives. The mission of the ASLV is to deliver a 200 kg payload (satellite) to low Earth orbit.

Amer Farhan Rafique, He LinShu, Qasim Zeeshan, Ali Kamran
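The core accept/reject loop of simulated annealing is generic; a minimal sketch follows, with a toy one-dimensional objective and a penalty term standing in for the coupled mass/propulsion/aerodynamics/trajectory model (all parameters below are illustrative, not the paper's):

```python
import math
import random

def simulated_annealing(cost, x0, neighbour, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated annealing: accept worse moves with probability
    exp(-delta/T), and cool the temperature geometrically."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbour(x, rng)
        cy = cost(y)
        if cy < c or rng.random() < math.exp((c - cy) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c

# Toy stand-in for the launch-mass objective: a quadratic with minimum at 3,
# plus a large penalty for violating the constraint x >= 1.
cost = lambda x: (x - 3) ** 2 + (1000 if x < 1 else 0)
best, best_c = simulated_annealing(
    cost, x0=10.0, neighbour=lambda x, r: x + r.uniform(-0.5, 0.5))
```

Constraint handling via penalty terms, as sketched here, is one common way to keep an annealer inside mission constraints while it minimizes the primary objective.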
Stochastic Feature Selection in Support Vector Machine Based Instrument Recognition

Automatic instrument recognition is an important task in musical applications. In this paper we concentrate on the recognition of electronic drum sounds from a large, commercially available drum sound library. The recognition task can be formulated as a classification problem, with each sample described by one hundred temporal and spectral features. Support Vector Machines turn out to be an excellent choice for this classification task. Furthermore, we concentrate on the stochastic optimization of a feature subset using evolution strategies and compare the results to a classifier trained on the complete feature set.

Oliver Kramer, Tobias Hein
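Stochastic feature-subset search can be illustrated with a (1+1)-evolution strategy over a binary mask of the one hundred features. To keep the sketch dependency-free we substitute a nearest-centroid classifier for the SVM; the data, fitness function and mutation rate are synthetic stand-ins, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 samples, 100 features, only the first 5 informative.
X = rng.normal(size=(200, 100))
y = (X[:, :5].sum(axis=1) > 0).astype(int)
X[:, 5:] += rng.normal(scale=2.0, size=(200, 95))   # noisy, uninformative features

def accuracy(mask):
    """Nearest-centroid accuracy on the selected feature subset
    (a lightweight stand-in for the SVM used in the paper)."""
    if not mask.any():
        return 0.0
    Xm = X[:, mask]
    c0, c1 = Xm[y == 0].mean(axis=0), Xm[y == 1].mean(axis=0)
    pred = np.linalg.norm(Xm - c1, axis=1) < np.linalg.norm(Xm - c0, axis=1)
    return (pred == y).mean()

# (1+1)-ES on the binary mask: flip a few bits, keep the child if no worse.
mask = rng.random(100) < 0.5
best = accuracy(mask)
for _ in range(300):
    child = mask ^ (rng.random(100) < 0.02)   # ~2 bit flips per offspring
    acc = accuracy(child)
    if acc >= best:
        mask, best = child, acc
```

Because selection only ever keeps equal-or-better masks, the fitness is monotonically non-decreasing, and the search tends to discard the noisy features that dilute the class centroids.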
Backmatter
Metadata
Title
KI 2009: Advances in Artificial Intelligence
Editors
Bärbel Mertsching
Marcus Hund
Zaheer Aziz
Copyright Year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-04617-9
Print ISBN
978-3-642-04616-2
DOI
https://doi.org/10.1007/978-3-642-04617-9
