
2008 | Book

Intelligent Virtual Agents

8th International Conference, IVA 2008, Tokyo, Japan, September 1-3, 2008. Proceedings

Editors: Helmut Prendinger, James Lester, Mitsuru Ishizuka

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 8th International Conference on Intelligent Virtual Agents, IVA 2008, held in Tokyo, Japan, in September 2008. The 18 revised full papers and 28 revised short papers, presented together with 42 poster papers, were carefully reviewed and selected from 99 submissions. The papers are organized in topical sections on emotion and empathy; narrative and augmented reality; conversation and negotiation; nonverbal behavior; models of culture and personality; markup and representation languages; architectures for robotic agents; cognitive architectures; agents for healthcare and training; and agents in games, museums and virtual worlds.

Table of Contents

Frontmatter

Emotion and Empathy

The Relation between Gaze Behavior and the Attribution of Emotion: An Empirical Study

Real-time virtual humans are less believable than hand-animated characters, particularly in the way they perform gaze. In this paper, we provide the results of an empirical study that explores an observer’s attribution of emotional state to gaze. We have taken a set of low-level gaze behaviors culled from the nonverbal behavior literature; combined these behaviors based on a dimensional model of emotion; and then generated animations of these behaviors using our gaze model based on the Gaze Warping Transformation (GWT) [9], [10]. Then, subjects judged the animations displaying these behaviors. The results, while preliminary, demonstrate that the emotional state attributed to gaze behaviors can be predicted using a dimensional model of emotion; and show the utility of the GWT gaze model in performing bottom-up behavior studies.

Brent Lance, Stacy C. Marsella
Affect Simulation with Primary and Secondary Emotions

In this paper the WASABI Affect Simulation Architecture is introduced, in which a virtual human’s cognitive reasoning capabilities are combined with simulated embodiment to achieve the simulation of primary and secondary emotions. In modeling primary emotions we follow the idea of “Core Affect” in combination with a continuous progression of bodily feeling in three-dimensional emotion space (PAD space), which is only subsequently categorized into discrete emotions. In humans, primary emotions are understood as ontogenetically earlier emotions, which directly influence facial expressions. Secondary emotions, in contrast, afford the ability to reason about current events in the light of experiences and expectations. By technically representing aspects of their connotative meaning in PAD space, we not only assure their mood-congruent elicitation, but also combine them with facial expressions, which are concurrently driven by the primary emotions. An empirical study showed that human players in the Skip-Bo scenario judge our virtual human MAX to be significantly older when secondary emotions are simulated in addition to primary ones.

Christian Becker-Asano, Ipke Wachsmuth
User Study of AffectIM, an Emotionally Intelligent Instant Messaging System

Our research addresses the tasks of recognition, interpretation and visualization of affect communicated through text messaging. In order to facilitate sensitive and expressive interaction in computer-mediated communication, we previously introduced a novel syntactical rule-based approach to affect recognition from text. The evaluation of the developed Affect Analysis Model showed promising results regarding its capability to accurately recognize affective information in text from an existing corpus of informal online conversations. To enrich the user’s experience in online communication and make it enjoyable, exciting and fun, we implemented a web-based IM application, AffectIM, and endowed it with emotional intelligence by integrating the developed Affect Analysis Model. This paper describes the findings of a twenty-person study conducted with our AffectIM system. The results of the study indicated that the automatic emotion recognition function can bring a high level of affective intelligence to an IM application.

Alena Neviarouskaya, Helmut Prendinger, Mitsuru Ishizuka
Expressions of Empathy in ECAs

Recent research has shown that empathic virtual agents can improve human-machine interaction. A virtual agent’s expressions of empathy are generally chosen intuitively and are not evaluated. In this paper, we propose a novel approach to expressing empathy using complex facial expressions such as superposition and masking. An evaluation study has been conducted to identify the most appropriate way to express empathy. According to the evaluation results, people find facial expressions that contain elements of the empathic emotion more suitable. In particular, complex facial expressions seem to be a good approach to expressing empathy.

Radoslaw Niewiadomski, Magalie Ochs, Catherine Pelachaud

Narrative and Augmented Reality

Archetype-Driven Character Dialogue Generation for Interactive Narrative

Recent years have seen a growing interest in creating virtual agents to populate the cast of characters for interactive narrative. A key challenge posed by interactive characters for narrative environments is devising expressive dialogue generators. To be effective, character dialogue generators must be able to simultaneously take into account multiple sources of information that bear on dialogue, including character attributes, plot development, and communicative goals. Building on the narrative theory of character archetypes, we propose an archetype-driven character dialogue generator that uses a probabilistic unification framework to generate dialogue motivated by character personality and narrative history to achieve communicative goals. The generator’s behavior is illustrated with character dialogue generation in a narrative-centered learning environment, Crystal Island.

Jonathan P. Rowe, Eun Young Ha, James C. Lester
Towards a Narrative Mind: The Creation of Coherent Life Stories for Believable Virtual Agents

This paper describes an approach to create coherent life stories for Intelligent Virtual Agents (IVAs) in order to achieve long-term believability. We integrate a computational autobiographic memory, which allows agents to remember significant past experiences and reconstruct their life stories from these experiences, into an emotion-driven planning architecture. Starting from the literature review on episodic memory modelling and narrative agents, we discuss design considerations for believable agents which interact with users repeatedly and over a long period of time. In the main body of the paper we present the narrative structure of human life stories. Based on this, we incorporate three essential discourse units and other characteristics into the design of the autobiographic memory structure. We outline part of the implementation of this memory architecture and describe the plan for evaluating the architecture in long-term user studies.

Wan Ching Ho, Kerstin Dautenhahn
Emergent Narrative as a Novel Framework for Massively Collaborative Authoring

An emergent narrative is a narrative that is dynamically created through the interactions of autonomous intelligent virtual agents and the user. Authoring in such a system means programming characters rather than defining plot, and can be a technically and conceptually challenging task. We are currently implementing a tool that helps the author in this task by training the characters through demonstration of example story lines (rehearsals), rather than explicit programming. In this paper we argue that this tool is best used by a group of authors, each providing an example story, and that in order to achieve true emergence, collective authoring is required. We compare the rehearsal-based authoring method of our authoring tool with other collaborative authoring efforts and underline why both the storytelling medium “emergent narrative” and our particular approach to authoring are better suited for massively collaborative authoring.

Michael Kriegel, Ruth Aylett
Towards Real-Time Authoring of Believable Agents in Interactive Narrative

In this paper we present an authoring tool called Narratoria that allows non-technical experts in the field of digital entertainment to create interactive narratives with 3D graphics and multimedia. Narratoria allows experts in digital entertainment to participate in the generation of story-based military training applications. Users of the tools can create story-arcs, screenplays, pedagogical goals and AI models using a single software application. Using commercial game engines, which provide direct visual output in a real-time feedback-loop, authors can view the final product as they edit.

Martin van Velsen
Male Bodily Responses during an Interaction with a Virtual Woman

This work presents the analysis of the body movement of male participants, while talking with a life-size virtual woman in a virtual social encounter within a CAVE-like system. We consider independent and explanatory variables including whether the participant is the centre of attention in the scenario, whether the participant is shy or confident, and his relationship status. We also examine whether this interaction between the participant and the virtual character changes as the conversation progresses. The results show that the participants tend to have different hand movements, head movements, and posture depending on these conditions. This research therefore provides strong evidence for using body movement as a systematic method to assess the responses of people within a virtual environment, especially when the participant interacts with a virtual character. These results also point the way towards the application of this technology to the treatment of social phobic males.

Xueni Pan, Marco Gillies, Mel Slater
A Virtual Agent for a Cooking Navigation System Using Augmented Reality

In this paper, we propose a virtual agent for a cooking navigation system that uses ubiquitous sensors. The cooking navigation system can recognize the progress of cooking and show content appropriate to the situation. Most cooking navigation systems show only movies and text. However, the cooking recognition system makes it possible to realize a virtual agent that performs helpful actions suited to the user’s behavior. We implemented the virtual agent and demonstrated the cooking navigation system including the agent.

Kenzaburo Miyawaki, Mutsuo Sano

Conversation and Negotiation

Social Perception and Steering for Online Avatars

This paper presents work on a new platform for producing realistic group conversation dynamics in shared virtual environments. An avatar representing a user should perceive the surrounding social environment just as a human would, and use that perceptual information to drive low-level reactive behaviors. Unconscious reactions serve as evidence of life, and can also signal social availability and spatial awareness to others. These behaviors get lost when avatar locomotion requires explicit user control. To automate such behaviors we propose a steering layer in the avatars that manages a set of prioritized behaviors executed at different frequencies, which can be activated, deactivated, and combined. This approach gives us enough flexibility to model the group dynamics of social interactions as a set of social norms that activate the relevant steering behaviors. A basic set of behaviors is described for conversations, some of which generate a social force field that makes the formation of conversation groups fluidly adapt to external and internal noise through avatar repositioning and reorientation. The resulting social group behavior appears relatively robust, but perhaps more importantly, it starts to bring a new sense of relevance and continuity to the virtual bodies that often get separated from the ongoing conversation in the chat window.

Claudio Pedica, Hannes Vilhjálmsson
Multi-party, Multi-issue, Multi-strategy Negotiation for Multi-modal Virtual Agents

We present a model of negotiation for virtual agents that extends previous work to be more human-like and applicable to a broader range of situations, including more than two negotiators with different goals, and negotiating over multiple options. The agents can dynamically change their negotiating strategies based on the current values of several parameters and factors that can be updated in the course of the negotiation. We have implemented this model and done preliminary evaluation within a prototype training system and a three-party negotiation with two virtual humans and one human.

David Traum, Stacy C. Marsella, Jonathan Gratch, Jina Lee, Arno Hartholt
A Granular Architecture for Dynamic Realtime Dialogue

We present a dialogue architecture that addresses perception, planning and execution of multimodal dialogue behavior. Motivated by realtime human performance and modular architectural principles, the architecture is full-duplex (“open-mic”); prosody is continuously analyzed and used for mixed-control turntaking behaviors (reactive and deliberative) and incremental utterance production. The architecture is fine-grain and highly expandable; we are currently applying it in more complex multimodal interaction and dynamic task environments. We describe here the theoretical underpinnings behind the architecture, compare it to prior efforts, discuss the methodology and give a brief overview of its current runtime characteristics.

Kristinn R. Thórisson, Gudny Ragna Jonsdottir
Generating Dialogues for Virtual Agents Using Nested Textual Coherence Relations

This paper describes recent advances on the Text2Dialogue system we are currently developing. Our system enables automatic transformation of monological text into a dialogue. The dialogue is then “acted out” by virtual agents, using synthetic speech and gestures. In this paper, we focus on the monologue-to-dialogue transformation, and describe how it uses textual coherence relations to map text segments to query–answer pairs between an expert and a layman agent. By creating mapping rules for a few well-selected relations, we can produce coherent dialogues with proper assignment of turns for the speakers in a majority of cases.

Hugo Hernault, Paul Piwek, Helmut Prendinger, Mitsuru Ishizuka
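The relation-to-dialogue mapping described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the Text2Dialogue implementation: the expert/layman roles follow the abstract, while the relation names and rule bodies are assumptions.

```python
# Hypothetical sketch: map a nucleus-satellite coherence relation to an
# expert/layman question-answer exchange. Rule bodies are illustrative.
RULES = {
    "condition": lambda nucleus, satellite: [
        ("Layman", f"What happens if {satellite.rstrip('.')}?"),
        ("Expert", nucleus),
    ],
    "elaboration": lambda nucleus, satellite: [
        ("Layman", "Can you tell me more about that?"),
        ("Expert", satellite),
    ],
}

def to_dialogue(relation, nucleus, satellite):
    """Turn one coherence relation into a list of (speaker, utterance) turns."""
    rule = RULES.get(relation)
    if rule is None:
        # No mapping rule: the expert simply narrates both segments.
        return [("Expert", nucleus), ("Expert", satellite)]
    return rule(nucleus, satellite)
```

Nesting such rules over a discourse tree would yield a full dialogue with alternating turns, in the spirit of the system described above.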
Integrating Planning and Dialogue in a Lifestyle Agent

In this paper, we describe an Embodied Conversational Agent advising users to promote a healthier lifestyle. This embodied agent provides advice on everyday user activities, in order to promote a healthy lifestyle. It operates by generating user activity models (similar to decompositional task models), using a Hierarchical Task Network (HTN) planner. These activity models are refined through various cycles of planning and dialogue, during which the agent suggests possible activities to the user, and the user expresses her preferences in return. A first prototype has been fully implemented (as a spoken dialogue system) and tested with 20 subjects. Early results show a high level of task completion despite the word error rate, and further potential for improvement.

Cameron Smith, Marc Cavazza, Daniel Charlton, Li Zhang, Markku Turunen, Jaakko Hakulinen
Audio Analysis of Human/Virtual-Human Interaction

The audio of the spoken dialogue between a human and a virtual human (VH) is analyzed to explore the impact of H-VH interaction. The goal is to determine if conversing with a VH can elicit detectable and systematic vocal changes. To study this topic, we examined the H-VH scenario of pharmacy students speaking with immersive VHs playing the role of patients. The audio analysis focused on the students’ reactions to scripted empathetic challenges designed to generate an unrehearsed affective response from the human. The responses were analyzed with software developed to analyze vocal patterns during H-VH conversations. The analysis discovered vocal changes that were consistent across participant groups and correlated with known H-H conversation patterns.

Harold Rodriguez, Diane Beck, David Lind, Benjamin Lok

Nonverbal Behavior

Learning Smooth, Human-Like Turntaking in Realtime Dialogue

Giving synthetic agents human-like realtime turntaking skills is a challenging task. Attempts have been made to manually construct such skills, with systematic categorization of silences, prosody and other candidate turn-giving signals, and to use analysis of corpora to produce static decision trees for this purpose. However, for general-purpose turntaking skills which vary between individuals and cultures, a system that can learn them on-the-job would be best. We are exploring ways to use machine learning to have an agent learn proper turntaking during interaction. We have implemented a talking agent that continuously adjusts its turntaking behavior to its interlocutors based on realtime analysis of the other party’s prosody. Initial results from experiments on collaborative, content-free dialogue show that, for a given subset of turn-taking conditions, our modular reinforcement learning techniques allow the system to learn to take turns in an efficient, human-like manner.

Gudny Ragna Jonsdottir, Kristinn R. Thorisson, Eric Nivel
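The on-the-job learning idea above lends itself to a minimal bandit-style sketch: treat each turn exchange as one episode, observe a coarse prosody state, pick a pause duration, and reward smooth exchanges. The state and action sets and all names here are assumptions; the actual system uses modular reinforcement learning over realtime prosody analysis.

```python
import random

# Assumed discretization: coarse prosody states at the end of the
# interlocutor's speech, and candidate pause durations (ms) as actions.
STATES = ["falling_pitch", "flat_pitch", "rising_pitch"]
ACTIONS = [0, 200, 500, 1000]

def make_q():
    """Initialize the action-value table."""
    return {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(q, state, epsilon=0.1):
    """Epsilon-greedy choice of how long to wait before taking the turn."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def update(q, state, action, reward, alpha=0.3):
    """Bandit-style update: each turn exchange is scored as one episode
    (e.g., reward 1.0 for a smooth exchange, -1.0 for overlap or a long gap)."""
    q[(state, action)] += alpha * (reward - q[(state, action)])
```

Because learning happens during interaction, such an agent can keep adapting its pause lengths to a particular interlocutor, which is the point the abstract makes about individual and cultural variation.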
Predicting Listener Backchannels: A Probabilistic Multimodal Approach

During face-to-face interactions, listeners use backchannel feedback such as head nods as a signal to the speaker that the communication is working and that they should continue speaking. Predicting these backchannel opportunities is an important milestone for building engaging and natural virtual humans. In this paper we show how sequential probabilistic models (e.g., Hidden Markov Model or Conditional Random Fields) can automatically learn from a database of human-to-human interactions to predict listener backchannels using the speaker multimodal output features (e.g., prosody, spoken words and eye gaze). The main challenges addressed in this paper are automatic selection of the relevant features and optimal feature representation for probabilistic models. For prediction of visual backchannel cues (i.e., head nods), our prediction model shows a statistically significant improvement over a previously published approach based on hand-crafted rules.

Louis-Philippe Morency, Iwan de Kok, Jonathan Gratch
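A drastically simplified, count-based stand-in for such a prediction model might look as follows. The paper itself uses sequential probabilistic models (HMMs or CRFs) over multimodal features; the feature names and thresholding here are assumptions for illustration only.

```python
from collections import Counter, defaultdict

def train(frames):
    """frames: iterable of (speaker_feature, listener_nodded) pairs taken
    from an annotated human-human corpus."""
    counts = defaultdict(Counter)
    for feature, nodded in frames:
        counts[feature][nodded] += 1
    return counts

def p_backchannel(counts, feature):
    """Empirical probability of a listener head nod given a speaker feature."""
    c = counts[feature]
    total = c[True] + c[False]
    return c[True] / total if total else 0.0

def predict(counts, feature, threshold=0.5):
    """Flag a backchannel opportunity when the estimate crosses a threshold."""
    return p_backchannel(counts, feature) >= threshold
```

A sequential model improves on this by conditioning on the history of features rather than a single frame, which is exactly where the paper's feature selection and representation questions arise.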
IGaze: Studying Reactive Gaze Behavior in Semi-immersive Human-Avatar Interactions

We present IGaze, a semi-immersive human-avatar interaction system. Using head tracking and an illusionistic 3D effect we let users interact with a talking avatar in an application interview scenario. The avatar features reactive gaze behavior that adapts to the user position according to exchangeable gaze strategies. In user studies we showed that two gaze strategies successfully convey the intended impression of dominance/submission and that the 3D effect was positively received. We argue that IGaze is a suitable setup for exploring reactive nonverbal behavior synthesis in human-avatar interactions.

Michael Kipp, Patrick Gebhard
Estimating User’s Conversational Engagement Based on Gaze Behaviors

In face-to-face conversations, speakers are continuously checking whether the listener is engaged in the conversation. When the listener is not fully engaged in the conversation, the speaker changes the conversational contents or strategies. With the goal of building a conversational agent that can control conversations with the user in such an adaptive way, this study analyzes the user’s gaze behaviors and proposes a method for predicting whether the user is engaged in the conversation based on gaze transition 3-Gram patterns. First, we conducted a Wizard-of-Oz experiment to collect the user’s gaze behaviors as well as the user’s subjective reports and an observer’s judgment concerning the user’s interest in the conversation. Next, we proposed an engagement estimation algorithm that estimates the user’s degree of engagement from gaze transition patterns. This method takes account of individual differences in gaze patterns. The algorithm is implemented as a real-time engagement-judgment mechanism, and the results of our evaluation experiment showed that our method can predict the user’s conversational engagement quite well.

Ryo Ishii, Yukiko I. Nakano
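The 3-gram idea can be illustrated as follows. This is a hypothetical sketch: the gaze-target symbols and the rule for flagging a trigram as disengaged are assumptions, and the paper's method additionally accounts for individual differences in gaze patterns.

```python
from collections import Counter

# Assumed gaze-target symbols: "A" = agent's face, "O" = shared object,
# "E" = elsewhere. The disengagement rule below is purely illustrative.
def gaze_trigrams(sequence):
    """Count all runs of three consecutive gaze targets."""
    return Counter(tuple(sequence[i:i + 3]) for i in range(len(sequence) - 2))

def engagement_score(sequence, away="E"):
    """Score in [0, 1]: fraction of trigrams NOT dominated by looking away."""
    grams = gaze_trigrams(sequence)
    total = sum(grams.values())
    if total == 0:
        return 1.0  # too little data to flag disengagement
    bad = sum(n for g, n in grams.items() if g.count(away) >= 2)
    return 1.0 - bad / total
```

A real-time judgment mechanism of the kind described would compute this score over a sliding window and let the agent change conversational strategy when it drops.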
The Effects of Agent Nonverbal Communication on Procedural and Attitudinal Learning Outcomes

This experimental study investigated the differential effects of pedagogical agent nonverbal communication on attitudinal and procedural learning. A 2x2x2 factorial design was employed with 237 participants to investigate the effect of type of instruction (procedural, attitudinal), deictic gesture (presence, absence), and facial expression (presence, absence) on learner attitudes, agent perception (agent persona, gesture, facial expression), and learning. Results indicated that facial expressions were particularly valuable for attitudinal learning, and were actually detrimental for procedural learning. Similarly, gestures were perceived as more valuable for students in the procedural module, even though they did not directly enhance recall.

Amy L. Baylor, Soyoung Kim
Evaluating Data-Driven Style Transformation for Gesturing Embodied Agents

This paper presents an empirical evaluation of a method called “Style transformation” which consists of modifying an existing gesture sequence in order to obtain a new style where the transformation parameters have been extracted from an existing captured sequence. This data-driven method can be used either to enhance key-framed gesture animations or to taint captured motion sequences according to a desired style.

Alexis Heloir, Michael Kipp, Sylvie Gibet, Nicolas Courty

Models of Culture and Personality

Culture-Specific First Meeting Encounters between Virtual Agents

We present our concept of integrating culture as a computational parameter for modeling multimodal interactions with virtual agents. As culture is a social rather than a psychological notion, its influence is evident in interactions where cultural patterns of behavior and interpretation mismatch. Nevertheless, if culture is taken seriously, its influence penetrates most layers of agent behavior planning and generation. In this article we concentrate on a first-meeting scenario, present our model of an interactive agent system, and identify where cultural parameters play a role. To assess the viability of our approach, we outline an evaluation study that is currently being set up.

Matthias Rehm, Yukiko Nakano, Elisabeth André, Toyoaki Nishida
Virtual Humans Elicit Skin-Tone Bias Consistent with Real-World Skin-Tone Biases

In this paper, we present results from a study showing that a dark skin-tone VH agent elicits user behavior consistent with real-world skin-tone biases. Results from a study with medical students (n = 21) show that participant empathy towards a dark skin-tone VH patient was predicted by their measured bias towards African-Americans. Real-world bias was measured using a validated psychological instrument called the implicit association test (IAT). Scores on the IAT were significantly correlated with coders’ ratings of participant empathy. This result indicates that VHs elicit realistic responses and could become an important component in cultural diversity training.

Brent Rossen, Kyle Johnsen, Adeline Deladisma, Scott Lind, Benjamin Lok
Cross-Cultural Evaluations of Avatar Facial Expressions Designed by Western Designers

The goal of this study is to investigate cultural differences in the evaluation of avatar facial expressions and to apply findings from psychological studies of human facial expression recognition. Our previous study using Japanese-designed avatars showed that there are cultural differences in interpreting avatar facial expressions, and that the psychological theory suggesting physical proximity affects facial expression recognition accuracy is also applicable to avatar facial expressions. This paper summarizes the early results of the follow-up experiment using Western-designed avatars. We observed tendencies of cultural differences in the interpretation of avatar facial expressions for Western-designed avatars as well.

Tomoko Koda, Matthias Rehm, Elisabeth André
Agreeable People Like Agreeable Virtual Humans

This study explored associations between the five-factor personality traits of human subjects and their feelings of rapport when they interacted with a virtual agent or with real humans. The agent, the Rapport Agent, responded to real human speakers’ storytelling behavior using only nonverbal contingent (i.e., timely) feedback. We further investigated how interactants’ personalities were related to the three components of rapport: positivity, attentiveness, and coordination. The results revealed that more agreeable people showed strong self-reported rapport and weak behaviorally measured rapport in the disfluency dimension when they interacted with the Rapport Agent, while showing no significant associations between agreeableness and self-reported rapport, or between agreeableness and the disfluency dimension, when they interacted with real humans. These conclusions provide fundamental data for further developing a rapport theory that would contribute to evaluating and enhancing the interactional fidelity of agents, informing the design of virtual humans for social skills training and therapy.

Sin-Hwa Kang, Jonathan Gratch, Ning Wang, James H. Watt
A Listening Agent Exhibiting Variable Behaviour

Within the Sensitive Artificial Listening Agent project, we propose a system that computes the behaviour of a listening agent. Such an agent must exhibit behaviour variations depending not only on its mental state towards the interaction (e.g., whether or not it agrees with the speaker) but also on the agent’s characteristics, such as its emotional traits and behaviour style. Our system computes the behaviour of the listening agent in real-time.

Elisabetta Bevacqua, Maurizio Mancini, Catherine Pelachaud

Markup and Representation Languages

The Next Step towards a Function Markup Language

In order to enable collaboration and exchange of modules for generating multimodal communicative behaviours of robots and virtual agents, the SAIBA initiative envisions the definition of two representation languages. One of these is the Function Markup Language (FML). This language specifies the communicative intent behind an agent’s behaviour. Currently, several research groups have contributed to the discussion on the definition of FML. The discussion reveals agreement on many points but it also points out important issues that need to be dealt with. This paper summarises the current state of affairs in thinking about FML.

Dirk Heylen, Stefan Kopp, Stacy C. Marsella, Catherine Pelachaud, Hannes Vilhjálmsson
Extending MPML3D to Second Life

This paper describes an approach to integrating virtual agents into the 3D multi-user online world Second Life. For this purpose we have implemented new client software for Second Life that controls virtual agents (“bots”) and uses the Multimodal Presentation Markup Language 3D (MPML3D) to define their behavior. The technical merits and limitations of Second Life are discussed and solutions are provided. A multi-user scenario serves as an example to illustrate our solutions to technical challenges and the advantages of using the virtual environment of Second Life.

Sebastian Ullrich, Klaus Bruegmann, Helmut Prendinger, Mitsuru Ishizuka
An Extension of MPML with Emotion Recognition Functions Attached

In this paper, we discuss our research on the Multimodal Interaction Markup Language (MIML), an extension of the Multimodal Presentation Markup Language (MPML). Unlike MPML, MIML can describe not only the presentations of lifelike agents, but also their emotion detection capability, with facial expression recognition and speech emotion recognition functions attached. The emotional control of lifelike agents provided by MIML makes human-agent interaction more intelligent. With MIML and the functions we designed, web-based affective interaction can be described and generated easily.

Xia Mao, Zheng Li, Haiyan Bao

Architectures for Robotic Agents

ITACO: Effects to Interactions by Relationships between Humans and Artifacts

Our purpose in this paper is to realize natural interaction between humans and artifacts using the ITACO system. The ITACO system can construct a relationship between humans and artifacts through a migratable agent. The agent in the ITACO system can migrate to various artifacts within an environment to construct a relationship with humans. We conducted two experiments to confirm the effects on humans of a relationship constructed between humans and artifacts. The experimental results showed that the relationship influenced humans’ behaviors and cognitive abilities. The results also showed that the ITACO system and the migratable agent are an effective means of realizing natural interaction between humans and artifacts.

Kohei Ogawa, Tetsuo Ono
Teaching a Pet Robot through Virtual Games

In this paper, we present a human-robot teaching framework that uses “virtual” games as a means for adapting a robot to its user through natural interaction in a controlled environment. We present an experimental study in which participants instruct an AIBO pet robot while playing different games together on a computer generated playfield. By playing the games in cooperation with its user, the robot learns to understand the user’s natural way of giving multimodal positive and negative feedback. The games are designed in a way that the robot can reliably anticipate positive or negative feedback based on the game state and freely explore its user’s reward behavior by making good or bad moves. We implemented a two-staged learning method combining Hidden Markov Models and a mathematical model of classical conditioning to learn how to discriminate between positive and negative feedback. After finishing the training the system was able to recognize positive and negative reward based on speech and touch with an average accuracy of 90.33%.

Anja Austermann, Seiji Yamada
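The classical-conditioning stage can be sketched with a Rescorla-Wagner-style update, the standard mathematical model of classical conditioning. This is an illustration under assumed names, not the authors' exact model: stimuli that co-occur with moves the robot knows to be good drift toward a positive association, and vice versa.

```python
def rescorla_wagner(assoc, stimuli, outcome, alpha=0.2):
    """One conditioning trial.

    assoc:   dict stimulus -> association strength (roughly in [-1, 1])
    stimuli: stimuli observed on this trial (e.g., an utterance class from
             the HMM stage, or a touch pattern) -- names are assumptions
    outcome: +1 when the robot's move was good (praise expected),
             -1 when it was bad (scolding expected)
    """
    prediction = sum(assoc.get(s, 0.0) for s in stimuli)
    error = outcome - prediction  # surprise drives learning
    for s in stimuli:
        assoc[s] = assoc.get(s, 0.0) + alpha * error
    return assoc
```

After enough trials, thresholding the association strength classifies a newly observed stimulus as positive or negative feedback, which is the discrimination task the abstract reports at 90.33% accuracy.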

Cognitive Architectures

Modeling Self-deception within a Decision-Theoretic Framework

Computational modeling of human belief maintenance and decision-making processes has become increasingly important for a wide range of applications. In this paper, we present a framework for modeling the human capacity for self-deception from a decision-theoretic perspective in which we describe processes for determining a desired belief state, the biasing of internal beliefs towards the desired belief state, and the actual decision-making process based upon the integrated biases. Furthermore, we show that in some situations self-deception can be beneficial.

Jonathan Y. Ito, David V. Pynadath, Stacy C. Marsella
Modeling Appraisal in Theory of Mind Reasoning

Cognitive appraisal theories, which link human emotional experience to their interpretations of events happening in the environment, are leading approaches to model emotions. In this paper, we investigate the computational modeling of appraisal in a multi-agent decision-theoretic framework using POMDP based agents. We illustrate how five key appraisal dimensions (motivational relevance, motivation congruence, accountability, control and novelty) can be derived from the processes and information required for the agent’s decision-making and belief maintenance. Through this illustration, we not only provide a solution for computationally modeling emotion in POMDP based agents, but also demonstrate the tight relationship between emotion and cognition. Our model of appraisal is applied to three different scenarios to illustrate its usage. We also discuss how the modeling of theory of mind (recursive beliefs about self and others) is critical for simulating social emotions.

Mei Si, Stacy C. Marsella, David V. Pynadath
Improving Adaptiveness in Autonomous Characters

Much research has been carried out to build emotion regulation models for autonomous agents that can create suspension of disbelief in human audiences or users. However, most models to date concentrate on either the physiological aspect or the cognitive aspect of emotion. In this paper, an architecture that balances the physiological and cognitive dimensions for the creation of life-like autonomous agents is proposed. The resulting architecture will be employed in ORIENT, which is part of the EU-FP6 project eCircus. An explanation of the existing architecture, FAtiMA, focusing on its benefits and flaws, is provided. This is followed by a description of the proposed architecture, which combines FAtiMA and the PSI motivational system. Some inspiring work is also reviewed. Finally, a conclusion and directions for future work are given.

Mei Yii Lim, João Dias, Ruth Aylett, Ana Paiva
The Embodiment of a DUAL/AMBR Based Cognitive Model in the RASCALLI Multi-agent Platform

The current paper explores some of the cognitive abilities of an embodied agent based on the DUAL/AMBR architecture. The model has been extended with several capabilities such as continuous operation and learning based on the encoding of episodes and general knowledge. A new mechanism of analogical transfer has been implemented and demonstrated on a simulated interaction with a user. A focus of interest discussed throughout the paper is how a cognitive model can be embodied in a virtual environment and what the benefits are of combining 'soft' cognitive capabilities with a hard-AI-based platform. The latter is based on a Mind-Body-Environment metaphor which positions the cognitive agent in a situation similar to that of a robot in a real environment. In the paper, results from simulations of simple interactions with a hypothetical user are presented and the internal cognitive mechanisms are discussed.

Stefan Kostadinov, Maurice Grinberg
BDI Model-Based Crowd Simulation

We present a new approach for crowd simulation in which a BDI (Belief-Desire-Intention) model is introduced that makes it possible for a character in a simulated environment to act adaptively. Our approach allows the character to realize realistic behavior by adapting its actions to the sensed information in a changing environment. We implemented a demo system simulating BDI model-based NPCs that extinguish a forest fire, using a 3D game engine, the Source Engine. We measured the performance to evaluate the scalability and the bottlenecks of the system.

Kenta Cho, Naoki Iketani, Masaaki Kikuchi, Keisuke Nishimura, Hisashi Hayashi, Masanori Hattori
The Mood and Memory of Believable Adaptable Socially Intelligent Characters

In this paper a computational model for believable adaptable characters is presented, which takes into account several psychological theories: the five-factor model of personality [1], the pleasure-arousal-dominance model [2], and social cognitive factors [3]. The model processes emotionally coded events as input; alters the character's mood, the memory associated with the set of entities connected with the event, and, in the long run, the personality; and produces an immediate emotional reaction in the character, which might or might not be displayed according to the social cognitive factors, the goal of the character, and the environment in which the event is taking place.

Mark Burkitt, Daniela M. Romano

Agents for Healthcare and Training

Visualizing the Importance of Medical Recommendations with Conversational Agents

Embodied Conversational Agents (ECA) have the potential to bring to life many kinds of information, and in particular textual content. In this paper, we present a prototype that helps visualize the relative importance of sentences extracted from medical texts (clinical guidelines aimed at physicians). We propose to map rhetorical structures automatically recognized in the documents to a set of communicative acts controlling the expression of the ECA. As a consequence, the ECA will dramatize a sentence to reflect its perceived importance and degree of recommendation (advice, requirement, open proposal, etc.). This prototype consists of three sub-systems: i) a text analysis module, ii) an ECA, and iii) a mapping module which converts rhetorical structures produced by the text analysis module into nonverbal behaviors driving the ECA animation. This system could help authors of medical texts to reflect on the potential impact of the writing style they have adopted. The use of an ECA re-introduces an affective element which will not be captured by other methods for analyzing document style.

Gersende Georg, Marc Cavazza, Catherine Pelachaud
Evaluation of Justina: A Virtual Patient with PTSD

Recent research has established the potential for virtual characters to act as virtual standardized patients (VPs) for the assessment and training of novice clinicians. We hypothesize that the responses of a VP simulating Post Traumatic Stress Disorder (PTSD) in an adolescent female could elicit a number of diagnostic mental health specific questions (from novice clinicians) that are necessary for differential diagnosis of the condition. Composites were developed to reflect the relation between novice clinician questions and VP responses. The primary goal of this study was evaluative: can a VP generate responses that elicit user questions relevant for PTSD categorization? A secondary goal was to investigate the impact of psychological variables upon the resulting VP Question/Response composites and the overall believability of the system.

Patrick Kenny, Thomas D. Parsons, Jonathan Gratch, Albert A. Rizzo
Elbows Higher! Performing, Observing and Correcting Exercises by a Virtual Trainer

In the framework of our Reactive Virtual Trainer (RVT) project, we are developing an Intelligent Virtual Agent (IVA) capable of acting similarly to a real trainer. Besides presenting the physical exercises to be performed, she keeps an eye on the user. She provides feedback whenever appropriate: to introduce and structure the exercises, to make sure that the exercises are performed correctly, and also to motivate the user. In this paper we describe the corpora we collected, which serve as a basis for modeling repetitive exercises at a high level. We then discuss in detail how the actual performance of the user is compared to what he should be doing, what strategy is used to provide feedback, and how the feedback is generated. We provide preliminary feedback from users and outline further work.

Zsófia Ruttkay, Herwin van Welbergen
A Virtual Therapist That Responds Empathically to Your Answers

Previous research indicates that self-help therapy is an effective method to prevent and treat unipolar depression. While web-based self-help therapy has many advantages, there are also disadvantages to self-help therapy, such as that it misses the possibility to regard the body language of the user, and the lack of personal feedback on the user responses. This study presents a virtual agent that guides the user through the Beck Depression Inventory (BDI) questionnaire, which is used to measure the severity of depression. The agent responds empathically to the answers given by the user, by changing its facial expression. This resembles face to face therapy more than existing web-based self-help therapies. A pilot experiment indicates that the virtual agent has added value for this application.

Matthijs Pontier, Ghazanfar F. Siddiqui

Agents in Games, Museums and Virtual Worlds

IDEAS4Games: Building Expressive Virtual Characters for Computer Games

In this paper we present two virtual characters in an interactive poker game using RFID-tagged poker cards for the interaction. To support the game creation process, we have combined models, methods, and technology that are currently investigated in the ECA research field in a unique way. A powerful and easy-to-use multimodal dialog authoring tool is used for the modeling of game content and interaction. The poker characters rely on a sophisticated model of affect and a state-of-the art speech synthesizer. During the game, the characters show a consistent expressive behavior that reflects the individually simulated affect in speech and animations. As a result, users are provided with an engaging interactive poker experience.

Patrick Gebhard, Marc Schröder, Marcela Charfuelan, Christoph Endres, Michael Kipp, Sathish Pammi, Martin Rumpler, Oytun Türk
Context-Aware Agents to Guide Visitors in Museums

This paper presents an agent-based system for building and operating context-aware services in public museums. Using RFID tags or location sensors, the system detects the locations of users and deploys user-assistant agents at computers near their current locations. When users move between exhibits in a museum, this enables agents to follow them, annotate the exhibits in personalized form, and navigate them to the next exhibits along their routes. It provides users with intelligent virtual agents in the real world and enables them to easily interact with their agents through movement between physical places. To demonstrate the utility and effectiveness of the system, we constructed and operated location/user-aware visitor-guide services in a science museum as a case study in our development of agent-based ambient computing in wide public spaces.

Ichiro Satoh
Virtual Institutions: Normative Environments Facilitating Imitation Learning in Virtual Agents

The two most popular methods of extending the intelligence of virtual agents are explicit programming of the agents' decision-making apparatus and learning agent behaviors from humans or other agents. The major obstacles of the existing approaches are making the agent understand the environment it is situated in and interpreting the actions and goals of other participants. Instead of trying to solve these problems, we propose to formalize the environment in a way that minimizes them. The proposed solution, called Virtual Institutions, facilitates the formalization of participants' interactions inside Virtual Worlds, helping the agent to interpret the actions of other participants, understand its options, and determine the goals of the principal that is conducting the training of the agent. Such formalization creates facilities to express the principal's goals during training, as well as establishing a way to communicate the desires of the human to the agent once the training is completed.

Anton Bogdanovych, Simeon Simoff, Marc Esteva

Posters

Enculturating Conversational Agents Based on a Comparative Corpus Study

When encountering people who have a different cultural background from our own, many of us feel uncomfortable because their gestures and facial expressions may not be familiar to us. Thus, to enhance the believability of conversational agents, culture-specific nonverbal behaviors should be implemented in the agents. In our previous study [1], with the goal of building a user interface that incorporates a user's cultural background, we collected a comparative conversation corpus in Germany and Japan, and investigated the differences in gestures and posture shifts between these two countries. Based on [1], this paper reports a more detailed analysis of posture shifts, and proposes a chat system with an embodied conversational agent (ECA) that can act as a language trainer.

Afia Akhter Lipi, Yuji Yamaoka, Matthias Rehm, Yukiko I. Nakano
The Reactive-Causal Architecture: Towards Development of Believable Agents

To develop believable agents, agents should be highly autonomous, situated, flexible, and emotional. In this study, our aim is to present an agent architecture called the Reactive-Causal Architecture that supports the development of believable agents.

Ali Orhan Aydın, Mehmet Ali Orgun, Abhaya Nayak
Gesture Recognition in Flow in the Context of Virtual Theater

Our aim is to put on a short play featuring a real actor and a virtual actor, who will communicate through movements and choreography with mutual synchronization. Although the theatrical context is a good ground for multimodal communication between human and virtual actors, this paper will mainly deal with the ability to perceive the gestures of a real actor. During the performance, our goal is to match real-time observation with recorded examples. The recognition event will be sent to a virtual actor, which replies with its own movements. We follow an approach similar to that proposed by [1] or [2]. They made the assumption that the first phase is to summarize the variation of the movement into a symbolic description. The recognition phase is then performed when a new trajectory is consistent with this description. The main section will describe our method of creating a signature from gestural data and the recognition system operating on a real-time stream. To conclude, we shall present some results.

Ronan Billon, Alexis Nédélec, Jacques Tisseau
Automatic Generation of Conversational Behavior for Multiple Embodied Virtual Characters: The Rules and Models behind Our System

In this paper we present the rules and algorithms we use to automatically generate non-verbal behavior such as gestures and gaze for two embodied virtual agents. They allow us to transform a dialogue in text format into an agent behavior script enriched by eye gaze and conversational gesture behavior. The agents' gaze behavior is informed by theories of human face-to-face gaze behavior. Gestures are generated based on the analysis of linguistic and contextual information in the input text. Since all behaviors are generated automatically, our system offers content creators a convenient method to compose multimodal presentations, a task that would otherwise be very cumbersome and time consuming.

Werner Breitfuss, Helmut Prendinger, Mitsuru Ishizuka
Implementing Social Filter Rules in a Dialogue Manager Using Statecharts

This paper presents an implementation of Prendinger and Ishizuka's [3,4] social filter rules using statecharts. Following their example, we have implemented a waiter character in a coffee shop who can interact with a user-controlled customer and a system-controlled boss. Due to space limitations, all further references to their work will be implicit; instead, we refer to the literature.

The aim of this paper is to show the potential of using Harel statecharts [2] for modelling socially equipped game characters. The work is based on the assumption that statecharts can successfully be used for designing (game) dialogue managers (see e.g. [1]). There are also other advantages to using statecharts, e.g. (1) the fact that the World Wide Web Consortium (W3C) has introduced a new standard for describing (dialogue) flow, StateChartXML (SCXML), which combines the semantics of Harel statecharts with XML syntax, and (2) statechart theory is an extension of ordinary finite-state machines (commonly used in games), featuring

Jenny Brusk
Towards Realistic Real Time Speech-Driven Facial Animation

In this work we concentrate on finding correlation between speech signal and occurrence of facial gestures with the goal of creating believable virtual humans. We propose a method to implement facial gestures as a valuable part of human behavior and communication. Information needed for the generation of the facial gestures is extracted from speech prosody by analyzing natural speech in real time. This work is based on the previously developed HUGE architecture for statistically based facial gesturing, and extends our previous work on automatic real time lip sync.

Aleksandra Cerekovic, Goranka Zoric, Karlo Smid, Igor S. Pandzic
Avatar Customization and Emotions in MMORPGs

This paper looks at how avatar customization details affect players' emotions in MMORPGs, and which aspects players prefer in the environment for creating an avatar.

Shun-an Chung, Jim Jiunde Lee
Impact of the Agent’s Localization in Human-Computer Conversational Interaction

How can we deliver more credible and effective conversational interactions between human and machine? The goal of this study is to define the needs and key characteristics of a dialogue before setting these factors against reality and technological advancements in the field of Intelligent Virtual Agents. This is the first step of our research, where we study the importance of the localization of the agent in the context of a human-machine dialogue.

Aurélie Cousseau, François Le Pichon
Evolving Expression of Emotions in Virtual Humans Using Lights and Pixels

An intelligent virtual human should not be limited to gesture, face and voice for the expression of emotion. The arts have shown us that complex affective states can be expressed by resorting to lights, shadows, sounds, music, shapes, colors, and motion, among many others [1]. Thus, we previously proposed that lighting and the pixels on the screen could be used to express emotions in virtual humans [2]. Lighting expression is inspired by the principles of lighting, which are regularly explored in theatre or film production [3]. Screen expression acknowledges that, at the meta level, virtual humans are no more than pixels on the screen which can be manipulated to convey emotions, in a way akin to the visual arts [4]. In particular, we explore the "filtering" technique, where the scene is rendered to a temporary texture, modified using shaders and then presented to the user. Now, having defined the expression channels, how should we express emotions through them? We are presently exploring an evolutionary approach which relies on genetic algorithms (GAs) to learn mappings between emotions and lighting and screen expression. The GAs' clear separation between generation and evaluation of alternatives is convenient for this problem. Alternatives can be generated using biologically inspired operators (selection, mutation, crossover, etc.). Evaluation, in turn, can rely on artificial critics, which define fitness functions from art theory, or on human critics. Humans can be used to fill in the gaps in the literature as well as accommodate the individual, social, and cultural values with respect to the expression of emotion in art [1].

Celso de Melo, Jonathan Gratch
Motivations and Personality Traits in Decision-Making

By modifying the intensity of the motivations to test the adaptability of our real-time model of action selection, we demonstrate a relation between motivations and personality traits. We can give the virtual human corresponding personality traits, such as greedy, lazy, or dirty, in order to obtain more interesting and believable virtual humans.

Etienne de Sevin
A Flexible Behavioral Planner in Real-Time

Often, real-time behavioral planners cannot adapt to changes in the environment or be interrupted when necessary. Although hierarchy is needed to obtain complex and proactive behaviors, some functionality should give flexibility to these hierarchies. This paper describes our motivational model of action selection, which integrates a hierarchical and flexible behavioral planner.

Etienne de Sevin
Face to Face Interaction with an Intelligent Virtual Agent: The Effect on Learning Tactical Picture Compilation

Learning a process control task, such as tactical picture compilation in the Navy, is difficult, because the students have to spend their limited cognitive resources both on the complex task itself and on the act of learning. In addition to these resource limits, motivation can be reduced when learning progress is slow. Intelligent Virtual Agents may help to improve tutoring systems by offering students feedback on their performance via speech and facial expression, which imposes limited "communication load" and provides motivating ways of interaction. We present our Intelligent Virtual Agent, called Ashley, and an experiment in which we examined whether learning of tactical picture compilation is better when the feedback is provided by Ashley rather than by a standard text-box interface. This first experiment did not show user interface effects, but provided new requirements for the next version of the feedback system and Ashley: the task to be learned was still too simple for substantial feedback effects, the timing of Ashley's feedback should be improved, and Ashley's presence should be better scheduled.

Willem A. van Doesburg, Rosemarijn Looije, Willem A. Melder, Mark A. Neerincx
Creating and Scripting Second Life Bots Using MPML3D

The use of virtual characters is becoming increasingly common in computer games and other applications. So-called "bots" (computer-controlled virtual characters) may explain virtual settings in a natural way. We describe an example scenario in the online world Second Life [1] in order to explain to content creators how to create bots and script their behavior using the well-established authoring language MPML3D [2]. MPML3D does not assume knowledge of a high-level programming language. In Second Life, users are represented by their avatars (virtual characters controlled by humans), and its environment provides character models, animations, and a graphics engine.

Birgit Endrass, Helmut Prendinger, Elisabeth André, Mitsuru Ishizuka
Piavca: A Framework for Heterogeneous Interactions with Virtual Characters

Animated virtual humans are a vital part of many virtual environments today. Of particular interest are virtual humans that we can interact with in some approximation of social interaction. However, creating characters that we can interact with believably is an extremely complex problem. Part of this difficulty is that human interaction is highly multi-modal. This multi-modality implies that there will be a great variety of ways of interacting with a virtual character. The diverse styles of interaction also imply diverse methods of generating behavior. For example, animation can be played back from motion capture, generated procedurally, or generated by transforming existing motion capture

Marco Gillies, Xueni Pan, Mel Slater
Interpersonal Impressions of Agents for Developing Intelligent Systems

This paper reports an experiment on the interpersonal impressions given by a character agent.

When one is developing intelligent systems, it is important to consider how to provide a feeling of affinity with the system and a sense of presence for communicating with a system that has human-like intelligent functions such as recommendation or persuasion [1]. According to the Media Equation [2], people treat computers, television, and new media as real people and places, so if an agent behaves in a disagreeable manner toward users, it makes the users uncomfortable.

Kaoru Sumi, Mizue Nagata
Comparing an On-Screen Agent with a Robotic Agent in Non-Face-to-Face Interactions

We investigated the effects of an on-screen agent and a robotic agent on users’ behaviors in a non-face-to-face interaction, which is a much more realistic style than the face-to-face interaction on which most studies have focused. The results showed that the participants had more positive attitudes or behaviors toward the robotic agents than the on-screen agent. We discuss the results of this investigation and focus in particular on how to create comfortable interactions between users and various interactive agents appearing in different media.

Takanori Komatsu, Yukari Abe
Sustainability and Predictability in a Lasting Human–Agent Interaction

Sustainability is one of the important features to be considered in a human–agent interaction (HAI) process, because humans inevitably become accustomed to agents and appear to grow bored as the agents' behaviors become predictable. To clarify the relationship between sustainability and predictability, we evaluate the interaction processes between human subjects and a virtual entertainment robot equipped with one of three different interaction models.

Toshiyuki Kondo, Daisuke Hirakawa, Takayuki Nozawa
Social Effects of Virtual Assistants. A Review of Empirical Results with Regard to Communication

Early as well as recent evaluation studies have indicated that embodied conversational agents induce social reactions on the part of the user. When confronted with virtual agents, human participants show communication behaviors that are similar to those shown in human-to-human interaction. The paper gives a review of the relevant research.

Nicole C. Krämer
SoNa: A Multi-agent System to Support Human Navigation in a Community, Based on Social Network Analysis

When one joins an existing community, she or he may have little-to-no knowledge of what other members already know. “Traditional” and electronic media are useful to collect information, but when the request is related to social experience, it is hard to locate and extract social knowledge from essentially subjective individual accounts. Asking somebody would then be a better way to search information, but it has disadvantages, too: the practicable area of potential contacts is limited by the range of one’s personal network, the contacted individuals’ knowledge, time limitations, etc. Communication on a personal basis in a new community is always trial-and-error driven, and it does not guarantee obtaining the information of interest even when such information is freely available from some of the community members. As a result of this, newcomers usually experience significant difficulties in adaptation to the community. In an attempt to help overcome this problem, the presented study proposes a system to assist personalized knowledge sharing. The system allows the user to navigate a social space of a community and to locate members who would address the user’s needs.

Shizuka Kumokawa, Victor V. Kryssanov, Hitoshi Ogawa
Animating Unstructured 3D Hand Models

We present a novel approach for automatically animating unstructured hand models. Skeletons of the hand models are automatically extracted. Local frames on hand skeletons are created to skin and animate hand models. Self-intersection of hand surfaces is avoided. Our method produces realistic hand animation efficiently.

Jituo Li, Li Bai, Yangsheng Wang
Verification of Expressiveness of Procedural Parameters for Generating Emotional Motions

Body movements are crucial for emotion expression of a virtual agent. However, the perception of expressiveness of an animation has always been a subjective matter. Research in psychology has asked professional actors to perform emotional motions to study how body gestures deliver emotions [2]. In this work, we aim to design an animation system that can generate human body motions in a more systematic manner and then study the expressiveness of the generated animation as a preceding step for the study of the relation between motion and emotion.

Yueh-Hung Lin, Chia-Yang Liu, Hung-Wei Lee, Shwu-Lih Huang, Tsai-Yen Li
Individualised Product Portrayals in the Usability of a 3D Embodied Conversational Agent in an eBanking Scenario

A cohort of 65 participants interacted with an extrovert male Embodied Conversational Agent (ECA) in four individualised product portrayals. Results suggest that customers respond more positively to portrayals by ECAs of savings account product offers in contrast to investment account product offers, both in terms of usability and relevance.

Alexandra Matthews, Nicholas Anderson, James Anderson, Mervyn Jack
Multi-agent Negotiation System in Electronic Environments

The main purpose of the study is to investigate how costly information could affect the performance of several types of searching agents in electronic commerce environments. Internet searching agents are tools allowing consumers to compare on-line Web-stores’ prices for a product. The existing agents base their search on a predefined list of Web-stores and, as such, they can be qualified as fixed-sample size searching agents. However, with the implementation of new Internet pricing schemes, this search rule may evolve toward more flexible search methods allowing for an explicit trade-off between the communication costs and the product price. In this setting, the sequential optimal search rule is a possible alternative. However, its adoption depends on its expected performance. The present paper analyses the relative performances of two types of search agents on a virtual market with costly information. At the theoretical equilibrium of the market, we show that the sequential rule-based searching agents with a reservation price always allow consumers to pay lower total costs. We further test the robustness of this result by simulating a market where both sellers’ and buyers’ search agents have only partial information about the market structure.

Dorin Militaru
A Study of the Use of a Virtual Agent in an Ambient Intelligence Environment

In this paper we present the results of an evaluation of the use of a virtual agent together with a spoken dialogue system in an ambient intelligence environment. To develop the study, 35 different subjects had to perform eight different tasks and fill in a questionnaire with their impressions. From the answers we can conclude that the use of the virtual agent does not improve their appreciation of the conversation performance, but it offers them a more human-like interaction. Only a minority of users preferred to carry on the conversation without the visual support of the virtual agent.

Germán Montoro, Pablo A. Haya, Sandra Baldassarri, Eva Cerezo, Francisco José Serón
Automatic Torso Engagement for Gesturing Characters

Acting theorists such as Grotowski [1] have argued that full body engagement is a key principle in creating expressive movement. Unfortunately, torso use has received limited attention in virtual character research and torso involvement is often lacking. In procedural gesture animation, the character’s gestural movements often begin at the shoulder and the character’s torso is left relatively stiff. This reduces both the believability and expressiveness of the resulting motion.

Michael Neff
Modeling the Dynamics of Virtual Agent’s Social Relations

To create believable virtual agents, both the social context and the emotions appearing during the interaction should be taken into account. Most existing models of social context are statically defined. However, research shows that the affective experience during an interaction can induce a modification of social context and more particularly of social relations. In this paper, we propose to explore the dynamics of virtual agent’s social relations. The agent’s social relations are updated according to emotions that appear during the interaction (both the agent’s triggered emotions and those expressed by its interlocutor). The model proposed is qualitatively defined. In its implementation, numerical functions are proposed.

Magalie Ochs, Nicolas Sabouret, Vincent Corruble
Proposal of an Artificial Emotion Expression System Based on External Stimulus and Emotional Elements

Recently, for intelligent agents, the concept of emotion has become more and more important, and many researchers think that emotion is one of the major factors for more natural and friendly interaction between humans and agents [1]. There have been several research efforts to build more emotional creatures.

Seungwon Oh, Minsoo Hahn
A Reactive Architecture Integrating an Associative Memory for Sensory-Driven Intelligent Behavior

Realistic human-like agents are nowadays able to follow goals, plan actions, manipulate objects [1], show emotions, and even converse with human beings. Despite these agents being endowed with many abilities, the question of intelligence, even for simple animal agents, is still pending. Our research positions itself in an artificial life and virtual reality context. It corresponds to the direction of recent team research [2] suggesting that a creature's morphology and environment complexity are closely tied. The work presented in this paper additionally intends to show that human expert cognitive modeling is not a necessary prerequisite of behavioral intelligence.

David Panzoli, Hervé Luga, Yves Duthen
Social Responses to Virtual Humans: Automatic Over-Reliance on the "Human" Category

In the human-computer interaction (HCI) literature, responding socially to virtual humans means that people exhibit behavioral responses to virtual humans as if they were human. Based partly on the work of Nass and his colleagues (for a review, see [1]), there is general agreement in the research community that people do respond socially to virtual humans. What seems vital for understanding why people respond socially to virtual humans is how we respond socially to another person and how we come to know others' temporary states (e.g., emotions, intentions) and enduring dispositions (e.g., beliefs, abilities).

Sung Park, Richard Catrambone
Adaptive Self-feeding Natural Language Generator Engine

A Natural Language Generation (NLG) engine is proposed, based on the combination of NLG and expert systems. Combining these techniques makes it possible to use user-defined behaviors in virtual worlds as inputs to an expert system. Adaptive algorithms can then be used to retrieve information from the Internet and give feedback to the user via the NLG engine. The combination of these AI techniques can bring benefits such as believability in the interaction between AI-driven and human-driven avatars in virtual worlds.

Jovan David Rebolledo Méndez, Kenji Nakayama
A Model of Motivation for Virtual-Worlds Avatars

This paper presents a model of motivation for AI-driven avatars in virtual worlds. The work aims to provide the avatar with the capability to detect and react to different motivational states, in order to enhance and maintain the user's motivation. The algorithm and its associated rules are presented. Future work consists of assessing this model in virtual-world applications in the context of a quiz.

Genaro Rebolledo-Mendez, David Burden, Sara de Freitas
Towards Virtual Emotions and Emergence of Social Behaviour

In our approach we explore emotional effects of virtual agents without human interaction. The idea is the following: emotion is considered a valuable means to explain social phenomena such as the emergence of costly punishment. Now, would it be possible to put together a population of agents which mimics human society? Would it even be possible to show that certain personality configurations and emotional behaviors emerge from interactions in a virtual agent society experiment? In previous work we selected the public goods game with punishment option because of its intriguing property of affect-based decisions [1]. In this paper we present our first approach to virtual agent population modelling, which adopts a setting of the public goods game as used in studies on cooperative behaviour (see [1], [3]). The discrete version of the game distinguishes between the roles of punisher, cooperator, and defector. As shown in [3], the iterated public goods game leads to a predominance of defectors. This changes when participants are given the freedom to choose whether or not to participate in the game, which gives the punishers a chance to get back into the game; it is shown that eventually they become dominant (see [3]).
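The payoff structure of the public goods game with punishment and optional participation can be sketched as follows. This is a minimal illustrative computation of one round's payoffs under standard assumptions from the game-theory literature; the parameter names and default values (c, r, beta, gamma, sigma) are chosen for illustration and are not taken from the paper.

```python
# Illustrative sketch: one round of the public goods game with punishment
# and optional participation (loners). Parameters are generic assumptions.

def public_goods_payoffs(n_coop, n_def, n_pun,
                         c=1.0,       # contribution to the common pot
                         r=3.0,       # multiplication factor of the pot
                         beta=1.0,    # fine imposed on a defector per punisher
                         gamma=0.3,   # cost a punisher pays per defector
                         sigma=0.5):  # fixed payoff of a non-participant (loner)
    """Return per-role payoffs (cooperator, defector, punisher, loner)."""
    n = n_coop + n_def + n_pun  # number of participants this round
    if n < 2:
        # With fewer than two participants no game takes place;
        # everyone falls back to the loner payoff.
        return sigma, sigma, sigma, sigma
    share = r * c * (n_coop + n_pun) / n   # each participant's share of the pot
    coop = share - c                       # contributes c, receives share
    defect = share - beta * n_pun          # keeps c but is fined by punishers
    punish = share - c - gamma * n_def     # contributes and pays to punish
    return coop, defect, punish, sigma
```

With these illustrative parameters, two cooperators, one defector, and one punisher receive payoffs 1.25, 1.25, and 0.95 respectively, which shows why punishment is individually costly even when it deters defection at the population level.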

Dirk M. Reichardt
Using Virtual Agents for the Teaching of Requirements Elicitation in GSD

Requirements elicitation is particularly difficult in Global Software Development (GSD) environments, owing principally to cultural differences and to communication problems caused by the geographical distance separating stakeholders. For this reason it is necessary to train professionals in the skills needed to carry out requirements elicitation in GSD. We have therefore designed a simulator which, by using virtual agents, enables students and professionals to acquire a subset of the skills necessary for requirements elicitation in GSD.

Miguel Romero, Aurora Vizcaíno, Mario Piattini
A Virtual Agent’s Behavior Selection by Using Actions for Focusing of Attention

In this paper we present an approach to focusing an IVA's attention in order to select appropriate behavior in a continuously changing virtual environment. An action for focusing of attention represents the act of moving attention to the currently relevant attributes of the local virtual environment.

Haris Supic
Emergent Narrative and Late Commitment

Emergent narrative is an approach to interactive storytelling in which stories result from local interactions of autonomous characters. We describe a technique for emergent narrative that enables the characters to fill in the story world during the simulation when this is useful for the developing story.

Ivo Swartjes, Edze Kruizinga, Mariët Theune, Dirk Heylen
“I Would Like to Trust It but” Perceived Credibility of Embodied Social Agents: A Proposal for a Research Framework

We suggest a research framework for evaluating users' perceived credibility when they interact with an embodied social agent (ESA). Our research framework addresses only an ESA's nonverbal features; thus we do not take into account the content and rhetorical arguments of the message delivered by the ESA.

Federico Tajariol, Valérie Maffiolo, Gaspard Breton
Acceptable Dialogue Start Supporting Agent for Avatar-Mediated Multi-tasking Online Communication

In recent years, instant messaging tools have become popular for daily online communication. The characteristic communication style of these tools is multi-tasking online communication. Their users have difficulty recognizing the status of interaction partners, so starting a dialogue carries the risk of unintentionally interrupting the partner. Automatic estimation and ambient display of the partner's status are expected to help avoid such interruptions. We therefore prototyped an acceptable dialogue start supporting (ADSS) agent system for assisting a pleasantly acceptable dialogue start in avatar-mediated multi-tasking communication (AMMCS). The ADSS agent estimates and expresses the user's uninterruptibility, and conveys the dialogue request of the communication partner. Fig. 1 shows an overview of the ADSS agent and a screenshot of the user's desktop while using the agent. Each communication partner is displayed as an avatar in a small individual window.

Takahiro Tanaka, Kyouhei Matsumura, Kinya Fujita
Do You Know How I Feel? Evaluating Emotional Display of Primary and Secondary Emotions

In this paper we report on an empirical study of how well different facial expressions of primary and secondary emotions [2] can be recognized from the face of our emotional virtual human Max [1]. Primary emotions such as happiness are more primitive, ontogenetically earlier types of emotions, which are expressed by direct mapping onto basic emotion displays; secondary emotions such as relief or gloating are considered cognitively more elaborated emotions and require a more subtle rendition. In order to validate the design of our virtual agent, which entails devising facial expressions for both kinds of emotion, we sought answers to the following questions: How well can emotions be read from a virtual agent's face by human observers? Are there differences in recognizability between the more primitive primary emotions and the more cognitively elaborated secondary emotions?

Julia Tolksdorf, Christian Becker-Asano, Stefan Kopp
Comparing Emotional vs. Envelope Feedback for ECAs

Opinions in the scientific community differ about what makes an ECA effective. This study investigated whether emotional expressions influence the effectiveness of human-agent interaction and the participants' emotional state after the experiment. 70 participants took part in a small-talk situation (10 min.) with the ECA MAX. We implemented two different types of feedback: emotional feedback (EMO), which conveyed the emotional state of MAX (including smiles and compliments), and envelope feedback (ENV), which conveyed comprehension of the participants' contributions and presented MAX as an attentive listener. We found that smiling was the only nonverbal behaviour that was recognized significantly, and that the emotional feedback led to increased feelings of interest in the participants. There were no effects with regard to the evaluation of the effectiveness of the conversation.

Astrid von der Pütten, Christian Reipen, Antje Wiedmann, Stefan Kopp, Nicole C. Krämer
Intelligent Agents Living in Social Virtual Environments – Bringing Max into Second Life

When developing cognitive agents capable of interacting with humans, it is often challenging to provide a suitable environment in which agent and user are co-situated. This paper presents a straightforward approach to using Second Life (SL) [1] as a persistent, "near natural", and socially rich environment for research on autonomous agents in complex surroundings, their learning of social skills, and how they are perceived by humans.

Erik Weitnauer, Nick M. Thomas, Felix Rabe, Stefan Kopp
Backmatter
Metadata
Title
Intelligent Virtual Agents
Editors
Helmut Prendinger
James Lester
Mitsuru Ishizuka
Copyright Year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-85483-8
Print ISBN
978-3-540-85482-1
DOI
https://doi.org/10.1007/978-3-540-85483-8
