About this Book

Welcome to the proceedings of the 10th International Conference on Intelligent Virtual Agents (IVA), held 20-22 September, 2010 in Philadelphia, Pennsylvania, USA. Intelligent Virtual Agents are interactive characters that exhibit human-like qualities and communicate with humans or with each other using natural human modalities such as behavior, gesture, and speech. IVAs are capable of real-time perception, cognition, and action that allow them to participate in a dynamic physical and social environment. IVA 2010 is an interdisciplinary annual conference and the main forum for presenting research on modeling, developing, and evaluating Intelligent Virtual Agents with a focus on communicative abilities and social behavior. The development of IVAs requires expertise in multimodal interaction and several AI fields such as cognitive modeling, planning, vision, and natural language processing. Computational models are typically based on experimental studies and theories of human-human and human-robot interaction; conversely, IVA technology may provide interesting lessons for these fields. Visualizations of IVAs require computer graphics and animation techniques, and in turn supply significant realism-related problem domains for these fields. The realization of engaging IVAs is a challenging task, so reusable modules and tools are of great value. The fields of application range from robot assistants, social simulation, and tutoring to games and artistic exploration. The enormous challenges and diversity of possible applications of IVAs have resulted in an established annual conference.

Table of Contents

Frontmatter

Behavior Modeling

Constraints-Based Complex Behavior in Rich Environments

In order to create a system capable of planning complex, constraints-based behaviors for an agent operating in a rich environment, two complementary frameworks were integrated. Linear Temporal Logic mission planning generates controllers that are guaranteed to satisfy complex requirements that describe reactive and possibly infinite behaviors. However, enumerating all the relevant information as a finite set of Boolean propositions becomes intractable in complex environments. The PAR (Parameterized Action Representation) framework provides an abstraction layer where information about actions and the state of the world is maintained; however, its planning capabilities are limited. The integration described in this paper combines the strengths of these two frameworks and allows for the creation of complex virtual agent behavior that is appropriate to environmental context and adheres to specified constraints.

Jan M. Allbeck, Hadas Kress-Gazit

Smart Events and Primed Agents

We describe a new organization for virtual human responses to dynamically occurring events. In our approach behavioral responses are enumerated in the representation of the event itself. These Smart Events inform an agent of plausible actions to undertake. We additionally introduce the notion of agent priming, which is based on psychological concepts and further restricts and simplifies action choice. Priming facilitates multi-dimensional agents and in combination with Smart Events results in reasonable, contextual action selection without requiring complex reasoning engines or decision trees. This scheme burdens events with possible behavioral outcomes, reducing agent computation to evaluation of a case expression and (possibly) a probabilistic choice. We demonstrate this approach in a small group scenario of agents reacting to a fire emergency.

Catherine Stocker, Libo Sun, Pengfei Huang, Wenhu Qin, Jan M. Allbeck, Norman I. Badler
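
The abstract says agent computation reduces to evaluating a case expression plus an optional probabilistic choice. A minimal sketch of that idea, with event fields and priming categories invented for illustration:

```python
import random

# A Smart Event carries its own plausible responses, keyed by agent priming.
# Field names and priming categories here are illustrative assumptions.
FIRE_EVENT = {
    "name": "fire_alarm",
    "responses": {
        "safety_primed": [("evacuate", 0.8), ("alert_others", 0.2)],
        "task_primed":   [("finish_task", 0.3), ("evacuate", 0.7)],
        "default":       [("evacuate", 1.0)],
    },
}

def react(agent_priming: str, event: dict) -> str:
    """Action selection without a reasoning engine: a case lookup on
    the agent's priming, then a weighted probabilistic choice."""
    options = event["responses"].get(agent_priming, event["responses"]["default"])
    actions, weights = zip(*options)
    return random.choices(actions, weights=weights, k=1)[0]

print(react("task_primed", FIRE_EVENT))  # e.g. 'evacuate'
```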

Using Artificial Team Members for Team Training in Virtual Environments

In a good team, members not only perform their individual tasks, they also coordinate their actions with the other members of the team. Developing such team skills usually involves exercises with all members playing their roles. This approach is costly and has organizational and educational drawbacks. We developed a more efficient and flexible approach by setting training in virtual environments and using intelligent software agents to play the role of team members. We developed a general framework for developing agents that, in a controlled fashion, execute the behavior that enables the human player (i.e., trainee) to effectively learn team skills. The framework is tested by developing and implementing various types of team agents in a game-based virtual environment.

Jurriaan van Diggelen, Tijmen Muller, Karel van den Bosch

A Comprehensive Taxonomy of Human Motives: A Principled Basis for the Motives of Intelligent Agents

We present a hierarchical taxonomy of human motives, based on similarity judgments of 161 motives gleaned from an extensive review of the motivation literature from McDougall to the present. This taxonomy provides a theoretically and empirically principled basis for the motive structures of Intelligent Agents. 220 participants sorted the motives into groups, using a Flash interface in a standard web browser. The co-occurrence matrix was cluster analyzed. At the broadest level were five large clusters concerned with Relatedness, Competence, Morality and Religion, Self-enhancement / Self-knowledge, and Avoidance. Each of the broad clusters divided into more specific motives. We discuss using this taxonomy as the basis for motives in Intelligent Agents, as well as its relationship to other motive organizations.

Stephen J. Read, Jennifer Talevich, David A. Walsh, Gurveen Chopra, Ravi Iyer
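
As a rough illustration of the analysis pipeline described (participants' sortings aggregated into a co-occurrence matrix, then cluster analyzed), here is a sketch assuming average-linkage hierarchical clustering and toy data; neither is a detail taken from the paper:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy co-occurrence matrix: entry [i][j] counts how often participants
# sorted motives i and j into the same group (values are illustrative).
cooc = np.array([
    [20, 15,  2,  1],
    [15, 20,  3,  2],
    [ 2,  3, 20, 14],
    [ 1,  2, 14, 20],
], dtype=float)

# Convert co-occurrence to dissimilarity, then condense the matrix.
dist = cooc.max() - cooc
np.fill_diagonal(dist, 0.0)
condensed = dist[np.triu_indices_from(dist, k=1)]

tree = linkage(condensed, method="average")        # hierarchical clustering
print(fcluster(tree, t=2, criterion="maxclust"))   # -> two broad clusters
```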

The Impact of a Mixed Reality Display Configuration on User Behavior with a Virtual Human

Understanding the human-computer interface factors that influence users’ behavior with virtual humans will enable more effective human-virtual human encounters. This paper presents experimental evidence that using a mixed reality display configuration can result in significantly different behavior with a virtual human along important social dimensions. The social dimensions we focused on were engagement, empathy, pleasantness, and naturalness. To understand how these social constructs could be influenced by display configuration, we video recorded the verbal and non-verbal response behavior to stimuli from a virtual human under two fundamentally different display configurations. One configuration presented the virtual human at life-size and was embedded into the environment, and the other presented the virtual human using a typical desktop configuration. We took multiple independent measures of participant response behavior using a video coding instrument. Analysis of these measures demonstrates that display configuration was a statistically significant multivariate factor along all dimensions.

Kyle Johnsen, Diane Beck, Benjamin Lok

A Multimodal Real-Time Platform for Studying Human-Avatar Interactions

A better understanding of the human user’s expectations and sensitivities to the real-time behavior generated by virtual agents can provide insightful empirical data and infer useful principles to guide the design of intelligent virtual agents. In light of this, we propose and implement a research framework to systematically study and evaluate different important aspects of multimodal real-time interactions between humans and virtual agents. Our platform allows the virtual agent to keep track of the user’s gaze and hand movements in real time, and adjust his own behaviors accordingly. Multimodal data streams are collected in human-avatar interactions including speech, eye gaze, hand and head movements from both the human user and the virtual agent, which are then used to discover fine-grained behavioral patterns in human-agent interactions. We present a pilot study based on the proposed framework as an example of the kinds of research questions that can be rigorously addressed and answered. This first study investigating human-agent joint attention reveals promising results about the role and functioning of joint attention in human-avatar interactions.

Hui Zhang, Damian Fricker, Chen Yu

Realizing Multimodal Behavior

Closing the Gap between Behavior Planning and Embodied Agent Presentation

Generating coordinated multimodal behavior for an embodied agent (speech, gesture, facial expression...) is challenging. It requires a high degree of animation control, in particular when reactive behaviors are required. We suggest distinguishing realization planning, where gesture and speech are processed symbolically using the behavior markup language (BML), from presentation, which is controlled by a lower-level animation language (EMBRScript). Reactive behaviors can bypass planning and directly control presentation. In this paper, we show how to define a behavior lexicon, how this lexicon relates to BML and how to resolve timing using formal constraint solvers. We conclude by demonstrating how to integrate reactive emotional behaviors.

Michael Kipp, Alexis Heloir, Marc Schröder, Patrick Gebhard
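
A toy sketch of resolving behavior timing with a formal constraint solver, as the abstract mentions; the phase names and constraints below are invented, and Z3 stands in for whatever solver the authors actually used:

```python
from z3 import Real, Solver, sat

# Unknown start times (seconds) for a gesture's phases.
prep, stroke, retract = Real("prep"), Real("stroke"), Real("retract")
word_onset = 1.2  # known time of the stressed word in synthesized speech

s = Solver()
s.add(prep >= 0.0)
s.add(stroke == word_onset)      # BML-style sync: stroke aligns with the word
s.add(stroke - prep >= 0.3)      # preparation needs at least 300 ms
s.add(retract - stroke >= 0.2)   # hold the stroke at least 200 ms

if s.check() == sat:
    m = s.model()
    print({str(v): m[v] for v in (prep, stroke, retract)})
```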

Gesture and Expression

Designing an Expressive Avatar of a Real Person

The human ability to express and recognize emotions plays an important role in face-to-face communication, and as technology advances it will be increasingly important for computer-generated avatars to be similarly expressive. In this paper, we present the detailed development process for the Lifelike Responsive Avatar Framework (LRAF) and a prototype application for modeling a specific individual to analyze the effectiveness of expressive avatars. In particular, the goals of our pilot study (n = 1,744) are to determine whether the specific avatar being developed is capable of conveying emotional states (Ekman's six classic emotions) via facial features and whether a realistic avatar is an appropriate vehicle for conveying the emotional states accompanying spoken information. The results of this study show that happiness and sadness are correctly identified with a high degree of accuracy while the other four emotional states show mixed results.

Sangyoon Lee, Gordon Carlson, Steve Jones, Andrew Johnson, Jason Leigh, Luc Renambot

Interactive Motion Modeling and Parameterization by Direct Demonstration

While interactive virtual humans are becoming widely used in education, training and therapeutic applications, building animations which are both realistic and parameterized with respect to a given scenario remains a complex and time-consuming task. In order to improve this situation, we propose a framework based on the direct demonstration and parameterization of motions. The presented approach addresses three important aspects of the problem in an integrated fashion: (1) our framework relies on an interactive real-time motion capture interface that empowers non-skilled animators with the ability to model realistic upper-body actions and gestures by direct demonstration; (2) our interface also accounts for the interactive definition of clustered example motions, in order to well represent the variations of interest for a given motion being modeled; and (3) we also present an inverse blending optimization technique which solves the problem of precisely parameterizing a cluster of example motions with respect to arbitrary spatial constraints. The optimization is efficiently solved on-line, allowing autonomous virtual humans to precisely perform learned actions and gestures with respect to arbitrarily given targets. Our proposed framework has been implemented in an immersive multi-tile stereo visualization system, achieving a powerful and intuitive interface for programming generic parameterized motions by demonstration.

Carlo Camporesi, Yazhou Huang, Marcelo Kallmann
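
One plausible reading of inverse blending is an optimization over blend weights so that the blended result satisfies a spatial constraint. The sketch below, with a generic least-squares objective and made-up end-effector data, is an assumption about the general technique, not the paper's actual formulation:

```python
import numpy as np
from scipy.optimize import minimize

# End-effector positions reached by four example motions (toy data).
examples = np.array([[0.2, 0.5, 0.1],
                     [0.4, 0.6, 0.2],
                     [0.3, 0.4, 0.3],
                     [0.5, 0.5, 0.1]])
target = np.array([0.35, 0.52, 0.18])  # arbitrary spatial constraint

def error(w: np.ndarray) -> float:
    w = np.abs(w) / np.abs(w).sum()     # keep weights convex
    return float(np.sum((examples.T @ w - target) ** 2))

res = minimize(error, x0=np.full(4, 0.25))
weights = np.abs(res.x) / np.abs(res.x).sum()
print(weights, examples.T @ weights)    # blended pose approximates target
```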

Speed Dating with an Affective Virtual Agent - Developing a Testbed for Emotion Models

In earlier studies, user involvement with an embodied software agent and willingness to use that agent were partially determined by the aesthetics of the design and the moral fiber of the character. We used these empirical results to model agents that in their turn would build up affect for their users much the same way as humans do for agents. Through simulations, we tested these models for internal consistency and were successful in establishing the relationships among the factors as suggested by the earlier user studies. This paper reports on the first confrontation of our agent system with real users to check whether users recognize that our agents function in similar ways as humans do. Through a structured questionnaire, users informed us whether our agents evaluated the user’s aesthetics and moral stance while building up a level of involvement with the user and a degree of willingness to interact with the user again.

Matthijs Pontier, Ghazanfar Siddiqui, Johan F. Hoorn

Individualized Gesturing Outperforms Average Gesturing – Evaluating Gesture Production in Virtual Humans

How does a virtual agent’s gesturing behavior influence the user’s perception of communication quality and the agent’s personality? This question was investigated in an evaluation study of co-verbal iconic gestures produced with the Bayesian network-based production model GNetIc. A network learned from a corpus of several speakers was compared with networks learned from individual speaker data, as well as two control conditions. Results showed that automatically GNetIc-generated gestures increased the perceived quality of an object description given by a virtual human. Moreover, gesturing behavior generated with individual speaker networks was rated more positively in terms of likeability, competence and human-likeness.

Kirsten Bergmann, Stefan Kopp, Friederike Eyssel

Level of Detail Based Behavior Control for Virtual Characters

We take the idea of Level of Detail (LOD) from its traditional use in computer graphics and apply it to the behavior of virtual characters. We describe how our approach handles LOD determination and how we used it to reduce the simulation quality of multiple aspects of the characters’ behavior in an existing application.

Felix Kistler, Michael Wißner, Elisabeth André
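
LOD determination for behavior might be pictured as distance- and visibility-based tiers that throttle simulation quality; the tiers, thresholds, and disabled aspects below are illustrative guesses, not the authors' actual scheme:

```python
def behavior_lod(distance_to_camera: float, visible: bool) -> int:
    """Pick a level of detail for behavior simulation (0 = full)."""
    if not visible:
        return 3                  # off-screen: coarsest simulation
    if distance_to_camera < 5.0:
        return 0                  # full gaze, gesture, path planning
    if distance_to_camera < 20.0:
        return 1                  # simplified gestures, coarse paths
    return 2                      # scripted idle behavior only

# Behavior aspects disabled per LOD tier (illustrative).
DISABLED = {0: [], 1: ["finger_animation"], 2: ["gaze", "gestures"],
            3: ["gaze", "gestures", "path_planning"]}
print(behavior_lod(12.0, visible=True), DISABLED[1])
```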

Virtual Agents Based Simulation for Training Healthcare Workers in Hand Hygiene Procedures

The goal of our work is the design and implementation of a virtual-agent-based interactive simulation for teaching and training healthcare workers in hand hygiene protocols. The interactive training simulation features a virtual instructor who teaches the trainee the Five Moments of hand hygiene, recommended by the Centers for Disease Control and the World Health Organization, via instructions and demonstrations in a tutorial phase. In an interactive training phase, a virtual healthcare worker demonstrates by interacting with a virtual patient and the patient's environment in ten randomly generated virtual scenarios. After watching each scenario, the trainee evaluates whether the virtual healthcare worker's actions are in accordance with the Five Moments of Hand Hygiene procedure. In a feedback phase, the trainee receives feedback on their performance in the training phase, after which the trainee can either exit or return to any phase of the interactive simulation. We describe the design and development of the hospital environment, the simulated virtual instructor, healthcare worker and patient, and the interactive simulation components towards teaching and training in healthcare best practices associated with hand hygiene.

Jeffrey Bertrand, Sabarish V. Babu, Philip Polgreen, Alberto Segre

Modeling Behavioral Manifestations of Coordination and Rapport over Multiple Conversations

Speaking Rate as a Relational Indicator for a Virtual Agent

Many potential applications of virtual agents require an agent to conduct multiple conversations with users. An effective and engaging agent should modify its behavior in realistic ways over these conversations. To model these changes, we gathered a longitudinal video corpus of human-human counseling conversations, and constructed a model of changes in articulation rates over multiple conversations. Articulation rates are observed to increase over time, both within a single conversation and across conversations. However, articulation rates increased mainly for words spoken separately from larger phrases. We also present a preliminary evaluation study, showing that implementing such changes in a virtual agent has a measurable effect on user attitudes toward the agent.

Daniel Schulman, Timothy Bickmore

DelsArtMap: Applying Delsarte’s Aesthetic System to Virtual Agents

Procedural animation presents significant advantages for generating content, especially character animation, in virtual worlds. Artistic, aesthetic models have much to offer procedural character animation to help address the loss of expressivity that sometimes results. In particular, we examine the contribution of François Delsarte’s system and formalize it into a mapping between emotional states and static character poses. We then show an implementation of this model in UNITY.

Michael Nixon, Philippe Pasquier, Magy Seif El-Nasr

Backchannels and Simulation

Backchannel Strategies for Artificial Listeners

We evaluate multimodal rule-based strategies for backchannel (BC) generation in face-to-face conversations. Such strategies can be used by artificial listeners to determine when to produce a BC in dialogs with human speakers. In this research, we consider features from the speaker’s speech and gaze. We used six rule-based strategies to determine the placement of BCs. The BCs were performed by an intelligent virtual agent using nods and vocalizations. In a user perception experiment, participants were shown video fragments of a human speaker together with an artificial listener who produced BC behavior according to one of the strategies. Participants were asked to rate how likely they thought the BC behavior had been performed by a human listener. We found that the number, timing and type of BC had a significant effect on how human-like the BC behavior was perceived.

Ronald Poppe, Khiet P. Truong, Dennis Reidsma, Dirk Heylen
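
One of the rule-based strategies evaluated could plausibly be as simple as triggering a nod when the speaker pauses after gazing at the listener; the feature names and thresholds here are hypothetical, not taken from the paper:

```python
from typing import Optional

def backchannel_rule(pause_ms: float, gaze_at_listener: bool,
                     pitch_falling: bool) -> Optional[str]:
    """One illustrative strategy: nod on speaker gaze plus a short pause,
    vocalize on a falling pitch boundary with a longer pause."""
    if gaze_at_listener and pause_ms > 200:
        return "head_nod"
    if pitch_falling and pause_ms > 400:
        return "vocalization"
    return None  # otherwise stay quiet

print(backchannel_rule(pause_ms=250, gaze_at_listener=True,
                       pitch_falling=False))  # -> 'head_nod'
```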

Learning Backchannel Prediction Model from Parasocial Consensus Sampling: A Subjective Evaluation

Backchannel feedback is an important kind of nonverbal feedback within face-to-face interaction that signals a person’s interest, attention and willingness to keep listening. Learning to predict when to give such feedback is one of the keys to creating natural and realistic virtual humans. Prediction models are traditionally learned from large corpora of annotated face-to-face interactions, but this approach has several limitations. Previously, we proposed a novel data collection method, Parasocial Consensus Sampling, which addresses these limitations. In this paper, we show that data collected in this manner can produce effective learned models. A subjective evaluation shows that the virtual human driven by the resulting probabilistic model significantly outperforms a previously published rule-based agent in terms of rapport, perceived accuracy and naturalness, and it is even better than the virtual human driven by real listeners’ behavior in some cases.

Lixing Huang, Louis-Philippe Morency, Jonathan Gratch

RIDE: A Simulator for Robotic Intelligence Development

Recently, robot hardware platforms have improved significantly and many researchers are working on applying artificial intelligence to robots. However, developing robots is challenging and incurs exorbitant costs. Moreover, even researchers who focus primarily on robot intelligence may be unable to evaluate intelligence algorithms without supporting capabilities such as recognition and manipulation. In this paper, we present the Robot Intelligence Development Environment (RIDE), a simulator for robotic intelligence development. RIDE makes it possible to test robot intelligence easily with its RIDE editor and built-in synthetic vision mechanism. We also show the feasibility of RIDE with case studies.

HyunRyong Jung, Meongchul Song

A Velocity-Based Approach for Simulating Human Collision Avoidance

We present a velocity-based model for realistic collision avoidance among virtual characters. Our approach is elaborated from experimental data and is based on the simple hypothesis that an individual tries to resolve collisions long in advance by slightly adapting its motion.

Ioannis Karamouzas, Mark Overmars
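
The hypothesis that pedestrians resolve collisions long in advance by slight adaptations suggests computing a time-to-collision under constant velocities and reacting only when collisions are still far off; the adaptation rule below is a simplified assumption, not the paper's calibrated model:

```python
import numpy as np

def time_to_collision(p1, v1, p2, v2, radius=0.5):
    """Earliest t >= 0 at which two discs of given radius touch,
    assuming both keep their current velocity (inf if never)."""
    dp, dv = np.asarray(p2) - p1, np.asarray(v2) - v1
    a, b = dv @ dv, 2 * (dp @ dv)
    c = dp @ dp - (2 * radius) ** 2
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:
        return np.inf
    t = (-b - np.sqrt(disc)) / (2 * a)
    return t if t >= 0 else np.inf

# Slow down slightly only when a collision lies comfortably in the
# future, mimicking early, subtle avoidance (illustrative threshold).
ttc = time_to_collision([0, 0], [1.3, 0], [10, 0.3], [-1.3, 0])
speed_scale = 0.9 if 2.0 < ttc < 8.0 else 1.0
print(round(ttc, 2), speed_scale)
```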

Influence of Personality Traits on Backchannel Selection

Our aim is to build a real-time Embodied Conversational Agent able to act as an interlocutor in interaction, automatically generating verbal and non-verbal signals. These signals, called backchannels, provide information about the listener's mental state towards the perceived speech. The ECA reacts differently to the user's behavior depending on its predefined personality. Personality influences the generation and the selection of backchannels. In this paper, we propose a listener's action selection algorithm working in real time to choose the type and the frequency of backchannels to be displayed by the ECA in accordance with its personality. The algorithm is based on the extroversion and neuroticism dimensions of personality. We present an evaluation of how backchannels managed by this algorithm are congruent with participants' intuitive expectations in terms of behavior specific to different personalities.

Etienne de Sevin, Sylwia Julia Hyniewska, Catherine Pelachaud
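
The extroversion/neuroticism mapping described might be sketched as two dials, one driving backchannel frequency and one biasing type selection; the numeric mapping is purely illustrative:

```python
import random

def select_backchannel(extroversion: float, neuroticism: float):
    """extroversion, neuroticism in [0, 1]. Illustrative mapping:
    extroverts backchannel more often and more expressively;
    neurotic agents favor tentative signals."""
    frequency_hz = 0.05 + 0.25 * extroversion   # signals per second
    if random.random() < neuroticism:
        bc_type = random.choice(["hesitant_nod", "soft_mhm"])
    else:
        bc_type = random.choice(["nod", "smile", "uh-huh"])
    return frequency_hz, bc_type

print(select_backchannel(extroversion=0.9, neuroticism=0.2))
```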

Multimodal Backchannels for Embodied Conversational Agents

One of the most desirable characteristics of an Embodied Conversational Agent (ECA) is the capability of interacting with users in a human-like manner. While listening to a user, an ECA should be able to provide backchannel signals through visual and acoustic modalities. In this work we propose an improvement of our previous system to generate multimodal backchannel signals on visual and acoustic modalities. A perceptual study has been performed to understand how context-free multimodal backchannels are interpreted by users.

Elisabetta Bevacqua, Sathish Pammi, Sylwia Julia Hyniewska, Marc Schröder, Catherine Pelachaud

A Virtual Interpreter for the Italian Sign Language

In this paper, we describe a software module for the animation of a virtual interpreter that translates from Italian to the Italian Sign Language (LIS).

The system we describe takes a "synthetic" approach to the generation of the sign language, by composing and parametrizing pre-captured and hand-animated signs, to adapt them to the context in which they occur.

Vincenzo Lombardo, Fabrizio Nunnari, Rossana Damiano

Personality

How Our Personality Shapes Our Interactions with Virtual Characters - Implications for Research and Development

There is a general lack of awareness of the influence of users' personality traits on human-agent interaction (HAI). Numerous studies do not even consider explanatory variables like age and gender although they are easily accessible. The present study focuses on explaining the occurrence of social effects in HAI. Apart from the original manipulation of the study, we assessed the users' personality traits. Results show that participants' personality traits influenced their subjective feeling after the interaction, as well as their evaluation of the virtual character and their actual behavior. Of the various personality traits, those which relate to persistent behavioral patterns in social contact (agreeableness, extraversion, approach avoidance, self-efficacy in monitoring others, shyness, public self-consciousness) were found to be predictive, whereas other personality traits, gender and age did not affect the evaluation. Results suggest that personality traits are better predictors of the evaluation outcome than the actual behavior of the agent as manipulated in the experiment. Implications for research on and development of virtual agents are discussed.

Astrid M. von der Pütten, Nicole C. Krämer, Jonathan Gratch

Evaluating the Effect of Gesture and Language on Personality Perception in Conversational Agents

A significant goal in multi-modal virtual agent research is to determine how to vary expressive qualities of a character so that it is perceived in a desired way. The “Big Five” model of personality offers a potential framework for organizing these expressive variations. In this work, we focus on one parameter in this model – extraversion – and demonstrate how both verbal and non-verbal factors impact its perception. Relevant findings from the psychology literature are summarized. Based on these, an experiment was conducted with a virtual agent that demonstrates how language generation, gesture rate and a set of movement performance parameters can be varied to increase or decrease the perceived extraversion. Each of these factors was shown to be significant. These results offer guidance to agent designers on how best to create specific characters.

Michael Neff, Yingying Wang, Rob Abbott, Marilyn Walker

Developing Interpersonal Relationships with Virtual Agents through Social Instructional Dialog

Virtual pedagogical agents are used to teach skills like intercultural negotiation. In this work, we looked at how introducing social conversational strategies into instructional dialog affects learners' interpersonal relations with such virtual agents. We discuss the development of a model for social instructional dialog (SID), and a comparison task informational dialog model. SID is designed to support students in taking a social orientation towards learning, through the use of conversational strategies that are theorized to produce interpersonal effects: self-disclosure, narrative, and affirmation. We discuss the implementation of these models in a virtual agent that instructs learners on negotiation and Iraqi culture. Finally, we report on the results of an empirical study with 39 participants in which we found that the SID model had significant effects on learners' interpersonal relations with the agent. While SID engendered greater feelings of entitativity and shared perspective with the agent, it also significantly lowered ratings of trust. These findings may guide development of dialog for future agents.

Amy Ogan, Vincent Aleven, Julia Kim, Christopher Jones

Multiple Agent Roles in an Adaptive Virtual Classroom Environment

We present the design of a cast of pedagogical agents impersonating different educational roles in an interactive virtual learning environment. Teams of these agents are used to create different learning scenarios in order to provide learners with an engaging and motivating learning experience. Authors can employ an easy-to-use multimodal dialog authoring tool to adapt lecture and dialog content as well as interaction management to meet their respective requirements.

Gregor Mehlmann, Markus Häring, René Bühling, Michael Wißner, Elisabeth André

Creating Individual Agents through Personality Traits

In the era of globalization, concepts such as individualization and personalization become more and more important in virtual systems. With the goal of creating a more familiar interaction between humans and machines, it makes sense to create a consistent and believable model of personality. This paper presents an explicit model of personality, based on the Five Factor Model, which aims at the creation of distinguishable personalities by using personality traits to automatically influence cognitive processes: appraisal, planning, coping, and bodily expression.

Tiago Doce, João Dias, Rui Prada, Ana Paiva

Bossy or Wimpy: Expressing Social Dominance by Combining Gaze and Linguistic Behaviors

This paper examines the interaction of verbal and nonverbal information for conveying social dominance in intelligent virtual agents (IVAs). We expect expressing social dominance to be useful in applications related to persuasion and motivation; here we simply test whether we can affect users’ perceptions of social dominance using procedurally generated conversational behavior. Our results replicate previous results showing that gaze behaviors affect dominance perceptions, as well as providing new results showing that, in our experiments, the linguistic expression of disagreeableness has a significant effect on dominance perceptions, but that extraversion does not.

Nikolaus Bee, Colin Pollock, Elisabeth André, Marilyn Walker

Interaction Strategies

Warmth, Competence, Believability and Virtual Agents

Believability is a key issue for virtual agents. Most authors agree that emotional behavior and personality have a high impact on agents' believability. The social capacities of the agents also have an effect on users' judgment of believability. In this paper we analyze the role of plausible and/or socially appropriate emotional displays in believability. We also investigate how people judge the believability of the agent, and whether it provokes social reactions of humans toward the agent.

The results of our study in the domain of software assistants show that (a) socially appropriate emotions lead to higher perceived believability, (b) the notion of believability is highly correlated with the two major socio-cognitive variables, namely competence and warmth, and (c) considering an agent believable can be different from considering it human-like.

Radosław Niewiadomski, Virginie Demeure, Catherine Pelachaud

Ada and Grace: Toward Realistic and Engaging Virtual Museum Guides

To increase the interest and engagement of middle school students in science and technology, the InterFaces project has created virtual museum guides that are in use at the Museum of Science, Boston. The characters use natural language interaction and have a near photoreal appearance to increase visitor interest and engagement; the paper also presents reports from museum staff on visitor reaction.

William Swartout, David Traum, Ron Artstein, Dan Noren, Paul Debevec, Kerry Bronnenkant, Josh Williams, Anton Leuski, Shrikanth Narayanan, Diane Piepol, Chad Lane, Jacquelyn Morie, Priti Aggarwal, Matt Liewer, Jen-Yuan Chiang, Jillian Gerten, Selina Chu, Kyle White

Interaction Strategies for an Affective Conversational Agent

The development of Embodied Conversational Agents (ECA) as Companions brings several challenges for both affective and conversational dialogue. These include challenges in generating appropriate affective responses, selecting the overall shape of the dialogue, providing prompt system response times and handling interruptions. We present an implementation of such a Companion showing the development of individual modules that attempt to address these challenges. Further, to resolve resulting conflicts, we present encompassing interaction strategies that attempt to balance the competing requirements. Finally, we present dialogues from our working prototype to illustrate these interaction strategies in operation.

Cameron Smith, Nigel Crook, Johan Boye, Daniel Charlton, Simon Dobnik, David Pizzi, Marc Cavazza, Stephen Pulman, Raul Santos de la Camara, Markku Turunen

"Why Can't We Be Friends?" An Empathic Game Companion for Long-Term Interaction

The ability of artificial companions (virtual agents or robots) to establish meaningful relationships with users is still limited. In humans, a key aspect of such ability is empathy, often seen as the basis of social cooperation and pro-social behaviour. In this paper, we present a study where a social robot with empathic capabilities interacts with two users playing a chess game against each other. During the game, the agent behaves in an empathic manner towards one of the players and in a neutral way towards the other. In an experiment conducted with 40 participants, results showed that users to whom the robot was empathic provided higher ratings in terms of companionship.

Iolanda Leite, Samuel Mascarenhas, André Pereira, Carlos Martinho, Rui Prada, Ana Paiva

Towards an Episodic Memory for Companion Dialogue

We present an episodic memory component for enhancing the dialogue of artificial companions with the capability to refer to, take up and comment on past interactions with the user, and to take into account in the dialogue long-term user preferences and interests. The proposed episodic memory is based on RDF representations of the agent's experiences and is linked to the agent's semantic memory containing the agent's knowledge base of ontological data and information about the interests of the user.

Gregor Sieber, Brigitte Krenn
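
An RDF-based episodic memory of the kind described could be prototyped with rdflib; the namespace, predicates, and query here are invented for illustration:

```python
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/companion#")  # hypothetical namespace
g = Graph()

# Store one experience: the user mentioned liking jazz in conversation 3.
episode = URIRef(EX["episode-3-12"])
g.add((episode, EX.conversation, Literal(3)))
g.add((episode, EX.topic, EX.jazz))
g.add((episode, EX.userAttitude, Literal("likes")))

# Later dialogue can query past episodes about a topic.
q = """SELECT ?ep ?att WHERE {
         ?ep <http://example.org/companion#topic>
             <http://example.org/companion#jazz> ;
             <http://example.org/companion#userAttitude> ?att . }"""
for ep, att in g.query(q):
    print(ep, att)  # -> ...episode-3-12 likes
```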

Generating Culture-Specific Gestures for Virtual Agent Dialogs

Integrating culture into the behavioral model of virtual agents has come into focus lately. When investigating verbal aspects of behavior, nonverbal behaviors are desirably added automatically, driven by the speech-act. In this paper, we present a corpus driven approach of generating gestures in a culture-specific way that accompany agent dialogs. The frequency of gestures and gesture-types, the correlation of gesture-types and speech-acts as well as the expressivity of gestures have been analyzed in the two cultures of Germany and Japan and integrated into a demonstrator.

Birgit Endrass, Ionut Damian, Peter Huber, Matthias Rehm, Elisabeth André

Avatars in Conversation: The Importance of Simulating Territorial Behavior

Is it important to model human social territoriality in simulated conversations? Here we address this question by evaluating the believability of avatars’ virtual conversations powered by our simulation of territorial behaviors. Participants were asked to watch two videos and answer survey questions to compare them. The videos showed the same scene with and without our technology. The results support the hypothesis that simulating territorial behaviors can increase believability. Furthermore, there is evidence that our simulation of small scale group dynamics for conversational territories is a step in the right direction, even though there is still a large margin for improvement.

Claudio Pedica, Hannes Högni Vilhjálmsson, Marta Lárusdóttir

The Impact of Linguistic and Cultural Congruity on Persuasion by Conversational Agents

We present an empirical study on the impact of linguistic and cultural tailoring of a conversational agent on its ability to change user attitudes. We designed two bilingual (English and Spanish) conversational agents to resemble members of two distinct cultures (Anglo-American and Latino) and conducted the study with participants from the two corresponding populations. Our results show that cultural tailoring and participants’ personality traits have a significant interaction effect on the agent’s persuasiveness and perceived trustworthiness.

Langxuan Yin, Timothy Bickmore, Dharma E. Cortés

A Multiparty Multimodal Architecture for Realtime Turntaking

Many dialogue systems have been built over the years that address some subset of the many complex factors that shape the behavior of participants in a face-to-face conversation. The Ymir Turntaking Model (YTTM) is a broad computational model of conversational skills that has been in development for over a decade, continuously growing in the number of factors it addresses. In past work we have shown how it addresses realtime dialogue, communicative gesture, perception of turntaking signals (e.g. prosody, gaze, manual gesture), dialogue planning, learning of multimodal turn signals, and dynamic adaptation to human speaking style. The architectural principles of the YTTM prescribe smaller architectural granularity than most other models, and its principles allow non-destructive additive expansion. In this paper we show how the YTTM accommodates multi-party dialogue. The extension has been implemented in a virtual environment; we present data for up to 12 simulated participants participating in realtime cooperative dialogue. The system includes dynamically adjustable parameters for impatience, willingness to give turn and eagerness to speak.

Kristinn R. Thórisson, Olafur Gislason, Gudny Ragna Jonsdottir, Hrafn Th. Thorisson
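
The three dynamically adjustable parameters named at the end of the abstract (impatience, willingness to give turn, eagerness to speak) suggest per-agent state along these lines; the decision rule is an invented illustration, not the YTTM's actual mechanics:

```python
from dataclasses import dataclass

@dataclass
class TurnState:
    impatience: float = 0.2   # grows while waiting for the floor
    give_turn: float = 0.5    # willingness to yield when holding it
    eagerness: float = 0.4    # baseline drive to claim the floor

    def wants_turn(self, silence_ms: float) -> bool:
        # Illustrative rule: claim the floor when eagerness plus
        # accumulated impatience outweighs politeness at a pause.
        pressure = self.eagerness + self.impatience * (silence_ms / 1000)
        return silence_ms > 150 and pressure > self.give_turn

agents = [TurnState(eagerness=0.7), TurnState(impatience=0.6)]
print([a.wants_turn(silence_ms=400) for a in agents])
```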

Emotion

The Influence of Emotions in Embodied Agents on Human Decision-Making

Acknowledging the social functions that emotions serve, there has been growing interest in the interpersonal effect of emotion in human decision making. Following the paradigm of experimental games from social psychology and experimental economics, we explore the interpersonal effect of emotions expressed by embodied agents on human decision making. The paper describes an experiment where participants play the iterated prisoner’s dilemma against two different agents that play the same strategy (tit-for-tat), but communicate different goal orientations (cooperative vs. individualistic) through their patterns of facial displays. The results show that participants are sensitive to differences in the facial displays and cooperate significantly more with the cooperative agent. The data indicate that emotions in agents can influence human decision making and that the nature of the emotion, as opposed to mere presence, is crucial for these effects. We discuss the implications of the results for designing human-computer interfaces and understanding human-human interaction.

Celso M. de Melo, Peter Carnevale, Jonathan Gratch

Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners

This paper focuses on dimensional prediction of emotions from spontaneous conversational head gestures. It maps the amount and direction of head motion, and occurrences of head nods and shakes, into the arousal, expectation, intensity, power and valence level of the observed subject, as there has been virtually no research bearing on this topic. Preliminary experiments show that it is possible to automatically predict emotions in terms of these five dimensions (arousal, expectation, intensity, power and valence) from conversational head gestures. Dimensional and continuous emotion prediction from spontaneous head gestures has been integrated in the SEMAINE project [1] that aims to achieve sustained emotionally-colored interaction between a human user and Sensitive Artificial Listeners.

Hatice Gunes, Maja Pantic

An Intelligent Virtual Agent to Increase Involvement in Financial Services

In order to enhance user involvement in financial services, this paper proposes to combine the idea of adaptive personalisation with intelligent virtual agents. To this end, a computational model for human decision making in a financial context is incorporated within an intelligent virtual agent. To test whether the agent enhances user involvement, a web application has been developed in which users have to make a number of investment decisions. This application has been evaluated in an experiment in which a number of participants interacted with the system and afterwards provided their judgement by means of a questionnaire. The preliminary results indicate that the virtual agent can show appropriate emotional expressions related to states like happiness, greed and fear, and has high potential to enhance user involvement.

Tibor Bosse, Ghazanfar F. Siddiqui, Jan Treur

Exploration on Affect Sensing from Improvisational Interaction

We report work on adding an improvisational AI actor to an existing virtual improvisational environment, a text-based software system for dramatic improvisation in simple virtual scenarios, for use primarily in learning contexts. The improvisational AI actor has an affect-detection component, which is aimed at detecting affective aspects (concerning emotions, moods, value judgments, etc.) of human-controlled characters' textual "speeches". The AI actor will also make an appropriate response based on this affective understanding, which intends to stimulate the improvisation. The work also accompanies basic research into how affect is conveyed linguistically. A distinctive feature of the project is a focus on the metaphorical ways in which affect is conveyed. Moreover, we have also introduced affect detection using context profiles. Finally, we have reported user testing conducted for the improvisational AI actor and evaluation results of the affect detection component. Our work contributes to the conference themes on affective user interfaces, affect inspired agent and improvisational or dramatic interaction.

Li Zhang

Using Virtual Humans to Bootstrap the Creation of Other Virtual Humans

Virtual human (VH) experiences are increasingly used for training interpersonal skills such as military leadership, classroom education, and doctor-patient interviews. These diverse applications of conversational VHs have a common and unexplored thread – a significant additional population would be afforded interpersonal skills training if VHs were available to simulate either interaction partner. We propose a computer-assisted approach to generate a virtual medical student from hundreds of interactions between a virtual patient and real medical students. This virtual medical student is then used to train standardized patients – human actors who roleplay the part of patients in practice doctor-patient encounters. Practice with a virtual medical student is expected to lead to greater standardization of roleplay encounters, and more accurate evaluation of medical student competency. We discuss the method for generating VHs from an existing corpus of human-VH interactions and present observations from a pilot experiment to determine the utility of the virtual medical student for training.

Brent Rossen, Juan Cendan, Benjamin Lok

Making It Personal: End-User Authoring of Health Narratives Delivered by Virtual Agents

We describe a design study in which five different tools are compared for end-user authoring of personal stories to be told by an embodied conversational agent. The tools provide varying degrees of control over the agent’s verbal and nonverbal behavior. Results indicate that users are more satisfied when their stories are delivered by a virtual agent compared to plain text, are more satisfied when provided with tools to control the agent’s prosody compared to facial display of emotion, and are most satisfied when they have the most control over all aspects of the agent’s delivery.

Timothy Bickmore, Lazlo Ring

MAY: My Memories Are Yours

In human relations, engagement and continuous communication are promoted by the process of sharing experiences. This type of social behaviour plays an important role in the maintenance of relationships with our peers and it is grounded by cognitive features of memory. Aiming at creating agents that sustain long-term interactions, we developed MAY, a conversational virtual companion that gathers memories shared by the user into a three-layer knowledge base, divided into Lifetime Periods, General Events and Event-Specific Knowledge. We believe that its cue-sensitive structure increases agent adaptability and gives it capabilities to perform in a social environment, being able to infer about the user's common and uncommon events. Results show that these agent capabilities contribute to the development of intimacy and companionship.

Joana Campos, Ana Paiva

Expression of Behaviors in Assistant Agents as Influences on Rational Execution of Plans

Assistant Agents help ordinary people with computer tasks in many ways, thanks to their capacity for rational reasoning about the current model of the world. However, they face strong acceptability issues because of the lack of naturalness in their interaction with users. A promising approach is to provide Assistant Agents with a personality model and allow them to perform behavioral reasoning in conjunction with rational reasoning. In this paper, we propose a formal framework to study the relationships between the rational and behavioral processes, based on the expression of behaviors in terms of influence operators on the rational execution of actions and plans.

Jean-Paul Sansonnet, François Bouchet

Reflecting User Faces in Avatars

This paper presents a model to generate personalized facial animations for avatars using Performance Driven Animation (PDA). This approach allows users to reflect their facial expressions in their avatars, taking as input a small set of feature points provided by Computer Vision (CV) tracking algorithms. The model is based on the MPEG-4 Facial Animation standard, and uses a hierarchy of the animation parameters to provide animation of face regions where CV data is lacking. To deform the face, we use two skin-mesh deformation methods, which are computationally cheap and provide avatar animation in real time. We performed an evaluation with subjects in order to qualitatively evaluate our method. Results show that the proposed model can generate coherent and visually satisfactory animations.

Rossana Baptista Queiroz, Adriana Braun, Juliano Lucas Moreira, Marcelo Cohen, Soraia Raupp Musse, Marcelo Resende Thielo, Ramin Samadani

User Studies

How a Virtual Agent Should Smile?

Morphological and Dynamic Characteristics of Virtual Agent’s Smiles

A smile may communicate different meanings depending on subtle characteristics of the facial expression. In this article, we study the morphological and dynamic characteristics of amused, polite, and embarrassed smiles displayed by a virtual agent. A web application was developed to collect a corpus of virtual agent smile descriptions constructed directly by users. Based on this corpus and using a decision-tree classification technique, we propose an algorithm to determine the characteristics of each type of smile that a virtual agent may express. The proposed algorithm enables one to generate a variety of facial expressions corresponding to polite, embarrassed, and amused smiles.

Magalie Ochs, Radosław Niewiadomski, Catherine Pelachaud
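
The decision-tree classification step might look roughly like this in scikit-learn, with smile characteristics reduced to a few numeric features; the feature set, values, and labels are made up for illustration, not the study's corpus:

```python
from sklearn.tree import DecisionTreeClassifier

# Each row: [lip-corner raise, mouth opening, duration (s), head tilt].
# Values and labels are illustrative.
X = [[0.9, 0.6, 2.0, 0.0],   # amused
     [0.8, 0.5, 1.8, 0.1],   # amused
     [0.4, 0.1, 0.8, 0.0],   # polite
     [0.3, 0.1, 0.7, 0.0],   # polite
     [0.5, 0.2, 1.5, 0.6],   # embarrassed (head aversion)
     [0.4, 0.3, 1.6, 0.7]]   # embarrassed
y = ["amused", "amused", "polite", "polite", "embarrassed", "embarrassed"]

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[0.85, 0.55, 1.9, 0.05]]))  # -> ['amused']
```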

How Turn-Taking Strategies Influence Users’ Impressions of an Agent

Different turn-taking strategies of an agent influence the impression that people have of it. We recorded conversations of a human with an interviewing agent, controlled by a wizard and using a particular turn-taking strategy. A questionnaire with 27 semantic differential scales concerning personality, emotion, social skills and interviewing skills was used to capture these impressions. We show that it is possible to influence factors such as agreeableness, assertiveness, conversational skill and rapport by varying the agent’s turn-taking strategy.

Mark ter Maat, Khiet P. Truong, Dirk Heylen

That Avatar Is Looking at Me! Social Inhibition in Virtual Worlds

What effect does controlling an avatar, while in the presence of other virtual agents, have on task performance in virtual worlds? Would the type of view have an influence on this effect? We conducted a study to observe the effects of social inhibition/facilitation traditionally seen in human-to-human interaction. The theory of social inhibition/facilitation states that the presence of others causes people to perform worse on complex tasks and better on simple tasks. Simple tasks are well-learned, easy tasks, while complex tasks require more thought to complete. Participants interacted in a virtual world through control of an avatar. Using this avatar, they completed both simple and complex math tasks in both 1st-person and 3rd-person views, either in the presence of a female virtual agent, a male virtual agent, or alone. The results from this study show that the gender of virtual agents has an effect on real humans' sense of presence in the virtual world. Trends exist for inhibition and facilitation based on the gender of the agent and the view type. We have also identified several challenges in conducting experimental studies in virtual worlds. Our results may have implications for designing for education and training purposes in virtual worlds.

Austen L. Hayes, Amy C. Ulinski, Larry F. Hodges

Know Your Users! Empirical Results for Tailoring an Agent's Nonverbal Behavior to Different User Groups

Since embodied agents are considered equally usable by all kinds of users, not much attention has been paid to the influence of users' attributes on the evaluation of agents in general and their (nonverbal) behaviour in particular. Here, we present evidence from three empirical studies with the agent Max, which focus on the effects of participants' gender, age and computer literacy. The results show that all three attributes have an influence on the feelings of the participants during their interaction with Max, on the evaluation of Max, as well as on the participants' nonverbal behavior.

Nicole C. Krämer, Laura Hoffmann, Stefan Kopp

The Persona Zero-Effect: Evaluating Virtual Character Benefits on a Learning Task with Repeated Interactions

Embodied agents have the potential to become a highly natural human-computer interaction device – they are already in use as tutors, presenters and assistants. However, it remains an open question whether adding an agent to an application has a measurable impact, positive or negative, in terms of motivation and learning performance. Prior studies are very diverse with respect to design, statistical power and outcome; and repeated interactions are rarely considered. We present a controlled user study of a vocabulary trainer application that evaluates the effect on motivation and learning performance. Subjects interacted with either a no-agent or a with-agent version in a between-subjects design in repeated sessions. As opposed to prior work (e.g. Persona Effect), we found neither positive nor negative effects on motivation and learning performance, i.e. a Persona Zero-Effect. This means that adding an agent does not benefit performance, but also does not distract.

Jan Miksatko, Kerstin H. Kipp, Michael Kipp

High Score! - Motivation Strategies for User Participation in Virtual Human Development

Conversational modeling requires an extended time commitment, and the difficulty associated with capturing the wide range of conversational stimuli necessitates extended user participation. We propose the use of leaderboards, narratives and deadlines as motivation strategies to persuade user participation in the conversational modeling for virtual humans. We evaluate the applicability of leaderboards, narratives and deadlines through a user study conducted with medical students (n=20) for modeling the conversational corpus of a virtual patient character. Leaderboards, narratives and deadlines were observed to be effective in improving user participation. Incorporating these strategies had the additional effect of making user responses less reflective of real world conversations.

Shivashankar Halan, Brent Rossen, Juan Cendan, Benjamin Lok

Backmatter
