
2010 | Book

Entertainment Computing - ICEC 2010

9th International Conference, ICEC 2010, Seoul, Korea, September 8-11, 2010. Proceedings

Edited by: Hyun Seung Yang, Rainer Malaka, Junichi Hoshino, Jung Hyun Han

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this Book

The 9th International Conference on Entertainment Computing (ICEC 2010) was held in September 2010 in Seoul, Korea. After Pittsburgh (2008) and Paris (2009), the event returned to Asia. The conference venue was the COEX Exhibition Hall in one of the most vivid and largest cities of the world. This amazing mega-city was a perfect location for the conference. Seoul is on the one hand a metropolitan area with modern industries, universities and great economic power. On the other hand, it is also a place with a very fascinating historical and cultural background. It bridges the past and the future as well as East and West. Entertainment computing also aims at building bridges from technology to leisure, education, culture and work. Entertainment computing at its core has a strong focus on computer games. However, it is not only about computer games. The last ICEC conferences have shown that entertainment computing is a much wider field. For instance, technology developed for games can be used for a wide range of applications such as therapy or education. Moreover, entertainment does not necessarily have to be understood as games. Entertainment computing finds its way to stage performances and all sorts of new interactive installations.

Table of Contents

Frontmatter

Long Papers

New Interfaces and Entertainment Robots

Baby Robot “YOTARO”

YOTARO is a baby-type robot developed to create a new communication perspective between robots and humans through an interaction experience based on the reproduction of a baby's behaviors and user actions. YOTARO exhibits different emotions and reactions, such as smiling, crying, sleeping, sneezing, and expressing anger. It is controlled by an emotion control program that executes in response to inputs such as touching its soft and warm face, touching its stomach, and shaking a rattle. The output takes the form of interactive reactions such as emission of sounds, changes of expression, limb movements, sniveling, and variation in skin color. In addition, we used questionnaires to observe users' impressions before and after their experience with YOTARO.

Hiroki Kunimura, Chiyoko Ono, Madoka Hirai, Masatada Muramoto, Wagner Tetsuya Matsuzaki, Toshiaki Uchiyama, Kazuhito Shiratori, Junichi Hoshino
A Card Playing Humanoid for Understanding Socio-emotional Interaction

This paper describes the groundwork for designing social and emotional interaction between a human and a robot in game-playing. We considered that understanding deception in terms of mind reading plays a key role in realistic interactions for social robots. In order to understand the human mind, the humanoid robot observes nonverbal deception cues through multimodal perception during poker playing, one of the most social human activities. Additionally, the humanoid manipulates the real environment, which includes not only the game but also people, to create a feeling of interacting with a life-like machine and to drive affective responses in determining its reaction.

Min-Gyu Kim, Kenji Suzuki
DreamThrower: Creating, Throwing and Catching Dreams for Collaborative Dream Sharing

The DreamThrower is a novel technology that explores virtually creating, throwing and catching dreams. It detects a user's dream state by measuring rapid eye movement (REM). Once the dream state is detected, sound and light stimuli are played to alter the dream. Users report on their dream, and they can send the stimuli that they have used to another person via an online website. A working prototype accurately detects REM sleep. Based on preliminary results, the sound and light stimuli were found to have little influence on dreams. Our prototype's ability to detect REM, effectively coupled to a social network for sharing dream stimuli, opens up a fun game environment even if the stimuli themselves do not have a significant impact. Instead, user engagement with the social network may be sufficient to alter dreams. Further studies are needed to determine whether stimuli during REM can be created to alter dreams significantly.

Noreen Kamal, Ling Tsou, Abir Al Hajri, Sidney Fels
Everyone Can Do Magic: An Interactive Game with Speech and Gesture Recognition

This paper presents a novel game design that allows players to learn how to cast magic spells that combine hand gestures and speech. The game uses the imperfect recognition performance of speech and gesture recognition systems to its advantage to make the game challenging and interesting. Our game uses a Wii remote encased in a wand and a microphone to track players' gestures and speech, which are then recognized to determine whether they have performed the spell correctly. Visual feedback then provides confirmation of success. Through the game, players learn to adjust their speaking and movement patterns in order to meet the requirements of the recognition systems. This effectively mimics the characteristics of casting spells correctly, in that players adjust their performance so that an "oracle" recognizes their speech and movement as having a magical outcome. A user study confirmed the validity of the idea and established the accuracy required to create an interesting game based on the theory of channels of flow.

Chris Wang, Zhiduo Liu, Sidney Fels
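
The abstract above describes combining two imperfect recognizers to judge whether a spell was cast. As a purely illustrative sketch (not the authors' system), the Python below shows one way such a combination could be scored; the function name, thresholds, and confidence scores are hypothetical.

```python
# Minimal sketch of combining imperfect speech and gesture recognizers to decide
# whether a "spell" succeeded (hypothetical scoring; not the authors' system).

def spell_cast(gesture_score: float, speech_score: float,
               gesture_threshold: float = 0.7, speech_threshold: float = 0.7) -> bool:
    """A spell succeeds only if both the gesture and the spoken incantation
    are recognized with sufficient confidence (scores in [0, 1])."""
    return gesture_score >= gesture_threshold and speech_score >= speech_threshold

# Example: the player's wand motion was recognized well, but the incantation was mumbled.
print(spell_cast(gesture_score=0.85, speech_score=0.55))  # False -> the spell fizzles
print(spell_cast(gesture_score=0.85, speech_score=0.80))  # True  -> visual feedback plays
```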

User Interfaces

Onomatopen: Painting Using Onomatopoeia

We propose an interactive technique using onomatopoeia. Onomatopoeia are imitative words such as “Zig-zag” and “Tick-tock”. Some Asian languages, especially Japanese and Korean, have many onomatopoeia words, which are frequently used in ordinary conversation, as well as in the written language. Almost all onomatopoeic words represent the texture of materials, the state of things and emotions. We consider that onomatopoeia allows users to effectively communicate sensory information to a computer. We developed a prototype painting system called Onomatopen, which enables a user to switch brushes and apply effects using onomatopoeia. For example, if the user draws a line while saying “Zig-zag Zig-zag...”, a jagged line is drawn. As a result of our user test, we found that users can easily understand the usage and enjoy drawing with this application more than with conventional painting software.

Keisuke Kambara, Koji Tsukada
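
As a rough illustration of the brush-switching idea described above (not the Onomatopen implementation), the following Python sketch maps recognized onomatopoeic words to brush settings; all words, fields, and parameters are invented.

```python
# Minimal sketch of onomatopoeia-to-brush mapping (hypothetical, not the authors' code).
# A recognized word selects brush settings; drawing while repeating the word applies them.

from dataclasses import dataclass

@dataclass
class Brush:
    shape: str      # stroke style, e.g. "jagged" for "zig-zag"
    width: int      # stroke width in pixels
    texture: str    # surface texture applied along the stroke

# Hypothetical mapping from onomatopoeic words to brush settings.
ONOMATOPOEIA_BRUSHES = {
    "zig-zag":   Brush(shape="jagged", width=3, texture="plain"),
    "tick-tock": Brush(shape="dashed", width=2, texture="plain"),
    "fuwa-fuwa": Brush(shape="smooth", width=8, texture="soft"),   # "fluffy"
}

def select_brush(recognized_word: str, current: Brush) -> Brush:
    """Switch the brush when a known onomatopoeia is spoken; otherwise keep the current one."""
    return ONOMATOPOEIA_BRUSHES.get(recognized_word.lower(), current)

if __name__ == "__main__":
    brush = Brush(shape="smooth", width=2, texture="plain")
    for word in ["zig-zag", "zig-zag", "unknown"]:
        brush = select_brush(word, brush)
        print(word, "->", brush.shape)
```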
Helping Hands: Designing Video Games with Interpersonal Touch Interaction

Increasingly, the movements of players’ physical bodies are being used as a method of controlling and playing video games. This trend is evidenced by the recent development of interpersonal touch-based games; multiplayer games which players control by physically touching their partners. Although a small number of interpersonal touch-based games have recently been designed, the best practices for creating video games based on this unconventional interaction technique remain poorly explored and understood. In this paper, we provide an overview of interpersonal touch interaction in video games and present a set of design heuristics for the effective use of interpersonal touch interaction in video games. We then use these heuristics to analyze three current interpersonal touch-based games in order to show how these heuristics reflect on the current state of the art. Finally, we present our vision for the future of this interaction modality in video games.

Cody Watts, Ehud Sharlin, Peter Woytiuk
Investigating the Affective Quality of Motion in User Interfaces to Improve User Experience

This study focuses on motion in user interfaces as a design element that can contribute to an improved user experience of digital media entertainment. Designing for user experience requires dealing with user emotion, especially in the entertainment domain. As a means to approach emotion, we studied affective qualities, the features of an artifact that can influence emotion. In user interface design, motion has not been practically dealt with from this perspective. Through an empirical study, we verified that motion plays a significant role in forming the affective quality of user interfaces and found that content type and application type have an influence on this effect. Moreover, a preliminary investigation was made on the use of the Effort system from Laban's theory for the design of motion in terms of affective quality.

Doyun Park, Ji-Hyun Lee

Serious Games and Collaborative Interaction

The MINWii Project: Renarcissization of Patients Suffering from Alzheimer’s Disease Through Video Game-Based Music Therapy

MINWii, a new serious video game targeting patients with Alzheimer's disease and dementia, is a simple Music Therapy tool usable by untrained caregivers. Its objective is to improve patients' self-image (renarcissization) in order to reduce behavioral symptoms, which are an important cause of institutionalization. With MINWii, elderly gamers use Wiimotes to improvise or play predefined songs on a virtual keyboard. We detail our design process, which addresses the specific features of dementia: this iterative refinement scheme, built upon qualitative, small-scale experiments in a therapeutic environment, led to a shift of MINWii's original focus from creativity to reminiscence. A large majority of our patients, with mild to moderate dementia, expressed a strong interest in our system, which was confirmed by feedback from the caregivers. A fully controlled usability study of MINWii is currently under way, which should lead to future research assessing its actual therapeutic impact.

Samuel Benveniste, Pierre Jouvelot, Renaud Péquignot
Virtual Team Performance Depends on Distributed Leadership

In this paper we present a detailed analysis of World of Warcraft virtual team collaboration. A number of competitive synchronous virtual teams were investigated in situ and unobtrusively. We observed a large gap in team performance between the various teams. An initial statistical study showed that, in teams of this level, individual player performance was not the primary driver for the large discrepancy in team performance. This led to the argument that differences in intra-team collaboration and communication might be a significant driver of the discrepancy in team performance. In total, 16 hours of audio recordings of gaming sessions of virtual teams were analyzed. The analysis indicates that distributed leadership, rather than authoritative leadership, is more common in successful synchronous virtual teams.

Nico van Dijk, Joost Broekens
Nonverbal Behavior Observation: Collaborative Gaming Method for Prediction of Conflicts during Long-Term Missions

This paper presents a method for monitoring the mental state of small isolated crews during long-term missions (such as space missions, polar expeditions, submarine crews, and meteorological stations). It combines the records of a negotiation game with monitoring of the players' nonverbal behavior. We analyze the records of a negotiation game played by crew members who were placed in an isolated environment for 105 days during the Mars-500 experiment. In contrast to previously drawn conclusions, the outcomes of the analysis show that there was no significant deviation from the players' rational choices. We propose an extension of the method that includes monitoring of the players' nonverbal behavior in addition to recording the game records. The method focuses on those aspects of psychological and sociological states that are crucial for the performance of the crew. In particular, we focus on measuring emotional stress, initial signs of conflict, trust, and the ability to collaborate.

Natalia Voynarovskaya, Roman Gorbunov, Emilia Barakova, Rene Ahn, Matthias Rauterberg
Engaging Autistic Children in Imitation and Turn-Taking Games with Multiagent System of Interactive Lighting Blocks

In this paper, game scenarios that aim to establish elements of cooperative play, such as imitation and turn taking, between children with autism and a caregiver are investigated. A multiagent system of interactive blocks is used to facilitate the games. The training elements include verbal description followed by imitation of video-modeled play episodes. By combining this method with the tangible multiagent platform of interactive blocks (i-blocks), children with autism could imitate play episodes that involved turn taking with a caregiver. The experiment showed that most of the children managed to imitate the play scenarios after video modeling, and to repeat the behaviors with the tangible and appealing block platform. When all the actions were well understood by the autistic children, they willingly performed turn-taking cooperative behaviors, which they normally do not do.

Jeroen C. J. Brok, Emilia I. Barakova

Tools and Networks

Multiple Page Recognition and Tracking for Augmented Books

An augmented book is an application that augments a real book with virtual 3D objects via AR technology. For augmented books, some markerless methods have been proposed so far; however, they can only recognize one page at a time. This leads to restrictions on the utilization of augmented books. In this paper, we present a novel markerless tracking method capable of recognizing and tracking multiple pages in real time. The proposed method builds on our previous work using the generic randomized forest (GRF). The previous work finds one page in the entire image using the GRF, whereas the proposed method detects multiple pages by dividing an image into subregions, applying the GRF to each subregion and discovering spatial locality in the GRF results.

Kyusung Cho, Jaesang Yoo, Jinki Jung, Hyun S. Yang
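
The subregion scheme described above can be sketched as follows. This is an illustrative outline only, with a placeholder per-tile classifier standing in for the paper's generic randomized forest; the grid size and return format are assumptions.

```python
# Minimal sketch of the subregion idea: classify each image tile independently,
# then group neighbouring tiles that vote for the same page. The classifier here is a
# placeholder standing in for the generic randomized forest (GRF) of the paper.

import numpy as np

def detect_pages(image: np.ndarray, classify_tile, grid=(4, 4)):
    """Split the image into grid tiles, classify each tile to a page id (or None),
    and return {page_id: [tile indices]} so spatially local votes can be grouped."""
    h, w = image.shape[:2]
    th, tw = h // grid[0], w // grid[1]
    votes = {}
    for r in range(grid[0]):
        for c in range(grid[1]):
            tile = image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            page_id = classify_tile(tile)          # e.g. a trained GRF in the real system
            if page_id is not None:
                votes.setdefault(page_id, []).append((r, c))
    return votes

# Usage with a dummy classifier that "recognizes" bright tiles as page 0.
dummy = lambda tile: 0 if tile.mean() > 128 else None
print(detect_pages(np.full((400, 400), 200, dtype=np.uint8), dummy))
```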
Online Scene Modeling for Interactive AR Applications

Augmented reality applications require a 3D model of the environment to provide an even more realistic experience. Unfortunately, most research on 3D modeling has so far been restricted to an offline process, which conflicts with characteristics of AR such as real-time and online experience. In addition, it is hardly possible to generate a 3D model of the whole world in advance, nor to transfer the burden of 3D model generation to the user, which limits the usability of AR. Thus, 3D model generation needs to move from an offline stage to an online stage. In this paper, we propose an online scene modeling method to generate a 3D model of a scene, based on keyframe-based SLAM, which supports the AR experience even in an unknown scene by generating a map of 3D points. The scene modeling process is somewhat computationally expensive, but it does not restrict the real-time property of AR because it is executed as a background process. Therefore, a user is provided with interactive AR applications that support interactions between the real and virtual world even in an unknown environment.

Jaesang Yoo, Kyusung Cho, Jinki Jung, Hyun S. Yang
Unnecessary Image Pair Detection for a Large Scale Reconstruction

This paper proposes an algorithm to detect unnecessary image pairs for efficient structure from motion. Since an image pair with a small baseline is considered a poor condition for reconstruction, we focus on detecting cameras that are located close together. In this paper we introduce the term "remoteness", which indicates the distance between two images. The remoteness is not affected by the images' intrinsic parameters because the camera intrinsic matrix is applied to put the extracted features into normalized coordinates. The remoteness is computed using feature disparity in normalized coordinates. Therefore, we can detect a redundant image pair captured at nearly the same position without performing reconstruction. The proposed algorithm is validated by experimental results on the Notre Dame image set.

Jaekwang Lee, Chang-Joon Park
Online Gaming Traffic Generator for Reproducing Gamer Behavior

In this paper, we propose an online gaming traffic generator reflecting user behavior patterns. We analyzed the packet size and inter-departure time distributions of a popular FPS game (Left4Dead) and MMORPG (World of Warcraft) to regenerate gaming traffic. The proposed traffic generator generates an inter-departure time and a gaming packet based on an analytical model of gamer behavior, then transmits the packet according to the inter-departure time. Packet generation results show that the generated packets for World of Warcraft differ considerably from the analytical model, unlike Left4Dead. This is caused by TCP's Nagle algorithm and delayed acknowledgments. Thus, we disabled the Nagle algorithm in the proposed traffic generator. The generation results show that the revised traffic generator guarantees goodness of fit of the generated traffic distribution.

Kwangsik Shin, Jinhyuk Kim, Kangmin Sohn, Changjoon Park, Sangbang Choi
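
The abstract describes sampling packet sizes and inter-departure times from an analytical model and disabling the Nagle algorithm. A minimal Python sketch of that idea follows; the distributions, their parameters, and the server address are assumptions, not the paper's fitted models.

```python
# Minimal sketch of a gamer-behavior traffic generator: sample packet size and
# inter-departure time from a fitted model, disable Nagle (TCP_NODELAY), and send.
# Distributions and the server address are hypothetical placeholders.

import random
import socket
import time

def generate_traffic(host: str, port: int, n_packets: int = 100) -> None:
    sock = socket.create_connection((host, port))
    # Disable the Nagle algorithm so small packets are not coalesced,
    # which would otherwise distort the generated inter-departure times.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    try:
        for _ in range(n_packets):
            size = max(1, int(random.lognormvariate(4.0, 0.5)))  # packet size model (assumed)
            idt = random.expovariate(1 / 0.05)                   # inter-departure time model (assumed)
            time.sleep(idt)
            sock.sendall(bytes(size))                            # zero-filled payload of that size
    finally:
        sock.close()

# generate_traffic("127.0.0.1", 9000)  # requires a listening server
```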

Game Theory and User Studies

Click or Strike: Realistic versus Standard Game Controls in Violent Video Games and Their Effects on Aggression

The motion detection technology used in innovative game controlling devices like the Nintendo Wii-Remote® provides experiences of realistic and immersive game play. In the present study (N=62) it was tested whether this technology may also provoke stronger aggression-related effects than standard forms of interaction (i.e., keyboard and mouse). With the aid of a gesture recognition algorithm, a violent action role-playing game was developed to compare different modes of interaction within an otherwise identical game environment. In the Embodied Gestures condition participants performed realistic striking movements that caused the virtual character to attack and kill other in-game characters with a club or sword. In the Standard Interaction condition attacks resulted from simple mouse clicks. After the game session, participants showed a similar increase in negative feelings in both groups. When provided with ambiguous scenarios, however, participants in the Embodied Gestures condition tended to show more hostile cognitions (i.e., anger) than the Standard Interaction group. Results further corroborate the complexity of aggression-related effects in violent video games, especially with respect to situational factors like realistic game controls.

André Melzer, Ingmar Derks, Jens Heydekorn, Georges Steffgen
Logos, Pathos, and Entertainment

Various new forms of entertainment using information and media technologies have emerged and been accepted among people all over the world. Casual and serious games, as well as communication using mobile phones, blogs, and Twitter, are such kinds of new entertainment. It is important to discuss the basic characteristics of such entertainment and to understand the direction to which these new forms are leading human societies. This paper provides a comparative study of entertainment between developing countries and developed countries, and between ancient times and the present day. The future relationship between entertainment and society is also described.

Ryohei Nakatsu
The Video Cube Puzzle: On Investigating Temporal Coordination

We have created a novel computer-based 3D puzzle, named the Video Cube Puzzle, to investigate human beings' temporal coordination abilities. Ten adult participants were studied solving ten cubic video puzzles of varying difficulty using a within-subject design. The ten puzzles have two segmentation variations, 2x2x2 and 3x3x3, and five texture variations: solid colours and four videos of drastically different content. Only 60% of the subjects were able to complete the entire problem set. The results indicate that random imagery and "active" videos make for easier Video Cube Puzzles. Similarly, a geometric increase in difficulty was noted as the number of segments in the puzzle increased. The challenging nature of temporal video cube puzzles appears to be partly due to people's poor ability to process temporal information when using a spatial representation of the timeline in a three-dimensional volume. Additional studies are suggested to explore this further. As a new type of game, however, the Video Cube Puzzle allows the complexity of the puzzle to be easily varied from simple to extremely complex, providing a continuous pathway of skill and challenge that leads to a satisfying experience when the puzzle is solved.

Eric Yim, William Joseph Gaudet, Sid Fels
Emotions: The Voice of the Unconscious

In the paper the idea is presented that emotions are the result of a high dimensional optimization process happening in the unconscious mapped onto the low dimensional conscious. Instead of framing emotions as a separate subcomponent of our cognitive architecture, we argue for emotions as the main characteristic of the communication between the unconscious and the conscious. We see emotions as the conscious experiences of affect based on complex internal states. Based on this holistic view we recommend a different design and architecture for entertainment robots and other entertainment products with ‘emotional’ behavior. Intuition is the powerful information processing function of the unconscious while emotion is the result of this process communicated to the conscious. Emotions are the perception of the mapping from the high dimensional problem solving space of the unconscious to the low dimensional space of the conscious.

Matthias Rauterberg

Short Papers

Game Theory, User Studies and Story Telling

Analyzing the Parameters of Prey-Predator Models for Simulation Games

We describe and analyze emergent behavior and its effects for a class of prey-predator simulation models. The simulation uses rule-based agent behavior and follows a prey-predator structure modulated by a number of user-assigned parameters. As part of our analysis, we present key parameter estimations for mapping the prey-predator simulation parameters to a functional relationship with the LV (Lotka-Volterra) model, and show how the parameters interact and drive the evolution of the simulation.

Seongdong Kim, Christoph Hoffmann, Varun Ramachandran
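
For readers unfamiliar with the LV model referenced above, the following Python sketch integrates the classic Lotka-Volterra equations with simple Euler steps; the coefficients are illustrative defaults, not the parameter estimations from the paper.

```python
# Minimal sketch of the Lotka-Volterra (LV) prey-predator model the simulation
# parameters are mapped to. The coefficients below are illustrative, not the paper's.

def lotka_volterra(prey, predators, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4,
                   dt=0.01, steps=1000):
    """Integrate dx/dt = alpha*x - beta*x*y, dy/dt = delta*x*y - gamma*y with Euler steps."""
    history = []
    for _ in range(steps):
        dx = (alpha * prey - beta * prey * predators) * dt
        dy = (delta * prey * predators - gamma * predators) * dt
        prey, predators = prey + dx, predators + dy
        history.append((prey, predators))
    return history

# Example run: populations oscillate, mirroring the emergent cycles of the agent simulation.
final_prey, final_pred = lotka_volterra(prey=10.0, predators=5.0)[-1]
print(round(final_prey, 2), round(final_pred, 2))
```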
Analyzing Computer Game Narratives

In many computer games narrative is a core component with the game centering on an unfolding, interactive storyline which both motivates and is driven by the game-play. Analyzing narratives to ensure good properties is thus important, but scalability remains a barrier to practical use. Here we develop a formal analysis system for interactive fiction narratives. Our approach is based on a relatively high-level game language, and borrows analysis techniques from compiler optimization to improve performance. We demonstrate our system on a variety of non-trivial narratives analyzing a basic reachability problem, the path to win the game. We are able to analyze narratives orders of magnitude larger than the previous state-of-the-art based on lower-level representations. This level of performance allows for verification of narrative properties at practical scales.

Clark Verbrugge, Peng Zhang
Cultural Computing – How Can Technology Contribute the Spiritual Aspect of Our Communication?

The author is carrying out technology studies to explore and expand human emotions, sensibility, and consciousness by making innovative use of artistic creativity. We develop interfaces for experiencing and expressing the "essence of culture", such as human feelings, ethnicity, and story. History has shown that human cultures have common and unique forms, such as behavior and grammar. We suggest a computer model for that process and a method of interactive expression and experiencing cultural understanding using IT, called "cultural computing". We particularly examine Japanese culture, although it is only a small subject of computing.

Naoko Tosa

Interaction and User Experience

System and Context – On a Discernable Source of Emergent Game Play and the Process-Oriented Method

Mobile games are based on the physical movement of players in a game world, combining real world with virtual dimensions. As the real world defies control, the magic circle, the border of the game world, becomes permeable for influences of everyday life. Neither the players nor the designers nor the researchers are able to foresee and fully control the consequences of players’ actions in this world. In our paper we introduce a case study. Within this empirical study the difference between the game as a system on the one hand and the context of play on the other hand becomes discernable as a source of emergent game play. We then elaborate on its meaning for the process-oriented method.

Barbara Grüter, Miriam Oks, Andreas Lochwitz
Re-envisioning the Museum Experience: Combining New Technology with Social-Networking

The goal of the project was to design an integrated system for the California Academy of Sciences that combined new technology with a social-networking based website to promote educational learning. Five mini-games were developed for the iPad and connected to a series of websites through a database. The use of new technology drew in users that would not have otherwise engaged in the experience. Connecting with a social-networking website opens up many future possible implications for expanding edutainment.

Madhuri Koushik, Eun Jung Lee, Laura Pieroni, Emily Sun, Chun-Wei Yeh
Interactive Environments: A Multi-disciplinary Approach towards Developing Real-Time Performative Spaces

The research paper elaborates on a series of real-time information-exchange-driven design-research experiments conducted by the Hyperbody research group (HRG), Faculty of Architecture, TU Delft. These interactive spatial prototypes, while successfully integrating the digital with the physical domain, foster multiple usability of space and are appropriately termed 'The Muscle Projects', after the pneumatic muscle driven actuation technologies used in each project. The interactive nature of the projects is realized by harnessing a synergistic merger between the fields of ambient sensing, control systems, architectural design, pneumatic systems and computation (real-time game design techniques). The prototypes are thus visualized as complex adaptive systems, continually engaged in activities of data exchange and optimal augmentation of their morphologies in accordance with contextual variations.

Nimish Biloria
Explorations in Player Motivations: Virtual Agents

Creating believable agents with personality is a popular research area in game studies but academic research in this area usually focuses on one facet of personality - for example, only on moods or character traits. The present study proposes a motivational framework to predict goal-directed behaviour of virtual agents in a computer game and explores the opportunities of using personality inventories based on the same motivational framework to design virtual agents with personality. This article claims that motivation to reach a goal is influenced by psychological needs which are represented with an equation that determines the strength of a character’s motivational force. The framework represented by this study takes into account psychological needs and their interrelations for analyzing choices of virtual agents in a computer game.

Barbaros Bostan

Serious Games

Integration of CityGML and Collada for High-Quality Geographic Data Visualization on the PC and Xbox 360

Computer games and serious geographic information systems (GIS) share many requirements with regard to storage, exchange, and visualization of geographic data. Furthermore, there is a demand for high-fidelity photo-realistic and non-photo-realistic visualization. This poses at least two questions: Is there a single data format standard suitable for serious GIS-based applications and computer games that supports state-of-the-art visual quality? How can computer games and serious applications benefit from each other, especially platform-wise? In this paper we investigate both questions by taking a closer look at the CityGML standard in comparison to COLLADA, and we report on our findings in integrating CityGML with mainstream game technology. The main contribution of this paper is a suggested way of integrating important features of CityGML and COLLADA for high-quality visualization, i.e., programmable shader effects, and a demonstration of the feasibility of employing a game console as a cheap and widely available device for geodata visualization and possibly other geodata-centric applications.

Marc Herrlich, Henrik Holle, Rainer Malaka
Virtual Blowgun System for Breathing Movement Exercise

Breathing is the most basic requirement for good health. However, unhealthy breathing, such as overbreathing and hyperventilation, can easily happen without any awareness. We propose an experimental breathing movement exercise system, the Virtual Blowgun System (VBS), which offers an easy way of breathing exercise for people of different physical strength, without space and safety limitations.

Peichao Yu, Kazuhito Shiratori, Jun’ichi Hoshino
Development of a Virtual Electric Wheelchair – Simulation and Assessment of Physical Fidelity Using the Unreal Engine 3

This paper demonstrates how an existing game technology, as a component off-the-shelf, can be used as a basis to build a serious game for assistive technology for disabled people. Using the example of a virtual electric wheelchair simulator, we present how to use a computer game physics engine to achieve a realistic simulation of driving an electric wheelchair in a virtual environment. The focus of the paper is the conversion of the driving characteristics of prevalently used electric wheelchairs into the virtual physics system of the chosen computer game engine. The parameters are systematically balanced between the virtual and the real world to evaluate the realism of the driving characteristics of an electric wheelchair using the integrated physics simulation of the Unreal Engine 3.

Marc Herrlich, Ronald Meyer, Rainer Malaka, Helmut Heck
Event-Based Data Collection Engine for Serious Games

Games with a purpose other than entertainment can be called Serious Games. In this paper, we describe a generic event-based Data Collection Engine (DCE) that has been developed for Serious Games on the Unity game engine. Further, we describe a framework that allows the collected data to be manipulated and fed back into the game in real time. The player experiences the visuals, sounds and the game itself streamed over the web, an enriching multimedia experience that allows him or her to be immersed in the game. By suitably designing the serious game, we can determine the behavior of the player in the real world under the given scenario or other scenarios. The DCE is optimized to collect relevant data streamed online without affecting the performance of the game. It is also highly flexible and can be set up to collect data for any game developed on the Unity engine.

Amith Tudur Raghavendra
Culturally Sensitive Computer Support for Creative Co-authorship of a Sex Education Game

We describe a computer-supported game authoring system that enables educators to co-author a game to help teach sensitive content, specifically sex education. Our approach gives educators the ability to co-author the game and tailor it to the class using a computer-supported interface that draws upon a large cultural database. By targeting the game to the culture of the students, they feel that their values, beliefs and vocabulary are being considered in the game, providing better comprehension of the content and leading to stronger learning engagement, which is helpful for highly charged, sometimes uncomfortable and sensitive material such as sex. We studied our design in the classroom and observed that giving educators co-authorship of the game helps them adopt the use of online games.

Junia C. Anacleto, Johana M. R. Villena, Marcos A. R. Silva, Sidney Fels

Tools and Methods

Real-Time Caustics in Dynamic Scenes with Multiple Directional Lights

We present a real-time GPU caustics rendering technique for dynamic scenes under multiple directional lights, taking light occlusion into account. Our technique renders caustics cast on receiver objects as well as volumetric caustics. We precompute caustic patterns of caustic objects for several directional lights and store them in caustic images. During rendering, we interpolate the precomputed caustic patterns based on a given light direction. One application of our technique is rendering approximate caustics under environment illumination. To achieve this, we propose an environment cube map segmentation technique that divides cube maps into several light regions, with each region represented by one directional light.

Budianto Tandianus, Henry Johan, Hock Soon Seah
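
A minimal sketch of the interpolation step described above, assuming caustic patterns stored as plain arrays rather than GPU textures; the nearest-direction weighting used here is an illustrative choice, not necessarily the authors' interpolation scheme.

```python
# Minimal sketch of interpolating precomputed caustic patterns by light direction.
# Caustic "images" are plain arrays here; in the real technique they are GPU textures.

import numpy as np

def blend_caustics(light_dir, sample_dirs, caustic_images, k=3):
    """Weight the k precomputed directions closest (by dot product) to the query
    light direction and blend their caustic images accordingly."""
    light_dir = np.asarray(light_dir, dtype=float)
    light_dir /= np.linalg.norm(light_dir)
    dots = np.array([max(np.dot(light_dir, d / np.linalg.norm(d)), 0.0) for d in sample_dirs])
    idx = np.argsort(dots)[-k:]                    # k nearest sampled light directions
    weights = dots[idx] / (dots[idx].sum() + 1e-8)
    return sum(w * caustic_images[i] for w, i in zip(weights, idx))

# Usage with three dummy 2x2 caustic patterns precomputed for axis-aligned lights.
dirs = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
imgs = [np.full((2, 2), v, dtype=float) for v in (0.2, 0.5, 0.9)]
print(blend_caustics((0, 0.7, 0.7), dirs, imgs, k=2))
```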
An Extraction Method of Lip Movement Images from Successive Image Frames in the Speech Activity Extraction Process

In this paper, we propose a method for extracting lip movement images from successive image frames and present the possibility of utilizing lip movement images in the speech activity extraction process of the speech recognition phase. The image frames are acquired from a PC camera under the assumption that facial movement is limited during talking. First, a new lip movement image frame is generated by comparing two successive image frames. Second, fine image noise is removed. A fitness rate is calculated for each separated object image by comparing the lip feature data. Whether a lip movement image is present is determined through verification of the objects and of the three images with the highest fitness rates. As a result of linking the speech and image processing systems, the interworking rate reaches 99.3% even in various illumination environments. It was visually confirmed that lip movement images are tracked and can be utilized in the speech activity extraction process.

Eung-Kyeu Kim, Soo-Jong Lee, Nohpill Park
Rule-Based Camerawork Controller for Automatic Comic Generation from Game Log

We propose a rule-based camerawork controller for a recently proposed comic generation system. Five camerawork rules are derived through an analysis of online-game webcomics about Lineage 2, one rule for each of the five event types: chatting, fighting, moving, approaching, and special. Each rule consists of three parts relating to the three camera parameters: camera angle, camera position, and zoom position. Each camera-parameter part contains multiple shot types whose values indicate the frequency of their usage in the analyzed webcomics. In this paper, comic frames generated with the proposed camerawork controller are shown and compared with those generated with our previous controller based on heuristic rules, confirming the effectiveness of the proposed camerawork controller.

Ruck Thawonmas, Ko Oda, Tomonori Shuda
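
The rule structure described above (shot types per camera parameter, weighted by observed frequency) can be illustrated with the short Python sketch below; the event types follow the abstract, but every weight and shot label is invented for illustration.

```python
# Minimal sketch of a rule-based camerawork controller: each event type carries
# frequency-weighted shot options for angle, position, and zoom. Weights are invented
# for illustration; the paper derives them from analyzed webcomics.

import random

CAMERA_RULES = {
    "chatting": {
        "angle":    [("eye-level", 0.7), ("high", 0.3)],
        "position": [("front", 0.6), ("side", 0.4)],
        "zoom":     [("medium", 0.8), ("close-up", 0.2)],
    },
    "fighting": {
        "angle":    [("low", 0.5), ("dutch", 0.5)],
        "position": [("side", 0.7), ("behind", 0.3)],
        "zoom":     [("long", 0.6), ("close-up", 0.4)],
    },
}

def choose_camerawork(event_type: str) -> dict:
    """Pick one shot type per camera parameter, weighted by usage frequency."""
    rule = CAMERA_RULES[event_type]
    return {param: random.choices([s for s, _ in options],
                                  weights=[w for _, w in options])[0]
            for param, options in rule.items()}

print(choose_camerawork("fighting"))
```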
A Framework for Constructing Entertainment Contents Using Flash and Wearable Sensors

Multimedia interactive contents that can be controlled by a user's motion attract a great deal of attention, especially in entertainment such as gesture-based games. A system that provides such interactive contents detects human motions using several body-worn sensors. To develop such a system, the content creator must have sufficient knowledge about various sensors. In addition, since sensors and contents are deeply coupled, it is difficult to change or add sensors for such contents. In this paper, we propose a framework that helps content creators who do not have sufficient knowledge of sensors. In our framework, an interactive content is divided into two layers: a sensor management layer and a content layer. We confirmed that creators can create interactive contents more easily with our framework.

Tsutomu Terada, Kohei Tanaka
Research on Eclipse Based Media Art Authoring Tool for the Media Artist

A media art content authoring tool based on Eclipse, called the Exhibition Contents Authoring System (ECAS), is presented in this paper. The visual editor of ECAS is implemented using the Graphical Modeling Framework, which is composed of the Eclipse Modeling Framework and the Graphical Editing Framework. The rest of the system was implemented using the Eclipse Rich Client Platform framework. With this tool, artists can present their works easily by dragging and dropping icons, without programming skills.

Songlin Piao, Jae-Ho Kwak, Whoi-Yul Kim
BAAP: A Behavioral Animation Authoring Platform for Emotion Driven 3D Virtual Characters

Emotion, as an important aspect of human intelligence, has been playing a significant role in virtual characters. We propose an improved three-level affective model structured as "personality-emotion-mood" for intelligent and emotional virtual characters. We also present the emotion state space, as well as the emotion updating functions, to generate authentic and expressive emotions. In order to achieve complexity and variety of behaviors, we put forward a behavior organizing structure, the behavior tree, which defines four kinds of behavior organization as well as the behavior tag and behavior message, to manage virtual characters' behaviors. Finally, we present an experimental platform, BAAP, which proves our emotion model and behavior organizing structure to be effective and practical in generating intelligent and emotional behavioral animations.

Ling Li, Gengdai Liu, Mingmin Zhang, Zhigeng Pan, Edwin Song
Choshi Design System from 2D Images

This paper proposes a Choshi design system. Choshi is a new method of paper carving featuring uneven 3D shapes and the unique colors of papers. Choshi, derived from carving overlaid colored papers, has the following three features:

1. Each layer consists of a single piece of paper of one color.

2. The color must be selected from a number of existing colors.

3. Choshi has an overlaid structure where carved papers are overlaid on other carved papers.

The proposed Choshi design system has two goals: to enable a wider variety of people to easily and successfully create Choshi art, and to reduce the difficulty and tedium of creating a Choshi art piece.

Natsuki Takayama, Shubing Meng, Takahashi Hiroki
Player’s Model: Criteria for a Gameplay Profile Measure

Game designers empirically use psychological and sociological player models to create the gameplay of their video games. These models are generally implicit and always informal. A formal analysis of the player model leads to the definition of efficient player behavior profiles. This can have numerous applications, for instance adapting content to the player's ability and interest. Our work tries to find a rational way to assess Player Styles, a concept suggested by Bartle [1] in 1996. The first step, a state of the art of player models, already shows some interesting criteria that can be used to classify players' styles.

Emmanuel Guardiola, Stephane Natkin

Robots and New Interfaces

A Laban-Based Approach to Emotional Motion Rendering for Human-Robot Interaction

We created a motion-rendering system that adds a target emotion to basic movements of a human form robot (HFR) by modifying the movements. Pleasure, anger, sadness or relaxation is considered as the target emotion. This method not only keeps the user interested, but also makes the user perceive the robot's emotions and form an attachment to the robot more easily. An experiment was conducted using a real HFR to test how well our system adds a target emotion to basic movements. The average success rate for adding the target emotion to basic motions was over 60%. This suggests that our method succeeds in adding target emotions to arbitrary movements.

Megumi Masuda, Shohei Kato, Hidenori Itoh
A Biofeedback Game with Physical Actions

We developed a biofeedback game in which players can take other physical actions besides simply "relaxing". We used the skin conductance response to sense a player's surge of excitement and penalized players when they could not attack enemies because they were not calm enough to meet the biofeedback threshold. We conducted a subjective experiment to see whether people found the game enjoyable. Most participants felt the game was enjoyable.

Nagisa Munekata, Teruhisa Nakamura, Rei Tanaka, Yusuke Domon, Fumihiko Nakamura, Hitoshi Matsubara
Dial-Based Game Interface with Multi-modal Feedback

This paper introduces a dial-based haptic interface for a brickout game. Conventionally, brickout games are played with a mouse or a keyboard. However, these input devices provide neither a sufficiently intuitive interface for moving the game paddle nor multi-modal feedback for the user. We developed a haptic game device that gives the user haptic feedback during the game as well as visual and sound feedback. The user can move the paddle by spinning the dial knob and feels various multi-modal effects according to the game context. Basic haptic effects include detent, vibration, friction and barrier. We can generate any combination of these effects by adjusting the amount, frequency, and direction of torque along the rotational path. The result of a user study showed that the proposed haptic dial interface made a simple brickout game more fun and more interesting. Additionally, users were able to focus on the game more easily than when they played using a mouse.

Wanjoo Park, Laehyun Kim, Hyunchul Cho, Sehyung Park
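
As an illustration of combining dial effects by shaping torque over the rotational path, the sketch below sums simple detent, vibration, and barrier torques; all constants and effect formulas are hypothetical, not the device's actual control code.

```python
# Minimal sketch of composing haptic effects on a dial: each effect maps the knob
# angle (or time) to a torque, and the rendered torque is their sum. Constants are illustrative.

import math

def detent(angle, spacing=math.radians(15), strength=0.3):
    """Spring-like pull toward the nearest notch every `spacing` radians."""
    offset = ((angle + spacing / 2) % spacing) - spacing / 2
    return -strength * offset

def vibration(t, amplitude=0.05, freq_hz=40.0):
    """Time-based buzz overlaid on the dial."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t)

def barrier(angle, limit=math.radians(120), strength=2.0):
    """Strong opposing torque past the allowed rotation range."""
    if angle > limit:
        return -strength * (angle - limit)
    if angle < -limit:
        return -strength * (angle + limit)
    return 0.0

def render_torque(angle, t):
    """Combined torque sent to the dial's motor for the current angle and time."""
    return detent(angle) + vibration(t) + barrier(angle)

print(round(render_torque(math.radians(100), t=0.01), 4))
```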
Tangible Interactive Art Using Marker Tracking in Front Projection Environment: The Face Cube

The Face Cube is a work of interactive art targeted at children. To implement this art work, we use a camera-projector system. Instead of rear projection and an edge detection method, we choose a front projection approach and a histogram-based detection method for interaction. This paper describes the design of the Face Cube, marker design for robust interaction, an efficient way to remove projection light from the front projection system for marker recognition, histogram-based marker detection, and marker information management.

Chan-young Bang, Jin Lee, Hye-won Jung, Ok-young Choi, Jong-Il Park
Entertaining Education: User Friendly Cutting Interface for Digital Textbooks

Nowadays, the new paradigm demands digital textbooks that contain interactive content. Our goal is to design a digital textbook providing effective multimedia and a cutting interface for interactive education. To achieve this purpose, we propose a user-friendly cutting interface and interactive animation for digital textbooks. This interface complements the current digital textbook interface, which is mostly text based. We discuss the effectiveness of our interface for elementary students and how much positive effect it has on learning.

Gahee Kim, Hyun-Roc Yang, Kyung-Kyu Kang, Dongho Kim

Posters

Affective Interacting Art

This paper studies the potential of expressing ink-and-wash painting through interaction, and presents a direction that can coincide with modern paintings by developing ink-and-wash painting from a traditional aspect through analyzing the theories and techniques instilled in my works. This work is an interactive visualization, using modern technology, of an oriental cymbidium, which our oriental ancestors painted for mental training. In the old days in the Orient, people used to wipe cymbidium leaves or paint cymbidium for mental training, keeping a cymbidium always by their side. Through the act of wiping cymbidium leaves with utmost care, a cymbidium instilled with ancient philosophical ideas is visualized.

Youngmi Kim, Jongsoo Choi
Flexible Harmonic Temporal Structure for Modeling Musical Instrument

Multipitch estimation is an important and difficult problem in entertainment computing. In this paper, a flexible harmonic temporal structure for modeling musical instruments is proposed for estimating pitch in real music. Unlike previous research, the proposed model performs multipitch estimation according to the specific characteristics of a particular musical instrument and uses the EM algorithm to estimate the parameters of the model. By choosing parameters suited to the characteristics of a specific instrument, the proposed model outperforms the common model.

Jun Wu, Yu Kitano, Takuya Nishimoto, Nobutaka Ono, Shigeki Sagayama
Towards a Service-Oriented Architecture for Interactive Ubiquitous Entertainment Systems

Ubiquitous computing is not only applied to daily activities and integrated into everyday objects, but is also used for entertainment and gaming. In this research, we explore the relevance of ubiquitous computing to entertainment systems on devices such as mobile phones. We introduce a service-oriented architecture for ubiquitous entertainment systems that establishes collaborative relationships between heterogeneous devices to provide users with interactive and ubiquitous entertainment and fun.

Giovanni Cagalaban, Seoksoo Kim
Narrative Entertainment System with Tabletop Interface

We propose the Narrative Entertainment System with Tabletop Interface. The system uses a miniature-shaped interface called the Physical Character. By recognizing the operation of the Physical Character, the virtual actor's behavior is controlled, which offers a method of creating a story while playing with miniatures in a way familiar since childhood. Through intuitive operation of this interface, the user is not only an observer, but also a creator.

Takashi Mori, Katsutoki Hamana, Chencheng Feng, Jun’ichi Hoshino
Automated Composing System for Sub-melody Using HMM: A Support System for Composing Music

We propose an automated composing system for sub-melodies focusing especially on pitch and rhythm. We constructed the system using a Hidden Markov Model (HMM). In a composing experiment, we obtained various melodies depending on the song set used for learning, and the results suggest that this system can learn the features of song sets that are selected while considering music genres, music culture, or nuances of composers.

Ryosuke Yamanishi, Keisuke Akita, Shohei Kato
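
A toy illustration of generating a melody from an HMM, in the spirit of the system above but not its implementation: the pitch states, transition probabilities, and duration emissions below are invented rather than learned from a song set.

```python
# Minimal sketch of generating a sub-melody by sampling from an HMM whose states
# stand for pitches. The tiny transition/emission tables are invented; the real
# system learns them from a training song set.

import random

STATES = ["C4", "D4", "E4", "G4"]                     # hypothetical pitch states
TRANS = {                                             # P(next_state | state)
    "C4": [0.2, 0.4, 0.3, 0.1],
    "D4": [0.3, 0.2, 0.4, 0.1],
    "E4": [0.2, 0.3, 0.2, 0.3],
    "G4": [0.4, 0.1, 0.3, 0.2],
}
DURATIONS = [0.5, 1.0]                                # emitted note lengths (beats)
EMIT = {s: [0.6, 0.4] for s in STATES}                # P(duration | state)

def sample_sub_melody(length=8, start="C4"):
    """Walk the Markov chain, emitting (pitch, duration) pairs."""
    melody, state = [], start
    for _ in range(length):
        duration = random.choices(DURATIONS, weights=EMIT[state])[0]
        melody.append((state, duration))
        state = random.choices(STATES, weights=TRANS[state])[0]
    return melody

print(sample_sub_melody())
```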
A Study on the Development of Mobile Based SNS-UCC Writing Tool

Driven by the development of wireless services with the advent of smartphones such as the iPhone, wireless services are drawing more attention than PC-based wired Internet services. Since mobile devices are judged to be in an environment suitable for providing SNS, research on them is also needed. This research proposes a mobile-based SNS-UCC design through mobile-based SNS-UCC writing tool development cases.

Hae Sun No, Dae Woong Rhee
Project Sonology: An Experimental Project Exploring the Possibilities of Sound and Audio as the Primary Element of Interactive Entertainment

The goal of the project is to show that audio can successfully be the primary element of interactive entertainment by delivering pure audio experiences that demonstrate both the creative potential and emotional power of an audio experience. We develop two proofs of concept with the technical foundation supported by prototypes. The core technology is a combination of a 3D game engine and an audio engine used to build sound environments. The interactions are based on 3D trackers and surround sound headphones.

Shih-Han Chan, Dae Hong Kim, Eugene Massey, Katelyn Mueller, Fadzuli Said
Multipresence-Enabled Mobile Spatial Audio Interfaces

Mobile telephony offers an interesting platform for building multipresence-enabled applications that utilize the phone as a social or commercial assistant. The main objective of this research is to develop multipresence-enabled audio windowing systems for visualization, attention, and privacy awareness of narrowcasting (selection) functions in collaborative virtual environments (CVEs) for mobile devices such as 3rd- and 4th-generation mobile phones. The mobile audio windowing system enhances auditory information on mobile phones and encourages the modernization of office- and mobile-based conferencing.

Owen Noel Newton Fernando, Michael Cohen, Adrian David Cheok
Fluxion: An Innovative Fluid Dynamics Game on Multi-touch Handheld Device

We explore the possibility of implementing real-time fluid simulation on the iPhone to create an innovative game experience. Using fluid dynamics and the three states of water as game mechanics, players can manipulate fluid and solve puzzles through the unique input controls of the iPhone, such as the accelerometer and multi-touch. We implement particle-based fluid simulation and integrate our particle system with a physics engine, Box2D, to realize interactions between particles and rigid bodies. The playtest showed that Fluxion is not only a fun game, but also educational, since it gives players the basic concepts of how fluid behaves in the real world.

Chun-Ta Chen, Jy-Huey Lin, Wen-Chun Lin, Fei Wang, Bing-Huan Wu
Robotic Event Extension Experience

In this paper we present our experiences from extending the Eurobot contest, for students up to the age of 30, with a category for pupils up to 18. We show two different models of the extension and present the experience acquired after implementing them in 2008 and 2009.

David Obdrzalek
A Sound Engine for Virtual Cities

This position paper outlines the specification and implementation of a general-purpose sound engine for virtual cities. The work is motivated by the Terra Dynamica project funded by the French government. We present a state of the art of virtual urban sound spaces, emphasizing various types of virtual cities and their relationships to auditory space. We then discuss the choice of a sound engine, sound spatialization, and scene description languages as ongoing work.

Shih-Han Chan, Cécile Le Prado, Stéphane Natkin, Guillaume Tiger
NetPot: Easy Meal Enjoyment for Distant Diners

We capture key factors of a group meal with communication and interface technologies to make a meal more enjoyable for diners who cannot be collocated. We determined three factors of a popular group meal, Chinese hotpot, that are essential for a group meal experience: interacting as a group with food, a central shared hotpot, and a feeling that others are nearby. We developed a prototype system to maintain these factors for an online meal with remote friends. Our technique is of interest to designers creating technology for isolated diners.

Zoltan Foley-Fisher, Vincent Tsao, Johnty Wang, Sid Fels
Glasses-Free 3D Display System Using Grating Film for Viewing Angle Control

We developed a glasses-free 3D stereoscopic display using an LCD panel and a special grating film for stereoscopic viewing. The display screen is divided in half so that the left and right regions provide the stereoscopic images for the left and right eyes. Because the two stereoscopic images are not in the same position, it is difficult for the observer to view the 3D image by stereoviewing. The grating film solves this problem because it shifts both the left and right images to the same position. Moreover, this grating film enables glasses-free 3D viewing because of its view control effect. As a result, each eye can perceive its own stereoscopic image without special glasses such as polarized glasses.

Masahide Kuwata, Kunio Sakamoto
Omni-Directional Display System for Group Activity on Life Review Therapy

The authors have researched a support system for reminiscence and life review activities. This support system consists of an interactive tabletop display and an interface system. In reminiscence and life review activities, a therapist puts pictures on the table to trigger conversation. However, some observers may perceive upside-down images if they sit opposite the therapist. To overcome this problem, we have developed a display system that can be viewed from any direction. In this paper, we propose a 4-view tabletop flat display system for cooperative activity at a round table.

Tomoyuki Honda, Kunio Sakamoto
Light-Weight Monocular 3D Display Unit Using Polypyrrole Linear Actuator

The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. We have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth. This vision unit needs image shift optics for generating monocular parallax images, but a conventional image shift mechanism is heavy because of its linear actuator. To solve this problem, we developed a light-weight 3D vision unit for presenting monocular stereoscopic images using a polypyrrole linear actuator.

Yuuki Kodama, Kunio Sakamoto
Easy Robot Programming for Beginners and Kids Using Command and Instruction Marker Card

Robots usually have multiple components, such as motors, sensors, microcontrollers and embedded computers. Programming a robot to control motors and measure the output of sensors is complicated, so it is troublesome for beginners to write a robot control program. To solve this problem, this paper describes card-based programming for controlling a robot.

Masahiro Nishiguchi, Kunio Sakamoto
Automatic Mobile Robot Control and Indication Method Using Augmented Reality Technology

A mobile robot is an automatic machine that is capable of movement in a given environment. Many techniques of automatic control have been proposed. A line tracer, which follows a white line on the floor, is one of the most popular robots. The authors developed a mobile robot that moves to an indicated point automatically; all the user has to do is indicate a goal point. In this paper, we propose an automatic mobile robot system controlled by a marker and remote indication using augmented reality technology.

Koji Ohmori, Kunio Sakamoto
Eye Contact Communication System between Mobile Robots Using Invisible Code Display

The authors have been developing mobile robots that can cooperate with each other. The robots must communicate with each other in order to cooperate, so inter-robot communication is a very important problem to be solved. These robots generally utilize a wireless transmission system in which the transmission sets send and receive on the same frequency or channel to establish radio communication (simplex operation). The robots cannot start communicating if the two sets use different frequency channels, so it is important to perform an initial configuration to establish radio transmission at first contact between unfamiliar mobile robots. To solve this problem, this paper describes an information transmission system using an invisible code on the displays that show the expression of the robot's eyes.

Takeru Furukawa, Kunio Sakamoto
The ‘Interactive’ of Interactive Storytelling: Customizing the Gaming Experience

In this article, we define interactive storytelling as a gaming experience where the form and content of the game are customized in real time and tailored to the preferences and needs of the player to maximize enjoyment. The primary focus of interactive storytelling should not be on the attributes of the technology or characteristics of the medium, such as AI techniques, planning formalisms, story representations, etc., but on the different interaction levels provided by computer games and the basic components of player enjoyment, such as difficulty levels and gaming rewards. In conducting an analysis of interactive storytelling systems, we propose a user-centered approach to interactive storytelling by defining different customization levels for an optimum gaming experience.

Barbaros Bostan, Tim Marsh
Remote Context Monitoring of Actions and Behaviors in a Location through the Usage of 3D Visualization in Real-Time

“Remote Context Monitoring of Actions and Behavior in a Location Through the Usage of a 3D Visualization in Real-time” is a software application designed to read large amounts of data from a database and use that data to recreate the context in which events occurred, in order to improve understanding of the data.

John Conomikes, Zachary Pacheco, Salvador Barrera, Juan Antonio Cantu, Lucy Beatriz Gomez, Christian de los Reyes, Juan Manuel Mendez Villarreal, Takao Shime, Yuki Kamiya, Hideki Kawai, Kazuo Kunieda, Keiji Yamada
Wave Touch: Educational Game on Interactive Tabletop with Water Simulation

In this paper, we present an underwater exploration game called Wave Touch, designed specifically for a category of devices known as interactive tabletops. The game provides users with a fun way to learn about important historical artifacts. An emphasis is placed on making Wave Touch entertaining to the user, a goal that is satisfied through the use of interactive tabletops and realistic water simulation. We also present the techniques we used to enable real-time water simulation effects in the game.

JoongHo Lee, Won Moon, Kiwon Yeom, DongWook Yoon, Dong-young Kim, JungHyun Han, Ji-Hyung Park
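
To make the real-time water effect concrete, here is a minimal sketch of one common approach, a discretised 2D wave equation stepped on a height field. The grid size, damping, and wave-speed constants are placeholders, not details taken from the paper.

```python
import numpy as np

N = 256                               # grid resolution (assumed)
height = np.zeros((N, N))             # water surface displacement
velocity = np.zeros((N, N))           # vertical velocity of the surface
DAMPING, C2, DT = 0.995, 0.25, 1.0    # illustrative constants

def touch(x, y, strength=1.0):
    # A finger press on the tabletop pushes the surface down locally.
    height[y, x] -= strength

def step():
    # Discrete Laplacian of the height field (4-neighbour stencil).
    lap = (np.roll(height, 1, 0) + np.roll(height, -1, 0) +
           np.roll(height, 1, 1) + np.roll(height, -1, 1) - 4 * height)
    velocity[:] = (velocity + C2 * lap * DT) * DAMPING
    height[:] = height + velocity * DT
```
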
Healthy Super Mario: Tap Your Stomach and Shout for Active Healthcare

The purpose of a game is to provide fun and enjoyment to its users. Most games, however, cause physical dysfunction while providing that fun experience. As a solution to this problem, body-movement-based games such as those for the Nintendo Wii were introduced; however, they prevent physical dysfunction only by adopting body movement as game input, and they require a special controller to play. In this research, we suggest a new game input style: tapping the stomach and shouting. It not only prevents such dysfunction but also promotes health.

Jaewook Jung, Shin Kang, Hanbit Park, Minsoo Hahn
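
As a minimal sketch of the "shouting" half of the proposed input style, the code below flags a short microphone buffer as a shout when its RMS loudness crosses a threshold. The threshold value is an assumed placeholder (samples normalised to [-1, 1]), and the stomach-tap sensing is hardware-specific and not modelled.

```python
import numpy as np

SHOUT_RMS_THRESHOLD = 0.2   # assumed value, tune per microphone and player

def is_shout(samples):
    # samples: a short buffer of audio samples normalised to [-1, 1].
    x = np.asarray(samples, dtype=np.float64)
    rms = np.sqrt(np.mean(x * x))
    return rms > SHOUT_RMS_THRESHOLD
```
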
Interactive Manipulation Model of Group of Individual Bodies for VR Cooking System

A new high-speed manipulation model for a group of individual bodies (GIB), such as sand or lava, is proposed in this paper. One goal of this research is to use the model in applications such as home VR cooking systems, for example to represent a mass of fried rice. In this model, the GIB is represented as a height field, and variation in the height field represents movement of the GIB. Transformation of the GIB over wide spaces, beyond adjacent grid cells, is also considered. The GIB is treated as one object, which means that the calculation is done efficiently on a single object and its transformation is computed quickly. In this model, interactivity takes priority over physically correct movement of the GIB.

Atsushi Morii, Daisuke Yamamoto, Kenji Funahashi
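
The abstract outlines a height-field model in which variation of the height field moves the material. As a hedged illustration of how such a model can be realised, the sketch below relaxes a grid so that material whose local slope exceeds an angle of repose flows to lower neighbours; the update rule and constants are assumptions, not the authors' exact algorithm.

```python
import numpy as np

N = 64
h = np.zeros((N, N))          # height of the granular material per cell
MAX_SLOPE = 1.0               # maximum stable height difference (assumed)
FLOW = 0.25                   # fraction of the excess moved per step (assumed)

def relax(h):
    out = h.copy()
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        # Height difference to the neighbour in this direction.
        diff = h - np.roll(h, (dy, dx), axis=(0, 1))
        excess = np.clip(diff - MAX_SLOPE, 0.0, None)
        move = FLOW * excess
        out -= move                                       # leaves the high cell
        out += np.roll(move, (-dy, -dx), axis=(0, 1))     # arrives downhill
    return out
```

Because the whole field is updated with array operations, each step touches every cell once, which is the kind of cheap, object-level calculation the abstract emphasises.
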
Smile Like Hollywood Star: Face Components as Game Input

Most commercial games use controllers manipulated with the hands or feet. Recently, some studies have tried to use facial expressions or emotions as game input, but these did not directly manipulate the game; they only adjusted its difficulty. In this study, we suggest a new type of game input interface that uses face components directly as input, and we present one offline game and two online games that use this method.

Jaewook Jung, Hanbit Park, Shin Kang, Minsoo Hahn
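
The abstract does not specify how the face components are detected. As one hedged possibility, the sketch below uses OpenCV's stock Haar face cascade and maps the horizontal face position to a steering value, illustrating direct game control by a face component; the mapping is an illustrative assumption.

```python
import cv2

# The cascade file ships with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def steering_from_face(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) == 0:
        return 0.0                        # no face detected: neutral input
    x, y, w, h = faces[0]
    centre = x + w / 2.0
    # Map the face centre to a steering value in [-1, 1].
    return (centre / gray.shape[1]) * 2.0 - 1.0
```
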
Study on an Information Processing Model with Psychological Memory for a Kansei Robot

In this paper, we propose an information processing model for a kansei robot. The model handles memory based on human psychology. We expect that a robot incorporating the model can exhibit human-like characteristics because it uses psychological memory. To verify the model, we first compare the results of an experiment performed using this model with those of an actual psychological experiment; the comparison suggests that the memory functions of the model are similar to human memory functions. Second, we conduct a movement-learning process to verify that a robot implementing the model learns movements for moving to many places and thereby decreasing its curiosity.

Yuta Kita, Masataka Tokumaru, Noriaki Muranaka
Vegetation Interaction Game: Digital SUGOROKU of Vegetation Succession for Children

In this study, we redesign and develop a new digital sugoroku game based on the phenomenon of vegetation succession. A practical evaluation consisting of game play and a fieldwork activity was conducted in an elementary school. The results showed that the game was effective in stimulating the interest of the students who participated and was able to support their learning in a joyful way.

Akiko Deguchi, Shigenori Inagaki, Fusako Kusunoki, Etsuji Yamaguchi, Yoshiaki Takeda, Masanori Sugimoto
Penmanship Learning Support System: Feature Extraction for Online Handwritten Characters

This paper proposes a feature extraction method for online handwritten characters for a penmanship learning support system. The system has a database of model characters and evaluates the characters a learner writes by comparing them with the models. However, if feature information had to be prepared for every character, it would have to be entered each time a model character is added. Therefore, we propose a method of automatically extracting features from handwritten characters. In this paper, we examine whether it correctly identifies the turns in strokes as features. The resulting extraction rate is 80%, and in the remaining 20% of cases the method extracted an area near a turn.

Tatsuya Yamaguchi, Noriaki Muranaka, Masataka Tokumaru
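
To make the idea of extracting turns concrete, here is a small sketch that flags the points of an online stroke (a list of sampled pen positions) where the writing direction changes sharply. The 45-degree threshold is an illustrative assumption, not the value used in the paper.

```python
import math

TURN_THRESHOLD = math.radians(45)   # assumed direction-change threshold

def find_turns(stroke):
    # stroke: list of (x, y) pen positions sampled along one stroke.
    turns = []
    for i in range(1, len(stroke) - 1):
        (x0, y0), (x1, y1), (x2, y2) = stroke[i - 1], stroke[i], stroke[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)   # incoming direction
        a2 = math.atan2(y2 - y1, x2 - x1)   # outgoing direction
        # Wrap the angle difference into [-pi, pi] before taking its size.
        delta = abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
        if delta > TURN_THRESHOLD:
            turns.append(i)                 # index of a candidate turn point
    return turns
```
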
Development of Wall Amusements Utilizing Gesture Input

Focusing on an infrared camera and a near-infrared radar, we have developed a system for new amusements that can be operated by gestures toward a screen or by moving one's body in front of a screen projected on a wall. The infrared camera is used for operations performed by means of gestures, and the near-infrared radar is used for operations performed by larger movements of the body, detecting the position of the person or the state of his or her feet near the floor surface. The screen is projected on the wall by an ultra-short-throw projector, so the system can be set up anywhere there is a wall of sufficient size.

Takahisa Iwamoto, Atsushi Karino, Masayuki Hida, Atsushi Nishizaki, Tomoyuki Takami
Study on an Emotion Generation Model for a Robot Using a Chaotic Neural Network

This paper proposes an emotion-generation model for complex emotional change using a chaotic neural network (CNN). By using a CNN, the proposed model addresses the problem, noted in past studies, that robotic emotion changes are simplistic. The model uses the principle of an adaptation level, which is used in Russell's emotion model, to generate emotion. This paper evaluates the effectiveness of this approach in simulation and shows that the model can express a change of “adaptation”. In addition, through the chaos of the CNN, the proposed model can express different changes even if the CNN's input values remain the same.

Hiroyuki Sumitomo, Masataka Tokumaru, Noriaki Muranaka
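
For readers unfamiliar with chaotic neural networks, the sketch below shows a single chaotic neuron of the Aihara type, the kind of unit such a network is commonly built from. The parameter values are illustrative, and the mapping to Russell's emotion model described in the abstract is not reproduced here.

```python
import math

K, ALPHA, A, EPS = 0.7, 1.0, 0.5, 0.02   # illustrative parameter values

def f(y):
    # Steep sigmoid output function.
    return 1.0 / (1.0 + math.exp(-y / EPS))

def step(y, external_input=0.0):
    # Internal state decays (K), is inhibited by the refractory term
    # (ALPHA * f(y)), and is driven by a bias (A) plus any external input.
    y_next = K * y - ALPHA * f(y) + A + external_input
    return y_next, f(y_next)

# Depending on the parameter values, the output sequence can change in a
# complex, non-repeating way even when the external input stays constant,
# which is the property the emotion model relies on.
y, outputs = 0.1, []
for _ in range(100):
    y, x = step(y)
    outputs.append(x)
```
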
Interactive Tabu Search vs. Interactive Genetic Algorithm

We propose an interactive tabu search (ITS) to be used for supporting the development of products that fit a human's feeling. Interactive evolutionary computation (IEC) is one of the technologies used for such development support, in which a computer and a person communicate; the interactive genetic algorithm (IGA) is the method generally used in IEC. A major problem with IEC is the burden placed on the user, who must evaluate multiple solution candidates. Using the ITS instead of the IGA may reduce this burden, because the ITS user chooses only his or her favorite candidate from among the multiple solution candidates. We compared the search performance of the ITS and the IGA in simulations; the search performance of the ITS exceeded that of the IGA by 2% to 10%.

Tatsuya Hirokata, Masataka Tokumaru, Noriaki Muranaka
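
To make the contrast with the IGA concrete, the following sketch shows the core loop of an interactive tabu search: the system proposes neighbouring candidates, the user picks only the one they like best, and a bounded tabu list keeps the search from cycling. The bit-string encoding and the neighbourhood operator are illustrative assumptions.

```python
import random

def neighbours(candidate, n=8):
    # Generate n variants by flipping one random bit of a binary genotype.
    result = []
    for _ in range(n):
        c = list(candidate)
        i = random.randrange(len(c))
        c[i] ^= 1
        result.append(tuple(c))
    return result

def interactive_tabu_search(start, ask_user, iterations=20, tabu_size=10):
    # ask_user(candidates) returns the user's single favourite candidate,
    # which is the only evaluation the user has to provide per iteration.
    current, tabu = start, [start]
    for _ in range(iterations):
        candidates = [c for c in neighbours(current) if c not in tabu]
        if not candidates:
            continue
        current = ask_user(candidates)
        tabu.append(current)
        tabu = tabu[-tabu_size:]           # bounded tabu list
    return current
```
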
Instant First-Person Posture Estimation

We propose an instant posture estimation technique that operates using only a stereo image pair with a small (6 cm) baseline. It requires no a priori information about the target user, no background information, and no markers. Because it operates with only a small stereo camera unit, the cameras can move freely. Moreover, if the input image is replaced with a movie or real-time video, the system can be used as a real-time motion tracker. With the proposed technique, robots and computers will be able to communicate non-verbally with unspecified people as well as pre-registered people. The system outputs not only the posture but also the body size and clothing, so the proposed technique can also be used as a calibration procedure for other motion tracking algorithms.

Takafumi Serizawa, Yasuyuki Yanagida
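
As a hedged sketch of the first step such a system needs, the code below converts a small-baseline stereo pair into a depth map using OpenCV's standard block matcher. The baseline and focal length are placeholders, and the paper's own matching and body-part estimation are not reproduced.

```python
import cv2
import numpy as np

BASELINE_M, FOCAL_PX = 0.06, 700.0   # 6 cm baseline; focal length assumed

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def depth_map(left_gray, right_gray):
    # Inputs must be rectified 8-bit grayscale images.
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]   # Z = f * B / d
    return depth
```
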
Monitoring User’s Brain Activity for a Virtual Coach

The system described in this paper is an attempt at developing a coach for sports using a virtual world and multimodal interaction, including brain activity. Users can ride a bicycle through a virtual world while the coach monitors their performance. The system incorporates the user's brain activity, heart rate and respiration rate. These data are analyzed, and the extracted features are passed to the virtual coach, which uses them to select the movements and dialogues with which it coaches the user. The electroencephalogram (EEG) provides ample possibilities for researching the user's brain activity and supplies an extra modality in the interaction.

Bram van de Laar, Anton Nijholt, Job Zwiers
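
As one example of the kind of EEG feature such a coach could consume, the sketch below computes the relative alpha-band power of a single channel from a Welch spectrum, a common relaxation/workload indicator. The sampling rate and band edges are assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import welch

FS = 256   # assumed EEG sampling rate in Hz

def alpha_ratio(eeg_channel):
    # eeg_channel: 1-D array of samples from one EEG electrode.
    freqs, psd = welch(eeg_channel, fs=FS, nperseg=FS * 2)
    alpha = (freqs >= 8) & (freqs <= 12)     # alpha band
    broad = (freqs >= 1) & (freqs <= 40)     # broad EEG band
    return psd[alpha].sum() / psd[broad].sum()
```
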
Developing Educational and Entertaining Virtual Humans Using Elckerlyc

Virtual humans (VHs) are used in many educational and entertainment settings: training and serious gaming, interactive information kiosks, tour guides, tutoring, interactive virtual dancers, and much more. Building a complete VH from scratch is a daunting task, and it makes sense to rely on existing platforms. However, when one builds a novel interactive VH application, one needs to be able to adapt and extend the means to control the VH offered by the platform, without reprogramming parts of the platform. This paper describes Elckerlyc, a novel platform for controlling a VH. The focus is on how to easily extend and adapt the system to the needs of a particular application, without programming.

Dennis Reidsma, Herwin van Welbergen, Ronald C. Paul, Bram van de Laar, Anton Nijholt
Backmatter
Metadata
Title: Entertainment Computing - ICEC 2010
Edited by: Hyun Seung Yang, Rainer Malaka, Junichi Hoshino, Jung Hyun Han
Copyright year: 2010
Publisher: Springer Berlin Heidelberg
Electronic ISBN: 978-3-642-15399-0
Print ISBN: 978-3-642-15398-3
DOI: https://doi.org/10.1007/978-3-642-15399-0