
2011 | Book

Human-Computer Interaction. Interaction Techniques and Environments

14th International Conference, HCI International 2011, Orlando, FL, USA, July 9-14, 2011, Proceedings, Part II


About this book

This four-volume set LNCS 6761-6764 constitutes the refereed proceedings of the 14th International Conference on Human-Computer Interaction, HCII 2011, held in Orlando, FL, USA in July 2011, jointly with 8 other thematically similar conferences.

The revised papers presented were carefully reviewed and selected from numerous submissions. The papers accepted for presentation thoroughly cover the entire field of Human-Computer Interaction, addressing major advances in knowledge and effective use of computers in a variety of application areas. The papers of this volume are organized in topical sections on touch-based and haptic interaction, gaze and gesture-based interaction, voice, natural language and dialogue, novel interaction techniques and devices, and avatars and embodied interaction.

Table of Contents

Frontmatter

Touch-Based and Haptic Interaction

Frontmatter
Development of a High Definition Haptic Rendering for Stability and Fidelity

In this study, we developed and evaluated a 10 kHz high-definition haptic rendering system that can drive a real-time, video-rate (60 Hz) simulation for general VR applications. Our proposal requires both fidelity and stability in a multi-rate system with a frequency ratio of approximately 160:1, and several problems had to be resolved to satisfy these two criteria. Stability alone can be achieved with a virtual coupling method that links the haptic display and a virtual object; however, because of its low coupling impedance, this method is poorly suited to achieving fidelity and manipulation quality. We therefore developed a multi-rate system with two levels of up-sampling to obtain both fidelity and stability of haptic sensation. The first up-sampling level achieves stability through the virtual coupling, and the second achieves fidelity through 10 kHz haptic rendering that compensates for the haptic quality lost in the coupling process. We confirmed that the proposed system achieves both stability and fidelity of haptic rendering through a computer simulation and a 6-DOF haptic interface (SPIDAR-G) combined with a rigid-object simulation engine.

Katsuhito Akahane, Takeo Hamada, Takehiko Yamaguchi, Makoto Sato
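To make the two-level up-sampling idea above concrete, here is a minimal, hypothetical Python sketch of a generic virtual-coupling multi-rate loop: a virtual object is integrated at the slow (60 Hz) rate while a spring-damper coupling force is rendered at the fast (10 kHz) rate. It is not the authors' implementation; all gains, masses, and names are placeholder assumptions.

```python
import numpy as np

# Illustrative two-rate loop: a 60 Hz simulation update and a 10 kHz haptic
# update, linked by a virtual coupling (spring-damper). All values are
# arbitrary placeholders, not taken from the paper.
SIM_HZ, HAPTIC_HZ = 60, 10_000
DT_SIM, DT_HAP = 1.0 / SIM_HZ, 1.0 / HAPTIC_HZ
K_COUPLE, B_COUPLE = 500.0, 2.0      # coupling stiffness / damping
OBJ_MASS = 0.2                       # virtual object mass (kg)

def haptic_session(device_positions):
    """device_positions: 1-D array of device positions sampled at 10 kHz."""
    obj_x, obj_v = 0.0, 0.0          # virtual object state (slow rate)
    forces = np.zeros_like(device_positions)
    steps_per_sim = HAPTIC_HZ // SIM_HZ   # ~160 fast steps per slow step
    accumulated_force = 0.0
    for i, dev_x in enumerate(device_positions):
        # Fast loop: coupling force rendered to the device every 0.1 ms.
        force = K_COUPLE * (obj_x - dev_x) - B_COUPLE * obj_v
        forces[i] = force
        accumulated_force += -force  # reaction force on the object
        if (i + 1) % steps_per_sim == 0:
            # Slow loop: integrate the object with the averaged force.
            avg_f = accumulated_force / steps_per_sim
            obj_v += (avg_f / OBJ_MASS) * DT_SIM
            obj_x += obj_v * DT_SIM
            accumulated_force = 0.0
    return forces

if __name__ == "__main__":
    t = np.arange(0, 0.5, DT_HAP)
    print(haptic_session(0.01 * np.sin(2 * np.pi * t))[:5])
```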
Designing a Better Morning: A Study on Large Scale Touch Interface Design

In this paper, we describe the design process of an individual prototype as it relates to Large Scale Public Touch Interface (LSPTI) system design as a whole, and examine ergonomic and usability concerns for LSPTI designs. The design process includes inspirational design, contextual design, storyboarding, paper prototyping, video prototyping, and a user testing study. We examined the design process at each stage and proposed improvements for LSPTIs. Results indicate that the ‘color-field’ interaction methodology might be a good alternative to traditional ‘tabbed-hyperlink’ interaction in LSPTI implementations.

Onur Asan, Mark Omernick, Dain Peer, Enid Montague
Experimental Evaluations of Touch Interaction Considering Automotive Requirements

Three usability studies present evaluation methods for cross-domain human-computer interaction. The first study compares different input devices, such as a touch screen, a turn-push controller, and handwriting recognition, with regard to human error probability, input speed, and subjective usability assessment. The other experiments focused on typical automotive issues: interruptibility and the influence of cockpit oscillations on the interaction.

Andreas Haslbeck, Severina Popova, Michael Krause, Katrina Pecot, Jürgen Mayer, Klaus Bengler
More than Speed? An Empirical Study of Touchscreens and Body Awareness on an Object Manipulation Task

Touchscreen interfaces do more than allow users to execute speedy interactions. Three interfaces (touchscreen, mouse-drag, on-screen button) were used to perform an object manipulation task. Results showed that planning time was shortest with touchscreens, that touchscreens allowed users with high action knowledge to perform the task more efficiently, and that only with touchscreens was the ability to rotate the object the same across all axes of rotation. The concept of closeness is introduced to explain the potential advantages of touchscreen interfaces.

Rachelle Kristof Hippler, Dale S. Klopfer, Laura Marie Leventhal, G. Michael Poor, Brandi A. Klein, Samuel D. Jaffee
TiMBA – Tangible User Interface for Model Building and Analysis

Designers in architectural studios, both in education and practice, have worked to integrate digital and physical media ever since they began to use digital tools in the design process [1]. Throughout the design process there are significant benefits to working in the digital domain as well as benefits to working physically, confronting architects with a difficult choice. We believe emerging strategies for human-computer interaction, such as tangible user interfaces and computer vision techniques, present new possibilities for manipulating architectural designs. These technologies can help bridge the digital and physical worlds. In this paper, we discuss some of these technologies, analyze several current design challenges, and present a prototype that illustrates ways in which a broader approach to human-computer interaction might resolve the problem. The ultimate goal of breaking down the boundary between the digital and physical design platforms is to create a unified domain of "continuous thought" for all design activities.

Chih-Pin Hsiao, Brian R. Johnson
Musical Skin: A Dynamic Interface for Musical Performance

Compared to pop music, the audience for classical music has decreased dramatically. One reason might be the way classical music communicates with its audience, which depends on vocal expression such as timbre, rhythm, and melody in the performance. The fine details of classical music, as well as the emotion implied among the notes, remain implicit to the audience. We therefore apply a new medium called dynamic skin to build an interface between performers and audiences. This interface, called “Musical Skin”, is implemented with a dynamic skin design process using the results of gesture analysis of performers and audiences. Two skin systems of Musical Skin are implemented with virtual visualization, actuators, and sensible spaces. The implementation is tested using scenarios and interviews.

Heng Jiang, Teng-Wen Chang, Cha-Lin Liu
Analyzing User Behavior within a Haptic System

Haptic technology has the potential to enhance education, especially for those with severe visual impairments (people who are blind or have low vision), by presenting abstract concepts through the sense of touch. Despite advances in haptic research, little work has been conducted on haptic user behavior toward establishing haptic interface development and design conventions. To move haptic research closer to this goal, this study examines haptic user behavior data collected from 9 participants using a haptic learning system, the Heat Temperature Module. ANOVA results showed that differences in the amount of haptic feedback lead to significant differences in user behavior, indicating that higher levels of haptic friction feedback result in higher proportions of user interaction data. The results also suggest that minimal thresholds of friction haptic feedback can be established for a desired minimum proportion of user interaction data; however, more research is needed to establish such thresholds.

Steve Johnson, Yueqing Li, Chang Soo Nam, Takehiko Yamaguchi
Usability Testing of the Interaction of Novices with a Multi-touch Table in Semi Public Space

Touch-sensitive devices are becoming more and more common. Many people use touch interaction, especially on handheld devices such as iPhones and other mobile phones. The question is whether people really understand the different gestures: do they know which gesture is the correct one for the intended action, and do they know how to transfer the gestures to bigger devices and surfaces? This paper reports the results of usability tests carried out in semi-public space to explore people’s ability to find gestures to navigate on a virtual globe. The globe is presented on a multi-touch table. Furthermore, the study investigated which additional gestures people use intuitively as compared to the ones which are implemented.

Markus Jokisch, Thomas Bartoschek, Angela Schwering
Niboshi for Slate Devices: A Japanese Input Method Using Multi-touch for Slate Devices

We present Niboshi for slate devices, an input system that utilizes a multi-touch interface. Users hold the device with both hands and use both thumbs to input a character in this system. Niboshi for slate devices has four features that improve the performance of inputting text to slate devices: it has a multi-touch input, enables the device to be firmly held with both hands while text is input, can be used without visual confirmation of the input buttons, and has a large text display area with a small interface. The Niboshi system will enable users to type faster and requires less user attention to typing than existing methods.

Gimpei Kimioka, Buntarou Shizuki, Jiro Tanaka
An Investigation on Requirements for Co-located Group-Work Using Multitouch-, Pen-Based- and Tangible-Interaction

Cooperation and coordination are crucial for solving many of our everyday tasks. Even though many computerized tools exist, there is still a lack of effective tools that support co-located group work. There are promising technologies that can contribute here, such as tabletop systems, multitouch, tangible, and pen-based interaction. General requirements and principles that aim to support this kind of work also exist; however, these requirements are relatively vague and are not focused on concrete usage scenarios. In this study, a user-centered approach has been applied to develop a co-located group-work system based on those general requirements as well as on a real use case. The requirements are transformed into concepts and a running prototype that was evaluated with users. As a result, not only was the usability of the system demonstrated, but a catalogue of even more specific requirements for co-located group-work systems could also be derived.

Karsten Nebe, Tobias Müller, Florian Klompmaker
Exploiting New Interaction Techniques for Disaster Control Management Using Multitouch-, Tangible- and Pen-Based-Interaction

This paper presents the procedure and results of a user-centered design process that was applied to analyze how management processes in disaster control can be optimized using new interaction techniques such as multitouch, tangible, and pen-based interaction. The study was carried out in cooperation with the German Federal Agency for Technical Relief, whose statutory tasks include the provision of technical assistance at home and humanitarian aid abroad. The major focus of this work is IT support for coordination and management tasks. As a result, we introduce our prototype application, the software and hardware requirements for it, and the interaction design that was influenced by the outcome of the user-centered design process.

Karsten Nebe, Florian Klompmaker, Helge Jung, Holger Fischer
Saving and Restoring Mechanisms for Tangible User Interfaces through Tangible Active Objects

In this paper we present a proof of concept for saving and restoring mechanisms for Tangible User Interfaces (TUIs). We describe our actuated Tangible Active Objects (TAOs) and explain the design which allows equal user access to a dial-based fully tangible actuated menu metaphor. We present a new application extending an existing TUI for interactive sonification of process data with saving and restoring mechanisms and we outline another application proposal for family therapists.

Eckard Riedenklau, Thomas Hermann, Helge Ritter
Needle Insertion Simulator with Haptic Feedback

We introduce a novel injection simulator with haptic feedback that provides a realistic physical experience to the medical user. Needle insertion requires very dexterous hands-on skills and fast, appropriate responses to avoid dangerous situations for patients. To train the injection operation, the proposed injection simulator has been designed to generate delicate force feedback that simulates needle penetration into various tissues such as skin, muscle, and blood vessels. We developed and evaluated the proposed simulator with medical doctors and found that the system offers very realistic haptic feedback together with dynamic visual feedback.

Seungjae Shin, Wanjoo Park, Hyunchul Cho, Sehyung Park, Laehyun Kim
Measurement of Driver’s Distraction for an Early Prove of Concepts in Automotive Industry at the Example of the Development of a Haptic Touchpad

This contribution shows how the user’s behavior can be integrated into the development process at a very early concept stage. This requires innovative methodologies for objectifying human behavior, such as eye tracking and video observation with the Dikablis/DLab environment in the Audi driving simulator. A demonstrative example is the pre-development of a touchpad with an adjustable haptic surface as a concept idea for infotainment interaction with the Audi MMI. First, an overview is given of the idea of capturing human behavior to evaluate concept ideas at a very early stage of the development process, and of how this is realized with the Dikablis and DLab environment. Furthermore, the paper describes the concept idea of the innovative haptic touchpad control element, the accompanying research questions, and how these questions were clarified. Finally, some example results are given.

Roland Spies, Andreas Blattner, Christian Lange, Martin Wohlfarter, Klaus Bengler, Werner Hamberger
A Tabletop-Based Real-World-Oriented Interface

In this paper, we propose a Tangible User Interface which enables users to use applications on a PC desktop in the same way as paper and pen on a desk in the real world. The proposed system is also cheap to implement and can easily be set up anywhere. Using the proposed system, we found that it was easier to use than normal application user interfaces.

Hiroshi Takeda, Hidetoshi Miyao, Minoru Maruyama, David Asano
What You Feel Is What I Do: A Study of Dynamic Haptic Interaction in Distributed Collaborative Virtual Environment

In this paper we present the concept of “What You Feel Is What I Do (WYFIWID)”. The concept is fundamentally based on a haptic guide that allows an expert to control the hand of a remote trainee. When the haptic guide is active, all movements of the expert’s hand (via an input device) in 3D space are haptically reproduced by the trainee’s hand via a force feedback device. We use the haptic guide to control the trainee’s hand for writing letters of the alphabet and drawing geometric forms. Twenty subjects participated in experiments to evaluate the approach.

Sehat Ullah, Xianging Liu, Samir Otmane, Paul Richard, Malik Mallem
A Framework Interweaving Tangible Objects, Surfaces and Spaces

In this paper, we will introduce the ROSS framework, an integrated application development toolkit that extends across different tangible platforms such as multi-user interactive tabletop displays, full-body interaction spaces, RFID-tagged objects and smartphones with multiple sensors. We will discuss how the structure of the ROSS framework is designed to accommodate a broad range of tangible platform configurations and illustrate its use on several prototype applications for digital media content interaction within education and entertainment contexts.

Andy Wu, Jayraj Jog, Sam Mendenhall, Ali Mazalek
The Effect of Haptic Cues on Working Memory in 3D Menu Selection

We investigated the effect of haptic cues on working memory in 3D menu selection. We conducted a 3D menu selection task in two conditions: visual only and visual with haptic. In the visual-only condition, participants were instructed to select 3D menu items and memorize the order of selection. In the visual-with-haptic condition, we added a magnetic haptic effect to each 3D menu item. Results showed that participants needed fewer trials to memorize the selection sequence in the visual-with-haptic condition than in the visual-only condition. Subjective data collected from a questionnaire indicated that the visual-with-haptic condition was more suitable for selection and memorization.

Takehiko Yamaguchi, Damien Chamaret, Paul Richard

Gaze and Gesture-Based Interaction

Frontmatter
Face Recognition Using Local Graph Structure (LGS)

In this paper, a novel algorithm for face recognition based on Local Graph Structure (LGS) is proposed. The features of local graph structures are extracted from the texture in a local graph neighborhood and then forwarded to the classifier for recognition. The idea of LGS comes from dominating-set points for a graph of the image. Experimental results on the ORL face database demonstrate the effectiveness of the proposed method. LGS is very simple and fast, and can easily be applied as a preprocessing step in many fields, such as biometrics, pattern recognition, and robotics.

Eimad E. A. Abusham, Housam K. Bashir
Eye-gaze Detection by Image Analysis under Natural Light

We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS). The system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. Our practical eye-gaze input system is capable of classifying the horizontal eye-gaze of users with a high degree of accuracy. However, it can only detect three directions of vertical eye-gaze. If the detection resolution in the vertical direction is increased, more indicators will be displayed on the screen. To increase the resolution of vertical eye-gaze detection, we apply a limbus tracking method, which is also the conventional method used for horizontal eye-gaze detection. In this paper, we present a new eye-gaze detection method by image analysis using the limbus tracking method. We also report the experimental results of our new method.

Kiyohiko Abe, Shoichi Ohi, Minoru Ohyama
Multi-user Pointing and Gesture Interaction for Large Screen Using Infrared Emitters and Accelerometers

This paper presents PlusControl, a novel multi-user interaction system for cooperative work with a large screen. The system is designed for use with economical deictic and control gestures in the air and allows users free mobility in the environment. PlusControl consists of lightweight worn devices with infrared emitters and Bluetooth accelerometers. In this paper the architecture of the system is presented. A prototype has been developed in order to test and evaluate the system’s performance. Results show that PlusControl is a valuable tool in cooperative scenarios.

Leonardo Angelini, Maurizio Caon, Stefano Carrino, Omar Abou Khaled, Elena Mugellini
Gesture Identification Based on Zone Entry and Axis Crossing

Hand gesture interfaces have been proposed as an alternative to the remote controller, and products with such interfaces have appeared in the market. We propose the vision-based unicursal gesture interface (VUGI) as an extension of our unicursal gesture interface (UGI) for TV remotes with touchpads. Since UGI allows users to select an item on a hierarchical menu comfortably, it is expected that VUGI will yield easy-to-use hierarchical menu selection. Moreover, gestures in the air such as VUGI offer an interface area that is larger than that provided by touchpads. Unfortunately, since the user loses track of his/her finger position, it is not easy to input commands continuously using VUGI. To solve this problem, we propose the dynamic detection zone and the detection axes. An experiment confirms that subjects can input VUGI commands continuously.

Ryosuke Aoki, Yutaka Karatsu, Masayuki Ihara, Atsuhiko Maeda, Minoru Kobayashi, Shingo Kagami
Attentive User Interface for Interaction within Virtual Reality Environments Based on Gaze Analysis

Eye movements can carry a rich set of information about someone’s intentions. For physically impaired people, gaze can be the only communication channel they are able to use. People with severe disabilities are usually assisted by helpers during everyday activities, which over time can lead to the development of an effective visual communication protocol between helper and disabled person. This protocol allows them to communicate to some extent simply by glancing at each other. Starting from this premise, we propose a new model of attentive user interface featuring some of the visual comprehension abilities of a human helper. The purpose of this user interface is to identify the user’s intentions and thus assist him/her in achieving simple interaction goals (e.g., object selection, task selection). The attentive interface is implemented through statistical analysis of the user’s gaze data, based on a hidden Markov model.

Florin Barbuceanu, Csaba Antonya, Mihai Duguleana, Zoltan Rusak
A Low-Cost Natural User Interaction Based on a Camera Hand-Gestures Recognizer

The search for new, simplified interaction techniques is mainly motivated by the desire to improve communication with interactive devices. In this paper, we present an interactive TV module capable of recognizing human gestures through the low-cost PS3Eye camera. We recognize gestures by tracking human skin blobs and analyzing the corresponding movements. The module provides a means to control a TV in a ubiquitous computing environment. We also present a new free gesture-icon library created to allow easy representation and diagramming.

Mohamed-Ikbel Boulabiar, Thomas Burger, Franck Poirier, Gilles Coppin
Head-Computer Interface: A Multimodal Approach to Navigate through Real and Virtual Worlds

This paper presents a novel approach to multimodal interaction which combines the user’s mental activity (thoughts and emotions), facial expressions, and head movements. In order to avoid problems related to computer vision (sensitivity to lighting changes, reliance on camera position, etc.), the proposed approach does not make use of optical techniques. Furthermore, in order to make human communication and control smooth, and to avoid other environmental artifacts, the information used is non-verbal. Head movements (rotations) are detected by a bi-axial gyroscope; expressions and gaze are identified by electromyography and electrooculography; emotions and thoughts are monitored by electroencephalography. To validate the proposed approach we developed an application in which the user can navigate through a virtual world using his or her head. We chose Google Street View as the virtual world. The application was conceived for later integration with an electric wheelchair, replacing the virtual world with the real world. A first evaluation of the system is provided.

Francesco Carrino, Julien Tscherrig, Elena Mugellini, Omar Abou Khaled, Rolf Ingold
3D-Position Estimation for Hand Gesture Interface Using a Single Camera

Hand gesture interfaces are a state-of-the-art technology for providing better human-computer interaction. This paper proposes two methods to estimate the 3D position of the hand for a hand gesture interface using a single camera. Using the methods in an office environment, we show that the camera is not restricted to a fixed position in front of the user and can be placed at any position facing the user. The reliability and usefulness of the proposed methods are also demonstrated by applying them to mouse gesture recognition software.

Seung-Hwan Choi, Ji-Hyeong Han, Jong-Hwan Kim
Hand Gesture for Taking Self Portrait

We present a new interaction technique enabling users to manipulate a digital camera when taking self-portrait pictures. Users can control the camera’s functions such as pan, tilt, and shutter using hand gestures. The camera preview and GUIs are shown on a large display. We developed two interaction techniques: the first is a hover button that triggers the camera’s shutter; the second is a cross-motion interface that controls pan and tilt. In this paper, we explain the algorithms in detail and present a preliminary experiment evaluating the speed and accuracy of our implementation. Finally, we discuss promising applications of the proposed technique.

Shaowei Chu, Jiro Tanaka
Hidden-Markov-Model-Based Hand Gesture Recognition Techniques Used for a Human-Robot Interaction System

In this paper, we present part of a human-robot interaction system that recognizes meaningful gestures composed of continuous hand motions in real time based on hidden Markov models. The system acts as an interface through which humans make various kinds of hand gestures to issue specific commands for conducting robots. To accomplish this, we define four basic types of directive gestures made with a single hand: moving upward, downward, leftward, and rightward. They serve as fundamental conducting gestures. If the other hand is incorporated, there are at most twenty-four kinds of compound gestures formed by combining the directive gestures of both hands. At present, we prescribe eight kinds of compound gestures for our human-robot interaction system, each assigned a motion or functional control command: moving forward, moving backward, turning left, turning right, stop, robot following, robot waiting, and ready, so that users can easily operate an autonomous robot. Experimental results reveal that our system achieves an average gesture recognition rate of at least 96%, which is very satisfactory and encouraging.

Chin-Shyurng Fahn, Keng-Yu Chu
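As a rough illustration of the general technique named in the abstract (hidden-Markov-model classification of directional hand motions), the following hypothetical Python sketch scores a quantized motion sequence against one small discrete HMM per directive gesture using the scaled forward algorithm. The models, probabilities, and symbol coding are invented for illustration and are not the authors' system.

```python
import numpy as np

# Observation symbols are quantized motion directions:
# 0 = up, 1 = down, 2 = left, 3 = right. All probabilities are placeholders.

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM defined by pi, A, B)."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_lik

def make_gesture_hmm(direction, n_symbols=4):
    """Two-state toy HMM biased towards emitting one direction symbol."""
    pi = np.array([1.0, 0.0])
    A = np.array([[0.8, 0.2],
                  [0.0, 1.0]])
    B = np.full((2, n_symbols), 0.1 / (n_symbols - 1))
    B[:, direction] = 0.9
    return pi, A, B

GESTURES = {"up": 0, "down": 1, "left": 2, "right": 3}
MODELS = {name: make_gesture_hmm(d) for name, d in GESTURES.items()}

def classify(obs):
    """Return the gesture whose HMM best explains the observed sequence."""
    scores = {name: forward_log_likelihood(np.asarray(obs), *m)
              for name, m in MODELS.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    print(classify([3, 3, 2, 3, 3]))  # mostly rightward motion -> "right"
```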
Manual and Accelerometer Analysis of Head Nodding Patterns in Goal-oriented Dialogues

We studied communication patterns in face-to-face dialogues between people for the purpose of identifying conversation features that can be exploited to improve human-computer interactions. We chose to study the psychological counseling setting as it provides good examples of task-oriented dialogues. The dialogues between two participants, therapist and client, were video recorded. The participants’ head movements were measured by using head-mounted accelerometers. The relationship between the dialogue process and head nodding frequency was analyzed on the basis of manual annotations. The segments where nods of the two participants correlated were identified on the basis of the accelerometer data. Our analysis suggests that there are characteristic nodding patterns in different dialogue stages.

Masashi Inoue, Toshio Irino, Nobuhiro Furuyama, Ryoko Hanada, Takako Ichinomiya, Hiroyasu Massaki
Facial Expression Recognition Using AAMICPF

Recently, much interest has focused on facial expression recognition research because of its importance in many application areas. In computer vision, object recognition and state recognition are important and critical problems; a variety of approaches have been proposed, but the problems remain difficult. In this paper, we propose using an Active Appearance Model (AAM) with a particle filter for a facial expression recognition system. AAM is very sensitive to the initial shape, so we improve accuracy using a particle filter whose particles are defined from the initial state. Our system recognizes facial expressions using a criterion expression vector for each expression. We obtain better results than with the basic AAM, with a 10% improvement compared with AAM-IC.

Jun-Sung Lee, Chi-Min Oh, Chil-Woo Lee
Verification of Two Models of Ballistic Movements

The study of ballistic movement time and ballistic movement variability can help us understand how our motor system works and further predict speed-accuracy tradeoffs when performing complex hand-control movements. The purposes of this study were (1) to develop an experiment for measuring ballistic movement time and variability and (2) to use the measured data to test the application of two models for predicting the two types of ballistic movement data. In this preliminary study, four participants performed ballistic movements of specific amplitudes using a personal computer, a drawing tablet, and a self-developed experimental program. The results showed that (1) the experiment successfully measured ballistic movement time and two types of ballistic movement variability, (2) the two models described the measured data well, and (3) a modified model was proposed to better fit the variable error in the direction of the movement.

Jui-Feng Lin, Colin G. Drury
Gesture Based Automating Household Appliances

Smart homes are a potential application that provides unobtrusive support for elderly or disabled people and promotes independent living. To provide such ubiquitous services, a specially designed controller is needed. In this paper, a simple gesture-based automation controller for various household appliances, from simple lighting to complex electronic devices, is introduced. The system uses gesture-based recognition to read messages from the signer and sends commands to the respective appliances through the household appliance sensing system. A simple server performs a deterministic matching algorithm on the received messages, which in turn triggers specific events. The proposed system offers a new and novel approach to smart home control by using gestures as a remote controller, allowing the user to flexibly and conveniently control multiple household appliances with simple gestures.

Wei Lun Ng, Chee Kyun Ng, Nor Kamariah Noordin, Borhanuddin Mohd. Ali
Upper Body Gesture Recognition for Human-Robot Interaction

This paper proposes a vision-based human-robot interaction system for a mobile robot platform. The mobile robot first finds an interested person who wants to interact with it. Once it finds a subject, the robot stops in front of him or her and interprets his or her upper body gestures. We represent each gesture as a sequence of body poses, and the robot recognizes four upper body gestures: “Idle”, “I love you”, “Hello left”, and “Hello right”. A key pose-based particle filter determines the pose sequence, with key poses sparsely collected from the pose space. A pictorial-structure-based upper body model represents the key poses, and these key poses are used to build an efficient proposal distribution for the particle filtering. Thus, the particles are drawn from a key pose-based proposal distribution for effective prediction of the upper body pose. The Viterbi algorithm estimates the gesture probabilities with a hidden Markov model. The experimental results show the robustness of our upper body tracking and gesture recognition system.

Chi-Min Oh, Md. Zahidul Islam, Jun-Sung Lee, Chil-Woo Lee, In-So Kweon
Gaze-Directed Hands-Free Interface for Mobile Interaction

While mobile devices allow people to carry out various computing and communication tasks everywhere, they have generally lacked support for task execution while the user is in motion. This is because the interaction schemes of most mobile applications are centered around the device’s visual display; when in motion (with important body parts, such as the head and hands, moving), it is difficult for the user to recognize the visual output on the small hand-carried display and respond with timely and proper input. In this paper, we propose an interface which allows the user to interact with a mobile device during motion without having to look at it or use one’s hands. More specifically, the user interacts, by gaze and head motion gestures, with an invisible virtual interface panel with the help of a head-worn gyro sensor and aural feedback. Since the menu is one of the most prevalent methods of interaction, we investigate and focus on various forms of menu presentation, such as the layout and the number of comfortably selectable menu items. With head motion, a 4×2 or 3×3 grid menu turns out to be more effective. The results of this study can be further extended toward developing a more sophisticated non-visual mobile interface.

Gie-seo Park, Jong-gil Ahn, Gerard J. Kim
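A minimal sketch of how head yaw/pitch from a worn gyro might be mapped onto a 3×3 grid menu, in the spirit of the interface described above; the menu items, angle ranges, and function names are hypothetical assumptions, not taken from the paper.

```python
# Hypothetical sketch: map accumulated head yaw/pitch (degrees) onto a cell
# of a 3x3 "invisible" grid menu and return the item name, e.g. for aural
# feedback. Angle ranges and menu contents are placeholders.

MENU = [["contacts", "calendar", "music"],
        ["call",     "home",     "messages"],
        ["maps",     "settings", "voice memo"]]

YAW_RANGE = (-30.0, 30.0)    # left .. right, degrees
PITCH_RANGE = (-20.0, 20.0)  # down .. up, degrees

def _to_index(value, lo, hi, n):
    """Clamp a value into [lo, hi] and bucket it into n equal bins."""
    value = max(lo, min(hi, value))
    frac = (value - lo) / (hi - lo)
    return min(n - 1, int(frac * n))

def select_item(yaw_deg, pitch_deg, menu=MENU):
    col = _to_index(yaw_deg, *YAW_RANGE, len(menu[0]))
    row = _to_index(-pitch_deg, *PITCH_RANGE, len(menu))  # looking up = top row
    return menu[row][col]

if __name__ == "__main__":
    print(select_item(25.0, 15.0))   # head turned right and up -> "music"
```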
Eye-Movement-Based Instantaneous Cognition Model for Non-verbal Smooth Closed Figures

This study attempts a comprehensive investigation of the non-verbal instantaneous cognition of images through the “same-different” judgment paradigm, using non-verbal smooth closed figures, which are difficult to memorize verbally, as materials for encoding experiments. The results suggest that the instantaneous cognition of non-verbal smooth closed figures is influenced by the contours’ features (number of convex parts) and inter-stimulus intervals. In addition, the results for percent correct recognition suggest that the accuracy of the “same-different” judgment may be influenced by differences between the points being gazed at when memorizing and recognizing, and by factors involved in the visual search process when recognizing. The results may have implications for interaction design guidelines for instruments that visualize a system state.

Yuzo Takahashi, Shoko Koshi

Voice, Natural Language and Dialogue

Frontmatter
VOSS - A Voice Operated Suite for the Barbadian Vernacular

Mobile devices are rapidly becoming the default communication device of choice. The rapid advances being experienced in this area have resulted in mobile devices undertaking many of the tasks once restricted to desktop computers. One key area is that of voice recognition and synthesis. Advances in this area have produced new voice-based applications such as visual voice mail and voice-activated search. The rise in popularity of these types of applications has resulted in the incorporation of a variety of major languages, ensuring a more global use of the technology.

David Byer, Colin Depradine
New Techniques for Merging Text Versions

Versioning helps users keep track of different sets of edits on a document. Version merging methods enable users to determine which parts of which version they wish to include in the next or final version. We explored several existing methods and two new ones (highlighting and overlay) in single- and multiple-window settings. We present the results of our quantitative user studies, which show that the new highlighting and overlay techniques are preferred for version merging tasks. The results suggest that the most useful methods are those which clearly and easily present information that is likely important to the user, while simultaneously hiding less important information. Also, multi-window version merging is preferred over single-window merging.

Darius Dadgari, Wolfgang Stuerzlinger
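For readers unfamiliar with highlighting-style merging, here is a small illustrative Python sketch using the standard difflib module to mark the segments that differ between two versions; it conveys the general idea only and is not the interface evaluated in the study.

```python
import difflib

# Mark differing word runs between two versions so a reviewer can choose
# which side to keep: [-old-] for removed text, {+new+} for inserted text.

def highlight_differences(old: str, new: str) -> str:
    old_words, new_words = old.split(), new.split()
    sm = difflib.SequenceMatcher(None, old_words, new_words)
    out = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            out.extend(old_words[i1:i2])
        else:
            if i1 != i2:
                out.append("[-" + " ".join(old_words[i1:i2]) + "-]")
            if j1 != j2:
                out.append("{+" + " ".join(new_words[j1:j2]) + "+}")
    return " ".join(out)

if __name__ == "__main__":
    print(highlight_differences("the quick brown fox", "the slow brown cat"))
    # the [-quick-] {+slow+} brown [-fox-] {+cat+}
```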
Modeling the Rhetoric of Human-Computer Interaction

The emergence of potential new human-computer interaction styles enabled by technological advances in artificial intelligence, machine learning, and computational linguistics makes it increasingly important to formalize and evaluate these innovative approaches. In this position paper, we propose a multi-dimensional conversation analysis framework as a way to expose and quantify the structure of a variety of new forms of human-computer interaction. We argue that by leveraging the sociolinguistic constructs of authoritativeness and heteroglossia, we can expose aspects of novel interaction paradigms that must be evaluated in light of usability heuristics, so that we can approach the future of human-computer interaction in a way that preserves the tried-and-true usability standards that have shaped the state of the art.

Iris Howley, Carolyn Penstein Rosé
Recommendation System Based on Interaction with Multiple Agents for Users with Vague Intention

We propose an agent-based recommendation system interface for users with vague intentions, based on interaction with multiple character agents that talk to each other about their recommendations. This interface aims to help the user clarify his/her intentions and/or latent opinions by listening to the agents’ conversation about recommendations. Whenever the user arrives at an opinion, he/she can naturally join the conversation to obtain more suitable recommendations. The results of an experimental evaluation show that the system with the proposed interface can introduce more recommendations, without additional frustration, than conventional recommendation systems with a single agent.

Itaru Kuramoto, Atsushi Yasuda, Mitsuru Minakuchi, Yoshihiro Tsujino
A Review of Personality in Voice-Based Man Machine Interaction

In this paper, we discuss state-of-the-art techniques for personality-aware user interfaces and summarize recent work on automatically recognizing and synthesizing speech with “personality”. We present an overview of personality “metrics” and show how they can be applied to the perception of voices, not only to the description of personally known individuals. We present use cases for personality-aware speech input and/or output, and discuss approaches to defining “personality” in this context. We take a middle-of-the-road approach: we do not try to uncover all fundamental aspects of personality in speech, but neither do we aim for ad-hoc solutions that serve a single purpose, for example creating a positive attitude in a user, without generating transferable knowledge for other interfaces.

Florian Metze, Alan Black, Tim Polzehl
Can Indicating Translation Accuracy Encourage People to Rectify Inaccurate Translations?

The accuracy of machine translation affects how well people understand each other when communicating. Translation repair can improve the accuracy of translated sentences, but it is typically used only when a user thinks that his/her message is inaccurate. As a result, translation accuracy suffers, because people’s judgment in this regard is not always accurate. To solve this problem, we propose a method that provides users with an indication of the translation accuracy of their message. In this method, we measure the accuracy of translated sentences using an automatic evaluation method and provide users with three indicators: a percentage, a five-point scale, and a three-point scale. We verified how well these indicators reduce inaccurate judgments, and concluded the following: (1) the indicators did not significantly affect users’ inaccurate judgments; (2) the indication using a five-point scale obtained the highest evaluation, and that using a percentage obtained the second highest. However, in this experiment, the values obtained from automatically evaluating translations were not always accurate, and incorrect automatically evaluated values may have led to some inaccurate judgments. If the accuracy of the automatic evaluation method is improved, we believe that the indicators of translation accuracy can reduce inaccurate judgments. In addition, the percentage indicator can compensate for the shortcomings of the five-point scale; in other words, users may judge translation accuracy more easily by using a combination of these indicators.

Mai Miyabe, Takashi Yoshino
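The three indicators described above (a percentage, a five-point scale, and a three-point scale) can be illustrated with a small hypothetical helper that maps an automatic accuracy score in [0, 1] onto each form; the bin boundaries below are placeholder assumptions, not the authors' settings.

```python
# Hypothetical helper: turn an automatic translation-accuracy score in
# [0, 1] (e.g. from BLEU or a similar metric) into the three indicator
# styles described in the abstract. Bin boundaries are placeholders.

def accuracy_indicators(score: float) -> dict:
    score = max(0.0, min(1.0, score))
    five_point = min(5, int(score * 5) + 1)          # 1 (worst) .. 5 (best)
    if score < 0.4:
        three_point = "low"
    elif score < 0.7:
        three_point = "medium"
    else:
        three_point = "high"
    return {
        "percentage": round(score * 100),
        "five_point": five_point,
        "three_point": three_point,
    }

if __name__ == "__main__":
    print(accuracy_indicators(0.62))
    # {'percentage': 62, 'five_point': 4, 'three_point': 'medium'}
```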
Design of a Face-to-Face Multilingual Communication System for a Handheld Device in the Medical Field

In the medical field, a serious problem exists with regard to communication between hospital staff and foreign patients. For example, medical translators cannot provide support in cases in which round-the-clock support is required during hospitalization. We propose the use of a multilingual communication support system called the Petit Translator between people speaking different languages in the hospital setting. From the results of experiments performed in such a setting, we found the following: (1) by clicking on the conversation scene, the interface can retrieve parallel text more efficiently than paper media, and (2) when a questioner appropriately limits the type of reply for a respondent, prompt conversation can occur.

Shun Ozaki, Takuo Matsunobe, Takashi Yoshino, Aguri Shigeno
Computer Assistance in Bilingual Task-Oriented Human-Human Dialogues

In 2008, the percentage of people with a migration background in Germany had already reached more than 15% (12 million people). Among that 15%, the proportion of seniors aged 50 years or older was 30% [1]. In most cases, their competence in the German language is adequate for dealing with everyday situations. However, in emergency or medical situations, their knowledge of German is sometimes not sufficient to communicate with medical professionals, and vice versa. These seniors are part of the main target group of the German Federal Ministry of Education and Research (BMBF) research project SmartSenior [2], and we have developed a software system that assists multilingual doctor-patient conversations to overcome language and cultural barriers. The main requirements for such a system are robustness, accurate translations with respect to context and mobility, adaptability to new languages and topics, and of course an appropriate user interface. Furthermore, we have equipped the system with additional information to convey cultural facts about different countries. In this paper, we present the architecture and ideas behind the system as a whole, as well as related work in the area of computer-aided translation and a first evaluation of the system.

Sven Schmeier, Matthias Rebel, Renlong Ai
Developing and Exploiting a Multilingual Grammar for Human-Computer Interaction

How to build a grammar that can accept as many user inputs as possible is one of the central issues in human-computer interaction. In this paper, we report on a corpus-based multilingual grammar which aims to parse naturally occurring utterances that are used frequently by subjects in a domain-specific spoken dialogue system. The goal is achieved by the following approach: utterance classification, syntax analysis, and grammar formulation.

Xian Zhang, Rico Andrich, Dietmar Rösner

Novel Interaction Techniques and Devices

Frontmatter
Dancing Skin: An Interactive Device for Motion

Dynamic skin, with its complex and dynamic characteristics, provides a valuable interaction device for different contexts. The key issue is the design of motion and its corresponding structure and material. Starting with an understanding of skin, and thus of dynamic skin, we move to motion samples used as case studies to unfold the design process of motion in dynamic skin. The problem is to find a pattern of motion in dynamic skin: how an architectonic structure can cause the cortex to produce motion. We draw on various types of street dance movement for the motion design. This systemic skin construction can serve as a reference for building the basic structure of folding-form skin and joints and for developing the motion it needs. It also provides dancers with an interface through which they can interact with remote dancers over the Internet, offering a new form of manifestation and performance for street dance and its dancers.

Sheng-Han Chen, Teng-Wen Chang, Sheng-Cheng Shih
A Hybrid Brain-Computer Interface for Smart Home Control

Brain-computer interfaces (BCI) provide a new communication channel between the human brain and a computer without using any muscle activity. Applications of BCI systems include communication, restoration of movement, and environmental control. In this study we propose a combined P300 and steady-state visually evoked potential (SSVEP) based BCI system for controlling a smart home environment. First, a P300-based BCI system was developed and tested in a virtual smart home environment and shown to work with high accuracy and a high degree of freedom. Second, in order to start and stop the operation of the P300 BCI, an SSVEP-based toggle switch was implemented. Results indicate that a P300-based system is well suited to applications with several controllable devices where a discrete control command is desired, whereas an SSVEP-based system is more suitable if a continuous control signal is needed and the number of commands is rather limited. The combination of an SSVEP-based BCI as a toggle switch to start and stop the P300 selection yielded very high reliability and accuracy in all subjects.

Günter Edlinger, Clemens Holzner, Christoph Guger
Integrated Context-Aware and Cloud-Based Adaptive Home Screens for Android Phones

The home screen in Android phones is a highly customizable user interface where users can add and remove widgets and icons for launching applications. This customization is currently done on the mobile device itself and creates only static content. Our work takes the concept of the Android home screen [3] one step further and adds flexibility to the user interface by making it context-aware and integrated with the cloud. Overall results indicated that users have a strong positive bias towards the application and that the adaptation helped them tailor the device to their needs through the different context-aware mechanisms.

Tor-Morten Grønli, Jarle Hansen, Gheorghita Ghinea
Evaluation of User Support of a Hemispherical Sub-display with GUI Pointing Functions

In this paper, we discuss the effectiveness of a new human interface device for PC user support. As the Internet utilization rate has increased every year, the usage of PCs by elderly people has also increased in Japan. However, the digital divide between elderly people and PC beginners has widened. To eliminate this digital divide, our goal is to improve users’ operability and visibility. We propose a new hemispherical human-computer interface device for PCs, which integrates a hemispherical sub-display and a pointing device, and we evaluate the device in terms of its operability and visibility. The analyses of a subjective evaluation show that our interface device made a good impression on both elderly people and PC beginners.

Shinichi Ike, Saya Yokoyama, Yuya Yamanishi, Naohisa Matsuuchi, Kazunori Shimamura, Takumi Yamaguchi, Haruya Shiba
Uni-model Human System Interface Using sEMG

Today’s high-end computer systems contain technologies that few individuals could have imagined a few years ago. However, conscious ergonomic design of input devices is still lagging; for example, extensive use of the computer mouse results in various upper extremity musculoskeletal disorders. This motivates the development of an HSI system that acts as an alternative or replacement for the computer mouse, so that such disorders can be avoided. The developed system can also act as an aid for individuals with upper extremity disabilities. These issues can be addressed by developing a framework for a Human System Interface (HSI) that uses biological signals as input. The objective of this paper is to develop a framework for an HSI system using surface electromyography (sEMG) for individuals with various degrees of upper extremity disability. The framework involves acquisition of muscle activity data, a translator algorithm that analyzes the EMG and translates it into a control signal, and a platform-independent tool that provides mouse cursor control. The developed HSI system is validated on applications such as web browsing and simple arithmetic calculation with the help of a GUI tool designed for this purpose.

Srinivasan Jayaraman, Venkatesh Balasubramanian
An Assistive Bi-modal User Interface Integrating Multi-channel Speech Recognition and Computer Vision

In this paper, we present a bi-modal user interface aimed both at assisting persons without hands or with physical disabilities of the hands/arms and at contactless HCI for able-bodied users. A person can manipulate a virtual mouse pointer by moving his/her head and can communicate verbally with a computer, giving speech commands instead of using computer input devices. Speech is a very useful modality for referring to objects and actions on objects, whereas head pointing gestures/motion are a powerful modality for indicating spatial locations. The bi-modal interface integrates a tri-lingual system for multi-channel audio signal processing and automatic recognition of voice commands in English, French, and Russian, as well as a vision-based head detection/tracking system. It processes natural speech and head pointing movements in parallel and fuses both information streams into a single multimodal command, where each modality transmits its own semantic information: head position indicates the 2D head/pointer coordinates, while the speech signal yields control commands. Testing of the bi-modal user interface and comparison with contact-based pointing interfaces were carried out using the methodology of ISO 9241-9.

Alexey Karpov, Andrey Ronzhin, Irina Kipyatkova
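A minimal sketch of the late-fusion idea described above, assuming a head tracker that emits pointer coordinates and a speech recognizer that emits command words; the command set, time window, and class names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Sketch of fusing two input streams into one multimodal command: the head
# tracker supplies pointer coordinates, the speech recognizer supplies a
# command word, and both are merged when they arrive close enough in time.
FUSION_WINDOW_S = 0.8
COMMANDS = {"open", "close", "select", "move"}

@dataclass
class MultimodalEvent:
    command: str
    pointer_xy: Tuple[float, float]

class LateFusion:
    def __init__(self) -> None:
        self._last_pointer: Optional[Tuple[float, Tuple[float, float]]] = None

    def on_head_pointer(self, t: float, x: float, y: float) -> None:
        """Record the most recent head-pointer sample (time, coordinates)."""
        self._last_pointer = (t, (x, y))

    def on_speech(self, t: float, word: str) -> Optional[MultimodalEvent]:
        """Fuse a recognized command word with the latest pointer sample."""
        if word not in COMMANDS or self._last_pointer is None:
            return None
        t_ptr, xy = self._last_pointer
        if abs(t - t_ptr) <= FUSION_WINDOW_S:
            return MultimodalEvent(command=word, pointer_xy=xy)
        return None

if __name__ == "__main__":
    fusion = LateFusion()
    fusion.on_head_pointer(t=10.0, x=0.42, y=0.77)
    print(fusion.on_speech(t=10.3, word="select"))
```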
A Method of Multiple Odors Detection and Recognition

In this paper, we propose a method to detect and recognize multiple odors, and implement a multiple odor recognition system. Multiple odor recognition technology has not yet been developed, since existing odor recognition techniques, based on component analysis and pattern recognition, only deal with single odors at a time. Multiple odors involve dynamic odor changes from no odor to a single odor and on to multiple odors, which is the most common situation in a real-world environment. Therefore, sensing and recognition techniques for dynamic odor changes are necessary. To recognize multiple odors, the proposed method includes odor data acquisition using a smell sensor array, odor detection using entropy, feature extraction using Principal Component Analysis, recognition candidate selection using tree search, and recognition using Euclidean distance. To verify the validity of this study, a performance evaluation was conducted using a 132-odor database. As a result, the odor detection rate is approximately 95.83% and the odor recognition rate is approximately 88.97%.

Dong-Kyu Kim, Yong-Wan Roh, Kwang-Seok Hong
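The pipeline stages named in the abstract (entropy-based detection, PCA feature extraction, Euclidean-distance recognition) can be sketched roughly as follows; the sensor data, threshold, and labels are synthetic placeholders, and the tree-search candidate selection step is omitted for brevity.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative odor-recognition pipeline: an entropy-based "is an odor
# present?" check, PCA feature extraction, and Euclidean nearest-neighbour
# matching against reference frames. All values are placeholders.
ENTROPY_THRESHOLD = 0.5

def spectral_entropy(sensor_frame: np.ndarray) -> float:
    """Shannon entropy of the normalized sensor-array response."""
    p = np.abs(sensor_frame) / (np.abs(sensor_frame).sum() + 1e-12)
    return float(-(p * np.log2(p + 1e-12)).sum())

def train(reference_frames: np.ndarray, labels, n_components: int = 3):
    """Fit PCA on reference frames and project them into feature space."""
    pca = PCA(n_components=n_components).fit(reference_frames)
    return pca, pca.transform(reference_frames), labels

def recognize(frame: np.ndarray, pca, ref_features, labels) -> str:
    if spectral_entropy(frame) < ENTROPY_THRESHOLD:
        return "no odor detected"
    feat = pca.transform(frame.reshape(1, -1))
    dists = np.linalg.norm(ref_features - feat, axis=1)
    return labels[int(np.argmin(dists))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    refs = rng.random((20, 8))                  # 20 reference frames, 8 sensors
    labs = [f"odor_{i % 4}" for i in range(20)]
    pca, feats, labs = train(refs, labs)
    print(recognize(refs[5] + 0.01, pca, feats, labs))
```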
Report on a Preliminary Study Using Breath Control and a Virtual Jogging Scenario as Biofeedback for Resilience Training

Alternative methods of treating psychological stress are needed to treat some veterans of recent military conflicts. The use of virtual world technologies is one possible platform for treatment that is being explored by the “Coming Home” project at the University of Southern California’s Institute for Creative Technologies (ICT). One of the novel ways ICT is attempting to mitigate stress via virtual worlds is with a virtual jogging scenario, where the movement of an avatar is controlled via rhythmic breathing into a standard microphone. We present results from a preliminary study of 27 participants that measured the mood and arousal effects produced by engaging in this scenario.

Jacquelyn Ford Morie, Eric Chance, J. Galen Buckwalter
Low Power Wireless EEG Headset for BCI Applications

Miniaturized, low power and low noise circuits and systems are instrumental in bringing EEG monitoring to the home environment. In this paper, we present a miniaturized, low noise and low-power EEG wireless platform integrated into a wearable headset. The wireless EEG headset achieves remote and wearable monitoring of up to 8 EEG channels. The headset can be used with dry or gel electrodes. The use of the headset as a brain computer interface is demonstrated and evaluated. In particular, the capability of the system in measuring P300 complexes is quantified. Applications of this prototype are foreseen in the clinical, lifestyle and entertainment domains.

Shrishail Patki, Bernard Grundlehner, Toru Nakada, Julien Penders
Virtual Mouse: A Low Cost Proximity-Based Gestural Pointing Device

Effectively addressing the portability of the computer mouse has motivated researchers to generate diverse solutions. Eliminating the constraints of the mouse form factor by adopting vision-based techniques has been recognized as an effective approach. However, current solutions demand significant computing power and require additional learning, making them impractical for industry. This work presents the Virtual Mouse, a low-cost proximity-based pointing device consisting of 10 IR transceivers, a multiplexer, a microcontroller, and pattern recognition rules. With this device embedded on the side of a laptop computer, a user can drive the cursor and activate related mouse events intuitively. Preliminary testing results demonstrate feasibility, and issues for future improvement are also reported.

Sheng Kai Tang, Wen Chieh Tseng, Wei Wen Luo, Kuo Chung Chiu, Sheng Ta Lin, Yen Ping Liu
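As a purely hypothetical illustration of how readings from a small IR-transceiver array might be turned into cursor updates, the sketch below uses a weighted centroid for position and a simple intensity rule for clicks; none of this is taken from the Virtual Mouse implementation described above.

```python
import numpy as np

# Turn one scan of a ten-sensor IR array into a cursor update: the weighted
# centroid of the responses gives the finger position along the array, the
# peak intensity approximates finger distance, and a threshold crossing is
# treated as a "click". Thresholds are placeholders.
CLICK_THRESHOLD = 0.8

def cursor_update(readings, prev_peak: float = 0.0):
    """readings: sequence of 10 normalized IR intensities in [0, 1]."""
    r = np.asarray(readings, dtype=float)
    if r.max() < 0.05:                        # nothing in range
        return None
    centroid = float((np.arange(r.size) * r).sum() / r.sum())
    x = centroid / (r.size - 1)               # 0.0 .. 1.0 along the edge
    y = 1.0 - float(r.max())                  # nearer finger -> smaller y
    click = prev_peak < CLICK_THRESHOLD <= r.max()
    return {"x": x, "y": y, "click": click, "peak": float(r.max())}

if __name__ == "__main__":
    scan = [0.0, 0.1, 0.4, 0.9, 0.3, 0.0, 0.0, 0.0, 0.0, 0.0]
    print(cursor_update(scan, prev_peak=0.2))
```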
Innovative User Interfaces for Wearable Computers in Real Augmented Environment

To be able to move freely in an environment, the user needs a wearable configuration composed of a set of interaction devices that allows interaction with at least one hand free. Taking into account the location (physical, geographical, or logical) and the intended activities of the user, the interaction style and devices must be appropriate to the context. In this paper, we present our design approach and a series of concrete proposals for wearable user interfaces. Our research investigates innovative environment-dependent and environment-independent interfaces. We describe these interfaces, their configurations, real examples of use, and the evaluation of selected techniques.

Yun Zhou, Bertrand David, René Chalon

Avatars and Embodied Interaction

Frontmatter
Influence of Prior Knowledge and Embodiment on Human-Agent Interaction

An experiment was conducted to capture characteristics of Human-Agent Interactions in a collaborative environment. The goal was to explore the following two issues: (1) Whether the user’s emotional state is more stimulated when the user has a human schema, as opposed to a computer agent schema, and (2) Whether the user’s emotional state is more stimulated when the user interacts with a human-like ECA (Embodied Conversational Agent), as opposed to a non human-like ECA or when there is no ECA. Results obtained in the experiment suggest that: (a) participants with a human schema produce higher ratings, compared to those with a computer agent schema, on the emotional (interpersonal stress and affiliation emotion) scale of communication; (b) A human-like interface is associated with higher ratings, compared to the cases of a robot-like interface and a no ECA interface, on the emotional (e.g., interpersonal stress and affiliation emotion) scale of communication.

Yugo Hayashi, Victor V. Kryssanov, Kazuhisa Miwa, Hitoshi Ogawa
The Effect of Physical Embodiment of an Animal Robot on Affective Prosody Recognition

Difficulty understanding or expressing affective prosody is a critical issue for people with autism. This study was initiated with the question of how to improve the emotional communication of children with autism using technological aids. Researchers have encouraged the use of robots as new intervention tools for children with autism, but no study has empirically evaluated a robot compared to a traditional computer in such interventions. Against this background, this study investigated the potential of an animal robot for affective prosody recognition compared to a traditional PC simulator. In this pilot study, however, only neurotypical students participated. Participants recognized Ekman’s basic emotions from both a dinosaur robot, “Pleo”, and a virtual simulator of the Pleo. The physical Pleo showed more promising recognition tendencies and was clearly favored over the virtual one. With this promising result, we may be able to leverage the other advantages of the robot in interventions for children with autism.

Myounghoon Jeon, Infantdani A. Rayan
Older User-Computer Interaction on the Internet: How Conversational Agents Can Help

Using a qualitative study employing a role-playing approach with human agents, this study identifies the potential roles of conversational agents in enhancing older users’ computer interactions on the Internet in e-commerce environments. Twenty-five participants aged 65 or older performed a given shopping task with a human agent playing the role of a conversational agent. The activity computer screens were video-recorded and the participant-agent conversations were audio-recorded. Through navigation path analysis as well as content analysis of the conversations, three major issues hindering older users’ Internet interaction are identified: (1) a lack of prior computer knowledge, (2) a failure to locate information or buttons, and (3) confusions related to meanings of information. The navigation path analysis also suggests potential ways conversational agents may assist older users to optimize their search strategies. Implications and suggestions for future studies are discussed.

Wi-Suk Kwon, Veena Chattaraman, Soo In Shim, Hanan Alnizami, Juan Gilbert
An Avatar-Based Help System for Web-Portals

In this paper we present an avatar-based help system for web portals that provides various kinds of user assistance. Along with helping users with individual elements of a web page, it can also offer step-by-step guidance that supports users in completing specific tasks. Furthermore, users can enter free-text questions to obtain additional information on related topics. The avatar thus offers a single point of reference whenever the user needs assistance. Unlike typical systems based on dedicated help sections consisting of standalone HTML pages, help is instantly available and displayed directly at the element the user is currently working on.

Helmut Lang, Christian Mosch, Bastian Boegel, David Michel Benoit, Wolfgang Minker
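To make the element-level and step-by-step assistance described in the abstract above concrete, here is a minimal sketch of a help registry, assuming hypothetical element identifiers, help texts, and task names (illustrative only, not the authors' implementation):

# Minimal sketch of an element-attached help registry (all identifiers and
# texts are hypothetical; not the authors' implementation).

# Contextual help shown directly at individual page elements.
ELEMENT_HELP = {
    "input#username": "Enter the user name you chose during registration.",
    "button#submit-order": "Sends your order; you can still review it afterwards.",
}

# Step-by-step guidance for completing a specific task.
TASK_GUIDES = {
    "change-password": [
        ("a#account-settings", "Open your account settings."),
        ("input#new-password", "Type the new password."),
        ("button#save", "Save your changes."),
    ],
}

def help_for(element_id: str) -> str:
    """Return the help text the avatar presents for a given page element."""
    return ELEMENT_HELP.get(element_id, "Sorry, no help is available for this element.")

def next_step(task: str, step_index: int):
    """Return the (element, instruction) pair for a guided task, or None when finished."""
    steps = TASK_GUIDES.get(task, [])
    return steps[step_index] if step_index < len(steps) else None

if __name__ == "__main__":
    print(help_for("input#username"))
    print(next_step("change-password", 0))

The free-text questions mentioned in the abstract would require an additional retrieval or question-answering component on top of such a registry.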
mediRobbi: An Interactive Companion for Pediatric Patients during Hospital Visit

Young children often feel extremely anxious when visiting a doctor. We designed mediRobbi, an interactive robotic companion, to help pediatric patients feel more relaxed and comfortable during hospital visits. mediRobbi can guide and accompany pediatric patients through their medical procedures. Sensors and servomotors enable mediRobbi to respond both to environmental inputs and to the reactions of the young children. The ultimate goal of this study is to transform an intimidating medical situation into a joyful adventure game for pediatric patients.

Szu-Chia Lu, Nicole Blackwell, Ellen Yi-Luen Do
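As a speculative illustration of how sensed events might be mapped to such a robot's reactions (the event names and behaviors below are assumptions, not details of the mediRobbi system):

# Speculative sketch of a rule-based reaction loop for a companion robot
# (event names and behaviors are invented; not the mediRobbi implementation).

REACTIONS = {
    "child_approaches": "turn_head_and_chirp",
    "child_pets_back_sensor": "wag_tail",
    "loud_noise": "slow_breathing_motion",
    "procedure_step_completed": "celebration_dance",
}

def react(event: str) -> str:
    """Map a sensed event to the servomotor behavior the robot performs."""
    return REACTIONS.get(event, "idle_breathing")

if __name__ == "__main__":
    for event in ["child_approaches", "loud_noise", "unknown_event"]:
        print(event, "->", react(event))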
Design of Shadows on the OHP Metaphor-Based Presentation Interface Which Visualizes a Presenter’s Actions

We describe the design of the shadows in an overhead projector (OHP) metaphor-based presentation interface that visualizes a presenter's actions. Our interface works with graphics tablet devices. It superimposes a pen-shaped shadow based on the position, altitude, and azimuth of the pen. A presenter can easily point at the slide with the shadow, and the audience can observe the presenter's actions through it. We performed two presentations using a prototype system and gathered feedback from them. We decided on the design of the shadows on the basis of this feedback.

Yuichi Murata, Kazutaka Kurihara, Toshio Mochizuki, Buntarou Shizuki, Jiro Tanaka
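As a rough illustration of how a pen-shaped shadow could be derived from the tablet pen's position, altitude, and azimuth (our own geometric sketch, not the authors' code), the shadow can be modeled as the pen's projection onto the slide under an overhead light:

# Rough geometric sketch of deriving a pen-shaped shadow from tablet pen pose
# (illustrative only; not the authors' implementation).
import math

def pen_shadow(x: float, y: float, altitude_deg: float, azimuth_deg: float,
               pen_length: float = 120.0):
    """Return (tip, tail) slide coordinates of the shadow cast by the pen.

    x, y         -- pen tip position on the slide (pixels)
    altitude_deg -- angle between the pen and the tablet surface (90 = vertical)
    azimuth_deg  -- direction the pen barrel points, measured on the surface
    pen_length   -- assumed pen length in pixels
    """
    altitude = math.radians(altitude_deg)
    azimuth = math.radians(azimuth_deg)
    # Under an overhead light, the shadow is the pen's horizontal projection:
    # a vertical pen casts no shadow, a leaning pen casts a longer one.
    length = pen_length * math.cos(altitude)
    tail = (x + length * math.cos(azimuth), y + length * math.sin(azimuth))
    return (x, y), tail

if __name__ == "__main__":
    tip, tail = pen_shadow(400, 300, altitude_deg=30, azimuth_deg=45)
    print("shadow from", tip, "to", tail)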
Web-Based Nonverbal Communication Interface Using 3DAgents with Natural Gestures

In this paper, we assumed that nonverbal communication using 3D agents with natural gestures has various advantages over traditional voice and video communication alone, and we developed the IMoTS (Interactive Motion Tracking System) to verify this hypothesis. The main feature of this system is that the natural gestures of 3D agents can be captured easily, via an interactive GUI, from 2D video images in which characteristic human behaviors are recorded, and then transmitted and reproduced as natural gestures of the 3D agents. The experimental results showed that the accuracy of the captured gestures frequently used in web communication was within the detectable limit. We also found that human behaviors can be characterized by a mathematical formula and that some of this information can be transmitted; in particular, personal characteristics such as quirks and likenesses had a predominant effect on impressions and memories of the person.

Toshiya Naka, Toru Ishida
Taking Turns in Flying with a Virtual Wingman

In this study we investigate miscommunications in interactions between human pilots and a virtual wingman, represented by our virtual agent Ashley. We made an inventory of the types of problems that occur in such interactions using recordings of Ashley in flight briefings with pilots, and designed a perception experiment to find evidence of human pilots providing cues about the occurrence of miscommunications. In this experiment, stimuli taken from the recordings were rated by naive participants on successfulness. The results show that the largest share of miscommunications concerns floor management. Participants are able to correctly assess the success of interactions, indicating that cues for such judgments are present, although successful interactions are recognized better. Moreover, stimulus modality (audio, visual, or combined) does not influence participants' ability to judge the success of the interactions. On the basis of these results, we present recommendations for the further development of virtual wingmen.

Pim Nauts, Willem van Doesburg, Emiel Krahmer, Anita Cremers
A Configuration Method of Visual Media by Using Characters of Audiences for Embodied Sport Cheering

In sports bars, where people watch live sports on TV, it is not possible to experience the atmosphere of the stadium. In this study, we focus on the importance of embodiment in sport cheering and develop a prototype of an embodied cheering support system. A stadium-like atmosphere is created by arranging crowds of audience characters in a virtual stadium, and users can perceive a sense of unity and excitement by cheering with embodied motions and interacting with the audience characters.

Kentaro Okamoto, Michiya Yamamoto, Tomio Watanabe
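As a toy illustration of arranging audience characters around a virtual stadium (parameters and layout are our own assumptions, not the authors' system), positions could be generated in concentric elliptical rows around the pitch:

# Toy sketch of placing audience characters in elliptical stands around a
# virtual pitch (all parameters are assumptions; not the authors' system).
import math

def stadium_seats(rows: int = 3, seats_per_row: int = 40,
                  field_rx: float = 55.0, field_ry: float = 35.0,
                  row_spacing: float = 4.0):
    """Yield (x, y, row) positions for audience characters around the field."""
    for row in range(rows):
        rx = field_rx + (row + 1) * row_spacing
        ry = field_ry + (row + 1) * row_spacing
        for seat in range(seats_per_row):
            angle = 2 * math.pi * seat / seats_per_row
            yield (rx * math.cos(angle), ry * math.sin(angle), row)

if __name__ == "__main__":
    positions = list(stadium_seats())
    print(len(positions), "characters placed; first position:", positions[0])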
Introducing Animatronics to HCI: Extending Reality-Based Interaction

As software and hardware technologies have improved over the past two decades, HCI researchers have developed a wide range of interfaces. As these researchers began to explore the next generation of interaction styles, it was inevitable that they would use a lifelike robot (animatronic) as the basis for interaction. Until recently, however, the main use of animatronic technology had been "edutainment," and it has only lately been considered as an interaction style. In this research, various interaction styles (conventional GUI, AR, 3D graphics, and a newly introduced animatronic user interface) were used to instruct users in a 3D construction task that was held constant across the styles. From this experiment, the placement, if any, of animatronic technology within the reality-based interaction framework should become more apparent.

G. Michael Poor, Robert J. K. Jacob
Development of Embodied Visual Effects Which Expand the Presentation Motion of Emphasis and Indication

Although visual presentation software typically has a pen function, it tends to remain unused by most presenters. In this paper, we propose the concept of embodied visual effects that express the emphasis and indication conveyed by presentation motions using a pen display. First, we measured the timing of pen-based presentation motions performed in sitting and standing positions. Next, we evaluated the timing of underlining and explanation through a synthesis analysis from the viewpoint of the attendees. Then, on the basis of these measurements and evaluations, we developed several visual effects. These effects, which express the embodied motions and control their timing, are implemented as system prototypes.

Yuya Takao, Michiya Yamamoto, Tomio Watanabe
Experimental Study on Appropriate Reality of Agents as a Multi-modal Interface for Human-Computer Interaction

Although humanlike robots and computer agents are generally perceived as familiar, a highly similar external representation occasionally reduces this familiarity. We experimentally investigated the relationship between the similarity and familiarity of multi-modal agents with face and voice representations. The results indicate that greater similarity of the agents did not simply increase their familiarity; rather, they imply that the external representation of computer agents for communicative interaction should not be very similar to a human but appropriately similar in order to be perceived as familiar.

Kaori Tanaka, Tatsunori Matsui, Kazuaki Kojima
Backmatter
Metadata
Title
Human-Computer Interaction. Interaction Techniques and Environments
Edited by
Julie A. Jacko
Copyright year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-21605-3
Print ISBN
978-3-642-21604-6
DOI
https://doi.org/10.1007/978-3-642-21605-3
