2012 | Book

Gesture and Sign Language in Human-Computer Interaction and Embodied Communication

9th International Gesture Workshop, GW 2011, Athens, Greece, May 25–27, 2011, Revised Selected Papers

Edited by: Eleni Efthimiou, Georgios Kouroupetroglou, Stavroula-Evita Fotinea

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the revised selected papers of the 9th International Gesture Workshop, GW 2011, held in Athens, Greece, in May 2011. The 24 papers presented were carefully reviewed and selected from 35 submissions. They are organized in five sections: human-computer interaction; cognitive processes; notation systems and animation; gestures and signs: linguistic analysis and tools; and gestures and speech.

Table of Contents

Frontmatter

Human-Computer Interaction

Gestures in Assisted Living Environments
Abstract
This paper is concerned with multimodal assisted living environments, particularly those based on gesture interaction. Research in ambient assisted living concerns the provision of a safe, comfortable, and independent lifestyle in a domestic environment. We refer to spatial gestures and gesture recognition software and present an observational user study of gestures in the Bremen Ambient Assisted Living Lab (BAALL), a 60 m² apartment suitable for the elderly and people with physical or cognitive impairments.
Dimitra Anastasiou
Using Wiimote for 2D and 3D Pointing Tasks: Gesture Performance Evaluation
Abstract
We present two studies to comparatively evaluate the performance of gesture-based 2D and 3D pointing tasks. In both, a Wiimote controller and a standard mouse were used by six participants. For the 3D experiments we introduce a novel configuration analogous to the ISO 9241-9 standard methodology. We examine the pointing devices’ conformance to Fitts’ law and measure eight extra parameters that describe the cursor movement trajectory more accurately. For the 2D tasks using the Wiimote, throughput is 41.2% lower than with the mouse, target re-entry is almost the same, and the missed-clicks count is three times higher. For the 3D tasks using the Wiimote, throughput is 56.1% lower than with the mouse, target re-entry increases by almost 50%, and the missed-clicks count is sixteen times higher.
Fitts’ law, 3D pointing, Gesture User Interface, Wiimote
Georgios Kouroupetroglou, Alexandros Pino, Athanasios Balmpakakis, Dimitrios Chalastanis, Vasileios Golematis, Nikolaos Ioannou, Ioannis Koutsoumpas
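The throughput figures above follow the ISO 9241-9 effective-throughput methodology. As a reference point, here is a minimal sketch of that standard computation; the trial data in the example are invented for illustration:

```python
import math
import statistics

def effective_throughput(amplitude, movement_times, endpoint_errors):
    """ISO 9241-9 effective throughput (bits/s) for one condition.

    amplitude       -- nominal distance to the target
    movement_times  -- movement time per trial, in seconds
    endpoint_errors -- signed deviation of each selection endpoint from
                       the target centre, along the task axis
    """
    # Effective width: 4.133 * SD of endpoint deviations (ISO 9241-9)
    we = 4.133 * statistics.stdev(endpoint_errors)
    # Effective index of difficulty, Shannon formulation (bits)
    ide = math.log2(amplitude / we + 1)
    # Throughput = effective ID over mean movement time
    return ide / statistics.mean(movement_times)

# Invented example: 10 trials, 300 px amplitude, ~0.6 s per movement
print(effective_throughput(300, [0.62] * 10,
                           [-12, 8, 5, -9, 14, -3, 7, -11, 2, 6]))
```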
Choosing and Modeling the Hand Gesture Database for a Natural User Interface
Abstract
This paper presents a database of natural hand gestures (‘IITiS Gesture Database’) recorded with motion capture devices. For the purpose of benchmarking and testing the gesture interaction system we have selected twenty-two natural hand gestures and recorded them on three different motion capture gloves with a number of participants and movement speeds. The methodology for the gesture selection, details of the acquisition process, and data analysis results are presented in the paper.
Przemysław Głomb, Michał Romaszewski, Sebastian Opozda, Arkadiusz Sochan
User Experience of Gesture Based Interfaces: A Comparison with Traditional Interaction Methods on Pragmatic and Hedonic Qualities
Abstract
Studies into gestural interfaces – and interfaces in general – typically focus on pragmatic or usability aspects (e.g., ease of use, learnability). Yet the merits of gesture-based interaction likely go beyond the purely pragmatic and impact a broader class of experiences, involving also qualities such as enjoyment, stimulation, and identification. The current study compared gesture-based interaction with device-based interaction, in terms of both their pragmatic and hedonic qualities. Two experiments were performed, one in a near-field context (mouse vs. gestures), and one in a far-field context (Wii vs. gestures). Results show that, whereas device-based interfaces generally scored higher on perceived performance, and the mouse scored higher on pragmatic quality, embodied interfaces (gesture-based interfaces, but also the Wii) scored higher in terms of hedonic quality and fun. A broader perspective on evaluating embodied interaction technologies can inform the design of such technologies and allow designers to tailor them to the appropriate application.
Maurice H. P. H. van Beurden, Wijnand A. Ijsselsteijn, Yvonne A. W. de Kort
Low Cost Force-Feedback Interaction with Haptic Digital Audio Effects
Abstract
We present the results of an experimental study of Haptic Digital Audio Effects with and without force feedback. Through a low-cost Falcon haptic device, participants experienced two new real-time physical audio effect models that we developed under the CORDIS-ANIMA formalism. The results indicate that the haptic modality changed the user’s experience significantly.
Alexandros Kontogeorgakopoulos, Georgios Kouroupetroglou
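CORDIS-ANIMA models sound and haptics as networks of punctual masses connected by visco-elastic links. A minimal single-mass sketch in that spirit follows; all parameter values are invented, and this is not the authors’ actual effect models:

```python
# One punctual mass (<MAT>) tied to a fixed point by a visco-elastic
# link (<LIA>), integrated at audio rate -- the basic building block of
# mass-interaction models.

def simulate(steps=2000, dt=1.0 / 44100, m=1.0, k=1.0e8, z=60.0, x0=1e-3):
    x_prev = x = x0                  # mass released from rest at x0
    out = []
    for _ in range(steps):
        v = (x - x_prev) / dt                    # finite-difference velocity
        f = -k * x - z * v                       # spring + damper link force
        x_prev, x = x, 2 * x - x_prev + (f / m) * dt ** 2  # Verlet step
        out.append(x)                            # position used as the signal
    return out

samples = simulate()  # a damped ~1.6 kHz oscillation at 44.1 kHz
```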

Cognitive Processes

The Role of Spontaneous Gestures in Spatial Problem Solving
Abstract
When solving spatial problems, people often spontaneously produce hand gestures. Recent research has shown that our knowledge is shaped by the interaction between our body and the environment. In this article, we review and discuss evidence on: 1) how spontaneous gesture can reveal the development of problem solving strategies when people solve spatial problems; 2) whether producing gestures can enhance spatial problem solving performance. We argue that when solving novel spatial problems, adults go through deagentivization and internalization processes, which are analogous to young children’s cognitive development processes. Furthermore, gesture enhances spatial problem solving performance. The beneficial effect of gesturing can be extended to non-gesturing trials and can be generalized to a different spatial task that shares similar spatial transformation processes.
Mingyuan Chu, Sotaro Kita
Effects of Spectral Features of Sound on Gesture Type and Timing
Abstract
In this paper we present results from an experiment in which infrared motion capture technology was used to record participants’ movement in synchrony to different rhythms and different sounds. The purpose was to determine the effects of the sounds’ spectral and temporal features on synchronization and gesture characteristics. In particular, we focused on the correlation between sounds and three gesture features: maximum acceleration, discontinuity, and total quantity of motion. Our findings indicate that discrete, discontinuous motion resulted in better synchronization, while spectral features of sound had a significant effect on the quantity of motion.
Mariusz Kozak, Kristian Nymoen, Rolf Inge Godøy
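For readers unfamiliar with the motion descriptors named above, here is a minimal sketch of how maximum acceleration and quantity of motion might be derived from a sampled trajectory; the feature definitions follow common practice, not necessarily the authors’ exact formulas:

```python
import numpy as np

def motion_features(pos, fps=100.0):
    """Two simple descriptors from an (N, 3) marker trajectory at fps Hz:
    maximum acceleration magnitude and total quantity of motion
    (cumulative travelled distance)."""
    vel = np.gradient(pos, 1.0 / fps, axis=0)      # velocity, unit/s
    acc = np.gradient(vel, 1.0 / fps, axis=0)      # acceleration, unit/s^2
    max_acc = np.linalg.norm(acc, axis=1).max()
    qom = np.linalg.norm(np.diff(pos, axis=0), axis=1).sum()
    return max_acc, qom

# Synthetic circular hand path as a stand-in for mocap data
t = np.linspace(0, 2 * np.pi, 200)
path = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
print(motion_features(path))
```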
Human-Motion Saliency in Complex Scenes
Abstract
We present a new and original method for human motion analysis and evaluation, developed on the basis of the role played by attention in the perception of human motion. Attention is particularly relevant both in a multi-motion scene and in social interactions, when it comes to selecting and discerning what to focus on and why. The first crucial role of attention concerns the saliency of human motion within a scene where other dynamics may occur. The second role, in close social interactions, is highlighted by the selectivity shown towards gesture modalities in both peripheral and central vision. Experiments for both modeling and testing were based on a dynamic 3D gaze tracker.
Fiora Pirri, Matia Pizzoli, Matei Mancas
What, Why, Where and How Do Children Think? Towards a Dynamic Model of Spatial Cognition as Action
Abstract
The Spatial Cognition in Action (SCA) model described here takes a dynamic systems approach to the biological concept of the Perception-Cognition-Action cycle. This partial descriptive feature based model is theoretically developed and empirically informed by the examination of children’s embodied gestures that are rooted in action. The model brings together ecological and corporeal paradigms with evidence from neurobiological and cognitive science research. Such a corporeal approach places the ‘action ready body’ centre stage. A corpus of gesture repertoires from both neuro-atypical and neuro-typical children has been created from ludic interaction studies. The model is proposed as a dynamic construct for intervention, involving the planning and design of interaction technology for neuro-atypical children’s pedagogy and rehabilitation.
Marilyn Panayi

Notation Systems and Animation

A Labanotation Based Ontology for Representing Dance Movement
Abstract
In this paper, we present a Knowledge Based System for describing and storing dances that takes advantage of the expressivity of Description Logics. We propose exploiting Semantic Web technologies to represent and archive dance choreographies by developing a Dance Ontology in OWL-2. Description Logics allow us to express complex relations and inference rules for the domain of dance movement, while reasoning capabilities make it easy to extract new knowledge from existing knowledge. Furthermore, we can search within the ontology based on the steps and movements of dances by writing SPARQL queries. The constituent elements of the ontology and the relationships used to construct the dance model are based on the semantics of the Labanotation system, a widely applied language that uses symbols to denote dance choreographies.
Katerina El Raheb, Yannis Ioannidis
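As an illustration of the kind of search the abstract describes, here is a minimal sketch of a SPARQL query over a dance ontology, run with rdflib; the ontology file, IRI, class, and property names are hypothetical placeholders, not the paper’s actual vocabulary:

```python
from rdflib import Graph

g = Graph()
g.parse("dance.owl", format="xml")  # hypothetical OWL-2 ontology file

# Find choreographies containing a step whose Labanotation direction
# symbol is "forward low" (all names below are made up for illustration).
query = """
PREFIX dance: <http://example.org/dance#>
SELECT DISTINCT ?dance WHERE {
    ?dance a dance:Choreography ;
           dance:hasStep ?step .
    ?step  dance:direction dance:ForwardLow .
}
"""
for row in g.query(query):
    print(row.dance)
```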
ISOcat Data Categories for Signed Language Resources
Abstract
As the creation of signed language resources is gaining speed world-wide, the need for standards in this field becomes more acute. This paper discusses the state of the field of signed language resources, their metadata descriptions, and annotations that are typically made. It then describes the role that ISOcat may play in this process and how it can stimulate standardisation without imposing standards. Finally, it makes some initial proposals for the thematic domain ‘sign language’ that was introduced in 2011.
Onno Crasborn, Menzo Windhouwer
Assessing Agreement on Segmentations by Means of Staccato, the Segmentation Agreement Calculator according to Thomann
Abstract
Staccato, the Segmentation Agreement Calculator According to Thomann, is a software tool for assessing the degree of agreement of multiple segmentations of time-related data (e.g., gesture phases or sign language constituents). The software implements an assessment procedure developed by Bruno Thomann and will be made publicly available. The article discusses the rationale of the agreement assessment procedure and points to future extensions of Staccato.
Andy Lücking, Sebastian Ptock, Kirsten Bergmann
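Thomann’s assessment procedure itself is described in the paper; for intuition only, here is a naive overlap-based agreement measure between two segmentations (a simple stand-in, not the Staccato algorithm):

```python
def overlap_agreement(seg_a, seg_b):
    """Dice-style agreement between two segmentations of one timeline.

    Each segmentation is a list of (start, end) intervals in seconds.
    Returns twice the summed pairwise overlap over the total annotated
    duration: 1.0 means identical, 0.0 means disjoint segmentations.
    """
    overlap = sum(max(0.0, min(e1, e2) - max(s1, s2))
                  for s1, e1 in seg_a for s2, e2 in seg_b)
    total = sum(e - s for s, e in seg_a) + sum(e - s for s, e in seg_b)
    return 2.0 * overlap / total if total else 0.0

# Two annotators segmenting the same gesture stream
print(overlap_agreement([(0.0, 1.2), (1.5, 2.0)],
                        [(0.1, 1.1), (1.4, 2.1)]))
```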
How Do Iconic Gestures Convey Visuo-Spatial Information? Bringing Together Empirical, Theoretical, and Simulation Studies
Abstract
We investigate the question of how co-speech iconic gestures are used to convey visuo-spatial information in an interdisciplinary way, starting with a corpus-based empirical and theoretical perspective on how a typology of gesture form and a partial ontology of gesture meaning are related. The results provide the basis for a computational modeling approach that allows us to simulate the production of speaker-specific gesture forms to be realized with virtual agents. An evaluation of our simulation results and our methodology shows that the model is able to successfully approximate human gestural behavior in its use of iconic gestures, and moreover, that gestural behavior can improve how humans rate a virtual agent in terms of eloquence, competence, human-likeness, or likeability.
Hannes Rieser, Kirsten Bergmann, Stefan Kopp
Thumb Modelling for the Generation of Sign Language
Abstract
We present a simple kinematic model of the thumb for the animation of virtual characters. The animation follows a purely kinematic approach and thus requires very precise limits on the rotations of the thumb to be realistic. The thumb is made opposable thanks to the addition of two bones simulating the carpo-metacarpal complex. The bones are laid out to build a virtual axis of rotation allowing the thumb to move into the opposed position. The model is then evaluated by generating 22 static hand-shapes of Sign Language and comparing them to previous work in animation.
Maxime Delorme, Michael Filhol, Annelies Braffort

Gestures and Signs: Linguistic Analysis and Tools

Toward a Motor Theory of Sign Language Perception
Abstract
Research on signed languages still strongly dissociates linguistic issues, related to phonological and phonetic aspects, from gesture studies aimed at recognition and synthesis. This paper focuses on the imbrication of motion and meaning for the analysis, synthesis, and evaluation of sign language gestures. We discuss the relevance and interest of a motor theory of perception in sign language communication. According to this theory, linguistic knowledge is mapped onto sensory-motor processes, and we propose a methodology based on the principle of a synthesis-by-analysis approach, guided by an evaluation process that aims to validate some hypotheses and concepts of this theory. Examples from existing studies illustrate the different concepts and provide avenues for future work.
Sylvie Gibet, Pierre-François Marteau, Kyle Duarte
Analysis and Description of Blinking in French Sign Language for Automatic Generation
Abstract
The present paper tackles the description of blinking within the context of automatic generation of Sign Languages (SLs). Blinking is rarely taken into account in SL processing systems, even though several studies underline its importance. Our purpose is to improve knowledge of blinking so as to be able to generate blinks. We present the methodology we used for this purpose and the results we obtained. We list the main categories we identified in our corpus and present in more detail an excerpt of our results corresponding to the most frequent category, i.e., segmentation.
Annelies Braffort, Emilie Chételat-Pelé
Grammar/Prosody Modelling in Greek Sign Language: Towards the Definition of Built-In Sign Synthesis Rules
Abstract
The aim of the present work is to discuss a limited set of issues concerning the grammar modelling of Greek Sign Language (GSL), within the framework of improving the naturalness, and more specifically the grammaticality, of synthetic GSL signing. This preliminary study addresses linguistic issues relating to specific grammar structures and their related prosodic expressive markers, through experimental implementation of the respective rules within a sign synthesis support environment initially developed to create lexical resources for GSL synthesis.
Athanasia-Lida Dimou, Theodore Goulas, Eleni Efthimiou
Combining Two Synchronisation Methods in a Linguistic Model to Describe Sign Language
Abstract
The context is Sign Language modelling for synthesis, with 3D virtual signers as output. Sign languages convey multi-linear information and hence allow for many synchronisation patterns between the articulators of the body. Addressing the problem that current models usually cover at best only one type of those patterns, and in the wake of the recent description model Zebedee, we introduce the Azalee extension, designed to enable the description of any type of synchronisation in Sign Language.
Michael Filhol
Sign Segmentation Using Dynamics and Hand Configuration for Semi-automatic Annotation of Sign Language Corpora
Abstract
This paper addresses the problem of sign language video annotation. Nowadays, sign language segmentation is performed manually; this is time-consuming, error-prone, and not reproducible. In this paper we provide an automatic approach to segmenting signs. We use a particle-filter-based approach to track the hands and head. Motion features are used to classify segments performed with one or two hands and to detect events. Events detected in the middle of a sign are removed using hand shape features; hand shape is characterized using similarity measurements. Evaluation has shown the performance and limitations of the proposed approach.
Matilde Gonzalez, Christophe Collet
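As a rough illustration of motion-based segmentation (not the paper’s particle-filter pipeline), the following sketch proposes sign boundaries where hand speed drops below a threshold; all parameters are invented:

```python
import numpy as np

def candidate_boundaries(hand_pos, fps=25.0, vel_thresh=0.5):
    """Propose sign boundaries at frames where the tracked dominant hand
    enters a low-speed interval (a rest moment between signs).

    hand_pos -- (N, 2) hand positions per frame, from any tracker.
    """
    speed = np.linalg.norm(np.diff(hand_pos, axis=0), axis=1) * fps
    low = speed < vel_thresh
    # keep only the frames where the hand *enters* a low-speed interval
    return [i + 1 for i in range(1, len(low)) if low[i] and not low[i - 1]]

# Toy trajectory: move, pause, move again
pts = np.array([[0, 0], [0.1, 0], [0.2, 0], [0.21, 0],
                [0.21, 0], [0.3, 0], [0.4, 0]], dtype=float)
print(candidate_boundaries(pts))  # -> [3]
```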

Gestures and Speech

Integration of Gesture and Verbal Language: A Formal Semantics Approach
Abstract
The paper presents a formal framework to model the fusion of gesture meaning with the meaning of the co-occurring verbal fragment. The framework is based on the formalization of two simple concepts, intersectivity and iconicity, which form the core of most descriptive accounts of the interaction of the two modalities. The formalization is presented as an extension of a well-known framework for the analysis of meaning in natural language. We claim that a proper formalization of these two concepts is sufficient to provide a principled explanation of gestures accompanying different types of linguistic expressions. The formalization also aims at providing a general mechanism (iconicity) by which the meaning of gestures is extracted from their formal appearance.
Gianluca Giorgolo
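The intersectivity concept can be rendered schematically as ordinary intersective modification; the following is an illustrative rendering, not the paper’s exact formalization:

```latex
% Schematic intersectivity: the gesture denotation combines with the
% co-occurring verbal predicate by plain conjunction, as in ordinary
% intersective modification.
\[
  [\![\, \textit{word} + \textit{gesture} \,]\!]
  \;=\; \lambda x .\, [\![\, \textit{word} \,]\!](x)
        \wedge [\![\, \textit{gesture} \,]\!](x)
\]
% E.g. "a round [circular gesture] table" picks out the entities that
% are tables and match the shape the gesture depicts.
```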
Generating Co-speech Gestures for the Humanoid Robot NAO through BML
Abstract
We extend and develop an existing virtual agent system to generate communicative gestures for different embodiments (i.e., virtual or physical agents). This paper presents our ongoing work on an implementation of this system for the NAO humanoid robot. From a specification of multimodal behaviors encoded with the Behavior Markup Language (BML), the system synchronizes and realizes the verbal and nonverbal behaviors on the robot.
Quoc Anh Le, Catherine Pelachaud
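For readers unfamiliar with BML, here is a minimal sketch of the kind of multimodal specification such a system consumes; the specific behaviors and synchronization attributes are illustrative, not taken from the paper:

```python
import xml.etree.ElementTree as ET

# A wave gesture synchronized to a point inside the speech, in BML 1.0.
bml = """
<bml id="bml1" xmlns="http://www.bml-initiative.org/bml/bml-1.0">
  <speech id="s1" start="0">
    <text>Hello, <sync id="tm1"/> nice to meet you.</text>
  </speech>
  <gesture id="g1" lexeme="WAVE" start="s1:tm1"/>
</bml>
"""
root = ET.fromstring(bml)
print(root.attrib["id"], [child.tag.split("}")[-1] for child in root])
```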
Interaction between Speech and Gesture: Strategies for Pointing to Distant Objects
Abstract
Referring to objects using multimodal deictic expressions is an important form of communication. This work addresses the question of how pragmatic factors affect content distribution between the modalities of speech and gesture. This is done by analyzing a study on deictic pointing gestures to objects under two conditions: with and without speech. The relevant pragmatic factor was the distance to the referent object. As one main result, two strategies were identified that participants used to adapt their gestures to the condition. This knowledge can be used, e.g., to improve the naturalness of pointing gestures employed by embodied conversational agents.
Thies Pfeiffer
Making Space for Interaction: Architects Design Dialogues
Abstract
This exploratory research has taken a set of theoretical concepts as the basis for testing a visualisation of body-centric gesture space: (1) Kendon’s transactional segments, (2) the manubrium as a central anatomical marker for bodily movement, and (3) physical reach space. Using these, a 3D model of gesture space has been designed in order to be applied to empirical data from architects’ design meetings, articulating the role of gesture space overlaps within the interaction.
Multi-dimensional drawing techniques have resulted in detailed visualisations of these overlaps. Illustrations show that dialogue contributions can be mapped to distinct locations in the changing shared spaces, creating a spatial framework for the analysis and visualisation of the multi-dimensional topology of the interaction. This paper discusses a case study where this type of modelling can be applied empirically, indexing speech and gesture to the drawing subspaces of a group of architects.
Claude P. R. Heath, Patrick G. T. Healey
Iconic Gestures in Face-to-Face TV Interviews
Abstract
This paper presents a study of iconic gestures as attested in a corpus of Greek face-to-face television interviews. The communicative significance of iconic gestures situated in an interactional context is examined with regard to their semantics as well as the syntactic properties of the accompanying speech. Iconic gestures are classified according to their semantic equivalents, and are further linked to the phrasal units of the words co-occurring with them, in order to provide evidence about the actual syntactic structures that induce them. The findings support the communicative power of iconic gestures and suggest a framework for their interpretation based on the interplay of semantic and syntactic cues.
Maria Koutsombogera, Harris Papageorgiou
Backmatter
Metadata
Title
Gesture and Sign Language in Human-Computer Interaction and Embodied Communication
Edited by
Eleni Efthimiou
Georgios Kouroupetroglou
Stavroula-Evita Fotinea
Copyright year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-34182-3
Print ISBN
978-3-642-34181-6
DOI
https://doi.org/10.1007/978-3-642-34182-3