
The role of gesture in designing

Published online by Cambridge University Press:  11 July 2011

Willemien Visser
Affiliation:
CNRS, Paris, and INRIA, Rocquencourt, France
Mary Lou Maher
Affiliation:
University of Sydney, Sydney, Australia

Abstract

This paper introduces the special issue of AI EDAM on the role of gesture in designing. It starts with the context of the papers submitted and a summary of the papers accepted. We then introduce gesture studies, one of the two main domains with which this Special Issue is concerned. We do not introduce design research: we suppose the readers of AI EDAM are familiar with this domain. After this general introduction to the domain of gesture studies, we provide an overview of gestures in design, that is, the research environment of the papers in this Special Issue. We then discuss some dimensions on which these papers differ, as well as some on which they are related.

Type
Guest Editorial
Copyright
Copyright © Cambridge University Press 2011

1. INTRODUCTION

This Special Issue of AI EDAM concerns the role of gesture in designing. This topic is relatively new in the field of design research and has only recently become of interest to research in computational support for designers. This Special Issue aims to raise awareness of recent research and to inspire additional research at the intersection of theory and practice.

Gesture has been studied from various perspectives, sometimes with respect to computer support for human communication and collaboration, but also with respect to the psychology of gesture. Two examples follow.

Although gesture is most commonly assumed to play a role in communication, it has been shown that gesture also plays an important role in thinking (McNeill, 1992). These findings have implications for the role of gesture in designing: the role it plays in design thinking and the role it plays in design collaboration. Studies of designers, working alone or collaborating, have been primarily concerned with verbal protocols and very little with gesture. With this Special Issue, we raise awareness of the role of gesture in designing, primarily through research on gesture when designers communicate and collaborate, but also through the implications of research on gesture and thought for the design of HCI devices. At this stage, the studies reported here serve as a precursor to the development of computational support for design and design collaboration, and provide methodological approaches for understanding the impact of computational support and novel HCI technology on design thinking.

The analysis of the function of gesture in face-to-face collaborative design may have implications for environments that support remote collaborative design. Until now, such systems have mainly supported pen-based pointing or (other) “command” gestures. If these environments are to effectively support designers collaborating from remote locations, then representational and other types of gestures must also be made visible and transmitted to the design partners.

To advance this important topic, the editors of this Special Issue sent out an open call for papers that provide theoretical or empirical contributions to the role of gesture in designing, either in the context of computer-supported collaborative work (CSCW) in the domain of design or as a precursor to designing effective computational support and mediation for design. Relevant research on the role of gesture in designing can come from all the disciplines involved in gesture studies: artificial intelligence, HCI, or CSCW perspectives as well as cognitive-science disciplines, such as psychology and pragmatics.

In the Call for Papers, we suggested the following topics, but we announced explicitly that this was not an exhaustive list:

  • theoretical aspects of gesture in design interaction;

  • the role of gestures in design thinking;

  • gesture and multimodal interaction in design interaction: gesture with speech, writing, drawing, and other modalities;

  • artificial intelligence and cognitive models of gesture in design interaction;

  • the role of gesture and multimodal interaction in remote design collaboration;

  • HCI and studies of gesture in collaborative design environments;

  • new HCI technologies that enable gesture in design environments;

  • gesture and multimodal interaction in CSCW design environments; and

  • the role of gestures in defining an external representation of the design model (either to the computer or to a person).

1.1. Organization of this paper

This Guest Editorial introduces the Special Issue, starting with the context of the papers submitted and a summary of the papers accepted. We then introduce gesture studies, one of the two main domains with which this Special Issue is concerned. We do not introduce design research: we suppose the readers of AI EDAM are familiar with this domain. After this general introduction to the domain of gesture studies, we provide an overview of gestures in design, that is, the research environment of the papers in this Special Issue. We then discuss some dimensions on which these papers differ and some on which they are related.

2. PAPERS ACCEPTED

The role of gesture in designing is new to the AI EDAM readership and authorship: we received notification that nine authors intended to submit a paper. This led to seven actual submissions, which were each reviewed by at least three reviewers. Three papers were accepted for publication. Although these papers do not cover the entire domain of gesture in designing and are not representative of the scope of gesture in designing, they provide a contribution to three important areas:

  • the role of pointing in design meetings,

  • a computational approach to identifying gestures in design protocols, and

  • the role of gesturing in graspable user interfaces.

“Getting the Point: The Role of Gesture in Managing Intersubjectivity in a Design Activity” by Jared Donovan, Trine Heinemann, Ben Matthews, and Jacob Buur describes the complexity of pointing as it is employed in a design workshop. Using the method of interaction analysis, the authors argue that pointing is not merely employed to index, locate, or fix a reference to an object, but rather constitutes a practice for reestablishing intersubjectivity and solving interactional trouble, such as misunderstandings or disagreements, by virtue of enlisting something as part of the participants’ shared experience. The authors discuss implications for how such practices might be supported with computer mediation, arguing for a “bricolage” approach to systems development that emphasizes the provision of resources for users to collaboratively negotiate the accomplishment of intersubjectivity rather than systems that only support pointing as a specific gestural action.

In “Using Speech to Identify Gesture Pen Strokes in Collaborative, Multimodal Device Descriptions,” James Herold and Thomas F. Stahovich argue that a challenge in building collaborative design tools that use speech and sketch input is distinguishing gesture pen strokes from the strokes that represent device structure, that is, object strokes. Starting from previous work that had shown the critical importance of speech–sketch alignment for a gesture/object classifier to establish this distinction, Herold and Stahovich develop a new alignment technique in their present study. The authors report experiments showing that speech features are the most important for distinguishing gestures, thus confirming the critical importance of the speech–sketch alignment. Their new technique automates the alignment and employs a two-step process: speech segmentation followed by alignment of the speech segments with the pen strokes. Herold and Stahovich describe their two-step technique and present data showing that it improves the accuracy of gesture classification over an existing automated process, and that the automated technique performs nearly as well as the benchmark manual speech–sketch alignment.
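
To make the shape of such a two-step pipeline concrete, here is a minimal sketch, assuming word-level speech timestamps and timed pen strokes; it is not the authors' implementation. The `Word` and `Stroke` types, the pause threshold, and the temporal-overlap rule are all illustrative assumptions.

```python
# Minimal sketch of a two-step speech-sketch alignment pipeline
# (illustrative only; not Herold & Stahovich's actual algorithm):
# step 1 segments a timed word stream at pauses; step 2 pairs each
# speech segment with the pen strokes whose time spans overlap it.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds
    end: float

@dataclass
class Stroke:
    stroke_id: int
    start: float
    end: float

def segment_speech(words, pause_threshold=0.5):
    """Step 1: split the word stream wherever the inter-word pause
    exceeds pause_threshold (an assumed heuristic value)."""
    segments, current = [], [words[0]]
    for prev, word in zip(words, words[1:]):
        if word.start - prev.end > pause_threshold:
            segments.append(current)
            current = []
        current.append(word)
    segments.append(current)
    return segments

def align(segments, strokes):
    """Step 2: pair each speech segment with every stroke whose time
    span overlaps the segment's span."""
    pairs = []
    for seg in segments:
        seg_start, seg_end = seg[0].start, seg[-1].end
        overlapping = [s.stroke_id for s in strokes
                       if s.start < seg_end and s.end > seg_start]
        pairs.append((" ".join(w.text for w in seg), overlapping))
    return pairs

# Invented toy data: two utterances, each accompanied by one stroke.
words = [Word("this", 0.0, 0.2), Word("gear", 0.3, 0.6),
         Word("turns", 1.5, 1.9), Word("here", 2.0, 2.3)]
strokes = [Stroke(1, 0.1, 0.7), Stroke(2, 1.6, 2.4)]
print(align(segment_speech(words), strokes))
# [('this gear', [1]), ('turns here', [2])]
```

Once segments and strokes are paired in this way, the speech features of each segment can feed a gesture/object classifier, which is where, according to the authors' experiments, most of the discriminative power lies.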

The starting point of Elise van den Hoven and Ali Mazalek, in “Grasping Gestures: Gesturing With Physical Artifacts,” is that in HCI, gestures are increasingly used to facilitate communication with digital applications, because their expressive nature enables less constraining and more intuitive digital interactions than conventional user interfaces. The authors call attention to the fact that interaction devices often make use of hand-held objects, or graspable interaction devices. In most cases, such physical objects are used for sensing or input, as with the mouse. In contrast, tangible interaction devices often use physical objects as embodiments of digital information. The physical objects in tangible user interfaces thus serve two purposes: as physical embodiments of digital objects and as controls for modifying the associated digital information. Building on this, the authors emphasize the potential of gesture interaction to exploit the physical properties of hand-held objects to enhance or change the functionality of the gestures made. This combination of gestural interaction and tangible interaction—that is, gesturing while holding physical artifacts—underlies the authors’ concept of “tangible gesture interaction.”

3. GESTURE STUDIES

Gestures have been studied since antiquity. Kendon (2004), who gives a detailed historical presentation of the work in this domain, was himself among the first modern authors to do research on gesture and other “nonverbal” communication, such as gaze and posture, beginning in the 1960s (for a presentation of Kendon's work, see Müller, 2007; for other representatives of these early gesture studies, see Efron, 1941/1972; Ekman & Friesen, 1969). As a research community, however, “gesture studies” has existed for only about 20 to 30 years (Kendon, 2004). It is an interdisciplinary field: gesture researchers come from many disciplines, especially anthropology, linguistics (in particular, pragmatics), psychology, sociology, semiotics, computer science, neuroscience, communication sciences, (art) history, performance studies, music, theatre, and dance.

“Founded in 2002, the International Society for Gesture Studies is the only international scholarly association devoted to the study of human gesture,” as one can read on the International Society for Gesture Studies Website (http://www.gesturestudies.com/). The Society organizes conferences and supports the international journal Gesture (http://www.benjamins.com/cgi-bin/t_seriesview.cgi?series=GEST).

One often reads that gesture studies are concerned with how people use their hands and other parts of their body for communicative purposes. This communicative function of gesture seems obvious from our everyday, prescientific experience. What may seem more surprising is that people also gesture when their interlocutor cannot see them, for example, during a telephone conversation (Bavelas et al., 2008), or when they themselves are blind (Iverson & Goldin-Meadow, 1997). Blind people gesture even when they have never learned a sign language (Goldin-Meadow, 1999). In addition, people may gesture when they are completely alone, for example, in order to solve a problem. These findings build on and are consistent with McNeill's (1992) views on the relationship between gesture and thought. This research also implies that the role of gesture in designing extends beyond communicating with another designer, and therefore has consequences for the way we design interfaces to digital design models.

Here, we leave aside gesture studies from previous centuries, in which, for example, several authors analyzed the rhetorical use of gesture (see Kendon, 2004). The first contemporary studies in this domain were concerned with gesture used in face-to-face conversation, often in narrative situations (McNeill, 1992; Bavelas et al., 2000) and in learning (Roth, 2001; Goldin-Meadow, 2009). Universal and cultural aspects of gesture are also a recurrent topic, as are the relationship of gesture to thought and language and, related to this, the role of gesture in human evolution and child development and the evolution of sign languages from gesture. Studies of sign language, the way it is used, and its relations with other gestural communication constitute an important subdomain in gesture studies (Liddell, 2003). Similarly, the use of gesture in HCI is a contemporary issue in the study of gestures (Herold & Stahovich's and Van den Hoven & Mazalek's papers in this issue reflect this). Often this research aims to make human–system or system-mediated human–human communication more “multimodal,” that is, not limited to the verbal and/or graphical modalities. Gesture in a professional context (e.g., in designing) unmediated by computer systems, which is still a frequent situation, has come into focus more recently.

We also leave aside the question of what a “gesture” actually is. In their review of gesture in HCI, Van den Hoven and Mazalek discuss this topic. Their paper seems an appropriate place for such a discussion, given that in HCI the term gesture is often used for behavior that other researchers in gesture studies would rather qualify as a “manipulative” or “practical action,” or as a “command” that is the object of a particular type of “communication,” given the way in which the computer “understands” this “gesture.”

In his review on the “recognition and comprehension of hand gestures,” Sowa (2008) remarks that, in HCI, “the term gesture input refers to a range of different interaction styles, many of which have little or nothing in common with coverbal gestures observed in human communication” (p. 39). In Sowa's opinion, his review of the computational approaches shows that there is still “a huge gap between gesture recognition and comprehension technology in HCI and the potential of coverbal gesture as a carrier of meaning in human communication. The majority of systems still focus on gesture recognition as a pattern classification problem” (p. 52).

In face-to-face interaction, gesture may play many roles. Some examples of gesture use in a face-to-face social interactional situation are gestures used in turn taking for interaction management or in modeling one's interlocutor's “personality.” Dominance, for example, a supposed “personality trait,” is expressed by kinesic cues. “Dominant people are often more active, and gestures associated with speech are correlated with dominance,” as stated by Gatica-Perez (2009, p. 1781) in his review of automatic nonverbal analysis of social interaction in small groups.

3.1. Studies of gestures in design

Design generally involves teams of designers who collaborate on a project (Olson & Olson, 2000; Stempfle & Badke-Schaub, 2002; Détienne, 2006). Although individual participants in a design team may make independent contributions to the project, collaborative design assumes that contributions are based on the interaction among different participants (Visser, 2006). This interaction occurs through different modalities (i.e., different semiotic systems): verbal, graphical, gestural, and others (gaze, posture, prosody). Research in the domain of design, however, has given much less attention to gesture (and to other nonverbal modalities, except the graphical) than to the other expression and interaction modalities that designers use. Until now, verbal interaction has received the most attention by far (Cross et al., 1996; Gero & Tang, 2001). A substantial amount of research has concerned graphical interaction (Purcell & Gero, 1998; Gross & Do, 2004), but the role of gesture in designing has been the object of few studies.

The scarcity of research on gesture in designing should not be interpreted as evidence that gesture plays a minor role in collaborative design interaction, although some people, even design researchers, assume that it does. Gesture continues to be seen as playing a mostly supplementary role compared with verbal (and graphical) interaction: gestures “illustrate” representations constructed verbally; they are not considered to play an equivalent, and thus essential, role in interaction. Nevertheless, with respect to design interaction between human designers in face-to-face settings, empirical studies have shown that gesture is used frequently in design meetings and serves varied functions.

In an analysis of the empirical studies on the use of gestures in face-to-face collaborative design situations (Tang, 1991; Bekker et al., 1995; Murphy, 2005), Visser (2009) highlighted two functions.

  1. Gesture offers specific possibilities to render spatial [especially three-dimensional (3-D)] and motion-related qualities of design objects, and to embody action sequences through their mimicked simulation.

  2. Gesture plays an important organizational role.

Concerning this organizational function, Visser (2010a) distinguished two types of such gestures.

  1. “Interactive” gestures (Bavelas et al., 1992) are used to manage the interaction between the different participants in the design meeting.

  2. Gestures can also play a role in organizing the functional design activities of generation, transformation, and evaluation of design proposals (Visser, 2006).

The use of gestures in the construction of representations of design objects is fundamental. We have already underlined the role of gesture in representing both spatial, especially 3-D, and nonstatic qualities of design objects, such as their motion or the action sequences in which they are involved (see also Bischel, Stahovich, Peterson, Davis, & Adler, 2009). Such qualities are central in domains of design related to physical objects, such as architectural design, and mechanical, industrial, and other forms of engineering design. They are difficult, if not impossible, to render verbally or to represent in drawings in the plane. That is one of the reasons why computerized design environments can be so useful: they offer the possibility to represent design objects in 3-D and to work with these representations. Without such systems, however (but also when using them; see below on the benefits of tangible user interfaces identified by Kim & Maher, 2008; see also Van den Hoven & Mazalek, this issue), speech and even graphical 2-D representations are poor instruments for representing these qualities of design objects. Based on her view of designing as the construction of representations (Visser, 2006), Visser (2010a) distinguished two families of representational gestures: gestures that designate design entities and gestures that specify them (representational gestures proper).

A particular type of representational gesture serves to express feelings, emotions, and other qualities of design objects that are less factual than, for example, their size or location. Visser (2010b) analyzed how architectural designers used metaphoric gestures to represent the atmosphere of the building they were designing (e.g., its intimate or bold character). The use of gestures and other nonverbal modalities, such as gaze and posture, to convey feelings and emotions is a timely topic in the research on embodied conversational agents (Cassell, 2001; Ruttkay & Pelachaud, 2004).

Bischel et al. (2009) conducted an experimental study in which designers in a remote communication situation were asked to describe a mechanical device to another designer. This study underlies Herold and Stahovich's paper in this issue. In order to explain the devices to their colleague, the designers observed by Bischel et al. (2009) made gestures. The authors identified “six common types of gestures . . . used either to illustrate behavior or to provide spatial context for a part of the description.” They distinguished two functional categories: “‘selection gestures,’ used to relate a spoken description to a spatial location in the sketch, and ‘motion gestures,’ used to give spatial context to how things move or interact” (p. 1402). Selection gestures were the most frequent type; these are the well-known “deictic” gestures (see, e.g., McNeill, 1992). Herold and Stahovich adopt these two categories, distinguishing gesture pen strokes made “to indicate motion” (Bischel et al.'s motion gestures, e.g., drawing an arrow to express the movement of an object or a part of it) from gestures produced “to single out a component being discussed” (Bischel et al.'s selection gestures, e.g., drawing a circle around an object or a part of it). These are, however, two types of “gesture strokes,” which Herold and Stahovich, through their “gesture/object classifier,” wish to distinguish from “nongesture strokes” or “object strokes.”
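
As a schematic illustration only, the following sketch combines geometric features of a pen stroke with features of its temporally aligned speech to separate gesture strokes from object strokes, echoing the finding that speech features carry most of the discriminative power. The keyword lists, thresholds, and rules are invented for illustration; the published classifiers are learned from data and far richer.

```python
# Schematic gesture/object stroke classification (illustrative only;
# not the classifier of Bischel et al. or Herold & Stahovich).
import math

DEICTIC_WORDS = {"this", "that", "here", "there"}       # cue selection
MOTION_WORDS = {"turns", "moves", "rotates", "slides"}  # cue motion

def classify_stroke(points, aligned_speech):
    """points: [(x, y), ...] of one pen stroke; aligned_speech: the
    words spoken while the stroke was drawn (from the alignment step)."""
    words = set(aligned_speech.lower().split())
    # Geometric features: closed shapes (circlings) have a small
    # endpoint gap relative to their path length.
    length = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    gap = math.dist(points[0], points[-1])
    closed = length > 0 and gap / length < 0.2
    # Speech features dominate, mirroring the reported finding.
    if words & DEICTIC_WORDS and closed:
        return "gesture: selection"  # e.g., circling a component
    if words & MOTION_WORDS:
        return "gesture: motion"     # e.g., an arrow showing movement
    return "object"                  # part of the device drawing itself

circle = [(5 * math.cos(t / 10), 5 * math.sin(t / 10)) for t in range(63)]
print(classify_stroke(circle, "this gear here"))  # gesture: selection
```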

3.2. Gesture in computer-supported design systems

Computer-supported design environments highlight the importance of studies of gesture because they restrict the ways in which designers can communicate their design ideas as input to a digital model, and they restrict the ability to communicate gesture when the computer mediates a remote design session. Early examples of CSCW include multimodal systems that used cameras and microphones to transmit and, in some cases, superimpose gesture for remote participants. Donovan, Heinemann, Matthews, and Buur (this issue) provide a good overview of these early systems, showing that many of them are still highly relevant for today's needs for computer-mediated design collaboration. In the interaction modalities provided in HCI, whether between human designers (computer-mediated interaction) or between humans and the system, the use of “gesture” is primarily associated with pen- or stylus-based input or with data gloves that track movement and translate it into input. Van den Hoven and Mazalek (this issue) argue that gesture is an important consideration in designing and evaluating HCI for designers and that pointing is only one of many gestures to be considered.

Moving beyond the pen-based interface, the use of tabletop systems as a platform for design meetings has introduced the use of graspable objects as input devices (e.g., Ullmer & Ishii, 1997; Maher et al., 2004). These tabletop systems are primarily used for collaboration, where a design team works around a single tabletop. There have been some studies of remote collaboration using tabletop systems, where gesture is recorded and displayed at the remote sites (Obeysekare et al., 1996; Schmalstieg et al., 1999). Although these novel HCI environments involve the use of hand and arm movements, these movements have been studied very little as gestures.

Kim and Maher (2008), for example, compared the use of a traditional keyboard and mouse interface to tangible interaction on a desktop when designers are collaborating on a design configuration task. They specifically observed differences in the frequency and occurrence of types of gestures in the two types of interface, with more gestures occurring in the tangible interface. They also found that designers, when using the tangible interface, had more segments coded as cognitive behaviors associated with generating creative designs. The implication of studies of this kind is that interactive devices can be designed specifically to encourage gesture rather than to restrict its use in computer environments for collaborative design.

3.3. This Special Issue

This section discusses four dimensions on which the three papers in this Special Issue on gesture in design differ and are related.

3.3.1. Design situation: Face-to-face versus remote collaboration

Following from the focus on gesture as communication, many design researchers study gesture in a collaborative design scenario. In this issue, two of the three papers report on studies of collaborating designers. Herold and Stahovich study remote collaboration, although their results may have implications for face-to-face collaboration. Donovan, Heinemann, Matthews, and Buur study face-to-face design collaboration and report on the implications for remote collaboration. Van den Hoven and Mazalek do not report on the study of designers; however, their survey and analysis of gesture and tangible interaction has implications for the design of computer-mediated remote collaboration.

Herold and Stahovich develop and evaluate a method for the alignment of speech and gesture using data collected while designers were communicating remotely using a tablet PC with a pen interface and drawing program. In this study, the designers communicated using a microphone and earphones while located in different rooms. Therefore, the authors’ data on multimodal interaction comes from a remote collaboration environment in which the designers can communicate only by voice and pen strokes. Herold and Stahovich's technique for identifying discrete gestures in design communication is not specifically based on remote collaboration, but it is tested in that environment. Their work is clearly related to analyzing multimodal data in remote collaborative settings, but arguably may also be used to automatically analyze multimodal data of designers using tablet PCs in a face-to-face setting.

Donovan, Heinemann, Matthews, and Buur study pointing while observing designers collaborating in a face-to-face scenario. They develop a technique for tracing the pointing action on a video of the design session to identify the roles of pointing. The study identifies several roles of pointing that are not associated with identifying an object. The purpose of the study is to highlight the numerous roles that gesture, and specifically pointing, can play in design and how pointing is used to establish understanding and a shared representation. They include a survey of computer-mediated environments for remote collaboration and argue for a “bricolage” approach, that is, the end users bring together the elements of their environment to support remote collaboration. The end users, for example, “[try] to identify the recurring elements of systems (e.g., projector-camera pairings, display surfaces, drawing implements) and consider how these might be incorporated into new kinds of systems that [they] could bring together in particular ways to suit their needs.”

3.3.2. Methodology

Methodology is a critical aspect of understanding the study of the role of gesture in designing. Psychological studies of gesture provide a precedent for this area, but because of the large number of confounding variables in the complex scenario of collaborative designing, the methodologies relevant to studying designing draw from various computational, social, and behavioral science traditions. The three papers in this Special Issue are very different methodologically: a real-world versus an experimental setting; and qualitative analysis of observations of designers, versus the development and quantitative evaluation of a technique based on gesture features, versus analysis of the literature.

Donovan, Heinemann, Matthews, and Buur conducted a case study in a professional working context: they identify and discuss the pointing gestures made in one particular face-to-face design workshop. The interaction analysis method they use, inspired by conversation analysis and ethnomethodology, is a social science method. The authors focused on the gestures made by the six participants in the second part of the workshop, which lasted just over 2 h. As part of their study, they develop an approach to characterizing the gestures by tracing over the video image of the design session. These traces provide a way of seeing and comparing the different gestures in a still image. Their methodology is effective in providing an exploratory account of the variety of gestures situated in a very specific context and place.
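
The authors traced over the video by hand; as a rough programmatic analogue, and only under the assumption that hand positions have already been annotated frame by frame, a trace can be overlaid on a representative still as follows. The file names and coordinates are hypothetical.

```python
# Sketch of overlaying a traced pointing path on a video still, in the
# spirit of Donovan et al.'s hand-traced images (illustrative analogue
# only; the authors traced manually).
import cv2  # pip install opencv-python
import numpy as np

def overlay_trace(video_path, hand_positions, out_path, frame_index=0):
    """hand_positions: [(x, y), ...] annotated pixel positions of the
    pointing hand, in temporal order."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise IOError(f"could not read frame {frame_index} of {video_path}")
    pts = np.array(hand_positions, dtype=np.int32).reshape(-1, 1, 2)
    # One polyline per gesture makes its shape visible, and comparable
    # with other traces, in a single still image.
    cv2.polylines(frame, [pts], isClosed=False, color=(0, 0, 255), thickness=3)
    cv2.imwrite(out_path, frame)

# Hypothetical usage with invented coordinates:
# overlay_trace("design_workshop.mp4",
#               [(120, 340), (180, 310), (260, 295)], "trace.png")
```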

Herold and Stahovich's research commences with an experimental approach, based on the experimental study conducted by Bischel et al. (2009). In Herold and Stahovich's study, the designers are placed in remote locations with specific computer and communication devices, and given a fixed period of time to work together. The data collected during this period are the basis for Herold and Stahovich's contribution: an automated approach to segmenting and identifying gestures using a speech–gesture alignment technique. The methodology is drawn primarily from computational science, in which the computational approach is evaluated against other computational methods and a manual method.

Van den Hoven and Mazalek provide a critical survey of HCI technologies with a focus on the design opportunities for new technologies that lie at the intersection of gesture and tangible interaction. They start with an overview of the study of gesture and then consider gestures in HCI in three areas: in 3-D space, such as with gloves; on 2-D surfaces, such as with pens and fingers; and with physical objects in hand, such as batons, game controllers, toys, and custom tangibles. The authors conclude their paper with a discussion of design guidelines for tangible devices for designers based on gesture interaction, emphasizing the possibilities it offers through the use of physical devices that facilitate, support, enhance, or track the gestures people make for digital interaction purposes.

3.3.3. Domains of design

The two papers in this Special Issue that report on the study of designers studied mechanical engineering designers. The third paper, a survey paper, is concerned with the design of HCI technology.

The data collected and analyzed by Herold and Stahovich come from a mechanical engineering design scenario. They claim that their method for aligning speech and gesture is relevant for any domain that involves drawing a sketch or a diagram and explaining its elements. Some examples of other domains that the authors identify as being similar are giving driving directions, explaining the solution to a problem in a physics lecture, and explaining a sports play.

Donovan, Heinemann, Matthews, and Buur also observed mechanical engineering design, specifically, “a collaborative project focusing on designing a new type of sustainable energy generator that can replace the noisy, polluting, and fault-prone diesel engines that are currently used to power independent camps and shelters for landmine clearing operations in Angola.” Even though the authors do not discuss this question, we assume that their observations concerning the use of pointing are not specific to mechanical engineering design.

Van den Hoven and Mazalek are concerned with the design of HCI technology. The design opportunities identified by the authors concern the possibilities that tangible gesture interaction offers through the use of physical devices for facilitating, supporting, enhancing, or tracking gestures people make for digital interaction purposes. The authors do not allude to specific domains of design that might take advantage of environments in which such digital interaction could be used.

3.3.4. Type of gestures studied

Countless classifications of gestures have been made in the classical gesture-studies literature (McNeill, 2000; Kendon, 2004). Although one of the papers in this issue, Van den Hoven and Mazalek, presents a review of some of the distinctions made by several authors, the other papers focus on specific types of gestures. Pointing is probably the gesture that has been most studied and implemented in HCI systems. It is thus not surprising that all three papers in this Special Issue are concerned with pointing in one way or another.

Donovan, Heinemann, Matthews, and Buur analyze pointing gestures. In their analysis, they focus on the use of these gestures that go beyond identifying a specific object and characterize pointing as “a practice for re-establishing intersubjectivity and solving interactional trouble such as misunderstandings or disagreements.” The authors analyze how pointing may “enlist” something “as part of the [design] participants’ shared experience.” The authors describe in detail four instances of pointing.

Van den Hoven and Mazalek claim that pointing gestures are also the gestures most frequently made in the great majority of today's HCI systems, probably because they are the gestures most easily interpreted with current HCI technology. As Van den Hoven and Mazalek note, other authors (and, we would add, laypeople) might rather qualify many of these “gestures” as “actions” or “practical actions,” for example, manipulative or performative spatial actions. In addition to presenting gestures used in HCI, Van den Hoven and Mazalek emphasize the possibility of designing for gestures made while holding physical artifacts, that is, for the intersection of gesture and tangible interaction.

Herold and Stahovich examine pen strokes performed in collaborative design situations in which designers are allowed to hold a multimodal dialog, in this case talking while sketching and “gesturing” through pen strokes. The authors distinguish two types of “gesture strokes” (besides “object strokes”; see above). One of these types consists of gestures resolving deictic references. The authors note that these gestures can take many forms, such as tapping, circling, highlighting, and tracing. Interestingly enough, the authors do not speak of “pointing” (except in their discussion of “Related Work”).

Gesture pen strokes are useful for the designers in their interaction, but, for the most part, only temporarily. That is why it is important to distinguish them from other pen strokes. Keeping a trace of all the gesture strokes obscures the sketch on which they have been made: their accumulation makes essential features of the sketch, those representing the structure of the device under design, difficult to discern. Thus, in Herold and Stahovich's paper, the gestures made over design sketches are identified in order to get rid of them! In doing so, Herold and Stahovich aim to contribute to the construction of more useful collaborative design tools that allow speech and sketch input.
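
A small sketch of this downstream use, assuming strokes have already been labeled by a classifier like the one sketched earlier: transient gesture strokes are simply dropped (or kept only briefly) so that they do not accumulate and obscure the device sketch. The record type and the time-to-live policy are illustrative assumptions, not the papers' implementation.

```python
# Illustrative filtering of classified strokes (not the papers' code):
# keep object strokes; drop or briefly retain gesture strokes.
from dataclasses import dataclass

@dataclass
class LabeledStroke:
    points: list      # [(x, y), ...]
    end_time: float   # seconds at which the stroke was completed
    label: str        # "object" or "gesture", from a classifier

def clean_sketch(strokes):
    """Keep only the strokes that are part of the drawn device."""
    return [s for s in strokes if s.label == "object"]

def fade_gestures(strokes, now, ttl=5.0):
    """Assumed alternative policy: show each gesture stroke briefly
    after it is drawn, then remove it from the rendered sketch."""
    return [s for s in strokes
            if s.label == "object" or now - s.end_time < ttl]
```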

4. CONCLUSION

This Special Issue provides a starting point for further research on the role of gesture in designing with a focus on the use of computational systems. We have seen that computational systems can play a role in facilitating and automating the analysis of data that includes gesture, speech, and video. We have also seen that the design of new technologies for interacting with design information can take into consideration the role of gesture in designing. We anticipate that increasing interest in the role of gesture in design thinking and design collaboration will have a major impact on how we support and augment designers using computational systems.

Willemien Visser is a cognitive psychologist and Senior Researcher at CNRS-Télécom ParisTech and INRIA, the French National Institute for Research in Computer Science and Control. Her current research interests concern multimodal interaction in collaborative activities, especially the articulation between the different semiotic systems, with respect to their role and contribution. At present, her focus is on gesturing in design meetings. Willemien is a leading researcher in cognitive design research. In her 2006 book The Cognitive Artifacts of Designing, she presents a critical review of the research performed in the last 30 years in this domain as well as her own view of design as a construction of representations.

Mary Lou Maher is a Professor of design computing at the University of Sydney. Mary Lou received a BEng in civil engineering at Columbia University in 1979 and an MS and a PhD at Carnegie Mellon University, completing the PhD in 1984. She was an Assistant Professor at Carnegie Mellon University when she moved to the University of Sydney in 1990. Mary Lou became a Program Director of the Information and Intelligent Systems Division at the National Science Foundation in the United States in 2006. She has established a research emphasis on CreativeIT and has been part of the Human Centered Computing, Cyber-Enabled Discovery and Innovation, and Social–Computational programs. Dr. Maher's current research topics include computational creativity, designing and designing in 3-D virtual worlds, collaborative and collective design, tangible interfaces to design models, and motivated reinforcement learning.

REFERENCES

Bavelas, J.B., Chovil, N., Lawrie, D.A., & Wade, A. (1992). Interactive gestures. Discourse Processes 15, 469–489.
Bavelas, J.B., Coates, L., & Johnson, T. (2000). Listeners as co-narrators. Journal of Personality and Social Psychology 79, 941–952.
Bavelas, J.B., Gerwing, J., Sutton, C., & Prevost, D. (2008). Gesturing on the telephone: independent effects of dialogue and visibility. Journal of Memory and Language 58, 495–520.
Bekker, M.M., Olson, J.S., & Olson, G.M. (1995). Analysis of gestures in face-to-face design teams provides guidance for how to use groupware in design. Proc. DIS95, Conf. Designing Interactive Systems: Processes, Practices, Methods, & Techniques, pp. 157–166.
Bischel, D.T., Stahovich, T., Peterson, E., Davis, R., & Adler, A. (2009). Combining speech and sketch to interpret unconstrained descriptions of mechanical devices. Proc. 21st Int. Joint Conf. Artificial Intelligence (IJCAI-09), pp. 1401–1406.
Calbris, G. (1990). The Semiotics of French Gestures. Bloomington, IN: Indiana University Press.
Cassell, J. (2001). Embodied conversational agents: representation and intelligence in user interface. AI Magazine 22(3), 67–83.
Cassell, J., & Stone, M. (1999). Living hand to mouth: psychological theories about speech and gesture in interactive dialogue systems. Proc. AAAI 1999 Fall Symp. Psychological Models of Communication in Collaborative Systems, pp. 34–42.
Cross, N., Christiaans, H., & Dorst, K. (Eds.). (1996). Analysing Design Activity. Chichester: Wiley.
Détienne, F. (2006). Collaborative design: managing task interdependencies and multiple perspectives. Interacting with Computers 18(1), 1–20.
Efron, D. (1941/1972). Gesture, Race and Culture. The Hague: Mouton & Co. (Original work published 1941)
Ekman, P., & Friesen, W. (1969). The repertoire of non-verbal behavior: categories, origins, usage and coding. Semiotica 1(1), 49–98.
Gatica-Perez, D. (2009). Automatic nonverbal analysis of social interaction in small groups: a review. Image and Vision Computing 27, 1775–1787.
Gero, J.S., & Tang, H.-H. (2001). The differences between retrospective and concurrent protocols in revealing the process-oriented aspects of the design process. Design Studies 22, 283–295.
Goldin-Meadow, S. (1999). The role of gesture in communication and thinking. Trends in Cognitive Sciences 3(11), 419–429.
Goldin-Meadow, S. (2003). The Resilience of Language: What Gesture Creation in Deaf Children Can Tell Us About How All Children Learn Language. New York: Taylor & Francis.
Goldin-Meadow, S. (2009). How gesture promotes learning throughout childhood. Child Development Perspectives 3, 106–111.
Gross, M.D., & Do, E.Y.-L. (2004). The three Rs of drawing and design computation. A drawing centered view of design process. Proc. Design Computing and Cognition '04, pp. 613–632. Dordrecht: Kluwer.
Iverson, J., & Goldin-Meadow, S. (1997). What's communication got to do with it? Gesture in children blind from birth. Developmental Psychology 33, 453–467.
Kendon, A. (2004). Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
Kim, M.J., & Maher, M.L. (2008). The impact of tangible user interfaces on spatial cognition during collaborative design. Design Studies 29(3), 222–253.
Kraut, R.E., Fussell, S.R., & Siegel, J. (2003). Visual information as a conversational resource in collaborative physical tasks. Human–Computer Interaction 18(1), 13–49.
Liddell, S.K. (2003). Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Maher, M.L., Daruwala, Y., & Chen, E. (2004). A design workbench with tangible interfaces for 3D design. Interaction Symp., pp. 491–522. Sydney: UTS Printing Services.
McNeill, D. (1992). Hand and Mind. What Gestures Reveal About Thought. Chicago: University of Chicago Press.
McNeill, D. (Ed.). (2000). Language and Gesture. Cambridge: Cambridge University Press.
Mitra, S., & Acharya, T. (2007). Gesture recognition: a survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 37(3), 311–324.
Müller, C. (2007). A semiotic profile: Adam Kendon. Semiotix. A Global Information Bulletin 2007(9). Accessed at http://www.semioticon.com/semiotix/semiotix9/sem-9-03.html on February 21, 2011.
Murphy, K.M. (2005). Collaborative imagining: the interactive use of gestures, talk, and graphic representation in architectural practice. Semiotica 156(1/4), 113–145.
Obeysekare, U., Williams, C., Durbin, J., Rosenblum, L., Rosenberg, R., Grinstein, F., Ramamurti, R., Landsberg, A., & Sandberg, W. (1996). Virtual workbench. A non-immersive virtual environment for visualizing and interacting with 3D objects for scientific visualization. Proc. 7th Conf. Visualization '96, pp. 345–349. San Francisco, CA: IEEE Computer Society Press.
Olson, G.M., & Olson, J.S. (2000). Distance matters. Human–Computer Interaction 15, 139–178.
Pavlovic, V.I., Sharma, R., & Huang, T.S. (1997). Visual interpretation of hand gestures for human–computer interaction: a review. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(7), 677–695.
Purcell, A.T., & Gero, J.S. (1998). Drawings and the design process: a review of protocol studies in design and other disciplines and related research in cognitive psychology. Design Studies 19(4), 389–430.
Roth, W.-M. (2001). Gestures: their role in teaching and learning. Review of Educational Research 71(3), 365–392.
Ruttkay, Z., & Pelachaud, C. (Eds.). (2004). From Brows to Trust: Evaluating Embodied Conversational Agents, Vol. 7. Heidelberg: Springer.
Schmalstieg, D., Encarnação, M., & Szalavári, Z. (1999). Using transparent props for interaction with the virtual table. Proc. 1999 Symp. Interactive 3D Graphics, pp. 147–153. Atlanta, GA: ACM Press.
Sowa, T. (2008). The recognition and comprehension of hand gestures. A review and research agenda. In Modeling Communication (Wachsmuth, I., & Knoblich, G., Eds.), LNAI, Vol. 4930, pp. 38–56. Berlin: Springer.
Stempfle, J., & Badke-Schaub, P. (2002). Thinking in design teams. An analysis of team communication. Design Studies 23(5), 473–496.
Tang, J.C. (1991). Findings from observational studies of collaborative work. International Journal of Man–Machine Studies 34, 143–160.
Ullmer, B., & Ishii, H. (1997). The metaDESK: models and prototypes for tangible user interfaces. Proc. User Interface Software and Technology (UIST'97), pp. 14–21. New York: ACM Press.
Visser, W. (2006). The Cognitive Artifacts of Designing. Mahwah, NJ: Erlbaum.
Visser, W. (2009). The function of gesture in an architectural design meeting. In About: Designing. Analysing Design Meetings (McDonnell, J., Lloyd, P., Reid, F., Luck, R., & Cross, N., Eds.), pp. 269–284. London: Taylor & Francis.
Visser, W. (2010a). Function and form of gestures in a collaborative design meeting. In Gesture in Embodied Communication and Human–Computer Interaction. 8th Int. Gesture Workshop, GW 2009 (Kopp, S., & Wachsmuth, I., Eds.), LNCS, Vol. 5934, pp. 61–72. Heidelberg: Springer.
Visser, W. (2010b). Use of metaphoric gestures in an architectural design meeting: expressing the atmosphere of the building. Proc. 4th Conf. Int. Society for Gesture Studies (ISGS), p. 284.