About this book

The four-volume set LNCS 10513-10516 constitutes the proceedings of the 16th IFIP TC 13 International Conference on Human-Computer Interaction, INTERACT 2017, held in Mumbai, India, in September 2017.
The total of 68 papers presented in these books was carefully reviewed and selected from 221 submissions. The contributions are organized in topical sections named: Part I: adaptive design and mobile applications; aging and disabilities; assistive technology for blind users; audience engagement; co-design studies; cultural differences and communication technology; design rationale and camera-control. Part II: digital inclusion; games; human perception, cognition and behavior; information on demand, on the move, and gesture interaction; interaction at the workplace; interaction with children. Part III: mediated communication in health; methods and tools for user interface evaluation; multi-touch interaction; new interaction techniques; personalization and visualization; persuasive technology and rehabilitation; and pointing and target selection.

Table of Contents


Digital Inclusion


Contextualizing ICT Based Vocational Education for Rural Communities: Addressing Ethnographic Issues and Assessing Design Principles

Recently, combining Information and Communication Technologies (ICT) with Technical Vocational Education and Training (TVET) for a low-literate population has been gaining interest, as this can lead to more effective socio-economic development. This strategy can more easily provide employment and bring community-wide change because of the improved quality and relevance of education material. Although TVET providers that use some ICT are present throughout India, challenges remain for prospective students, including illiteracy, language, resource limits and gender boundaries. Providing TVET that is accessible to low-literate people in rural village communities requires a shift in the design of ICT so that it is universally usable, even for communities, such as tribal India, that have a largely oral culture. In this article, we detail the design and development of an ICT-driven TVET model for a mostly illiterate audience in rural India and measure its efficacy. Through our ethnographic and usability study with 60 low-literate oral and novice village users, we present the issues faced and the solutions we incorporated into our new model. The results show that users performed better in the vocational course units with the solutions incorporated.
K. P. Sachith, Aiswarya Gopal, Alexander Muir, Rao R. Bhavani

Enhancing Access to eLearning for People with Intellectual Disability: Integrating Usability with Learning

eLearning can provide people with Intellectual Disability (ID) extended learning opportunities in using information technology, thus potentially increasing digital inclusion. In order to make this a motivating experience, designs of eLearning are required to be compatible with their cognitive abilities. It is as yet unclear how to design an engaging eLearning environment that integrates usability with learning. This paper aims to explore the applicability of learning theories along with usability guidelines in designing an eLearning environment for people with ID. We discuss psychological theories on teaching and learning, and literature on challenges and opportunities of eLearning for people with ID. Based on that understanding, we present guidelines that integrate different learning theories with eLearning, for learner centered interaction design of eLearning modules for people with ID. We present a case study of applying these inclusive design considerations to an eLearning module about health information access.
Theja Kuruppu Arachchi, Laurianne Sitbon, Jinglan Zhang

Identifying Support Opportunities for Foreign Students: Disentangling Language and Non-language Problems Among a Unique Population

This study investigates how foreign students address language-related and other problems as a means of identifying opportunities to support them with language and social technologies. We identify support opportunities by distinguishing between different types of problems – e.g. whether they are language-related and whether they involve essential activities in their lives or at school – and the extent to which support already exists. Our unique sample of 15 foreign graduate students who live in Japan but study in English helped us disentangle problems relating to language skills versus those relating to other challenges. We examine these issues using a multi-method approach where students used a mobile app to record experiences and interactions over five weeks, and then discussed this data during an in-depth interview. We use our results to identify specific support opportunities that can be addressed through the development of social and language technologies.
Jack Jamieson, Naomi Yamashita, Jeffrey Boase


Status Quo and Lessons Learned from a Persona-Based Presentation Metaphor of WCAG

In this paper, we examine how personas need to be designed to convey the information of accessibility resources, such as the Web Content Accessibility Guidelines (WCAG), in a user-centered way while preserving their vivid nature. We discuss the benefits and issues, e.g., that using only impairments as a tie is not sufficient and comes with side effects. We conducted a study to assess the status quo of linking WCAG to personas by measuring the user experience of a system highlighting this connection, compared to the WCAG Quick Reference. Furthermore, this work highlights some issues when deploying those resources in lectures for teaching accessibility, pinpoints some solutions to overcome these issues, and reports on our lessons learned on the usage of this user-centered presentation metaphor of WCAG.
Alexander Henka, Gottfried Zimmermann

Women in Crisis Situations: Empowering and Supporting Women Through ICTs

Women are more likely than their male counterparts to experience poverty through negative life events that can place them in a crisis situation. Past studies highlight the need for a better understanding of the tools that could both support and empower women in crisis situations. We respond to this with a study that illustrates how we may be able to generate ideas for designing technologies that are both empowering and supportive. In collaboration with a non-profit community care center in Australia, we undertook a qualitative study of thirteen women in crisis situations to better understand the issues they faced. We took an in-situ approach, providing video and disposable cameras to these participants and letting them record their experiences. Through an analysis of their videos and photos, followed by semi-structured interviews, we show that while each participant had different life experiences that initially appear unrelated, there are three common challenges they face: their living conditions, social isolation and stigma. As our findings are from an exclusively female perspective, through this research we contribute to the HCI literature on understanding the specific issues faced by women in crisis situations and aim to inform designs for technology that can support and empower women in challenging circumstances.
Tara Capel, Dhaval Vyas, Margot Brereton



Effects of Image-Based Rendering and Reconstruction on Game Developers' Efficiency, Game Performance, and Gaming Experience

Image-based rendering and reconstruction (IBR) approaches minimize the time and cost of developing video-game assets, aiming to help small game studios and indie game developers survive in the competitive video-game industry. To further investigate the effects of IBR on developers' efficiency, game performance, and players' gaming experience, we conducted two evaluation studies: a comparative, ecologically valid study with professional game developers who created games with and without an IBR-based game development pipeline, and a user study, based on eye-tracking and A/B testing, with gamers who played the developed games. The analysis of the results indicates that IBR tools provide a credible solution for creating low-cost video-game assets in a short time, though at the expense of game performance. From a player's perspective, we note that the IBR approach influenced players' preference and gaming experience within contexts of varying levels of the player's visual intersections related to the IBR-created game assets.
George E. Raptis, Christina Katsini, Christos Fidas, Nikolaos Avouris

Exploring in-the-Wild Game-Based Gesture Data Collection

This paper presents an automatic 3D gesture collection concept and architecture based on a rhythm game for public displays. The system was implemented using an off-the-shelf gesture controller, was deployed on a public vertical screen, and was used to study the effects of alternative gesture guidance conditions. In the evaluation presented, we examined how alternative gesture guidance conditions affect users' engagement. The study showed that a demonstration animation and tracking-state feedback each encourage sustained game engagement. The underlying concept and architecture presented here offer actionable UI design insights to help create a large gesture corpus from diverse populations.
Kiyoshi Oka, Weiquan Lu, Kasım Özacar, Kazuki Takashima, Yoshifumi Kitamura

From Objective to Subjective Difficulty Evaluation in Video Games

This paper describes our research investigating the perception of difficulty in video games, defined as players' estimation of their chances of failure. We discuss our approach as it relates to psychophysical studies of subjective difficulty and to cognitive psychology research into the overconfidence effect. The starting point for our study was the assumption that the strong motivational pull of video games may lead players to become overconfident, and thereby underestimate their chances of failure. We design and implement a method for an experiment using three games, each representing a different type of difficulty, wherein players bet on their capacity to succeed. Our results confirm the existence of a gap between players' actual and self-evaluated chances of failure. Specifically, players seem to underestimate high levels of difficulty. The results do not show any influence on difficulty underestimation from the player's gender, feelings of self-efficacy, risk aversion or gaming habits.
Thomas Constant, Guillaume Levieux, Axel Buendia, Stéphane Natkin

Improved Memory Elicitation in Virtual Reality: New Experimental Results and Insights

Eliciting accurate and complete knowledge from individuals is a non-trivial challenge. In this paper, we present the evaluation of a virtual-world based approach, informed by situated cognition theory, which aims to assist with knowledge elicitation. In this approach, we place users into 3D virtual worlds which represent real-world locations and ask users to describe information related to tasks completed in those locations. Through an empirical A/B evaluation with 62 users, we explore the differences in recall ability and behaviour between those viewing the virtual world via a virtual reality headset and those viewing it on a monitor. Previous results suggested that the use of a virtual reality headset meaningfully improved memory recall within the given scenario. In this study, we adjust the experiment protocol to explore the potential confounds of time taken and tool usability. After controlling for these possible confounds, we once again found that those given a virtual reality headset were able to recall more information about the given task than those viewing the virtual world on a monitor.
Joel Harman, Ross Brown, Daniel Johnson

Practice in Reality for Virtual Reality Games: Making Players Familiar and Confident with a Game

Game designers include training levels in video games to prepare players so that they can enjoy the game. The training levels of virtual reality (VR) games are typically assumed to be within the virtual world of the game. New players must learn about a new game in such an unfamiliar virtual world. A tutorial in the real world offers a potential way to enable players to learn about a new game and to practice the skills in a familiar world. To explore the effects of a real-world tutorial in VR games, an experiment was conducted, the results of which show that a real-world tutorial is effective in helping new players feel confident about and familiar with a VR game before playing it. However, it is not as effective as a virtual-world tutorial in increasing game performance.
Jeffrey C. F. Ho

Human Perception, Cognition and Behaviour


I Smell Creativity: Exploring the Effects of Olfactory and Auditory Cues to Support Creative Writing Tasks

Humans perceive different objects, scenes or places using all their senses. This sensory richness also plays an important role in creative activities. Humans recall those sensory experiences in order to spark creativity, e.g. while writing a text. This paper presents a study with 100 students, divided into groups, that explores the effect of auditory and olfactory cues, and their combination, during a creative writing exercise. Our results provide useful insights suggesting that olfactory cues play an important role in users' creative process, even when combined with auditory cues. We believe that these modalities should gain more relevance in the development of creativity support tools and environments for supporting the creative writing process.
Frederica Gonçalves, Diogo Cabral, Pedro Campos, Johannes Schöning

Night Mode, Dark Thoughts: Background Color Influences the Perceived Sentiment of Chat Messages

The discussion of color in HCI often remains restricted to issues of legibility, aesthetics or color preferences. Little attention has been given to the emotional and semantic effects of color on digital content. Using the example of black and white, this paper reviews previous studies in psychology and reports an experiment that investigates the influence of black, white and gray user interface backgrounds on the perception of sentiment in chat messages on a social media platform. Of sixty-seven participants, those who rated the messages against a black background perceived them more negatively than those who worked against a white background. The results suggest that user sentiment perception can be influenced by interface color, especially for ambiguous textual content laced with irony and sarcasm. We claim that this knowledge can be applied in persuasive interaction and user experience design across the entirety of the digital landscape.
Diana Löffler, Lennart Giron, Jörn Hurtienne

Subjective Usability, Mental Workload Assessments and Their Impact on Objective Human Performance

Self-reporting procedures and inspection methods have been largely employed in the fields of interaction and web-design for assessing the usability of interfaces. However, there seems to be a propensity to ignore features related to end-users or the context of application during the usability assessment procedure. This research proposes the adoption of the construct of mental workload as an additional aid to inform interaction and web-design. A user study has been performed in the context of human-web interaction. The main objective was to explore the relationship between the perception of usability of the interfaces of three popular websites and the mental workload imposed on end-users by a set of typical tasks executed over them. Usability scores computed employing the System Usability Scale were compared and related to the mental workload scores obtained employing the NASA Task Load Index and the Workload Profile self-reporting assessment procedures. Findings suggest that perception of usability and subjective assessment of mental workload are two independent, not fully overlapping constructs. They measure two different aspects of the human-system interaction. This distinction enabled the demonstration of how these two constructs can be jointly employed to better explain the objective performance of end-users, a dimension of user experience, and to inform interaction and web-design.
Luca Longo

What is User’s Perception of Naturalness? An Exploration of Natural User Experience

Natural User Interfaces (NUI) are now a well-researched topic. The principles of NUI in the literature primarily focus on designing user interfaces to be intuitively easy to use. But is it enough for a software product to have just an intuitive user interface to give a natural experience? Designing a product that imbibes overall naturalness requires encompassing all aspects of user experience, which goes beyond interface design alone. This study contributes by taking a holistic approach to identifying what users perceive to be natural and what experiences make them feel so. We involved 36 participants with diverse demographics and personalities, giving them a variety of stimuli to elicit their perceptions of naturalness. These were found to be a combination of what they perceived to be natural through visual, cognitive as well as real-life past and present experiences. The insights from this research helped us derive inferences on designing for what we call Natural User Experience (NUX). We found that the level of naturalness does not remain the same over time for users; rather, it goes through a stage-based cycle. We also evolved strategies for improving naturalness by advancing the user's experience across these stages.
Sanjay Ghosh, Chivukula Sai Shruthi, Himanshu Bansal, Arvind Sethia

Information on Demand, on the Move, and Gesture Interaction


Presenting Information on the Driver’s Demand on a Head-Up Display

Head-up displays present driving-related information close to the road scene. The content is readily accessible, but potentially clutters the driver’s view and occludes important parts. This can lead to distraction and negatively influence driving performance. Superimposing display content only on demand – triggered by the driver whenever needed – might provide a good tradeoff between the accessibility of relevant information and the distraction caused by its display. In this paper we present a driving simulator study that investigated the influence of the self-triggered superimposition on workload, distraction and performance. In particular, we compared a gaze-based and a manually triggered superimposition with the permanent display of information and a baseline (speedometer only). We presented four pieces of information with different relevance and update frequency to the driver. We found an increased workload and distraction for the gaze- and manually triggered HUDs as well as an impact on user experience. Participants preferred to have the HUD displayed permanently and with only little content.
Renate Haeuslschmid, Christopher Klaus, Andreas Butz

Seeing Through the Eyes of Heavy Vehicle Operators

Interaction designers of heavy vehicles are challenged by two opposing forces: increasingly information-driven systems resulting in higher visual load, and the need to support a focus on the area of operation. To succeed in the interaction design and application of new technology, a good understanding of the user and the activity is needed. However, field studies involve substantial effort for both researcher and operator. This paper investigates and shows how quick, non-intrusive studies can be conducted by bridging practice from one HCI area into another, i.e. applying guerrilla testing approaches used in mobile and web development to the heavy-vehicle domain, an area not used to this practice. An exploratory study is performed on a diverse set of vehicles in the field. This study describes and presents examples of how both qualitative and quantitative conclusions can be extracted on user attentiveness to digital systems and surroundings.
Markus Wallmyr

TrackLine: Refining touch-to-track Interaction for Camera Motion Control on Mobile Devices

Controlling a film camera to follow an actor or object in an aesthetically pleasing way is a highly complex task, which takes professionals years to master. It entails several sub-tasks, namely (1) selecting or identifying and (2) tracking the object of interest, (3) specifying the intended location in the frame (e.g., at 1/3 or 2/3 horizontally) and (4) timing all necessary camera motions such that they appear smooth in the resulting footage. Traditionally, camera operators just controlled the camera directly or remotely and practiced their motions in several repeated takes until the result met their own quality criteria. Automated motion control systems today assist with the timing and tracking sub-tasks, but leave the other two to the camera operator using input methods such as touch-to-track, which still present challenges in timing and coordination. We designed a refined input method called TrackLine which decouples target and location selection and adds further automation with even improved control. In a first user study controlling a virtual camera, we compared TrackLine to touch-to-track and traditional joystick control and found that the results were objectively both more accurate and more easily achieved, which was also confirmed by the subjective ratings of our participants.
Axel Hoesl, Sarah Aragon Bartsch, Andreas Butz

Understanding Gesture Articulations Variability

Interfaces based on mid-air gestures often use a one-to-one mapping between gestures and commands, but most remain very basic. In fact, people exhibit inherent intrinsic variations in their gesture articulations because gestures carry a dependency on both the person producing them and the specific context, social or cultural, in which they are being produced. We advocate that allowing applications to map many gestures to one command is a key step toward giving more flexibility, avoiding penalizations, and leading to better user interaction experiences. Accordingly, this paper presents our results on mid-air gesture variability. We are mainly concerned with understanding variability in mid-air gesture articulations from a purely user-centric perspective. We describe a comprehensive investigation of how users vary the production of gestures under unconstrained articulation conditions. The conducted user study consisted of two tasks. The first provides a model of user conception and production of gestures; from this study we also derive an embodied taxonomy of gestures. This taxonomy is used as a basis for the second experiment, in which we perform a fine-grained quantitative analysis of gesture articulation variability. Based on these results, we discuss implications for gesture interface design.
Orlando Erazo, Yosra Rekik, Laurent Grisoni, José A. Pino

Watching Your Back While Riding Your Bike: Designing for Preventive Self-care During Motorbike Commuting

This paper presents our early exploratory work investigating if, and how, motorbike riders would engage with visual cues on lower-back posture to adjust their body posture while riding, and in turn prevent lower back injuries due to physical stress. The design exploration reported is part of a larger series of investigations looking into the broader question of integrating measures for preventive self-care with existing everyday activities (e.g. the daily motorcycle commute) by means of digital technology. We are guided by the concept of embodied self-monitoring grounded in theories on the embodied and circumstantial nature of human actions, a construct previously used to guide design-oriented research in the domain of out-of-clinic physical rehabilitation. We follow a research-through-design approach with the sketching of user experience as our primary mode of inquiry, as we look to expand opportunities for interaction design in the domain of preventive self-care. We report on the outcome of in-situ enactments performed by four motorbike riders as co-explorers engaging with our interactive soft&hardware sketches while actually riding in traffic. In-situ enactments and follow-up interviews with the riders encourage us to (a) further elaborate our interactive sketches for motorbike commuting, and (b) investigate more broadly the design of digital technology in support of preventive self-care as an integrated part of mundane activities such as, in the case at hand, the daily motorcycle commute.
Tomas Sokoler, Naveen L. Bagalkot

Interaction at the Workplace


FeetForward: On Blending New Classroom Technologies into Secondary School Teachers’ Routines

Secondary school teachers have complex, intensive and dynamic routines in their classrooms, which leaves limited attentional resources for human-computer interaction. Leveraging principles of peripheral interaction can reduce the attention demanded by technologies, so that interactions can blend more seamlessly into the everyday routine. We present the design and deployment of FeetForward, an open-ended, foot-based peripheral interface to facilitate teachers' use of interactive whiteboards. FeetForward was used as a technology probe to explore the design of new classroom technologies which are to become peripheral and routine. The deployment took place with three teachers in their classrooms for five weeks. Based on in-depth and longitudinal interviews with the teachers, we discuss how FeetForward integrated into teachers' routines, what its effects were on teaching, and whether its foot-based interaction style was suitable for peripheral interaction. Subsequently, we generalize implications for the design of peripheral classroom technologies.
Pengcheng An, Saskia Bakker, Berry Eggen

Human-Building Interaction: When the Machine Becomes a Building

Acknowledging the ongoing digitalization of buildings and their existence as interactive objects, this article sets out to consolidate Human-Building Interaction (HBI) as a new research domain within HCI. It exposes fundamental characteristics of HBI, such as user immersion in the "machine" and extensive space and time scales, and proposes an operational definition of the domain. Building upon a comprehensive survey of relevant cross-disciplinary research, HBI is characterized in terms of dimensions representing the interaction space and modalities that can be invoked to enhance interactions. Specific methodological challenges are discussed, and illustrative research projects are presented demonstrating the relevance of the domain. New directions for future research are proposed, pointing out the domain's potentially significant impact on society.
Julien Nembrini, Denis Lalanne

Investigating Wearable Technology for Fatigue Identification in the Workplace

Fatigue has been identified as a significant contributor to workplace accident rates. However, risk minimisation is a process largely based on self-reporting methodologies, which are not suitable for fatigue identification in high risk industries. Wearable technology which is capable of collecting physiological data such as step and heart rates as an individual performs workplace tasks has been proposed as a possible solution. Such devices are minimally intrusive to the individual and so can be used throughout the working day. Much is promised by the providers of such technology, but it is unclear how suitable it is for in-situ measurements in real-world work scenarios. To investigate this, we performed a series of studies designed to capture physiological and psychological data under differing (physical and mental) loading types with the intention of finding out how suitable such equipment is. Using reaction time (simple and choice) as a measure of performance we found similar correlations exist between loading duration and our measured indicators as those found in large-scale laboratory studies using state of the art equipment. Our results suggest that commercially available activity monitors are capable of collecting meaningful data in workplaces and are, therefore, worth investigating further for this purpose.
Christopher Griffiths, Judy Bowen, Annika Hinze

Leveraging Conversational Systems to Assist New Hires During Onboarding

The task of onboarding a new hire consumes great amounts of resources from organizations. The faster a "newbie" becomes an "insider", the higher the chances of job satisfaction, retention, and advancement in their position. Conversational agents (AI agents) have the potential to effectively transform productivity in many enterprise workplace scenarios, so applying them to the onboarding process can prove to be a very solid use case for such agents. In this work, we present a conversational system to aid new hires through their onboarding process. Users interact with the system via an instant messaging platform, fulfilling their work-related information needs as if it were a human assistant. We describe the end-to-end process involved in building a domain-specific conversational system and share our experiences in deploying it to 344 new hires in a month-long study. The feasibility of our approach is evaluated by analyzing message logs and questionnaires. Through three different measures, we observed an accuracy of about 60% at the message level and a higher than average retention rate for the agent. Our results suggest that this agent-based approach can very well compete with the existing tools for new hires.
Praveen Chandar, Yasaman Khazaeni, Matthew Davis, Michael Muller, Marco Crasso, Q. Vera Liao, N. Sadat Shami, Werner Geyer

RemindMe: Plugging a Reminder Manager into Email for Enhancing Workplace Responsiveness

Reminding others to do something or bringing something to someone’s attention by sending reminders is common in the workplace. Our goal was to create a system to reduce the cognitive overhead for employees to manage their email, specifically the incoming and outgoing requests with their colleagues and others. We build on prior research on social request management, interruptions, and cognitive psychology in the design of such a system that includes an email reminder creation algorithm, with a built-in learning mechanism for improving such reminders over time, and a reminder delivery user interface. The system is delivered to users through a browser plugin, allowing it to be built on top of an existing web-based email system within an enterprise.
Casey Dugan, Aabhas Sharma, Michael Muller, Di Lu, Michael Brenndoerfer, Werner Geyer

The Cost of Improved Overview: An Analysis of the Use of Electronic Whiteboards in Emergency Departments

Forming and maintaining an overview of an information space is key to competent action in many situations and often supported by overview displays. We investigate the cost of the improved overview associated with the introduction of electronic whiteboards in four emergency departments (EDs). In such a dynamic environment the work that goes into keeping the whiteboard current is, we contend, an indicator of the cost of maintaining an overview. On the basis of log data for the period 2012-2014, we find that the ED clinicians make an average of 1.91 whiteboard changes per minute to keep the whiteboard current. Performing these changes takes an estimated 6647 hours a year in each ED. While the whiteboard is well-liked and has improved the clinicians' overview, our cost-of-overview estimation shows that it consumes substantial staff resources. This reflects the value the clinicians assign to having an overview but also reveals the amount of resources removed from other activities to maintain this overview.
Morten Hertzum

Interaction with Children


An Interactive Elementary Tutoring System for Oral Health Education Using an Augmented Approach

The conventional elementary education system in India is mostly guided by formal content development, focusing on areas like math, language, science and social science. Children tend to retain very little knowledge about other important areas of learning, such as health care, which need to be developed in their foundation years. Education on oral health is one such example that is not given the focus it ought to be. Considering its importance in early education, we propose a learning environment where children gain knowledge through constant interaction with an intelligent tutoring system. The system addresses the challenges in developing a learning environment for children by introducing audio-visual effects and 3D animations, and by customizing the tutoring process to provide a user-controlled pace of learning. It also employs the Wii Remote to impart tangible hardware interaction with the interface. This paper describes the proposed system and the studies conducted on treatment and control groups to evaluate its efficacy and compare the learning outcomes across various domains. Experimental results show positive effects on learning in the proposed technology-enhanced environment and pave the way for the deployment of more interactive, technology-driven learning processes in the elementary education system.
Mitali Sinha, Suman Deb

Empowered and Informed: Participation of Children in HCI

The participation of end users in design, research and evaluation has long been a feature of HCI. Traditionally, these end users consent to participate in the general belief that they are contributing knowledge that will eventually improve things for themselves or others. The involvement of children in HCI research creates new challenges for ethical participation. This paper brings together current research on ethical participation and models of participation, and presents three tools, CHECk, ActiveInfo and PICO-Art, as well as a set of practical ideas, for researchers to adapt and use in their work with children. The paper explores how effective different aspects of the tools are, and offers a set of practical suggestions based on observational assessments. The main contribution is a culturally adaptable ethical toolkit and a protocol for working ethically with children in HCI.
Janet C. Read, Matthew Horton, Daniel Fitton, Gavin Sim

Gaze Awareness in Agent-Based Early-Childhood Learning Application

Use of technological devices for early childhood learning is increasing. Kindergarten and primary school children now use interactive applications on mobile phones and tablet computers to support and complement classroom learning. With the increase in cognitive technologies, there is further potential to make such applications more engaging by understanding the user context. In this paper, we present Little Bear, a gaze-aware pedagogical agent that tailors its verbal and non-verbal behavior based on the visual attention of the child and employs means to reorient the child’s attention when it is distracted from the learning activity. We used the Little Bear agent in a learning application to teach the vocabulary of everyday fruits and vegetables. Our user study (n = 12) with preschoolers shows that children interacted longer and showed improved short-term retention of the vocabulary using the gaze-aware agent compared to a baseline touch-based application. Our results demonstrate the potential of gaze-aware application design for early childhood learning.
Deepak Akkil, Prasenjit Dey, Deepshika Salian, Nitendra Rajput

Puffy: A Mobile Inflatable Interactive Companion for Children with Neurodevelopmental Disorder

Puffy is a robotic companion that has been designed in cooperation with a team of therapists and special educators as a learning and play companion for children with Neurodevelopmental Disorder (NDD). Puffy has a combination of features that support multisensory stimuli and multimodal interaction, making this robot unique with respect to existing robotic devices used with children with NDD. The egg-shaped body of Puffy is inflatable, soft, and mobile. Puffy can interpret the child’s gestures and movements, facial expressions and emotions; it communicates with the child using voice, lights and projections embedded in its body, as well as movements in space. The paper discusses the principles and requirements underlying the design of Puffy. They take into account the characteristics of NDD and the special needs of children with disorders in the NDD spectrum, and provide guidelines for designers and developers who work in socially assistive robotics for this target group. We also compare Puffy against 21 existing commercial or research robots that have been used with children with NDD, and briefly report a preliminary evaluation of our robot.
Franca Garzotto, Mirko Gelsomini, Yosuke Kinoe
