2013 | Book

Social Robotics

5th International Conference, ICSR 2013, Bristol, UK, October 27-29, 2013, Proceedings

Editors: Guido Herrmann, Martin J. Pearson, Alexander Lenz, Paul Bremner, Adam Spiers, Ute Leonards

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science

About this book

This book constitutes the refereed proceedings of the 5th International Conference on Social Robotics, ICSR 2013, held in Bristol, UK, in October 2013. The 55 revised full papers and 13 abstracts were carefully reviewed and selected from 108 submissions and are presented together with one invited paper. The papers cover topics such as human-robot interaction, child development and care for the elderly, as well as technical issues underlying social robotics: visual attention and processing, motor control and learning.

Table of Contents

Frontmatter

Invited Paper

Building Companionship through Human-Robot Collaboration

While human-robot collaboration has been studied intensively in the literature, little attention has been given to understanding the role of collaborative endeavours in enhancing the companionship between humans and robots. In this position paper, we explore the possibilities of building human-robot companionship through collaborative activities. The design guideline of a companion robot Nancy developed at SRL is introduced, and preliminary studies on human-robot collaboration conducted at SRL and I²R are elaborated. Critical issues and technical challenges in human-robot collaboration systems are discussed. Through these discussions, we aim to draw the attention of the social robotics community to the importance of human-robot collaboration in companionship building, and to stimulate more research effort in this emerging area.

Yanan Li, Keng Peng Tee, Shuzhi Sam Ge, Haizhou Li

Full Papers

Older People’s Involvement in the Development of a Social Assistive Robot

The introduction of social assistive robots is a promising approach to enabling a growing number of elderly people to continue to live in their own homes as long as possible. Older people are often an excluded group in product development; however, this age group is the fastest growing segment in most developed societies. We present a participatory design approach as a methodology to create a dialogue with older people in order to understand the values embodied in robots. We present the results of designing and deploying three participatory workshops and implementing a subsequent robot mock-up study. The results indicate that robot mock-ups can be used as a tool to broaden the knowledge base of users’ personal goals and device needs in a variety of ways, including supporting age-related changes, supporting social interaction, and robot aesthetics. Concerns that robots may foster inactivity and laziness as well as loss of human contact were repeatedly raised and must be addressed in the development of assistive domestic robots.

Susanne Frennert, Håkan Eftring, Britt Östlund
What Older People Expect of Robots: A Mixed Methods Approach

This paper focuses on how older people in Sweden imagine the potential role of robots in their lives. The data collection involved mixed methods, including focus groups, a workshop, a questionnaire and interviews. The findings obtained and lessons learnt from one method fed into another. In total, 88 older people were involved. The results indicate that expectations and preconceptions about robots are multi-dimensional and ambivalent. Ambivalence can be seen in the tension between the benefits of having a robot looking after the older people, helping with or carrying out tasks they are no longer able to do, and the parallel attitudes, resilience and relational inequalities that accompany these benefits. The participants perceived that having a robot might be “good for others but not themselves” and “good as a machine not a friend”, while their relatives and informal caregivers perceived a robot as “not for my relative but for other older people”.

Susanne Frennert, Håkan Eftring, Britt Östlund
Modelling Human Gameplay at Pool and Countering It with an Anthropomorphic Robot

Interaction between robotic systems and humans is becoming increasingly important in industry as well as the private and public sectors. A robot which plays pool against a human opponent involves challenges that most human-robot interaction scenarios have in common: planning in a hybrid state space, numerous uncertainties and a human counterpart with a different visual perception system. In most situations it is important that the robot predicts human decisions in order to react appropriately. In the following, an approach to model and counter the behavior of human pool players is described. The resulting model allows prediction of the stroke a human chooses to perform as well as the outcome of that stroke. This model is combined with a probabilistic search algorithm and implemented on an anthropomorphic robot. By means of this approach the robot is able to defeat a player with better manipulation skills. Furthermore, it is outlined how this approach can be applied to other non-deterministic games or to tasks in a continuous state space.

Konrad Leibrandt, Tamara Lorenz, Thomas Nierhoff, Sandra Hirche
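The core idea of predicting strokes and choosing a counter can be illustrated with a minimal expected-value selection over candidate strokes. This Python sketch is not the authors' model; the stroke names, probabilities and table values are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    name: str
    p_success: float       # estimated probability the shot succeeds
    value_success: float   # heuristic table value if the shot succeeds
    value_failure: float   # heuristic table value if it fails (opponent's turn)

def expected_value(stroke: Stroke) -> float:
    """Expected table value of a stroke under its estimated success probability."""
    return stroke.p_success * stroke.value_success + (1 - stroke.p_success) * stroke.value_failure

def choose_stroke(candidates):
    """Pick the candidate stroke with the highest expected value."""
    return max(candidates, key=expected_value)

if __name__ == "__main__":
    candidates = [
        Stroke("safe bank shot", p_success=0.8, value_success=0.3, value_failure=-0.1),
        Stroke("long pot",       p_success=0.4, value_success=1.0, value_failure=-0.6),
    ]
    best = choose_stroke(candidates)
    print(best.name, round(expected_value(best), 2))
```

In the paper's setting, the success probability and resulting table value would come from the learned model of the human player and the probabilistic search, rather than being hard-coded.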
Automated Assistance Robot System for Transferring Model-Free Objects From/To Human Hand Using Vision/Force Control

This paper proposes an assistance robot system which is able to transfer model-free objects from/to a human hand with the help of visual servoing and force control. The proposed robot system is fully automated, i.e. the handing-over task is performed exclusively by the robot, and the human is considered the weaker party, e.g. elderly, disabled or blind people. The proposed system is supported by different real-time vision algorithms to detect, recognize and track: (1) any object located on a flat surface or conveyor, (2) any object carried by a human hand, and (3) the load-free human hand. Furthermore, the proposed robot system integrates vision and force feedback in order to: (1) perform the handing-over task successfully, from free-space motion through to full physical human-robot integration, and (2) guarantee the safety of the human and react to the motion of the human hand during the handing-over task. The proposed system has shown great efficiency during the experiments.

Mohamad Bdiwi, Alexey Kolker, Jozef Suchý, Alexander Winkler
Robot-Mediated Interviews: Do Robots Possess Advantages over Human Interviewers When Talking to Children with Special Needs?

Children that have a disability are up to four times more likely to be victims of abuse than typically developing children. However, the number of cases that result in prosecution is relatively low. One of the factors influencing this low prosecution rate is communication difficulties. Our previous research has shown that typically developing children respond to a robotic interviewer very similarly to a human interviewer. In this paper we conduct a follow-up study investigating the possibility of Robot-Mediated Interviews with children that have various special needs. In a case study we investigated how 5 children with special needs aged 9 to 11 responded to the humanoid robot KASPAR compared to a human in an interview scenario. The measures used in this study include duration analysis of responses, detailed analysis of transcribed data, questionnaire responses and data from engagement coding. The main questions in the interviews varied in difficulty and focused on the theme of animals and pets. The results from quantitative data analysis reveal that the children interacted with KASPAR in a very similar manner to how they interacted with the human interviewer, providing both interviewers with similar information and amounts of information regardless of question difficulty. However, qualitative analysis suggests that some children may have been more engaged with the robotic interviewer.

Luke Jai Wood, Kerstin Dautenhahn, Hagen Lehmann, Ben Robins, Austen Rainer, Dag Sverre Syrdal
Multidomain Voice Activity Detection during Human-Robot Interaction

The continuing increase in social robots is quickly leading to the cohabitation of humans and social robots in homes. The main mode of interaction with these robots is verbal communication. Social robots are usually endowed with microphones to receive the voice signal of the people they interact with. However, due to the principle the microphones are based on, they receive all kinds of non-verbal signals too. Therefore, it is crucial to differentiate whether the received signal is voice or not.

In this work, we present a Voice Activity Detection (VAD) system to manage this problem. To achieve this, the audio signal captured by the robot is analyzed on-line and several characteristics, or statistics, are extracted. The statistics belong to three different domains: time, frequency, and time-frequency. The combination of these statistics results in a robust VAD system that, by means of the microphones located on the robot, is able to detect when a person starts and stops talking.

Finally, several experiments are conducted to test the performance of the system. These experiments show a high percentage of success in classifying different audio signals as voice or non-voice.

Fernando Alonso-Martin, Álvaro Castro-González, Javier F. Gorostiza, Miguel A. Salichs
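A minimal sketch of the kind of frame-level statistics such a VAD can combine (energy and zero-crossing rate from the time domain, spectral flatness from the frequency domain) is shown below. It is not the authors' system; frame sizes and thresholds are assumed values, and the time-frequency statistics used in the paper are omitted.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n)])

def frame_features(frames):
    """Per-frame statistics from the time and frequency domains."""
    energy = np.mean(frames ** 2, axis=1)                                   # time domain
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)     # time domain
    spectrum = np.abs(np.fft.rfft(frames, axis=1)) + 1e-12
    flatness = np.exp(np.mean(np.log(spectrum), axis=1)) / np.mean(spectrum, axis=1)  # frequency domain
    return energy, zcr, flatness

def simple_vad(x, energy_thresh=1e-3, flatness_thresh=0.5, zcr_thresh=0.3):
    """Label each frame as voice (True) or non-voice (False) using assumed thresholds."""
    energy, zcr, flatness = frame_features(frame_signal(x))
    return (energy > energy_thresh) & (flatness < flatness_thresh) & (zcr < zcr_thresh)

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    tone = 0.1 * np.sin(2 * np.pi * 220 * t)      # voiced-like tone
    noise = 0.01 * np.random.randn(sr)            # background noise
    print(simple_vad(np.concatenate([noise, tone])).mean())
```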
A Low-Cost Classroom-Oriented Educational Robotics System

Over the past few years, there has been growing interest in using robots in education. The use of these tangible devices in combination with problem-based learning activities results in more motivated students, higher grades and a growing interest in the STEM areas. However, most educational robotics systems still have restrictions such as high cost, long setup time, or the need to install software on children’s computers. We present a new, low-cost, classroom-oriented educational robotics system that does not require the installation of any software. It can be used with computers, tablets or smartphones. It also supports multiple robots, and the system can be set up and ready to use in under 5 minutes. The robotics system presented here has been successfully used by two classes of 3rd and 4th graders. Besides improving mathematical reasoning, the system can be employed as a motivational tool for any subject.

Mário Saleiro, Bruna Carmo, Joao M. F. Rodrigues, J. M. H. du Buf
Social Navigation - Identifying Robot Navigation Patterns in a Path Crossing Scenario

The work at hand addresses the question: What kind of navigation behavior do humans expect from a robot in a path crossing scenario? To this end, we developed the “Inverse Oz of Wizard” study design where participants steered a robot in a scenario in which an instructed person is crossing the robot’s path. We investigated two aspects of robot behavior: (1) what are the expected actions? and (2) can we determine the expected action by considering the spatial relationship?

The navigation strategy performed most often was driving straight towards the goal and either stopping when the person and the robot came close, or driving on towards the goal and passing the person’s path. Furthermore, we found that the spatial relationship is significantly correlated with the performed action, and we can precisely predict the expected action using a Support Vector Machine.

Christina Lichtenthäler, Annika Peters, Sascha Griffiths, Alexandra Kirsch
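The reported action prediction can be approximated in spirit with a standard Support Vector Machine over spatial features. The sketch below uses scikit-learn with invented toy features (distance and crossing angle) and labels; the actual features and data are those of the study, not shown here.

```python
import numpy as np
from sklearn.svm import SVC

# Toy training data: [distance to person (m), crossing angle (rad)] -> expected action
X = np.array([
    [0.8, 1.5], [1.0, 1.4], [1.2, 1.6],   # person close, near-perpendicular crossing
    [3.5, 1.5], [4.0, 0.3], [5.0, 1.0],   # person still far away
])
y = np.array(["stop", "stop", "stop", "drive_on", "drive_on", "drive_on"])

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Predict the expected action for new spatial configurations.
print(clf.predict([[1.1, 1.5], [4.5, 0.8]]))
```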
Affordance-Based Activity Placement in Human-Robot Shared Environments

When planning to carry out an activity, a mobile robot has to choose its placement during the activity. Within an environment shared by humans and robots, a social robot should take into account restrictions deriving from the spatial needs of other agents. We propose a solution to the problem of obtaining a target placement for performing an activity that takes the action possibilities of oneself and others into account. The approach is based on affordance spaces that agents can use to perform activities and on socio-spatial reasons that count for or against using such a space.

Felix Lindner, Carola Eschenbach
Exploring Requirements and Alternative Pet Robots for Robot Assisted Therapy with Older Adults with Dementia

Robot assisted therapy has been applied in care for older adults who suffer from dementia for over ten years. Strong effects such as improved interaction and signs of a higher sense of wellbeing have been reported. Still, it is unclear which features are needed and which robotic pets are suitable for this therapy. In this explorative research we interviewed 36 professional caregivers, both experienced and inexperienced with robot assisted therapy (RAT), and compiled a list of requirements. Next, we used this list to compare commercially available robotic pets. We found that many pet robots are usable, although the seal robot Paro meets the requirements best, being superior in sustainability, realistic movements and interactivity. Finally, a test with alternative pets showed that different subjects were attracted to different pets, and a subsequent questionnaire revealed that some caregivers were not only willing to try alternatives to Paro, but also suggested that alternative pets could in some cases be more suitable.

Marcel Heerink, Jordi Albo-Canals, Meritchell Valenti-Soler, Pablo Martinez-Martin, Jori Zondag, Carolien Smits, Stefanie Anisuzzaman
Smooth Reaching and Human-Like Compliance in Physical Interactions for Redundant Arms

This work collectively addresses human-like smoothness and compliance to external contact forces in reaching tasks of redundant robotic arms, enhancing human safety potential and facilitating physical human-robot interaction. A model-based prescribed performance control algorithm is proposed, producing smooth, repeatable reaching movements for the arm and compliant behavior under external contact by shaping the reaching target with the superimposed position output of a human-like impedance model. Simulation results for a 5-DOF human-arm-like robot demonstrate the performance of the proposed controller.

Abdelrahem Atawnih, Zoe Doulgeri
Recognition and Representation of Robot Skills in Real Time: A Theoretical Analysis

Sharing reusable knowledge among robots has the potential to develop robot skills sustainably. The bottlenecks to sharing robot skills across a network are how to recognise and represent reusable robot skills in real time, and how to define reusable robot skills in a way that facilitates the recognition and representation challenge. In this paper, we first analyse the considerations for categorising reusable object-manipulation robot skills, derived from R. C. Schank’s script representation of basic human motion, and define three types of reusable robot skills on the basis of this analysis. We then propose a method with the potential to identify robot skills in real time, and present a theoretical process of skill recognition during task performance. Finally, we characterise reusable robot skills based on the new definitions and explain how the proposed representation of robot skills is potentially advantageous over the current state of the art.

Wei Wang, Benjamin Johnston, Mary-Anne Williams
Robots in Time: How User Experience in Human-Robot Interaction Changes over Time

This paper describes a User Experience (UX) study on industrial robots in the context of a semiconductor factory cleanroom. We accompanied the deployment of a new robotic arm, operating without a safety fence, over one and a half years. Within our study, we explored whether there is a UX difference between robots which have been used for more than 10 years within a safety fence (type A robots) and a newly deployed robot without a fence (type B robot). Further, we investigated whether the UX ratings change over time. The departments of interest were the oven department (type A robots), the etching department (type B robot), and the implantation department (type B robot). To observe experience changes over time, a UX questionnaire was developed and distributed to the operators at three defined points in time within these departments. The first survey was conducted one week after the deployment of robot B (n=23), the second survey was deployed six months later (n=21), and the third survey was distributed one and a half years later (n=23). Our results show increasingly positive UX towards the newly deployed robots over time, which partly aligns with the UX ratings of the robots behind safety fences. However, this effect seems to fade after one year. We further found that the UX ratings on all scales for the established robots were stable at all three points in time.

Roland Buchner, Daniela Wurhofer, Astrid Weiss, Manfred Tscheligi
Interpreting Robot Pointing Behavior

The ability to draw other agents’ attention to objects and events is an important skill on the critical path to effective human-robot collaboration. People use the act of pointing to draw other people’s attention to objects and events for a wide range of purposes. While there is significant work that aims to understand people’s pointing behavior, there is little work analyzing how people interpret robot pointing. Since robots have a wide range of physical bodies and cognitive architectures, the interpretation of pointing is determined by a specific robot’s morphology and behavior. Humanoids and robots whose heads, torsos and arms resemble those of humans may be easier for people to interpret when they point; however, if such robots have perceptual capabilities different from people’s, misinterpretation may occur. In this paper we investigate how ordinary people interpret the pointing behavior of a leading state-of-the-art service robot that has been designed to work closely with people. We tested three hypotheses about how robot pointing is interpreted. The most surprising finding was that the direction and pitch of the robot’s head was important in some conditions.

Mary-Anne Williams, Shaukat Abidi, Peter Gärdenfors, Xun Wang, Benjamin Kuipers, Benjamin Johnston
Coping with Stress Using Social Robots as Emotion-Oriented Tool: Potential Factors Discovered from Stress Game Experiment

It is known from psychology that humans cope with stress either by changing the stress-inducing situation (the problem-oriented coping strategy) or by changing their internal perception of that situation (the emotion-oriented coping strategy). Social robots, as emerging tools with the ability to interact socially with humans, can be of great use in helping people cope with stress. The paper discusses some recent studies about ways of dealing with stress in stressful situations by using social robots. Moreover, we focus on presenting our experimental design and the discovered factors that can allow social robots to assist humans in dealing with stress in an emotion-oriented manner.

Thi-Hai-Ha Dang, Adriana Tapus
Study of a Social Robot’s Appearance Using Interviews and a Mobile Eye-Tracking Device

During interaction with a social robot, its appearance is a key factor. This paper presents the use of methods more widely utilized in market research in order to verify the design of the social robot FLASH during a short-term interaction. The proposed approach relies on in-depth interviews with the participants as well as data from a mobile eye-tracking device.

Michał Dziergwa, Mirela Frontkiewicz, Paweł Kaczmarek, Jan Kędzierski, Marta Zagdańska
Playful Interaction with Voice Sensing Modular Robots

This paper describes a voice sensor, suitable for modular robotic systems, which estimates the energy and fundamental frequency, F0, of the user’s voice. Through a number of example applications and tests with children, we observe how the voice sensor facilitates playful interaction between children and two different robot configurations. In future work, we will investigate if such a system can motivate children to improve voice control and explore how to extend the sensor to detect emotions in the user’s voice.

Bjarke Heesche, Ewen MacDonald, Rune Fogh, Moises Pacheco, David Johan Christensen
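A common way to estimate energy and F0 per frame is via autocorrelation; the sketch below illustrates this generic approach, not the sensor described in the paper. The frame length, search range and voicing threshold are assumed values.

```python
import numpy as np

def estimate_energy_and_f0(frame, sr=16000, f0_min=80.0, f0_max=400.0):
    """Return (energy, F0 in Hz) for one audio frame using autocorrelation.

    Returns F0 = 0.0 when no clear periodicity is found (unvoiced or silence).
    """
    frame = frame - np.mean(frame)
    energy = float(np.mean(frame ** 2))
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / f0_max)
    lag_max = int(sr / f0_min)
    if ac[0] <= 0 or lag_max >= len(ac):
        return energy, 0.0
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    if ac[lag] / ac[0] < 0.3:          # weak periodicity -> treat as unvoiced
        return energy, 0.0
    return energy, sr / lag

if __name__ == "__main__":
    sr = 16000
    t = np.arange(int(0.04 * sr)) / sr             # 40 ms frame
    frame = 0.2 * np.sin(2 * np.pi * 200 * t)      # 200 Hz test tone
    print(estimate_energy_and_f0(frame, sr))       # F0 should be close to 200 Hz
```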
Social Comparison between the Self and a Humanoid
Self-Evaluation Maintenance Model in HRI and Psychological Safety

We investigate whether the SEM (self-evaluation maintenance) model can be applied in HRI in relation to the psychological safety of a robot. The SEM model deals with social comparisons and predicts the cognitive mechanisms that work to enhance or maintain the relative goodness of the self. The results obtained from 139 participants show that higher self-relevance of a task is related to a lower evaluation of the robot regardless of the actual level of performance. Simultaneously, a higher evaluation of performance relates to higher safety. This study replicates the predictions of the SEM model. In this paper, we discuss the generality of these results.

Hiroko Kamide, Koji Kawabe, Satoshi Shigemi, Tatsuo Arai
Psychological Anthropomorphism of Robots
Measuring Mind Perception and Humanity in Japanese Context

Using a representative sample, we explored the validity of measures of psychological anthropomorphism in the Japanese context. We did so by having participants evaluate both robots and human targets regarding “mind perception” (Gray et al., 2007) and “human essence” (Haslam, 2006), respectively. Data from 1,200 Japanese participants confirmed the factor structure of the measures and their overall good psychometric quality. Moreover, the findings emphasize the important role of valence in humanity attribution to both people and robots. Clearly, the proposed self-report measures enlarge the existing repertoire of scales to assess psychological anthropomorphism of robots in the Japanese context.

Hiroko Kamide, Friederike Eyssel, Tatsuo Arai
The Ultimatum Game as Measurement Tool for Anthropomorphism in Human–Robot Interaction

Anthropomorphism is the tendency to attribute human characteristics to non-human entities. This paper presents exploratory work evaluating how human responses during the ultimatum game vary according to the level of anthropomorphism of the opponent, which was either a human, a humanoid robot or a computer. Results from an online user study (N=138) show that rejection scores are higher for a computer opponent than for a human or robotic opponent. Participants also took significantly longer to reply to the offer of the computer than to that of the robot. This indicates that players might use similar processes to decide whether to accept or reject offers made by robotic or human opponents, but a different one in the case of a computer opponent.

Elena Torta, Elisabeth van Dijk, Peter A. M. Ruijten, Raymond H. Cuijpers
Human-Robot Upper Body Gesture Imitation Analysis for Autism Spectrum Disorders

In this paper we combine robot control and data analysis techniques into a system aimed at early detection and treatment of autism. A humanoid robot, Zeno, is used to perform interactive upper body gestures which the human subject can imitate or initiate. The interaction is recorded using a motion capture system, and the similarity of gestures performed by human and robot is measured using the Dynamic Time Warping algorithm. This measurement is proposed as a quantitative similarity measure to objectively analyze the quality of the imitation interaction between the human and the robot. In turn, the clinical hypothesis is that this will serve as a consistent quantitative measurement and can be used to obtain information about the condition and possible improvement of children with autism spectrum disorders. Experimental results with a small set of child subjects are presented to illustrate our approach.

Isura Ranatunga, Monica Beltran, Nahum A. Torres, Nicoleta Bugnariu, Rita M. Patterson, Carolyn Garver, Dan O. Popa
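Dynamic Time Warping itself is a standard algorithm; a compact reference implementation over sequences of joint-angle vectors might look as follows. The gesture data here is synthetic, and the feature representation used in the study may differ.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two sequences of feature vectors.

    a, b: arrays of shape (T, D), e.g. joint angles over time for human and robot.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])     # local distance
            cost[i, j] = d + min(cost[i - 1, j],        # insertion
                                 cost[i, j - 1],        # deletion
                                 cost[i - 1, j - 1])    # match
    return cost[n, m]

if __name__ == "__main__":
    robot = np.sin(np.linspace(0, np.pi, 50))[:, None]          # reference gesture
    human = np.sin(np.linspace(0, np.pi, 65))[:, None] * 0.9    # slower, smaller imitation
    print(round(dtw_distance(robot, human), 3))
```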
Low-Cost Whole-Body Touch Interaction for Manual Motion Control of a Mobile Service Robot

Mobile service robots for interaction with people need to be easily maneuverable by their users, even when physical restrictions make manual pushing and pulling impossible. In this paper, we present a low-cost approach that allows for intuitive tactile control of a mobile service robot while preserving the constraints of a differential drive and obstacle avoidance. The robot’s enclosure has been equipped with capacitive touch sensors able to recognize the proximity of the user’s hands. By simulating the forces applied by the touching hands, a desired motion command for the robot is derived and combined with other motion objectives in a local motion planner (based on the Dynamic Window Approach in our case). User tests showed that this haptic control is intuitively understandable and outperforms a solution using direction buttons on the robot’s touch screen.

Steffen Müller, Christof Schröter, Horst-Michael Gross
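The mapping from touch readings to a motion command can be pictured with a simple force-simulation sketch like the one below; it is only an illustration of the idea, with assumed sensor groups and gains, and it omits the Dynamic Window Approach planner that the paper combines it with.

```python
from dataclasses import dataclass

@dataclass
class TouchReading:
    # Normalised proximity values in [0, 1] for four assumed sensor groups on the enclosure.
    front: float = 0.0
    back: float = 0.0
    left: float = 0.0
    right: float = 0.0

def touch_to_velocity(t: TouchReading, k_lin=0.4, k_ang=0.8):
    """Convert touch readings into a (v, w) command for a differential drive.

    Pushing at the back drives the robot forward, at the front backwards;
    a side touch turns the robot away from the touching hand.
    """
    force_x = k_lin * (t.back - t.front)        # simulated longitudinal force
    torque_z = k_ang * (t.right - t.left)       # simulated turning torque
    v = max(-0.4, min(0.4, force_x))            # clamp linear velocity [m/s]
    w = max(-1.0, min(1.0, torque_z))           # clamp angular velocity [rad/s]
    return v, w

if __name__ == "__main__":
    print(touch_to_velocity(TouchReading(back=0.8)))   # push from behind -> drive forward
    print(touch_to_velocity(TouchReading(left=0.6)))   # touch on the left -> turn right
```

In a full system, the (v, w) pair would be one objective among several handed to the local planner rather than being sent directly to the motors.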
Human-Robot Interaction between Virtual and Real Worlds: Motivation from RoboCup @Home

The main purpose of this work is to investigate HRI issues arising from recent RoboCup @Home competitions, in order to formulate an alternative research framework for HRI studies. Based on a survey of the current competition mechanism and benchmarking approach, and an analysis of actual team performances, the shortcomings of the current approach in motivating the development of high-level cognitive robot intelligence are identified. An alternative research framework with HRI between the virtual and real worlds is then proposed, based on the SIGVerse system. Implementations of the research framework in various developments are discussed to show its feasibility in supporting HRI development. The level of abstraction in embodiment and multimodal interaction can be adjusted in the simulation based on the HRI requirements, to reduce the complexity of the integration problem. The framework motivates subject experts to connect high-level intelligence (e.g. dialogue engines, task planning) to low-level robot control for “deeper” HRI. The HRI simulation is reproducible and scalable thanks to the distributed system architecture, which enables the participation of a large group of human users (over the network/internet) and robots in complex multi-agent, multi-user social interaction.

Jeffrey Too Chuan Tan, Tetsunari Inamura, Komei Sugiura, Takayuki Nagai, Hiroyuki Okada
Habituation and Sensitisation Learning in ASMO Cognitive Architecture

As social robots are designed to interact with humans in unstructured environments, they need to be aware of their surroundings, focus on significant events and ignore insignificant events in their environments. Humans demonstrate a good example of such adaptation by habituating and sensitising to insignificant and significant events respectively. Inspired by human habituation and sensitisation, we develop novel habituation and sensitisation mechanisms and include them in the ASMO cognitive architecture. The capability of these mechanisms is demonstrated in the ‘Smokey robot companion’ experiment. Results show that Smokey can be aware of its surroundings, focus on significant events and ignore insignificant events. ASMO’s habituation and sensitisation mechanisms can be used in robots to adapt to the environment. They can also be used to modify the interaction of components in a cognitive architecture in order to improve agents’ or robots’ performance.

Rony Novianto, Benjamin Johnston, Mary-Anne Williams
Effects of Different Kinds of Robot Feedback

In this paper, we investigate to what extent tutors’ behavior is influenced by different kinds of robot feedback. In particular, we study the effects of online robot feedback in which the robot responds either contingently to the tutor’s social behavior or by tracking the objects presented. We also investigate the impact of the robot’s learning success on tutors’ tutoring strategies. Our results show that only in the condition in which the robot’s behavior is socially contingent do the human tutors adjust their behavior to the robot. In the developmentally equally plausible object-driven condition, in which the robot tracked the objects presented, tutors do not change their behavior significantly, even though in both conditions the robot develops from a prelinguistic stage to producing keywords. Socially contingent robot feedback thus has the potential to influence tutors’ behavior over time. Display of learning outcomes, in contrast, only serves as feedback on robot capabilities when it is coupled with online social feedback.

Kerstin Fischer, Katrin S. Lohan, Chrystopher Nehaniv, Hagen Lehmann
The Frankenstein Syndrome Questionnaire – Results from a Quantitative Cross-Cultural Survey

This paper describes the results of a cross-cultural survey of attitudes towards humanoid robots conducted with Japanese and Western samples. The survey used the tentatively titled “Frankenstein Syndrome Questionnaire” and combined responses from both samples in order to explore common, cross-cultural factor structures. In addition, the differences between samples in terms of relationships between factors, as well as other intra-sample relationships, were examined. Findings suggest that the Western sample’s inter-factor relationships were more structured than the Japanese sample’s, and that effects of intra-sample characteristics such as age and gender were more prevalent in the Western sample than in the Japanese sample. The results are discussed in relation to the notion of the Frankenstein Syndrome advanced by Kaplan [1].

Dag Sverre Syrdal, Tatsuya Nomura, Kerstin Dautenhahn
Region of Eye Contact of Humanoid Nao Robot Is Similar to That of a Human

Eye contact is an important social cue in human-human interaction, but it is unclear how easily it carries over to humanoid robots. In this study we investigated whether the tolerance of making eye contact is similar for the Nao robot as compared to human lookers. We measured the region of eye contact (REC) in three conditions (sitting, standing and eye height). We found that the REC of the Nao robot is similar to that of human lookers. We also compared the centre of REC with the robot’s gaze direction when looking straight at the observer’s nose bridge. We found that the nose bridge lies slightly above the computed centre of the REC. This can be interpreted as an asymmetry in the downward direction of the REC. Taken together these results enable us to model eye contact and the tolerance for breaking eye contact with the Nao robot.

Raymond H. Cuijpers, David van der Pol
Exploring Robot Etiquette: Refining a HRI Home Companion Scenario Based on Feedback from Two Artists Who Lived with Robots in the UH Robot House

This paper presents an exploratory Human-Robot Interaction study which investigated robot etiquette, in particular focusing on understanding the types and forms of robot behaviours that people might expect from a robot that lives with them and shares their home. The experiment was intended to tease out the participants’ reasoning behind their choices, preferences and suggestions for passive robot behaviours that could usefully complement the robot’s active behaviours, in order to allow the robot to exhibit considerate and socially intelligent interactions with people.

Kheng Lee Koay, Michael L. Walters, Alex May, Anna Dumitriu, Bruce Christianson, Nathan Burke, Kerstin Dautenhahn
The Good, The Bad, The Weird: Audience Evaluation of a “Real” Robot in Relation to Science Fiction and Mass Media

When researchers develop robots based on a user-centered design approach, two important questions emerge: How does the representation of robots in science fiction and the mass media affect the general attitude naïve users have towards robots, and how will it affect their attitude towards the specifically developed robot? Previous research has shown that many expectations naïve users have towards real robots are influenced by media representations. Using three empirical studies (a focus group, situated interviews, and an online survey) as a case in point, this paper offers a reflection on the interrelation of media representations and the robot IURO (Interactive Urban Robot). We argue that when it comes to the evaluation of a robot, “good” and “bad” media representations affect the attitude of the participants in different ways. Our results indicate that previous experience of fictional robots through the media leads to “weird”, double-minded feelings towards real robots. To compensate for this, we suggest using the impact of the mass media to actively shape people’s attitudes towards real robots.

Ulrike Bruckenberger, Astrid Weiss, Nicole Mirnig, Ewald Strasser, Susanne Stadler, Manfred Tscheligi
Systems Overview of Ono
A DIY Reproducible Open Source Social Robot

One of the major obstacles in the study of HRI (human-robot interaction) with social robots is the lack of multiple identical robots that allow testing with large user groups. Often, the price of these robots prohibits using more than a handful. Many commercial robots do not possess all the features necessary for specific HRI experiments, and due to the closed nature of the platforms, large modifications are nearly impossible. While open source social robots do exist, they often use high-end components and expensive manufacturing techniques, making them unsuitable for easy reproduction. To address this problem, a new social robotics platform, named Ono, was developed. The design is based on the DIY mindset of the maker movement, using off-the-shelf components and more accessible rapid prototyping and manufacturing techniques. The modular structure of the robot makes it easy to adapt to the needs of an experiment, and by embracing the open source mentality, the robot can be easily reproduced or further developed by a community of users. The low cost, open nature and DIY friendliness of the robot make it an ideal candidate for HRI studies that require a large user group.

Cesar Vandevelde, Jelle Saldien, Maria-Cristina Ciocci, Bram Vanderborght
Sharing Spaces, Sharing Lives – The Impact of Robot Mobility on User Perception of a Home Companion Robot

This paper examines the role of spatial behaviours in building human-robot relationships. A group of 8 participants, involved in a long-term HRI study, interacted with an artificial agent using different embodiments over a period of one and a half months. The robot embodiments had similar interactional and expressive capabilities, but only one embodiment was capable of moving. Participants reported feeling closer to the robot embodiment capable of physical movement and rated it as more likable. Results suggest that while expressive and communicative abilities may be important in terms of building affinity and rapport with human interactants, the importance of physical interactions when negotiating shared physical space in real time should not be underestimated.

Dag Sverre Syrdal, Kerstin Dautenhahn, Kheng Lee Koay, Michael L. Walters, Wan Ching Ho
Qualitative Design and Implementation of Human-Robot Spatial Interactions

Despite the large number of navigation algorithms available for mobile robots, in many social contexts they often exhibit inopportune motion behaviours in proximity to people, often with very “unnatural” movements due to the execution of segmented trajectories or the sudden activation of safety mechanisms (e.g., for obstacle avoidance). We argue that the reason for the problem is not only the difficulty of modelling human behaviours and generating appropriate robot control policies, but also the way human-robot spatial interactions are represented and implemented. In this paper we propose a new methodology based on a qualitative representation of spatial interactions, which is both flexible and compact, adopting the well-defined and coherent formalization of the Qualitative Trajectory Calculus (QTC). We show the potential of a QTC-based approach to abstract and design complex robot behaviours, where the desired robot motion is represented together with its actual performance in one coherent approach, focusing on spatial interactions rather than pure navigation problems.

Nicola Bellotto, Marc Hanheide, Nico Van de Weghe
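To give a flavour of QTC, the sketch below computes a simplified two-symbol (QTC_B-like) state from consecutive positions of a robot and a person: each symbol says whether that agent is moving towards, away from, or keeping its distance to the other. This is an illustrative reading of the calculus, not the paper's implementation.

```python
import numpy as np

def qtc_b(p1_prev, p1_now, p2_prev, p2_now, eps=1e-3):
    """Simplified basic Qualitative Trajectory Calculus (QTC_B) state for two agents.

    Each symbol is '-' (moving towards the other), '+' (moving away),
    or '0' (distance unchanged within tolerance eps).
    """
    def symbol(moving_prev, moving_now, other_ref):
        d_prev = np.linalg.norm(np.asarray(moving_prev) - np.asarray(other_ref))
        d_now = np.linalg.norm(np.asarray(moving_now) - np.asarray(other_ref))
        if d_now < d_prev - eps:
            return "-"
        if d_now > d_prev + eps:
            return "+"
        return "0"

    # Each agent is judged against the other agent's previous position.
    return symbol(p1_prev, p1_now, p2_prev), symbol(p2_prev, p2_now, p1_prev)

if __name__ == "__main__":
    # Robot moves towards the person while the person walks away.
    print(qtc_b((0, 0), (0.2, 0), (2, 0), (2.3, 0)))   # ('-', '+')
```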
Unsupervised Learning Spatio-temporal Features for Human Activity Recognition from RGB-D Video Data

Being able to recognize human activities is essential for several applications, including social robotics. Recently developed commodity depth sensors open up new possibilities for dealing with this problem. Existing techniques extract hand-tuned features, such as HOG3D or STIP, from video data; they do not adapt easily to new modalities. In addition, as depth video data is of low quality due to noise, we face the question: does depth video provide extra information for activity recognition? To address this issue, we propose an unsupervised learning approach that applies equally to RGB and depth video data, and we further employ a multiple kernel learning (MKL) classifier to take combinations of the different modalities into account. We show that the low-quality depth video is discriminative for activity recognition. We also demonstrate that our approach achieves performance superior to state-of-the-art approaches on two challenging RGB-D activity recognition datasets.

Guang Chen, Feihu Zhang, Manuel Giuliani, Christian Buckl, Alois Knoll
Head Pose Patterns in Multiparty Human-Robot Team-Building Interactions

We present a data collection setup for exploring turn-taking in three-party human-robot interaction involving objects competing for attention. The collected corpus comprises 78 minutes across four interactions. Using automated techniques to record head pose and speech patterns, we analyze head pose patterns in turn transitions. We find that the introduction of objects makes addressee identification based on head pose more challenging. The symmetrical setup also allows us to compare human-human with human-robot behavior within the same interaction. We argue that this symmetry can be used to assess to what extent the system exhibits human-like behavior.

Martin Johansson, Gabriel Skantze, Joakim Gustafson
I Would Like Some Food: Anchoring Objects to Semantic Web Information in Human-Robot Dialogue Interactions

Ubiquitous robotic systems present a number of interesting application areas for socially assistive robots that aim to improve quality of life. In particular, the combination of smart home environments and relatively inexpensive robots can be a viable technological solution for assisting elderly people and persons with disabilities in their own homes. Such services require an easy interface, like spoken dialogue, and the ability to refer to physical objects using semantic terms. This paper presents an implemented system combining a robot and a sensor network deployed in a test apartment in an elderly residence area. The paper focuses on the creation and maintenance (anchoring) of the connection between the semantic information present in the dialogue and perceived physical objects in the home. Semantic knowledge about concepts and their correlations is retrieved from on-line resources and ontologies, e.g. WordNet, and sensor information is provided by cameras distributed in the apartment.

Andreas Persson, Silvia Coradeschi, Balasubramanian Rajasekaran, Vamsi Krishna, Amy Loutfi, Marjan Alirezaie
Situated Analysis of Interactions between Cognitively Impaired Older Adults and the Therapeutic Robot PARO

In order to explore the social and behavioral mechanisms behind the therapeutic effects of PARO, a robot resembling a baby seal, we conducted an eight-week-long study of the robot’s use in a group activity with older adults in a local retirement facility. Our research confirms PARO’s positive effects on participants, who showed increased physical and verbal interaction, as evidenced by our behavioral analysis of video-recorded interactions. We also analyzed the behavioral patterns in the group interaction and found that the mediation of the therapist, the individual interpretations of PARO by different participants, and the context of use are significant factors that support the successful use of PARO in therapeutic implementations. In conclusion, we discuss the importance of taking the broader social context into account in robot evaluation.

Wan-Ling Chang, Selma Šabanović, Lesa Huber
Closing the Loop: Towards Tightly Synchronized Robot Gesture and Speech

To engage in natural interactions with humans, social robots should produce speech-accompanying non-verbal behaviors such as hand and arm gestures. Given the special constraints imposed by the physical properties of a humanoid robot, successful multimodal synchronization is difficult to achieve. Introducing the first closed-loop approach to speech and gesture generation for humanoid robots, we propose a multimodal scheduler for improved synchronization based on two novel features, namely an experimentally fitted forward model and a feedback-based adaptation mechanism. Technical results obtained with the implemented scheduler demonstrate the feasibility of our approach; empirical results from an evaluation study highlight the implications of the present work.

Maha Salem, Stefan Kopp, Frank Joublin
Teleoperation of Domestic Service Robots: Effects of Global 3D Environment Maps in the User Interface on Operators’ Cognitive and Performance Metrics

This paper investigates the suitability of visualizing global 3D environment maps generated from RGB-D sensor data in teleoperation user interfaces for service robots. We carried out a controlled experiment involving 27 participants, four teleoperation tasks, and two types of novel global 3D mapping techniques. Results show substantial advantages of global 3D mapping over a control condition for three of the four tasks. Global 3D mapping in the user interface led to reduced search times for objects and to fewer collisions. In most situations it also resulted in less operator workload, higher situation awareness, and higher accuracy of operators’ mental models of the remote environment.

Marcus Mast, Michal Španěl, Georg Arbeiter, Vít Štancl, Zdeněk Materna, Florian Weisshardt, Michael Burmester, Pavel Smrž, Birgit Graf
Artists as HRI Pioneers: A Creative Approach to Developing Novel Interactions for Living with Robots

In this article we present a long-term, continuous human-robot co-habitation experiment, which involved two professional artists, whose artistic work explores the boundary between science and society. The artists lived in the University of Hertfordshire Robot House full-time with various robots with different characteristics in a smart home environment. The artists immersed themselves in the robot populated living environment in order to explore and develop novel ways to interact with robots. The main research aim was to explore in a qualitative way the impact of a continuous weeklong exposure to robot companions and sensor environments on humans. This work has developed an Integrative Holistic Feedback Approach (IHFA) involving knowledgeable users in the design process of appearances, functionality and interactive behaviour of robots.

Hagen Lehmann, Michael L. Walters, Anna Dumitriu, Alex May, Kheng Lee Koay, Joan Saez-Pons, Dag Sverre Syrdal, Luke Wood, Joe Saunders, Nathan Burke, Ismael Duque-Garcia, Bruce Christianson, Kerstin Dautenhahn
iCharibot: Design and Field Trials of a Fundraising Robot

In this work, we address the problem of increasing charitable donations through a novel, engaging fundraising robot: the Imperial Charity Robot (iCharibot). To better understand how to engage passers-by, we conducted a field trial in outdoor locations at a busy area in London, spread across 9 sessions of 40 minutes each. During our experiments, iCharibot attracted 679 people and engaged with 386 individuals. Our results show that interactivity led to longer user engagement with the robot. Our data further suggest that both saliency and interactivity led to an increase in the total donation amount. These findings should prove useful for the future design of robotic fundraisers in particular and of social robots in general.

Miguel Sarabia, Tuan Le Mau, Harold Soh, Shuto Naruse, Crispian Poon, Zhitian Liao, Kuen Cherng Tan, Zi Jian Lai, Yiannis Demiris
“It Don’t Matter If You’re Black or White”?
Effects of Robot Appearance and User Prejudice on Evaluations of a Newly Developed Robot Companion

Previous work shows that other people are evaluated more positively when they are perceived as part of the evaluator’s ingroup. This phenomenon – ingroup bias – has been robustly documented in social psychological intergroup research. To test this effect in the domain of social robots, we conducted an experiment with 61 Caucasian (White) American participants who rated either a robot that resembled the participants’ Caucasian ingroup or a social outgroup on mind perception (i.e., the attribution of agency and experience). First, we predicted that the ingroup robot would be evaluated more positively than the outgroup robot. Moreover, we assessed the Caucasian Americans’ endorsement of anti-Black prejudice. Thus, second, we hypothesized interaction effects of robot type and level of prejudice on mind perception, meaning that effects of ingroup bias should be particularly pronounced under high (vs. low) levels of self-reported anti-Black prejudice. Interestingly, we did not obtain main effects of robot type on mind perception. Contrary to our hypothesis, participants even seemed to attribute more agency and experience to the outgroup robot relative to the ingroup robot. As expected, no main effects of level of prejudice on mind perception emerged. Importantly, however, we obtained the predicted interaction effects on the central dependent measures. Obviously, the interplay of design choices and user attitudes may bias anthropomorphic inferences about robot companions, posing a psychological barrier to human-robot companionship and pleasant human-robot interaction.

Friederike Eyssel, Steve Loughnan
A Humanoid Robot Companion for Wheelchair Users

In this paper we integrate a humanoid robot with a powered wheelchair with the aim of lowering the cognitive requirements needed for powered mobility. We propose two roles for this companion: pointing out obstacles and giving directions. We show that children enjoyed driving with the humanoid companion by their side during a field-trial in an uncontrolled environment. Moreover, we present the results of a driving experiment for adults where the companion acted as a driving aid and conclude that participants preferred the humanoid companion to a simulated companion. Our results suggest that people will welcome a humanoid companion for their wheelchairs.

Miguel Sarabia, Yiannis Demiris
Tuning Cost Functions for Social Navigation

Human-Robot Interaction literature frequently uses Gaussian distributions within navigation costmaps to model proxemic constraints around humans. While it has proven to be effective in several cases, this approach is often hard to tune to get the desired behavior, often because of unforeseen interactions between different elements in the costmap. There is, as far as we are aware, no general strategy in the literature for how to predictably use this approach.

In this paper, we describe how the parameters for the soft constraints can affect the robot’s planned paths, and what constraints on the parameters can be introduced in order to achieve certain behaviors. In particular, we show the complex interactions between the Gaussian’s parameters and elements of the path planning algorithms, and how undesirable behavior can result from configurations exceeding certain ratios. These properties are explored using mathematical models of the paths and two sets of tests: the first using simulated costmaps, and the second using live data in conjunction with the ROS Navigation algorithms.

David V. Lu, Daniel B. Allan, William D. Smart
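The Gaussian proxemic cost the paper analyses can be written down directly; the sketch below builds a toy costmap around one person and computes the radius at which the cost falls below an assumed planner threshold, which is the kind of amplitude/variance/threshold interaction the paper studies. All numeric values are assumptions.

```python
import numpy as np

def person_cost(x, y, px, py, amplitude=100.0, sigma=0.8):
    """Gaussian proxemic cost contributed by a person standing at (px, py)."""
    d2 = (x - px) ** 2 + (y - py) ** 2
    return amplitude * np.exp(-d2 / (2.0 * sigma ** 2))

def costmap(person, size=6.0, resolution=0.05, **kwargs):
    """Build a square costmap centred on the origin with a single person in it."""
    coords = np.arange(-size / 2, size / 2, resolution)
    xx, yy = np.meshgrid(coords, coords)
    return np.minimum(person_cost(xx, yy, *person, **kwargs), 254)  # cap below a "lethal" value

if __name__ == "__main__":
    cm = costmap(person=(1.0, 0.0), amplitude=120.0, sigma=0.8)
    # Radius at which the Gaussian cost drops below an assumed planner threshold of 50:
    sigma, amplitude, thresh = 0.8, 120.0, 50.0
    print(round(sigma * np.sqrt(2 * np.log(amplitude / thresh)), 2), "m")
    print(round(float(cm.max()), 1))
```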
Child-Robot Interaction: Perspectives and Challenges

Child-Robot Interaction (cHRI) is a promising point of entry into the rich challenge of social HRI. Drawing on three years of experience gained in a cHRI research project, this paper offers a view on the opportunities offered by letting robots interact with children rather than with adults, and by situating the interaction in real-world circumstances rather than lab settings. It identifies the main challenges facing the field of cHRI: the technical challenges, while tremendous, might be overcome by moving away from the classical perspective of seeing social cognition as residing inside an agent, towards seeing social cognition as a continuous and self-correcting interaction between two agents.

Tony Belpaeme, Paul Baxter, Joachim de Greeff, James Kennedy, Robin Read, Rosemarijn Looije, Mark Neerincx, Ilaria Baroni, Mattia Coti Zelati
Training a Robot via Human Feedback: A Case Study

We present a case study of applying a framework for learning from numeric human feedback, tamer, to a physically embodied robot. In doing so, we also provide the first demonstration of the ability to train multiple behaviors by such feedback without algorithmic modifications, and of a robot learning from free-form human-generated feedback without any further guidance or evaluative feedback. We describe transparency challenges specific to a physically embodied robot learning from human feedback, and adjustments that address these challenges.

W. Bradley Knox, Peter Stone, Cynthia Breazeal
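tamer itself learns a regression model of the human reward with credit assignment over delayed feedback; as a much simplified, tabular illustration of the idea of learning purely from numeric human feedback, one might write:

```python
import random
from collections import defaultdict

class TamerStyleAgent:
    """Tabular sketch of learning from numeric human feedback (not the tamer implementation).

    The agent learns an estimate H(s, a) of the human trainer's reward signal
    and acts greedily with respect to it; no environmental reward is used.
    """

    def __init__(self, actions, alpha=0.2):
        self.actions = actions
        self.alpha = alpha                  # learning rate for feedback updates
        self.h = defaultdict(float)         # estimated human reward per (state, action)

    def act(self, state):
        best = max(self.actions, key=lambda a: self.h[(state, a)])
        # Break ties randomly so untried actions still get explored.
        ties = [a for a in self.actions if self.h[(state, a)] == self.h[(state, best)]]
        return random.choice(ties)

    def give_feedback(self, state, action, feedback):
        """Move H(s, a) towards the numeric feedback the trainer just gave."""
        key = (state, action)
        self.h[key] += self.alpha * (feedback - self.h[key])

if __name__ == "__main__":
    agent = TamerStyleAgent(actions=["approach", "wait", "retreat"])
    for _ in range(20):
        a = agent.act("person_nearby")
        agent.give_feedback("person_nearby", a, +1.0 if a == "wait" else -1.0)
    print(agent.act("person_nearby"))       # converges to "wait" after a few rounds
```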
Real Time People Tracking in Crowded Environments with Range Measurements

Social and assistive robots have recognised benefits for future patient care and care of the elderly. In real-life applications, these robots often navigate within crowded environments. One of the basic requirements is to detect how people move within the scene and what the general pattern of their dynamics is. Laser range sensors have been applied to people tracking in many applications, as they are more precise, robust to lighting conditions and have a broader field of view compared to colour or depth cameras. However, in crowded environments they are prone to environmental noise and can produce a high false positive rate for people detection. The purpose of this paper is to propose a robust method for tracking people in crowded environments based on a laser range sensor. The main contribution of the paper is the development of an enhanced Probability Hypothesis Density (PHD) filter for accurate tracking of multiple people in crowded environments. Different object detection modules are proposed for track initialisation and people tracking. This separation reduces the misdetection rate while increasing the tracking accuracy. Targets are initialised using a people detector module, which provides a good estimate of where people are located. Each person is then tracked using a different object detection module with high accuracy, and the state of each person is updated by the PHD filter. The proposed approach was tested on challenging datasets, showing an increase in performance on two metrics.

Javier Correa, Jindong Liu, Guang-Zhong Yang
Who, How, Where: Using Exemplars to Learn Social Concepts

This article introduces exemplars as a method of stereotype learning by a social robot. An exemplar is a highly specific representation of an interaction with a particular person. Methods that allow a robot to create prototypes from its set of exemplars are presented. Using these techniques, we develop the situation prototype, a representation that captures information about an environment, the people typically found in it, and their actions. We show that this representation can be used by a robot to reason about several important social questions.

Alan R. Wagner, Jigar Doshi
An Asynchronous RGB-D Sensor Fusion Framework Using Monte-Carlo Methods for Hand Tracking on a Mobile Robot in Crowded Environments

Gesture recognition for human-robot interaction is a prerequisite for many social robotic tasks. One of the main technical difficulties is hand tracking in crowded and dynamic environments. Many existing methods have only been shown to work in clutter-free settings.

This paper proposes a sensor fusion based hand tracking algorithm for crowded environments. It is shown to significantly improve the accuracy of existing hand detectors, based on depth and RGB information. The main novelties of the proposed method include: a) a Monte-Carlo RGB update process to reduce false positives; b) online skin colour learning to cope with varying skin colour, clothing and illumination conditions; c) an asynchronous update method to integrate depth and RGB information for real-time applications. Tracking performance is evaluated in a number of controlled scenarios and crowded environments. All datasets used in this work have been made publicly available.

Stephen McKeague, Jindong Liu, Guang-Zhong Yang
Using Spatial Semantic and Pragmatic Fields to Interpret Natural Language Pick-and-Place Instructions for a Mobile Service Robot

We present a methodology for enabling mobile service robots to follow natural language instructions for object pick-and-place tasks from non-expert users, with and without user-specified constraints, and with a particular focus on spatial language understanding. Our approach is capable of addressing both the semantic and pragmatic properties of object movement-oriented natural language instructions, and in particular, proposes a novel computational field representation for the incorporation of spatial pragmatic constraints in mobile manipulation task planning. The design and implementation details of our methodology are also presented, including the grammar utilized and our procedure for pruning multiple candidate parses based on context. The paper concludes with an evaluation of our approach implemented on a simulated mobile robot operating in both 2D and 3D home environments.

Juan Fasola, Maja J. Matarić
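The notion of a spatial field can be illustrated with simple distance-based applicability functions; the sketch below scores candidate placements for an instruction like "put it near the table, away from the doorway". The field shapes and parameters are invented and are not the paper's computational field representation.

```python
import numpy as np

def near_field(x, y, ref, scale=0.5):
    """Applicability of "near <ref>" at (x, y): 1 at the reference, decaying with distance."""
    d = np.hypot(x - ref[0], y - ref[1])
    return np.exp(-(d / scale) ** 2)

def away_from_field(x, y, ref, scale=1.0):
    """Pragmatic constraint "away from <ref>": complement of a wider "near" field."""
    return 1.0 - near_field(x, y, ref, scale)

def best_placement(candidates, table, doorway):
    """Score candidate placements for "put it near the table, away from the doorway"."""
    scores = [near_field(x, y, table) * away_from_field(x, y, doorway)
              for x, y in candidates]
    return candidates[int(np.argmax(scores))]

if __name__ == "__main__":
    candidates = [(1.0, 0.2), (1.1, 1.8), (3.0, 0.1)]
    print(best_placement(candidates, table=(1.0, 0.0), doorway=(1.0, 2.0)))
```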
Bodily Mood Expression: Recognize Moods from Functional Behaviors of Humanoid Robots

Our goal is to develop bodily mood expression that can be used during the execution of functional behaviors for humanoid social robots. Our model generates such expression by stylizing behaviors, modulating behavior parameters within functional bounds. We have applied this approach to two behaviors, waving and pointing, and obtained parameter settings corresponding to different moods, as well as interrelations between parameters, from a design experiment. This paper reports an evaluation of the parameter settings in a recognition experiment under three conditions: modulating all parameters, only the important parameters, and only the unimportant parameters. The results show that valence and arousal can be recognized well when the important parameters are modulated. Modulating only the unimportant parameters shows promise for expressing weak moods. Speed parameters, repetition, and head-up-down movement were found to correlate with arousal, while speed parameters may correlate more with valence than with arousal when they are slow.

Junchao Xu, Joost Broekens, Koen Hindriks, Mark A. Neerincx
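The idea of modulating behavior parameters within functional bounds can be sketched as a mapping from (valence, arousal) to waving parameters. The parameter set, gains and bounds below are illustrative assumptions, not the settings obtained in the authors' design experiment.

```python
from dataclasses import dataclass

@dataclass
class WaveParams:
    speed: float        # rad/s of the waving motion
    amplitude: float    # rad, elbow swing
    repetitions: int
    head_pitch: float   # rad, positive = head up

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def modulate_wave(valence: float, arousal: float) -> WaveParams:
    """Map mood (valence, arousal in [-1, 1]) to waving parameters within functional bounds."""
    return WaveParams(
        speed=clamp(1.0 + 0.8 * arousal, 0.3, 2.0),          # faster when aroused
        amplitude=clamp(0.6 + 0.3 * valence, 0.2, 1.0),      # wider when positive
        repetitions=int(clamp(2 + round(2 * arousal), 1, 5)),
        head_pitch=clamp(0.2 * valence + 0.1 * arousal, -0.3, 0.3),
    )

if __name__ == "__main__":
    print(modulate_wave(valence=0.8, arousal=0.7))    # cheerful, energetic wave
    print(modulate_wave(valence=-0.6, arousal=-0.5))  # subdued wave
```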
Cooperative Robot Manipulator Control with Human ‘pinning’ for Robot Assistive Task Execution

This paper presents the application of a multi-agent controller to a human-robot cooperative task in which the human takes the lead and two robotic manipulators act as agents, allowing for physical assistance of the human in a lifting task: the human provides direction for two synchronized robot arms placing a tray with a water glass in a human-robot interaction experiment.

Novel adaptive multi-agent theory is exploited to achieve precise coordination between the arms while they are led by the human. A novel finite-time adaptation scheme accommodates changing structures, such as the removal of the leading agent, so that consensus and synchronisation are retained. This is permitted by the decentralized control structure, in which each agent is supported by an agent-specific controller and the information exchanged between agents is limited to the position and velocity of each manipulator. Thus, the controller is robust to structural changes in the multi-agent network, e.g. the removal of the pinning agent.

Muhammad Nasiruddin Mahyuddin, Guido Herrmann
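The paper's finite-time adaptive scheme is not reproduced here, but the underlying consensus-with-pinning idea can be sketched with a standard damped update in which agents are pulled towards each other and, while present, towards a leader. The gains, time step and scalar state are illustrative assumptions.

```python
import numpy as np

def consensus_step(x, v, leader=None, k_c=2.0, k_p=1.5, k_d=1.2, dt=0.01):
    """One synchronisation step for agent positions x and velocities v.

    Each agent is pulled towards the average of all agents (consensus term);
    if a leader ("pinning") position is given, all agents are also pulled to it.
    """
    x, v = np.asarray(x, float), np.asarray(v, float)
    mean = x.mean()
    acc = -k_c * (x - mean) - k_d * v
    if leader is not None:
        acc += -k_p * (x - leader)
    v = v + dt * acc
    x = x + dt * v
    return x, v

if __name__ == "__main__":
    x, v = np.array([0.0, 0.5]), np.zeros(2)      # toy 1-D end-effector heights [m]
    for step in range(3000):
        target = 0.3 if step < 1500 else None     # leader removed halfway through
        x, v = consensus_step(x, v, leader=target)
    print(np.round(x, 3))                         # agents remain synchronised
```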
Effects of Politeness and Interaction Context on Perception and Experience of HRI

Politeness is believed to facilitate communication in human interaction, as it can minimize the potential for conflict and confrontation. Regarding the role of politeness strategies for human-robot interaction, conflicting findings are presented in the literature. Thus, we conducted a between-participants experimental study with a receptionist robot to gain a deeper understanding of how politeness on the one hand, and the type of interaction itself on the other hand, might affect and shape user experience and evaluation of HRI. Our findings suggest that the interaction context has a greater impact on participants’ perception of the robot in HRI than the use – or lack – of politeness strategies.

Maha Salem, Micheline Ziadee, Majd Sakr
Facial Expressions and Gestures to Convey Emotions with a Humanoid Robot

This paper presents the results of a perceptual study with ZECA (Zeno Engaging Children with Autism), a robot able to display facial expressions. ZECA is a robotic tool used to study human-robot interaction with children with Autism Spectrum Disorder, and this study describes the first steps towards that goal. Facial expressions and gestures conveying emotions such as sadness, happiness, or surprise are displayed by the robot. The design of the facial expressions, based on action units, is presented. The participants answered a questionnaire intended to verify whether these expressions, with or without gestures, were recognized as such in the corresponding videos. Results show that participants were able to recognize the emotion featured in each video, and that the gestures were a valuable addition to recognition.

Sandra Costa, Filomena Soares, Cristina Santos
PEPITA: A Design of Robot Pet Interface for Promoting Interaction

Sharing social information such as photos and experiences is becoming part of everyday interaction. In this study, we introduce a robot pet that is able to bring people together by promoting interaction both among humans who share the same place and among those separated by distance. However, the use of communication technologies has raised concerns that people become immersed in virtual experiences while real-life social interaction decreases as a consequence. To avoid this, we propose a way of sharing interactions in the same space using a projector, while allowing distant communication via the internet in order to share a basic kind of human communication such as hugs. For this first stage, we present the design and implementation of the proposed robot pet, which is based on a smartphone and a projector that displays photos and the pet’s internal states. In addition, an LED color study in a public space is provided as a set of design parameters.

Eleuda Nuñez, Kyohei Uchida, Kenji Suzuki
Backmatter
Metadata

Title: Social Robotics
Editors: Guido Herrmann, Martin J. Pearson, Alexander Lenz, Paul Bremner, Adam Spiers, Ute Leonards
Copyright Year: 2013
Publisher: Springer International Publishing
Electronic ISBN: 978-3-319-02675-6
Print ISBN: 978-3-319-02674-9
DOI: https://doi.org/10.1007/978-3-319-02675-6
