
About this Book

This book constitutes the refereed proceedings of the 8th International Conference on Social Robotics, ICSR 2016, held in Kansas City, MO, USA, in November 2016. The 98 revised full papers presented were carefully reviewed and selected from 107 submissions.
The theme of the 2016 conference is Sociorobotics: Design and implementation of social behaviors of robots interacting with each other and humans. In addition to technical sessions, ICSR 2016 included three workshops: The Synthetic Method in Social Robotics (SMSR 2016), Social Robots: A Tool to Advance Interventions for Autism, and Using Social Robots to Improve the Quality of Life in the Elderly.



Learning Robot Navigation Behaviors by Demonstration Using a RRT Planner

This paper presents an approach for learning robot navigation behaviors from demonstration using Optimal Rapidly-exploring Random Trees (RRT*) as the main planner. A new learning algorithm combining Inverse Reinforcement Learning (IRL) and RRT* is developed in order to learn the RRT*'s cost function from demonstrations. This cost function can later be used in a regular RRT* to plan robot motions that incorporate the learned behaviors in different scenarios. Simulations show how the method is able to recover the behavior from the demonstrations.
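As an illustrative sketch of the idea (not the authors' implementation), the cost function an IRL step recovers is often a weighted sum of hand-crafted features; here the features, the people list, and the weight values are all assumptions standing in for learned quantities:

```python
import math

# Illustrative edge cost for an RRT* planner. The two features (path length
# and proximity to people) are hypothetical choices for this sketch.
def features(p, q, people):
    dist = math.hypot(q[0] - p[0], q[1] - p[1])      # path-length feature
    closeness = sum(1.0 / (1e-3 + math.hypot(q[0] - px, q[1] - py))
                    for px, py in people)            # proximity-to-people feature
    return [dist, closeness]

WEIGHTS = [1.0, 0.5]  # assumed values; IRL would recover these from demonstrations

def edge_cost(p, q, people):
    # RRT* evaluates this cost when extending the tree and rewiring nodes.
    return sum(w * f for w, f in zip(WEIGHTS, features(p, q, people)))
```

With no people nearby the cost reduces to plain path length, so the planner degrades gracefully to shortest-path behavior.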

Noé Pérez-Higueras, Fernando Caballero, Luis Merino

Adaptive Robot Assisted Therapy Using Interactive Reinforcement Learning

In this paper, we present an interactive learning and adaptation framework that facilitates the adaptation of an interactive agent to a new user. We argue that Interactive Reinforcement Learning methods can be utilized and integrated into the adaptation mechanism, enabling the agent to refine its learned policy in order to cope with different users. We illustrate our framework with a use case in the domain of Robot Assisted Therapy. We present the results of our learning and adaptation experiments against different simulated users, showing the motivation of our work and discussing future directions towards the definition and implementation of our proposed framework.

Konstantinos Tsiakas, Maria Dagioglou, Vangelis Karkaletsis, Fillia Makedon

Personalization Framework for Adaptive Robotic Feeding Assistance

The deployment of robots at home must involve robots with pre-defined skills whose behavior can be personalized by non-expert users. A framework to tackle this personalization is presented and applied to an automatic feeding task. The personalization involves the caregiver providing several examples of feeding using Learning-by-Demonstration, and a ProMP formalism to compute an overall trajectory and the variance along the path. Experiments show the validity of the approach in generating different feeding motions to adapt to the user's preferences, automatically extracting the relevant task parameters. The importance of the nature of the demonstrations is also assessed, and two training strategies are compared.
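As a minimal sketch of the underlying idea (an assumption for illustration, not the paper's code — the full ProMP formalism works with basis-function weight distributions rather than raw waypoints), the overall trajectory and its variance can be summarized from time-aligned demonstrations:

```python
# Time-aligned demonstrations: each inner list holds one demo's 1-D positions
# at the same sequence of timesteps.
def summarize_demos(demos):
    n, T = len(demos), len(demos[0])
    mean = [sum(d[t] for d in demos) / n for t in range(T)]
    var = [sum((d[t] - mean[t]) ** 2 for d in demos) / n for t in range(T)]
    return mean, var

# Two toy feeding trajectories; variance is low where the demos agree,
# which tells the robot where the task tolerates deviation.
mean, var = summarize_demos([[0.0, 1.0, 2.0], [0.0, 1.2, 1.8]])
```

The per-timestep variance is what lets a controller be stiff where all demonstrations agree and compliant where they differ.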

Gerard Canal, Guillem Alenyà, Carme Torras

A Framework for Modelling Local Human-Robot Interactions Based on Unsupervised Learning

This paper addresses the problem of teaching a robot interaction behaviors using the imitation learning paradigm. In particular, the approach makes use of Gaussian Mixture Models (GMMs) to model the physical interaction of the robot and the person when the robot is teleoperated or guided by an expert. The learned models are integrated into a sample-based planner, an RRT*, at two levels: as a cost function in order to plan trajectories considering behavior constraints, and as a configuration-space sampling bias to discard samples with low cost according to the behaviors. The algorithm is successfully tested in the laboratory using an actual robot and real trajectory examples provided by an expert.
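A toy illustration of using a learned density as a planning cost (the 1-D feature and the component parameters are assumptions, not the paper's model): states that are likely under a GMM fitted to expert demonstrations receive low cost, so the planner prefers them.

```python
import math

# Toy 1-D GMM over a feature (e.g. distance to the person), as if fitted
# offline from expert demonstrations; parameters are assumed values.
COMPONENTS = [  # (weight, mean, std)
    (0.6, 1.2, 0.3),
    (0.4, 2.5, 0.6),
]

def gmm_likelihood(x):
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, m, s in COMPONENTS)

def sample_cost(x):
    # High likelihood under the learned model -> low planning cost.
    return -math.log(gmm_likelihood(x) + 1e-12)
```

The same likelihood can also drive the sampling bias mentioned in the abstract: candidate samples whose cost is high are simply rejected.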

Rafael Ramón-Vigo, Noé Pérez-Higueras, Fernando Caballero, Luis Merino

Using Games to Learn Games: Game-Theory Representations as a Source for Guided Social Learning

This paper examines the use of game-theoretic representations as a means of representing and learning both interactive games and patterns of interaction in general between a human and a robot. The paper explores the means by which a robot could generate the structure of a game. In addition to offering the formal underpinnings necessary for reasoning about strategy, game theory affords a method for representing the interactive structure of a game computationally. We investigate the possibility of teaching a robot the structure of a game via instructions, question and answer sessions led by the robot, and a mix of instruction and question and answer. Our results demonstrate that the use of game-theoretic representations may offer new advantages in terms of guided social learning.

Alan Wagner

User Evaluation of an Interactive Learning Framework for Single-Arm and Dual-Arm Robots

Social robots are expected to adapt to their users and, like their human counterparts, learn from the interaction. In our previous work, we proposed an interactive learning framework that enables a user to intervene and modify a segment of the robot arm trajectory. The framework uses gesture teleoperation and reinforcement learning to learn new motions. In the current work, we compared the user experience with the proposed framework implemented on the single-arm and dual-arm Barrett’s 7-DOF WAM robots equipped with a Microsoft Kinect camera for user tracking and gesture recognition. User performance and workload were measured in a series of trials with two groups of 6 participants using two robot settings in different order for counterbalancing. The experimental results showed that, for the same task, users required less time and produced shorter robot trajectories with the single-arm robot than with the dual-arm robot. The results also showed that the users who performed the task with the single-arm robot first experienced considerably less workload in performing the task with the dual-arm robot while achieving a higher task success rate in a shorter time.

Aleksandar Jevtić, Adrià Colomé, Guillem Alenyà, Carme Torras

Formalizing Normative Robot Behavior

We address the task of modeling, generating and evaluating normative behavior for interactive robots. Normative behavior is essential for coherent deployment of these robots in human populated spaces. We develop a first unifying, intuitive and general formalism of the task that subsumes most previous approaches which have focused mainly on specific tasks. We present concrete and practical definitions of norms and show how to generate and evaluate behavior that adheres to such norms. We then demonstrate the formalism on a socially normative navigation task for service robots. Further, we discuss the key challenges in realizing such behaviors, and in particular, the role of perception and uncertainty.

Billy Okal, Kai O. Arras

Decision-Theoretic Human-Robot Interaction: Designing Reasonable and Rational Robot Behavior

Autonomous robots are moving out of research labs and factory cages into public spaces: people's homes, workplaces, and lives. A key design challenge in this migration is how to build autonomous robots that people want to use and can safely collaborate with in undertaking complex tasks. In order for people to work closely and productively with robots, robots must behave in ways that people can predict and anticipate. Robots choose their next action using the classical sense-think-act processing cycle. Roboticists design actions and action-choice mechanisms for robots. This design process determines robot behaviors and how well people are able to interact with the robot. Crafting how a robot will choose its next action is critical in designing social robots for interaction and collaboration. This paper identifies reasonableness and rationality, two key concepts well known in Choice Theory, that can be used to guide the robot design process so that the resulting robot behaviors are easier for humans to predict and, as a result, more enjoyable for humans to interact and collaborate with. Designers can use the notions of reasonableness and rationality to design action selection mechanisms that achieve better robot designs for human-robot interaction. We show how Choice Theory can be used to prove that specific robot behaviors are reasonable and/or rational, thus providing a formal, useful and powerful design guide for developing robot behaviors that people find more intuitive, predictable and fun, resulting in more reliable and safe human-robot interaction and collaboration.

Mary-Anne Williams

Physiologically Inspired Blinking Behavior for a Humanoid Robot

Blinking behavior is an important part of human nonverbal communication. It signals the psychological state of the social partner. In this study, we implemented different blinking behaviors for a humanoid robot with pronounced physical eyes. The blinking patterns implemented were either statistical or based on human physiological data. We investigated in an online study the influence of the different behaviors on the perception of the robot by human users with the help of the Godspeed questionnaire. Our results showed that, in the condition with human-like blinking behavior, the robot was perceived as being more intelligent compared to not blinking or statistical blinking. As we will argue, this finding represents the starting point for the design of a ‘holistic’ social robotic behavior.

Hagen Lehmann, Alessandro Roncone, Ugo Pattacini, Giorgio Metta

Infinite Personality Space for Non-fungible Robots

We outline a novel method for defining robot personality for the purposes of individual differentiation. Rather than a designer-developed set of behaviors in which a user's preferences are learned and inserted into pre-written scripts, our approach allows each robot to have and express a unique personality. This uniqueness reduces the fungibility of the robots, which may lead to increased user engagement.

Daniel H. Grollman

Investigating the Differences in Effects of the Persuasive Message’s Timing During Science Learning to Overcome the Cognitive Dissonance

Based on conceptual change theory, cognitive dissonance is known to be an important factor in conceptual change. Thus, those who design and build educational robots will need to understand how best to provide ways for robots to implicitly persuade students to change their bad attitudes when encountering a cognitively dissonant situation. Building on diverse literature, we examine how to make students change their bad attitude of avoiding difficult science exercises. More precisely, we intend to make students overcome cognitive dissonance by choosing to redo a difficult science exercise that they had previously answered incorrectly rather than jumping to another exercise. First, we introduce the concept of the gamma window. Then we investigate how different timings of the persuasive strategy affect how students overcome cognitive dissonance and avoid learned helplessness.

Khaoula Youssef, Jaap Ham, Michio Okada

Investigating the Effects of the Persuasive Source’s Social Agency Level and the Student’s Profile to Overcome the Cognitive Dissonance

Educational robots are regarded as beneficial tools in education due to their capability to improve learning motivation. Using cognitive dissonance as a teaching tool has been popular in science education too. A considerable number of researchers have argued that cognitive dissonance plays an important role in changing students' attitudes. This paper presents a design for an experiment in which we describe a procedure that induces cognitive dissonance. We propose to use an educational robot that helps the student overcome cognitive dissonance during science learning. We distinguish between students who base their decisions on thinking (tough-minded) and those who mostly base their decisions on feeling (relational). The main mission of the study was to implicitly lead students to develop a positive implicit attitude supporting redoing difficult scientific exercises in order to understand one's errors and to avoid learned helplessness. Based on the assumption that relational students are emotional (easily alienated), we investigate whether they are easier to persuade than tough-minded students. Also, we verify whether it is possible to consider an educational robot for such a mission. We compare different persuasive sources (a tablet showing a persuasive text, an animated robot, and a human) encouraging the student to strive for cognitive closure, to verify which of these sources leads to a better implicit attitude supporting overcoming one's resistance to assimilating difficult scientific exercises. Finally, we explore which of the persuasive sources better fits each of the two student profiles.

Khaoula Youssef, Jaap Ham, Michio Okada

Responsive Social Agents

Feedback-Sensitive Behavior Generation for Social Interactions

How can we generate appropriate behavior for social artificial agents? A common approach is to (1) establish with controlled experiments which action is most appropriate in which setting, and (2) select actions based on this knowledge and an estimate of the setting. This approach faces challenges, as it can be very hard to acquire and reason with all the required knowledge. Estimating the setting is challenging too, as many relevant aspects of the setting (e.g. the personality of the interactee) can be unobservable. We formally describe an alternative approach that can handle these challenges: responsiveness. This is the idea that a social agent can utilize the many feedback cues given in social interactions to continuously adapt its behavior to something more appropriate. We theoretically discuss the relative advantages and disadvantages of these two approaches, which allows for more explicit consideration of their application in social agents.

Jered Vroon, Gwenn Englebienne, Vanessa Evers

A Human-Robot Competition: Towards Evaluating Robots’ Reasoning Abilities for HRI

For effective Human-Robot Interaction (HRI), a robot should be aware of both the human and the human's environment. Perspective taking, effort analysis and affordance analysis are some of the core components of such human-centered reasoning. This paper is concerned with the need for benchmarking scenarios to assess the resultant intelligence when such reasoning blocks function together. Despite the various competitions involving robots, there is a lack of approaches that consider the human in their scenarios and reasoning processes, especially those targeting HRI. We present a game centered upon a human-robot competition, and motivate how our scenario, and the idea of a robot and a human competing, can serve as a benchmark test for both human-aware reasoning and inter-robot social intelligence. Based on subjective feedback from participants, we also provide some pointers and ingredients for evaluation metrics.

Amit Kumar Pandey, Lavindra de Silva, Rachid Alami

The Effects of Cognitive Biases in Long-Term Human-Robot Interactions: Case Studies Using Three Cognitive Biases on MARC the Humanoid Robot

The research presented in this paper is part of a wider study investigating the role cognitive bias plays in developing long-term companionship between a robot and a human. In this paper we discuss how cognitive biases such as misattribution, the empathy gap and the Dunning-Kruger effect can play a role in human-robot interaction with the aim of improving long-term companionship. One of the robots used in this study, called MARC (see Fig. 1), was given a series of biased behaviours such as forgetting participants' names, denying its own faults for failures, and failing to understand what a participant is saying. Such fallible behaviours were compared to a non-biased baseline behaviour. In the current paper, we present a comparison of two case studies using these biases and a non-biased algorithm. It is hoped that such humanlike fallible characteristics can help in developing a more natural and believable companionship between robots and humans. The results of the current experiments show that the participants initially warmed to the robot with the biased behaviours.

Mriganka Biswas, John Murray

Ethical Decision Making in Robots: Autonomy, Trust and Responsibility


Autonomous robots such as self-driving cars are already able to make decisions that have ethical consequences. As such machines make increasingly complex and important decisions, we will need to know that their decisions are trustworthy and ethically justified. Hence we will need them to be able to explain the reasons for these decisions: ethical decision-making requires that decisions be explainable with reasons. We argue that for people to trust autonomous robots, we need to know which ethical principles they are applying and that their application is deterministic and predictable. If a robot is a self-improving, self-learning type of robot whose choices and decisions are based on past experience, which decision it makes in any given situation may not be entirely predictable ahead of time or explainable after the fact. This combination of non-predictability and autonomy may confer a greater degree of responsibility on the machine, but it also makes the machine harder to trust.

Fahad Alaieri, André Vellino

How Facial Expressions and Small Talk May Influence Trust in a Robot

In this study, we address the level of trust that a human being displays during an interaction with a robot under different circumstances. The influencing factors considered are the facial expressions of a robot during the interactions, as well as its ability to make small talk. To examine these influences, we ran an experiment in which a robot tells a story to a participant and then asks for help in the form of donations. The experiment was implemented in four different scenarios in order to examine the two factors influencing trust. The results showed that the highest level of trust was gained when the robot started with small talk and displayed facial expressions congruent with the emotions expected from the story.

Raul Benites Paradeda, Mojgan Hashemian, Rafael Afonso Rodrigues, Ana Paiva

A Study on Trust in a Robotic Suitcase

This work presents a study on human-robot interaction between a prototype of a robotic suitcase – aBag – and the people using it. Importantly, for an autonomous robotic suitcase to be successful as a product, people need to trust it. Therefore, a study was performed in which participants used aBag (remotely operated using the Wizard of Oz technique) to carry their belongings. Two different conditions were created: (1) aBag follows the participant at a close range; (2) aBag follows the participant at a greater distance. We expected that participants would trust aBag more when it was following them at a close range, but interestingly participants seemed to trust it more when aBag was further away. Also, regardless of the conditions, the level of trust in aBag was significantly higher after the interaction than before, bringing positive results to the development of this kind of robotic apparatus.

Beatriz Quintino Ferreira, Kelly Karipidou, Filipe Rosa, Sofia Petisca, Patrícia Alves-Oliveira, Ana Paiva

How Much Should a Robot Trust the User Feedback? Analyzing the Impact of Verbal Answers in Active Learning

This paper assesses how the accuracy of users' answers influences the learning of a social robot when it is trained to recognize poses using Active Learning (AL). We study the performance of a robot trained to recognize the same poses actively and passively, and we show that, sometimes, the user might give simplistic answers that have a negative impact on the robot's learning. To reduce this effect, we provide a method based on lowering the trust in the user's responses. We conducted experiments with 24 users, indicating that our method maintains the benefits of AL even when the user's answers are not accurate. With this method the robot incorporates domain knowledge from the users while mitigating the impact of low-quality answers.
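The idea of lowering trust in user responses can be sketched as a discounted update of the robot's class estimate (the trust parameter, learning rate and update rule are assumptions for illustration, not the paper's method):

```python
# prior: the robot's current probability estimate for a pose class.
# A verbal "yes" is weighted by `trust` instead of being taken as ground truth,
# so a single wrong answer cannot swing the estimate to an extreme.
def update_estimate(prior, user_says_positive, trust=0.7, rate=0.1):
    observation = trust if user_says_positive else 1.0 - trust
    return (1.0 - rate) * prior + rate * observation
```

With trust below 1.0 the estimate converges toward `trust` rather than certainty, so repeated simplistic answers saturate instead of dominating the model.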

Victor Gonzalez-Pacheco, Maria Malfaz, Jose Carlos Castillo, Alvaro Castro-Gonzalez, Fernando Alonso-Martín, Miguel A. Salichs

Recommender Interfaces: The More Human-Like, the More Humans Like

Social robots, when used for providing information, are able to affect humans' trust and willingness to interact with them. In this work, we conducted an experimental study aimed at observing whether users' acceptance of recommendations, as well as their engagement in the interaction, is greater with a humanoid robot than with a common application on a mobile phone. We conducted an experimental study on movie recommendation in which the two interfaces provide the same content, but through different communication channels. In detail, the robot attends to the participants in a socially contingent fashion, signaled via head and gaze orientation, speech, eye color and gestures related to the genre of the recommended movie, while the app provides a textual and graphical movie presentation. Results show that while users perceive the interaction with the mobile application as more natural, the social robot is able to enhance users' satisfaction and achieves a good and stable acceptance rate even when facing participants with various degrees of English proficiency.

Mariacarla Staffa, Silvia Rossi

Designing a Social Robot to Assist in Medication Sorting

Being able to sort one’s own medications is a critical self-management task for people with Parkinson’s disease. We analyzed the medication sorting task and gathered design considerations. Then we developed an autonomous robot to assist in the task. We used guidelines provided by occupational therapists to determine the level of assistance provided by the robot. Finally, an evaluation of the effectiveness of the robot with student evaluators determined that people trusted the robot to reliably assist and that people had a positive emotional experience completing the task.

Jason R. Wilson, Linda Tickle-Degnen, Matthias Scheutz

Other-Oriented Robot Deception: How Can a Robot’s Deceptive Feedback Help Humans in HRI?

Deception is a common and essential behavior of social agents. By increasing the use of social robots, the need for robot deception is also growing to achieve more socially intelligent robots. It is a goal that robot deception should be used to benefit humankind. We define this type of benevolent deceptive behavior as other-oriented robot deception. In this paper, we explore an appropriate context in which a robot can potentially use other-oriented deceptive behaviors in a beneficial way. Finally, we conduct a formal human-robot interaction study with elderly persons and demonstrate that using other-oriented robot deception in a motor-cognition dual task can benefit deceived human partners. We also discuss the ethical implications of robot deception, which is essential for advancing research on this topic.

Jaeeun Shim, Ronald C. Arkin

Ethically-Guided Emotional Responses for Social Robots: Should I Be Angry?

Emotions play a critical role in human-robot interaction. Human-robot interaction in social contexts will be more effective if robots can understand human emotions and express (display) emotions accordingly as a means to communicate their own internal state. In this paper we present a novel computational model of robot emotion generation based on appraisal theory and guided by ethical judgement. There have been recent advances in developing emotion for robots. However, despite the extensive research on robot emotion, it is difficult to say whether a particular robot is exhibiting appropriate emotions, or even to show that it can empathize with humans by exhibiting emotions similar to those humans would show in the same situation. A key question is: to what extent should a robot direct anger toward a young child or an elderly person for an act for which it would show anger toward an ordinary adult in order to signal danger or stupidity? Recognizing the need for an ethically guided approach to emotion expression in social robots as they interact with people, we present a novel Ethical Emotion Generation System (EEGS) for the expression of the most acceptable emotions in social robots.

Suman Ojha, Mary-Anne Williams

Interactive Navigation of Mobile Robots Based on Human’s Emotion

In this paper, an interactive navigation approach based on human emotion is proposed for mobile robot obstacle avoidance. Assuming that human obstacle-avoidance behavior is related to emotion, a variable artificial potential field is implemented to generate different obstacle-avoidance behaviors when the robot encounters humans with attractive or repulsive emotions. A virtual emotional barrier is added outside the physical barrier for human obstacles with repulsive emotions, so that the robot tends to leave more personal space for them. Simulations in MATLAB and experiments on a ROS-based TurtleBot 2 robot have been conducted to verify the effectiveness of the proposed scheme.
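A minimal sketch of the variable potential field idea (the gain, radii and barrier offset are assumed values, not the paper's parameters): humans with repulsive emotions get an extra emotional-barrier radius, so the repulsive force acts farther out and the robot leaves them more space.

```python
import math

# Classic repulsive artificial potential field with an emotion-dependent
# radius of influence. Values are illustrative only.
def repulsive_force(robot, human, emotion, gain=1.0, physical_radius=1.0):
    barrier = physical_radius + (0.8 if emotion == "repulsive" else 0.0)
    dx, dy = robot[0] - human[0], robot[1] - human[1]
    d = math.hypot(dx, dy)
    if d >= barrier or d == 0.0:
        return (0.0, 0.0)  # outside the field of influence (or degenerate)
    # Standard repulsive-field magnitude, growing as d approaches zero.
    mag = gain * (1.0 / d - 1.0 / barrier) / d ** 2
    return (mag * dx / d, mag * dy / d)  # force pushes the robot away
```

At the same robot-human distance, a repulsive-emotion human already exerts a push while an attractive-emotion human exerts none, which is exactly the extra personal space the abstract describes.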

Rui Jiang, Shuzhi Sam Ge, Nagacharan Teja Tangirala, Tong Heng Lee

Social Human-Robot Interaction: A New Cognitive and Affective Interaction-Oriented Architecture

In this paper, we present CAIO, a Cognitive and Affective Interaction-Oriented architecture for social human-robot interactions (HRI), allowing robots to reason on mental states (including emotions), and to act physically, emotionally and verbally. We also present a short scenario and implementation on a Nao robot.

Carole Adam, Wafa Johal, Damien Pellier, Humbert Fiorino, Sylvie Pesty

MuDERI: Multimodal Database for Emotion Recognition Among Intellectually Disabled Individuals

Empathic interaction with social robots is a crucial requirement for delivering effective cognitive stimulation to individuals with Intellectual Disability (ID), and it has been hampered by the absence of any suitable database. Project REHABIBOTICS presents the first multimodal database of individuals with ID, recorded in near real-world settings for the analysis of human affective states. MuDERI is an annotated multimodal database of audiovisual recordings, RGB-D videos and physiological signals of 12 participants in actual settings, recorded as participants were elicited using personalized real-world objects and/or activities. The database is publicly available.

Jainendra Shukla, Miguel Barreda-Ángeles, Joan Oliver, Domènec Puig

“How Is His/Her Mood”: A Question That a Companion Robot May Be Able to Answer

Mood, as one of the human affects, plays a vital role in human-human interaction, especially due to its long-lasting effects. In this paper, we introduce an approach in which a companion robot capable of mood detection is employed to detect and report the mood state of a person to his/her partner, preparing the partner for upcoming encounters. Such a companion robot may be used at home or at work and could improve the interaction experience for couples, partners, family members, etc. We have implemented the proposed approach using a vision-based method for mood detection. The approach has been tested in an experiment and a follow-up study. Descriptive and statistical analyses were performed on the gathered data. The results show that this type of information can have a positive impact on partners' interaction.

Mojgan Hashemian, Hadi Moradi, Maryam S. Mirian

Emotion in Robots Using Convolutional Neural Networks

In recent years, emotion recognition has been one of the hot topics in computer science, especially in Human-Robot Interaction (HRI) and Robot-Robot Interaction (RRI). With emotion recognition and expression, robots can recognize human behavior and emotion better and can communicate in a more human way. There is some research on unimodal emotion systems for robots, but because human emotions in the real world are multimodal, multimodal systems can perform recognition better. Beyond this multimodal character of human emotion, a flexible and reliable learning method can help robots recognize emotions better and makes interaction more beneficial. Deep learning has shown its strength in this area, and our model is a multimodal method that uses three main modalities (facial expression, speech and gesture) for emotion recognition and expression in robots. We implemented the model for the six basic emotion states; there are other emotional states, such as mixed emotions, which are very difficult for robots to pick out. Our experiments show that a significant improvement in recognition accuracy is accomplished when we use a Convolutional Neural Network (CNN) and a multimodal information system: from 91% reported in previous research [27] to 98.8%.
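A small late-fusion sketch of combining per-modality emotion scores (the emotion list, weights and input distributions are assumptions for illustration; the paper's model uses CNNs over the raw modalities rather than a fixed weighted average):

```python
# Six basic emotion classes; each modality classifier is assumed to output a
# probability distribution over them.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def fuse(face, speech, gesture, weights=(0.5, 0.3, 0.2)):
    # Weighted average of the three modality distributions, renormalized.
    fused = [sum(w * p[i] for w, p in zip(weights, (face, speech, gesture)))
             for i in range(len(EMOTIONS))]
    total = sum(fused)
    probs = [f / total for f in fused]
    return EMOTIONS[probs.index(max(probs))], probs
```

This kind of fusion lets one confident modality (e.g. a clear facial expression) dominate when the others are uncertain, which is the practical benefit of multimodality the abstract points to.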

Mehdi Ghayoumi, Arvind K. Bansal

Rhythmic Timing in Playful Human-Robot Social Motor Coordination

Future robots for everyday human environments will need to be capable of physical collaboration and play. We previously designed a robotic system for constant-tempo human-robot hand-clapping games. Since rhythmic timing is crucial in such interactions, we sought to endow our robot with the ability to speed up and slow down to match the human partner’s changing tempo. We tackled this goal by observing human-human entrainment, modeling human synchronization behaviors, and piloting three adaptive tempo behaviors on a Rethink Robotics Baxter Research Robot. The pilot study indicated that a fading memory difference learning timing model may perform best in future human-robot gameplay. We will use the findings of this study to improve our hand-clapping robotic system.

Naomi T. Fitter, Dylan T. Hawkes, Katherine J. Kuchenbecker

The Effects of an Impolite vs. a Polite Robot Playing Rock-Paper-Scissors

There is a growing interest in the Human-Robot Interaction community towards studying the effect of the attitude of a social robot during the interaction with users. Similar to human-human interaction, variations in the robot’s attitude may cause substantial differences in the perception of the robot. In this work, we present a preliminary study to assess the effects of the robot’s verbal attitude while playing rock-paper-scissors with several subjects. During the game the robot was programmed to behave either in a polite or impolite manner by changing the content of the utterances. In the experiments, 12 participants played with the robot and completed a questionnaire to evaluate their impressions. The results showed that a polite robot is perceived as more likable and more engaging than a rude, defiant robot.

Álvaro Castro-González, José Carlos Castillo, Fernando Alonso-Martín, Olmer V. Olortegui-Ortega, Victor González-Pacheco, María Malfaz, Miguel A. Salichs

Qualitative User Reactions to a Hand-Clapping Humanoid Robot

Playful interactions serve an important role in human development and interpersonal bonding. Accordingly, we believe future robots may need to know how to play games to connect with people in meaningful ways. To begin exploring how users perceive playful human-robot interaction, we conducted a study with 20 participants. Each user played simple hand-clapping games with the Rethink Robotics Baxter Research Robot during a one-hour-long session. Qualitative data collected from surveys and experiment recordings resoundingly demonstrate that this interaction is viable: all users successfully completed the experiment, all users enjoyed at least one game, and nineteen of the 20 users identified at least one potential personal use for Baxter. Hand-clapping tempo was highly salient to users, and human-like robot errors were more widely accepted than mechanical errors. These findings can motivate and guide roboticists who want to design social-physical human-robot interactions.

Naomi T. Fitter, Katherine J. Kuchenbecker

Nonlinear Controller of Arachnid Mechanism Based on Theo Jansen

This paper presents a new motion controller for an arachnid mechanism based on Theo Jansen that is capable of performing path-following tasks. The proposed controller has the advantage of simultaneously steering the arachnid robot toward the proposed path by the shortest route and limiting its velocity. Furthermore, the kinematic modeling of the arachnid mechanism is presented, in which the center of mass is considered to be located at the center of the robot's leg axes. In addition, stability is proven through Lyapunov's method. To validate the proposed control algorithm, experimental results are included and discussed.

Víctor H. Andaluz, David Pérez, Darwin Sáchez, Cristina Bucay, Carlos Sáchez, Vicente Morales, David Rivas

Designing and Assessing Expressive Open-Source Faces for the Baxter Robot

Facial expressions of both humans and robots are known to communicate important social cues to human observers. Nevertheless, faces for use on the flat panel display screens of physical multi-degree-of-freedom robots have not been exhaustively studied. While surveying owners of the Rethink Robotics Baxter Research Robot to establish their interest, we designed a set of 49 Baxter faces, including seven colors (red, orange, yellow, green, blue, purple, and gray) and seven expressions (afraid, angry, disgusted, happy, neutral, sad, and surprised). Online study participants (N = 568) drawn equally from two countries (US and India) then rated photographs of a physical Baxter robot displaying randomized subsets of the faces. Face color, facial expression, and onlooker country of origin all significantly affected the perceived pleasantness and energeticness of the robot, as well as the onlooker’s feelings of safety and pleasedness, with facial expression causing the largest effects. The designed faces are available to researchers online.

Naomi T. Fitter, Katherine J. Kuchenbecker

Spontaneous Human-Robot Emotional Interaction Through Facial Expressions

One of the main issues in the field of social and cognitive robotics is the robot’s ability to recognize emotional states and emotional interaction between robots and humans. Through effective emotional interaction, robots will be able to perform many tasks in human society. In this research, we have developed a robotic platform and a vision system to recognize the emotional state of the user through their facial expressions, which leads to a more realistic human-robot interaction (HRI). First, a number of features are extracted according to points detected by a vision system from the face of the user. Then, the emotional state of the user is analyzed with the help of these features. For the decision making unit, a state machine is designed that utilizes the results obtained from the emotional state analysis to generate the robot’s response. Finally, a fuzzy algorithm is used to improve the quality of emotional interaction and the results are implemented on a commercial humanoid robot platform which has the ability of producing facial expressions.

Ali Meghdari, Minoo Alemi, Ali Ghorbandaei Pour, Alireza Taheri

Functional and Non-functional Expressive Dimensions: Classification of the Expressiveness of Humanoid Robots

In Human-Robot Interaction (HRI), an important quantity of work has been done to investigate the reaction of people toward expressive robots. However, the large variability of available expression modalities (e.g., gaze, gestures, speech modulation) can make comparison between results difficult. We believe that developing a common taxonomy to describe these modalities would contribute to the standardization of HRI experiments. This paper proposes the first version of a classification system based on an analysis of humanoid robots commonly seen and used in HRI studies. Features from the face of robots are discussed in terms of functional and non-functional dimensions, and a short-hand notation is developed to describe these features.

François Ferland, Adriana Tapus

Facing Emotional Reactions Towards a Robot – An Experimental Study Using FACS

As robots are starting to enter not only our professional lives but also our domestic lives, new questions arise: What about the emotional impact of getting into contact with this new ‘species’? Can robots elicit emotional reactions from humans? Assuming that humans may react in the same way towards robots as they do towards humans (Media Equation), it is important to investigate the factors influencing this emotional experience. But systematic research in this area remains scarce. An exception is the study conducted by Rosenthal-von der Pütten, Krämer, Hoffmann, Sobieraj, and Eimler in 2013 addressing the question of emotional reactions towards a robot experimentally. Taking the study by Rosenthal-von der Pütten et al. as a starting point and following their suggested multi-method approach as well as Scherer’s assumption about the five components of emotions, we added the motor expression component to further investigate the multilevel phenomenon of emotional reactions towards robots. We used the Facial Action Coding System (FACS) to analyze the facial expressions of participants viewing videos of the robot dinosaur Pleo in a friendly interaction or being tortured. Participants showed Action Units associated with intrinsic unpleasantness more frequently in the torture-video-condition than during the reception of the normal video. Participants also reported more negative feelings after the torture video than the normal video. The findings indicate the importance of investigating emotional reactions towards robots for a social robot to be an ideal companion. Furthermore, this paper shows that the application of FACS to research in human-robot interaction is a fruitful and insightful enhancement to commonly used self-report measures. We conclude this paper with some recommendations for improving the design of social robots.

Isabelle M. Menne, Christin Schnellbacher, Frank Schwab

Head and Face Design for a New Humanoid Service Robot

The design process of robotic faces as focal points of human-robot interactions plays a very important role in the construction of successful robot companions. The face of a robot represents an attentional anchor for the human user, and, if designed appropriately, enables a comfortable and intuitive communication. In this paper, we describe the user centered design process of the face display of a newly constructed humanoid robot companion called R1. We will introduce a novel solution for robotic face displays, illustrate our design decisions based on the results of an online survey, and present our final face design. We will discuss the lessons learned during the design process and argue that our solution and design will enable R1 to be a successful artificial interlocutor in different human-robot interaction scenarios.

Hagen Lehmann, Anand Vazhapilli Sureshbabu, Alberto Parmiggiani, Giorgio Metta

The Influence of Robot Appearance and Interactive Ability in HRI: A Cross-Cultural Study

It has been shown that human perception of robots changes after a first interaction. It is not clear, however, to what extent the robot’s appearance and interactive abilities influence such changes in perception. In this paper, participants’ perceptions of two robots with different appearance and interactive modalities are compared before and after a short interaction with the robots. Data from Japanese and Australian participants are evaluated and compared. Experimental results show significant differences in perception depending on the robot type and the time of interaction. As a result of cultural background, perception changes were observed only for Japanese participants on isolated key concepts.

Kerstin Sophie Haring, David Silvera-Tawil, Katsumi Watanabe, Mari Velonaki

Congruency Matters - How Ambiguous Gender Cues Increase a Robot’s Uncanniness

Most research on the uncanny valley effect is concerned with the influence of human-likeness and realism as a trigger of an uncanny feeling in humans. There has been a lack of investigation on the effect of other dimensions, for example, gender. Back-projected robotic heads allow us to alter visual cues in the appearance of the robot in order to investigate how the perception of it changes. In this paper, we study the influence of gender on the perceived uncanniness. We conducted an experiment with 48 participants in which we used different modalities of interaction to change the strength of the gender cues in the robot. Results show that incongruence in the gender cues of the robot, and not its specific gender, influences the uncanniness of the back-projected robotic head. This finding has potential implications for both the perceptual mismatch and categorization ambiguity theory as a general explanation of the uncanny valley effect.

Maike Paetzel, Christopher Peters, Ingela Nyström, Ginevra Castellano

Collaborative Visual Object Tracking via Hierarchical Structure

This paper focuses on the optimization and improvement of visual object tracking algorithms. Building on previously used tracking algorithms, we approach the problem using L2-regularized least squares to solve for the sparse representation matrix of the object appearance model and propose an efficient collaborative algorithm to track the object. A hierarchical framework and a selective multi-memory online dictionary update are developed to increase the speed of the algorithm and improve its robustness by incorporating both the current and historical appearance into the template. In addition, key-point feature matching is newly proposed to further enhance the accuracy of the tracking algorithm by calculating an optical-flow-based similarity degree. Finally, the proposed algorithm is verified on comprehensive image sequence datasets to demonstrate its effectiveness in coping with various tracking challenges, such as object deformations, illumination changes and partial occlusions.

Fangwen Tu, Shuzhi Sam Ge, Henry Pratama Suryadi, Yazhe Tang, Chang Chieh Hang
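As an illustration of the core computation this abstract mentions, the L2-regularized least squares step can be solved in closed form (ridge regression). The sketch below is not the paper's implementation; the template dictionary, candidate patch, and regularization weight are placeholders.

```python
import numpy as np

def represent(templates, candidate, lam=0.01):
    """Solve the L2-regularized least squares (ridge) problem
    min_w ||T w - y||^2 + lam ||w||^2 in closed form."""
    T, y = templates, candidate
    k = T.shape[1]
    w = np.linalg.solve(T.T @ T + lam * np.eye(k), T.T @ y)
    residual = float(np.linalg.norm(T @ w - y))
    return w, residual

# Toy check: a candidate equal to the first template should be
# reconstructed almost exactly by that template alone.
rng = np.random.default_rng(0)
T = rng.standard_normal((100, 5))   # 5 appearance templates, 100-dim
y = T[:, 0].copy()
w, r = represent(T, y)
```

In the paper's setting, the columns of T would hold appearance templates from the multi-memory dictionary and y a candidate region; a small residual marks a good tracking candidate.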

Data Augmentation for Object Recognition of Dynamic Learning Robot

The training of deep learning networks for robot object recognition requires a large database of training images for satisfactory performance. The term “dynamic learning” in this paper refers to the ability of a robot to learn new features under offline conditions by observing its surrounding objects. A training framework for robots to achieve object recognition with satisfactory performance under offline training conditions is proposed. A coarse but fast method of object saliency detection is developed to facilitate raw image collection. Additionally, a training scheme referred to as a Dynamic Artificial Database (DAD) is proposed to tackle the problem of overfitting when training neural networks without validation data.

Jiunn Yuan Chan, Shuzhi Sam Ge, Chen Wang, Mingming Li
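The abstract does not specify which augmentations the Dynamic Artificial Database uses, but a minimal sketch of the general data-augmentation idea, generating perturbed copies of a raw image to enlarge a training set, might look as follows (flips, crops, and brightness jitter are assumptions, not the paper's scheme).

```python
import numpy as np

def augment(image, rng):
    """Return a randomly perturbed copy of a grayscale training image:
    horizontal flip, random crop with edge padding, brightness jitter."""
    img = image.astype(np.float32)
    if rng.random() < 0.5:                              # horizontal flip
        img = img[:, ::-1]
    pad = np.pad(img, ((4, 4), (4, 4)), mode="edge")    # 4-px border
    x, y = rng.integers(0, 9, size=2)                   # random crop offset
    img = pad[y:y + img.shape[0], x:x + img.shape[1]]
    img = np.clip(img * rng.uniform(0.8, 1.2), 0, 255)  # brightness jitter
    return img

rng = np.random.default_rng(1)
base = np.arange(64, dtype=np.float32).reshape(8, 8)
batch = np.stack([augment(base, rng) for _ in range(16)])
```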

Rotational Coordinate Transformation for Visual-Inertial Sensor Fusion

Visual and inertial sensors are used collaboratively in many applications because of their complementary properties. A key problem in such sensor fusion is the relative coordinate transformation. This paper presents a quaternion-based method to estimate the relative rotation between visual and inertial sensors. The rotation between a camera and an inertial measurement unit (IMU) is represented by quaternions, which are measured separately so that each sensor can be optimized individually. Relative quaternions are used so that the global reference frame is not required to be known. The accuracy of the coordinate transformation was evaluated by comparison with a ground-truth tracking system. The experimental analysis demonstrates the effectiveness of the proposed method in terms of accuracy and robustness.

Hongsheng He, Yan Li, Jindong Tan
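One common quaternion-based formulation of this calibration problem (not necessarily the paper's exact method) writes the hand-eye-style constraint q_imu ⊗ q = q ⊗ q_cam for each paired relative motion and recovers the relative quaternion q as the null vector of a stacked linear system.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def lmat(q):                      # L(q) p == q * p
    w, x, y, z = q
    return np.array([[w, -x, -y, -z], [x,  w, -z,  y],
                     [y,  z,  w, -x], [z, -y,  x,  w]])

def rmat(q):                      # R(q) p == p * q
    w, x, y, z = q
    return np.array([[w, -x, -y, -z], [x,  w,  z, -y],
                     [y, -z,  w,  x], [z,  y, -x,  w]])

def relative_rotation(cam_qs, imu_qs):
    """Least-squares estimate of q satisfying q_imu * q = q * q_cam
    for every paired relative motion: stack (L(q_imu) - R(q_cam)),
    take the right singular vector of the smallest singular value."""
    M = np.vstack([lmat(qi) - rmat(qc) for qc, qi in zip(cam_qs, imu_qs)])
    _, _, vt = np.linalg.svd(M)
    q = vt[-1]
    return q / np.linalg.norm(q)

# Synthetic check: fabricate a ground-truth relative rotation and
# matching camera/IMU motion pairs, then recover it.
rng = np.random.default_rng(2)
def unit(v): return v / np.linalg.norm(v)
conj = np.array([1.0, -1.0, -1.0, -1.0])
q_true = unit(rng.standard_normal(4))
cam = [unit(rng.standard_normal(4)) for _ in range(10)]
imu = [qmul(qmul(q_true, qc), q_true * conj) for qc in cam]
q_est = relative_rotation(cam, imu)
```

Since a quaternion and its negation encode the same rotation, q_est matches q_true only up to sign.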

Developing an Interactive Gaze Algorithm for Android Robots

We implemented a gaze algorithm for interacting with multiple observers as a precursor to a multi-party conversation system. By acknowledging multiple participants in a natural manner, we seek to set the stage for smoother and more effective human-robot conversations featuring proper turn-taking using attention shifts. The android robot EveR-4, developed at the Korea Institute of Industrial Technology for human-robot interaction (HRI) applications, was used. The robot wore a dress and makeup so that interacting with it would resemble interacting with a real woman as closely as possible. Using an RGB-D camera, people's faces and positions were tracked so that the robot's attention could be distributed to everyone appropriately. An importance value was assigned to each detected face based on the length of time it was detected and its distance from the robot. The robot also made facial expressions at detected people to increase the observers' sense of interaction. We observed people's reactions to our implementation at an exhibition and noted how we can improve the overall system to be more life-like and realistic.

Brian D. Cruz, Byeong-Kyu Ahn, Hyun-Jun Hyung, Dong-Wook Lee
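A minimal sketch of the importance scoring this abstract describes, assuming a simple weighted sum of tracking duration and inverse distance (the weights and field names are illustrative, not from the paper):

```python
def importance(face, now, w_time=1.0, w_dist=2.0):
    """Score a tracked face: faces seen for longer and standing closer
    receive more of the robot's attention. Weights are placeholders."""
    duration = now - face["first_seen"]          # seconds tracked so far
    proximity = 1.0 / max(face["distance_m"], 0.3)  # clamp very close faces
    return w_time * duration + w_dist * proximity

# Two observers: one tracked for 10 s at 1.2 m, one for 2 s at 0.6 m.
faces = [
    {"id": 1, "first_seen": 0.0, "distance_m": 1.2},
    {"id": 2, "first_seen": 8.0, "distance_m": 0.6},
]
now = 10.0
target = max(faces, key=lambda f: importance(f, now))
```

With these weights the long-tracked observer wins; tuning w_dist upward would instead favor the closer newcomer.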

Recovery Behavior of Artificial Skin Materials After Object Contact

As social robots and lifelike prosthetics get into closer contact with humans, understanding the mechanical behavior of the embedding skin materials for prosthetic and social robotic fingertips is of great importance. The time-dependent behavior can alter the performance of the embedded sensors. This paper investigates two types of embedding materials (i.e. silicone and polyurethane) for their recovery after contact with a surface. A visco-hyperelastic finite element model of a fingertip is described. This model allows the visualization of the materials’ responses after a creep test. This analysis was performed to investigate the recovery time of the materials after contact was made. Simulation results show the differences between the two materials. The results are useful for materials selection and to further investigate other design alternatives and to minimize the effects of the time delay.

John-John Cabibihan, Mohammad Khaleel Abu Basha, Kishor Sadasivuni

One-Shot Evaluation of the Control Interface of a Robotic Arm by Non-experts

In this paper we study the relation between performance of use and user preferences for a robotic arm control interface. We are interested in the preferences of non-experts after a one-shot evaluation of the interfaces on a test task. We also probe the possible relation between user performance and individual factors. After a focus group study, we chose to compare the robotic arm's joystick and a graphical user interface. We then studied user performance and subjective evaluations of the interfaces during an experiment with the robot arm Jaco and N = 23 healthy adults. Our preliminary results show that a user's preference for a particular interface does not seem to depend on their performance in using it: for example, many users expressed a preference for the joystick while they performed better with the graphical interface. Contrary to our expectations, this result does not seem to relate to the individual factors that we evaluated, namely desire for control and negative attitude towards robots.

Sebastian Marichal, Adrien Malaisé, Valerio Modugno, Oriane Dermy, François Charpillet, Serena Ivaldi

A Novel Parallel Pinching and Self-adaptive Grasping Robotic Hand

This paper introduces a novel underactuated hand, the PASA-GB hand, which has a hybrid grasping mode combining parallel pinching (PA) grasp and self-adaptive enveloping (SA) grasp. To evaluate grasping performance, the potential energy method is used to analyze the stability of the PASA-GB hand. The calculation of the force distribution shows the influence of the size and position of objects and provides a method to optimize the force distribution. Experimental results verify the wide adaptability and high practicability of the PASA-GB hand.

Dayao Liang, Wenzeng Zhang

PCSS Hand: An Underactuated Robotic Hand with a Novel Parallel-Coupled Switchable Self-adaptive Grasp

This paper proposes a novel concept of underactuated grasping, called the PCSS grasping mode. This mode has switchable hybrid grasping functions, parallel self-adaptive grasping (PASA) and coupled self-adaptive grasping (COSA), enabling it to grasp a larger range of objects with different shapes and dimensions than traditional PASA and COSA hands. The PCSS grasping can execute different grasping modes: parallel pinching (PA); coupled hooking (CO); self-adaptive encompassing (SA); parallel and self-adaptive hybrid grasping (PASA); and coupled and self-adaptive hybrid grasping (COSA). A PCSS Hand is developed with three PCSS fingers and 6 degrees of freedom. Simulation analysis shows the high stability and versatility of the PCSS Hand.

Shuang Song, Wenzeng Zhang

JLST Hand: A Novel Powerful Self-adaptive Underactuated Hand with Joint-Locking and Spring-Tendon Mechanisms

This paper proposes a novel spring-tendon self-adaptive underactuated hand, called the JLST hand, which can lock multiple joints simultaneously during grasping. The JLST hand is designed with three JLST fingers and 6 degrees of freedom (DOFs). It grasps more stably and with a larger grasping force than a normal spring-tendon robot hand because of its simultaneous multi-joint locking mechanisms. Like a conventional spring-tendon mechanism, each JLST finger uses spring force to grasp objects and a single motor pulling a tendon to release them; in addition, the same motor locks the joints. When the motor turns forward, the finger releases the object; when the motor turns backward, it releases the tendon so that the finger grasps, and as the motor keeps turning backward, a blocker locks the joints. Calculation and simulation results show that the JLST hand grasps with high stability and is more powerful than a normal spring-tendon hand.

Jiuya Song, Wenzeng Zhang

Path Analysis for the Halo Effect of Touch Sensations of Robots on Their Personality Impressions

Physical human–robot interaction plays an important role in social robotics, and touch is one of the key factors that influence a human's impression of robots. However, very few studies have explored different touch conditions, and therefore few systematic results have been obtained. As a first step toward addressing this issue, we studied the types of impressions of robot personality that humans may experience when they touch a soft part of a robot. In the study, the left forearm of a child-like android robot, “Affetto,” was exposed; this forearm was made of silicone rubber and could be replaced with one of three other forearms providing different sensations of hardness upon touching. Participants were asked to touch the robot's forearm and to fill out evaluation questionnaires on 19 touch sensations and 46 personality impressions under each of four conditions with different forearms. Four impression factors for touch sensations and three for personality impressions were extracted from the evaluation scores by factor analysis. The causal relationships between these factors were analyzed by path analysis. Several significant causal relationships were found, for example, between preferable touch sensations and likable personality impressions. The results will help designers shape robots' personality impressions more systematically through the design of touch sensations.

Yuki Yamashita, Hisashi Ishihara, Takashi Ikeda, Minoru Asada

A Human-Robot Speech Interface for an Autonomous Marine Teammate

There is current interest in creating human-robot teams in which a human operator in their own conveyance teams up with several autonomous teammates. In this work we focus on human-robot teamwork in the marine environment, as it is challenging and can serve as a surrogate for other environments. Marine elements such as wind speed, air temperature, water, obstacles, and ambient noise can have drastic implications for team performance. Our goal is to create a human-robot system that can join many humans and many robots together to cooperatively perform tasks in such challenging environments. In this paper, we present our human-robot speech dialog system and compare participant responses to having human versus autonomous vehicle teammates escorting and holding station at locations of interest.

Michael Novitzky, Hugh R. R. Dougherty, Michael R. Benjamin

Annotation of Utterances for Conversational Nonverbal Behaviors

Nonverbal behaviors play an important role in communication for both humans and social robots. However, adding contextually appropriate animations by hand is time consuming and does not scale well. Previous researchers have developed automated systems for inserting animations based on utterance text, yet these systems lack human understanding of social context and are still being improved. This work proposes a middle ground where untrained human workers label semantic information, which is input to an automatic system to produce appropriate gestures. To test this approach, untrained workers from Mechanical Turk labeled semantic information, specifically emotion and emphasis, for each utterance, which was used to automatically add animations. Videos of a robot performing the animated dialogue were rated by a second set of participants. Results showed untrained workers are capable of providing reasonable labeling of semantic information and that emotional expressions derived from the labels were rated more highly than control videos. More study is needed to determine the effects of emphasis labels.

Allison Funkhouser, Reid Simmons

Identifying Engagement from Joint Kinematics Data for Robot Therapy Prompt Interventions for Children with Autism Spectrum Disorder

Prompts are used by therapists to help children with autism spectrum disorder learn and acquire desirable skills and behaviors. As social robots are increasingly translated into similar therapy settings, a critical part of ensuring the effectiveness of these robot therapy systems is providing them with the ability to detect the engagement/disengagement states of the child in order to deliver prompts at the right time. In this paper, we examine the various features related to body movement that can be used to define engagement levels and develop a model using these features for identifying engagement/disengagement states. The model was validated in a pilot study with child participants. Results show that our engagement model can achieve a recognition rate of 97 %.

Bi Ge, Hae Won Park, Ayanna M. Howard
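A toy sketch of the pipeline this abstract outlines, extracting simple kinematic features from joint trajectories and then classifying engagement, is shown below; the features and the nearest-centroid classifier are illustrative assumptions, not the paper's model.

```python
import numpy as np

def movement_features(joints):
    """joints: (T, J, 3) array of J joint positions over T frames.
    Returns toy kinematic features: mean movement speed and its
    variance (a rough 'fidgeting' measure)."""
    vel = np.diff(joints, axis=0)                       # frame-to-frame motion
    speed = np.linalg.norm(vel, axis=2).mean(axis=1)    # per-frame mean speed
    return np.array([speed.mean(), speed.var()])

def fit_centroids(windows, labels):
    """One feature centroid per class (0 = disengaged, 1 = engaged)."""
    feats = np.array([movement_features(w) for w in windows])
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(window, centroids):
    f = movement_features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Toy demo: calm, small motions stand in for "engaged" windows and
# large erratic motions for "disengaged" ones.
rng = np.random.default_rng(3)
def make(scale):
    return np.cumsum(rng.standard_normal((30, 5, 3)) * scale, axis=0)
windows = [make(0.01) for _ in range(5)] + [make(0.5) for _ in range(5)]
labels = np.array([1] * 5 + [0] * 5)
centroids = fit_centroids(windows, labels)
pred = classify(make(0.01), centroids)
```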

Social Robots and Teaching Music to Autistic Children: Myth or Reality?

Music-based therapy is an appropriate tool to facilitate multisystem development in children with autism. The focus of this study is to implement a systematic and hierarchical music-based scenario in order to teach the fundamentals of music to children with autism through a social robot. To this end, we programmed a NAO robot to play the xylophone and the drum. After running our designed robot-assisted clinical interventions with three high-functioning and one low-functioning autistic children, fairly promising results were observed. The high-functioning participants learned how to play musical notes, short sentences, and simple rhythms. Moreover, the program had a positive effect on the autism severity, fine motor skills, and communication skills of the autistic subjects. These initial results indicate promising potential for involving social robots in music-based autism therapy.

Alireza Taheri, Ali Meghdari, Minoo Alemi, Hamidreza Pouretemad, Pegah Poorgoldooz, Maryam Roohbakhsh

Development of an ABA Autism Intervention Delivered by a Humanoid Robot

Applied Behavioral Analysis (ABA) techniques are widely used and accepted by the Autism research community as an effective Autism therapy method. ABA techniques have recently been introduced for use with Socially Assistive Robots (SAR) to deliver Autism therapy. Nonetheless, little research has been published investigating the use of robot-based ABA in teaching socio-emotional skills to children diagnosed with Autism Spectrum Disorder (ASD). This paper presents the development of an ABA-based autonomous therapy system delivered by a humanoid robot, the Zeno R-50. Specifically, we present an intervention methodology with a prompt and an ABA reinforcement protocol targeting skills associated with facial and situational emotion recognition and understanding. Eleven children diagnosed with Autism were recruited to participate in the pre-pilot study screening. Initial results are analyzed to discover the children's preferred reinforcers and to find ways of improving the therapy system before proceeding to full study groups. Results demonstrate the successful detection of reinforcers and show a correlation between reinforcer preference and age. Results for two children who have completed interventions are presented, and improvements to the protocol are discussed to contribute to the understanding of SAR in teaching socio-emotional skills to children diagnosed with ASD.

Michelle Salvador, Anna Sophia Marsh, Anibal Gutierrez, Mohammad H. Mahoor

Interactive Therapy Approach Through Collaborative Physical Play Between a Socially Assistive Humanoid Robot and Children with Autism Spectrum Disorder

This paper presents an exploratory study in which children with autism spectrum disorder (ASD) interacted with a NAO humanoid robot in an interactive football game scenario. The study was conducted over 4 sessions with three boys diagnosed with ASD, and observed improvements in therapeutic outcomes such as social interaction, communication, eye contact, joint attention and turn taking. Qualitative and quantitative analyses were conducted using various approaches such as video documentation, surveys and assessment scales. In order to establish the efficacy of the study, children interacted with their typically developing peers and parents post-intervention to relate skills learned in the robot-human setting to the human-human setting. The quantitative results gathered and analyzed from pre- and post-implementation showed an increase in the execution and duration of target behaviors. Manual coding and qualitative analysis of the videos also verified that the proposed robot-mediated play setting demonstrated improvements in the social development of children with ASD in the areas of communicative competence, turn taking, social interaction, proxemics and eye contact.

Saima Tariq, Sara Baber, Asbah Ashfaq, Yasar Ayaz, Muhammad Naveed, Saba Mohsin

Examine the Potential of Robots to Teach Autistic Children Emotional Concepts: A Preliminary Study

In this preliminary study, we developed a set of humanoid robot body movements expressing four basic emotional concepts, together with a set of learning activities. The goal of this study is to collect feedback from subject matter experts and validate our design. We will integrate this feedback to improve the designs and guide autistic children in learning emotional concepts. To validate our designs, we conducted an online survey among the general public and four in-person interviews with subject matter experts. Results show that the Happiness and Sadness body movements expressed their emotions accurately, while the Anger and Fear movements need further improvement. According to the subject matter experts, this robot-mediated instruction is engaging and appropriate. To better suit autistic children, the instructional content should be tailored to individual learners.

Huanhuan Wang, Pai-Ying Hsiao, Byung-Cheol Min

Longitudinal Impact of Autonomous Robot-Mediated Joint Attention Intervention for Young Children with ASD

The literature suggests that a robot is able to capture attention and elicit positive social communication behaviors in many children with ASD. However, few studies have reported on the longitudinal impact of autonomous robot-mediated interventions. In this paper, we introduce a new autonomous robotic system for teaching joint attention skills, impairment of which is among the core developmental deficits in ASD, to children with ASD. This system automatically tracks the participant's behavior during the intervention and adaptively adjusts the interaction pattern based on the participant's performance. Using this system, we report a longitudinal robot-mediated joint attention intervention in a user study with 6 children with ASD. First, four sessions of one-target interventions were conducted for each participant. Then we tested their joint attention skills after 8 months in two sessions on: (1) response to one target; (2) response to two targets. The results showed that this autonomous robotic system was able to elicit improved one-target joint attention performance in young children with ASD over the course of 8 months. The results also suggested that the joint attention skills that the participants practiced in the one-target sessions might help them interact in a more difficult task. The robot also consistently attracted the participants' attention during this long-term intervention.

Zhi Zheng, Guangtao Nie, Amy Swanson, Amy Weitlauf, Zachary Warren, Nilanjan Sarkar

Culture as a Driver for the Design of Social Robots for Autism Spectrum Disorder Interventions in the Middle East

In this paper, we discuss the prevalence of Autism Spectrum Disorder (ASD) in the Gulf region. We examine the importance of providing state-of-the-art ASD interventions, and highlight social robots as therapeutic tools that have gained popularity for their use in ASD therapy in the West. We also elaborate on the features of social robots that make them effective and describe how they can be used in such settings. We then emphasize the significance of taking cultural context into account in order to develop indigenous tools for ASD therapy, and explain the different ways in which social robots can be made culturally adaptive to maximize their potential impact on children with ASD.

Hifza Javed, John-John Cabibihan, Mohammad Aldosari, Asma Al-Attiyah

Robo2Box: A Toolkit to Elicit Children’s Design Requirements for Classroom Robots

We describe the development and first evaluation of a robot design toolkit (Robo2Box) aimed at involving children in the design of classroom robots. We first describe the origins of the Robo2Box elements based on previous research with children and interaction designers drawing their preferred classroom robots. Then we describe a study in which 31 children created their own classroom robot using the toolkit. We present children’s preferences based on their use of the different elements of the toolkit, compare their designs with the drawings presented in previous research, and suggest changes for improvement of the toolkit.

Mohammad Obaid, Asım Evren Yantaç, Wolmet Barendregt, Güncel Kırlangıç, Tilbe Göksun

Interaction with Artificial Companions: Presentation of an Exploratory Study

The MoCA project aims to design and study children-companion relationship through virtual agents, personal robots and communicating devices. In this article, we present an exploratory study of the free 30 min long interactions between children and a set of artificial, robot-like and virtual companions. We present a preliminary overview of the results obtained pending further analyses.

Matthieu Courgeon, Charlotte Hoareau, Dominique Duhaut

Design and Development of Dew: An Emotional Social-Interactive Robot

Depression and anxiety are a growing concern among children. While parents often interpret their quietness as shyness, these children directly experience depression and anxiety. In this paper, we present the design of a social interactive robot, Dew, which is specially designed for this group of children. The design philosophy of Dew is presented together with its concept design and hardware and software specifications. With an expressive emotional design for human-robot interaction, Dew is envisioned to approach these young people emotionally and is expected to soothe their emotions.

Yiping Xia, Chen Wang, Shuzhi Sam Ge

RASA: A Low-Cost Upper-Torso Social Robot Acting as a Sign Language Teaching Assistant

This paper presents the design characteristics of a new Robot Assistant for Social Aims (RASA), an upper-torso humanoid robot platform currently in the final stages of development. The project addresses the need for affordable humanoid platforms designed for new areas of social robotics research, primarily teaching Persian Sign Language (PSL) to children with hearing disabilities. RASA is characterized by three features that are rarely found together in today's humanoid robots: dexterous hand-arm systems enabling it to perform sign language, low development cost, and easy maintenance. In this paper, design procedures and considerations are briefly discussed, and the mechatronic hardware design of the robot is then presented.

Mohammad Zakipour, Ali Meghdari, Minoo Alemi

Robust Children Behavior Tracking for Childcare Assisting Robot by Using Multiple Kinect Sensors

Recently, demand for highly qualified childcare schools has kept increasing, but the number of qualified nursery teachers is far from sufficient. Developing a childcare-assisting robot to support the work of nursery teachers is therefore highly necessary. To work like a human nursery teacher, the first challenge for the robot is to understand the behaviors of the children automatically, so that it can react adaptively to them. In this paper, we develop a robust children-behavior tracking system using multiple Kinect sensors. Each child is detected and recognized by integrating his/her personal features of face, color, and motion. Tracking is realized with a Markov Chain Monte Carlo (MCMC) particle filter. Experiments conducted in a childcare school show the usefulness of our system.

Bin Zhang, Tomoaki Nakamura, Rena Ushiogi, Takayuki Nagai, Kasumi Abe, Takashi Omori, Natsuki Oka, Masahide Kaneko
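The paper's tracker fuses multiple Kinects with an MCMC particle filter; as a minimal, hypothetical illustration of the underlying predict-weight-resample loop, the sketch below runs a plain bootstrap filter on a 1-D position with made-up noise parameters — it is not the authors' multi-sensor setup.

```python
import random
import math

def particle_filter_step(particles, observation, motion_std=0.5, obs_std=1.0):
    """One predict-weight-resample step of a bootstrap particle filter.

    `particles` is a list of scalar position hypotheses. The MCMC variant
    used in the paper replaces resampling with Metropolis-Hastings moves,
    but the predict/weight structure is the same.
    """
    # Predict: diffuse each particle with a random-walk motion model.
    predicted = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the observation given each particle.
    weights = [math.exp(-0.5 * ((observation - p) / obs_std) ** 2) for p in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(predicted, weights=weights, k=len(predicted))

# Track a target moving from 0 toward 9.5 with noisy observations.
random.seed(0)
particles = [random.uniform(-5, 5) for _ in range(500)]
for t in range(20):
    true_pos = 0.5 * t
    obs = true_pos + random.gauss(0.0, 1.0)
    particles = particle_filter_step(particles, obs)

estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # estimate should end up near the true position 9.5
```

A real system would track one such filter per child in 3-D, with the weighting step fusing face, color, and motion likelihoods from each Kinect.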

Learning with or from the Robot: Exploring Robot Roles in Educational Context with Children

The goal of this ongoing research is to examine the feasibility of using a social humanoid robot to teach children the basics of programming. We focus on exploring the robot's adaptive strategies in order to facilitate both effective educational applications and engaging child-robot interaction. In this paper we present our preliminary work, which explores the robot's social roles (peer versus teacher) and their effect on learning. The child needs to learn the basics of programming in order to walk the robot through a maze via drag-and-drop instructions on a tablet screen. The findings suggest that children complete the task much more quickly with the peer robot, while the teacher robot is shown to be more effective for learning.

Nazgul Tazhigaliyeva, Yerassyl Diyas, Dmitriy Brakk, Yernar Aimambetov, Anara Sandygulova

Automatic Adaptation of Online Language Lessons for Robot Tutoring

Teaching with robots is a developing field, wherein one major challenge is creating lesson plans to be taught by a robot. We introduce a novel strategy for generating lesson material, in which we draw upon an existing corpus of electronic lesson material and develop a mapping from the original material to the robot lesson, thereby greatly reducing the time and effort required to create robot lessons. We present a system, KubiLingo, in which we implement content mapping for language lessons. With permission, we use Duolingo as the source of our content. In a study with 24 users, we demonstrate that user performance improves by a statistically similar amount with a robot lesson as with a Duolingo lesson. We find that KubiLingo is more distracting and less likeable than Duolingo, indicating the need for improvements to the robot's design.

Leah Perlmutter, Alexander Fiannaca, Eric Kernfeld, Sahil Anand, Lindsey Arnold, Maya Cakmak

Robots in the Classroom: What Teachers Think About Teaching and Learning with Education Robots

In the present study, we investigated teachers’ attitudes toward teaching with education robots and robot-mediated learning processes. We further explored predictors of attitudes, and investigated teachers’ willingness to use robots in diverse learning settings. To do so, we conducted a survey with 59 German school teachers. Our results suggest that teachers held rather negative attitudes toward education robots. Further, our findings indicate a positive association between technology commitment and teachers’ attitudes. Teachers reported a preferable use of robots in domains related to science, technology, engineering, and mathematics (STEM). Regarding expectations toward the future use of education robots, teachers mentioned their motivational potential, using robots as information source, or easy handling. Teachers’ concerns, however, were associated with the disruption of teaching processes, additional workload, or the fear that robots might replace interpersonal relationships. Implications of our findings for theory and design of education robots are discussed.

Natalia Reich-Stiebert, Friederike Eyssel

The Influence of a Social Robot’s Persona on How it is Perceived and Accepted by Elderly Users

Demographic change is causing an imbalance between the number of elderly people in need of support and the number of care staff. It is therefore important to help older adults keep their independence. Forgetfulness is a common obstacle people face as they grow older, which social robots can moderate by reminding them of tasks. Since most elderly people are not used to robots, a challenge in HRI is to identify aspects of a robot's design that promote its acceptance. We present two different personas (companion vs. assistant) for a robotic platform, realized by manipulating verbal and nonverbal behavior. A study was conducted in assisted living accommodations, with the robot reminding residents of appointments, to examine whether the persona influences the robot's acceptance. Results indicate that the companion version of the robot was better accepted and perceived as more likeable and intelligent than the assistant version.

Andrea Bartl, Stefanie Bosch, Michael Brandt, Monique Dittrich, Birgit Lugrin

From Social Practices to Social Robots – User-Driven Robot Development in Elder Care

It has been shown that the development of social robots for the elder-care sector is primarily technology-driven and relies on stereotypes about old people. We focus instead on the actual social practices that social robots will target. We provide details of this interdisciplinary approach and highlight its applicability and usefulness with field examples from an elder-care home. These examples include ethnographic field studies as well as workshops with staff and residents. The goal is to identify, and agree with both groups on, social practices where the use of a social robot might be beneficial. The paper ends with a case study of a robot that frees staff from repetitive and time-consuming tasks while at the same time allowing residents to reclaim some independence.

Matthias Rehm, Antonia L. Krummheuer, Kasper Rodil, Mai Nguyen, Bjørn Thorlacius

Co-design and Robots: A Case Study of a Robot Dog for Aging People

The day-to-day experiences of aging citizens differ significantly from young, technologically savvy engineers. Yet, well-meaning engineers continue to design technologies for aging citizens, informed by skewed stereotypes of aging without deep engagements from these users. This paper describes a co-design project based on the principles of Participatory Design that sought to provide aging people with the capacity to co-design technologies that suit their needs. The project combined the design intuitions of both participants and designers, on equal footing, to produce a companion robot in the form of a networked robotic dog. Besides evaluating a productive approach that empowers aging people in the process of co-designing and evaluating technologies for themselves, this paper presents a viable solution that is playful and meaningful to these elderly people; capable of enhancing their independence, social agency and well-being.

Tuck W. Leong, Benjamin Johnston

An Effort to Develop a Web-Based Approach to Assess the Need for Robots Among the Elderly

Our aim was to develop an approach to assess the need for assistive and/or social robots among the elderly. The research methods consisted of a literature review, inquiries to assistive technology professionals, and surveys among wellbeing technology professionals and elderly people. The results showed a lack of existing approaches for assessing the need for robots. Even if a web-based tool might be useful, developing such an approach and its algorithms appears to be a demanding task.

Kimmo J. Vänni, Annina K. Korpela

Predicting the Intention of Human Activities for Real-Time Human-Robot Interaction (HRI)

A methodology for modeling human-object relations is needed for human intention recognition in assistive robotics. When helping the elderly, predicting future actions is an essential task in real-time human-robot interaction. Since humans' future actions are ambiguous, robots need to reason carefully about the appropriate action. This requires a mathematical model to evaluate all possible future actions, their corresponding probabilities, and the possible relations between humans and objects. Our contribution is a modeling methodology for human activities using a probabilistic state machine (PSM). Not only the objects, but also latent human poses and the relationships between humans and objects are considered. The probabilistic model allows for uncertainties and variations in object affordances. In experiments, we show how intention recognition for a drinking activity is analysed using our approach.

Vibekananda Dutta, Teresa Zielinska
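The probabilistic state machine named in the abstract can be pictured as a transition model over activity states, optionally fused with object-affordance evidence. The states, probabilities, and fusion rule below are invented for illustration only and are not taken from the paper.

```python
# Transition probabilities between activity states (hypothetical values).
transitions = {
    "idle":      {"reach_cup": 0.6, "idle": 0.4},
    "reach_cup": {"grasp_cup": 0.8, "idle": 0.2},
    "grasp_cup": {"drink": 0.7, "put_down": 0.3},
    "drink":     {"put_down": 0.9, "drink": 0.1},
    "put_down":  {"idle": 1.0},
}

def predict_next(state, observation_likelihood=None):
    """Rank possible next actions from the current state.

    `observation_likelihood` optionally fuses object-affordance evidence
    (e.g. the hand detected near the cup) into the transition prior.
    Returns a list of (probability, action) pairs, most likely first.
    """
    scores = dict(transitions[state])
    if observation_likelihood:
        for action in scores:
            scores[action] *= observation_likelihood.get(action, 1.0)
    total = sum(scores.values())
    return sorted(((p / total, a) for a, p in scores.items()), reverse=True)

# A hand detected near the cup makes grasping more likely than returning to idle.
ranked = predict_next("reach_cup", {"grasp_cup": 0.9, "idle": 0.1})
print(ranked[0][1])  # -> grasp_cup
```

The same fusion step is where latent pose estimates would enter in the full model: each evidence source multiplies the prior over next actions before renormalization.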

The ENRICHME Project: Lessons Learnt from a First Interaction with the Elderly

The main purpose of the ENRICHME European project is to develop a socially assistive robot that can help the elderly, adapt to their needs, and behave naturally. In this paper, we present some of the lessons learnt from the first interaction between the robot and two elderly people at one partner care facility (LACE Housing Ltd, UK). The robot interacted with the two participants for almost one hour. A large amount of sensory data was recorded from the multi-sensory system (i.e., audio data, RGB-D data, thermal images, and data from the skeleton tracker) to better understand the interaction, the needs and reactions of the users, and the context. This data was processed offline. Before the interaction between the two elderly residents and the robot, a demo was shown to all residents of the facility. The residents' reactions were positive and they found the robot useful. The first lessons learnt from this interaction between the Kompaï robot and the elderly are reported.

Roxana Agrigoroaie, François Ferland, Adriana Tapus

Design and Implementation of a Task-Oriented Robot for Power Substation

With the dramatically increasing number of substations, robots are expected to inspect equipment in the power industry. This paper presents a task-oriented robot for inspection in power substations. The robot's patrol modes comprise teleoperation, regular inspection, special inspection, and a one-key return mode. In patrol mode, the robot relies only on a low-cost magnetic sensor for lateral positioning and radio frequency identification (RFID) for longitudinal positioning. The positioning error is shown to be within 5 mm, compared with 20 cm for integrated GPS-DR navigation. Test results show that the robot works efficiently and reliably in a power substation.

Haojie Zhang, Bo Su, Zhibao Su

The MuMMER Project: Engaging Human-Robot Interaction in Real-World Public Spaces

MuMMER (MultiModal Mall Entertainment Robot) is a four-year, EU-funded project with the overall goal of developing a humanoid robot (SoftBank Robotics’ Pepper robot being the primary robot platform) with the social intelligence to interact autonomously and naturally in the dynamic environments of a public shopping mall, providing an engaging and entertaining experience to the general public. Using co-design methods, we will work together with stakeholders including customers, retailers, and business managers to develop truly engaging robot behaviours. Crucially, our robot will exhibit behaviour that is socially appropriate and engaging by combining speech-based interaction with non-verbal communication and human-aware navigation. To support this behaviour, we will develop and integrate new methods from audiovisual scene processing, social-signal processing, high-level action selection, and human-aware robot navigation. Throughout the project, the robot will be regularly deployed in Ideapark, a large public shopping mall in Finland. This position paper describes the MuMMER project: its needs, the objectives, R&D challenges and our approach. It will serve as reference for the robotics community and stakeholders about this ambitious project, demonstrating how a co-design approach can address some of the barriers and help in building follow-up projects.

Mary Ellen Foster, Rachid Alami, Olli Gestranius, Oliver Lemon, Marketta Niemelä, Jean-Marc Odobez, Amit Kumar Pandey

Introducing IOmi - A Female Robot Hostess for Guidance in a University Environment

In this paper we introduce IOmi: a life-sized female humanoid hostess robot intended to serve as a guide in indoor public spaces. Its design methodology, adapted from industrial design approaches, is intended to be applicable in different scenarios, considering the final users of the robot, the intended use of the agent, and the contextual environment. Results from a first test in a Latin American university environment clarified the needs of potential users and suggested new research directions.

Eiji Onchi, Cesar Lucho, Michel Sigüenza, Gabriele Trovato, Francisco Cuellar

Colleague or Tool? Interactivity Increases Positive Perceptions of and Willingness to Interact with a Robotic Co-worker

Human-robot interaction is increasingly likely in workplaces in the near future. In “co-working” relationships, humans may appreciate socially interactive robots for their anthropomorphic likeability, or functional robots for their strict task orientation. The current study examines the comparative perceived advantages of robots that behave interactively or functionally towards humans during a task with a superordinate goal. Surveys of 33 participants assessed perceptions of, and perceived cooperation with, the robots during the task. Results indicated that participants stood physically closer to Interactive robots and rated them as more anthropomorphic, sympathetic, and respected than Functional robots, but did not rate the two types of robots differently in terms of cooperation. The more participants anthropomorphized, sympathized with, and respected the robots, the more willing they reported being to work with robots in the future.

Benjamin C. Oistad, Catherine E. Sembroski, Kathryn A. Gates, Margaret M. Krupp, Marlena R. Fraune, Selma Šabanović

Help Me! Sharing of Instructions Between Remote and Heterogeneous Robots

Service robots frequently face similar tasks. However, they are still not able to share their knowledge efficiently on how to accomplish those tasks. We introduce a new framework, which allows remote and heterogeneous robots to share instructions on the tasks assigned to them. This framework is used to initiate tasks for the robots, to receive or provide instructions on how to accomplish the tasks, and to ground the instructions in the robots’ capabilities. We demonstrate the feasibility of the framework with experiments between two geographically distributed robots and analyze the performance of the proposed framework quantitatively.

Jianmin Ji, Pooyan Fazli, Song Liu, Tiago Pereira, Dongcai Lu, Jiangchuan Liu, Manuela Veloso, Xiaoping Chen

Enabling Symbiotic Autonomy in Short-Term Interactions: A User Study

The presence of robots in everyday environments is increasing day by day, and their deployment spans various applications: industrial and working scenarios, and health-care assistance in public areas or at home. However, robots are not yet comparable to humans in terms of capabilities; hence, in so-called Symbiotic Autonomy, robots and humans help each other to complete tasks. It is therefore interesting to identify the factors that maximize human-robot collaboration, which is a new point of view with respect to the HRI literature and one leaning strongly toward social behavior. In this work, we analyze a subset of such variables as possible influencing factors on humans' Collaboration Attitude in a Symbiotic Autonomy framework, namely: the Proxemics setting, the Activity Context, and Gender and Height as features of the users. We performed a user study set in everyday environments expressed as activity contexts, such as relaxing and working ones. A statistical analysis of the collected results shows that Collaboration Attitude depends strongly on the Proxemics setting and on Gender.

Francesco Riccio, Andrea Vanzo, Valeria Mirabella, Tiziana Catarci, Daniele Nardi

Conceptual Framework for RoboDoc: A New Social Robot for Research Assistantship

Conducting research becomes challenging when a large amount of scientific data is available and needs to be organized. In addition, research facilities may not be easily accessible due to limitations in using high-tech equipment and the resulting expenses. While there are sophisticated tools for managing research data and assisting researchers, they are not accessible in a unified, integrated manner, and researchers are sometimes overwhelmed by these tools and the additional effort of learning to work with them. To overcome these challenges, we propose a personalized intelligent assistant in the form of a social robot, called RoboDoc, to assist researchers in research steps such as idea generation, designing research methodology, and collecting and analyzing research data. We introduce a conceptual framework for RoboDoc that describes its services and design requirements, and finally we briefly discuss our current efforts in developing RoboDoc.

Azadeh Mohebi, Ramin Golshaie, Soheil Ganjefar, Ammar Jalalimanesh, Parnian Afshar, Ali Aali Hosseini, Seyyed Alireza Ghoreishi, Amir Badamchi

Mechanical Design of Christine, the Social Robot for the Service Industry

Based on a strong understanding of natural human-robot interaction, this paper presents a social robot named Christine with a human-like exterior, developed to work in the service sector. Although many social robots have been developed and excel in their control systems, several humanoid robots have either fallen into the uncanny valley or not yet been accepted by the public. The mechanical design of Christine greatly improves aesthetics without compromising functionality. Christine's head has 7 degrees of freedom (DOFs) for expressing 3 head gestures and 5 facial emotions, and its social intelligence is implemented on vision and audio subsystems. The hardware architecture of Christine, including its processor, vision, and motion systems, is also presented systematically.

Yi Mei Foong, Xiaomei Liu, Shuzhi Sam Ge, Jie Guo

Influence of User’s Personality on Task Execution When Reminded by a Robot

One of the main purposes of companion robots is to remind their users about the tasks they have to do. Such interaction requires robots to adapt to the person's preferences. How well a human performs when a robot reminds them of a task is of great importance. Findings in social psychology show that personality influences the way humans interact. In this work, we conducted a task-reminder experiment in an office-like environment, with a robot reminding a person of tasks while the person was doing other office activities, with the goal of finding positive influences of the robot depending on the user's personality. Nine different conditions were studied, with the robot varying its behavior and appearance. Results show that the user's personality influences the time taken to perform a task when reminded by the robot: people with high conscientiousness were prompted by the robot to finish the task earlier than people with low conscientiousness, and introverted people were likewise motivated by the robot to finish earlier than extroverted people.

Arturo Cruz-Maya, Adriana Tapus

Comparing Ways to Trigger Migration Between a Robot and a Virtually Embodied Character

The question of whether to use a robot or a virtually-embodied character for applications in need of a socially intelligent agent depends on the requirements of the task at hand. To overcome limitations of both types of embodiment and benefit from advantages provided by both, we can complement a physical robot with a virtual counterpart. In order to link the two embodiments such that users perceive they are interacting with the same entity, the concept of “migration” from one embodiment to the other needs to be addressed. In this work, we investigate a particular aspect of this concept, namely how to best perform the triggering of migration, within the context of a physical activity motivation scenario for adolescents. We design two methods, a proximity-based method and a control, and compare their effects on adolescents’ perceptions of our agent. Results show that users perceive the agent as more of a friend and more socially present in the proximity-based than in the control condition. This emphasizes the importance of investigating different facets of entity migration for systems in need of employing both a physical and virtual embodiment for an artificial agent.

Elena Corina Grigore, Andre Pereira, Jie Jessica Yang, Ian Zhou, David Wang, Brian Scassellati

Does the Safety Demand Characteristic Influence Human-Robot Interaction?

While it is increasingly common to have robots in real-world environments, many Human-Robot Interaction studies are conducted in laboratory settings. Evidence shows that laboratory settings have the potential to skew participants’ feelings of safety. This paper probes the consequences of this Safety Demand Characteristic and its impacts on the field of Human-Robot Interaction. We collected survey and video data from 19 participants who had varied consent forms describing different levels of risk for participating in the study. Participants were given a distractor task to prevent them from knowing the purpose of the study. We hypothesized that participants would feel less safe with the changed consent form and that participants’ views of the robot would change depending on the version of consent. The results showed that features of the robot were viewed by participants differently depending on the perceived risks of participating in the study, warranting further inspection.

Jamie Poston, Houston Lucas, Zachary Carlson, David Feil-Seifer

On Designing Socially Acceptable Reward Shaping

For social robots, learning from an ordinary user should be socially appealing. Unfortunately, machine learning demands an enormous amount of human data, and a prolonged interactive teaching session becomes anti-social. We have addressed this problem in the context of reward shaping for reinforcement learning. For efficient reward shaping, a continuous stream of rewards is expected from the teacher. We present a simple framework which seeks rewards for a small number of steps from each of a large number of human teachers. Therefore, it simplifies the job of an individual teacher. The framework was tested with online crowd workers on a transport puzzle. We thoroughly analyzed the quality of the learned policies and crowd’s teaching behavior. Our results showed that nearly perfect policies can be learned using this framework. The framework was generally acceptable in the crowd’s opinion.

Syed Ali Raza, Jesse Clark, Mary-Anne Williams
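Reward shaping, as used in the abstract above, means adding a teacher's signal to the environment reward during reinforcement learning. The toy sketch below uses a scripted "teacher" standing in for the crowd workers and a chain world replacing the transport puzzle; all parameters are illustrative assumptions, not the paper's setup.

```python
import random

def q_learning(shaping, episodes=200, n=6, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a 1-D chain; the agent must reach state n-1.

    `shaping(s, a)` stands in for a human teacher's reward signal, which
    in the paper is crowdsourced a few steps at a time from each teacher.
    """
    q = {(s, a): 0.0 for s in range(n) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            # Epsilon-greedy action selection over the two moves.
            a = random.choice((-1, 1)) if random.random() < eps else \
                max((-1, 1), key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), n - 1)
            # Environment reward at the goal, plus the teacher's shaping signal.
            r = (1.0 if s2 == n - 1 else 0.0) + shaping(s, a)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
            s = s2
            if s == n - 1:
                break
    return q

random.seed(1)
q = q_learning(lambda s, a: 0.1 if a == 1 else -0.1)  # teacher rewards moving right
policy = [max((-1, 1), key=lambda a: q[(s, a)]) for s in range(5)]
print(policy)  # learned policy: move right toward the goal in every state
```

The framework in the paper distributes exactly this stream of `shaping` calls across many short teaching sessions, so no single teacher has to supply rewards for a whole training run.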

Motivational Effects of Acknowledging Feedback from a Socially Assistive Robot

Preventing diseases of affluence is one of the major challenges for our future society. Recently, robots have been introduced to support people in dieting or rehabilitation tasks. In our current work, we investigate how the companionship and acknowledgement of a socially assistive robot (SAR) can influence users to persist longer on a planking task. We conducted a 2 (acknowledgement vs. no acknowledgement) x 2 (instructing vs. exercising together) + 1 (baseline) study with 96 subjects. We observed a motivational gain when the robot exercises together with the user or gives acknowledging feedback. However, we could not find an increase in motivation when the robot shows both behaviors. We attribute the latter finding to ceiling effects and discuss why we could not find an additional performance gain. Moreover, we highlight implications for SAR researchers developing robots that motivate people to extend their exercising duration.

Sebastian Schneider, Franz Kummert

Who Am I? What Are You? Identity Construction in Encounters Between a Teleoperated Robot and People with Acquired Brain Injury

The paper highlights how the material affordances of a teleoperated robot (Telenoid) enable identity construction in interactions with people living with acquired brain injury (ABI). The focus is set on the identity construction of the robot in relation to both its operator and the interlocutors. The analysis is based on video recordings of a workshop in which people with ABI were communicating with a teleoperated robot for the first time. A detailed multimodal conversation analysis of video-recorded interactions demonstrates how identity construction (a) is embedded in the situated and interactional unfolding of the encounter and (b) is fragmented and reflexively intertwined with the identity construction of the other parties. The paper discusses how an understanding of identity as situated and interactional constructions contributes to the field of HRI and how teleoperated robots can be used in the field of communication impairment.

Antonia L. Krummheuer

Contribution Towards Evaluating the Practicability of Socially Assistive Robots – by Example of a Mobile Walking Coach Robot

This paper aims to make a further contribution towards a more transparent and systematic technical evaluation of the implemented services and underlying HRI and navigation functionalities of socially assistive robots for public or domestic applications. Based on a set of selected issues, our mobile walking coach robot, developed in the recently finished research project ROREAS (Robotic Rehabilitation Assistant for Walking and Orientation Training of Stroke Patients), was evaluated in three-stage function and user tests, in order to demonstrate the strengths and weaknesses of the developed assistive solution regarding the autonomy and practicability achieved for clinical use, from a technical point of view.

Horst-Michael Gross, Markus Eisenbach, Andrea Scheidig, Thanh Quang Trinh, Tim Wengefeld

Philosophy of Social Robotics: Abundance Economics

The aim of this paper is to present conceptual resources that address social robotics from a philosophical, social, and economic perspective. Since social robotics is an emerging and potentially high-impact area, it is necessary to consider the ethics and philosophy of social robotics and its potential impact on society. Philosophical, economic, and ethical issues are addressed first generally, revealing that social robotics is most centrally a situation of human-machine collaboration. Second, economic issues are examined more specifically, positing that social robotics might figure prominently in both an automation economy that focuses on reduced requirements for human labor and an abundance economy that targets improved human quality of life. The stakes of social robotics are high: it could yield both quantitative and qualitative benefits, and its close connection with humans could help negotiate and buffer interactions between humans and a world with an ever-expanding presence of technology.

Melanie Swan

Toward a Hybrid Society

The Transformation of Robots, from Objects to Social Agents

Social robots are machines developed to interact with humans. Unlike other technological devices, their presence in society requires accepting and treating them as social agents. This has important implications in terms of social changes for humans’ personal and social identity, and social interactions. We aim to explain the core features that characterize social robots by highlighting what makes them distinct from other types of innovative technology. Equally important, we illustrate how social psychology can provide a useful perspective to understand human-robot interactions. To do so, we focus on studies that have investigated the role of intergroup relations and social identity in the context of human-machine interactions to demonstrate that robots may comprise a new type of social outgroup in future society.

Francesco Ferrari, Friederike Eyssel

Iterative Design of a System for Programming Socially Interactive Service Robots

Service robots, such as the Savioke Relay, are becoming available in human environments such as hotels. It is important for these robots to not only be functional, but also to have appropriate socially interactive behaviors. In this paper, we first present results from a formative study with service industry customers. A key demand we discover is that the robot should be aware of people present around the robot. We incorporate these lessons into the design of iCustomPrograms, a system for programming socially interactive behaviors for service robots. Next, we perform two field studies with iCustomPrograms and iterate its design. In the first field study, which took place at an airport, we witness people initiating interaction with the robot in unanticipated ways. The second field study, which took place over 2 weeks at 5 service industry properties, evaluates the socially interactive applications created with iCustomPrograms. Our experiences and findings from each study not only show the usefulness of our system in the field, but also provide insights for the design of future interactive applications for service robots.

Michael Jae-Yoon Chung, Justin Huang, Leila Takayama, Tessa Lau, Maya Cakmak

Engagement Detection During Deictic References in Human-Robot Interaction

Humans are typically skilled interaction partners and detect even small problems during an interaction. In contrast, interactive robot systems often lack the basic capabilities to sense the engagement of their interaction partners and maintain common ground. This becomes even more problematic if humanoid robots with human-like behavior are used, as they build up high expectations of their cognitive capabilities. This paper contributes an approach for analyzing human engagement during object references in an explanation scenario, based on time series alignment. An experimental guide scenario in a smart home environment was used to collect training and test datasets in which engagement classification was carried out by human operators. The experiments already performed on the dataset give deeper insights into the presented task and motivate an incremental, mixed-modality approach to engagement classification. While some of the results rely on external sensors, they give an outlook on the requirements and possibilities for HRI scenarios with next-generation social robots.

Timo Dankert, Michael Goerlich, Sebastian Wrede, Raphaela Gehle, Karola Pitsch

Making Turn-Taking Decisions for an Active Listening Robot for Memory Training

In this paper we present a dialogue system and response model that allows a robot to act as an active listener, encouraging users to tell the robot about their travel memories. The response model makes a combined decision about when to respond and what type of response to give, in order to elicit more elaborate descriptions from the user and avoid non-sequitur responses. The model was trained on human-robot dialogue data collected in a Wizard-of-Oz setting, and evaluated in a fully autonomous version of the same dialogue system. Compared to a baseline system, users perceived the dialogue system with the trained model to be a significantly better listener. The trained model also resulted in dialogues with significantly fewer mistakes, a larger proportion of user speech and fewer interruptions.
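A combined decision about when to respond and what type of response to give could be sketched as a simple decision function over turn-final features. The features (pause length, utterance completeness, place mention) and the rules below are illustrative assumptions, not the trained model described in the abstract.

```python
# Hypothetical turn-taking sketch: one function jointly decides whether
# to respond at a pause and which response type to use.

def decide_response(pause_ms, utterance_complete, mentions_place):
    """Return None (keep listening) or a response type."""
    if pause_ms < 500:
        return None                   # user likely still speaking
    if not utterance_complete:
        return "backchannel"          # e.g. "mm-hm", invites continuation
    if mentions_place:
        return "follow_up_question"   # elicit more detail about the memory
    return "acknowledgement"
```

Coupling the two decisions in one function is what lets such a model avoid non-sequitur responses: the choice of response type is conditioned on the same evidence that triggered the decision to speak.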

Martin Johansson, Tatsuro Hori, Gabriel Skantze, Anja Höthker, Joakim Gustafson

Look at Me Now: Investigating Delayed Disengagement for Ambiguous Human-Robot Stimuli

Human-like appearance has been shown to positively affect perception of and attitudes towards robotic agents. In particular, the more human-like robots look, the more participants are willing to ascribe human-like states to them (i.e., having a mind, emotions, agency). The positive effect of human-likeness on agent ratings, however, does not translate into better performance in human-robot interaction (HRI). Performance first increases with human-likeness, then drops dramatically once human-likeness reaches around 70 %, before recovering to its maximum at 100 % humanness. The goal of the current paper is to investigate whether attentional mechanisms, in particular delayed disengagement, are responsible for the drop in performance for very human-like, but not perfectly human, agents. The hypothesis is that robots with a high degree of human-likeness capture attention and thus make it harder to orient attention away from them towards task-relevant stimuli in the periphery, resulting in poor performance. To investigate this question, faces of differing degrees of human-likeness (0 %, 30 %, 70 %, 100 %, non-social control) are presented to participants in an eye-tracking experiment, and the time it takes participants to orient towards a peripheral stimulus is measured. Results show significant delayed disengagement for all stimuli, but no stronger delayed disengagement for very human-like agents, making delayed disengagement an unlikely source of the negative effect of human-like appearance on performance in HRI.

Melissa A. Smith, Eva Wiese

Concurrency Simulation in Soccer

Soccer is a multiplayer, concurrent, strategic game and one of the most popular sports in the world. Each soccer match is a social phenomenon in itself, with high social impact from local to international levels. We use context-free grammars and automata to model soccer, then develop a concurrent computing system for simulating the game. We run game simulations using real statistical data from distinguished midfielders and forwards in the Spanish soccer league. Tests are performed by varying the teams’ formations (4-3-3, 4-4-2, or 5-3-2) and obtaining the probabilistic advantage of each formation given the specific players’ statistics.
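The formation comparison described above can be sketched as a Monte Carlo simulation. The per-possession conversion probabilities below are invented placeholders, not the paper's Spanish-league statistics, and the model is far simpler than a grammar-based concurrent simulation.

```python
import random

# Hypothetical per-possession scoring probabilities by formation.
FORMATIONS = {
    "4-3-3": 0.12,   # attack-heavy: converts possessions more often
    "4-4-2": 0.10,
    "5-3-2": 0.08,   # defensive: lower conversion rate
}

def simulate_match(p_home, p_away, possessions=50, rng=random):
    """Return (home_goals, away_goals) for one simulated match."""
    home = sum(rng.random() < p_home for _ in range(possessions))
    away = sum(rng.random() < p_away for _ in range(possessions))
    return home, away

def win_rate(f_home, f_away, matches=10000, seed=42):
    """Fraction of simulated matches the home formation wins outright."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(matches):
        h, a = simulate_match(FORMATIONS[f_home], FORMATIONS[f_away], rng=rng)
        wins += h > a
    return wins / matches
```

Averaging over many simulated matches turns the per-player and per-formation statistics into an estimated probabilistic advantage for each lineup.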

Jonathan Tellez-Giron, Matías Alvarado

Let the User Decide! User Preferences Regarding Functions, Apps, and Interfaces of a Smart Home and a Service Robot

In an online survey, we studied user expectations and preferences for functions and apps in the context of a smart apartment. Furthermore, we explored which type of interface users would choose for interacting with the smart apartment. Equally important, we investigated users’ acceptance of a service robot in the smart home. Results showed high levels of acceptance for both the smart apartment and the robot, although the preferred interface for the apartment was context dependent. We discuss implications of the current survey and highlight key aspects to be taken into consideration when developing innovative technology for the home context.

Birte Schiffhauer, Jasmin Bernotat, Friederike Eyssel, Rebecca Bröhl, Jule Adriaans

Welcome to the Future – How Naïve Users Intuitively Address an Intelligent Robotics Apartment

The purpose of this Wizard-of-Oz study was to explore the intuitive verbal and non-verbal goal-directed behavior of naïve participants in an intelligent robotics apartment. Participants had to complete seven mundane tasks, for instance, they were asked to turn on the light. Participants were explicitly instructed to consider nonstandard ways of completing the respective tasks. A multi-method approach revealed that most participants favored speech and interfaces like switches and screens to communicate with the intelligent robotics apartment. However, they required instructions to use the interfaces in order to perceive them as competent targets for human-machine interaction. Hence, first important steps were taken to investigate how to design an intelligent robotics apartment in a user-centered and user-friendly manner.

Jasmin Bernotat, Birte Schiffhauer, Friederike Eyssel, Patrick Holthaus, Christian Leichsenring, Viktor Richter, Marian Pohling, Birte Carlmeyer, Norman Köster, Sebastian Meyer zu Borgsen, René Zorn, Kai Frederic Engelmann, Florian Lier, Simon Schulz, Rebecca Bröhl, Elena Seibel, Paul Hellwig, Philipp Cimiano, Franz Kummert, David Schlangen, Petra Wagner, Thomas Hermann, Sven Wachsmuth, Britta Wrede, Sebastian Wrede

Better Than Human: About the Psychological Superpowers of Robots

Social interaction is crucial for psychological wellbeing. However, for the elderly, desiring to live independently in their homes for as long as possible, getting the emotional care needed can become challenging. Robots as social companions may help. To design companions, we argue to focus on the hybrid nature of robots in between being a “thing” and a “human” thus utilizing the unique “capabilities” of a robot. We discuss six psychological superpowers of robots rooted in their “thingness” rather than “humanness.” Robots are void of competitiveness, have endless patience, can be unconditionally subordinated, have the ability to contain themselves, do not take things personally and can assume responsibility. These qualities all relate to everyday companionship, but may be difficult to actually realize for fellow humans. By exploiting these superpowers, robot companions can become meaningful – not as a substitute for other humans, but as a novel, complementary form of social interaction.

Julika Welge, Marc Hassenzahl

A Method for Establishing Correspondences Between Hand-Drawn and Sensor-Generated Maps

Maps, and specifically floor plans, are useful for planning a variety of tasks, from arranging furniture to designating conceptual or functional spaces (e.g., kitchen, walkway). However, maps generated directly from robot sensor data can be hard to interpret and use for this purpose, especially for individuals who are not used to them, because of sensor and odometry measurement errors and the probabilistic nature of the mapping algorithms themselves. In this paper, we present an algorithm for quickly laying a floor plan (or other conceptual map) onto a map generated from sensor data, creating a one-to-one mapping between the two. This allows humans interacting with the robot to use a more readily understandable representation of the world, while the robot itself uses the sensor-generated map. We look at two use cases: specifying “no-go” regions within a room, and visually locating objects within a room. Although a user study showed no statistical difference between the two types of maps in terms of performance on this spatial memory task, we argue that floor plans are closer to the mental maps people naturally draw to characterize spaces, and are easier to use for untrained individuals.
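One common way to establish such a correspondence is a least-squares 2-D similarity transform (scale, rotation, translation) fit to a handful of point pairs, such as room corners marked in both maps. This sketch is an assumption for illustration; the paper's own algorithm may differ, and the landmark pairs are hypothetical.

```python
import math

# Fit a 2-D similarity transform mapping floor-plan coordinates onto
# sensor-map coordinates from matched landmark pairs (least squares).

def fit_similarity(plan_pts, map_pts):
    """Return (scale, theta, tx, ty) mapping plan -> map coordinates."""
    n = len(plan_pts)
    cxp = sum(x for x, _ in plan_pts) / n    # plan centroid
    cyp = sum(y for _, y in plan_pts) / n
    cxm = sum(x for x, _ in map_pts) / n     # map centroid
    cym = sum(y for _, y in map_pts) / n
    a = b = norm = 0.0
    for (px, py), (qx, qy) in zip(plan_pts, map_pts):
        x, y = px - cxp, py - cyp
        u, v = qx - cxm, qy - cym
        a += x * u + y * v
        b += x * v - y * u
        norm += x * x + y * y
    theta = math.atan2(b, a)                 # best rotation angle
    scale = math.hypot(a, b) / norm          # best uniform scale
    tx = cxm - scale * (math.cos(theta) * cxp - math.sin(theta) * cyp)
    ty = cym - scale * (math.sin(theta) * cxp + math.cos(theta) * cyp)
    return scale, theta, tx, ty

def apply_similarity(params, pt):
    """Transform one (x, y) point from plan to map coordinates."""
    s, th, tx, ty = params
    x, y = pt
    return (s * (math.cos(th) * x - math.sin(th) * y) + tx,
            s * (math.sin(th) * x + math.cos(th) * y) + ty)
```

With three or more non-collinear pairs the fit is overdetermined, so measurement noise in either map is averaged out rather than propagated.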

Leo Bowen-Biggs, Suzanne Dazo, Yili Zhang, Alexander Hubers, Matthew Rueben, Ross Sowell, William D. Smart, Cindy M. Grimm

Erratum to: Introducing IOmi - A Female Robot Hostess for Guidance in a University Environment

Eiji Onchi, Cesar Lucho, Michel Sigüenza, Gabriele Trovato, Francisco Cuellar

