
2021 | Book

Social Robotics

13th International Conference, ICSR 2021, Singapore, Singapore, November 10–13, 2021, Proceedings

Editors: Dr. Haizhou Li, Prof. Dr. Shuzhi Sam Ge, Yan Wu, Agnieszka Wykowska, Assist. Prof. Hongsheng He, Xiaorui Liu, Prof. Dongyu Li, Jairo Perez-Osorio

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 13th International Conference on Social Robotics, ICSR 2021, held in Singapore, Singapore, in November 2021. The conference was held as a hybrid event.

The 64 full papers and 15 short papers presented were carefully reviewed and selected from 114 submissions. The conference covers topics on humans and intelligent robots and on the integration of robots into the fabric of our society. The theme of the 2021 edition was “Robotics in our everyday lives”, emphasizing the increasing importance of robotics in human daily life.

Table of Contents

Frontmatter

Stereotypes and Biases in HRI

Frontmatter
Women Are Funny: Influence of Apparent Gender and Embodiment in Robot Comedy

Previous robotics work has identified significant effects of perceived gender and embodiment on human perceptions of robots, but these topics have yet to be investigated in the context of robot comedy. The presented study explored the effects of gender and embodiment on audience members’ perceptions of a robotic comedian. Participants (N = 153) observed either an audio-only clip or a video of a robotic comedian, with either a male or a female voice. We measured self-reported ratings of robot attributes. Results showed that neither gender nor physical form influenced joke humorousness or robot attribute ratings; however, those who viewed a video of the robot reported feeling more connected to the comedian. These findings suggest that, unlike in past studies of human comedy, gender stereotypes and physical appearance may not affect perceptions of robot comedy performance.

Nisha Raghunath, Paris Myers, Christopher A. Sanchez, Naomi T. Fitter
Cross-Cultural Timeline of the History of Thought of the Artificial

The current world landscape of opinions and attitudes about robotics is highly variegated across different parts of the world. This landscape results from the combined effects of multiple factors dating back millennia, as waves of philosophical thought, religion, and historical events overlapped and arguably influenced the concepts of the human and of the artificial. This paper provides a survey of such factors and attempts to trace possible lines between causes and consequences. The analysis seems to indicate the presence of a West/East split that marks the main differences in how the roles of social agents, humanoids, transhumanism, and labour automation are understood.

Gabriele Trovato, Nikolaos Mavridis, Alexander Huerta-Mercado, Ryad Chellali
People’s Perceptions of Gendered Robots Performing Gender Stereotypical Tasks

HRI research shows that people prefer robot appearances that fit their given task, but it also identifies stereotypical social perceptions of robots caused by a gendered appearance. This study investigates the stereotyping effects of both robot genderedness (male vs. female) and assigned task (analytical vs. social) on people’s evaluations of trust, social perception, and humanness in an online vignette study (n = 89) with a between-subjects design. People deem robots more competent and place higher capacity trust in them when they perform analytical tasks compared to social tasks, independent of the robot’s gender. An observed trend in the data implies a tendency to dehumanize robots as an effect of their gendered appearance, sometimes as an interaction effect with the performed task when this contradicts gender-stereotypical expectations. Our results call for further exploration of robot gender by varying gender cues and considering alternative task descriptions, and highlight potential new directions in studying human misconduct towards robots.

Sven Y. Neuteboom, Maartje M. A. de Graaf
Gender Revealed: Evaluating the Genderedness of Furhat’s Predefined Faces

In this study, we employed Furhat to investigate how people attribute gender to a robot and whether the attribution of gender might elicit stereotypes already at a first impression. We involved 223 participants in an online study and asked them to rate 15 of Furhat’s predefined faces in terms of femininity, masculinity, communion, and agency, and identify which facial cues they based their attribution of gender upon. Our results show that Furhat’s predefined faces are attributed the same gender predicted by their names, except for one face which was perceived as androgynous. They disclose that feminine robots are perceived as less agentic than masculine robots already at a first impression, and reveal that vocal cues have higher relevance than facial cues in determining the gender attributed to a robot. Besides providing a complete account of the genderedness of Furhat’s predefined faces, the present study also raises awareness of the importance of gender in the design of robots and provides a starting point to design more inclusive robotic technologies.

Giulia Perugia, Alessandra Rossi, Silvia Rossi
Cultural Values, but not Nationality, Predict Social Inclusion of Robots

Research has highlighted that Western and Eastern cultures differ in socio-cognitive mechanisms, such as social inclusion. Interestingly, social inclusion is a phenomenon that might transfer from human-human to human-robot relationships. Although the literature has shown that individual attitudes towards robots are shaped by cultural background, little research has investigated the role of cultural differences in the social inclusion of robots. In the present experiment, we investigated how cultural differences, in terms of nationality and individual cultural stance, influence the social inclusion of the humanoid robot iCub in a modified version of the Cyberball game, a classical experimental paradigm measuring social ostracism and exclusion mechanisms. Moreover, we investigated whether the individual tendency to attribute intentionality to robots modulates the degree of inclusion of the iCub robot during the Cyberball game. Results suggested that individuals’ stance towards collectivism and their tendency to attribute a mind to robots both predicted the level of social inclusion of the iCub robot in our version of the Cyberball game.

Serena Marchesi, Cecilia Roselli, Agnieszka Wykowska

Socially Intelligent Robots

Frontmatter
From Movement Kinematics to Object Properties: Online Recognition of Human Carefulness

When manipulating objects, humans finely adapt their motions to the characteristics of what they are handling. An attentive observer can thus foresee hidden properties of the manipulated object, such as its weight, its temperature, and even whether it requires special care in the manipulation. This study is a step towards endowing a humanoid robot with this last capability. Specifically, we study how a robot can infer online, from vision alone, whether or not the human partner is careful when moving an object. We demonstrated that a humanoid robot could perform this inference with high accuracy (up to 81.3%) even with a low-resolution camera. Only for short movements without obstacles did carefulness recognition perform poorly. The prompt recognition of movement carefulness from observing the partner’s action will allow robots to adapt their actions on the object to show the same degree of care as their human partners.

Linda Lastrico, Alessandro Carfì, Francesco Rea, Alessandra Sciutti, Fulvio Mastrogiovanni
Automated Lip-Reading Robotic System Based on Convolutional Neural Network and Long Short-Term Memory

In Iranian Sign Language (ISL), alongside the movement of fingers and arms, the dynamic movement of the lips is also essential to perform and recognize a sign completely and correctly. In a follow-up of our previous studies on empowering the RASA social robot to interact with individuals with hearing impairments via sign language, we propose two automated lip-reading systems based on DNN architectures, a CNN-LSTM and a 3D-CNN, for the robotic system to recognize words from the OuluVS2 database. In the first network, a CNN was used to extract static features, and an LSTM was used to model temporal dynamics. In the second, a 3D-CNN network was used to extract appropriate visual and temporal features from the videos. Accuracy rates of 89.44% and 86.39% were obtained for the presented CNN-LSTM and 3D-CNN networks, respectively, which is fairly promising for our automated lip-reading robotic system. Although the proposed non-complex networks did not provide the highest accuracy for this database (based on the literature), 1) they provided better results than some of the more complex and even pre-trained networks in the literature, 2) they train very quickly, and 3) they are quite appropriate and acceptable for the robotic system during human-robot interactions (HRI) via sign language.

Amir Gholipour, Alireza Taheri, Hoda Mohammadzade
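The CNN-LSTM pipeline described in the abstract — per-frame visual features fed into a recurrent layer that models temporal dynamics — can be sketched in miniature with NumPy. This is a toy illustration under invented assumptions (the linear projection standing in for the CNN, all dimensions, and the random weights), not the authors' network:

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_features(frames, W):
    # Stand-in for the per-frame CNN: flatten each lip frame and project it
    # to a feature vector (a real system would use convolutional layers).
    T, H, Wd = frames.shape
    return frames.reshape(T, H * Wd) @ W            # (T, feat)

def lstm_forward(x, Wx, Wh, b):
    # Minimal LSTM over the frame features; returns the last hidden state.
    hdim = Wh.shape[0]
    h, c = np.zeros(hdim), np.zeros(hdim)
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b                  # (4*hdim,)
        i, f, g, o = np.split(z, 4)
        i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

# Toy dimensions: 20 frames of 32x32 lip crops, 10 word classes.
T, H, W_img, feat, hid, n_words = 20, 32, 32, 64, 32, 10
frames = rng.random((T, H, W_img))
Wp = rng.standard_normal((H * W_img, feat)) * 0.01
Wx = rng.standard_normal((feat, 4 * hid)) * 0.1
Wh = rng.standard_normal((hid, 4 * hid)) * 0.1
b = np.zeros(4 * hid)
Wout = rng.standard_normal((hid, n_words)) * 0.1

h_last = lstm_forward(frame_features(frames, Wp), Wx, Wh, b)
logits = h_last @ Wout
pred = int(np.argmax(logits))
```

In a trained system, `logits` would be passed through a softmax and the weights learned by backpropagation; the sketch only shows how the static (CNN) and temporal (LSTM) stages compose.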
Developing a Robot’s Empathetic Reactive Response Inspired by a Bottom-Up Attention Model

This paper describes the development of a reactive behavioral response framework for the tabletop robot Haru. The framework enables the robot to react to external stimuli through a repertoire of expressive routines. The behavioral response framework is inspired by the simple reactive behaviors of organisms (e.g. reflexes) based on a bottom-up attention model. First, a participatory study for behavior elicitation was conducted, in which we explored the robot’s possible expressive behaviors and the possible stimulus triggers. These stimulus-response (S-R) pairs are designed to befit the robot’s characteristics. Then, we developed a perception module and a reactive behavior module that automatically translates any perceived stimulus into expressive behavioral responses. We evaluated the proposed S-R framework with Haru in an interaction setting, and our results show an increase in human attention activity, indicative of a positive impact on conveying the robot’s sense of agency.

Randy Gomez, Yu Fang, Serge Thill, Ricardo Ragel, Heike Brock, Keisuke Nakamura, Yurii Vasylkiv, Eric Nichols, Luis Merino
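The stimulus-response mapping at the heart of such a framework can be sketched as a simple lookup from perceived stimuli to a repertoire of expressive routines. This is a hypothetical illustration; the stimulus names and routine names are invented, not Haru's actual vocabulary:

```python
import random

random.seed(0)

# Each perceived stimulus maps to a repertoire of expressive routines;
# the reactive module picks one, mimicking a simple reflex-like response.
SR_PAIRS = {
    "loud_noise":    ["startle", "turn_toward_sound"],
    "face_appears":  ["greet_nod", "happy_eyes"],
    "sudden_motion": ["track_motion", "lean_back"],
}

def react(stimulus):
    routines = SR_PAIRS.get(stimulus)
    if routines is None:
        return "idle"            # unknown stimuli fall through to idle
    return random.choice(routines)
```

A fuller bottom-up attention model would also weight competing stimuli by salience before selecting which S-R pair fires.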
Toward the Realization of Robots that Exhibit Altruistic Behaviors

In this paper, we discuss and investigate the conditions and requirements for future robots to be efficiently altruistic. Robots that are integrated into our lives and become part of society will have to carefully balance their time between their labor and altruistic behaviors, as human workers do. First, we single out three essential points for achieving this balance: 1) the robot should perform altruistic actions without impacting the performance of its designated tasks more than its owner allows; 2) the robot should take into account its expected future workload when predicting the impact of engaging in an altruistic action; 3) the robot should take into account the benefit to society when engaging in an altruistic action. Then, we propose a general behavioral model that makes it possible to achieve this balance. Simulation results show that a robot using the proposed behavioral model could carry out some altruistic actions while still performing its assigned labor efficiently.

Hajime Katagiri, Jani Even, Takayuki Kanda
Nadine the Social Robot: Three Case Studies in Everyday Life

This paper describes three case studies performed in everyday life with our social humanoid robot, Nadine. The development of AI, vision, and NLP over the last few years has made it possible to improve social robots’ awareness of their environment and has enabled them to understand spoken interactions. We have developed a software platform with several modules that allow us to introduce Nadine to real-life settings, and we have brought Nadine to three different places in such settings. The first was to prepare Nadine for an exhibition at the ArtScience Museum in Singapore. The second was to let Nadine work as a customer agent at the AIA insurance company alongside real employees. The third, very recent, was to use Nadine as a companion in an elderly care home. In this paper, we describe the three case studies we performed in different environments and the lessons we have learned from the outcomes of those experiences. We conclude by proposing new research avenues and the missing pieces that could make these social robots available to us in our everyday life.

Nadia Magnenat Thalmann, Nidhi Mishra, Gauri Tulsulkar

Social Context Awareness

Frontmatter
Investigating Customers’ Perceived Sensitivity of Information Shared with a Robot Bartender

Personalised experiences with service robots positively affect people’s perception of the robot and, consequently, foster the success of the interaction. This implies that people need to share their personal information with the robot, which could make them feel uneasy when such interactions happen in public spaces or in the presence of strangers. It is therefore difficult for a service robot to personalise a human-robot interaction (HRI) when doing so can lead to a breach of privacy. As a first step, the current study investigated people’s perception of the sensitivity of various categories of potentially private personal information that are likely to be used by a service robot in a public business, such as a bar. We conducted a questionnaire-based study in which participants rated 15 items of personal information that they could share with either a human or a robot bartender, according to their level of sensitivity. We analysed responses from 76 participants. We clearly identified information that is perceived as highly sensitive, such as information related to a person’s identity (e.g. sexual orientation, political beliefs), and information low in sensitivity, such as that related to personal interests (e.g. sports, TV shows). Our findings also showed that older people consider sharing their drink preferences more sensitive than younger people do, especially when the bartender is a robot. We did not find significant differences in users’ ratings due to their gender.

Alessandra Rossi, Giulia Perugia, Silvia Rossi
Sex Robots: Auto-erotic Devices, Fetishes or New Form of Transitional Object for Adults?

How should we characterize the object status of the sex robot? Although its global anthropomorphism, based on its hyper-realism, confers on it an indisputable reality, we seek to show that its mode of existence is a floating one. Either as an auto-erotic device whose role is to close the subject’s body on itself, even more elaborately than a sex toy or a sex machine does. Or as a fetish, when it comes to a sex doll deprived of its genitals, a mute, a-sexual figure that returns the male subject, in particular, to his unreachable and therefore untouchable female daydreams. Or, finally, as a transitional object, touchable, treatable, and comforting, but one that places the subject in an area of illusion where, in Winnicott’s very terms, the subject is in danger of dementia.

Bertrand Tondu
Explaining Before or After Acting? How the Timing of Self-Explanations Affects User Perception of Robot Behavior

Explanations are a useful tool to improve human-robot interaction and the topic of what a good explanation should entail has received much attention. While a robot’s behavior can be justified upon request after its execution, the intention to act can also be signaled by a robot prior to the execution. In this paper we report results from a pre-registered study on the effects of a social robot proactively giving a self-explanation before vs. after the execution of an undesirable behavior. Contrary to our expectations we found that explaining a behavior before its execution did not yield positive effects on the users’ perception of the robot or the behavior. Instead, the robot’s behavior was perceived as less desirable when explained before the execution rather than afterwards. Exploratory analyses further revealed that even though participants felt less uncertain about what was going to happen next, they also felt less in control, had lower trust and lower contact intentions with a robot that explained before it acted.

Sonja Stange, Stefan Kopp
Human vs Robot Lie Detector: Better Working as a Team?

Human interaction often entails lies. Understanding when a partner is being deceitful is an important social skill that robots, too, will need in order to properly navigate social exchanges. In this work, we investigate how good human observers are at detecting false claims and which features they base their judgment on. Moreover, we compare their performance with that of a lie-detection algorithm developed for the robot iCub and based solely on pupillometry. We ran an online survey asking participants to classify as truthful or deceptive 20 videos of individuals describing complex drawings to iCub, either correctly or untruthfully. They also had to rate their confidence and provide a written motivation for each classification. Respondents achieved an average accuracy of 53.9%, with a higher score on detecting lies (55.4%) than true statements (52.8%). They also performed better and more confidently on the videos iCub failed to classify than on the ones iCub correctly detected. Interestingly, the human observers listed a wide range of behavioral features as means to decide whether a speaker was lying, while the robot’s judgment was driven by pupil size alone. This suggests that an avenue for improving lie detection could be a joint effort between humans and robots, where human sensitivity to subtle behavioral cues complements the quantitative assessment of physiological signals available to the robot. Finally, based on the reported motivations, we speculate and give hints on how the lie-detection field should evolve in the future, aiming at portability to real-world interactions.

Dario Pasquali, Davide Gaggero, Gualtiero Volpe, Francesco Rea, Alessandra Sciutti
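The joint human-robot judgment suggested by the abstract could take the form of a simple opinion pool over the two probability estimates. This is purely a hypothetical sketch of the idea (the fusion rule, weight, and numbers are invented, not the authors' method):

```python
# Fuse a human observer's judgment with a pupillometry-based classifier
# score; both inputs are probabilities that the speaker is lying.
def fuse_judgments(p_human_lie, p_robot_lie, w_human=0.5):
    # Linear opinion pool: a weighted average of the two estimates,
    # thresholded at 0.5 for the final decision.
    p = w_human * p_human_lie + (1 - w_human) * p_robot_lie
    return "lie" if p > 0.5 else "truth"

# The robot's physiological cue can tip an uncertain human judgment.
decision = fuse_judgments(0.4, 0.8)
```

A weight favoring whichever judge is more reliable for the current cue type (behavioral vs. physiological) would be a natural refinement.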
Appropriate Robot Reactions to Erroneous Situations in Human-Robot Collaboration

The capability of executing proper recovery strategies for different types of error situations is important for collaborative robots deployed in everyday life. To understand people’s perceptions of effective robot reactions to robotic failure, we conducted an online study in which we asked participants to rate seven different robot reactions to three different types of error situations. An analysis of the results shows that, in general, robots that employ error recovery strategies are rated significantly better than those that ignore the error situation. The strategy in which the robot expresses regret for its own errors had the highest average rating in terms of anthropomorphism, while the strategy in which the robot apologises for its errors had the highest average likeability and perceived intelligence ratings. Further analyses show that the recovery plans are rated better when applied to planning errors than to social norm violations. Finally, we found that users’ gender and personality traits significantly affect their ratings of error handling strategies, which suggests that personally tailored error handling strategies might work best for future collaborative robots.

Dito Eka Cahya, Manuel Giuliani

Intuitive Interaction for Human-Robot Collaboration

Frontmatter
Simplified Robot Programming Framework for a Gearbox Assembly Application

In this paper, we present a framework for multi-modal human-robot interaction (HRI) in which complex robotic tasks can be programmed using a skill-based approach and intuitive HRI modalities. The approach is demonstrated in a gearbox assembly application in a realistic industrial environment. Our system includes mobile and static robots for actuation, 2D and 3D cameras for sensing, and GUIs and spatial and see-through augmented reality for HRI.

Nikhil Somani, Lee Li Zhen, Srinivasan Lakshminarayanan, Rukshan Hettiarachchi, Pang Wee-Ching, Gerald Seet Gim Lee, Domenico Campolo
Gaze Assisted Visual Grounding

There has been an increasing demand for visual grounding in various human-robot interaction applications. However, accuracy is often limited by the size of the dataset that can be collected, which is a challenge. Hence, this paper proposes using the natural, implicit input modality of human gaze to assist and improve the visual grounding of human instructions to robotic agents. To demonstrate the capability, mechanical gear objects are used. To achieve this, we utilized a transformer-based text classifier and a small corpus to develop a baseline phrase grounding model. We evaluate this phrase grounding system with and without gaze input to demonstrate the improvement. Gaze information (obtained from a Microsoft HoloLens 2) improves accuracy from 26% to 65%, leading to more efficient human-robot collaboration, and the approach is applicable to hands-free scenarios. It is also data-efficient, as it requires only a small training dataset to ground natural language referring expressions.

Kritika Johari, Christopher Tay Zi Tong, Vigneshwaran Subbaraju, Jung-Jae Kim, U-Xuan Tan
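One plausible way to combine the two modalities described in the abstract is late fusion of the text classifier's scores with a gaze-fixation distribution. The sketch below is a hypothetical illustration (the object labels, scores, and fusion rule are invented, not the paper's model):

```python
import numpy as np

# Candidate objects on the workbench (hypothetical labels).
objects = ["small gear", "large gear", "gear shaft", "bearing"]

# Stand-in scores from a phrase-grounding text classifier for the
# instruction "pick up the small one" (made-up numbers).
text_scores = np.array([0.35, 0.30, 0.20, 0.15])

# Fraction of recent gaze fixation time on each object, e.g. from a
# head-mounted eye tracker (made-up numbers).
gaze_scores = np.array([0.70, 0.10, 0.10, 0.10])

def fuse(text, gaze, alpha=0.5):
    # Late fusion: geometric-style weighted product of the two
    # distributions, renormalised to sum to one.
    s = (text ** (1 - alpha)) * (gaze ** alpha)
    return s / s.sum()

posterior = fuse(text_scores, gaze_scores)
target = objects[int(np.argmax(posterior))]
```

Here the ambiguous referring expression is disambiguated by where the user was looking; `alpha` trades off trust in gaze versus language.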
Towards a Programming-Free Robotic System for Assembly Tasks Using Intuitive Interactions

Although industrial robots are successfully deployed in many assembly processes, high-mix, low-volume applications are still difficult to automate, as they involve small batches of frequently changing parts. Setting up a robotic system for these tasks requires repeated re-programming by expert users, incurring extra time and costs. In this paper, we present a solution which enables a robot to learn new objects and new tasks from non-expert users without the need for programming. The use case presented here is the assembly of a gearbox mechanism. In the proposed solution, first, the robot can autonomously register new objects using a visual exploration routine, and train a deep learning model for object detection accordingly. Secondly, the user can teach new tasks to the system via visual demonstration in a natural manner. Finally, using multimodal perception from RGB-D (color and depth) cameras and a tactile sensor, the robot can execute the taught tasks with adaptation to changing configurations. Depending on the task requirements, it can also activate human-robot collaboration capabilities. In summary, these three main modules enable any non-expert user to configure a robot for new applications in a fast and intuitive way.

Nicolas Gauthier, Wenyu Liang, Qianli Xu, Fen Fang, Liyuan Li, Ruihan Gao, Yan Wu, Joo Hwee Lim
Controlling Industrial Robots with High-Level Verbal Commands

Industrial robots today are still mostly pre-programmed to perform a specific task. Despite previous academic research on human-robot interaction, adopting such systems in industrial settings is not trivial and has rarely been done. In this paper, we introduce a robotic system that we control with high-level verbal commands, leveraging some of the latest neural approaches to language understanding and a cognitive architecture for goal-directed but reactive execution. We show that a large-scale pre-trained language model can be effectively fine-tuned to translate verbal instructions into robot tasks, outperforming other semantic parsing methods, and that our system is capable of handling, through dialogue, a variety of exceptions that occur during human-robot interaction, including unknown tasks, user interruptions, and changes in the world state.

Dongkyu Choi, Wei Shi, Ying Siu Liang, Kheng Hui Yeo, Jung-Jae Kim

Safe and Ethical Interaction

Frontmatter
Why We Need Emotional Intelligence in the Design of Autonomous Social Robots and How Confucian Moral Sentimentalism Can Help

This paper argues for the need to develop emotion in social robots to enable them to become artificial moral agents. The paper considers four dimensions of this issue: what, why, which, and how. The main thesis is that we need to build not just emotional intelligence, but also ersatz emotions, in autonomous social robots. Moral sentimentalism and moral functionalism are employed as the theoretical models. However, this paper argues that the popularly endorsed moral sentiment empathy is the wrong model to implement in social robots. In its stead, I propose the four moral sentiments (commiseration, shame/disgust, respect and deference, and the sense of right and wrong) in Confucian moral sentimentalism as our starting point for the top-down affective structure of robot design.

JeeLoo Liu
How to Tune Your Draggin’: Can Body Language Mitigate Face Threat in Robotic Noncompliance?

When social robots communicate moral norms, such as when rejecting inappropriate commands, humans expect them to do so with appropriate tact. Humans use a variety of strategies to carefully tune their harshness, including variations in phrasing and body language. In this work, we experimentally investigate how robots may similarly use variations in body language to complement changes in the phrasing of moral language.

Aidan Naughton, Tom Williams
Migratable AI: Investigating Users’ Affect on Identity and Information Migration of a Conversational AI Agent

Conversational AI agents are becoming ubiquitous and provide assistance to us in our everyday activities. In recent years, researchers have explored the migration of these agents across different embodiments in order to maintain the continuity of the task and improve user experience. In this paper, we investigate users’ affective responses under different configurations of the migration parameters. We present a 2 x 2 between-subjects study in a task-based scenario using information migration and identity migration as parameters. We outline the affect processing pipeline applied to the video footage collected during the study and report users’ responses in each condition. Our results show that users reported the highest joy and were most surprised when both the information and the identity were migrated, and reported the most anger when the information was migrated without the identity of their agent.

Ravi Tejwani, Boris Katz, Cynthia Breazeal
The Self-Evaluation Maintenance Model in Human-Robot Interaction: A Conceptual Replication

Understanding human-robot social comparison is critical for creating psychologically safe robots (i.e., robots that do not cause psychological discomfort). However, there has been limited research examining social comparison processes in human-robot interaction (HRI). We aimed to conceptually replicate prior research suggesting that the Self-Evaluation Maintenance (SEM) model of social comparison applies to HRI. In short, the SEM model describes the mechanisms in which others can impact one’s self-evaluation. We applied the model to an online presentation of a humanoid robot, RUDY. We predicted that task relevance would moderate the relationship between the robot’s performance level and participant evaluations of the robot. When RUDY engaged in a low-relevance task (guessing someone’s age), participants would evaluate RUDY accurately (i.e., they would rate RUDY more positively when it performed well than when it performed poorly). However, when RUDY engaged in a high-relevance task (understanding how people feel), participants would evaluate RUDY inaccurately (i.e., they would rate RUDY negatively regardless of its actual performance). Contrary to our hypothesis, we found that participants in both the high- and low-relevance conditions evaluated RUDY accurately. Our results suggest that SEM effects may not generalize to all types of tasks and robots. A “highly relevant” task might mean something different depending on the exact nature of the human-robot relationship. Given the inconsistency between these findings and past research, discerning the boundary conditions for SEM effects may be crucial for developing psychologically safe robots.

Mira E. Gruber, P. A. Hancock
Birds of a Feather Flock Together: A Study of Status Homophily in HRI

Homophily, a person’s bias for having ties with people who are socially similar to themselves, plays a vital role in creating social connections between people. Studying homophily in human-robot interactions can provide valuable insights for improving those interactions. In this paper, we investigate whether shared interests have a positive effect on human-robot interaction similar to the positive impact they have on human-human interaction. In particular, we explore whether sharing similar interests can affect trust. The experiment involved two NAO robots, each giving a differing speech. Each participant’s national origin was collected in a pre-questionnaire, and during the sessions, one of the robots’ topics was either personalized to that national origin or not. Since one robot shared a familiar topic, we expected to observe bonding between participants and that robot. We gathered and analyzed data from a post-questionnaire; the results support our hypotheses. We conclude that homophily plays a significant role in human-robot interaction, affecting trust in a robot partner.

Roya Salek Shahrezaie, Bashira Akter Anima, David Feil-Seifer

Communication

Frontmatter
“Space Agency”: A “Strong Concept” for Designing Socially Interactive, Robotic Environments

What if our surrounding built environment could understand our emotions, predict our needs, and otherwise assist us, both physically and socially? What if we could interact with private and public spaces as if these were our friends, partners, and companions — “Space Agents”? “Space Agents” are here defined as robotic, smart built environments designed to be perceived or interacted with as socially intelligent agents. In this paper, we consider Space Agency both as a “Strong Concept” (a category of generative, intermediate-level design knowledge) and as a new research field of “socially interactive smart built environments” for the Social Robotics, HAI, and HCI communities. “Space Agency” is examined with respect to previous empirical and theoretical work in HCI and architecture, as well as through our own recent work on a socially adaptive wall. We conclude by advancing the generalizability, novelty, and substantivity of “Space Agency” as a Strong Concept, abstracted beyond specific design instances, which designers and researchers can in turn use to ideate and generate new design instances of social robots.

Yixiao Wang, Keith Evan Green
Early Prediction of Student Engagement-Related Events from Facial and Contextual Features

Intelligent tutoring systems have great potential in personalizing the educational experience by processing some key features from the user and educational task to optimize learning, engagement, or other performance measures. This paper presents an approach that uses a combination of facial features from the user of an educational app and contextual features about the progress of the task to predict key events related to user engagement. Our approach trains Gaussian Mixture Models from automatically processed screen-capture videos and propagates the probability of events over the course of an activity. Results show the advantage of including contextual features in addition to facial features when predicting these engagement-related events, which can be used to intervene appropriately during an educational activity.

Roshni Kaushik, Reid Simmons
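The core idea of the abstract — fitting per-event Gaussian models to facial and contextual features and propagating event probabilities over time — can be sketched with a diagonal-Gaussian Bayes filter. This is a simplified stand-in under invented assumptions (feature names, numbers, and single-component Gaussians instead of the paper's mixture models):

```python
import numpy as np

# Toy features per time step: [gaze_on_screen, smile_intensity, task_progress].
# Fit one diagonal Gaussian per event class from labeled windows, then
# propagate the class posterior over a stream of new observations.

def fit_gaussian(X):
    mu = X.mean(axis=0)
    var = X.var(axis=0) + 1e-3        # diagonal covariance, small floor
    return mu, var

def log_lik(x, mu, var):
    return -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))

rng = np.random.default_rng(1)
engaged = rng.normal([0.9, 0.6, 0.5], 0.1, size=(50, 3))      # training data
disengaged = rng.normal([0.2, 0.1, 0.5], 0.1, size=(50, 3))
models = [fit_gaussian(engaged), fit_gaussian(disengaged)]

log_post = np.log(np.array([0.5, 0.5]))   # uniform prior over the two events
for x in rng.normal([0.85, 0.55, 0.6], 0.1, size=(10, 3)):    # "engaged" stream
    log_post += np.array([log_lik(x, m, v) for m, v in models])
    log_post -= np.logaddexp.reduce(log_post)                 # renormalise

p_engaged = float(np.exp(log_post[0]))
```

Working in log space avoids underflow as evidence accumulates; a full system would add more mixture components and a transition model between events.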
Common Reality: An Interface of Human-Robot Communication and Mutual Understanding

An interface that can support effective and comprehensive mutual understanding is critical for human-robot interaction. This paper presents a novel human-robot interaction interface that enables humans and robots to interact through a shared mutual understanding of the context. The interface superimposes robot-centered reality and human-centered reality on the working space to construct a mutual understanding environment. The common-reality interface enables humans to communicate with robots through speech and immersive touch. The mutual understanding is constructed from the user’s commands, the localization and recognition of objects, object semantics, and augmented trajectories. The user’s vocal commands are interpreted into formal logic, and finger touches are detected and represented as coordinates. Real-world experiments demonstrate the effectiveness of the proposed interface.

Fujian Yan, Vinod Namboodiri, Hongsheng He
Autonomous Group Detection, Delineation, and Selection for Human-Agent Interaction

If a human and a robot team need to approach a specific group to make an announcement or delivery, how will the human describe which group to approach, and how will the robot approach the group? The robot will need to take a relatively arbitrary description of a group, identify that group from onboard sensors, and accurately approach the correct group. This task requires the robot to reason over and delineate individuals and groups from other individuals and groups. We ran a study on how people describe groups for delineation and identified the features people are most likely to use. We then present a framework that allows an agent to detect, delineate, and select a given social group from the context of a description. We also present a group detection algorithm that runs on a mobile platform in real time, and we provide a formalization of a Group Selection Problem.

Ben Wright, J. Malcolm McCurry, Wallace Lawson, J. Gregory Trafton
A Framework of Controlled Robot Language for Reliable Human-Robot Collaboration

Effective and efficient communication is critical for human-robot collaboration and human-agent teaming. This paper presents the design of a Controlled Robot Language (CRL) and its formal grammar for instruction interpretation and automated robot planning. The CRL framework defines a formal language domain that deterministically maps linguistic commands to logical semantic expressions. Compared to Controlled Natural Language (CNL), which aims at general knowledge representation, CRL expressions are specifically designed to parse human instructions for automated robot planning. The grammar of CRL is developed in accordance with the IEEE CORA ontology; it covers a large portion of the formal English domain and accepts a wide range of intuitive instructions. For sentences outside the grammar’s coverage, a CRL checker detects linguistic patterns, which a CRL translator then processes to recover an equivalent expression in the CRL grammar. The final output is a formal semantic representation in first-order logic over large discourse. The CRL framework was evaluated on various corpora, where it outperformed CNL in balancing coverage and specificity.

Dang Tran, Fujian Yan, Yimesker Yihun, Jindong Tan, Hongsheng He
Age-Related Differences in the Perception of Eye-Gaze from a Social Robot

Sensitivity to deictic gaze declines naturally with age and often results in reduced social perception. Thus, the increasing efforts to develop social robots that assist older adults in daily life tasks need to consider the effects of aging. In this context, as non-verbal cues such as deictic gaze are important for natural communication in human-robot interaction, this paper investigates the performance of older adults, compared to younger adults, during a controlled online visual search task inspired by daily life activities, while assisted by a social robot. This paper also examines age-related differences in social perception. Our results showed a significant facilitation effect of head movement representing deictic gaze from a Pepper robot on task performance. This facilitation effect was not significantly different between the age groups. However, social perception of the robot was less influenced by its deictic gaze behavior in older adults than in younger adults. This line of research may ultimately help inform the design of adaptive non-verbal cues from social robots for a wide range of end users.

Lucas Morillo-Mendez, Martien G. S. Schrooten, Amy Loutfi, Oscar Martinez Mozos
Iterative Design of an Emotive Voice for the Tabletop Robot Haru

Designing a voice for a social robot is particularly challenging because the voice needs to convincingly convey a target personality while maintaining rich, emotive capabilities in order to foster the development of bonds with humans. In this paper, we describe the ongoing design and implementation process of a voice for a social robot. To aid in our design and analysis, we identify three desirable characteristics for its voice: 1. convincingness, 2. emotiveness, and 3. consistency. We then present a preliminary study that investigates convincingness by comparing samples from human voice talents and eliciting human judgements, through surveys, on a range of characteristics related to convincingness, the emotions conveyed, and impressions of the overall consistency of the voice. Finally, we discuss the implications of the survey findings for designing a voice for a social robot.

Eric Nichols, Sarah Rose Siskind, Waki Kamino, Selma Šabanović, Randy Gomez
Designing Nudge Agents that Promote Human Altruism

Previous studies have found that nudging is key to promoting altruism in human-human interaction. However, in social robotics, there is still a lack of studies confirming the effect of nudging on altruism. In this paper, we apply two nudge mechanisms, peak-end and multiple viewpoints, to a video stimulus performed by social robots (virtual agents) to see whether a subtle change in the stimulus can promote human altruism. An experiment was conducted online through crowdsourcing with 136 participants. The results show that participants who watched the version with the peak part placed at the end of the video performed better at the Dictator game, which means that the peak-end nudge mechanism actually promoted human altruism.

Chenlin Hang, Tetsuo Ono, Seiji Yamada
Perceptions of Quantitative and Affective Meaning from Humanoid Robot Hand Gestures

People use their hands in a variety of ways to communicate information along with speech during face-to-face conversation. Humanoid robots designed to converse with people need to be able to use their hands in similar ways, both to increase the naturalness of the interaction and to communicate additional information in the same way people do. However, there are few studies of the particular meanings that people derive from robot hand gestures, particularly for more abstract gestures such as so-called metaphoric gestures that may be used to communicate quantitative or affective information. We conducted an exhaustive study of the 51 hand gestures built into a commercial humanoid robot to determine the quantitative and affective meaning that people derive from observing them without accompanying speech. We find that hypotheses relating gesture envelope parameters (e.g., height, distance from body) to metaphorically corresponding quantitative and affective concepts are largely supported.

Timothy Bickmore, Prasanth Murali, Yunus Terzioglu, Shuo Zhou
Evaluation of a Humanoid Robot’s Emotional Gestures for Transparent Interaction

Effective and successful interactions between robots and people are possible only when both are able to infer the other’s intentions, beliefs, and goals. In particular, robots’ mental models need to be transparent in order to be accepted by people and to facilitate collaboration between the involved parties. In this study, we focus on investigating how to create legible emotional robot behaviours that make the robot’s decision-making process more transparent to people. In particular, we used emotions to express the robot’s internal status and feedback during an interactive learning process. We involved 28 participants in an online study in which they rated the robot’s behaviours, designed in terms of colours, icons, movements, and gestures, according to the perceived intention and emotions.

Alessandra Rossi, Marcus M. Scheunemann, Gianluca L’Arco, Silvia Rossi

Rehabilitation and Therapy

Frontmatter
Social Robots for Older Adults with Dementia: A Narrative Review on Challenges & Future Directions

Worldwide, approximately 50 million people live with Alzheimer’s disease or other dementias, and there are nearly 10 million new cases every year. Social robots are a promising approach to supplement human caregivers in dementia care. In this narrative review, we reviewed 62 articles to gain insight into the attitudes and perceptions of people with dementia and other stakeholders worldwide towards using social robots to assist dementia care. We then discuss critical factors and challenges found in these studies that influence people’s perceptions, as well as future directions in this field. The primary challenges include cultural factors, users’ limited experience with technologies, methodological challenges underlying qualitative studies, and technological malfunctions in current robot systems. We further suggest several aspects that deserve more consideration in future research, including collaboration with other stakeholders, design for individual or group use in dementia care, an adaptive level of autonomy in a social robot, and long-term human-robot interaction.

Daniel Woods, Fengpei Yuan, Ying-Ling Jao, Xiaopeng Zhao
Using Plantar Pressure and Machine Learning to Automatically Evaluate Strephenopodia for Rehabilitation Exoskeleton: A Pilot Study

Stroke patients often suffer from strephenopodia, which seriously affects their walking ability and rehabilitation. However, lower limb rehabilitation robots lack functions for evaluating and automatically correcting strephenopodia. There is a practical demand for convenient, automatic, and quantitative assessment of the strephenopodia angle, so that the orthopedic strength can be adjusted in time and stroke patients reminded to use their own muscles to perform the movements. In this study, we propose a novel methodology for automatically predicting strephenopodia angles from a plantar pressure system using machine learning. Three machine learning methods were implemented to build a stochastic function mapping from gait features to strephenopodia angles, showing reliable and precise prediction of the strephenopodia angle (coefficient of determination R² ≥ 0.80). Results showed that our method is convenient to implement and outperforms previous methods in accuracy. Therefore, measurements derived from the plantar pressure system are proper estimators of the strephenopodia angle and are beneficial to lower limb rehabilitation exoskeletons for training the stroke population.

Jinjin Nong, Zikang Zhou, Xiaoming Xian, Guowei Huang, Peiwen Li, Longhan Xie
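The abstract above reports an R² score for a learned mapping from gait features to strephenopodia angles, without naming the three methods. As a minimal illustrative stand-in (not the authors' implementation), the snippet below fits an ordinary-least-squares line from a single synthetic gait feature to a synthetic angle and computes the same coefficient of determination; the feature, the angles, and the linear relation are all invented for the example.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of ys = slope * xs + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def r_squared(xs, ys, slope, intercept):
    """Coefficient of determination (R²) of the fitted line."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Synthetic data: a hypothetical gait feature vs. strephenopodia angle
# (degrees), following a known linear relation plus small noise.
feature = list(range(10))
noise = [0.1, -0.1] * 5
angle = [2 * f + 1 + e for f, e in zip(feature, noise)]
slope, intercept = fit_line(feature, angle)
r2 = r_squared(feature, angle, slope, intercept)
```

A real system would replace the single feature with the paper's multi-dimensional gait features and a more expressive regressor, but the R² evaluation step is the same.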
Learning-Based Strategy Design for Robot-Assisted Reminiscence Therapy Based on a Developed Model for People with Dementia

In this paper, robot-assisted Reminiscence Therapy (RT) is studied as a psychosocial intervention for persons with dementia (PwDs). We aim to learn, via reinforcement learning, a conversation strategy for the robot that stimulates the PwD to talk. Specifically, to characterize the stochastic reactions of a PwD to the robot’s actions, a simulation model of a PwD is developed that features the transition probabilities among different PwD states, consisting of response relevance, emotion levels, and confusion conditions. A Q-learning (QL) algorithm is then designed to achieve the best conversation strategy for the robot. The objective is to stimulate the PwD to talk as much as possible while keeping the PwD’s states as positive as possible. In certain conditions, the achieved strategy gives the PwD choices to continue or change the topic, or stop the conversation, so that the PwD has a sense of control that mitigates conversation stress. To achieve this, the standard QL algorithm is revised to deliberately integrate the impact of the PwD’s choices into the Q-value updates. Finally, the simulation results demonstrate the learning convergence and validate the efficacy of the achieved strategy. Tests show that the strategy is capable of duly adjusting the difficulty level of prompts according to the PwD’s states, taking actions (e.g., repeating or explaining the prompt, or comforting) to help the PwD out of bad states, and allowing the PwD to control the conversation tendency when bad states continue.

Fengpei Yuan, Ran Zhang, Dania Bilal, Xiaopeng Zhao
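The learning loop described above can be sketched with standard tabular Q-learning. Everything below is a hypothetical stand-in: the toy PwD simulator, the two-dimensional state (response relevance, emotion level), the action set, and the reward are invented for illustration, and this sketch uses the textbook Q-update rather than the authors' revised one that integrates the PwD's choices.

```python
import random
from collections import defaultdict

ACTIONS = ["simple_prompt", "hard_prompt", "repeat", "comfort", "offer_choice"]

def simulate_pwd(state, action):
    """Toy stand-in for the paper's PwD simulation model: state is
    (response relevance, emotion level), each in {0, 1, 2}.  The
    transition probabilities here are hypothetical."""
    relevance, emotion = state
    if action == "comfort":
        emotion = min(emotion + 1, 2)            # comforting lifts mood
    elif action == "hard_prompt" and emotion == 0:
        relevance = max(relevance - 1, 0)        # hard prompts backfire in bad states
    elif random.random() < 0.6:
        relevance = min(relevance + 1, 2)        # otherwise the PwD often engages
    reward = relevance + emotion                  # talk more, stay positive
    return (relevance, emotion), reward

def q_learning(episodes=300, turns=10, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Standard tabular Q-learning over simulated conversations."""
    random.seed(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        state = (1, 1)                            # neutral starting state
        for _ in range(turns):
            if random.random() < eps:             # epsilon-greedy exploration
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward = simulate_pwd(state, action)
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = nxt
    return Q

Q = q_learning()
```

The paper's revision would further modify the update at states where the PwD is offered a choice; here the choice action is learned like any other.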
Designing a Socially Assistive Robot to Support Older Adults with Low Vision

Socially assistive robots (SARs) have shown great promise in supplementing and augmenting interventions to support the physical and mental well-being of older adults. However, past work has not yet explored the potential of applying SAR to lower the barriers of long-term low vision rehabilitation (LVR) interventions for older adults. In this work, we present a user-informed design process to validate the motivation and identify major design principles for developing SAR for long-term LVR. To evaluate user-perceived usefulness and acceptance of SAR in this novel domain, we performed a two-phase study through user surveys. First, a group (n = 38) of older adults with LV completed a mailed-in survey. Next, a new group (n = 13) of older adults with LV saw an in-clinic SAR demo and then completed the survey. The study participants reported that SARs would be useful, trustworthy, easy to use, and enjoyable while providing socio-emotional support to augment LVR interventions. The in-clinic demo group reported significantly more positive opinions of the SAR’s capabilities than did the baseline survey group that used mailed-in forms without the SAR demo.

Emily Zhou, Zhonghao Shi, Xiaoyang Qiao, Maja J. Matarić, Ava K. Bittner
A Preliminary Study of Robotic Media Effects on Older Adults with Mild Cognitive Impairment in Solitude

We investigate, in a long-term study, how older adults with mild cognitive impairment can be affected by a robot companion at home. For this purpose, we selected two participants living alone and set up the robotic communication medium RoBoHoN at their homes so that the participants could interact with it at any time for one and a half to three months. After the trial, we conducted interviews to collect data regarding their feelings about, attachment to, and relationship with the robot. We also interviewed their family members. Our exploratory research revealed that the participants had developed various ways of adapting to their new life with the companion. Based on the interview results, we identified their mental stability as a media effect. The physical, psychological, and social aspects of the effects were analyzed and discussed for a better understanding of the issues and challenges to be addressed in further studies.

Ryuji Yamazaki, Shuichi Nishio, Yuma Nagata, Yuto Satake, Maki Suzuki, Miyae Yamakawa, David Figueroa, Manabu Ikeda, Hiroshi Ishiguro
Psychiatrists’ Views on Robot-Assisted Diagnostics of Peripartum Depression

Social robots are rising to prominence as tools in healthcare and mental healthcare. In this paper, we investigate robot-assisted diagnostics of peripartum depression (PPD) in women. To design robots that are accepted by users and comply with trustworthy Artificial Intelligence principles, we use semi-structured interviews to explore the views of potential stakeholders: psychiatrists. We aim to answer three research questions regarding 1) the usefulness of robots in the diagnosis of PPD, 2) potential ethical issues, and 3) the roles that robots and clinicians may play in the diagnostic process. Results show that psychiatrists are only willing to let robots take minor responsibilities, and feel that robots may be more useful in situations where there is a shortage of clinicians.

Mengyu Zhong, Ayesha Mae Bilal, Fotios C. Papadopoulos, Ginevra Castellano
Social Robots in Care Homes for Older Adults
Observations from Participatory Design Workshops

Evaluations of social robots for older adults in care home environments over the past 20 years have shown mostly positive results. However, many of these studies have been short-term, involved few participants, and been limited to a few countries. Recent evidence indicates that social robots might not work in all settings or for everyone. Therefore, we conducted a participatory workshop with key stakeholders as an attempt to begin to disentangle the many interrelated factors behind a successful implementation. The results showed similarities in preferred embodiment and morphology, differences in behavioural complexity and task performance, and a perhaps surprising lack of interest in emotional support. They further showed that, prior to meeting social robots, older adults living in care homes had relatively little interest in them. Based on these observations, we formulate future research directions.

Sofia Thunberg, Tom Ziemke
Robot-Assisted Training with Swedish and Israeli Older Adults

This paper explores robot-assisted training in a cross-cultural context with older adults. We performed user studies with 28 older adults with two different assistive training robots: an adaptive robot, and a non-adaptive robot, in two countries (Sweden and Israel). In the adaptive robot group, the robot suggested playing music and decreased the number of repetitions based on the participant’s level of engagement. We analyzed the facial expressions of the participants in these two groups. Results revealed that older adults in the adaptive robot group showed more varying facial expressions. The adaptive robot created a distraction for the older adults since it talked more than the non-adaptive robot. This result suggests that a robot designed for older adults should utilize the right amount of communication capabilities. The Israeli participants expressed more positive attitudes towards robots and rated the perceived usefulness of the robot higher than the Swedish participants.

Neziha Akalin, Maya Krakovsky, Omri Avioz-Sarig, Amy Loutfi, Yael Edan
Virtual Social Robot Enhances the Social Skills of Children with HFA

Social skills are the skills that humans use to communicate with each other verbally and non-verbally. A deficit in social skills is a core symptom of children with autism spectrum disorder (ASD). Physical social robots and virtual environments have become popular training tools for children with ASD in recent years. The Jammo-VRobot environment is a virtual desktop environment that employs a 3D virtual humanoid robot (Jammo VRobot) to enhance the social skills of children with high-functioning autism (HFA) through a social skills training program guided by a parent or a teacher. The training program targets three social skills: imitation, emotion recognition and expression, and intransitive gesture. The evaluation was conducted mostly online, with some sessions on-site, and included children with HFA (aged 4–12 years). The experimental sessions reveal encouraging results showing that the Jammo-VRobot environment helps in training and enhancing the three target skills of the participants.

Maha Abdelmohsen, Yasmine Arafa
Questioning Items’ Link in Users’ Perception of a Training Robot for Elders

Socially assistive robots are becoming more common in modern society. These robots can accomplish a variety of tasks for people exposed to isolation and difficulties, the largest group among whom are elderly people, for whom robotics can play new roles. Elderly people usually face the greatest technological gap, and it is worth evaluating their perceptions when dealing with robots. To this end, the present work addresses the interaction of elderly people with a humanoid robot during a training session. The analysis was carried out by means of a questionnaire built on four key factors: Motivation, Usability, Likability, and Sociability. The results can contribute to the design and development of social interaction between robots and humans in training contexts, enhancing the effectiveness of human-robot interaction.

Emanuele Antonioni, Piercosma Bisconti, Nicoletta Massa, Daniele Nardi, Vincenzo Suriani

Teleoperation and Industrial Applications

Frontmatter
Teleoperating Multi-robot Furniture
Exploring Methods to Remotely Arrange Multiple Furniture Robots Deployed in a Multi-use Space

This paper presents the first study evaluating methods for remote teleoperation of multi-robot furniture in realistic applications. In a within-subjects user study (N = 12), we tested two robot control methods designed to work at different levels of abstraction in a custom web-based user interface (UI): clicking and dragging to indicate a desired position and orientation for a single ChairBot (“set goal”), and selecting from a list of preset arrangements for multiple ChairBots (“select arrangement”). Participants were asked to use this UI to rearrange ChairBots in a living room across three birthday-themed arrangement prompts. We found overlapping preferences for how distinct participants set up the room for particular party phases, and the novel web-based multi-ChairBot controller design received high experience and usability ratings. Self-reported survey responses suggest that our design is easy to learn and usable. Our work provides insight for future control design and research on multi-robot furniture systems.

Brett Stoddard, Mark-Robin Giolando, Heather Knight
Human-Robot Coordination in Agile Robot Manipulation

With the practical demand for flexible and adaptive robot manipulation skills in various environment settings, more challenges must be tackled to enable robots to respond validly in task handling. This paper discusses methods to achieve agile robot manipulation tasks by combining human-robot teleoperation, robot perception, knowledge-based robot programming, robot motion planning, and robot skill learning. Teleoperation serves as a typical Human-Robot Interaction (HRI) manner that allows the human user to guide the robot’s behavior directly. Robot automation, including sensor perception (object detection, pose estimation, etc.), motion planning, and motion control, can handle well-defined problems, but it lacks general understanding capability and struggles with tasks that are not fully defined. An agile robotic system should have a knowledge database defining the skill sets required for the robot to handle various tasks. Meanwhile, the system should be able to take human assistance inputs through HRI when the robot is stuck. Moreover, the system should be able to pick up a new skill set with each human knowledge input. Methodology discussion is the main scope of this paper, and preliminary experiments on telemanipulation with a UR5 robot are demonstrated to show flexible robot guidance through HRI inputs. Future work will aim to add perception, motion planning, and skill pick-up modules toward a more agile solution for handling a variety of tasks.

Qilong Yuan, Ivan Sim Wan Leong
Partial-Map-Based Monte Carlo Localization in Architectural Floor Plans

Modern mobile robots demand robust localization in complex environments. The most commonly used 2D LiDAR localization systems for mobile robots require maps constructed by 2D SLAM. Such systems do not cope well with dynamic environments and incur high deployment costs when robots are moved to a new environment, as they require the reconstruction of a map for each new place. A floor plan, by contrast, is indispensable for an indoor environment: it typically represents essential structures such as walls, corners, and pillars that humans use to navigate, and this information turns out to be crucial for robot localization. In this paper, we propose an approach for 2D LiDAR localization in an architectural floor plan. We use a partial simultaneous localization and mapping (PSLAM) algorithm to generate a map while concurrently aligning it to the floor plan using the Monte Carlo Localization (MCL) method. Real-world experiments with our proposed method result in robust robot localization; the algorithm is even evaluated on a floor plan with large discrepancies from the real world. Our algorithm demonstrates its capability to localize in real-time applications.

Chee Leong Chan, Jun Li, Jian Le Chan, Zhengguo Li, Kong Wah Wan
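One MCL iteration of the kind used above to align the robot against the floor plan can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the translational motion model, the noise level, and the likelihood function (which in the paper would score a LiDAR scan against the floor plan, and here is a hypothetical function peaked at the true pose) are all assumptions.

```python
import math
import random

def mcl_step(particles, control, scan, likelihood, motion_noise=0.1):
    """One Monte Carlo Localization iteration: noisy motion update,
    measurement weighting, and low-variance resampling."""
    dx, dy = control
    # 1. Motion update: shift each particle by the control plus Gaussian noise.
    moved = [(x + dx + random.gauss(0, motion_noise),
              y + dy + random.gauss(0, motion_noise))
             for x, y in particles]
    # 2. Measurement update: weight particles by how well the scan
    #    matches the map at each hypothesized pose.
    weights = [likelihood(p, scan) for p in moved]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # 3. Low-variance resampling.
    n = len(moved)
    r = random.uniform(0, 1.0 / n)
    c, i, resampled = weights[0], 0, []
    for m in range(n):
        u = r + m / n
        while u > c and i < n - 1:
            i += 1
            c += weights[i]
        resampled.append(moved[i])
    return resampled

# Toy run: a stationary robot, with a hypothetical likelihood peaked at
# the true pose (5, 5) standing in for scan-to-floor-plan matching.
random.seed(1)
particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(300)]
floor_plan_likelihood = lambda p, scan: math.exp(-((p[0] - 5) ** 2 + (p[1] - 5) ** 2))
for _ in range(10):
    particles = mcl_step(particles, (0.0, 0.0), None, floor_plan_likelihood)
```

After a few iterations the particle cloud concentrates around the pose that best explains the measurements, which is the behavior the paper relies on to track the robot against a partially mismatched floor plan.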
A Survey on Object Detection Performance with Different Data Distributions

Detecting objects in a dynamic scene is a critical step for robotic navigation. A mobile robot may need to slow down in the presence of children, the elderly, or dense crowds. A robot’s movement needs to be precise and socially adjustable, especially in a hospital setting. Identifying key objects in a scene can provide important contextual awareness to a robot. Traditional approaches used handcrafted features along with object proposals to detect objects in images. Object detection has made tremendous progress over the past few years thanks to deep learning and convolutional neural networks. Networks such as SSD, YOLO, and Faster R-CNN have made significant improvements over traditional techniques while maintaining real-time inference speed. However, existing datasets used for benchmarking these models tend to contain mainly outdoor images captured with high-quality camera setups, which differ from a robotic vision setting, where a robot moving around a dynamic environment introduces sensor noise, motion blur, and changes in data distribution. In this work, we introduce our custom dataset, collected in a realistic hospital environment and consisting of distinct objects such as hospital beds, tables, and wheelchairs. We also use state-of-the-art object detectors to showcase the current performance and gaps in a robotic vision setting, using our custom CHART dataset and other public datasets.

Ramanpreet Singh Pahwa, Richard Chang, Wang Jie, Sankeerthana Satini, Chandrashekar Viswanathan, Du Yiming, Vernica Jain, Chen Tai Pang, Wan Kong Wah
Design and Development of a Teleoperation System for Affective Tabletop Robot Haru

The experimental tabletop robot Haru, used for affective telepresence research, enables a teleoperator to communicate a variety of information to a remote user through the robotic medium from a distance. However, the robot’s rich communicative modality poses some problems to the teleoperator. Based on their experience of controlling the robot, teleoperators feel the need to be constantly attentive to and engaged with the stream of data from the remote user in order to achieve a seamless and affective interaction. Consequently, teleoperators report feeling fatigued, resulting in a decrease in time using the teleoperation system. In addition, the bulk of the data stream containing information about the remote user poses data privacy concerns. In this paper, we describe the design and development of an improved affective teleoperation system that focuses on privacy, controllability, and mental fatigue. The proposed system enables a teleoperator to maintain the same degree of robot control with a minimal amount of data from the remote user. Moreover, our studies show that the proposed system drastically reduces teleoperation fatigue as shown by the increase in time the system is in use.

Yurii Vasylkiv, Ricardo Ragel, Javier Ponce-Chulani, Luis Merino, Eleanor Sandry, Heike Brock, Keisuke Nakamura, Irani Pourang, Randy Gomez

Physical HRI

Frontmatter
A Boxed Soft Robot Conveying Emotions by Changing Apparent Stiffness of Its Lid

A social robot that conveys its emotions by changing its apparent stiffness is presented herein. A user interacts with the box-shaped robot by pushing its lid. The apparent stiffness of the robot is controlled by an electromagnetic brake installed on the lid, based on the emotional state of the robot. To control the stiffness, we implement two approaches: (i) controlling the reaction force when a user presses the lid; (ii) controlling the temporal restoring behavior when a user releases the lid. The experimental results show the robot’s capability to provide variable apparent stiffness and its potential to elicit emotional responses from users through haptic human-robot interactions.

Hiroya Kawai, Taku Hachisu, Masakazu Hirokawa, Kenji Suzuki
Motion Intention Recognition Based on Air Bladders

Recognition of human motion intention plays an important role in many robotic applications, such as human-assistive exoskeletons and rehabilitation robots. Motion intention recognition (MIR) based on physiological signals is one of the most common and intuitive methods. However, physiological signals are sensitive to environmental disturbances and require complex preparation. In this paper, we propose a novel air bladder-based MIR method, in which the human-robot interaction (HRI) force is measured directly by four air bladders. The air bladders can be installed at the end of a robot to interact with the user’s arm. We validate the linearity and repeatability of the air bladders through comprehensive experiments. In addition, we compare the performance of the proposed air bladder-based MIR method with conventional methods based on force sensors and surface electromyography (sEMG) signals. Experiments show that the proposed method can capture changes in the external force, even when the force changes rapidly. Moreover, the performance of our method is comparable to, and more robust than, that of the sEMG-based MIR method.

Weifeng Wu, Chengqi Lin, Gengliang Lin, Siqi Cai, Longhan Xie
Design and Control of a Seven Degrees-of-Freedom Semi-exoskeleton Upper Limb Robot

Previous studies have shown that a patient’s voluntary participation is one of the key factors in improving rehabilitation effects. End-effector and exoskeleton type robots have been developed to support rehabilitation training at different impedance levels. However, these robots either fail to take the movement of the shoulder girdle into account or suffer from complex and massive shoulder mechanisms. In this paper, we merge the advantages of the end-effector and exoskeleton type robots and propose a simple and effective semi-exoskeleton upper limb robot with seven degrees of freedom to support impedance training of the human shoulder complex and elbow joint. In addition, an admittance control scheme is developed to generate desired movements during training. Experiments on five subjects were conducted to assess the feasibility and performance of the proposed robot. Results show that the proposed robot performs satisfactorily in terms of shoulder kinematic compatibility and human-robot interaction. This study could pave the way for a practical rehabilitation robot for patients with stroke in real life.

Chengqi Lin, Weifeng Wu, Gengliang Lin, Siqi Cai, Longhan Xie
A Novel Center of Mass (CoM) Perception Approach for Lower-Limbs Stroke Rehabilitation

Lower limb rehabilitation robots are of great significance for poststroke patients to regain locomotion ability. However, most rehabilitation robots fail to take the movement of the center of mass (CoM) of the human body into account. Considering that the CoM is an essential index for assessing recovery and improving treatment, we propose a simple, economic, portable, and highly efficient CoM perception approach based on a Kinect camera. This novel method is capable of detecting the displacement and rotation of the CoM in multiple planes. Results of walking tests show that our approach has competitive performance in capturing the variation trends of the CoM compared with a multi-camera motion capture system, especially in directions with large displacement variation. The highly accurate, simple, and low-cost detection of the CoM is a major step toward practical application in the assessment of rehabilitation after stroke.

Youwei Liu, Biao Liu, Zikang Zhou, Siqi Cai, Longhan Xie
Increasing Torso Contact: Comparing Human-Human Relationships and Situations

As relationships between humans and robots continue to grow, the importance of touch interactions between humans and social robots is also growing. However, due to limitations such as robot performance, most of these robots perform touch interaction with specific motions. In human-human touch interaction, the touch method reflects relationships and situations. This study investigates how touch interactions reflect relationships and situations with others, to obtain design guidelines for touch interactions for social robots. We experimentally investigated how participants performed touch interactions with a mannequin. Our participants performed touch interactions in three specific situations (consoling/forgiving/sharing happiness) with a partner at three specific intimacy levels (intimate/acquaintance/stranger), and we analyzed their touch behaviors. When the relationship was intimate, many participants touched the mannequin’s torso in every situation. This torso-touching decreased as the intimacy level with the other reduced, while touching with both hands or just one increased.

Yuya Onishi, Hidenobu Sumioka, Masahiro Shiomi

Children-Robot Interaction

Frontmatter
User Requirements for Developing Robot-Assisted Interventions for Autistic Children

Various benefits are being envisioned for enhancing autism interventions with a robot. But what features should such interventions have if they are to be successful? While there are quite a few papers that describe specific user requirements or needs, a more comprehensive account thereof should help to inform the development of such interventions. We therefore present a literature review on the user requirements for robot-assisted interventions. We report on various themes that emerged from our analysis and discuss how enhancing an intervention with a robot might fulfil those requirements.

Bob R. Schadenberg, Dennis Reidsma, Dirk K. J. Heylen, Vanessa Evers
“iCub Says: Do My Motor Sounds Disturb You?” Motor Sounds and Imitation with a Robot for Children with Autism Spectrum Disorder

In Socially Assistive Robotics, robots are used as social partners for children with Autism Spectrum Disorder. However, it is important to keep in mind that this population often shows auditory hypo- or hypersensitivity, which results in sound-avoiding or sound-seeking behaviors. Robots, owing to their mechanical embodiment, produce motor noises, and here we investigated their impact in two imitation games with iCub on a computer screen. We observed that participants who reported negative responses to unexpected loud noises were better able to focus on a "Simon says" game when the robot's motor noises were canceled.

Pauline Chevalier, Federica Floris, Tiziana Priolo, Davide De Tommaso, Agnieszka Wykowska
Longing to Connect: Could Social Robots Improve Social Bonding, Attachment, and Communication Among Children with Autism and Their Parents?

Social robots are showing promise in assisting children with an autism spectrum disorder to improve social, language, and behavioral skills. However, this emerging technology has yet to find a permanent place in the homes of children with autism in part because the long-term benefits of robot-assisted therapy are still undetermined. We present this autoethnographic case of a 10-year-old boy with autism and his mother to explore the perceived benefits of the long-term, in-home use of a social robot as it relates to the facilitation of parent-child bonding, attachment, communication, and social learning.

Lisa Armstrong, Yeol Huh
Design and Fabrication of a Floating Social Robot: CeB the Social Blimp

Robotic blimps have a wide range of applications, such as monitoring activities in their surroundings, advertising, and performing on stage. They also have remarkable potential as social robots. In this research, a manually operated social robotic blimp was developed, intended to interact with children as a social agent and to attract adults' attention as an entertainer in indoor public environments. Since the appearance of a social robot has a significant impact on its acceptance, we first gathered participants' opinions on the shape of a desirable floating robot. The results revealed that a simple spherical structure adequately draws people's attention. After designing and fabricating the robot, a survey on its social behaviors was distributed to 82 people. The results indicated that participants prefer a medium-sized robot and feel largely safe when the robot operates around them. Furthermore, the results showed that people genuinely appreciate the opportunity to have a mutual conversation with the designed social blimp, and participants believed that it could be an entertaining social flying robot that is easy to interact with. The outcome of this survey will be beneficial in designing and developing a social blimp focused on interacting with children and entertaining people.

Erfan Etesami, Alireza Nemati, Ali F. Meghdari, Shuzhi Sam Ge, Alireza Taheri
Assessment of Engagement and Learning During Child-Robot Interaction Using EEG Signals

Social robots are being increasingly employed for educational purposes, such as second language tutoring. Past studies in Child-Robot Interaction (CRI) have demonstrated the positive effect of an embodied agent on children's engagement and, consequently, their learning. However, these studies commonly use subjective or behavioral metrics of engagement that are measured after the interaction is over. To gain a better understanding of children's engagement with a robot during the learning phase, this study employed objective EEG measures. Two groups of Japanese children participated in a language learning task; one group learned French vocabulary from a storytelling robot while seeing pictures of the target words on a computer screen, and the other group listened to the same story with only the pictures on the screen and without the robot. The engagement level and learning outcome of the children were measured using EEG signals and a post-interaction word recognition test. While no significant difference was observed between the two groups in their test performance, the EEG Engagement Index (β/(θ + α)) showed higher power in central brain regions of the children who learned from the robot. Our findings provide evidence for the role of social presence and engagement in CRI and further shed light on the cognitive mechanisms of language learning in children. Additionally, our study introduces the EEG Engagement Index as a potential metric for future brain-computer interfaces that monitor children's engagement in educational settings in order to adapt robot behavior accordingly.
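The Engagement Index is simply the ratio of beta band power to the sum of theta and alpha band powers. A minimal sketch of computing it from a power spectral density, assuming conventional band edges (4-8, 8-13, 13-30 Hz) that the abstract does not specify:

```python
import numpy as np

def band_power(freqs, psd, lo, hi):
    """Approximate power in [lo, hi) Hz by a Riemann sum over PSD bins."""
    mask = (freqs >= lo) & (freqs < hi)
    df = freqs[1] - freqs[0]          # assumes a uniform frequency grid
    return psd[mask].sum() * df

def engagement_index(freqs, psd):
    """EEG Engagement Index: beta / (theta + alpha).

    Band boundaries here are conventional choices, not taken from the paper.
    """
    theta = band_power(freqs, psd, 4, 8)
    alpha = band_power(freqs, psd, 8, 13)
    beta = band_power(freqs, psd, 13, 30)
    return beta / (theta + alpha)
```

In practice the PSD would come from a spectral estimate (e.g. Welch's method) of the EEG channels over the central region, and the index would be tracked over time to monitor engagement.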

Maryam Alimardani, Stephanie van den Braak, Anne-Lise Jouen, Reiko Matsunaka, Kazuo Hiraki

Social Perception of Robots

Frontmatter
Individuals Expend More Effort to Compete Against Robots Than Humans After Observing Competitive Human–Robot Interactions

In everyday life, we often observe and learn from interactions between other individuals, so-called third-party encounters. As robots are poised to become an increasingly familiar presence in our daily lives, third-party encounters between other people and robots might offer a valuable approach to influencing people's behaviors and attitudes towards robots. Here, we conducted an online experiment in which participants (n = 48) watched videos of human-robot dyads interacting in a cooperative or competitive manner. Following this observation, we measured participants' behavior and attitudes towards the human and robotic agents. First, participants played a game with the agents to measure whether their behavior was affected by the observed encounters. Second, participants' attitudes toward the agents were measured before and after the game. We found that the third-party encounters influenced behavior during the game but not attitudes towards the observed agents. Participants expended more effort towards robots than towards humans, especially when the human and robot agents were framed as competitive in the observation phase. Our study suggests that people's behaviors towards robots can be shaped by the mere observation of third-party encounters between robots and other people.

Rosanne H. Timmerman, Te-Yi Hsieh, Anna Henschel, Ruud Hortensius, Emily S. Cross
Modulating the Intentional Stance: Humanoid Robots, Narrative and Autistic Traits

To enhance collaboration between humans and robots, it might be important to trigger, towards humanoid robots, social cognitive mechanisms similar to those triggered towards humans, such as the adoption of the intentional stance (i.e., explaining an agent's behavior with reference to mental states). This study aimed (1) to measure whether a film modulates participants' tendency to adopt the intentional stance toward a humanoid robot and (2) to investigate whether autistic traits affect this adoption. We administered two subscales of the InStance Test (IST) (i.e., the 'isolated robot' subscale and the 'social robot' subscale) before and after participants watched a film depicting an interaction between a humanoid robot and a human. On the isolated robot subscale, individuals with low autistic traits were more likely to adopt the intentional stance towards the humanoid robot after they watched the film, but there was no effect for individuals with high autistic traits. On the social robot subscale (i.e., when the robot is interacting with a human), individuals with both low and high autistic traits decreased in their adoption of the intentional stance after they watched the film. This suggests that the content of the narrative and an individual's social cognitive abilities affect the degree to which the intentional stance towards a humanoid robot is adopted.

Ziggy O’Reilly, Davide Ghiglino, Nicolas Spatola, Agnieszka Wykowska
The Personality of a Robot. An Adaptation of the HEXACO – 60 as a Tool for HRI

In this paper, we report on a study in which we used an other-report version of the HEXACO–60, a questionnaire designed to assess human personality, to evaluate how people perceive the personality traits of robots. The results showed that a four-factor measurement model fitted the data better than the expected six-factor one and suggested that the domains of the perceived personality structure of robots might differ from those of humans.

Giulia Siri, Serena Marchesi, Agnieszka Wykowska, Carlo Chiorri
Shall I Be Like You? Investigating Robot’s Personalities and Occupational Roles for Personalised HRI

The aim of this work is to understand how individuals' personality differences affect their interactions with robots, considering the robots' expressed personalities and occupational roles. For this purpose, we analysed the link between the degree of extroversion/introversion of the user and that expressed by the robot during two different tasks: a cognitive task (i.e., movie recommendation) and a service task (i.e., bartending). We observed that participants showed a greater preference for a robot with an extroverted attitude in both tasks. The degree of pleasantness of the robot was affected by the users' personality. Moreover, participants preferred an extroverted robot for the occupational categories of Entertainer and Organizer, while they associated an introverted robot with the Producer and Organizer roles.

Mariacarla Staffa, Alessandra Rossi, Benedetta Bucci, Davide Russo, Silvia Rossi
Impact of Social Presence of Humanoid Robots: Does Competence Matter?

An emerging research trend associating social robotics and social-cognitive psychology offers preliminary evidence that the mere presence of humanoid robots may have the same effects as human presence on human performance, provided the robots are anthropomorphized to some extent (attribution of mental states to the robot being present). However, whether these effects also depend on the evaluation potential of the robot remains unclear. Here, we investigated this critical issue in the context of the Stroop task, which allows the estimation of robotic presence effects on participants' reaction times (RTs) to simple and complex stimuli. Participants performed the Stroop task twice while being randomly assigned to one of three conditions: alone and then in the presence of a robot presented as competent versus incompetent on the task at hand ("evaluative" vs. "non-evaluative" robot condition), or systematically alone (control condition). Whereas the presence of the incompetent robot did not change RTs (compared to the control condition), the presence of the competent robot caused longer RTs on both types of Stroop stimuli. Since the robot was exactly the same in both conditions, with the notable exception of its evaluation potential, these findings indicate that the presence of humanoid robots with such a potential may divert attention away from the central task in humans.

Loriane Koelsch, Frédéric Elisei, Ludovic Ferrand, Pierre Chausse, Gérard Bailly, Pascal Huguet

Brief Research Reports

Frontmatter
Designing for Perceived Robot Empathy for Children in Long-Term Care

We describe a mixed-methods approach toward the design and evaluation of social robots that can offer emotional support for children in long-term care environments. Based on the results of a needfinding interview with a local expert, our specific aim was to design a robot that would be perceived as empathetic. An online human-subject study (n = 26) provided preliminary support for the hypothesis that this design goal could be achieved by designing robots to maintain the flow of conversation and ask related follow-up questions to further understand interlocutors' feelings.

Alexandra Bejarano, Olivia Lomax, Peyton Scherschel, Tom Williams
What Happened While I Was Away? Leveraging Visual Transition Techniques to Convey Robot States in Multi-robot Teleoperation

In real-time multi-robot teleoperation, the operator faces the challenge of maintaining sufficient awareness of all robots in a team. We propose a novel approach to supporting operators in instances where they switch between controlling or observing multiple robots in a team. Just as cinema and video games use visual and narrative techniques to support viewers when transitioning between scenes, we argue that multi-robot teleoperation interfaces should likewise leverage this transition time to provide pertinent information. That is, when switching to a new robot, the interface should take the opportunity to bring the operator up to speed, highlighting what happened while they were away, what the current robot states are, and what the specifics of the newly controlled robot are, thus supporting situational awareness. In this paper, we outline this agenda and present our initial exploration and analysis of this informative visual transition.

Stela H. Seo, James E. Young
A Collaborative Robotic Approach for Inspection and Anomaly Detection in Industrial Applications

Inspection and quality assurance are an important step in manufacturing systems, covering both newly manufactured and re-manufactured parts. Currently, there is a heavy reliance on the knowledge of experienced workers to interpret the data from inspection sensors and detect anomalies. Using robots to perform automated inspection becomes challenging in high-mix settings, where the work-pieces to be inspected change frequently and require the robot to be re-programmed. In this paper, we propose a human-robot collaboration approach, where the part of the work involving fixturing, sensor attachment, and work-piece handling is done by the human, whereas data collection, processing, and anomaly detection are done autonomously using AI techniques. Our inspection algorithm is a generic approach using dilated convolutional neural network (DCNN) based multivariate time-series predictive analytics. We demonstrate our approach on a gearbox inspection application, using time-series data streams captured from vibration sensors mounted on the gearbox. We have conducted experiments to demonstrate the effectiveness of the proposed DCNN solution for anomaly detection in a human-robot collaborative assembly system.
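The abstract names the core mechanism: a dilated CNN predicts the sensor series, and large prediction errors flag anomalies. Below is a toy NumPy sketch of a single dilated causal convolution layer and residual-based scoring; the paper's actual architecture, weights, and training procedure are not given, so all details here are illustrative:

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """One dilated causal 1-D convolution layer.

    x: (T, C) multivariate time series; w: (K, C) per-channel kernel.
    The output at time t depends only on x[t - dilation*(K-1) ... t],
    so no future samples leak into the prediction.
    """
    T, C = x.shape
    K = w.shape[0]
    pad = dilation * (K - 1)
    xp = np.vstack([np.zeros((pad, C)), x])     # left-pad for causality
    out = np.zeros((T, C))
    for t in range(T):
        taps = xp[t : t + pad + 1 : dilation]   # K dilated taps ending at t
        out[t] = np.sum(taps * w, axis=0)
    return out

def anomaly_scores(x, predict):
    """Score each step by the one-step-ahead prediction error."""
    pred = predict(x[:-1])                      # model forecasts x[1:]
    return np.linalg.norm(pred - x[1:], axis=1)
```

Stacking such layers with exponentially growing dilations gives the network a long receptive field over the vibration stream; thresholding the scores would then yield the anomaly decision.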

Miaolong Yuan, Amirul Muhammad, Hettiarachchi Rukshan, Daniel Tan, Nikhil Somani
Personalization and Localization to Improve Social Robots’ Behaviors: A Literature Review

Personalization and localization are essential when developing social robots for different sectors, including education, industry, healthcare, and restaurants. This requires adjusting the robot's behavior to an individual's needs, preferences, or personality (personalization) or to social conventions and the country's culture (localization). The current literature presents different models that enable personalization and localization, each with its advantages and drawbacks. This work aims to help researchers in social robotics by reviewing and analyzing papers in this domain. We focus our review on the various technical methods used to make decisions and adapt social robots' non-verbal and verbal skills, including state-of-the-art techniques in artificial intelligence.

Mehdi Hellou, Norina Gasteiger, Ho Seok Ahn
Robot-Generated Mixed Reality Gestures Improve Human-Robot Interaction

We investigate the effectiveness of robot-generated mixed reality gestures. Our findings demonstrate how these gestures increase user effectiveness by decreasing user response time, and that robots can pair long referring expressions with mixed reality gestures without cognitively overloading users.

Nhan Tran, Trevor Grant, Thao Phung, Leanne Hirshfield, Christopher Wickens, Tom Williams
Prioritising Design Features for Companion Robots Aimed at Older Adults: Stakeholder Survey Ranking Results

Companion robots are social robots, often resembling animals, with potential wellbeing benefits for older adults. However, some such devices have failed, possibly due to inappropriate design. Method: Questionnaires were completed by 113 participants at nine health and care events. Participants were predominantly relevant professionals. Participants approached our interaction station, interacted with eight companion robots or alternatives, then completed questionnaires, ranking aesthetic, behaviour, technology, feel, and interaction features and estimating an affordable price. Results: Features ranked highly were: interactive response to vocalisations and touch, huggable size, soft fur, variety of behaviours/sounds, realistic movements, eye contact with large cute eyes, being realistic, familiar, easy to use, and possessing simulated warmth. Participants thought around £225 was affordable. Conclusion: We contribute priority features for stakeholders to inform future developments. In contrast to the unfamiliar embodiment of some devices, stakeholders support familiar, realistic aesthetics, with implications for enhanced acceptability, adoption, and more consistent wellbeing outcomes.

Hannah Bradwell, Rhona Winnington, Serge Thill, Ray B. Jones
Acceptance of Robotic Transportation in Small Workshops: A China-Iran Cross-Cultural Study

In this research, we developed a cost-effective automated guided vehicle (AGV) for small clothing production workshops that ordinary workers in such workplaces can operate. We gathered workers' opinions in multiple workshops about using the robots (which have some social capabilities) and working alongside them. We performed the tests in China and Iran to investigate and compare preferences and priorities in two countries with different cultural backgrounds. We used the UTAUT questionnaire and conducted two-way ANOVA tests considering two independent factors: nationality and gender. The results showed that workers in Iran and China have relatively similar expectations of a social AGV, with no contrast in the mean scores of women and men. We also observed that Iranian participants have fewer concerns about the safety of working around the robot, which may be due to the greater exposure of Chinese workers to automated robots and the potential dangers of sharing an environment with them. Moreover, we observed that female workers of both nationalities feel they need more help to work with the robots than male workers do. We hope that the results presented in this cross-cultural study assist in developing an understanding of the attitudes of workers in small workshops toward automated transportation.

Alireza Nemati, Alireza Taheri, Dongjie Zhao, Ali F. Meghdari, Shuzhi Sam Ge
Reactive Patterns for Human-Robot Object Handovers

Object handovers are one of the most basic physical collaborative tasks in human-robot interaction. They are interesting because they take place in close human-robot vicinity, where the peripersonal spaces of both partners overlap. Thus, a successful and smooth object handover requires communicating intention in terms of the object transition point, the timing of action, and the initiative of giving and receiving. In this paper, we model several reactive patterns extracted from human-robot handover experiments, propose an integrated robotic system implementing these strategies, and evaluate the timing, modality, and human-likeness of its implementation.

Sebastian Meyer zu Borgsen, Sven Wachsmuth
Anthropomorphism and Its Negative Attitudes, Sociability, Animacy, Agency, and Disturbance Requirements for Social Robots: A Pilot Study

A social robot that meets the acceptability requirements of the target end-users presents a significant challenge to robot designers. The design process is often iterative and requires continuous improvement and optimization over time. One key aspect in designing an acceptable social robot is anthropomorphism. Social roboticists have developed assessment tools to evaluate different aspects of the observer's perception. In this study, we evaluated the attitudes of children toward four robots with different degrees of anthropomorphic traits. Questionnaires based on the Negative Attitude toward Robots Scale (NARS) and the Human-Robot Interaction Evaluation Scale (HRIES) were used to acquire the responses of 33 participants. To identify any changes due to the interactions, a pre-test questionnaire was given prior to the interaction with a robot, followed by a post-test questionnaire. Statistical tests were used to analyze the effects of gender, test (i.e., pre-test vs. post-test), and the four robots on the observers' perception. Statistical differences were found between the four robots in the HRIES subscales of Sociability, Animacy, and Disturbance. On the Disturbance subscale, the children's preferences leaned toward the humanoid robot (i.e., Alpha) with moderate anthropomorphic traits. Low to moderate correlations were found between the subscales of NARS and HRIES.

Ahmad Yaser Alhaddad, Asma Mecheter, Mohammed Abdul Wadood, Ali Salem Alsaari, Houssameldin Mohammed, John-John Cabibihan
Intention Signaling for Mobile Social Service Robots - The Example of Plant Watering

If social service robots are to succeed in becoming part of our everyday lives, they need proper signaling of their intentions in order to be predictable. This is especially important when interacting with elderly people, who might be less accepting of technology. In this study we report on such a device, a plant watering robot (PWR) for elderly care centers, which has social signaling capabilities. Signaling is achieved using a social agent (myKeepOn) and a steerable propeller/fan on the deck of a ship-like mobile robot. We conducted an online survey to assess how predictable the robot's behavior is. We found that the social agent was very effective in conveying the robot's turning intentions, while the propeller was less able to do so. When signals from the two sources conflicted in meaning, people were not as confused as expected, and prediction rates remained steady.

Oskar Palinko, Philipp Graf, Lakshadeep Naik, Kevin Lefeuvre, Christian Sønderskov Zarp, Norbert Krüger
Use of Social Robots in the Classroom

Research on using social robots in the classroom is a growing topic with many unstudied or understudied problems. With the capabilities of these robots expanding rapidly, it has become necessary to explore their uses in addressing persistent challenges in education. Rapid advancements in artificial intelligence technology, such as affective computing and natural language processing, make the reality of social robots in schools more possible now than ever. In these early stages of research, there are many questions to be answered about these robots and their potential applications, technological limitations, and ethical uses. Although the use of social robots in the classroom may be in its infancy, many studies help define where the research stands today.

Jordis Blackburn, Cody Blankenship, Fengpei Yuan, Lynn Hodge, Xiaopeng Zhao
Application of Human-Robot Interaction Features to Design and Purchase Processes of Home Robots

The production of home robots, such as robotic vacuum cleaners, currently focuses more on the technology and its engineering than on the needs of people and their interaction with robots. An observation supporting this view is that home robots are not customizable; in other words, buyers cannot select features and build their home robots to order. Stemming from this observation, this paper proposes an approach that starts with a classification of home robot features. This classification concerns robot interaction with humans and the environment, a home in our case. Following the classification, the proposed approach utilizes a new hybrid model based on a build-to-order model and a dynamic eco-strategy explorer model, enabling designers to develop a production line and buyers to customize their home robots with the classified features. Finally, we applied the proposed approach to robotic vacuum cleaners: we developed a feature model for robotic vacuum cleaners, from which we formed a common usage scenario model.

Nur Beril Yapici, Tugkan Tuglular, Nuri Basoglu
Physical Human-Robot Interaction Through Hugs with CASTOR Robot

Hugs play an essential role in social bonding between people. This study evaluates how hug interactions with a robot are perceived. Four hug-release methods were tested with adults: a short-time hug, a long-time hug, a touch-controlled hug, and a pressure-controlled hug. The social robot CASTOR was used in this study; its arms were modified to perform the hugging action, and a pressure sensor was added to its upper back. 12 adults (5 females and 7 males) participated in the study. Results showed a difference in perceived friendliness between the short-time hug and the pressure-controlled hug (p = 0.036), with the pressure-controlled hug rated as more friendly. Regarding perceived naturalness, the touch-controlled hug was more natural compared with the short-time hug (p = 0.047). This study demonstrates the feasibility of implementing CASTOR in hugging interactions.

María Gaitán-Padilla, Juan C. Maldonado-Mejía, Leodanis Fonseca, Maria J. Pinto-Bernal, Diego Casas, Marcela Múnera, Carlos A. Cifuentes
Exploring Communicatory Gestures for Simple Multi-robot Systems

This online study (N = 405) explores the impact of translational (towards, away, sideways) and rotational (spin and circle) motion patterns on the perceived communications of a three-robot group. All gestures were performed relative to a small humanoid figure at two speeds (slow and fast). Three of the gestures strongly predicted communicatory interpretation: sideways and away were seen as scared or fearful, and spin was seen as excited and joyful. Circle had low convergence and was seen as confused or frustrated. Towards, on the other hand, had a bimodal distribution: moving slowly towards was seen as a greeting, whereas moving fast towards was seen as confrontational. The context prompts (party vs. meeting) did not affect participants' interpretations.

Jaden Berger, Alexandra Bacula, Heather Knight
Control of Pneumatic Artificial Muscles with SNN-based Cerebellar-Like Model

Soft robotics technologies have gained growing interest in recent years, enabling various applications from manufacturing to human-robot interaction. The pneumatic artificial muscle (PAM), a typical soft actuator, has been widely applied in soft robots. The compliance and resilience of soft actuators allow soft robots to behave compliantly when interacting with unstructured environments, but the use of soft actuators also introduces nonlinearity and uncertainty. Inspired by the cerebellum's vital role in controlling human movement, a neural network model of the cerebellum based on spiking neural networks (SNNs) is designed. This model is used as a feed-forward controller for a 1-DOF robot arm driven by PAMs. Simulation results show that this cerebellar-based system achieves good performance and increases the system's response speed.
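The abstract does not give the network's equations, but the building block of any such SNN controller is a spiking neuron model. A minimal leaky integrate-and-fire (LIF) neuron sketch follows, with all constants chosen for illustration only (not the paper's parameters):

```python
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau=0.02,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron (Euler integration).

    Integrates tau * dv/dt = -(v - v_rest) + I; emits a spike and
    resets whenever the membrane potential crosses the threshold.
    Returns the membrane trace and the spike-time indices.
    """
    v = v_rest
    trace, spikes = [], []
    for i, current in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + current)
        if v >= v_thresh:
            spikes.append(i)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes
```

In a cerebellar-like feed-forward controller, populations of such neurons would map desired-trajectory and error signals to spike rates, which are then decoded into the pressure commands driving the PAMs.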

Hongbo Zhang, Yunshuang Li, Yipin Guo, Xinyi Chen, Qinyuan Ren
Backmatter
Metadata
Title
Social Robotics
Editors
Dr. Haizhou Li
Prof. Dr. Shuzhi Sam Ge
Yan Wu
Agnieszka Wykowska
Assist. Prof. Hongsheng He
Xiaorui Liu
Prof. Dongyu Li
Jairo Perez-Osorio
Copyright Year
2021
Electronic ISBN
978-3-030-90525-5
Print ISBN
978-3-030-90524-8
DOI
https://doi.org/10.1007/978-3-030-90525-5
