
2024 | Book

Social Robotics

15th International Conference, ICSR 2023, Doha, Qatar, December 3–7, 2023, Proceedings, Part I

Editors: Abdulaziz Al Ali, John-John Cabibihan, Nader Meskin, Silvia Rossi, Wanyue Jiang, Hongsheng He, Shuzhi Sam Ge

Publisher: Springer Nature Singapore

Book Series: Lecture Notes in Computer Science


About this book

The two-volume set LNAI 14453 and 14454 constitutes the refereed post-conference proceedings of the 15th International Conference on Social Robotics, ICSR 2023, held in Doha, Qatar, during December 4–7, 2023.
The 68 revised full papers presented in these proceedings were carefully reviewed and selected from 83 submissions. They deal with topics around the interaction between humans and intelligent robots and the integration of robots into the fabric of society. This year's special topic is "Human-Robot Collaboration: Sea; Air; Land; Space and Cyberspace", focusing on all physical and cyber-physical domains where humans and robots collaborate.

Table of Contents

Frontmatter

Regular Papers

Frontmatter
Virtual Reality Hand Tracking for Immersive Telepresence in Rehabilitative Serious Gaming

Intelligent systems face an increasingly complex array of challenges in the rapidly evolving landscape of human-device interactions. To address these challenges, there arises a pressing need for the development of refined methods of information transfer, capable of capturing subtle nuances such as body language and tonal subtleties. The system presented in this paper is designed to highlight a novel method of refined human-device interactions and its potential impact on the medical domain. Leveraging the Oculus SDK in the Unity Game Engine alongside the Oculus VR headset, our system allows users to interact with virtual objects naturally, mirroring real-world hand movements while simultaneously promoting upper limb rehabilitation through engaging gameplay scenarios. The integration of haptic feedback enriches the immersive experience, enabling users to not only visualize but also feel their virtual interactions. The system accommodates varying levels of mobility, adapting its complexity to individual progress. Furthermore, our VR rehabilitation system has garnered positive outcomes from user assessments, demonstrating the system’s effectiveness and user satisfaction. While limited in addressing severe upper limb impairments, the system’s flexibility allows for modular improvements and broader clinical integration.

Noaman Mazhar, Aya Gaballa, Amit Kumar Pandey, John-John Cabibihan
Human Perception of Emotional Responses to Changes in Auditory Attributes of Humanoid Agents

Human-robot interaction has emerged as an increasingly prominent discourse within the domain of robotic technologies. In this context, the interplay of visual and verbal cues assumes an essential role in shaping user experiences. The research problem of this study revolves around investigating the potential impact of auditory attribute alterations in humanoid agents, namely robots and avatars, on users' emotional responses. The study recruited a participant cohort comprising 14 individuals, aged 18 to 35, to engage in an experimental process of observing avatar videos with distinctive auditory attributes. These attributes encompassed two voice pitches, specifically alto voice and bass voice, as well as two speech styles denoted as frozen style and casual style. Through data collection, a repository of 13,600 data points was amassed from the participants and subsequently subjected to rigorous analysis via the ANOVA methodology. The empirical findings demonstrate that users exhibit sensitive emotional responsiveness when faced with avatar videos characterized by varying auditory attributes. This pilot study establishes a foundational framework poised to guide future research aimed at enhancing user experiences through the deliberate manipulation of auditory attributes inherent in humanoid robots and avatars.

Zhao Zou, Fady Alnajjar, Michael Lwin, Abdullah Al Mahmud, Muhammed Swavaf, Aila Khan, Omar Mubin
Leveraging the RoboMaker Service on AWS Cloud Platform for Marine Drone Digital Twin Construction

Drones and other robotic technologies enable us to explore and study the world without the need for human guidance. This allows us to interact with previously uncharted areas that pose a risk to human safety, including the underwater biosphere. A drone may encounter a variety of problems when operating in such an unpredictable and hazardous environment. In these contexts, creating a digital twin of a marine drone can provide several advantages, such as enhancing the management, maintenance, and performance of the drone through data-driven approaches, benefiting efficiency and safety. In this work, we present the development of a digital twin prototype: a virtual representation of a physical marine drone-type unmanned surface vehicle. ROS, Blender, and the AWS cloud platform were used to create the system. The three-dimensional model of the marine drone was created in Blender, and the AWS RoboMaker service was utilized for simulation testing and possible deployment of the robotic application without managing any infrastructure. The goal is to present the tools and architecture that were developed to collect data in non-invasive ways and generate a marine digital twin.

Mariacarla Staffa, Emanuele Izzo, Paola Barra
Trust Assessment with EEG Signals in Social Human-Robot Interaction

The role of trust in human-robot interaction (HRI) is becoming increasingly important for effective collaboration. Insufficient trust may result in disuse, regardless of the robot’s capabilities, whereas excessive trust can lead to safety issues. While most studies of trust in HRI are based on questionnaires, in this work it is explored how participants’ trust levels can be recognized based on electroencephalogram (EEG) signals. A social scenario was developed where the participants played a guessing game with a robot. Data collection was carried out with subsequent statistical analysis and selection of features as input for different machine learning models. Based on the highest achieved accuracy of 72.64%, the findings indicate the existence of a correlation between trust levels and the EEG data, thus offering a promising avenue for real-time trust assessment during interactions, reducing the reliance on retrospective questionnaires.

Giulio Campagna, Matthias Rehm
Feasibility Study on Eye Gazing in Socially Assistive Robotics: An Intensive Care Unit Scenario

Recently, there has been an increasing interest in the adoption of socially assistive robots to support and alleviate the workload of clinical personnel in hospital settings. This work proposes the adoption of a socially assistive robot in Intensive Care Units to evaluate the criticality scores of bedridden patients. Within this scenario, the human gaze represents a key clinical cue for assessing a patient’s conscious state. In this work, a user study involving 10 participants role-playing 4 levels of consciousness is performed. The collected videos were manually annotated considering 6 gazing directions and an open-source automatic tool was used to extract head pose and eye gazing features. Different feature sets and classification models were compared to find the most appropriate configuration to detect user gaze in this scenario. Results have suggested that the most accurate gazing estimation is obtained when the head pose information is combined with the eye orientation (0.85). Additionally, the framework proposed in this study seems to be user-independent, thereby encouraging the deployment of appropriate robotic solutions in real assistive contexts.

Alessandra Sorrentino, Andrea Magnotta, Laura Fiorini, Giovanni Piccinino, Alessandro Anselmo, Nicola Laurieri, Filippo Cavallo
Clustering Social Touch Gestures for Human-Robot Interaction

Social touch provides a rich non-verbal communication channel between humans and robots. Prior work has identified a set of touch gestures for human-robot interaction and described them with natural language labels (e.g., stroking, patting). Yet, no data exists on the semantic relationships between the touch gestures in users’ minds. To endow robots with touch intelligence, we investigated how people perceive the similarities of social touch labels from the literature. In an online study, 45 participants grouped 36 social touch labels based on their perceived similarities and annotated their groupings with descriptive names. We derived quantitative similarities of the gestures from these groupings and analyzed the similarities using hierarchical clustering. The analysis resulted in 9 clusters of touch gestures formed around the social, emotional, and contact characteristics of the gestures. We discuss the implications of our results for designing and evaluating touch sensing and interactions with social robots.

Ramzi Abou Chahine, Steven Vasquez, Pooyan Fazli, Hasti Seifi
Attainable Digital Embodied Storytelling Using State of the Art Tools, and a Little Touch

How closely can a robot capable of generating non-verbal behavior approximate a human narrator? What are the missing features that limit the naturalness and expressiveness of the robot as a storyteller? In this paper we explore this topic by identifying the key aspects needed to effectively convey the content of a story, and by analysing appropriate methodologies and tools that make it possible to automatically enrich the expressiveness of the generated behavior. Lastly, we explore some modifications intended to narrow the gap between robot and human-like behavior. Demonstration videos reveal that although the communicative capabilities of the robot are appropriate, there is still room for improvement.

Unai Zabala, Alexander Diez, Igor Rodriguez, Agnese Augello, Elena Lazkano
GERT: Transformers for Co-speech Gesture Prediction in Social Robots

Social robots are becoming an important part of our society and should be recognised as viable interaction partners, which includes being perceived as i) animate beings and ii) capable of establishing natural interactions with the user. One method of achieving both objectives is allowing the robot to perform gestures autonomously, which can become problematic when those gestures have to accompany verbal messages. If the robot uses predefined gestures, an issue that needs solving is selecting the most appropriate expression given the robot's speech. In this work, we propose three transformer-based models called GERT, which stands for Gesture-Enhanced Robotics Transformer, that predict the co-speech gestures that best match the robot's utterances. We compared the performance of the three models of different sizes to prove their usability in the gesture prediction task and to assess the trade-off between size and performance. The results show that all three models achieve satisfactory performance (F-score between 0.78 and 0.86).

Javier Sevilla-Salcedo, Enrique Fernández-Rodicio, José Carlos Castillo, Álvaro Castro-González, Miguel A. Salichs
Investigating the Impact of Human-Robot Collaboration on Creativity and Team Efficiency: A Case Study on Brainstorming in Presence of Robots

The main objective of this research is to explore the impact of deploying social robots in team-based intellectual cooperation, specifically during brainstorming sessions. A total of 72 participants (36 females and 36 males) were involved, with groups of four participants (2 females and 2 males) engaging in brainstorming sessions. In nine sessions, three Nao robots were present, while in the other nine sessions three human peers participated instead of robots. The creativity of participants was assessed by measuring the average number of unique ideas generated using Bouchard and Hare's coding rules. The sessions with robots showed a significant increase in participant creativity. After the sessions, the participants completed a questionnaire, which revealed higher satisfaction, reduced production blocking, decreased free-riding, and increased synergy in sessions where robots were present. These findings were further supported by the video analysis. Future research can explore the long-term effects of interacting with social robots, including those equipped with artificial intelligence.

Alireza Taheri, Sean Khatiri, Amin Seyyedzadeh, Ali Ghorbandaei Pour, Alireza Siamy, Ali F. Meghdari
A Set of Serious Games Scenarios Based on Pepper Robots as Rehab Standing Frames for Children with Cerebral Palsy

One of the major issues in pediatric rehabilitation practice is that children refuse to participate in or perform the exercises targeted at improving their physical condition. Technology and serious games are effective approaches to engage and motivate children and to assist therapists in rehabilitation exercises. This paper tries to elicit children's requirements with the objective of designing efficient serious game scenarios that facilitate the rehabilitation procedure. A novel set of six rehabilitation game scenarios on a standing frame, for robotic assistance involving children with cerebral palsy, is presented. We discuss the use of serious games on a standing frame in terms of humanoid robot limitations and capabilities. The scenarios have been developed based on specialists' observations and in situ consultations with therapists at a pediatric rehabilitation center. Our findings are expected to help in future research tailored toward studying the effectiveness of adding humanoid robots to rehabilitation games to increase children's motivation, engagement, and enjoyment.

Leila Mouzehkesh Pirborj, Fady Alnajjar, Stephen Mathew, Rafat Damseh, Muthu Kumar Nadimuthu
Can a Robot Collaborate with Alpana Artists? A Concept Design of an Alpana Painting Robot

The possibility of robots working alongside Alpana artists is a developing field of research at the intersection of technology and traditional art. Alpana painters have the skill and creativity necessary for the delicate design process, but robots can execute these designs with greater accuracy and efficiency. In this kind of collaboration, artists would develop ideas for the Alpana, and robots would speed up the drawing. This combination ensures that even when technology aids in the expression of Alpana art, its heart and soul remain human. Combining human creativity and robotic accuracy has the potential to produce a harmonious blend that embraces current achievements while retaining tradition, fostering an evolving practice that respects heritage while embracing innovation. The paper's contributions are its investigation of human-robot collaboration in Alpana art, its presentation of a conceptual Alpana painting robot, and its emphasis on maintaining cultural legacy while accepting technological progress.

Farhad Ahmed, Zarin Tasnim, Zerin Tasnim, Mohammad Shidujaman, Salah Uddin Ahmed
Human-Robot Interaction Studies with Adults in Health and Wellbeing Contexts - Outcomes and Challenges

Social robots have great potential to support individuals' health and wellbeing. We conducted a follow-up study based on our previous work, a 2021 systematic review following PRISMA guidelines that identified 443 research articles evaluating social robots in health/wellbeing contexts with adult participants. In this paper we performed a new analysis showing that while the vast majority of the articles reported positive outcomes related to the use of social robots in these contexts, only half of them supported the results with statistical tests and comparisons, highlighting the need for future studies with robust methodologies that can further inform the use of social robots for supporting adults' health and wellbeing. We argue that different qualitative and quantitative methodologies are equally valuable, as long as the conclusions are based on the data collected. We also encourage the publication of well-designed and well-executed studies that lead to neutral or negative outcomes. This is in line with other scientific research fields that emphasize the need to report such results, which would avoid needless replication of studies that have already been done and could provide important lessons for the field, but were never published.

Moojan Ghafurian, Kerstin Dautenhahn, Arsema Teka, Shruti Chandra, Samira Rasouli, Ishan Baliyan, Rebecca Hutchinson
Ethical Decision-Making for Social Robots in Elderly Care Scenario: A Computational Approach

As integrating social robots in elderly care scenarios becomes increasingly prevalent, the need for ethical decision-making frameworks to govern their actions is critically important. This paper presents a comprehensive computational approach using supervised machine learning algorithms to address the ethical considerations inherent in robot-assisted fetching tasks for the elderly. Drawing upon established ethical principles and novel moral dimensions specific to elderly care, we develop an intricate framework encompassing diverse entities and scenarios using a greet or beat approach. To validate the framework, we conducted a pilot study involving thirty participants experienced in caregiving. Through an interactive application, participants designed scenarios, decided whether the robot should fetch objects, and provided reasons for their choices. Their decisions were then compared with predictions generated by a set of machine learning algorithms trained on a dataset of various scenarios. Our results shed light on the diverse ethical perspectives in elderly care and the feasibility of automating ethical decision-making for social robots in this domain. This research contributes to the burgeoning field of roboethics, offering insights and tools to guide the responsible deployment of robots in assistive elderly care, ultimately promoting the well-being and ethical treatment of elderly individuals.

Siri Dubbaka, B. Sankar
Virtual Reality Serious Game with the TABAN Robot Avatar for Educational Rehabilitation of Dyslexic Children

Emerging technologies such as social robotics and virtual reality have found wide application in the field of education and tutoring, particularly for children with special needs. Taban is a novel social robot that has been designed and programmed specifically for educational interaction with dyslexic children, who have various problems in reading despite their normal intelligence. In this paper, the acceptability and eligibility of a virtual reality serious game featuring the Taban social robot avatar were studied among nineteen children, six of whom were dyslexic. In this game, children perform engaging practical exercises while interacting with the Taban avatar in a virtual environment to strengthen their reading skills; the game then automatically evaluates their performance, and the avatar gives them appropriate feedback. The sense of immersion in the 3D virtual space and the presence of the Taban robot avatar motivate the children to do the assignments. The results of the psychological assessment using the SAM questionnaire are promising and illustrate that the game was highly accepted by both groups of children. Moreover, according to statistical analysis, the performance of children with dyslexia in the exercises was significantly weaker than that of their typically developing peers. Thus, this V2R lexicon game has the potential for screening dyslexia.

O. Amiri, M. Shahab, M. M. Mohebati, S. A. Miryazdi, H. Amiri, A. Meghdari, M. Alemi, H. R. Pouretemad, A. Taheri
Impact of Explanations on Transparency in HRI: A Study Using the HRIVST Metric

This paper presents an exploration of the role of explanations provided by robots in enhancing transparency during human-robot interaction (HRI). We conducted a study with 85 participants to investigate the impact of different types and timings of explanations on transparency. In particular, we tested different conditions: (1) no explanations, (2) short explanations, (3) detailed explanations, (4) short explanations for unexpected robot actions, and (5) detailed explanations for unexpected robot actions. We used the Human-Robot Interaction Video Sequencing Task (HRIVST) metric to evaluate legibility and predictability. The preliminary results suggest that providing a short explanation is sufficient to improve transparency in HRI. The HRIVST score for short explanations is higher and very close to the score for detailed explanations of unexpected robot actions. This work contributes to the field by highlighting the importance of tailored explanations to enhance the mutual understanding between humans and robots.

Nandu Chandran Nair, Alessandra Rossi, Silvia Rossi
The Effectiveness of Social Robots in Stress Management Interventions for University Students

Stress affects many students, leaving them vulnerable to burnout. Social robots can provide personalized and non-judgmental support for individuals to engage in behavioral and cognitive therapy. This study investigated the effectiveness of a robot-assisted stress management intervention in reducing stress among university students. In a between-subjects design, students practiced a deep breathing exercise, either guided by a Pepper robot or using a laptop. To evaluate the effect of each technology, Galvanic Skin Response (GSR), Perceived Stress Questionnaire (PSQ) and the Unified Theory of Acceptance and Use of Technology (UTAUT) survey were collected. The results from PSQ and GSR showed no difference between the two technologies in reducing stress subjectively and physiologically. However, UTAUT reports indicated that participants in the Robot group were more inclined to use the robot in future practices, and that a more positive impression of the robot contributed to a stronger reduction of their self-reported stress levels.

Andra Rice, Katarzyna Klęczek, Maryam Alimardani
Data-Driven Generation of Eyes and Head Movements of a Social Robot in Multiparty Conversation

Given the importance of gaze in Human-Robot Interaction (HRI), many gaze control models have been developed. However, these models are mostly built for dyadic face-to-face interaction; gaze control models for multiparty interaction are scarcer. Here we propose and evaluate data-driven gaze control models for a robot game animator in a three-party interaction. More precisely, we used Long Short-Term Memory networks to predict gaze targets and context-aware head movements given the robot's communication intents and the observed activities of its human partners. After comparing the objective performance of our data-driven model with a baseline and with ground-truth data, an online audiovisual perception study was conducted to compare the acceptability of these control models against low-anchor incongruent speech and gaze sequences driving the Furhat robot. The results show that our data-driven prediction of gaze targets is viable, but that third-party raters are not very sensitive to controls with congruent head movements.

Léa Haefflinger, Frédéric Elisei, Béatrice Bouchot, Brice Varini, Gérard Bailly
The Ambiguity of Robot Rights

The prominence of robots as interdependent social agents continues to grow, leading to important conversations about legal and ethical considerations not just for humans, but also potentially for these autonomous agents (e.g., robot rights). Physical properties of the robot form factor have been shown to significantly impact human interactions with the system, including how law- and policy-makers see and ascribe characteristics to it. For social robots in particular, an anthropomorphized or humanoid form factor can lead to assumptions about the robot's personhood, with potentially harmful ethical consequences. In this paper, we review current outlooks on social robots with regard to policy and personhood, particularly the limits of current debates. We then provide a suggested redefinition of personhood for a robot, with an emphasis on dissociating personhood from the humanoid form. We propose treating robot personhood in terms of interdependent group personhood rather than physical or anthropomorphic features, and suggest corresponding design principles and regulations concerning features such as system opacity.

Anisha Bontula, David Danks, Naomi T. Fitter
The Impact of Robots’ Facial Emotional Expressions on Light Physical Exercises

To address the global challenge of population aging, our goal is to enhance successful aging through the introduction of robots capable of assisting in daily physical activities and promoting light exercises, which would enhance the cognitive and physical well-being of older adults. Previous studies have shown that facial expressions can increase engagement when interacting with robots. This study aims to investigate how older adults perceive and interact with a robot capable of displaying facial emotions while performing a physical exercise task together. We employed a collaborative robotic arm with a flat panel screen to encourage physical exercise across three different facial emotion conditions. We ran the experiment with older adults aged between 66 and 88. Our findings suggest that individuals perceive robots exhibiting facial expressions as less competent than those without such expressions. Additionally, the presence of facial expressions does not appear to significantly impact participants’ levels of engagement, unlike other state-of-the-art studies. This observation is likely linked to our study’s emphasis on collaborative physical human-robot interaction (pHRI) applications, as opposed to socially oriented pHRI applications. Additionally, we foresee a requirement for more suitable non-verbal social behavior to effectively enhance participants’ engagement levels.

Nourhan Abdulazeem, Yue Hu
Feasibility Study on Parameter Adjustment for a Humanoid Using LLM Tailoring Physical Care

The increasing demand for care of the elderly, coupled with the shortage of caregivers, necessitates the introduction of robotic assistants capable of performing care tasks both intelligently and safely. Central to these tasks, especially those involving tactile interaction, is the ability to make human-in-the-loop adjustments based on individual preferences. In this study, our primary goal was to design and evaluate a system that captures user preferences prior to initiating a tactile care task. Our focus was on range-of-motion training exercises, emphasizing communication that demonstrates motion using the LLM approach. The system combines physical demonstrations with verbal explanations, ensuring adaptability to individual preferences before initiating range-of-motion training. Using the humanoid robot Dry-AIREC, augmented with the linguistic capabilities of ChatGPT, our system was evaluated with 14 young participants. The results showed that the robot could perform the range-of-motion exercises with tactile interactions while simultaneously communicating with the participant. Thus, our proposed system emerges as a promising approach for range-of-motion exercises rooted in human-preference-centered human-robot interaction. Interestingly, although there wasn’t a significant shift in the overall positive subjective impressions when the tuning was performed using ChatGPT, there was an increase in the number of participants who gave the highest rating to the experience.

Tamon Miyake, Yushi Wang, Pin-chu Yang, Shigeki Sugano
Enhancing Hand Hygiene Practices Through a Social Robot-Assisted Intervention in a Rural School in India

This paper discusses the pilot deployment of a social robot, "WallBo", investigating its effectiveness in promoting and encouraging handwashing practices among children in a rural school in India. The results suggest an overall 85.06% handwashing compliance, a 51.60% improvement over the baseline handwashing compliance, and an overall ~50% knowledge improvement about handwashing. We also present students' perceptions of "WallBo" and feedback from the pupils and teachers.

Amol Deshmukh, Kohinoor Monish Darda, Mugdha Mahesh Mhatre, Ritika Pandey, Aalisha R. Jadhav, Emily Cross
Paired Robotic Devices with Subtle Expression of Sadness for Enriching Social Connectedness

Various factors contribute to feelings of loneliness and compromised emotional well-being. One potential strategy to address this issue involves creating systems that enhance social connectedness between users at distance. In the literature, many of these systems utilize physically embodied devices and foster a sense of presence and intimacy. However, there has been limited exploration of targeting and sharing negative emotions. This study introduces a novel robotic device with a caricatured appearance named BlueBot, capable of subtle expressions of sadness. This study outlines the technical aspects and design principles underpinning touch-based non-verbal communication. Additionally, we present findings from a pilot test and in-the-wild field trial, describing users’ responses and behaviors to investigate the impact on social connectedness. Our observations indicate that the developed system offers a nuanced approach that not only heightens awareness of the other individual’s presence but also enhances emotional sensitivity. Several design insights for future similar studies are derived from our research.

Misako Uchida, Eleuda Nunez, Modar Hassan, Masakazu Hirokawa, Kenji Suzuki
Explorative Study on the Non-verbal Backchannel Prediction Model for Human-Robot Interaction

Previous studies on backchannel prediction models have suggested that replicating human backchannels can enhance the user's human-robot interaction experience. In this study, we propose a real-time non-verbal backchannel prediction model which utilizes both an acoustic feature and a temporal feature. Our goal is to improve the quality of the robot's backchannels and the user's experience. To conduct this research, we collected a human-human interview dataset. Using this dataset, we developed three distinct backchannel prediction models: a temporal, an acoustic, and a mixed (temporal & acoustic) model. Subsequently, we conducted a user study to compare perceptions of a robot implemented with each of the three models. The results demonstrated that the robot employing the mixed model was preferred by participants and exhibited a moderate frequency of backchannels. These results emphasize the advantages of incorporating acoustic and temporal features when developing backchannel prediction models to enhance the quality of human-robot interactions, specifically with regard to backchannel frequency and timing.

Sukyung Seok, Tae-Hee Jeon, Yu-Jung Chae, ChangHwan Kim, Yoonseob Lim
Detection of Rarely Occurring Behaviors Based on Human Trajectories and Their Associated Physical Parameters

The complexity of detecting rarely occurring behaviors through human trajectories is closely related to a lack of data, unclear behavioral characteristics, and complex variations in their related physical parameters (e.g., velocity and orientation angles, etc.). In this context, we propose a methodology to maximize the detection performance of rarely occurring behaviors in public places by investigating the data collection process, trajectory representation based on detected skeleton poses from videos, and the use of 2D (X, Y) trajectory positional data only versus its combination with their associated physical parameters as the input for trajectory learning models. In order to evaluate the proposed method, we studied a rare Japanese behavior in public places called UroKyoro, which is a combination of the two Japanese words Urouro and Kyorokyoro. This behavior includes aimlessly moving while frequently looking in both directions. Since there is a lack of related data from real-life cases, we hired professional actors to role-play the behavior alone or with normal pedestrians moving around. The learning system was trained using limited and augmented data. The trajectory learning system, trained with combined human trajectories and orientation angles following the proposed method, succeeds in detecting the studied behavior with an accuracy of 91.33%, outperforming the accuracy of the trained model using only human 2D (X, Y) trajectories by 4.33%. The results show the effectiveness of the proposed method to detect complex, rarely occurring human behaviors by training the LSTM classifier with a combination of human trajectories and physical parameters. However, the effectiveness of physical parameters on training performance may differ from one case study to another based on behavioral characteristics.

Hesham M. Shehata, Nam Do, Shunl Inaoka, Trung Tran Quang
Improving of Robotic Virtual Agent’s Errors Accepted by Agent’s Reaction and Human’s Preference

One way to improve the relationship between humans and anthropomorphic agents is to have humans empathize with the agents. In this study, we focused on a task between an agent and a human in which the agent makes a mistake. To investigate the significant factors for designing a robotic agent that can promote human empathy, we experimentally examined the hypothesis that the agent's reaction and the human's preference affect human empathy and the acceptance of the agent's mistakes. In the experiment, participants let the agent manage their schedules by answering the questions they were asked. The experiment used a three-factor mixed design with four conditions, with the agent's reaction, the agent's body color selected according to the human's preference, and pre- vs. post-task as factors. The results showed that the agent's reaction and the human's preference did not affect empathy toward the agent but did affect the acceptance of the agent's mistakes. It was also shown that empathy for the agent decreased when the agent made a mistake on the task. The results of this study suggest ways to influence impressions of the behaviors of robotic virtual agents, which are increasingly used in society.

Takahiro Tsumura, Seiji Yamada
How Language of Interaction Affects the User Perception of a Robot

Spoken language is the most natural way for a human to communicate with a robot. It may seem intuitive that a robot should communicate with users in their native language. However, it is not clear if a user’s perception of a robot is affected by the language of interaction. We investigated this question by conducting a study with twenty-three native Czech participants who were also fluent in English. The participants were tasked with instructing the Pepper robot on where to place objects on a shelf. The robot was controlled remotely using the Wizard-of-Oz technique. We collected data through questionnaires, video recordings, and a post-experiment feedback session. The results of our experiment show that people perceive an English-speaking robot as more intelligent than a Czech-speaking robot (z = 18.00, p-value = 0.02). This finding highlights the influence of language on human-robot interaction. Furthermore, we discuss the feedback obtained from the participants via the post-experiment sessions and its implications for HRI design.

Barbara Sienkiewicz, Gabriela Sejnova, Paul Gajewski, Michal Vavrecka, Bipin Indurkhya
Is a Humorous Robot More Trustworthy?

As more and more social robots are being used for collaborative activities with humans, it is crucial to investigate mechanisms that facilitate trust in human-robot interaction. One such mechanism is humour: it has been shown to increase creativity and productivity in human-human interaction, which has an indirect influence on trust. In this study, we investigate whether humour can increase trust in human-robot interaction. We conducted a between-subjects experiment with 40 participants to see if they were more likely to accept the robot's suggestion in the Three-card Monte game, used as a trust-check task. Though we were unable to find a significant effect of humour, we discuss possible confounding variables and report some interesting qualitative observations from our study: for instance, participants interacted effectively with the robot as a team member regardless of the humour or no-humour condition.

Barbara Sienkiewicz, Bipin Indurkhya
A Pilot Usability Study of a Humanoid Avatar to Assist Therapists of ASD Children

In this article, we report on a pilot study evaluating the usability, satisfaction, and effectiveness of a preliminary telerobotic system to assist therapists of children with ASD. Unlike existing pre-programmed robotic systems, our solution beams the therapist into a humanoid robot (Pepper), reproducing the therapist's gestures, speech, and visual feedback in real time, with the aim of embodying the therapist in a humanoid robot avatar able to perform activities during an ESDM intervention. Evaluations of our system, used by eleven therapists in internal tests during mock sessions without children, are reported and suggest that future use in real therapy sessions with ASD children can begin.

Carole Fournier, Cécile Michelon, Arnaud Tanguy, Paul Audoyer, Véronique Granit, Amaria Baghdadli, Abderrahmane Kheddar
Primitive Action Recognition Based on Semantic Facts

To interact with humans, a robot has to know the actions performed by each agent present in the environment, robotic or not. Robots are not omniscient and cannot perceive every action performed but, as humans do, the robot can be equipped with the ability to infer what happened from the perceived effects of these actions on the environment. In this paper, we present a lightweight, open-source framework to recognise primitive actions and their parameters. Based on a semantic abstraction of changes in the environment, it can recognise unperceived actions. In addition, thanks to its integration into a cognitive robotic architecture implementing perspective-taking and theory of mind, the presented framework is able to estimate the actions recognised by the agent interacting with the robot. These recognition processes are refined on the fly based on current observations. Tests on real robots demonstrate the framework's usability in interactive contexts.
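The effect-based inference described above, recognising an unperceived action from the semantic change between two snapshots of the world, can be sketched with a toy rule set. The fact vocabulary (`inHand`, `onTop`) and the two rules below are illustrative assumptions, not the framework's actual ontology:

```python
def infer_action(before, after):
    """Infer an unperceived primitive action and its parameters from
    the change between two sets of semantic facts, each fact being a
    tuple (predicate, *args)."""
    added = after - before
    removed = before - after
    for fact in added:
        # an object newly in hand that was previously on a support: a pick
        if fact[0] == "inHand" and any(
                f[0] == "onTop" and f[1] == fact[2] for f in removed):
            return ("pick", fact[1], fact[2])   # (action, agent, object)
        # an object newly on a support that was previously in hand: a place
        if fact[0] == "onTop" and any(
                f[0] == "inHand" and f[2] == fact[1] for f in removed):
            return ("place", fact[1])           # (action, object)
    return None
```

For example, seeing `("onTop", "cube", "table")` disappear while `("inHand", "robot", "cube")` appears lets the robot infer a pick it never directly observed.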

Adrien Vigné, Guillaume Sarthou, Aurélie Clodic
Two-Level Reinforcement Learning Framework for Self-sustained Personal Robots

As social robots become integral to daily life, effective battery management and personalized user interactions are crucial. We employed Q-learning with the Miro-E robot for balancing self-sustained energy management and personalized user engagement. Based on our approach, we anticipate that the robot will learn when to approach the charging dock and adapt interactions according to individual user preferences. For energy management, the robot underwent iterative training in a simulated environment, where it could opt to either “play” or “go to the charging dock”. The robot also adapts its interaction style to a specific individual, learning which of three actions would be preferred based on feedback it would receive during real-world human-robot interactions. From an initial analysis, we identified a specific point at which the Q values are inverted, indicating the robot’s potential establishment of a battery threshold that triggers its decision to head to the charging dock in the energy management scenario. Moreover, by monitoring the probability of the robot selecting specific behaviours during human-robot interactions over time, we expect to gather evidence that the robot can successfully tailor its interactions to individual users in the realm of personalized engagement.
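The energy-management half of the approach, tabular Q-learning over discretised battery levels with "play" vs. "go to the charging dock" actions, can be sketched as follows. The battery dynamics, reward values, and hyperparameters are illustrative assumptions, not the Miro-E configuration used in the paper:

```python
import random

ACTIONS = ["play", "dock"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def step(battery, action):
    """Toy environment: playing earns reward but drains the battery;
    docking recharges; running flat is heavily penalised."""
    if action == "play":
        nxt = battery - 1
        reward = 1 if nxt > 0 else -10
    else:
        nxt = min(battery + 2, 10)
        reward = 0
    return max(nxt, 0), reward

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = {(b, a): 0.0 for b in range(11) for a in ACTIONS}
    for _ in range(episodes):
        battery = 10
        for _ in range(50):
            a = (rng.choice(ACTIONS) if rng.random() < EPS
                 else max(ACTIONS, key=lambda x: q[(battery, x)]))
            nxt, r = step(battery, a)
            best_next = max(q[(nxt, x)] for x in ACTIONS)
            q[(battery, a)] += ALPHA * (r + GAMMA * best_next - q[(battery, a)])
            battery = nxt
    return q
```

The "inversion point" the abstract mentions corresponds here to the battery level below which `q[(b, "dock")]` exceeds `q[(b, "play")]`, i.e. the learned threshold at which the robot heads for the charger.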

Koyo Fujii, Patrick Holthaus, Hooman Samani, Chinthaka Premachandra, Farshid Amirabdollahian
Robot Companions and Sensors for Better Living: Defining Needs to Empower Low Socio-economic Older Adults at Home

Population ageing has profound implications for economies and societies, demanding increased health and social services. The global older adult population is steadily growing, presenting challenges. Addressing this reality, investing in older adults' healthcare means enhancing their well-being while minimizing expenditures. Strategies aim to support older adults at home, but resource disparities pose challenges. Importantly, socio-economic factors influence people's quality of life and wellbeing, and are thus associated with specific needs. Socially Assistive Robots (SARs) and monitoring technologies (wearable and environmental sensors) hold promise in aiding daily life, with older adults showing willingness to embrace them, particularly if tailored to their needs. Despite research on perceptions of technology, the preferences and needs of socio-economically disadvantaged older adults remain underexplored. This study investigates how SARs and sensor technologies can aid low-income older adults, promoting independence and overall well-being. For this purpose, older adults (aged ≥ 65 years) with low income were recruited, and a series of focus groups were conducted to comprehend how these technologies could address their needs. Thematic analysis results highlighted five key dimensions, specifically: 1) promote and monitor an active lifestyle, 2) help with daily errands and provide physical assistance, 3) reduce isolation and loneliness, 4) considerations regarding monitoring technologies, and 5) barriers affecting SARs and monitoring technologies usage and acceptance. These dimensions should be considered during SARs and sensors design to effectively meet users' requirements, enhance their quality of life, and support caregivers.

Roberto Vagnetti, Nicola Camp, Matthew Story, Khaoula Ait-Belaid, Joshua Bamforth, Massimiliano Zecca, Alessandro Di Nuovo, Suvo Mitra, Daniele Magistro
Large-Scale Swarm Control in Cluttered Environments

In the evolving era of social robots, managing a swarm of autonomous agents to perform particular tasks has become essential for numerous industries. The task becomes more challenging for large-scale swarms and complex environments, which have not been fully explored yet. Therefore, this research introduces a methodology incorporating multiple coordinated robotic shepherds to effectively guide large-scale agent swarms in obstacle-laden terrains. The proposed framework commences with deploying an unsupervised machine-learning algorithm to categorise the swarm into clusters. Then, a shepherding algorithm with coordinated robotic shepherds drives the sub-swarms towards the goal. Also, a path planner based on an evolutionary algorithm is proposed to help robotic shepherds move in a way that minimises the dispersion of each sub-swarm and avoids potential hazards and obstructions. The proposed approach is tested on different scenarios, with the results showing a success rate of 100% in guiding swarms with sizes up to 3000 agents.
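The first stage of the framework, an unsupervised algorithm that categorises the swarm into clusters before the shepherds drive each sub-swarm, can be sketched with a minimal k-means implementation. K-means is our stand-in here; the abstract does not name the specific clustering algorithm used:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over 2D agent positions: repeatedly assign each
    agent to its nearest centre, then move each centre to the mean of
    its assigned agents. Returns (centres, clusters)."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centres[c][0]) ** 2
                                + (p[1] - centres[c][1]) ** 2)
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centre if a cluster empties out
                centres[i] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return centres, clusters
```

Each resulting cluster would then be handed to one robotic shepherd; for the 3000-agent swarms the abstract reports, a vectorised implementation would be the practical choice, but the logic is the same.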

Saber Elsayed, Mohamed Mabrok
Alpha Mini as a Learning Partner in the Classroom

Social robots such as NAO and Pepper are being used in some schools and universities. NAO is very agile and therefore entertaining. Pepper has the advantage of an integrated display on which learning software of all kinds can be executed. One disadvantage of both is their high price: schools can hardly afford such robots. This problem was the starting point for the project described here, which took place in 2023 at the School of Business FHNW. The aim was to create a learning application with an inexpensive social robot that has the same motor capabilities as NAO and the same knowledge transfer capabilities as Pepper. The small Alpha Mini from Ubtech was chosen. It was possible to connect it to an external device, which runs a learning game suitable for teaching at primary level, with Alpha Mini providing explanations and feedback in each case. Three teachers tested the learning application, raised objections, and made suggestions for improvement. Social robots like Alpha Mini are an interesting solution for knowledge transfer in schools when they can communicate with other devices.

Oliver Bendel, Andrin Allemann
Comprehensive Feedback Module Comparison for Autonomous Vehicle-Pedestrian Communication in Virtual Reality

Autonomous driving technologies can minimize accidents. Communication from an autonomous vehicle to a pedestrian via a feedback module can improve pedestrians' safety in autonomous driving. We compared several feedback module options in a Virtual Reality environment to identify which module best increases public acceptance, legibility, and trust in the autonomous vehicle's decisions, and to identify pedestrian preferences. The results of this study show that participants prefer symbols or text over lights and road projection, with no significant difference between symbols and text. Further, our results show that the preferred text options when the vehicle is not driving are "Walk," "Safe to cross," "Go ahead," and "Waiting," and the preferred symbol option is the walking person as on a traffic light, with no significant preference between the cross advisory symbol and the pedestrian crossing sign.

Melanie Schmidt-Wolf, Eelke Folmer, David Feil-Seifer
Backmatter
Metadata
Title
Social Robotics
Editors
Abdulaziz Al Ali
John-John Cabibihan
Nader Meskin
Silvia Rossi
Wanyue Jiang
Hongsheng He
Shuzhi Sam Ge
Copyright Year
2024
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-9987-15-3
Print ISBN
978-981-9987-14-6
DOI
https://doi.org/10.1007/978-981-99-8715-3
