
About this Book

This book constitutes the refereed proceedings of the 10th International Conference on Social Robotics, ICSR 2018, held in Qingdao, China, in November 2018. The 60 full papers presented were carefully reviewed and selected from 79 submissions. The theme of the 2018 conference was Social Robotics and AI. In addition to the technical sessions, ICSR 2018 included two workshops: Smart Sensing Systems: Towards Safe Navigation, and Social Human-Robot Interaction of Service Robots.



Online Learning of Human Navigational Intentions

We present a novel approach for online learning of human intentions in the context of navigation and show its advantage in human tracking. The proposed approach assumes humans to be motivated to navigate with a set of imaginary social forces and continuously learns the preferences of each human to follow these forces. We conduct experiments both in simulation and real-world environments to demonstrate the feasibility of the approach and the benefit of employing it to track humans. The results show the correlation between the learned intentions and the actions taken by a human subject in controlled environments in the context of human-robot interaction.
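The learning scheme described above — modelling each human as driven by a weighted set of imaginary social forces and continuously updating the per-person weights from observed motion — could be sketched roughly as follows. The force terms, gains, and gradient update are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def social_force_velocity(pos, goal, others, weights):
    """Predict a pedestrian's velocity as a weighted sum of imaginary
    social forces: attraction to the goal and repulsion from other
    people. `weights` = (w_goal, w_rep) are the per-person preferences
    that an online learner updates from observed trajectories."""
    w_goal, w_rep = weights
    to_goal = goal - pos
    f_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)  # unit attraction
    f_rep = np.zeros(2)
    for o in others:
        away = pos - o
        d = np.linalg.norm(away) + 1e-9
        f_rep += away / d**3  # repulsion decaying with distance
    return w_goal * f_goal + w_rep * f_rep

def update_weights(weights, pos, goal, others, observed_v, lr=0.1):
    """One online gradient step pulling the predicted velocity toward
    the observed one (squared-error loss on the weighted force sum)."""
    w = np.asarray(weights, dtype=float)
    pred = social_force_velocity(pos, goal, others, w)
    err = observed_v - pred
    to_goal = goal - pos
    f_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)
    f_rep = np.zeros(2)
    for o in others:
        away = pos - o
        d = np.linalg.norm(away) + 1e-9
        f_rep += away / d**3
    # gradient of 0.5*||err||^2 w.r.t. each weight is -err . force_term
    w[0] += lr * err @ f_goal
    w[1] += lr * err @ f_rep
    return w
```

A tracker would run `update_weights` at each observation and use `social_force_velocity` with the learned weights as the motion prior for that person.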

Mahmoud Hamandi, Pooyan Fazli

Autonomous Assistance Control Based on Inattention of the Driver When Driving a Truck Tractor

This article proposes autonomous assistance for a truck based on an analysis of the driver’s inattention. The level of driver inattention is associated with a standalone controller for driving assistance and path correction. The assistance algorithm is based on the kinematic model of the truck and the level of driver inattention. In addition, a 3D simulator is developed in a virtual environment that emulates the behavior of the vehicle and the driver under different weather conditions and paths. Experimental results using the virtual simulator show the correct performance of the proposed assistance algorithm.

Elvis Bunces, Danilo Zambrano

The Robotic Archetype: Character Animation and Social Robotics

This paper delves into the surprisingly under-considered convergence between Hollywood animation and ‘Big Tech’ in the field of social robotics, exploring the implications of character animation for human-robot interaction, and highlighting the emergence of a robotic character archetype. We explore the significance and possible effects of a Hollywood-based approach to character design for human-robot sociality, and, at a wider level, consider the possible impact of this for human relationality and the concept of ‘companionship’ itself. We conclude by arguing for greater consideration of the socio-political and ethical consequences of importing and perpetuating relational templates that are drawn from powerful media conglomerates like Disney. In addition to facing a possible degradation of social relations, we may also be facing a possible delimitation of social relationality, based on the values, affects, and ideologies circulating in popular Hollywood animation.

Cherie Lacey, Catherine Barbara Caudwell

A Proposed Wizard of OZ Architecture for a Human-Robot Collaborative Drawing Task

Researching human-robot interaction “in the wild” can sometimes require insight from different fields. Experiments that involve collaborative tasks are valuable opportunities for studying HRI and developing new tools. The following describes a framework for an “in the wild” experiment situated in a public museum that involved a Wizard of OZ (WOZ) controlled robot. The UR10 is a non-humanoid collaborative robot arm and was programmed to engage in a collaborative drawing task. The purpose of this study was to evaluate how movement by a non-humanoid robot could affect participant experience. While the current framework is designed for this particular task, the control architecture could be built upon to provide a base for various collaborative studies.

David Hinwood, James Ireland, Elizabeth Ann Jochum, Damith Herath

Factors and Development of Cognitive and Affective Trust on Social Robots

The purpose of this study is to investigate the factors that contribute to cognitive and affective trust in social robots. Also investigated were the changes in these two types of trust over time and the variables that influence trust. Elements of trust extracted from the literature were used to evaluate people’s trust in a social robot in an experiment. A factor analysis extracted ten factors that construct trust. These factors were further analyzed in relation to both cognitive and affective trust. Factors such as Security, Teammate, and Performance were found to relate to cognitive trust, while factors such as Teammate, Performance, Autonomy, and Friendliness appeared to relate to affective trust. Furthermore, changes in cognitive and affective trust over the phases of the interaction were investigated. Affective trust appeared to develop in the earlier phase, while cognitive trust appeared to develop over the whole period of the interaction. Conversation topics influenced affective trust, while the robot’s mistakes influenced cognitive trust. On the other hand, prior experience with social robots did not show any significant relation with either cognitive or affective trust. Finally, the Familiarity attitude appeared to relate to both cognitive and affective trust, while other sub-dimensions of robot attitudes, such as Interest, Negative attitude, and Utility, appeared to relate to affective trust.

Takayuki Gompei, Hiroyuki Umemuro

Smiles of Children with ASD May Facilitate Helping Behaviors to the Robot

Helping behaviors are important prosocial behaviors for developing social communication skills based on empathy. In this study, we examined the potential of using a robot as a recipient of help, and of helping behaviors directed toward a robot. We also explored the relationship between helping behaviors and smiles, an indicator of a positive mood. The results showed that there might be a positive correlation between the amount of helping behavior and the number of smiles, implying that smiles may facilitate helping behaviors toward the robot. This preliminary research indicates the potential of robot-assisted interventions to facilitate and increase helping behaviors of children with Autism Spectrum Disorder (ASD).

SunKyoung Kim, Masakazu Hirokawa, Soichiro Matsuda, Atsushi Funahashi, Kenji Suzuki

If Drones Could See: Investigating Evaluations of a Drone with Eyes

Drones are often used in contexts where they interact with human users. However, they lack the social cues that their robotic counterparts have. If drones possessed such cues, would people respond to them more positively? This paper investigates people’s evaluations of a drone with eyes versus one without. Results show mainly positive effects: a drone with eyes is seen as more social and human-like than a drone without eyes, and people are more willing to interact with it. These findings imply that adding eyes to a drone that is designed to interact with humans may make this interaction more natural, and as such enable a successful introduction of social drones.

Peter A. M. Ruijten, Raymond H. Cuijpers

Validation of the Design of a Robot to Study the Thermo-Emotional Expression

Humans can use thermal sensations to interpret emotions. This raises the question of whether a robot can express its emotional state through the temperature of its body. In this study, we therefore carry out the design of a robot and its validation as a platform to study thermo-emotional expression. The designed robot can vary the temperature of its skin between 10 and 55 °C. In this range, it is possible to reproduce thermal stimuli that have already been studied and have an emotional interpretation, and also to study new ones in which the pain receptors are activated. The robot’s shape is designed to look like the body of a creature that is neither human nor animal. In addition, it was designed so that physical interaction occurs mainly at its head, because the robot’s thermal system is located there. The results of a free-interaction experiment showed that the main regions caressed were the superior, lateral, and upper diagonal faces of the cranium. These regions coincide with the location of the robot’s thermal system, so the robot can transmit different thermal stimuli to the human during physical interaction. Consequently, the designed robot is appropriate for studying body temperature as a medium for a robot to express its emotional state.

Denis Peña, Fumihide Tanaka

Training Autistic Children on Joint Attention Skills with a Robot

Children with Autism Spectrum Disorder (ASD) have issues with the development of social skills and communication. One such skill is joint attention (JA), the sharing of attention between two people with regard to an object. There are two mechanisms of JA: initiating joint attention (IJA) and responding to joint attention (RJA). This article details an experiment wherein a social robot was used to train children with ASD on their JA skills. The experiment contained a robot training group and a control group. Both groups’ JA skills were tested before and after training with the robot (or a waiting period for the control group). The groups did not significantly differ in their pre-test scores for RJA or IJA. The training group showed significant improvements in both IJA and RJA scores, while the control group did not. However, the groups did not significantly differ in their post-test scores for either RJA or IJA.

Kelsey Carlson, Alvin Hong Yee Wong, Tran Anh Dung, Anthony Chern Yuen Wong, Yeow Kee Tan, Agnieszka Wykowska

Robotic Understanding of Scene Contents and Spatial Constraints

The aim of this paper is to create a model that can accurately identify objects as well as spatial relationships in a dynamic environment. The paper proposes methods to train a deep learning model that recognizes unique objects and the positions of key items in an environment. The model requires a low number of training images compared to others and can recognize multiple objects in the same frame thanks to region proposal networks. Methods are also discussed to find the positions of recognized objects, which can be used for picking up recognized items with a robotic arm. Based on this localization, the system uses logic operations to deduce how different objects relate to each other with regard to their placement. The paper specifically discusses how to derive these spatial relationships.
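In its simplest form, deducing a spatial relation between two detected objects can be done by comparing their bounding-box centers. The relation names and tie-breaking rule below are illustrative assumptions, not the paper's exact logic.

```python
def spatial_relation(box_a, box_b):
    """Deduce a coarse spatial relation of object A relative to B.
    Boxes are (x_min, y_min, x_max, y_max) in image coordinates with
    y increasing downward, as returned by a typical region-proposal
    detector. Thresholds and labels are illustrative only."""
    ax = (box_a[0] + box_a[2]) / 2  # center of A
    ay = (box_a[1] + box_a[3]) / 2
    bx = (box_b[0] + box_b[2]) / 2  # center of B
    by = (box_b[1] + box_b[3]) / 2
    dx, dy = bx - ax, by - ay
    if abs(dx) >= abs(dy):          # dominant axis wins
        return "left of" if dx > 0 else "right of"
    return "above" if dy > 0 else "below"
```

For example, a cup detected at the left edge of the frame and a plate at the right edge yield `"left of"` for the cup with respect to the plate.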

Dustin Wilson, Fujian Yan, Kaushik Sinha, Hongsheng He

Social Robots and Wearable Sensors for Mitigating Meltdowns in Autism - A Pilot Test

Young individuals with ASD may exhibit challenging behaviors. Among these, self-injurious behavior (SIB) is the most devastating for a person’s physical health and inclusion within the community. SIB refers to a class of behaviors that an individual inflicts upon himself or herself and that may result in physical injury (e.g., hitting one’s own head with the hand or wrist, banging one’s head on the wall, biting oneself, and pulling out one’s own hair). We evaluate the feasibility of a wrist-worn sensor in detecting challenging behaviors in a child with autism prior to any visible signs, by monitoring the child’s heart rate, electrodermal activity, and movements. Furthermore, we evaluate the feasibility of wearing such a sensor on the ankle instead of the wrist, to reduce harm from hitting oneself with the hands and to improve tolerance of the wearable. We therefore conducted two pilot tests. The first involved a wearable sensor on the wrist of a child with autism. In the second, we investigated wearable sensors on the wrist and ankle of a neurotypical child. In both pilot tests, the readings from the wearable sensors correlated with the children’s behaviors as observed in videos taken during the tests. Wearable sensors could thus provide additional information that can be passed to social robots or caregivers for mitigating SIBs.

John-John Cabibihan, Ryad Chellali, Catherine Wing Chee So, Mohammad Aldosari, Olcay Connor, Ahmad Yaser Alhaddad, Hifza Javed

Autonomous Control Through the Level of Fatigue Applied to the Control of Autonomous Vehicles

In this article we present the detection of a vehicle driver’s fatigue level; according to this level, autonomous driving assistance is engaged. Fatigue detection is based on facial recognition using deep learning, developed in MATLAB with previously designed and trained neural networks, and yields a metric comprising four levels. According to this metric, position and velocity control of a simulated car-like vehicle is performed in Unity3D, which provides a user-friendly environment with haptic devices. The control algorithm is based on path correction, which computes the shortest distance to re-enter the desired path.

Oscar A. Mayorga, Víctor H. Andaluz

Dialogue Models for Socially Intelligent Robots

Dialogue capability is an important functionality of robot agents: interactive social robots must not only help humans in their everyday tasks, they also need to explicate their own actions, instruct human partners about practical tasks, provide requested information, and maintain interesting chat about a wide range of topics. This paper discusses the type of architecture required for such dialogue capability, emphasizing the need for robot communication to afford natural interaction and provide complementarity to standard cognitive architectures.

Kristiina Jokinen

Composable Multimodal Dialogues Based on Communicative Acts

In social robotics, being able to interact with users in a natural way is a key feature. To achieve this, we need to model dialogues that allow the robot to complete its tasks and to adapt to unforeseen changes in the conversation. We present an approach where these dialogues are modelled as a combination of basic interaction units, called Communicative Acts (CAs). With this, our system aims to provide all the necessary tools so that each of the robot’s applications can tailor its own dialogues in a simpler way. The applications make the decisions that need task-related information and request the activation of CAs to create complex dialogues. The CAs handle decisions that require communication-related information (e.g., giving the user some information, or asking a question). They also manage some of the problems that can appear in any interaction, such as not being able to understand the other peer, or not getting an answer to a question. A case study based on a cognitive stimulation exercise is presented to validate our system.
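A minimal sketch of the composable-CA idea, with `Inform` and `AskQuestion` as hypothetical CA classes (the paper's actual interfaces are not reproduced here): an application activates CAs in order, and each CA handles its own communication-level recovery, such as re-asking an unanswered question.

```python
class CommunicativeAct:
    """A basic interaction unit; subclasses handle one
    communication-related decision, including simple recovery."""
    def run(self, answers=None):
        raise NotImplementedError

class Inform(CommunicativeAct):
    """CA that gives the user a piece of information."""
    def __init__(self, message):
        self.message = message
    def run(self, answers=None):
        return self.message

class AskQuestion(CommunicativeAct):
    """CA that asks a question and re-asks on empty replies."""
    def __init__(self, question, retries=2):
        self.question, self.retries = question, retries
    def run(self, answers):
        # `answers` is an iterator of (possibly empty) user replies.
        for _ in range(self.retries + 1):
            reply = next(answers, "")
            if reply:
                return reply
        return None  # escalate: the application decides what to do

def run_dialogue(acts, answers):
    """An application composes a dialogue by activating CAs in order."""
    return [act.run(answers) if isinstance(act, AskQuestion)
            else act.run() for act in acts]
```

In this design, task-level decisions (which CAs to activate, what to do on escalation) stay in the application, while turn-level recovery stays inside each CA.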

Enrique Fernández-Rodicio, Álvaro Castro-González, Jose C. Castillo, Fernando Alonso-Martin, Miguel A. Salichs

How Should a Robot Interrupt a Conversation Between Multiple Humans

This paper addresses the question of how and when a robot should interrupt a meeting-style conversation between humans. First, we observed one-to-one human-human conversations. We then employed raters to estimate how easy it was to interrupt each participant in the video. At the same time, we gathered behavioral information about the collocutors (presence of speech, head pose, and gaze direction). After establishing that the raters’ ratings were similar, we trained a neural network with the behavioral data as input and the interruptibility measure as output. Once we validated the similarity between the output of our estimator and the actual interruptibility ratings, we implemented this system on our desktop social robot, CommU. We then used CommU in a human-robot interaction environment to investigate how the robot should barge into a conversation between multiple humans. We compared different approaches to interruption and found that users liked the interruptibility estimation system better than a baseline system that does not pay attention to the state of the speakers. They also preferred the robot to give advance non-verbal notification of its intention to speak.
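The interruptibility estimator can be pictured as a small feedforward network mapping behavioral features to a score in (0, 1). The architecture, feature choice, and weights below are placeholders, since the paper's trained network is not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def interruptibility(features, W1, b1, w2, b2):
    """Tiny feedforward estimator: behavioral features (e.g. a
    speaking flag, head-pose yaw, a gaze-at-partner flag) -> score
    in (0, 1), higher meaning easier to interrupt. The weights here
    are fixed placeholders; the paper's network was trained on
    human-rated videos."""
    h = np.tanh(W1 @ features + b1)     # hidden layer
    return float(sigmoid(w2 @ h + b2))  # scalar score

# Placeholder weights for illustration only.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3)); b1 = np.zeros(4)
w2 = rng.normal(size=4); b2 = 0.0
score = interruptibility(np.array([1.0, 0.2, 0.0]), W1, b1, w2, b2)
```

A robot would barge in only when the score for every participant exceeds some threshold.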

Oskar Palinko, Kohei Ogawa, Yuichiro Yoshikawa, Hiroshi Ishiguro

Grasping Novel Objects with Real-Time Obstacle Avoidance

This paper proposes a new approach to grasping novel objects while avoiding obstacles in real time. The general idea is to perform grasping of novel objects and collision avoidance at the same time. There are two main contributions. First, a fast and robust method for real-time grasp detection is presented, based on morphological image processing and machine learning. Second, we integrate our robotic grasping algorithms with existing collision prediction strategies, which is particularly helpful for grasping objects when the robot is surrounded by obstacles. The approach is practical, runs in real time, and is easily adaptable to different robots and working conditions. We demonstrate it using the Kinect sensor and the Baxter robot in a series of experiments.

Jiahao Zhang, Chenguang Yang, Miao Li, Ying Feng

Augmenting Robot Knowledge Consultants with Distributed Short Term Memory

Human-robot communication in situated environments involves a complex interplay between knowledge representations across a wide variety of modalities. Crucially, linguistic information must be associated with representations of objects, locations, people, and goals, which may be represented in very different ways. In previous work, we developed a Consultant Framework that facilitates modality-agnostic access to information distributed across a set of heterogeneously represented knowledge sources. In this work, we draw inspiration from cognitive science to augment these distributed knowledge sources with Short Term Memory Buffers to create an STM-augmented algorithm for referring expression generation. We then discuss the potential performance benefits of this approach and insights from cognitive science that may inform future refinements in the design of our approach.

Tom Williams, Ravenna Thielstrom, Evan Krause, Bradley Oosterveld, Matthias Scheutz

3D Virtual Path Planning for People with Amyotrophic Lateral Sclerosis Through Standing Wheelchair

This article presents the development of an autonomous control system for an electric standing wheelchair for people with amyotrophic lateral sclerosis. The proposed control scheme is based on the autonomous maneuverability of the standing wheelchair: a path planner is implemented, for which the desired 3D position is defined through an eye-tracking sensor. The eye-tracking is implemented in a virtual reality environment that allows the user to select the desired position of the standing wheelchair. The wheelchair has a standing system that allows the user to position himself along the Z axis according to his needs, independently of displacement in the X-Y plane with respect to the inertial reference system <R>. Several experimental tests are carried out to verify the performance of the proposed control scheme.

Jessica S. Ortiz, Guillermo Palacios-Navarro, Christian P. Carvajal, Víctor H. Andaluz

Physiological Differences Depending on Task Performed in a 5-Day Interaction Scenario Designed for the Elderly: A Pilot Study

This paper investigates the relationship between different physiological parameters and three tasks performed by an elderly individual in a 5-day interaction scenario. The physiological parameters investigated are the galvanic skin response, facial temperature variation, heart rate, and respiration rate. The three tasks were the same during all interactions. More specifically, the participant started with a relaxation period of 3 min, continued with 5 cognitive games, and finished with a news reading task. Each day consisted of two sessions: a morning session (at around 11 am) and an afternoon session (at around 3 pm). Our hypotheses were validated: we can differentiate between the three tasks by looking only at the physiological parameters, and we can differentiate between two difficulty levels of the cognitive games. A discussion of these results is also provided.

Roxana Agrigoroaie, Adriana Tapus

Character Design and Validation on Aerial Robotic Platforms Using Laban Movement Analysis

Exploring the application of aerial robots in human-robot interaction is a currently active area of research. One step toward achieving more natural human-robot interactions is developing a method in which an aerial robot successfully portrays a character or exhibits character traits that a human can recognize. Recognizable character types are conveyed through movement in many performing arts, including ballet. However, past work has not leveraged the movement expertise of ballet dancers to create a method and portray complex characters on low-degree-of-freedom aerial robots. This paper explores the recognition and differentiation of archetypal characters used in classical ballet using Laban Movement Analysis (LMA) and applies the results from tracking the movements to an aerial robotic platform. Movement sequences were created on a Bebop Drone to emulate these character types. This process was subsequently validated by a user study, highlighting the successful application of state recognition through movement on aerial robots. Such work can be used to create robots with recognizable movement signatures for quick identification by human counterparts in social settings.

Alexandra Bacula, Amy LaViers

Social Robots in Public Spaces: A Meta-review

Social robots can prove to be an effective medium of instruction and communication for users in public settings. However, the range of their interactions in current research is not well known. In this paper, we review a range of research works that deployed the NAO and Pepper robots in public settings. Our results show that education scenarios are among the most popular and that most interaction centers on providing information. In conclusion, we present key design implications that researchers can employ when designing social robot interactions in public spaces.

Omar Mubin, Muneeb Imtiaz Ahmad, Simranjit Kaur, Wen Shi, Aila Khan

On the Design of a Full-Actuated Robot Hand with Target Sensing Self-adaption and Slider Crank Mechanism

The robot hand is one of the most important subsystems of an industrial system: it interacts with the environment directly and is often required to deal with objects of different positions, sizes, and shapes. To meet these requirements, many self-adaptive hands have been developed. However, a traditional finger cannot perform a linear translation of its distal phalanx, which is useful for grasping thin objects on a flat surface without additional motion of the manipulator. This paper presents the design of a novel linear-parallel and sensing self-adaptive robot hand (LPSS hand) and its corresponding control system. The hand combines a self-adaptive grasping mode with a linear-parallel pinching mode and can switch between grasping modes according to signals provided by the sensors. It consists of 3 fingers, 6 sensors, and one palm; each finger includes 2 actuators, 2 phalanxes, and 2 degrees of freedom (DOF). The hand is designed around a novel straight-line mechanism, and kinematic and force analyses are conducted to detail the properties of the design and provide a method for optimization. The hand has much application potential in the industrial field.

Chao Luo, Wenzeng Zhang

Towards Dialogue-Based Navigation with Multivariate Adaptation Driven by Intention and Politeness for Social Robots

Service robots need to show appropriate social behaviour in order to be deployed in social environments such as healthcare, education, and retail. Among the main capabilities robots should have are navigation and conversational skills. If a person is impatient, they might want the robot to navigate faster, and vice versa. Linguistic features that indicate politeness can provide social cues about a person’s patience or impatience. The novelty presented in this paper is to dynamically incorporate politeness into robotic dialogue systems for navigation: understanding the politeness in users’ speech can be used to modulate the robot’s behaviour and responses. We therefore developed a dialogue system for navigating an indoor environment that produces different robot behaviours and responses based on the user’s intention and degree of politeness. We deploy and test our system on the Pepper robot, which adapts to changes in the user’s politeness.
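One simple way to modulate navigation from a politeness estimate, in the spirit of the abstract, is a linear mapping from a politeness score to speed. The score range and speed limits here are illustrative assumptions, not values from the paper.

```python
def adapt_speed(politeness, v_min=0.3, v_max=1.0):
    """Map a politeness score in [0, 1] (1 = very polite, read as
    patient) to a linear navigation speed in m/s: impatient users
    get faster motion. Ranges are illustrative placeholders."""
    p = min(max(politeness, 0.0), 1.0)  # clamp out-of-range scores
    return v_max - p * (v_max - v_min)
```

A dialogue system would recompute the score each turn, so the robot speeds up as the user's phrasing becomes more curt.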

Chandrakant Bothe, Fernando Garcia, Arturo Cruz Maya, Amit Kumar Pandey, Stefan Wermter

Design and Implementation of Shoulder Exoskeleton Robot

An exoskeleton robot for shoulder rehabilitation training is designed for patients with hemiplegia due to stroke. With respect to the physiology of the human upper limb, a series of mechanical structures are integrated: a retractable link accommodates the upper arm sizes of different people; an adjustable module relieves the discomfort caused by the scapulohumeral rhythm; and a gravity compensation module ensures patient safety. The joint torques and power of the robot are then estimated to determine the hardware and materials and to build the robot prototype. The robot and a PC form a CAN bus communication network, and the robot’s control software is designed on the ROS (Robot Operating System) platform to realize basic rehabilitation training of the patient’s shoulder flexion/extension, abduction/adduction, and internal/external rotation. Finally, the comfort of the exoskeleton robot is evaluated through the actual experience of healthy people in the form of a questionnaire. The test results verify the rationality and comfort of the exoskeleton robot to some extent.

Wang Boheng, Chen Sheng, Zhu Bo, Liang Zhiwei, Gao Xiang

Cooperative Control of Sliding Mode for Mobile Manipulators

This article describes the design and implementation of a centralized cooperative control algorithm for mobile manipulators (a differential-platform mobile manipulator and an omnidirectional-platform manipulator) for the execution of diverse tasks in which the participation of two or more robots is necessary, e.g., handling or transporting heavy objects, or keeping a platform level at a fixed height. For this, a sliding mode control technique is applied to a fixed operating point located on a virtual line generated between the end effectors of the manipulator arms. The proposed controller is validated using the Lyapunov stability criterion, and simulations are performed to validate its performance with two heterogeneous manipulators.

Jorge Mora-Aguilar, Christian P. Carvajal, Jorge S. Sánchez, Víctor H. Andaluz

When Should a Robot Apologize? Understanding How Timing Affects Human-Robot Trust Repair

If robots are to occupy a space in the human social sphere, then the importance of trust naturally extends to human-robot interactions. Past research has examined human-robot interaction from a number of perspectives, ranging from overtrust to trust repair. Studies by [15] have suggested a relationship between the success of a trust repair method and the time at which it is employed. Additionally, studies have shown a potentially dangerous tendency in humans to trust robotic systems beyond their operational capacity. It therefore becomes essential to explore the factors that affect trust in greater depth. The study presented in this paper builds upon previous work to gain insight into the reasons behind the success of trust repair methods and their relation to timing. Our results show that delayed trust repair is more effective than early repair, which is consistent with previous results. In the absence of an emergency, participants’ decisions were similar to those of a random selection. Additionally, there seems to be a strong influence of attention on participants’ decisions to follow the robot.

Mollik Nayyar, Alan R. Wagner

“Let There Be Intelligence!”- A Novel Cognitive Architecture for Teaching Assistant Social Robots

This paper proposes a novel cognitive architecture specialized for Teaching Assistant (TA) social robotic platforms. Designing such architectures could lead to a more systematic approach to using TA robots. The proposed architecture consists of four main blocks: Perception, Memory, Logic, and Action Units. It helps robots perform a variety of visual, acoustic, and spatial sub-tasks based on cognitive theories and modern educational methods, and it provides a way for an operator to control the robot with defined plans and teaching scenarios. The architecture is modular, minimalistic, extendable, and ROS compatible, and it can help teaching-assistant robots to be involved systematically in common educational scenarios. Our preliminary exploratory study was a case study that adopted the proposed architecture for RASA, a social robotic platform aimed at teaching Persian Sign Language (PSL) to hearing-impaired children. In the evaluation, we observed that the architecture’s capabilities adequately matched RASA’s needs for its applications in teaching sign language.

Seyed Ramezan Hosseini, Alireza Taheri, Ali Meghdari, Minoo Alemi

Virtual Social Toys: A Novel Concept to Bring Inanimate Dolls to Life

Various social robotics studies aim to give robotic gadgets an interactive retrofit. In this research, we present a new perspective on using interactive animated toys in a virtual environment. Such social toys would be able to interact with children in the same way as social robots. To this end, a 3D scanner setup and interface was designed, fabricated, and implemented to extract the 3D model of an inanimate toy for use in a virtual reality game. Children were asked to participate in a fifteen-minute game scenario and play with the interactive virtual social toy. The Believability, Acceptance/Attractiveness, and Vibrancy/Interaction of the virtual social toy were then assessed quantitatively through a questionnaire. Our preliminary findings showed high scores on all three criteria from the children. However, two-sample t-tests indicated a significant difference between the children’s and parents’ viewpoints regarding the effectiveness of this generation of new toys.

Alireza Taheri, Mojtaba Shahab, Ali Meghdari, Minoo Alemi, Ali Amoozandeh Nobaveh, Zeynab Rokhi, Ali Ghorbandaei Pour

Modular Robotic System for Nuclear Decommissioning

Because of radioactivity, operations in nuclear environments must be carried out by robotic systems. In this paper, a modular robot system for nuclear environment operations is developed that is easy to maintain, reconfigurable, and reliable. In addition, the paper proposes a vision-based method for robot tool changing that works even when the robot’s base coordinates are inaccurate. Prototype tests prove the feasibility of the robot system and the validity of the vision-based tool changing method.

Yuanyuan Li, Shuzhi Sam Ge, Qingping Wei, Dong Zhou, Yuanqiang Chen

A New Model to Enhance Robot-Patient Communication: Applying Insights from the Medical World

Socially assistive robots need to be able to communicate effectively with patients in healthcare applications. This paper outlines research on doctor-patient communication and applies the principles to robot-patient communication. Effective communication skills for physicians include information sharing, relationship building, and shared decision making. Little research to date has systematically investigated the components of physician communication skills as applied to robots in healthcare domains. We propose a new model of robot-patient communication and put forward a research agenda for advancing knowledge of how robots can communicate effectively with patients to influence health outcomes.

Elizabeth Broadbent, Deborah Johanson, Julie Shah

Towards Crossmodal Learning for Smooth Multimodal Attention Orientation

Orienting attention towards another person of interest is a fundamental social behaviour prevalent in human-human interaction and crucial in human-robot interaction. This orientation behaviour is often governed by the received audio-visual stimuli. We present an adaptive neural circuit for multisensory attention orientation that combines auditory and visual directional cues. The circuit learns to integrate sound direction cues, extracted via a model of the peripheral auditory system of lizards, with visual directional cues obtained via deep-learning-based object detection. We implement the neural circuit on a robot and demonstrate that integrating multisensory information via the circuit generates appropriate motor velocity commands that control the robot's orientation movements. We experimentally validate the adaptive neural circuit with a co-located human target and a loudspeaker emitting a fixed tone.

Frederik Haarslev, David Docherty, Stefan-Daniel Suvei, William Kristian Juel, Leon Bodenhagen, Danish Shaikh, Norbert Krüger, Poramate Manoonpong

A Two-Step Framework for Novelty Detection in Activities of Daily Living

The ability to recognize and model human Activities of Daily Living (ADL) and to detect possible deviations from regular patterns, or anomalies, constitutes an enabling technology for developing effective Socially Assistive Robots. Traditional approaches aim at recognizing anomalous behavior by means of machine-learning techniques trained on datasets of anomalies, such as falls. The main problem with these approaches lies in the difficulty of generating such datasets. In this work, we present a two-step framework implementing a new strategy for the detection of ADL anomalies. Rather than detecting anomalous behaviors directly, we aim at identifying those that diverge from normal ones. In a first step, a deep learning technique determines the most probable ADL class for the action performed by the subject. In a second step, a Gaussian Mixture Model is used to compute the likelihood that the action is normal within that class. We performed an experimental validation of the proposed framework on a public dataset. Results are very close to the best traditional approaches, while offering the significant advantage that datasets of normal ADL are much easier to create.
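The second step of such a framework can be sketched as follows. This is a minimal illustration assuming a diagonal-covariance Gaussian Mixture Model per ADL class and a per-class log-likelihood threshold; the classifier, feature encoding, and threshold values are placeholders, not the authors' implementation:

```python
import math

def gmm_log_likelihood(x, weights, means, variances):
    """Log-likelihood of a feature vector x under a diagonal-covariance GMM."""
    comps = []
    for w, mu, var in zip(weights, means, variances):
        log_norm = -0.5 * sum(math.log(2 * math.pi * v) for v in var)
        log_exp = -0.5 * sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mu, var))
        comps.append(math.log(w) + log_norm + log_exp)
    m = max(comps)  # log-sum-exp trick for numerical stability
    return m + math.log(sum(math.exp(c - m) for c in comps))

def is_normal(x, classifier, gmms, thresholds):
    """Two-step check: classify the action into an ADL class, then test the
    action's likelihood against that class's GMM of normal executions."""
    pred = classifier(x)                     # step 1: most probable ADL class
    ll = gmm_log_likelihood(x, *gmms[pred])  # step 2: likelihood under class model
    return ll >= thresholds[pred]            # below threshold -> anomaly
```

Only normal executions are needed to fit each per-class GMM, which is the practical advantage the abstract highlights.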

Silvia Rossi, Luigi Bove, Sergio Di Martino, Giovanni Ercolano

Design of Robotic System for the Mannequin-Based Disinfection Training

The purpose of skills training is to enable trainees to master skills and apply them efficiently in their work. As one of the most important skills in nursing education, disinfection is often trained on a medical mannequin. The effect of mannequin-based disinfection training is often not ideal because novice nurses receive no feedback during training. To solve this problem, we design a new robotic system for mannequin-based disinfection training by retrofitting the medical mannequin. The robotic system is made up of a force processing module, a location processing module, and a data fusion module. The force processing module handles the vertical downward force applied to the mannequin by the forceps holding a sterilized cotton ball, and transmits the force, smoothed by a moving average filter, to the data fusion module. The location processing module transmits the planar position of the forceps to the data fusion module. The data fusion module integrates the location and force information to generate the pressure and the three-dimensional coordinates of the forceps on the mannequin. In tests at the West China Medical School, the output data of the newly designed robotic system met the requirements on sensing sensitivity and working range for mannequin-based disinfection training. The new robotic system could also be applied in other medical training fields, e.g. massage training and acupuncture training.
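The smoothing step in the force processing module can be illustrated with a simple fixed-window moving average filter; this is a minimal sketch, and the window size here is an assumption rather than the paper's value:

```python
from collections import deque

class MovingAverageFilter:
    """Fixed-window moving average, as used to smooth a raw force signal
    before data fusion (window size is an illustrative assumption)."""

    def __init__(self, window=5):
        # deque with maxlen automatically discards the oldest sample
        self.buf = deque(maxlen=window)

    def update(self, sample):
        """Add one raw force sample and return the current smoothed value."""
        self.buf.append(sample)
        return sum(self.buf) / len(self.buf)
```

Each new force reading is pushed through `update`, so the fused output lags slightly but suppresses sensor noise.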

Mao Xu, Shuzhi Sam Ge, Hongkun Zhou

Learning to Win Games in a Few Examples: Using Game-Theory and Demonstrations to Learn the Win Conditions of a Connect Four Game

Teaching robots new skills using minimal time and effort has long been a goal of artificial intelligence. This paper investigates the use of game-theoretic representations to represent interactive games and to learn their win conditions by interacting with a person. Game theory provides the formal underpinnings needed to represent the structure of a game, including its goal conditions. Learning by demonstration has long sought to leverage a robot's interactions with a person to foster learning. This paper combines these two approaches, allowing a robot to learn a game-theoretic representation by demonstration. We demonstrate how a robot can be taught the win conditions of the game Connect Four using a single demonstration and a few trial examples with a question-and-answer session led by the robot. Our results demonstrate that the robot can learn any win condition for the standard rules of Connect Four, after demonstration by a human, irrespective of the color or size of the board and the chips. Moreover, if the human demonstrates a variation of the win conditions, we show that the robot can learn the changed win condition.
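The learned win condition ultimately amounts to a predicate over board states. A generic n-in-a-row check of the kind the robot must acquire can be sketched as follows; the board encoding (a list of rows, with 0 for empty) is an assumption for illustration, not the paper's representation:

```python
def has_win(board, player, n=4):
    """Return True if `player` has n pieces in a row (horizontally,
    vertically, or along either diagonal) on a 2-D board."""
    rows, cols = len(board), len(board[0])
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in directions:
                # candidate line of n cells starting at (r, c)
                cells = [(r + i * dr, c + i * dc) for i in range(n)]
                if all(0 <= rr < rows and 0 <= cc < cols
                       and board[rr][cc] == player for rr, cc in cells):
                    return True
    return False
```

Because `n` and the board size are parameters, the same predicate covers the variations of the win condition and board dimensions mentioned in the abstract.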

Ali Ayub, Alan R. Wagner

Semantics Comprehension of Entities in Dictionary Corpora for Robot Scene Understanding

This paper proposes a method to help robots understand object semantics. The presented method can enhance a robot's performance and efficiency when working from ambiguous instructions to interact with unfamiliar objects. Specifically, the proposed method can reduce the complexity of assigning functions, properties, or other characteristics to each object with which a robot may interact in a social environment. The method assists the robot in comprehending a scene based on semantic analysis of dictionary definitions. The proposed semantics comprehension method includes the comprehension of dictionary definitions, the formulation of logic representations, and the generation of natural-language descriptions. The applicability of the approach has been demonstrated, and model performance has been evaluated in terms of precision, recall, and F-score. Both the logic representation results and the natural-language representation results are reported.

Fujian Yan, Yinlong Zhang, Hongsheng He

The CPS Triangle: A Suggested Framework for Evaluating Robots in Everyday Life

This paper introduces a conceptual framework: the CPS triangle, which has evolved over four years of research on ‘older people meet robots’. It is a synthesis of domestication theory, modern social practice theory and empirical data. Case studies on the domestication of one current technology, the robotic vacuum cleaner, and two emergent technologies, the eHealth system and the service robot, provide empirical evidence. Considering ‘older people meet robots’ within the framework of the proposed CPS triangle can help us to understand older people’s domestication or rejection of robots. In the CPS triangle, C represents the cognitive dimension; P, the practical dimension; and S, the symbolic dimension. The CPS triangle is meant to serve as a tool rather than a rule. It is recommended that the CPS triangle be tested more widely in a range of contexts. It will require adaptation and customisation for the context of use.

Susanne Frennert

Feature-Based Monocular Dynamic 3D Object Reconstruction

Dynamic 3D object reconstruction is becoming increasingly crucial to various intelligent applications. Most existing algorithms, despite their accuracy, suffer from high cost and complex computations. In this paper, we propose a novel framework for dynamic 3D object reconstruction with a single camera in an attempt to address this problem. The gist of the proposed approach is to reduce the reconstruction problem to a pose estimation problem: we reconstruct the whole object by estimating the poses of its topological segments. Experiments validate the effectiveness of the proposed method in comparison with several state-of-the-art methods.

Shaokun Jin, Yongsheng Ou

Adaptive Control of Human-Interacted Mobile Robots with Velocity Constraint

In this paper, we present an adaptive control scheme for mobile robots moving in human environments under velocity constraints. The mobile robot is commanded to track a desired trajectory while guaranteeing satisfaction of the velocity constraints. Neural networks are constructed to deal with unstructured and unmodeled dynamic nonlinearities. A Lyapunov function is employed in the control design to establish the validity of the proposed approach. The effectiveness of the proposed framework is verified through simulation studies.

Qing Xu, Shuzhi Sam Ge

Attributing Human-Likeness to an Avatar: The Role of Time and Space in the Perception of Biological Motion

Despite well-developed cognitive control mechanisms in most healthy adult humans, attention can still be captured by irrelevant distracting stimuli in the environment. When it comes to artificial agents, such as humanoid robots, one might assume that their attention is "programmed" to follow a task, so being distracted by attention-capturing stimuli would not be expected. We were interested in whether a behavior that reflects attentional capture would increase a humanoid robot's perception as human-like. We implemented human behaviors in a virtual version of the iCub robot. Twenty participants' head movements were recorded through an inertial sensor during a solitaire card game, while a series of distracting videos were presented on a screen in their peripheral field of view. Eight participants were selected, and their behavioral reactions (i.e. inertial sensor coordinates, etc.) were extracted and implemented in the simulator. In Experiment 2, twenty-four new participants were asked to rate the human-likeness of the avatar's movements. We examined whether movement parameters (i.e. angle amplitude, overall time spent on a distractor) influenced participants' ratings of human-likeness, and whether there was any correlation with sociodemographic factors (i.e. gender, age). Results showed a gender effect on human-likeness ratings; thus, we computed a GLM analysis including gender as a covariate. A main effect of the time of movement was found. We conclude that humans rely more on temporal than on spatial information when evaluating properties (specifically, human-likeness) of the biological motion of humanoid-shaped avatars.

Davide Ghiglino, Davide De Tommaso, Agnieszka Wykowska

Dancing Droids: An Expressive Layer for Mobile Robots Developed Within Choreographic Practice

In viewing and interacting with robots in social settings, users attribute character traits to the system. This attribution often occurs by coincidence as a result of past experiences, and not by intentional design. This paper presents a flexible, expressive prototype that augments an existing mobile robot platform in order to create intentional attribution through a previously developed design methodology, resulting in an altered perception of the non-anthropomorphic robotic system. The prototype allows customization through five modalities: customizable eyes, a simulated breath motion, movement, color, and form. Initial results with human subject audience members show that, while participants found the robot likable, they did not consider it anthropomorphic. Moreover, individual viewers saw shifts in perception according to performer interactions. Future work will leverage this prototype to modulate the reactions viewers might have to a mobile robot in a variety of environments.

Ishaan Pakrasi, Novoneel Chakraborty, Catie Cuan, Erin Berl, Wali Rizvi, Amy LaViers

Semantic-Based Interaction for Teaching Robot Behavior Compositions Using Spoken Language

By enabling users to teach behaviors to robots, social robots become more adaptable, and therefore more acceptable. We improved an application for teaching behaviors to support conditions closer to the real world: it supports spoken instructions and remains compatible with the robot's other purposes. We introduce a novel architecture that enables five distinct algorithms to compete with each other, and a novel teaching algorithm that remains robust under these constraints: using linguistics and semantics, it can recognize when the dialogue context is adequate. We carried out an adaptation of a previous experiment to produce comparable results, demonstrated that all participants managed to teach new behaviors, and partially verified our hypotheses about how users naturally break down teaching instructions.

Victor Paléologue, Jocelyn Martin, Amit Kumar Pandey, Mohamed Chetouani

Comfortable Passing Distances for Robots

If autonomous robots are expected to operate in close proximity with people, they should be able to deal with human proxemics and social rules. Earlier research has shown that robots should respect personal space when approaching people, although the quantitative details vary with robot model and direction of approach. It would seem that similar considerations apply when a robot is only passing by, but direct measurement of the comfort of the passing distance is still missing. Therefore the current study measured the perceived comfort of varying passing distances of the robot on each side of a person in a corridor. It was expected that comfort would increase with distance until an optimum was reached, and that people would prefer a left passage over a right passage. Results showed that the level of comfort did increase with distance up to about 80 cm, but after that it remained constant. There was no optimal distance. Surprisingly, the side of passage had no effect on perceived comfort. These findings show that robot proxemics for passing by differ from approaching a person. The implications for modelling human-aware navigation and personal space models are discussed.

Margot M. E. Neggers, Raymond H. Cuijpers, Peter A. M. Ruijten

Reduced Sense of Agency in Human-Robot Interaction

In the presence of others, sense of agency (SoA), i.e. the perceived relationship between our own actions and external events, is reduced. This effect is thought to contribute to diffusion of responsibility. The present study aimed at examining humans’ SoA when interacting with an artificial embodied agent. Young adults participated in a task alongside the Cozmo robot (Anki Robotics). Participants were asked to perform costly actions (i.e. losing various amounts of points) to stop an inflating balloon from exploding. In 50% of trials, only the participant could stop the inflation of the balloon (Individual condition). In the remaining trials, both Cozmo and the participant were in charge of preventing the balloon from bursting (Joint condition). The longer the players waited before pressing the “stop” key, the smaller amount of points that was subtracted. However, in case the balloon burst, participants would lose the largest amount of points. In the joint condition, no points were lost if Cozmo stopped the balloon. At the end of each trial, participants rated how much control they perceived over the outcome of the trial. Results showed that when participants successfully stopped the balloon, they rated their SoA lower in the Joint than in the Individual condition, independently of the amount of lost points. This suggests that interacting with robots affects SoA, similarly to interacting with other humans.

Francesca Ciardo, Davide De Tommaso, Frederike Beyer, Agnieszka Wykowska

Comparing the Effects of Social Robots and Virtual Agents on Exercising Motivation

Preventing diseases of affluence is one of the major challenges for our future society. Researchers have introduced robots as tools to support people in dieting or rehabilitation tasks. However, deploying robots as exercising companions is cost-intensive. Therefore, in our current work, we investigate how the embodiment of an exercising partner influences the motivation to persist in an abdominal plank exercise. We analyzed and compared data from previous experiments on exercising with robots and virtual agents. The results show that participants had longer exercising times when paired with a robot companion than with a virtual agent, but not compared to a human partner. However, participants perceived the robot partner as more likable than a human partner. These results have implications for SAR practitioners and are important for the use of SAR to promote physical activity.

Sebastian Schneider, Franz Kummert

The Relevance of Social Cues in Assistive Training with a Social Robot

This paper examines whether social cues, such as facial expressions, can be used to adapt and tailor a robot-assisted training in order to maximize performance and comfort. Specifically, this paper serves as a basis in determining whether key facial signals, including emotions and facial actions, are common among participants during a physical and cognitive training scenario. In the experiment, participants performed basic arm exercises with a social robot as a guide. We extracted facial features from video recordings of participants and applied a recursive feature elimination algorithm to select a subset of discriminating facial features. These features are correlated with the performance of the user and the level of difficulty of the exercises. The long-term aim of this work, building upon the work presented here, is to develop an algorithm that can eventually be used in robot-assisted training to allow a robot to tailor a training program based on the physical capabilities as well as the social cues of the users.

Neziha Akalin, Andrey Kiselev, Annica Kristoffersson, Amy Loutfi

Attitudes of Heads of Education and Directors of Research Towards the Need for Social Robotics Education in Universities

We explored the attitudes of Heads of Education and Directors of Research towards the need for social robotics courses in Finland. The methods consisted of a cross-sectional survey (n = 21), and the data were analyzed with descriptive methods and Pearson correlation tests. The results showed that respondents' attitudes towards social robots were positive, and they stated that robotics courses would be essential for universities. The respondents reported that the social service and healthcare sector will use social robots in the near future, but that more training sessions are needed. So far, universities have offered only a few applied robotics courses for healthcare students. This study also found that the surveyed universities have not yet taken into account the development of service and social robotics in the healthcare sector.

Kimmo J. Vänni, John-John Cabibihan, Sirpa E. Salin

Coordinated and Cooperative Control of Heterogeneous Mobile Manipulators

This paper proposes a multilayer scheme for the cooperative control of $$ n \ge 2 $$ heterogeneous mobile manipulators that allows them to transport a common object in a coordinated way; to this end, the kinematic model of each mobile manipulator robot is derived. Stability and robustness are demonstrated using Lyapunov theory in order to obtain an asymptotically stable controller. Finally, results are presented to evaluate the performance of the proposed control, confirming the controller's ability to solve different movement problems.

María F. Molina, Jessica S. Ortiz

Robotic Healthcare Service System to Serve Multiple Patients with Multiple Robots

This paper presents a robot system for a healthcare environment, especially for a family doctor practice. The system includes a sensor manager and a robot system for a general practice, enabling multiple robots to serve multiple patients at one time by sharing vital-signs devices. A receptionist robot assigns each patient to a nurse assistant robot using a patient identification system. Our previous work included three subsystems: a receptionist robot, a nurse assistant robot, and a medical server. However, it could serve only one patient and one vital-signs device at a time, which left the remaining vital-signs devices idle and kept patients waiting. In addition, patients had to enter their identification data into the robot themselves, which took considerable time and introduced data-entry errors. We implemented the new system with multiple robots and a new patient identification system using QR codes, and conducted a pilot study to confirm the new system's functionality. The results show that the new system communicates well with multiple robots to support multiple patients, identifying them by QR code and measuring their vital signs by sharing the devices.

Ho Seok Ahn, Sheng Zhang, Min Ho Lee, Jong Yoon Lim, Bruce A. MacDonald

Perception of Control in Artificial and Human Systems: A Study of Embodied Performance Interactions

Robots in human-facing environments will move alongside human beings. This movement has both functional and expressive meaning and plays a crucial role in human perception of robots. Secondarily, how the robot is controlled, through methods like movement or programming and drivers like oneself or an algorithm, factors into human perceptions. This paper outlines the use of an embodied movement installation, "The Loop", to understand perceptions generated between humans and various technological agents, including a NAO robot and a virtual avatar. Participants were questioned about their perceptions of control in the various agents. Initial results with human subjects show an increased likelihood to rate a robot and a robotic shadow as algorithmically controlled, versus a human performer and a human-shaped VR avatar, which were more likely rated as controlled by a human actor or split between algorithm and human actor. Participants also showed a tendency to rate their own performance in the exercise as needing improvement. Qualitative data, collected in the form of text and drawings, was open-ended and abstract. Drawings of humans and geometric shapes frequently appeared, as did the words "mirror", "movement", and variations on the word "awareness".

Catie Cuan, Ishaan Pakrasi, Amy LaViers

A Robotic Brush with Surface Tracing Motion Applied to the Face

The purpose of this research is to develop an assistive robot for applying make-up. The developed robot consists of a frame for the human face and a robotic brush unit. The brush unit holds a cosmetic brush actuated by two motors and a spring: the motors control the direction of the brush on the face, while the spring regulates the force of the brush against the face. The robot is designed to interact safely with the human face by reducing the complexity of its components and control method. We tested the robot on a mannequin head to verify its performance and safety. A pilot study with a single participant was also conducted to evaluate the human-robot interaction.

Yukiko Homma, Kenji Suzuki

MagicHand: In-Hand Perception of Object Characteristics for Dexterous Manipulation

An important challenge in dexterous grasping and manipulation is to perceive the characteristics of an object, such as fragility, rigidity, texture, mass, and density. In this paper, a novel way is proposed to find these characteristics, which help in deciding grasping strategies. We collected near-infrared (NIR) spectra of objects, classified the spectra to perceive their materials, and then looked up the characteristics of the perceived material in a material-to-characteristics table. NIR spectra of six materials, including ceramic, stainless steel, wood, cardboard, plastic, and glass, were collected using a SCiO sensor. A Multi-Layer Perceptron (MLP) neural network was implemented to classify the spectra, and a material-to-characteristics table was established to map each perceived material to its characteristics. The experimental results achieve 99.96% accuracy in material recognition. In addition, a grasping experiment was performed in which a robotic hand attempted to grasp two objects that shared similar shapes but were made of different materials. The results showed that the robotic hand was able to improve its grasping strategies based on the characteristics perceived by our algorithm.
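The perception pipeline (spectrum in, material class and characteristics out) can be sketched as below. This is a minimal single-hidden-layer MLP forward pass with a hypothetical characteristics table; the layer sizes, weights, and table entries are illustrative assumptions, not the trained model from the paper:

```python
import numpy as np

# Hypothetical material-to-characteristics table (entries are illustrative).
CHARACTERISTICS = {
    "glass":   {"fragile": True,  "rigid": True},
    "plastic": {"fragile": False, "rigid": False},
}

def mlp_forward(spectrum, W1, b1, W2, b2):
    """Single-hidden-layer MLP: ReLU hidden layer, softmax over materials."""
    h = np.maximum(0.0, spectrum @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())  # shifted for numerical stability
    return e / e.sum()

def perceive(spectrum, weights, labels):
    """Classify an NIR spectrum, then look up grasp-relevant characteristics."""
    probs = mlp_forward(spectrum, *weights)
    material = labels[int(np.argmax(probs))]
    return material, CHARACTERISTICS.get(material, {})
```

The grasp planner then conditions its strategy on the returned characteristics dictionary (e.g. lighter grip force for fragile materials).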

Hui Li, Yimesker Yihun, Hongsheng He

Robots and Human Touch in Care: Desirable and Non-desirable Robot Assistance

Care robots are often seen as introducing a risk to human, touch-based care. In this study, we analyze care workers' opinions on robot assistance in elderly services and relate them to the idea of an embodied relationship between caregiver, care receiver, and technology. Our empirical data consist of a survey of professional care workers (n = 3800), including registered and practical nurses working in elderly care. The questionnaire consisted of scenarios of robot assistance in care work and elderly services, and the respondents were asked to evaluate whether they consider them desirable. The care workers were significantly more approving of robot assistance in lifting heavy materials than in moving patients. Generally, the care workers were reserved towards the idea of using autonomous robots in tasks that typically involve human touch, such as assisting the elderly in the bathroom. Stressing the importance of presence and touch in human care, we apply ideas from the phenomenology of the body to understand the envisioned robot-human constellations in care work.

Jaana Parviainen, Tuuli Turja, Lina Van Aerschot

The Effects of Driving Agent Gaze Following Behaviors on Human-Autonomous Car Interaction

Autonomous cars have been gaining attention as a future transportation option, as they promise a reduction in human error and a safer, more energy-efficient, and more comfortable mode of transportation. However, eliminating human involvement may negatively impact the adoption of autonomous cars by impairing perceived safety and the enjoyment of driving. In order to achieve a reliable interaction between an autonomous car and a human operator, the car should evince intersubjectivity, implying that it possesses the same intentions as the human operator. One critical social cue by which humans understand the intentions of others is eye gaze behaviour. This paper proposes an interaction method that utilizes the eye gazing behaviours of an in-car driving agent platform to reflect the intentions of a simulated autonomous car, with the potential of enabling human operators to perceive the autonomous car as a social entity. We conducted a preliminary experiment to investigate whether an autonomous car is perceived as possessing the same intentions as the human operator through the gaze-following behaviours of the driving agents, compared to random gazing and to not using the driving agents at all. The results revealed that the gaze-following behaviour of the driving agents increases the perception of intersubjectivity. The proposed interaction method also led the autonomous system to be perceived as safer and more enjoyable.

Nihan Karatas, Shintaro Tamura, Momoko Fushiki, Michio Okada

Virtual Reality Social Robot Platform: A Case Study on Arash Social Robot

The role of technology in education and clinical therapy cannot be disregarded. Employing robots and computer-based devices as competent and advanced learning tools for children indicates that there is a role for technology in overcoming certain weaknesses of common therapy and educational procedures. In this paper, we present a new platform for a virtual reality social robot (VR social robot) that could be used as an auxiliary device or a replacement for real social robots. To support the idea, a VR robot based on the real social robot Arash was designed and developed in a virtual reality environment. "Arash" is a social robot buddy particularly designed and realized to improve learning, education, entertainment, and clinical therapy for children with chronic diseases. The acceptance and eligibility of the actual robot among these children has been previously investigated. In the present study, we investigated the acceptability and eligibility of a virtual model of the Arash robot among twenty children. For a fair comparison, a similar experiment was also performed with the real Arash robot. The experiments were conducted in the form of storytelling. The initial results are promising and suggest that the acceptance of a VR robot is fairly comparable to that of the real robot, since the performance of the VR robot did not differ significantly from that of the real Arash robot. Thereby, this platform has the potential to be a substitute or an auxiliary solution for a real social robot.

Azadeh Shariati, Mojtaba Shahab, Ali Meghdari, Ali Amoozandeh Nobaveh, Raman Rafatnejad, Behrad Mozafari

Novel Siamese Robot Platform for Multi-human Robot Interaction

Service robots have been designed to support people in public places, e.g. schools, museums, and shopping malls. However, existing one-headed robots have a limitation: they cannot interact with a person behind them while interacting with another person in front of them. To overcome this limitation, we propose a novel Siamese robot platform inspired by "Siamese twins" in nature. The proposed Siamese robot consists of two heads, two arms, and a body, covering all directions. Two separate individuals share the arms and the body, and we therefore also propose three coordination schemes to mediate between the individuals. Experiments were carried out with a physical Siamese robot, "Siambot", in a real environment, and the results show that the proposed Siamese robot is efficient and effective in multi-human robot interaction. A video showing the strengths of the Siamese robot is available online.

Woo-Ri Ko, Jong-Hwan Kim

An Attention-Aware Model for Human Action Recognition on Tree-Based Skeleton Sequences

Skeleton-based human action recognition (HAR) has attracted much research attention because of its robustness to variations in location and appearance. However, most existing methods treat the whole skeleton as a fixed pattern, in which the differing importance of skeleton joints for action recognition is not considered. In this paper, a novel CNN-based attention-aware network is proposed. First, to describe the semantic meaning of skeletons and learn the discriminative joints over time, an attention-generating network named Global Attention Network (GAN) is proposed to generate attention masks. Then, to encode the spatial structure of skeleton sequences, we design a tree-based traversal (TTTM) rule, which can represent the skeleton structure, as a convolution unit of the main network. Finally, the GAN and the main network are cascaded into a whole network that is trained in an end-to-end manner. Experiments show that the TTTM and GAN complement each other, and the whole network achieves an efficient improvement over the state of the art; e.g., its classification accuracy was 83.6% and 89.5% on the NTU-RGBD CV and CS datasets, outperforming other methods.
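The role of an attention mask can be illustrated with a toy pooling step: per-joint features are reweighted by a softmax mask so that discriminative joints dominate the pooled representation. This is a schematic sketch of the general idea, not the paper's GAN architecture:

```python
import numpy as np

def apply_attention(joint_features, mask_logits):
    """Pool per-joint features under a softmax attention mask.

    joint_features: (num_joints, feat_dim) array of features per skeleton joint
    mask_logits:    (num_joints,) unnormalized attention scores
    """
    e = np.exp(mask_logits - mask_logits.max())  # shifted for stability
    mask = e / e.sum()                           # one weight per joint
    # weight each joint's feature vector, then sum over joints
    return (joint_features * mask[:, None]).sum(axis=0)
```

With uniform logits this reduces to mean pooling; with a peaked mask the pooled vector is dominated by the most discriminative joint.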

Runwei Ding, Chang Liu, Hong Liu

Predicting the Target in Human-Robot Manipulation Tasks

We present a novel approach for fast prediction of human reaching motion in the context of human-robot collaboration in manipulation tasks. The method trains a recurrent neural network to process the three-dimensional hand trajectory and predict the intended target along with its certainty about the position. The network updates its estimate as it receives more observations, favoring the positions it is more certain about. To assess the proposed algorithm, we built a library of human hand trajectories reaching targets on a fine grid. Our experiments show the advantage of our algorithm over the state of the art in terms of classification accuracy.
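The running-estimate behaviour can be sketched with a minimal recurrent loop that, after each 3-D hand observation, emits a probability distribution over candidate targets. The cell type, layer sizes, and weights here are placeholders for the trained network described in the paper:

```python
import numpy as np

def rnn_step(h, x, Wx, Wh, b):
    """One recurrent update over a 3-D hand-position observation."""
    return np.tanh(x @ Wx + h @ Wh + b)

def predict_target(trajectory, cell_params, Wout, bout):
    """Process the hand trajectory step by step; after every observation,
    emit a softmax over candidate targets (the running estimate, which can
    sharpen as more of the reach is observed)."""
    Wx, Wh, b = cell_params
    h = np.zeros(Wh.shape[0])        # initial hidden state
    estimates = []
    for x in trajectory:
        h = rnn_step(h, x, Wx, Wh, b)
        logits = h @ Wout + bout
        e = np.exp(logits - logits.max())
        estimates.append(e / e.sum())
    return estimates
```

In use, the robot would act on the latest distribution in `estimates`, committing to a target once its probability exceeds a confidence threshold.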

Mahmoud Hamandi, Emre Hatay, Pooyan Fazli
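The paper trains a recurrent network for this; as a rough stand-in for the idea of sequentially refining a target estimate from hand observations, the sketch below uses a simple recursive Bayesian update over a hypothetical target grid. The target coordinates, trajectory, and Gaussian likelihood are all illustrative assumptions, not the authors' model.

```python
import math

# Hypothetical reachable targets (x, y, z) on a tabletop.
TARGETS = [(0.3, 0.0, 0.2), (0.0, 0.3, 0.2), (-0.3, 0.0, 0.2)]

def update_belief(belief, hand, sigma=0.1):
    """One recurrent step: re-weight each target by a Gaussian likelihood
    of the current hand position, then renormalize."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    weights = [b * math.exp(-dist(t, hand) ** 2 / (2 * sigma ** 2))
               for b, t in zip(belief, TARGETS)]
    total = sum(weights)
    return [w / total for w in weights]

# Hand moves toward the first target; the belief sharpens as data arrives.
belief = [1 / 3] * 3
for hand in [(0.0, 0.0, 0.0), (0.1, 0.0, 0.1), (0.2, 0.0, 0.15)]:
    belief = update_belief(belief, hand)
print(belief.index(max(belief)))  # → 0
```

A learned RNN plays the same role as this update rule but can capture velocity, curvature, and subject-specific reaching styles that a fixed likelihood cannot.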

Imitating Human Movement Using a Measure of Verticality to Animate Low Degree-of-Freedom Non-humanoid Virtual Characters

Imitating human motion on robotic platforms requires discarding some information about the original human mover, as robots have fewer degrees of freedom than a human. In an effort to generate low-degree-of-freedom motion profiles based on human movement, this paper uses verticality, computed from motion capture data, to animate virtual characters. After creating correspondences between the verticality metrics and the movement of three- and four-degree-of-freedom virtual characters, lay users were asked whether the characters' imitative movements were effective compared to pseudo-random motion profiles. The results showed a statistically significant preference for the verticality method on the higher-DOF character, and for the higher-DOF character over the lower-DOF character. Future work includes extending the verticality method to more virtual characters and developing other motion-generation methodologies so that users can evaluate a more diverse set of motion profiles. This work can help create automated protocols for replicating human motion, and intent, on artificial systems.

Roshni Kaushik, Amy LaViers
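One plausible way to compute a verticality measure from motion capture data is the cosine of the angle between a body segment and the world up axis. The hip-to-head choice and the z-up convention below are illustrative assumptions; the paper's actual metric may be defined differently.

```python
import math

def verticality(hip, head):
    """Cosine of the angle between the hip-to-head vector and the world
    up axis (z): 1.0 when fully upright, 0.0 when horizontal."""
    v = [h - p for h, p in zip(head, hip)]
    norm = math.sqrt(sum(c * c for c in v))
    return v[2] / norm

print(verticality((0, 0, 1.0), (0, 0, 1.7)))  # standing upright → 1.0
print(verticality((0, 0, 1.0), (1.0, 0, 1.0)))  # lying horizontal → 0.0
```

A scalar like this is attractive for low-DOF characters precisely because it compresses a full pose into one number that can drive a single joint or height parameter.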

Adaptive Neural Control for Robotic Manipulators Under Constrained Task Space

A fundamental requirement in human-robot interaction is the capability for motion in a constrained task space. This paper investigates control design for robotic manipulators subject to uncertainties and a constrained task space. Neural networks (NN) are employed to estimate the uncertainty of the robot dynamics, while an integral barrier Lyapunov functional (iBLF) is used to handle the effect of the constraint. With the proposed control strategy, the system output converges to an adjustable constrained space without violating the predefined constraint region. Semi-global uniform ultimate boundedness of the closed-loop system is guaranteed via Lyapunov stability theory. Simulation examples illustrate the performance of the proposed strategy.

Sainan Zhang, Zhongliang Tang
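As background on how a barrier Lyapunov function enforces a constraint, the common logarithmic form (the paper uses the integral variant, the iBLF, which acts on the state directly rather than on the tracking error alone) is

```latex
V(z) = \frac{1}{2}\,\log\frac{k_b^2}{k_b^2 - z^2}, \qquad |z| < k_b .
```

Here $z$ is the constrained error and $k_b$ the constraint bound: $V(z)$ is positive definite and grows unbounded as $|z| \to k_b$, so any control law that keeps $V$ bounded along closed-loop trajectories automatically keeps $z$ strictly inside the constraint region.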

Multi-pose Face Registration Method for Social Robot

This paper presents a multi-pose face registration method for social robot applications. A social robot is an autonomous robot that interacts and communicates with humans, and the first step in communicating with people is recognizing who they are, so a social robot fundamentally requires a face recognition function. Although many face recognition algorithms have been developed in the past, algorithms that are robust to real-time pose changes are still under development. In this paper, we propose a multi-pose face registration method for pose-invariant face recognition. To measure the robustness of the proposed method, we compared registering the frontal face only against registering multiple face poses, based on their respective recognition similarity values. The results confirmed that the similarity confidence value remains high when the proposed method is used, even when the face is presented in various poses.

Ho-Sub Yoon, Jaeyoon Jang, Jaehong Kim
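The core idea of multi-pose registration can be sketched as keeping several embeddings per identity (one per registered pose) and matching a probe against the best of them, rather than against a single frontal template. The two-dimensional embeddings, the gallery names, and the cosine matcher below are all hypothetical illustration, not the paper's pipeline.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical gallery: each person registered with embeddings from
# several poses (frontal, left, right) instead of the frontal face only.
GALLERY = {
    "alice": [[1.0, 0.0], [0.8, 0.6], [0.8, -0.6]],
    "bob":   [[0.0, 1.0], [0.6, 0.8], [-0.6, 0.8]],
}

def recognize(probe):
    """Return the identity whose best-matching registered pose
    is most similar to the probe embedding."""
    return max(GALLERY, key=lambda name: max(cosine(probe, e) for e in GALLERY[name]))

print(recognize([0.9, 0.5]))  # → alice
```

Because the max is taken over poses, a probe captured at an off-frontal angle can still score highly against the matching off-frontal template, which is the effect the similarity-confidence comparison in the abstract measures.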

