
About this book

This book provides an overview of recent research developments in the automation and control of robotic systems that collaborate with humans. Because a measure of human collaboration is necessary for the optimal operation of any robotic system, the contributors examine a broad selection of such systems to demonstrate the importance of the subject, particularly where the environment is prone to uncertainty or complexity. They show how human strengths such as high-level decision-making, flexibility, and dexterity can be combined with robotic precision and the ability to perform tasks repetitively or in dangerous environments.

The book focuses on quantitative methods and control design for guaranteed robot performance and balanced human experience from both physical human-robot interaction and social human-robot interaction. Its contributions develop and expand upon material presented at various international conferences. They are organized into three parts covering:

one-human–one-robot collaboration;

one-human–multiple-robot collaboration; and

human–swarm collaboration.

Individual topic areas include resource optimization (human and robotic), safety in collaboration, human trust in robots and decision-making when collaborating with robots, abstraction of swarm systems to make them suitable for human control, modeling and control of internal force interactions for collaborative manipulation, and the sharing of control between human and automated systems. Control and decision-making algorithms feature prominently in the text, importantly within the context of human factors and the constraints they impose. Applications such as assistive technology, driverless vehicles, cooperative mobile robots, manufacturing robots, and swarm robots are considered. Illustrative figures and tables are provided throughout the book.

Researchers and students working in control and in human–robot interaction will learn new methods for human–robot collaboration from this book and will find the cutting edge of the subject described in depth.



Chapter 1. Introduction

Human–robot interaction (HRI) spans almost every aspect of daily life, including aerospace, transportation, manufacturing, healthcare, and agriculture. As the first chapter of the book, we seek to provide an overview of the field of HRI, and in particular the recent trends in the modeling, control, and decision-making of human–robot collaboration (HRC) systems. HRC systems synergize humans’ advantages (e.g., high-level decision-making and dexterity) with robots’ capabilities (e.g., performing repetitive or dangerous tasks) and may positively impact human experience and trust in robotic systems, and hence improve overall task performance. Although significant progress has been made in the field of HRI from an interdisciplinary perspective, extant works lack quantitative analysis and performance guarantees for HRC systems. To fill this gap, the book provides a detailed presentation of recent HRI developments in the control and decision-making community. We also provide a detailed taxonomy in order to frame the contributions included in the book.
Yue Wang, Fumin Zhang

Chapter 2. Robust Shared-Control for Rear-Wheel Drive Cars

The chapter studies the shared-control problem for the kinematic model of a group of rear-wheel drive cars in a static (i.e., time-invariant) and in a dynamic (i.e., time-varying) environment. The design of the shared controller is based on either absolute positions or “correlated positions”, such as distances to obstacles and angle differences. The shared control is used to guarantee the safety of the car when the driver behaves dangerously. Formal properties of the closed-loop system with shared control are established by a Lyapunov-like analysis. We also consider uncertainties in the dynamics and prove that the shared controller is able to help the driver drive the car safely in the presence of bounded disturbances. Finally, the effectiveness of the controller is verified through MATLAB simulations of typical case studies, such as turning, overtaking, and emergency braking.
Jingjing Jiang, Alessandro Astolfi

Chapter 3. Baxter-On-Wheels (BOW): An Assistive Mobile Manipulator for Mobility Impaired Individuals

People with severe mobility impairments such as quadriplegia require help from human assistants to manage activities of daily living. Various assistive robotic devices have been proposed and some are commercially available, but they mostly have limited functionalities. We propose a cost-effective mobile robotic manipulator, BOW (Baxter-on-Wheels), suitable for operation by mobility-impaired but cognitively sound individuals. The BOW combines a human-friendly industrial robot (Baxter by Rethink Robotics) with a commercial electric wheelchair for an integrated and versatile, yet low-cost, system. The human user can typically only command a small number of degrees of freedom due to limitations of motion range or strength. To determine the complete robot motion, we propose a shared-control strategy blending the human command with autonomous redundancy resolution. The resolved velocity algorithm solves an online optimization matching the robot motion with the human-commanded motion. Additional considerations, such as collision prevention, singularity avoidance, satisfaction of joint limits, and exclusion of nonintuitive base motion, are incorporated as part of the optimization objective function or constraints. This constrained optimization problem is strictly convex and may be efficiently solved as a quadratic program. This approach allows multiple modes of operation, selectable by the user, including: end-effector position control, end-effector orientation control, combined position/orientation control, force control, and dual-arm control. We present experimental results of two illustrative applications on the BOW: end-effector position control for a pick-and-place task and a board-cleaning task involving both motion and force control.
In both cases, the user only provides a 3-degree-of-freedom command, but can still effectively manipulate the motion and force of the robot end-effector, while the autonomous controller provides intuitive and safe internal motion.
Lu Lu, John T. Wen
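The blending of a low-dimensional human command with autonomous redundancy resolution described above can be illustrated with a minimal damped least-squares sketch. This is not the authors' constrained QP: the damping weight, the random Jacobian, and the simple clipping to joint-rate limits are illustrative assumptions standing in for the chapter's full objective and constraints.

```python
import numpy as np

def resolve_velocity(J, v_human, damping=0.1, qdot_max=1.0):
    """Map a low-DOF human velocity command onto a redundant arm.

    Minimizes ||J qdot - v_human||^2 + damping * ||qdot||^2, a strictly
    convex quadratic with a closed-form solution, then clips to joint-rate
    limits (a crude stand-in for the QP constraints in the chapter).
    """
    n = J.shape[1]
    H = J.T @ J + damping * np.eye(n)      # positive-definite Hessian
    qdot = np.linalg.solve(H, J.T @ v_human)
    return np.clip(qdot, -qdot_max, qdot_max)

# Hypothetical example: a 3-DOF human command driving a 7-joint arm
# through a randomly generated Jacobian.
rng = np.random.default_rng(0)
J = rng.standard_normal((3, 7))
v_h = np.array([0.1, 0.0, -0.05])
qdot = resolve_velocity(J, v_h)
```

The damping term is what makes the problem strictly convex even at singular configurations, which is the same property the abstract invokes to justify solving the motion allocation as a quadratic program.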

Chapter 4. Switchings Between Trajectory Tracking and Force Minimization in Human–Robot Collaboration

A framework of switchings between trajectory tracking and force minimization is proposed for human–robot collaboration through physical interactions. In particular, the robot follows a predefined reference trajectory when there is no human intervention and its control objective is trajectory tracking; the human can intervene on the fly and move the robot to a target position by applying an interaction force, in which case the robot’s control objective becomes the minimization of the interaction force. Dynamic models of both the robot and the human are considered and their control objectives described. Switchings are realized by adaptation of the cost function. An optimal control problem is formulated to achieve the robot’s control objective, and solved by employing dynamic programming. The validity of the proposed framework is verified through simulation studies.
Yanan Li, Keng Peng Tee, Shuzhi Sam Ge

Chapter 5. Estimating Human Intention During a Human–Robot Cooperative Task Based on the Internal Force Model

Several successful strategies have been proposed for collaborative physical human–robot interactions (pHRI). However, few have recognized the role the internal force plays in making the collaboration smooth. The aim of this chapter is to investigate this role. In order to identify the characteristics of forces applied in a natural (human-like) interaction, we first study human–human cooperation in a dyadic reaching movement task. We propose a novel method to estimate the internal force and show that it has several advantages compared to existing methods. We then show that there is a component in the dyad’s internal force that is strongly correlated with the object’s velocity. We use this component as an abstract model for the human intent. This allows us to formulate a cooperation policy that allows the robot to properly respond to the human. We suggest that integrating this policy with existing cooperation strategies improves the collaboration between the human and the robot.
Ehsan Noohi, Miloš Žefran

Chapter 6. A Learning Algorithm to Select Consistent Reactions to Human Movements

A balance between adaptiveness and consistency is desired when a robot selects control laws to generate reactions to human movements. Learning algorithms are usually employed for the robot to predict human actions and then select appropriate reactions accordingly. Two popular classes of learning algorithms, the weighted majority algorithms and the online Winnow algorithms, are biased toward either strong adaptiveness or strong consistency. The dual expert algorithm (DEA), proposed in this chapter, achieves a tradeoff between adaptiveness and consistency. We give theoretical analysis to rigorously characterize the performance of the DEA. Both simulation results and experimental data confirm that the DEA enables a robot to learn the preferred reaction when passing a human in a hallway setting. The results may be generalized to other types of human–robot collaboration tasks.
Carol Young, Fumin Zhang
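As background for the adaptiveness–consistency tradeoff discussed above, the following is a minimal sketch of the classic weighted majority algorithm, not the DEA itself. The binary-prediction setting and the penalty factor `beta` are illustrative assumptions: a small `beta` discounts mistaken experts aggressively (adaptive), while `beta` near 1 changes the vote only slowly (consistent).

```python
import numpy as np

def weighted_majority(expert_preds, outcomes, beta=0.5):
    """Classic weighted majority over binary experts.

    expert_preds: array of shape (T, n_experts) with predictions in {0, 1}.
    outcomes:     array of length T with the true labels.
    Each expert's weight is multiplied by beta on every mistake.
    Returns the algorithm's predictions and the final weights.
    """
    n_experts = expert_preds.shape[1]
    w = np.ones(n_experts)
    preds = []
    for t, y in enumerate(outcomes):
        # weighted vote: predict 1 iff the weight on "1" is a majority
        vote = w @ expert_preds[t] >= w.sum() / 2
        preds.append(int(vote))
        w[expert_preds[t] != y] *= beta   # penalize wrong experts
    return np.array(preds), w

# Toy run: expert 0 is always correct, expert 1 always wrong.
preds_demo = np.array([[1, 0], [1, 0], [0, 1], [1, 0]])
outcomes_demo = np.array([1, 1, 0, 1])
p, w = weighted_majority(preds_demo, outcomes_demo, beta=0.5)
```

The wrong expert's weight decays geometrically, which is exactly the bias toward adaptiveness that the chapter's DEA is designed to balance against consistency.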

Chapter 7. Assistive Optimal Control-on-Request with Application in Standing Balance Therapy and Reinforcement

This chapter develops and applies a new control-on-request (COR) method to improve the capability of existing shared control interfaces. These COR enhanced interfaces allow users to request on-demand bursts of assistive computer control authority when manual/shared control tasks become too challenging. To enable the approach, we take advantage of the short duration of the desired control responses to derive an algebraic solution for the optimal switching control for differentiable nonlinear systems. Simulation studies show how COR interfaces present an opportunity for human–robot collaboration in standing balance therapy. In particular, we use the Robot Operating System (ROS) to show that optimal control-on-request achieves both therapy objectives of active patient participation and safety. Finally, we explore the potential of a COR interface as a vibrotactile feedback generator to dynamically reinforce standing balance through sensory augmentation.
Anastasia Mavrommati, Alex Ansari, Todd D. Murphey

Chapter 8. Intelligent Human–Robot Interaction Systems Using Reinforcement Learning and Neural Networks

In this chapter, an intelligent human–robot system with adjustable robot autonomy is presented to assist the human operator to perform a given task with minimum workload demands and optimal performance. The proposed control methodology consists of two feedback loops: an inner loop that makes the robot with unknown dynamics behave like a prescribed impedance model as perceived by the operator, and an outer loop that finds the optimal parameters of this model to adjust the robot’s dynamics to the operator skills and minimize the tracking error. A nonlinear robust controller using neural networks is used in the inner loop to make the nonlinear unknown robot dynamics behave like a prescribed impedance model. The problem of finding the optimal parameters of the prescribed impedance model is formulated as an optimal control problem in the outer loop. The objective is to minimize the human effort and optimize the closed-loop behavior of the human–machine system for a given task. This design must take into account the unknown human dynamics as well as the desired overall performance of the human–robot system, which depends on the task. To obviate the requirement of the knowledge of the human model, reinforcement learning is used to learn the solution to the given optimal control problem online in real time.
Hamidreza Modares, Isura Ranatunga, Bakur AlQaudi, Frank L. Lewis, Dan O. Popa

Chapter 9. Regret-Based Allocation of Autonomy in Shared Visual Detection for Human–Robot Collaborative Assembly in Manufacturing

Appropriate human–robot collaboration (HRC) in manufacturing assembly will enable more flexible assembly. During the collaboration, the robot needs to make decisions on various issues, such as detecting correct assembly parts and correct assembly styles to ensure quality, and detecting obstacles to ensure safety. The robot’s decisions may not be reliable due to limitations of the detection systems and disturbances. Human intervention is then necessary, though too much human involvement will increase human workload. Hence, allocation of autonomy through switching between autonomous and manual vision modes is reasonable. Bayesian sequential decision-making can be used to determine the optimal allocation of autonomous and manual modes, but this approach does not fit the human’s decision style, which may result in a lack of interest from the human in the collaboration. Human regret plays a critical role in decision-making under uncertainty in detection. In this case, regret-based suboptimal allocation of autonomous and manual modes is more humanlike and may ensure similar mental models between human and robot, and thus a better fit with human psychology, which will potentially improve assembly performance. In this chapter, we include regret in the Bayesian decision-making for the robot’s detection of correct assembly parts to provide a risk-based yet humanlike decision-making framework, which dynamically switches between autonomous and manual modes in the detection process. We then evaluate the effectiveness of the framework for HRC in an assembly task.
S. M. Mizanoor Rahman, Zhanrui Liao, Longsheng Jiang, Yue Wang

Chapter 10. Considering Human Behavior Uncertainty and Disagreements in Human–Robot Cooperative Manipulation

Physical cooperation between humans and robots has high potential impact in many critical application areas such as flexible manufacturing, mobility aids, rehabilitation, general service and medical robotics, education, and training. Among all robot control approaches for physical cooperation, goal-oriented robotic assistance based on human behavior models has demonstrated superior performance in terms of human effort minimization. However, disagreements between robot expectations and human intentions render undesired internal wrenches producing discomfort and safety risks to the human. In this chapter, we introduce an optimal control scheme adapting to both human behavior uncertainty and disagreements. First, we present a characterization of effective (motion-inducing) and internal (squeezing) force/torque components resulting from disagreements. Second, a risk-sensitive optimal control scheme anticipates human actions while adapting to both uncertainty and internal force/torque components. Results demonstrate superior performance in terms of both implicit and subjective measures in an experiment with human users.
José Ramón Medina, Tamara Lorenz, Sandra Hirche

Chapter 11. Designing the Robot Behavior for Safe Human–Robot Interactions

Recent advances in robotics suggest that human–robot interaction (HRI) is no longer a fantasy, but is happening in various fields such as industrial robots, autonomous vehicles, and medical robots. Human safety is one of the biggest concerns in HRI. As humans will respond to the robot’s movement, interactions need to be considered explicitly by the robot. A systematic approach to design the robot behavior toward safe HRI is discussed in this chapter. By modeling the interactions in a multiagent framework, the safety issues are understood as conflicts in the multiagent system. By mimicking humans’ social behavior, the robot’s behavior is constrained by the ‘no-collision’ social norm and the uncertainties it perceives in human motions. An efficient action is then found within the constraints. Both analysis and human-involved simulation verify the effectiveness of the method.
Changliu Liu, Masayoshi Tomizuka

Chapter 12. When Human Visual Performance Is Imperfect—How to Optimize the Collaboration Between One Human Operator and Multiple Field Robots

In this chapter, we consider a robotic field exploration and classification task where the field robots have limited communication with a remote human operator and constrained motion energy budgets. We then extend our previously proposed paradigm for human–robot collaboration (Cai and Mostofi, Proceedings of the American Control Conference, pp 440–446, 2015 [4]; Cai and Mostofi, Proceedings of Robotics: Science and Systems, 2016 [5]) to the case of multiple robots. In this paradigm, the robots predict human visual performance, which is not necessarily perfect, and optimize seeking help from humans accordingly [4, 5]. More specifically, given a probabilistic model of human visual performance from [4], in this chapter we show how multiple robots can properly optimize motion, sensing, and seeking help. We mathematically and numerically analyze the properties of the robots’ optimum decisions, in terms of when to ask humans for help, when to rely on their own judgment, and when to gather more information from the field. Our theoretical results shed light on the properties of the optimum solution. Moreover, simulation results demonstrate the efficacy of our proposed approach and confirm that it can save resources considerably.
Hong Cai, Yasamin Mostofi

Chapter 13. Human-Collaborative Schemes in the Motion Control of Single and Multiple Mobile Robots

In this chapter we show and compare several representative examples of human-collaborative schemes in the control of mobile robots, with a particular emphasis on the aerial robot case. We first provide a simplified yet descriptive model of the robot and its interactions. We then use this model to define a taxonomy that highlights the main aspects of these collaboration schemes, such as the physical domain of the robots, the degree of autonomy, the force interaction with the operator (e.g., unilateral versus bilateral haptic shared control), near-operation versus teleoperation, contact-free versus physically interactive situations, the use of onboard sensors, and the presence of a time horizon in the operator reference. We then specialize the proposed taxonomy to the multi-robot case, in which we further distinguish the methods depending on their level of centralization, the presence of leader–follower and formation control schemes, the ability to preserve graph-theoretical properties, and the ability to perform cooperative physical interaction. The common denominator of all the examples presented in this chapter is the presence of an operator in the control loop. The main goal of the chapter is to introduce the reader to, and provide a first-level analysis of, the several ways to effectively include human operators in the control of both single and multiple aerial robots and, by extension, of more generic mobile robots.
Antonio Franchi

Chapter 14. A Passivity-Based Approach to Human–Swarm Collaboration and Passivity Analysis of Human Operators

In this chapter, we present a passivity-based approach to a human–swarm collaboration problem. In the system, the operator is assumed to have a device to command a limited number of accessible robots, and to be in front of a monitor which displays certain information fed back from the robots. The intended control objective is then to render positions/velocities of the group of kinematic robots synchronized to desired references provided by a human operator, under distributed information exchanges among the robots and the operator. To this end, we first design a cooperative controller to be implemented on every robot and point out passivity of the collective robot dynamics. Inspired by this passivity property, we also determine the information visually fed back to the operator. Asymptotic position/velocity synchronization together with input–output stability for time-varying references is then demonstrated by assuming passivity of an appropriately defined human operator decision process. This human passivity assumption is also studied through experiments. It is observed that the passivity of the decision process in the position control mode may be violated depending on the network connection and individual characteristics. Hence, a passivation scheme is presented for the operator’s decision process and demonstrated for three different interconnection structures and five different trial subjects.
T. Hatanaka, N. Chopra, J. Yamauchi, M. Fujita

Chapter 15. Human–Swarm Interactions via Coverage of Time-Varying Densities

One of the main challenges in human–swarm interactions is the construction of suitable abstractions that make an entire robot team amenable to human control. For such abstractions to be useful, they need to scale gracefully as the number of robots increases. In this work, we consider the use of time-varying density functions to externally influence a robot swarm. Density functions abstract away the size of the robot team and describe instead the concentration of agents over the domain of interest. This allows a human operator to design densities so as to manipulate the robot swarm as a whole, instead of at the individual robot level. We discuss coverage of time-varying density functions as a mechanism to translate densities into robotic movement, and provide a series of control laws that guarantee optimal coverage by the robot team. Distributed approximations allow the solutions to scale with the size of the robot team. This renders coverage a viable choice of method for influencing a robot swarm. Finally, we provide a framework for the design of density functions that shape the swarm to achieve specified geometric configurations within the domain of interest. We show through robotic implementation in two different platforms the viability of human–swarm interactions with the proposed schemes.
Yancy Diaz-Mercado, Sung G. Lee, Magnus Egerstedt
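The mechanism described above, translating a density function into robot motion via coverage, can be illustrated with a minimal discrete-grid sketch of a Lloyd-style step toward density-weighted Voronoi centroids. This is a generic textbook scheme under assumed gain, time-step, and grid parameters, not the chapter's time-varying coverage controllers.

```python
import numpy as np

def lloyd_step(positions, density, xs, ys, gain=1.0, dt=0.1):
    """One discrete Lloyd update for coverage of a density.

    Each robot moves toward the density-weighted centroid of its Voronoi
    cell, i.e., the continuous-time law p_i' = gain * (c_i - p_i)
    evaluated on a finite grid of sample points.
    """
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    phi = density(pts)                                  # density at grid points
    d2 = ((pts[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    owner = d2.argmin(axis=1)                           # Voronoi assignment
    new = positions.copy()
    for i in range(len(positions)):
        cell = owner == i
        mass = phi[cell].sum()
        if mass > 0:
            c = (phi[cell][:, None] * pts[cell]).sum(0) / mass
            new[i] += dt * gain * (c - positions[i])    # move toward centroid
    return new

# Hypothetical example: two robots drawn toward a Gaussian density
# concentrated at the center of the unit square.
xs = np.linspace(0.0, 1.0, 30)
ys = np.linspace(0.0, 1.0, 30)
density = lambda p: np.exp(-20 * ((p[:, 0] - 0.5) ** 2 + (p[:, 1] - 0.5) ** 2))
P = np.array([[0.1, 0.1], [0.9, 0.9]])
P1 = lloyd_step(P, density, xs, ys)
```

Because the density, not the individual robots, is the operator's control handle, the same code runs unchanged for any team size, which is the scalability property the abstract emphasizes.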

Chapter 16. Co-design of Control and Scheduling for Human–Swarm Collaboration Systems Based on Mutual Trust

In this chapter, we investigate the collaboration between multi-human networks and swarm networks. We focus on planar motion of a network of mobile swarm clusters, where each swarm cluster has its own coordinates and leading agent. Swarm agents are coordinated to achieve a common goal that a single agent could not reach alone. More specifically, cooperative control laws are devised for the leader of each swarm cluster and the followers within each cluster to enable the swarm network to simultaneously reach navigation and collision-avoidance goals. However, it is highly unlikely that a large-scale swarm network can self-organize efficiently without any human intervention. Hence, a small number of human operators, far fewer than the swarm agents, are in the loop to collaborate with the swarm agents. Two modes are considered for the swarm motion control, i.e., manual mode and autonomous mode. To evaluate the effectiveness of the collaboration between swarms and human networks, we set up two unilateral trust models for a human–swarm collaboration system. We also introduce a novel measurement, called “fitness,” to pair each human with his/her swarm cluster. A dynamic scheduling method, called “minimum gap first,” is then proposed to schedule the collaboration of each human–swarm pair. Since there exists time delay during human and multi-swarm collaboration, we design the autonomous agent controller based on the time delay as a function of scheduling states. Our simulation results show that the proposed co-design of scheduling and control algorithms can guarantee effective real-time allocation of human resources and ensure acceptable swarm performance.
Xiaotian Wang, Yue Wang

