
Mechatronics

Volume 21, Issue 1, February 2011, Pages 272-284

Transferring human grasping synergies to a robot

https://doi.org/10.1016/j.mechatronics.2010.11.003

Abstract

In this paper, a system for transferring human grasping skills to a robot is presented. To reduce the dimensionality of the grasp postures, we extracted three synergies from data recorded in human grasping experiments and trained a neural network on the features of the objects and the coefficients of the synergies. The trained neural network was then employed to control robot grasping via an individually optimized mapping between the human hand and the robot hand. As force control was unavailable on our robot hand, we designed a simple strategy for the robot to grasp and hold objects by exploiting tactile feedback at the fingers. Experimental results demonstrated that the system can generalize the transferred skills to grasp new objects.

Introduction

Humans have remarkable motor skills and outperform most robots in a variety of complex motor tasks. By transferring human skills to robots, we may avoid a long and costly search when designing the robot controller. New skill learning in the robot can also be sped up if it builds on the transferred skills. To transfer a human skill to a robot, two problems have to be solved: (1) how to extract or model the human skill; (2) how to implement it on a robot whose configuration and sensory-motor systems are quite different from those of humans.

Different methods have been proposed for transferring human skills to robots in various tasks and scenarios. Cortesao and Koeppe [1] transferred human skill in the peg-in-hole insertion task to a robot. While a human performed this task, the forces, torques and velocities of the peg were recorded as a function of its pose. A neural network trained with these data was then used to control the robot in the same task. In another study [2], human expertise in a manipulative task was modeled as an associative mapping in a neural network and implemented on a direct-drive robot. Yang and Chen [3] represented human skills in tele-operation as a parametric model using hidden Markov models. The sensory-motor data representing human skills sometimes have unknown models and are redundant. To deal with this problem, Cortesao and Koeppe [1] proposed a sensor fusion paradigm composed of two independent modules: one performed optimized fusion by minimizing the noise power, and the other was a Kalman filter that estimated the unknown variables. Unlike the above-mentioned methods, which explicitly model human skills, the human-to-robot skill transfer framework proposed by Oztop et al. [4] exploited the plasticity of the body schema in the human brain. First, their system integrated a 16 DoF robotic hand into the experimenter’s body schema (i.e., the neural representation of his own body). The dexterity the experimenter then exhibited with this external limb, the robotic hand, was used to design the controller of the robot.

Another widely used mechanism for human-to-robot skill transfer is imitation, where a robot observes the execution of a task, acquires task knowledge, and then reproduces it. In [5], a robot imitated the grasping and placing performed by a human model. Skill transfer was realized when the robot learned goal-directed sequences of motor primitives during the imitation. In [6], a continuous hidden Markov model was trained with characteristic features of the perceived human movements and then used in a simulated robot to reproduce those movements. Rather than having robots observe or imitate human movements, a recent work [7] developed a communication language for transferring grasping skills from a nontechnical user to a robot during human–robot interaction.

Although these methods are effective in transferring skills to robots, in most cases the robots can only learn the demonstrated tasks, and generalizing to new tasks remains difficult.

While grasp synthesis is still a tough problem for robot hands (see [8] for a review), humans can grasp and manipulate various objects effortlessly. One challenge in robotic grasping is how to coordinate the many joints of the fingers to generate an appropriate grasp posture for a specific object. Humans and animals face the same problem in the motor control of their huge number of muscles. Selecting the appropriate muscle pattern to achieve a given goal is an extremely complex task due to the high dimensionality of the search space [9]. Recent research in biology suggests that, to deal with this dimensionality problem, animal motor controllers employ a modular organization based on synergies [9], [10]. A synergy refers to a subgroup of muscles or joints that are activated together in a stereotyped pattern [9], in contrast to the decoupled control of individual joints in many robots. d’Avella and Bizzi [9] recorded electromyographic activity from 13 muscles of the hind limb of intact and freely moving frogs, and used multidimensional factorization techniques to extract synergies, i.e., invariant amplitude and timing relationships among the muscle activations. They found that combinations of a small number of muscle synergies account for a large fraction of the variation in the muscle patterns observed during jumping, swimming, and walking [9].
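
The factorization step described by d’Avella and Bizzi can be illustrated with a short sketch. The following is a minimal example, not the authors’ code: it applies non-negative matrix factorization (one of the multidimensional factorization techniques suited to rectified EMG envelopes) to synthetic data. The 13-muscle layout and the choice of three synergies come from the text; everything else is an assumption for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in for smoothed, rectified EMG envelopes:
# 13 muscles x 500 time samples, non-negative by construction.
rng = np.random.default_rng(0)
true_synergies = rng.random((13, 3))   # 3 hidden muscle groupings
activations = rng.random((3, 500))     # their time-varying recruitment
emg = true_synergies @ activations + 0.01 * rng.random((13, 500))

# Factor the EMG into a small number of synergies and their activations.
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(emg)   # (13, 3): each column is a muscle synergy
H = model.components_          # (3, 500): activation of each synergy in time

# Fraction of the EMG variation captured by the 3-synergy reconstruction.
r2 = 1 - np.sum((emg - W @ H) ** 2) / np.sum((emg - emg.mean()) ** 2)
print(f"Variation accounted for by 3 synergies: {r2:.2%}")
```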

In the human hand specifically, synergies refer to the muscular and neurological coupling between the finger joints. Although the human hand has more than 20 degrees of freedom, two synergies that co-activate several fingers and joints have been shown to account for 84% of the variance in human grasping postures [11]. The major benefit of synergies is that the computations for motor control can be greatly simplified at the synergy level.

Synergies have been found not only in the hand postures of human grasping, but also in the reaching movements that precede grasping. Sabatini [12] identified neuromuscular synergies in natural movements of the human arm by applying factor analysis to the smoothed, rectified electromyographic (EMG) signals of the arm muscles. Fod and Matarić [13] recorded human arm movements by fastening four Polhemus magnetic sensors to the upper arm, lower arm, wrist, and middle finger. The data were filtered and segmented, and principal component analysis was applied to the segments. The eigenvectors corresponding to a few of the largest eigenvalues provided a basis set of primitives, which could be used, through superposition and sequencing, to reconstruct both the training movements in the data and novel ones.

Synergies have also been incorporated into the controllers of some robots. Gullapalli et al. [14] designed an intelligent control architecture that endowed a redundant manipulator with human-like capabilities. In this controller, motor synergies arose when the control of a subset of the available degrees of freedom was coupled and coordinated. Rosenstein et al. [15] used trial-and-error learning to evolve synergies in a 3-link robotic manipulator for weightlifting tasks; the robot improved its weightlifting performance as individual joints became actively coupled at the level of synergies. Brown [16] mechanically hardwired postural synergies into the driving mechanism of a 17 DoF dexterous hand driven by only two motors.

The aim of this study is to transfer human grasping skills to a robot hand-arm system via the use of synergies. The skill-transfer scheme in this study has the following characteristics: (1) Transferring synergies (i.e., the basic building blocks of sensory-motor control in human grasping) to a robot may be more adaptive and flexible than robot imitation or direct modeling of human skills, because it allows the learned skills to generalize to grasping new objects. (2) Compared with direct skill transfer at the task level, transferring synergies is much simpler, as the number of synergies is very small compared with the large number of grasping tasks they can represent.

The transfer involves two stages. First, we extract the synergies from human grasping data and train a neural network with the data. Second, we design a novel optimization-based mapping method that maps the fingertip positions of the human hand to those of the robot hand.
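
As a rough illustration of the first stage, the sketch below trains a small feed-forward network to map four object features to three synergy coefficients and reconstructs a joint-space posture from them. The network size, the synthetic training data, and the 20-joint synergy basis are assumptions for illustration, not the configuration used in our experiments.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic stand-in for the recorded grasps: inputs are object features
# (length, width, height, pose), targets are synergy coefficients.
features = rng.random((60, 4))
coeffs = features @ rng.standard_normal((4, 3))  # assumed ground truth

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(features, coeffs)

# For a new object: predict synergy coefficients, then reconstruct the
# grasp posture from a synergy basis S and the mean posture.
S = rng.standard_normal((20, 3))  # placeholder basis (20 hand joints)
mean_posture = np.zeros(20)
c = net.predict(rng.random((1, 4)))[0]
posture = mean_posture + S @ c    # posture = mean + sum_i c_i * synergy_i
```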

The rest of the paper is organized as follows. Section 2 describes the extraction of synergies from human grasping data. Section 3 addresses the issues in mapping the human hand to our robot hand. Section 4 solves the inverse kinematics problem for our robot arm using neural networks. Section 5 describes the control strategy we designed for the grasping and holding of objects with the robot hand. Section 6 presents the experimental results on the robot, and the last section concludes this paper.

Section snippets

Extracting synergies from data on human grasping

In our experiments, the subject uses two or three fingers (thumb, index, middle) to make 60 grasps of the objects shown in Fig. 1A. The positions of the hand joints in the grasping postures are recorded with a Shapehand data glove. The position and orientation of the wrist in a fixed world frame are recorded with a Polhemus Patriot magnetic sensor. The Polhemus Patriot system is widely used for 3D motion tracking [17], [18], [19]. It includes a magnetic sensor that is fixed on the wrist of the …
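
A minimal sketch of the extraction step, assuming the glove recordings are arranged as one joint-angle vector per grasp; the 20-joint dimension and the random placeholder data are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder for the recorded data: 60 grasps, each a vector of
# hand-joint angles from the data glove (20 joints assumed here).
rng = np.random.default_rng(2)
postures = rng.standard_normal((60, 20))

# Extract three synergies as the leading principal components.
pca = PCA(n_components=3)
coeffs = pca.fit_transform(postures)  # (60, 3) synergy coefficients
synergies = pca.components_           # (3, 20) posture basis vectors
print("cumulative variance explained:", pca.explained_variance_ratio_.cumsum())

# Any grasp posture is then approximated in the low-dimensional
# synergy space as mean posture + coefficients @ synergies.
reconstructed = pca.mean_ + coeffs @ synergies
```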

Related methods

Before applying the trained neural network on the robot hand, we have to build a map between the robot and human hands. Early works on mapping between human and robot hands were in the area of tele-operation, where a hand master or a data glove operated by a human hand controls a multi-fingered robot hand. As reviewed in [20], three kinds of mapping methods have been developed for tele-operation: (1) linear joint angle mapping [21]; (2) pose mapping [22]; and (3) fingertip position mapping …
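
The snippet above is cut off before our own method is presented; as a generic illustration of method (3), fingertip position mapping, the sketch below scales human fingertip positions into the robot hand's workspace with one scale factor per finger. The scale values are hypothetical tuning parameters, not the individually optimized ones developed later in the paper.

```python
import numpy as np

def map_fingertips(human_tips, scales):
    """Map human fingertip positions (palm frame) to robot fingertip
    targets by per-finger linear scaling.

    human_tips: (n_fingers, 3) fingertip positions in metres
    scales:     (n_fingers,) scale factors, e.g. found by optimization
    """
    return human_tips * scales[:, None]

# Thumb, index and middle fingertip positions in the palm frame.
human_tips = np.array([[0.05, 0.03, 0.02],
                       [0.09, 0.01, 0.00],
                       [0.10, -0.01, 0.00]])
scales = np.array([1.4, 1.2, 1.2])  # hypothetical per-finger factors
robot_tips = map_fingertips(human_tips, scales)
```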

Inverse kinematics of the robot arm

The output of the neural network in Fig. 4 gives the desired position and orientation of the wrist for the hand to grasp an object. The robot needs an inverse kinematics model to calculate the positions of its seven arm joints that bring the wrist to the desired position and orientation. In this section, we solve the inverse kinematics problem of the robot arm.
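
A hedged sketch of the general approach of learning inverse kinematics with a neural network: sample joint configurations, compute the resulting wrist poses with a forward-kinematics model, and fit a regressor from pose back to joints. The planar 2-link arm below is a stand-in for the real 7-joint arm and only illustrates the fitting procedure; the sampling range is restricted so the inverse map is single-valued.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in forward kinematics: a planar 2-link arm with link lengths
# L1, L2 (the real system has seven joints and a 6D wrist pose).
L1, L2 = 0.30, 0.25

def forward(q):
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, y])

# Sample joint space (elbow kept positive so pose -> joints is unique),
# compute wrist positions, and train a network to invert the map.
rng = np.random.default_rng(3)
q = rng.uniform([0.0, 0.1], [np.pi / 2, np.pi - 0.1], size=(5000, 2))
ik_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                      random_state=0).fit(forward(q), q)

# Query the learned inverse model for a reachable target position.
target = np.array([[0.35, 0.20]])
q_pred = ik_net.predict(target)
print("reached:", forward(q_pred), "target:", target)
```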

Grasp control strategy

We assume the position and the four features of the object shown in Fig. 4 are known to the robot. For an object to be grasped, the four features are input to the neural network shown in Fig. 4. The desired position and orientation of the wrist and the grasp posture are first computed from the outputs of the neural network, and then mapped to the robot hand with the mapping method described above. The task of the robot is then to control its arm to reach the desired position and …
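
The snippet is truncated before the control loop itself is given. As a rough sketch of what a grasp-and-hold strategy driven only by tactile thresholds can look like when force control is unavailable, the function below closes each finger in small position increments until its fingertip sensor reports contact. The callback interface, thresholds, and step sizes are hypothetical, not our robot's API.

```python
import numpy as np

def grasp_and_hold(read_tactile, close_finger, n_fingers=3,
                   contact_threshold=0.5, step=0.02, max_steps=200):
    """Close each finger in small increments until its tactile sensor
    reports contact, then hold that finger's position.

    read_tactile(i) -> float : tactile reading at fingertip i (assumed)
    close_finger(i, step)    : advance finger i by a small joint step
    """
    in_contact = np.zeros(n_fingers, dtype=bool)
    for _ in range(max_steps):
        for i in range(n_fingers):
            if not in_contact[i]:
                if read_tactile(i) >= contact_threshold:
                    in_contact[i] = True   # contact: stop commanding it
                else:
                    close_finger(i, step)
        if in_contact.all():
            return True   # every fingertip in contact: object held
    return False          # gave up before all fingers made contact
```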

Experimental results on the robot

To further illustrate the grasp control strategy described above, we take one grasp experiment as an example. The experiment is done in the following five steps:

(1) The location and the four features (i.e., length, width, height and pose) of the object are measured manually (see the object on the table in Fig. 15A). In the future, this information will be obtained automatically and online by a vision system.

(2) These features are input to the neural networks shown in Fig. 4 to get the coefficients of …

Conclusion

We have designed a system that transfers human grasping skills to a robot hand. The human grasping postures are extracted from our human grasping experiments and described in the synergy space. The low dimensionality of the synergy space has facilitated the human-to-robot skill transfer. To overcome the problem caused by the large differences between the configuration spaces of the human hand and the robot hand, we have proposed a mapping method that optimizes the posture mapping individually.

Acknowledgements

We are grateful for support through the REVERB project, EPSRC Grant EP/C516303/1 and the ROSSI Project, EC-FP7, ICT – 216125.

References (27)

  • W. Erlhagen et al. Goal-directed imitation for robots: a bio-inspired approach to action understanding and skill learning. Robot Auton Syst (2006).
  • M.T. Rosenstein et al. Learning at the level of synergies for a robot weightlifter. Robot Auton Syst (2006).
  • R. Cortesao et al. Sensor fusion for human–robot skill transfer systems. Adv Robot (2000).
  • S. Liu et al. Transferring manipulative skills to robots: representation and acquisition of tool manipulative skills using a process dynamics model. J Dyn Syst Measure Control (1992).
  • Yang JX, Chen Y. Hidden Markov model approach to skill learning and its application in telerobotics. In: Proceedings of...
  • Oztop E, Lin LH, Kawato M, Cheng G. Dexterous skills transfer by extending human body schema to a robotic hand. In:...
  • Asfour T, Azad P, Gyarfas F, Dillmann R. Imitation learning of dual-arm manipulation tasks in humanoid robots. In:...
  • M. Ralph et al. Toward a natural language interface for transferring grasping skills to robots. IEEE Trans Robot (2008).
  • Bicchi A, Kumar V. Robotic grasping and contact: a review. In: Proceedings of IEEE international conference on robotics...
  • A. d’Avella et al. Shared and specific muscle synergies in natural motor behaviors. Proc Natl Acad Sci (2005).
  • M.A. Daley et al. Running over rough terrain: guinea fowl maintain dynamic stability despite a large unexpected change in substrate height. J Exp Biol (2006).
  • M. Santello et al. Postural synergies for tool use. J Neurosci (1998).
  • A.M. Sabatini. Identification of neuromuscular synergies in natural upper-arm movements. Biol Cybern (2002).