
1990 | Book

Dextrous Robot Hands

Edited by: Subramanian T. Venkataraman, Thea Iberall

Publisher: Springer New York


About this book

Manipulation using dextrous robot hands has been an exciting yet frustrating research topic for the last several years. While significant progress has occurred in the design, construction, and low-level control of robotic hands, researchers are up against fundamental problems in developing algorithms for real-time computation in multi-sensory processing and motor control. The aim of this book is to explore parallels in sensorimotor integration between dextrous robot and human hands, addressing the basic question of how the next generation of dextrous hands should evolve. By bringing together experimental psychologists, kinesiologists, computer scientists, electrical engineers, and mechanical engineers, the book covers topics that range from human hand usage in prehension and exploration, to the design and use of robotic sensors and multi-fingered hands, to control and computational architectures for dextrous hand usage.

While the ultimate goal of capturing human hand versatility remains elusive, this book makes an important contribution to the design and control of future dextrous robot hands through a simple underlying message: a topic as complex as dextrous manipulation is best addressed by collaborative, interdisciplinary research that combines high-level and low-level views, draws parallels between human studies and analytic approaches, and integrates sensory data with motor commands. As seen in this text, progress has already been made through the establishment of such collaborative efforts. The future will live up to expectations only as researchers become aware of advances in parallel fields and as a common vocabulary emerges from integrated perspectives on manipulation.

Table of Contents

Frontmatter

Lessons Learned from Human Hand Studies

Frontmatter
1. Human Grasp Choice and Robotic Grasp Analysis
Abstract
In studying grasping and manipulation we find two very different approaches to the subject: knowledge-based approaches based primarily on empirical studies of human grasping and manipulation, and analytical approaches based primarily on physical models of the manipulation process. This chapter begins with a review of studies of human grasping, in particular our development of a grasp taxonomy and an expert system for predicting human grasp choice. These studies show how object geometry and task requirements (as well as hand capabilities and tactile sensing) combine to dictate grasp choice. We then consider analytic models of grasping and manipulation with robotic hands. To keep the mathematics tractable, these models require numerous simplifications which restrict their generality. Despite their differences, the two approaches can be correlated. This provides insight into why people grasp and manipulate objects as they do, and suggests different approaches for robotic grasp and manipulation planning. The results also bear upon issues such as object representation and hand design.
Mark R. Cutkosky, Robert D. Howe
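To give a flavor of the knowledge-based side of this comparison, the sketch below encodes a few rules in the spirit of a grasp-choice expert system: coarse object geometry and task demands select among a handful of taxonomy grasps. The rules, grasp labels, and thresholds are invented for illustration and are far coarser than the taxonomy developed in the chapter.

    # Illustrative only: a toy rule base in the spirit of a grasp-choice expert
    # system. Grasp names follow common taxonomy usage; thresholds are invented.

    def choose_grasp(object_width_mm, object_weight_g, needs_precision):
        """Pick a grasp class from coarse object geometry and task demands."""
        if needs_precision and object_width_mm < 15 and object_weight_g < 200:
            return "precision pinch (thumb-index pad opposition)"
        if needs_precision and object_width_mm < 60:
            return "tripod / multi-finger precision grasp"
        if object_width_mm < 50 and object_weight_g < 500:
            return "lateral pinch"
        return "power grasp (palm opposition, fingers wrapped)"

    if __name__ == "__main__":
        print(choose_grasp(8, 50, needs_precision=True))     # small, light, precise task
        print(choose_grasp(70, 900, needs_precision=False))  # large, heavy object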
2. Opposition Space and Human Prehension
Abstract
A problem that has plagued both motor psychologists in studying human behavior, and robot designers in reproducing it, is how to quantify that behavior. It has been argued that kinematic and dynamic models are inadequate for explaining human movement unless they also include both the performance constraints and the objectives that affect the neural and neuromuscular inputs. With a dextrous, multi-fingered hand, multiple grasping solutions are possible. This chapter addresses the question faced by the controller, that of how best to use features of the hand to achieve the task goals, given anticipated object properties and predictable interaction outcomes. The opposition space model takes into account the hand’s ability to apply task-related forces while gathering sensory information, in terms of its precision and power capabilities. By separating implementation details from functional goals, the study of human hand functionality can lead to the design of better dextrous robot hands and their controllers.
Thea Iberall, Christine L. MacKenzie
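The opposition space idea can be made concrete with a small data sketch. The three opposition types below (pad, palm, side) follow the chapter's framework; the force and precision numbers attached to them here are placeholder assumptions, meant only to show how a controller might trade power against precision when choosing how to use the hand for a task.

    # A minimal sketch of opposition types as a controller might represent them.
    # The opposition names follow the chapter; the numbers are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Opposition:
        name: str
        max_force_n: float   # rough force capability (assumed)
        precision: float     # 0..1, higher = finer control (assumed)

    PAD  = Opposition("pad opposition (finger pads vs. thumb pad)",  20.0, 0.9)
    PALM = Opposition("palm opposition (fingers vs. palm)",         120.0, 0.3)
    SIDE = Opposition("side opposition (thumb vs. finger side)",     40.0, 0.6)

    def select_opposition(required_force_n, required_precision):
        """Choose an opposition that meets the task's force and precision needs."""
        candidates = [o for o in (PAD, PALM, SIDE)
                      if o.max_force_n >= required_force_n
                      and o.precision >= required_precision]
        # Prefer the most precise feasible opposition.
        return max(candidates, key=lambda o: o.precision) if candidates else None

    print(select_opposition(required_force_n=10, required_precision=0.8).name)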
3. Coordination in Normal and Prosthetic Reaching
Abstract
If you want to pick up a tumbler of water in order to drink from it, you will probably reach for it from the side with the forearm midway between pronation and supination (so that the palm of the hand is turned towards the body midline). What does the reaching movement comprise? At the end of an initial, rapid, distance-covering phase of movement that leaves your hand close to its target, thumb and fingers will have opened sufficiently to allow the tumbler to be encompassed. The second phase is made more slowly and leads up to contact of the hand with the tumbler. In this phase, coordination between transport of the hand by the arm and changes in the distance between thumb and finger (hand aperture) becomes critical if the impact of the collision on contact is to be minimized, especially if the tumbler is full to the brim!
Alan M. Wing
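The two-phase coordination described above can be sketched numerically. The snippet below produces a minimum-jerk transport profile and a toy aperture profile that peaks before contact and closes onto the object; the minimum-jerk form and the roughly 70-percent timing of peak aperture are common modelling conventions in the reach-to-grasp literature, and all parameter values here are illustrative assumptions rather than data from this chapter.

    # Toy model of reach-to-grasp coordination: hand transport follows a
    # minimum-jerk profile, while aperture opens past the object size and then
    # closes before contact. Parameter values are illustrative assumptions.

    import numpy as np

    def minimum_jerk(distance, t):
        """Minimum-jerk position profile over normalized time t in [0, 1]."""
        return distance * (10 * t**3 - 15 * t**4 + 6 * t**5)

    def aperture(object_width, t, peak_time=0.7, overshoot=1.5):
        """Aperture peaks at overshoot * object_width at peak_time, then closes."""
        return np.where(
            t <= peak_time,
            overshoot * object_width * np.sin(0.5 * np.pi * t / peak_time),
            object_width + (overshoot - 1.0) * object_width
            * np.cos(0.5 * np.pi * (t - peak_time) / (1 - peak_time)))

    t = np.linspace(0.0, 1.0, 11)
    print(np.round(minimum_jerk(300.0, t), 1))   # hand transport in mm
    print(np.round(aperture(70.0, t), 1))        # thumb-finger aperture in mm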
4. Intelligent Exploration by the Human Hand
Abstract
Humans are remarkably successful at identifying and learning about objects haptically. During the course of haptic object identification, purposive “exploratory procedures” are executed. A variety of factors control the course of haptic object exploration by constraining and influencing the selection of the next exploratory procedure to use. These factors include stored information about objects and their perceptual attributes, associations between exploratory procedures and attributes, and constraints arising from the nature of exploratory procedures themselves. A conceptual model describes such factors and their interactions in object processing.
Roberta L. Klatzky, Susan Lederman
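One of the factors named above, the association between exploratory procedures and object attributes, can be rendered as a small lookup. The pairings below follow the ones commonly reported in Lederman and Klatzky's studies; the selection logic is an illustrative simplification of the conceptual model, not an implementation of it.

    # Exploratory procedures (EPs) and the object attributes each is suited to,
    # following the standard pairings from Lederman and Klatzky's studies.
    # The selection logic is an illustrative simplification.

    EP_FOR_ATTRIBUTE = {
        "texture":      "lateral motion",
        "hardness":     "pressure",
        "temperature":  "static contact",
        "weight":       "unsupported holding",
        "global shape": "enclosure",
        "volume":       "enclosure",
        "exact shape":  "contour following",
    }

    def next_exploratory_procedure(unknown_attributes):
        """Pick the EP for the first attribute still unknown about the object."""
        for attribute in unknown_attributes:
            if attribute in EP_FOR_ATTRIBUTE:
                return attribute, EP_FOR_ATTRIBUTE[attribute]
        return None

    print(next_exploratory_procedure(["hardness", "exact shape"]))
    # -> ('hardness', 'pressure')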

Dextrous Hand Control Architectures

Frontmatter
5. A Task-Oriented Dextrous Manipulation Architecture
Abstract
Much of the previous work on dextrous manipulation has concerned itself primarily with hand construction, sensor development, servo-control or grasp planning. Efforts towards the development of an overall manipulation system have either been restricted to a particular set of implementations, or been far too general to be practical. In this chapter, we take the position that a dextrous hand can be studied most efficiently when embedded within the context of an application domain. The main focus of this work is to develop a control architecture for a flexible assembly cell equipped with dextrous hands.
The central idea is to use constraints from the description of the task to be executed to select an appropriate abstract model and physical robot hand-arm system with which to execute the task plan. With this idea as the goal, we describe how allocation can be performed on the basis of a task criterion for a given task plan. We also introduce a set of control models that can be allocated and interfaced to real hand-arm systems for the actual execution of task plans.
Subramanian T. Venkataraman, Damian M. Lyons
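A hedged sketch of the allocation idea follows: given a task description with a few constraint fields, pick a control model and a hand-arm system whose capabilities cover those constraints. The field names, capability numbers, and matching rule are hypothetical; the task criterion developed in the chapter is richer than this first-fit selection.

    # Illustrative allocation of a control model and a hand-arm system from
    # task constraints. All names and numbers are hypothetical.

    HAND_ARM_SYSTEMS = [
        {"name": "parallel-jaw gripper on arm A", "dof": 1,  "payload_kg": 5.0},
        {"name": "dextrous hand on arm B",        "dof": 16, "payload_kg": 1.5},
    ]

    CONTROL_MODELS = [
        {"name": "pure position control", "needs_force_sensing": False},
        {"name": "hybrid position/force", "needs_force_sensing": True},
    ]

    def allocate(task):
        """Select the first hand-arm system and control model satisfying the task."""
        system = next(s for s in HAND_ARM_SYSTEMS
                      if s["dof"] >= task["min_dof"]
                      and s["payload_kg"] >= task["payload_kg"])
        model = next(m for m in CONTROL_MODELS
                     if m["needs_force_sensing"] == task["contact_rich"])
        return system["name"], model["name"]

    print(allocate({"min_dof": 9, "payload_kg": 0.5, "contact_rich": True}))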
6. CONDOR: A Computational Architecture for Robots
Abstract
This chapter presents an overview of the CONDOR architecture, a fully implemented system that controls the Utah/MIT hand. The architecture is especially suited to complex robots that have a large number of joints and consequently demand powerful computer architectures to be controlled and utilized effectively. Using CONDOR as a representative example, this chapter introduces the hardware and software issues that are relevant to the design of real-time control systems for robot hands.
Sundar Narasimhan, David M. Siegel, John M. Hollerbach
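To make the real-time control burden concrete, here is a minimal fixed-rate joint servo skeleton of the kind such an architecture must run continuously for every joint of the hand. It is a generic sketch, not CONDOR code: the rate, gains, joint count, and I/O functions are placeholders.

    # Generic fixed-rate joint servo skeleton for a multi-joint hand.
    # Not CONDOR code: rates, gains, and the read/write functions are placeholders.

    import time

    RATE_HZ = 200                      # assumed servo rate
    KP, KD = 8.0, 0.2                  # assumed PD gains
    NUM_JOINTS = 16                    # e.g. a four-finger, four-joint hand

    def read_joint_angles():           # placeholder for sensor I/O
        return [0.0] * NUM_JOINTS

    def write_joint_torques(torques):  # placeholder for actuator I/O
        pass

    def servo_loop(desired_angles, cycles=1000):
        period = 1.0 / RATE_HZ
        previous_error = [0.0] * NUM_JOINTS
        for _ in range(cycles):
            start = time.monotonic()
            angles = read_joint_angles()
            error = [d - a for d, a in zip(desired_angles, angles)]
            torques = [KP * e + KD * (e - pe) * RATE_HZ
                       for e, pe in zip(error, previous_error)]
            write_joint_torques(torques)
            previous_error = error
            # Sleep away whatever is left of this cycle to hold the rate.
            time.sleep(max(0.0, period - (time.monotonic() - start)))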
7. Control Architecture for the Belgrade/USC Hand
Abstract
This chapter describes design and control features of a five-fingered anthropomorphic end-effector designed primarily for grasping tasks. Advantages and limitations of the design are discussed, and special emphasis is placed on its suitability for autonomous, non-numerical or reflex control of grasp. Following a discussion of its mechanical design, we present the controller and sensor features incorporated into the current finger model. A knowledge-based control of hand preshape (prior to grasping) is then outlined, and the hand’s suitability as a testbed for the study of human and robot hand motion control is discussed. The final section of this chapter describes future directions.
George A. Bekey, Rajko Tomovic, Ilija Zeljkovic

Lessons Learned from Dextrous Robot Hands

Frontmatter
8. Issues in Dextrous Robot Hands
Abstract
In this chapter, we study grasp planning and coordinated manipulation by a multifingered robot hand under various contact constraints. First, we formulate the hand kinematics and establish force/velocity transformation relations for a hand manipulation system. Then, we propose two quality measures for evaluating a grasp. These measures can be used to plan for good grasps. Finally, we derive a set of control laws for coordinated manipulation by a robot hand under either fixed points of contact or rolling contact. These control schemes all give decoupled error equations between the position loop and the internal grasp force loop.
Zexiang Li, Shankar Sastry
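The force/velocity relations and quality measures mentioned above rest on the grasp map G, which carries stacked fingertip contact forces into a net object wrench. The sketch below builds G for hard-finger contacts with forces expressed in the object frame and uses the smallest singular value of G as a simple quality indicator; this is a textbook construction under simplifying assumptions (no friction-cone constraints), not the chapter's exact formulation or its specific measures.

    # Grasp map for hard-finger contacts, with fingertip forces expressed in the
    # object frame; the smallest singular value of G serves as a crude quality
    # measure. Friction-cone constraints are ignored in this sketch.

    import numpy as np

    def skew(p):
        x, y, z = p
        return np.array([[0, -z,  y],
                         [z,  0, -x],
                         [-y, x,  0]])

    def grasp_map(contact_points):
        """Stack per-contact wrench bases: w = G @ f, f = stacked contact forces."""
        blocks = [np.vstack((np.eye(3), skew(p))) for p in contact_points]
        return np.hstack(blocks)                  # shape (6, 3 * num_contacts)

    def quality(contact_points):
        """Smallest singular value of G; 0 means some wrench cannot be resisted."""
        return np.linalg.svd(grasp_map(contact_points), compute_uv=False)[-1]

    contacts = [np.array([0.03, 0.0, 0.0]),
                np.array([-0.015, 0.026, 0.0]),
                np.array([-0.015, -0.026, 0.0])]  # three fingertips around a disc
    print(round(quality(contacts), 4))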
9. Analysis of Multi-fingered Grasping and Manipulation
Abstract
Grasping and manipulating forces for skillful manipulation of objects by multi-fingered robot hands are discussed in this chapter. First, a short discussion of grasping and manipulating forces for two-fingered hands with linear motion is given. Then, for three-fingered hands, a new representation of the internal force among the fingers is given. Based on this representation, the grasping force is defined as an internal force which satisfies the static friction constraint. The concept of grasp mode is also introduced. Manipulating force is then defined as a fingertip force which satisfies three conditions: (1) it produces the specified resultant force; (2) it is not in the direction opposite to the grasping force; (3) it does not contain any grasping force component. An algorithm for decomposing a given fingertip force into manipulating and grasping forces is presented.
Tsuneo Yoshikawa, Kiyoshi Nagai
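The flavor of such a decomposition can be illustrated with the standard pseudoinverse/null-space split: the manipulating component is the part of the fingertip force that produces the commanded resultant, while the internal (grasping) component lies in the null space of the grasp map and produces no net wrench. This is a generic sketch under that linear-algebra view only; the chapter's friction and grasp-mode conditions are not modelled here, and the matrix G below is a made-up planar example.

    # Split a fingertip force vector f into a manipulating part (produces the
    # commanded resultant) and an internal/grasping part (null space of the
    # grasp map: it squeezes the object without moving it). Friction-cone and
    # grasp-mode conditions from the chapter are omitted.

    import numpy as np

    def decompose(G, f):
        """Return (f_manipulating, f_grasping) with f = f_m + f_g and G @ f_g = 0."""
        f_manip = np.linalg.pinv(G) @ (G @ f)   # projection onto the row space of G
        f_grasp = f - f_manip                   # remainder is internal (null-space) force
        return f_manip, f_grasp

    # Two opposing fingertips in the plane; columns of f are (fx1, fy1, fx2, fy2)
    # and G maps them to the net planar force (fx, fy). Torque is ignored here.
    G = np.array([[1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0]])
    f = np.array([2.0, 0.5, -1.0, 0.5])         # commanded fingertip forces
    f_m, f_g = decompose(G, f)
    print(f_m)        # shares the resultant equally between the fingers
    print(f_g)        # pure squeeze: equal and opposite along x
    print(G @ f_g)    # ~ [0, 0]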
10. Tactile Sensing for Shape Interpretation
Abstract
Dextrous robot hands require intelligent sensing systems to manipulate unknown objects reliably in a complex environment. Tactile sensing and perception are needed to provide information on contact forces and local shape properties to ensure robust manipulation. This chapter considers the problem of inferring global object properties such as size, location and orientation from only sparse local geometric information at three fingers. Tactile sensors can provide local shape information, including surface normals, contact location on the finger, and principal curvatures and their directions. This chapter assumes that an object is a cone (not necessarily circular) but is otherwise unknown. Three contacts can in some cases determine the pose of an unknown cone, but in general, more are required. A grasping system could command finger forces to control an object’s position in a hand using this tactile information. The advantage of not using specific object models is that a system will be flexible enough to handle a wide range of parts without reprogramming, and will be robust to gross and small differences among objects.
Ronald S. Fearing
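One geometric constraint behind this kind of inference can be shown compactly: for a conical surface, every tangent plane passes through the apex, so each sensed contact point and surface normal contributes one linear equation on the apex location. Three well-spread contacts pin the apex down, while nearly parallel normals leave it poorly determined, echoing the abstract's point that more contacts are sometimes required. The sketch below illustrates this single constraint on synthetic data; it is not the chapter's full pose-determination procedure, and the test cone is invented.

    # For a cone, each contact (point p_i, unit normal n_i) gives the linear
    # constraint n_i . a = n_i . p_i on the apex a, because the apex lies in
    # every tangent plane. This illustrates one constraint only.

    import numpy as np

    def estimate_apex(points, normals):
        """Least-squares apex estimate from contact points and surface normals."""
        N = np.asarray(normals, dtype=float)            # rows: n_i
        b = np.einsum("ij,ij->i", N, np.asarray(points, dtype=float))
        apex, *_ = np.linalg.lstsq(N, b, rcond=None)
        return apex

    # Synthetic right circular cone with apex at (0, 0, 1), opening downward.
    apex_true = np.array([0.0, 0.0, 1.0])
    points, normals = [], []
    for ang in (0.0, 2.1, 4.2):                         # three contacts around the cone
        d = np.array([np.cos(ang), np.sin(ang), -1.0])  # ruling direction from apex
        n = np.array([np.cos(ang), np.sin(ang), 1.0])   # outward normal (45-degree cone)
        points.append(apex_true + 0.5 * d)
        normals.append(n / np.linalg.norm(n))
    print(np.round(estimate_apex(points, normals), 3))  # ~ [0, 0, 1]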
11. Tactile Sensing and Control for the Utah/MIT Hand
Abstract
Despite recent advances in robot hand design and tactile sensing technology, dextrous robot hands with a comprehensive sense of touch are rare. The few existing dextrous touch systems universally exhibit problems of reliability, sensory coverage, bandwidth, or included volume. In an effort to establish a design methodology for dextrous tactile systems, this chapter will explore the obstacles that must be overcome in order to reliably acquire data from a large number of sensors on a dextrous robot hand. This work emphasizes the often overlooked difficulties of collecting tactile data in a practical and reliable way. After reviewing fundamental features of tactile systems for dextrous hands, we will discuss the ways in which these features affect hand performance, and which methods of addressing tactile sensors are effective and which are not. The design of a robust sensory architecture will be described, and progress on the development of a tactile sensing system for the Utah/MIT Dextrous Hand will be reported.
Ian D. McCammon, Steve C. Jacobsen
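One of the acquisition problems raised here, reading many taxels over few wires, is commonly handled by row-column scanning of a sensor array: drive one row at a time and sample all columns, so an R x C array needs roughly R + C lines rather than R x C. The skeleton below shows that scan pattern in the abstract; the drive and read functions are placeholders, and the scheme is a generic one rather than the system described in the chapter.

    # Generic row-column scan of a tactile array. The drive/read functions are
    # placeholders, not hardware interfaces from the chapter.

    ROWS, COLS = 8, 8

    def drive_row(r):                 # placeholder: energize row r of the array
        pass

    def read_columns():               # placeholder: sample all column lines (ADC)
        return [0.0] * COLS

    def scan_array():
        """Return a ROWS x COLS frame of taxel readings from one full scan."""
        frame = []
        for r in range(ROWS):
            drive_row(r)
            frame.append(read_columns())
        return frame

    frame = scan_array()
    print(len(frame), len(frame[0]))  # 8 8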
12. A New Tactile Sensor Design based on Suspension-Shells
Abstract
Requirements of sensors for dextrous fingers have been discussed, and appropriate designs to satisfy these requirements have been proposed. Miniaturization of tactile sensor elements has also been considered. However, only a limited number of such sensors can be attached to the surface of a finger body. In addition, with present-day tactile sensing technology, it is difficult to make all of the surfaces sensitive or to handle all the signal lines efficiently. To solve these problems, a new tactile sensor devoid of blind sectors is proposed. The sensor uses a suspension shell covering the surface of the finger body. Tactile sense is obtained by detecting the relative displacement of the suspension shell with respect to the finger body.
Tokuji Okada

Panel Discussion

13. Panel Discussion
Subramanian T. Venkataraman, Thea Iberall
Backmatter
Metadata
Title
Dextrous Robot Hands
Edited by
Subramanian T. Venkataraman
Thea Iberall
Copyright Year
1990
Publisher
Springer New York
Electronic ISBN
978-1-4613-8974-3
Print ISBN
978-1-4613-8976-7
DOI
https://doi.org/10.1007/978-1-4613-8974-3