
1990 | Book

Traditional and Non-Traditional Robotic Sensors

Edited by: Thomas C. Henderson

Publisher: Springer Berlin Heidelberg

Book Series: NATO ASI Series


About this Book

This book contains the written record of the NATO Advanced Research Workshop on Traditional and Non-Traditional Robotic Sensors held at the Hotel Villa del Mare, Maratea, Italy, August 28 - September 1, 1989. This workshop was organized under the auspices of the NATO Special Program on Sensory Systems for Robotic Control. Professor Frans Groen from the University of Amsterdam and Dr. Gert Hirzinger from the German Aerospace Research Establishment (DLR) served as members of the organizing committee for this workshop. Research in the area of robotic sensors is necessary in order to support a wide range of applications, including industrial automation, space robotics, image analysis, microelectronics, and intelligent sensors. This workshop focused on the role of traditional and non-traditional sensors in robotics. In particular, the following three topics were explored:
  • sensor development and technology,
  • multisensor integration techniques,
  • application area requirements which motivate sensor development directions.
This workshop brought together experts from NATO countries to discuss recent developments in these three areas. Many new directions (or new directions on old problems) were proposed. Existing sensors should be pushed into new application domains such as medical robotics and space robotics.

Table of Contents

Frontmatter

Sensor Development

Fast Sensory Control of Robot Manipulators
Abstract
A positional deviation sensor for contact-free physical guidance of a manipulator is described. The manipulator is set to closely track a motion marker. Factors limiting the performance, and means for improving it, are pointed out. The result is a system that can be used for real-time training of spray-painting robots. The means are easily extended to general sensory control.
F. Dessen, J. G. Balchen
Force/Torque and Tactile Sensors for Sensor-Based Manipulator Control
Abstract
The autonomy of manipulators, in space as well as in industrial environments, can be dramatically enhanced by the use of force/torque and tactile sensors.
In the first part, the development and future use of a six-component force/torque sensor for the Hermes Robot Arm (HERA) Basic End-Effector (BEE) is discussed.
Further, a multifunctional gripper system based on tactile sensors is described. The basic transducing element of the sensor is a sheet of pressure-sensitive polymer. Tactile image processing algorithms for slip detection, object position estimation and object recognition are described.
H. Van Brussel, H. Beliën, Bao Chao-Ying
3D Range Imaging Sensors
Abstract
Generalized robotic applications involving vision systems necessitate the development of real-time recognition of three-dimensional (3-D) surfaces. Range imaging systems collect 3-D coordinate data from object surfaces and can be useful in a wide variety of robotic applications, including shape acquisition, bin picking, assembly, inspection, and robot navigation. Range imaging sensors for such systems are unique imaging devices in that the image data points (pixels) explicitly represent scene surface geometry in a sampled form.
At least five different fundamental physical principles have been used to obtain range images: (1) laser radar, (2) triangulation, (3) interferometry, (4) lens focusing, and (5) tactile sensing. One of these techniques, active laser range sensing, described in detail in this paper, measures surface geometry directly and thus avoids the extensive computations necessary for reconstruction of an approximate range map from multiple camera views or inference from reflectance information. In fact, for many robot vision applications, active scanners represent the only viable method for obtaining the scene information necessary for real time operation. This paper provides a brief overview of range imaging techniques with an emphasis on laser-based active sensing methods and demonstrated sensors that are likely to be used for robot control.
D. J. Conrad, R. E. Sampson
An Alternative Robotic Proboscis
Abstract
A simple, flexible robotic proboscis has been developed, based on the differential extension of three extendable tubes. Pneumatic pressure is used to create the tube extension, and control is achieved via a microcomputer and analogue proportional control valves. Motion and flexibility are very organic, and although the power-to-weight ratio is high, absolute payload and stiffness are low.
J. B. C. Davies
Dynamic Robot Vision
Abstract
In computer vision, efficient methods for detection and interpretation of the motion of objects have been developed. As technology advances, the ambition to include this ability in robot vision systems appears more and more realistic. However, for this ability to become of practical use, real-time performance (in some sense) is required, and the current possibilities for this are still limited.
Many different approaches to motion analysis have been proposed in the literature. Motion information may be derived from image analysis systems at different levels of the general scheme of image processing and interpretation. However, to achieve a result in terms of motion descriptions, most of these methods depend extensively on image preprocessing (and interpretation) or on integration into an image postprocessing (and interpretation) system.
A number of methods are reviewed and evaluated with regard to their dependency on supplementary processing and their current potential for real-time application. We also discuss their weaknesses due to problems of ambiguity and noise. However, one can take into account that real-time operation also means continuous operation, and thereby that a temporal context is provided. This allows concentration on changes, most of which are predictable, and both savings in computing and improved robustness to noise and ambiguities can be achieved.
In conclusion, we find that high-level token matching is currently one of the most promising approaches, and an experimental implementation is used to demonstrate a possible approach to motion analysis in real time.
This research has in part been sponsored by the Danish Technical Research Council, FTU grant 5.17.5.6.06.
Erik Granum, Henrik I. Christensen
2D and 3D Image Sensors
Abstract
The growing development of robots which do more and more complex work in unstructured environments makes a compact 2D and 3D vision system necessary. 3D vision is either very useful or indispensable for resolving some problems connected with autonomous movements. These systems are impeded by the absence of real 3D sensors collecting panoramic range data at medium distance (0 to 10 meters) in a large volume (up to 100 m³). We describe a number of solutions to this general problem. First, we describe a 3D laser triangulation system implemented for a mobile robot. This system is capable of panoramic vision over a full 360° range around the robot. Numerous sorts of range finders (optical and acoustical) are connected to a 2D machine vision system (AVISO ITMI SYSTEM) using various types of cameras (vidicon, CCD, PSD). In this case, we can use all the software packages developed for this 2D machine vision system (CAIMAN or CALIFE).
To add the intelligence needed for a reliable measurement system, a calibration procedure has been designed. This system could be tested in various fields of application:
  • arm transfer in the neighborhood of an object to be manipulated, trajectory planning, and adaptive positioning, which are typical tasks for an intelligent robot system;
  • absolute location of a mobile robot in a crowded environment;
  • audiovisual applications, embedding objects or actors in a 3D synthetic picture.
In some cases, an optoelectronic remote tracking and measuring system must be developed to track the location of a moving target (in three dimensions).
Serge Monchaud
Tactile Sensing Using an Optical Transduction Method
Abstract
This paper describes the characteristics of a very high resolution tactile sensor for robotics. The sensor uses a pressure intensity to light intensity transduction method that produces very detailed images of contact. A robot finger-mounted sensor incorporating this method has been built which produces a video output suitable for feeding into a conventional image processing system. The principle and design of the tactile sensor are explained, the main parameters affecting its performance are identified, and experiments using this sensor with an image processing system are described. An example application, using the sensor for grasp error compensation and for inspection, is reported on. Finally, some attributes of tactile sensing are discussed.
Howard R. Nicholls
Absolute Position Measurement Using Pseudo-Random Encoding
Abstract
Many domains in robotics would benefit greatly from an efficient digital absolute position measurement capability. Absolute shaft encoders are attractive for joint control applications, as their position is recovered immediately when power is restored after an outage and they do not accumulate errors as incremental transducers often do [1]. The ability to measure absolute position would also be a notable asset for automated guided vehicle (AGV) navigation. This is especially significant in situations where such vehicles have to avoid obstacles that may appear on their paths.
Emil M. Petriu
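To make the pseudo-random encoding idea above concrete, here is a minimal Python sketch of absolute position recovery from a maximal-length binary sequence, in which every n-bit window occurs exactly once per period, so a read head that sees n consecutive code bits knows its absolute position immediately after power-up. The register length and feedback taps below are illustrative assumptions, not parameters taken from the chapter.

    def lfsr_sequence(taps, n):
        # One period (2**n - 1 bits) of a maximal-length sequence from
        # an n-bit Fibonacci linear feedback shift register.
        state = [1] * n
        bits = []
        for _ in range(2 ** n - 1):
            bits.append(state[-1])
            feedback = 0
            for t in taps:
                feedback ^= state[t - 1]
            state = [feedback] + state[:-1]
        return bits

    # Hypothetical parameters: taps (5, 3) give a primitive polynomial,
    # hence a 31-bit track on which every 5-bit window is unique.
    N = 5
    track = lfsr_sequence(taps=(5, 3), n=N)

    # Precompute window -> absolute position (the track wraps around).
    position_of = {}
    for i in range(len(track)):
        window = tuple(track[(i + k) % len(track)] for k in range(N))
        position_of[window] = i

    # Reading any N consecutive code bits recovers absolute position,
    # with no homing run and no accumulated error.
    reading = tuple(track[7:7 + N])
    assert position_of[reading] == 7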
Silicon Tactile Image Sensors
Abstract
Tactile sensing is one of the three possible means through which a human being can perceive image information. Unlike the two others, visual and acoustic perception, tactile sensing requires physical contact with the objects to be perceived or recognized, the information being carried by dynamometric parameters such as force and torque.
P. P. L. Regtien
Multiresolutional Laser Radar
Abstract
The INV laser radar working group has spent considerable effort on the design and implementation of an advanced 3D sensor, envisaging medium and large scale applications in robotics and automation, especially real-time measurement of the shape, position, orientation and movement of three-dimensional objects, as well as environment perception, path finding and docking of robotic vehicles. Using laser pulses instead of microwave pulses, the operating principle is quite similar to that of a microwave radar. Realization of the projected laser radar, however, raises a number of very crucial points.
This paper describes the main difficulties encountered and presents elaborated solutions, such as an improved sensor head with so-called mirror optics, a new fiber optic concept for ultimate precision, time-windowing facilities for target selection, amplitude control, closed-loop real-time data processing for contour tracking and, finally, fast contour estimation procedures with special Kalman jump-filter algorithms. The INV laser radar is able to measure three coordinates, reflectivity and range rate of passive 3D surface points at a primary measuring rate of some 10,000 Hz. After data processing, the output data rate is reduced to some 100 Hz while achieving mm resolution.
Experimental results of the INV laser radar demonstrate extraordinary precision, the ability to find and identify simple 3D objects by measuring cross-sections, automatic 3D edge and contour tracking, as well as the process of taking a 3D picture of a small workpiece by raster scanning. Since the latter mode consumes too much time, it is more efficient to first analyze the 3D scene by means of a 2D vision system. Thereafter the laser radar is directed only to the interesting edges and structures in order to add the corresponding depth information.
Rudolf Schwarte
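The radar-like operating principle the abstract mentions reduces to a one-line range equation. The short sketch below, with illustrative numbers only, also shows why millimetre resolution with pulsed time-of-flight is demanding, and hence why a high primary measuring rate (some 10,000 Hz, averaged down to some 100 Hz) is attractive.

    C = 299_792_458.0  # speed of light in m/s

    def range_from_round_trip(delta_t_s):
        # Target range from the round-trip time of a laser pulse.
        return C * delta_t_s / 2.0

    # A 20 ns round trip corresponds to roughly 3 m of range.
    print(range_from_round_trip(20e-9))  # ~2.998 m

    # 1 mm of range resolution requires resolving the round-trip time
    # to about 6.7 picoseconds, hence the appeal of averaging many
    # pulses before output.
    print(2 * 1e-3 / C)  # ~6.7e-12 s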
Olfaction: Metal Oxide Semiconductor Gas Sensors and Neural Networks
Abstract
Primitive mobile life forms, and the mobile cells of higher animals, derive their motivation and direct their navigation by chemical senses. In the simplest cases these creatures are hard-wired to swim toward nutrient concentration gradients, and to swim against irritant concentration gradients. The vestiges in humans of the sometimes extremely sensitive, selective, and differential chemical senses of primitive forms are taste, our ability to detect and identify four classes of chemicals in water solution on our tongues, and smell or olfaction, our ability to detect and identify many gases, vapors, and complex mixtures in the air passing through our noses.
M. W. Siegel
Development Environment for Tactile Systems
Abstract
A development environment for robotic sensor systems is described. The new system, currently under development and designed from a user perspective, includes: a graphical language; macro commands with development facilities; tools for CAD-supported sensor simulation; and debugging facilities, both for output (display) and, using “browsers”, for inspection of internal representation structures. Some examples using this environment are provided, as well as a perspective on future work.
Adolfo Steiger Garção, Fernando Moura-Pires
Video-speed Triangulation Range Imaging
Abstract
When a light plane is projected on a scene, the resulting profile line can be recorded by a camera system looking from the side. This technique for range imaging with a 1-D or 2-D video camera is well known. It takes the recording of an image line (say 256 pixels) to establish the position of one profile point. If a 1-D video camera (line camera) is replaced by a Position Sensitive photo Detector (PSD), the position is found from only two current values, simultaneously measured. A speed gain of 256 is thus achieved. However, like the line camera, the PSD is a 1-D detector that needs a scanning device to measure the different parts of the profile line.
We want the speed gain without the scanning. Therefore, we propose to replace the 2-D video camera by a novel type of line profile detector: an array of PSDs that allows recording of a profile in a video line time (64 µs).
To realize a video-speed range camera, a PSD-array chip has been constructed. On the 10 × 10 mm² chip, a row of 96 single PSDs has been implemented, to be read out externally. In the experimental configuration a fan beam strikes the surface of a test object (cross section 100 × 30 mm²). With a 50 mm camera lens the illuminated profile of the surface is imaged on the array sensor. The arrangement used is of the triangulation type: the height z at a lateral position xi on the object is sharply imaged at position z’ on the corresponding single PSD numbered “i”. The position z’, and thus the height z, follow directly from the two currents of the PSD.
Using a 2 MHz clock and a multiplexing system, all 96 PSDs can be read out in less than a video line time. By moving the object relative to the fan beam (either by a conveyor belt or by a scanning mirror), a real-time video image of 256 range profiles is formed.
In this paper preliminary results are given, based on the read-out of a subset of eight PSDs of the array sensor.
P. W. Verbeek, J. Nobel, G. K. Steenvoorden, M. Stuivinga
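A minimal sketch of the two-current PSD read-out described above, together with an idealized first-order triangulation conversion. The magnification and triangulation angle are illustrative assumptions, not the parameters of the 96-element chip.

    import math

    def psd_spot_position(i1, i2, length):
        # Spot position along a 1-D PSD of active length `length`,
        # measured from its centre, from the two terminal photocurrents.
        # Two current samples replace a ~256-pixel line read-out, which
        # is the factor-256 speed gain cited in the abstract.
        return 0.5 * length * (i2 - i1) / (i1 + i2)

    def height_from_spot(dz_image, magnification, tri_angle_rad):
        # First-order triangulation: an image-plane spot displacement z'
        # maps to object height z for a lens magnification m and an angle
        # tri_angle_rad between projection and viewing directions.
        return dz_image / (magnification * math.sin(tri_angle_rad))

    # Equal currents place the spot at the PSD centre.
    assert psd_spot_position(1.0, 1.0, length=10.0) == 0.0

    # Example: a 0.1 mm spot shift at m = 0.5 and a 30 degree
    # triangulation angle corresponds to 0.4 mm of object height.
    print(height_from_spot(0.1, 0.5, math.radians(30.0)))  # 0.4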
Image Sensors for Real-Time 3D Acquisition
Part 1: Three-Dimensional Image Acquisition
Abstract
Development of depth-sensing systems has been influenced to a large degree by discoveries from perceptual psychology. The Human Visual System (HVS) provides one with information about the shape and spatial relationships between objects in an observed scene, even while the scene is changing. The ability of the HVS to function correctly is largely due to a built-in redundancy of operations: several mechanisms are brought into play, each giving an interpretation of the scene, and these are combined to give the most plausible global description of the scene. This process can of course yield inconsistent interpretations, which we call optical illusions.
P. Vuylsteke, C. B. Price, A. Oosterlinck
Image Sensors for Real-Time 3D Acquisition
Part 2: Back Shape Measurement for Evaluating Scoliosis Using a Single Binary Encoded Light Pattern
Abstract
In this paper an acquisition system is described that allows optical, contact-free three-dimensional modelling of the human back. Surface curvatures representing local shape are used to calculate a median profile on the surface of the back. Using this surface information, a three-dimensional reconstruction of the vertebral column inside the body is carried out. Clinically relevant parameters are extracted from the obtained shape of the spine, making comparison with radiological findings possible.
M. De Groof, P. Suetens, G. Marchal, A. Oosterlinck

Multisensor Integration

Active Sensing with a Dextrous Robotic Hand
Abstract
What separates sensing with a hand from other more passive sensing modalities such as vision is its active nature. This paper describes our efforts in building a useful active hand sensing environment that can be used for a number of different tasks including intelligent grasping, manipulation, and haptic object recognition. We outline the system we have built to control the hand/arm system, discuss the tactile sensors we have mounted on the hand’s fingers, and elaborate on some exploratory procedures (EPs) we have implemented to allow the hand to do active tactile sensing for object recognition tasks.
Peter K. Allen
Sensor Integration Using State Estimators
Abstract
Means for incorporating very different types of sensors into one single unit are described. Accumulated data are represented using an updatable dynamic model, a Kalman filter. The scheme easily handles common phenomena such as skewed sampling, finite-resolution measurements and information delays. Included is an example where 3D motion information is collected by one or more vision sensors.
J. G. Balchen, F. Dessen, G. Skofteland
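The fusion scheme the abstract describes can be illustrated with a scalar state estimator: each sensor contributes measurements with its own noise level, and a single Kalman filter carries the accumulated information. The noise values below are illustrative assumptions, not taken from the chapter.

    # Scalar Kalman filter fusing two unlike sensors into one estimate.
    x, p = 0.0, 1.0        # state estimate and its variance
    q = 0.01               # process noise added per time step

    def predict(x, p):
        return x, p + q    # constant-state model; uncertainty grows

    def update(x, p, z, r):
        k = p / (p + r)    # Kalman gain for a direct measurement
        return x + k * (z - x), (1.0 - k) * p

    # Interleave a coarse sensor (variance 1.0) and a fine sensor
    # (variance 0.05); each is weighted by its own uncertainty.
    for z, r in [(0.9, 1.0), (1.1, 0.05), (1.0, 0.05)]:
        x, p = predict(x, p)
        x, p = update(x, p, z, r)
    print(x, p)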
Multisensory Telerobotic Techniques
Abstract
The paper outlines the telerobotic concepts as presently developed for a small multisensory robot to fly with the next Spacelab mission, D2; the robot is supposed to work in an autonomous mode, teleoperated by astronauts, and teleoperated from ground. Its key feature is a recently developed multisensory gripper with highly integrated, miniaturized sensor technology, including stiff and compliant six-axis force-torque sensing, 9 laser range finders (one of them realized as a rotating laser scanner), tactile arrays, grasp force control and a stereo camera pair. Perfect modularity in hardware and software, with all preprocessing electronics realized in the gripper, was one of the major design goals. This multisensory information is a key issue when teleoperating the robot from ground. Sensory simulation on ground computers using advanced stereo graphics is supposed to predict the sensor-based path refinement as induced by the real sensors on board. A particularly interesting situation occurs in the experiment “grasping a floating object from ground”, with overall delays of more than 4 seconds. Predictive simulation using real-time fusion of stereo images and laser-scan information is the challenging technique envisioned here.
J. Dietrich, G. Hirzinger, J. Heindl, J. Schott
Autochthonous Behaviors — Mapping Perception to Action
Abstract
In this paper we describe an approach to high-level multisensor integration organized around certain egocentric behaviors. The task itself determines the sequence of sensing, the sensors used, and the responses to the sensed data. This leads to the encapsulation of robot behavior in terms of logical sensors and logical actuators. A description of this approach is given as well as some examples for dextrous manipulation and mobile robots.
Rod Grupen, Thomas C. Henderson
Interpreting 3D Lines
Abstract
Computational methods to interpret straight-line correspondences in images necessitate the observation of a large number of lines, and introduce systems of equations whose numerical solution is often ill-behaved. Man-made environments, however, contain many special configurations such as parallel lines, perpendicular lines, and known angular configurations in general; if the existence of such configurations can be ascertained from image data, the problem of 3D interpretation becomes drastically simpler. Moreover, if part of the environment is interpreted, this interpretation can be propagated to other parts of the environment. We propose a consistent labelling formulation which allows the interpretation of general configurations of lines by taking into account the occurrence of special configurations and using propagation.
Amar Mitiche, Robert Laganière
Sensor Data Integration for the Control of an Autonomous Robot
Abstract
The Institute for Realtime Computer Systems and Robotics of the University of Karlsruhe is currently developing the autonomous mobile assembly robot KAMRO. The system consists of a mobile platform with an omnidirectional wheel drive and a superstructure on which two assembly robots are mounted. The sensor system of KAMRO is divided into three functional parts which support the assembly of a product by two arms, the docking of the vehicle and the autonomous navigation. To solve the three tasks, the use of a multisensor system is necessary, combining cameras, distance, approach and tactile sensors. The concept of elementary operations forms the framework for the integration of the sensors into the control system of the mobile robot. In this paper the sensor data processing and various elementary operations of the autonomous mobile assembly robot are described in more detail.
Jörg Raczkowsky, Ulrich Rembold
Cooperation of the Inertial and Visual Systems
Abstract
This paper introduces a number of issues concerning the use of an inertial system in cooperation with vision. We first present applications of inertial information in a visual system, and then attack the problem of determining motion and orientation of the robotic system from inertial information. An iterative algorithm is finally given, and studied in detail.
Thierry Viéville, Olivier D. Faugeras
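A small sketch of the inertial side of the problem: integrating a rate gyro yields orientation, but any uncorrected bias accumulates, which is one motivation for pairing the inertial system with vision. The bias and timing values are illustrative assumptions.

    def integrate_heading(rates, dt, bias=0.0):
        # Integrate angular-rate samples (rad/s) into a heading (rad),
        # subtracting a known bias estimate.
        heading = 0.0
        for w in rates:
            heading += (w - bias) * dt
        return heading

    # A stationary gyro with an uncorrected 0.01 rad/s bias accumulates
    # about 0.6 rad of heading error over one minute.
    dt = 0.01
    rates = [0.01] * 6000                # 60 s of samples at 100 Hz
    print(integrate_heading(rates, dt))  # ~0.6 rad of drift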
A Sensor Processing Model Incorporating Error Detection and Recovery
Abstract
In this paper we address the problem of providing a sensor system with the ability to detect measurement errors and to recover from them. We propose to equip every sensor module in the sensor system with active and latent tests, which check for environment-independent and environment-dependent sensor failures, respectively. The recovery strategy is based on rules that are local to the different sensor modules. We show the applicability in an example.
G. A. Weller, F. C. A. Groen, L. O. Hertzberger

Applications

Recognition Strategies for 3-D Objects in Occluded Environments
Abstract
Two different types of approaches will be discussed: one for a model-driven system and the other for generic shape recognition. The model-driven system, called 3D-POLY, has a computational complexity of only O(n²) for single object recognition, where n is the number of surfaces on the model object. This system achieves its computational efficiency by associating a special, a priori defined attribute with each object feature and then organizing the object features with respect to this attribute. The generic shape recognition system, called INGEN, is intended for domains where precise models of the objects involved are not available, such as the postal domain; objects in these domains are categorized by their overall shapes, with considerable latitude regarding the metrical parameters involved. A unique feature of INGEN, which sets it apart from 3D-POLY, is that object hypotheses are tested for volumetric consistency, meaning that the hypothesized objects must not violate conditions over the space that is visible to the sensor, nor over space that may not be visible to the sensor due to occlusions. In other words, while 3D-POLY forms and verifies object hypotheses on the basis of only what is visible to the sensor, INGEN also reasons over the space that is occluded. 3D-POLY is by design limited to drawing all its inferences from the visible data, even in the presence of occlusions, since industrial objects can possess highly complex shapes that preclude the kind of volumetric analysis carried out in INGEN.
A. C. Kak, A. J. Vayda, K. D. Smith, C. H. Chen
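As we read the abstract, the indexing idea behind 3D-POLY's O(n²) bound is that each model feature is filed under a precomputed attribute, so a scene feature is compared only against model features sharing that attribute. The sketch below uses a hypothetical feature representation and attribute purely for illustration.

    from collections import defaultdict

    # Hypothetical model features tagged with a precomputed attribute.
    model_features = [("f1", "planar"), ("f2", "cylindrical"),
                      ("f3", "planar")]

    # Organize the model features by their attribute once, ahead of time.
    index = defaultdict(list)
    for name, attribute in model_features:
        index[attribute].append(name)

    def candidates(scene_attribute):
        # Only features filed under the same attribute are considered,
        # instead of every feature of the model.
        return index[scene_attribute]

    print(candidates("planar"))  # ['f1', 'f3']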
Sensor-Based Robot Control Requirements for Space
Abstract
NASA has begun the development of the Space Station, a permanently manned facility in space, for a variety of scientific goals. One part of this project is the Flight Telerobotic Servicer (FTS) which will help build and maintain the structure. The FTS is envisioned as a two armed robot with seven degrees of freedom for each arm. When the FTS is launched, it is expected to perform several tasks which include the installation and removal of truss members of the Space Station structure, changeout of a variety of modular units, mating a thermal connector, etc. While the FTS will initially use teleoperation, it is envisioned to become more autonomous as technology advances. In order for the FTS to evolve from teleoperation to autonomy, NASA requires that the NASA/NBS Standard Reference Model (NASREM) be used as the functional architecture for the control system. The quest for autonomy inevitably leads to the need for sophisticated sensors and sensory processing. This paper will explore the requirements for the tasks envisioned for FTS at first launch as well as during its evolution phase and show how those tasks impact research on sensors, sensory processing, and other parts of the FTS control system. Finally, the current state of the NASREM implementation at NIST will be presented.
Ronald Lumia
Fast Mobile Robot Guidance
Abstract
Prior knowledge of the environment constitutes an important help in planning trajectories and avoiding collisions. However, modifications of the workspace due to the appearance of unexpected obstacles are a major impediment to high-speed navigation.
The use of multiple sensors provides complementary information that can help to avoid collision.
In order to maintain an adequate speed of the mobile robot without excessive speed reductions, it is necessary to interpret the information acquired from the sensors and focus on the more interesting characteristics of the scene.
The goal of this work is not to develop a navigation system, but to develop image processors which help us obtain the information needed to carry out the guidance of these mobile robots with as low a response time as possible.
The system presented facilitates the fusion of the information obtained from TV cameras and ultrasonic sensors. The facilities it provides are oriented toward reducing processing time, so that the mobile robot's navigation speed is not reduced.
Antonio B. Martínez, Albert Larré
Neural Signal Understanding for Instrumentation
Abstract
This paper reports on neural signal interpretation theory and techniques for classifying the shapes of a set of instrumentation signals, in order to calibrate the device, diagnose anomalies, generate tunings/settings, and interpret the measurement results. Neural signal understanding research is surveyed, and the selected implementation is described together with its performance in terms of correct classification rates and robustness to noise. Formal results on neural net training time and sensitivity to weights are given. A theory for neural control using functional link nets is given, and an explanation technique is designed to help neural signal understanding. The results are compared to those of a knowledge-based signal interpretation system, within the context of the same specific instrument and data.
L. F. Pau, F. Johansen
An Approach to Real-Time on Line Visual Inspection
Abstract
In this paper we present some applications of computer vision in well-defined industrial environments. In most of those applications the main constraints we have to cope with are, on the one hand, real-time response, and on the other hand, the uncertain illumination conditions intrinsic to this kind of environment. As our experience has shown (and we suppose everybody has experienced the same), it is very difficult, while developing a vision system in the laboratory, to take into account all the external factors which will influence the performance of the system on the factory floor. Factors such as illumination fluctuations, dirt, noise, vibrations, and high temperatures, among others, will inevitably lead to unpredictable behavior of the system and, as a consequence, to a loss of reliability. In order to face those drawbacks we approach every application with a defined methodology which consists mainly of two phases: modelling and design.
During the modelling phase we establish a theoretical model of the process to be inspected as well as the constraints to be considered. This procedure allows us to implement what is commonly called a local analysis of restricted areas of the image by means of special purpose hardware. The principal goal of this process is to attain the best trade-off between speed and reliability.
During the design phase we must choose a strategy consistent with the model in order to guarantee the efficient selection of primitives for the final implementation.
Vicenç Llario, Jordi Sanromà
Backmatter
Metadata
Title
Traditional and Non-Traditional Robotic Sensors
Edited by
Thomas C. Henderson
Copyright Year
1990
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-75984-0
Print ISBN
978-3-642-75986-4
DOI
https://doi.org/10.1007/978-3-642-75984-0