Elsevier

Image and Vision Computing

Volume 14, Issue 10, December 1996, Pages 715-732
Vision-based robot positioning using neural networks

https://doi.org/10.1016/0262-8856(96)89022-6

Abstract

Most vision-based robot positioning techniques rely on analytical formulations of the relationship between the robot pose and the projected image coordinates of several geometric features of the observed scene. This usually requires that several simple features such as points, lines or circles be visible in the image, and that they either be unoccluded in multiple views or else belong to a 3D model. Feature-matching algorithms, camera calibration, models of the camera geometry and object feature relationships are also necessary for pose determination. These steps are often computationally intensive and error-prone, and the complexity of the resulting formulations often limits the number of controllable degrees of freedom. We provide a comparative survey of existing visual robot positioning methods, and present a new technique based on neural learning and global image descriptors which overcomes many of these limitations. A feedforward neural network is used to learn the complex implicit relationship between the pose displacements of a 6-dof robot and the observed variations in global descriptors of the image, such as geometric moments and Fourier descriptors. The trained network may then be used to move the robot from arbitrary initial positions to a desired pose with respect to the observed scene. The method is shown to be capable of positioning an industrial robot with respect to a variety of complex objects with an acceptable precision for an industrial inspection application, and could be useful in other real-world tasks such as grasping, assembly and navigation.
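The pipeline described above can be sketched in a few lines: compute global geometric moments of the current and reference views, and feed the descriptor variation through a small feedforward network that outputs a 6-dof pose correction. This is a minimal illustrative sketch, not the paper's implementation; the names `geometric_moments` and `PoseNet` are hypothetical, and the network weights below are untrained random placeholders where the paper's scheme would use weights learned by backpropagation on (descriptor variation, pose displacement) pairs.

```python
import numpy as np

def geometric_moments(img, max_order=2):
    """Raw geometric moments m_pq = sum_x sum_y x^p y^q I(x, y)
    for all p + q <= max_order, flattened into a descriptor vector."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    return np.array([
        (x**p * y**q * img).sum()
        for p in range(max_order + 1)
        for q in range(max_order + 1 - p)
    ])

class PoseNet:
    """Minimal one-hidden-layer feedforward network (tanh hidden units)
    mapping a descriptor-variation vector to a 6-dof pose correction.
    Weights are random placeholders standing in for trained ones."""
    def __init__(self, n_in, n_hidden=16, n_out=6, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def __call__(self, d):
        return self.W2 @ np.tanh(self.W1 @ d + self.b1) + self.b2

# Closed-loop use: the pose correction is predicted from the difference
# between the current view's descriptors and those of the desired pose.
ref = np.zeros((32, 32)); ref[8:24, 8:24] = 1.0    # synthetic reference view
cur = np.zeros((32, 32)); cur[10:26, 9:25] = 1.0   # synthetic displaced view
delta = geometric_moments(cur) - geometric_moments(ref)
net = PoseNet(n_in=delta.size)
pose_correction = net(delta)   # [dx, dy, dz, droll, dpitch, dyaw]
```

In the closed-loop scheme the abstract describes, such a correction would be applied to the robot and the process repeated until the descriptor variation vanishes; Fourier descriptors could be concatenated with the moments in the same input vector.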
