2017 | Book

Robotics, Vision and Control

Fundamental Algorithms in MATLAB®, Second, Completely Revised, Extended and Updated Edition

About this book

Robotic vision, the combination of robotics and computer vision, involves the application of computer algorithms to data acquired from sensors. The research community has developed a large body of such algorithms but for a newcomer to the field this can be quite daunting. For over 20 years the author has maintained two open-source MATLAB® Toolboxes, one for robotics and one for vision. They provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. This book makes the fundamental algorithms of robotics, vision and control accessible to all. It weaves together theory, algorithms and examples in a narrative that covers robotics and computer vision separately and together. Using the latest versions of the Toolboxes the author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by real problems observed by the author over many years as a practitioner of both robotics and computer vision. It is written in an accessible but informative style, easy to read and absorb, and includes over 1000 MATLAB and Simulink® examples and over 400 figures. The book is a real walk through the fundamentals of mobile robots and arm robots, then camera models, image processing, feature extraction and multi-view geometry, finally bringing it all together with an extensive discussion of visual servo systems. This second edition is completely revised, updated and extended with coverage of Lie groups, matrix exponentials and twists; inertial navigation; differential drive robots; lattice planners; pose-graph SLAM and map making; restructured material on arm-robot kinematics and dynamics; series-elastic actuators and operational-space control; Lab color spaces; light field cameras; structured light, bundle adjustment and visual odometry; and photometric visual servoing.

“An authoritative book, reaching across fields, thoughtfully conceived and brilliantly accomplished!”

OUSSAMA KHATIB, Stanford

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
The term robot means different things to different people. Science fiction books and movies have strongly influenced what many people expect a robot to be or what it can do. Sadly the practice of robotics is far behind this popular conception. One thing is certain though – robotics will be an important technology in this century. Products such as vacuum cleaning robots have already been with us for over a decade and self-driving cars are coming. These are the vanguard of a wave of smart machines that will appear in our homes and workplaces in the near to medium future.
Peter Corke

Foundations

Frontmatter
Chapter 2. Representing Position and Orientation
Abstract
Numbers are an important part of mathematics. We use numbers for counting: there are 2 apples. We use denominate numbers, a number plus a unit, to specify distance: the object is 2 m away. We also call this single number a scalar. We use a vector, a denominate number plus a direction, to specify a location: the object is 2 m due north. We may also want to know the orientation of the object: the object is 2 m due north and facing west. The combination of position and orientation we call pose.
Peter Corke
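The chapter's running example – an object 2 m due north and facing west – can be written directly as a homogeneous transform. A minimal sketch in plain MATLAB (base functionality only; the Toolbox provides helpers such as transl2 and trot2 for the same job):

% pose = position + orientation, as a 3x3 homogeneous transform
theta = pi;                        % facing west (angle from the east-pointing x-axis)
T = [cos(theta) -sin(theta) 0;
     sin(theta)  cos(theta) 2;     % position: 2 m due north (+y)
     0           0          1];
p_obj = [1; 0; 1];                 % a point 1 m ahead of the object (homogeneous)
p_world = T * p_obj                % = (-1, 2, 1): "ahead" points west in the world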
Chapter 3. Time and Motion
Abstract
In the previous chapter we learned how to describe the pose of objects in 2- or 3-dimensional space. This chapter extends those concepts to poses that change as a function of time. Section 3.1 introduces the derivative of time-varying position, orientation and pose and relates that to concepts from mechanics such as velocity and angular velocity. Discrete-time approximations to the derivatives are covered which are useful for computer implementation of algorithms such as inertial navigation. Section 3.2 is a brief introduction to the dynamics of objects moving under the influence of forces and torques and discusses the important difference between inertial and noninertial reference frames.
Peter Corke
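As a taste of the discrete-time approximations mentioned above, orientation can be propagated from a measured angular velocity – the core of inertial navigation. A minimal sketch in plain MATLAB (the values are illustrative):

omega = [0; 0; 0.5];                         % body angular velocity (rad/s)
skew = @(w) [0 -w(3) w(2); w(3) 0 -w(1); -w(2) w(1) 0];
dt = 0.01;  R = eye(3);                      % time step and initial orientation
for k = 1:100                                % integrate over 1 second
    R = R * expm(skew(omega) * dt);          % discrete-time orientation update
end
acos((trace(R) - 1) / 2)                     % net rotation angle, close to 0.5 rad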

Mobile Robots

Frontmatter
Chapter 4. Mobile Robot Vehicles
Abstract
This chapter discusses how a robot platform moves, that is, how its pose changes with time as a function of its control inputs. There are many different types of robot platform as shown on pages 95–97 but in this chapter we will consider only four important exemplars. Section 4.1 covers three different types of wheeled vehicle that operate in a 2-dimensional world. They can be propelled forwards or backwards and their heading direction controlled by some steering mechanism. Section 4.2 describes a quadrotor, a flying vehicle, which is an example of a robot that moves in 3-dimensional space. Quadrotors are becoming increasingly popular as a robot platform since they are low cost and can be easily modeled and controlled.
Peter Corke
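The wheeled vehicles of Sect. 4.1 are described by kinematic models such as the bicycle model, which is easy to simulate. A minimal sketch in plain MATLAB (wheelbase, speed and steer angle are illustrative values; the Toolbox provides vehicle classes and Simulink models for this):

% kinematic bicycle model: pose [x; y; theta], inputs speed v and steer angle gamma
L = 2;  v = 1;  gamma = 0.3;  dt = 0.05;     % wheelbase (m), speed (m/s), steer (rad)
x = [0; 0; 0];                               % start at the origin, heading along +x
for k = 1:200
    x = x + dt * [v * cos(x(3));
                  v * sin(x(3));
                  v / L * tan(gamma)];       % Euler integration of the kinematics
end
% the vehicle traces an arc of radius L / tan(gamma), about 6.5 m here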
Chapter 5. Navigation
Abstract
Robot navigation is the problem of guiding a robot towards a goal. The human approach to navigation is to make maps and erect signposts, and at first glance it seems obvious that robots should operate the same way. However, many robotic tasks can be achieved without any map at all, using an approach referred to as reactive navigation. For example, navigating by heading towards a light, following a white line on the ground, moving through a maze by following a wall, or vacuuming a room by following a random path.
Peter Corke
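Reactive navigation of the kind described above needs no map, only a control law tied to a sensed quantity. A minimal sketch in plain MATLAB of steering toward a light source at a sensed bearing (geometry and gains are illustrative):

goal = [10; 5];  x = [0; 0; 0];      % light position and robot pose [x; y; theta]
v = 1;  Kh = 2;  dt = 0.1;           % forward speed and heading gain
for k = 1:150
    bearing = atan2(goal(2) - x(2), goal(1) - x(1));       % direction of the light
    err = atan2(sin(bearing - x(3)), cos(bearing - x(3))); % wrapped heading error
    x = x + dt * [v * cos(x(3)); v * sin(x(3)); Kh * err]; % turn toward the light
end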
Chapter 6. Localization
Abstract
In our discussion of map-based navigation we assumed that the robot had a means of knowing its position. In this chapter we discuss some of the common techniques used to estimate the location of a robot in the world – a process known as localization.
Peter Corke
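At its simplest, localization fuses a motion prediction with a noisy measurement, weighting each by its confidence. A one-dimensional Kalman-filter sketch in plain MATLAB (all variances are illustrative):

x = 0;  P = 1;                 % position estimate and its variance
u = 1;  Q = 0.1;               % odometry step and odometry noise variance
z = 1.2;  R = 0.05;            % measured range from the origin and sensor variance
x = x + u;  P = P + Q;         % predict: move by u, uncertainty grows
K = P / (P + R);               % Kalman gain: relative trust in the measurement
x = x + K * (z - x)            % corrected estimate, pulled toward z
P = (1 - K) * P;               % uncertainty shrinks after the update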

Arm-Type Robots

Frontmatter
Chapter 7. Robot Arm Kinematics
Abstract
Kinematics is the branch of mechanics that studies the motion of a body, or a system of bodies, without considering its mass or the forces acting on it.
Peter Corke
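Forward kinematics – the end-effector pose implied by the joint angles – reduces to trigonometry for a simple arm. A minimal sketch in plain MATLAB for a 2-link planar arm (the Toolbox handles full 3D arms via Denavit-Hartenberg models and functions such as fkine):

a1 = 1;  a2 = 1;  q = [0.3; 0.5];            % link lengths (m) and joint angles (rad)
x = a1 * cos(q(1)) + a2 * cos(q(1) + q(2));  % end-effector position follows
y = a1 * sin(q(1)) + a2 * sin(q(1) + q(2));  %   directly from the arm geometry
[x y]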
Chapter 8. Manipulator Velocity
Abstract
A robot’s end-effector moves in Cartesian space with a translational and rotational velocity – a spatial velocity. However, that velocity is a consequence of the velocities of the individual robot joints. In this chapter we introduce the relationship between the velocity of the joints and the spatial velocity of the end-effector.
Peter Corke
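For the same 2-link planar arm, the Jacobian that maps joint rates to end-effector velocity can be written out analytically. A minimal sketch in plain MATLAB (the Toolbox computes full 6-DOF Jacobians with functions such as jacob0):

a1 = 1;  a2 = 1;  q = [0.3; 0.5];                           % arm geometry and pose
J = [-a1*sin(q(1)) - a2*sin(q(1)+q(2)),  -a2*sin(q(1)+q(2));
      a1*cos(q(1)) + a2*cos(q(1)+q(2)),   a2*cos(q(1)+q(2))];
qd = [0.1; -0.2];           % joint velocities (rad/s)
v = J * qd                  % translational velocity of the end-effector (m/s)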
Chapter 9. Dynamics and Control
Abstract
In this chapter we consider the dynamics and control of a serial-link manipulator arm. The motion of the end-effector is the composition of the motion of each link, and the links are ultimately moved by forces and torques exerted by the joints. Section 9.1 describes the key elements of a robot joint control system that enables a single joint to follow a desired trajectory; and the challenges involved such as friction, gravity load and varying inertia.
Peter Corke
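The challenges named above – friction and gravity load – already show up for a single joint. A minimal sketch in plain MATLAB of PD control of one gravity-loaded joint (all parameters are illustrative):

I = 0.5;  b = 0.1;  m = 1;  g = 9.81;  l = 0.5;   % inertia, friction, gravity load
qstar = pi/4;  Kp = 50;  Kd = 5;                  % setpoint and PD gains
q = 0;  qd = 0;  dt = 1e-3;
for k = 1:5000
    tau = Kp * (qstar - q) - Kd * qd;             % PD control law
    qdd = (tau - b*qd - m*g*l*cos(q)) / I;        % joint dynamics with gravity
    qd = qd + dt * qdd;  q = q + dt * qd;         % Euler integration
end
q   % settles just short of qstar: the gravity load causes a steady-state error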

Computer Vision

Frontmatter
Chapter 10. Light and Color
Abstract
In ancient times it was believed that the eye radiated a cone of visual flux which mixed with visible objects in the world to create a sensation in the observer – like the sense of touch, but at a distance – this is the extromission theory. Today we consider that light from an illuminant falls on the scene, some of which is reflected into the eye of the observer to create a perception about that scene. The light that reaches the eye, or the camera, is a function of the illumination impinging on the scene and the material property known as reflectivity.
Peter Corke
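The final sentence above is, per wavelength, just a product: the spectrum reaching the eye is the illuminant spectrum scaled by the surface reflectivity. A minimal sketch in plain MATLAB (both spectra are made up for illustration; the Toolbox supplies measured illuminant and reflectance data):

lambda = 400:10:700;                   % visible wavelengths (nm)
E = ones(size(lambda));                % a flat, "white" illuminant
refl = 0.1 + 0.8 * (lambda > 600);     % surface reflecting mostly long (red) wavelengths
L = E .* refl;                         % spectrum of the light reaching the eye
plot(lambda, L)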
Chapter 11. Image Formation
Abstract
In this chapter we discuss how images are formed and captured, the first step in robot and human perception of the world. From images we can deduce the size, shape and position of objects in the world as well as other characteristics such as color and texture which ultimately lead to recognition.
Peter Corke
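The pinhole (central projection) model that underlies image formation maps a 3D point to pixel coordinates through the camera intrinsic matrix. A minimal sketch in plain MATLAB (focal length, pixel size and principal point are illustrative):

f = 0.008;  rho = 10e-6;          % focal length and pixel size (m)
u0 = 500;  v0 = 500;              % principal point (pixels)
K = [f/rho  0      u0;
     0      f/rho  v0;
     0      0      1];            % camera intrinsic matrix
P = [0.3; 0.4; 5];                % world point in the camera frame (m)
p = K * P;  p = p(1:2) / p(3)     % perspective division gives pixel coordinates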
Chapter 12. Images and Image Processing
Abstract
Image processing is a computational process that transforms one or more input images into an output image. Image processing is frequently used to enhance an image for human viewing or interpretation, for example to improve contrast. Alternatively, and of more interest to robotics, it is the foundation for the process of feature extraction which will be discussed in much more detail in the next chapter.
Peter Corke
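Contrast improvement, mentioned above, is one of the simplest image-to-image transformations: linearly stretch the pixel values to fill the available range. A minimal sketch in plain MATLAB on a synthetic image:

im = 0.4 + 0.2 * rand(100);                          % synthetic low-contrast image
im2 = (im - min(im(:))) / (max(im(:)) - min(im(:))); % output now spans [0, 1]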
Chapter 13. Image Feature Extraction
Abstract
In the last chapter we discussed the acquisition and processing of images. We learned that images are simply large arrays of pixel values but for robotic applications images have too much data and not enough information. We need to be able to answer pithy questions such as what is the pose of the object? what type of object is it? how fast is it moving? how fast am I moving? and so on. The answers to such questions are measurements obtained from the image and which we call image features. Features are the gist of the scene and the raw material that we need for robot control.
Peter Corke
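A classic example of turning pixels into information is the Harris corner detector: corners are points where the image gradient is strong in two orthogonal directions. A minimal sketch in plain MATLAB on a synthetic pattern:

im = kron([0 1; 1 0], ones(16));           % synthetic pattern with corner structure
dx = conv2(im, [-1 0 1], 'same');          % horizontal gradient
dy = conv2(im, [-1 0 1]', 'same');         % vertical gradient
w = ones(5) / 25;                          % window for the structure tensor
A = conv2(dx.^2, w, 'same');
B = conv2(dy.^2, w, 'same');
C = conv2(dx .* dy, w, 'same');
R = A.*B - C.^2 - 0.04 * (A + B).^2;       % Harris corner strength, k = 0.04
% peaks of R mark corners; edges and flat regions score low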
Chapter 14. Using Multiple Images
Abstract
In the previous chapter we learned about corner detectors which find particularly distinctive points in a scene. These points can be reliably detected in different views of the same scene irrespective of viewpoint or lighting conditions. Such points are characterized by high image gradients in orthogonal directions and typically occur on the corners of objects. However the 3-dimensional coordinate of the corresponding world point was lost in the perspective projection process which we discussed in Chap. 11 – we mapped a 3-dimensional world point to a 2-dimensional image coordinate. All we know is that the world point lies along some ray in space corresponding to the pixel coordinate, as shown in Fig. 11.6. To recover the missing third dimension we need additional information. In Sect. 11.2.3 the additional information was camera calibration parameters plus a geometric object model, and this allowed us to estimate the object’s 3-dimensional pose from 2-dimensional image data.
Peter Corke
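The additional information for recovering the third dimension can be a second view: each view constrains the world point to a ray, and two rays intersect. A minimal linear-triangulation sketch in plain MATLAB (identity intrinsics and the camera poses are illustrative):

C1 = [eye(3) zeros(3,1)];                    % first camera at the origin
C2 = [eye(3) [-1; 0; 0]];                    % second camera displaced 1 m along x
X = [0.2; 0.1; 5];                           % true point, used only to make the data
x1 = C1 * [X; 1];  x1 = x1(1:2) / x1(3);     % its image in each view
x2 = C2 * [X; 1];  x2 = x2(1:2) / x2(3);
A = [x1(1)*C1(3,:) - C1(1,:);                % each view gives two linear
     x1(2)*C1(3,:) - C1(2,:);                %   constraints on the 3D point
     x2(1)*C2(3,:) - C2(1,:);
     x2(2)*C2(3,:) - C2(2,:)];
[~, ~, V] = svd(A);
Xhat = V(1:3, end) / V(4, end)               % recovered point, equals X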

Robotics, Vision and Control

Frontmatter
Chapter 15. Vision-Based Control
Abstract
The task in visual servoing is to control the pose of the robot’s end-effector, relative to the goal, using visual features extracted from an image of the goal object. As shown in Fig. 15.1 the camera may be carried by the robot or be fixed in the world. The configuration of Fig. 15.1a has the camera mounted on the robot’s end-effector observing the goal, and is referred to as end-point closed-loop or eye-in-hand. The configuration of Fig. 15.1b has the camera at a fixed point in the world observing both the goal and the robot’s end-effector, and is referred to as end-point open-loop. In the remainder of this book we will discuss only the eye-in-hand configuration.
Peter Corke
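In image-based visual servoing each point feature contributes an image Jacobian (interaction matrix) relating camera velocity to feature motion; a proportional law then drives the feature error to zero. A minimal sketch in plain MATLAB for a single point (coordinates, depth and gain are illustrative; one point under-constrains the 6-DOF motion, so the pseudo-inverse returns the minimum-norm velocity):

x = 0.1;  y = 0.05;  Z = 2;              % normalized image coordinates and depth
Lx = [-1/Z,   0,   x/Z,  x*y,    -(1+x^2),  y;
       0,   -1/Z,  y/Z,  1+y^2,  -x*y,     -x];   % image Jacobian of the point
e = [x; y];                              % error to the desired position (the center)
lambda = 0.5;                            % proportional gain
v = -lambda * pinv(Lx) * e               % commanded camera spatial velocity (6 x 1)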
Chapter 16. Advanced Visual Servoing
Abstract
This chapter builds on the previous one and introduces some advanced visual servo techniques and applications. Section 16.1 introduces a hybrid visual servo method that avoids some of the limitations of the image-based (IBVS) and position-based (PBVS) schemes described previously.
Peter Corke
Backmatter
Metadata
Title
Robotics, Vision and Control
Author
Peter Corke
Copyright Year
2017
Electronic ISBN
978-3-319-54413-7
Print ISBN
978-3-319-54412-0
DOI
https://doi.org/10.1007/978-3-319-54413-7
