
2014 | Book

Visual Control of Wheeled Mobile Robots

Unifying Vision and Control in Generic Approaches


About this book

Vision-based control of wheeled mobile robots is an interesting field of research from a scientific and even social point of view due to its potential applicability. This book presents a formal treatment of some aspects of control theory applied to the problem of vision-based pose regulation of wheeled mobile robots, in which the robot has to reach a desired position and orientation specified by a target image. The problem is addressed in such a way that vision and control are unified to achieve closed-loop stability, a large region of convergence without local minima, and good robustness against parametric uncertainty. Three different control schemes that rely on monocular vision as the sole sensor are presented and evaluated experimentally. A common benefit of these approaches is that they are valid for imaging systems that approximately obey a central projection model, e.g., conventional cameras, catadioptric systems, and some fisheye cameras; the presented control schemes are thus generic. A minimal set of visual measurements, integrated into adequate task functions, is taken from a geometric constraint imposed between corresponding image features. In particular, the epipolar geometry and the trifocal tensor are exploited, since they can be used for generic scenes. A detailed experimental evaluation is presented for each control scheme.

Table of Contents

Frontmatter
Introduction
Abstract
In this chapter, we give a general introduction to the book, first specifying the context and the importance of research on visual control of mobile robots. We include a complete review of the state of the art on visual control. This review starts from a general point of view, where the control is carried out in six degrees of freedom, and then moves to the context of wheeled mobile robots. We also introduce the mathematical modeling of a wheeled mobile robot, the model of central cameras based on the generic camera model, and the models of the visual measurements used for feedback, which are taken from a multi-view geometric constraint.
Héctor M. Becerra, Carlos Sagüés
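The mathematical model of a wheeled mobile robot mentioned above is, in its most common form, the unicycle kinematic model. A minimal sketch of one integration step (an illustration in Python; the function name and signature are our own, not code from the book):

```python
import numpy as np

def unicycle_step(pose, v, w, dt):
    """One Euler-integration step of the standard unicycle model.

    pose = (x, y, theta) in the plane; v is the linear velocity and
    w the angular velocity. This is a generic sketch of the model,
    not an implementation from the book.
    """
    x, y, theta = pose
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += w * dt
    return (x, y, theta)
```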
Robust Visual Control Based on the Epipolar Geometry
Abstract
In this chapter, we present a new control scheme that exploits the epipolar geometry but, unlike previous approaches based on two views, extends it to three views, gaining robustness in perception. Robustness is further improved by using a control law based on sliding mode theory to solve the problem of pose regulation. The core of the chapter is a novel control law that achieves total correction of the robot pose with no auxiliary images and no 3D scene information, without switching to any visual information other than the epipolar geometry, and that is applicable with any central camera. Additionally, the use of sliding mode control avoids the need for precise camera calibration in the case of conventional cameras, and the control law deals with the singularities induced by the epipolar geometry. The effectiveness of the approach is tested via simulations, with kinematic and dynamic models of the robot, and real-world experiments.
Héctor M. Becerra, Carlos Sagüés
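As an illustration of the kind of epipolar measurement such schemes build on, the epipoles of a fundamental matrix are its right and left null vectors and can be recovered by SVD (a generic sketch, assuming finite epipoles; not the book's implementation):

```python
import numpy as np

def epipoles(F):
    """Epipoles of a 3x3, rank-2 fundamental matrix F.

    e1 is the right null vector (epipole in the first image),
    e2 the left null vector (epipole in the second image).
    Illustrative sketch; normalization assumes finite epipoles.
    """
    U, S, Vt = np.linalg.svd(F)
    e1 = Vt[-1]       # F @ e1 ~ 0
    e2 = U[:, -1]     # e2 @ F ~ 0
    return e1 / e1[2], e2 / e2[2]
```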
A Robust Control Scheme Based on the Trifocal Tensor
Abstract
In this chapter, we rely on the natural geometric constraint for three views, the trifocal tensor. We present a novel image-based visual servoing scheme that also solves the pose regulation problem, in this case by exploiting the property of omnidirectional images of preserving bearing information. This is achieved by using the additional information of a third image in the geometric model through a simplified trifocal tensor, which can be computed directly from image features for any type of central camera, avoiding the need for complete camera calibration. The main idea of the chapter is that the elements of the tensor are introduced directly into the control law, so that neither prior knowledge of the scene nor any auxiliary image is required. Additionally, a sliding mode control law in a square system ensures stability and robustness of the closed loop. The good performance of the control system is demonstrated via simulations and real-world experiments with a hypercatadioptric imaging system.
Héctor M. Becerra, Carlos Sagüés
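For reference, the full trifocal tensor of three views can be assembled from the camera matrices in the standard way (Hartley–Zisserman notation, with the first camera canonical). This is a generic sketch of the constraint that the chapter specializes to planar motion, not the book's simplified tensor:

```python
import numpy as np

def trifocal_tensor(P2, P3):
    """Trifocal tensor for views with canonical P1 = [I | 0].

    T_i = a_i b4^T - a4 b_i^T, where a_i / b_i are the columns of
    the 3x4 camera matrices P2 / P3. Illustrative sketch only.
    """
    T = np.zeros((3, 3, 3))
    for i in range(3):
        T[i] = np.outer(P2[:, i], P3[:, 3]) - np.outer(P2[:, 3], P3[:, i])
    return T
```

A point observed as x1 in the first view can then be transferred to the second view through any line l3 passing through its image in the third view, via the matrix sum(x1[i] * T[i]).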
Dynamic Pose-Estimation for Visual Control
Abstract
In this chapter, we depart from the image-based approach and instead estimate the position and orientation of the camera-robot system in order to regulate the pose in Cartesian space. This provides the benefit of including a kind of memory in the closed loop, which reduces the dependence of the control on the visual data and facilitates the planning of complex tasks. The camera-robot pose is recovered using a dynamic estimation scheme that exploits visual measurements given by the epipolar geometry and the trifocal tensor. The key contribution of the chapter is a novel observability study of the pose-estimation problem from measurements given by the aforementioned geometric constraints, as well as the demonstration that the estimated pose is suitable for closed-loop control. An additional benefit of exploiting measurements from geometric constraints for pose estimation is the generality of the estimation scheme, in the sense that it is valid for any visual sensor obeying a central projection model. The effectiveness of the approach is evaluated via simulations and real-world experiments.
Héctor M. Becerra, Carlos Sagüés
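The flavor of such an observability study can be illustrated, in the linear time-invariant case, by the classical rank test on the observability matrix (a minimal sketch with a name of our own; the chapter treats the nonlinear problem):

```python
import numpy as np

def observability_rank(A, C):
    """Rank of the observability matrix [C; CA; CA^2; ...; CA^(n-1)].

    A is the n x n state matrix, C the measurement matrix. The pair
    (A, C) is observable iff the rank equals n. Numerical sketch only.
    """
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.linalg.matrix_rank(np.vstack(blocks))
```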
Conclusions
Abstract
In this book, we have presented and experimentally evaluated solutions to the problem of visual control of wheeled mobile robots using exclusively the information provided by an onboard monocular imaging system. The importance of addressing this problem is motivated by the increasing number of service applications for this type of robot. In this context, the general focus of the book is to give a formal treatment of the aspects of control theory applied to the particular problem of vision-based navigation of wheeled mobile robots, in such a way that vision and control are unified to design control schemes with stability, a large region of convergence (without local minima), and good robustness against parametric uncertainty and image noise.
Héctor M. Becerra, Carlos Sagüés
Backmatter
Metadata
Title
Visual Control of Wheeled Mobile Robots
Written by
Héctor M. Becerra
Carlos Sagüés
Copyright year
2014
Electronic ISBN
978-3-319-05783-5
Print ISBN
978-3-319-05782-8
DOI
https://doi.org/10.1007/978-3-319-05783-5
