
2017 | Book

Control of Multiple Robots Using Vision Sensors


About this book

This monograph introduces novel methods for the control and navigation of mobile robots using 1D multi-view models obtained from omnidirectional cameras. This approach overcomes field-of-view and robustness limitations while simultaneously enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras to drive robot formations on the ground. Again, this has benefits of simplicity, scalability and flexibility. Coverage includes details of:

- a method for visual robot homing based on a memory of omnidirectional images;
- a novel vision-based pose stabilization methodology for nonholonomic ground robots based on sinusoidal-varying control inputs;
- an algorithm to recover a generic motion between two 1D views, which does not require a third view;
- a novel multi-robot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and control a formation of ground mobile robots; and
- three coordinate-free methods for decentralized mobile robot formation stabilization.

The performance of the different methods is evaluated both in simulation and experimentally with real robotic platforms and vision sensors.

Control of Multiple Robots Using Vision Sensors will serve both academic researchers studying visual control of single and multiple robots and robotics engineers seeking to design control systems based on visual sensors.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
We begin the book with an introduction to the topics that it addresses. The background on vision-based multirobot control and the arguments that motivate its study are provided in this chapter. In this discussion, we consider separately three topics pertaining to the overall thematic framework of the monograph, namely computer vision, visual control, and multirobot systems. We also cover possible applications of the research advances and technologies that are presented. Finally, the contributions described in the book are summarized, and an outline of its contents is provided.
Miguel Aranda, Gonzalo López-Nicolás, Carlos Sagüés
Chapter 2. Angle-Based Navigation Using the 1D Trifocal Tensor
Abstract
The first problem addressed in the monograph is how to enable mobile robots to autonomously navigate toward specific positions in an environment. Vision sensors have often been used for this purpose, supporting a behavior known as visual homing, in which the robot’s target location is defined by an image. This chapter describes a novel visual homing methodology for robots moving in a planar environment. The employed visual information consists of a set of omnidirectional images acquired previously at different locations (including the goal position) in the environment and the current image taken by the robot. One of the contributions presented is an algorithm that calculates the relative angles between all these locations, using the computation of the 1D trifocal tensor between views and an indirect angle estimation procedure. The tensor is particularly well suited for planar motion scenarios and provides important robustness properties to the presented technique. A further contribution within the proposed methodology is a novel control law that uses the available angles, with no range information involved, to drive the robot to the goal. This way, the method takes advantage of the strengths of omnidirectional vision, which provides a wide field of view and very precise angular information. The chapter includes a formal proof of the stability of the proposed control law, and the performance of the visual navigation method is illustrated through simulations and different sets of experiments with real images captured by cameras on board mobile robotic platforms.
Miguel Aranda, Gonzalo López-Nicolás, Carlos Sagüés
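As a pointer to how the tensor computation step above works in practice, the following minimal sketch (our illustration, not code from the book) fits the 2×2×2 1D trifocal tensor linearly from bearing correspondences across three views; the trilinear constraint and the seven-correspondence minimum are standard properties of this model:

```python
import numpy as np

def estimate_1d_trifocal_tensor(u, v, w):
    """Linear estimate of the 2x2x2 1D trifocal tensor T (up to scale).

    u, v, w: (N, 2) arrays of corresponding 1D projective points in
    three views, e.g. bearings encoded as [cos(theta), sin(theta)].
    Each correspondence yields one trilinear equation
        sum_{i,j,k} T[i,j,k] * u_i * v_j * w_k = 0,
    so N >= 7 correspondences determine the 8 entries up to scale.
    """
    A = np.einsum('ni,nj,nk->nijk', u, v, w).reshape(-1, 8)
    _, _, Vt = np.linalg.svd(A)          # null vector = last row of Vt
    return Vt[-1].reshape(2, 2, 2)
```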
Chapter 3. Vision-Based Control for Nonholonomic Vehicles
Abstract
This chapter continues the study of methods for vision-based stabilization of mobile robots to desired locations in an environment, focusing on an aspect that is critical for successful real-world implementation, but often tends to be overlooked in the literature: the control inputs employed must take into account the specific motion constraints of commercial robots, and should conform to feasibility, safety, and efficiency requirements. With this motivation, the chapter proposes a visual control approach based on sinusoidal inputs designed to stabilize the pose of a robot with nonholonomic motion constraints. All the information used in the control scheme is obtained from omnidirectional vision, in a robust manner, by means of the 1D trifocal tensor. The method is developed considering, in particular, a unicycle kinematic robot model, and its contribution is that sinusoids are used in such a way that the generated vehicle trajectories are feasible, smooth, and versatile, improving over previous sinusoidal-based control works in terms of efficiency and flexibility. Furthermore, the analytical expressions for the evolution of the robot’s state are provided and used to propose a novel state-feedback control law. The stability of the proposed approach is analyzed in the chapter, which also reports on results from simulations and experiments with a real robot, carried out to validate the methodology.
Miguel Aranda, Gonzalo López-Nicolás, Carlos Sagüés
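To make the kinematic setting concrete, the sketch below Euler-integrates the standard unicycle model under sinusoidal-varying inputs. It is purely illustrative: the amplitudes and frequency are arbitrary choices, not the state-feedback law developed in the chapter:

```python
import numpy as np

def simulate_unicycle(x, y, theta, v_fn, w_fn, T, dt=0.01):
    """Euler-integrate the unicycle model x' = v cos(theta),
    y' = v sin(theta), theta' = w under time-varying inputs."""
    traj = [(x, y, theta)]
    for t in np.arange(0.0, T, dt):
        v, w = v_fn(t), w_fn(t)
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += w * dt
        traj.append((x, y, theta))
    return np.array(traj)

# Illustrative sinusoidal-varying inputs over one period; the
# amplitudes and frequency below are arbitrary, not design values
# from the book's controller.
a, b, omega = 0.5, 0.8, 2 * np.pi / 5.0
path = simulate_unicycle(0.0, 0.0, 0.0,
                         v_fn=lambda t: a * np.sin(omega * t),
                         w_fn=lambda t: b * np.cos(omega * t),
                         T=5.0)
```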
Chapter 4. Controlling Mobile Robot Teams from 1D Homographies
Abstract
As Chaps. 2 and 3 of the monograph have illustrated, an effective way to address vision-based control when the robots (and their attached cameras) move in a planar environment is to use omnidirectional vision and 1D multiview models. This provides interesting properties in terms of accuracy, simplicity, efficiency and robustness. After exploring the use of the 1D trifocal tensor model, in this chapter we turn our attention to the 1D homography. This model can be computed from just two views but, compared with the trifocal constraint, presents additional challenges: namely, it is dependent on the structure of the scene, and does not permit direct estimation of camera motion. The chapter presents a novel method that overcomes the latter issue by making it possible to compute the planar motion between two views from two different 1D homographies. Additionally, this motion estimation framework is applied to a multirobot control task in which multiple robots are driven to a desired formation having arbitrary rotation and translation in a two-dimensional workspace. In particular, each robot exchanges visual information with a set of predefined formation neighbors, and performs a 1D homography-based estimation of the relative positions of these adjacent robots. Then, using a rigid 2D transformation computed from the relative positions, and the knowledge of the position of the group’s global centroid, each robot obtains its motion command. The robots’ individual motions within this distributed formation control scheme naturally result in the full team reaching the desired global configuration. Results from simulations and tests with real images are presented to illustrate the feasibility and effectiveness of the methodologies proposed throughout the chapter.
Miguel Aranda, Gonzalo López-Nicolás, Carlos Sagüés
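Because a 1D homography is a 2×2 projective map between two 1D views, it can be fitted linearly from as few as three correspondences. The following sketch shows this standard estimation step; it illustrates the model the chapter builds on, not the chapter's two-homography motion-recovery algorithm:

```python
import numpy as np

def estimate_1d_homography(u, v):
    """DLT-style estimate of the 2x2 1D homography H with v ~ H u.

    u, v: (N, 2) arrays of corresponding 1D projective points.
    Each pair gives one linear equation
        v_1 * (H u)_2 - v_2 * (H u)_1 = 0,
    so N >= 3 correspondences fix the 4 entries of H up to scale.
    """
    A = np.column_stack([-v[:, 1] * u[:, 0], -v[:, 1] * u[:, 1],
                          v[:, 0] * u[:, 0],  v[:, 0] * u[:, 1]])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(2, 2)
```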
Chapter 5. Control of Mobile Robot Formations Using Aerial Cameras
Abstract
Cameras are versatile and relatively low-cost sensors that provide a wealth of useful data. Thanks to these remarkable properties, it is possible to envision a range of different setups when considering vision-based multirobot control tasks. For instance, the vision sensors may be carried by the robots that are to be controlled, or external to them. In addition, cameras can be used in the context of both centralized and distributed control strategies. In this chapter, a system setup relying on external cameras and the two-view homography is proposed, to achieve the objective of driving a set of robots moving on the ground plane to a desired geometric formation. In particular, we propose to use multiple unmanned aerial vehicles (UAVs) as control units. Each of them carries a camera that observes a subset of the ground robotic team and is employed to control it. This gives rise to a partially distributed multirobot control method, which aims to combine the optimality and simplicity of centralized approaches with the scalability and robustness of distributed strategies. Relying on a homography computed for each of the UAV-mounted cameras, our method is purely image-based and has low computational cost. We formally study its stability for unicycle-type robots. In order for the multirobot system to converge to the target formation, certain intersections must be maintained between the sets of ground robots seen by the different cameras. To this end, we also propose a distributed strategy to coordinately control the motion of the cameras by using communication of their gathered information. The effectiveness of the proposed vision-based controller is illustrated via simulations and experiments with real robots.
Miguel Aranda, Gonzalo López-Nicolás, Carlos Sagüés
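The two-view homography each UAV-mounted camera relies on can be estimated with the textbook direct linear transformation (DLT), sketched below for orientation; the image-based control law the chapter builds on top of it is not reproduced here:

```python
import numpy as np

def homography_dlt(src, dst):
    """Textbook DLT: 3x3 homography H with dst ~ H src, fitted from
    N >= 4 point pairs (two linear equations per pair, solved via SVD)."""
    rows = []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([-x, -y, -1,  0,  0,  0, xp * x, xp * y, xp])
        rows.append([ 0,  0,  0, -x, -y, -1, yp * x, yp * y, yp])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)
```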
Chapter 6. Coordinate-Free Control of Multirobot Formations
Abstract
It is undoubtedly interesting, from a practical perspective, to solve the problem of multirobot formation stabilization in a decentralized fashion, while allowing the agents to rely only on their independent onboard sensors (e.g., cameras), and avoiding the use of leader robots or global reference frames. However, a key observation that serves as motivation for the work presented in this chapter is that the available controllers satisfying these conditions generally fail to provide global stability guarantees. In this chapter, we provide novel theoretical tools to address this issue; in particular, we propose coordinate-free formation stabilization algorithms that are globally convergent. The common elements of the control methods we describe are that they rely on relative position information expressed in each robot’s independent frame, and that the absence of a shared orientation reference is dealt with by introducing locally computed rotation matrices in the control laws. Specifically, three different nonlinear formation controllers for mobile robots are presented in the chapter. First, we propose an approach relying on global information of the team, implemented in a distributed networked fashion. Then, we present a purely distributed method based on each robot using only partial information from a set of formation neighbors. We finally explore formation stabilization applied to a target enclosing task in a 3D workspace. The developments in this chapter pave the way for novel vision-based implementations of control tasks involving teams of mobile robots, which is the leitmotif of the monograph. The controllers are formally studied and their performance is illustrated with simulations.
Miguel Aranda, Gonzalo López-Nicolás, Carlos Sagüés
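The chapter's central idea, absorbing the absent shared orientation reference with a locally computed rotation matrix, can be illustrated with a Procrustes alignment: each robot finds the rotation that best maps the desired relative pattern onto its measured relative positions and steps along the residual. The single-integrator sketch below is a hypothetical simplification in this spirit, not one of the chapter's actual control laws:

```python
import numpy as np

def coordinate_free_step(p_rel, c_rel, k=1.0):
    """One velocity command for a robot that only measures the relative
    positions of its neighbors in its own frame.

    p_rel: (N, 2) measured relative neighbor positions (own frame).
    c_rel: (N, 2) desired relative pattern, in an arbitrary frame.
    The missing shared orientation is absorbed by the rotation R that
    best aligns the pattern with the measurements (2D Procrustes).
    """
    H = c_rel.T @ p_rel                     # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T      # optimal aligning rotation
    err = p_rel - c_rel @ R.T               # residual per neighbor
    # Moving the robot by delta shifts every relative position by
    # -delta, so stepping along the mean residual reduces the error.
    return k * err.mean(axis=0)
```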
Chapter 7. Conclusions and Directions for Future Research
Abstract
In the last chapter of the book, we provide brief concluding remarks on the contents that have been presented, and discuss a number of ideas that may be attractive to pursue in future research efforts.
Miguel Aranda, Gonzalo López-Nicolás, Carlos Sagüés
Backmatter
Metadata
Title
Control of Multiple Robots Using Vision Sensors
Authors
Miguel Aranda
Gonzalo López-Nicolás
Carlos Sagüés
Copyright Year
2017
Electronic ISBN
978-3-319-57828-6
Print ISBN
978-3-319-57827-9
DOI
https://doi.org/10.1007/978-3-319-57828-6