
Bio-inspired vision based robot control using featureless estimations of time-to-contact


Haijie Zhang and Jianguo Zhao 2017 Bioinspir. Biomim. 12 025001. Published 31 January 2017. © 2017 IOP Publishing Ltd. DOI: 10.1088/1748-3190/aa53c4

Abstract

Marvelous vision based dynamic behaviors of insects and birds, such as perching, landing, and obstacle avoidance, have inspired scientists to propose the idea of time-to-contact, defined as the time for a moving observer to contact an object or surface if the current velocity is maintained. Since time-to-contact can be estimated directly from consecutive images using only a vision sensor, it is widely used on a variety of robots to fulfill tasks such as obstacle avoidance, docking, chasing, perching, and landing. However, most existing methods for estimating time-to-contact need to extract and track features during the control process, which is time-consuming and cannot be applied to robots with limited computation power. In this paper, we adopt a featureless estimation method, extend it to more general settings with angular velocities, and improve the estimation results using Kalman filtering. Further, we design an error based controller with a gain scheduling strategy to control the motion of mobile robots. Experiments for both estimation and control are conducted on a customized mobile robot platform with low-cost embedded systems. Onboard experimental results demonstrate the effectiveness of the proposed approach, with the robot successfully controlled to dock in front of a vertical wall. The estimation and control methods presented in this paper can be applied to computation-constrained miniature robots for agile locomotion such as landing, docking, or navigation.


1. Introduction

In nature, various insects and animals rely on vision to control their motion and negotiate dynamic, uncertain environments. Insects can land safely on different surfaces such as the ground or trees. Hummingbirds, paragons of precision flight, can brake to gently dock on a flower with pinpoint accuracy in a very short time [1]. Seabirds can adjust when to fold their wings before diving into water for fish [2]. All such elegant actions are accomplished using only the animals' eyes, without any distance sensors.

By analyzing the continuous images captured by their eyes, insects and birds extract the so-called time-to-contact to guide their motion [3]. Time-to-contact is defined as the time it would take a moving observer to make contact with an object or surface if the current velocity is maintained. Comprehensive experimental studies have shown that time-to-contact is a pervasive cue for animal navigation. It has been discovered that bees keep a constant rate of image expansion to perch on flowers, which is essentially a constant time-to-contact strategy [4]. Films of pigeons flying to a branch have been analyzed, showing that pigeons also adopt time-to-contact to perch [5]. From landing image sequences, fruit flies have also been found to leverage the inverse of time-to-contact to control landing and obstacle avoidance [6]. Besides animals, drivers also rely on time-to-contact when braking to avoid collisions with obstacles or pedestrians [7].

Inspired by these biological studies, time-to-contact has already been applied to a variety of robotic platforms such as mobile robots and aerial robots to achieve landing, docking, chasing, obstacle avoidance, and navigation. For mobile robots, docking and obstacle avoidance are the two main tasks. Souhila et al estimated time-to-contact from optic flow to navigate a mobile robot in cluttered indoor environments [8]. Kaneta et al employed time-to-contact to avoid obstacles and chase another mobile robot, obtaining time-to-contact from object size information in consecutive images [9]. McCarthy et al utilized time-to-contact to control the docking of a mobile robot in front of a vertical wall; they estimated time-to-contact from the divergence of optic flow in image sequences, considering the effects caused by the focus of expansion [10]. For aerial robot platforms, landing and navigation are the two main tasks. Kendoul proposed several time-to-contact controllers to make a quadcopter platform realize docking, landing, and navigation [11], in which the time-to-contact is estimated from GPS. To achieve safe landings, Izzo and de Croon proposed different time-to-contact control methods [12]. They estimated time-to-contact from optical flow divergence and validated their control algorithm using a Parrot AR drone [13].

There are two widely used time-to-contact estimation methods: the size based method [9, 14–18] and the optic flow divergence based method [8, 12, 13, 19–21]. For the size based method, time-to-contact can be calculated from $\tau = A/\dot{A}$, where A is the size of a feature or object in the image and $\dot{A}$ is its time derivative. To get the size of the objects, feature extraction and tracking are needed across successive images, which is not only time-consuming but also a challenging problem in computer vision, especially in natural environments [14]. For the optic flow divergence based method [13], feature extraction and tracking are also needed to recover the optical flow for estimating time-to-contact. Similarly, real time control is impeded by the requirement for good features and by the time-consuming extraction and tracking processes.

Recently, pixel shift has been utilized to estimate time-to-contact with good results [22, 23]. In addition, Horn et al proposed a method to estimate time-to-contact without relying on feature extraction and tracking, which we call the featureless method in this paper [24, 25]. This method directly manipulates all the intensities in two consecutive images, based on the constant brightness assumption, to estimate the time-to-contact. Without feature extraction and tracking, the computation time is shorter and the results are more reliable. Nevertheless, only the simplified case of a camera with purely linear velocities is considered. Further, the estimation method has not been applied to robot control.

In this paper, we aim to exploit and extend the featureless direct method proposed by Horn et al to estimate time-to-contact, and then use the estimated time-to-contact to control the motion of mobile robots using the constant time-to-contact strategy [10]. This paper makes three main contributions. First, we extend the featureless method to allow estimation in more general settings where angular velocities exist, which is ubiquitous for robotic platforms. Second, we improve the estimation results by applying Kalman filtering to the estimated time-to-contact. Third, combining the estimation method and the constant tau theory, we design an error based controller with a gain scheduling strategy to control the motion of a mobile robot for docking. The estimation and control methods presented in this paper can be extended to other robotic platforms, especially computation-constrained miniature robots [26, 27], for landing, docking, or navigation.

The results presented in this paper expand on our previous results in [28], with several significant differences. First, we improve the estimation results using Kalman filtering. Second, the controller is improved from a basic proportional controller. Third, we conduct the docking experiment for a mobile robot with angular velocities, which was not performed in [28].

The rest of this paper is organized as follows: section 2 describes the featureless method to estimate time-to-contact with and without angular velocities, the Kalman filter, and the control algorithm. Based on these estimation and control algorithms, section 3 details the experimental setup, results, and discussion. Section 4 concludes the paper.

2. Featureless based control method

In this section, the featureless method to estimate time-to-contact in [25] is introduced first. Then we extend this method by considering angular velocities. After that, a Kalman filter is adopted to improve the performance of the algorithm. Finally, an error based proportional controller with gain scheduling is introduced.

Figure 1 shows a typical docking experiment scenario: a mobile robot carrying a camera moves toward a wall with translational velocities $T_x$, $T_y$, and $T_z$ and angular velocities $\omega_x$, $\omega_y$, and $\omega_z$. The camera frame is defined as follows: the origin is at the center of the image sensor, the Z axis is along the optical axis, and the X and Y axes are parallel to the horizontal and vertical axes of the image sensor, respectively. A point with coordinates (X, Y, Z) in the camera frame is projected to the image frame with coordinates (x, y), where the image frame is a two-dimensional coordinate frame in the image plane with its origin at the principal point, and its x and y axes parallel to the horizontal and vertical directions of the image plane, respectively. The goal of docking is to let the robot progressively decrease its speed as it approaches the wall, finally realizing a soft contact with the wall.

Figure 1. Docking experiment scenario.

2.1. Time-to-contact without angular velocities

In computer vision, the well-known constant brightness assumption states that the brightness E of an image point (x, y) at time t remains constant, which can be written as follows [29]:

$$E_x v_x + E_y v_y + E_t = 0 \tag{1}$$

where $v_x = \mathrm{d}x/\mathrm{d}t$ and $v_y = \mathrm{d}y/\mathrm{d}t$ are the components of the optic flow, $E_x = \partial E/\partial x$ and $E_y = \partial E/\partial y$ are the brightness gradients, and $E_t = \partial E/\partial t$ is the rate of change of brightness with respect to time.
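As a concrete illustration, the three gradients can be approximated from two consecutive grayscale frames with simple finite differences. The following is a minimal sketch, assuming frames one sampling period apart; the function name and the averaging scheme are our choices, not prescribed by [24, 25].

```python
import numpy as np

def brightness_gradients(E1, E2):
    """Approximate E_x, E_y, E_t from two consecutive grayscale frames.

    Spatial gradients are taken on the average of the two frames and E_t is
    the frame difference, so all quantities refer to the same mid-point in
    time. If E_t spans one frame period, the resulting time-to-contact is
    expressed in frame periods.
    """
    E = 0.5 * (E1.astype(float) + E2.astype(float))
    Ex = np.gradient(E, axis=1)   # brightness gradient along x (columns)
    Ey = np.gradient(E, axis=0)   # brightness gradient along y (rows)
    Et = E2.astype(float) - E1.astype(float)
    return Ex, Ey, Et
```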

When the camera has both translational velocities and angular velocities as shown in figure 1, we obtain the optic flow vector equation [30]:

$$v_x = \frac{-\lambda T_x + x T_z}{Z} + \frac{xy}{\lambda}\omega_x - \frac{\lambda^2 + x^2}{\lambda}\omega_y + y\,\omega_z, \qquad v_y = \frac{-\lambda T_y + y T_z}{Z} + \frac{\lambda^2 + y^2}{\lambda}\omega_x - \frac{xy}{\lambda}\omega_y - x\,\omega_z \tag{2}$$

where λ is the focal length. From figure 1, the time-to-contact can be calculated as:

$$\tau = \frac{Z}{T_z} \tag{3}$$

Plugging the optic flow equation (2) into equation (1), we can solve for the time-to-contact. If the angular velocities are not considered, the solution can be classified into three cases [24]:

2.1.1. Case I

The robot moves along the optical axis which is perpendicular to an upright planar surface:

$$W = -\frac{\sum G E_t}{\sum G^2}, \qquad \tau = \frac{1}{W} \tag{4}$$

where $G = xE_x + yE_y$, $W = T_z/Z$, and $\sum$ is an abbreviation of $\sum_{i=1}^{m}\sum_{j=1}^{n}$, where i, j are the pixel indices in an $m \times n$ image. Unless otherwise stated, $\sum$ has the same meaning in the rest of this paper.
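Under our reconstruction of equation (4), the case I estimate reduces to a few array operations. A minimal sketch, reusing brightness_gradients from above:

```python
import numpy as np

def ttc_case1(Ex, Ey, Et):
    """Case I: pure translation along the optical axis toward a
    fronto-parallel surface. Minimizing sum((G*W + E_t)^2) over W gives
    W = -sum(G*E_t)/sum(G*G), and tau = 1/W."""
    h, w = Ex.shape
    # pixel coordinates measured from the principal point (image center)
    x, y = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
    G = x * Ex + y * Ey
    W = -np.sum(G * Et) / np.sum(G * G)
    return 1.0 / W
```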

2.1.2. Case II

The robot translates in an arbitrary direction, but the optical axis is perpendicular to an upright planar surface:

$$\begin{pmatrix} \sum E_x^2 & \sum E_x E_y & \sum G E_x \\ \sum E_x E_y & \sum E_y^2 & \sum G E_y \\ \sum G E_x & \sum G E_y & \sum G^2 \end{pmatrix} \begin{pmatrix} U \\ V \\ W \end{pmatrix} = -\begin{pmatrix} \sum E_x E_t \\ \sum E_y E_t \\ \sum G E_t \end{pmatrix}, \qquad \tau = \frac{1}{W} \tag{5}$$

where $U = -\lambda T_x / Z$ and $V = -\lambda T_y / Z$.
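A sketch of the corresponding least-squares solve, consistent with our reconstruction of equation (5); x and y are the pixel coordinate grids relative to the principal point, as in the case I sketch:

```python
import numpy as np

def ttc_case2(Ex, Ey, Et, x, y):
    """Case II: arbitrary translation, optical axis perpendicular to the
    wall. Solves the 3x3 normal equations for (U, V, W); tau = 1/W."""
    G = x * Ex + y * Ey
    A = np.array([
        [np.sum(Ex * Ex), np.sum(Ex * Ey), np.sum(G * Ex)],
        [np.sum(Ex * Ey), np.sum(Ey * Ey), np.sum(G * Ey)],
        [np.sum(G * Ex),  np.sum(G * Ey),  np.sum(G * G)],
    ])
    b = -np.array([np.sum(Ex * Et), np.sum(Ey * Et), np.sum(G * Et)])
    U, V, W = np.linalg.solve(A, b)
    return 1.0 / W
```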

2.1.3. Case III

The robot translates along the optical axis relative to a planar surface of arbitrary orientation:

$$\begin{pmatrix} \sum G^2 & \sum x G^2 & \sum y G^2 \\ \sum x G^2 & \sum x^2 G^2 & \sum x y G^2 \\ \sum y G^2 & \sum x y G^2 & \sum y^2 G^2 \end{pmatrix} \begin{pmatrix} W \\ M \\ N \end{pmatrix} = -\begin{pmatrix} \sum G E_t \\ \sum x G E_t \\ \sum y G E_t \end{pmatrix}, \qquad \tau = \frac{1}{W} \tag{6}$$

where $M = -\frac{m}{\lambda}W$ and $N = -\frac{n}{\lambda}W$; m and n are the slopes of the planar surface in the X and Y directions in the camera frame, and the surface equation can be written as

$$Z = Z_0 + mX + nY \tag{7}$$

where $Z_0$ is the depth of the intersection of the optical axis and the surface. In [25], the estimation algorithm for case II generates the best result when there is only translational movement. Therefore, we adopt the case II algorithm for our docking experiments in section 3.
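For completeness, a sketch of case III under the same reconstruction: the constraint $G(W + xM + yN) + E_t = 0$ is linear in (W, M, N), so a single 3 × 3 solve suffices. The function name is ours.

```python
import numpy as np

def ttc_case3(Ex, Ey, Et, x, y):
    """Case III: translation along the optical axis, planar surface of
    arbitrary orientation. tau = 1/W refers to the point where the
    optical axis meets the surface."""
    G = x * Ex + y * Ey
    A = np.array([
        [np.sum(G * G),     np.sum(x * G * G),     np.sum(y * G * G)],
        [np.sum(x * G * G), np.sum(x * x * G * G), np.sum(x * y * G * G)],
        [np.sum(y * G * G), np.sum(x * y * G * G), np.sum(y * y * G * G)],
    ])
    b = -np.array([np.sum(G * Et), np.sum(x * G * Et), np.sum(y * G * Et)])
    W, M, N = np.linalg.solve(A, b)
    return 1.0 / W
```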

2.2. Time-to-contact with angular velocities

The previous three cases assume there is no angular velocity of the camera, which does not hold for general problems (an aerial robot with a camera is a typical example). As a result, we need to extend the method to incorporate angular velocities. To include them, we plug equation (2) into the constant brightness assumption and consider the surface in equation (7), leading to the following equation:

$$\left(E_x U + E_y V + G W\right)\left(1 + x\frac{M}{W} + y\frac{N}{W}\right) + J E_x + K E_y + E_t = 0 \tag{8}$$

where U and V are as defined in case II, M, N, and W are as defined in case III, and

$$J = \frac{xy}{\lambda}\omega_x - \frac{\lambda^2 + x^2}{\lambda}\omega_y + y\,\omega_z, \qquad K = \frac{\lambda^2 + y^2}{\lambda}\omega_x - \frac{xy}{\lambda}\omega_y - x\,\omega_z \tag{9}$$

Note that with a gyroscope to measure $\omega_x$, $\omega_y$, and $\omega_z$, J and K are known for a given image. The equation is similar to case IV in [24], where the robot moves along an arbitrary trajectory and the orientation of the surface is arbitrary. We can formulate a least-squares problem to find the five unknown parameters U, V, M, N, and W that minimize the following error sum over the image area of interest:

$$\sum \left[\left(E_x U + E_y V + G W\right)\left(1 + x\frac{M}{W} + y\frac{N}{W}\right) + J E_x + K E_y + E_t\right]^2 \tag{10}$$

Letting $P = 1 + xM/W + yN/W$ and $Q = E_x U/W + E_y V/W + G$ yields

$$\sum \left(P Q W + J E_x + K E_y + E_t\right)^2 \tag{11}$$

To find the best values of the five unknown parameters, we differentiate the above sum with respect to each parameter and set the results to zero. The resulting solution is similar to the one in [24], which uses iterations to solve the nonlinear equations. Given an initial guess of M/W and N/W, P is known and we can solve for U, V, and W using the following equation:

$$\begin{pmatrix} \sum E_x^2 P^2 & \sum E_x E_y P^2 & \sum G E_x P^2 \\ \sum E_x E_y P^2 & \sum E_y^2 P^2 & \sum G E_y P^2 \\ \sum G E_x P^2 & \sum G E_y P^2 & \sum G^2 P^2 \end{pmatrix} \begin{pmatrix} U \\ V \\ W \end{pmatrix} = -\begin{pmatrix} \sum E_x P E_\omega \\ \sum E_y P E_\omega \\ \sum G P E_\omega \end{pmatrix} \tag{12}$$

where $E_\omega = J E_x + K E_y + E_t$. Using the estimates of U, V, and W, we can solve for the new M, N, and W using

$$\begin{pmatrix} \sum Q^2 & \sum x Q^2 & \sum y Q^2 \\ \sum x Q^2 & \sum x^2 Q^2 & \sum x y Q^2 \\ \sum y Q^2 & \sum x y Q^2 & \sum y^2 Q^2 \end{pmatrix} \begin{pmatrix} W \\ M \\ N \end{pmatrix} = -\begin{pmatrix} \sum Q E_\omega \\ \sum x Q E_\omega \\ \sum y Q E_\omega \end{pmatrix} \tag{13}$$

Based on the new M, N, and W, we can continue the iteration for new U, V, and W. A close approximation of the time-to-contact is obtained after several iterations.
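Putting equations (12) and (13) together, the alternating scheme might look like the following sketch, consistent with our reconstruction above; the iteration count and the fronto-parallel initial guess are our choices.

```python
import numpy as np

def ttc_general(Ex, Ey, Et, x, y, J, K, n_iter=5):
    """Iterative solver sketch for the general case: arbitrary translation,
    planar surface of arbitrary orientation, with the rotational flow (J, K)
    known from a gyroscope. Alternates between the weighted linear system
    for (U, V, W) and the one for (W, M, N)."""
    Ew = J * Ex + K * Ey + Et      # rotation-compensated temporal term
    G = x * Ex + y * Ey
    M = N = 0.0                    # initial guess: fronto-parallel wall
    W = 1.0
    for _ in range(n_iter):
        P = 1.0 + x * M / W + y * N / W
        A = np.array([
            [np.sum(Ex * Ex * P * P), np.sum(Ex * Ey * P * P), np.sum(G * Ex * P * P)],
            [np.sum(Ex * Ey * P * P), np.sum(Ey * Ey * P * P), np.sum(G * Ey * P * P)],
            [np.sum(G * Ex * P * P),  np.sum(G * Ey * P * P),  np.sum(G * G * P * P)],
        ])
        b = -np.array([np.sum(Ex * P * Ew), np.sum(Ey * P * Ew), np.sum(G * P * Ew)])
        U, V, W = np.linalg.solve(A, b)
        Q = Ex * U / W + Ey * V / W + G
        A2 = np.array([
            [np.sum(Q * Q),     np.sum(x * Q * Q),     np.sum(y * Q * Q)],
            [np.sum(x * Q * Q), np.sum(x * x * Q * Q), np.sum(x * y * Q * Q)],
            [np.sum(y * Q * Q), np.sum(x * y * Q * Q), np.sum(y * y * Q * Q)],
        ])
        b2 = -np.array([np.sum(Q * Ew), np.sum(x * Q * Ew), np.sum(y * Q * Ew)])
        W, M, N = np.linalg.solve(A2, b2)
    return 1.0 / W
```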

For mobile robots, the case is simplified, since the angular velocities $\omega_x$ and $\omega_z$ can be neglected: rotations about the X and Z axes are small compared with possible rotations about the Y axis. Then $J = -\frac{\lambda^2 + x^2}{\lambda}\omega_y$ and $K = -\frac{xy}{\lambda}\omega_y$. Since case II gives the best results in [24], we use equation (5), which can be rewritten as:

$$\begin{pmatrix} \sum E_x^2 & \sum E_x E_y & \sum G E_x \\ \sum E_x E_y & \sum E_y^2 & \sum G E_y \\ \sum G E_x & \sum G E_y & \sum G^2 \end{pmatrix} \begin{pmatrix} U \\ V \\ W \end{pmatrix} = -\begin{pmatrix} \sum E_x E_\omega \\ \sum E_y E_\omega \\ \sum G E_\omega \end{pmatrix} \tag{14}$$
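In code, this amounts to compensating the temporal gradient with the gyroscope reading and reusing the case II solver. A short sketch under the same assumptions as before (λ in pixels, $\omega_y$ in radians per frame period):

```python
def ttc_case2_gyro(Ex, Ey, Et, x, y, omega_y, lam):
    """Case II estimate with the rotational flow about the Y axis removed.
    J and K follow the simplified expressions above; the result reuses the
    earlier ttc_case2 sketch."""
    J = -(lam**2 + x**2) / lam * omega_y
    K = -(x * y) / lam * omega_y
    Ew = J * Ex + K * Ey + Et          # E_omega replaces E_t in equation (5)
    return ttc_case2(Ex, Ey, Ew, x, y)
```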

2.3. Kalman filter

In the preliminary mobile robot docking experiments, the estimated time-to-contact could change abruptly due to the low quality camera and measurement errors from the gyroscope, which caused vibrations in the robot's speed [28]. To address this problem, we adopt a Kalman filter to smooth the estimated time-to-contact. The Kalman filter is widely used to obtain more precise estimates by applying Bayesian inference and estimating a joint probability distribution over the variables at each time step, even when the original measurements contain noise and inaccuracies. Here we take τ as the system state and assume the system model [31]:

$$\tau_i = A\tau_{i-1} + Bu_{i-1} + w_{i-1}, \qquad Z_i = C\tau_i + Du_i + v_i \tag{15}$$

where $u_{i-1}$ is the system input at time $i-1$, $Z_i$ is the measured time-to-contact at time i, $A = C = 1$, $B = D = 0$, and $w_{i-1}$ and $v_i$ are the process and measurement noise, respectively. Then the discrete Kalman filter prediction equations can be written as:

$$\bar{\tau}_i = \tau_{i-1}, \qquad \bar{P}_i = P_{i-1} + E_w \tag{16}$$

where $\bar{\tau}_i$ is the predicted state estimate at time i, $\tau_{i-1}$ is the filtered state at time $i-1$, and $\bar{P}_i$ is the predicted estimate covariance. The update equations can be written as follows:

$$K_i = \bar{P}_i\left(\bar{P}_i + E_v\right)^{-1}, \qquad \tau_i = \bar{\tau}_i + K_i\left(Z_i - \bar{\tau}_i\right), \qquad P_i = \left(1 - K_i\right)\bar{P}_i \tag{17}$$

where $E_w$ and $E_v$ are the process noise covariance and the measurement noise covariance, respectively, and $K_i$ is the optimal Kalman gain at time i.
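Because the state is scalar, the whole filter fits in a few lines. A minimal sketch matching equations (15)-(17); the covariance values below are illustrative placeholders, not tuned values from the paper:

```python
class ScalarKalman:
    """Minimal scalar Kalman filter for the time-to-contact with
    A = C = 1 and B = D = 0."""

    def __init__(self, tau0, p0=1.0, Ew=0.01, Ev=0.5):
        self.tau = tau0   # filtered state tau_{i-1}
        self.p = p0       # estimate error covariance
        self.Ew = Ew      # process noise covariance (assumed value)
        self.Ev = Ev      # measurement noise covariance (assumed value)

    def update(self, z):
        # prediction (equation (16)): with A = 1, B = 0 the state carries over
        tau_bar = self.tau
        p_bar = self.p + self.Ew
        # update (equation (17)): optimal gain, then correct with measured TTC z
        k = p_bar / (p_bar + self.Ev)
        self.tau = tau_bar + k * (z - tau_bar)
        self.p = (1.0 - k) * p_bar
        return self.tau
```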

2.4. Error based proportional controller with gain scheduling

After obtaining the estimate of time-to-contact, a controller can be designed to make the time-to-contact track a reference trajectory so that the robot can perform different tasks such as landing, docking, and chasing. Let the reference time-to-contact $\tau_{\text{ref}}$ be a constant value. It has been verified that keeping a constant time-to-contact can realize a soft contact between the observer and the object [10, 32].

For the experiment, we focus on the mobile robot docking problem shown in figure 1. In this scenario, the robot needs to control its speed as it approaches the wall to realize a soft contact. A mobile robot with an attached camera moves towards a vertical wall, and we have the following equations:

$$\dot{Z} = -T_z, \qquad \dot{T}_z = a \tag{18}$$

where Z is the distance from the camera to the wall, $T_z$ is the robot velocity along the Z axis of the camera frame, and a is the acceleration in the same direction as $T_z$.

For this system, a standard proportional controller is widely used [10]:

$$a = K\left(\tau - \tau_{\text{ref}}\right) \tag{19}$$

where K is a constant proportional gain. Although this controller works, to obtain better control results we design a new controller based on the error with a gain scheduling strategy:

$$a = -K_p e, \qquad e = \tau_{\text{ref}} - \tau \tag{20}$$

where

$$K_p = \begin{cases} K_{11} & 0 < e \leqslant e_1 \\ K_{12} & e_1 < e \leqslant e_2 \\ K_{13} & e > e_2 \\ K_{21} & -e_1 \leqslant e < 0 \\ K_{22} & -e_2 \leqslant e < -e_1 \\ K_{23} & e < -e_2 \end{cases} \tag{21}$$

where $e_1 < e_2$ are the error thresholds for gain scheduling and $K_{11} > K_{21}$, $K_{12} > K_{22}$, $K_{13} > K_{23}$. Since the proportional gains depend on the error, we call this an error based controller in this paper. The reason for designing different gains for errors of the same magnitude but opposite sign is as follows. When $Z/T_z$ is larger than $\tau_{\text{ref}}$, $\tau = Z/T_z$ can be adjusted to the reference value by increasing $T_z$; since Z is decreasing as long as $T_z > 0$, this works similarly to the standard proportional controller. However, when $Z/T_z$ is smaller than $\tau_{\text{ref}}$, an increase of $Z/T_z$ is not guaranteed if $T_z$ decreases slowly, because Z is also decreasing as long as $T_z > 0$. Therefore, we design larger proportional gains for positive errors to compensate for the decrease of Z.
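A sketch of this controller with the gains used in section 3.1.2. The sign convention and the threshold values $e_1$, $e_2$ are our assumptions, since the paper does not list the thresholds:

```python
def gain_scheduled_accel(tau_est, tau_ref=2.0):
    """Error based proportional controller with gain scheduling (sketch).
    K11..K13 apply to positive errors (tau below the reference), K21..K23
    otherwise, with the gain growing with the error magnitude."""
    e = tau_ref - tau_est          # positive when the robot closes too fast
    e1, e2 = 0.5, 1.0              # hypothetical scheduling thresholds (s)
    mag = abs(e)
    if e > 0:
        Kp = 6 if mag <= e1 else (8 if mag <= e2 else 10)   # K11, K12, K13
    else:
        Kp = 2 if mag <= e1 else (4 if mag <= e2 else 6)    # K21, K22, K23
    return -Kp * e                 # accelerate when tau_est > tau_ref, else brake
```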

3. Experiment results and discussion

In this section, we conduct experiments to validate the estimation and control algorithms on a mobile robot platform. First, we verify the computational efficiency and accuracy of the featureless direct method; based on this estimation method, the mobile robot docking experiment is conducted and the performance of the error based controller with gain scheduling is tested. Second, we validate the estimation algorithm with angular velocities using image sequences captured while the mobile robot has a specific angular velocity; based on the extended algorithm, we then realize the docking experiment using the error based proportional controller with gain scheduling when the robot has angular velocities. To improve the control performance, we also adopt the Kalman filter in this case.

For the general experimental setup, we developed the integrated system shown in figure 2. A mobile robot (A4WD1 from Robotshop) with four wheel drive serves as the main platform. A Raspberry Pi 2 is used as the central processing unit to interface with a camera and a gyroscope, estimate the time-to-contact, and compute the control command. An Arduino board (Arduino Pro 328 from Sparkfun) is employed to achieve closed-loop speed control of the robot. A forward-facing Raspberry Pi camera module is mounted on top of the mobile robot with a 3D printed fixture. To validate the estimation algorithm with angular velocity, a distance sensor (OPT2011 from Automationdirect) is used to obtain the ground truth of time-to-contact, with an ADC chip (MCP3008 from Sparkfun) converting its analog output to a digital signal. A gyroscope (MPU6050 from Sparkfun) is employed to feed back the angular velocity. Several papers with checkerboard patterns are randomly placed on the wall that the robot moves towards.

Figure 2. The robotic system used for onboard experiments.

3.1. Experimental results without angular velocity

In this section, the estimation and control experiments for the featureless direct method are conducted. First, we verify the computational speed and accuracy of the method; then we carry out the docking experiment without angular velocities.

3.1.1. Estimation experiment

In this experiment, we use image sequences to compare the computational speed and accuracy of the featureless direct method with a feature based method. The images are taken while the robot moves perpendicular to the wall at a constant speed. The initial distance is 3.81 m and the constant velocity of the robot is 0.57 m s−1. After taking 150 successive images, the robot stops. For the feature based comparison, we evaluate the optic flow based method, whose Matlab source code is available [13]. Both estimations are conducted in Matlab 2015a on a desktop (Intel Core i7-4790, 3.6 GHz CPU, 4 GB RAM, 64 bit). With a resolution of $192\times 72$ pixels for each image, the computation time per image pair is listed in table 1 for the featureless direct method and the optic flow based method. Table 1 also lists the average absolute error, obtained as the mean difference between the ground truth and the estimated values. One set of estimation results for the featureless direct method and the optic flow based method is illustrated in figure 3.

Figure 3. The estimation results for the featureless direct method and the optic flow based method.

Table 1. Comparison of different methods.

Method | Time per image pair (s) | Average absolute error (s)
Featureless direct method | 0.037 | 0.4821
Optic flow (L-K method) based method | 0.14 | 1.3629

From the results, it is clear that the featureless direct method is more computationally efficient than the optic flow based method. Since the optic flow based method generates a different estimate on each run, depending on the feature extraction and tracking, we ran it 5 times; the result shown in figure 3 is from the second run. The 5 runs give average absolute errors of 0.73, 1.12, 1.64, 1.70, and 1.61 s, respectively.

It is notable that the estimation error of the featureless direct method in equation (5) also depends on the slope of the wall and on the true time-to-contact: a larger surface slope or a larger true time-to-contact generates a larger estimation error.

3.1.2. Control experiment

In this experiment, the estimation algorithm for case II in section 2.1.2 and the control law in equation (20) are combined. The reference time-to-contact is set to 2 s, and the gain scheduling parameters are $K_{11}=6$, $K_{12}=8$, $K_{13}=10$, $K_{21}=2$, $K_{22}=4$, $K_{23}=6$. The robot moves towards a fronto-parallel wall with an initial speed $T_{z0}$ of 0.38 m s−1, a maximum speed $T_{z\max}$ of 1.14 m s−1, and an initial distance of 4.6 m. As shown in figure 4, in each control loop the Raspberry Pi grabs a color image with a resolution of $192\times 144$, converts it to grayscale, runs the estimation and control algorithms, and sends the speed command to the Arduino board. Sample images during the motion are shown in figure 5. Since part of each image includes ground information, we only use the upper half of the image ($192\times 72$ pixels) for estimation and control. With this resolution, the Raspberry Pi achieves a control frequency of 20 Hz. Note that in this experiment, the gyroscope, distance sensor, and Kalman filter are not used.
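For concreteness, the loop just described might be organized as in the sketch below. This is a sketch only: grab_grayscale_frame() and send_speed() are hypothetical stand-ins for the Pi camera capture and the serial link to the Arduino, and the earlier estimation and controller sketches are reused.

```python
import time
import numpy as np

def docking_loop(tau_ref=2.0, dt=0.05):
    """One possible structure for the 20 Hz control loop of figure 4."""
    speed = 0.38                                    # initial speed Tz0 (m/s)
    prev = grab_grayscale_frame()[:72]              # upper half: 192 x 72 pixels
    h, w = prev.shape
    x, y = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
    while speed > 0.0:
        curr = grab_grayscale_frame()[:72]
        Ex, Ey, Et = brightness_gradients(prev, curr)   # sketch from section 2.1
        tau = ttc_case2(Ex, Ey, Et, x, y) * dt          # frame periods -> seconds
        a = gain_scheduled_accel(tau, tau_ref)          # controller sketch above
        speed = min(max(speed + a * dt, 0.0), 1.14)     # clamp to [0, Tz_max]
        send_speed(speed)
        prev = curr
        time.sleep(dt)                                  # hold roughly 20 Hz
```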

Figure 4. Schematic of the closed-loop control using the featureless estimation method and the proportional controller.

Figure 5. Sample images from the docking experiment, taken from the first run with the error based controller. The first image is dimmer than the others because of different light conditions from the window. (a) Image 1. (b) Image 25. (c) Image 50. (d) Image 73.

To assess the control performance of the error based proportional controller with gain scheduling, five experiments are carried out with each controller. Figure 6 shows the estimated time-to-contact and robot speed with respect to time without the gain scheduling strategy, and figure 7 shows the results of five docking experiments with gain scheduling. From figure 7(a), we can see that because of the low speed at the beginning, the estimated time-to-contact is much larger than the reference value, so the robot accelerates to decrease the time-to-contact. After that, the estimated time-to-contact becomes smaller than the reference value and the robot decelerates. Although some vibrations exist, the time-to-contact stays approximately at the reference value; these vibrations are mainly due to the low quality of the camera. Despite the vibrations in the estimation, the robot decreases its speed almost continuously, as shown in figure 7(b), realizing a safe docking. However, some vibrations appear as the robot is about to stop; these are caused by the algorithm when the robot is close to an object [24, 33].

Figure 6. Experiment results for the standard proportional controller. (a) The controlled time-to-contact of the 5 experiments with the standard proportional controller; the horizontal blue line is $\tau_{\text{ref}}=2$ s. (b) The robot velocity during the experiments. The five colored curves represent the five experiments in both panels. For the five experiments, the robot stopped after taking 68, 64, 68, 80, and 75 images, respectively.

Figure 7. Experiment results for the error based proportional controller with gain scheduling. (a) The controlled time-to-contact of the 5 experiments with the error based proportional controller; the horizontal blue line is $\tau_{\text{ref}}=2$ s. (b) The robot velocity during the experiments. The five colored curves represent the five experiments in both panels. For the five experiments, the robot stopped after taking 73, 66, 67, 80, and 72 images, respectively.

From the results, we can see that the error based proportional controller delivers better performance: its average steady state error of time-to-contact is 0.36 s, while the standard proportional controller gives an average steady state error of 0.45 s. In particular, when the robot is near the wall, there are fewer vibrations and their amplitudes are smaller than with the standard proportional controller.

3.2. Experimental results with angular velocities

In this section, we first validate the featureless estimation algorithm with angular velocity, and then use the estimation algorithm and the error based gain scheduling controller to perform the docking experiment. To test the effect of the Kalman filter on the time-to-contact estimate, we also perform the docking control experiments with and without the Kalman filter when there is angular velocity.

3.2.1. Estimation experiment

To validate the time-to-contact estimation algorithm with angular velocities, we first need the ground truth of time-to-contact. We measure the distance with the distance sensor, and the angular velocity is measured by the gyroscope and sent to the Raspberry Pi 2 through I2C communication. An angular velocity of about $-45^{\circ}\,\text{s}^{-1}$ is applied by driving the wheels on the two sides at different speeds; in this case, only the angular velocity around the Y axis exists. While the robot is moving, the images are acquired continuously right after the distance and angular velocity are sampled.

The estimated time-to-contact is computed with equation (14), and the ground truth is calculated as $Z/T_z$, where Z is the distance measured by the distance sensor. To compare the performance of the extended algorithm with the algorithm that does not consider angular velocities, we also plot the estimation results of equation (5). Figure 8 shows sample images from the experiment, and figure 9 shows the estimation results with the Kalman filter.

Figure 8. Sample images for time-to-contact estimation with angular velocity. The four images are from the experiment with a mean angular velocity of $-45^{\circ}\,\text{s}^{-1}$ around the Y axis. (a) Image 1. (b) Image 30. (c) Image 60. (d) Image 90.

Figure 9. Time-to-contact estimation with angular velocity.

From figure 9, we can see that the trend of the time-to-contact estimated by the extended algorithm follows the ground truth. The extended algorithm gives an average absolute error of 0.43 s, while the original algorithm gives 0.87 s, which verifies the necessity and accuracy of the extended algorithm. Note that the smaller the angular velocity, the smaller the difference between the two algorithms. Nevertheless, some errors remain, for which the following reasons may account: (1) small unevenness of the ground may lead to rotations about the X and Z axes, which the algorithm neglects; (2) there is noise in the gyroscope's angular velocity measurements; (3) the quality of the camera is limited, as it exhibits light intensity fluctuations; (4) the slope of the wall also influences the estimation result, since the algorithm does not consider the slope.

3.2.2. Control experiment

In this section, we apply the estimation algorithm with angular velocities to control the docking process of the mobile robot using the error based proportional controller with gain scheduling. Two sets of experiments are carried out: one with the Kalman filter to smooth the estimated values and one without. In both, the reference value of τ is set to 2 s. The robot moves towards a fronto-parallel wall with an initial speed $T_{z0}$ of 0.38 m s−1 and a maximum speed $T_{z\max}$ of 0.95 m s−1; the initial distance from the wall to the robot is 4.2 m.

From figures 10 and 11 we can see that, at the beginning, because of the low speed, the estimated time-to-contact is much larger than the reference value, so the error based gain scheduling controller accelerates the robot to decrease the time-to-contact. After several control loops, the estimated time-to-contact is maintained at the reference value. Near the end, the estimated time-to-contact falls below the reference value because of the high speed and the short distance to the wall, and the controller decelerates the robot to increase the time-to-contact. Correspondingly, the speed first increases and then decreases. Comparing the two figures, it is clear that the Kalman filter plays an important role in smoothing the estimation results and reducing the speed vibrations.

Figure 10. Experiment results without the Kalman filter. (a) The controlled time-to-contact of the 5 experiments without the Kalman filter; the horizontal blue line is $\tau_{\text{ref}}=2$ s. (b) The robot velocity during the experiments. The five colored curves represent the five experiments in both panels. For the five experiments, the robot stopped after taking 133, 158, 119, 157, and 149 images, respectively.

Figure 11. Experiment results with the Kalman filter. (a) The controlled time-to-contact of the 5 experiments with the Kalman filter; the horizontal blue line is $\tau_{\text{ref}}=2$ s. (b) The robot velocity during the experiments. The five colored curves represent the five experiments in both panels. For the five experiments, the robot stopped after taking 124, 122, 121, 142, and 129 images, respectively.

4. Conclusion

Time-to-contact is a biologically inspired concept that can be applied to control the motion of a robot to fulfill tasks such as landing, perching, or docking. In this paper, a featureless method is employed to estimate time-to-contact from image sequences. Such a method does not need to extract and track features, resulting in more efficient computation compared with feature based methods. An error based controller with gain scheduling is implemented together with the featureless estimation algorithm on a mobile robot platform, and the mobile robot's speed is successfully controlled to maintain a reference time-to-contact. In addition, we extended the featureless estimation algorithm to incorporate angular velocities, and carried out validation and control experiments. With the Kalman filter, the estimation algorithm and control strategy achieve better performance. Future efforts will focus on landing control and navigation for aerial robots. The results presented in this paper can be readily applied to miniature robots that have only a vision sensor for navigation and control.

Acknowledgments

This work is partially supported by the National Science Foundation under Grant CNS-1320561.
