Fast planar surface 3D SLAM using LIDAR

https://doi.org/10.1016/j.robot.2017.03.013

Highlights

  • Efficient processing of 3D point clouds achieved by projecting them onto image planes.

  • Fast segmentation of the projected point clouds into planar segments.

  • Exploiting the sparsity of the SLAM information matrix without approximation error.

  • Merging planar segments that lie on the same plane into a single segment.

Abstract

In this paper we propose a fast 3D pose-based SLAM system that estimates a vehicle's trajectory by registering sets of planar surface segments extracted from 360° field of view (FOV) point clouds provided by a 3D LIDAR. The full FOV and the planar representation of the map give the proposed SLAM system the capability to map large-scale environments while maintaining fast execution times. For efficient point cloud processing we apply image-based techniques, projecting each point cloud onto three two-dimensional image planes. The SLAM backend is based on the Exactly Sparse Delayed State Filter, a non-iterative way of updating the pose graph that exploits the sparsity of the SLAM information matrix. Finally, our SLAM system enables reconstruction of the global map by merging the local planar surface segments in a highly efficient way. The proposed point cloud segmentation and registration method was tested and compared with several state-of-the-art methods on two publicly available datasets. The complete SLAM system was also tested in one indoor and one outdoor experiment. The indoor experiment was conducted using a Husky A200 research mobile robot to map our university building, while the outdoor experiment was performed on the publicly available dataset provided by the Ford Motor Company, in which a car equipped with a 3D LIDAR was driven through downtown Dearborn, Michigan.

Introduction

In the last two decades, sensors capable of providing real-time 3D point clouds of the environment have opened many possibilities in the field of mobile robotics and environment sensing in general. Using real-time 3D information, autonomous robots can safely navigate unknown complex environments and perform a wider variety of tasks. One of the best examples comes from the automotive industry, which today invests heavily in research on fully autonomous driving that does not require the driver to intervene and take control of the vehicle. Self-driving cars could revolutionize the transportation system as we know it and bring many advantages, such as increased efficiency and a lower accident rate. However, autonomous driving opens many new challenges for robotic systems, and full autonomy is still far away. Among these challenges are the design of perception systems that operate reliably in changing environmental conditions and navigation in unknown, GPS-restricted complex environments. There are different sensors for environment perception (e.g. stereo cameras, depth cameras, radars, LIDARs), but LIDARs are by far the most used. Therefore, the development of a real-time Simultaneous Localization And Mapping (SLAM) solution based on 3D LIDAR measurements that can work in large-scale environments is of great importance.

A 3D LIDAR provides 360° field of view (FOV) point clouds representing the surrounding environment. The main question is how to process those point clouds. Many approaches [1], [2], [3] follow the philosophy that the environment is its own best representation and operate directly on raw 3D point clouds. In almost all of these works, matching 3D point clouds of the environment is performed with algorithms derived from the iterative closest point (ICP) algorithm [4]. Although ICP has the advantage of implicitly solving the data association problem, it suffers from premature convergence to local minima, especially when the overlap between scene samples decreases [5]. Furthermore, the main problem of ICP comes from the sheer size of the raw data provided by 3D sensors, which even modern computing systems cannot process in real time.
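For context, a minimal point-to-point ICP iteration of the kind underlying [4] can be sketched as follows. This is a generic illustration, not the paper's method: correspondences come from a brute-force nearest-neighbour search and the rigid transform is recovered in closed form via SVD, which is exactly the per-iteration cost that becomes prohibitive on raw full-FOV LIDAR clouds.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: pair each source point with its
    nearest destination point, then solve for the rigid transform (R, t)
    minimizing the summed squared distances (closed form via SVD)."""
    # brute-force nearest-neighbour data association, O(N*M)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # closed-form rigid alignment of the matched pairs
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Iterate; ICP converges only locally, so a poor initial alignment or
    low overlap leads to the premature convergence noted above."""
    cur = src.copy()
    for _ in range(iters):
        R, t = icp_step(cur, dst)
        cur = cur @ R.T + t
    return cur
```

With a small initial misalignment the iteration recovers the transform exactly; with low overlap or a large offset it stalls in a local minimum, which motivates the feature-based registration used in this paper.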

Since raw 3D point cloud processing is computationally intensive, the question arises of what environment representation would allow real-time processing while preserving precision up to a certain level. The answer to this question is especially important for SLAM systems, since they must simultaneously build the map and perform localization using sensor information about the environment. The ability to work in real time improves SLAM localization performance and enables the system to build a meaningful map. One solution is to segment the sensor data into higher-level features. Some precision is always lost when segmenting raw data into higher-level features, but the SLAM system can be designed so that this does not significantly impact its overall performance. Maps consisting of higher-level features such as polygons can be almost as good for navigation tasks as more precise maps built from raw data, while also enabling real-time execution, which matters more in those tasks.

The main contribution of this paper is a new 3D SLAM system capable of working in real time and in large-scale environments. We have designed every component of our SLAM system with computational and memory efficiency in mind, so that it is suitable for navigation in large-scale complex environments. We have chosen planar surface segments to represent the environment because they are prevalent in indoor and outdoor urban spaces. Three important novelties can be pointed out. First, we have adapted the point cloud segmentation algorithm from [6], originally developed for RGB-D sensors, to operate on full field of view 3D LIDAR point clouds. We did this by dividing the full FOV 3D point clouds and projecting them onto three image planes, which allows fast 2.5D point cloud segmentation based on recursive 2D Delaunay triangulation and region merging. Second, we have modified the algorithm presented in [7], which calculates the relative pose between two local maps by registering the planar surface segments contained within them, to include an initial guess of the relative pose obtained from the SLAM trajectory. We have also adapted its planar segment model and registration algorithm to take into account both the uncertainty model and the 360° FOV of the 3D LIDAR. By doing so, we were able to significantly reduce the number of outliers in pose constraint calculation and increase processing speed at the same time. Third, we have developed a global planar map building algorithm that reduces the number of planar surface segments required to represent the map. The number of segments is reduced by taking into account that several planar surface segments extracted at different times can belong to a single planar segment, and that a re-observed planar surface segment does not form a new planar surface segment in the global map.
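The first step of this pipeline, dividing a 360° cloud into three sectors and projecting each sector onto a virtual image plane, can be sketched as below. The three-way split follows the paper; the virtual image resolution and focal length are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def project_to_sectors(points, width=240, height=120, f=69.0):
    """Split a 360-deg point cloud (N x 3, sensor frame) into three 120-deg
    yaw sectors and project each onto a virtual pinhole image plane,
    keeping the closest range per pixel (a simple z-buffer)."""
    yaw = np.arctan2(points[:, 1], points[:, 0])            # in [-pi, pi]
    sector = ((yaw + np.pi) // (2 * np.pi / 3)).astype(int).clip(0, 2)
    images = np.full((3, height, width), np.inf)
    for s in range(3):
        pts = points[sector == s]
        # rotate the sector so its bisector looks down the +x axis
        ang = -np.pi + (s + 0.5) * (2 * np.pi / 3)
        c, si = np.cos(-ang), np.sin(-ang)
        x = c * pts[:, 0] - si * pts[:, 1]
        y = si * pts[:, 0] + c * pts[:, 1]
        z = pts[:, 2]
        valid = x > 0.1                                     # in front of the plane
        u = (f * y[valid] / x[valid] + width / 2).astype(int)
        v = (f * z[valid] / x[valid] + height / 2).astype(int)
        r = np.sqrt(x[valid] ** 2 + y[valid] ** 2 + z[valid] ** 2)
        ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
        # unbuffered in-place minimum: closest return wins per pixel
        np.minimum.at(images[s], (v[ok], u[ok]), r[ok])
    return images
```

Each resulting range image is a regular 2.5D grid, which is what makes the subsequent image-based triangulation and region merging fast compared to working on the unordered cloud.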

The rest of the paper is organized as follows. Related work and the overall concept of the proposed SLAM system are presented in Section 2. Section 3 describes our point cloud segmentation algorithm. Section 4 explains the five components of our SLAM system. Experimental results are presented in Section 5, and the conclusion is given in Section 6.

Section snippets

Related work and overall system concept

In this section, first we present previous work related to point cloud segmentation and 3D planar SLAM, and then give the overall concept of the proposed SLAM system together with notations used throughout the paper.

Local map building

Local maps consist of planar surface segments extracted from a single point cloud. In this section, we first describe a method for detecting planar surface segments in 3D LIDAR point clouds and then give their mathematical description and uncertainty model. Local maps are used to create the global environment map and to calculate the relative poses used as pose constraints in the SLAM pose graph.
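A standard way to obtain the parameters of a detected planar segment is a least-squares fit through the covariance of its points. The sketch below shows this generic construction (unit normal n and offset d with n·x = d), not the paper's specific uncertainty model; the smallest eigenvalue serves as a flatness score.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: the normal is the eigenvector of the point
    covariance with the smallest eigenvalue; d = n . centroid. Returns
    (n, d, flatness), where flatness is near zero for planar segments."""
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    w, v = np.linalg.eigh(cov)    # eigenvalues in ascending order
    n = v[:, 0]
    if n[2] < 0:                  # fix a sign convention (illustrative)
        n = -n
    return n, float(n @ c), float(w[0])
```

The flatness score gives a cheap acceptance test for candidate segments produced by the region-merging stage.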

Planar surface 3D SLAM

In this section, we describe in detail each of the five components introduced in Section 2.2 that constitute our 3D SLAM system: (i) local map registration, (ii) SLAM backend, (iii) loop closing detection, (iv) global map building and (v) update of planar surface segments.
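Component (i), registering local maps given an initial relative-pose guess from the SLAM trajectory, can be illustrated with a minimal plane-matching sketch. The (n, d) parametrization and the angle and distance thresholds below are illustrative assumptions, not the paper's tuned registration model.

```python
import numpy as np

def match_planes(planes_a, planes_b, R, t, max_angle_deg=5.0, max_dist=0.1):
    """Associate planar segments of two local maps given an initial
    relative-pose guess (R, t). A plane {x : n.x = d} with unit normal n
    transforms under x' = R x + t as n' = R n, d' = d + n'.t. Pairs whose
    normals agree within max_angle_deg and offsets within max_dist are
    accepted as candidate correspondences."""
    matches = []
    cos_min = np.cos(np.deg2rad(max_angle_deg))
    for i, (n, d) in enumerate(planes_a):
        n2 = R @ np.asarray(n)
        d2 = d + n2 @ t
        for j, (m, e) in enumerate(planes_b):
            if n2 @ np.asarray(m) >= cos_min and abs(d2 - e) <= max_dist:
                matches.append((i, j))
    return matches
```

Pruning candidates with the initial guess in this way is what cuts outlier correspondences before the relative pose is refined.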

Experimental results

We have divided the testing of our SLAM solution into two stages. We first present test results for our point cloud segmentation and registration algorithm, and then provide test results for the complete SLAM system.

Conclusion

In this paper, we have presented a fast planar surface 3D SLAM solution that is designed to work on full field of view 3D point clouds obtained from 3D LIDAR measurements. There are four key improvements that allow our SLAM algorithm to work fast in large-scale environments. First is efficient processing of 3D point clouds achieved by projecting them onto 2D image planes and then performing segmentation of projected point clouds into planar surface segments. Second is the use of the information
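The sparsity exploited by the delayed-state information filter can be illustrated with a toy pose graph: each relative-pose constraint touches only four blocks of the information matrix, so odometry yields a block tri-diagonal matrix and each loop closure adds a single off-diagonal block. Jacobians are simplified to plus/minus identity here; this is a sketch of the structure, not the ESDSF update itself.

```python
import numpy as np

def build_information_matrix(n_poses, constraints, dim=3):
    """Assemble a pose-graph information matrix. Each relative constraint
    (i, j, Omega) contributes only to the (i,i), (j,j), (i,j) and (j,i)
    blocks, so fill-in never spreads beyond the constrained pose pairs."""
    H = np.zeros((n_poses * dim, n_poses * dim))
    for i, j, Omega in constraints:
        bi = slice(i * dim, (i + 1) * dim)
        bj = slice(j * dim, (j + 1) * dim)
        H[bi, bi] += Omega
        H[bj, bj] += Omega
        H[bi, bj] -= Omega
        H[bj, bi] -= Omega
    return H
```

For a chain of five poses plus one loop closure, only 15 of the 25 pose blocks are nonzero, and the fraction of nonzero blocks shrinks further as the trajectory grows, which is what keeps the filter update cheap without approximation error.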

Acknowledgments

This work has been supported by the Unity Through Knowledge Fund (no. 25/15) under the project Cooperative Cloud based Simultaneous Localization and Mapping in Dynamic Environments. We would also like to thank professors Kaustubh Pathak, Max Folkert Pfingsthorn and Narunas Vaskevicius from Jacobs University for providing us with the source code of their method and helping us with running it on our machine.


References (32)

  • Xiao J. et al., Three-dimensional point cloud plane segmentation in both structured and unstructured environments, Robot. Auton. Syst. (2013)
  • Xiao J. et al., Planar segment based three dimensional point cloud registration in outdoor environments, J. Field Robot. (2013)
  • J. Weingarten, R. Siegwart, 3D SLAM using planar segments, in: IEEE International Conference on Intelligent Robots and...
  • Kohlhepp P. et al., Sequential 3D-SLAM for mobile action planning
  • R.A. Newcombe, D. Molyneaux, D. Kim, A.J. Davison, J. Shotton, S. Hodges, A. Fitzgibbon, KinectFusion: Real-time dense...
  • R.F. Salas-Moreno, R.A. Newcombe, H. Strasdat, P.H.J. Kelly, A.J. Davison, SLAM++: Simultaneous localisation and...

    Kruno Lenac, mag. ing. was born on May 19th, 1989. He received his B.Sc. degree in 2011 and M.Sc. degree in 2013, both in Electrical Engineering, from the Faculty of Electrical Engineering and Computing (FER Zagreb), University of Zagreb, Croatia. In October 2013 he was employed as a researcher on the ACROSS project at FER, and at the same time became a Ph.D. student at FER under the mentorship of prof. dr. sc. Ivan Petrović. He is currently employed as a researcher on the FER-KIET project. His main area of interest is mobile robotics, with a focus on autonomous localization, navigation and map building.

    Dr. sc. Andrej Kitanov received his Ph.D. degree in Electrical Engineering in 2010 from the Faculty of Electrical Engineering and Computing (FER Zagreb), University of Zagreb, Croatia. He is currently a post-doctoral research fellow at the Department of Control and Computer Engineering, FER Zagreb. His main research interests are estimation and control of mobile robots in unstructured environments, especially vision-based simultaneous localization and mapping (SLAM).

    Prof. dr. sc. Robert Cupec graduated from the Faculty of Electrical Engineering, University of Zagreb in 1995, where he received the M.Sc. degree in 1999. Upon graduation he joined the Department of Control and Computer Engineering in Automation, University of Zagreb, where he was employed until 2000. From 2000 to 2004 he worked at the Institute of Automatic Control Engineering, Technische Universität München, where he received the Ph.D. degree in 2005. He is currently an Assistant Professor at the Faculty of Electrical Engineering, University of Osijek, Croatia. His main research interest is robot vision.

    Prof. dr. sc. Ivan Petrović received his B.Sc. degree in 1983, M.Sc. degree in 1989 and Ph.D. degree in 1998, all in Electrical Engineering, from the Faculty of Electrical Engineering and Computing (FER Zagreb), University of Zagreb, Croatia. He was employed as an R&D engineer at the Institute of Electrical Engineering of the Končar Corporation in Zagreb from 1985 to 1994. Since 1994 he has been with FER Zagreb, where he is currently a full professor. He teaches a number of undergraduate and graduate courses in the field of control systems and mobile robotics. His research interests include advanced control strategies and their application to the control of complex systems and mobile robot navigation. He has published more than 40 journal and 160 conference papers, and the results of his research have been implemented in several industrial products. He is a member of IEEE, the IFAC Technical Committee on Robotics and the FIRA Executive Committee, and a member of the Croatian Academy of Engineering.
