2011 | OriginalPaper | Chapter
Learning Landmark Selection Policies for Mapping Unknown Environments
Authors: Hauke Strasdat, Cyrill Stachniss, Wolfram Burgard
Published in: Robotics Research
Publisher: Springer Berlin Heidelberg
In general, a mobile robot that operates in unknown environments has to maintain a map and has to determine its own location given the map. This introduces significant computational and memory constraints for most autonomous systems, especially for lightweight robots such as humanoids or flying vehicles. In this paper, we present a universal approach for learning a landmark selection policy that allows a robot to discard landmarks that are not valuable for its current navigation task. This enables the robot to reduce the computational burden and to carry out its task more efficiently by maintaining only the important landmarks. Our approach applies an unscented Kalman filter to address the simultaneous localization and mapping problem and uses Monte-Carlo reinforcement learning to obtain the selection policy. In addition, we present a technique to compress learned policies without introducing a performance loss. In this way, our approach becomes applicable on systems with constrained memory resources. Based on real-world and simulation experiments, we show that the learned policies allow for efficient robot navigation and outperform handcrafted strategies. We furthermore demonstrate that the learned policies are not only usable in a specific scenario but can also be generalized towards environments with varying properties.
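To give a flavor of the Monte-Carlo reinforcement learning component mentioned in the abstract, the sketch below learns a keep/discard decision for landmarks by averaging sampled episode rewards per state-action pair. It is a minimal illustration, not the authors' implementation: the discretized state (an observation-count bucket), the reward model, and all numeric constants are invented assumptions for this example.

```python
import random
from collections import defaultdict

# Toy Monte-Carlo control for a landmark keep/discard policy.
# States: a discretized landmark feature (observation-count bucket 0-2)
#         -- an assumed, simplified state space.
# Actions: 0 = discard the landmark, 1 = keep it.
# Reward model (purely illustrative): keeping a frequently observed
# landmark aids localization but costs memory; discarding a useful
# landmark incurs a localization penalty.

ACTIONS = (0, 1)  # discard, keep

def simulate_episode(policy, rng):
    """Roll out one episode over a handful of landmark decisions."""
    trajectory = []
    for _ in range(5):
        state = rng.randint(0, 2)            # observation-count bucket
        action = policy(state, rng)
        usefulness = state / 2.0             # more observations -> more useful
        reward = (usefulness - 0.2) if action == 1 else -usefulness * 0.5
        trajectory.append((state, action, reward))
    return trajectory

def mc_control(episodes=5000, epsilon=0.1, seed=0):
    """Every-visit Monte-Carlo control with an epsilon-greedy policy."""
    rng = random.Random(seed)
    q = defaultdict(float)                   # action-value estimates
    counts = defaultdict(int)

    def policy(state, rng_):
        if rng_.random() < epsilon:          # occasional exploration
            return rng_.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q[(state, a)])

    for _ in range(episodes):
        for s, a, r in simulate_episode(policy, rng):
            counts[(s, a)] += 1              # incremental-mean MC update
            q[(s, a)] += (r - q[(s, a)]) / counts[(s, a)]
    return q

q = mc_control()
# Extract the greedy policy per state bucket.
learned = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(3)}
print(learned)  # -> {0: 0, 1: 1, 2: 1}: discard rarely seen landmarks, keep the rest
```

Under this toy reward model, the learned policy discards landmarks that are rarely observed and keeps well-observed ones, mirroring the qualitative behavior the abstract describes; the paper's actual policies are learned against the full UKF-SLAM navigation task rather than a hand-built reward table.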