
About This Book

This book contains the proceedings of the 10th FSR (Field and Service Robotics), the leading single-track conference on applications of robotics in challenging environments. The 10th FSR was held in Toronto, Canada, from 23-26 June 2015. The book contains 42 full-length, peer-reviewed papers organized into a variety of topics: Aquatic, Vision, Planetary, Aerial, Underground, and Systems.

The goal of the book and the conference is to report and encourage the development and experimental evaluation of field and service robots, and to generate a vibrant exchange and discussion in the community. Field robots are non-factory robots, typically mobile, that operate in complex and dynamic environments: on the ground (Earth or other planets), under the ground, underwater, in the air or in space. Service robots are those that work closely with humans to help them with their lives. The first FSR was held in Canberra, Australia, in 1997. Since that first meeting, FSR has been held roughly every two years, cycling through Asia, the Americas, and Europe.





A Spatially and Temporally Scalable Approach for Long-Term Lakeshore Monitoring

This paper provides an image processing framework to assist in the inspection and, more generally, the data association of a natural environment, which we demonstrate in a long-term lakeshore monitoring task with an autonomous surface vessel. Our domain consists of 55 surveys of a 1 km lakeshore collected over a year and a half. Our previous work introduced a framework in which images of the same scene from different surveys are aligned using visual SLAM and SIFT Flow. This paper: (1) minimizes the number of expensive image alignments between two surveys using a covering set of poses, rather than all the poses in a sequence; (2) improves alignment quality using a local search around each pose and an alignment bias derived from the 3D information from visual SLAM; and (3) provides exhaustive results of image alignment quality. Our improved framework finds significantly more precise alignments despite performing image registration over an order of magnitude fewer times. We show changes a human spotted between surveys that would have otherwise gone unnoticed. We also show cases where our approach was robust to ‘extreme’ variation in appearance.

Shane Griffith, Cédric Pradalier

Autonomous Greenhouse Gas Sampling Using Multiple Robotic Boats

Accurately quantifying total greenhouse gas emissions (e.g. methane) from natural systems such as lakes, reservoirs and wetlands requires spatio-temporal measurement of both diffusive and ebullitive (bubbling) emissions. Traditional manual measurement techniques provide only a limited, localised assessment of methane flux, often introducing significant errors when extrapolated to the whole of the system. In this paper, we directly address these sampling limitations and present a novel multiple robotic boat system configured to measure the spatio-temporal release of methane to atmosphere across inland waterways. The system, consisting of multiple networked Autonomous Surface Vehicles (ASVs) and capable of persistent operation, enables scientists to remotely evaluate the performance of sampling and modelling algorithms for real-world process quantification over extended periods of time. This paper provides an overview of the multi-robot sampling system, including the vehicle and gas sampling unit design. Experimental results demonstrate the system's ability to autonomously navigate and implement an exploratory sampling algorithm to measure methane emissions on two inland reservoirs.

Matthew Dunbabin

Experimental Analysis of Receding Horizon Planning Algorithms for Marine Monitoring

Autonomous surface vehicles (ASVs) are becoming more widely used in environmental monitoring applications. Due to these vehicles' limited mission duration, algorithms need to be developed to save energy and maximize monitoring efficiency. This paper compares receding horizon path planning models for their effectiveness at collecting usable data in an aquatic environment. An adaptive receding horizon approach is used to plan ASV paths to collect data. A problem that often troubles conventional receding horizon algorithms is that the path planner becomes trapped at local optima. Our proposed Jumping Horizon (J-Horizon) algorithm improves on the conventional receding horizon algorithm by jumping out of local optima. We demonstrate that the J-Horizon algorithm collects data more efficiently than commonly used lawnmower patterns, and we provide a proof-of-concept field implementation on an ASV with a temperature monitoring task in a lake.

Soo-Hyun Yoo, Andrew Stuntz, Yawei Zhang, Robert Rothschild, Geoffrey A. Hollinger, Ryan N. Smith
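The jump-out-of-local-optima idea lends itself to a compact sketch. The grid world, reward values, and horizon sizes below are illustrative assumptions, not the authors' implementation: a greedy receding-horizon sampler that, when no cell within its short horizon offers positive gain, jumps to the best unvisited cell within a larger radius.

```python
def plan_path(reward, start, steps=20, horizon=2, jump_radius=5, eps=1e-9):
    """Greedy receding-horizon planner with a 'jump' move to escape local
    optima (a sketch in the spirit of J-Horizon, not the authors' code).
    `reward` maps (x, y) grid cells to the expected value of sampling there."""
    pos, path = start, [start]
    visited = {start}
    for _ in range(steps):
        # candidate cells within the (short) planning horizon
        local = [c for c in reward
                 if c not in visited
                 and max(abs(c[0] - pos[0]), abs(c[1] - pos[1])) <= horizon]
        if local and max(reward[c] for c in local) > eps:
            pos = max(local, key=reward.get)
        else:
            # local optimum: jump to the best unvisited cell farther out
            far = [c for c in reward
                   if c not in visited
                   and max(abs(c[0] - pos[0]), abs(c[1] - pos[1])) <= jump_radius]
            if not far:
                break
            pos = max(far, key=reward.get)
        visited.add(pos)
        path.append(pos)
    return path
```

On a reward field with a small nearby hotspot and a larger distant one, this planner exploits the nearby peak, stalls, and then jumps toward the distant peak instead of stopping, which is the behaviour a conventional receding-horizon planner lacks.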

Return to Antikythera: Multi-session SLAM Based AUV Mapping of a First Century B.C. Wreck Site

This paper describes an expedition to map a first century B.C. ship wreck off the coast of the Greek island of Antikythera using an Autonomous Underwater Vehicle (AUV) equipped with a high-resolution stereo imaging system. The wreck, first discovered in 1900, has yielded a wealth of important historical artefacts from two previous interventions, including the renowned Antikythera mechanism. The deployments described in this paper aimed to map the current state of the wreck site prior to further excavation. Over the course of 10 days of operation, the AUV completed multiple dives over the main wreck site and other nearby targets of interest. This paper describes the motivation for returning to the wreck and producing a detailed map, gives an overview of the techniques used for multi-session Simultaneous Localisation and Mapping (SLAM) to stitch data from two dives into a single, composite map of the site and presents preliminary results of the mapping exercise.

Stefan B. Williams, Oscar Pizarro, Brendan Foley

An Overview of MIT-Olin’s Approach in the AUVSI RobotX Competition

The inaugural RobotX competition was held in Singapore in Oct. 2014. The purpose of the competition was to challenge teams to develop new strategies for tackling unique and important problems in marine robotics. The joint team from the Massachusetts Institute of Technology (MIT) and Olin College was chosen as one of 15 competing teams from five nations (USA, South Korea, Japan, Singapore and Australia). The team received the surface vehicle platform, the WAM-V (Fig. 1), in Nov. 2013 and spent a year building the propulsion, electronic, sensing, and algorithmic capabilities required to complete the five tasks: navigation, underwater pinger localization, docking, light sequence detection, and obstacle avoidance. Ultimately, the MIT/Olin team narrowly won first place in a competitive field. This paper summarizes our approach to the tasks, as well as some lessons learned in the process. As a result of the competition, we have developed a new suite of open-source tools for feature detection and tracking, real-time shape detection from imagery, bearing-only target localization, and obstacle avoidance.

Arthur Anderson, Erin Fischell, Thom Howe, Tom Miller, Arturo Parrales-Salinas, Nick Rypkema, David Barrett, Michael Benjamin, Alex Brennen, Michael DeFillipo, John J. Leonard, Liam Paull, Henrik Schmidt, Nick Wang, Alon Yaari

A Parameterized Geometric Magnetic Field Calibration Method for Vehicles with Moving Masses with Applications to Underwater Gliders

The accuracy of magnetic measurements performed by autonomous vehicles is often limited by the presence of moving ferrous masses. This work proposes a third-order parameterized ellipsoid calibration method for magnetic measurements in the sensor frame, in which the ellipsoidal calibration coefficients depend on the locations of the moving masses. The parameterized calibration method is evaluated through field trials with an autonomous underwater glider equipped with a low power precision fluxgate sensor. These field trials were performed in the East Arm of Bonne Bay, Newfoundland in December of 2013. During these trials, a series of calibration profiles with the mass shifting and ballast mechanisms at different locations were performed before and after the survey portion of the trials. The nominal ellipsoidal coefficients were extracted using the full set of measurements from a set of calibration profiles and used as the initial conditions for the third-order polynomials. These polynomials were then optimized using a gradient descent solver, resulting in an RMS error between the calibration measurements and the local total field of 28 and 17 nT for the first and second sets of calibration runs, respectively. When the parameterized coefficients are used to correct the magnetic measurements from the survey portion of the field trials, the RMS error between the survey measurements and the local total field is 124 and 69 nT using the first and second sets of coefficients, respectively.

Brian Claus, Ralf Bachmayer
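For intuition, a drastically simplified relative of ellipsoid calibration can be sketched as an axis-aligned hard-iron offset plus per-axis scaling. The paper's method additionally fits cross-axis terms and makes the coefficients third-order polynomial functions of the moving-mass positions, which this sketch does not attempt; the sample values below are invented.

```python
def axis_aligned_calibration(samples):
    """Fit a hard-iron offset and per-axis scale from raw 3-axis magnetometer
    samples (an axis-aligned special case of ellipsoid calibration).
    Returns a function mapping a raw reading to a corrected one."""
    offsets, scales = [], []
    for k in range(3):
        vals = [s[k] for s in samples]
        lo, hi = min(vals), max(vals)
        offsets.append((hi + lo) / 2.0)   # hard-iron offset: centre of the extremes
        scales.append((hi - lo) / 2.0)    # per-axis radius (assumed non-zero here)
    mean_r = sum(scales) / 3.0            # common radius the corrected field is scaled to

    def apply(v):
        # shift by the offset, then rescale each axis to the common radius
        return tuple((v[k] - offsets[k]) / scales[k] * mean_r for k in range(3))

    return apply
```

After calibration, readings taken at the axis extremes all map to vectors of equal magnitude, which is the defining property a full ellipsoid fit enforces for every orientation.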

Towards Autonomous Robotic Coral Reef Health Assessment

This paper addresses the automated analysis of coral in shallow reef environments up to 90 ft deep. During a series of robotic ocean deployments, we have collected a data set of coral and non-coral imagery from four distinct reef locations. The data has been annotated by an experienced biologist and presented as a representative challenge for visual understanding techniques. We describe baseline techniques using texture and color features combined with classifiers for two vision sub-tasks: live coral image classification and live coral semantic segmentation. The results of these methods demonstrate both the feasibility of the task as well as the remaining challenges that must be addressed through the development of more sophisticated techniques in the future.

Travis Manderson, Jimmy Li, David Cortés Poza, Natasha Dudek, David Meger, Gregory Dudek



BOR²G: Building Optimal Regularised Reconstructions with GPUs (in Cubes)

This paper is about dense regularised mapping using a single camera as it moves through large work spaces. Our technique is, as many are, a depth-map fusion approach. However, our desire to work both at large scales and outdoors precludes the use of RGB-D cameras. Instead, we need to work with the notoriously noisy depth maps produced from small sets of sequential camera images with known inter-frame poses. This, in turn, requires the application of a regulariser over the 3D surface induced by the fusion of multiple (of order 100) depth maps. We accomplish this by building and managing a cube of voxels. The combination of issues arising from noisy depth maps and from moving through our workspace/voxel cube, so that it envelops us rather than our orbiting around it as is common in desktop reconstructions, forces the algorithmic contribution of our work. Namely, we propose a method to execute the optimisation and regularisation in a 3D volume which has been only partially observed, thereby avoiding inappropriate interpolation and extrapolation. We demonstrate our technique indoors and outdoors and offer empirical analysis of the precision of the reconstructions.

Michael Tanner, Pedro Piniés, Lina Maria Paz, Paul Newman

Online Loop-Closure Detection via Dynamic Sparse Representation

Visual loop closure detection is an important problem in visual robot navigation. Successful solutions are based on image matching between the current view and the map images. To obtain a solution that scales to large environments involving thousands or millions of images, the efficiency of a loop closure detection algorithm is critical. Recent work has proposed applying l₁-minimization methods to visual loop closure detection, in which the problem is cast as one of obtaining a sparse representation of the current view in terms of map images. The proposed solution, however, is insufficient, with a time complexity worse than that of linear search. In this paper, we present a solution that overcomes this inefficiency by employing dynamic algorithms in l₁-minimization. Our solution exploits the sequential nature of the loop closure detection problem. As a result, our proposed algorithm is an order of magnitude more efficient than the existing l₁-minimization based solution. We evaluate our algorithm on publicly available visual SLAM datasets to establish its accuracy and efficiency.

Moein Shakeri, Hong Zhang
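The sparse-representation formulation can be illustrated with a basic (non-dynamic) l₁ solver. ISTA below is a textbook baseline, not the dynamic algorithm proposed in the paper, and the toy descriptors in the example are invented: the query view is expressed as a sparse combination of map-image descriptors (the columns of A), and the loop-closure candidate is the map image with the largest coefficient.

```python
def soft_threshold(v, t):
    # elementwise soft-thresholding: the proximal operator of the l1 norm
    return [max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

def ista(A, b, lam=0.1, iters=500):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding (ISTA). A is a list of rows; its columns are
    map-image descriptors, b is the current view's descriptor."""
    m, n = len(A), len(A[0])
    # conservative step size: 1/||A||_F^2 lower-bounds 1/||A^T A||_2
    step = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]  # Ax - b
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]          # A^T r
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x
```

With orthonormal descriptors the solver recovers the expected sparse code (each coefficient is the correlation with b, shrunk by lam), so the argmax coefficient identifies the matching map image.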

Large Scale Dense Visual Inertial SLAM

In this paper we present a novel large scale SLAM system that combines dense stereo vision with inertial tracking. The system divides space into a grid and efficiently allocates GPU memory only when there is surface information within a grid cell. A rolling grid approach allows the system to work for large scale outdoor SLAM. A dense visual-inertial tracking pipeline incrementally localizes the stereo cameras against the scene. The proposed system is tested with both a simulated dataset and several real-life datasets under different lighting (illumination changes), motion (slow and fast), and weather (snow, sunny) conditions. Compared to structured-light RGB-D systems, the proposed system works indoors and outdoors and over large scales beyond single rooms or desktop scenes. Crucially, the system is able to leverage inertial measurements for robust tracking when visual measurements do not suffice. Results demonstrate effective operation with simulated and real data, both indoors and outdoors, under varying lighting conditions.

Lu Ma, Juan M. Falquez, Steve McGuire, Gabe Sibley

Dense and Swift Mapping with Monocular Vision

The estimation of dense depth maps has become a fundamental module in the pipeline of many visual-based navigation and planning systems. The motivation of our work is to achieve fast and accurate in-situ infrastructure modelling from a monocular camera mounted on an autonomous car. Our technical contribution is in the application of a Lagrangian multipliers based formulation to minimise an energy that combines a non-convex data term with adaptive pixel-wise regularisation to yield the final local reconstruction. We advocate the use of constrained optimisation for this task; we shall show it is swift, accurate and simple to implement. Specifically, we propose an Augmented Lagrangian (AL) method that markedly reduces the number of iterations required for convergence: more than 50% reduction in all cases in comparison to the state-of-the-art approach. As a result, part of this significant saving is invested in improving the accuracy of the depth map. We introduce a novel per-pixel inverse depth uncertainty estimation that allows us to apply adaptive regularisation to the initial depth map: highly informative inverse depth pixels require less regularisation, while their influence can be propagated to more uncertain regions, providing significant improvement in textureless regions. To illustrate the benefits of our approach, we ran our experiments on three synthetic datasets with perfect ground truth for textureless scenes. An exhaustive analysis shows that AL can speed up convergence by up to 90%, achieving less than 4 cm of error. In addition, we demonstrate the application of the proposed approach on a challenging urban outdoor dataset exhibiting a very diverse and heterogeneous structure.

Pedro Piniés, Lina Maria Paz, Paul Newman

Wrong Today, Right Tomorrow: Experience-Based Classification for Robot Perception

This paper is about building robots that get better through use in their particular environment, improving their perceptual abilities. We approach this from a lifelong learning perspective: we want the robot's ability to detect objects in its specific operating environment to evolve and improve over time. Our idea, which we call Experience-Based Classification (EBC), builds on the well established practice of performing hard negative mining to train object detectors. Rather than cease mining for data once a detector is trained, EBC continuously seeks to learn from mistakes made while processing data observed during the robot's operation. This process is entirely self-supervised, facilitated by spatial heuristics and the fact that we have additional scene data at our disposal in mobile robotics. In the context of autonomous driving we demonstrate considerable object detector improvement over time using 40 km of data gathered from different driving routes at different times of year.

Jeffrey Hawke, Corina Gurău, Chi Hay Tong, Ingmar Posner

Beyond a Shadow of a Doubt: Place Recognition with Colour-Constant Images

Colour-constant images have been shown to improve visual navigation taking place over extended periods of time. These images use a colour space that aims to be invariant to lighting conditions—a quality that makes them very attractive for place recognition, which tries to identify temporally distant image matches. Place recognition after extended periods of time is especially useful for SLAM algorithms, since it bounds growing odometry errors. We present results from the FAB-MAP 2.0 place recognition algorithm, using colour-constant images for the first time, tested with a robot driving a 1 km loop 11 times over the course of several days. Computation can be improved by grouping short sequences of images and describing them with a single descriptor. Colour-constant images are shown to improve performance without a significant impact on computation, and the grouping strategy greatly speeds up computation while improving some performance measures. These two simple additions contribute robustness and speed, without modifying FAB-MAP 2.0.

Kirk MacTavish, Michael Paton, Timothy D. Barfoot
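Colour-constant images are typically computed as a weighted difference of log channel responses, which cancels a uniform scaling of all channels (i.e. a change in illumination intensity). The sketch below uses a placeholder weight; the actual weighting is calibrated per camera and is not given here.

```python
import math

ALPHA = 0.48  # camera-dependent weight; a placeholder, not the value used in the paper

def colour_constant_pixel(r, g, b, alpha=ALPHA, eps=1e-6):
    """One common form of a lighting-invariant 'colour-constant' response:
    a weighted difference of log channel values. Scaling (r, g, b) by a
    common factor k adds (1 - alpha - (1 - alpha)) * log(k) = 0, so the
    response is unchanged by uniform illumination changes."""
    return math.log(g + eps) - alpha * math.log(b + eps) - (1.0 - alpha) * math.log(r + eps)
```

The invariance is easy to check: doubling every channel leaves the response (numerically) unchanged, while a genuinely different colour produces a different response.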

Segmentation and Classification of 3D Urban Point Clouds: Comparison and Combination of Two Approaches

Segmentation and classification of 3D urban point clouds is a complex task, making it very difficult for any single method to overcome all the diverse challenges offered. This sometimes requires the combination of several techniques to obtain the desired results for different applications. This work presents and compares two different approaches for segmenting and classifying 3D urban point clouds. In the first approach, detection, segmentation and classification of urban objects from 3D point clouds, converted into elevation images, are performed by using mathematical morphology. First, the ground is segmented and objects are detected as discontinuities on the ground. Then, connected objects are segmented using a watershed approach. Finally, objects are classified using SVM (Support Vector Machine) with geometrical and contextual features. The second method employs a super-voxel based approach in which the 3D urban point cloud is first segmented into voxels and then converted into super-voxels. These are then clustered together using an efficient link-chain method to form objects. These segmented objects are then classified using local descriptors and geometrical features into basic object classes. Evaluated on a common dataset (real data), both these methods are thoroughly compared on three different levels: detection, segmentation and classification. After analyses, simple strategies are also presented to combine the two methods, exploiting their complementary strengths and weaknesses, to improve the overall segmentation and classification results.

A. K. Aijazi, A. Serna, B. Marcotegui, P. Checchin, L. Trassoudaine

A Stereo Vision Based Obstacle Detection System for Agricultural Applications

In this paper, an obstacle detection system for field applications is presented that relies on the output of a stereo vision camera. In a first step, it splits the point cloud into cells, which are analyzed in parallel. Here, features such as the density and distribution of the points and the normal of a fitted plane are taken into account. Finally, a neighborhood analysis clusters the obstacles and identifies additional ones based on the terrain slope. Furthermore, additional properties, such as a terrain traversability estimate or a dominant ground plane, can easily be derived from the grid structure. The experimental validation was done with a modified tractor in the field, with a test vehicle on the campus, and within the forest.

Patrick Fleischmann, Karsten Berns

CoPilot: Autonomous Doorway Detection and Traversal for Electric Powered Wheelchairs

In this paper we introduce CoPilot, an active driving aid that enables semi-autonomous, cooperative navigation of an electric powered wheelchair (EPW) for automated doorway detection and traversal. The system has been cleanly integrated into a commercially available EPW and demonstrated with both joystick and head-array interfaces. Leveraging the latest in 3D perception systems, we developed both feature- and histogram-based approaches to the doorway detection problem. When coupled with a sample-based planner, success rates for automated doorway traversal approaching 100% were achieved.

Tom Panzarella, Dylan Schwesinger, John Spletzer

Learning a Context-Dependent Switching Strategy for Robust Visual Odometry

Many applications for robotic systems require the systems to traverse diverse, unstructured environments. State estimation with Visual Odometry (VO) in these applications is challenging because no single algorithm performs well across all environments and situations. The unique trade-offs inherent to each algorithm mean different algorithms excel in different environments. We develop a method to increase robustness in state estimation by using an ensemble of VO algorithms. The method combines the estimates by dynamically switching to the best algorithm for the current context, according to a statistical model of VO estimate errors. The model is a Random Forest regressor that is trained to predict the accuracy of each algorithm as a function of different features extracted from the sensory input. We evaluate our method on a dataset consisting of four unique environments and eight runs, totaling over 25 min of data. Our method reduces the mean translational relative pose error by 3.5% and the angular error by 4.3% compared to the single best odometry algorithm. Compared to the poorest-performing odometry algorithm, our method reduces the mean translational error by 39.4% and the angular error by 20.1%.

Kristen Holtz, Daniel Maturana, Sebastian Scherer
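To show the switching mechanics without the learning machinery, the sketch below substitutes a per-context running-mean error table for the paper's Random Forest regressor; the contexts, algorithm names, and error values are invented for illustration.

```python
from collections import defaultdict

class VOSwitcher:
    """Context-dependent switching among VO algorithms: pick whichever
    algorithm has the lowest predicted error for the current context.
    Here 'prediction' is just a running mean of observed errors per
    (context, algorithm) pair, standing in for the trained regressor."""

    def __init__(self, algos):
        self.algos = list(algos)
        # context -> algorithm -> list of observed errors
        self.errors = defaultdict(lambda: defaultdict(list))

    def record(self, context, algo, error):
        self.errors[context][algo].append(error)

    def choose(self, context):
        def predicted(algo):
            e = self.errors[context][algo]
            # optimistic prior (0.0) for unseen pairs encourages trying them
            return sum(e) / len(e) if e else 0.0
        return min(self.algos, key=predicted)
```

A usage pattern: record each algorithm's relative pose error against ground truth during training runs, then at test time call `choose` with the current context label (or, in the paper's setting, features extracted from the sensory input).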



System Design of a Tethered Robotic Explorer (TReX) for 3D Mapping of Steep Terrain and Harsh Environments

The use of a tether in mobile robotics provides a method to safely explore steep terrain and harsh environments considered too dangerous for humans and beyond the capability of standard ground rovers. However, there are significant challenges yet to be addressed concerning mobility while under tension, autonomous tether management, and the methods by which an environment is assessed. As an incremental step towards solving these problems, this paper outlines the design and testing of a center-pivoting tether management payload enabling a four-wheeled rover to access and map steep terrain. The chosen design permits a tether to attach and rotate passively near the rover’s center-of-mass in the direction of applied tension. Prior design approaches in tethered climbing robotics are presented for comparison. Tests of our integrated payload and rover, Tethered Robotic Explorer (TReX), show full rotational freedom while under tension on steep terrain, and basic autonomy during flat-ground tether management. Extensions for steep-terrain tether management are also discussed. Lastly, a planar lidar fixed to a tether spool is used to demonstrate a 3D mapping capability during a tethered traverse. Using visual odometry to construct local point-cloud maps over short distances, a globally-aligned 3D map is reconstructed using a variant of the Iterative Closest Point (ICP) algorithm.

Patrick McGarey, François Pomerleau, Timothy D. Barfoot

Design, Control, and Experimentation of Internally-Actuated Rovers for the Exploration of Low-Gravity Planetary Bodies

In this paper we discuss the design, control, and experimentation of internally-actuated rovers for the exploration of low-gravity (micro-g to milli-g) planetary bodies, such as asteroids, comets, or small moons. The actuation of the rover relies on spinning three internal flywheels, which allows all subsystems to be packaged in one sealed enclosure and enables the platform to be minimalistic, thereby reducing its cost. By controlling the flywheels’ spin rates, the rover is capable of achieving large surface coverage by attitude-controlled hops, fine mobility by tumbling, and coarse instrument pointing by changing orientation relative to the ground. We discuss the dynamics of such rovers, their control, and key design features (e.g., flywheel design and orientation, geometry of external spikes, and system engineering aspects). The theoretical analysis is validated on a first-of-a-kind 6 degree-of-freedom (DoF) microgravity test bed, which consists of a 3 DoF gimbal attached to an actively controlled gantry crane.

B. Hockman, A. Frick, I. A. D. Nesnas, M. Pavone

Considering the Effects of Gravity When Developing and Field Testing Planetary Excavator Robots

One of the challenges of field testing planetary rovers on Earth is the difference in gravity between the test and the intended operating conditions. This not only changes the weight exerted by the robot on the surface but also affects the behaviour of the granular surface itself, and unfortunately no field test can fully address this shortcoming. This research introduces novel experimentation that for the first time subjects planetary excavator robots to gravity offload (a cable pulls up on the robot with 5/6 its weight, to simulate lunar gravity) while they dig. Excavating with gravity offload underestimates the detrimental effects of gravity on traction, but overestimates the detrimental effects on excavation resistance; though not ideal, this is a more balanced test than excavating in Earth gravity, which underestimates detrimental effects on both traction and resistance. Experiments demonstrate that continuous excavation (e.g. bucket-wheel) fares better than discrete excavation (e.g. front-loader) when subjected to gravity offload, and is better suited for planetary excavation. This key result is incorporated into the development of a novel planetary excavator prototype. Lessons learned from the prototype development also address ways to mitigate suspension lift-off for lightweight skid-steer robots, a problem encountered during mobility field testing.

Krzysztof Skonieczny, Thomas Carlone, W. L. “Red” Whittaker, David S. Wettergreen
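The gravity-offload arithmetic is simple to state: the cable must carry the difference between the robot's Earth weight and its weight under the target gravity, which for a lunar test is roughly 5/6 of Earth weight. A small sketch (the rover mass in the example is an assumption):

```python
G_EARTH = 9.81  # m/s^2
G_MOON = 1.62   # m/s^2, roughly 1/6 of Earth gravity

def offload_tension(mass_kg, g_target=G_MOON, g_ambient=G_EARTH):
    """Cable tension so the robot's effective weight matches target gravity:
    T = m * (g_ambient - g_target), i.e. ~5/6 of Earth weight for a lunar test."""
    return mass_kg * (g_ambient - g_target)

def effective_weight(mass_kg, g_target=G_MOON, g_ambient=G_EARTH):
    """Weight the robot actually exerts on the surface while offloaded."""
    return mass_kg * g_ambient - offload_tension(mass_kg, g_target, g_ambient)
```

Note that only the robot's weight is corrected; as the abstract points out, the soil beneath it still behaves as it does under full Earth gravity, which is why offloaded digging over- and under-estimates different effects.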

Update on the Qualification of the Hakuto Micro-rover for the Google Lunar X-Prize

Hakuto is developing a dual rover system for the Google Lunar XPRIZE (GLXP) and the exploration of a potential lava-tube skylight. We designed, built and tested two rovers and a lander interface in order to prove flight-readiness. The rover architecture was iterated over several prototype phases as an academic project, and then updated for flight-readiness using space-ready Commercial Off The Shelf (COTS) parts and a program for qualifying terrestrial COTS parts as well as the overall system. We have successfully tested a robust rover architecture including controllers with performance orders of magnitude higher than currently available space-ready controllers. The test regime included component level radiation testing to 15.3 kilo-rads, integrated thermal vacuum testing to simulate the environments during the cruise and surface mission phases, integrated vibration testing to 10 G_rms, and field testing. The overall development methodology of moving from a flexible architecture composed of inexpensive parts towards a single purpose architecture composed of qualified parts was successful, and all components passed testing, with only minor changes to the flight model rovers required ahead of a mid-2016 launch date.

John Walker, Nathan Britton, Kazuya Yoshida, Toshiro Shimizu, Louis-Jerome Burtz, Alperen Pala

Mobility Assessment of Wheeled Robots Operating on Soft Terrain

Optimizing vehicle mobility is an important goal in the design and operation of wheeled robots intended to perform on soft, unstructured terrain. In the case of vehicles operating on soft soil, mobility is not only a kinematic concept, but is related to the traction developed at the wheel-ground interface and cannot be separated from terramechanics. Poor mobility may result in the entrapment of the vehicle or limited manoeuvring capabilities. This paper discusses the effect of normal load distribution among the wheels of an exploration rover and proposes strategies to modify this distribution in a convenient way to enhance the vehicle's ability to generate traction. The reconfiguration of the suspension and the introduction of actuation on previously passive joints were the strategies explored in this research. The effect of these actions on vehicle mobility was assessed with numerical simulation and sets of experiments, conducted on a six-wheeled rover prototype. Results confirmed that modifying the normal load distribution is a suitable technique to improve the vehicle's behaviour in certain manoeuvres such as slope climbing.

Bahareh Ghotbi, Francisco González, József Kövecses, Jorge Angeles

Taming the North: Multi-camera Parallel Tracking and Mapping in Snow-Laden Environments

Robot deployment in open snow-covered environments poses challenges to existing vision-based localization and mapping methods. A limited field of view and over-exposure in regions where snow is present lead to difficulty identifying and tracking features in the environment. The wide variation in scene depth and the relative visual saliency of points on the horizon result in clustered features with poor depth estimates, as well as the failure of typical keyframe selection metrics to produce reliable bundle adjustment results. In this work, we propose the use of, and two extensions to, Multi-Camera Parallel Tracking and Mapping (MCPTAM) to improve localization performance in snow-laden environments. First, we define a snow segmentation method and snow-specific image filtering to enhance the detectability of local features on the snow surface. Then, we define a feature entropy reduction metric for keyframe selection that leads to reduced map sizes while maintaining localization accuracy. Both refinements are demonstrated on a snow-laden outdoor dataset collected with a wide field-of-view, three-camera cluster on a ground rover platform.

Arun Das, Devinder Kumar, Abdelhamid El Bably, Steven L. Waslander

Four-Wheel Rover Performance Analysis at Lunar Analog Test

A high fidelity field test of a four-wheeled lunar micro-rover, code-named Moonraker, was conducted by the Space Robotics Lab at a lunar analog site in Hamamatsu, Japan, in cooperation with Google Lunar XPRIZE Team Hakuto. For the target mission to a lunar maria region with a steep slope, slippage in loose soil is a key risk; a method of predicting the system's slip ratio from the angle of the slope being traversed, using only on-board telemetry, is highly desirable. A ground truth of Moonraker's location was measured and compared with the motor telemetry to obtain a profile of slippage during the entire four-hour, 500 m mission. A linear relationship between the slope angle and slip ratio was determined, which can be used to predict the slip ratio when ground truth data is not available.

Nathan Britton, John Walker, Kazuya Yoshida, Toshiro Shimizu, Tommaso Paniccia, Kei Nakata
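The reported linear slope-slip relationship can be captured by an ordinary least-squares line fit; the numbers below are invented for illustration, not Moonraker telemetry.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of slip ratio (ys) vs. slope angle (xs),
    returning slope a and intercept b of slip = a * angle + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

def predict_slip(angle_deg, a, b):
    # clamp to [0, 1]: slip ratio is the fraction of commanded travel lost
    return min(1.0, max(0.0, a * angle_deg + b))
```

Once fitted from runs where ground truth is available, `predict_slip` needs only the slope angle, which is exactly the on-board-telemetry-only prediction the abstract calls for.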

Energy-Aware Terrain Analysis for Mobile Robot Exploration

This paper presents an approach to predicting the energy consumption of mobility systems for wheeled ground robots. Energy autonomy is a critical problem for battery-powered systems. Specifically, predicting the consumption of the mobility system, which is difficult due to its complex interactions with the terrain, can be used to improve energy efficiency. To address this problem, a self-supervised approach is presented which considers terrain geometry and soil types. In particular, this paper analyzes how soil types affect energy usage models, then proposes a prediction scheme based on terrain type recognition and simple consumption modeling. The developed vibration-based terrain classifier is validated with a field test in diverse volcanic terrain.

Kyohei Otsu, Takashi Kubota



Vision and Learning for Deliberative Monocular Cluttered Flight

Cameras provide a rich source of information while being passive, cheap and lightweight for small Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, which is widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. Two key contributions make this possible: a novel coupling of perception and control via relevant and diverse, multiple interpretations of the scene around the robot, and the leveraging of recent advances in machine learning for anytime budgeted cost-sensitive feature selection and fast non-linear regression for monocular depth prediction. We empirically demonstrate the efficacy of our pipeline via real-world experiments covering more than 2 km through dense trees with an off-the-shelf quadrotor. Moreover, our pipeline is designed to incorporate information from other modalities such as stereo and lidar.

Debadeepta Dey, Kumar Shaurya Shankar, Sam Zeng, Rupesh Mehta, M. Talha Agcayazi, Christopher Eriksen, Shreyansh Daftry, Martial Hebert, J. Andrew Bagnell

Robust Autonomous Flight in Constrained and Visually Degraded Environments

This paper addresses the problem of autonomous navigation of a micro aerial vehicle (MAV) inside a constrained shipboard environment for inspection and damage assessment, which might be perilous or inaccessible for humans, especially in emergency scenarios. The environment is GPS-denied and visually degraded, containing narrow passageways, doorways and small objects protruding from the walls, which makes existing 2D LIDAR-, vision- or mechanical bumper-based autonomous navigation solutions fail. To realize autonomous navigation in such challenging environments, we propose a fast and robust state estimation algorithm that fuses estimates from a direct depth odometry method and a Monte Carlo localization algorithm with other sensor information in an EKF framework. Then, an online motion planning algorithm that combines trajectory optimization with a receding horizon control framework is proposed for fast obstacle avoidance. All computations are done in real time onboard our customized MAV platform. We validate the system by running experiments under different environmental conditions. The results of over 10 runs show that our vehicle robustly navigates 20 m long corridors only 1 m wide and passes through a very narrow doorway (66 cm wide, with only 4 cm clearance on each side) completely autonomously, even when it is completely dark or filled with light smoke.

Zheng Fang, Shichao Yang, Sezal Jain, Geetesh Dubey, Silvio Maeta, Stephan Roth, Sebastian Scherer, Yu Zhang, Stephen Nuske

Autonomous Exploration for Infrastructure Modeling with a Micro Aerial Vehicle

Micro aerial vehicles (MAVs) are an exciting technology for mobile sensing of infrastructure, as they can easily place sensors in hard-to-reach positions. Although MAVs equipped with 3D sensing are starting to be used in industry, they currently must be remotely controlled by a skilled pilot. In this paper we present an exploration path planning approach for MAVs equipped with 3D range sensors such as lidar. The only user input that our approach requires is a 3D bounding box around the structure. Our method incrementally plans a path for a MAV to scan all surfaces of the structure up to a desired resolution, and detects when exploration is finished. We demonstrate our method by modeling a train bridge and show that it builds 3D models with the efficiency of a skilled pilot.

Luke Yoder, Sebastian Scherer

Long-Endurance Sensing and Mapping Using a Hand-Launchable Solar-Powered UAV

This paper investigates and demonstrates the potential for very-long-endurance autonomous aerial sensing and mapping applications with AtlantikSolar, a small, hand-launchable, solar-powered fixed-wing unmanned aerial vehicle. The platform design as well as the on-board state estimation, control and path-planning algorithms are overviewed. A versatile sensor payload integrating a multi-camera sensing system, extended on-board processing and high-bandwidth communication with the ground is developed. Extensive field experiments are presented, including publicly demonstrated field trials for search-and-rescue applications and long-term mapping applications. An endurance analysis shows that AtlantikSolar can provide full-daylight operation and a minimum flight endurance of 8 h throughout the whole year with its full multi-camera mapping payload. An open dataset with both raw and processed data is released to accompany this paper.

Philipp Oettershagen, Thomas Stastny, Thomas Mantel, Amir Melzer, Konrad Rudin, Pascal Gohl, Gabriel Agamennoni, Kostas Alexis, Roland Siegwart

Aerial Vehicle Path Planning for Monitoring Wildfire Frontiers

This paper explores the use of unmanned aerial vehicles (UAVs) in wildfire monitoring. To begin establishing effective methods for autonomous monitoring, a simulation (FLAME) is developed for algorithm testing. To simulate a wildfire, the well-established FARSITE fire simulator is used to generate realistic fire behavior models. FARSITE is a wildfire simulator used in the field by Incident Commanders (ICs) to predict the spread of a fire from topography, weather, wind, moisture, and fuel data. The data obtained from FARSITE is imported into FLAME and parsed into a dynamic frontier used for testing hotspot monitoring algorithms. In this paper, points of interest along the frontier are defined as points with a fireline intensity (BTU/ft/s) above a set threshold. These interest points are refined into hotspots using the Mini-Batch K-means clustering technique. A distance threshold differentiates moving hotspot centers from newly developed hotspots. The proposed algorithm is compared to a baseline for minimizing the sum of the maximum time untracked, J(t). The results show that simply circling the fire (the baseline) performs poorly, while a weighted-greedy metric (the proposed approach) performs significantly better. The algorithm was then run on a UAV to demonstrate the feasibility of real-world implementation.
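A minimal sketch of the hotspot-extraction step described above, thresholding fireline intensity and clustering the surviving frontier points with a hand-rolled mini-batch k-means (function names and parameters are invented for illustration, not the paper's implementation):

```python
import numpy as np

def hotspot_centers(points, intensity, threshold, k, iters=100, batch=32, seed=0):
    """Filter frontier points by fireline intensity, then cluster them into
    k hotspot centers with a minimal mini-batch k-means."""
    rng = np.random.default_rng(seed)
    pts = points[intensity > threshold]            # interest points only
    # farthest-point initialization spreads the k seeds apart
    centers = [pts[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([((pts - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(pts[int(d.argmax())].astype(float))
    centers, counts = np.array(centers), np.zeros(k)
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), min(batch, len(pts)), replace=False)]
        assign = ((sample[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for i, p in zip(assign, sample):           # per-sample running-mean update
            counts[i] += 1
            centers[i] += (p - centers[i]) / counts[i]
    return centers
```

Tracking the returned centers over successive frontiers, with a distance threshold to distinguish a moved center from a newly formed hotspot, mirrors the scheme the abstract describes.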

Ryan C. Skeele, Geoffrey A. Hollinger



Multi-robot Mapping of Lava Tubes

Terrestrial planetary bodies such as Mars and the Moon are known to harbor volcanic terrain with enclosed lava tube conduits and caves. The shielding from cosmic radiation that they provide makes them a potentially hospitable habitat for life. This motivates the need to explore such lava tubes and assess their potential as locations for future human outposts. Such exploration will likely be conducted by autonomous mobile robots before humans, and this paper proposes a novel mechanism for constructing maps of lava tubes using a multi-robot platform. A key issue in mapping lava tubes is the presence of fine sand at the bottom of most tubes, as observed on Earth, which makes robot odometry measurements highly prone to errors. To address this issue, this work leverages the ability of a multi-robot system to measure the relative motion of robots using laser range finders. Mounted on each robot is a 2D laser range finder attached to a servo to enable 3D scanning. The lead robot carries an easily recognized target panel that allows the follower robot to measure both the relative distance and orientation between the robots. First, these measurements are used to enable 2D simultaneous localization and mapping (SLAM) of a lava tube. Second, the 3D range measurements are fused with the 2D maps via ICP algorithms to construct full 3D representations. This method of 3D mapping does not require odometry measurements or fine-scale environment features. It was validated in a building hallway system, demonstrating successful loop closure and mapping errors on the order of 0.63 m over a 79.64 m long loop. Error growth models determined experimentally indicate that robot localization errors grow at a rate of 20 mm per meter travelled, although this also depends on the relative orientation of the robots localizing each other. Finally, the system was deployed in a lava tube located at Pisgah Crater in the Mojave Desert, CA, where data was collected to generate a full 3D map of the tube. Comparison with known measurements taken between the two ends of the lava tube indicates mapping errors on the order of 1.03 m after the robot travelled 32 m.

X. Huang, J. Yang, M. Storrie-Lombardi, G. Lyzenga, C. M. Clark

Admittance Control for Robotic Loading: Underground Field Trials with an LHD

In this paper we describe field trials of an admittance-based Autonomous Loading Controller (ALC) applied to a robotic Load-Haul-Dump (LHD) machine at an underground mine near Örebro, Sweden. The ALC was tuned and field tested using a 14-tonne-capacity Atlas Copco ST14 LHD mining machine in piles of fragmented rock, similar to those found in operational mines. Several relationships between the ALC parameters and our performance metrics were discovered through the described field tests, during which the tuned ALC took 61 % less time to load 39 % more payload when compared to a manual operator. The results presented in this paper suggest that the ALC is more consistent than manual operators, and is also robust to uncertainties in the unstructured mine environment.
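A generic admittance law of the kind the abstract refers to can be sketched as follows (the gain and speed limits are illustrative placeholders, not the tuned ST14 parameters):

```python
def admittance_velocity(force_meas, force_ref, nominal_speed,
                        y=1.0e-6, v_min=0.0, v_max=0.5):
    """Admittance control for loading: the bucket advance speed is reduced
    as the measured dig force rises above a reference force, and increased
    (up to a limit) when the force falls below it."""
    v = nominal_speed + y * (force_ref - force_meas)   # y is the admittance gain
    return max(v_min, min(v_max, v))                   # clamp to actuator limits
```

Tuning then amounts to choosing the reference force and admittance gain that trade off loading time against captured payload.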

Andrew A. Dobson, Joshua A. Marshall, Johan Larsson

From ImageNet to Mining: Adapting Visual Object Detection with Minimal Supervision

This paper presents visual detection and classification of light vehicles and personnel on a mine site. We capitalise on the rapid advances of ConvNet-based object recognition, but highlight that a naive black-box approach results in a significant number of false positives. In particular, the lack of domain-specific training data and the unique landscape of a mine site cause a high rate of errors. We exploit the abundance of background-only images to train a k-means classifier that complements the ConvNet. Furthermore, localisation of objects of interest and a reduction in computation are enabled through region proposals. Our system is tested on over 10 km of real mine site data, on which we were able to detect both light vehicles and personnel. We show that the introduction of our background model can reduce the false positive rate by an order of magnitude.

Alex Bewley, Ben Upcroft



Building, Curating, and Querying Large-Scale Data Repositories for Field Robotics Applications

Field robotics applications have some unique and unusual data requirements whose curation, organisation and management are often overlooked. An emerging theme is the use of large corpora of spatiotemporally indexed sensor data which must be searched and leveraged both offline and online. Increasingly we build systems that must never stop learning. Every sortie requires swift, intelligent read-access to gigabytes of memories and the ability to augment the totality of stored experiences by writing new memories. This, however, leads to vast quantities of data which quickly become unmanageable, especially when we want to find what is relevant to our needs. The current paradigm of collecting data for specific purposes and storing it in ad hoc ways will not scale to meet this challenge. In this paper we present the design and implementation of a data management framework that is capable of dealing with large datasets and provides functionality required by many offline and online robotics applications. We systematically identify the data requirements of these applications and design a relational database that is capable of meeting their demands. We describe and demonstrate how we use the system to manage over 50 TB of data collected over a period of 4 years.
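The spatiotemporal read-access pattern described above can be sketched with an in-memory SQLite store (the schema is invented for illustration; the paper's relational design is considerably richer):

```python
import sqlite3

def make_db():
    """Create a minimal spatiotemporally indexed sensor-data store.
    Bulk sensor data stays on disk; the database holds the index."""
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE sample (
            id INTEGER PRIMARY KEY,
            stamp_utc REAL, x REAL, y REAL,   -- time and pose index
            sensor TEXT, blob_path TEXT       -- pointer to raw data on disk
        );
        CREATE INDEX idx_sample_time ON sample(stamp_utc);
        CREATE INDEX idx_sample_xy ON sample(x, y);
    """)
    return db

def query_window(db, sensor, x0, x1, y0, y1):
    """Read-access pattern: all records of one sensor inside a spatial box."""
    return db.execute(
        "SELECT blob_path FROM sample WHERE sensor=? AND x BETWEEN ? AND ?"
        " AND y BETWEEN ? AND ?", (sensor, x0, x1, y0, y1)).fetchall()
```

The key design choice this mirrors is separating a lightweight, indexed relational catalogue from the bulk sensor payloads, so that sorties can quickly find what is relevant without scanning terabytes of raw data.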

Peter Nelson, Chris Linegar, Paul Newman

Search and Retrieval of Human Casualties in Outdoor Environments with Unmanned Ground Systems—System Overview and Lessons Learned from ELROB 2014

The European Land Robot Trial (ELROB) is a robot competition that has been running for nearly 10 years. Its focus changes between military and civilian applications every other year. Although ELROB is now one of the most established competition events in Europe, the tasks have changed over the years, and in 2014, for the first time, a search and rescue scenario was offered. This paper addresses this Medical Evacuation (MedEvac) scenario and describes our system design for approaching the challenge, especially our innovative control mechanism for the manipulator. Comparing our solution with the other teams' approaches, we show the advantages that ultimately enabled us to take first place in this trial.

Bernd Brüggemann, Dennis Wildermuth, Frank E. Schneider

Monocular Visual Teach and Repeat Aided by Local Ground Planarity

Visual Teach and Repeat (VT&R) allows an autonomous vehicle to repeat a previously traversed route without a global positioning system. Existing implementations of VT&R typically rely on 3D sensors such as stereo cameras for mapping and localization, but many mobile robots are equipped with only 2D monocular vision for tasks such as teleoperated bomb disposal. While simultaneous localization and mapping (SLAM) algorithms exist that can recover 3D structure and motion from monocular images, the scale ambiguity inherent in these methods complicates the estimation and control of lateral path-tracking error, which is essential for achieving high-accuracy path following. In this paper, we propose a monocular vision pipeline that enables kilometre-scale route repetition with centimetre-level accuracy by approximating the ground surface near the vehicle as planar (with some uncertainty) and recovering absolute scale from the known position and orientation of the camera relative to the vehicle. This system provides added value to many existing robots by allowing for high-accuracy autonomous route repetition with a simple software upgrade and no additional sensors. We validate our system over 4.3 km of autonomous navigation and demonstrate accuracy on par with the conventional stereo pipeline, even in highly non-planar terrain.
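The planar-ground scale recovery idea can be sketched as follows, assuming a calibrated camera whose pose in the vehicle frame is known and a locally flat ground plane (the names, frames and numbers below are illustrative, not the paper's implementation):

```python
import numpy as np

def ground_depth(pixel, K, R_cam_to_veh, t_cam_in_veh):
    """Metric depth of a ground pixel under a local planarity assumption.
    pixel: (u, v) image coordinates; K: 3x3 camera intrinsics;
    R_cam_to_veh, t_cam_in_veh: camera pose in the vehicle frame, where the
    ground is the plane z = 0 and t_cam_in_veh[2] is the camera height."""
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_veh = R_cam_to_veh @ ray_cam         # pixel ray in the vehicle frame
    lam = -t_cam_in_veh[2] / ray_veh[2]      # intersect ray with plane z = 0
    return lam                               # depth along the optical axis
```

Because the camera height is known from calibration, this intersection yields absolute (metric) depth for ground points, which is exactly the extra constraint needed to resolve the scale ambiguity of monocular SLAM.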

Lee Clement, Jonathan Kelly, Timothy D. Barfoot

In the Dead of Winter: Challenging Vision-Based Path Following in Extreme Conditions

In order for vision-based navigation algorithms to extend to long-term autonomy applications, they must have the ability to reliably associate images across time. This ability is challenged in unstructured and outdoor environments, where appearance is highly variable. This is especially true in temperate winter climates, where snowfall and low sun elevation rapidly change the appearance of the scene. While there have been proposed techniques to perform localization across extreme appearance changes, they are not suitable for many navigation algorithms such as autonomous path following, which requires constant, accurate, metric localization during the robot traverse. Furthermore, recent methods that mitigate the effects of lighting change for vision algorithms do not perform well in the contrast-limited environments associated with winter. In this paper, we highlight the successes and failures of two state-of-the-art path-following algorithms in this challenging environment. From harsh lighting conditions to deep snow, we show through a series of field trials that there remain serious issues with navigation in these environments, which must be addressed in order for long-term, vision-based navigation to succeed.

Michael Paton, François Pomerleau, Timothy D. Barfoot

Non-Field-of-View Acoustic Target Estimation in Complex Indoor Environment

This paper presents a new approach which acoustically localizes a mobile target outside the Field-of-View (FOV), or in the Non-Field-of-View (NFOV), of an optical sensor, and its implementation in complex indoor environments. In this approach, microphones are fixed sparsely in the indoor environment of concern. In a prior process, the Interaural Level Difference (IID) of observations acquired by each set of two microphones is derived for different sound target positions and stored as an acoustic cue. When a new sound is observed in the environment, a joint acoustic observation likelihood is derived by fusing the likelihoods computed from the correlation of the new observation's IID with the stored acoustic cues. The location of the NFOV target is finally estimated within the recursive Bayesian estimation framework. Following experimental parametric studies, the potential of the proposed approach for practical implementation is demonstrated by the successful tracking of an elderly person needing health care services in a home environment.
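A minimal sketch of IID-based estimation over a grid of candidate positions (a Gaussian likelihood stands in here for the paper's correlation-based one, and all names and values are illustrative):

```python
import numpy as np

def iid_db(level_a, level_b):
    """Interaural Level Difference of a microphone pair, in decibels."""
    return 10.0 * np.log10(level_a / level_b)

def locate(observed_iids, cue_maps, sigma=1.0):
    """Fuse per-pair likelihoods over candidate positions, returning the
    most likely grid cell. observed_iids: one IID per mic pair;
    cue_maps: per-pair arrays of the IID recorded at each candidate
    position during the prior mapping process."""
    log_like = np.zeros(len(cue_maps[0]))
    for obs, cues in zip(observed_iids, cue_maps):
        log_like += -0.5 * ((np.asarray(cues) - obs) / sigma) ** 2
    return int(np.argmax(log_like))
```

In the full approach this per-observation likelihood would feed a recursive Bayesian filter rather than a one-shot argmax, so the target estimate also benefits from its motion history.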

Kuya Takami, Tomonari Furukawa, Makoto Kumon, Gamini Dissanayake

Novel Assistive Device for Teaching Crawling Skills to Infants

Crawling is a fundamental skill linked to development far beyond simple mobility. Infants who have cerebral palsy and similar conditions learn to crawl late, if at all, pushing back other elements of their development. This paper describes the development of a robot (the Self-Initiated Prone Progression Crawler V3, or SIPPC3) that assists infants in learning to crawl. When an infant is placed onboard, the robot senses contact forces generated by the limbs interacting with the ground. The robot then moves or raises the infant’s trunk accordingly. The robot responses are adjustable such that even infants lacking the muscle strength to crawl can initiate movement. The novel idea that this paper presents is the use of a force augmenting motion mechanism to help infants learn how to crawl.

Mustafa A. Ghazi, Michael D. Nash, Andrew H. Fagg, Lei Ding, Thubi H. A. Kolobe, David P. Miller

SPENCER: A Socially Aware Service Robot for Passenger Guidance and Help in Busy Airports

We present a comprehensive description of a socially compliant mobile robotic platform developed in the EU-funded project SPENCER. The purpose of this robot is to assist, inform and guide passengers in large and busy airports; one particular aim is to bring travellers on connecting flights conveniently and efficiently from their arrival gate to passport control. The uniqueness of the project stems from the strong demand for service robots in this application, with a large potential impact on the aviation industry, on the one hand, and from the scientific advancements in social robotics brought forward and achieved in SPENCER on the other. The main contributions of SPENCER are novel methods to perceive, learn, and model human social behavior and to use this knowledge to plan appropriate actions in real time for mobile platforms. In this paper, we describe how the project advances the fields of detection and tracking of individuals and groups, recognition of human social relations and activities, normative human behavior learning, socially-aware task and motion planning, learning socially annotated maps, and conducting empirical experiments to assess the socio-psychological effects of normative robot behaviors.

Rudolph Triebel, Kai Arras, Rachid Alami, Lucas Beyer, Stefan Breuers, Raja Chatila, Mohamed Chetouani, Daniel Cremers, Vanessa Evers, Michelangelo Fiore, Hayley Hung, Omar A. Islas Ramírez, Michiel Joosse, Harmish Khambhaita, Tomasz Kucner, Bastian Leibe, Achim J. Lilienthal, Timm Linder, Manja Lohse, Martin Magnusson, Billy Okal, Luigi Palmieri, Umer Rafi, Marieke van Rooij, Lu Zhang

Easy Estimation of Wheel Lift and Suspension Force for a Novel High-Speed Robot on Rough Terrain

In the operation of high-speed wheeled robots on rough terrain, it is important to predict or measure the interaction between wheel and ground in order to maintain optimal maneuverability. This paper therefore proposes a simple way to estimate wheel lift and suspension force for a high-speed wheeled robot on uneven surfaces. First, a high-speed robot was developed with six wheels, each with an individual steering motor, and with the body of the robot connected to each wheel by a semi-active suspension. In the sensor system, potentiometers that measure the arm angles are mounted at the ends of the arms and play a critical role in estimating wheel lift and suspension force. A simple dynamic equation of the spring-damper system is used to estimate the suspension force; the equation is expressed in terms of the suspension displacement, computed from the measured arm angle, because the suspension displacement is a function of arm angle within the kinematic model of the body-wheel connection. Wheel lift can likewise be estimated from the arm angle: when the robot is in its initial state with no normal force, the arm angle is set as the zero point, and when a wheel carries normal force, the arm angle rises above this zero point. If a wheel loses contact with the ground, the estimated suspension force becomes negative. Therefore, if wheel lift happens while driving, the arm angle will return to the zero point or the suspension force will take a negative value. The proposed method was validated in ADAMS simulations, and its performance was further verified through outdoor experiments with an obstacle, using a high-speed robot developed for this purpose.
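The estimation logic described above can be sketched as follows (the linear angle-to-displacement mapping and all constants are invented for illustration, not the robot's calibrated kinematics):

```python
def suspension_force(arm_angle, arm_rate, k=20000.0, c=1500.0, lever=0.25):
    """Spring-damper suspension force estimated from the measured arm angle.
    arm_angle [rad] and arm_rate [rad/s] come from the arm potentiometer;
    k [N/m], c [N*s/m] and the effective lever arm [m] are illustrative."""
    x = lever * arm_angle          # suspension displacement [m]
    x_dot = lever * arm_rate       # displacement rate [m/s]
    return k * x + c * x_dot

def wheel_lifted(arm_angle, force, angle_eps=1e-3):
    """Wheel-lift test: the arm has returned to its no-load zero angle,
    or the estimated suspension force has gone negative."""
    return abs(arm_angle) < angle_eps or force < 0.0
```

This captures the two lift indicators the abstract describes: an arm angle back at its zero point, or a negative estimated suspension force.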

Jayoung Kim, Bongsoo Jeon, Jihong Lee

Application of Multi-Robot Systems to Disaster-Relief Scenarios with Limited Communication

In this systems description paper, we present a multi-robot solution for intelligence-gathering tasks in disaster-relief scenarios where communication quality is uncertain. First, we propose a formal problem statement in the context of operations research. The hardware configuration of two heterogeneous robotic platforms capable of performing experiments in a relevant field environment and a suite of autonomy-enabled behaviors that support operation in a communication-limited setting are described. We also highlight a custom user interface designed specifically for task allocation amongst a group of robots towards completing a central mission. Finally, we provide an experimental design and extensive, preliminary results for studying the effectiveness of our system.

Jason Gregory, Jonathan Fink, Ethan Stump, Jeffrey Twigg, John Rogers, David Baran, Nicholas Fung, Stuart Young