
2018 | Book

Field and Service Robotics

Results of the 11th International Conference


About this book

This book contains the proceedings of the 11th FSR (Field and Service Robotics), the leading single-track conference on applications of robotics in challenging environments, held in Zurich, Switzerland, from 12 to 15 September 2017. It comprises 45 full-length, peer-reviewed papers organized into a variety of topics: Control, Computer Vision, Inspection, Machine Learning, Mapping, Navigation and Planning, and Systems and Tools.

The goal of the book and the conference is to report and encourage the development and experimental evaluation of field and service robots, and to generate a vibrant exchange and discussion in the community. Field robots are non-factory robots, typically mobile, that operate in complex and dynamic environments: on the ground (on Earth or other planets), under the ground, underwater, in the air, or in space. Service robots are those that work closely with humans to help them in their daily lives. The first FSR was held in Canberra, Australia, in 1997. Since that first meeting, FSR has been held roughly every two years, cycling through Asia, the Americas, and Europe.

Table of Contents

Frontmatter

Control

Frontmatter
Controlling Ocean One

Using robots to explore venues that are beyond human reach has been a longstanding aspiration of scientists and expeditionists alike. The deep sea exemplifies such an uncharted environment that is currently inaccessible to humans. Ocean One (O₂) is an anthropomorphic underwater robot, designed to operate in deep aquatic conditions and equipped with an array of sensor modalities. Central to the O₂ concept is a human interface that connects the robot and human operator through haptics and vision. In this paper, we focus on O₂'s control architecture and show how it enables an avatar-like synergy between the robot and human pilot. We establish functional autonomy by resolving kinematic and actuation redundancy, allowing the pilot to control O₂ in a lower-dimensional space. We illustrate O₂'s hierarchical whole-body control tasks, including manipulation and posture tasks, feed-forward compensation, and constraint handling. We also describe how to coordinate the dynamics of body and arms to achieve superior performance in contact, and demonstrate O₂'s capabilities in simulation, in pool experiments, and in its maiden archaeological mission to the 'Lune', a French naval vessel that sank to 91 m depth in 1664 in the Mediterranean Sea.

Gerald Brantner, Oussama Khatib
Safe Self-collision Avoidance for Versatile Robots Based on Bounded Potentials

We present a novel and intrinsically safe collision avoidance method for torque- or force-controlled robots. We propose to insert a dedicated module after the nominal controller into the existing feedback loop to blend the nominal control signal with repulsive forces derived from an artificial potential. This blending is regulated by the system's mechanical energy in a way that guarantees collision avoidance while still allowing navigation close to collisions. Although it uses well-known ingredients from previous reactive methods, our approach overcomes their limitations, achieving reliability without significantly restricting the set of reachable configurations. We demonstrate the fitness of our approach by comparing it to a standard potential-based method in simulated experiments with a walking excavator.

David Gonon, Dominic Jud, Péter Fankhauser, Marco Hutter
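
The energy-regulated blending described above lends itself to a compact sketch. The following Python fragment is a minimal illustration under simplifying assumptions (a point-mass kinetic-energy proxy, a single obstacle, and invented gains and energy budget), not the authors' implementation:

```python
import numpy as np

def repulsive_force(q, q_obstacle, d0=0.3, k=5.0):
    """Classic artificial-potential repulsion, active within distance d0."""
    diff = q - q_obstacle
    d = np.linalg.norm(diff)
    if d >= d0:
        return np.zeros_like(q)
    # Gradient of 0.5*k*(1/d - 1/d0)^2, pushing away from the obstacle
    return k * (1.0 / d - 1.0 / d0) * diff / d**3

def blended_command(tau_nominal, q, dq, q_obstacle, mass=1.0, e_max=2.0):
    """Blend the nominal command with repulsion, weighted by mechanical energy.

    The closer the system's energy is to the budget e_max, the more the
    repulsive term dominates (alpha -> 1), which is the bounded-potential idea.
    """
    energy = 0.5 * mass * dq @ dq          # kinetic energy as a simple proxy
    alpha = np.clip(energy / e_max, 0.0, 1.0)
    return (1.0 - alpha) * tau_nominal + alpha * repulsive_force(q, q_obstacle)
```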
Towards Controlling Bucket Fill Factor in Robotic Excavation by Learning Admittance Control Setpoints

This paper investigates the extension of an admittance control scheme toward learning and adaptation of its setpoints to achieve controllable bucket fill factor for robotic excavation of fragmented rock. A previously developed Dig Admittance Controller (DAC) is deployed on a 14-tonne capacity robotic load-haul-dump (LHD) machine, and full-scale excavation experiments are conducted with a rock pile at an underground mine to determine how varying DAC setpoints affect bucket fill factor. Results show that increasing the throttle setpoint increases the bucket fill factor and increasing the bucket’s reference velocity setpoint decreases the bucket fill factor. Further, the bucket fill factor is consistent for different setpoint values. Based on these findings, a learning framework is postulated to learn DAC setpoint values for a desired bucket fill factor over successive excavation iterations. Practical implementation problems such as bucket stall and wheel-slip are also addressed, and improvements to the DAC design are suggested to mitigate these problems.

Heshan A. Fernando, Joshua A. Marshall, Håkan Almqvist, Johan Larsson
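
A toy sketch of the kind of setpoint learning postulated above: adjust the throttle and bucket-velocity setpoints between digs based on the fill-factor error, using the reported trends (higher throttle raises the fill factor, higher bucket reference velocity lowers it). The gains, bounds, and update rule are invented for illustration and are not the paper's algorithm:

```python
def update_setpoints(throttle, bucket_vel, fill_measured, fill_desired,
                     k_throttle=0.05, k_vel=0.02):
    """One iteration of a gradient-style DAC setpoint update."""
    error = fill_desired - fill_measured
    # Reported trends: throttle up -> fill up; bucket velocity up -> fill down
    throttle = min(1.0, max(0.0, throttle + k_throttle * error))
    bucket_vel = max(0.05, bucket_vel - k_vel * error)
    return throttle, bucket_vel

# Example: drive toward a full bucket over successive excavation iterations
throttle, vel = 0.6, 0.3
for fill in [0.82, 0.90, 0.96]:      # hypothetical measured fill factors
    throttle, vel = update_setpoints(throttle, vel, fill, 1.0)
```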
Trajectory Optimization for Dynamic Grasping in Space Using Adhesive Grippers

Spacecraft equipped with gecko-inspired dry adhesive grippers can dynamically grasp objects having a wide variety of featureless surfaces. In this paper we propose an optimization-based control strategy to exploit the dynamic robustness of such grippers for the task of grasping a free-floating, spinning object. First, we extend previous work characterizing the dynamic grasping capabilities of these grippers to the case where both object and spacecraft are free-floating and comparably sized. We then formulate the acquisition problem as a two-phase optimization problem, which is amenable to real-time implementation and can handle constraints on velocity and control, as well as integer timing constraints for grasping a specific target location on the surface of a spinning object. Conservative analytical bounds for the set of initial states that guarantee feasible grasping solutions are derived. Finally, we validate this control architecture on the Stanford free-flyer test bed, a 2D microgravity facility for emulating the drift dynamics of spacecraft.

Roshena MacPherson, Benjamin Hockman, Andrew Bylard, Matthew A. Estrada, Mark R. Cutkosky, Marco Pavone
Generation of Turning Motion for Tracked Vehicles Using Reaction Force of Stairs’ Handrail

Inspections by mobile robots are required in chemical and steel plants. The robots must ascend and descend stairs because equipment components are installed on floors at different levels. This paper proposes a turning motion for tracked vehicles on stairs. A characteristic of the proposed turning motion is that it is generated using the reaction force from the safety wall of the stairs' handrail. The safety wall is common in plants because it prevents objects from dropping down and damaging equipment. Proper turning motion is generated based on the motion model of the tracked vehicle. Experimental results show that the proposed turning motion can change the heading direction on the stairs. In addition, the proposed turning motion enables the vehicle to run with less slippage than other turning motions: it reduces slippage by 88% while climbing up the stairs and by 44% while climbing down, and is thus more effective on upward stairs than on downward stairs. An autonomous turning motion controller is implemented on the tracked vehicle and evaluated on upward stairs.

Yuto Ohashi, Shotaro Kojima, Kazunori Ohno, Yoshito Okada, Ryunosuke Hamada, Takahiro Suzuki, Satoshi Tadokoro

Computer Vision

Frontmatter
Finding Better Wide Baseline Stereo Solutions Using Feature Quality

Many robotic applications that involve relocalization or 3D scene reconstruction need to compute geometry between camera images captured from widely different viewpoints. Computing epipolar geometry between wide-baseline image pairs is difficult because there are often many more outliers than inliers at the feature correspondence stage. Abundant outliers force the naive approach to compute a huge number of random solutions to give a suitable probability that the correct solution is found. Furthermore, large numbers of outliers can also make false solutions appear like true solutions. We present a new method called UNIQSAC for assigning weights to features that guide the random solutions towards high-quality features, helping find good solutions. We also present a new method for evaluating geometry solutions that is more likely to find correct ones. We demonstrate, in a variety of different outdoor environments using both monocular and stereo image pairs, that our method produces better estimates than existing robust estimation approaches.

Stephen Nuske, Jay Patravali
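
A minimal sketch of quality-weighted hypothesis sampling in the spirit of UNIQSAC: instead of drawing minimal sets uniformly as in plain RANSAC, bias sampling toward high-quality correspondences. The weighting, sample size, and model-fitting callbacks below are placeholders, not the paper's method:

```python
import numpy as np

def weighted_ransac(matches, quality, fit_model, score_model,
                    n_sample=8, iters=500, rng=None):
    """RANSAC variant sampling minimal sets biased by feature quality.

    matches: NxD array of correspondences; quality: length-N positive scores.
    fit_model: fits a model to a minimal set (e.g. 8-point fundamental matrix).
    score_model: scores a model over all matches (e.g. inlier count).
    """
    rng = rng or np.random.default_rng(0)
    p = quality / quality.sum()          # sampling distribution from quality
    best_model, best_score = None, -np.inf
    for _ in range(iters):
        idx = rng.choice(len(matches), size=n_sample, replace=False, p=p)
        model = fit_model(matches[idx])
        score = score_model(model, matches)
        if score > best_score:
            best_model, best_score = model, score
    return best_model
```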
High-Throughput Robotic Phenotyping of Energy Sorghum Crops

Plant phenotyping is a time-consuming, labour-intensive, and error-prone process of measuring the physical properties of plants. We present a scalable robotic system which employs computer vision and machine learning to phenotype plants rapidly. It maintains high throughput, making multiple phenotyping measurements during the plant lifecycle in plots containing thousands of plants. Our novel approach allows scanning of plants inside the plant canopy in addition to the top and bottom sections of the plants. Here we present our design decisions, implementation challenges, and field observations.

Srinivasan Vijayarangan, Paloma Sodhi, Prathamesh Kini, James Bourne, Simon Du, Hanqi Sun, Barnabas Poczos, Dimitrios Apostolopoulos, David Wettergreen
Improved Tau-Guidance and Vision-Aided Navigation for Robust Autonomous Landing of UAVs

In many unmanned aerial vehicle (UAV) applications, flexible trajectory generation algorithms are required to enable high levels of autonomy for critical mission phases, such as take-off, area coverage, and landing. In this paper, we present a guidance approach which uses the improved intrinsic tau guidance theory to create spatio-temporal 4-D trajectories for a desired time-to-contact with a landing platform tracked by a visual sensor. This allows us to perform maneuvers with tunable trajectory profiles, while catering for static or non-static starting and terminating motion states. We validate our method in both simulations and real platform experiments by using rotary-wing UAVs to land on static platforms. Results show that our method achieves smooth landings within 10 cm accuracy, with easily adjustable trajectory parameters.

Amedeo Rodi Vetrella, Inkyu Sa, Marija Popović, Raghav Khanna, Juan Nieto, Giancarmine Fasano, Domenico Accardo, Roland Siegwart
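
Tau theory defines the time-to-contact of a closing gap x as τ = x/ẋ. One classic strategy keeps τ̇ constant at some k, which for 0 < k < 0.5 closes the gap with velocity and acceleration both vanishing at contact. The toy numerical sketch below illustrates only that basic idea; the paper's improved intrinsic tau guidance is more elaborate, and the initial conditions and tuning here are invented:

```python
import numpy as np

def tau_dot_trajectory(x0, v0, k=0.4, dt=0.01):
    """Integrate a gap x closed under the constant tau-dot strategy.

    tau = x / xdot is the time-to-contact; enforcing d(tau)/dt = k
    gives the acceleration command xddot = xdot**2 * (1 - k) / x.
    """
    xs, x, v = [x0], x0, v0            # v0 < 0: the gap is closing
    while x > 1e-3:
        a = v * v * (1.0 - k) / x      # from differentiating tau = x / xdot
        v += a * dt
        x += v * dt
        xs.append(max(x, 0.0))
    return np.array(xs)

gap = tau_dot_trajectory(x0=5.0, v0=-1.0)   # e.g. 5 m above the platform
```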
Fast and Power-Efficient Embedded Software Implementation of Digital Image Stabilization for Low-Cost Autonomous Boats

The use of autonomous surface vehicles (ASVs) is an efficient alternative to the traditional manual or static sensor network sampling for large-scale monitoring of marine and aquatic environments. However, navigating natural and narrow waterways is challenging for low-cost ASVs due to possible obstacles and limited precision global positioning system (GPS) data. Visual information coming from a camera can be used for collision avoidance, and digital image stabilization is a fundamental step for achieving this capability. This work presents an implementation of an image stabilization algorithm for a heterogeneous low-power board (i.e., NVIDIA Jetson TX1). In particular, the paper shows how such an embedded vision application has been configured to best exploit the CPU and the GPU processing elements of the board in order to obtain both computation performance and energy efficiency. We present qualitative and quantitative experiments carried out on two different environments for embedded vision software development (i.e., OpenCV and OpenVX), using real data to find a suitable solution and to demonstrate its effectiveness. The data used in this study is publicly available.

S. Aldegheri, D. D. Bloisi, J. J. Blum, N. Bombieri, A. Farinelli
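
Digital image stabilization of the kind the paper accelerates typically follows a track/estimate/warp pipeline. A minimal OpenCV sketch of that generic pipeline is shown below; it is the plain CPU algorithm with invented parameters, not the paper's tuned CPU/GPU partitioning on the Jetson TX1:

```python
import cv2

def stabilize_pair(prev_gray, curr_gray, curr_frame):
    """Warp curr_frame so it aligns with prev_gray (one stabilization step)."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=10)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good = status.ravel() == 1
    # A similarity transform (rotation + translation + scale) is enough to
    # undo boat-induced jitter between consecutive frames.
    M, _ = cv2.estimateAffinePartial2D(pts_curr[good], pts_prev[good])
    h, w = curr_frame.shape[:2]
    return cv2.warpAffine(curr_frame, M, (w, h))
```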
Evaluation of Combined Time-Offset Estimation and Hand-Eye Calibration on Robotic Datasets

Using multiple sensors often requires knowledge of the static transformations between those sensors. If these transformations are unknown, hand-eye calibration is used to obtain them. Additionally, sensors are often unsynchronized, thus requiring time-alignment of measurements. This alignment can be further hindered by sensors that fail to provide useful data over a certain time period. We present an end-to-end framework to solve this hand-eye calibration problem. After an initial time-alignment step, we use the time-aligned pose estimates to perform the static transformation estimation based on different prefiltering methods, which are robust to outliers. In a final step, we employ a non-linear optimization to locally refine the calibration and time-alignment. Successful application of this estimation framework is demonstrated on multiple robotic systems with different sensor configurations. The framework is released as open-source software together with the datasets.

Fadri Furrer, Marius Fehr, Tonci Novkovic, Hannes Sommer, Igor Gilitschenski, Roland Siegwart
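
The initial time-alignment step can be approximated by cross-correlating a signal both sensors observe, such as the magnitude of angular velocity derived from each sensor's pose estimates, over candidate offsets. A simplified sketch follows; uniform sampling and a shared rate are assumptions of this sketch, while the released framework handles the general case:

```python
import numpy as np

def estimate_time_offset(sig_a, sig_b, dt, max_shift_s=1.0):
    """Return the offset (seconds) that best aligns sig_b to sig_a.

    sig_a, sig_b: 1-D arrays sampled at the same rate, e.g. |angular
    velocity| computed from each sensor's pose estimates.
    """
    max_shift = int(max_shift_s / dt)
    a = (sig_a - sig_a.mean()) / sig_a.std()   # normalize before correlating
    b = (sig_b - sig_b.mean()) / sig_b.std()
    shifts = np.arange(-max_shift, max_shift + 1)
    scores = [np.sum(a[max(0, s):len(a) + min(0, s)] *
                     b[max(0, -s):len(b) + min(0, -s)]) for s in shifts]
    return dt * shifts[int(np.argmax(scores))]
```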

Inspection

Frontmatter
Field Report: UAV-Based Volcano Observation System for Debris Flow Evacuation Alarm

Once a volcano erupts, molten rock, ash, pyroclastic flows, and debris flows can cause disasters. Debris flows can cause enormous damage over large areas, so debris-flow simulation is an effective means of determining whether to issue an evacuation call for area residents. However, for safety purposes, restricted areas are set up around a volcano when it erupts. In these restricted areas, it is difficult to gather information such as the amount and permeability of the ash, which is necessary for precise debris-flow simulations. To address this problem, we have developed an unmanned observation system for use in restricted areas around volcanoes. Our system is based on a multirotor micro unmanned aerial vehicle (MUAV) and can be used to perform field tests in actual volcanic areas. In this paper, we report on the field tests conducted at Mt. Unzen-Fugen during November 2016. The field tests included a demonstration of an unmanned surface flow measurement device and the deployment and retrieval of a small ground vehicle and a drop-down-type ash-depth measurement scale using an MUAV. In addition, we discuss some of the lessons learned.

Keiji Nagatani, Ryosuke Yajima, Seiga Kiribayashi, Tomoaki Izu, Hiromichi Kanai, Hiroyuki Kanasaki, Jun Minagawa, Yuji Moriyama
Cooperative UAVs as a Tool for Aerial Inspection of the Aging Infrastructure

This article presents an aerial tool for the autonomous cooperative coverage and inspection of 3D infrastructure using multiple Unmanned Aerial Vehicles (UAVs). In the presented approach, the UAVs rely only on their onboard computers and sensory systems while deployed for inspection of the 3D structure. Each agent covers a different part of the scene autonomously while avoiding collisions. The visual information collected by the aerial team is collaboratively processed to create the 3D model. The performance of the overall setup has been experimentally evaluated in realistic outdoor infrastructure inspection experiments, providing sparse and dense 3D reconstructions of the inspected structures.

Sina Sharif Mansouri, Christoforos Kanellakis, Emil Fresk, Dariusz Kominiak, George Nikolakopoulos
Autonomous Aerial Inspection Using Visual-Inertial Robust Localization and Mapping

With recent technological breakthroughs bringing fully autonomous inspection using small Unmanned Aerial Vehicles (UAVs) closer to reality, the robotics community has been actively developing real-time perception capabilities able to run onboard such constrained platforms. Despite good progress, realistic deployment of autonomous UAVs in GPS-denied environments is still rudimentary. In this work, we propose a novel system to generate a collision-free path towards a user-specified inspection direction for a small UAV, using monocular-inertial sensing only and performing all computation onboard. Estimating both the previously unknown scene and the UAV's trajectory on the fly, this system is evaluated in real outdoor experiments in the presence of wind and poorly structured environments. Our analysis reveals the shortcomings of using sparse feature maps for planning, highlighting the importance of the robust dense scene estimation proposed here.

Lucas Teixeira, Ignacio Alzugaray, Margarita Chli
Sensing Water Properties at Precise Depths from the Air

Water properties critical to our understanding and managing of freshwater systems change rapidly with depth. This work presents an Unmanned Aerial Vehicle (UAV) based method of keeping a passive, cable-suspended sensor payload at a precise depth, with 95% of submerged sensor readings within ±8.4 cm of the target depth, helping dramatically increase the spatiotemporal resolution of water science datasets. We use a submerged depth altimeter attached at the terminus of a 3.5 m semi-rigid cable as the sole input to a depth controller actuated by the UAV's motors. First, we simulate the system and common environmental disturbances of wind, water, and GPS drift, and then use parameters discovered during simulation to guide implementation. In field experiments, we compare the depth precision of our new method to previous methods that used the UAV's altitude as a proxy for submerged sensor depth, specifically: (1) only using the UAV's air-pressure altimeter; and (2) fusing UAV-mounted ultrasonic sensors with the air-pressure altimeter. Our new method reduces the standard deviation of depth readings by 75% in winds up to 8 m/s. We show the step response of the depth-altimeter method when transitioning between target depths and show that it meets the precision requirements. Finally, we explore a longer, 8.0 m cable and show that our depth-altimeter method still outperforms previous methods and allows scientists to increase the spatiotemporal resolution of water property datasets.

John-Paul Ore, Carrick Detweiler
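
The core of the method, using the submerged depth altimeter to command vertical motion of the UAV, can be sketched as a simple feedback loop. The PD structure, gains, and rate limit below are invented for illustration; the paper's controller is tuned from simulation:

```python
class DepthController:
    """PD loop: submerged altimeter depth in, UAV climb rate out."""

    def __init__(self, kp=0.8, kd=0.3, max_rate=0.5):
        self.kp, self.kd, self.max_rate = kp, kd, max_rate
        self.prev_error = 0.0

    def step(self, depth_measured, depth_target, dt):
        # Depth is positive downward. Positive error means the sensor is
        # too shallow, so the UAV should descend (negative climb rate).
        error = depth_target - depth_measured
        d_error = (error - self.prev_error) / dt
        self.prev_error = error
        rate = self.kp * error + self.kd * d_error
        return max(-self.max_rate, min(self.max_rate, -rate))
```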
Autonomous and Safe Inspection of an Industrial Warehouse by a Multi-rotor MAV

This paper reports field tests of autonomous inspection in an industrial indoor facility by a Micro-Air Vehicle (MAV) with no prior knowledge of the environment. Localization, mapping, and safe navigation are achieved using only the embedded sensors (stereo vision, IMU, laser altimeter), with the entire perception and control loop running on board the MAV. An overview of the algorithmic architecture and design choices is provided, with a focus on the mission and safety capabilities demonstrated in several flight tests defined in association with SNCF (French Railways) in one of their train storage warehouses.

Alexandre Eudes, Julien Marzat, Martial Sanfourche, Julien Moras, Sylvain Bertrand

Machine Learning

Frontmatter
Online Multi-modal Learning and Adaptive Informative Trajectory Planning for Autonomous Exploration

In robotic information gathering missions, scientists are typically interested in understanding variables whose estimation requires proxy measurements from specialized sensor suites. However, energy and time constraints limit how often these sensors can be used in a mission. Robots are also equipped with cheaper-to-use navigation sensors such as cameras. In this paper, we explore a challenging planning problem in which a robot is required to learn about a scientific variable of interest in an initially unknown environment by planning informative paths and deciding when and where to use its sensors. To tackle this we present two innovations: a Bayesian generative model framework to automatically learn correlations between expensive science sensors and cheaper-to-use navigation sensors online, and a sampling-based approach to plan for multiple sensors while handling long horizons and budget constraints. Our approach does not grow in complexity with data and is anytime, making it highly applicable to field robotics. We tested our approach extensively in simulation and validated it with real data collected during the 2014 Mojave Volatiles Prospector Mission. Our planning algorithm performs statistically significantly better than myopic approaches and at least as well as a coverage-based algorithm in an initially unknown environment, while having the added advantages of being able to exploit prior knowledge and handle other intricacies of the real world without further algorithmic modifications.

Akash Arora, P. Michael Furlong, Robert Fitch, Terry Fong, Salah Sukkarieh, Richard Elphic
Season-Invariant Semantic Segmentation with a Deep Multimodal Network

Semantic scene understanding is a useful capability for autonomous vehicles operating off-road. While cameras are the most common sensor used for semantic classification, the performance of methods using camera imagery may suffer when there is significant variation between the training and testing sets caused by illumination, weather, and seasonal variations. On the other hand, 3D information from active sensors such as LiDAR is comparatively invariant to these factors, which motivates us to investigate whether it can be used to improve performance in this scenario. In this paper, we propose a novel multimodal Convolutional Neural Network (CNN) architecture consisting of two streams, 2D and 3D, which are fused by projecting 3D features to image space to achieve robust pixelwise semantic segmentation. We evaluate our proposed method on a novel off-road terrain classification benchmark and show a 25% improvement in mean Intersection over Union (IoU) of navigation-related semantic classes, relative to an image-only baseline.

Dong-Ki Kim, Daniel Maturana, Masashi Uenoyama, Sebastian Scherer
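
Fusing the two streams hinges on projecting 3D data into image space. A minimal pinhole-projection sketch is given below; the intrinsics and extrinsics are placeholders, and the paper's network projects learned 3D feature maps rather than raw points:

```python
import numpy as np

def project_points(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    T_cam_lidar: 4x4 transform from the LiDAR to the camera frame.
    K:           3x3 camera intrinsic matrix.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]      # 3xN, camera frame
    in_front = pts_cam[2] > 0.1                # keep points ahead of the camera
    uv = K @ pts_cam[:, in_front]
    uv = uv[:2] / uv[2]                        # perspective divide
    return uv.T, in_front                      # pixel coords per kept 3D point
```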
StalkNet: A Deep Learning Pipeline for High-Throughput Measurement of Plant Stalk Count and Stalk Width

Recently, a body of computer vision research has studied the task of high-throughput plant phenotyping (measurement of plant attributes). The goal is to estimate plant properties more rapidly and more accurately than conventional manual methods. In this work, we develop a method to measure two primary yield attributes of interest: stalk count and stalk width, which are important for many broad-acre annual crops (e.g., sorghum, sugarcane, corn, and maize). Prior work using convolutional deep neural networks for plant analysis has focused on either object detection or dense image segmentation. In our work, we develop a novel pipeline that accurately extracts both detected object regions and dense semantic segmentation, yielding both stalk counts and stalk widths. A ground robot called the Robotanist is used to deploy a high-resolution stereo imager to capture dense image data of experimental plots of sorghum plants. We validate the extracted data against ground truth from two humans who assess the traits independently, and we compare both the accuracy and the efficiency of human versus robotic measurements. Our method yields an R-squared correlation of 0.88 for stalk count and a mean absolute error of 2.77 mm for stalk width, where the average stalk width is 14.354 mm. Our approach is 30 times faster for stalk count and 270 times faster for stalk width measurement.

Harjatin Singh Baweja, Tanvir Parhar, Omeed Mirbod, Stephen Nuske
Learning Models for Predictive Adaptation in State Lattices

Approaches to autonomous navigation for unmanned ground vehicles rely on motion planning algorithms that optimize maneuvers under kinematic and environmental constraints. Algorithms that combine heuristic search with local optimization are well suited to domains where solution optimality is favored over speed and where memory resources are limited, as they often improve the optimality of solutions without increasing the sampling density. To address the runtime performance limitations of such algorithms, this paper introduces Predictively Adapted State Lattices, an extension of recombinant motion planning search space construction that adapts the representation by selecting regions to optimize using a learned model trained to predict the expected improvement. The model aids in prioritizing computations that optimize regions where significant improvement is anticipated. We evaluate the performance of the proposed method through statistical and qualitative comparisons to alternative State Lattice approaches for a simulated mobile robot with nonholonomic constraints. Results demonstrate an advance in the ability of recombinant motion planning search spaces to improve relative optimality at reduced runtime in varyingly complex environments.

Michael E. Napoli, Harel Biggie, Thomas M. Howard

Mapping

Frontmatter
Field Deployment of the Tethered Robotic eXplorer to Map Extremely Steep Terrain

Mobile robots outfitted with a supportive tether are ideal for gaining access to extreme environments for mapping when human or remote observation is not possible. This paper details a field deployment with the Tethered Robotic eXplorer (TReX) to map a steep, tree-covered rock outcrop in an open-pit gravel mine. TReX is a mobile robot designed for the purpose of mapping extremely steep and cluttered environments for geologic and infrastructure inspection. Mapping is accomplished with a 2D lidar fixed to an actuated tether spool, which rotates to produce a 3D scan only when the robot drives and manages its tether. In order to handle motion distortion, we evaluate two existing, real-time approaches to estimate the trajectory of the robot and rectify individual scans before alignment into the map: (i) a continuous-time, lidar-only approach that handles asynchronous measurements using a physically motivated, constant-velocity motion prior, and (ii) a method that computes visual odometry from streaming stereo images to use as a motion estimate during scan collection. Once rectified, individual scans are matched to the global map by an efficient variant of the ICP algorithm. Our results include a comparison of estimated maps and trajectories to ground truth (measured by a remote survey station), an example of mapping in highly cluttered terrain, and lessons learned from the deployment and continued development of TReX.

Patrick McGarey, David Yoon, Tim Tang, François Pomerleau, Timothy D. Barfoot
Towards Automatic Robotic NDT Dense Mapping for Pipeline Integrity Inspection

This paper addresses automated mapping of the remaining wall thickness of metallic pipelines in the field by means of an inspection robot equipped with Non-Destructive Testing (NDT) sensing. Set in the context of condition assessment of critical infrastructure, the integrity of arbitrary sections of the conduit is derived with a bespoke robot kinematic configuration that allows dense pipe wall thickness discrimination in the circumferential and longitudinal directions via NDT sensing with guaranteed sensor lift-off (offset of the sensor from the pipe wall), an essential barrier to overcome in cement-lined water pipelines. The data gathered represents not only a visual understanding of the condition of the pipe for asset managers, but also constitutes a quantitative input to a remaining-life calculation that determines the likelihood of future renewal or repair of the pipeline. Results are presented from deployment of the robotic device on a series of pipeline inspections, which demonstrate the feasibility of the device and sensing configuration to provide meaningful 2.5D geometric maps.

Jaime Valls Miro, Dave Hunt, Nalika Ulapane, Michael Behrens
Real-Time Semantic Mapping for Autonomous Off-Road Navigation

In this paper we describe a semantic mapping system for autonomous off-road driving with an All-Terrain Vehicle (ATV). The system's goal is to provide a richer representation of the environment than a purely geometric map, allowing it to distinguish, e.g., tall grass from obstacles. The system builds a 2.5D grid map encoding both geometric information (terrain height) and semantic information (navigation-relevant classes such as trail, grass, etc.). The geometric and semantic information are estimated online and in real-time from LiDAR and image sensor data, respectively. Using this semantic map, motion planners can create semantically aware trajectories. To achieve robust and efficient semantic segmentation, we design a custom Convolutional Neural Network (CNN) and train it with a novel dataset of labelled off-road imagery built for this purpose. We evaluate our semantic segmentation offline, showing comparable performance to the state of the art with slightly lower latency. We also show closed-loop field results with an autonomous ATV driving over challenging off-road terrain, using the semantic map in conjunction with a simple path planner. Our models and labelled dataset will be publicly available at http://dimatura.net/offroad.

Daniel Maturana, Po-Wei Chou, Masashi Uenoyama, Sebastian Scherer
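
A 2.5D semantic grid map of the kind described stores, per cell, a height estimate and a class distribution. A toy update sketch follows; the cell size, smoothing factor, class set, and in-bounds assumption are invented for illustration:

```python
import numpy as np

class SemanticGridMap:
    """2.5D map: per-cell terrain height plus a semantic class histogram."""

    def __init__(self, size=200, res=0.25, n_classes=5):
        self.res = res
        self.height = np.full((size, size), np.nan)
        self.classes = np.zeros((size, size, n_classes))  # accumulated votes

    def update(self, x, y, z, class_probs):
        """Fuse one measurement; assumes (x, y) falls inside the map."""
        i, j = int(x / self.res), int(y / self.res)
        h = self.height[i, j]
        # Exponential smoothing of terrain height from LiDAR returns
        self.height[i, j] = z if np.isnan(h) else 0.8 * h + 0.2 * z
        # Accumulate CNN class probabilities projected into the cell
        self.classes[i, j] += class_probs

    def label(self, i, j):
        return int(np.argmax(self.classes[i, j]))  # e.g. 0=trail, 1=grass, ...
```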
Boundary Wire Mapping on Autonomous Lawn Mowers

Currently, the service robot market consists mainly of floor cleaning and lawn mowing robots. While some cleaning robots already feature SLAM technology for the constrained indoor application, autonomous lawn mowers typically use an electric wire for boundary definition and for homing towards the charging station. An intermediate step towards SLAM for mowers is mapping of the boundary wire. In this work, we analyze three types of approaches for estimating the boundary of the working area of an autonomous mower: GNSS, visual odometry, and wheel-yaw odometry. We extended the latter with orientation loop closure, which gives the best overall result in estimating the metric shape of the boundary.

Nils Einecke, Jörg Deigmöller, Keiji Muro, Mathias Franzius
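
Wheel-yaw odometry, the best-performing variant above, integrates wheel-encoder distance increments with a gyro yaw. A minimal dead-reckoning sketch (the paper additionally closes the orientation loop, which is not shown here):

```python
import math

def integrate_wheel_yaw(odometry):
    """Dead-reckon a boundary path from (distance_increment, yaw) samples.

    odometry: iterable of (ds, yaw) pairs, i.e. encoder distance step in
    metres and absolute gyro yaw in radians.
    """
    x, y, path = 0.0, 0.0, [(0.0, 0.0)]
    for ds, yaw in odometry:
        x += ds * math.cos(yaw)
        y += ds * math.sin(yaw)
        path.append((x, y))
    return path  # estimated metric shape of the boundary wire
```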
A Submap Joining Based RGB-D SLAM Algorithm Using Planes as Features

This paper presents a novel RGB-D SLAM algorithm for reconstructing 3D surfaces in indoor environments. The method is a submap joining based RGB-D SLAM algorithm using planes as features, and hence is called SJBPF-SLAM. Two adjacent keyframes, with the corresponding small patches and planes observed from the keyframes, are used to build a submap. Each submap is then fused into the global map sequentially, while the global structure is gradually recovered through plane feature associations. The use of submaps significantly reduces the computational cost of the optimization process without losing any information about planes and structures. The proposed method is validated using publicly available RGB-D benchmarks and obtains good-quality trajectories and 3D models in cases that are difficult for existing RGB-D SLAM algorithms.

Jun Wang, Jingwei Song, Liang Zhao, Shoudong Huang
Mapping on the Fly: Real-Time 3D Dense Reconstruction, Digital Surface Map and Incremental Orthomosaic Generation for Unmanned Aerial Vehicles

The reduced operational cost and increased robustness of unmanned aerial vehicles have made them a ubiquitous tool in the commercial, industrial, and scientific sectors. Especially the ability to map and surveil a large area in a short amount of time makes them interesting for various applications. Generating a map in real-time is essential for first response teams in disaster scenarios such as earthquakes, floods, or avalanches, and may help other UAVs localize without the need for Global Navigation Satellite Systems. For this application, we implemented a mapping framework that incrementally generates a dense georeferenced 3D point cloud, a digital surface model, and an orthomosaic, and we support our design choices with respect to computational cost and performance in diverse terrain. For accurate estimation of the camera poses, we employ a cost-efficient sensor setup consisting of a monocular visual-inertial camera rig as well as a Global Positioning System receiver, which we fuse using an incremental smoothing algorithm. We validate our mapping framework on a synthetic dataset embedded in a hardware-in-the-loop environment and in a real-world experiment using a fixed-wing UAV. Finally, we show that our framework outperforms existing orthomosaic generation methods by an order of magnitude in terms of timing, making real-time reconstruction and orthomosaic generation feasible onboard unmanned aerial vehicles.

Timo Hinzmann, Johannes L. Schönberger, Marc Pollefeys, Roland Siegwart
Aerial and Ground-Based Collaborative Mapping: An Experimental Study

We present studies to enable aerial and ground-based collaborative mapping in GPS-denied environments. The work utilizes a system that incorporates a laser scanner, a camera, and a low-grade IMU in a miniature package which can be carried by a light-weight aerial vehicle. We also discuss a processing pipeline that involves multi-layer optimization to solve for 6-DOF ego-motion and build maps in real-time. If a map is available, the system can localize on the map and merge maps from separate runs for collaborative mapping. Experiments are conducted in urban and vegetated areas. Further, the work enables autonomous flights through cluttered environments, among buildings and trees, and at high speeds (up to 15 m/s).

Ji Zhang, Sanjiv Singh

Navigation and Planning

Frontmatter
I Can See for Miles and Miles: An Extended Field Test of Visual Teach and Repeat 2.0

Autonomous path-following systems based on the Teach and Repeat paradigm allow robots to traverse extensive networks of manually driven paths using on-board sensors. These methods are well suited for applications that involve repeated traversals of constrained paths such as factory floors, orchards, and mines. In order for path-following systems to be viable for these applications they must be able to navigate large distances over long time periods, a challenging task for vision-based systems that are susceptible to appearance change. This paper details Visual Teach and Repeat 2.0, a vision-based path-following system capable of safe, long-term navigation over large-scale networks of connected paths in unstructured, outdoor environments. These tasks are achieved through the use of a suite of novel, multi-experience, vision-based navigation algorithms. We have validated our system experimentally through an eleven-day field test in an untended gravel pit in Sudbury, Canada, where we incrementally built and autonomously traversed a 5 km network of paths. Over the span of the field test, the robot logged over 140 km of autonomous driving with an autonomy rate of 99.6%, despite experiencing significant appearance change due to lighting and weather, including driving at night using headlights.

Michael Paton, Kirk MacTavish, Laszlo-Peter Berczi, Sebastian Kai van Es, Timothy D. Barfoot
Dynamically Feasible Motion Planning for Micro Air Vehicles Using an Egocylinder

Onboard obstacle avoidance is a challenging, yet indispensable component of micro air vehicle (MAV) autonomy. Prior approaches for deliberative motion planning over vehicle dynamics typically rely on 3-D voxel-based world models, which require complex access schemes or extensive memory to manage resolution and maintain an acceptable motion-planning horizon. In this paper, we present a novel, lightweight motion planning method for micro air vehicles with full configuration-flat dynamics, based on perception with stereo vision and a 2.5-D egocylinder obstacle representation. We equip the egocylinder with temporal fusion to enhance obstacle detection and provide a rich, 360° representation of the environment well beyond the visible field-of-regard of a stereo camera pair. The natural pixel parameterization of the egocylinder is used to quickly identify dynamically feasible maneuvers onto radial paths, expressed directly in egocylinder coordinates, that enable finely detailed planning at extreme ranges within milliseconds. We have implemented our obstacle avoidance pipeline on an Asctec Pelican quadcopter, and demonstrate the efficiency of our approach experimentally in a set of challenging field scenarios.

Anthony T. Fragoso, Cevahir Cigla, Roland Brockers, Larry H. Matthies
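
The egocylinder maps range data onto a cylindrical image whose columns span azimuth and whose rows span elevation. A minimal sketch of projecting a point cloud into such a representation is given below; the resolution, elevation range, and nearest-return fusion policy are placeholders, and the paper builds the structure from stereo depth with temporal fusion rather than from a raw cloud:

```python
import numpy as np

def build_egocylinder(points, n_az=360, n_el=64, el_range=(-0.5, 0.5)):
    """Project Nx3 points (vehicle frame) into a 2.5-D cylindrical range image.

    Each cell keeps the nearest range, which is what matters for avoidance.
    """
    x, y, z = points.T
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)                          # azimuth in [-pi, pi)
    el = np.arcsin(z / np.maximum(r, 1e-6))        # elevation in radians
    cols = ((az + np.pi) / (2 * np.pi) * n_az).astype(int) % n_az
    rows = np.clip(((el - el_range[0]) / (el_range[1] - el_range[0]) * n_el)
                   .astype(int), 0, n_el - 1)
    ego = np.full((n_el, n_az), np.inf)
    np.minimum.at(ego, (rows, cols), r)            # closest return per cell
    return ego
```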
Informed Asymptotically Near-Optimal Planning for Field Robots with Dynamics

Recent progress in sampling-based planning has provided performance guarantees in terms of optimizing trajectory cost even in the presence of significant dynamics. The STABLE_SPARSE_RRT (SST) algorithm has these desirable path quality properties and achieves computational efficiency by maintaining a sparse set of state-space samples. The current paper focuses on field robotics, where workspace information can be used to effectively guide the search process of a planner. In particular, the computational performance of SST is improved by utilizing appropriate heuristics. The workspace information guides the exploration process of the planner and focuses it on the useful subset of the state space. The resulting Informed-SST is evaluated in scenarios involving either ground vehicles or quadrotors. This includes testing for a physically-simulated vehicle over uneven terrain, which is a computationally expensive planning problem.

Zakary Littlefield, Kostas E. Bekris
Strategic Autonomy for Reducing Risk of Sun-Synchronous Lunar Polar Exploration

Sun-synchronous lunar polar exploration can extend solar-powered robotic missions by an order of magnitude by following routes of continuous sunlight. However, enforcing an additional constraint for continuous Earth communication while driving puts such missions at risk. This is due to the uncertainty of singularities: static points that provide weeks of continuous sunlight where communication blackouts can be endured. The uncertainty of their existence and exact location stems from the limited accuracy of lunar models and makes dwelling at singularities a high-risk proposition. This paper proposes a new mission concept called strategic autonomy, which instead permits rovers to follow preplanned, short, slow, autonomous drives without communication to gain distance from shadow and increase confidence in sustained solar power. In this way, strategic autonomy could greatly reduce overall risk for sun-synchronous lunar polar missions.

Nathan Otten, David Wettergreen, William Whittaker
Towards Visual Teach and Repeat for GPS-Denied Flight of a Fixed-Wing UAV

Most consumer and industrial Unmanned Aerial Vehicles (UAVs) rely on combining Global Navigation Satellite Systems (GNSS) with barometric and inertial sensors for outdoor operation. As a consequence, these vehicles are prone to a variety of potential navigation failures such as jamming and environmental interference. This usually limits their legal activities to locations of low population density within line-of-sight of a human pilot, to reduce the risk of injury and damage. Autonomous route-following methods such as Visual Teach and Repeat (VT&R) have enabled long-range navigational autonomy for ground robots without reliance on external infrastructure or an accurate global position estimate. In this paper, we demonstrate the localisation component of VT&R outdoors on a fixed-wing UAV as a method of backup navigation in case of primary sensor failure. We modify the localisation engine of VT&R to work with a single downward-facing camera on a UAV, enabling safe navigation under the guidance of vision alone. We evaluate the method using visual data from the UAV flying a 1200 m trajectory (at an altitude of 80 m) several times during a multi-day period, covering a total distance of 10.8 km using the algorithm. We examine the localisation performance for both small (single flight) and large (inter-day) temporal differences from teach to repeat. Through these experiments, we demonstrate the ability to successfully localise the aircraft on a self-taught route using vision alone, without the need for additional sensing or infrastructure.

M. Warren, M. Paton, K. MacTavish, A. P. Schoellig, T. D. Barfoot
Local Path Optimizer for an Autonomous Truck in a Harbor Scenario

Recently, functional gradient algorithms like CHOMP have been very successful in producing locally optimal motion plans for articulated robots. In this paper, we adapt CHOMP to non-holonomic vehicles: an autonomous truck with a single trailer, and a differential-drive robot. An extended CHOMP with rolling constraints has been implemented on both of these setups, yielding feasible curvatures. This paper details the experimental integration of the extended CHOMP motion planner with the sensor fusion and control system of an autonomous Volvo FH-16 truck, and explains the experiments conducted on the differential-drive robot. Initial experimental investigations and results in a real-world environment show that CHOMP can produce smooth and collision-free trajectories for mobile robots and vehicles as well. In conclusion, this paper discusses the feasibility of employing CHOMP on mobile robots.

Jennifer David, Rafael Valencia, Roland Philippsen, Karl Iagnemma

Systems and Tools

Frontmatter
Field Experiments in Robotic Subsurface Science with Long Duration Autonomy

A next challenge in planetary exploration involves probing the subsurface to understand composition, to search for volatiles like water ice, or to seek evidence of life. The Mars rover missions have scraped the surface of Mars and cored rocks to make groundbreaking discoveries. Many believe that the chance of finding evidence of life increases with depth. Deploying a system that probes the subsurface brings its own challenges, and to that end we designed, built, and field-tested an autonomous robot that can collect subsurface samples using a 1 m drill. The drill operation, sample transfer, and sample analysis are all automated. The robot also navigates kilometers autonomously while making decisions about scientific measurements. The system is designed to execute multi-day science plans, stopping and resuming operation as necessary. This paper describes the robot and science instruments and the lessons learned from designing and operating such a system.

Srinivasan Vijayarangan, David Kohanbash, Greydon Foil, Kris Zacny, Nathalie Cabrol, David Wettergreen
Design and Development of Explosion-Proof Tracked Vehicle for Inspection of Offshore Oil Plant

The French oil company TOTAL and ANR (L'Agence Nationale de la Recherche) organize the ARGOS (Autonomous Robot for Gas and Oil Sites) Challenge, in which our research group had the opportunity to participate. ARGOS is a research and development competition for mobile robots capable of autonomous inspection of instruments and teleoperated information gathering in oil plants, in place of human workers. One of the features of this challenge is that robots must be constructed with explosion-proof structures, because the target plants may have explosive atmospheres. To participate in the third competition of the ARGOS Challenge in March 2017, we developed AIR-K, an explosion-proof robot. The AIR-K is divided into three parts to make it explosion-proof. According to the requirements of the robot's functions and sensors, it uses a flameproof battery enclosure (Ex 'd'), a pressurized apparatus (Ex 'p') for its body, and intrinsic safety (Ex 'i') for sensors; explosion-proofing of the robot is achieved by a combination of these methods. In this paper, we introduce the design guidelines and implementations that allow our robot to be explosion-proof.

Keiji Nagatani, Daisuke Endo, Atsushi Watanabe, Eiji Koyanagi
Life Extension: An Autonomous Docking Station for Recharging Quadrupedal Robots

In this paper we describe the design of a fully autonomous docking station for the quadrupedal robot ANYmal. The autonomous recharging of mobile robots is a crucial feature when long-term autonomy is expected or human intervention is not possible. This is the case when a robot is used in environments that pose a potential hazard to humans, such as the inspection of oil rig platforms. If operated in such explosive environments, machines are usually required to be frequently purged with inert gas to avoid ignition through electric sparking (ATEX-P certification). Our docking station allows for recharging of ANYmal's battery as well as purging of its main body with gas. We present a robust docking strategy that negotiates positioning errors of the robot through guiding elements and flexible parts. The docking mechanism itself consists of an actuated plug which is inserted into a socket on the robot's belly for electrical and mechanical connection. The mechanism is designed for reliable, sealed, and spark-free operation. The system has proven to be robust in a laboratory environment and under realistic conditions.

Hendrik Kolvenbach, Marco Hutter
Autonomous Mission with a Mobile Manipulator—A Solution to the MBZIRC

This work presents the system and approach we employed to tackle the second challenge of the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) (see http://www.mbzirc.com/challenge). The goal of this challenge is to find a tool panel on a field, pick an appropriate wrench from the panel, and operate a valve stem therewith. For this purpose we use a task-oriented field robot, based on a Clearpath Husky with a customized series elastic arm, that can be deployed for versatile purposes. However, to be competitive in a robotic challenge, further specialization and improvements are necessary to achieve a given task faster and more reliably. A high emphasis is put on designing a system that can operate fully autonomously and respond independently if a subtask is not executed successfully. Moreover, the operator can easily monitor the system through a graphical user interface and, if desired, interact with the robot. We present our algorithms to explore the field, detect the panel, and navigate to it. Furthermore, we use a support vector machine based object detection method to locate the valve stem and wrenches on the panel for visual servoing. Finally, we show the advantages of a force-controllable manipulator for handling the valve stem with a tool. This system demonstrated its applicability by fulfilling the entire task fully autonomously during both trials of the Grand Challenge of the MBZIRC 2017.

Jan Carius, Martin Wermelinger, Balasubramanian Rajasekaran, Kai Holtmann, Marco Hutter
Towards a Generic Solution for Inspection of Industrial Sites

Autonomous robotic inspection of industrial sites offers huge potential with respect to increasing human safety and operational efficiency. The present paper provides insight into the approach taken by team LIO during the ARGOS Challenge. In this international competition, the legged robot ANYmal was equipped with a sensor head to perform visual, acoustic, and thermal inspection on an oil and gas site. The robot was able to autonomously navigate the outdoor industrial facility using rotating line-LIDAR sensors for localization and terrain mapping. Thanks to the superior mobility of legged robots, ANYmal can move omni-directionally with statically and dynamically stable gaits while overcoming large obstacles and stairs. Moreover, the versatile machine can adapt its posture for inspection. The paper additionally describes the methods applied for visual inspection of pressure gauges and concludes with some of the general learnings from the ARGOS Challenge.

Marco Hutter, Remo Diethelm, Samuel Bachmann, Péter Fankhauser, Christian Gehring, Vassilios Tsounis, Andreas Lauber, Fabian Guenther, Marko Bjelonic, Linus Isler, Hendrik Kolvenbach, Konrad Meyer, Mark Hoepflinger
Foresight: Remote Sensing for Autonomous Vehicles Using a Small Unmanned Aerial Vehicle

A large number of traffic accidents, especially those involving vulnerable road users such as pedestrians and cyclists, are due to blind spots for the driver, for example when a vehicle takes a turn with poor visibility or when a pedestrian crosses from behind a parked vehicle. In these accidents, the consequences for the vulnerable road users are dramatic. Autonomous cars have the potential to drastically reduce traffic accidents thanks to high-performance sensing and reasoning. However, their perception capabilities are still limited to the field of view of their sensors. We propose to extend the perception capabilities of a vehicle, autonomous or human-driven, with a small Unmanned Aerial Vehicle (UAV) capable of taking off from the car, flying around corners to gather additional data from blind spots and landing back on the car after a mission. We present a holistic framework to detect blind spots in the map that is built by the car, plan an informative path for the drone, and detect potential threats occluded to the car. We have tested our approach with an autonomous car equipped with a drone.

Alex Wallar, Brandon Araki, Raphael Chang, Javier Alonso-Mora, Daniela Rus
Dynamic System Identification, and Control for a Cost-Effective and Open-Source Multi-rotor MAV

This paper describes dynamic system identification and full control of a cost-effective multi-rotor micro-aerial vehicle (MAV). The dynamics of the vehicle and autopilot controllers are identified using only a built-in IMU and utilized to design a subsequent model predictive controller (MPC). Control performance is evaluated using a motion capture system while performing hover, step responses, and trajectory-following tasks in the presence of external wind disturbances. We achieve root-mean-square (RMS) errors between the reference and actual trajectory of x = 0.021 m, y = 0.016 m, z = 0.029 m, roll = 0.392°, pitch = 0.618°, and yaw = 1.087° while performing hover. Although we utilize accurate state estimation provided by a motion capture system in an indoor environment, the proposed method is one of the non-trivial prerequisites for building any field or service aerial robot. This paper also conveys the insights we have gained about the commercial vehicle and returns them to the community through open-source code and documentation.

Inkyu Sa, Mina Kamel, Raghav Khanna, Marija Popović, Juan Nieto, Roland Siegwart
AirSim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles

Developing and testing algorithms for autonomous vehicles in the real world is an expensive and time-consuming process. Also, in order to utilize recent advances in machine intelligence and deep learning, we need to collect a large amount of annotated training data in a variety of conditions and environments. We present a new simulator built on Unreal Engine that offers physically and visually realistic simulations for both of these goals. Our simulator includes a physics engine that can operate at a high frequency for real-time hardware-in-the-loop (HITL) simulations with support for popular protocols (e.g. MavLink). The simulator is designed from the ground up to be extensible to accommodate new types of vehicles, hardware platforms, and software protocols. In addition, the modular design enables various components to be easily used independently in other projects. We demonstrate the simulator by first implementing a quadrotor as an autonomous vehicle and then experimentally comparing the software components with real-world flights.

Shital Shah, Debadeepta Dey, Chris Lovett, Ashish Kapoor
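
AirSim also ships a Python client for scripting vehicles. A minimal example of commanding the simulated quadrotor through it is shown below; it is based on the published client API, but exact method names and signatures can vary between releases, so treat this as a sketch:

```python
import airsim

client = airsim.MultirotorClient()       # connects to the simulator via RPC
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

client.takeoffAsync().join()             # *_Async calls return joinable futures
# NED coordinates: z is negative upward; last argument is velocity in m/s
client.moveToPositionAsync(10, 0, -5, 3).join()

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```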
Design and Development of Tether-Powered Multirotor Micro Unmanned Aerial Vehicle System for Remote-Controlled Construction Machine

In Japan, several types of natural disasters, such as floods, earthquakes, and volcanic eruptions, have occurred and will likely occur in the future. Civil engineering works are therefore required for restoration after such natural disasters, and teleoperated construction machines have been developed to facilitate these works. During the operation of teleoperated construction machines, images from various viewpoints, e.g., from the perspective of the machine or from the side of the bucket, are essential for carrying out tasks efficiently. However, in the initial response to natural disasters, it is difficult to use dedicated, conventional camera-equipped vehicles and fixed cameras on external towers to obtain such perspective images, particularly within a month after the disaster. Therefore, in this research, we propose a tether-powered multirotor micro unmanned aerial vehicle (MUAV) system to obtain images from various perspectives for the operator of a teleoperated construction machine. The features of the proposed system are (1) high voltage for transmitting electric power through a thin tether, (2) tension control of the tether under vibration and inclined conditions, and (3) wired VDSL communication between the MUAV and the helipad. In this paper, we introduce the design and implementation of the proposed system. In addition, we report the results of a field test of the tethered MUAV mounted on a construction machine.

Seiga Kiribayashi, Kaede Yakushigawa, Keiji Nagatani
Human-Robot Teaming: Concepts and Components for Design

In the past, robots were used primarily as "tools for humans." As robotics technology has advanced, however, robots have increasingly become capable of assisting humans as partners, or peers, working together to accomplish joint work. This new relationship creates a host of new interdependencies and teamwork questions that need to be addressed in order for human-robot teams to be effective. In this paper, we define communication, coordination, and collaboration as the cornerstones of human-robot teamwork. We then describe the components of teaming, including agent abilities, taskwork, metrics, and peer-to-peer interactions. Our purpose is to enable system designers to understand the factors that influence teamwork and how to structure human-robot teams to facilitate effective teaming.

Lanssie Mingyue Ma, Terrence Fong, Mark J. Micire, Yun Kyung Kim, Karen Feigh
An Analysis of Degraded Communication Channels in Human-Robot Teaming and Implications for Dynamic Autonomy Allocation

The quality of the communication channel between human-robot teammates critically influences the team's ability to perform a task safely and effectively. In this paper, we present a nine-person pilot study that investigates the effects of different degradations of that communication channel across three shared-autonomy paradigms, which differ according to how and at what level control is partitioned between the human and the autonomy. Accordingly, the rate and granularity of the human input differ for each shared-autonomy paradigm. We refer to each paradigm according to the input expected from the user, namely the high-level, mid-level, and low-level control paradigms. We find three primary insights. First, interruptions in the signal transmission (dropped signals) decrease safety and performance in modes where continuous and high-bandwidth inputs from the human are expected. Second, decreased transmission frequency offers a trade-off between safety and performance for the low-level and mid-level control paradigms. Lastly, noise alters the safety of high-level input, since the user is not continually correcting the signal. These insights inform us when to shift autonomy levels depending on the quality of the communication channel, which can vary with time. Knowing the ground truth of how the signal was degraded, we evaluate a recurrent neural network's ability to classify whether the communication channel is experiencing lowered transmission frequency, dropped signals, or noise, and we find an accuracy of 90% when operating with low-level commands. Combined with the key insights, our results indicate that a framework to dynamically allocate autonomy between the user and robot could improve overall performance.

Michael Young, Mahdieh Nejati, Ahmetcan Erdogan, Brenna Argall
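
A recurrent classifier over windows of command signals, of the kind evaluated above, can be sketched in a few lines of PyTorch. The architecture sizes, input features, and three-class labelling below are assumptions for illustration, not the paper's network:

```python
import torch
import torch.nn as nn

class ChannelDegradationClassifier(nn.Module):
    """GRU over a window of control inputs -> degradation type."""

    def __init__(self, n_features=4, hidden=64,
                 n_classes=3):  # lowered frequency / dropped / noisy
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):          # x: (batch, time, n_features)
        _, h = self.rnn(x)
        return self.head(h[-1])    # logits over degradation classes

model = ChannelDegradationClassifier()
logits = model(torch.randn(8, 100, 4))   # e.g. 1-second windows at 100 Hz
```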
LEAF: Using Semantic Based Experience to Prevent Task Failures

Using service robots at home is becoming more and more popular as a way to help people in their daily routines. Such robots are required to perform various tasks, from user notification to device manipulation. However, in such complex environments, robots sometimes fail to achieve a task. Failure is problematic, as it is unpleasant for the user and may cause critical situations. Therefore, understanding and preventing failures is a challenging need. In this paper, we propose LEAF, an experience-based approach to preventing task failure. LEAF relies on both semantic context knowledge, through an ontology, and user validation, allowing it to gain an accurate understanding of failures. It then uses this new knowledge to adapt a Hierarchical Task Network (HTN) in order to avoid selecting tasks that have a high risk of failure in the plan. LEAF was tested on the Hadaptic platform and evaluated using a randomly generated dataset.

Nathan Ramoly, Hela Sfar, Amel Bouzeghoub, Beatrice Finance
State Estimation and Localization for ROV-Based Reactor Pressure Vessel Inspection

A vision-based extended Kalman filter is proposed to estimate the state of a remotely operated vehicle (ROV) used for inspection of a nuclear reactor pressure vessel. The state estimation framework employs an overhead, pan-tilt-zoom (PTZ) camera as the primary sensing modality. In addition to the camera state, a map of the nuclear reactor vessel is also estimated from a prior. We conduct experiments to validate the framework in terms of accuracy and robustness to environmental image degradation due to speckling and color attenuation. Subscale mockup experiments highlight estimate consistency as compared to ground truth despite visually degraded operating conditions. Full-scale platform experiments are conducted using the actual inspection system in a dry setting. In this case, the ROV achieves a lower state uncertainty as compared to subscale mockup evaluation. For both subscale and full-scale experiments, the state uncertainty was robust to environmental image degradation effects.

Timothy E. Lee, Nathan Michael
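
The estimator follows the standard extended Kalman filter predict/update cycle, with observations from the overhead PTZ camera entering in the update. A generic sketch of that cycle is given below; the models, Jacobians, and noise covariances are placeholders passed in as callbacks, not the paper's specific formulation:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One extended Kalman filter cycle.

    f, h: process and measurement models; F, H: their Jacobians at x.
    Q, R: process and measurement noise covariances.
    """
    # Predict with the vehicle/camera motion model
    x_pred = f(x, u)
    Fx = F(x, u)
    P_pred = Fx @ P @ Fx.T + Q

    # Update with the PTZ-camera observation
    Hx = H(x_pred)
    y = z - h(x_pred)                          # innovation
    S = Hx @ P_pred @ Hx.T + R
    K = P_pred @ Hx.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new
```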
Metadata
Title
Field and Service Robotics
Editors
Prof. Dr. Marco Hutter
Roland Siegwart
Copyright Year
2018
Electronic ISBN
978-3-319-67361-5
Print ISBN
978-3-319-67360-8
DOI
https://doi.org/10.1007/978-3-319-67361-5