
About this book

FSR, the International Conference on Field and Service Robotics, is the leading single-track conference on robotics for field and service applications. This book presents the results of FSR2012, the eighth conference on Field and Service Robotics, which was originally planned for 2011 with the venue of Matsushima in the Tohoku region of Japan. However, on March 11, 2011, a magnitude 9.0 earthquake occurred off the Pacific coast of Tohoku, and the tsunami it triggered caused a large-scale disaster, so the conference was postponed by one year to July 2012. This earthquake raised issues concerning the contribution of field and service robotics technology to emergency scenarios, and a number of valuable lessons were learned from the operation of robots in the resulting, very real and challenging, disaster environments. Up-to-date studies on disaster response, relief, and recovery were therefore featured in the conference. This book offers 43 papers on a broad range of topics including: Disaster Response, Service/Entertainment Robots, Inspection/Maintenance Robots, Mobile Robot Navigation, Agricultural Robots, Robots for Excavation, Planetary Exploration, Large Area Mapping, SLAM for Outdoor Robots, and Elemental Technology for Mobile Robots.

Table of Contents

Frontmatter

Utilization of Robot Systems in Disaster Sites of the Great Eastern Japan Earthquake

In this paper, we report our activities in the real disaster areas damaged by the Great Eastern Japan Earthquake. From March 18–21, 2011, we worked to apply a ground rescue robot to real disaster sites in Aomori Prefecture and Iwate Prefecture. On March 18, we carried out an inspection mission in a damaged gymnasium. From March 19–21, we visited other sites to identify possible uses of robots, and we found potential needs not only for ground robots but also for underwater robots. After this first activity, we established a joint United States-Japanese team for underwater search. From April 19–23, 2011, the joint team brought four ROVs to Miyagi Prefecture for port inspection and to Iwate Prefecture to search for submerged bodies. The joint team returned to Miyagi Prefecture on October 22–26 with an AUV and two ROVs for the cooperative debris mapping needed to assist with resuming fishing. Based on these experiences, we discuss the effectiveness of, and problems with, applying rescue robots in real disaster sites.

Fumitoshi Matsuno, Noritaka Sato, Kazuyuki Kon, Hiroki Igarashi, Tetsuya Kimura, Robin Murphy

Improvements to the Rescue Robot Quince Toward Future Indoor Surveillance Missions in the Fukushima Daiichi Nuclear Power Plant

On March 11, 2011, a huge earthquake and tsunami hit eastern Japan, and four reactors in the Fukushima Daiichi Nuclear Power Plant were seriously damaged. Because of high radiation levels around the damaged reactor buildings, robotic surveillance was demanded to respond to the accident. On June 20, we delivered our rescue robot Quince, a tracked vehicle with four sub-tracks, to the Tokyo Electric Power Company (TEPCO) for damage inspection missions in the reactor buildings. Quince needed some enhancements for these missions, such as a dosimeter, additional cameras, and a cable communication system. Furthermore, stair-climbing ability and a user interface allowing easy operation by novice operators were implemented. Quince has conducted six missions in the damaged reactor buildings. In the sixth mission, on October 20, it reached the topmost floor of the reactor building of Unit 2. However, the communication cable was damaged on the way back, and Quince was left on the third floor of the reactor building. An alternative Quince has therefore recently been requested. In this paper, we report on the missions Quince performed, and introduce the enhancements of the next Quince for future missions.

Tomoaki Yoshida, Keiji Nagatani, Satoshi Tadokoro, Takeshi Nishimura, Eiji Koyanagi

Collaborative Mapping of an Earthquake Damaged Building via Ground and Aerial Robots

We report recent results from field experiments conducted with a team of ground and aerial robots toward the collaborative mapping of an earthquake damaged building. The goal of the experimental exercise is the generation of 3D maps that capture the layout of the environment and provide insight into the degree of damage inside the building. The experiments take place in the top three floors of a structurally compromised engineering building at Tohoku University in Sendai, Japan that was damaged during the 2011 Tohoku earthquake. We provide details of the approach to the collaborative mapping and report results from the experiments in the form of maps generated by the individual robots and as a team. We conclude by discussing observations from the experiments and future research topics.

Nathan Michael, Shaojie Shen, Kartik Mohta, Vijay Kumar, Keiji Nagatani, Yoshito Okada, Seiga Kiribayashi, Kazuki Otake, Kazuya Yoshida, Kazunori Ohno, Eijiro Takeuchi, Satoshi Tadokoro

Three-Dimensional Thermography Mapping for Mobile Rescue Robots

In urban search and rescue situations, a 3D map obtained using a 3D range sensor mounted on a rescue robot is very useful in determining a rescue crew’s strategy. Furthermore, thermal images captured by an infrared camera enable rescue workers to effectively locate victims. The objective of this study is to develop a 3D thermography mapping system using a 3D map and thermal images; this system is to be mounted on a tele-operated (or autonomous) mobile rescue robot. The proposed system enables the operator to understand the shape and temperature of the disaster environment at a glance. To realize the proposed system, we developed a 3D laser scanner comprising a 2D laser scanner, DC motor, and rotary electrical connector. We used a conventional infrared camera to capture thermal images. To develop a 3D thermography map, we integrated the thermal images and the 3D range data using a geometric method. Furthermore, to enable fast exploration, we propose a method for thermography mapping while the robot is in motion. This method can be realized by synchronizing the robot’s position and orientation with the obtained sensing data. The performance of the system was experimentally evaluated in real-world conditions. In addition, we extended the proposed method by introducing an improved iterative closest point (ICP) scan matching algorithm called thermo-ICP, which uses temperature information. In this paper, we report the development of (1) a 3D thermography mapping system and (2) a scan matching method using temperature information.
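As a rough illustration of the geometric integration step this abstract describes, here is a minimal sketch (ours, not the authors' implementation) that projects laser points into a calibrated thermal image and attaches a temperature to each point; the extrinsics R, t and intrinsics K are assumed known:

```python
# Hypothetical sketch: attach temperatures to 3D laser points by projecting
# them into a calibrated IR image. R, t (laser -> camera) and K are assumed.
import numpy as np

def colorize_with_temperature(points, thermal_image, K, R, t):
    """points: (N, 3) in the laser frame; thermal_image: (H, W) temperatures."""
    cam_pts = points @ R.T + t                  # laser frame -> IR camera frame
    in_front = cam_pts[:, 2] > 0.0              # keep points ahead of the camera
    uvw = cam_pts[in_front] @ K.T               # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = thermal_image.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    temps = np.full(len(points), np.nan)        # NaN where no thermal reading
    temps[np.flatnonzero(in_front)[ok]] = thermal_image[uv[ok, 1], uv[ok, 0]]
    return temps
```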

Keiji Nagatani, Kazuki Otake, Kazuya Yoshida

Creating Multi-Viewpoint Panoramas of Streets with Sparsely Located Buildings

This paper presents a method for creating multi-viewpoint panoramas that is particularly targeted at streets with sparsely located buildings. As is known in the literature, it is impossible to create panoramas of such scenes, which have a wide range of depths, in a distortion-free manner. To overcome this difficulty, our method renders sharp images only for the facades of buildings and the ground surface (e.g., vacant lands and sidewalks) along the target streets; it renders blurry images for other objects in the scene to make their geometric distortion less noticeable while maintaining their presence. To perform this, our method first estimates the three-dimensional structures of the target scenes using the results obtained by SfM (structure from motion), identifies to which category (i.e., the facade surface, the ground surface, or other objects) each scene point belongs based on MRF (Markov Random Field) optimization, and creates panoramic images of the scene by mosaicing the images of the three categories. The blurry images of objects are generated by a technique similar to the digital refocusing of light field photography. We present several panoramic images created by our method for streets in the areas along the northeastern Japan coastline devastated by the tsunami of the Great East Japan Earthquake of March 11, 2011.

Takayuki Okatani, Jun Yanagisawa, Daiki Tetsuka, Ken Sakurada, Koichiro Deguchi

Disaster Back-up Support using GIS Contents Composed of Images from Satellite and UAV

This manuscript describes a volunteer activity of reconstruction assistance after the Great East Japan Earthquake. Concretely, the authors create GIS contents for sand erosion control, composed of 3D information from the Geospatial Information Authority of Japan and a combination of wide-area satellite images with a high-resolution mosaic image generated from a movie shot by a UAV, i.e., Unmanned Aerial Vehicle, flying at low altitude. In addition, we discuss and consider the usability of the contents, taking into account comments and advice from specialists in geology.

Sota Shimizu, Taro Suzuki, Masaya Ogawa, Yoshiyuki Fukazawa, Yuzo Shibayama, Takumi Hashizume

Mine Detecting Robot System

Humanitarian demining, i.e., peaceful and non-explosive demining strategies, has been gaining worldwide acceptance lately. As part of this effort, a tele-operated mine detecting robot system was developed. This paper presents the unique demining strategy of the robot system. Two systems, called MIDERS-1 and MIDERS-2, have been developed. The system consists of a rough-terrain mobile platform, a multi-degree-of-freedom manipulator, and an all-in-one mine detecting sensor module combining ground penetrating radar and a metal detector. We propose that a cooperative demining procedure combining macroscopic and microscopic demining enhances conventional human demining. Along with the proposed methodology, the hardware configuration and functions are described.

SeungBeum Suh, JunHo Choi, ChangHyun Cho, YeonSub Jin, Seung-Yeup Hyun, Sungchul Kang

Experience in System Design for Human-Robot Teaming in Urban Search and Rescue

The paper describes experience with applying a user-centric design methodology in developing systems for human-robot teaming in Urban Search and Rescue. A human-robot team consists of several semi-autonomous robots (rovers/UGVs, microcopter/UAVs), several humans at an off-site command post (mission commander, UGV operators) and one on-site human (UAV operator). This system has been developed in close cooperation with several rescue organizations, and has been deployed in a real-life tunnel accident use case. The human-robot team jointly explores an accident site, communicating using a multi-modal team interface, and spoken dialogue. The paper describes the development of this complex socio-technical system per se, as well as recent experience in evaluating the performance of this system.

G. J. M. Kruijff, M. Janíček, S. Keshavdas, B. Larochelle, H. Zender, N. J. J. M. Smets, T. Mioch, M. A. Neerincx, J. V. Diggelen, F. Colas, M. Liu, F. Pomerleau, R. Siegwart, V. Hlaváč, T. Svoboda, T. Petříček, M. Reinstein, K. Zimmermann, F. Pirri, M. Gianni, P. Papadakis, A. Sinha, P. Balmer, N. Tomatis, R. Worst, T. Linder, H. Surmann, V. Tretyakov, S. Corrao, S. Pratzler-Wanczura, M. Sulk

Advancing the State of Urban Search and Rescue Robotics Through the RoboCupRescue Robot League Competition

The RoboCupRescue Robot League is an international competition that has grown to be an effective driver for the dissemination of solutions to the challenges posed by Urban Search and Rescue Robotics, and has accelerated the development of the performance standards that are crucial to the widespread, effective deployment of robotic systems for these applications. In this paper, we will discuss how this competition has come to be more than simply a venue where teams compete to find a champion and is now “A League of Teams with one goal: to Develop and Demonstrate Advanced Robotic Capabilities for Emergency Responders.”

Raymond Sheh, Adam Jacoff, Ann-Marie Virts, Tetsuya Kimura, Johannes Pellenz, Sören Schwertfeger, Jackrit Suthakorn

Estimating the 3D Position of Humans Wearing a Reflective Vest Using a Single Camera System

This chapter presents a novel possible solution for people detection and estimation of their 3D position in challenging shared environments. Addressing safety critical applications in industrial environments, we make the basic assumption that people wear reflective vests. In order to detect these vests and to discriminate them from other reflective material, we propose an approach based on a single camera equipped with an IR flash. The camera acquires pairs of images, one with and one without IR flash, in short succession. The images forming a pair are then related to each other through feature tracking, which makes it possible to discard features for which the relative intensity difference is small and which are thus not believed to belong to a reflective vest. Next, the local neighbourhood of the remaining features is further analysed. First, a Random Forest classifier is used to discriminate between features caused by a reflective vest and features caused by other reflective materials. Second, the distance between the camera and the vest features is estimated using a Random Forest regressor. The proposed system was evaluated in one indoor and two challenging outdoor scenarios. Our results indicate very good classification performance and remarkably accurate distance estimation, especially in combination with the SURF descriptor, even under direct exposure to sunlight.
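A minimal sketch of the flash/no-flash cue as we read it (not the authors' code); the window size and ratio threshold are illustrative:

```python
# Keep only features whose local intensity rises strongly under IR flash;
# such features are vest candidates passed on to the classifier stage.
import numpy as np

def vest_candidate_mask(flash_img, noflash_img, keypoints, win=4, min_ratio=0.5):
    """keypoints: (N, 2) integer (row, col) positions tracked between the pair."""
    mask = np.zeros(len(keypoints), dtype=bool)
    h, w = flash_img.shape
    for i, (r, c) in enumerate(keypoints):
        r0, r1 = max(r - win, 0), min(r + win + 1, h)
        c0, c1 = max(c - win, 0), min(c + win + 1, w)
        on = flash_img[r0:r1, c0:c1].astype(float).mean()
        off = noflash_img[r0:r1, c0:c1].astype(float).mean()
        # relative intensity difference; reflective material lights up strongly
        mask[i] = (on - off) / max(off, 1.0) > min_ratio
    return mask
```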

Rafael Mosberger, Henrik Andreasson

Impression of Android for Communication Support in Hospitals and Elderly Facilities

In this paper, we report the impressions of our android robot obtained in experiments, and the reactions of people to the android in demonstrations, in medical and nursing fields. Our newly developed android robot Actroid-F is utilized for this research. Since the total system for controlling the android is light and compact, it is easy to install in medical and nursing fields. In order to survey the impression of the android in the real world, we conducted a preliminary experiment utilizing the android in various environments. As a result, it was revealed that most of the subjects had no aversion to the presence of the android, and that elderly people tend to have positive impressions of it. Furthermore, we demonstrated the robot in facilities for the elderly and in a school for children with developmental disorders. Findings from the demonstrations, together with ideas for potential applications of the android based on those findings, are presented.

Yoshio Matsumoto, Masahiro Yoshikawa, Yujin Wakita, Masahiko Sumitani, Masutomo Miyao, Hiroshi Ishiguro

Multi-Robot Formation Control via a Real-Time Drawing Interface

This paper describes a system that takes real-time user input to direct a robot swarm. The user interface is via drawing, and the user can create a single drawing or an animation to be represented by robots. For example, the drawn input could be a stick figure, with the robots automatically adopting a physical configuration to represent the figure. Or the input could be an animation of a walking stick figure, with the robots moving to represent the dynamic deforming figure. Each robot has a controllable RGB LED so that the swarm can represent color drawings. The computation of robot position, robot motion, and robot color is automatic, including scaling to the available number of robots. The work is in the field of entertainment robotics for play and making robot art, motivated by the fact that a swarm of mobile robots is now affordable as a consumer product. The technical contribution of the paper is threefold. First, the paper presents shaped flocking, a novel algorithm to control multiple robots—this extends existing flocking methods so that robot behavior is driven by both flocking forces and forces arising from a target shape. Second, the new work is compared with an alternative approach from the existing literature, and the experimental results include a comparative analysis of both algorithms with metrics to compare performance. Third, the paper describes a working real-time system with results for a physical swarm of 60 differential-drive robots.
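An illustrative sketch of the shaped-flocking idea, blending flocking terms with attraction to an assigned point on the target shape; the gains and the goal-assignment step are our assumptions, not the paper's:

```python
# Each robot's commanded velocity combines separation, cohesion, and a pull
# toward its assigned sample of the drawn target shape.
import numpy as np

def shaped_flocking_step(pos, goals, k_sep=1.0, k_coh=0.2, k_shape=0.8, r_sep=0.5):
    """pos, goals: (N, 2) arrays; returns (N, 2) commanded velocities."""
    vel = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                          # vectors to all robots
        dist = np.linalg.norm(d, axis=1) + 1e-9
        near = (dist < r_sep) & (dist > 1e-6)     # neighbors, excluding self
        sep = -(d[near] / dist[near, None] ** 2).sum(axis=0)   # push apart
        coh = d.mean(axis=0)                      # toward centroid (self term is 0)
        vel[i] = k_sep * sep + k_coh * coh + k_shape * (goals[i] - pos[i])
    return vel
```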

Sandro Hauri, Javier Alonso-Mora, Andreas Breitenmoser, Roland Siegwart, Paul Beardsley

Evaluation and Training System of Muscle Strength for Leg Rehabilitation Utilizing an MR Fluid Active Loading Machine

An evaluation and training system of muscle strength for leg rehabilitation has been developed using a new conceptual loading machine. This loading machine, called the MR fluid active loading machine, mainly consists of a newly designed magneto-rheological (MR) fluid clutch and a reversible induction motor. The MR fluid clutch passively produces a transmitting torque that depends on the applied magnetic field and is almost independent of the rotational speed. Because of this feature, the MR fluid clutch is well suited to the loading machine of a rehabilitation system from the viewpoint of safety and reassurance. This system can perform isometric and isokinetic strength evaluations and isokinetic strength training. The system is also applicable to range-of-motion (ROM) training. In this paper, the methods of muscle strength evaluation and training in this system are described, and the performances of the evaluation and training modes are discussed.

Hiroshi Nakano, Masami Nakano

Automated and Frequent Calibration of a Robot Manipulator-mounted IR Range Camera for Steel Bridge Maintenance

This paper presents an approach to perform frequent hand-eye calibration of an Infrared (IR) range camera mounted to the end-effector of a robot manipulator in a field environment. A set of three reflector discs arranged in a structured pattern is attached to the robot platform to provide high contrast image features with corresponding range readings for accurate calculation of the camera-to-robot base transform. Using this approach the hand-eye transform between the IR range camera and robot end-effector can be determined by considering the robot manipulator model. Experimental results show that a structured lighting-based IR range camera can be reliably hand-eye calibrated to a six DOF robot manipulator using the presented automated approach. Once calibrated, the IR range camera can be positioned with the manipulator so as to generate a high resolution geometric map of the surrounding environment suitable for performing the grit-blasting task.

Andrew Wing Keung To, Gavin Paul, David Rushton-Smith, Dikai Liu, Gamini Dissanayake

Vertical Infrastructure Inspection Using a Quadcopter and Shared Autonomy Control

This paper presents a shared autonomy control scheme for a quadcopter that is suited for inspection of vertical infrastructure—tall man-made structures such as streetlights, electricity poles or the exterior surfaces of buildings. Current approaches to inspection of such structures are slow, expensive, and potentially hazardous. Low-cost aerial platforms with an ability to hover now have sufficient payload and endurance for this kind of task, but require significant human skill to fly. We develop a control architecture that enables synergy between the ground-based operator and the aerial inspection robot. An unskilled operator is assisted by onboard sensing and partial autonomy to safely fly the robot in close proximity to the structure. The operator uses their domain knowledge and problem solving skills to guide the robot in difficult-to-reach locations to inspect and assess the condition of the infrastructure. The operator commands the robot in a local task coordinate frame with limited degrees of freedom (DOF), for instance up/down, left/right, toward/away with respect to the infrastructure. We therefore avoid problems of global mapping and navigation while providing an intuitive interface to the operator. We describe algorithms for pole detection, robot velocity estimation with respect to the pole, and position estimation in 3D space, as well as the control algorithms and overall system architecture. We present initial results of shared autonomy of a quadcopter with respect to a vertical pole, and robot performance is evaluated by comparing with motion capture data.
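A small sketch of what commanding in a pole-relative task frame can look like; the names and frame conventions here are illustrative assumptions, not the paper's interface:

```python
# Rotate the operator's toward/left/up input by the estimated pole bearing so
# commands stay pole-relative rather than world-fixed.
import numpy as np

def task_frame_to_world(toward, left, up, pole_bearing):
    """pole_bearing: yaw (rad) from robot to pole in the world frame."""
    c, s = np.cos(pole_bearing), np.sin(pole_bearing)
    vx = toward * c - left * s      # toward/away acts along the bearing
    vy = toward * s + left * c      # left/right acts perpendicular to it
    return np.array([vx, vy, up])   # world-frame velocity command
```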

Inkyu Sa, Peter Corke

Towards Autonomous Robotic Systems for Remote Gas Leak Detection and Localization in Industrial Environments

Detection and localization of escaped hazardous gases is of great industrial and public interest in order to prevent harm to humans, nature and assets or just to prevent financial losses. The development of novel leak-detection technologies will yield better coverage of inspected objects while helping to lower plant operation costs at the same time. Moreover, inspection personnel can be relieved from repetitive work and focus on value-adding supervisory control and optimization tasks. The proposed system consists of autonomous mobile inspection robots that are equipped with several remote gas sensing devices and local intelligence. All-terrain robots with caterpillar tracks are used that can handle slopes, unpaved routes and offer maneuverability in restricted spaces as required for inspecting plants such as petroleum refineries, tank farms or chemical sites as well as sealed landfills. The robots can detect and locate gas leaks autonomously to a great extent using infrared optical spectroscopic and thermal remote sensing techniques and data processing. This article gives an overview of the components of the robotic system prototype, i.e. the robotic platform and the remote sensing and evaluation module. The software architecture, including the robot middleware and the measurement routines, is described. Results from testing autonomous mobility and object inspection functions in a large test course are presented.

Samuel Soldan, Jochen Welle, Thomas Barz, Andreas Kroll, Dirk Schulz

To the Bookstore! Autonomous Wheelchair Navigation in an Urban Environment

In this paper, we demonstrate reliable navigation of a smart wheelchair system (SWS) in an urban environment. Urban environments present unique challenges for service robots. They require localization accuracy at the sidewalk level, but compromise GPS position estimates through significant multi-path effects. However, they are also rich in landmarks that can be leveraged by feature-based localization approaches. To this end, our SWS employed a map-based localization approach. A map of the environment was acquired using a survey vehicle, synthesized a priori, and made accessible to the SWS. The map embedded not only the locations of landmarks, but also semantic data delineating 7 different landmark classes to facilitate robust data association. Landmark segmentation and tracking by the SWS was then accomplished using both 2D and 3D LIDAR systems. The resulting localization method has demonstrated decimeter-level positioning accuracy in a global coordinate frame. The localization package was integrated into a ROS framework with a sample-based motion planner and control loop running at 5 Hz to enable autonomous navigation. For validation, the SWS repeatedly navigated autonomously between Lehigh University’s Packard Laboratory and the University bookstore, a distance of approximately 1.0 km roundtrip.
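A hypothetical sketch of the class-gated data association such a semantic map enables; the gate radius and data structures are our illustration:

```python
# Match a tracked landmark only against map landmarks of the same class,
# pruning most false nearest-neighbor pairings before the position check.
import numpy as np

def associate(detection_xy, detection_class, map_landmarks, gate=2.0):
    """map_landmarks: list of (xy, class_label); returns matched index or None."""
    best, best_d = None, gate
    for i, (xy, cls) in enumerate(map_landmarks):
        if cls != detection_class:
            continue                          # semantic gate: classes must agree
        d = np.linalg.norm(np.asarray(xy) - np.asarray(detection_xy))
        if d < best_d:
            best, best_d = i, d
    return best
```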

Corey Montella, Timothy Perkins, John Spletzer, Michael Sands

A Trail-Following Robot Which Uses Appearance and Structural Cues

We describe a wheeled robotic system which navigates along outdoor “trails” intended for hikers and bikers. Through a combination of appearance and structural cues derived from stereo omnidirectional color cameras and a tiltable laser range-finder, the system is able to detect and track rough paths despite widely varying tread material, border vegetation, and illumination conditions. The approaching trail region is efficiently segmented in a top-down fashion based on color, brightness, and/or height contrast with flanking areas, and a differential motion planner searches for maximally-safe paths within that region according to several criteria. When the trail tracker’s confidence drops, the robot slows down to allow a more detailed search, and when it senses a dangerous situation due to excessive slope, dense trailside obstacles, or visual trail segmentation failure, it stops entirely to acquire and analyze a ladar-derived point cloud in order to reset the tracker. Our system’s ability to negotiate a variety of challenging trail types over long distances is demonstrated through a number of live runs through different terrain and in different weather conditions.

Christopher Rasmussen, Yan Lu, Mehmet Kocamaz

Construction of Semantic Maps for Personal Mobility Robots in Dynamic Outdoor Environments

In this paper, a system for constructing semantic maps with personal mobility robots that move in dynamic outdoor environments is proposed. The maps have topological forms based on an understanding of road structures: the nodes of the maps are intersections, and the arcs are the roads between each pair of intersections. The topological framework significantly reduces the required computational resources and enables consistent map building in environments that include loops. Trajectories of moving objects, landmarks, entrances of buildings, and traffic signs are added along each road. This framework enables personal mobility robots to recognize dangerous points or regions. The proposed system uses two laser range finders (LRFs) and one omni-directional camera. One LRF is swung by a tilt unit and reconstructs the 3D shapes of obstacles and the ground. The other LRF is fixed on the body of the robot and is used for detecting and tracking moving objects. The camera is used for localization and loop closing. We implemented the proposed system on a personal mobility robot and demonstrated its effectiveness in outdoor environments.

Naotaka Hatao, Satoshi Kagami, Ryo Hanai, Kimitoshi Yamazaki, Masayuki Inaba

Terrain Mapping and Control Optimization for a 6-Wheel Rover with Passive Suspension

Rough terrain control optimization for space rovers has become a popular and challenging research field. Improvements can be achieved in power consumption, in reducing the risk of wheels digging in, and in increasing the ability to overcome obstacles. In this paper, we propose a terrain profiling and wheel speed adjustment approach based on terrain shape estimation. This terrain estimation is performed using sensor data limited to an IMU, motor encoders, and suspension bogie angles. Markov localization was also implemented in order to accurately keep track of the rover position. Tests were conducted indoors and outdoors in low and high friction environments. Our control approach showed promising results in the high friction environment: the profiled terrain was reconstructed well and, due to wheel speed control, wheel slippage could also be decreased. In the low friction sandy test bed, however, terrain profiling still worked reasonably well, but uncertainties like wheel slip were too large for a significant control performance improvement.

Pascal Strupler, Cédric Pradalier, Roland Siegwart

Robust Monocular Visual Odometry for a Ground Vehicle in Undulating Terrain

Here we present a robust method for monocular visual odometry capable of accurate position estimation even when operating in undulating terrain. Our algorithm uses a steering model to separately recover rotation and translation. Robot 3DOF orientation is recovered by minimizing image projection error, while robot translation is recovered by solving an NP-hard optimization problem through an approximation. The decoupled estimation ensures a low computational cost. The proposed method handles undulating terrain by approximating ground patches as locally flat but not necessarily level, and recovers the inclination angle of the local ground in motion estimation. Also, it can automatically detect when the assumption is violated by analysis of the residuals. If the imaged terrain cannot be sufficiently approximated by locally flat patches, wheel odometry is used to provide robust estimation. Our field experiments show a mean relative error of less than 1 %.

Ji Zhang, Sanjiv Singh, George Kantor

Lighting-Invariant Visual Odometry using Lidar Intensity Imagery and Pose Interpolation

Recent studies have demonstrated that images constructed from lidar reflectance information exhibit superior robustness to lighting changes in outdoor environments in comparison to traditional passive stereo camera imagery. Moreover, for visual navigation methods originally developed using stereo vision, such as visual odometry (VO) and visual teach and repeat (VT&R), scanning lidar can serve as a direct replacement for the passive sensor. This results in systems that retain the efficiency of the sparse, appearance-based techniques while overcoming the dependence on adequate/consistent lighting conditions required by traditional cameras. However, due to the scanning nature of the lidar and assumptions made in previous implementations, data acquired during continuous vehicle motion suffer from geometric motion distortion and can subsequently result in poor metric VO estimates, even over short distances (e.g., 5–10 m). This paper revisits the measurement timing assumption made in previous systems, and proposes a frame-to-frame VO estimation framework based on a novel pose interpolation scheme that explicitly accounts for the exact acquisition time of each feature measurement. In this paper, we present the promising preliminary results of our new method using data generated from a lidar simulator and experimental data collected from a planetary analogue environment with a real scanning laser rangefinder.
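A minimal sketch of per-feature pose interpolation of this general kind (not the paper's exact scheme), using SciPy rotations:

```python
# Interpolate the sensor pose at each feature's acquisition timestamp
# between the two frame-endpoint poses, instead of assuming one pose per scan.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def pose_at_time(t, t0, t1, p0, p1, R0, R1):
    """p0, p1: (3,) positions; R0, R1: scipy Rotation objects at t0, t1."""
    alpha = (t - t0) / (t1 - t0)
    p = (1.0 - alpha) * p0 + alpha * p1                  # linear translation
    rots = Rotation.from_quat([R0.as_quat(), R1.as_quat()])
    R = Slerp([t0, t1], rots)([t])[0]                    # spherical interpolation
    return p, R
```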

Hang Dong, Timothy D. Barfoot

Modeling and Calibrating Visual Yield Estimates in Vineyards

Accurate yield estimates are of great value to vineyard growers to make informed management decisions such as crop thinning, shoot thinning, irrigation and nutrient delivery, preparing for harvest and planning for market. Current methods are labor intensive because they involve destructive hand sampling and are practically too sparse to capture spatial variability in large vineyard blocks. Here we report on an approach to predict vineyard yield automatically and non-destructively using images collected from vehicles driving along vineyard rows. Computer vision algorithms are applied to detect grape berries in images that have been registered together to generate high-resolution estimates. We propose an underlying model relating image measurements to harvest yield and study practical approaches to calibrate the two. We report results on datasets of several hundred vines collected both early in and in the middle of the growing season. We find that it is possible to estimate yield to within 4 % using calibration data from prior harvests and to within 3 % using calibration data from destructive hand samples at the time of imaging.
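A minimal sketch under the assumption of a roughly linear relation between visible berry counts and harvest yield (the paper's actual model may differ):

```python
# Fit yield = a * count + b on calibration vines, then predict every vine.
import numpy as np

def calibrate_and_predict(berry_counts_cal, yields_cal, berry_counts_all):
    """Least-squares calibration on hand-sampled or historical vines."""
    A = np.column_stack([berry_counts_cal, np.ones(len(berry_counts_cal))])
    (a, b), *_ = np.linalg.lstsq(A, yields_cal, rcond=None)
    return a * np.asarray(berry_counts_all) + b
```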

Stephen Nuske, Kamal Gupta, Srinivasa Narasimhan, Sanjiv Singh

Forest 3D Mapping and Tree Sizes Measurement for Forest Management Based on Sensing Technology for Mobile Robots

This research work aims at the application of sensing and mapping technologies that have been developed in mobile robotics to the measurement of forest trees. It utilizes a small-sized laser scanner and SLAM (Simultaneous Localization and Mapping) technology for the problem of forest mensuration. One of the key pieces of information required for forest management, especially in artificial forests, is accurate records of the tree sizes and the standing timber volume per unit area. The authors have built measurement equipment for a pre-production trial, consisting of small-sized laser range scanners with a rotating (scanning) mechanism. SLAM and related technologies are applied for the information extraction. In the development of the SLAM algorithm for this application, the sparseness of the standing trees and the inclination of the forest floor are considered. After performing SLAM and obtaining a map based on the data from several measurement points, we can obtain useful information including a map of the standing trees, the diameter at chest height of every tree, and the height at crown base (length of the clear bole). The authors present experimental results from the forest, including the map and the measured tree sizes.
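As an illustration of one such measurement, a least-squares (Kasa) circle fit to a horizontal slice of trunk points yields the stem diameter; this is our sketch, not necessarily the authors' estimator:

```python
# Kasa circle fit: solve 2*cx*x + 2*cy*y + c = x^2 + y^2 for center and radius.
import numpy as np

def fit_trunk_circle(xy):
    """xy: (N, 2) points from one trunk slice; returns (center, diameter)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), 2.0 * r          # diameter at the slice height
```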

Takashi Tsubouchi, Asuka Asano, Toshihiko Mochizuki, Shuhei Kondou, Keiko Shiozawa, Mitsuhiro Matsumoto, Shuhei Tomimura, Shuichi Nakanishi, Akiko Mochizuki, Yukihiro Chiba, Kouji Sasaki, Toru Hayami

Iterative Autonomous Excavation

This paper introduces a Cartesian impedance control framework in which reaction forces exceeding control authority directly reshape bucket motion during successive excavation passes. This novel approach to excavation results in an iterative process that does not require explicit prediction of terrain forces. This is in contrast to most excavation control approaches that are based on the generation, tracking and re-planning of single-pass tasks where the performance is limited by the accuracy of the prediction. In this view, a final trench profile is achieved iteratively, provided that the forces generated by the excavator are capable of removing some minimum amount of soil, maintaining convergence towards the goal. Field experiments show that a disturbance compensated controller is able to maintain convergence, and that a 2-DOF feedforward controller based on free motion inverse dynamics may not converge due to limited feedback gains.

Guilherme J. Maeda, David C. Rye, Surya P. N. Singh

Rock Recognition Using Stereo Vision for Large Rock Breaking Operation

At the work front in a quarry, many large rocks are generated by rock blasting. Since some of these rocks are too large to be fed into a rock crusher machine, a hydraulic breaker is used to break the oversized rocks into suitable sizes. The purpose of this study is the automation of the rock breaking operation at the working front of an open-pit quarry. In this paper we describe an approach using stereo vision to recognize the position and shape of large rocks. For rock recognition and rock moving experiments, we set up a scaled-down experimental environment in the laboratory and use small rocks and a robotic manipulator.

Anusorn Iamrurksiri, Takashi Tsubouchi, Shigeru Sarata

Plowing for Rover Control on Extreme Slopes

Planetary rovers are increasingly challenged to negotiate extreme terrain. Early destinations have been benign so as to preclude risk, but canyons, funnels, and newly discovered holes present steep slopes that defy tractive descent. Steep craters and holes with unconsolidated material pose a particularly treacherous danger to modern rovers. This research explores robotic braking by plowing, a novel method for decreasing slip and improving mobility while driving on steep unconsolidated slopes. This technique exploits subsurface strength that is under, not on, weak soil. Starting with experimental work on Icebreaker, a tracked rover, and concluding with detailed plow testing in a wheel test-bed, the plow is developed for use. This work explores plows of different diameters used at different depths, as well as the associated braking force. By plowing, the Icebreaker rover can successfully move on a slope with a high degree of accuracy, thereby enabling science targets on slopes and crater walls to be considered accessible.

David Kohanbash, Scott Moreland, David Wettergreen

Complementary Flyover and Rover Sensing for Superior Modeling of Planetary Features

This paper presents complementary flyover and surface exploration for reconnaissance of planetary point destinations, like skylights and polar crater rims, where local 3D detail matters. Recent breakthroughs in precise, safe landing enable spacecraft to touch down within a few hundred meters of target destinations. These precision trajectories provide unprecedented access to bird’s-eye views of the target site and enable a paradigm shift in terrain modeling and path planning. High-angle flyover views penetrate deep into concave features while low-angle rover perspectives provide detailed views of areas that cannot be seen in flight. By combining flyover and rover sensing in a complementary manner, coverage is improved and rover trajectory length is reduced by 40 %. Simulation results for modeling a lunar skylight are presented.

Heather L. Jones, Uland Wong, Kevin M. Peterson, Jason Koenig, Aashish Sheshadri, William L. Red Whittaker

Path Planning and Navigation Framework for a Planetary Exploration Rover Using a Laser Range Finder

This chapter presents a path planning and navigation framework for a planetary exploration rover and its experimental tests at a Lunar/Martian analog site. The framework developed in this work employs a laser range finder (LRF) for terrain feature mapping. The path planning algorithm generates a feasible path based on a cost function consisting of terrain inclination, terrain roughness, and path length. A set of navigation commands for the rover is then computed from the generated path. The rover executes those navigation commands to reach a desired goal. In this paper, a terrain mapping technique that uses a LRF is described along with an introduction to a cylindrical coordinate digital elevation map (C²DEM). The grid-based path planning algorithm is also presented. Field experiments regarding the path planning and navigation that evaluate the feasibility of the framework developed in this work are reported.
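An illustrative sketch of a grid cost of this kind; the weights, terms, and slope limit here are our assumptions, not the paper's:

```python
# Per-cell traversal cost blending inclination, roughness, and distance;
# a standard grid search (Dijkstra/A*) then runs over these costs.
import numpy as np

def cell_cost(inclination, roughness, step_length,
              w_inc=1.0, w_rough=1.0, w_len=0.1, max_inc=np.deg2rad(25)):
    if inclination > max_inc:
        return np.inf                      # untraversable: too steep
    return w_inc * inclination + w_rough * roughness + w_len * step_length
```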

Genya Ishigami, Masatsugu Otsuki, Takashi Kubota

Motion Analysis System for Robot Traction Device Evaluation and Design

Though much research has been conducted regarding traction of tires in soft granular terrain, little empirical data exist on the motion of soil particles beneath a tire. A novel experimentation and analysis technique has been developed to enable detailed investigation of robot interactions with granular soil. This technique, the Shear Interface Imaging Analysis method, provides visualization and analysis capability of soil shearing and flow as it is influenced by a wheel or excavation tool. The method places a half-width implement (wheel, excavation bucket, etc.) of symmetrical design in granular soil up against a transparent glass sidewall. During controlled motion of the implement, high-speed images are taken of the sub-surface soil, and are processed via optical flow software. The resulting soil displacement field is of very high fidelity and can be used for various analysis types. Identification of clusters of soil motion, shear interfaces and shearing direction/magnitude allows for analysis of the soil mechanics governing traction. The Shear Interface Imaging Analysis Tool enables analysis of robot-soil interactions in richer detail than possible before. The prior state-of-the-art technique relied on long-exposure images that provided only qualitative insight, while the new processing technique identifies sub-millimeter gradations in motion and can do so even for high frequency changes in motion. Results are presented for various wheel types and locomotion modes: small/large diameter, rigid/compliant rim, grouser implementation, and push-roll locomotion.

Scott J. Moreland, Krzysztof Skonieczny, David S. Wettergreen

Image-Directed Sampling for Geometric Modeling of Lunar Terrain

Geometric modeling from range scanners can be vastly improved by sampling the scene with a Nyquist criterion. This work presents a method to estimate frequency content a priori from intensity imagery using wavelet analysis and to utilize these estimates in efficient single-view sampling. The key idea is that under certain constrained and estimable image formation conditions, images are a strong predictor of surface frequency. This approach is explored in the context of lunar application to enhance robotic modeling. Experimentation on simulated data and in artificial lunar terrain at aerial and ground rover scales is documented. Results show up to 40 % improvement in MSE reconstruction error. Lastly, a class of image-directed range sensors is described and a hardware implementation of this paradigm on a structured light scanner is demonstrated.
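A rough sketch of the idea, assuming the PyWavelets package: local wavelet detail energy in the intensity image serves as a proxy for surface frequency and drives the relative sampling density:

```python
# One-level 2D wavelet transform; blocks with high detail energy get denser
# range sampling. Wavelet choice and gain are illustrative.
import numpy as np
import pywt

def sampling_density_map(intensity, wavelet="db2", base=1.0, gain=4.0):
    """Return a per-block relative sampling density from one wavelet level."""
    _, (h, v, d) = pywt.dwt2(intensity.astype(float), wavelet)
    detail_energy = h**2 + v**2 + d**2          # local high-frequency content
    norm = detail_energy / (detail_energy.max() + 1e-12)
    return base + gain * norm                   # denser where detail is high
```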

Uland Wong, Ben Garney, Warren Whittaker, Red Whittaker

Efficient Large-Scale 3D Mobile Mapping and Surface Reconstruction of an Underground Mine

Mapping large-scale underground environments, such as mines, tunnels, and caves is typically a time consuming and challenging endeavor. In April 2011, researchers at CSIRO were contracted to map the Northparkes Mine in New South Wales, Australia. The mine operators required a locally accurate 3D surface model in order to determine whether and how some pieces of large equipment could be moved through the decline. Existing techniques utilizing 3D terrestrial scanners mounted on tripods rely on accurate surveyed sensor positions and are relatively expensive, time consuming, and inefficient. Mobile mapping solutions have the potential to map a space more efficiently and completely; however, existing commercial systems are reliant on a GPS signal and navigation- or tactical-grade inertial systems. A 3D SLAM solution developed at CSIRO, consisting of a spinning 2D lidar and industrial-grade MEMS IMU was customized for this particular application. The system was designed to be mounted on a site vehicle which continuously acquires data at typical mine driving speeds without disrupting any mine operations. The deployed system mapped over 17 km of mine tunnel in under two hours, resulting in a dense and accurate georeferenced 3D surface model that was promptly delivered to the mine operators.

Robert Zlot, Michael Bosse

Large Scale Monocular Vision-Only Mapping from a Fixed-Wing sUAS

This paper presents the application of a monocular visual SLAM on a fixed-wing small Unmanned Aerial System (sUAS) capable of simultaneous estimation of aircraft pose and scene structure. We demonstrate the robustness of unconstrained vision alone in producing reliable pose estimates of a sUAS, at altitude. It is ultimately capable of online state estimation feedback for aircraft control and next-best-view estimation for complete map coverage without the use of additional sensors. We explore some of the challenges of visual SLAM from a sUAS including dealing with planar structure, distant scenes and noisy observations. The developed techniques are applied on vision data gathered from a fast-moving fixed-wing radio control aircraft flown over a 1 × 1 km rural area at an altitude of 20–100 m. We present both raw Structure from Motion results and a SLAM solution that includes FAB-MAP based loop-closures and graph-optimised pose. Timing information is also presented to demonstrate near online capabilities. We compare the accuracy of the 6-DOF pose estimates to an off-the-shelf GPS aided INS over a 1.7 km trajectory. We also present output 3D reconstructions of the observed scene structure and texture that demonstrates future applications in autonomous monitoring and surveying.

Michael Warren, David McKinnon, Hu He, Arren Glover, Michael Shiel, Ben Upcroft

Super-Voxel Based Segmentation and Classification of 3D Urban Landscapes with Evaluation and Comparison

Classification of urban range data into different object classes offers several challenges due to certain properties of the data, such as density variation, inconsistencies due to holes, and the large data size, which requires heavy computation and large memory. A method to classify urban scenes based on a super-voxel segmentation of sparse 3D data obtained from Lidar sensors is presented. The 3D point cloud is first segmented into voxels, which are then characterized by several attributes, transforming them into super-voxels. These are joined together by using a link-chain method rather than the usual region growing algorithm to create objects. These objects are then classified using geometrical models and local descriptors. In order to evaluate the results, a new metric is presented which combines segmentation and classification results simultaneously. The proposed method is evaluated on standard datasets using three different evaluation metrics.

Ahmad Kamal Aijazi, Paul Checchin, Laurent Trassoudaine

Classification of 3-D Point Cloud Data that Includes Line and Frame Objects on the Basis of Geometrical Features and the Pass Rate of Laser Rays

The authors aim at the classification of 3-D point cloud data in disaster environments. In this paper, we propose a classification method for 3-D point cloud data using geometrical features and the pass rate of laser rays. Line and frame objects often trap remotely operated robots, causing damage to sensors, motors, mechanical parts, etc. Using the proposed method, line and frame objects can be classified from the 3-D point cloud data. The key point is the use of the pass rate of laser rays. It is confirmed that the recognition rate of line and frame objects can be increased using the pass rate, and that the proposed classification method works in real scenes. A training facility of a Japanese fire department was used for the evaluation test because it is closer to a real disaster scene than the laboratory's test field.
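As we read it, the pass rate of a voxel compares rays that traversed it with rays that terminated in it; thin line and frame structures let most rays through and thus score high. A crude sketch with fixed-step ray marching (the actual voxel traversal is surely more careful):

```python
# Voxels a ray crosses before its return count as "passes"; the voxel
# containing the return counts as a "hit".
import numpy as np
from collections import defaultdict

def accumulate_counts(origins, endpoints, voxel=0.1, n_steps=100):
    hits, passes = defaultdict(int), defaultdict(int)
    for o, e in zip(origins, endpoints):
        hit_key = tuple(np.floor(e / voxel).astype(int))
        crossed = {tuple(np.floor((o + s * (e - o)) / voxel).astype(int))
                   for s in np.linspace(0.0, 1.0, n_steps, endpoint=False)}
        crossed.discard(hit_key)       # terminal voxel counts as a hit only
        for key in crossed:
            passes[key] += 1
        hits[hit_key] += 1
    return hits, passes

def pass_rate(key, hits, passes):
    total = hits[key] + passes[key]
    return passes[key] / total if total else 0.0
```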

Kazunori Ohno, Takahiro Suzuki, Kazuyuki Higashi, Masanobu Tsubota, Eijiro Takeuchi, Satoshi Tadokoro

Solid Model Reconstruction of Large-Scale Outdoor Scenes from 3D Lidar Data

Globally consistent 3D maps are commonly used for robot mission navigation, and teleoperation in unstructured and uncontrolled environments. These maps are typically represented as 3D point clouds; however other representations, such as surface or solid models, are often required for humans to perform scientific analyses, infrastructure planning, or for general visualization purposes. Robust large-scale solid model reconstruction from point clouds of outdoor scenes can be challenging due to the presence of dynamic objects, the ambiguity between non-returns and sky-points, and scalability requirements. Volume-based methods are able to remove spurious points arising from moving objects in the scene by considering the entire ray of each measurement, rather than simply the end point. Scalability can be addressed by decomposing the overall space into multiple tiles, from which the resulting surfaces can later be merged. We propose an approach that applies a weighted signed distance function along each measurement ray, where the weight indicates the confidence of the calculated distance. Due to the unenclosed nature of outdoor environments, we introduce a technique to automatically generate a thickened structure in order to model surfaces seen from only one side. The final solid models are thus suitable to be physically printed by a rapid prototyping machine. The approach is evaluated on 3D laser point cloud data collected from a mobile lidar in unstructured and uncontrolled environments, including outdoors and inside caves. The accuracy of the solid model reconstruction is compared to a previously developed binary voxel carving method. The results show that the weighted signed distance approach produces a more accurate reconstruction of the surface, and since higher accuracy models can be produced at lower resolutions, this additionally results in significant improvements in processing time.
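A hedged sketch of a weighted signed-distance update of the general kind described (the weighting function here is ours, not the paper's):

```python
# Each voxel keeps a running weighted average of truncated signed distance,
# with confidence falling off behind the observed surface.
def update_voxel(voxel, signed_dist, trunc=0.3):
    """voxel: dict {'d': distance, 'w': weight}; signed_dist > 0 in front."""
    if signed_dist <= -trunc:
        return                                   # far behind the surface
    d = min(signed_dist, trunc) / trunc          # truncate and normalize
    w = 1.0 if signed_dist > 0 else 1.0 + signed_dist / trunc
    voxel['d'] = (voxel['w'] * voxel['d'] + w * d) / (voxel['w'] + w)
    voxel['w'] += w
```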

Ciril Baselgia, Michael Bosse, Robert Zlot, Claude Holenstein

Lightweight Laser Scan Registration in Underground Mines with Band-based Downsampling Method

Robots operating in underground mines must accurately track their location and create maps. The rough, undulating floors typical of mine environments preclude the 2D representation of the scene integral to many existing real-time mobile robot simultaneous localization and mapping systems. On the other hand, a full 3D solution is made unrealistic by the computational expense of aligning large point clouds. This paper presents an approach that extracts high-density, horizontal bands of laser scans and uses them to represent the scene with detail sufficient to capture the moderate non-planar motion typical of mining robots. Our approach is able to operate in real-time, building maps and localizing in pace with range scanning, and is fast enough to allow continuous vehicle motion. We present details of the approach, which has been validated in an underground mine. Trial runs have shown a significant decrease in computation time without an appreciable decrease in accuracy compared with a full 3D strategy.
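A minimal sketch of band extraction as we understand it; the band heights and widths are illustrative:

```python
# Keep only points whose height falls inside a few horizontal slices of the
# scan, preserving wall geometry while discarding most floor/ceiling data.
import numpy as np

def extract_bands(points, band_centers=(0.5, 1.0, 1.5), half_width=0.05):
    """points: (N, 3); returns the subset lying in any height band (meters)."""
    z = points[:, 2]
    keep = np.zeros(len(points), dtype=bool)
    for c in band_centers:
        keep |= np.abs(z - c) < half_width
    return points[keep]
```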

James Lee, David Wettergreen, George Kantor

Featureless Visual Processing for SLAM in Changing Outdoor Environments

Vision-based SLAM is mostly a solved problem, provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High speed transit on rough terrain can lead to image blur and under/over exposure, problems that cannot easily be dealt with using low cost hardware. Furthermore, there has recently been a growth of interest in lifelong autonomy for robots, which brings with it the challenge in outdoor environments of dealing with a moving sun and a lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low cost cameras. The approach combines low resolution imagery with the SLAM algorithm RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to only learning visual scenes at one time of day, and show that the system is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
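A minimal sketch of the kind of cheap whole-image comparison that low-resolution approaches such as RatSLAM build on (our illustration, not the system's actual matcher):

```python
# Exposure-normalized comparison of heavily downsampled grayscale frames
# (e.g., 32x24); lower score means more similar scenes.
import numpy as np

def scene_difference(img_a, img_b):
    a = (img_a - img_a.mean()) / (img_a.std() + 1e-9)
    b = (img_b - img_b.mean()) / (img_b.std() + 1e-9)
    return float(np.abs(a - b).mean())

def best_match(query, stored_scenes, threshold=0.6):
    """Return (index, score) of the best stored scene, or (None, None)."""
    scores = [scene_difference(query, s) for s in stored_scenes]
    i = int(np.argmin(scores))
    return (i, scores[i]) if scores[i] < threshold else (None, None)
```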

Michael Milford, Ashley George

Gold-Fish SLAM: An Application of SLAM to Localize AGVs

The main focus of this paper is to present a case study of a SLAM solution for Automated Guided Vehicles (AGVs) operating in real-world industrial environments. The studied solution, called Gold-fish SLAM, was implemented to provide localization estimates in dynamic industrial environments, where there are static landmarks that are only rarely perceived by the AGVs. The main idea of Gold-fish SLAM is to consider the goods that enter and leave the environment as temporary landmarks that can be used in combination with the rarely seen static landmarks to compute online estimates of AGV poses. The solution is tested and verified in a paper factory using an eight-ton diesel truck retrofitted with an AGV control system running at speeds up to 3 m/s. The paper also includes a general discussion on how SLAM can be used in industrial applications with AGVs.

Henrik Andreasson, Abdelbaki Bouguerra, Björn Åstrand, Thorsteinn Rögnvaldsson

Design, Development, and Mobility Test of an Omnidirectional Mobile Robot for Rough Terrain

Omnidirectional vehicles have been widely applied in several areas, but most of them are designed for motion on flat, smooth terrain and are not feasible for outdoor usage. This paper presents an omnidirectional mobile robot that possesses high mobility in rough terrain. The robot employs four sets of mobility modules, called active split offset casters (ASOC). Each ASOC module has two independently-driven wheels that produce an arbitrary planar translational velocity, enabling the robot to achieve omnidirectional mobility. Each module is connected to the main body of the robot via a parallel link with shock absorbers. In this paper, the design and development of the ASOC-driven omnidirectional mobile robot for rough terrain are described, along with a control scheme that considers the kinematics of the robot. The omnidirectional mobility of the robot regardless of its heading direction is experimentally evaluated based on a metric called the omnidirectional mobility index.

Genya Ishigami, Elvine Pineda, Jim Overholt, Greg Hudas, Karl Iagnemma

A Vector Algebra Formulation of Mobile Robot Velocity Kinematics

Typical formulations of the forward and inverse velocity kinematics of wheeled mobile robots assume flat terrain, consistent constraints, and no slip at the wheels. Such assumptions can sometimes permit the wheel constraints to be substituted into the differential equation to produce a compact, apparently unconstrained result. However, in the general case, the terrain is not flat, the wheel constraints cannot be eliminated in this way, and they are typically inconsistent if derived from sensed information. In reality, the motion of a wheeled mobile robot (WMR) is restricted to a manifold which more-or-less satisfies the wheel slip constraints while both following the terrain and responding to the inputs. To address these more realistic cases, we have developed a formulation of WMR velocity kinematics as a differential-algebraic system—a constrained differential equation of first order. This paper presents the modeling part of the formulation. The Transport Theorem is used to derive a generic 3D model of the motion at the wheels which is implied by the motion of an arbitrarily articulated body. This wheel equation is the basis for forward and inverse velocity kinematics and for the expression of explicit constraints of wheel slip and terrain following. The result is a mathematically correct method for predicting the motion of arbitrary wheeled vehicles on arbitrary terrain subject to arbitrary constraints. We validate our formulation by applying it to a Mars rover prototype with a passive suspension in a context where ground truth measurement is easy to obtain. Our approach can constitute a key component of more informed state estimation, motion control, and motion planning algorithms for wheeled mobile robots.
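For reference, a generic statement of the Transport Theorem underlying such a wheel equation, in our notation (not necessarily the paper's):

```latex
% Velocity of wheel point w implied by body b's motion: body translation,
% rotation, and the articulation rate of the offset vector in frame b.
\[
  \vec{v}_w \;=\; \vec{v}_b \;+\; \vec{\omega}_b \times \vec{r}_{w/b}
  \;+\; \left.\frac{\mathrm{d}\,\vec{r}_{w/b}}{\mathrm{d}t}\right|_{b}
\]
```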

Alonzo Kelly, Neal Seegmiller

A Self-Learning Ground Classifier Using Radar Features

Autonomous off-road ground vehicles require advanced perception systems in order to sense and understand the surrounding environment, while ensuring robustness under compromised visibility conditions. In this paper, the use of millimeter-wave radar is proposed as a possible solution for all-weather off-road perception. A self-learning ground classifier is developed that segments radar data for scene understanding and autonomous navigation tasks. The proposed system comprises two main stages: an adaptive training stage and a classification stage. During the training stage, the system automatically learns to associate appearance of radar data with class labels. Then, it makes predictions based on past observations. The training set is continuously updated online using the latest radar readings, thus making it feasible to use the system for long range and long duration navigation, over changing environments. Experimental results, obtained with an unmanned ground vehicle operating in a rural environment, are presented to validate this approach. Conclusions are drawn on the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios.

Giulio Reina, Annalisa Milella, James Underwood

Development of a Low Cost Multi-Robot Autonomous Marine Surface Platform

In this paper, we outline a low cost multi-robot autonomous platform for a broad set of applications including water quality monitoring, flood disaster mitigation and depth buoy verification. By working cooperatively, fleets of vessels can cover large areas that would otherwise be impractical, time-consuming and prohibitively expensive for a single vessel to traverse. We describe the hardware design, control infrastructure, and software architecture of the system, while additionally presenting experimental results from several field trials. Further, we discuss our initial efforts towards developing our system for water quality monitoring, in which a team of watercraft equipped with specialized sensors autonomously samples the physical quantity being measured and provides online situational awareness to the operator regarding water quality in the observed area. From canals in New York to volcanic lakes in the Philippines, our vessels have been tested in diverse marine environments and the results obtained from initial experiments in these domains are also discussed.

A. Valada, P. Velagapudi, B. Kannan, C. Tomaszewski, G. Kantor, P. Scerri