
About this book

FSR, the International Conference on Field and Service Robotics, is a robotics symposium that has, over the past ten years, presented the latest research and practical results on the use of field and service robotics in the community, with a particular focus on proven technology. The first meeting was held in Canberra, Australia, in 1997. Since then the meeting has been held every two years, rotating between Asia, America and Europe.

Field robots are non-factory robots, typically mobile, that operate in complex and dynamic environments: on the ground (of Earth or other planets), under the ground, underwater, in the air or in space. Service robots are those that work closely with humans to help them in their lives. This book presents the results of the ninth edition of Field and Service Robotics, FSR13, held in Brisbane, Australia on 9th-11th December 2013. The conference provided a forum for researchers, professionals and robot manufacturers to exchange up-to-date technical knowledge and experience. This book offers a collection of papers spanning a broad range of topics, including: underwater robots and systems, unmanned aerial vehicle technologies and applications, agriculture, space, search and rescue, domestic robotics, robotic vision, and mapping and recognition.



Autonomous Underwater Vehicles


Hierarchical Classification in AUV Imagery

In recent years, Autonomous Underwater Vehicles (AUVs) have been used extensively to gather imagery and other environmental data for ocean monitoring. Processing this vast amount of collected imagery to label its content is difficult, expensive and time consuming. Because of this, typically only a small subset of images is labelled, and only at a small number of points. In order to make full use of the raw data returned from the AUV, this labelling process needs to be automated. In this work the single-species classification problem of [1] is extended to a multi-species classification problem following a taxonomical hierarchy. We demonstrate the application of techniques used in areas such as computer vision, text classification and medical diagnosis to the supervised hierarchical classification of benthic images. After making a comparison to flat multi-class classification, we also discuss critical aspects such as training topology and various prediction and scoring methodologies. An interesting aspect of the presented work is that the ground truth labels are sparse and incomplete, i.e. not all labels go to a leaf node, which brings with it further interesting challenges. We find that the best classification results are obtained using Local Binary Patterns (LBP), training a network of binary classifiers with probabilistic output, and applying "one-vs-rest" classification at each level of the hierarchy for prediction. This work presents a working solution that allows AUV images to be automatically labelled with the most appropriate node in a hierarchy of 19 biological groupings and morphologies. The result is that the output of the AUV system can include a semantic map using the taxonomy prescribed by marine scientists. This has the potential not only to reduce the manual labelling workload, but also to reduce the current dependence of marine scientists on extrapolating information from a relatively small number of sparsely labelled points.

M. S. Bewley, N. Nourani-Vatani, D. Rao, B. Douillard, O. Pizarro, S. B. Williams
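
The top-down "one-vs-rest" prediction scheme described in this abstract can be sketched as follows; the toy taxonomy, node names and scoring function below are illustrative stand-ins, not the trained classifiers from the paper:

```python
# Sketch of top-down "one-vs-rest" prediction in a label hierarchy.
# The taxonomy and scores here are hypothetical examples.

def predict_hierarchy(features, children, score, threshold=0.5):
    """Descend the taxonomy, at each level picking the child whose
    binary classifier gives the highest probability; stop early when
    no child is confident enough (labels need not reach a leaf)."""
    node = "root"
    path = []
    while node in children and children[node]:
        probs = {c: score(c, features) for c in children[node]}
        best = max(probs, key=probs.get)
        if probs[best] < threshold:
            break                # stay at the internal node (sparse label)
        node = best
        path.append(node)
    return path

# Toy taxonomy and hand-made probabilities for demonstration
children = {"root": ["biota", "substrate"],
            "biota": ["coral", "algae"],
            "coral": []}
fixed_probs = {"biota": 0.9, "substrate": 0.1, "coral": 0.8, "algae": 0.2}
score = lambda node, feats: fixed_probs[node]

print(predict_hierarchy(None, children, score))  # ['biota', 'coral']
```

The early-exit on low confidence mirrors the paper's observation that ground-truth labels may legitimately stop at an internal node of the hierarchy.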

Mapping 3D Underwater Environments with Smoothed Submaps

This paper presents a technique for improved mapping of complex underwater environments. Autonomous underwater vehicles (AUVs) are becoming valuable tools for inspection of underwater infrastructure, and can create 3D maps of their environment using high-frequency profiling sonar. However, the quality of these maps is limited by drift in the vehicle's navigation system. We have developed a technique for simultaneous localization and mapping (SLAM) by aligning point clouds gathered over a short time scale using the iterative closest point (ICP) algorithm. To improve alignment, we have developed a system for smoothing these "submaps" and removing outliers. We integrate the constraints from submap alignment into a 6-DOF pose graph, which is optimized to estimate the full vehicle trajectory over the duration of the inspection task. We present real-world results using the Bluefin Hovering AUV, as well as analysis of a synthetic data set.

Mark VanMiddlesworth, Michael Kaess, Franz Hover, John J. Leonard
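
The submap alignment step relies on ICP; a minimal, illustrative single iteration (brute-force matching plus the closed-form Kabsch/SVD solution) might look like the sketch below. This is a generic textbook step, not the authors' implementation:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest
    destination point, then solve the rigid alignment in closed
    form (Kabsch / SVD)."""
    # brute-force nearest-neighbour correspondences, for clarity
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # closed-form rotation and translation minimising squared error
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# toy check: a pure translation of a small grid is recovered exactly,
# because every nearest-neighbour match happens to be correct here
g = np.arange(3.0)
cloud = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
t_true = np.array([0.1, -0.2, 0.05])
R, t = icp_step(cloud, cloud + t_true)
print(np.round(t, 3))   # [ 0.1  -0.2   0.05]
```

In practice the step is iterated to convergence, and each converged submap-to-submap transform becomes one constraint in the pose graph.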

Outdoor Driving


Towards Autonomous Mobile Robots for the Exploration of Steep Terrain

Steep, natural terrain offers excellent opportunities for scientific investigations into the composition and history of Mars and other planetary bodies. In this paper, we present a prototype tethered robot, vScout (vertical scout), capable of operating in steep, rugged terrain. The primary purpose of this vehicle is to support field geologists conducting research on cliffs, in canyons, and on crater walls. However, the long-term vision is to develop a system suitable for planetary exploration (and more diverse terrestrial applications). Unlike other systems for exploration in steep terrain, vScout has demonstrated autonomous operation on steep surfaces by making use of a network of reusable paths and visual teach & repeat. Here we describe the first vScout prototype and our experiences with it. We also outline some challenges and the directions we intend to take with this research.

Braden Stenning, Lauren Bajin, Christine Robson, Valentin Peretroukhin, Gordon R. Osinski, Timothy D. Barfoot

Drivable Road Detection with 3D Point Clouds Based on the MRF for Intelligent Vehicle

In this paper, reliable road/obstacle detection with 3D point clouds for intelligent vehicles in a variety of challenging environments (undulating roads and/or uphill/downhill) is addressed. For robust detection of the road we propose the following: 1) correction of the 3D point cloud distorted by the motion of the vehicle (high speed and heading up and down) by incorporating vehicle posture information; 2) a guideline for selecting the most appropriate features, such as the gradient value and the height average of neighboring nodes; 3) transformation of the road detection problem into a classification problem over the different features; and 4) an inference algorithm based on an MRF with loopy belief propagation for the areas that the LIDAR does not cover. In experiments, we use a publicly available dataset as well as numerous scans acquired by an HDL-64E sensor mounted on an experimental vehicle in inner-city traffic scenes. The results show that the proposed method is more robust and reliable than the conventional approach based on the height value in a variety of challenging environments.

Jaemin Byun, Ki-in Na, Beom-su Seo, Myungchan Roh

Predicting Terrain Traversability from Thermal Diffusivity

This paper presents a method to predict soil traversability by estimating the thermal diffusivity of terrain using a moving, continuous-wave laser. This method differentiates between different densities of the same material, which vision-based methods alone cannot achieve. The bulk density of a granular material has a significant effect on both its mechanical behavior and its thermal properties. The approach fits the thermal response induced by the laser to an analytical model that depends on thermal diffusivity. Experimental soil strength measurements validate that thermal diffusivity is a predictor of traversability for a given material.

Chris Cunningham, Uland Wong, Kevin M. Peterson, William L. “Red” Whittaker

Modular Dynamic Simulation of Wheeled Mobile Robots

This paper presents a modular method for 3D dynamic simulation of wheeled mobile robots (WMRs). Our method extends efficient dynamics algorithms based on spatial vector algebra to accommodate any articulated WMR configuration. In contrast to some alternatives, our method also supports complex, nonlinear wheel-ground contact models. Instead of directly adding contact forces, we solve for them in a novel differential algebraic equation (DAE) formulation. To make this possible we resolve issues of nonlinearity and overconstraint. We demonstrate our method’s flexibility and speed through simulations of two state-of-the-art WMR platforms and wheel-ground contact models. Simulation accuracy is verified in a physical experiment.

Neal Seegmiller, Alonzo Kelly

Unmanned Aerial Vehicles


Autonomous River Exploration

Mapping a river's course and width provides valuable information to help understand the ecology, topology and health of a particular environment. Such maps can also be useful for determining whether specific surface vessels can traverse the river. While rivers can be mapped from satellite imagery, the presence of vegetation, sometimes so thick that the canopy completely occludes the river, complicates the process of mapping. Here we propose the use of a micro air vehicle flying under the canopy to create accurate maps of the environment. We study and present a system that can autonomously explore rivers without any prior information, and demonstrate an algorithm that can guide the vehicle based on local sensors mounted on board the flying vehicle that can perceive the river, bank and obstacles. Our field experiments demonstrate what we believe is the first autonomous exploration of rivers by an autonomous vehicle. We show the 3D maps produced by our system over runs of 100-450 meters in length and compare the guidance decisions made by our system to those made by a human piloting a boat carrying our system over multiple kilometers.

Sezal Jain, Stephen Nuske, Andrew Chambers, Luke Yoder, Hugh Cover, Lyle Chamberlain, Sebastian Scherer, Sanjiv Singh

Outdoor Flight Testing of a Pole Inspection UAV Incorporating High-speed Vision

We present a pole inspection system for outdoor environments comprising a high-speed camera on a vertical take-off and landing (VTOL) aerial platform. The pole inspection task requires a vehicle to fly close to a structure while maintaining a fixed stand-off distance from it. Typical GPS errors make GPS-based navigation unsuitable for this task, however. When flying outdoors the vehicle is also affected by aerodynamic disturbances such as wind gusts, so the onboard controller must be robust to these disturbances in order to maintain the stand-off distance. Two problems must therefore be addressed: fast and accurate state estimation without GPS, and the design of a robust controller. We resolve these problems by a) performing visual-inertial relative state estimation and b) using a robust line tracker and a nested controller design. Our state estimation exploits high-speed camera images (100 Hz) and 70 Hz IMU data fused in an Extended Kalman Filter (EKF). We demonstrate results from outdoor experiments for pole-relative hovering, and pole circumnavigation where the operator provides only yaw commands. Lastly, we show results for image-based 3D reconstruction and texture mapping of a pole to demonstrate the usefulness for inspection tasks.

Inkyu Sa, Stefan Hrabar, Peter Corke

Inspection of Penstocks and Featureless Tunnel-like Environments Using Micro UAVs

Micro UAVs are receiving a great deal of attention in many diverse applications. In this paper, we are interested in a unique application: surveillance for maintenance of large infrastructure assets such as dams and penstocks, where the goal is to periodically inspect and map the structure to detect features that might indicate the potential for failure. The availability of architectural drawings of these constructions makes the mapping problem easier. However, large structures with featureless geometry pose a significant problem, since it is difficult to design a robust localization algorithm for inspection operations. In this paper we show how a small quadrotor equipped with minimal sensors can be used for inspection of tunnel-like environments such as dam penstocks. Penstocks in particular lack features and do not provide adequate structure for robot localization, especially along the tunnel axis. We develop a Rao-Blackwellized particle filter based localization algorithm which uses a derivative of ICP for integrating laser measurements and IMU data for short-to-medium range pose estimation. To our knowledge, this is the only study in the literature focusing on localization and autonomous control of a UAV in 3D, featureless, tunnel-like environments. We show the success of our work with results from real experiments.

Tolga Özaslan, Shaojie Shen, Yash Mulgaonkar, Nathan Michael, Vijay Kumar

Autonomous Aerial Water Sampling

Obtaining spatially separated, high-frequency water samples from rivers and lakes is critical to enhance our understanding and effective management of fresh water resources. In this paper we present an aerial water sampler and verify the system in field experiments. The aerial water sampler has the potential to vastly increase the speed and range at which scientists obtain water samples while reducing cost and effort. The water sampling system includes: 1) a mechanism to capture three 20 ml samples per mission; 2) sensors and algorithms for safe navigation and altitude approximation over water; and 3) software components that integrate and analyze sensor data, control the vehicle, and drive the sampling mechanism. In this paper we validate the system in the lab, characterize key sensors, and present results of outdoor experiments. We compare water samples from local lakes obtained by our system to samples obtained by traditional sampling techniques. We find that most water properties are consistent between the two techniques. These experiments show that despite the challenges associated with flying precisely over water, it is possible to quickly obtain water samples with an Unmanned Aerial Vehicle (UAV).

John-Paul Ore, Sebastian Elbaum, Amy Burgin, Baoliang Zhao, Carrick Detweiler

Tightly-Coupled Model Aided Visual-Inertial Fusion for Quadrotor Micro Air Vehicles

The main contribution of this paper is a tightly-coupled visual-inertial fusion algorithm for simultaneous localisation and mapping (SLAM) on a quadrotor micro aerial vehicle (MAV). The proposed algorithm is based on an extended Kalman filter that uses a platform-specific dynamic model to integrate information from an inertial measurement unit (IMU) and a monocular camera on board the MAV. The MAV dynamic model exploits the unique characteristics of the quadrotor, making it possible to generate relatively accurate motion predictions. This, together with an undelayed feature initialisation strategy based on inverse depth parametrisation, enables more effective feature tracking and reliable visual SLAM with a small number of features, even during rapid manoeuvres. Experimental results are presented to demonstrate the effectiveness of the proposed algorithm.

Dinuka Abeywardena, Gamini Dissanayake
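
The EKF structure underlying this kind of model-aided fusion can be sketched generically; the constant-velocity example below is a toy stand-in for the quadrotor dynamic model, not the filter from the paper:

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One EKF cycle: predict with the dynamic model, then correct
    with a measurement (F_jac and H_jac are the model Jacobians)."""
    x_pred = f(x)                            # model-based prediction
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    H = H_jac(x_pred)
    y = z - h(x_pred)                        # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy model: 1D constant-velocity state [position, velocity],
# position measured directly; the true target moves at velocity 1.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
Q, R = 0.01 * np.eye(2), np.array([[0.01]])
for k in range(1, 11):
    x, P = ekf_step(x, P, np.array([float(k)]),
                    lambda s: F @ s, lambda s: F,
                    lambda s: H @ s, lambda s: H, Q, R)
print(round(float(x[1]), 2))   # velocity estimate near 1
```

In the paper's setting, `f` would be the quadrotor dynamic model and the measurement update would fuse IMU readings and tracked monocular features instead of direct position readings.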

Enabling Aircraft Emergency Landings Using Active Visual Site Detection

The ability to automate forced landings in an emergency such as engine failure is essential to improving the safety of Unmanned Aerial Vehicles operating in General Aviation airspace. Using active vision to detect safe landing zones below the aircraft vastly improves the reliability and safety of such systems by gathering up-to-the-minute information about the ground environment. This paper presents the Site Detection System, a methodology utilising a downward-facing camera to analyse the ground environment in both 2D and 3D, detect safe landing sites and characterise them according to size, shape, slope and nearby obstacles. A methodology is presented showing the fusion of landing site detection from 2D imagery with a coarse Digital Elevation Map and dense 3D reconstructions using INS-aided Structure-from-Motion to improve accuracy. Results are presented from an experimental flight showing the precision/recall of landing sites in comparison to a hand-classified ground truth, and improved performance with the integration of 3D analysis from visual Structure-from-Motion.

Michael Warren, Luis Mejias, Xilin Yang, Bilal Arain, Felipe Gonzalez, Ben Upcroft

INS Assisted Monocular Visual Odometry for Aerial Vehicles

The requirement to operate aircraft in GPS-denied environments can be met by the use of visual odometry. We study the case in which the height of the aircraft above the ground can be measured by an altimeter. Even with a high-quality INS whose orientation drift is negligible, random noise exists in the INS orientation. This noise leads to errors in the position estimate, which accumulate over time. Here, we solve the visual odometry problem by tightly coupling the INS and the camera. During state estimation, we virtually rotate the camera by reprojecting features with their depth direction perpendicular to the ground. This allows us to partially eliminate the error accumulation in state estimation, resulting in a slow position drift. The method is tested with data collected on a full-scale helicopter over approximately 16 km of travel. The estimation error is less than 1% of the flying distance.

Ji Zhang, Sanjiv Singh



An Attitude Controller for Small Scale Rockets

As technology has advanced, electronic components and systems have become smaller and more powerful. A similar trend holds for space systems, and satellites are no exception. As payloads become smaller, so too can the launch vehicles designed to carry them into orbital trajectories. An energy analysis shows that a rocket system with as little as tens of kilograms of fuel can be sufficient to deliver a 10 g payload into orbit, given a sufficiently low-mass autonomous rocket flight control system. To develop this, the GINA board, a 2 g sensor-laden, wireless-enabled microprocessor system, was mounted on a custom actuated rocket system and programmed for inertial flight control. Ground and flight tests demonstrated accurate dead-reckoning state estimation along with successful open-loop actuator control. Further experiments showed the capabilities of the control system at closed-loop feedback control. The results presented in this paper demonstrate the feasibility of a sufficiently low-mass flight controller, paving the way for a small-scale rocket system to deliver a 10 g attosatellite into low Earth orbit (LEO).

Florian Kehl, Ankur M. Mehta, Kristofer S. J. Pister

Posture Reconfiguration and Navigation Maneuvers on a Wheel-Legged Hydraulic Robot

Wheel-legged hybrid robots are known to be extremely capable in negotiating different types of terrain, as they combine the efficiency of conventional wheeled platforms with the rough-terrain capabilities of legged platforms. The Micro-Hydraulic Toolkit (MHT), developed by Defence Research and Development Canada at the Suffield Research Centre, is one such quadruped hybrid robot. MHT's relatively small size, mobility, actuation and locomotion types fill a gap in military unmanned ground vehicles (UGVs). Previously, a velocity-level closed-loop inverse kinematics controller had been developed and tested in simulation on a detailed physics-based model of the MHT in LMS Virtual.Lab Motion. The controller was employed to generate a variety of posture reconfiguration maneuvers, such as achieving minimum or maximum chassis height at specific wheel separations. In this paper, the aforementioned inverse kinematics controller is adapted to function on the physical MHT. Several test maneuvers, including chassis height and pitch reconfiguration and uneven terrain navigation maneuvers, were implemented on the MHT and the robot's performance was evaluated.

Christopher Yee Wong, Korhan Turker, Inna Sharf, Blake Beckman

Roll Control of an Autonomous Underwater Vehicle Using an Internal Rolling Mass

A stable autonomous underwater vehicle (AUV) is essential for underwater survey activities. Previous studies have associated poor results in bathymetry surveys and side-scan imaging with the vehicle's unwanted roll motion. The problem is becoming more prominent as AUVs get smaller, which reduces their metacentric height and weakens the inherent self-stabilization about the roll axis. In this paper, we demonstrate the use of an internal rolling mass (IRM) mechanism to actively stabilize the roll motion of an AUV. We rotate the whole electronics tray, which has an off-centric center of gravity, to produce the torque required to stabilize the roll motion. The mechanical design of this mechanism and its dynamics modeling are discussed in detail. A Proportional-Integral (PI) controller is synthesized using the identified linear model. Results from tank tests and open-field tests demonstrate the effectiveness of the mechanism in regulating the roll motion of the AUV.

Eng You Hong, Mandar Chitre

Humanoid and Space


Human Biomechanical Model Based Optimal Design of Assistive Shoulder Exoskeleton

Robotic exoskeletons are being developed to assist humans in tasks such as robotic rehabilitation, assistive living, industrial and other service applications. Exoskeletons for the upper limb are required to encompass the shoulder whilst achieving a range of motion such that they do not impede the wearer, avoid collisions with the wearer, and avoid kinematic singularities during operation. However, this is particularly challenging due to the large range of motion of the human shoulder. In this paper a biomechanical-model-based optimisation is applied to the design of a shoulder exoskeleton with the objective of maximising shoulder range of motion. A biomechanical model defines the healthy range of motion of the human shoulder. A genetic algorithm maximises the range of motion of the exoskeleton towards that of the human, whilst taking into account collisions and kinematic singularities. It is shown how the optimisation can increase the exoskeleton's range of motion towards that of the human, or towards a subset of the human range of motion relevant to specific applications.

Marc G. Carmichael, Dikai K. Liu

Lunar Micro Rover Design for Exploration through Virtual Reality Tele-operation

A micro rover, code-named Moonraker, was developed to demonstrate the feasibility of 10kg-class lunar rover missions. Requirements were established based on the Google Lunar X-Prize mission guidelines in order to effectively evaluate the prototype. A 4-wheel skid steer configuration was determined to be effective to reduce mass, maximize regolith traversability, and fit within realistic restrictions on the rover’s envelope by utilizing the top corners of the volume.

A static, hyperbolic mirror-based omnidirectional camera was selected in order to provide full 360° views around the rover, eliminating the need for a pan/tilt mechanism and motors. A front mounted, motorless MEMS laser scanner was selected for similar mass reduction qualities. A virtual reality interface is used to allow one operator to intuitively change focus between various narrow targets of interest within the wide set of fused data available from these sensors.

Lab tests were conducted on the mobility system, as well as field tests at three locations in Japan and on Mauna Kea. Moonraker was successfully teleoperated to travel over 900 m up and down a peak with slopes of up to 15°. These tests demonstrate the rover's capability to traverse lunar regolith and gather sufficient data for effective situational awareness and near real-time tele-operation.

Nathan Britton, Kazuya Yoshida, John Walker, Keiji Nagatani, Graeme Taylor, Loïc Dauphin

Mapping and Recognition


Localization and Place Recognition Using an Ultra-Wide Band (UWB) Radar

This paper presents an approach to mobile robot localization, place recognition and loop closure using a monostatic ultra-wide band (UWB) radar system. The UWB radar is a time-of-flight based range measurement sensor that transmits short pulses and receives waves reflected from objects in the environment. The main idea of the proposed localization method is to treat the received waveform as a signature of a place. The resulting echo waveform is very complex and depends strongly on the position of the sensor with respect to surrounding objects. On the other hand, the sensor receives similar waveforms from the same positions. Moreover, the directional characteristic of the dipole antenna is almost omnidirectional. Therefore, we can localize the sensor by finding similar waveforms in a waveform database. This paper proposes a place recognition method based on waveform matching, presents a number of experiments that illustrate the high position estimation accuracy of our UWB radar-based localization system, and shows the resulting loop detection performance in a typical indoor office environment and in a forest.

Eijiro Takeuchi, Alberto Elfes, Jonathan Roberts
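
In its simplest form, the waveform-as-signature idea reduces to nearest-neighbour search over a database under a waveform similarity measure. A toy sketch using normalised correlation as the score (the signals here are synthetic, not radar echoes, and the paper's actual matching method may differ):

```python
import numpy as np

def match_place(query, database):
    """Return the index of the stored waveform most similar to the
    query, scoring similarity by normalised correlation."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))
    scores = [ncc(query, w) for w in database]
    return int(np.argmax(scores)), scores

# synthetic "echoes": the noisy query should match database entry 1
t = np.linspace(0.0, 1.0, 200)
database = [np.sin(2 * np.pi * 3 * t),
            np.sin(2 * np.pi * 7 * t),
            np.exp(-5 * t)]
query = database[1] + 0.05 * np.random.default_rng(1).normal(size=200)
idx, scores = match_place(query, database)
print(idx)  # 1
```

Normalising each waveform makes the score insensitive to overall gain, which matters when echo amplitude varies between visits to the same place.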

Laser-Radar Data Fusion with Gaussian Process Implicit Surfaces

This work considers the problem of building high-fidelity 3D representations of the environment from sensor data acquired by mobile robots. Multi-sensor data fusion allows for more complete and accurate representations, and for more reliable perception, especially when different sensing modalities are used. In this paper, we propose a thorough experimental analysis of the performance of 3D surface reconstruction from laser and mm-wave radar data using Gaussian Process Implicit Surfaces (GPIS), in a realistic field robotics scenario. We first analyse the performance of GPIS using raw laser data alone and raw radar data alone, respectively, with different choices of covariance matrices and different resolutions of the input data. We then evaluate and compare the performance of two different GPIS fusion approaches. The first, state-of-the-art approach directly fuses raw data from laser and radar. The alternative approach proposed in this paper first computes an initial estimate of the surface from each single source of data, and then fuses these two estimates. We show that this method outperforms the state of the art, especially in situations where the sensors react differently to the targets they perceive.

Marcos P. Gerardo-Castro, Thierry Peynot, Fabio Ramos
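
The alternative fusion approach — estimate a surface from each sensor first, then fuse the estimates — comes down, per query point, to combining two Gaussian posteriors. A minimal sketch of that fusion step using inverse-variance weighting (the numbers are illustrative; the paper's full pipeline involves GPIS regression before this step):

```python
import numpy as np

def fuse_estimates(mu1, var1, mu2, var2):
    """Fuse two independent Gaussian surface estimates at the same
    query points by inverse-variance weighting (product of Gaussians)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    var = 1.0 / (w1 + w2)
    mu = var * (w1 * mu1 + w2 * mu2)
    return mu, var

# toy check: the confident (e.g. laser) estimate dominates the
# uncertain (e.g. radar) estimate at this query point
mu, var = fuse_estimates(np.array([1.0]), np.array([0.01]),
                         np.array([3.0]), np.array([1.0]))
print(np.round(mu, 3), np.round(var, 3))   # [1.02] [0.01]
```

The fused variance is always smaller than either input variance, which is one reason fusing per-sensor estimates can outperform discarding one modality.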

Cluster-Based SJPDAFs for Classification and Tracking of Multiple Moving Objects

This paper describes a method for classifying and tracking multiple moving objects with a laser range finder (LRF). As moving objects are tracked in the framework of sample-based joint probabilistic data association filters (SJPDAFs), the proposed method is robust against occlusions and false segmentation of LRF scans. It divides tracking targets and the corresponding LRF segments into clusters, and is able to classify each cluster as a car or a group of pedestrians. In addition, it can correct false segmentation of LRF scans. We implemented the proposed method and obtained experimental results demonstrating its effectiveness in outdoor environments and crowded indoor environments.

Naotaka Hatao, Satoshi Kagami

GPmap: A Unified Framework for Robotic Mapping Based on Sparse Gaussian Processes

This paper proposes a unified framework called GPmap for reconstructing surface meshes and building continuous occupancy maps using sparse Gaussian processes. Previously, Gaussian processes have been applied separately to surface reconstruction and occupancy mapping, with different function definitions. However, by adopting the signed distance function as the latent function and applying probabilistic least-squares classification, we solve the two problems in a single framework. Thus, two different map representations can be obtained in a single pass, for instance an object shape for grasping and an occupancy map for obstacle avoidance. Another contribution of this paper is the reduction of computational complexity for scalability. The cubic computational complexity of Gaussian processes is a well-known issue limiting their application to large-scale data. We address this by applying a sparse covariance function which makes distant data independent and thus divides both training and test data into grid blocks of manageable size. In contrast to previous work, the size of the grid blocks is determined in a principled way by learning the characteristic length-scale of the sparse covariance function from the training data. We compare theoretical complexity with previous work and demonstrate our method on structured indoor and unstructured outdoor datasets.

Soohwan Kim, Jonghyuk Kim



Purposive Sample Consensus: A Paradigm for Model Fitting with Application to Visual Odometry

RANSAC (random sample consensus) is a robust algorithm for model fitting and outlier removal; however, it is neither efficient nor reliable enough to meet the requirements of many applications where time and precision are critical. Various algorithms have been developed to improve its performance for model fitting.

A new algorithm named PURSAC (purposive sample consensus) is introduced in this paper, which has three major steps to address the limitations of RANSAC and its variants. Firstly, instead of assuming that all samples have the same probability of being inliers, PURSAC seeks out their differences and purposively selects sample sets. Secondly, as sampling noise always exists, the selection also follows from a sensitivity analysis of the model against this noise. The final step applies a local optimization to further improve model fitting performance. Tests show that PURSAC can achieve very high model fitting certainty with a small number of iterations.

Two cases are investigated for PURSAC implementation. It is applied to line fitting to explain its principles, and then to feature based visual odometry, which requires efficient, robust and precise model fitting. Experimental results demonstrate that PURSAC improves the accuracy and efficiency of fundamental matrix estimation dramatically, resulting in a precise and fast visual odometry.

Jianguo Wang, Xiang Luo
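
For reference, the baseline that PURSAC improves upon — plain RANSAC applied to the same line-fitting case — can be sketched as follows (uniform random sampling, no purposive selection; tolerance and iteration count are illustrative choices):

```python
import numpy as np

def ransac_line(points, iters=200, tol=0.05, rng=None):
    """Plain RANSAC for 2D line fitting: repeatedly fit a line to a
    random minimal sample (2 points) and keep the model with the
    largest consensus set."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        n = np.array([-d[1], d[0]])          # line normal
        norm = np.linalg.norm(n)
        if norm < 1e-12:                     # degenerate sample
            continue
        n = n / norm
        dist = np.abs((points - p) @ n)      # point-to-line distances
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# 40 points near the line y = 2x + 0.5, plus 10 gross outliers
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 40)
line_pts = np.c_[x, 2 * x + 0.5] + rng.normal(scale=0.01, size=(40, 2))
outliers = rng.uniform(0.0, 2.0, size=(10, 2))
pts = np.vstack([line_pts, outliers])
mask = ransac_line(pts)
print(mask[:40].sum())   # most of the 40 line points are recovered
```

PURSAC's changes sit inside this loop: the two sample indices are chosen purposively rather than uniformly, and the consensus model is locally optimised rather than accepted as-is.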

Cooperative Targeting: Detection and Tracking of Small Objects with a Dual Camera System

Surveillance of a scene with computer vision faces the challenge of meeting two competing design objectives simultaneously: maintain sufficient field-of-view coverage and provide adequate detail of a target object if and when it appears in the scene. In this paper we propose a dual-camera system for tracking small objects based on a stationary camera and a pan-tilt-zoom (PTZ) camera. The use of two cameras enables us to detect a small object with the stationary camera while tracking it with the second, moving camera. We present a method for modeling the dual-camera system and demonstrate how the model can be used in object detection and tracking applications. The main contribution of this paper is a model for explicitly computing the extrinsic parameters of the PTZ camera with respect to the stationary camera, in order to aim the moving camera at an object that has been detected by the stationary camera. Our mathematical model combines stereo calibration and hand-eye calibration algorithms as well as the kinematics of the pan-tilt unit, in order for the two cameras to collaborate. We present experimental results of our model in indoor and outdoor applications. The results show that our dual-camera system is an effective solution to the problem of detecting and tracking a small object with both excellent scene coverage and object detail.

Moein Shakeri, Hong Zhang

Experiments on Stereo Visual Odometry in Feature-Less Volcanic Fields

This paper describes a stereo visual odometry system for volcanic fields, which lack visual features on the ground. There are several technical problems in untextured terrain, including the diversity of terrain appearance, the lack of well-tracked features on surfaces, and the limited computational resources of onboard computers. This paper addresses these problems and enables efficient and accurate visual localization independently of terrain appearance. Several key techniques are presented, including a framework for terrain-adaptive feature detection and a motion estimation method using fewer feature points. Field experiments were conducted in volcanic fields to validate and evaluate the system's effectiveness and efficiency.

Kyohei Otsu, Masatsugu Otsuki, Takashi Kubota
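The terrain-adaptive feature detection framework is only named in the abstract; a common pattern it suggests is relaxing the detector threshold until enough features survive on low-texture ground. A minimal sketch under that assumption, with the detector passed in as a callable (all names are illustrative, not the paper's API):

```python
def detect_adaptive(detect, image, target=200, threshold=40, floor=5):
    """Halve the corner-detector threshold until at least `target`
    keypoints are found, so weakly textured volcanic ground still
    yields features. `detect(image, threshold)` returns keypoints."""
    while threshold > floor:
        keypoints = detect(image, threshold)
        if len(keypoints) >= target:
            return keypoints, threshold
        threshold //= 2
    return detect(image, floor), floor
```

With a real system the callable would wrap, for example, a FAST corner extractor; here any function with that shape works.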

Eight Weeks of Episodic Visual Navigation Inside a Non-stationary Environment Using Adaptive Spherical Views

This paper presents a long-term experiment in which a mobile robot uses adaptive spherical views to localize itself and navigate inside a non-stationary office environment. The office contains seven members of staff and experiences continuous change in its appearance over time due to their daily activities. The experiment runs as an episodic navigation task in the office over a period of eight weeks. The spherical views are stored in the nodes of a pose graph and are updated in response to changes in the environment. The updating mechanism is inspired by the concepts of long- and short-term memories. The experimental evaluation uses three performance metrics that assess the quality of both the adaptive spherical views and the navigation over time.

Feras Dayoub, Grzegorz Cielniak, Tom Duckett
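The long- and short-term memory updating of the stored views is only named in the abstract; one plausible mechanism is to promote repeatedly re-observed features into long-term memory and forget long-term features that stop being seen. The sketch below is an assumption-laden illustration (the dictionary representation and the promotion/forgetting thresholds are invented, not the paper's rules):

```python
def update_view(ltm, stm, observed, promote=3, forget=2):
    """LTM/STM-inspired update of one stored view's feature set.
    ltm maps feature id -> consecutive misses; stm maps id -> hits."""
    for f in observed:
        if f in ltm:
            ltm[f] = 0                       # re-observed: refresh LTM
        else:
            stm[f] = stm.get(f, 0) + 1
            if stm[f] >= promote:            # stable new feature -> LTM
                ltm[f] = 0
                del stm[f]
    for f in list(ltm):
        if f not in observed:
            ltm[f] += 1
            if ltm[f] > forget:              # persistently missing -> drop
                del ltm[f]
    return ltm, stm
```

Under such a scheme, transient clutter never reaches long-term memory, while furniture that is moved away eventually fades from the view.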

Domestic Robots


Human Activity Recognition for Domestic Robots

The capabilities of domestic service robots could be further improved if the robot were equipped with the ability to recognize activities performed by humans in its sensory range. For example, in a simple scenario, a floor cleaning robot can vacuum the kitchen floor after recognizing the human activity "cooking in the kitchen". Most complex human activities can be subdivided into simple activities, which can later be used to recognize the complex ones. An activity like "taking medication" can be subdivided into simple activities like "opening pill container" and "drinking water". However, even recognizing simple activities is highly challenging, due to similarities between different activities and to variation within a single activity when it is performed by different people or in different body poses and orientations. Even a simple human activity like "drinking water" can be performed while the subject is sitting, standing or walking. Therefore, building machine learning techniques to recognize human activities with such complexities is non-trivial. To address this issue, we propose a human activity recognition technique that uses 3D skeleton features produced by a depth camera. The algorithm incorporates importance weights for the skeleton's 3D joints according to the activity being performed. This allows the algorithm to ignore confusing or irrelevant features while relying on informative ones. These weighted joints are then ensembled to train Dynamic Bayesian Networks (DBNs), which are used to infer human activities based on likelihoods. The proposed activity recognition technique is tested on a publicly available dataset and in UTS experiments, with overall accuracies of 85% and 90%, respectively.

Lasitha Piyathilaka, Sarath Kodagoda
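The effect of the per-joint importance weights can be illustrated with a simple weighted pose distance; this is a generic stand-in for the idea, not the paper's DBN likelihood computation.

```python
import math

def weighted_pose_distance(pose_a, pose_b, weights):
    """Distance between two skeleton poses (lists of (x, y, z) joints)
    where each joint is scaled by an activity-specific importance
    weight, so irrelevant joints (weight ~ 0) are effectively ignored."""
    return sum(
        w * math.dist(ja, jb)
        for ja, jb, w in zip(pose_a, pose_b, weights)
    )
```

With the weight of a confusing joint set near zero, two poses that differ only in that joint compare as identical, which is the intuition behind ignoring irrelevant features.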

Building Environmental Maps of Human Activity for a Mobile Service Robot at the ”Miraikan” Museum

This paper describes environment maps that comprise the following three types of information: 1) 3D environmental changes that represent human activities, 2) 2D human trajectories that represent how humans move in the environment, and 3) human posture data. These maps are utilized to plan safer, quicker and/or non-human-disturbing paths for a mobile service robot at the museum "Miraikan". Experiments are conducted within "Miraikan" and results are presented.

Ippei Samejima, Yuma Nihei, Naotaka Hatao, Satoshi Kagami, Hiroshi Mizoguchi, Hiroshi Takemura, Akihiro Osaki

Agriculture Robots


Accuracy and Performance Experiences of Four Wheel Steered Autonomous Agricultural Tractor in Sowing Operation

In agriculture, a typical task is a coverage operation over a field. Coverage path planning algorithms can be used to create the path for a vehicle. In the case of an autonomous agricultural vehicle, the path is provided to the guidance or navigation system that steers the vehicle. In this paper, a four-wheel-steered tractor is used in an autonomous sowing operation. The full-size tractor is equipped with a 2.5 m hitch-mounted seed drill, and the developed guidance system is used to sow about six hectares of spring wheat. This paper presents the guidance accuracy results from field tests on four field plots. The guidance accuracy in terms of lateral and angular error to the path is typically less than 10 cm and one degree, respectively. The paper also presents real-life problems encountered in the field tests, including loss of the GPS positioning signal and wireless communication problems related to tractor safety.

Timo Oksanen
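For a straight path segment, the reported lateral and angular errors can be computed as a signed cross-track distance and a wrapped heading difference. A sketch under that assumption (the paper's guidance system is not specified at this level of detail):

```python
import math

def path_errors(px, py, heading, ax, ay, bx, by):
    """Lateral (cross-track) and angular error of the vehicle pose
    (px, py, heading) relative to the straight path segment A -> B.
    Lateral error is signed: positive = right of the path direction."""
    dx, dy = bx - ax, by - ay
    seg = math.hypot(dx, dy)
    lateral = ((px - ax) * dy - (py - ay) * dx) / seg
    path_heading = math.atan2(dy, dx)
    # wrap the heading difference into (-pi, pi]
    angular = (heading - path_heading + math.pi) % (2 * math.pi) - math.pi
    return lateral, angular
```

A pose 5 cm left of an eastbound path with a 0.5 degree heading offset would thus report errors well inside the paper's typical 10 cm / one degree bounds.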

Robotics for Sustainable Broad-Acre Agriculture

This paper describes the development of small, low-cost cooperative robots for sustainable broad-acre agriculture, aiming to increase crop production and reduce environmental impact. The current focus of the project is to use robotics to deal with resistant weeds, a critical problem for Australian farmers. To keep the overall system affordable, our robot uses low-cost cameras and positioning sensors to perform a large-scale coverage task while also avoiding obstacles. A multi-robot coordinator assigns parts of a given field to individual robots. The paper describes the modification of an electric vehicle for autonomy, along with experimental results from one real robot and twelve simulated robots working in coordination for approximately two hours on a 55 hectare field in Emerald, Australia. Over this time the real robot 'sprayed' 6 hectares within its assigned field partition, missing 2.6% and overlapping 9.7%, and successfully avoided three obstacles.

David Ball, Patrick Ross, Andrew English, Tim Patten, Ben Upcroft, Robert Fitch, Salah Sukkarieh, Gordon Wyeth, Peter Corke

A Pipeline for Trunk Localisation Using LiDAR in Trellis Structured Orchards

Autonomous operation and information processing in an orchard environment requires an accurate inventory of the trees. Individual trees must be identified and catalogued in order to represent their distinct measures such as yield count, crop health and canopy volume. Hand-labelling individual trees is a labour-intensive and time-consuming process. This paper presents a trunk localisation pipeline for identification of individual trees in an apple orchard using ground-based LiDAR data. Trunk candidates are detected using a Hough Transform, and the orchard inventory is refined using a Hidden Semi-Markov Model. Such a model leverages the contextual information provided by the structured, repetitive nature of an orchard. Operating at an apple orchard near Melbourne, Australia, which hosts a modern Güttingen V trellis structure, we were able to perform tree segmentation with 89% accuracy.

Suchet Bargoti, James P. Underwood, Juan I. Nieto, Salah Sukkarieh
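The Hough-style voting for trunk candidates can be caricatured in one dimension: LiDAR returns are voted into bins along the row axis, and well-supported bins become trunk candidates. The real pipeline votes over line parameters and then refines with the HSMM; the bin width and vote threshold below are assumptions for illustration only.

```python
def trunk_candidates(points_along_row, bin_width=0.1, min_votes=5):
    """Vote 1-D positions of LiDAR returns (metres along the row axis)
    into bins; bins accumulating many returns are trunk candidates.
    A 1-D stand-in for Hough voting over line parameters."""
    votes = {}
    for x in points_along_row:
        b = int(x / bin_width)
        votes[b] = votes.get(b, 0) + 1
    return sorted(b * bin_width for b, v in votes.items() if v >= min_votes)
```

Scattered canopy returns fall into many weakly supported bins and are rejected, while the dense vertical returns of a trunk concentrate their votes.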

LiDAR Based Tree and Platform Localisation in Almond Orchards

In this paper we present an approach to tree recognition and localisation in orchard environments for tree-crop applications. The method builds on the natural structure of the orchard by first segmenting the data into individual trees using a Hidden Semi-Markov Model. Second, a descriptor for representing the characteristics of the trees is introduced, allowing a Hidden Markov Model based matching method to associate new observations with an existing map of the orchard. The localisation method is evaluated on a dataset collected in an almond orchard, showing good performance and robustness both to segmentation errors and measurement noise.

Gustav Jagbrant, James Patrick Underwood, Juan Nieto, Salah Sukkarieh

A Feature Learning Based Approach for Automated Fruit Yield Estimation

This paper demonstrates a generalised multi-scale feature learning approach to multi-class segmentation, applied to the estimation of fruit yield on tree crops. The learning approach makes the algorithm flexible and adaptable to different classification problems, and hence applicable to a wide variety of tree-crop applications. Extensive experiments were performed on a dataset consisting of 8000 colour images collected in an apple orchard. This paper shows that the algorithm was able to segment apples with different sizes and colours in an outdoor environment with natural lighting conditions, with a single model obtained from images captured using a monocular colour camera. The segmentation results are applied to the problem of fruit counting and the results are compared against manual counting. The results show a squared correlation coefficient of r² = 0.81.

Calvin Hung, James Underwood, Juan Nieto, Salah Sukkarieh
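The squared correlation coefficient reported between automated and manual fruit counts is the standard Pearson r²; for reference, a dependency-free computation:

```python
def r_squared(xs, ys):
    """Squared Pearson correlation between paired count series,
    e.g. automated vs. manual fruit counts per tree."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)
```

A value of 0.81, as reported above, means the automated counts explain about 81% of the variance in the manual counts.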

Search and Rescue Robots


Visual and Inertial Odometry for a Disaster Recovery Humanoid

Disaster recovery robots must operate in unstructured environments where wheeled or tracked motion may not be feasible or may be subject to extreme slip. Many industrial disaster scenarios also preclude reliance on GPS or other external signals, as robots are deployed indoors or underground. Two of the candidates for precise positioning in these scenarios are visual odometry and inertial navigation. This paper presents some practical experience in the design and analysis of a combined visual and inertial odometry system for the Carnegie Mellon University Highly Intelligent Mobile Platform (CHIMP), a humanoid robot competing in the DARPA Robotics Challenge.

Michael George, Jean-Philippe Tardif, Alonzo Kelly

Precise Velocity Estimation for Dog Using Its Gait

We aimed to record and visualize the investigation activities of search and rescue dogs. The dog's trajectory is required to create this visualization, and the dog's velocity needs to be determined to estimate its trajectory. In this study, we examined a method for velocity estimation that uses a dog's gait. We measured a Labrador dog's gaits (walk and trot) and analyzed the gait data. From the gait data, we found that there are cyclic moments when the dog's velocity vector faces its heading direction. This fact enables the reconstruction of the velocity vector v = (v_x, v_y, v_z) from the dog's speed |v| and pose. We devised a precise estimation method for a dog's velocity and evaluated its accuracy. From the evaluation results, we confirmed that the gait-based velocity estimation was more accurate than velocity estimation based on the extended Kalman filter when |v| was obtained at 1, 5, and 10 Hz. This result can pave the way for using a mobile phone to estimate a dog's trajectory.

Naoki Sakaguchi, Kazunori Ohno, Eijiro Takeuchi, Satoshi Tadokoro
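At the cyclic gait instants described above, where the velocity vector aligns with the dog's heading, recovering the velocity vector is a direct resolution of the speed |v| along the heading. A planar sketch of that step (the study itself works with full 3-D pose):

```python
import math

def velocity_at_aligned_instant(speed, heading):
    """At a gait instant where the dog's velocity faces its heading,
    the planar velocity vector is the speed resolved along the
    heading angle (radians, measured from the x-axis)."""
    return speed * math.cos(heading), speed * math.sin(heading)
```

Integrating these instantaneous vectors between aligned gait instants yields the trajectory estimate the visualization relies on.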

