
2014 | Book

Experimental Robotics

The 12th International Symposium on Experimental Robotics

Edited by: Oussama Khatib, Vijay Kumar, Gaurav Sukhatme

Publisher: Springer Berlin Heidelberg

Book series: Springer Tracts in Advanced Robotics


About this book

The International Symposium on Experimental Robotics (ISER) is a series of biennial meetings organized in a rotating fashion around North America, Europe and Asia/Oceania. The goal of ISER is to provide a forum for robotics research that focuses on the novelty of theoretical contributions validated by experimental results. The meetings are conceived to bring together, in a small-group setting, researchers from around the world who are at the forefront of experimental robotics research.

This unique reference presents the latest advances across the various fields of robotics, with ideas that are not only conceived conceptually but also explored experimentally. It collects contributions on current developments and new directions in experimental robotics, based on the papers presented at the 12th ISER, held on December 18-21, 2010 in New Delhi and Agra, India. This twelfth edition of Experimental Robotics, edited by Oussama Khatib, Vijay Kumar and Gaurav Sukhatme, offers in its eight-chapter volume a collection spanning a broad range of topics in field and human-centered robotics.

Table of Contents

Frontmatter
An Optimization-Based Estimation and Adaptive Control Approach for Human-Robot Cooperation

This paper presents a novel robot programming approach for actively assisting humans in human-robot cooperation tasks. First, the paper discusses an invariant description-based parametric modeling approach for six degree-of-freedom motion trajectories. This generic approach facilitates building a library of motion models in a systematic way. Second, the paper presents a constrained optimization-based parameter estimation technique for estimating the motion model parameters; both batch and recursive schemes are presented. Third, the paper presents a control architecture based on our constraint-based task specification approach iTaSC, which supports including secondary task objectives or inequality constraints (for example, joint limits) in the robot task definition. The control architecture is exemplified using the KUKA LWR 4 robot and the Orocos robot control software. Experimental results clearly indicate the potential of the approach by showing significantly lower human-robot interaction forces compared to classical admittance control.

Wilm Decré, Herman Bruyninckx, Joris De Schutter
Motion-Language Association Model for Human-Robot Communication

Language is a symbolic system from which human intelligence originates, and a robot is expected to use language. Large text corpora have recently become available, and much human knowledge lies within them; it is therefore important for the robot to acquire this knowledge and develop the ability to use language. This paper describes a novel approach in which a humanoid robot makes linguistic inferences by using language knowledge from a dictionary. Sentences are interpreted as broader concepts according to the dictionary, and the associations between these broader concepts are represented stochastically. The abstract referential relationship between sentences and broader concepts, together with the associative relationships among the broader concepts, allows the robot to make linguistic inferences by generating new sentences from an input sentence. As one application of the proposed linguistic inference, the framework is integrated with symbols of motion patterns. The developed application demonstrates the validity of the proposed framework: the robot can make linguistic inferences and subsequently associate motion patterns.

Wataru Takano, Minoru Kanazawa, Yoshihiko Nakamura
Grounding Verbs of Motion in Natural Language Commands to Robots

To be useful teammates to human partners, robots must be able to follow spoken instructions given in natural language. An important class of instructions involves interacting with people, such as “Follow the person to the kitchen” or “Meet the person at the elevators.” These instructions require that the robot fluidly react to changes in the environment, not simply follow a pre-computed plan. We present an algorithm for understanding natural language commands with three components. First, we create a cost function that scores the language according to how well it matches a candidate plan in the environment, defined as the log-likelihood of the plan given the command. Components of the cost function include novel models for the meanings of motion verbs such as “follow,” “meet,” and “avoid,” as well as spatial relations such as “to” and landmark phrases such as “the kitchen.” Second, an inference method uses this cost function to perform forward search, finding a plan that matches the natural language command. Third, a high-level controller repeatedly calls the inference method at each timestep to compute a new plan in response to changes in the environment, such as the movement of the human partner or other people in the scene. When a command consists of more than a single task, the controller switches to the next task when an earlier one is satisfied. We evaluate our approach on a set of example tasks that require the ability to follow both simple and complex natural language commands.
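As a rough illustration of the scoring idea described above, the sketch below ranks candidate plans by the negative log-likelihood of the command's grounded parts. The command/plan dictionaries and the grounding probability functions are hypothetical stand-ins for the paper's learned verb, spatial-relation and landmark models.

```python
import math

# Hypothetical grounding models: each returns P(phrase | candidate plan),
# standing in for the paper's learned models of verbs, spatial relations
# and landmark phrases.
def p_verb(verb, plan):            # e.g. "follow", "meet", "avoid"
    return plan["verb_scores"].get(verb, 1e-6)

def p_relation(relation, plan):    # e.g. "to"
    return plan["relation_scores"].get(relation, 1e-6)

def p_landmark(landmark, plan):    # e.g. "the kitchen"
    return plan["landmark_scores"].get(landmark, 1e-6)

def plan_cost(command, plan):
    """Negative log-likelihood of the plan given the parsed command."""
    return -(math.log(p_verb(command["verb"], plan))
             + math.log(p_relation(command["relation"], plan))
             + math.log(p_landmark(command["landmark"], plan)))

def best_plan(command, candidate_plans):
    """Forward-search stand-in: pick the candidate plan with minimal cost."""
    return min(candidate_plans, key=lambda plan: plan_cost(command, plan))
```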

Thomas Kollar, Stefanie Tellex, Deb Roy, Nicholas Roy
Mightability: A Multi-state Visuo-spatial Reasoning for Human-Robot Interaction

We humans are capable of estimating various abilities of ourselves and of the person we are interacting with; visibility and reachability are two such abilities. Studies in neuroscience and psychology suggest that from the age of 12-15 months children start to understand the occlusion of others' line of sight, and from the age of 3 years they start to develop the ability, termed perceived reachability, for self and for others. As such capabilities evolve, children start showing intuitive and proactive behavior by perceiving various abilities of their human partner.

Inspired by such studies, which suggest that visuo-spatial perception plays an important role in human-human interaction, we propose to equip our robot to perceive various types of abilities of the agents in its workspace. The robot perceives such abilities not only from the current state of an agent but also by virtually putting the agent into various achievable states, such as turning left or standing up. As the robot estimates what an agent might be able to 'see' and 'reach' if it were in a particular state, we term such analyses Mightability Analyses. Currently the robot performs Mightability analyses at two levels: cells in a 3D grid and objects in the space, which we term Mightability Maps (MM) and Object Oriented Mightabilities (OOM), respectively.

We have shown applications of Mightability analyses in performing various cooperative tasks, such as showing an object to the human and making it accessible, as well as competitive tasks, such as hiding an object from the human or putting it away. Such Mightability analyses equip the robot with higher-level learning and decisional capabilities, and could facilitate richer verbal interaction and proactive behavior.

Amit Kumar Pandey, Rachid Alami
Learning from Demonstration: A Study of Visual and Auditory Communication and Influence Diagrams

Learning from demonstration (LfD) utilizes human expertise to program a robot. We believe this approach to robot programming will facilitate the development and deployment of general-purpose personal robots that can adapt to specific user preferences. Demonstrations can potentially take place across a wide variety of environmental conditions. In this paper we study the impact that the user's visual access to the robot, or lack thereof, has on teaching performance. Based on the obtained results, we then address how a robot can provide additional information to an instructor during the LfD process, to optimize the two-way process of teaching and learning. Finally, we describe a novel Bayesian approach to generating task policies from demonstration data.

Nathan Koenig, Leila Takayama, Maja J. Matarić
Reducing Uncertainty in Human-Robot Interaction: A Cost Analysis Approach

We present a technique for robust human-robot interaction that takes into consideration uncertainty in the input and the task execution costs incurred by the robot. Specifically, this research aims to quantitatively model the confirmation feedback required by a robot while communicating with a human operator to perform a particular task. Our goal is to model human-robot interaction from the perspective of risk minimization, taking into account errors in communication, the “risk” involved in performing the required task, and task execution costs. Given an input modality with non-trivial uncertainty, we calculate the cost associated with performing the task specified by the user and, if deemed necessary, ask the user for confirmation. The estimated task cost and the uncertainty measure are given as input to a Decision Function, the output of which is then used to decide whether to execute the task or request clarification from the user. In cases where the cost or uncertainty (or both) is estimated to be exceedingly high by the system, task execution is deferred until a significant reduction in the output of the Decision Function is achieved. We test our system through human-interface experiments, based on a framework custom-designed for our family of amphibious robots, and demonstrate the utility of the framework in the presence of large task costs and uncertainties. We also present qualitative results of our algorithm from field trials of our robots in both open- and closed-water environments.
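A minimal sketch of how such a Decision Function might combine the two quantities; the linear weighting and the thresholds are illustrative assumptions, not the authors' calibrated values.

```python
def decision_value(task_cost, uncertainty, w_cost=1.0, w_unc=1.0):
    """Hypothetical Decision Function: combines the estimated task cost and
    the input-uncertainty measure into a single scalar risk value."""
    return w_cost * task_cost + w_unc * uncertainty

def decide(task_cost, uncertainty, execute_threshold=0.3, defer_threshold=0.8):
    """Three-way decision: execute, ask the user to confirm, or defer."""
    d = decision_value(task_cost, uncertainty)
    if d < execute_threshold:
        return "execute"
    elif d < defer_threshold:
        return "ask_for_confirmation"
    else:
        return "defer"   # wait until the cost/uncertainty estimate drops
```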

Junaed Sattar, Gregory Dudek
Interface Design and Control Strategies for a Robot Assisted Ultrasonic Examination System

This paper presents a new robotic system designed to assist sonographers in performing ultrasound examinations by addressing common limitations of sonography, namely the physical fatigue that can result from performing the examination and the difficulty in interpreting ultrasound data. The proposed system comprises a robot manipulator that operates the transducer, and an integrated user interface that offers 3D visualization and a haptic device as the main user interaction tool. The sonographer controls the slave robot movements either haptically (collaborative tele-operation mode) or by prior programming of a desired path (semi-automatic mode). A force controller maintains a constant contact force between the transducer and the patient’s skin while the robot drives the transducer to the desired anatomical locations. The ultrasound imaging system is connected to a 3D visualization application which registers in real time the streaming 2D images generated by the transducer and displays the resulting data as a 3D volumetric representation that can be further examined off-line.

François Conti, Jaeheung Park, Oussama Khatib
A Novel Discretely Actuated Steerable Probe for Percutaneous Procedures

We have developed a discretely actuated steerable probe for percutaneous procedures. We propose the use of shape memory alloy (SMA) actuators in our design due to their small size and high power density. SMAs are attractive actuators when large forces or displacements are required and only limited space is available. The SMA actuators are shape-set into an arc and mounted on the outer surface of the probe to generate a bending action upon thermal actuation. SMA can recover large deformations on thermal activation, and the recovery strain is related to its temperature. Hence, we propose controlling the temperature of the SMA actuators for position control of the probe by heating the SMA wires at each joint. A pulse-width modulation (PWM) based control scheme is used so that all SMA wires can be controlled simultaneously; PWM is implemented via a switching circuit. The proposed controller is validated through an experiment in which the SMA wires are heated to a desired temperature. Another experiment is carried out inside gelatin to mimic the motion of the probe inside soft tissue. PWM control was successfully implemented, and we were able to demonstrate local actuation of the steerable probe.
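A sketch of the kind of per-wire duty-cycle computation such a scheme could use, assuming a simple proportional temperature controller and a hardware switching circuit that applies the duty cycles; the gain and temperature values are illustrative, not the authors' tuned parameters.

```python
def sma_duty_cycles(target_temps, measured_temps, kp=0.05):
    """Proportional temperature controller: one PWM duty cycle per SMA wire.
    The duty cycles would be handed to the switching circuit, which drives
    all wires within the same PWM period."""
    duties = []
    for t_ref, t_meas in zip(target_temps, measured_temps):
        duty = kp * (t_ref - t_meas)             # heating only, no active cooling
        duties.append(min(max(duty, 0.0), 1.0))  # clamp to [0, 1]
    return duties

# Example: drive three joints toward 70 degrees C from current readings.
print(sma_duty_cycles([70.0, 70.0, 70.0], [25.0, 55.0, 72.0]))
```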

Elif Ayvali, Mingyen Ho, Jaydev P. Desai
Continuous Control of the DLR Light-Weight Robot III by a Human with Tetraplegia Using the BrainGate2 Neural Interface System

We have investigated control of the DLR Light-Weight Robot III with DLR Five-Finger Hand by a person with tetraplegia using the BrainGate2 Neural Interface System. The goal of this research is to develop assistive technologies for people with severe physical disabilities. A BrainGate-enabled DLR LWR III would potentially permit a person with tetraplegia to gain improved control over their environment, e.g. to drink a glass of water. First results of the developed control loop are very encouraging and allow the participant to perform simple interaction tasks with her environment, e.g., pick up a bottle and move it around. To this end, only a few minutes of system training are required, after which the system can be used.

Joern Vogel, Sami Haddadin, John D. Simeral, Sergey D. Stavisky, Daniel Bacher, Leigh R. Hochberg, John P. Donoghue, Patrick van der Smagt
Automotive Safety Solutions through Technology and Human-Factors Innovation

Advanced Driver Assistance Systems (ADAS) provide warnings and, in some cases, autonomous actions to increase driver and passenger safety by combining sensor technologies and situation awareness. In the last 10 years, ADAS have progressed from prototype demonstrators to full product deployment in motor vehicles. Early ADAS examples, including Lane Departure Warning (LDW) and Forward Collision Warning (FCW) systems, were developed to warn drivers of potentially dangerous situations. More recently, driver inattention systems have made their debut. These systems tackle one of the major causes of fatalities on roads: drowsiness and distraction. This paper describes DSS, a driver inattention warning system developed by Seeing Machines for commercial applications, with an initial focus on heavy vehicle fleet management. A case study reporting a year-long real-world deployment of DSS is presented. The study showed the effectiveness of the DSS technology in mitigating driver inattention in a sustained manner.

Jochen Heinzman, Alexander Zelinsky
Controlling Closed-Chain Robots with Compliant SMA Actuators: Algorithms and Experiments

In this paper we present algorithms, devices, simulations, and experiments concerning a robot that locomotes using novel compliant, sheet-based, shape memory alloy actuators. Specifically, we describe the theory and practical implementation of a provably correct algorithm capable of generating locomotion gaits in closed-loop linkages. We implement this algorithm in a distributed fashion on the HexRoller, a closed-chain robot with six low-stiffness actuators. We describe these actuators in detail and characterize their performance along with that of the robot.

Kyle Gilpin, Eduardo Torres-Jara, Daniela Rus
Automatic Self-calibration of a Full Field-of-View 3D n-Laser Scanner

This paper describes the design, build, automatic self-calibration and evaluation of a 3D Laser sensor using conventional parts. Our goal is to design a system which is an order of magnitude cheaper than commercial systems, with commensurate performance. In this paper we adopt point cloud ‘crispness’ as the measure of system performance that we wish to optimise. Concretely, we apply the information theoretic measure known as Rényi Quadratic Entropy to capture the degree of organisation of a point cloud. By expressing this quantity as a function of key unknown system parameters, we are able to deduce a full calibration of the sensor via an online optimisation. Beyond details on the sensor design itself, we fully describe the end-to-end extrinsic parameter calibration process, the estimation of the clock skews between the four constituent microprocessors and analyse the effect our spatial and temporal calibrations have on point cloud quality.
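For readers unfamiliar with the crispness measure, the sketch below computes the Rényi quadratic entropy of a point cloud under a Gaussian kernel density estimate, which has a closed form over pairwise distances; minimising this entropy corresponds to maximising crispness. The bandwidth value and the `build_cloud` callback are assumptions for illustration, and the O(N²) pairwise computation is written for clarity, not efficiency.

```python
import numpy as np

def renyi_quadratic_entropy(points, sigma=0.05):
    """Renyi quadratic entropy of a point cloud under a Gaussian kernel
    density estimate; lower entropy means a 'crisper' cloud.
    points: (N, 3) array; sigma: kernel bandwidth in metres (assumed value)."""
    n, d = points.shape
    diffs = points[:, None, :] - points[None, :, :]           # (N, N, 3)
    sq_dists = np.sum(diffs ** 2, axis=-1)                    # (N, N)
    # The integral of the squared KDE has a closed form: a mean of Gaussians
    # with doubled variance evaluated at the pairwise differences.
    norm = (2.0 * np.pi * (2.0 * sigma ** 2)) ** (-d / 2.0)
    information_potential = np.mean(norm * np.exp(-sq_dists / (4.0 * sigma ** 2)))
    return -np.log(information_potential)

def crispness_objective(calib_params, raw_scans, build_cloud):
    """Objective for an online optimisation: rebuild the cloud with the
    candidate calibration parameters and return its entropy (to minimise).
    `build_cloud` is a hypothetical projection function."""
    return renyi_quadratic_entropy(build_cloud(raw_scans, calib_params))
```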

Mark Sheehan, Alastair Harrison, Paul Newman
Unsupervised Calibration for Multi-beam Lasers

Light Detection and Ranging (LIDAR) sensors have become increasingly common in both industrial and robotic applications. LIDAR sensors are particularly desirable for their direct distance measurements and high accuracy, but traditionally have been configured with only a single rotating beam. However, recent technological progress has spawned a new generation of LIDAR sensors equipped with many simultaneous rotating beams at varying angles, providing at least an order of magnitude more data than single-beam LIDARs and enabling new applications in mapping [6], object detection and recognition [15], scene understanding [16], and SLAM [9].

Jesse Levinson, Sebastian Thrun
A General Framework for Temporal Calibration of Multiple Proprioceptive and Exteroceptive Sensors

Fusion of data from multiple sensors can enable robust navigation in varied environments. However, for optimal performance, the sensors must be calibrated relative to one another. Full sensor-to-sensor calibration is a spatiotemporal problem: we require an accurate estimate of the relative timing of measurements for each pair of sensors, in addition to the 6-DOF sensor-to-sensor transform. In this paper, we examine the problem of determining the time delays between multiple proprioceptive and exteroceptive sensor data streams. The primary difficulty is that the correspondences between measurements from different sensors are unknown, and hence the delays cannot be computed directly. We instead formulate temporal calibration as a registration task. Our algorithm operates by aligning curves in a three-dimensional orientation space and, as such, can be considered a variant of Iterative Closest Point (ICP). We present results from simulation studies and from experiments with a PR2 robot, which demonstrate accurate calibration of the time delays between measurements from multiple, heterogeneous sensors.
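The sketch below is a much-simplified stand-in for the curve-alignment idea: a grid search over candidate delays that interpolates one roll-pitch-yaw stream onto the other's shifted timestamps and minimises the squared difference. It ignores angle wrap-around and is not the paper's ICP-style registration; the search range is an assumption.

```python
import numpy as np

def misalignment(t_a, rpy_a, t_b, rpy_b, delay):
    """Mean squared difference between sensor A's orientation curve and
    sensor B's curve shifted by `delay`, after linear interpolation of B
    onto A's timestamps."""
    shifted = t_b + delay
    interp = np.column_stack([np.interp(t_a, shifted, rpy_b[:, k]) for k in range(3)])
    return float(np.mean((rpy_a - interp) ** 2))

def estimate_delay(t_a, rpy_a, t_b, rpy_b, search=np.linspace(-0.5, 0.5, 1001)):
    """Grid search over candidate delays (seconds); returns the delay that
    best aligns the two roll-pitch-yaw curves."""
    costs = [misalignment(t_a, rpy_a, t_b, rpy_b, d) for d in search]
    return float(search[int(np.argmin(costs))])
```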

Jonathan Kelly, Gaurav S. Sukhatme
Calibrating a Multi-arm Multi-sensor Robot: A Bundle Adjustment Approach

Complex robots with multiple arms and sensors need good calibration to perform precise tasks in unstructured environments. The sensors must be calibrated both to the manipulators and to each other, since fused sensor data is often needed. We propose an extendable framework that combines measurements from the robot’s various sensors (proprioceptive and external) to calibrate the robot’s joint offsets and external sensor locations. Our approach is unique in that it accounts for sensor measurement uncertainties, thereby allowing sensors with very different error characteristics to be used side by side in the calibration. The framework is general enough to handle complex robots with kinematic components, including external sensors on kinematic chains. We validate the framework by implementing it on the Willow Garage PR2 robot, providing a significant improvement in the robot’s calibration.

Vijay Pradeep, Kurt Konolige, Eric Berger
Soft Autonomous Materials—Using Active Elasticity and Embedded Distributed Computation

The impressive agility of living systems seems to stem from modular sensing, actuation and communication capabilities, as well as intelligence embedded in the mechanics in the form of active compliance. As a step towards bridging the gap between man-made machines and their biological counterparts, we developed a class of soft mechanisms that can undergo shape change and locomotion under pneumatic actuation. Sensing, computation, communication and actuation are embedded in the material, leading to an amorphous, soft material. Soft mechanisms are harder to control than stiff mechanisms because their kinematics are difficult to model and they have many degrees of freedom. Here we show instances of such mechanisms made from identical cellular elements and demonstrate shape change and autonomous, sensor-based locomotion using distributed control. We show that the flexible system is accurately modeled by an equivalent spring-mass model and that the shape change of each element is linear with applied pressure. We also derive a distributed feedback control law that lets a belt-shaped robot made of flexible elements locomote and climb up inclines. These mechanisms and algorithms may provide a basis for creating a new generation of biomimetic soft robots that can negotiate openings and manipulate objects with an unprecedented level of compliance and robustness.

Nikolaus Correll, Çağdaş D. Önal, Haiyi Liang, Erik Schoenfeld, Daniela Rus
Towards Reliable Grasping and Manipulation in Household Environments

We present a complete software architecture for reliable grasping of household objects. Our work combines aspects such as scene interpretation from 3D range data, grasp planning, motion planning, and grasp failure identification and recovery using tactile sensors. We build upon, and add several new contributions to, the significant prior work in these areas. A salient feature of our work is the tight coupling between perception (both visual and tactile) and manipulation, aiming to address the uncertainty due to sensor and execution errors. This integration effort has revealed new challenges, some of which can be addressed through system and software engineering, and some of which present opportunities for future research. Our approach is aimed at typical indoor environments, and is validated by long-running experiments in which the PR2 robotic platform was able to consistently grasp a large variety of known and unknown objects. The set of tools and algorithms for object grasping presented here has been integrated into the open-source Robot Operating System (ROS).

Matei Ciocarlie, Kaijen Hsiao, Edward Gil Jones, Sachin Chitta, Radu Bogdan Rusu, Ioan A. Şucan
Using Near-Field Stereo Vision for Robotic Grasping in Cluttered Environments

Robotic grasping in unstructured environments requires the ability to adjust and recover when a pre-planned grasp faces imminent failure. Even for a single object, modeling uncertainties due to occluded surfaces, sensor noise and calibration errors can cause grasp failure; cluttered environments exacerbate the problem. In this work, we propose a simple but robust approach to both pre-touch grasp adjustment and grasp planning for unknown objects in clutter, using a small-baseline stereo camera attached to the gripper of the robot. By employing a 3D sensor from the perspective of the gripper we gain information about the object and nearby obstacles immediately prior to grasping that is not available during head-sensor-based grasp planning. We use a feature-based cost function on local 3D data to evaluate the feasibility of a proposed grasp. In cases where only minor adjustments are needed, our algorithm uses gradient descent on a cost function based on local features to find optimal grasps near the original grasp. In cases where no suitable grasp is found, the robot can search for a significantly different grasp pose rather than blindly attempting a doomed grasp. We present experimental results to validate our approach by grasping a wide range of unknown objects in cluttered scenes. Our results show that reactive pre-touch adjustment can correct for a fair amount of uncertainty in the measured position and shape of the objects, or the presence of nearby obstacles.
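As a rough sketch of pre-touch adjustment by local optimisation, the code below runs finite-difference gradient descent over a local 6-DOF grasp pose; the `grasp_cost` callback stands in for the paper's feature-based cost over local 3D data, and the step sizes are arbitrary assumptions.

```python
import numpy as np

def adjust_grasp(initial_pose, grasp_cost, step=0.005, iters=50, eps=1e-3):
    """Numerical gradient descent on a grasp cost function defined over a
    local 6-DOF pose vector (x, y, z, roll, pitch, yaw). `grasp_cost` is a
    hypothetical feature-based cost computed from the local 3D data."""
    pose = np.asarray(initial_pose, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(pose)
        for k in range(pose.size):                 # finite-difference gradient
            delta = np.zeros_like(pose)
            delta[k] = eps
            grad[k] = (grasp_cost(pose + delta) - grasp_cost(pose - delta)) / (2 * eps)
        pose = pose - step * grad
    return pose
```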

Adam Leeper, Kaijen Hsiao, Eric Chu, J. Kenneth Salisbury
Aerial Grasping from a Helicopter UAV Platform

We aim to extend the functionality of Unmanned Aerial Vehicles (UAVs) beyond passive observation to active interaction with objects. Of particular interest is grasping objects with hovering robots. This task is difficult due to the unstable dynamics of flying vehicles and limited positional accuracy demonstrated by existing hovering vehicles. Conventional robot grippers require centimetre-level positioning accuracy to successfully grasp objects. Our approach employs passive mechanical compliance and adaptive underactuation in a gripper to allow for large positional displacements between the aircraft and target object. In this paper, we present preliminary analysis and experiments for reliable grasping of unstructured objects with a robot helicopter. Key problems associated with this task are discussed, including hover precision, flight stability in the presence of compliant object contact, and aerodynamic disturbances. We evaluate performance of the initial proof-of-concept prototype and show that this approach to object capture and retrieval is viable.

Paul E. Pounds, Aaron M. Dollar
Manipulation Capabilities with Simple Hands

A simple hand is a robotic gripper that trades off generality in function for practicality in design and control. The long-term goal of our work is to explore that tradeoff and demonstrate broad manipulation capabilities with simple hands. This paper describes two prototype simple hands. Both hands have thin cylindrical fingers arranged symmetrically around a low friction circular palm. The fingers are compliantly coupled to a single actuator. Our experiments with both hands in a bin-picking scenario demonstrate that we can achieve robust grasp classification and in-hand localization using simple statistical techniques. We further show how the classification accuracy increases as the grasp proceeds by exploiting information obtained online. We finally evaluate the relative importance of observing the full state of the hand rather than just observing the state of the actuators.

Alberto Rodriguez, Matthew T. Mason, Siddhartha S. Srinivasa
Interactive Perception of Articulated Objects

We present a skill for the perception of three-dimensional kinematic structures of rigid articulated bodies with revolute and prismatic joints. The ability to acquire such models autonomously is required for general manipulation in unstructured environments. Experiments on a mobile manipulation platform with real-world objects under varying lighting conditions demonstrate the robustness of the proposed method. This robustness is achieved by integrating perception and manipulation capabilities: the manipulator interacts with the environment to move an unknown object, thereby creating a perceptual signal that reveals the kinematic properties of the object. For good performance, the perceptual skill requires the presence of trackable visual features in the scene.

Dov Katz, Andreas Orthey, Oliver Brock
MiniMag: A Hemispherical Electromagnetic System for 5-DOF Wireless Micromanipulation

The MiniMag is a magnetic manipulation system capable of 5 degree-of-freedom (5-DOF) wireless magnetic control of an untethered microrobot (3-DOF position, 2-DOF pointing orientation). The system has a spherical workspace with an intended diameter of approximately 10 mm, and is completely unrestrained in the rotational degrees of freedom. This is accomplished through the superposition of multiple magnetic fields, and capitalizes on a linear representation of the coupled field contributions of multiple soft-magnetic-core electromagnets acting in concert. The prototype system consists of 8 stationary electromagnets with ferromagnetic cores, and is capable of producing magnetic fields in excess of 20 mT and field gradients in excess of 2 T/m at frequencies up to 2 kHz.

Bradley E. Kratochvil, Michael P. Kummer, Sandro Erni, Ruedi Borer, Dominic R. Frutiger, Simone Schürle, Bradley J. Nelson
Interaction Force, Impedance and Trajectory Adaptation: By Humans, for Robots

This paper develops and analyses a biomimetic learning controller for robots. The controller simultaneously adapts reference trajectory, impedance and feedforward force to maintain stability and minimize the weighted sum of interaction force and performance errors. It was inspired by our studies of human motor behavior, in particular how humans deal with the unstable situations typical of tool use. Simulations show that the developed controller is a good model of human motor adaptation. Implementations demonstrate that it can also utilise the capabilities of joint-torque-controlled robots and variable impedance actuators to optimally adapt interaction with dynamic environments and humans.

Etienne Burdet, Gowrishankar Ganesh, Chenguang Yang, Alin Albu-Schäffer
Experiments with Motor Primitives in Table Tennis

Efficient acquisition of new motor skills is among the most important abilities for making robot applications more flexible, reducing the amount and cost of human programming, and making future robots more autonomous. However, most machine learning approaches to date are not capable of meeting this challenge, as they do not scale to the domain of high-dimensional anthropomorphic and service robots. Instead, robot skill learning needs to rely upon task-appropriate approaches and domain insights. A particularly powerful approach has been driven by the concept of re-usable motor primitives. These have been used to learn a variety of “elementary movements” such as striking movements (e.g., hitting a T-ball, striking a table tennis ball), rhythmic movements (e.g., drumming, gaits for legged locomotion, paddling a ball on a string), grasping, jumping and many others. Here, we take the approach to the next level and show experimentally how most elements required for table tennis can be addressed using motor primitives. We show four important components: (i) we present a motor primitive formulation that can deal with hitting and striking movements; (ii) we show how these can be initialized by imitation learning and (iii) generalized by reinforcement learning; and (iv) we show how selection, generalization and pruning of motor primitives can be handled using a mixture of motor primitives. The resulting experimental prototypes are shown to work well in practice.
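For context, the sketch below rolls out a standard Ijspeert-style discrete dynamic movement primitive in one dimension; the paper's formulation for hitting movements modifies this scheme, so treat the code as background rather than the authors' method. Basis-function placement and gains are common heuristic choices.

```python
import numpy as np

def dmp_rollout(y0, goal, weights, tau=1.0, dt=0.001, alpha=25.0, beta=6.25, alpha_s=4.0):
    """Rollout of a 1-D discrete dynamic movement primitive:
    tau*dv = alpha*(beta*(g - y) - v) + f(s), with phase variable s and a
    forcing term f built from Gaussian basis functions weighted by `weights`."""
    n_basis = len(weights)
    centers = np.exp(-alpha_s * np.linspace(0, 1, n_basis))  # basis centres in phase space
    widths = n_basis ** 1.5 / centers                        # common heuristic
    y, v, s = y0, 0.0, 1.0
    traj = [y]
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)
        f = s * (goal - y0) * np.dot(psi, weights) / (psi.sum() + 1e-10)
        dv = (alpha * (beta * (goal - y) - v) + f) / tau
        v += dv * dt
        y += (v / tau) * dt
        s += (-alpha_s * s / tau) * dt                       # canonical system
        traj.append(y)
    return np.array(traj)
```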

Jan Peters, Katharina Mülling, Jens Kober
Trajectory Generation and Control for Precise Aggressive Maneuvers with Quadrotors

We study the problem of designing dynamically feasible trajectories and controllers that drive a quadrotor to a desired state in state space. We focus on the development of a family of trajectories defined as a sequence of segments, each with a controller parameterized by a goal state. Each controller is developed from the dynamic model of the robot and then iteratively refined through successive experimental trials to account for errors in the dynamic model and noise in the actuators and sensors. We show that this approach permits the development of trajectories and controllers enabling aggressive maneuvers such as flying through narrow, vertical gaps and perching on inverted surfaces with high precision and repeatability.

Daniel Mellinger, Nathan Michael, Vijay Kumar
Improved Stability of Running over Unknown Rough Terrain via Prescribed Energy Removal

The speed and maneuverability at which legged animals can travel through rough and cluttered landscapes has provided inspiration for the pursuit of legged robots with similar capabilities. Researchers have developed reduced-order models of legged locomotion and have begun investigating complementary control strategies based on observed biological control schemes. This study examines a novel control law which prescribes a feed-forward actuation scheme in which energy is actively removed during a portion of each stride to maximize stability. The behavior of this approach is demonstrated on a dynamic running platform while traversing a track with unexpected alterations in terrain height. Results indicate that this novel control approach provides greater stability for a single-legged hopping robot than more traditional control methods.

Bruce Miller, Ben Andrews, Jonathan E. Clark
On the Comparative Analysis of Locomotory Systems with Vertical Travel

This paper revisits the concept of specific resistance, ε, a dimensionless measure of locomotive efficiency often used to compare the transport cost of vehicles [6], and extends its use to the vertical domain. As specific resistance is designed for comparing horizontal locomotion, we introduce a compensation term to offset the gravitational potential gained or lost during locomotion. We observe that this modification requires an additional, experimentally fitted model estimating the efficiency at which a system is able to transfer energy to and from gravitational potential. This paper introduces a family of such models, thus providing methods that allow fair comparisons of locomotion on level ground, sloped, and vertical surfaces, for any vehicle which necessarily gains or loses potential energy during travel.
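For reference, classical specific resistance is computed from power draw, weight and speed; one illustrative way to write a vertically compensated variant (an assumption for exposition, not the paper's fitted family of models) is shown below.

```latex
% Classical specific resistance: power P, mass m, gravity g, speed v
\varepsilon(v) = \frac{P(v)}{m\,g\,v}

% Illustrative compensated form (assumption): subtract the power converted to
% gravitational potential at vertical speed v_z, with fitted efficiency \eta
\varepsilon_{\mathrm{comp}}(v) = \frac{P(v) - \eta\,m\,g\,v_z}{m\,g\,v}
```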

G. C. Haynes, D. E. Koditschek
Planning and Control of a Humanoid Robot for Navigation on Uneven Multi-scale Terrain

This paper presents a humanoid navigation system for uneven terrains that include unknown roughness on the order of a few centimeters. A footstep planner decides where to step by using the known terrain shape. A walking balance controller, consisting of a 20 ms cycle dynamically-stable motion pattern generation loop and a 1 ms cycle sensor feedback loop, allows the robot to step to the planned footprints and can manage a few centimeters of uncertainty. The free-leg trajectories and the torso height trajectory are also designed automatically according to the given terrain shape and the planned footprints. We also introduce an interactive navigation system that uses mixed reality technology, in which an outline of the path can be drawn on the real environment to give commands to the robot. Each developed technology is implemented and integrated on the full-size humanoid HRP-2. Experimental results of walking over a multi-level platform with unknown small obstacles show the performance of the proposed system.

Koichi Nishiwaki, Joel Chestnutt, Satoshi Kagami
On-Line Mobile Robot Model Identification Using Integrated Perturbative Dynamics

We present an approach to the problem of real-time identification of vehicle motion models based on fitting, on a continuous basis, parametrized slip models to observed behavior. Our approach is unique in that we generate parametric models capturing the dynamics of systematic error (i.e., slip) and then predict trajectories for arbitrary inputs on arbitrary terrain. The integrated error dynamics are linearized with respect to the unknown parameters to produce an observer relating errors in predicted slip to errors in the parameters. An Extended Kalman filter is used to identify this model on-line. The filter forms innovations based on residual differences between the motion originally predicted using the present model and the motion ultimately experienced by the vehicle. Our results show that the models converge in a few seconds and that they reduce prediction error even for benign maneuvers where errors might be expected to be small already. Results are presented for both a skid-steered and an Ackerman-steered vehicle.
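A generic sketch of on-line parameter identification in this spirit: an EKF whose state is the slip-parameter vector, with a numerically linearised measurement model given by the integrated motion prediction. The `predict_pose` callback and the noise matrices are assumptions; this is not the authors' exact observer.

```python
import numpy as np

def ekf_param_update(theta, P, z_obs, predict_pose, R, Q=None, eps=1e-4):
    """One EKF update over slip-model parameters theta (random-walk process).
    `predict_pose(theta)` integrates the perturbed motion model over the last
    interval and returns the predicted pose displacement; the innovation is
    the residual between the observed and predicted displacement."""
    n = theta.size
    if Q is not None:                        # process noise (parameter drift)
        P = P + Q
    z_pred = predict_pose(theta)
    H = np.zeros((z_obs.size, n))            # numerical Jacobian d z / d theta
    for k in range(n):
        d = np.zeros(n); d[k] = eps
        H[:, k] = (predict_pose(theta + d) - z_pred) / eps
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.solve(S, np.eye(S.shape[0]))
    theta = theta + K @ (z_obs - z_pred)
    P = (np.eye(n) - K @ H) @ P
    return theta, P
```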

Forrest Rogers-Marcovitz, Alonzo Kelly
Effects of Sensory Precision on Mobile Robot Localization and Mapping

This paper explores the relationship between sensory accuracy and Simultaneous Localization and Mapping (SLAM) performance. As inexpensive robots are built from commodity components, the relationship between sensor accuracy and achievable mapping performance needs to be determined. Experiments are presented in this paper which compare various aspects of sensor performance such as maximum range, noise, angular precision, and viewable angle. In addition, mapping results from three popular laser scanners (Hokuyo’s URG and UTM30, as well as SICK’s LMS291) are compared.

John G. Rogers III, Alexander J. B. Trevor, Carlos Nieto-Granda, Alex Cunningham, Manohar Paluri, Nathan Michael, Frank Dellaert, Henrik I. Christensen, Vijay Kumar
Motion-Aided Network SLAM

A key problem in the deployment of sensor networks is that of determining the location of each sensor such that subsequent data gathered can be registered. We would also like the network to provide localization for mobile entities, allowing them to navigate and explore the environment. In this paper, we present a thorough evaluation of our algorithm for localizing and mapping the mobile and stationary nodes in a sparsely connected sensor network using range-only measurements and odometry from the mobile node. Our approach utilizes an Extended Kalman Filter (EKF) in polar space allowing us to model the nonlinearity within the range-only measurements using Gaussian distributions. We demonstrate the effectiveness of our approach using experiments in realistic obstacle-filled environments that not only limit network connectivity but also introduce additional noise to the range data. Our results reveal that our proposed method offers good accuracy in these challenging environments even when little to no prior information is available.

Joseph Djugash, Sanjiv Singh
A Bayesian Approach to Learning 3D Representations of Dynamic Environments

We propose a novel probabilistic approach to learning spatial representations of dynamic environments from 3D laser range measurements. Whilst most of the previous techniques developed in robotics address this problem by computationally expensive tracking frameworks, our method performs in real-time even in the presence of large amounts of dynamic objects. The computer vision community has provided comparable methods for learning foreground activity patterns in images. However, these methods generally do not account well for the uncertainty involved in the sensing process. In this paper, we show that the problem of detecting occurrences of non-stationary objects in range readings can be solved online under the assumption of a consistent Bayesian framework. Whilst the model underlying our framework naturally scales with the complexity and the noise characteristics of the environment, all parameters involved in the detection process obey a clean probabilistic interpretation. When applied to real-world urban settings, the results produced by our approach appear promising and may directly be applied to solve map building, localization, or robot navigation problems.

Ralf Kästner, Nikolas Engelhard, Rudolph Triebel, Roland Siegwart
RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments

RGB-D cameras are novel sensing systems that capture RGB images along with per-pixel depth information. In this paper we investigate how such cameras can be used in the context of robotics, specifically for building dense 3D maps of indoor environments. Such maps have applications in robot navigation, manipulation, semantic mapping, and telepresence. We present RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment. Visual and depth information are also combined for view-based loop closure detection, followed by pose optimization to achieve globally consistent maps. We evaluate RGB-D Mapping on two large indoor environments, and show that it effectively combines the visual and shape information available from RGB-D cameras.

Peter Henry, Michael Krainin, Evan Herbst, Xiaofeng Ren, Dieter Fox
Vision-Based Reacquisition for Task-Level Control

We describe a vision-based algorithm that enables a robot to “reacquire” objects previously indicated by a human user through simple image-based stylus gestures. By automatically generating a multiple-view appearance model for each object, the method can reacquire the object and reconstitute the user’s segmentation hints even after the robot has moved long distances or significant time has elapsed since the gesture. We demonstrate that this capability enables novel command and control mechanisms: after a human gives the robot a “guided tour” of named objects and their locations in the environment, he can dispatch the robot to fetch any particular object simply by stating its name. We implement the object reacquisition algorithm on an outdoor mobile manipulation platform and evaluate its performance under challenging conditions that include lighting and viewpoint variation, clutter, and object relocation.

Matthew R. Walter, Yuli Friedman, Matthew Antone, Seth Teller
Cost-Effective Mapping Using Unmanned Aerial Vehicles in Ecology Monitoring Applications

Ecology monitoring of large areas of farmland, rangelands and wilderness relies on routine map building and picture compilation, traditionally performed using high-flying surveys with manned aircraft or through satellite remote sensing. Unmanned Aerial Vehicles (UAVs) are a promising alternative data collection platform due to their small size, long endurance and thus cost-effectiveness. Additionally, UAVs can fly closer to the ground, collecting higher-resolution imagery than manned aircraft or satellites. This paper discusses the development and experimental evaluation of systems and algorithms for airborne environment mapping, object detection and vegetation classification using low-cost sensor data, including monocular vision, collected from a UAV. Experimental results are presented from multiple flights of our UAV system in three different environments and two different ecology monitoring applications, operating in remote locations in outback Australia.

Mitch Bryson, Alistair Reid, Calvin Hung, Fabio Tozeto Ramos, Salah Sukkarieh
Mapping Complex Marine Environments with Autonomous Surface Craft

This paper presents a novel marine mapping system using an Autonomous Surface Craft (ASC). The platform includes an extensive sensor suite for mapping environments both above and below the water surface. A relatively small hull size and shallow draft permits operation in cluttered and shallow environments. We address the Simultaneous Mapping and Localization (SLAM) problem for concurrent mapping above and below the water in large scale marine environments. Our key algorithmic contributions include: (1) methods to account for degradation of GPS in close proximity to bridges or foliage canopies and (2) scalable systems for management of large volumes of sensor data to allow for consistent online mapping under limited physical memory. Experimental results are presented to demonstrate the approach for mapping selected structures along the Charles River in Boston.

Jacques C. Leedekerken, Maurice F. Fallon, John J. Leonard
Simultaneous Tracking and Sampling of Dynamic Oceanographic Features with Autonomous Underwater Vehicles and Lagrangian Drifters

Studying ocean processes often requires observations made in a Lagrangian frame of reference, that is, a frame of reference moving with a feature of interest [1]. Often, the only way to understand a process is to acquire measurements at sufficient spatial and temporal resolution within a specific feature while it is evolving. Examples of coastal ocean features whose study requires Lagrangian observations include concentrated patches of microscopic algae (Fig. 1) that are toxic and may have impacts on fisheries, marine life and humans, or a patch of low-oxygen water that may cause marine life mortality depending on its movement and mixing.

Jnaneshwar Das, Frédéric Py, Thom Maughan, Tom O’Reilly, Monique Messié, John Ryan, Kanna Rajan, Gaurav S. Sukhatme
An Experimental Validation of Robotic Tactile Mapping in Harsh Environments such as Deep Sea Oil Well Sites

This work experimentally validates the feasibility of a tactile exploration approach to map harsh environments such as deep-sea oil well sites. The recent collapse of the offshore oil-drilling platform Deepwater Horizon in the Gulf of Mexico resulted in the largest accidental marine disaster in history. Initial attempts to control the spill failed because of the very challenging environmental conditions. Knowing the shape and dimensions of the cracks in the leaking structure could have provided critical information for maneuvering the Remotely Operated Vehicles. Here, a method developed in our previous work for tactile exploration of oil wells is applied to the problem of mapping underwater oil well sites. This method only requires a manipulator equipped with joint encoders, and does not need any range, tactile or force sensor. This makes the approach robust and directly applicable to the mapping of underwater sites. This paper focuses on the experimental validation of the approach. Several experiments are described, showing the effectiveness of the approach in mapping unknown structured environments in a short time, and demonstrating its reliability under very harsh conditions, such as irregular environment surfaces, surrounding viscous fluids and high manipulator joint backlash.

Francesco Mazzini, Steven Dubowsky
Delay and Dropout Tolerant State Estimation for MAVs

This paper presents a filter-based position and velocity estimation approach for an aerial vehicle, fusing inertial measurements with delayed, dropout-susceptible vision measurements, without a priori knowledge of the exact, variable time delay. The data from the two sensors, which run at different rates, is transmitted via independent wireless links to a ground station. Synchronization between the two communication paths makes it possible to determine the image transmission and processing time. The computational complexity of the algorithm is kept low. The images are processed by a Visual SLAM algorithm that builds up a map of the area and simultaneously tracks the pose of the camera. With delays of up to 230 ms and 16% dropout in the vision data, we show that with the presented filter a quadrotor can be stabilized and kept in the region of a setpoint with a simple PID controller.

Frédéric Bourgeois, Laurent Kneip, Stephan Weiss, Roland Siegwart
A Pipeline for the Segmentation and Classification of 3D Point Clouds

This paper presents algorithms for fast segmentation of 3D point clouds and subsequent classification of the obtained 3D segments. The method jointly determines the ground surface and segments individual objects in 3D, including overhanging structures. When compared to six other terrain modelling techniques, this approach has minimal error between the sensed data and the representation; and is fast (processing a Velodyne scan in approximately 2 seconds). Applications include improved alignment of successive scans by enabling operations in sections (Velodyne scans are aligned 7% sharper compared to an approach using raw points) and more informed decision-making (paths move around overhangs). The use of segmentation to aid classification through 3D features, such as the Spin Image or the Spherical Harmonic Descriptor, is discussed and experimentally compared. Moreover, the segmentation facilitates a novel approach to 3D classification that bypasses feature extraction and directly compares 3D shapes via the ICP algorithm. This technique is shown to achieve accuracy on par with the best feature based classifier (92.1%) while being significantly faster and allowing a clearer understanding of the classifier’s behaviour.

B. Douillard, J. Underwood, V. Vlaskine, A. Quadros, S. Singh
Smooth Coordination and Navigation for Multiple Differential-Drive Robots

Multiple independent robots sharing a workspace need to be able to navigate to their goals while avoiding collisions with each other. In this paper, we describe and evaluate two algorithms for smooth and collision-free navigation for multiple independent differential-drive robots. We extend reciprocal collision avoidance algorithms based on velocity obstacles and on acceleration-velocity obstacles. We implement both methods on multiple iRobot Create differential-drive robots and report on the quality and ability of the robots, using the two algorithms, to navigate to their goals in a smooth and collision-free manner.
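The sketch below is a simplified sampling-based stand-in (closer in spirit to a dynamic-window search than to the reciprocal velocity-obstacle algorithms evaluated in the paper): it forward-simulates candidate (v, w) pairs against other robots assumed to move at constant velocity and keeps the admissible pair best aligned with the goal. All parameters are illustrative.

```python
import numpy as np

def pick_velocity(pose, goal, others, v_max=0.5, w_max=2.0,
                  horizon=2.0, dt=0.1, safety=0.4):
    """Sampling-based stand-in for velocity-obstacle collision avoidance on a
    differential-drive robot. `pose` is (x, y, theta); `others` is a list of
    (position, velocity) pairs assumed to move at constant velocity."""
    x, y, th = pose
    pref = np.arctan2(goal[1] - y, goal[0] - x)       # preferred heading
    best, best_score = (0.0, 0.0), np.inf             # stopping is the fallback
    for v in np.linspace(0.0, v_max, 6):
        for w in np.linspace(-w_max, w_max, 11):
            px, py, pth, ok = x, y, th, True
            for step in range(int(horizon / dt)):     # forward-simulate the candidate
                pth += w * dt
                px += v * np.cos(pth) * dt
                py += v * np.sin(pth) * dt
                t = (step + 1) * dt
                for (op, ov) in others:
                    o = np.asarray(op) + np.asarray(ov) * t
                    if np.hypot(px - o[0], py - o[1]) < safety:
                        ok = False
                        break
                if not ok:
                    break
            if ok:
                # Prefer fast motion headed toward the goal (wrapped heading error).
                err = abs(((pref - th - w * horizon) + np.pi) % (2 * np.pi) - np.pi)
                score = err + (v_max - v)
                if score < best_score:
                    best, best_score = (v, w), score
    return best   # (linear velocity, angular velocity)
```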

Jamie Snape, Stephen J. Guy, Jur van den Berg, Dinesh Manocha
Top-Down vs. Bottom-Up Model-Based Methodologies for Distributed Control: A Comparative Experimental Study

Model-based synthesis of distributed controllers for multi-robot systems is commonly approached in either a top-down or a bottom-up fashion. In this paper, we investigate the experimental challenges of both approaches, with a special emphasis on resource-constrained miniature robots. We make our comparison through a case study in which a group of 2-cm-sized mobile robots screen the environment for undesirable features, and destroy or neutralize them. First, we solve this problem using a top-down approach that relies on a graph-based representation of the system, allowing for direct optimization using numerical techniques (e.g., linear and non-linear convex optimization) under very unrealistic assumptions (e.g., infinite number of robots, perfect localization, global communication, etc.). We show how one can relax these assumptions in the context of resource-constrained robots, and explain the resulting impact on system performance. Second, we solve the same problem using a bottom-up approach, i.e., we build up computationally efficient and accurate models at multiple abstraction levels, and use them to optimize the robots' controller using evolutionary algorithms. Finally, we outline the differences between the top-down and bottom-up approaches, and experimentally compare their performance.

Grégory Mermoud, Utkarsh Upadhyay, William C. Evans, Alcherio Martinoli
An Experimental Study of Time Scales and Stability in Networked Multi-Robot Systems

This paper considers the effect of network-induced time delays on the stability of distributed controllers for groups of robots. A linear state space model is proposed for analyzing the coupled interaction of the information flow over the network with the dynamics of the robots. It is shown both analytically and experimentally that control gain, network update rate, and communication and control graph topologies are all critical factors determining the stability of the group of robots. Experiments with a group of flying quadrotor robots demonstrate the effect of different control gains for two different control graph topologies.

Nathan Michael, Mac Schwager, Vijay Kumar, Daniela Rus
Mechanics of Continuum Robots with External Loading and General Tendon Routing

Routing tendons in straight paths along an elastic backbone is a widely used method of actuation for continuum robots. Tendon routing paths which are general curves in space enable a much larger family of robots to be designed, with configuration spaces and workspaces that are unattainable with straight tendon routing. Harnessing general tendon routing to extend the capabilities of continuum robots requires a model for the kinematics and statics of the robot, which is the primary focus of this paper. Our approach is to couple the classical Cosserat theories of strings and rods using a geometrically exact derivation of the distributed loads that the tendons impose along the robot. Experiments demonstrate that the model accurately predicts tip position to 1.7% of the total arc length, on a prototype robot that includes both straight and helical tendon routing and is subject to both point and distributed loads.

D. Caleb Rucker, Robert J. Webster III
Estimation of Thruster Configurations for Reconfigurable Modular Underwater Robots

We present an algorithm for estimating thruster configurations of underwater vehicles with reconfigurable thrusters. The algorithm estimates each thruster’s effect on the vehicle’s attitude and position. The estimated parameters are used to maintain the robot’s attitude and position.

The algorithm operates by measuring the impulse responses of individual thrusters and thruster combinations. Statistical metrics are used to select data samples. Finally, we compute a Moore-Penrose pseudoinverse, which is used to project the desired attitude and position changes onto the thrusters.

We verify our algorithm experimentally using our robot AMOUR. The robot consists of a main body with a variable number of thrusters that can be mounted at arbitrary locations. It utilizes an IMU and a pressure sensor to continuously compute its attitude and depth. We use the algorithm to estimate different thruster configurations and show that the estimated parameters successfully control the robot. The gathering of samples together with the estimation computation takes approximately 40 seconds. Further, we show that the performance of the estimated controller matches the performance of a manually tuned controller. We also demonstrate that the estimation algorithm can adapt the controller to unexpected changes in thruster positions. The estimated controller greatly improves the stability and maneuverability of the robot when compared to the manually tuned controller.
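A minimal sketch of the pseudoinverse projection step, assuming the estimation phase has already produced a matrix whose columns give each thruster's effect on the body wrench; the example matrix is made up for illustration, not AMOUR's identified values.

```python
import numpy as np

def thrust_allocation(A, wrench_des):
    """Map a desired body wrench (forces and torques) to per-thruster commands
    using the Moore-Penrose pseudoinverse of the estimated thruster matrix A,
    whose columns hold each thruster's estimated effect on the body wrench."""
    return np.linalg.pinv(A) @ wrench_des

# Illustrative 4-thruster example (values are assumptions):
A = np.array([[1.0, 1.0, 0.0, 0.0],        # surge force
              [0.0, 0.0, 1.0, 1.0],        # heave force
              [0.2, -0.2, 0.0, 0.0],       # yaw torque
              [0.0, 0.0, 0.3, -0.3]])      # pitch torque
print(thrust_allocation(A, np.array([0.0, 2.0, 0.0, 0.1])))
```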

Marek Doniec, Carrick Detweiler, Daniela Rus
Characterization of Dynamic Behaviors in a Hexapod Robot

This paper investigates the relationship between energetic efficiency and the dynamical structure of a legged robot's gait. We present an experimental data set collected from an untethered dynamic hexapod, EduBot [1] (a RHex-class [2] machine), operating in four distinct, manually selected gaits. We study the robot's single-tripod stance dynamics, which are identified by a purely joint-space-driven estimation method introduced in this paper. Our results establish a strong relationship between energetic efficiency (simultaneous reduction in power consumption and increase in speed) and the dynamical structure of an alternating tripod gait, as measured by its fidelity to SLIP mechanics, a dynamical pattern exhibiting characteristic exchanges of kinetic and spring-like potential energy [3]. We conclude that gaits that are dynamic in this manner give rise to better utilization of energy for the purposes of locomotion.

Haldun Komsuoglu, Anirudha Majumdar, Yasemin Ozkan Aydin, Daniel E. Koditschek
HangBot: A Ceiling Mobile Robot with Robust Locomotion under a Large Payload
(Basic Design and Development of Key Mechanisms)

In this paper, we propose a ceiling mobile robot that can carry a heavy load. First, we explain the basic design that overcomes constraints on motion flexibility and maximum payload, and propose a ceiling mobile robot that hangs beneath a perforated ceiling plate with a continuous hole pattern. Next, we design and implement two key mechanisms for the proposed robot: (1) a hanging mechanism for robust ceiling lock/release motion and (2) a pantograph mechanism for averaging the locomotion speed of the separated inner and outer bodies. In the experiments, the capabilities of the two developed mechanisms are evaluated, revealing that both have sufficient performance to construct the proposed ceiling mobile robot. Key directions for future work on the developed mechanisms, namely the optimization of the number and alignment of the hanging mechanisms, are also discussed.

Rui Fukui, Hiroshi Morishita, Taketoshi Mori, Tomomasa Sato
FLIRT: Interest Regions for 2D Range Data with Applications to Robot Navigation

In this paper we present the Fast Laser Interest Region Transform (FLIRT), a multi-scale interest region operator for 2D range data. FLIRT combines a detector based on a geodesic curve approximation of the range signal with a descriptor based on a polar histogram of occupancy probabilities. This combination was found to perform best in a set of comparative benchmarks on standard indoor and outdoor data sets. The experiments show that FLIRT features have repeatability and matching performance similar to interest points in the computer vision literature. We demonstrate how FLIRT, in conjunction with RANSAC, makes up an accurate, highly robust and particularly simple SLAM front-end that can be applied to navigation tasks such as loop closing, global localization, incremental mapping and SLAM. In experiments carried out in structured, unstructured, indoor, outdoor, highly dynamic and static environments, we find that FLIRT is able to robustly capture the invariant structures in the data, allowing for very high global localization and loop detection probabilities from single scans. As data association with FLIRT scales linearly with the map size, the method is also fast. The evaluation of FLIRT maps using a recently introduced SLAM characterization metric further shows that the maps are better than or on par with the state of the art while being produced by simpler algorithms. Finally, the presented methods are structurally identical to the algorithms for visual interest points, making a unified treatment of range and image data possible.

Gian Diego Tipaldi, Manuel Braun, Kai O. Arras
Perception Quality Evaluation with Visual and Infrared Cameras in Challenging Environmental Conditions

This work aims to contribute to the reliability and integrity of perceptual systems of unmanned ground vehicles (UGV). A method is proposed to evaluate the quality of sensor data prior to its use in a perception system, by applying a quality metric to heterogeneous sensor data such as visual and infrared camera images. The concept is illustrated by evaluating sensor data before it is used in a standard SIFT feature extraction and matching technique. The method is then evaluated using various experimental data sets collected from a UGV in challenging environmental conditions, represented by the presence of airborne dust and smoke. In the first series of experiments, a motionless vehicle observes a ‘reference’ scene; the method is then extended to the case of a moving vehicle by compensating for its motion. This paper shows that it is possible to anticipate degradation of a perception algorithm by evaluating the input data prior to any actual execution of the algorithm.
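
As a loose illustration of gating sensor data before feature extraction, the sketch below computes a simple contrast-plus-gradient score and skips the expensive stage when the score is low. The metric, threshold and function names are hypothetical and are not the quality metric proposed in the paper.

    import numpy as np

    def image_quality_score(gray):
        """Hypothetical quality gate: global contrast plus local gradient
        energy, both of which drop in dust- or smoke-obscured frames."""
        gray = np.asarray(gray, dtype=float)
        contrast = gray.std() / 255.0
        gy, gx = np.gradient(gray)
        gradient_energy = np.hypot(gx, gy).mean() / 255.0
        return 0.5 * contrast + 0.5 * gradient_energy

    def should_run_feature_extraction(gray, threshold=0.05):
        """Skip the (expensive) SIFT stage when the input looks degraded."""
        return image_quality_score(gray) >= threshold

    # Example: a flat grey frame (as under heavy dust) is rejected.
    print(should_run_feature_extraction(np.full((480, 640), 128.0)))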

Christopher Brunner, Thierry Peynot
Multi-task Learning of Visual Odometry Estimators

This paper presents a novel framework for learning visual odometry estimators from a single uncalibrated camera through multi-task non-parametric Bayesian inference. A new methodology, Coupled Gaussian Processes, is developed to jointly estimate vehicle velocity while concomitantly inferring a full covariance matrix over all tasks. Matched image feature descriptors obtained from sequential frames act as inputs and the vehicle’s linear and angular velocities as outputs, allowing its position to be determined incrementally. This approach has three main benefits: firstly, it readily provides uncertainty measurements, allowing posterior data fusion with other sensors; secondly, it eliminates the need for camera calibration, as the system essentially learns the transformation between the optical flow and vehicle velocity spaces; thirdly, it provides motion estimates directly, free of the scale ambiguity inherent in standard structure-from-motion techniques with monocular cameras. Experiments conducted using imagery collected in urban and off-road environments under challenging conditions show the benefits of the approach for trajectories of up to 2 km. Finally, the framework is integrated into an Exactly Sparse Extended Information Filter for deployment in a SLAM scenario.
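
A much-simplified version of this learning setup can be sketched with off-the-shelf Gaussian process regression: independent GPs map an optical-flow summary to linear and angular velocity, and the predicted velocities are dead-reckoned into a pose. This ignores the coupling through a full task covariance that is central to the paper, and the data below are synthetic placeholders.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Placeholder training data: each row of X summarizes matched feature
    # displacements between consecutive frames; the targets hold ground-truth
    # linear and angular velocity recorded during a calibration drive.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))
    y_v = 0.5 * X[:, 0] + 0.1 * rng.normal(size=200)    # linear velocity [m/s]
    y_w = 0.2 * X[:, 1] + 0.05 * rng.normal(size=200)   # angular velocity [rad/s]

    # Independent GPs per output; the paper couples the outputs through a
    # full task covariance, which this simplification does not do.
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    gp_v = GaussianProcessRegressor(kernel=kernel).fit(X, y_v)
    gp_w = GaussianProcessRegressor(kernel=kernel).fit(X, y_w)

    # Run time: predict velocities with uncertainty, then dead-reckon the pose.
    x_new = rng.normal(size=(1, 8))
    v, v_std = gp_v.predict(x_new, return_std=True)
    w, w_std = gp_w.predict(x_new, return_std=True)
    dt, pose = 0.1, np.zeros(3)                          # pose = (x, y, heading)
    pose += dt * np.array([v[0] * np.cos(pose[2]), v[0] * np.sin(pose[2]), w[0]])
    print(pose, float(v_std[0]), float(w_std[0]))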

Vitor Campanholo Guizilini, Fabio Tozeto Ramos
Any-Com Multi-robot Path-Planning with Dynamic Teams: Multi-robot Coordination under Communication Constraints

We are interested in finding solutions to the multi-robot path-planning problem that have guarantees on completeness, are robust to communication failure, and incorporate varying team size. In this paper we present an algorithm that addresses the complete multi-robot path-planning problem from two different angles. First, dynamic teams are used to minimize computational complexity per robot and maximize communication bandwidth between team members. Second, each team is formed into a distributed computer that utilizes surplus communication bandwidth to achieve better solution quality and to speed up consensus. The proposed algorithm is evaluated in three real-world experiments that promote dynamic team formation. In the first experiment, a team of five mobile robots plans a set of compatible paths through an office environment while communication quality is disrupted using a tin-can Faraday cage. Results show that the distributed framework of the proposed algorithm drastically speeds up computation, even when packet loss is as high as 97%. In the second and third experiments, four robots are deployed in a network of three building wings connected by a common room. Results of the latter experiments emphasize the need for dynamic team algorithms that can judiciously choose which subset of the original problem a particular dynamic team should solve.

Michael Otte, Nikolaus Correll
Compliant Leg Shape, Reduced-Order Models and Dynamic Running

The groundbreaking running performance of RHex-like robots is analyzed from the perspective of their leg designs. In particular, two-segment leg models are used both to study running with the legs currently employed and to suggest new leg designs that could improve gait stability, running efficiency and forward speed. New curved compliant monolithic legs are fabricated from these models, and running with them is tested on a newly designed test robot. Both the simulations and the experimental trials suggest that running with legs whose two segments have a unity length ratio is faster and more efficient than running with the leg currently used on RHex-like robots. The simulation model’s predictions match the experimental trials closely in some instances but not in others. In the future, a more sophisticated model is needed to capture running with curved legs more accurately.

Jae Yun Jun, Duncan Haldane, Jonathan E. Clark
Towards Fully Autonomous Bacterial Microrobots

To be autonomous, a microrobot must be able to operate without any link to an external source. Such autonomy calls for an embedded power source and an efficient propulsion system, while remaining programmable for a given task to be executed autonomously. For microrobots, however, and especially those at the lower end of the size spectrum with overall diameters of only 1 to 2 micrometers, several technological barriers have prevented the implementation of autonomous microrobots at such a scale. Since an artificial approach is not yet possible due to technological constraints, a natural approach is proposed: the strategy is to identify a biological entity with an embedded power and propulsion system that can be programmed to execute a given task autonomously, much as a futuristic artificial microrobot would. Here we propose one such natural entity in the form of an altered MC-1 flagellated bacterium. We show that these self-propelled and self-replicating cells have the potential to be pre-programmed to execute a given task by exploiting aerotaxis as a sensory means capable of influencing their motion. Hence, by translating task requirements provided through a human interface into a related pattern of oxygen bubbles of different sizes distributed throughout an aqueous workspace, a specified task can be performed by anything from a single bacterium to many swarms of bacteria acting as bacterial microrobots.

Sylvain Martel
Closed-Loop Actuated Surgical System Utilizing Real-Time In-Situ MRI Guidance

Direct magnetic resonance imaging (MRI) guidance during surgical intervention would provide many benefits; most significantly, interventional MRI can be used for planning, monitoring of tissue deformation, real-time visualization of manipulation, and confirmation of procedure success. Direct MR guidance has not yet taken hold because it is often confounded by a number of issues, including the MRI compatibility of existing surgical equipment and patient access in the scanner bore. This paper presents a modular surgical system designed to facilitate the development of MRI-compatible intervention devices. Deep brain stimulation and prostate brachytherapy robots are two examples that successfully deploy these surgical modules. Phantom and human imaging experiments validate the capability of delineating anatomical structures in 3T MRI during robot motion.

Gregory A. Cole, Kevin Harrington, Hao Su, Alex Camilo, Julie G. Pilitsis, Gregory S. Fischer
Fabrication of Highly Articulated Miniature Snake Robot Structures Using In-Mold Assembly of Compliant Joints

Snake-inspired robots have promising applications in minimally invasive surgery. However, one of the most significant challenges, and a roadblock to large-scale exploitation of this technology, is the high manufacturing cost associated with fabricating highly articulated miniature structures. We present a method to fabricate such highly articulated miniature snake robots using a modular mold design. This design combines the benefits of in-mold assembly and insert molding to fabricate highly articulated miniature structures. The experimental results demonstrate the feasibility of the modular mold design for making snake robot chains. This paper also discusses a mathematical framework that can be used to optimize the size of the compliant links, and thereby the overall cross-section of the snake robot chain, that can be manufactured using the modular mold design.

Arvind Ananthanarayanan, Felix Bussemer, Satyandra K. Gupta, Jaydev P. Desai
Control of an Omnidirectional Walking Simulator

Simulators are a unique way of replicating real-world scenarios: they give one the opportunity to be in a place virtually without being physically present. The motivation behind building this simulator was to replicate real-world terrains and let a human walk in that environment, with the primary envisioned application being the training of people such as soldiers and sportspersons on various terrains. This paper reports the control of a prototype omnidirectional walking simulator that allows a user walking on it to feel as if they were walking on a horizontal, level plane in any direction.

Manish Chauhan, C. G. Rajeevlochana, Subir Kumar Saha, S. P. Singh
Control of Robotic Manipulators with Input-Output Delays: An Experimental Verification

In this paper we experimentally study recently developed control algorithms for robotic manipulators with input-output delays. Our previous work demonstrates that the scattering transformation can be used as an effective tool to address delay instability problems in robot control. Specifically, the classical PI controller can be modified to regulate the robotic manipulator in the presence of constant and time-varying input-output delays by using the scattering transformation. These results are validated in this paper through experiments on the PHANToM Omni device. Furthermore, by appropriately compensating for the gravitational and frictional forces, a model of the PHANToM Omni device, suitable for control implementation, is also developed. The implementation of the control algorithms is discussed, and experiments are conducted to demonstrate the efficacy of the proposed results.
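
The scattering (wave-variable) transformation referred to here has a standard closed form, sketched below in Python. The demo loop transmits only the forward wave through a constant delay and is a deliberate simplification; the backward path and the modified PI regulation layer of the paper are omitted, and all names and constants are our own.

    import numpy as np

    def to_wave(velocity, force, b):
        """Encode power variables (velocity, force) into wave variables."""
        u = (b * velocity + force) / np.sqrt(2.0 * b)
        v = (b * velocity - force) / np.sqrt(2.0 * b)
        return u, v

    def from_wave(u, v, b):
        """Decode wave variables back into velocity and force."""
        velocity = (u + v) / np.sqrt(2.0 * b)
        force = (u - v) * np.sqrt(b / 2.0)
        return velocity, force

    # Toy one-way demo: send the forward wave through a constant 5-step delay.
    delay_steps, b = 5, 2.0
    wave_line = [0.0] * delay_steps
    for k in range(20):
        vel_cmd = np.sin(0.3 * k)              # commanded velocity, local side
        force_fb = 0.5 * vel_cmd               # stand-in for reflected force
        u_local, _ = to_wave(vel_cmd, force_fb, b)
        wave_line.append(u_local)
        u_remote = wave_line.pop(0)            # wave arriving after the delay
        vel_remote, force_remote = from_wave(u_remote, 0.0, b)
        print(k, round(float(vel_remote), 3))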

Yen-Chen Liu, Nikhil Chopra
Model-Based Control and Estimation of Humanoid Robots via Orthogonal Decomposition

Model-based control techniques, which use a model of the robot dynamics to compute force/torque control commands, have a proven record of achieving accuracy and compliance in force-controllable robot manipulators. However, applying such methods to humanoid and legged systems has yet to happen, due to challenges such as: 1) the under-actuation inherent in these floating-base systems, 2) dynamically changing contact states with potentially unknown contact forces, and 3) the difficulty of accurately modeling these high-degree-of-freedom systems, especially with inadequate sensing. In this work, we present a relatively simple technique for full-body model-based control and estimation of humanoid robots, using an orthogonal decomposition of rigid-body dynamics. Doing so simplifies the problem by reducing control and estimation to only those variables critical for the task. We present some of our recent evaluations of our approaches on the Carnegie Mellon/Sarcos hydraulic force-controllable humanoid robot, engaging in dynamic tasks with contact state changes, such as standing up from a chair.
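
The orthogonal-decomposition idea can be sketched as a QR factorization of the contact Jacobian transpose: the rows of Q orthogonal to the contact forces yield equations of motion free of the unknown contact wrench, from which actuation torques can be solved. The Python below uses random placeholder matrices and shows only the projection step, not the authors' full controller or estimator.

    import numpy as np

    def actuation_space_dynamics(M, h, Jc, S):
        """Project floating-base dynamics  M qdd + h = S^T tau + Jc^T lam
        onto the subspace orthogonal to the contact forces: a complete QR of
        Jc^T gives rows Su with Su @ Jc.T == 0, eliminating the unknown lam."""
        k, n = Jc.shape
        Q, _ = np.linalg.qr(Jc.T, mode='complete')   # Q is an n x n orthonormal basis
        Su = Q[:, k:].T                              # spans the contact null space
        return Su @ M, Su @ h, Su @ S.T              # constraint-free equations

    # Placeholder system: n dof, k contact constraints, m actuated joints.
    n, k, m = 10, 3, 6
    rng = np.random.default_rng(1)
    A = rng.normal(size=(n, n))
    M = A @ A.T + n * np.eye(n)                      # SPD stand-in for the inertia
    h, Jc = rng.normal(size=n), rng.normal(size=(k, n))
    S = np.hstack([np.zeros((m, n - m)), np.eye(m)]) # selects the actuated joints
    A_M, A_h, A_S = actuation_space_dynamics(M, h, Jc, S)
    qdd_des = rng.normal(size=n)
    tau, *_ = np.linalg.lstsq(A_S, A_M @ qdd_des + A_h, rcond=None)
    print(tau)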

Michael Mistry, Akihiko Murai, Katsu Yamane, Jessica Hodgins
Enhancement of Multi-user Teleoperation Systems by Prediction of Dyadic Haptic Interaction

By integrating a model of the remote environment or of the human operator into a haptic bilateral teleoperation control architecture, their behavior can be predicted to compensate for the time delay introduced by a non-ideal communication channel. This results in increased robustness and fidelity of the closed-loop system. In the literature, models of the remote environment, of the teleoperator dynamics, or task-specific operator models are integrated into single-user teleoperation systems. The present paper is the first to explicitly consider dyadic haptic interaction between two operators in the prediction algorithms applied to a multi-user teleoperation system. Our comparative experimental results, obtained on a 3-degree-of-freedom teleoperation system, show increased robustness and fidelity of this approach compared to a classical bilateral force-force architecture.
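
One very simple stand-in for such a predictor is linear extrapolation of the partner's delayed haptic signal, sketched below. It is not the interaction model used in the paper; the signal, sampling rate and delay are illustrative assumptions.

    import numpy as np

    def predict_partner_signal(history, delay_steps):
        """Extrapolate the partner's delayed haptic signal delay_steps samples
        ahead with a constant-velocity (first-order) model; a crude stand-in
        for the interaction model used in the paper."""
        history = np.asarray(history, dtype=float)
        if history.size < 2:
            return float(history[-1])
        slope = history[-1] - history[-2]       # per-sample rate of change
        return float(history[-1] + slope * delay_steps)

    # Example: bridge a 40 ms one-way delay at a 1 kHz haptic rate (40 samples).
    measured_force = np.sin(np.linspace(0.0, 1.0, 200))
    print(predict_partner_signal(measured_force, delay_steps=40))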

Daniela Feth, Angelika Peer, Martin Buss
The Reconfigurable Omnidirectional Articulated Mobile Robot (ROAMeR)

Articulated Wheeled Robot (AWR) locomotion systems consist of a chassis connected to sets of wheels through articulated linkages. Such articulated “leg-wheel” systems offer a reconfigurability with significant applications in many arenas, but they also engender constraints that make design, analysis and control difficult. In this paper we study this class of systems in the context of the design, analysis and control of a novel planar reconfigurable omnidirectional wheeled mobile robot. This AWR distinguishes itself from existing wheeled mobile robots by its capability to change the location of its wheels relative to the chassis. We first extend a twist-based modeling approach to systematically construct the symbolic kinematic model for a general class of AWRs before specializing it to our planar AWR example. We then develop a kinematic redundancy resolution scheme to coordinate the motion of the articulated legs and wheels. Two generations of physical prototypes were developed, refined and tested using simulation/virtual prototyping and real-time/hardware-in-the-loop methodologies. Representative results from both sets of approaches are presented to illustrate combined locomotion and reconfiguration.
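
The redundancy-resolution step mentioned here is commonly built from a task-space pseudoinverse plus a null-space term; the sketch below shows that generic pattern. It is not the authors' twist-based formulation, and the preferred-posture secondary objective is an assumption for illustration.

    import numpy as np

    def resolve_redundancy(J, xdot_des, q, q_pref, k_null=1.0):
        """Resolved-rate redundancy resolution: track the desired chassis twist
        with the primary task and push the articulation joints toward a
        preferred reconfiguration posture q_pref in the null space."""
        J_pinv = np.linalg.pinv(J)
        primary = J_pinv @ xdot_des
        null_proj = np.eye(J.shape[1]) - J_pinv @ J
        return primary + null_proj @ (k_null * (q_pref - q))

    # Example: a 3-DOF planar chassis twist tracked by an 8-joint leg-wheel system.
    rng = np.random.default_rng(0)
    J = rng.normal(size=(3, 8))
    qdot = resolve_redundancy(J, xdot_des=np.array([0.2, 0.0, 0.1]),
                              q=rng.normal(size=8), q_pref=np.zeros(8))
    print(qdot)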

Qiushi Fu, Xiaobo Zhou, Venkat Krovi
Practical Motion Planning in Unknown and Unpredictable Environments

Motion planners for robots in unknown and dynamic environments often assume known obstacle geometry and use it to predict the unknown motions of obstacles through tracking, but such an assumption may not be realistic. In [1], we introduced a collision-free perceiver (CFP) that can detect guaranteed collision-free trajectory segments in the unknown configuration-time (CT) space of a robot without assuming known obstacle geometry or motion. However, such a guarantee by the CFP comes at the expense of a finite period for perception and processing of each collision-free CT point. In this paper, we address how to incorporate the CFP, taking its finite processing time into account, into real-time motion planning, so that a high-degree-of-freedom robot can plan and move at the same time in an unknown and unpredictable environment while minimizing unsafe stops where the robot might collide with an obstacle. The approach was implemented and tested in experiments with a real 7-DOF robot arm and a stereo-vision sensor, indicating its potential.

Rayomand Vatcha, Jing Xiao
Experiments in Vision-Laser Fusion Using the Bayesian Occupancy Filter

Occupancy grids have long been used to represent the environment. More recently, the Bayesian Occupancy Filter (BOF), which provides both an estimate of the likelihood of occupancy of each cell and a probabilistic estimate of the velocity of each cell in the grid, has been introduced and patented. This work presents the first experiments in using the BOF to fuse data obtained from stereo vision and multiple laser sensors on an intelligent vehicle platform. The paper describes the experimental platform and the approach to sensor fusion, and shows results from data captured in real traffic situations.
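
Underlying any such grid fusion is a per-cell Bayesian occupancy update, which the sketch below implements in log-odds form for two sensor-specific inverse models. The BOF additionally maintains a velocity distribution per cell, which this simplification omits; class and variable names are our own.

    import numpy as np

    class LogOddsGrid:
        """Per-cell Bayesian occupancy update in log-odds form. Only the static
        occupancy part is shown; the BOF additionally carries a per-cell
        velocity distribution, which is omitted here."""
        def __init__(self, shape, p_prior=0.5):
            self.L = np.full(shape, np.log(p_prior / (1.0 - p_prior)))

        def update(self, p_meas):
            """Fuse one inverse-sensor-model grid p(occupied | z)."""
            p = np.clip(p_meas, 1e-3, 1.0 - 1e-3)
            self.L += np.log(p / (1.0 - p))

        def probability(self):
            return 1.0 / (1.0 + np.exp(-self.L))

    # Fuse a stereo-vision grid and a laser grid defined over the same cells.
    grid = LogOddsGrid((100, 100))
    grid.update(np.full((100, 100), 0.5))    # uninformative vision frame
    grid.update(np.full((100, 100), 0.8))    # laser reports mostly occupied
    print(grid.probability()[0, 0])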

John-David Yoder, Mathias Perrollaz, Igor E. Paromtchik, Yong Mao, Christian Laugier
Towards Experimental Analysis of Challenge Scenarios in Robotics

We explore the idea of simulated experimental analysis for challenge scenarios in robotics using the search and secure problem from the Multi Autonomous Ground-robotic International Challenge (MAGIC). The MAGIC problem requires a team of heterogeneous robots to locate, classify, and secure a number of targets in an urban environment with indoor and outdoor areas. We introduce a framework for solving the coordination aspects of the challenge by providing guaranteed clearing strategies (i.e., strategies that ensure coming into contact with any adversarial target). The proposed method allows for repair of the clearing schedule after robot failure, as well as a fall-back strategy if clearing is no longer possible. We analyze scenarios taken directly from the competition, and we utilize repeated simulated trials to validate the hypothesis that strategies designed for locating worst-case targets tend to be more robust to failure than strategies designed for locating average-case targets. Thus, more conservative worst-case methods would tend to perform better if the competition were run many times. However, riskier average-case strategies may win in a single competition. These results demonstrate how insight can be gained from repeated simulated analysis of challenge scenarios in robotics.

Geoffrey A. Hollinger, Sanjiv Singh
Sensitivity of Task Space Performance to Null Space Control in Presence of Model Uncertainties

This paper investigates the sensitivity of task space tracking errors to null space tracking errors in the face of model uncertainties for several operational space controllers (OSCs). Under the same inaccurate robot model and digital control effects, experimental data on different OSCs indicate that the sensitivity of task space performance to the null space controller varies across OSCs, depending on how each controller makes use of the inaccurate robot model. Analysis of the error equations of the different OSCs reveals several reasons why some OSCs can give better performance than others. The discussion also suggests that the inaccurate model should be used in joint space rather than in task space, to avoid magnifying the model uncertainties through the kinematic chain.
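
For reference, a bare-bones operational space control law with a dynamically consistent null-space posture torque is sketched below; the inertia matrix M and nonlinear term h are exactly where an inaccurate model enters. This is a generic textbook form with the nonlinear term compensated in joint space, not any specific OSC evaluated in the paper, and the placeholder matrices are assumptions.

    import numpy as np

    def osc_torque(J, M, h, xdd_des, tau_posture):
        """Generic operational space control law with a dynamically consistent
        null-space posture torque. M and h come from the (possibly inaccurate)
        model; here the nonlinear term h is compensated in joint space."""
        M_inv = np.linalg.inv(M)
        Lambda = np.linalg.inv(J @ M_inv @ J.T)     # task-space inertia
        J_bar = M_inv @ J.T @ Lambda                # dynamically consistent inverse
        N_tau = np.eye(M.shape[0]) - J.T @ J_bar.T  # torque null-space projector
        return J.T @ (Lambda @ xdd_des) + h + N_tau @ tau_posture

    # Placeholder 6-joint arm with a 3-DOF positional task.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(6, 6))
    M = A @ A.T + 6.0 * np.eye(6)
    tau = osc_torque(J=rng.normal(size=(3, 6)), M=M, h=rng.normal(size=6),
                     xdd_des=np.array([0.1, 0.0, -0.2]), tau_posture=np.zeros(6))
    print(tau)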

Ngoc Dung Vuong, Chongyou Ma, Marcelo H. Ang Jr.
Backmatter
Metadata
Title
Experimental Robotics
Edited by
Oussama Khatib
Vijay Kumar
Gaurav Sukhatme
Copyright year
2014
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-28572-1
Print ISBN
978-3-642-28571-4
DOI
https://doi.org/10.1007/978-3-642-28572-1
