
About this Book

This volume presents a collection of papers from the 15th International Symposium on Robotics Research (ISRR). ISRR is the biennial meeting of the International Foundation of Robotics Research (IFRR); its 15th edition took place in Flagstaff, Arizona, from December 9 to December 12, 2011. As with previous symposia, ISRR 2011 followed the successful format of mixing invited contributions with open submissions: approximately half of the 37 contributions were invited from outstanding researchers selected by the IFRR officers and the program committee, and the other half were selected from the open submissions after peer review. This selection process resulted in a truly excellent technical program featuring some of the very best of robotics research.

The program was organized around oral presentations in a single-track format and, for the first time, included a small number of interactive presentations. The symposium contributions contained in this volume report on a variety of new robotics research results covering a broad spectrum, including perception, manipulation, grasping, vehicles and design, navigation, control and integration, and estimation and SLAM.

Table of Contents

Frontmatter

Aerial Vehicles

Frontmatter

Progress on “Pico” Air Vehicles

As the characteristic size of a flying robot decreases, the challenges for successful flight revert to basic questions of fabrication, actuation, fluid mechanics, stabilization, and power—whereas such questions have in general been answered for larger aircraft. When developing a flying robot on the scale of a common housefly, all hardware must be developed from scratch, as there is nothing “off-the-shelf” which can be used for mechanisms, sensors, or computation that would satisfy the extreme mass and power limitations. This technology void also applies to techniques available for fabrication and assembly of the aeromechanical components: the scale and complexity of the mechanical features require new ways to design and prototype at scales between macro and MEMS, but with the rich topologies and material choices one would expect in designing human-scale vehicles. With these challenges in mind, we present progress in the essential technologies for insect-scale robots, or “pico” air vehicles.

Robert J. Wood, Benjamin Finio, Michael Karpelson, Kevin Ma, Néstor O. Pérez-Arancibia, Pratheev S. Sreetharan, Hiro Tanaka, John P. Whitney

Aerial Locomotion in Cluttered Environments

Many environments where robots are expected to operate are cluttered with objects, walls, debris, and various horizontal and vertical structures. In this chapter, we present four design features that allow small robots to rapidly and safely move in three dimensions through cluttered environments: a perceptual system capable of detecting obstacles in the robot’s surroundings, including the ground, with minimal computation, mass, and energy requirements; a flexible and protective framework capable of withstanding collisions and even using collisions to learn about the properties of the surroundings when light is not available; a mechanism for temporarily perching on vertical structures in order to monitor the environment or communicate with other robots before taking off again; and a self-deployment mechanism for getting into the air and performing repetitive jumps or gliding flight. We conclude the chapter by suggesting future avenues for the integration of multiple features within the same robotic platform.

Dario Floreano, Jean-Christophe Zufferey, Adam Klaptocz, Jürg Germann, Mirko Kovac

Opportunities and Challenges with Autonomous Micro Aerial Vehicles

We survey the recent work on micro-UAVs, a fast-growing field in robotics, outlining the opportunities for research and applications, along with the scientific and technological challenges. Micro-UAVs can operate in three-dimensional environments, explore and map multi-story buildings, manipulate and transport objects, and even perform tasks such as assembly. While fixed-base industrial robots were the main focus in the first two decades of robotics, and mobile robots enabled most of the significant advances during the next two decades, it is likely that UAVs, and particularly micro-UAVs, will provide a major impetus for the third phase of development.

Vijay Kumar, Nathan Michael

Perception and Mapping

Frontmatter

Unsupervised 3D Object Discovery and Categorization for Mobile Robots

We present a method for mobile robots to learn the concept of objects and categorize them without supervision, using 3D point clouds from a laser scanner as input. In particular, we address the challenge of categorizing objects discovered in different scans without knowing the number of categories. The underlying object discovery algorithm finds objects per scan and gives them locally-consistent labels. To associate these object labels across all scans, we introduce a class graph, which encodes the relationships among local object class labels. Our algorithm finds the mapping from local class labels to global category labels by performing inference on this graph, and uses this mapping to assign the final category label to the discovered objects. We demonstrate on real data our algorithm’s ability to discover and categorize objects without supervision.

Jiwon Shin, Rudolph Triebel, Roland Siegwart

Probabilistic Collision Detection Between Noisy Point Clouds Using Robust Classification

We present a new collision detection algorithm to perform contact computations on noisy point cloud data. Our approach takes into account the uncertainty that arises due to discretization error and noise, and formulates collision checking as a two-class classification problem. We use techniques from machine learning to compute the collision probability for each point in the input data, and accelerate the computation using stochastic traversal of bounding volume hierarchies. We highlight the performance of our algorithm on point clouds captured using PR2 sensors as well as on synthetic data sets, and show that our approach can provide a fast and robust solution for handling uncertainty in contact computations.

Jia Pan, Sachin Chitta, Dinesh Manocha

Active Classification: Theory and Application to Underwater Inspection

We discuss the problem in which an autonomous vehicle must classify an object based on multiple views. We focus on the active classification setting, where the vehicle controls which views to select to best perform the classification. The problem is formulated as an extension to Bayesian active learning, and we show connections to recent theoretical guarantees in this area. We formally analyze the benefit of acting adaptively as new information becomes available. The analysis leads to a probabilistic algorithm for determining the best views to observe based on information theoretic costs. We validate our approach in two ways, both related to underwater inspection: 3D polyhedra recognition in synthetic depth maps and ship hull inspection with imaging sonar. These tasks encompass both the planning and recognition aspects of the active classification problem. The results demonstrate that actively planning for informative views can reduce the number of necessary views by up to 80 % when compared to passive methods.

Geoffrey A. Hollinger, Urbashi Mitra, Gaurav S. Sukhatme
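As a rough illustration of the information-theoretic view selection this abstract describes, the sketch below greedily picks the candidate view with the largest expected reduction in posterior entropy over object classes. The priors, observation models, and view names are invented for the example; this is not the authors' algorithm, only the generic greedy information-gain idea it builds on.

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def bayes_update(prior, likelihood):
    """Posterior over classes given one observation's likelihood vector."""
    post = [pr * li for pr, li in zip(prior, likelihood)]
    z = sum(post)
    return [p / z for p in post]

def expected_entropy_after(prior, view_model):
    """Expected posterior entropy if this view is taken.
    view_model[c][o] = P(observation o | class c) for the view."""
    n_obs = len(view_model[0])
    exp_h = 0.0
    for o in range(n_obs):
        lik = [view_model[c][o] for c in range(len(prior))]
        p_o = sum(pr * li for pr, li in zip(prior, lik))
        if p_o > 0:
            exp_h += p_o * entropy(bayes_update(prior, lik))
    return exp_h

def best_view(prior, view_models):
    """Pick the view with the largest expected entropy reduction."""
    h0 = entropy(prior)
    gains = {v: h0 - expected_entropy_after(prior, m)
             for v, m in view_models.items()}
    return max(gains, key=gains.get)

# Two hypothetical candidate views over two object classes:
# the "side" view is discriminative, the "top" view is not.
views = {
    "side": [[0.9, 0.1], [0.1, 0.9]],  # rows: classes, cols: observations
    "top":  [[0.5, 0.5], [0.5, 0.5]],
}
print(best_view([0.5, 0.5], views))  # → side
```

Acting adaptively, as the paper analyzes, would re-run this selection after every Bayesian update rather than fixing the view sequence in advance.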

The Importance of Structure

Many tasks in robotics and computer vision are concerned with inferring a continuous or discrete state variable from observations and measurements of the environment. Due to the high-dimensional nature of the input data, the inference is often cast as a two-stage process: first a low-dimensional feature representation is extracted, and then a learning algorithm is applied to it. Owing to the significant progress achieved within the field of machine learning over the last decade, focus has been placed on the second stage of the inference process, improving it by applying more advanced learning techniques to the same (or more of the same) data. We believe that for many scenarios, significant strides in performance could instead be achieved by focusing on the representation, rather than trying to compensate for inconclusive and/or redundant information with more advanced inference methods. This stems from the notion that, given the “correct” representation, the inference problem becomes easier to solve. In this paper we argue that one important mode of information for many application scenarios is not the actual variation in the data, but rather higher-order statistics such as the structure of the variations. We exemplify this through a set of applications and show different ways of representing the structure of data.

Carl Henrik Ek, Danica Kragic

Modular Design of Image Based Visual Servo Control for Dynamic Mechanical Systems

This paper presents a modular framework for design of image based visual servo control for fully actuated dynamic mechanical systems. The approach taken uses the formalism of port Hamiltonian systems to track energy exchanged between the mechanical system and virtual potentials or Hamiltonians associated with each image feature. Asymptotic stability of the system is guaranteed by injecting damping to the otherwise conservative system. A simple approach based on full state measurement is presented and then extended to deal with unmeasured relative depth of image features.

Robert Mahony

Force Sensing by Microrobot on a Chip

In this paper, we discuss force sensing by a microrobot, called a magnetically driven microtool (MMT), in a microfluidic chip. The on-chip force sensor is fabricated by assembling layers so that friction can be neglected, and it is actuated by permanent magnets, which supply forces on the order of mN to stimulate microorganisms. The displacement is magnified by beams designed into the force sensor, and the sensor achieves a resolution of 100 μN. We succeeded in on-chip stimulation and evaluation of Pleurosira laevis using the developed MMT with its force-sensing structure.

Tomohiro Kawahara, Fumihito Arai

Force Control and Reaching Movements on the iCub Humanoid Robot

This paper describes a layered controller for a complex humanoid robot: namely, the iCub. We exploit a combination of precomputed models and machine learning, following the principle of balancing design effort against the complexity of data collection for learning. A first layer uses the iCub sensors to implement impedance control, on top of which we plan trajectories to reach for visually identified targets while avoiding the most obvious joint limits and self-collisions of the robot arm and body. Modeling errors and misestimated parameters are compensated for by machine learning in order to obtain accurate pointing and reaching movements. Motion segmentation is the main visual cue employed by the robot.

Giorgio Metta, Lorenzo Natale, Francesco Nori, Giulio Sandini

Analytical Least-Squares Solution for 3D Lidar-Camera Calibration

This paper addresses the problem of estimating the intrinsic parameters of the 3D Velodyne lidar while at the same time computing its extrinsic calibration with respect to a rigidly connected camera. Existing approaches to solve this nonlinear estimation problem are based on iterative minimization of nonlinear cost functions. In such cases, the accuracy of the resulting solution hinges on the availability of a precise initial estimate, which is often not available. In order to address this issue, we divide the problem into two least-squares sub-problems, and analytically solve each one to determine a precise initial estimate for the unknown parameters. We further increase the accuracy of these initial estimates by iteratively minimizing a batch nonlinear least-squares cost function. In addition, we provide the minimal observability conditions, under which, it is possible to accurately estimate the unknown parameters. Experimental results consisting of photorealistic 3D reconstruction of indoor and outdoor scenes are used to assess the validity of our approach.

Faraz M. Mirzaei, Dimitrios G. Kottas, Stergios I. Roumeliotis

Tactile Object Recognition and Localization Using Spatially-Varying Appearance

In this work, we present a new method for doing object recognition using tactile force sensors that makes use of recent work on “tactile appearance” to describe objects by the spatially-varying appearance characteristics of their surface texture. The method poses recognition as a localization problem with a discrete component of the state representing object identity, allowing the application of sequential state estimation techniques from the mobile robotics literature. Ideas from geometric hashing approaches are incorporated to enable efficient updating of probabilities over object identity and pose. The method’s strong performance is demonstrated experimentally both in simulation and using physical sensors.

Zachary Pezzementi, Gregory D. Hager

The Antiparticle Filter—An Adaptive Nonlinear Estimator

We introduce the antiparticle filter (AF), a new type of recursive Bayesian estimator that is unlike the extended Kalman filter (EKF), the unscented Kalman filter (UKF), or the particle filter (PF). We show that for a classic robot localization problem, the AF can substantially outperform these other filters in some situations. The AF estimates the posterior distribution as an auxiliary-variable Gaussian, which gives an analytic formula using no random samples. It adaptively changes the complexity of the posterior distribution as the uncertainty changes: it is equivalent to the EKF when the uncertainty is low, while being able to represent non-Gaussian distributions as the uncertainty increases. The computation time can be much faster than that of a particle filter at the same accuracy. We have run simulated comparisons of two types of AF against the EKF, the iterative EKF, the UKF, an iterative UKF, and the PF, demonstrating that the AF can reduce the error to a consistently accurate value.

John Folkesson

Visual Odometry and Mapping for Autonomous Flight Using an RGB-D Camera

RGB-D cameras provide both a color image and per-pixel depth estimates. The richness of their data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on unreliable wireless links. We evaluate the effectiveness of our system for stabilizing and controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.

Albert S. Huang, Abraham Bachrach, Peter Henry, Michael Krainin, Daniel Maturana, Dieter Fox, Nicholas Roy

Efficient Planning in Non-Gaussian Belief Spaces and Its Application to Robot Grasping

The limited nature of robot sensors makes many important robotics problems partially observable. These problems may require the system to perform complex information-gathering operations. One approach to solving these problems is to create plans in belief space, the space of probability distributions over the underlying state of the system. The belief-space plan encodes a strategy for performing a task while gaining information as necessary. Most approaches to belief-space planning rely upon representing the belief state in a particular way (typically as a Gaussian). Unfortunately, this can lead to large errors between the assumed density representation of the belief state and the true belief state. This paper proposes a new sample-based approach to belief-space planning that has fixed computational complexity while allowing arbitrary implementations of Bayes filtering to be used to track the belief state. The approach is illustrated in the context of a simple example and compared to a prior approach. Then, we propose an application of the technique to an instance of the grasp synthesis problem, where a robot must simultaneously localize and grasp an object given initially uncertain object parameters by planning information-gathering behavior. Experimental results are presented that demonstrate the approach to be capable of actively localizing and grasping boxes that are presented to the robot in uncertain and hard-to-localize configurations.

Robert Platt, Leslie Kaelbling, Tomas Lozano-Perez, Russ Tedrake

Pose Graph Compression for Laser-Based SLAM

The pose graph is a central data structure in graph-based SLAM approaches. It encodes the poses of the robot during data acquisition as well as spatial constraints between them. The size of the pose graph has a direct influence on the runtime and the memory requirements of a SLAM system, since it is typically used to make data associations and within the optimization procedure. In this paper, we address the problem of efficient, information-theoretic compression of such pose graphs. The central question is which sensor measurements can be removed from the graph without losing too much information. Our approach estimates the expected information gain of laser measurements with respect to the resulting occupancy grid map. It allows us to restrict the size of the pose graph depending on the information that the robot acquires about the environment. Alternatively, we can enforce a maximum number of laser scans the robot is allowed to store, which results in an any-space SLAM system. Real-world experiments suggest that our approach efficiently reduces the growth of the pose graph while minimizing the loss of information in the resulting grid map.

Cyrill Stachniss, Henrik Kretzschmar
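To make the idea of information-driven scan selection concrete, here is a deliberately simplified sketch (not the authors' implementation): each scan is reduced to the set of occupancy-grid cells it observes, a scan's gain is the entropy those cells would lose, and scans are kept greedily until a budget is reached — the "maximum number of laser scans" variant mentioned in the abstract. The scan and cell identifiers are made up for the example.

```python
import math

def cell_entropy(p):
    """Binary entropy of one occupancy-grid cell with occupancy prob. p."""
    if p <= 0 or p >= 1:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def scan_info_gain(scan, grid):
    """Expected entropy reduction: in this toy model, cells a scan
    observes drop from their current uncertainty to certainty."""
    return sum(cell_entropy(grid.get(c, 0.5)) for c in scan)

def compress(scans, max_scans):
    """Greedily keep the max_scans most informative scans.
    scans: {scan_id: iterable of observed cell ids}."""
    grid = {}          # cell id -> occupancy probability (0.5 = unknown)
    kept = []
    candidates = dict(scans)
    while candidates and len(kept) < max_scans:
        sid = max(candidates, key=lambda s: scan_info_gain(candidates[s], grid))
        if scan_info_gain(candidates[sid], grid) == 0.0:
            break      # remaining scans add no information
        kept.append(sid)
        for c in candidates.pop(sid):
            grid[c] = 1.0   # toy model: observed cells become certain
    return kept

# Keep at most 3 scans; "c" duplicates "a" and is therefore dropped.
print(compress({"a": {1, 2, 3}, "b": {3, 4}, "c": {1, 2, 3}}, 3))  # → ['a', 'b']
```

The paper's actual gain estimate is computed against the full probabilistic grid map; the budgeted greedy loop above only illustrates why a redundant scan contributes (near) zero gain and can be discarded.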

Planning

Frontmatter

Demonstration-Guided Motion Planning

We present demonstration-guided motion planning (DGMP), a new framework for planning motions for personal robots to perform household tasks. DGMP combines the strengths of sampling-based motion planning and robot learning from demonstrations to generate plans that (1) avoid novel obstacles in cluttered environments, and (2) learn and maintain critical aspects of the motion required to successfully accomplish a task. Sampling-based motion planning methods are highly effective at computing paths from start to goal configurations that avoid obstacles, but task constraints (e.g. a glass of water must be held upright to avoid a spill) must be explicitly enumerated and programmed. Instead, we use a set of expert demonstrations and automatically extract time-dependent task constraints by learning low-variance aspects of the demonstrations, which are correlated with the task constraints. We then introduce multi-component rapidly-exploring roadmaps (MC-RRM), a sampling-based method that incrementally computes a motion plan that avoids obstacles and optimizes a learned cost metric. We demonstrate the effectiveness of DGMP using the Aldebaran Nao robot performing household tasks in a cluttered environment, including moving a spoonful of sugar from a bowl to a cup and cleaning the surface of a table.

Gu Ye, Ron Alterovitz

Learning from Experience in Manipulation Planning: Setting the Right Goals

In this paper, we describe a method of improving trajectory optimization based on predicting good initial guesses from previous experiences. In order to generalize to new situations, we propose a paradigm shift: predicting qualitative attributes of the trajectory that place the initial guess in the basin of attraction of a low-cost solution. We start with a key such attribute, the choice of a goal within a goal set that describes the task, and show the generalization capabilities of our method in extensive experiments on a personal robotics platform.

Anca D. Dragan, Geoffrey J. Gordon, Siddhartha S. Srinivasa

Planning Complex Inspection Tasks Using Redundant Roadmaps

The aim of this work is fast, automated planning of robotic inspections involving complex 3D structures. A model comprised of discrete geometric primitives is provided as input, and a feasible robot inspection path is produced as output. Our algorithm is intended for tasks in which 2.5D algorithms, which divide an inspection into multiple 2D slices, and segmentation-based approaches, which divide a structure into simpler components, are unsuitable. This degree of 3D complexity has been introduced by the application of autonomous in-water ship hull inspection; protruding structures at the stern (propellers, shafts, and rudders) are positioned in close proximity to one another and to the hull, and clearance is an issue for a mobile robot. A global, sampling-based approach is adopted, in which all the structures are simultaneously considered in planning a path. First, the state space of the robot is discretized by constructing a roadmap of feasible states; construction ceases when each primitive is observed by a specified number of states. Once a roadmap is produced, the set cover problem and traveling salesman problem are approximated in sequence to build a feasible inspection tour. We analyze the performance of this procedure in solving one of the most complex inspection planning tasks to date, covering the stern of a large naval ship, using an a priori triangle mesh model obtained from real sonar data and comprised of 100,000 primitives. Our algorithm generates paths on a par with dual sampling, with reduced computational effort.

Brendan Englot, Franz Hover
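The two combinatorial approximations named in the abstract — set cover over the roadmap states, then a traveling-salesman tour over the chosen views — can be sketched in a few lines. The toy version below uses greedy set cover and a nearest-neighbor tour; the view and primitive identifiers and 2D coordinates are invented for illustration and this is only the generic pipeline, not the authors' planner.

```python
def greedy_set_cover(primitives, views):
    """views: {view_id: set of primitive ids visible from that view}.
    Greedy approximation: repeatedly take the view that covers the most
    still-uncovered primitives."""
    uncovered = set(primitives)
    chosen = []
    while uncovered:
        best = max(views, key=lambda v: len(views[v] & uncovered))
        if not views[best] & uncovered:
            raise ValueError("some primitives cannot be covered")
        chosen.append(best)
        uncovered -= views[best]
    return chosen

def nearest_neighbor_tour(coords, start):
    """Approximate traveling-salesman tour over the chosen views.
    coords: {view_id: (x, y)}."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    tour = [start]
    remaining = set(coords) - {start}
    while remaining:
        cur = coords[tour[-1]]
        nxt = min(remaining, key=lambda v: dist(cur, coords[v]))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

# Five primitives, four candidate views; greedy cover picks A then C.
views = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {2}}
chosen = greedy_set_cover({1, 2, 3, 4, 5}, views)
order = nearest_neighbor_tour({"A": (0, 0), "C": (1, 0)}, chosen[0])
print(chosen, order)
```

In the paper, the "views" are feasible robot states sampled until every mesh primitive is seen a specified number of times, and the tour must additionally be connected by collision-free paths.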

Path Planning with Loop Closure Constraints Using an Atlas-Based RRT

In many relevant path planning problems, loop closure constraints reduce the configuration space to a manifold embedded in the higher-dimensional joint ambient space. Whereas much progress has been made on path planning in the presence of obstacles, few works consider loop closure constraints. In this paper we present the AtlasRRT algorithm, a planner specially tailored for such constrained systems that builds on recently developed tools for higher-dimensional continuation. These tools provide procedures to define charts that locally parametrize manifolds and to coordinate them to form an atlas. AtlasRRT simultaneously builds an atlas and a Rapidly-exploring Random Tree (RRT), using the atlas to sample relevant configurations for the RRT, and the RRT to devise directions of expansion for the atlas. The new planner is advantageous since samples obtained from the atlas allow a more efficient extension of the RRT than state-of-the-art approaches, where samples are generated in the joint ambient space.

Léonard Jaillet, Josep M. Porta

Decentralized Control for Optimizing Communication with Infeasible Regions

In this paper we present a decentralized gradient-based controller that optimizes communication between mobile aerial vehicles and stationary ground sensor vehicles in an environment with infeasible regions. The formulation of our problem as a mixed-integer quadratic program (MIQP) is easily implementable, and we show that the addition of a scaling matrix can improve the range of attainable converged solutions by influencing trajectories to move around infeasible regions. We demonstrate the robustness of the controller in 3D simulation with agent failure, and in 10 trials of a multi-agent hardware experiment with quadrotors and ground sensors in an indoor environment. Lastly, we provide analytical guarantees that our controller strictly minimizes a nonconvex cost along agent trajectories, a desirable property for general multi-agent coordination tasks.

Stephanie Gil, Samuel Prentice, Nicholas Roy, Daniela Rus

Pre-image Backchaining in Belief Space for Mobile Manipulation

There have been several recent approaches to planning and control in uncertain domains, based on online planning in a determinized approximation of the belief-space dynamics, and replanning when the actual belief state diverges from the predicted one. In this work, we extend this approach to planning for mobile manipulation tasks with very long horizons, using a hierarchical combination of logical and geometric representations. We present a novel approach to belief-space preimage backchaining with logical representations, an efficient method for on-line execution monitoring and replanning, and preliminary results on mobile manipulation tasks.

Leslie Pack Kaelbling, Tomás Lozano-Pérez

Realtime Informed Path Sampling for Motion Planning Search

Robot motions typically originate from an uninformed path sampling process such as random or low-dispersion sampling. We demonstrate an alternative approach to path sampling that closes the loop on the expensive collision-testing process. Although all necessary information for collision-testing a path is known to the planner, that information is typically stored in a relatively unavailable form in a costmap. By summarizing the most salient data in a more accessible form, our process delivers a denser sampling of the free space per unit time than open-loop sampling techniques. We obtain this result by probabilistically modeling—in real time and with minimal information—the locations of obstacles, based on collision test results. We demonstrate up to a 780 % increase in paths surviving collision test.

Ross A. Knepper, Matthew T. Mason

Asymptotically Near-Optimal Is Good Enough for Motion Planning

Asymptotically optimal motion planners guarantee that solutions approach the optimum as more iterations are performed. A recently proposed roadmap-based method, the PRM∗ approach, provides this desirable property while minimizing the computational cost of generating the roadmap. Even for this method, however, the roadmap can be slow to construct and quickly grows too large for storage or fast online query resolution. From graph theory, there are many algorithms that produce sparse subgraphs, known as spanners, which can guarantee near-optimal paths. In this work, a method for interleaving an incremental graph spanner algorithm with the asymptotically optimal PRM∗ algorithm is described. The result is an asymptotically near-optimal motion planning solution. Theoretical analysis and experiments performed on typical geometric motion planning instances show that large reductions in construction time, roadmap density, and online query resolution time can be achieved with a small sacrifice of path quality. If smoothing is applied, the results are even more favorable for the near-optimal solution.

James D. Marble, Kostas E. Bekris

Robust Adaptive Coverage for Robotic Sensor Networks

This paper presents a distributed control algorithm to drive a group of robots to spread out over an environment and provide adaptive sensor coverage of that environment. The robots use an on-line learning mechanism to approximate the areas in the environment which require more concentrated sensor coverage, while simultaneously exploring the environment before moving to final positions to provide this coverage. More precisely, the robots learn a scalar field, called the weighting function, representing the relative importance of different regions in the environment, and use a Traveling Salesperson based exploration method, followed by a Voronoi-based coverage controller to position themselves for sensing over the environment. The algorithm differs from previous approaches in that provable robustness is emphasized in the representation of the weighting function. It is proved that the robots approximate the weighting function with a known bounded error, and that they converge to locations that are locally optimal for sensing with respect to the approximate weighting function. Simulations using empirically measured light intensity data are presented to illustrate the performance of the method.

Mac Schwager, Michael P. Vitus, Daniela Rus, Claire J. Tomlin
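The Voronoi-based coverage controller mentioned in this abstract belongs to the family of Lloyd-type algorithms. A discretized single iteration might look like the sketch below, where the weighting function stands in for the learned importance field; all names, the sample-grid discretization, and the step gain are illustrative assumptions, not details from the paper.

```python
def coverage_step(robots, points, weight, gain=0.5):
    """One discrete Lloyd iteration: assign sample points to the nearest
    robot (a discretized Voronoi partition), then move each robot a step
    toward the weighted centroid of its cell.
    robots: list of (x, y); points: list of (x, y) samples of the
    environment; weight: function (x, y) -> importance."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    # Voronoi assignment of the sample points
    cells = [[] for _ in robots]
    for p in points:
        i = min(range(len(robots)), key=lambda i: d2(robots[i], p))
        cells[i].append(p)
    # move each robot toward its cell's weighted centroid
    new_robots = []
    for r, cell in zip(robots, cells):
        m = sum(weight(*p) for p in cell)
        if m == 0:
            new_robots.append(r)  # empty or zero-weight cell: stay put
            continue
        cx = sum(weight(*p) * p[0] for p in cell) / m
        cy = sum(weight(*p) * p[1] for p in cell) / m
        new_robots.append((r[0] + gain * (cx - r[0]),
                           r[1] + gain * (cy - r[1])))
    return new_robots

# one step for two robots on a uniform 5x5 sample grid of the unit square
pts = [(x / 4, y / 4) for x in range(5) for y in range(5)]
print(coverage_step([(0.2, 0.5), (0.8, 0.5)], pts, lambda x, y: 1.0))
```

The paper's contribution lies in what this sketch takes as given: the weighting function is not known in advance but learned online with a provable approximation bound, and exploration is interleaved with the coverage motion.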

A Multi-robot Control Policy for Information Gathering in the Presence of Unknown Hazards

This paper addresses the problem of deploying a network of robots to gather information in an environment, where the environment is hazardous to the robots. This may mean that there are adversarial agents in the environment trying to disable the robots, or that some regions of the environment tend to make the robots fail, for example due to radiation, fire, adverse weather, or caustic chemicals. A probabilistic model of the environment is formulated, under which recursive Bayesian filters are used to estimate the environment events and hazards online. The robots must control their positions both to avoid sensor failures and to provide useful sensor information by following the analytical gradient of mutual information computed using these online estimates. Mutual information is shown to combine the competing incentives of avoiding failure and collecting informative measurements under a common objective. Simulations demonstrate the performance of the algorithm.

Mac Schwager, Philip Dames, Daniela Rus, Vijay Kumar

Motion Planning Under Uncertainty Using Differential Dynamic Programming in Belief Space

We present an approach to motion planning under motion and sensing uncertainty, formally described as a continuous partially-observable Markov decision process (POMDP). Our approach is designed for non-linear dynamics and observation models, and follows the general POMDP solution framework in which we represent beliefs by Gaussian distributions, approximate the belief dynamics using an extended Kalman filter (EKF), and represent the value function by a quadratic function that is valid in the vicinity of a nominal trajectory through belief space. Using a variant of differential dynamic programming, our approach iterates with second-order convergence towards a linear control policy over the belief space that is locally optimal with respect to a user-defined cost function. Unlike previous work, our approach does not assume maximum-likelihood observations, does not assume fixed estimator or control gains, takes into account obstacles in the environment, and does not require discretization of the belief space. The running time of the algorithm is polynomial in the dimension of the state space. We demonstrate the potential of our approach in several continuous partially-observable planning domains with obstacles for robots with non-linear dynamics and observation models.

Jur van den Berg, Sachin Patil, Ron Alterovitz

Systems and Integration

Frontmatter

Rosbridge: ROS for Non-ROS Users

We present rosbridge, a middleware abstraction layer which provides robotics technology with a standard, minimalist application development framework accessible to applications programmers who are not themselves roboticists. Rosbridge provides simple, socket-based programmatic access to robot interfaces and algorithms provided (for now) by ROS, the open-source “Robot Operating System”, the current state of the art in robot middleware. In particular, it facilitates the use of web technologies such as Javascript for the purpose of broadening the use and usefulness of robotic technology. We demonstrate potential applications in interface design, education, human-robot interaction, and remote laboratory environments.

Christopher Crick, Graylin Jay, Sarah Osentoski, Benjamin Pitzer, Odest Chadwicke Jenkins

Introduction to the Robot Town Project and 3-D Co-operative Geometrical Modeling Using Multiple Robots

This paper introduces the authors’ research project, the “Robot Town Project”. Service robots, which co-exist with humans and provide various services in daily life, must have sufficient ability to sense changes in the environment and deal with a variety of situations. However, since the daily environment is complex and unpredictable, it is almost impossible with current methods to sense all the necessary information using only a robot and its attached sensors. One promising approach for robots to co-exist with humans is to use information technology, such as a distributed sensor network and network robotics. As an empirical example of this approach, the authors have started the Robot Town Project. The aim of this research project is to develop a distributed sensor network system covering a town block containing many houses, buildings, and roads, and to manage robot services by monitoring events that occur in the town. This paper introduces currently available technologies, including an RFID-tag-based localization system, distributed sensor systems for moving-object tracking, and object management systems using RFID tags. For the construction of 3-D geometrical models of large-scale environments, a measurement and modeling system using a group of multiple robots and an on-board laser range finder is also introduced.

Ryo Kurazume, Yumi Iwashita, Koji Murakami, Tsutomu Hasegawa

Soft Mobile Robots with On-Board Chemical Pressure Generation

We wish to develop robot systems that are increasingly more elastic, as a step towards bridging the gap between man-made machines and their biological counterparts. To this end, we develop soft actuators fabricated from elastomer films with embedded fluidic channels. These actuators offer safety and adaptability and may potentially be utilized in robotics, wearable tactile interfaces, and active orthoses or prostheses. The expansion of fluidic channels under pressure creates a bending moment on the actuators and their displacement response follows theoretical predictions. Fluidic actuators require a pressure source, which limits their mobility and mainstream usage. This paper considers instances of mechanisms made from distributed elastomer actuators to generate motion using a chemical means of pressure generation. A mechanical feedback loop controls the chemical decomposition of hydrogen peroxide into oxygen gas in a closed container to self-regulate the actuation pressure. This on-demand pressure generator, called the pneumatic battery, bypasses the need for electrical energy by the direct conversion of chemical to mechanical energy. The portable pump can be operated in any orientation and is used to supply pressure to an elastomeric rolling mobile robot as a representative for a family of soft robots.

Cagdas D. Onal, Xin Chen, George M. Whitesides, Daniela Rus
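The self-regulation idea in the abstract above can be conveyed with a toy simulation: when pressure falls below a setpoint, the regulator admits reactant (generating gas); above it, generation stops. The paper's pneumatic battery realizes this with a mechanical feedback loop; the function, names, and constants below are invented purely for illustration.

```python
def simulate(setpoint, steps, gen_rate=5.0, leak_rate=1.0):
    """Toy bang-bang pressure regulator: one update per time step."""
    pressure, history = 0.0, []
    for _ in range(steps):
        valve_open = pressure < setpoint           # mechanical valve state
        if valve_open:
            pressure += gen_rate                   # H2O2 -> O2 raises pressure
        pressure = max(0.0, pressure - leak_rate)  # actuator consumption/leak
        history.append(pressure)
    return history
```

After a short transient the simulated pressure oscillates in a band around the setpoint, which is the on-demand, self-regulating behavior the paper attributes to the pneumatic battery.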

Computational Human Model as Robot Technology

The study of computational approaches to human understanding has been central to the history of artificial intelligence. Developments in robotics algorithms and software have provided powerful research tools that were not available when the study of intelligence started from unembodied frameworks. The computational human model is a large field of research; the author and his colleagues have focused on behavioral modeling and anatomical modeling. The aims of research on human modeling are two sides of a coin. One side is to develop the technological foundation to predict human behaviors, including utterances, for robots communicating with humans. The other side is to develop quantitative methods to estimate the internal states of humans. The former is directly connected to the development of robotic applications in aging societies. The latter finds fields of application in medicine, rehabilitation, pathology, gerontology, development, and sports science. This paper surveys the recent research of the author’s group on the anatomical approach to computational human modeling.

Yoshihiko Nakamura

Control

Frontmatter

Grasping and Fixturing as Submodular Coverage Problems

Grasping and fixturing are concerned with immobilizing objects. Most prior work in this area strives to minimize the number of contacts needed. However, for delicate objects or surfaces such as glass or bone (in medical applications), extra contacts can be used to reduce the forces needed at each contact to resist applied wrenches. We focus on the following class of problems. Given a polyhedral object model, a set of candidate contacts, and a limit on the sum of applied forces at the contacts (or a limit on any individual applied force), compute a set of k contact points that maximizes the radius of the ball in wrench space that can be resisted. We present an algorithm, SatGrasp, that is guaranteed to find near-optimal solutions in linear time. At the core of our approach are (i) an alternate formulation of the residual radius objective, and (ii) the insight that the resulting problem is a submodular coverage problem. This allows us to exploit the submodular saturation algorithm, which has recently been derived for applications in sensor placement. Our approach is applicable in situations with or without friction.

John D. Schulman, Ken Goldberg, Pieter Abbeel
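The submodular-coverage flavor of the problem above can be sketched with a simple greedy selection. Note the hedge: SatGrasp itself relies on the submodular saturation algorithm, not plain greedy; the candidate names and toy coverage sets below are invented for illustration only.

```python
def greedy_coverage(candidates, k, gain):
    """Pick k candidates, each time taking the one with the largest
    marginal gain under the (assumed submodular) objective `gain`."""
    chosen = []
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: gain(chosen + [c]) - gain(chosen))
        chosen.append(best)
    return chosen

# Toy objective: each candidate contact "covers" a set of wrench-space
# directions; the objective counts directions covered so far.
coverage_sets = {
    "c1": {1, 2, 3},
    "c2": {3, 4},
    "c3": {4, 5, 6},
}

def covered(chosen):
    if not chosen:
        return 0
    return len(set.union(*(coverage_sets[c] for c in chosen)))
```

Submodularity (diminishing marginal gains) is what gives greedy-style selection its near-optimality guarantees, which is the structural insight the paper exploits.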

A Unified Perturbative Dynamics Approach to Online Vehicle Model Identification

The motions of wheeled mobile robots are governed by non-contact gravity forces and contact forces between the wheels and the terrain. Inasmuch as future wheel-terrain interactions are unpredictable and unobservable, high performance autonomous vehicles must ultimately learn the terrain by feel and extrapolate, just as humans do. We present an approach to the automatic calibration of dynamic models of arbitrary wheeled mobile robots on arbitrary terrain. Inputs beyond our control (disturbances) are assumed to be responsible for observed differences between what the vehicle was initially predicted to do and what it was subsequently observed to do. In departure from much previous work, and in order to directly support adaptive and predictive controllers, we concentrate on the problem of predicting candidate trajectories rather than measuring the current slip. The approach linearizes the nominal vehicle model and then calibrates the perturbative dynamics to explain the observed prediction residuals. Both systematic and stochastic disturbances are used, and we model these disturbances as functions over the terrain, the velocities, and the applied inertial and gravitational forces. In this way, we produce a model which can be used to predict behavior across all of state space for arbitrary terrain geometry. Results demonstrate that the approach converges quickly and produces marked improvements in the prediction of trajectories for multiple vehicle classes throughout the performance envelope of the platform, including during aggressive maneuvering.

Neal Seegmiller, Forrest Rogers-Marcovitz, Greg Miller, Alonzo Kelly
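The core calibration idea above — explain observed prediction residuals as a function of vehicle state, then add that learned perturbation back onto the nominal model — can be sketched in one dimension. The paper's perturbative model is far richer (terrain, velocities, applied inertial and gravitational forces); this least-squares fit of residual versus a single hypothetical feature (speed) only illustrates "learning the terrain by feel".

```python
def fit_residual_model(speeds, residuals):
    """Least-squares fit of residual ~ a*speed + b via the normal equations."""
    n = len(speeds)
    sx = sum(speeds); sy = sum(residuals)
    sxx = sum(v * v for v in speeds)
    sxy = sum(v * r for v, r in zip(speeds, residuals))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def corrected_prediction(nominal, speed, model):
    """Nominal model output plus the calibrated perturbation."""
    a, b = model
    return nominal + a * speed + b
```

As more (state, residual) pairs are observed, the fit converges and the corrected predictions improve, mirroring the online-convergence behavior the paper reports.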

Prediction and Planning Methods of Bipedal Dynamic Locomotion Over Very Rough Terrains

Although the problem of dynamic locomotion in very rough terrain is critical to the advancement of various areas in robotics and health devices, little progress has been made on generalizing gait behavior with arbitrary paths. Here, we report that perturbation theory, a set of approximation schemes that has roots in celestial mechanics and non-linear dynamical systems, can be adapted to predict state-space trajectories of a robot’s center of mass (CoM) that are not integrable in closed form, given its arbitrary contact state and CoM geometric path. Given an arbitrary geometric path of the CoM and known step locations, we use perturbation theory to determine phase curves of CoM behavior. We determine step transitions as the points of intersection between adjacent phase curves. To discover intersection points, we fit polynomials to the phase curves of neighboring steps and solve for the roots of their difference. The resulting multi-step phase diagram is the locomotion plan suited to drive the behavior of a robot or device maneuvering in rough terrain. We provide two main contributions to legged locomotion: (1) predicting CoM state-space behavior for arbitrary paths by means of numerical integration, and (2) finding step transitions by locating common intersection points between neighboring phase curves. Because these points are continuous in phase, they correspond to the desired contact switching policy. We validate our results on a human-size avatar navigating in a very rough environment and compare its behavior to a human subject maneuvering through the same terrain.

Luis Sentis, Benito R. Fernandez, Michael Slovich
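The step-transition search above — fit polynomials to two neighboring phase curves and intersect them — can be sketched with quadratics. The paper fits curves obtained by numerically integrating the CoM dynamics; the two toy curves below are invented just to show the root-of-difference computation.

```python
import math

def quad_through(p0, p1, p2):
    """Coefficients (a, b, c) of the quadratic through three (x, v) points."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    b = (x2 * x2 * (y0 - y1) + x1 * x1 * (y2 - y0) + x0 * x0 * (y1 - y2)) / denom
    c = (x1 * x2 * (x1 - x2) * y0 + x2 * x0 * (x2 - x0) * y1
         + x0 * x1 * (x0 - x1) * y2) / denom
    return a, b, c

def intersection(c1, c2):
    """Real roots of the difference polynomial c1 - c2: the candidate
    contact-switch points between the two fitted phase curves."""
    a = c1[0] - c2[0]; b = c1[1] - c2[1]; c = c1[2] - c2[2]
    if abs(a) < 1e-12:                 # difference degenerates to linear
        return [-c / b]
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                      # curves do not intersect
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])
```

Chaining such intersections across consecutive steps yields the multi-step phase diagram the paper uses as the locomotion plan.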

Autonomous Navigation of a Humanoid Robot Over Unknown Rough Terrain

The present paper describes the integration of laser-based perception, footstep planning, and walking control of a humanoid robot for navigation over previously unknown rough terrain. A perception system that obtains the shape of the surrounding environment to an accuracy of a few centimeters is realized based on input obtained using a scanning laser range sensor. A footstep planner decides the sequence of stepping positions using the obtained terrain shape. A walking controller that can cope with a few centimeters of error in terrain shape measurement is achieved by combining online walking pattern generation at a 40 ms cycle with a sensor-feedback ground reaction force controller. An operation interface that was developed to send commands to the robot is also presented. A mixed-reality display is adopted in order to realize intuitive interfaces. The navigation system is implemented on the HRP-2, a full-size humanoid robot. The performance of the proposed system for navigation over unknown rough terrain is investigated through several experiments.

Koichi Nishiwaki, Joel Chestnutt, Satoshi Kagami

Hybrid System Identification via Switched System Optimal Control for Bipedal Robotic Walking

While the goal of robotic bipedal walking to date has been the development of anthropomorphic gait, the community as a whole has been unable to agree upon an appropriate model to generate such gait. In this paper, we describe a method to segment human walking data in order to generate a robotic model capable of human-like walking. Generating the model requires the determination of the sequence of contact point enforcements which requires solving a combinatorial scheduling problem. We resolve this problem by transforming the detection of contact point enforcements into a constrained switched system optimal control problem for which we develop a provably convergent algorithm. We conclude the paper by illustrating the performance of the algorithm on identifying a model for robotic bipedal walking.

Ram Vasudevan