
2007 | Book

Robotics Research

Results of the 12th International Symposium ISRR

Edited by: Dr. Sebastian Thrun, Dr. Rodney Brooks, Dr. Hugh Durrant-Whyte

Publisher: Springer Berlin Heidelberg

Book series: Springer Tracts in Advanced Robotics


About this book

Robotics is undergoing a major transformation in scope and dimension. From a largely dominant industrial focus, robotics is rapidly expanding into human environments and vigorously engaged in its new challenges. Interacting with, assisting, serving, and exploring with humans, the emerging robots will increasingly touch people and their lives. The Springer Tracts in Advanced Robotics (STAR) is devoted to bringing to the research community the latest advances in the robotics field on the basis of their significance and quality. Through a wide and timely dissemination of critical research developments in robotics, our objective with this series is to promote more exchanges and collaborations among the researchers in the community and contribute to further advancements in this rapidly growing field. As one of robotics' pioneering symposia, the International Symposium on Robotics Research (ISRR) has established over the past two decades some of the field's most fundamental and lasting contributions. Since the launching of STAR, ISRR and several other thematic symposia in robotics have found an important platform for closer links and extended reach within the robotics community. This twelfth edition of Robotics Research, edited by Sebastian Thrun, Rodney Brooks, and Hugh Durrant-Whyte, offers in its 14-part volume a collection of contributions spanning a broad range of topics in robotics. The content of these contributions provides a wide coverage of the current state of robotics research: the advances and challenges in its theoretical foundation and technology basis, and the developments in its traditional and novel areas of applications.

Table of Contents

Frontmatter

Physical Human Robot Interaction and Haptics

Frontmatter
Session Overview Physical Human-Robot Interaction and Haptics

Machines and robots in the near future will share environments with humans and will often come directly in touch with them. This is set to happen in several application domains, including domestic applications (domotics), entertainment, assistance, cooperative manipulation tasks, teleoperation, human augmentation, haptic interfaces, and exoskeletons. Physical Human-Robot Interaction (pHRI) poses many challenges, which can be summarized by the dichotomy safety vs. performance. The first and foremost concern, indeed, is that the robot must not hurt humans, either directly or indirectly, whether in regular operation or in failure. Second, the machine is expected to perform its tasks swiftly and effectively in the service of humans.

Antonio Bicchi, Yoshihiko Nakamura
A Unified Passivity Based Control Framework for Position, Torque and Impedance Control of Flexible Joint Robots

In this paper we describe a general passivity based framework for the control of flexible joint robots. Herein the recent DLR results on torque, position, and impedance control of flexible joint robots are summarized, and the relations between the individual contributions are highlighted. It is shown that an inner torque feedback loop can be incorporated into a passivity based analysis by interpreting torque feedback in terms of shaping of the motor inertia. This result, which was already implicitly contained in our earlier works on torque and position control, can also be exploited for the design of impedance controllers. For impedance control, furthermore, potential shaping is of special interest. It is shown how, based only on the motor angles, a potential function can be designed which simultaneously incorporates gravity compensation and a desired Cartesian stiffness relation for the link angles.

Alin Albu-Schäffer, Christian Ott, Gerd Hirzinger
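To make the motor-inertia-shaping interpretation above concrete, here is a hedged sketch using a standard flexible-joint model; the notation (link angles q, motor angles θ, joint stiffness K, motor inertia B, shaped inertia B_θ, intermediate input u) is assumed for illustration rather than taken from the paper:

\begin{align*}
B\,\ddot{\theta} + \tau &= \tau_m, \qquad \tau = K(\theta - q),\\
\tau_m &= B B_\theta^{-1}\, u + \bigl(I - B B_\theta^{-1}\bigr)\,\tau
\quad\Longrightarrow\quad
B_\theta\,\ddot{\theta} + \tau = u .
\end{align*}

Read this way, the inner torque loop simply replaces the motor inertia B by the smaller B_θ, which is why it can be absorbed into a passivity-based analysis and combined with a motor-angle potential realizing gravity compensation and a desired Cartesian stiffness.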
Wave Haptics: Encoderless Virtual Stiffnesses

Haptic rendering commonly implements virtual springs using DC motors with current amplifiers and encoder-based position feedback. In these schemes, quantization, discretization, and delays all impose performance limits. Meanwhile the amplifiers try to cancel the electrical motor dynamics, which are actually beneficial to the haptic display.

Günter Niemeyer, Nicola Diolaiti, Neal Tanner
Reality-Based Estimation of Needle and Soft-Tissue Interaction for Accurate Haptic Feedback in Prostate Brachytherapy Simulation

Prostate brachytherapy is the implantation of radioactive seeds into the prostate as a treatment for prostate cancer. The success rate of the procedure is directly related to the physician's level of experience. In addition, minor deviations in seed alignment caused by gland compression/retraction, gland edema (swelling), and needle deflection can create significant areas of over- or under-dosage to the gland and/or injury to surrounding nerves and organs, leading to increased morbidity. Therefore, reducing brachytherapy complication rates will depend on better training tools that improve the accuracy of needle guidance and of seed deployment within the prostate gland. Using two C-arm fluoroscopes, we propose a novel, reality-based approach for estimating needle and soft-tissue interaction, with the aim of eventually developing an accurate seed-placement training simulator with haptic feedback for prostate brachytherapy. By recording implanted fiducial movement and needle-soft tissue interaction forces, we can: extract the local effective modulus during puncture events, quantify tissue deformation, obtain an approximate cutting force, and build a finite element model to provide accurate haptic feedback in the training simulator for needle insertion tasks.

James T. Hing, Ari D. Brooks, Jaydev P. Desai
Haptic Virtual Fixtures for Robot-Assisted Manipulation

Haptic virtual fixtures are software-generated force and position signals applied to human operators in order to improve the safety, accuracy, and speed of robot-assisted manipulation tasks. Virtual fixtures are effective and intuitive because they capitalize on both the accuracy of robotic systems and the intelligence of human operators. In this paper, we discuss the design, analysis, and implementation of two categories of virtual fixtures: guidance virtual fixtures, which assist the user in moving the manipulator along desired paths or surfaces in the workspace, and forbidden-region virtual fixtures, which prevent the manipulator from entering into forbidden regions of the workspace. Virtual fixtures are analyzed in the context of both cooperative manipulation and telemanipulation systems, considering issues related to stability, passivity, human modeling, and applications.

Jake J. Abbott, Panadda Marayong, Allison M. Okamura
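As a hedged illustration of the forbidden-region idea described above (not the authors' implementation; the half-space geometry, gain value, and function name are assumptions), a minimal rendering sketch might look like this:

import numpy as np

def forbidden_region_force(tool_pos, plane_point, plane_normal, stiffness=500.0):
    """Return a restoring force if the tool penetrates the forbidden half-space.

    The forbidden region is the half-space on the negative side of the plane
    defined by plane_point and plane_normal.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    penetration = np.dot(plane_point - tool_pos, n)  # > 0 when inside the region
    if penetration <= 0.0:
        return np.zeros(3)               # outside: the fixture is transparent
    return stiffness * penetration * n   # push the tool back out along the normal

# Example: forbidden region is everything below the z = 0 plane.
f = forbidden_region_force(np.array([0.1, 0.0, -0.002]),
                           np.array([0.0, 0.0, 0.0]),
                           np.array([0.0, 0.0, 1.0]))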

Planning

Frontmatter
Session Overview Planning

When we discuss autonomous robots, we think of robots that move around, interacting with people and making changes in the world. The problem of actually choosing motor commands to achieve high-level goals — such as moving to a desired destination or answering a query from a human — typically involves planning. Planning is of course one of the central questions of artificial intelligence, and the planning field has moved a long way from the early days when planning meant searching for a sequence of abstract actions that satisfied some symbolic predicate. Robots can now learn their own representations through statistical inference procedures, they can reason using different representations, and they can reason in worlds where actions have stochastic outcomes.

Nicholas Roy, Roland Siegwart
POMDP Planning for Robust Robot Control

POMDPs provide a rich framework for planning and control in partially observable domains. Recent new algorithms have greatly improved the scalability of POMDPs, to the point where they can be used in robot applications. In this paper, we describe how approximate POMDP solving can be further improved by the use of a new theoretically-motivated algorithm for selecting salient information states. We present the algorithm, called PEMA, demonstrate competitive performance on a range of navigation tasks, and show how this approach is robust to mismatches between the robot’s physical environment and the model used for planning.

Joelle Pineau, Geoffrey J. Gordon
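The abstract does not spell out PEMA itself, but every POMDP planner rests on the Bayesian belief update over hidden states; a minimal sketch with made-up transition and observation tables is:

import numpy as np

def belief_update(b, a, o, T, O):
    """Bayes filter over hidden states: b'(s') is proportional to O[a][o, s'] * sum_s T[a][s, s'] * b(s)."""
    predicted = b @ T[a]           # prediction step: sum_s b(s) P(s' | s, a)
    updated = O[a][o] * predicted  # correction step: weight by observation likelihood P(o | s', a)
    return updated / updated.sum()

# Two states, one action, two observations (illustrative numbers only).
T = {0: np.array([[0.9, 0.1],
                  [0.2, 0.8]])}
O = {0: np.array([[0.7, 0.3],    # P(o=0 | s'=0), P(o=0 | s'=1)
                  [0.3, 0.7]])}  # P(o=1 | s'=0), P(o=1 | s'=1)
b = np.array([0.5, 0.5])
b = belief_update(b, a=0, o=1, T=T, O=O)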
On the Probabilistic Foundations of Probabilistic Roadmap Planning

Why are probabilistic roadmap (PRM) planners “probabilistic”? This paper tries to establish the probabilistic foundations of PRM planning and reexamines previous work in this context. It shows that the success of PRM planning depends mainly and critically on the assumption that the configuration space C of a robot often satisfies favorable “visibility” properties that are not directly dependent on the dimensionality of C. A promising way of speeding up PRM planners is to extract partial knowledge of such properties during roadmap construction and to use this knowledge to adjust the sampling measure continuously. This paper also shows that the choice of the sampling source—pseudo-random or deterministic—has only a small impact on a PRM planner’s performance, compared to that of the sampling measure. These conclusions are supported by both theoretical arguments and empirical results.

David Hsu, Jean-Claude Latombe, Hanna Kurniawati
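To make the notions of sampling measure and visibility concrete, here is a minimal PRM construction sketch in a toy 2-D configuration space (the circular-obstacle collision test, connection radius, and uniform sampling measure are illustrative assumptions):

import numpy as np

def collision_free(q, obstacles, radius=0.1):
    """A configuration is free if it lies outside every circular obstacle (toy C-space)."""
    return all(np.linalg.norm(q - c) > r + radius for c, r in obstacles)

def edge_free(q1, q2, obstacles, step=0.01):
    """Check straight-line 'visibility' between two configurations by dense sampling."""
    n = max(2, int(np.linalg.norm(q2 - q1) / step))
    return all(collision_free(q1 + t * (q2 - q1), obstacles)
               for t in np.linspace(0.0, 1.0, n))

def build_prm(n_samples, obstacles, connect_radius=0.3, rng=np.random.default_rng(0)):
    nodes = [q for q in rng.uniform(0, 1, size=(n_samples, 2))  # uniform sampling measure
             if collision_free(q, obstacles)]
    edges = [(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
             if np.linalg.norm(nodes[i] - nodes[j]) < connect_radius
             and edge_free(nodes[i], nodes[j], obstacles)]
    return nodes, edges

nodes, edges = build_prm(200, obstacles=[(np.array([0.5, 0.5]), 0.2)])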

Humanoids

Frontmatter
Session Overview Humanoids

The epoch of humanoid robotics began with the astonishing unveiling of the Honda P2 in 1996, and in the first years of the decade the focus of interest in the field was the motion control of humanoid robots as well as the development of the hardware. Reliable hardware with a minimum level of mobility can serve as a research platform for humanoid robotics, much as mobile robot platforms like Nomad have. Several research platforms are currently available, including HRP-2 with the software platform OpenHRP and the HOAP series, and interest in humanoid robotics now spreads over various topics: intelligence, interaction with humans, and use as a tool for cognitive science. The state of the art of humanoid robotics has arrived at roughly the level of mobile robot technology at the beginning of the 1980s, and every aspect of robotics is now expected to be integrated on humanoid robots.

Hirohisa Hirukawa
Humanoid HRP2-DHRC for Autonomous and Interactive Behavior

Recently, research on humanoid-type robots has become increasingly active, and a broad array of fundamental issues are under investigation. However, in order to achieve a humanoid robot which can operate in human environments, not only the fundamental components themselves, but also the successful integration of these components will be required. At present, almost all humanoid robots that have been developed have been designed for bipedal locomotion experiments. In order to satisfy the functional demands of locomotion as well as high-level behaviors, humanoid robots require good mechanical design, hardware, and software which can support the integration of tactile sensing, visual perception, and motor control. Autonomous behaviors are currently still very primitive for humanoid-type robots. It is difficult to conduct research on high-level autonomy and intelligence in humanoids due to the development and maintenance costs of the hardware. We believe low-level autonomous functions will be required in order to conduct research on higher-level autonomous behaviors for humanoids.

S. Kagami, K. Nishiwaki, J. Kuffner, S. Thompson, J. Chestnutt, M. Stilman, P. Michel
Android Science
Toward a New Cross-Interdisciplinary Framework

In the evaluation of interactive robots, the performance measures are the subjective impressions of the human subjects who interact with the robot and their unconscious reactions, such as synchronized human behaviors during the interactions and eye movements.

Hiroshi Ishiguro
Mimetic Communication Theory for Humanoid Robots Interacting with Humans

The theory of behavioral communication for humanoid robots that interact with humans is discussed in this paper. For behavioral communication, it is fundamental for a humanoid robot to recognize the meaning of the whole-body motion of a human. According to previous work, this can be done at the symbolic level by adopting the proto-symbol space defined by Hidden Markov Models based on the mimesis theory. The generation of robot motions from the proto-symbols can be done in the same framework. In this paper, we first introduce the meta proto-symbols, which stochastically represent, and become signifiants of, the interaction between a robot and a human. The meta proto-symbols are a somewhat more abstract analogue of the proto-symbols and recognize/generate the relationship between the two. A hypothesis is then proposed as the principle of fundamental communication. Experimental results follow.

Yoshihiko Nakamura, Wataru Takano, Katsu Yamane

Mechanism and Design

Frontmatter
Session Overview Mechanisms and Design

The science of robot mechanisms must be understood as acquiring an in-depth understanding of the mechanical behavior of a robot; it involves domains such as kinematics, dynamics and singularity analysis. Two issues must be addressed:

analysis: determine all the mechanical properties of a given robot that are necessary to control it and to verify that its behavior will satisfy a given set of requirements;

synthesis: given a set of requirements, determine the mechanical arrangement and the dimensioning of the robot. Synthesis is in general a much more complex issue than analysis.

The study of robot mechanisms and of their design is a fundamental and exciting part of robotic science, as the mechanical part of the robot will ultimately condition what the robot can perform in terms of tasks and will drastically influence control issues.

Jean-Pierre Merlet
Design of a Compact 6-DOF Haptic Device to Use Parallel Mechanisms

We present the design of a compact haptic device that utilizes parallel mechanisms. The design realizes a large workspace of orientational motion within a compact device volume. The device is a parallel-serial mechanism consisting of a modified DELTA mechanism for translational motion and a spatial five-bar gimbal mechanism for orientational motion. We derive an analytical stiffness model for the modified DELTA mechanism, which we utilize to design a stiff platform for translational motion. The model shows that the compliance matrix is a function of kinematic parameters as well as of the elastic parameters of each mechanical element. The configuration dependency of the compliance matrix is therefore an important point to be noted.

Masaru Uchiyama, Yuichi Tsumaki, Woo-Keun Yoon
Hybrid Nanorobotic Approaches to NEMS

Robotic manipulation at the nanometer scale is a promising technology for structuring, characterizing and assembling nano building blocks into nanoelectromechanical systems (NEMS). Combined with recently developed nanofabrication processes, a hybrid approach to building NEMS from individual carbon nanotubes (CNTs) and SiGe/Si nanocoils is described. Nanosensors and nanoactuators are investigated from experimental, theoretical, and design perspectives.

B. J. Nelson, L. X. Dong, A. Subramanian, D. J. Bell
Jacobian, Manipulability, Condition Number and Accuracy of Parallel Robots

Parallel robots are nowadays leaving academic laboratories and are finding their way into an increasingly large number of application fields such as telescopes, fine positioning devices, fast packaging, machine tools, and medical applications. A key issue for such use is optimal design, as the performance of parallel robots is very sensitive to their dimensioning. Optimal design methodologies have to rely on kinetostatic performance indices, and accuracy is clearly a key issue for many applications. It has also been a key issue for serial robots; consequently this problem has been extensively studied and various accuracy indices have been defined. These results have in general been directly transposed to parallel robots. We will review here how well these indices are appropriate for parallel robots.

J. -P. Merlet

SLAM

Frontmatter
Session Overview Simultaneous Localisation and Mapping

The Simultaneous Localisation and Mapping (SLAM) problem remains a prominent area of research in the mobile robotics community. The ISRR symposia have borne witness to the marked progress of the field since its inception almost 20 years ago. This year, once again, the question “is the SLAM problem now solved?” was posed. Well, the answer to that question probably lies in the definition of “solved”. We still do not have the persistent SLAM-enabled machines that we strive for, so in that sense perhaps it isn’t solved, but we do have a firm understanding of the problem now. We appreciate the limits of performance, we can handle uncertainties in a principled way, and we recognize the penalties for failing to do so. We also have several solutions to the scaling problem that so dogged the field for several years. To these probabilistic frameworks we are able to attach any of several representational schemes for both maps and vehicle trajectories. We run these “solutions” on vehicles equipped with various sensors: cameras, radars, sonars, and of course the ubiquitous laser range finder.

Paul Newman, Henrik I. Christensen
Subjective Localization with Action Respecting Embedding

Robot localization is the problem of how to estimate a robot’s pose within an objective frame of reference. Traditional localization requires knowledge of two key conditional probabilities: the motion and sensor models. These models depend critically on the specific robot as well as its environment. Building these models can be time-consuming, manually intensive, and can require expert intuitions. However, the models are necessary for the robot to relate its own subjective view of sensors and motors to the robot’s objective pose. In this paper we seek to remove the need for human-provided models. We introduce a technique for subjective localization, relaxing the requirement that the robot localize within a global frame of reference. Using an algorithm for action-respecting non-linear dimensionality reduction, we learn a subjective representation of pose from a stream of actions and sensations. We then extract from the data natural motion and sensor models defined for this new representation. Monte Carlo localization is used to track this representation of the robot’s pose while executing new actions and receiving new sensor readings. We evaluate the technique in a synthetic image manipulation domain and with a mobile robot using vision and laser sensors.

Michael Bowling, Dana Wilkinson, Ali Ghodsi, Adam Milstein
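A hedged sketch of the Monte Carlo localization step referred to above, written generically so that learned (subjective) motion and sensor models can be plugged in; the function signatures are assumptions, not the authors' code:

import numpy as np

def mcl_step(particles, weights, action, observation, motion_model, sensor_model,
             rng=np.random.default_rng(0)):
    """One predict / weight / resample cycle of a particle filter."""
    # Predict: sample each particle forward through the (learned) motion model.
    particles = np.array([motion_model(p, action, rng) for p in particles])
    # Weight: evaluate the (learned) sensor model likelihood of the observation.
    weights = weights * np.array([sensor_model(observation, p) for p in particles])
    weights = weights / weights.sum()
    # Resample proportionally to weight to avoid degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))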
D-SLAM: Decoupled Localization and Mapping for Autonomous Robots

The main contribution of this paper is the reformulation of the simultaneous localization and mapping (SLAM) problem for mobile robots such that mapping and localization can be treated as two concurrent yet separated processes: D-SLAM (decoupled SLAM). It is shown that SLAM can be decoupled into solving a non-linear static estimation problem for mapping and a low-dimensional dynamic estimation problem for localization. The mapping problem can be solved using an Extended Information Filter where the information matrix is shown to be exactly sparse. A significant saving in computational effort can be achieved for large scale problems by exploiting the special properties of sparse matrices. An important feature of D-SLAM is that the correlations among landmarks are still kept, and it is demonstrated that the uncertainty of the map landmarks monotonically decreases. The algorithm is illustrated through computer simulations and experiments.

Zhan Wang, Shoudong Huang, Gamini Dissanayake
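A minimal sketch of the information-form (canonical) measurement update that underlies such sparse mapping filters; the variable names and linearization interface are assumptions for illustration, not the paper's implementation:

import numpy as np

def information_update(Omega, xi, H, R, z, z_pred, x_lin):
    """Add one linearized measurement to the canonical (information) form of the map.

    Omega, xi : information matrix and vector of the map estimate
    H         : measurement Jacobian at the linearization point x_lin
    R         : measurement noise covariance
    z, z_pred : actual measurement and its prediction h(x_lin)

    Because H has non-zero columns only for the few landmarks actually observed,
    the fill-in added to Omega stays local, which keeps the matrix sparse.
    """
    Ri = np.linalg.inv(R)
    Omega_new = Omega + H.T @ Ri @ H
    xi_new = xi + H.T @ Ri @ (z - z_pred + H @ x_lin)
    return Omega_new, xi_new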
A Provably Consistent Method for Imposing Sparsity in Feature-Based SLAM Information Filters

An open problem in Simultaneous Localization and Mapping (SLAM) is the development of algorithms which scale with the size of the environment. A few promising methods exploit the key insight that representing the posterior in the canonical form parameterized by a sparse information matrix provides significant advantages regarding computational efficiency and storage requirements. Because the information matrix is naturally dense in the case of feature-based SLAM, additional steps are necessary to achieve sparsity. The delicate issue then becomes one of performing this sparsification in a manner which is consistent with the original distribution.

Matthew Walter, Ryan Eustice, John Leonard

Field Robots

Frontmatter
Session Overview Field Robotics

Field robots do not operate in factories or other controlled settings, but rather operate outdoors, underwater, underground, or even on other planets. They are characterized by a focus on real applications, and on operation in complex terrain. Field robots are often large vehicles, and often have forceful interactions with their workspace. Given their complex setting and complex (and often dangerous) tasks, most field robots are not fully autonomous: a great deal of effort goes into the user interface, providing mixed modes of human and robot interaction.

Alonzo Kelly, Chuck Thorpe
Field D*: An Interpolation-Based Path Planner and Replanner

We present an interpolation-based planning and replanning algorithm for generating direct, low-cost paths through nonuniform cost grids. Most grid-based path planners use discrete state transitions that artificially constrain an agent’s motion to a small set of possible headings (e.g. $0, \frac{\pi}{4}, \frac{\pi}{2}$, etc.). As a result, even ‘optimal’ grid-based planners produce unnatural, suboptimal paths. Our approach uses linear interpolation during planning to calculate accurate path cost estimates for arbitrary positions within each grid cell and to produce paths with a range of continuous headings. Consequently, it is particularly well suited to planning low-cost trajectories for mobile robots. In this paper, we introduce the algorithm and present a number of example applications and results.

Dave Ferguson, Anthony Stentz
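To illustrate the interpolation idea, here is a hedged sketch of the cost of reaching a grid node through one cell, where the entry point slides along the opposite edge and its path cost is linearly interpolated between the two corner nodes (a simplified single-cost case solved numerically; the names and numbers are illustrative, not the paper's closed-form solution):

import numpy as np

def interpolated_cost(g1, g2, c, n=1001):
    """Cost of reaching a node through one cell of traversal cost c.

    g1, g2 are the path costs of the edge-adjacent and diagonal corner nodes; the
    entry point is allowed to slide a fraction y along the edge between them, with
    its cost linearly interpolated between g1 and g2.
    """
    y = np.linspace(0.0, 1.0, n)
    candidates = c * np.sqrt(1.0 + y**2) + (1.0 - y) * g1 + y * g2
    return candidates.min()

# With equal corner costs the best entry point is the corner itself (y = 0):
print(interpolated_cost(g1=5.0, g2=5.0, c=1.0))   # about 6.0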
Tradeoffs Between Directed and Autonomous Driving on the Mars Exploration Rovers

NASA’s Mars Exploration Rovers (MER) have collected a great diversity of geological science results, thanks in large part to their surface mobility capabilities. The six wheel rocker/bogie suspension provides driving capabilities in many distinct terrain types, the onboard IMU measures actual rover attitude changes (roll, pitch and yaw, but not position) quickly and accurately, and stereo camera pairs provide accurate position knowledge and/or terrain assessment. Solar panels generally provide enough power to drive the vehicle for at most four hours each day, but drive time is often restricted by other planned activities. Driving along slopes in nonhomogeneous terrain injects unpredictable amounts of slip into each drive. These restrictions led us to create driving strategies that maximize drive speed and distance, at the cost of increased complexity in the sequences of commands built by human Rover Planners each day.

Jeffrey J. Biesiadecki, Chris Leger, Mark W. Maimone
Surface Mining: Main Research Issues for Autonomous Operations

This paper presents the author’s view on the main challenges for autonomous operation in surface mining environments. A brief overview of mine operations is presented, showing the number of components that need to interact in a safe, robust, and efficient manner. Successful implementations of autonomous systems in field robotic applications are presented, with a discussion of the fundamental problems that need to be addressed for this technology to be accepted in mining operations.

Eduardo M. Nebot

Robotic Vision

Frontmatter
Session Overview Robotic Vision

At the DARPA Grand Challenge in October 2005, laser range finders, especially the ones manufactured by SICK, were the predominant range sensors. Does that mean that stereo sensors are dead? No. It means that laser scanners satisfied the requirements of the Grand Challenge outdoor vehicle navigation application better than stereo. Stereo sensors, on the other hand, are the sensor of choice for several other applications, such as people monitoring and human-computer interfaces, because they are passive, relatively inexpensive, have no moving parts, and provide registered range and color data.

Yoshiaki Shirai, Bob Bolles
Bias Reduction and Filter Convergence for Long Range Stereo

We are concerned here with improving long range stereo by filtering image sequences. Traditionally, measurement errors from stereo camera systems have been approximated as 3-D Gaussians, where the mean is derived by triangulation and the covariance by linearized error propagation. However, there are two problems that arise when filtering such 3-D measurements. First, stereo triangulation suffers from a range dependent statistical bias; when filtering this leads to over-estimating the true range. Second, filtering 3-D measurements derived via linearized error propagation leads to apparent filter divergence; the estimator is biased to under-estimate range. To address the first issue, we examine the statistical behavior of stereo triangulation and show how to remove the bias by series expansion. The solution to the second problem is to filter with image coordinates as measurements instead of triangulated 3-D coordinates. Compared to the traditional approach, we show that bias is reduced by more than an order of magnitude, and that the variance of the estimator approaches the Cramer-Rao lower bound.

Gabe Sibley, Larry Matthies, Gaurav Sukhatme
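As a hedged illustration of the range-dependent bias (the specific second-order correction below is an assumption used to show the idea, not the paper's estimator): for a rectified pair the range is Z = f·b/d, and zero-mean disparity noise inflates the expected range, which a short series expansion can approximately undo:

import numpy as np

def range_from_disparity(d, f, b):
    """Ideal stereo triangulation: range Z = f * b / d (pinhole model, rectified pair)."""
    return f * b / d

def bias_corrected_range(d, f, b, sigma_d):
    """Second-order correction for the range bias caused by disparity noise.

    E[f*b/(d + eps)] is approximately (f*b/d) * (1 + sigma_d**2 / d**2) for zero-mean
    Gaussian eps, so dividing by that factor removes the leading-order over-estimation.
    """
    return (f * b / d) / (1.0 + sigma_d**2 / d**2)

# Illustrative numbers: 1000 px focal length, 0.12 m baseline, 2 px disparity, 0.3 px noise.
print(range_from_disparity(2.0, 1000.0, 0.12))        # 60 m raw triangulation
print(bias_corrected_range(2.0, 1000.0, 0.12, 0.3))   # slightly shorter, bias-reduced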
Fusion of Stereo, Colour and Contrast

Stereo vision has numerous applications in robotics, graphics, inspection and other areas. A prime application, one which has driven work on stereo in our laboratory, is teleconferencing, in which the use of a stereo webcam already makes possible various transformations of the video stream. These include digital camera control, insertion of virtual objects, background substitution, and eye-gaze correction [9, 8].

A. Blake, A. Criminisi, G. Cross, V. Kolmogorov, C. Rother
Automatic Single-Image 3d Reconstructions of Indoor Manhattan World Scenes

3d reconstruction from a single image is inherently an ambiguous problem. Yet when we look at a picture, we can often infer 3d information about the scene. Humans perform single-image 3d reconstructions by using a variety of single-image depth cues, for example, by recognizing objects and surfaces, and reasoning about how these surfaces are connected to each other. In this paper, we focus on the problem of automatic 3d reconstruction of indoor scenes, specifically ones (sometimes called “Manhattan worlds”) that consist mainly of orthogonal planes. We use a Markov random field (MRF) model to identify the different planes and edges in the scene, as well as their orientations. Then, an iterative optimization algorithm is applied to infer the most probable position of all the planes, and thereby obtain a 3d reconstruction. Our approach is fully automatic—given an input image, no human intervention is necessary to obtain an approximate 3d reconstruction.

Erick Delage, Honglak Lee, Andrew Y. Ng

Robot Design and Control

Frontmatter
Session Overview Robot Design and Control

Control theory to date has achieved tremendous success in the analysis and synthesis of single control systems, as well as in the development of control laws for simple groups of systems which are connected together by point-to-point wires (assumed reliable) so that information is received and processed synchronously at each subsystem. Many of these advances have been fueled by challenges in robotics: force feedback in haptic devices led to new formulations of stable control laws, and coordination algorithms for robot swarms have likewise led to a theory of control for networked systems.

Claire J. Tomlin
One Is Enough!

We postulate that multi-wheel statically-stable mobile robots for operation in human environments are an evolutionary dead end. Robots of this class tall enough to interact meaningfully with people must have low centers of gravity, overly wide bases of support, and very low accelerations to avoid tipping over. Accordingly, we are developing an inverse of this type of mobile robot that is the height, width, and weight of a person, having a high center of gravity, that balances dynamically on a single spherical wheel. Unlike balancing 2-wheel platforms which must turn before driving in some direction, the single-wheel robot can move directly in any direction. We present the overall design, actuator mechanism based on an inverse mouse-ball drive, control system, and initial results including dynamic balancing, station keeping, and point-to-point motion.

Tom Lauwers, George Kantor, Ralph Hollis
A Steerable, Untethered, 250 × 60 µm MEMS Mobile Micro-Robot

We present a steerable, electrostatic, untethered, MEMS micro-robot, with dimensions of 60 µm by 250 µm by 10 µm. This micro-robot is 1 to 2 orders of magnitude smaller in size than previous micro-robotic systems. The device consists of a curved, cantilevered steering arm, mounted on an untethered scratch drive actuator. These two components are fabricated monolithically from the same sheet of conductive polysilicon, and receive a common power and control signal through a capacitive coupling with an underlying electrical grid. All locations on the grid receive the same power and control signal, so that the devices can be operated without knowledge of their position on the substrate and without constraining rails or tethers. Control and power delivery waveforms are broadcast to the device through the capacitive power coupling, and are decoded by the electromechanical response of the device body. Individual control of the component actuators provides two distinct motion gaits (forward motion and turning), which together allow full coverage of a planar workspace (the robot is globally controllable). These MEMS micro-robots demonstrate turning error of less than 3.7°/mm during forward motion, turn with radii as small as 176 µm, and achieve speeds of over 200 µm/sec, with an average step size of 12 nm. They have been shown to operate open-loop for distances exceeding 35 cm without failure, and can be controlled through teleoperation to navigate complex paths.

Bruce R. Donald, Christopher G. Levey, Craig D. McGray, Igor Paprotny, Daniela Rus
Some Issues in Humanoid Robot Design

Even though the market is still small at this moment, the fields in which robots are applied have been gradually spreading in recent years from the manufacturing industry to others. One can now readily expect that applications of robots will expand into the primary and tertiary sectors of industry as one of the important components supporting our society in the 21st century. There are also strong expectations in Japan, the fastest-aging society in the world, that robots for personal use will coexist with humans and provide support such as assistance with housework and care of the aged and the physically handicapped.

Atsuo Takanishi, Yu Ogura, Kazuko Itoh
That Which Does Not Stabilize, Will Only Make Us Stronger

The Berkeley Lower Extremity Exoskeleton (BLEEX) is a load-carrying and energetically autonomous human exoskeleton that, in this first generation prototype, carries up to a 34 kg (75 lb) payload for the pilot and allows the pilot to walk at up to 1.3 m/s (2.9 mph). This article focuses on the human-in-the-loop control scheme and the novel ring-based networked control architecture (ExoNET) that together enable BLEEX to support payload while safely moving in concert with the human pilot. The BLEEX sensitivity amplification control algorithm proposed here increases the closed loop system sensitivity to its wearer’s forces and torques without any measurement from the wearer (such as force, position, or electromyogram signal). The tradeoffs between not having sensors to measure human variables, the need for dynamic model accuracy, and robustness to parameter uncertainty are described. ExoNET provides the physical network on which the BLEEX control algorithm runs. The ExoNET control network guarantees strict determinism, optimized data transfer for small data sizes, and flexibility in configuration. Its features and application on BLEEX are described.

H. Kazerooni, R. Steger

Underwater Robotics

Frontmatter
Session Overview Underwater Robotics

It is an auspicious time for this first-ever ISRR special session on the topic of underwater robotics. Underwater robots are now performing high-resolution acoustic, optical, and physical oceanographic surveys in the deep ocean that previously were considered impractical or infeasible. For example: in 2001 the Argo II underwater robotic vehicle [1] was employed to discover the first off-axis hydrothermal vent field, located 15 km from the Mid-Atlantic Ridge at 30° North Latitude [5]. The dynamics of this important hydrothermal vent site have since been mapped, sampled, and probed extensively with human-occupied submersibles, tethered remotely controlled underwater robots, and untethered autonomous underwater robots [6, 4, 7].

Louis L. Whitcomb, Hugh Durrant-Whyte
Improved Estimation of Target Velocity Using Multiple Model Estimation and a Dynamic Bayesian Network for a Robotic Tracker of Ocean Animals

A vision-based automatic tracking system for ocean animals in the midwater has been demonstrated in Monterey Bay, CA. Currently, the input to this system is a measurement of relative position of a target with respect to the tracking vehicle, from which relative velocities are estimated by differentiation. In this paper, the estimation of target velocities is extended to use knowledge of the modal nature of the motions of the tracked target and to incorporate the discrete output of an online classifier that categorizes the visually observable body motions of the animal. First, by using a multiple model estimator, a more expressive hybrid dynamical model is imposed on the target. Then, the estimator is augmented to input the discrete classification from the secondary vision algorithm by recasting the process and sensor models as a dynamic Bayesian network (DBN). By leveraging the information in the body motion classifications, the estimator is able to detect mode changes before the resulting changes in velocity are apparent, and a significant improvement in velocity estimation is realized. This, in turn, generates the potential for improved closed-loop tracking performance.

Aaron Plotnik, Stephen Rock
Techniques for Deep Sea Near Bottom Survey Using an Autonomous Underwater Vehicle

This paper reports the development and at-sea deployment of a set of algorithms that have enabled our autonomous underwater vehicle, ABE, to conduct near-bottom surveys in the deep sea. Algorithms for long baseline acoustic positioning, terrain-following, and automated nested surveys are reported.

Dana R. Yoerger, Michael Jakuba, Albert M. Bradley, Brian Bingham
Advances in High Resolution Imaging from Underwater Vehicles

Large area mapping at high resolution underwater continues to be constrained by the mismatch between available navigation as compared to sensor accuracy. In this paper we present advances that exploit consistency and redundancy within local sensor measurements to build high resolution optical and acoustic maps that are a consistent representation of the environment.

Hanumant Singh, Christopher Roman, Oscar Pizarro, Ryan Eustice

Learning and Adaptive Behavior

Frontmatter
Session Overview Learning and Adaptive Behavior

In the evolution of robotics, robots have increasingly been operating in a variety of environments that are unstructured and dynamically changing over time. It has been clear since the first advances of the field how critical the capability of perceiving the environment and behaving accordingly is for robots.

Paolo Dario
Using AdaBoost for Place Labeling and Topological Map Building

Indoor environments can typically be divided into places with different functionalities like corridors, kitchens, offices, or seminar rooms. We believe that the ability to learn such semantic categories from sensor data or in maps enables a mobile robot to more efficiently accomplish a variety of tasks such as human-robot interaction, path-planning, exploration, or localization. In this work, we first propose an approach based on supervised learning to classify the pose of a mobile robot into semantic classes. Our method uses AdaBoost to boost simple features extracted from vision and laser range data into a strong classifier. We furthermore present two main applications of this approach. Firstly, we show how our approach can be utilized by a moving robot for robust online classification of the poses traversed along its path using a hidden Markov model. Secondly, we introduce a new approach to learn topological maps from geometric maps by applying our semantic classification procedure in combination with probabilistic labeling. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various environments.

Óscar Martínez Mozos, Cyrill Stachniss, Axel Rottmann, Wolfram Burgard
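A minimal sketch of the discrete AdaBoost loop over threshold stumps, the kind of boosting of simple features the abstract refers to (binary labels only; the feature extraction from laser and vision data is omitted, and all names are illustrative):

import numpy as np

def train_adaboost(X, y, n_rounds=50):
    """Discrete AdaBoost with decision stumps. X: (n, d) features, y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []                                   # list of (alpha, feature, threshold, sign)
    for _ in range(n_rounds):
        best = None
        for j in range(d):                          # exhaustive stump search (simple, not fast)
            for thr in np.unique(X[:, j]):
                for s in (1.0, -1.0):
                    pred = s * np.sign(X[:, j] - thr + 1e-12)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, s, pred)
        err, j, thr, s, pred = best
        err = np.clip(err, 1e-10, 1.0 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)     # weight of this weak classifier
        w = w * np.exp(-alpha * y * pred)           # emphasize misclassified examples
        w = w / w.sum()
        ensemble.append((alpha, j, thr, s))
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote of all stumps."""
    score = sum(a * s * np.sign(X[:, j] - thr + 1e-12) for a, j, thr, s in ensemble)
    return np.sign(score)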
Emergence, Exploration and Learning of Embodied Behavior

The real world is full of unexpected changes, contingencies and opportunities. Thus it is virtually impossible to perfectly specify in advance all the conditions, states and outcomes for all the possible actions. The so-called “frame problem” was originally discovered with symbolic reasoning agents [6], but essentially it affects any “intelligent” system that relies on explicit descriptions about the states and actions. For example, in control theory terms, the target system can abruptly deviate from the assumed model of the system dynamics, making the pre-defined control law invalid.

Yasuo Kuniyoshi, Shinsuke Suzuki, Shinji Sangawa
Hierarchical Conditional Random Fields for GPS-Based Activity Recognition

Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. We show how to extract a person’s activities and significant places from traces of GPS data. Our system uses hierarchically structured conditional random fields to generate a consistent model of a person’s activities and places. In contrast to existing techniques, our approach takes high-level context into account in order to detect the significant locations of a person. Our experiments show significant improvements over existing techniques. Furthermore, they indicate that our system is able to robustly estimate a person’s activities using a model that is trained from data collected by other persons.

Lin Liao, Dieter Fox, Henry Kautz
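The hierarchical CRF in the paper is considerably richer than this, but the way sequence context enters such models can be sketched with plain Viterbi decoding over a chain of activity labels (a swapped-in, simpler technique shown only for illustration; all names are assumptions):

import numpy as np

def viterbi(log_emission, log_transition):
    """MAP label sequence for a chain-structured model.

    log_emission  : (T, K) per-step log scores for each of K labels
    log_transition: (K, K) log score for moving from label i to label j
    """
    T, K = log_emission.shape
    score = log_emission[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_transition   # (K, K): previous label x next label
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emission[t]
    labels = [int(score.argmax())]
    for t in range(T - 1, 0, -1):                # trace back the best path
        labels.append(int(back[t, labels[-1]]))
    return labels[::-1]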

Networked Robotics

Frontmatter
Session Overview Networked Robotics

Robot systems with network capability are a promising frontier of robotics, not only for realizing new services by combining multiple RT (robot technology) and IT (information technology) components but also for understanding the human world and natural environments. This program session covers networked sensing/actuation, networked intelligence, and networked control of robot systems.

Tomomasa Sato, Ray Jarvis
Networked Robotic Cameras for Collaborative Observation of Natural Environments

Scientific study of animals in situ requires vigilant observation of detailed animal behavior over weeks or months. When animals live in remote and/or inhospitable locations, observation can be an arduous, expensive, dangerous, and lonely experience for scientists. Emerging advances in robot cameras, long-range wireless networking, and distributed sensors make feasible a new class of portable robotic “observatories” that can allow groups of scientists, via the internet, to remotely observe, record, and index detailed animal activity. As a shorthand for such an instrument, we propose the acronym CONE: Collaborative Observatory for Natural Environments.

Dezhen Song, Ken Goldberg

Interfaces and Interaction

Frontmatter
Session Overview Interfaces and Interaction

The main focus of this section is “Interfaces and Interaction”. A computer display is an interface that shows information to humans visually. For providing information from humans to computers (or robots), there are many interface devices, such as force sensors, acceleration sensors, velocity sensors, position sensors, tactile sensors, vision, and so forth. EMG and EEG signals are also utilized as interface signals from handicapped people to robots. By utilizing these interfaces, interactive motion between human and robot can be achieved. Varied interaction is extremely important for entertainment robots, amusement robots, and social robots. Since the capability of these robots strongly depends upon their reactions and expressions, both sensors and actuators are key components for advancing them. Three papers are presented in this section. The first is concerned with haptic-based communication between humans and robots. The second deals with a vestibular sensor that can detect human head motion. The final paper deals with diagnosing autism through interaction between humans and robots. While these three papers are largely unrelated to each other in purpose, the common keyword is interaction between human and robot; this is especially central to the first and the third papers.

Makoto Kaneko, Hiroshi Ishiguro
Haptic Communication Between Humans and Robots

This paper introduces the haptic communication robots we developed and proposes a method for detecting human positions and postures based on haptic interaction between humanoid robots and humans. We have developed two types of humanoid robots that have tactile sensors embedded in a soft skin that covers the robot’s entire body as tools for studying haptic communication. Tactile sensation could be used to detect a communication partner’s position and posture even if the vision sensor did not observe the person. In the proposed method, the robot obtains a map that statistically describes relationships between its tactile information and human positions/postures from the records of haptic interaction taken by tactile sensors and a motion capturing system during communication. The robot can then estimate its communication partner’s position/posture based on the tactile sensor outputs and the map. To verify the method’s performance, we implemented it in the haptic communication robot. Results of experiments show that the robot can estimate a communication partner’s position/posture statistically.

Takahiro Miyashita, Taichi Tajika, Hiroshi Ishiguro, Kiyoshi Kogure, Norihiro Hagita
A Vestibular Interface for Natural Control of Steering in the Locomotion of Robotic Artifacts: Preliminary Experiments

This work addresses the problem of developing novel interfaces for robotic systems that allow the most natural transmission of control commands and sensory information in both directions. A novel approach to the development of natural interfaces is based on the detection of the human’s motion intention, instead of the movement itself as in traditional interfaces. Based on recent findings in neuroscience, the intention can be detected from anticipatory movements that naturally accompany more complex motor behaviors.

Cecilia Laschi, Eliseo Stefano Maini, Francesco Patane’, Luca Ascari, Gaetano Ciaravella, Ulisse Bertocchi, Cesare Stefanini, Paolo Dario, Alain Berthoz
How Social Robots Will Help Us to Diagnose, Treat, and Understand Autism

Autism is a pervasive developmental disorder that is characterized by social and communicative impairments. Social robots recognize and respond to human social cues with appropriate behaviors. Social robots, and the technology used in their construction, can be unique tools in the study of autism. Based on three years of integration and immersion with a clinical research group, this paper discusses how social robots will make an impact on the ways in which we diagnose, treat, and understand autism.

Brian Scassellati

Invited Overview Talk

Frontmatter
Expo 2005 Robotics Project

This paper gives an overview of a robotics project at Expo 2005. The project consists of a long-term experimental evaluation of practical robots at the Expo site, simulating the society of the future, and short-term demonstrations of prototype robots. The long-term evaluation can advance robots from the demonstration level to the practical-use level, and the short-term demonstrations can advance prototypes from the single-shot experiment level to the demonstration level.

Hirohisa Hirukawa, Hirochika Inoue

Robotics Science (Panel Discussion)

Frontmatter
Position Statement: Robotics Science

Robotics as a subject of inquiry has had from its beginning an identity problem. Questions such as:

Is robotics a science or engineering? Is it an application of a certain discipline, or does it have a core of problems, tools, and methodologies which are unique to robotics?

Ruzena Bajcsy
Backmatter
Metadata
Title
Robotics Research
Edited by
Dr. Sebastian Thrun
Dr. Rodney Brooks
Dr. Hugh Durrant-Whyte
Copyright year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-48113-3
Print ISBN
978-3-540-48110-2
DOI
https://doi.org/10.1007/978-3-540-48113-3
