2013 | OriginalPaper | Chapter
Vision-Based Perception of Articulated Objects
Author : Jürgen Sturm
Published in: Approaches to Probabilistic Model Learning for Mobile Manipulation Robots
Publisher: Springer Berlin Heidelberg
The probabilistic framework developed in the previous chapter enables a manipulation robot to learn accurate kinematic models of articulated objects. As input, the framework requires a sequence of pose observations of the articulated object. In the previous chapter, we implemented this perception using visual markers or by directly recording the end-effector trajectory while the robot was manipulating the articulated object. For daily use in domestic environments, however, neither option is satisfactory: clearly, it is not desirable to augment all furniture with visual markers, nor to guide a robot manually to the handles of all relevant objects.