2014 | Original Paper | Book Chapter
Multi-task Learning of Visual Odometry Estimators
Authors: Vitor Campanholo Guizilini, Fabio Tozeto Ramos
Published in: Experimental Robotics
Publisher: Springer Berlin Heidelberg
This paper presents a novel framework for learning visual odometry estimators from a single uncalibrated camera through multi-task non-parametric Bayesian inference. A new methodology, Coupled Gaussian Processes, is developed to jointly estimate vehicle velocity while inferring a full covariance matrix over all tasks. Matched image feature descriptors obtained from sequential frames act as inputs, and the vehicle's linear and angular velocities as outputs, allowing its position to be determined incrementally. This approach has three main benefits: firstly, it readily provides uncertainty measurements, allowing posterior data fusion with other sensors; secondly, it eliminates the need for camera calibration, as the system essentially learns the transformation between the optical-flow and vehicle-velocity spaces; thirdly, it provides motion estimates directly, free of the scale ambiguity inherent in standard structure-from-motion techniques with monocular cameras. Experiments conducted using imagery collected in urban and off-road environments under challenging conditions show the benefits of the approach for trajectories of up to 2 km. Finally, the framework is integrated into an Exactly Sparse Extended Information Filter for deployment in a SLAM scenario.
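To illustrate the learning-based odometry pipeline the abstract describes, the minimal sketch below uses standard single-output Gaussian Process regression (not the paper's Coupled Gaussian Processes, which additionally couple the outputs through a full task covariance): a GP maps optical-flow feature vectors to a velocity, and the predicted linear and angular velocities are dead-reckoned into a planar trajectory. All function names, the RBF kernel choice, and the toy feature dimensionality are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-3):
    """Standard GP regression: predictive mean and variance at each test point.

    In the paper's setting, X would hold matched feature descriptors from
    sequential frames and y a velocity component (linear or angular); the
    predictive variance is what enables downstream sensor fusion.
    """
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)
    return mean, var

def integrate_pose(lin_vel, ang_vel, dt=0.1):
    """Dead-reckon a planar (x, y) trajectory from velocity estimates."""
    x = y = theta = 0.0
    path = [(x, y)]
    for v, w in zip(lin_vel, ang_vel):
        theta += w * dt
        x += v * dt * np.cos(theta)
        y += v * dt * np.sin(theta)
        path.append((x, y))
    return np.array(path)
```

Because the GP is trained on pairs of (optical flow, measured velocity), the learned map absorbs the camera intrinsics, which is why no explicit calibration step is needed.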