Published in: Autonomous Robots 3/2018

01.07.2017

Edge alignment-based visual–inertial fusion for tracking of aggressive motions

Authors: Yonggen Ling, Manohar Kuse, Shaojie Shen


Abstract

We propose a novel edge-based visual–inertial fusion approach to the problem of tracking aggressive motions with real-time state estimates. At the front-end, our system performs edge alignment, which estimates relative poses in the distance transform domain with a larger convergence basin and stronger resistance to changing lighting conditions or camera exposures than popular direct dense tracking. At the back-end, a sliding-window optimization-based framework fuses visual and inertial measurements. We utilize efficient inertial measurement unit (IMU) preintegration and two-way marginalization to generate accurate and smooth estimates with limited computational resources. To increase the robustness of the system, we perform an edge-alignment self-check and an IMU-aided external check. Extensive statistical analysis and comparison verify the performance of our approach and its usability on resource-constrained platforms. Compared with state-of-the-art point feature-based visual–inertial fusion methods, our approach achieves better robustness under extreme motions or low frame rates, at the expense of slightly lower accuracy in general scenarios. We release our implementation as open-source ROS packages.
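The core front-end idea in the abstract can be sketched briefly: a distance transform turns a binary edge map into a field whose value at each pixel is the distance to the nearest edge, and edge alignment minimizes the sum of this field at reprojected edge pixels over the relative pose. The sketch below is a minimal illustration under assumed names and a simplified 2D setting, not the authors' implementation (which optimizes over full SE(3) poses with a sub-gradient method).

```python
# Minimal sketch of a distance-transform-based edge-alignment cost.
# Function names (edge_distance_field, alignment_cost) and the 2D-only
# setup are illustrative assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt


def edge_distance_field(edge_map):
    """Map a binary edge image to its distance transform: each pixel's
    value is the Euclidean distance to the nearest edge pixel."""
    # distance_transform_edt measures distance to the nearest zero
    # element, so invert the edge map first.
    return distance_transform_edt(~edge_map.astype(bool))


def alignment_cost(dist_field, projected_edges):
    """Sum the distance field at reprojected edge pixel locations
    (columns [x, y]); minimizing this over the relative pose pulls the
    projected edges onto the reference edges. The smooth, wide-valley
    shape of the field is what gives the larger convergence basin."""
    cols, rows = projected_edges[:, 0], projected_edges[:, 1]
    h, w = dist_field.shape
    valid = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    return dist_field[rows[valid], cols[valid]].sum()
```

Because the cost depends only on edge geometry, not on pixel intensities, it is unaffected by photometric changes such as exposure or lighting shifts, which is the robustness property claimed over direct dense tracking.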


Metadata
Title
Edge alignment-based visual–inertial fusion for tracking of aggressive motions
Authors
Yonggen Ling
Manohar Kuse
Shaojie Shen
Publication date
01.07.2017
Publisher
Springer US
Published in
Autonomous Robots / Issue 3/2018
Print ISSN: 0929-5593
Electronic ISSN: 1573-7527
DOI
https://doi.org/10.1007/s10514-017-9642-0
