
2018 | OriginalPaper | Chapter

Vision-Depth Landmarks and Inertial Fusion for Navigation in Degraded Visual Environments

Authors: Shehryar Khattak, Christos Papachristos, Kostas Alexis

Published in: Advances in Visual Computing

Publisher: Springer International Publishing


Abstract

This paper proposes a method for tight fusion of visual, depth and inertial data in order to extend robotic capabilities for navigation in GPS-denied, poorly illuminated, and textureless environments. Visual and depth information are fused at the feature detection and descriptor extraction levels to augment one sensing modality with the other. These multimodal features are then further integrated with inertial sensor cues using an extended Kalman filter to estimate the robot pose, sensor bias terms, and landmark positions simultaneously as part of the filter state. As demonstrated through a set of hand-held and Micro Aerial Vehicle experiments, the proposed algorithm is shown to perform reliably in challenging visually-degraded environments using RGB-D information from a lightweight and low-cost sensor and data from an IMU.
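
To make the first stage of the described pipeline concrete, the sketch below shows one way to build vision-depth landmarks from a registered RGB-D pair: visual keypoints are detected on the intensity image, features without a valid depth return are rejected, the survivors are back-projected to 3D points that could serve as filter landmarks, and a coarse local-depth signature is appended to each binary descriptor. This is a minimal illustration only; the camera intrinsics, the choice of ORB, the patch size, and the depth-signature scheme are assumptions made for the example, not the descriptor fusion used in the paper.

import cv2
import numpy as np

# Assumed pinhole intrinsics (placeholder values, not taken from the paper).
FX, FY, CX, CY = 386.0, 386.0, 320.0, 240.0

def vision_depth_landmarks(rgb, depth_m, patch=9, n_features=500):
    """Detect visual features, keep those with valid depth, back-project
    them to 3D landmarks, and append a coarse depth signature to each
    binary descriptor.

    rgb     : HxWx3 uint8 BGR image
    depth_m : HxW float32 depth in meters (0 where the sensor gave no return)
    """
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return [], np.empty((0, 0), np.uint8), np.empty((0, 3))

    half = patch // 2
    h, w = depth_m.shape
    kept, fused, landmarks = [], [], []
    for kp, desc in zip(keypoints, descriptors):
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        # Skip features near the image border or without a valid depth return.
        if u < half or v < half or u >= w - half or v >= h - half:
            continue
        z = float(depth_m[v, u])
        if z <= 0.0:
            continue
        # Back-project the pixel to a 3D landmark in the camera frame.
        landmarks.append([(u - CX) * z / FX, (v - CY) * z / FY, z])
        # Illustrative depth signature: signs of local depth differences in a
        # small patch around the feature, packed to bits and appended to the
        # ORB descriptor so that matching also reflects local shape.
        p = depth_m[v - half:v + half + 1, u - half:u + half + 1].ravel()
        bits = (p > z).astype(np.uint8)
        signature = np.packbits(bits[: (bits.size // 8) * 8])
        fused.append(np.concatenate([desc, signature]))
        kept.append(kp)

    return kept, np.asarray(fused, dtype=np.uint8), np.asarray(landmarks)

In a complete system along the lines of the abstract, the fused descriptors would be matched across frames and the resulting 3D landmarks, together with IMU measurements, would feed the extended Kalman filter that jointly estimates pose, sensor biases, and landmark positions.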


Metadata
Title
Vision-Depth Landmarks and Inertial Fusion for Navigation in Degraded Visual Environments
Authors
Shehryar Khattak
Christos Papachristos
Kostas Alexis
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-030-03801-4_46
