
2022 | Original Paper | Book Chapter

Research on Visual-Inertial SLAM Technology with GNSS Assistance

Authors: Lin Zhao, Xiaohan Wang, Xiaoze Zheng, Chun Jia

Published in: China Satellite Navigation Conference (CSNC 2022) Proceedings

Publisher: Springer Nature Singapore


Abstract

In the robust-perception age, Visual-Inertial Odometry (VIO), which tightly couples a camera with an Inertial Measurement Unit (IMU), can produce high-precision local pose estimates in unknown environments, and its low cost and small size have attracted widespread attention. However, because of the limitations of its measurement principle, errors still accumulate during long-term operation, and large-scale outdoor environments remain a major challenge for VIO. The Global Navigation Satellite System (GNSS) can provide accurate global estimates for VIO in open outdoor environments and correct the drift caused by long-term operation, while VIO continues to operate where GNSS is denied, making seamless indoor-outdoor navigation possible. This paper therefore proposes a GNSS-assisted visual-inertial SLAM algorithm. With an optimization-based, tightly coupled VIO as the core, the pose information obtained from GNSS is fused with the VIO solution to improve global positioning while preserving local pose accuracy. Simulation experiments on the KITTI dataset show that, with GNSS assistance, the VIO system achieves a mean error of 1.687 m, a standard deviation of 1.176 m, and a root-mean-square error of 2.056 m, an improvement of nearly 80% over the unaided system. The system also remains functional when GNSS is denied, and its overall robustness is enhanced.
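The abstract describes fusing GNSS-derived global pose information with a tightly coupled VIO solution but gives no implementation details. The sketch below is a rough illustration only, not the authors' method: it shows one simple way sparse GNSS position fixes can anchor a locally consistent but drifting VIO trajectory, by estimating a 4-DoF (yaw plus translation) alignment between the VIO local frame and the GNSS ENU frame via least squares and applying it to the local trajectory. The function name, the simulated data, and the loosely coupled alignment step are assumptions for illustration; the paper's approach combines the measurements inside the optimization itself.

import numpy as np

def align_vio_to_gnss(p_vio, p_gnss):
    # Hypothetical illustration, not the paper's implementation.
    # Estimate a yaw-only rotation R and translation t minimizing
    # sum ||R p_vio + t - p_gnss||^2 for matched position pairs.
    # p_vio, p_gnss: (N, 3) arrays in the VIO local frame and GNSS ENU frame.
    mu_v, mu_g = p_vio.mean(axis=0), p_gnss.mean(axis=0)
    dv, dg = p_vio - mu_v, p_gnss - mu_g
    # Closed-form yaw from the horizontal (x, y) components.
    num = np.sum(dv[:, 0] * dg[:, 1] - dv[:, 1] * dg[:, 0])
    den = np.sum(dv[:, 0] * dg[:, 0] + dv[:, 1] * dg[:, 1])
    theta = np.arctan2(num, den)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    t = mu_g - R @ mu_v
    return R, t

if __name__ == "__main__":
    # Simulated data: a random-walk "VIO" path, observed by noisy "GNSS"
    # fixes in a globally rotated and translated frame.
    rng = np.random.default_rng(0)
    yaw_true, t_true = 0.3, np.array([10.0, -5.0, 1.0])
    c, s = np.cos(yaw_true), np.sin(yaw_true)
    R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    p_local = np.cumsum(rng.normal(size=(200, 3)), axis=0)
    p_gnss = (R_true @ p_local.T).T + t_true + 0.5 * rng.normal(size=(200, 3))
    R, t = align_vio_to_gnss(p_local, p_gnss)
    p_global = (R @ p_local.T).T + t
    rmse = np.sqrt(np.mean(np.sum((p_global - p_gnss) ** 2, axis=1)))
    print(f"RMSE after GNSS alignment: {rmse:.2f} m")

In a full system, residuals of this kind would be added as GNSS position factors alongside the visual and IMU terms in the sliding-window optimization rather than applied as a post-hoc alignment.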


References
1.
Davison, A.J., Reid, I.D., Molton, N.D., et al.: MonoSLAM: real-time single camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 29(6), 1052–1067 (2007)
3.
Cadena, C., Carlone, L., Carrillo, H., et al.: Past, present, and future of simultaneous localization and mapping: toward the robust-perception age. IEEE Trans. Rob. 32(6), 1309–1332 (2016)
4.
Leutenegger, S., Lynen, S., Bosse, M., et al.: Keyframe-based visual-inertial odometry using nonlinear optimization. Int. J. Robot. Res. 34(3), 314–334 (2015)
6.
Shi, J., Tomasi, C.: Good features to track. In: 1994 Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 593–600. IEEE, Seattle, WA, USA (1994)
7.
Harris, C.J., Stephens, M.: A combined corner and edge detector. In: Proceedings of the 4th Alvey Vision Conference, Manchester, pp. 147–151 (1988)
8.
Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of the 7th International Joint Conference on Artificial Intelligence, pp. 674–679. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1981)
9.
Forster, C., Carlone, L., Dellaert, F., et al.: On-manifold preintegration for real-time visual-inertial odometry. IEEE Trans. Rob. 33(1), 1–21 (2017)
10.
Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? The KITTI vision benchmark suite. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3354–3361. IEEE, Providence, RI, USA (2012)
Metadata
Title
Research on Visual-Inertial SLAM Technology with GNSS Assistance
Authors
Lin Zhao
Xiaohan Wang
Xiaoze Zheng
Chun Jia
Copyright Year
2022
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-19-2580-1_36
