
2023 | Original Paper | Book Chapter

A Comparison of Deep Learning-Based Monocular Visual Odometry Algorithms

Authors: Eunju Jeong, Jaun Lee, Pyojin Kim

Published in: The Proceedings of the 2021 Asia-Pacific International Symposium on Aerospace Technology (APISAT 2021), Volume 2

Publisher: Springer Nature Singapore


Abstract

Visual odometry (VO) has recently attracted significant attention, driven by growing interest in the development of autonomous mobile robots and vehicles. Research has traditionally focused on geometry-based VO algorithms, which produce robust results under restrictive setups such as static, well-textured scenes. However, they are not accurate in challenging conditions such as changing illumination and dynamic scenes. In recent years, VO algorithms based on deep learning have been developed to overcome these limitations, yet the literature still lacks a thorough comparative analysis of state-of-the-art deep learning-based monocular VO algorithms in challenging environments. This paper presents a comparison of four state-of-the-art deep learning-based monocular VO algorithms (DeepVO, SfMLearner, SC-SfMLearner, and DF-VO) in environments with glass walls, illumination changes, and dynamic objects. These monocular VO algorithms are based on supervised, unsupervised, and self-supervised learning integrated with multi-view geometry. Based on the results of the evaluation on a variety of datasets, we conclude that DF-VO is the most suitable algorithm for challenging real-world environments.
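Comparisons of this kind are typically scored on trajectory accuracy against ground truth. The sketch below is a minimal illustration of one common metric, the absolute trajectory error (ATE) after a similarity (Sim(3)) alignment, which compensates for the arbitrary scale of monocular VO; it is an assumed, generic evaluation sketch, not the authors' evaluation code, and the function names and synthetic trajectories are purely illustrative.

```python
# Minimal sketch (assumption, not the paper's code): ATE RMSE after Sim(3)
# alignment, a common way to compare monocular VO trajectories against
# ground truth (e.g. on KITTI- or TUM-style datasets).
import numpy as np

def align_umeyama(est, gt):
    """Similarity alignment (Umeyama) of estimated to ground-truth positions,
    needed because monocular VO recovers trajectories only up to scale."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    E, G = est - mu_e, gt - mu_g
    U, D, Vt = np.linalg.svd(G.T @ E / len(est))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                      # handle reflection case
    R = U @ S @ Vt                        # rotation
    s = np.trace(np.diag(D) @ S) / E.var(0).sum()  # scale
    t = mu_g - s * R @ mu_e               # translation
    return s, R, t

def ate_rmse(est, gt):
    """Root-mean-square ATE after alignment; est and gt are (N, 3) arrays."""
    s, R, t = align_umeyama(est, gt)
    aligned = (s * (R @ est.T)).T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))

if __name__ == "__main__":
    # Synthetic trajectories for illustration only (hypothetical data).
    gt = np.cumsum(np.random.randn(100, 3) * 0.1, axis=0)
    est = 0.5 * gt + np.random.randn(100, 3) * 0.02  # scaled, noisy estimate
    print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```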


References
1. Bian J, Li Z, Wang N, Zhan H, Shen C, Cheng M, Reid I (2019) Unsupervised scale-consistent depth and ego-motion learning from monocular video. In: 33rd conference on neural information processing systems, Vancouver, Canada
2. Delmerico J, Scaramuzza D (2018) A benchmark comparison of monocular visual-inertial odometry algorithms for flying robots. In: IEEE international conference on robotics and automation, Brisbane, Australia, pp 2502–2509
3. Geiger A, Lenz P, Stiller C, Urtasun R (2013) Vision meets robotics: the KITTI dataset. Int J Rob Res 32(11):1231–1237
4. He M, Zhu C, Huang Q, Ren B (2019) A review of monocular visual odometry. Vis Comput 36(2):1053–1065
5. Kasar A (2019) Benchmarking and comparing popular visual SLAM algorithms. Asian J Inf Technol, ISSN 2350-1146
6. Lee T, Kim C, Cho DD (2019) A monocular vision sensor-based efficient SLAM method for indoor service robots. IEEE Trans Industr Electron 66(1):318–328
7. Merzlyakov A, Macenski S (2021) A comparison of modern general-purpose visual SLAM approaches. In: 2021 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE
8. Sturm J et al (2012) A benchmark for the evaluation of RGB-D SLAM systems. In: 2012 IEEE/RSJ international conference on intelligent robots and systems. IEEE
10. Wang S et al (2017) DeepVO: towards end-to-end visual odometry with deep recurrent convolutional neural networks. In: 2017 IEEE international conference on robotics and automation (ICRA). IEEE
11. Zhan H et al (2020) Visual odometry revisited: what should be learnt? In: 2020 IEEE international conference on robotics and automation (ICRA). IEEE
12. Zhou T et al (2017) Unsupervised learning of depth and ego-motion from video. In: Proceedings of the IEEE conference on computer vision and pattern recognition
Metadata
Title
A Comparison of Deep Learning-Based Monocular Visual Odometry Algorithms
Authors
Eunju Jeong
Jaun Lee
Pyojin Kim
Copyright Year
2023
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-19-2635-8_68
