Published in: Autonomous Robots 7/2019

19.02.2019

A lightweight and scalable visual-inertial motion capture system using fiducial markers

Authors: Guoping He, Shangkun Zhong, Jifeng Guo

Abstract

Accurate localization of a moving object is important in many robotic tasks, and an elaborate motion capture system is often used to achieve it. While such systems guarantee high precision, they are costly and limited to a small, fixed workspace. This paper describes a lightweight and scalable visual-inertial approach that leverages paper-printable artificial landmarks, called fiducials, of known size but unknown pose to obtain motion state estimates, including pose and velocity. Visual-inertial joint optimization with an incremental smoother over a factor graph, together with the IMU preintegration technique, makes our method efficient and accurate. No special hardware is required beyond a monocular camera and an IMU, so the system is lightweight and easy to deploy. The use of paper-printable landmarks, combined with the efficient incremental inference algorithm, gives the method nearly constant-time complexity and makes it scalable to large-scale environments. We perform an extensive evaluation of our method on public datasets and in real-world experiments. The results show that our method achieves accurate state estimates, scales to large environments, and is robust to fast motion and changing lighting conditions. In addition, the method can recover from intermittent failures.
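The abstract describes the core estimation scheme as IMU preintegration factors and fiducial observations fused in an incremental smoother over a factor graph. The following is a minimal sketch of that idea, not the authors' released code: it assumes the GTSAM Python bindings (iSAM2, ImuFactor), placeholder noise values, and the simplification that each detected marker is modeled as a relative camera-to-marker pose factor (e.g., from PnP on marker corners) with camera-IMU extrinsics omitted. The class name and parameters are illustrative assumptions.

```python
# Hedged sketch of factor-graph visual-inertial fusion with fiducials (assumes GTSAM).
import numpy as np
import gtsam
from gtsam.symbol_shorthand import B, L, V, X  # bias, marker, velocity, pose keys


class FiducialVioSketch:
    def __init__(self):
        # IMU preintegration parameters (gravity along -z); noise values are placeholders.
        params = gtsam.PreintegrationParams.MakeSharedU(9.81)
        params.setAccelerometerCovariance(np.eye(3) * 1e-3)
        params.setGyroscopeCovariance(np.eye(3) * 1e-4)
        params.setIntegrationCovariance(np.eye(3) * 1e-8)
        self.preint = gtsam.PreintegratedImuMeasurements(params, gtsam.imuBias.ConstantBias())
        self.isam = gtsam.ISAM2()                 # incremental smoother
        self.state = gtsam.NavState()
        self.bias = gtsam.imuBias.ConstantBias()
        self.seen = set()                         # marker ids already in the graph
        self.marker_noise = gtsam.noiseModel.Isotropic.Sigma(6, 1e-2)
        # Anchor the first pose, velocity, and bias with priors.
        graph, values = gtsam.NonlinearFactorGraph(), gtsam.Values()
        graph.add(gtsam.PriorFactorPose3(
            X(0), gtsam.Pose3(), gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))))
        graph.add(gtsam.PriorFactorVector(
            V(0), np.zeros(3), gtsam.noiseModel.Isotropic.Sigma(3, 1e-3)))
        graph.add(gtsam.PriorFactorConstantBias(
            B(0), self.bias, gtsam.noiseModel.Isotropic.Sigma(6, 1e-2)))
        values.insert(X(0), gtsam.Pose3())
        values.insert(V(0), np.zeros(3))
        values.insert(B(0), self.bias)
        self.isam.update(graph, values)

    def process_frame(self, k, imu_samples, marker_detections):
        """imu_samples: (accel, gyro, dt) tuples since frame k-1;
        marker_detections: (marker_id, T_cam_marker) pairs for frame k."""
        graph, values = gtsam.NonlinearFactorGraph(), gtsam.Values()
        for acc, gyro, dt in imu_samples:         # integrate raw IMU between frames
            self.preint.integrateMeasurement(acc, gyro, dt)
        graph.add(gtsam.ImuFactor(X(k - 1), V(k - 1), X(k), V(k), B(k - 1), self.preint))
        graph.add(gtsam.BetweenFactorConstantBias(  # slowly varying IMU bias
            B(k - 1), B(k), gtsam.imuBias.ConstantBias(),
            gtsam.noiseModel.Isotropic.Sigma(6, 1e-3)))
        predicted = self.preint.predict(self.state, self.bias)  # IMU-only initial guess
        values.insert(X(k), predicted.pose())
        values.insert(V(k), predicted.velocity())
        values.insert(B(k), self.bias)
        for marker_id, T_cam_marker in marker_detections:
            if marker_id not in self.seen:        # marker poses are unknown a priori
                values.insert(L(marker_id), predicted.pose().compose(T_cam_marker))
                self.seen.add(marker_id)
            graph.add(gtsam.BetweenFactorPose3(
                X(k), L(marker_id), T_cam_marker, self.marker_noise))
        self.isam.update(graph, values)           # incremental smoothing step (iSAM2)
        result = self.isam.calculateEstimate()
        self.state = gtsam.NavState(result.atPose3(X(k)), result.atVector(V(k)))
        self.bias = result.atConstantBias(B(k))
        self.preint.resetIntegrationAndSetBias(self.bias)
        return self.state                         # fused pose and velocity estimate
```

Because marker poses enter the graph as unknowns with relative-pose constraints, adding more printed markers extends the workspace without extra hardware, and iSAM2's incremental updates keep the per-frame cost roughly constant, which is the scalability argument the abstract makes.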


Metadata
Title
A lightweight and scalable visual-inertial motion capture system using fiducial markers
Authors
Guoping He
Shangkun Zhong
Jifeng Guo
Publication date
19.02.2019
Publisher
Springer US
Published in
Autonomous Robots / Issue 7/2019
Print ISSN: 0929-5593
Electronic ISSN: 1573-7527
DOI
https://doi.org/10.1007/s10514-019-09834-7
