Published in: Autonomous Robots 3/2018

26.07.2017

Are you ABLE to perform a life-long visual topological localization?

Authors: Roberto Arroyo, Pablo F. Alcantarilla, Luis M. Bergasa, Eduardo Romera


Abstract

Visual topological localization is a process required by many kinds of mobile autonomous robots, but it becomes a complex task over long operating periods because the appearance of a place varies with dynamic elements, illumination and weather. Owing to these problems, long-term visual place recognition across seasons has become a challenge for the robotics community. For this reason, we propose an innovative method for robust and efficient life-long localization using cameras. In this paper, we describe our approach (ABLE), which includes three different versions depending on the type of images: monocular, stereo and panoramic. This distinction makes our proposal more adaptable and effective, because it allows us to exploit the extra information provided by each type of camera. In addition, we contribute a novel methodology for identifying places based on a fast matching of global binary descriptors extracted from sequences of images. The presented results demonstrate the benefits of using ABLE, which is compared against the most representative state-of-the-art algorithms under long-term conditions.
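As a rough illustration of the matching scheme described above (global binary descriptors computed on whole downsampled frames, concatenated over short image sequences and compared with the Hamming distance), the sketch below shows one way such a pipeline could look. It is a minimal example in Python with OpenCV and NumPy, not the ABLE/OpenABLE implementation itself: the cell-comparison binarization is a simplified stand-in for the descriptor used in the paper, and the function names (global_binary_descriptor, sequence_descriptor, localize) and the sequence length d = 4 are hypothetical choices made only for this example.

import cv2          # OpenCV (see footnote 1)
import numpy as np

def global_binary_descriptor(img, grid=8):
    # Global binary code for one frame: resize the whole image to a small
    # grid of cells and binarize intensity comparisons between neighbouring
    # cells (a simplified stand-in for the descriptor used by ABLE).
    gray = img if img.ndim == 2 else cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cells = cv2.resize(gray, (grid, grid), interpolation=cv2.INTER_AREA).astype(np.int16)
    bits = np.concatenate([
        (cells[:, 1:] > cells[:, :-1]).ravel(),   # horizontal comparisons
        (cells[1:, :] > cells[:-1, :]).ravel(),   # vertical comparisons
    ])
    return np.packbits(bits.astype(np.uint8))     # packed binary descriptor

def sequence_descriptor(frames, d=4):
    # Concatenate the codes of the last d frames, so that places are matched
    # as short sub-sequences of images rather than as single views.
    return np.concatenate([global_binary_descriptor(f) for f in frames[-d:]])

def hamming(a, b):
    # Hamming distance between two packed binary codes (popcount of XOR).
    return int(cv2.norm(a, b, cv2.NORM_HAMMING))

def localize(query_frames, ref_frames, d=4):
    # Match the last d query frames against every window of d consecutive
    # reference frames; return the best reference index and its distance.
    # Assumes both traversals contain at least d frames.
    ref_codes = [sequence_descriptor(ref_frames[:i + 1], d)
                 for i in range(d - 1, len(ref_frames))]
    q_code = sequence_descriptor(query_frames, d)
    dists = [hamming(q_code, r) for r in ref_codes]
    best = int(np.argmin(dists))
    return best + d - 1, dists[best]

Concatenating per-frame codes and comparing them with a single Hamming distance keeps each place comparison to a handful of XOR and popcount operations, which is the efficiency property highlighted in the abstract; for the actual descriptors and matching used by ABLE, see the OpenABLE toolbox referenced in footnote 2.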


Footnotes
1
OpenCV is available from: http://opencv.org/.
 
2
More information, extra material, videos and open code (Arroyo et al. 2016b) about ABLE are available from the website of the project: http://www.robesafe.com/personal/roberto.arroyo/openable.html.
 
References
Alahi, A., Ortiz, R., & Vandergheynst, P. (2012). FREAK: Fast retina keypoint. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Vol. 2, pp. 510–517). doi:10.1109/CVPR.2012.6247715.
Alcantarilla, P. F., Stent, S., Ros, G., Arroyo, R., & Gherardi, R. (2016). Street-view change detection with deconvolutional networks. In Robotics Science and Systems Conference (RSS) (pp. 1–10). doi:10.15607/RSS.2016.XII.044.
Arroyo, R., Alcantarilla, P. F., Bergasa, L. M., Yebes, J. J., & Bronte, S. (2014a). Fast and effective visual place recognition using binary codes and disparity information. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3089–3094). doi:10.1109/IROS.2014.6942989.
Arroyo, R., Alcantarilla, P. F., Bergasa, L. M., Yebes, J. J., & Gámez, S. (2014b). Bidirectional loop closure detection on panoramas for visual navigation. In IEEE Intelligent Vehicles Symposium (IV) (pp. 1378–1383). doi:10.1109/IVS.2014.6856457.
Arroyo, R., Alcantarilla, P. F., Bergasa, L. M., & Romera, E. (2015). Towards life-long visual localization using an efficient matching of binary sequences from images. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 6328–6335). doi:10.1109/ICRA.2015.7140088.
Arroyo, R., Alcantarilla, P. F., Bergasa, L. M., & Romera, E. (2016a). Fusion and binarization of CNN features for robust topological localization across seasons. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 4656–4663). doi:10.1109/IROS.2016.7759685.
Arroyo, R., Alcantarilla, P. F., Bergasa, L. M., & Romera, E. (2016b). OpenABLE: An open-source toolbox for application in life-long visual localization of autonomous vehicles. In IEEE Intelligent Transportation Systems Conference (ITSC) (pp. 965–970). doi:10.1109/ITSC.2016.7795672.
Badino, H., Huber, D. F., & Kanade, T. (2012). Real-time topometric localization. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 1635–1642). doi:10.1109/ICRA.2012.6224716.
Cadena, C., Gálvez-López, D., Ramos, F., Tardós, J. D., & Neira, J. (2010). Robust place recognition with stereo cameras. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5182–5189). doi:10.1109/IROS.2010.5650234.
Calonder, M., Lepetit, V., Özuysal, M., Trzcinski, T., Strecha, C., & Fua, P. (2012). BRIEF: Computing a local binary descriptor very fast. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 34(7), 1281–1298. doi:10.1109/TPAMI.2011.222.
Campos, F. M., Correia, L., & Calado, J. M. F. (2013). Loop closure detection with a holistic image feature. In Portuguese Conference on Artificial Intelligence (EPIA) (Vol. 8154, pp. 247–258). doi:10.1007/978-3-642-40669-0_22.
Caramazana, L., Arroyo, R., & Bergasa, L. M. (2016). Visual odometry correction based on loop closure detection. In Open Conference on Future Trends in Robotics (RoboCity16) (pp. 97–104).
Carlevaris-Bianco, N., & Eustice, R. M. (2014). Learning visual feature descriptors for dynamic lighting conditions. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 2769–2776). doi:10.1109/IROS.2014.6942941.
Carlevaris-Bianco, N., Ushani, A. K., & Eustice, R. M. (2016). University of Michigan North Campus long-term vision and lidar dataset. International Journal of Robotics Research (IJRR), 35(9), 1023–1035. doi:10.1177/0278364915614638.
Ceriani, S., Fontana, G., Giusti, A., Marzorati, D., Matteucci, M., Migliore, D., et al. (2009). Rawseeds ground truth collection systems for indoor self-localization and mapping. Autonomous Robots, 27(4), 353–371. doi:10.1007/s10514-009-9156-5.
Clemente, L. A., Davison, A. J., Reid, I. D., Neira, J., & Tardós, J. D. (2007). Mapping large loops with a single hand-held camera. In Robotics Science and Systems Conference (RSS) (pp. 297–304). doi:10.15607/RSS.2007.III.038.
Corke, P., Paul, R., Churchill, W., & Newman, P. (2013). Dealing with shadows: Capturing intrinsic scene appearance for image-based outdoor localisation. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 2085–2092). doi:10.1109/IROS.2013.6696648.
Cummins, M., & Newman, P. (2008). FAB-MAP: Probabilistic localization and mapping in the space of appearance. International Journal of Robotics Research (IJRR), 27(6), 647–665. doi:10.1177/0278364908090961.
Dalal, N., & Triggs, B. (2005). Histograms of oriented gradients for human detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Vol. 2, pp. 886–893). doi:10.1109/CVPR.2005.177.
Drouilly, R., Rives, P., & Morisset, B. (2015). Semantic representation for navigation in large-scale environments. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 1106–1111). doi:10.1109/ICRA.2015.7139314.
Dymczyk, M., Lynen, S., Cieslewski, T., Bosse, M., Siegwart, R., & Furgale, P. (2015). The gist of maps—Summarizing experience for lifelong localization. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 2767–2773). doi:10.1109/ICRA.2015.7139575.
Erkent, O., & Bozma, H. I. (2015). Long-term topological place learning. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 5462–5467). doi:10.1109/ICRA.2015.7139962.
Fuentes-Pacheco, J., Ruiz-Ascencio, J., & Rendón-Mancha, J. M. (2012). Visual simultaneous localization and mapping: A survey. Artificial Intelligence Review (AIR). doi:10.1007/s10462-012-9365-8.
Geiger, A., Lenz, P., & Urtasun, R. (2012). Are we ready for autonomous driving? The KITTI vision benchmark suite. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 3354–3361). doi:10.1109/CVPR.2012.6248074.
Glover, A. J., Maddern, W., Milford, M., & Wyeth, G. F. (2010). FAB-MAP + RatSLAM: Appearance-based SLAM for multiple times of day. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 3507–3512). doi:10.1109/ROBOT.2010.5509547.
Glover, A. J., Maddern, W., Warren, M., Reid, S., Milford, M., & Wyeth, G. F. (2012). OpenFABMAP: An open source toolbox for appearance-based loop closure detection. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 4730–4735). doi:10.1109/ICRA.2012.6224843.
Korrapati, H., Uzer, F., & Mezouar, Y. (2013). Hierarchical visual mapping with omnidirectional images. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3684–3690). doi:10.1109/IROS.2013.6696882.
Lee, G. H., & Pollefeys, M. (2014). Unsupervised learning of threshold for geometric verification in visual-based loop-closure. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 1510–1516). doi:10.1109/ICRA.2014.6907052.
Leutenegger, S., Chli, M., & Siegwart, R. Y. (2011). BRISK: Binary robust invariant scalable keypoints. In International Conference on Computer Vision (ICCV) (pp. 2548–2555). doi:10.1109/ICCV.2011.6126542.
Linegar, C., Churchill, W., & Newman, P. (2015). Work smart, not hard: Recalling relevant experiences for vast-scale but time-constrained localisation. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 90–97). doi:10.1109/ICRA.2015.7138985.
Liu, Y., & Zhang, H. (2012). Visual loop closure detection with a compact image descriptor. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1051–1056). doi:10.1109/IROS.2012.6386145.
Lowry, S., & Milford, M. (2015). Change removal: Robust online learning for changing appearance and changing viewpoint. In Workshop on Visual Place Recognition in Changing Environments at the IEEE International Conference on Robotics and Automation (W-ICRA).
Lv, Q., Josephson, W., Wang, Z., Charikar, M., & Li, K. (2007). Multi-probe LSH: Efficient indexing for high-dimensional similarity search. In International Conference on Very Large Data Bases (VLDB) (pp. 950–961).
Masatoshi, A., Yuuto, C., Kanji, T., & Kentaro, Y. (2015). Leveraging image-based prior in cross-season place recognition. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 5455–5461). doi:10.1109/ICRA.2015.7139961.
McManus, C., Churchill, W., Maddern, W., Stewart, A., & Newman, P. (2014). Shady dealings: Robust, long-term visual localisation using illumination invariance. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 901–906). doi:10.1109/ICRA.2014.6906961.
Milford, M., & Wyeth, G. F. (2012). SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 1643–1649). doi:10.1109/ICRA.2012.6224623.
Mohan, M., Gálvez-López, D., Monteleoni, C., & Sibley, G. (2015). Environment selection and hierarchical place recognition. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 5487–5494). doi:10.1109/ICRA.2015.7139966.
Mousavian, A., Kosecká, J., & Lien, J. (2015). Semantically guided location recognition for outdoors scenes. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 4882–4889). doi:10.1109/ICRA.2015.7139877.
Muja, M., & Lowe, D. G. (2012). Fast matching of binary features. In Canadian Conference on Computer and Robot Vision (CRV) (pp. 404–410). doi:10.1109/CRV.2012.60.
Murillo, A. C., Singh, G., Kosecká, J., & Guerrero, J. J. (2013). Localization in urban environments using a panoramic gist descriptor. IEEE Transactions on Robotics (TRO), 29(1), 146–160. doi:10.1109/TRO.2012.2220211.
Nelson, P., Churchill, W., Posner, I., & Newman, P. (2015). From dusk till dawn: Localisation at night using artificial light sources. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 5245–5252). doi:10.1109/ICRA.2015.7139930.
Oliva, A., & Torralba, A. (2006). Building the gist of a scene: The role of global image features in recognition. Visual Perception, Progress in Brain Research (PBR), 155(B), 23–36. doi:10.1016/S0079-6123(06)55002-2.
Paul, R., & Newman, P. (2010). FAB-MAP 3D: Topological mapping with spatial and visual appearance. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 2649–2656). doi:10.1109/ROBOT.2010.5509587.
Pepperell, E., Corke, P., & Milford, M. (2014). All-environment visual place recognition with SMART. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 1612–1618). doi:10.1109/ICRA.2014.6907067.
Pepperell, E., Corke, P., & Milford, M. (2015). Automatic image scaling for place recognition in changing environments. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 1118–1124). doi:10.1109/ICRA.2015.7139316.
Rublee, E., Rabaud, V., Konolige, K., & Bradski, G. (2011). ORB: An efficient alternative to SIFT or SURF. In International Conference on Computer Vision (ICCV) (pp. 2564–2571). doi:10.1109/ICCV.2011.6126544.
Smith, M., Baldwin, I., Churchill, W., Paul, R., & Newman, P. (2009). The New College vision and laser data set. International Journal of Robotics Research (IJRR), 28(5), 595–599. doi:10.1177/0278364909103911.
Sünderhauf, N., & Protzel, P. (2011). BRIEF-Gist—Closing the loop by simple means. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1234–1241). doi:10.1109/IROS.2011.6094921.
Sünderhauf, N., Neubert, P., & Protzel, P. (2013). Are we there yet? Challenging SeqSLAM on a 3000 km journey across all four seasons. In Workshop on Long-Term Autonomy at the IEEE International Conference on Robotics and Automation (W-ICRA).
Sünderhauf, N., Shirazi, S., Jacobson, A., Dayoub, F., Pepperell, E., Upcroft, B., & Milford, M. (2015). Place recognition with ConvNet landmarks: Viewpoint-robust, condition-robust, training-free. In Robotics Science and Systems Conference (RSS) (pp. 1–10). doi:10.15607/RSS.2015.XI.022.
Ulrich, I., & Nourbakhsh, I. R. (2000). Appearance-based place recognition for topological localization. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 1023–1029). doi:10.1109/ROBOT.2000.844734.
Upcroft, B., McManus, C., Churchill, W., Maddern, W., & Newman, P. (2014). Lighting invariant urban street classification. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 1712–1718). doi:10.1109/ICRA.2014.6907082.
Williams, B., Cummins, M., Neira, J., Newman, P., Reid, I. D., & Tardós, J. D. (2008). An image-to-map loop closing method for monocular SLAM. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 2053–2059). doi:10.1109/IROS.2008.4650996.
Yang, X., & Cheng, K. T. (2014). Local difference binary for ultrafast and distinctive feature description. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 36(1), 188–194. doi:10.1109/TPAMI.2013.150.
Metadata
Title
Are you ABLE to perform a life-long visual topological localization?
Authors
Roberto Arroyo
Pablo F. Alcantarilla
Luis M. Bergasa
Eduardo Romera
Publication date
26.07.2017
Publisher
Springer US
Published in
Autonomous Robots / Issue 3/2018
Print ISSN: 0929-5593
Electronic ISSN: 1573-7527
DOI
https://doi.org/10.1007/s10514-017-9664-7
