
2018 | Original Paper | Book Chapter

2. Background

Authors: Emilio Garcia-Fidalgo, Alberto Ortiz

Published in: Methods for Appearance-based Loop Closure Detection

Publisher: Springer International Publishing


Abstract

This chapter provides the reader with a general overview of the most important concepts and terms needed to understand the rest of the book. The main concepts are briefly introduced, using examples where needed for illustration. More precisely, in the first section, we consider the concept of a topological map, define it formally, and discuss its main advantages and disadvantages compared to metric approaches. Next, we deal with appearance-based loop closure detection and the factors that most affect the performance of the underlying algorithms.
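
As a complement to the abstract, the following minimal sketch (in Python, not taken from the chapter) illustrates the two notions it refers to: a topological map represented as a graph whose nodes are places, each summarized by a global image descriptor, and whose edges encode traversability, together with a naive appearance-based loop closure check by descriptor similarity. The class name, the cosine-similarity measure and the threshold are illustrative assumptions, not the authors' formulation.

    # Minimal sketch (illustrative assumptions only): a topological map as a
    # graph of places, and loop closure detection as matching the current
    # image descriptor against previously stored nodes.
    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class TopologicalMap:
        descriptors: list = field(default_factory=list)   # one descriptor per node (place)
        edges: set = field(default_factory=set)            # undirected links between nodes

        def add_place(self, descriptor, link_to=None):
            """Insert a new node and optionally connect it to a previous one."""
            self.descriptors.append(np.asarray(descriptor, dtype=float))
            node_id = len(self.descriptors) - 1
            if link_to is not None:
                self.edges.add((min(link_to, node_id), max(link_to, node_id)))
            return node_id

        def detect_loop_closure(self, descriptor, threshold=0.9):
            """Return the most similar stored node if it exceeds the threshold."""
            query = np.asarray(descriptor, dtype=float)
            best_id, best_sim = None, -1.0
            for node_id, d in enumerate(self.descriptors):
                sim = float(np.dot(query, d) / (np.linalg.norm(query) * np.linalg.norm(d)))
                if sim > best_sim:
                    best_id, best_sim = node_id, sim
            return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)

    # Usage: build a small map and query it with a descriptor close to node 0.
    tmap = TopologicalMap()
    a = tmap.add_place([1.0, 0.0, 0.2])
    b = tmap.add_place([0.1, 1.0, 0.0], link_to=a)
    print(tmap.detect_loop_closure([0.95, 0.05, 0.2]))   # likely reports a match with node 0

This snippet only fixes the vocabulary (nodes as places, edges as connectivity, loop closure as re-recognition of a visited place by appearance); the chapter itself develops these ideas formally and discusses the factors that affect the performance of real detection algorithms.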


Metadata
Title
Background
Authors
Emilio Garcia-Fidalgo
Alberto Ortiz
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-75993-7_2
