
17.12.2019 | Original Research Paper

Qualitative vision-based navigation based on sloped funnel lane concept

Authors: Mohamad Mahdi Kassir, Maziar Palhang, Mohammad Reza Ahmadzadeh

Published in: Intelligent Service Robotics | Issue 2/2020


Abstract

Funnel lane is a map-less visual navigation technique that qualitatively follows a path previously recorded by a camera. Unlike some other methods, it requires no computation to relate world coordinates to image coordinates. However, the funnel lane has some shortcomings. First, it provides no information about the robot's radius of rotation, which reduces the robot's maneuverability and, on some occasions, prevents it from correcting its path when a deviation occurs. Second, the funnel lane constraints sometimes cannot distinguish between forward and turning motion while the robot is inside the funnel lane, and simply command the robot to go forward. This keeps the robot from following the desired path and can lead to failure of its mission. This paper introduces the sloped funnel lane technique to address these shortcomings. It sets the rotation radius based on the observed frames and reduces the translation and rotation ambiguity. Therefore, the robot can follow any desired path, leading to more robust and accurate navigation. Experimental results in challenging scenarios on a real ground robot demonstrate the effectiveness of the sloped funnel lane technique.
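To make the shortcoming discussed in the abstract concrete, the following is a minimal, hypothetical sketch of a funnel-lane-style qualitative decision rule. The constraint form (comparing each tracked feature's horizontal coordinate in the current frame with its coordinate in the recorded keyframe), the sign conventions, and all function names are assumptions for illustration, not the paper's actual formulation. Note how, whenever the constraints are satisfied, the rule can only command forward motion and says nothing about a rotation radius; this is the ambiguity the sloped funnel lane is designed to remove.

```python
from typing import List, Tuple

def feature_vote(u_curr: float, u_dest: float) -> str:
    """Vote for one tracked feature, given its horizontal image coordinate
    in the current frame (u_curr) and in the recorded keyframe (u_dest),
    both measured in pixels from the image centre (illustrative convention)."""
    # Inside the funnel lane: the feature has not yet drifted past its
    # keyframe position and stays on the same side of the image centre.
    inside_funnel = abs(u_curr) <= abs(u_dest) and u_curr * u_dest >= 0.0
    if inside_funnel:
        # Ambiguity criticised in the abstract: while the constraints hold,
        # the qualitative rule can only say "go forward".
        return "straight"
    # Constraint violated: turn toward the side that would shift the feature
    # back toward its keyframe position (sign convention assumed here).
    return "right" if u_curr > u_dest else "left"

def funnel_lane_command(pairs: List[Tuple[float, float]]) -> str:
    """Majority vote over all (u_curr, u_dest) feature pairs."""
    votes = [feature_vote(uc, ud) for uc, ud in pairs]
    return max(("straight", "left", "right"), key=votes.count)

if __name__ == "__main__":
    # Hypothetical feature coordinates, in pixels from the image centre.
    tracked = [(12.0, 30.0), (-8.0, -25.0), (40.0, 18.0)]
    print(funnel_lane_command(tracked))  # -> "straight"
```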

Metadata
Title
Qualitative vision-based navigation based on sloped funnel lane concept
Authors
Mohamad Mahdi Kassir
Maziar Palhang
Mohammad Reza Ahmadzadeh
Publication date
17.12.2019
Publisher
Springer Berlin Heidelberg
Published in
Intelligent Service Robotics / Issue 2/2020
Print ISSN: 1861-2776
Electronic ISSN: 1861-2784
DOI
https://doi.org/10.1007/s11370-019-00308-4
