
2024 | Original Paper | Book Chapter

ROS Compatible Local Planner and Controller Based on Reinforcement Learning

Authors: Muharrem Küçükyılmaz, Erkan Uslu

Published in: Advances in Intelligent Manufacturing and Service System Informatics

Publisher: Springer Nature Singapore


Abstract

The study’s main objective is to develop a ROS compatible local planner and controller for autonomous mobile robots based on reinforcement learning. A reinforcement learning based local planner and controller differs from classical linear or nonlinear deterministic control approaches in its flexibility under newly encountered conditions and its model-free learning process. Two reinforcement learning approaches are employed in the study, namely Q-Learning and DQN, which are then compared with deterministic local planners such as TEB and DWA. The Q-Learning agent is trained with a positive reward for reaching the goal point and a negative reward for colliding with obstacles or reaching the outer limits of the restricted movable area. The Q-Learning approach reaches acceptable behaviour at around 70,000 episodes; the long training time stems from the large state space, which Q-Learning cannot handle well. The second method, DQN, handles this large state space more easily, reaching acceptable behaviour at around 7,000 episodes, which also allows the model to include the global path as a secondary reward measure. Both models assume the map is fully or partially known, and both are supplied with a global plan that is not aware of obstacles ahead. Both methods are expected to learn the speed controls required to reach the goal point as quickly as possible while avoiding obstacles. The promising results of the study point toward a more generic, reinforcement learning based local planner that can consume intermediate waypoints on the global path, even in dynamic environments.
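The reward scheme described in the abstract (positive on reaching the goal, negative on collision or leaving the movable area) plugs directly into the standard tabular Q-Learning update. The sketch below is a minimal illustration under assumed placeholder values: the action set, the learning-rate/discount/exploration constants, and the state encoding are hypothetical and not taken from the paper.

```python
import random

# Hypothetical action set and hyperparameters for illustration only;
# the paper's actual discretization of speed controls is not specified here.
ACTIONS = ["forward", "left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q_table = {}  # (state, action) -> estimated return

def q(state, action):
    return q_table.get((state, action), 0.0)

def choose_action(state):
    # Epsilon-greedy exploration over the discrete action set.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))

def reward(reached_goal, collided, out_of_bounds):
    # Reward structure as described in the abstract: +1 for the goal,
    # -1 for hitting an obstacle or leaving the restricted movable area.
    if reached_goal:
        return 1.0
    if collided or out_of_bounds:
        return -1.0
    return 0.0  # neutral step; the DQN variant additionally scores global-path adherence

def update(state, action, r, next_state):
    # Standard tabular Q-Learning update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(q(next_state, a) for a in ACTIONS)
    q_table[(state, action)] = q(state, action) + ALPHA * (
        r + GAMMA * best_next - q(state, action)
    )
```

With a large state space, this table grows quickly, which is consistent with the abstract's observation that tabular Q-Learning needs roughly ten times more episodes than DQN, where a neural network replaces the table as the Q-function approximator.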


Metadata
Title
ROS Compatible Local Planner and Controller Based on Reinforcement Learning
Authors
Muharrem Küçükyılmaz
Erkan Uslu
Copyright year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-99-6062-0_37
