Published in:

15-02-2024 | Connected Automated Vehicles and ITS

Reinforcement-Tracking: An End-to-End Trajectory Tracking Method Based on Self-Attention Mechanism

Authors: Guanglei Zhao, Zihao Chen, Weiming Liao

Published in: International Journal of Automotive Technology | Issue 3/2024

Abstract

As a bridge between planners and actuators, trajectory tracking is a critical and essential part of robot navigation, autonomous driving, and related fields. Traditional trajectory tracking methods, however, generally suffer from drawbacks such as poor tracking accuracy, demanding modeling requirements, and heavy computational load. This paper proposes an end-to-end trajectory tracking method based on reinforcement learning, in which an information encoding network and a reinforcement learning policy network are constructed. A multi-task dense reward function for trajectory tracking is designed, and a self-attention mechanism is developed for efficient encoding of local trajectory information. A virtual simulation environment is constructed for model training by modeling the trajectory tracking task. Compared with model predictive control and pure pursuit in tracking experiments on several reference trajectories, the proposed method shows significant advantages in lateral tracking accuracy.
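To illustrate the encoding idea described in the abstract, the sketch below shows how a single self-attention head can turn a variable-length window of local trajectory waypoints into a fixed-size state vector for a policy network. This is a minimal numpy sketch, not the paper's architecture: the projection matrices `w_in`, `wq`, `wk`, and `wv` are random placeholders standing in for weights that would be learned end-to-end, and the mean-pooling step is an assumed design choice.

```python
import numpy as np

def self_attention_encode(points, d_model=16, seed=0):
    """Encode a local reference trajectory (n x 2 array of waypoints)
    with one self-attention head and pool it into a fixed-size vector.
    All weight matrices here are random placeholders; in a trained
    system they would be optimized jointly with the policy network."""
    rng = np.random.default_rng(seed)
    d_in = points.shape[1]
    # Hypothetical learned projections: waypoint features -> embeddings.
    w_in = rng.standard_normal((d_in, d_model)) / np.sqrt(d_in)
    x = points @ w_in                               # (n, d_model) token embeddings
    wq = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    wk = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    wv = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(d_model)             # (n, n) pairwise attention logits
    scores -= scores.max(axis=1, keepdims=True)     # subtract row max for stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)         # softmax over waypoints
    encoded = attn @ v                              # (n, d_model) context-aware features
    return encoded.mean(axis=0)                     # pooled fixed-size state vector

# Example: 10 waypoints sampled from a sinusoidal reference path.
t = np.linspace(0.0, 5.0, 10)
waypoints = np.stack([t, np.sin(t)], axis=1)
state = self_attention_encode(waypoints)
print(state.shape)  # (16,)
```

Because attention is computed over all waypoint pairs, the pooled vector has the same dimension regardless of how many waypoints the local window contains, which is what makes such an encoder convenient as input to a fixed-size reinforcement learning policy.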

Metadata
Title
Reinforcement-Tracking: An End-to-End Trajectory Tracking Method Based on Self-Attention Mechanism
Authors
Guanglei Zhao
Zihao Chen
Weiming Liao
Publication date
15-02-2024
Publisher
The Korean Society of Automotive Engineers
Published in
International Journal of Automotive Technology / Issue 3/2024
Print ISSN: 1229-9138
Electronic ISSN: 1976-3832
DOI
https://doi.org/10.1007/s12239-024-00043-5
