
2024 | Original Paper | Book Chapter

PnP: Integrated Prediction and Planning for Interactive Lane Change in Dense Traffic

Authors: Xueyi Liu, Qichao Zhang, Yinfeng Gao, Zhongpu Xia

Published in: Neural Information Processing

Publisher: Springer Nature Singapore


Abstract

Making human-like decisions for autonomous driving in interactive scenarios is crucial and difficult, as it requires the self-driving vehicle to reason about how interactive vehicles will react to its behavior. To address this challenge, we propose an integrated prediction and planning (PnP) decision-making approach. A reactive trajectory prediction model is developed to predict the future states of other actors, accounting for the interactive nature of their behaviors. Then, n-step temporal-difference search combines a value estimation network with the reactive prediction model to make a tactical decision and plan the tracking trajectory for the self-driving vehicle. The proposed PnP method is evaluated in the CARLA simulator, and the results demonstrate that PnP outperforms popular model-free and model-based reinforcement learning baselines.
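The n-step temporal-difference search described in the abstract can be sketched as follows: each candidate ego action is rolled out through the reactive prediction model for n steps, and the return is bootstrapped with the learned value estimate. This is a minimal sketch under assumed interfaces — the function names, toy dynamics, and parameters are illustrative, not the paper's implementation.

```python
# Hypothetical sketch of n-step TD search: roll out each candidate action
# through a (reactive) prediction model, accumulate discounted rewards,
# and bootstrap the tail of the return with a value estimate.

def n_step_td_search(state, actions, predict, reward, value, n=3, gamma=0.95):
    """Return the candidate action with the highest n-step TD return."""
    best_action, best_return = None, float("-inf")
    for a in actions:
        s, ret = state, 0.0
        for k in range(n):
            s = predict(s, a)              # reactive model: traffic responds to ego
            ret += (gamma ** k) * reward(s)
        ret += (gamma ** n) * value(s)     # bootstrap with the value network
        if ret > best_return:
            best_action, best_return = a, ret
    return best_action, best_return

# Toy 1-D lane-change stand-in: state is a lateral offset, target lane at 1.0.
predict = lambda s, a: s + 0.5 * a         # ego drifts toward the chosen lane
reward = lambda s: -abs(s - 1.0)           # closer to the target lane is better
value = lambda s: -abs(s - 1.0)            # stand-in for the value network

action, ret = n_step_td_search(0.0, [-1, 0, 1], predict, reward, value)
# action == 1: looking ahead n steps favors the change toward the target lane
```

Because the search evaluates each action over an n-step horizon rather than a single step, it can prefer a lane change whose immediate reward is low but whose bootstrapped return is high — the interactive-driving behavior the abstract targets.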


Metadata
Copyright year
2024
DOI
https://doi.org/10.1007/978-981-99-8076-5_22
