
2024 | OriginalPaper | Chapter

Traffic Signal Control Optimization Based on Deep Reinforcement Learning with Attention Mechanisms

Authors: Wenlong Ni, Peng Wang, Zehong Li, Chuanzhuang Li

Published in: Neural Information Processing

Publisher: Springer Nature Singapore


Abstract

Deep reinforcement learning (DRL) plays a vital role in adaptive traffic signal control. However, previous studies have frequently disregarded the significance of vehicles near intersections, which typically carry higher decision-making and safety stakes. To address this, the paper presents a novel DRL-based traffic signal control method that incorporates an attention mechanism into the Dueling Double Deep Q Network (D3QN) framework. The approach prioritizes vehicles near the intersection by assigning them higher attention weights. Moreover, the state design incorporates the current signal-light status to give the agent a more complete view of the traffic environment, and performance is further improved through the Double DQN and Dueling DQN techniques. The experimental findings demonstrate the superior efficacy of the proposed method on key metrics such as vehicle waiting time, queue length, and the number of halted vehicles, compared to D3QN, traditional DQN, and fixed-timing strategies.
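The abstract's core idea of weighting vehicles near the intersection more heavily can be sketched as a distance-based attention pooling of per-vehicle features. This is an illustrative reading only: the paper's actual attention module is learned inside the network, and the function names, the softmax-over-negative-distance scoring, and the `temperature` parameter below are assumptions, not the authors' implementation.

```python
import numpy as np

def attention_weights(distances, temperature=50.0):
    """Softmax attention over vehicles: closer to the stop line -> higher weight.

    `distances` are metres from each vehicle to the intersection stop line.
    `temperature` controls how sharply attention concentrates on near vehicles.
    """
    scores = -np.asarray(distances, dtype=float) / temperature
    exp = np.exp(scores - scores.max())  # subtract max for numerical stability
    return exp / exp.sum()

def weighted_state(features, distances):
    """Aggregate per-vehicle feature vectors into one state vector,
    emphasising vehicles near the intersection."""
    w = attention_weights(distances)
    return w @ np.asarray(features, dtype=float)
```

In this sketch the aggregated state vector would then be concatenated with the current signal-light status before being fed to the Q-network, mirroring the state design described in the abstract.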
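The two standard D3QN ingredients the abstract names can be summarised in a few lines: the dueling decomposition Q(s, a) = V(s) + A(s, a) - mean_a A(s, a), and the Double DQN target, where the online network selects the next action and the target network evaluates it. The sketch below shows only these update rules on plain arrays, not the paper's full training loop.

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    adv = np.asarray(advantages, dtype=float)
    return value + adv - adv.mean()

def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN: the online net picks argmax_a Q_online(s', a);
    the target net supplies the value of that action."""
    if done:
        return reward
    a_star = int(np.argmax(next_q_online))
    return reward + gamma * float(next_q_target[a_star])
```

Subtracting the mean advantage makes the V/A decomposition identifiable, and decoupling action selection from evaluation reduces the overestimation bias of vanilla DQN.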


DOI: https://doi.org/10.1007/978-981-99-8067-3_11