Published in: Optical and Quantum Electronics 4/2024

01.04.2024

AI drive: quantum-computational DRL framework for EHV navigational efficiency and security augmentation

Authors: Ishan Shivansh Bangroo, Ravi Kumar


Abstract

Q-learning, a core method within deep reinforcement learning (DRL), has proven useful across many tasks, including navigation, as demonstrated by the DQN approach. Combining Q-learning with deep neural networks, however, introduces challenges of its own, notably a tendency to overestimate action values. To address these problems in the context of electric and hybrid vehicles (EHVs), this work proposes QED2Q-N, a quantum-computational adaptation inspired by Double Q-learning. The enhanced framework improves navigational efficacy by mitigating the overestimation present in earlier designs, while still approximating value functions at scale. At the same time, the trend toward greater autonomy in EHVs, together with advances in intelligent transportation systems (ITS), exposes vehicle automation and cooperative ITS to significant cybersecurity threats. With an emphasis on how quantum-enhanced systems can strengthen defenses against these threats, the combination of quantum computing and DRL is shown to improve EHV navigation accuracy and overall system security. The study underlines the relevance of adopting quantum-computing techniques to address the dynamic challenges associated with EHV technology.
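The overestimation issue and the Double Q-learning remedy referred to in the abstract can be summarized in a few lines. The sketch below is a classical illustration only, not the authors' QED2Q-N implementation; it contrasts the standard DQN target with the Double DQN target, which decouples action selection (online network) from action evaluation (target network). The network objects, batch shapes, and PyTorch dependency are assumptions made for illustration.

import torch

def dqn_target(reward, next_state, done, target_net, gamma=0.99):
    # Standard DQN target: the target network both selects and evaluates
    # the greedy action, which tends to overestimate action values.
    with torch.no_grad():
        max_next_q = target_net(next_state).max(dim=1).values
    return reward + gamma * (1.0 - done) * max_next_q

def double_dqn_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    # Double DQN target: the online network selects the greedy action,
    # the target network evaluates it, reducing the overestimation bias.
    with torch.no_grad():
        greedy_actions = online_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, greedy_actions).squeeze(1)
    return reward + gamma * (1.0 - done) * next_q

In a training loop, either target would be regressed against the online network's Q-value for the action actually taken; the Double DQN variant typically yields less inflated value estimates in tasks such as navigation.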


Metadata
Title
AI drive: quantum-computational DRL framework for EHV navigational efficiency and security augmentation
Authors
Ishan Shivansh Bangroo
Ravi Kumar
Publication date
01.04.2024
Publisher
Springer US
Published in
Optical and Quantum Electronics / Issue 4/2024
Print ISSN: 0306-8919
Electronic ISSN: 1572-817X
DOI
https://doi.org/10.1007/s11082-023-06162-0
