Published in: Wireless Personal Communications 3/2020

15.05.2020

A Comprehensive Survey on Autonomous Driving Cars: A Perspective View

By: S. Devi, P. Malarvezhi, R. Dayana, K. Vadivukkarasi



Abstract

Over the past decades, machine learning and deep learning algorithms have played a vital part in the development of autonomous vehicles. The perception system must examine the environment around the vehicle and identify objects such as pedestrians, other vehicles, and traffic signals. Using this information, the control system module can take the necessary actions to control the vehicle in terms of braking, speed, lane changes, or steering. This paper surveys the machine learning algorithms and techniques applied in the design of autonomous driving systems over the past decade. The performance of each algorithm has been analyzed in terms of prediction time and accuracy, documented, and compared.
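The perception-to-control flow described in the abstract can be sketched as a minimal loop: detections from the perception stage feed a rule that selects a control command. This is an illustrative assumption only; the `Detection` type, the object labels, and the distance thresholds below are invented for the sketch and do not come from the paper.

```python
# Illustrative sketch of a perception-to-control hand-off (not the paper's
# method): detected objects drive a simple braking/speed decision.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "vehicle", "traffic_signal"
    distance_m: float  # estimated distance to the object in metres

def control_action(detections, braking_distance_m=15.0):
    """Return a control command based on the nearest relevant detection."""
    relevant = [d for d in detections if d.label in ("pedestrian", "vehicle")]
    if not relevant:
        return "maintain_speed"
    nearest = min(relevant, key=lambda d: d.distance_m)
    if nearest.distance_m < braking_distance_m:
        return "brake"
    if nearest.distance_m < 2 * braking_distance_m:
        return "reduce_speed"
    return "maintain_speed"

# A pedestrian 10 m ahead is closer than the braking threshold.
print(control_action([Detection("pedestrian", 10.0), Detection("vehicle", 40.0)]))
# → brake
```

In a real system the detections would come from a trained detector (e.g. a CNN over camera frames) and the command from a planner or controller; the point here is only the data flow between the two modules the abstract names.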


Metadata
Title
A Comprehensive Survey on Autonomous Driving Cars: A Perspective View
Authors
S. Devi
P. Malarvezhi
R. Dayana
K. Vadivukkarasi
Publication date
15.05.2020
Publisher
Springer US
Published in
Wireless Personal Communications / Issue 3/2020
Print ISSN: 0929-6212
Electronic ISSN: 1572-834X
DOI
https://doi.org/10.1007/s11277-020-07468-y
