Published in: Optical Memory and Neural Networks 2/2022

01.06.2022

Advanced Techniques for Perception and Localization in Autonomous Driving Systems: A Survey

Authors: Qusay Sellat, Kanagachidambaresan Ramasubramanian



Abstract

Autonomous driving research has progressed significantly in recent years. To travel safely, comfortably, and efficiently, an autonomous car must fully comprehend the driving scene at all times. The key requirements for such a comprehensive understanding are accurate perception and localization. Research on the perception and localization of autonomous cars has grown substantially as a result of recent breakthroughs in AI, in particular the wide range of deep learning approaches. However, owing to environmental uncertainty, sensor noise, and the complex interactions among the elements of the driving environment, further study is required to achieve fully trustworthy perception and localization systems. In this survey, we review advanced perception and localization techniques in the field of autonomous driving and show how cutting-edge approaches and practices have brought today's autonomous cars closer than ever to a complete understanding of the driving environment.


138.
Zurück zum Zitat Garcia-Fidalgo, E. and Ortiz, A., ibow-lcd: An appearance-based loop-closure detection approach using incremental bags of binary words, IEEE Rob. Autom. Lett., 2018, vol. 3, no. 4, pp. 3051–3057.CrossRef Garcia-Fidalgo, E. and Ortiz, A., ibow-lcd: An appearance-based loop-closure detection approach using incremental bags of binary words, IEEE Rob. Autom. Lett., 2018, vol. 3, no. 4, pp. 3051–3057.CrossRef
139.
Zurück zum Zitat Nobis, F., Papanikolaou, O., Betz, J., and Lienkamp, M., Persistent map saving for visual localization for autonomous vehicles: An ORB-SLAM 2 extension, in 2020 Fifteenth Int. Conf. on Ecological Vehicles and Renewable Energies (EVER), IEEE, 2020, pp. 1–9. Nobis, F., Papanikolaou, O., Betz, J., and Lienkamp, M., Persistent map saving for visual localization for autonomous vehicles: An ORB-SLAM 2 extension, in 2020 Fifteenth Int. Conf. on Ecological Vehicles and Renewable Energies (EVER), IEEE, 2020, pp. 1–9.
140.
Zurück zum Zitat Kasmi, A., Laconte, J., Aufrère, R., Denis, D., and Chapuis, R., End-to-end probabilistic ego-vehicle localization framework, IEEE Trans. Intell. Veh., 2020, vol. 6, no. 1, pp. 146–158.CrossRef Kasmi, A., Laconte, J., Aufrère, R., Denis, D., and Chapuis, R., End-to-end probabilistic ego-vehicle localization framework, IEEE Trans. Intell. Veh., 2020, vol. 6, no. 1, pp. 146–158.CrossRef
141.
Zurück zum Zitat Ramm, F., Topf, J., and Chilton, S., OpenStreetMap: using and enhancing the free map of the world, SoC Bulletin, Cambridge: UIT Cambridge, 2011, vol. 45, p. 55. Ramm, F., Topf, J., and Chilton, S., OpenStreetMap: using and enhancing the free map of the world, SoC Bulletin, Cambridge: UIT Cambridge, 2011, vol. 45, p. 55.
142.
Zurück zum Zitat Rozenberszki, D. and Majdik, A.L., May. LOL: Lidar-only odometry and localization in 3D point cloud maps, in 2020 IEEE Int. Conf. on Robotics and Automation (ICRA), IEEE, 2020, pp. 4379–4385. Rozenberszki, D. and Majdik, A.L., May. LOL: Lidar-only odometry and localization in 3D point cloud maps, in 2020 IEEE Int. Conf. on Robotics and Automation (ICRA), IEEE, 2020, pp. 4379–4385.
143.
Zurück zum Zitat Koide, K., Miura, J., and Menegatti, E., A portable three-dimensional LIDAR-based system for long-term and wide-area people behavior measurement, Int. J. Adv. Rob. Syst. 2019, vol. 16, no. 2, p. 1729881419841532. Koide, K., Miura, J., and Menegatti, E., A portable three-dimensional LIDAR-based system for long-term and wide-area people behavior measurement, Int. J. Adv. Rob. Syst. 2019, vol. 16, no. 2, p. 1729881419841532.
144.
Zurück zum Zitat Chetverikov, D., Svirko, D., Stepanov, D., and Krsek, P., The trimmed iterative closest point algorithm, in Object Recognition Supported by User Interaction for Service Robots, IEEE, 2002, vol. 3, pp. 545–548. Chetverikov, D., Svirko, D., Stepanov, D., and Krsek, P., The trimmed iterative closest point algorithm, in Object Recognition Supported by User Interaction for Service Robots, IEEE, 2002, vol. 3, pp. 545–548.
145.
Zurück zum Zitat Zhang, J. and Singh, S., LOAM: Lidar odometry and mapping in real-time, in Robotics: Science and Systems, 2014, vol. 2, no. 9. Zhang, J. and Singh, S., LOAM: Lidar odometry and mapping in real-time, in Robotics: Science and Systems, 2014, vol. 2, no. 9.
146.
Zurück zum Zitat Shan, T. and Englot, B., Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain, in 2018 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), IEEE, 2018, pp. 4758–4765. Shan, T. and Englot, B., Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain, in 2018 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), IEEE, 2018, pp. 4758–4765.
147.
Zurück zum Zitat Elfes, A., Using occupancy grids for mobile robot perception and navigation, Computer, 1989, vol. 22, no. 6, pp. 46–57.CrossRef Elfes, A., Using occupancy grids for mobile robot perception and navigation, Computer, 1989, vol. 22, no. 6, pp. 46–57.CrossRef
148.
Zurück zum Zitat Levinson, J., Montemerlo, M., and Thrun, S., Map-based precision vehicle localization in urban environments, in Robotics: Science and Systems, 2007.MATH Levinson, J., Montemerlo, M., and Thrun, S., Map-based precision vehicle localization in urban environments, in Robotics: Science and Systems, 2007.MATH
149.
Zurück zum Zitat Castorena, J. and Agarwal, S., Ground-edge-based LIDAR localization without a reflectivity calibration for autonomous driving, IEEE Rob. Autom. Lett., 2017, vol. 3, no. 1, pp. 344–351.CrossRef Castorena, J. and Agarwal, S., Ground-edge-based LIDAR localization without a reflectivity calibration for autonomous driving, IEEE Rob. Autom. Lett., 2017, vol. 3, no. 1, pp. 344–351.CrossRef
150.
Zurück zum Zitat de Paula Veronese, L., Auat-Cheein, F., Mutz, F., Oliveira-Santos, T., Guivant, J.E., de Aguiar, E., Badue, C., and De Souza, A.F., Evaluating the limits of a LiDAR for an autonomous driving localization, IEEE Trans. Intell. Transp. Syst., 2020, vol. 22, no. 3, pp. 1449–1458.CrossRef de Paula Veronese, L., Auat-Cheein, F., Mutz, F., Oliveira-Santos, T., Guivant, J.E., de Aguiar, E., Badue, C., and De Souza, A.F., Evaluating the limits of a LiDAR for an autonomous driving localization, IEEE Trans. Intell. Transp. Syst., 2020, vol. 22, no. 3, pp. 1449–1458.CrossRef
151.
Zurück zum Zitat Dubé, R., Gollub, M.G., Sommer, H., Gilitschenski, I., Siegwart, R., Cadena, C., and Nieto, J., Incremental-segment-based localization in 3-d point clouds, IEEE Rob. Autom. Lett., 2018, vol. 3, no. 3, pp. 1832–1839.CrossRef Dubé, R., Gollub, M.G., Sommer, H., Gilitschenski, I., Siegwart, R., Cadena, C., and Nieto, J., Incremental-segment-based localization in 3-d point clouds, IEEE Rob. Autom. Lett., 2018, vol. 3, no. 3, pp. 1832–1839.CrossRef
152.
Zurück zum Zitat Whelan, T., Ma, L., Bondarev, E., de With, P.H.N., and McDonald, J., Incremental and batch planar simplification of dense point cloud maps, Rob. Autonom. Syst., 2015, vol. 69, pp. 3–14.CrossRef Whelan, T., Ma, L., Bondarev, E., de With, P.H.N., and McDonald, J., Incremental and batch planar simplification of dense point cloud maps, Rob. Autonom. Syst., 2015, vol. 69, pp. 3–14.CrossRef
153.
Zurück zum Zitat Lu, W., Wan, G., Zhou, Y., Fu, X., Yuan, P., and Song, S., DeepICP: An end-to-end deep neural network for 3D point cloud registration, IEEE Int. Conf. on Computer Vision (ICCV), 2019, pp. 12–21. Lu, W., Wan, G., Zhou, Y., Fu, X., Yuan, P., and Song, S., DeepICP: An end-to-end deep neural network for 3D point cloud registration, IEEE Int. Conf. on Computer Vision (ICCV), 2019, pp. 12–21.
154.
Zurück zum Zitat Khan, S., Wollherr, D., and Buss, M., Modeling laser intensities for simultaneous localization and mapping, IEEE Rob. Autom. Lett., 2016, vol. 1, no. 2, pp. 692–699.CrossRef Khan, S., Wollherr, D., and Buss, M., Modeling laser intensities for simultaneous localization and mapping, IEEE Rob. Autom. Lett., 2016, vol. 1, no. 2, pp. 692–699.CrossRef
155.
Zurück zum Zitat Cadena, C., Carlone, L., Carrillo, H., Latif, Y., Scaramuzza, D., Neira, J., Reid, I., and Leonard, J.J., Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age, IEEE Trans. Rob., 2016, vol. 32, no. 6, pp. 1309–1332.CrossRef Cadena, C., Carlone, L., Carrillo, H., Latif, Y., Scaramuzza, D., Neira, J., Reid, I., and Leonard, J.J., Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age, IEEE Trans. Rob., 2016, vol. 32, no. 6, pp. 1309–1332.CrossRef
156.
Zurück zum Zitat Kohlbrecher, S., Von Stryk, O., Meyer, J., and Klingauf, U., A flexible and scalable SLAM system with full 3D motion estimation, in 2011 IEEE Int. Symposium on Safety, Security, and Rescue Robotics, IEEE, 2011, pp. 155–160. Kohlbrecher, S., Von Stryk, O., Meyer, J., and Klingauf, U., A flexible and scalable SLAM system with full 3D motion estimation, in 2011 IEEE Int. Symposium on Safety, Security, and Rescue Robotics, IEEE, 2011, pp. 155–160.
157.
Zurück zum Zitat Sun, L., Zhao, J., He, X., and Ye, C., June. Dlo: Direct lidar odometry for 2.5 d outdoor environment, in 2018 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2018, pp. 1–5. Sun, L., Zhao, J., He, X., and Ye, C., June. Dlo: Direct lidar odometry for 2.5 d outdoor environment, in 2018 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2018, pp. 1–5.
158.
Zurück zum Zitat Li, J., Zhao, J., Kang, Y., He, X., Ye, C., and Sun, L., DL-SLAM: Direct 2.5 D LiDAR SLAM for Autonomous Driving, in 2019 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2019, pp. 1205–1210. Li, J., Zhao, J., Kang, Y., He, X., Ye, C., and Sun, L., DL-SLAM: Direct 2.5 D LiDAR SLAM for Autonomous Driving, in 2019 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2019, pp. 1205–1210.
Metadata
Title
Advanced Techniques for Perception and Localization in Autonomous Driving Systems: A Survey
Authors
Qusay Sellat
Kanagachidambaresan Ramasubramanian
Publication date
01.06.2022
Publisher
Pleiades Publishing
Published in
Optical Memory and Neural Networks / Issue 2/2022
Print ISSN: 1060-992X
Electronic ISSN: 1934-7898
DOI
https://doi.org/10.3103/S1060992X22020084
