Published in: Multimedia Systems 5/2023

22-07-2021 | Special Issue Paper

Visual driving assistance system based on few-shot learning

Authors: Shan Liu, Yichao Tang, Ying Tian, Hansong Su

Abstract

With the growing number of vehicles and the increasing diversity of road conditions, driving safety has attracted more and more attention. In recent years, autonomous driving technology by Franke et al. (IEEE Intell Syst Their Appl 13(6):40–48, 1998) and unmanned driving technology by Zhang et al. (CAAI Trans Intell Technol 1(1):4–13, 2016) have come into public view. Both automatic driving by Levinson et al. (Towards fully autonomous driving: Systems and algorithms, 2011) and unmanned driving by Im et al. (Unmanned driving of intelligent robotic vehicle, 2009) use a variety of sensors to perceive the environment around the vehicle and a variety of decision and control algorithms to control the vehicle in motion. A visual driving assistance system by Watanabe et al. (Driving assistance system for appropriately making the driver recognize another vehicle behind or next to present vehicle, 2010), used in conjunction with a target recognition algorithm by Pantofaru et al. (Object recognition by integrating multiple image segmentations, 2008), provides drivers with the real-time environment around the vehicle. In recent years, few-shot learning by Li et al. (Comput Electron Agric 2:2, 2020) has become a new direction in target recognition, as it reduces the difficulty of collecting training samples. In this paper, several low-light cameras with fish-eye lenses are used to capture and reconstruct the environment around the vehicle, while an infrared camera and a lidar capture the environment in front of it. We then use a few-shot learning method to identify vehicles and pedestrians in the forward-view image. The system is implemented on embedded devices to meet miniaturization requirements. It adapts to the needs of most drivers at the current stage and can effectively support the development of automatic driving and unmanned driving.
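The abstract names two processing steps that the sketches below illustrate. First, each fish-eye frame must be undistorted before the surround view can be reconstructed. Here is a minimal sketch using OpenCV's fisheye module, which implements the generic camera model of Kannala and Brandt cited in the reference list; the intrinsic matrix K, the distortion coefficients D, and the file name are placeholder assumptions standing in for a real per-camera calibration, not values from the paper:

    import cv2
    import numpy as np

    # Placeholder intrinsics; real values come from checkerboard
    # calibration of each low-light fish-eye camera.
    K = np.array([[285.0, 0.0, 640.0],
                  [0.0, 285.0, 360.0],
                  [0.0, 0.0, 1.0]])
    D = np.array([0.05, -0.01, 0.002, -0.0005])  # k1..k4 fisheye coefficients

    frame = cv2.imread("fisheye_frame.png")      # hypothetical captured frame
    h, w = frame.shape[:2]

    # Precompute the rectification maps once, then remap every video frame.
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=0.0)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)

Second, vehicles and pedestrians in the forward-view image are identified by few-shot comparison. The sketch below follows the metric-based scheme of the relation network cited in the reference list; the tiny embedding and relation modules, the 64x64 input size, and the two-way five-shot episode are illustrative assumptions, not the authors' networks:

    import torch
    import torch.nn as nn

    class EmbeddingNet(nn.Module):
        """Tiny conv encoder; a stand-in for the paper's feature extractor."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4))
        def forward(self, x):
            return self.net(x)

    class RelationHead(nn.Module):
        """Scores how well a query embedding matches each class prototype."""
        def __init__(self):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Flatten(), nn.Linear(128 * 4 * 4, 64), nn.ReLU(),
                nn.Linear(64, 1), nn.Sigmoid())
        def forward(self, pair):
            return self.fc(pair)

    embed, relate = EmbeddingNet(), RelationHead()

    # One 2-way 5-shot episode, e.g. classes {vehicle, pedestrian}.
    support = torch.randn(2, 5, 3, 64, 64)  # [N, K, C, H, W] labelled crops
    query = torch.randn(1, 3, 64, 64)       # one forward-view crop to classify

    # Average the support embeddings into one prototype per class.
    protos = embed(support.flatten(0, 1)).view(2, 5, 64, 4, 4).mean(dim=1)
    q = embed(query).expand(2, -1, -1, -1)   # pair the query with each class
    scores = relate(torch.cat([protos, q], dim=1))  # relation score per class
    pred = scores.argmax()                   # 0 = vehicle, 1 = pedestrian

Because the classifier compares a query against a handful of labelled examples rather than learning fixed class weights, new target categories can be added by supplying a few support crops, which is what reduces the sample-collection burden the abstract describes.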


Literature
1. Franke, U., Gavrila, D., Gorzig, S., et al.: Autonomous driving goes downtown. IEEE Intell. Syst. Their Appl. 13(6), 40–48 (1998)
2. Zhang, X., Gao, H., Guo, M., et al.: A study on key technologies of unmanned driving. CAAI Trans. Intell. Technol. 1(1), 4–13 (2016)
3. Levinson, J., Askeland, J., Becker, J., et al.: Towards fully autonomous driving: systems and algorithms (2011)
4. Im, D.Y., Ryoo, Y.J., Kim, D.Y., et al.: Unmanned driving of intelligent robotic vehicle. ISIS Symposium on Advanced Intelligent Systems (2009)
5. Watanabe, T., Oshida, K., Matsumoto, Y., et al.: Driving assistance system for appropriately making the driver recognize another vehicle behind or next to present vehicle (2010)
6. Pantofaru, C., Schmid, C., Hebert, M.: Object recognition by integrating multiple image segmentations (2008)
7. Li, Y., Yang, J.: Few-shot cotton pest recognition and terminal realization. Comput. Electron. Agric. 2, 2 (2020)
8. Martinez, E., Diaz, M., Melenchon, J., et al.: Driving assistance system based on the detection of head-on collisions (2008)
9. Sung, F., Yang, Y., Zhang, L., et al.: Learning to compare: relation network for few-shot learning (2017)
10. Bahl, P., Padmanabhan, V.N.: RADAR: an in-building RF-based user location and tracking system. INFOCOM: Nineteenth Joint Conference of the IEEE Computer and Communications Societies (2000)
11. Gonzalez, R.C., Woods, R.E.: Digital image processing. Prentice Hall International 28(4), 484–486 (2008)
12. Chen, X.: Reversing radar system based on CAN bus. International Conference on Industrial Mechatronics and Automation (2009)
13. Yang, Z.L., Guo, B.L.: Image mosaic based on SIFT. 4th International Conference on Intelligent Information Hiding and Multimedia Signal Processing (2008)
14. Cylindrical projection. In: Dictionary Geotechnical Engineering/Wörterbuch GeoTechnik 329, 2 (2014)
15. Papadakis, P., Pratikakis, I., Perantonis, S., et al.: Efficient 3D shape matching and retrieval using a concrete radialized spherical projection representation. Pattern Recogn. 40(9), 2437–2452 (2007)
16. Kannala, J., Brandt, S.S.: A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. Pattern Anal. Mach. Intell. (2006)
17. Cooper, K.B., Dengler, R.J., Llombart, N., et al.: Penetrating 3-D imaging at 4- and 25-m range using a submillimeter-wave radar. IEEE Trans. Microwave Theory Tech. 56(12), 2771–2778 (2008)
18. Lim, K., Treitz, P., Wulder, M., et al.: LiDAR remote sensing of forest structure. Prog. Phys. Geogr. 27(1), 88–106 (2003)
19. Wen, J., Yang, J., Jiang, B., Song, H., Wang, H.: Big data driven marine environment information forecasting: a time series prediction network. IEEE Trans. Fuzzy Syst. 29(1), 4–18 (2021)
20. Poulton, C.V., Ami, Y., Cole, D.B., et al.: Coherent solid-state LIDAR with silicon photonic optical phased arrays. Opt. Lett. 42(20), 4091–4094 (2017)
22. Han, F.: A two-stage approach to people and vehicle detection with HOG-based SVM. PerMIS (2006)
23. Jiafu, J., Hui, X.: Fast pedestrian detection based on HOG-PCA and Gentle AdaBoost (2012)
24. Jiachen, Y., Yang, Z., Jiacheng, L., Bin, J., Wen, L., Xinbo, G.: No reference quality assessment for screen content images using stacked auto-encoders in pictorial and textual regions. IEEE Trans. Cybern. (early access)
25. Girshick, R., et al.: Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014)
26. Girshick, R.: Fast R-CNN. Proceedings of the IEEE International Conference on Computer Vision (2015)
27. Ren, S., He, K., Girshick, R., et al.: Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137–1149 (2017)
28. Liu, W., Anguelov, D., Erhan, D., et al.: SSD: single shot multibox detector (2016)
29. Redmon, J., Divvala, S., Girshick, R., et al.: You only look once: unified, real-time object detection (2015)
30. Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning (2016)
31. Santoro, A., Bartunov, S., Botvinick, M., et al.: One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065 (2016)
32. Li, Y., Yang, J.: Meta-learning baselines and database for few-shot classification in agriculture. Comput. Electron. Agric. 2, 2 (2021)
33. Jedrasiak, K., Nawrat, A.: The comparison of capabilities of low light camera, thermal imaging camera and depth map camera for night time surveillance applications (2013)
34. Wu, J., Zhao, F., Zhang, X.: Infrared camera (2006)
35. Killinger, D.K., Chan, K.P.: Solid-state lidar measurements at 1 and 2 µm. Optics, Electro-Optics, and Laser Applications in Science and Engineering. International Society for Optics and Photonics (1991)
36. de Huang, H.: Research on panoramic digital video mosaic algorithm. Appl. Mech. Mater. 71–78, 3967–3970 (2011)
37. Zhou, W., Liu, Y., Lyu, C., et al.: Real-time implementation of panoramic mosaic camera based on FPGA. 2016 IEEE International Conference on Real-Time Computing and Robotics (RCAR) (2016)
38. Liang, L., Xiao, X., Jia, Y., et al.: Non-overlap region based automatic global alignment for ring camera image mosaic (2008)
39. Yu, W., Chung, Y., Soh, J.: Vignetting distortion correction method for high quality digital imaging. Proceedings of the 17th International Conference on Pattern Recognition (2004)
40. Kruger, R.A., et al.: Light equalization radiography. Medical Physics (1998)
41. Ali, W., Abdelkarim, S., Zahran, M., et al.: YOLO3D: end-to-end real-time 3D oriented object bounding box detection from LiDAR point cloud (2018)
42. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger, pp. 6517–6525 (2017)
43. Yang, J., Wang, C., Wang, H., et al.: A RGB-D based real-time multiple object detection and ranging system for autonomous driving. IEEE Sens. J. 99, 1–1 (2020)
44. Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement. arXiv e-prints (2018)
45. Piella, G.: A general framework for multiresolution image fusion: from pixels to regions. Inf. Fusion 4(4), 259–280 (2003)
Metadata
Title
Visual driving assistance system based on few-shot learning
Authors
Shan Liu
Yichao Tang
Ying Tian
Hansong Su
Publication date
22-07-2021
Publisher
Springer Berlin Heidelberg
Published in
Multimedia Systems / Issue 5/2023
Print ISSN: 0942-4962
Electronic ISSN: 1432-1882
DOI
https://doi.org/10.1007/s00530-021-00830-5
