Published in: Multimedia Systems 5/2023

22.07.2021 | Special Issue Paper

Visual driving assistance system based on few-shot learning

Authors: Shan Liu, Yichao Tang, Ying Tian, Hansong Su

Abstract

With the growing number of vehicles and increasingly diverse road conditions, driving safety has received more and more attention. In recent years, autonomous driving technology by Franke et al. (IEEE Intell Syst Their Appl 13(6):40–48, 1998) and unmanned driving technology by Zhang et al. (CAAI Trans Intell Technol 1(1):4–13, 2016) have entered our field of vision. Both automatic driving by Levinson et al. (Towards fully autonomous driving: systems and algorithms, 2011) and unmanned driving by Im et al. (Unmanned driving of intelligent robotic vehicle, 2009) rely on a variety of sensors to perceive the environment around the vehicle and on a variety of decision and control algorithms to control the vehicle in motion. A visual driving assistance system by Watanabe et al. (Driving assistance system for appropriately making the driver recognize another vehicle behind or next to present vehicle, 2010), used in conjunction with a target recognition algorithm by Pantofaru et al. (Object recognition by integrating multiple image segmentations, 2008), provides drivers with a real-time view of the environment around the vehicle. In recent years, few-shot learning by Li et al. (Comput Electron Agric 2:2, 2020) has become a new direction for target recognition, as it reduces the difficulty of collecting training samples. In this paper, on the one hand, several low-light cameras with fish-eye lenses are used to capture and reconstruct the environment around the vehicle; on the other hand, an infrared camera and a lidar are used to capture the environment in front of the vehicle. We then apply few-shot learning to identify vehicles and pedestrians in the forward-view image. In addition, we develop the system on embedded devices to meet miniaturization requirements. The resulting system meets the needs of most drivers at this stage and can effectively support the development of automatic driving and unmanned driving.
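
As a concrete illustration of the few-shot recognition step, the following is a minimal PyTorch sketch of the relation-network approach of Sung et al. [9]; it is not the authors' implementation. A shared CNN embeds the support and query images, each class's support embedding is concatenated with the query embedding, and a small relation module scores each pair. The layer sizes, the 84 x 84 input resolution, and the 5-way 1-shot episode are illustrative assumptions.

import torch
import torch.nn as nn

# Shared embedding network applied to both support and query images.
class Embed(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2))

    def forward(self, x):
        return self.net(x)

# Relation module: maps a channel-concatenated (support, query) feature pair
# to a similarity score in [0, 1].
class Relation(nn.Module):
    def __init__(self, in_channels=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.BatchNorm2d(64),
            nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

    def forward(self, pair):
        return self.fc(self.conv(pair))

embed, relate = Embed(), Relation()
support = embed(torch.randn(5, 3, 84, 84))  # one labelled exemplar per class (5-way 1-shot)
query = embed(torch.randn(1, 3, 84, 84))    # forward-view crop to classify
pairs = torch.cat([support, query.expand(5, -1, -1, -1)], dim=1)  # pair query with every class
scores = relate(pairs)                      # one relation score per class
pred = scores.argmax(dim=0)                 # index of the best-matching class

In the system described here, the support set would hold a handful of labelled vehicle and pedestrian exemplars, and each candidate region cropped from the forward-view image would play the role of the query; training on such episodes is what reduces the demand for large labelled datasets.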

References
1.
Franke, U., Gavrila, D., Gorzig, S., et al.: Autonomous driving goes downtown. IEEE Intell. Syst. Their Appl. 13(6), 40–48 (1998)
2.
Zhang, X., Gao, H., Guo, M., et al.: A study on key technologies of unmanned driving. CAAI Trans. Intell. Technol. 1(1), 4–13 (2016)
3.
Levinson, J., Askeland, J., Becker, J., et al.: Towards fully autonomous driving: systems and algorithms (2011)
4.
Im, D.Y., Ryoo, Y.J., Kim, D.Y., et al.: Unmanned driving of intelligent robotic vehicle. ISIS Symposium on Advanced Intelligent Systems (2009)
5.
Watanabe, T., Oshida, K., Matsumoto, Y., et al.: Driving assistance system for appropriately making the driver recognize another vehicle behind or next to present vehicle (2010)
6.
Pantofaru, C., Schmid, C., Hebert, M.: Object recognition by integrating multiple image segmentations (2008)
7.
Li, Y., Yang, J.: Few-shot cotton pest recognition and terminal realization. Comput. Electron. Agric. 2, 2 (2020)
8.
Martinez, E., Diaz, M., Melenchon, J., et al.: Driving assistance system based on the detection of head-on collisions (2008)
9.
Sung, F., Yang, Y., Zhang, L., et al.: Learning to compare: relation network for few-shot learning (2017)
10.
Bahl, P., Padmanabhan, V.N.: RADAR: an in-building RF-based user location and tracking system. INFOCOM: Nineteenth Joint Conference of the IEEE Computer and Communications Societies (2000)
11.
Gonzalez, R.C., Woods, R.E.: Digital image processing. Prentice Hall International 28(4), 484–486 (2008)
12.
Chen, X.: Reversing radar system based on CAN bus. International Conference on Industrial Mechatronics & Automation (2009)
13.
Yang, Z.L., Guo, B.L.: Image mosaic based on SIFT. 4th International Conference on Intelligent Information Hiding and Multimedia Signal Processing (2008)
14.
Cylindrical projection. In: Dictionary Geotechnical Engineering/Wörterbuch GeoTechnik, p. 329 (2014)
15.
Papadakis, P., Pratikakis, I., Perantonis, S., et al.: Efficient 3D shape matching and retrieval using a concrete radialized spherical projection representation. Pattern Recogn. 40(9), 2437–2452 (2007)
16.
Kannala, J., Brandt, S.S.: A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. Pattern Anal. Mach. Intell. (2006)
17.
Cooper, K.B., Dengler, R.J., Llombart, N., et al.: Penetrating 3-D imaging at 4- and 25-m range using a submillimeter-wave radar. IEEE Trans. Microwave Theory Tech. 56(12), 2771–2778 (2008)
18.
Lim, K., Treitz, P., Wulder, M., et al.: LiDAR remote sensing of forest structure. Prog. Phys. Geogr. 27(1), 88–106 (2003)
19.
Wen, J., Yang, J., Jiang, B., Song, H., Wang, H.: Big data driven marine environment information forecasting: a time series prediction network. IEEE Trans. Fuzzy Syst. 29(1), 4–18 (2021)
20.
Poulton, C.V., Ami, Y., Cole, D.B., et al.: Coherent solid-state LIDAR with silicon photonic optical phased arrays. Opt. Lett. 42(20), 4091–4094 (2017)
22.
Han, F.: A two-stage approach to people and vehicle detection with HOG-based SVM. PerMIS (2006)
23.
Jiafu, J., Hui, X.: Fast pedestrian detection based on HOG-PCA and Gentle AdaBoost (2012)
24.
Jiachen, Y., Yang, Z., Jiacheng, L., Bin, J., Wen, L., Xinbo, G.: No reference quality assessment for screen content images using stacked auto-encoders in pictorial and textual regions. IEEE Trans. Cybern. (early access)
25.
Girshick, R., et al.: Rich feature hierarchies for accurate object detection and semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014)
26.
Girshick, R.: Fast R-CNN. Proceedings of the IEEE International Conference on Computer Vision (2015)
27.
Ren, S., He, K., Girshick, R., et al.: Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137–1149 (2017)
28.
Liu, W., Anguelov, D., Erhan, D., et al.: SSD: single shot multibox detector (2016)
29.
Redmon, J., Divvala, S., Girshick, R., et al.: You only look once: unified, real-time object detection (2015)
30.
Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning (2016)
31.
Santoro, A., Bartunov, S., Botvinick, M., et al.: One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065 (2016)
32.
Li, Y., Yang, J.: Meta-learning baselines and database for few-shot classification in agriculture. Comput. Electron. Agric. 2, 2 (2021)
33.
Jedrasiak, K., Nawrat, A.: The comparison of capabilities of low light camera, thermal imaging camera and depth map camera for night time surveillance applications (2013)
34.
Wu, J., Zhao, F., Zhang, X.: Infrared camera (2006)
35.
Killinger, D.K., Chan, K.P.: Solid-state lidar measurements at 1 and 2 µm. Optics, Electro-Optics, & Laser Applications in Science & Engineering, International Society for Optics and Photonics (1991)
36.
de Huang, H.: Research on panoramic digital video mosaic algorithm. Appl. Mech. Mater. 71–78, 3967–3970 (2011)
37.
Zhou, W., Liu, Y., Lyu, C., et al.: Real-time implementation of panoramic mosaic camera based on FPGA. 2016 IEEE International Conference on Real-time Computing and Robotics (RCAR) (2016)
38.
Liang, L., Xiao, X., Jia, Y., et al.: Non-overlap region based automatic global alignment for ring camera image mosaic (2008)
39.
Yu, W., Chung, Y., Soh, J.: Vignetting distortion correction method for high quality digital imaging. Proceedings of the 17th International Conference on Pattern Recognition (2004)
40.
Kruger, R.A., et al.: Light equalization radiography. Medical Physics (1998)
41.
Ali, W., Abdelkarim, S., Zahran, M., et al.: YOLO3D: end-to-end real-time 3D oriented object bounding box detection from LiDAR point cloud (2018)
42.
Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger, pp. 6517–6525 (2017)
43.
Yang, J., Wang, C., Wang, H., et al.: A RGB-D based real-time multiple object detection and ranging system for autonomous driving. IEEE Sens. J. 99, 1–1 (2020)
44.
Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement. arXiv e-prints (2018)
45.
Piella, G.: A general framework for multiresolution image fusion: from pixels to regions. Inf. Fusion 4(4), 259–280 (2003)
Metadata
Title
Visual driving assistance system based on few-shot learning
Authors
Shan Liu
Yichao Tang
Ying Tian
Hansong Su
Publication date
22.07.2021
Publisher
Springer Berlin Heidelberg
Published in
Multimedia Systems / Issue 5/2023
Print ISSN: 0942-4962
Electronic ISSN: 1432-1882
DOI
https://doi.org/10.1007/s00530-021-00830-5
