
2019 | OriginalPaper | Chapter

3D Sensing Techniques for Multimodal Data Analysis and Integration in Smart and Autonomous Systems

Authors : Zhenyu Fang, He Sun, Jinchang Ren, Huimin Zhao, Sophia Zhao, Stephen Marshall, Tariq Durrani

Published in: Communications, Signal Processing, and Systems

Publisher: Springer Singapore

Abstract

For smart and autonomous systems, 3D positioning and measurement are essential, as their precision can strongly affect the applicability of these techniques across a wide range of applications. In this paper, we summarize and compare different techniques and sensors that can potentially be used for multimodal data analysis and integration. This comparison provides useful guidance for the design and implementation of relevant systems.


DOI: https://doi.org/10.1007/978-981-10-6571-2_71