
2020 | OriginalPaper | Book Chapter

8. Stereoscopic Vision Systems in Machine Vision, Models, and Applications

Authors: Luis Roberto Ramírez-Hernández, Julio Cesar Rodríguez-Quiñonez, Moisés J. Castro-Toscano, Daniel Hernández-Balbuena, Wendy Flores-Fuentes, Moisés Rivas-López, Lars Lindner, Danilo Cáceres-Hernández, Marina Kolendovska, Fabián N. Murrieta-Rico

Published in: Machine Vision and Navigation

Publisher: Springer International Publishing

Abstract

Stereoscopic vision systems (SVS) enable digital scene reconstruction with cameras. An SVS processes the images its cameras capture of a three-dimensional scene, identifies correspondences between the images that belong to the same scene points, and finally performs the reconstruction. SVS differ in the number of cameras and the geometry used in their design. In this chapter, SVS are classified by the number of cameras: binocular vision systems, which use two cameras, and multivision systems, which use three or more cameras. The aim of this chapter is to provide useful information to students, teachers, and researchers who want to learn about the different methods and applications of SVS used in industry and in current research. It will also be useful to anyone who wants to implement an SVS and needs an introduction to the available options in order to choose the most suitable one for the application.
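
The pipeline summarized in the abstract (capture a stereo pair, match corresponding points, reconstruct depth) can be illustrated for the binocular case. The following is a minimal sketch, not taken from the chapter, assuming a rectified image pair and OpenCV's block matcher; the file names, focal length, and baseline are hypothetical placeholders that would come from calibration.

```python
# Minimal binocular stereo sketch (assumptions: rectified image pair, OpenCV available,
# hypothetical calibration values). Illustrates correspondence search + reconstruction.
import cv2
import numpy as np

# Load a rectified left/right image pair in grayscale (hypothetical file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Correspondence step: block matching searches, for each left-image pixel,
# the most similar patch along the same row of the right image.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values

# Reconstruction step: for valid disparities d, depth Z = f * B / d,
# with focal length f (pixels) and baseline B (meters) from calibration.
f_px, baseline_m = 700.0, 0.12  # hypothetical calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_px * baseline_m / disparity[valid]

print("Median scene depth (m):", np.median(depth[valid]))
```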

Metadata
Title
Stereoscopic Vision Systems in Machine Vision, Models, and Applications
Authors
Luis Roberto Ramírez-Hernández
Julio Cesar Rodríguez-Quiñonez
Moisés J. Castro-Toscano
Daniel Hernández-Balbuena
Wendy Flores-Fuentes
Moisés Rivas-López
Lars Lindner
Danilo Cáceres-Hernández
Marina Kolendovska
Fabián N. Murrieta-Rico
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-22587-2_8
