Abstract
To produce bright images, projectors have large apertures and hence narrow depths of field. In this paper, we present methods for robust scene capture and enhanced image display based on projection defocus analysis. We model a projector's defocus using a linear system. This model is used to develop a novel temporal defocus analysis method that recovers depth at each camera pixel by estimating the parameters of its projection defocus kernel in the frequency domain. Compared to most depth recovery methods, our approach is more accurate near depth discontinuities. Furthermore, by using a coaxial projector-camera system, we ensure that depth is computed at all camera pixels, without any missing parts. We show that the recovered scene geometry can be used for refocus synthesis and for depth-based image composition. Using the same projector defocus model and estimation technique, we also propose a defocus compensation method that filters a projection image in a spatially-varying, depth-dependent manner to minimize its defocus blur after it is projected onto the scene. This method effectively increases the depth of field of a projector without modifying its optics. Finally, we present an algorithm that exploits projector defocus to reduce the strong pixelation artifacts produced by digital projectors, while preserving the quality of the projected image. We have experimentally verified each of our methods using real scenes.
Published in SIGGRAPH '06: ACM SIGGRAPH 2006 Papers.