Projection defocus analysis for scene capture and image display

Published: 01 July 2006

Abstract

In order to produce bright images, projectors have large apertures and hence narrow depths of field. In this paper, we present methods for robust scene capture and enhanced image display based on projection defocus analysis. We model a projector's defocus using a linear system. This model is used to develop a novel temporal defocus analysis method to recover depth at each camera pixel by estimating the parameters of its projection defocus kernel in the frequency domain. Compared to most depth recovery methods, our approach is more accurate near depth discontinuities. Furthermore, by using a coaxial projector-camera system, we ensure that depth is computed at all camera pixels, without any missing parts. We show that the recovered scene geometry can be used for refocus synthesis and for depth-based image composition. Using the same projector defocus model and estimation technique, we also propose a defocus compensation method that filters a projection image in a spatially-varying, depth-dependent manner to minimize its defocus blur after it is projected onto the scene. This method effectively increases the depth of field of a projector without modifying its optics. Finally, we present an algorithm that exploits projector defocus to reduce the strong pixelation artifacts produced by digital projectors, while preserving the quality of the projected image. We have experimentally verified each of our methods using real scenes.
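The abstract describes two frequency-domain operations: estimating a per-pixel defocus kernel to recover depth, and pre-filtering the projection image to cancel depth-dependent blur. The sketch below illustrates both ideas under simplifying assumptions: a single Gaussian defocus kernel per image patch and a Wiener-style regularized inverse filter. The function names (`estimate_defocus_sigma`, `compensate_defocus`) and the regularization constant are illustrative, not the paper's actual temporal-coding or compensation algorithm.

```python
import numpy as np

def estimate_defocus_sigma(focused_patch, observed_patch,
                           sigmas=np.linspace(0.1, 5.0, 50)):
    """Estimate the width of a Gaussian defocus kernel by comparing the
    frequency content of a projected pattern with its observed (blurred)
    appearance; the recovered sigma serves as a proxy for depth."""
    F = np.fft.fft2(focused_patch)
    G = np.fft.fft2(observed_patch)
    # Ignore frequencies where the pattern has almost no energy.
    mask = np.abs(F) > 1e-3 * np.abs(F).max()
    ratio = np.abs(G[mask]) / np.abs(F[mask])

    h, w = focused_patch.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r2 = (fx**2 + fy**2)[mask]

    # A Gaussian blur of std sigma attenuates frequency f by exp(-2*pi^2*sigma^2*f^2);
    # pick the sigma that best explains the measured attenuation.
    errors = [np.mean((ratio - np.exp(-2 * np.pi**2 * s**2 * r2))**2)
              for s in sigmas]
    return sigmas[int(np.argmin(errors))]

def compensate_defocus(image, sigma, k=0.01):
    """Wiener-style pre-filter: sharpen the projection image so that, after
    being blurred by a Gaussian kernel of width sigma, it approximates the
    intended image."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    H = np.exp(-2 * np.pi**2 * sigma**2 * (fx**2 + fy**2))  # Gaussian transfer function
    W = np.conj(H) / (np.abs(H)**2 + k)                     # regularized inverse
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
    return np.clip(out, 0.0, 1.0)  # a projector can only emit non-negative light
```

In the paper's setup, the per-pixel kernel parameters come from temporally coded patterns swept across the scene and the compensation is spatially varying with depth; here a single focused/observed patch pair and a single sigma stand in for those measurements.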


Supplemental Material

p907-zhang-high.mov (mov, 35.5 MB)
p907-zhang-low.mov (mov, 14 MB)



Published in

ACM Transactions on Graphics, Volume 25, Issue 3 (July 2006), 742 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/1141911

            Copyright © 2006 ACM

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

            Publisher

            Association for Computing Machinery

            New York, NY, United States

