ABSTRACT
Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space they inhabit. LightSpace cameras and projectors are calibrated to 3D real-world coordinates, allowing graphics to be projected correctly onto any surface visible to both a camera and a projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk) and facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may "pick up" the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and "drop" the object onto the wall by touching it with their other hand. We detail the interactions and algorithms unique to LightSpace, discuss some initial observations of use, and suggest future directions.
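The calibration described above amounts to a standard pinhole model: once a projector's intrinsics and its pose in the shared world frame are known, any 3D point can be mapped to the projector pixel that illuminates it. The sketch below is a minimal illustration of that mapping, not LightSpace's actual calibration code; the matrix values and the `project_point` helper are illustrative assumptions.

```python
# Hedged sketch: mapping a 3D world-frame point to a projector pixel via a
# calibrated pinhole model (intrinsics K, extrinsics R, t). All numeric
# values below are made up for illustration, not LightSpace's calibration.

def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def project_point(K, R, t, p_world):
    """Map a 3D point (world frame, meters) to projector pixel (u, v)."""
    # Transform into the projector's own frame: p_proj = R @ p_world + t
    p_proj = [mat_vec(R, p_world)[i] + t[i] for i in range(3)]
    # Apply intrinsics, then perspective-divide by depth
    x, y, z = mat_vec(K, p_proj)
    return (x / z, y / z)

# Example calibration: identity rotation, projector at the world origin,
# 800 px focal length, principal point at (640, 360) for a 1280x720 image.
K = [[800, 0, 640],
     [0, 800, 360],
     [0,   0,   1]]
R = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
t = [0.0, 0.0, 0.0]

# A point 2 m in front of the projector, offset right and up
u, v = project_point(K, R, t, [0.5, 0.25, 2.0])
```

The same machinery runs in reverse for the depth cameras: each depth pixel is back-projected into the world frame, which is what lets the system decide whether a surface point is being touched and which projector should draw on it.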