ABSTRACT
LightGuide is a system that explores a new approach to gesture guidance in which we project guidance hints directly onto a user's body. These projected hints guide the user in completing the desired motion with their body part, which is particularly useful for movements that require accuracy and proper technique, such as during exercise or physical therapy. Our proof-of-concept implementation consists of a single low-cost depth camera and projector, and we present four novel interaction techniques focused on guiding a user's hand in mid-air. Our visualizations are designed to incorporate both feedback and feedforward cues to help guide users through a range of movements. We quantify the performance of LightGuide in a user study comparing each of our on-body visualizations to hand-animation videos on a computer display, measuring both completion time and accuracy. Exceeding our expectations, participants performed movements with an average error of 21.6 mm, nearly 85% more accurately than when guided by video.
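The abstract describes guidance built from two kinds of cues: feedback (how far the hand currently deviates from the target motion) and feedforward (where the hand should move next). The paper does not publish its implementation; as a purely illustrative sketch, the core of such a cue computation might look like the following, where `guidance_cue`, the waypoint list, and the `lookahead` parameter are all hypothetical names assumed for this example, with the hand position supplied by a depth camera's tracking pipeline:

```python
import math

def guidance_cue(hand, path, lookahead=1):
    """Hypothetical feedback/feedforward cue for on-body guidance.

    hand: current 3D hand position (x, y, z), e.g. in millimeters.
    path: list of 3D waypoints describing the target motion.
    Returns (error, direction): the distance to the nearest waypoint
    (feedback) and a unit vector toward an upcoming waypoint
    (feedforward), which a projector could render as an arrow on the hand.
    """
    # Feedback: distance from the hand to the closest point on the path.
    dists = [math.dist(hand, p) for p in path]
    i = min(range(len(path)), key=dists.__getitem__)
    error = dists[i]

    # Feedforward: unit vector from the hand toward a waypoint slightly
    # ahead of the closest one, so the cue points along the motion.
    j = min(i + lookahead, len(path) - 1)
    delta = [t - h for t, h in zip(path[j], hand)]
    norm = math.hypot(*delta) or 1.0
    return error, tuple(d / norm for d in delta)
```

A rendering layer would then project the error magnitude (e.g. as color) and the direction vector (e.g. as an arrow) onto the tracked hand; that layer, and the actual visual designs, are beyond this sketch.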