2014 | OriginalPaper | Chapter
Pointing Gesture Interface for Large Display Environments Based on the Kinect Skeleton Model
Authors: Hansol Kim, Yoonkyung Kim, Daejune Ko, Jinman Kim, Eui Chul Lee
Published in: Future Information Technology
Publisher: Springer Berlin Heidelberg
Although many three-dimensional pointing gesture recognition methods have been studied, the problem of self-occlusion has received little attention. When the two positions that define a pointing vector lie on a single camera perspective line, one position can occlude the other, causing detection inaccuracies. In this paper, we propose a pointing gesture recognition method for large display environments based on the Kinect skeleton model. To account for self-occlusion, the detected shoulder position is compensated when it is occluded by the hand. With this exception handling for self-occlusion, experimental results show that pointing accuracy at a given reference position is greatly improved: the average root mean square error was approximately 13 pixels at a screen resolution of 1920×1080.
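The core geometric step such a method relies on can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a minimal illustration, assuming 3D skeleton joints in a coordinate frame where the display lies in the plane z = 0, that casts the shoulder-to-hand ray onto the screen plane to obtain the pointed-at location. All function and variable names here are hypothetical.

```python
import numpy as np

def pointing_target(shoulder, hand, screen_z=0.0):
    """Intersect the shoulder->hand pointing ray with the screen plane z = screen_z.

    shoulder, hand: 3D joint positions (e.g. from a Kinect skeleton), as (x, y, z).
    Returns the (x, y) intersection on the screen plane, or None if the ray
    is parallel to the plane or points away from the screen.
    """
    shoulder = np.asarray(shoulder, dtype=float)
    hand = np.asarray(hand, dtype=float)
    direction = hand - shoulder          # pointing vector
    if abs(direction[2]) < 1e-9:
        return None                      # ray parallel to the screen plane
    t = (screen_z - shoulder[2]) / direction[2]
    if t <= 0:
        return None                      # screen lies behind the pointing direction
    point = shoulder + t * direction     # ray-plane intersection
    return point[:2]                     # on-screen (x, y)

# Example: shoulder 2 m from the screen, hand extended toward it.
target = pointing_target(shoulder=(0.1, 1.4, 2.0), hand=(0.3, 1.3, 1.5))
```

When the hand occludes the shoulder on the camera's perspective line, the shoulder estimate becomes unreliable; the paper's contribution is to detect that case and substitute a compensated shoulder position before computing this intersection.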