Abstract
In computer-vision-based gesture interaction, the effective area of hand activity is typically the camera's entire field of view, so a user's subconscious movements may be misinterpreted as valid computer commands. To address this problem, we propose the concept of a virtual interface, which narrows the effective area of hand activity to a specific region. Gestures performed inside the virtual interface are treated as valid, while gestures outside it are treated as invalid, which makes it possible to solve the "Midas Touch Problem". First, the position and size of the virtual interface are determined with the least-squares method during a learning process. Users then interact with the computer through the virtual interface, issuing gesture commands within it. Experimental results show that the proposed virtual interface effectively solves the "Midas Touch Problem" and provides a good user experience.
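The abstract does not specify how the least-squares learning step is carried out. The following Python sketch illustrates one plausible reading, assuming the user repeatedly points at the four corners of the desired region during learning; the function names, the axis-aligned rectangle model, and the corner-sampling protocol are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_virtual_interface(corner_samples):
    """Estimate an axis-aligned virtual interface rectangle by least squares.

    corner_samples (hypothetical format): dict mapping each corner label
    ('tl', 'tr', 'bl', 'br') to an (N, 2) array of noisy hand positions
    recorded while the user points at that corner during learning.
    The least-squares estimate of each corner is its sample mean, since
    the mean minimizes the sum of squared residuals.
    """
    corners = {k: np.asarray(v, dtype=float).mean(axis=0)
               for k, v in corner_samples.items()}
    xs = [c[0] for c in corners.values()]
    ys = [c[1] for c in corners.values()]
    # Axis-aligned rectangle spanned by the fitted corners.
    return (min(xs), min(ys), max(xs), max(ys))

def gesture_is_valid(hand_pos, rect):
    """A gesture counts as a command only inside the virtual interface."""
    x, y = hand_pos
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

# Example: noisy calibration samples scattered around each intended corner.
rng = np.random.default_rng(0)
samples = {k: np.asarray(base) + rng.normal(0, 2.0, size=(30, 2))
           for k, base in {'tl': (100, 100), 'tr': (400, 100),
                           'bl': (100, 300), 'br': (400, 300)}.items()}
rect = fit_virtual_interface(samples)
print(rect)
print(gesture_is_valid((250, 200), rect))  # inside -> True (valid command)
print(gesture_is_valid((50, 50), rect))    # outside -> False (ignored)
```

Under this reading, the inside/outside test is what filters out subconscious movements: hand activity outside the learned rectangle never reaches the gesture recognizer.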