2011 | Original Paper | Book Chapter
Interpreting Dynamic Meanings by Integrating Gesture and Posture Recognition System
Authors: Omer Rashid Ahmed, Ayoub Al-Hamadi, Bernd Michaelis
Published in: Computer Vision – ACCV 2010 Workshops
Publisher: Springer Berlin Heidelberg
Integrating information from different systems supports enhanced functionality; however, it requires rigorous, pre-determined criteria for the fusion. This paper proposes a novel approach that determines the integration criteria using a particle filter for the decision-level fusion of hand gesture and posture recognition systems. For decision-level fusion, the integration framework requires the classification of hand gesture and posture symbols: an HMM classifies the alphabets and numbers from the hand gesture recognition system, whereas the posture recognition system classifies ASL finger-spelling signs (alphabets and numbers) using an SVM. These classification results are input to the integration framework to compute the contribution-weights. For this purpose, the Condensation algorithm approximates the optimal a-posteriori probability from the a-priori probability and a Gaussian-based likelihood function, thus making the weights independent of classification ambiguities. Treating recognition as a problem of regular grammar, we develop production rules based on a context-free grammar (CFG) for a restaurant scenario. On the basis of the contribution-weights, the recognized outcomes are mapped onto the CFG rules to infer meaningful expressions. Experiments conducted on 500 different combinations of restaurant orders achieve an overall inference accuracy of 98.3%, which demonstrates the significance of the proposed approach.
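The core of the fusion step is a Condensation-style particle filter that turns each classifier's confidence scores into a contribution-weight via a Gaussian likelihood. The following is a minimal sketch of that idea, not the paper's implementation: the score streams, `SIGMA`, the particle count, and the final arg-max fusion are all illustrative assumptions.

```python
# Hypothetical sketch: decision-level fusion weights via a
# Condensation-style particle filter with a Gaussian likelihood.
# All constants and score values below are illustrative, not from the paper.
import math
import random

random.seed(0)          # fixed seed so the sketch is reproducible
N_PARTICLES = 200
SIGMA = 0.1             # assumed spread of the Gaussian likelihood

def gaussian_likelihood(observation, particle, sigma=SIGMA):
    """Gaussian-based likelihood of an observation given a particle state."""
    return math.exp(-((observation - particle) ** 2) / (2 * sigma ** 2))

def condensation_step(particles, weights, observation):
    """One predict-update-resample cycle (a-priori -> a-posteriori)."""
    # Predict: diffuse particles with process noise (a-priori density).
    predicted = [p + random.gauss(0.0, SIGMA / 2) for p in particles]
    # Update: reweight by the likelihood of the new observation.
    new_w = [w * gaussian_likelihood(observation, p)
             for p, w in zip(predicted, weights)]
    total = sum(new_w)
    new_w = [w / total for w in new_w]
    # Resample: draw particles proportionally to their weights.
    resampled = random.choices(predicted, weights=new_w, k=len(predicted))
    return resampled, [1.0 / len(resampled)] * len(resampled)

def contribution_weight(classifier_scores):
    """Approximate a classifier's a-posteriori contribution-weight
    from its stream of confidence scores."""
    particles = [random.random() for _ in range(N_PARTICLES)]
    weights = [1.0 / N_PARTICLES] * N_PARTICLES
    for score in classifier_scores:
        particles, weights = condensation_step(particles, weights, score)
    # Posterior mean as the contribution-weight estimate.
    return sum(p * w for p, w in zip(particles, weights))

# Example: hypothetical gesture (HMM) vs. posture (SVM) confidence streams.
w_gesture = contribution_weight([0.80, 0.82, 0.79, 0.85])
w_posture = contribution_weight([0.60, 0.58, 0.63, 0.61])
winner = "gesture" if w_gesture > w_posture else "posture"
```

Because each update concentrates the particle set around the observed scores, the resulting weight tracks the classifier's recent confidence while the resampling step keeps the estimate robust to isolated ambiguous classifications.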
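The final inference stage maps the weighted recognition outcomes onto CFG production rules for the restaurant scenario. A toy sketch of such a grammar check follows; the token vocabulary and the rules (ORDER → ITEM QUANTITY [ORDER]) are assumptions chosen for illustration, since the paper's actual production rules are not reproduced here.

```python
# Illustrative CFG for a restaurant-order scenario (assumed rules):
#   ORDER    -> ITEM QUANTITY | ITEM QUANTITY ORDER
#   ITEM     -> "burger" | "pizza" | "cola"
#   QUANTITY -> a digit symbol from the gesture/posture recognizers

ITEMS = {"burger", "pizza", "cola"}

def parse_order(tokens):
    """Recursive-descent check that a token stream derives ORDER."""
    def order(i):
        # ITEM
        if i >= len(tokens) or tokens[i] not in ITEMS:
            return None
        # QUANTITY (a recognized digit)
        j = i + 1
        if j >= len(tokens) or not tokens[j].isdigit():
            return None
        j += 1
        # Optional recursive ORDER for multi-item orders.
        if j < len(tokens):
            return order(j)
        return j
    return order(0) == len(tokens)

print(parse_order(["burger", "2", "cola", "1"]))  # True: valid order
print(parse_order(["2", "burger"]))               # False: quantity first
```

In the paper's setting the tokens would be the symbols recognized by the gesture and posture systems, selected according to their contribution-weights, so only sequences that derive from the grammar are accepted as meaningful expressions.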