2014 | OriginalPaper | Chapter
A Multi-layer Model for Sign Language’s Non-Manual Gestures Generation
Authors : Oussama El Ghoul, Mohamed Jemni
Published in: Computers Helping People with Special Needs
Publisher: Springer International Publishing
Contrary to popular belief, the structure of signs exceeds a simple combination of hand movements and shapes. A sign's meaning resides not in the hand shape, position, movement, orientation, or facial expression alone, but in the combination of all five. In this context, our aim is to propose a model of non-manual gesture generation for sign language machine translation. In previous work, we developed a gesture generator that does not support facial animation. Here we propose a multi-layer model to be used for the development of new software for generating non-manual gestures (NMG). The system is composed of three layers. The first layer is the interface between the system and external programs; its role is to perform the linguistic processing needed to compute linguistic information, such as the grammatical structure of the sentence. The second layer contains two modules: the manual gesture generator and the non-manual gesture generator. The non-manual gesture generator uses three-dimensional facial modeling and animation techniques to produce facial expressions in sign language.
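The layered pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: all class names, method names, and the eyebrow-raising rule are assumptions introduced here to show how a linguistic layer could feed parallel manual and non-manual generation modules.

```python
# Hypothetical sketch of the three-layer NMG architecture.
# Names and rules are illustrative assumptions, not the paper's API.
from dataclasses import dataclass

@dataclass
class LinguisticInfo:
    """Output of layer 1: linguistic analysis of the input sentence."""
    tokens: list
    grammatical_structure: str  # e.g. "interrogative" or "declarative"

class LinguisticLayer:
    """Layer 1: interface to external programs; performs linguistic processing."""
    def analyze(self, sentence: str) -> LinguisticInfo:
        tokens = sentence.rstrip("?").split()
        structure = "interrogative" if sentence.endswith("?") else "declarative"
        return LinguisticInfo(tokens=tokens, grammatical_structure=structure)

class ManualGestureGenerator:
    """Layer 2, module 1: hand shapes, positions, movements, orientations."""
    def generate(self, info: LinguisticInfo) -> list:
        return [f"sign({t})" for t in info.tokens]

class NonManualGestureGenerator:
    """Layer 2, module 2: facial expressions rendered by 3D facial animation."""
    def generate(self, info: LinguisticInfo) -> list:
        # Illustrative rule: raised eyebrows span an interrogative clause.
        if info.grammatical_structure == "interrogative":
            return ["raise_eyebrows"] * len(info.tokens)
        return ["neutral_face"] * len(info.tokens)

def generate_signed_utterance(sentence: str):
    """Layer 1 feeds both layer-2 modules; outputs are aligned per token."""
    info = LinguisticLayer().analyze(sentence)
    manual = ManualGestureGenerator().generate(info)
    non_manual = NonManualGestureGenerator().generate(info)
    return list(zip(manual, non_manual))
```

For example, `generate_signed_utterance("where is he?")` pairs each hand sign with a raised-eyebrow expression, reflecting the point that meaning arises from the combination of manual and non-manual channels.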