2002 | Original Paper | Book Chapter
SignSynth: A Sign Language Synthesis Application Using Web3D and Perl
Author: Angus B. Grieve-Smith
Published in: Gesture and Sign Language in Human-Computer Interaction
Publisher: Springer Berlin Heidelberg
Included in: Professional Book Archive
Sign synthesis (also known as text-to-sign) has recently seen a large increase in the number of projects under development. Many of these focus on translation from spoken languages, but other applications include dictionaries and language learning. I will discuss the architecture of typical sign synthesis applications and mention some of the applications and prototypes currently available. I will focus on SignSynth, a CGI-based articulatory sign synthesis prototype I am developing at the University of New Mexico. SignSynth takes as its input a sign language text in ASCII-Stokoe notation (chosen as a simple starting point) and converts it to an internal feature tree. This underlying linguistic representation is then converted into a three-dimensional animation sequence in Virtual Reality Modeling Language (VRML or Web3D), which is automatically rendered by a Web3D browser.
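The pipeline the abstract describes (ASCII-Stokoe input, an internal feature tree, then VRML output) can be sketched in miniature. The actual SignSynth is a Perl CGI application; the Python sketch below is purely illustrative, and the symbol-to-feature tables and node content are hypothetical stand-ins, not the notation tables or scene graphs SignSynth actually uses. Stokoe notation does decompose a sign into location (tab), handshape (dez), and movement (sig), and `#VRML V2.0 utf8` is the standard VRML 2.0 file header.

```python
# Toy sketch of a SignSynth-style text-to-sign pipeline (hypothetical;
# the real system is written in Perl and served over CGI).

# Hypothetical ASCII-Stokoe symbol tables for the three Stokoe parameters.
TAB = {"Q": "neutral_space", "u": "face"}     # tab: location
DEZ = {"B": "flat_hand", "A": "fist"}         # dez: handshape
SIG = {"f": "toward_signer", "^": "upward"}   # sig: movement

def parse_ascii_stokoe(token):
    """Stage 1: convert a tab-dez-sig token into an internal feature tree."""
    tab, dez, sig = token[0], token[1], token[2]
    return {
        "location": TAB[tab],
        "handshape": DEZ[dez],
        "movement": SIG[sig],
    }

def to_vrml(features):
    """Stage 2: emit a minimal VRML 2.0 scene for a Web3D browser.

    A real synthesizer would build an articulated humanoid and keyframe
    interpolators; here a placeholder shape and comments stand in.
    """
    return "\n".join([
        "#VRML V2.0 utf8",
        "# location:  %s" % features["location"],
        "# handshape: %s" % features["handshape"],
        "# movement:  %s" % features["movement"],
        "Transform { children [ Shape { geometry Box {} } ] }",
    ])

if __name__ == "__main__":
    features = parse_ascii_stokoe("QB^")  # hypothetical sign token
    print(to_vrml(features))
```

The two-stage split mirrors the architecture in the abstract: the feature tree is the underlying linguistic representation, and the VRML text is the rendering-facing output that the browser animates.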