Abstract
Many researchers internationally are studying how to synthesize computer animations of sign language; such animations have accessibility benefits for people who are deaf and have lower literacy in written languages. The field has not yet reached consensus on how best to evaluate the quality of sign language animations, and this article explores an important methodological issue for researchers conducting experimental studies with participants who are deaf. Traditionally, when an animation is evaluated, lower and upper baselines are shown for comparison during the study. For the upper baseline, some researchers use carefully produced animations, and others use videos of human signers. Specifically, this article investigates, in studies where signers view sign language animations and answer subjective and comprehension questions, whether participants' responses differ when actual videos of human signers are also shown during the study. Through three sets of experiments, we characterize how participants' Likert-scale subjective judgments of sign language animations are negatively affected when they are also shown videos of human signers for comparison -- especially when the two are displayed side-by-side. We also identify a small positive effect on the comprehension of sign language animations when studies also contain videos of human signers. Our results enable direct comparison of previously published evaluations of sign language animations that used different types of upper baselines -- video or animation. Our results also provide methodological guidance for researchers who are designing evaluation studies of sign language animation or designing experimental stimuli or questions for participants who are deaf.
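The abstract's central comparison -- participants who rated animations in a study that also showed human-signer videos versus participants who did not see such videos -- lends itself to a brief illustration. The sketch below is not the authors' published analysis; the 1-to-10 rating scale and the rating lists are hypothetical, invented only for illustration. It shows one common way to test such a between-group difference on ordinal Likert data, using a rank-based test from SciPy.

```python
# Illustrative sketch only (not the analysis from this article): testing whether
# Likert-scale ratings of the same animation differ between a group that also
# saw human-signer videos and a group that saw animations only.
from scipy.stats import mannwhitneyu

# Hypothetical 1-10 Likert ratings of one animation from two participant groups.
ratings_animation_only = [7, 8, 6, 7, 9, 8, 7, 6, 8, 7]  # no video baseline shown
ratings_with_video = [5, 6, 4, 6, 5, 7, 5, 4, 6, 5]      # human videos shown for comparison

# Likert responses are ordinal, so a rank-based (nonparametric) test is a
# common choice for comparing the two groups.
stat, p = mannwhitneyu(ratings_animation_only, ratings_with_video,
                       alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```

Under this kind of design, a significantly lower rating in the with-video group would be consistent with the abstract's finding that video upper baselines depress subjective judgments of animations.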