
Effect of Displaying Human Videos During an Evaluation Study of American Sign Language Animation

Published: 01 October 2013

Abstract

Many researchers internationally are studying how to synthesize computer animations of sign language; such animations have accessibility benefits for people who are deaf and have lower literacy in written languages. The field has not yet formed a consensus on how best to evaluate the quality of sign language animations, and this article explores an important methodological issue for researchers conducting experimental studies with participants who are deaf. Traditionally, when evaluating an animation, some lower and upper baselines are shown for comparison during the study. For the upper baseline, some researchers use carefully produced animations, and others use videos of human signers. Specifically, this article investigates whether, in studies where signers view animations of sign language and answer subjective and comprehension questions, participants respond differently when actual videos of human signers are also shown during the study. Through three sets of experiments, we characterize how participants' Likert-scale subjective judgments of sign language animations are negatively affected when videos of human signers are also shown for comparison, especially when displayed side-by-side. We also identify a small positive effect on the comprehension of sign language animations when studies also contain videos of human signers. Our results enable direct comparison of previously published evaluations of sign language animations that used different types of upper baselines (video or animation). Our results also provide methodological guidance for researchers who are designing evaluation studies of sign language animation, or designing experimental stimuli or questions for participants who are deaf.



Published in

ACM Transactions on Accessible Computing, Volume 5, Issue 2
October 2013
59 pages
ISSN: 1936-7228
EISSN: 1936-7236
DOI: 10.1145/2522990

          Copyright © 2013 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 1 October 2013
• Accepted: 1 July 2013
• Received: 1 January 2013


          Qualifiers

          • research-article
          • Research
          • Refereed
