ABSTRACT
Affective support is crucial during learning, and recent evidence suggests it is particularly important for female students. Facial expression is a rich channel for affect detection, but a key open question is how facial displays of affect differ by gender during learning. This paper presents an analysis suggesting that the facial expressions of women and men differ systematically during learning. Using facial video automatically tagged with facial action units, we find that despite no gender differences in incoming knowledge, self-efficacy, or personality profile, women displayed one lower-face action unit significantly more than men, while men displayed brow lowering and lip fidgeting more than women. In contrast, numerous facial actions, including brow raising and nose wrinkling, were strongly correlated with learning in women, whereas only one facial action unit, eyelid raiser, was associated with learning in men. These results suggest that the entire affect adaptation pipeline, from detection to response, may benefit from gender-specific models in order to support students more effectively.