Published in: Optical Memory and Neural Networks 3/2022

01-09-2022

Neural Network Model for Video-Based Analysis of Student’s Emotions in E-Learning

Authors: A. V. Savchenko, I. A. Makarov

Abstract

In this paper, we consider the problem of automatically analyzing the emotional state of students during online classes based on video surveillance data, a problem of current relevance in e-learning. We propose a novel neural network model that recognizes students' emotions from video images of their faces and use it to construct an algorithm for classifying the individual and group emotions of students in video clips. In the first step, the algorithm detects faces, extracts their features, and groups the faces belonging to each student. To increase accuracy, we propose matching students' names extracted by text recognition algorithms. In the second step, specially trained efficient neural networks extract the emotional features of each detected person, aggregate them with statistical functions, and perform the subsequent classification. In the final step, fragments of the video lesson with the most pronounced student emotions can be visualized. Our experiments on several datasets from EmotiW (Emotion Recognition in the Wild) show that the accuracy of the developed algorithms is comparable with that of known analogs, while their computational performance in emotion classification is higher.
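The second step described above (per-frame emotional features aggregated with statistical functions into a clip-level descriptor for classification) can be sketched as follows. This is an illustrative sketch only: the abstract does not specify which statistics or feature dimensionality the authors use, so the choice of mean/std/min/max and the 256-dimensional embedding here are assumptions.

```python
import numpy as np

def aggregate_clip_features(frame_features: np.ndarray) -> np.ndarray:
    """Aggregate per-frame emotional features of one student's face track
    into a single clip-level descriptor via statistical functions.

    frame_features: array of shape (num_frames, feature_dim), e.g. the
    outputs of an efficient facial-emotion network on each detected face.
    The mean/std/min/max combination is one common choice, not necessarily
    the one used in the paper.
    """
    stats = [
        frame_features.mean(axis=0),  # average expression over the clip
        frame_features.std(axis=0),   # variability of the expression
        frame_features.min(axis=0),   # extremes, useful for short bursts
        frame_features.max(axis=0),
    ]
    # Concatenated descriptor is fed to a lightweight classifier.
    return np.concatenate(stats)

# Toy example: 30 frames of a hypothetical 256-dim emotional embedding.
rng = np.random.default_rng(0)
frames = rng.normal(size=(30, 256))
clip_descriptor = aggregate_clip_features(frames)
print(clip_descriptor.shape)  # (1024,)
```

A fixed-size descriptor like this lets clips of any length be classified by a single inexpensive model, which is consistent with the computational-performance advantage the abstract claims.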


Literature
1. Bhardwaj, P., Gupta, P., Panwar, H., Siddiqui, M.K., Morales-Menendez, R., and Bhaik, A., Application of deep learning on student engagement in e-learning environments, Comput. Electr. Eng., 2021, vol. 93, p. 107277.
2. Dewan, M.A.A., Murshed, M., and Lin, F., Engagement detection in online learning: a review, Smart Learn. Environ., 2019, vol. 6, no. 1, pp. 1–20.
3. Imani, M. and Montazer, G.A., A survey of emotion recognition methods with emphasis on e-learning environments, J. Network Comput. Appl., 2019, vol. 147, p. 102423.
4. Savchenko, A.V., Deep neural networks and maximum likelihood search for approximate nearest neighbor in video-based image recognition, Opt. Mem. Neural Networks, 2017, vol. 26, no. 2, pp. 129–136.
5. Savchenko, A.V., Probabilistic neural network with complex exponential activation functions in image recognition, IEEE Trans. Neural Networks Learn. Syst., 2020, vol. 31, no. 2, pp. 651–660.
6. Ashwin, T.S. and Guddeti, R.M.R., Affective database for e-learning and classroom environments using Indian students’ faces, hand gestures and body postures, Future Generation Comput. Syst., 2020, vol. 108, pp. 334–348.
7. Mollahosseini, A., Hasani, B., and Mahoor, M.H., AffectNet: A database for facial expression, valence, and arousal computing in the wild, IEEE Trans. Affective Comput., 2017, vol. 10, no. 1, pp. 18–31.
8. Savchenko, A.V., Facial expression and attributes recognition based on multi-task learning of lightweight neural networks, Proceedings of the 19th IEEE International Symposium on Intelligent Systems and Informatics (SISY), 2021, pp. 119–124.
9. Dhall, A., EmotiW 2019: Automatic emotion, engagement and cohesion prediction tasks, Proceedings of the International Conference on Multimodal Interaction (ICMI), 2019, pp. 546–550.
10. Liu, C. et al., Multi-feature based emotion recognition for video clips, Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), 2018, pp. 630–634.
11. Kumar, V., Rao, S., and Yu, L., Noisy student training using body language dataset improves facial expression recognition, Proceedings of the European Conference on Computer Vision (ECCV), Cham: Springer, 2020, pp. 756–773.
12. Zhou, H. et al., Exploring emotion features and fusion strategies for audio-video emotion recognition, Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), 2019, pp. 562–566.
13. Li, S. et al., Bi-modality fusion for emotion recognition in the wild, Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), 2019, pp. 589–594.
14. Zeng, H. et al., EmotionCues: Emotion-oriented visual summarization of classroom videos, IEEE Trans. Visual. Comput. Graphics, 2020, vol. 27, no. 7, pp. 3168–3181.
15. Sharma, G., Dhall, A., and Cai, J., Audio-visual automatic group affect analysis, IEEE Trans. Affective Comput., 2021.
16. Pinto, J.R. et al., Audiovisual classification of group emotion valence using activity recognition networks, Proceedings of the 4th IEEE International Conference on Image Processing, Applications and Systems (IPAS), 2020, pp. 114–119.
17. Wang, Y. et al., Implicit knowledge injectable cross attention audiovisual model for group emotion recognition, Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), 2020, pp. 827–834.
18. Sun, M. et al., Multi-modal fusion using spatio-temporal and static features for group emotion recognition, Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), 2020, pp. 835–840.
19. Liu, C. et al., Group level audio-video emotion recognition using hybrid networks, Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), 2020, pp. 807–812.
20. Savchenko, A.V., Efficient facial representations for age, gender and identity recognition in organizing photo albums using multi-output ConvNet, PeerJ Comput. Sci., 2019, vol. 5, e197.
21. Cao, Q., Shen, L., Xie, W., Parkhi, O.M., and Zisserman, A., VGGFace2: A dataset for recognising faces across pose and age, Proceedings of the 13th IEEE International Conference on Automatic Face and Gesture Recognition (FG), 2018, pp. 67–74.
22. Zhang, K. et al., Joint face detection and alignment using multitask cascaded convolutional networks, IEEE Signal Process. Lett., 2016, vol. 23, no. 10, pp. 1499–1503.
23. Savchenko, A.V., Savchenko, L.V., and Makarov, I.A., Classifying emotions and engagement in online learning based on a single facial expression recognition neural network, IEEE Trans. Affective Comput., 2022, pp. 1–12.
24. Facial emotion recognition repository. https://github.com/HSE-asavchenko/face-emotion-recognition/.
25. Sokolova, A.D., Kharchevnikova, A.S., and Savchenko, A.V., Organizing multimedia data in video surveillance systems based on face verification with convolutional neural networks, Proceedings of the International Conference on Analysis of Images, Social Networks and Texts (AIST), Cham: Springer, 2017, pp. 223–230.
26. Veeramani, B., Raymond, J.W., and Chanda, P., DeepSort: deep convolutional networks for sorting haploid maize seeds, BMC Bioinform., 2018, vol. 19, no. 9, pp. 1–9.
Metadata
Title
Neural Network Model for Video-Based Analysis of Student’s Emotions in E-Learning
Authors
A. V. Savchenko
I. A. Makarov
Publication date
01-09-2022
Publisher
Pleiades Publishing
Published in
Optical Memory and Neural Networks / Issue 3/2022
Print ISSN: 1060-992X
Electronic ISSN: 1934-7898
DOI
https://doi.org/10.3103/S1060992X22030055
