
2016 | Original Paper | Book Chapter

Emotion Determination in eLearning Environments Based on Facial Landmarks


Abstract

Massive Open Online Courses (MOOCs) are a new kind of e-Learning environment that makes it possible to reach very large numbers of students. MOOCs allow students all over the world to participate in lectures independent of place and time. Sessions, in some cases joined by more than 100,000 students, are based on small units of teaching material containing videos or texts.
However, today's MOOCs are static environments that take into account neither the diversity of the students nor their situational context. Current MOOCs amount to mass processing rather than individual treatment of individual students. Thus MOOCs need to be personalized in addition to being massive.
In order to personalize an e-Learning environment, it is first of all necessary to collect data, or personal factors, about the student, his or her current environment, and his or her situational context. This data can then be processed and used as input for adaptive functions. Many input factors are imaginable, such as cognitive style, prior knowledge, the currently used device, or personal goals. The input factors can be grouped into technical, personal, and situational factors. Situational factors in particular may help to support students in different learning situations.
This paper describes an approach to detecting the student's current mood as a situational input factor. The mood of a student in a learning situation is a potentially useful feature that can serve as instant feedback on the teaching materials currently in use. The proposed approach relies on the widespread availability of built-in cameras in the devices students use, such as smartphones, tablets, or laptop computers. Frames captured from these devices are processed by a Java-based server component that detects selected facial landmarks. Based on the relative positions of these landmarks, the emotion most likely being shown is determined.
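The landmark-based step described above can be sketched as follows. This is an illustrative sketch only, not the chapter's actual implementation: the landmark names, the normalization by inter-eye distance, and the thresholds are hypothetical assumptions.

```java
import java.awt.geom.Point2D;
import java.util.Map;

/**
 * Illustrative sketch: derive a coarse emotion label from the relative
 * positions of a few facial landmarks. All names and thresholds are
 * hypothetical, not taken from the paper.
 */
public class LandmarkEmotionClassifier {

    /**
     * Expects the landmarks "leftEye", "rightEye", "leftMouth", "rightMouth",
     * and "mouthCenter" in image coordinates (y grows downward). Distances
     * are normalized by the inter-eye distance so the result does not depend
     * on how large the face appears in the frame.
     */
    public static String classify(Map<String, Point2D> lm) {
        double eyeDist = lm.get("leftEye").distance(lm.get("rightEye"));
        double mouthWidth =
                lm.get("leftMouth").distance(lm.get("rightMouth")) / eyeDist;
        // Mouth corners above the mouth center (smaller y) suggest a smile;
        // corners below it suggest a frown.
        double cornerLift = (lm.get("mouthCenter").getY()
                - (lm.get("leftMouth").getY() + lm.get("rightMouth").getY()) / 2.0)
                / eyeDist;

        if (cornerLift > 0.05 && mouthWidth > 0.7) return "happy";
        if (cornerLift < -0.05) return "sad";
        return "neutral";
    }

    public static void main(String[] args) {
        Map<String, Point2D> lm = Map.of(
                "leftEye", new Point2D.Double(30, 40),
                "rightEye", new Point2D.Double(70, 40),
                "leftMouth", new Point2D.Double(35, 70),
                "rightMouth", new Point2D.Double(65, 70),
                "mouthCenter", new Point2D.Double(50, 75));
        System.out.println(classify(lm)); // prints "happy"
    }
}
```

A real system would obtain the landmark coordinates from a detector such as the structured-output SVM or Haar-cascade approaches cited by the paper; the geometric rules above merely show how relative positions can be turned into an emotion label.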
The system's output may be used to adjust the difficulty level of tests or to determine the preferred media type.
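One possible shape of such an adaptive function is sketched below. The rule set and the emotion labels are hypothetical assumptions; the paper only states that the output may feed into difficulty and media-type decisions.

```java
/**
 * Illustrative sketch of an adaptive function consuming the detected mood.
 * The mood labels and rules are hypothetical, not taken from the paper.
 */
public class AdaptationRules {

    /** Raises or lowers the test difficulty (scale 1..5) based on the mood. */
    public static int adjustDifficulty(String mood, int currentLevel) {
        switch (mood) {
            case "frustrated": return Math.max(1, currentLevel - 1);
            case "bored":      return Math.min(5, currentLevel + 1);
            default:           return currentLevel;
        }
    }

    /** Suggests a media type for the next teaching unit. */
    public static String preferredMediaType(String mood) {
        return mood.equals("frustrated") ? "video" : "text";
    }

    public static void main(String[] args) {
        System.out.println(adjustDifficulty("frustrated", 3)); // prints 2
        System.out.println(preferredMediaType("frustrated")); // prints "video"
    }
}
```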


Metadata
Title
Emotion Determination in eLearning Environments Based on Facial Landmarks
Author
Tobias Augustin
Copyright Year
2016
DOI
https://doi.org/10.1007/978-3-319-42147-6_11