Published in: Pattern Analysis and Applications 3/2022

22.09.2021 | Original Article

Automatic stress analysis from facial videos based on deep facial action units recognition

Authors: Giorgos Giannakakis, Mohammad Rami Koujan, Anastasios Roussos, Kostas Marias


Abstract

Stress conditions are manifested in various physiological processes of the human body and on the human face. Facial expressions are modelled consistently through the Facial Action Coding System (FACS) using facial Action Unit (AU) parameters. This paper focuses on the automated recognition and analysis of AUs in videos as quantitative indices for discriminating between neutral and stress states. A novel deep learning pipeline for the automatic recognition of facial action units is proposed, trained on two publicly available annotated facial datasets, UNBC and BOSPHORUS. Two types of descriptive facial features are extracted from the input images: geometric features (non-rigid 3D facial deformations due to facial expressions) and appearance features (deep facial appearance features). The extracted features are then fed to deep fully connected layers that regress AU intensities and robustly perform AU classification. The proposed algorithm is applied to the SRD’15 stress dataset, which contains neutral and stress states elicited by four types of stressors. We present thorough experimental results and comparisons, indicating that the proposed methodology yields particularly promising performance in terms of both AU detection and stress recognition accuracy. Furthermore, the AUs relevant to stress were experimentally identified, providing evidence that their intensity increases significantly during stress, leading to a more expressive human face compared to neutral states.
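The abstract describes fusing geometric features (3D expression deformations) with deep appearance features and passing them through fully connected layers that regress AU intensities, from which binary AU activations follow. The sketch below illustrates that fusion-and-regression step only; the feature dimensions, layer sizes, AU count, and threshold are hypothetical placeholders, and the randomly initialised weights stand in for the trained network described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (the paper's exact dimensions are not stated in the abstract):
GEOM_DIM = 28    # e.g. 3DMM expression coefficients (geometric features)
APP_DIM = 512    # e.g. deep appearance embedding from a face CNN
HIDDEN = 128     # width of the fully connected hidden layer
NUM_AUS = 17     # number of action units regressed

def relu(x):
    return np.maximum(0.0, x)

# Random weights in place of trained parameters.
W1 = rng.normal(0.0, 0.05, (GEOM_DIM + APP_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.05, (HIDDEN, NUM_AUS))
b2 = np.zeros(NUM_AUS)

def predict_aus(geom_feat, app_feat, threshold=1.0):
    """Concatenate geometric and appearance features, regress
    non-negative AU intensities, and threshold them into binary
    AU activation decisions."""
    x = np.concatenate([geom_feat, app_feat])
    h = relu(x @ W1 + b1)
    intensities = relu(h @ W2 + b2)   # AU intensity regression
    active = intensities > threshold  # AU classification
    return intensities, active

# One hypothetical video frame's features:
geom = rng.normal(size=GEOM_DIM)
app = rng.normal(size=APP_DIM)
intensities, active = predict_aus(geom, app)
print(intensities.shape, active.dtype)  # (17,) bool
```

In the paper's setting, frame-level AU intensities like these would then be aggregated over a video segment to serve as stress indicators.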


Metadata
Title
Automatic stress analysis from facial videos based on deep facial action units recognition
Authors
Giorgos Giannakakis
Mohammad Rami Koujan
Anastasios Roussos
Kostas Marias
Publication date
22.09.2021
Publisher
Springer London
Published in
Pattern Analysis and Applications / Issue 3/2022
Print ISSN: 1433-7541
Electronic ISSN: 1433-755X
DOI
https://doi.org/10.1007/s10044-021-01012-9
