Published in: Pattern Recognition and Image Analysis 4/2020

01-10-2020 | PATTERN RECOGNITION AND IMAGE ANALYSIS AUTOMATED SYSTEMS, HARDWARE AND SOFTWARE

Deep Learning Based Approach of Emotion Detection and Grading System

Authors: Bhakti Sonawane, Priyanka Sharma

Abstract

In the past two decades, research on the recognition and analysis of human emotions has grown across neuroscience, psychology, cognitive science, and computer science. Facial expressions play an important role in the assessment of several neuropsychiatric disorders in which patients show impairments in both the recognition and the expression of emotional facial expressivity. Most previous work on machine analysis of human emotion focused either on recognizing prototypic expressions of the six basic emotions or on detecting the movements of individual facial muscles. Moreover, most of these automated facial expression recognition methods were built on data posed on demand and acquired in laboratory settings. For real-world applications, however, instead of merely classifying a face image into one of the facial expression categories, attention must shift to estimating the depth of a facial expression so that emotions can be graded, which broadens applicability to clinical research. In this paper we propose a deep-learning-based Emotion Detection and Grading System (D-EDGS) that classifies facial emotional expression into one of seven basic categories of emotion. In addition, D-EDGS grades each emotion on a low, medium, or high scale. In future work, the effectiveness of D-EDGS could support the development of decision-support systems for the assessment of neuropsychiatric disorders.
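The paper itself publishes no code, but the dual-output design the abstract describes (one prediction for the emotion category, one for the intensity grade) can be sketched as a shared feature representation feeding two separate classification heads. The sketch below is a minimal, hypothetical illustration with linear heads standing in for the D-EDGS convolutional backbone; the class name, label lists, and weight initialization are assumptions, not the authors' implementation.

```python
import numpy as np

# Seven basic emotion categories and three intensity grades, as in the abstract.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]
GRADES = ["low", "medium", "high"]

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

class TwoHeadClassifier:
    """Toy stand-in for a multi-task model: a shared feature vector
    feeds one head for the 7 emotion categories and a second head
    for the 3 intensity grades."""

    def __init__(self, n_features, rng=None):
        rng = rng or np.random.default_rng(0)
        # Randomly initialized linear heads; in a real system these
        # would be trained jointly on labeled, graded expression data.
        self.W_emotion = 0.1 * rng.standard_normal((len(EMOTIONS), n_features))
        self.W_grade = 0.1 * rng.standard_normal((len(GRADES), n_features))

    def predict(self, features):
        """Return (emotion label, grade label) for one feature vector."""
        p_emotion = softmax(self.W_emotion @ features)
        p_grade = softmax(self.W_grade @ features)
        return EMOTIONS[int(p_emotion.argmax())], GRADES[int(p_grade.argmax())]

model = TwoHeadClassifier(n_features=16)
features = np.random.default_rng(1).standard_normal(16)
emotion, grade = model.predict(features)
```

The point of the two-head layout is that grading reuses the same learned facial features as categorization, so intensity estimation comes at little extra cost over plain expression classification.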


Metadata
Title
Deep Learning Based Approach of Emotion Detection and Grading System
Authors
Bhakti Sonawane
Priyanka Sharma
Publication date
01-10-2020
Publisher
Pleiades Publishing
Published in
Pattern Recognition and Image Analysis / Issue 4/2020
Print ISSN: 1054-6618
Electronic ISSN: 1555-6212
DOI
https://doi.org/10.1134/S1054661820040239
