2017 | Original Paper | Book Chapter

Active Shape Model vs. Deep Learning for Facial Emotion Recognition in Security

Authors: Monica Bebawy, Suzan Anwar, Mariofanna Milanova

Published in: Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction

Publisher: Springer International Publishing


Abstract

As facial emotion recognition becomes more important every day, a research experiment was conducted to find the best approach to the task. Two methods were tested: Deep Learning (DL) and the Active Shape Model (ASM). Researchers have previously applied both Deep Learning and the Active Shape Model to facial emotion recognition, and this experiment was designed to determine which approach is better suited to this kind of technology. Both methods were evaluated on two different datasets, and the findings were consistent: the Active Shape Model outperformed Deep Learning. However, Deep Learning was faster and easier to implement, which suggests that with better Deep Learning software it could eventually surpass ASM in recognizing and classifying facial emotions. In this experiment, Deep Learning achieved 60% accuracy on the CAFE dataset, whereas the Active Shape Model achieved 93%; likewise, on the JAFFE dataset, Deep Learning achieved 63% accuracy and the Active Shape Model achieved 83%.
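
To make the deep-learning side of the comparison concrete, the sketch below shows one way such an emotion classifier could be set up. It is a minimal illustration assuming TensorFlow/Keras; the 48x48 grayscale input, layer widths, and seven-class output are hypothetical choices for illustration, not details taken from the chapter.

    # Minimal sketch of a CNN emotion classifier, assuming TensorFlow/Keras
    # is installed. The 48x48 grayscale input and the seven emotion classes
    # are illustrative assumptions, not parameters reported in the chapter.
    from tensorflow.keras import layers, models

    NUM_CLASSES = 7            # hypothetical: one unit per basic emotion
    INPUT_SHAPE = (48, 48, 1)  # hypothetical: small grayscale face crops

    def build_emotion_cnn():
        """Build a small CNN mapping a face image to emotion-class probabilities."""
        model = models.Sequential([
            layers.Conv2D(32, (3, 3), activation="relu", input_shape=INPUT_SHAPE),
            layers.MaxPooling2D((2, 2)),
            layers.Conv2D(64, (3, 3), activation="relu"),
            layers.MaxPooling2D((2, 2)),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    if __name__ == "__main__":
        # Training would call model.fit on a labeled face dataset such as
        # CAFE or JAFFE; here we only print the architecture.
        build_emotion_cnn().summary()

On the ASM side, the analogous pipeline would fit a statistical shape model to facial landmarks and pass the landmark features to a conventional classifier; that part is omitted here.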


Metadata
Title
Active Shape Model vs. Deep Learning for Facial Emotion Recognition in Security
Authors
Monica Bebawy
Suzan Anwar
Mariofanna Milanova
Copyright Year
2017
DOI
https://doi.org/10.1007/978-3-319-59259-6_1