
14-12-2023 | Original Article

Spatial deep feature augmentation technique for FER using genetic algorithm

Authors: Nudrat Nida, Muhammad Haroon Yousaf, Aun Irtaza, Sajid Javed, Sergio A. Velastin

Published in: Neural Computing and Applications | Issue 9/2024

Abstract

Generating a large human-labelled facial expression dataset is challenging because of the ambiguity in labelling facial expression classes and the cost of annotation. However, facial expression recognition (FER) systems demand discriminative feature representations and require many training samples to establish strong decision boundaries. Recent FER approaches have used data augmentation to increase the number of training samples available for model generation, but because these augmented samples are derived from the existing training data, they are of limited value for building an accurate FER system. To achieve meaningful facial expression representations, we introduce an augmentation technique for FER based on deep learning and genetic algorithms. The proposed approach exploits the hypothesis that augmenting the feature set is more effective than augmenting the visual data for FER. In evaluating this relationship, we found that the genetic evolution of discriminative facial expression features is significant in developing a robust FER approach. In this approach, facial expression samples are generated from RGB video data, treating human face frames as regions of interest. The face-detected frames are further processed to extract key-frames within particular intervals. These key-frames are then convolved through a deep convolutional network for feature generation. A genetic algorithm's fitness function is used to select optimal, genetically evolved deep facial expression receptive fields that represent virtual facial expressions. The extended facial expression information is evaluated with an extreme learning machine classifier. The proposed technique has been evaluated on five diverse datasets: JAFFE, CK+, FER2013, AffectNet and our application-specific Instructor Facial Expression (IFEV) dataset. Experimental results and analysis show the promising accuracy and significance of the proposed technique on all these datasets.
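
The sketch below is a minimal illustration of the idea described in the abstract, not the authors' released implementation: deep features of face key-frames are augmented with genetically evolved "virtual" feature vectors, and an extreme learning machine (ELM) both supplies the genetic algorithm's fitness signal and performs the final classification. The synthetic features, feature dimension, ELM size, and GA settings are all assumptions chosen for illustration; in the paper, real CNN features of detected key-frames would take their place.

```python
# Illustrative sketch only (assumed settings throughout), using NumPy alone.
import numpy as np

rng = np.random.default_rng(0)
n_classes, feat_dim = 7, 128

# Synthetic stand-in for deep CNN features of detected face key-frames.
centers = rng.normal(scale=3.0, size=(n_classes, feat_dim))
y_real = rng.integers(0, n_classes, size=420)
X_real = centers[y_real] + rng.normal(size=(420, feat_dim))

def train_elm(X, y, hidden=256, reg=1e-2):
    """Minimal extreme learning machine: random hidden layer + ridge output weights."""
    W = rng.normal(size=(X.shape[1], hidden))
    H = np.tanh(X @ W)
    T = np.eye(n_classes)[y]                      # one-hot targets
    beta = np.linalg.solve(H.T @ H + reg * np.eye(hidden), H.T @ T)
    return W, beta

def elm_scores(model, X):
    W, beta = model
    return np.tanh(X @ W) @ beta                  # per-class scores

def evolve_virtual_features(X, y, model, per_class=15, pop=40, gens=10, mut=0.05):
    """Per class, evolve virtual feature vectors by crossover/mutation of real
    same-class deep features; the ELM's class score acts as the GA fitness."""
    vX, vy = [], []
    for c in range(n_classes):
        pool = X[y == c]
        if len(pool) < 2:
            continue
        # Initial population: uniform crossover of random same-class parent pairs.
        idx = rng.integers(0, len(pool), size=(pop, 2))
        mask = rng.random((pop, X.shape[1])) < 0.5
        P = np.where(mask, pool[idx[:, 0]], pool[idx[:, 1]])
        for _ in range(gens):
            fitness = elm_scores(model, P)[:, c]              # fitness gauged by the ELM
            elite = P[np.argsort(fitness)[-pop // 2:]]        # keep the fitter half
            pairs = elite[rng.integers(0, len(elite), size=(pop - len(elite), 2))]
            mask = rng.random((pop - len(elite), X.shape[1])) < 0.5
            kids = np.where(mask, pairs[:, 0], pairs[:, 1])   # crossover
            kids += rng.normal(scale=mut, size=kids.shape)    # mutation
            P = np.vstack([elite, kids])
        best = P[np.argsort(elm_scores(model, P)[:, c])[-per_class:]]
        vX.append(best)
        vy.append(np.full(per_class, c))
    return np.vstack(vX), np.concatenate(vy)

# Train a baseline ELM on real features, augment the feature set, retrain, compare.
split = 300
Xtr, ytr, Xte, yte = X_real[:split], y_real[:split], X_real[split:], y_real[split:]
baseline = train_elm(Xtr, ytr)
acc_base = np.mean(np.argmax(elm_scores(baseline, Xte), axis=1) == yte)

Xv, yv = evolve_virtual_features(Xtr, ytr, baseline)
augmented = train_elm(np.vstack([Xtr, Xv]), np.concatenate([ytr, yv]))
acc_aug = np.mean(np.argmax(elm_scores(augmented, Xte), axis=1) == yte)
print(f"ELM test accuracy: real only {acc_base:.3f}, real + virtual {acc_aug:.3f}")
```

The design point this sketch reflects is the one the abstract argues for: augmentation happens in deep feature space rather than in pixel space, and the classifier that will ultimately consume the features is the one that judges which evolved virtual samples are worth keeping.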

Metadata
Title
Spatial deep feature augmentation technique for FER using genetic algorithm
Authors
Nudrat Nida
Muhammad Haroon Yousaf
Aun Irtaza
Sajid Javed
Sergio A. Velastin
Publication date
14-12-2023
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 9/2024
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-023-09245-x
