Published in: Pattern Recognition and Image Analysis 4/2020

01.10.2020 | PATTERN RECOGNITION AND IMAGE ANALYSIS AUTOMATED SYSTEMS, HARDWARE AND SOFTWARE

Deep Learning Based Approach of Emotion Detection and Grading System

Authors: Bhakti Sonawane, Priyanka Sharma


Abstract

In the past two decades, research on the recognition and analysis of human emotions has grown steadily across neuroscience, psychology, cognitive science, and computer science. Facial expressions play an important role in the assessment of several neuropsychiatric disorders in which patients show impairment in both the recognition and the expression of emotional facial expressivity. Most previous work on machine analysis of human emotion focused either on recognizing prototypic expressions of the six basic emotions or on detecting the movements of individual facial muscles. Moreover, most of these automated facial expression recognition methods were built on data posed on demand and acquired in laboratory settings. For real-world applications, however, instead of merely classifying a face image into one of the facial expression categories, attention must be given to estimating the intensity of the facial expression, so that emotions can be graded for use in clinical research. In this paper we propose a deep learning-based Emotion Detection and Grading System (D-EDGS) that classifies a facial emotional expression into one of seven basic emotion categories. In addition, D-EDGS grades each detected emotion on a low, medium, or high scale. In the future, D-EDGS could be applied to the development of decision support systems for the assessment of neuropsychiatric disorders.
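The two-stage pipeline the abstract describes, first predicting one of seven emotion categories and then grading its intensity, can be sketched as a small post-processing step on a classifier's softmax output. This is an illustrative sketch only: the emotion label order, the idea of grading by prediction confidence, and the `low_thresh`/`high_thresh` values are assumptions for demonstration, not details taken from the paper, whose D-EDGS model may grade intensity by an entirely different mechanism.

```python
# Seven basic emotion categories (assumed ordering; the paper's exact
# label set and ordering are not specified in this excerpt).
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "neutral", "sadness", "surprise"]

def grade_emotion(probs, low_thresh=0.45, high_thresh=0.75):
    """Map a 7-way softmax probability vector to (label, grade).

    The predicted emotion is the arg-max class; its probability is
    binned into 'low' / 'medium' / 'high'. Thresholds are illustrative
    placeholders, not values reported by the paper.
    """
    if len(probs) != len(EMOTIONS):
        raise ValueError("expected one probability per emotion class")
    idx = max(range(len(probs)), key=lambda i: probs[i])
    p = probs[idx]
    if p < low_thresh:
        grade = "low"
    elif p < high_thresh:
        grade = "medium"
    else:
        grade = "high"
    return EMOTIONS[idx], grade

# Example: a confident 'happiness' prediction graded as high intensity.
label, grade = grade_emotion([0.02, 0.01, 0.03, 0.85, 0.04, 0.02, 0.03])
```

In practice the probability vector would come from a trained CNN's softmax layer; the grading function itself is model-agnostic.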


Metadata
Title
Deep Learning Based Approach of Emotion Detection and Grading System
Authors
Bhakti Sonawane
Priyanka Sharma
Publication date
01.10.2020
Publisher
Pleiades Publishing
Published in
Pattern Recognition and Image Analysis / Issue 4/2020
Print ISSN: 1054-6618
Electronic ISSN: 1555-6212
DOI
https://doi.org/10.1134/S1054661820040239
