
Facial Expression Analysis under Partial Occlusion: A Survey

Published: 18 April 2018

Abstract

Automatic machine-based Facial Expression Analysis (FEA) has made substantial progress over the past few decades, driven by its importance for applications in psychology, security, health, entertainment, and human–computer interaction. The vast majority of FEA studies to date are based on non-occluded faces collected in controlled laboratory environments. Automatic expression recognition that is tolerant to partial occlusion remains less well understood, particularly in real-world scenarios. In recent years, research into techniques for handling partial occlusion in FEA has increased, and the time is right for a comprehensive review of these developments and of the state of the art. This survey provides such a review of recent advances in dataset creation, algorithm development, and investigations of the effects of occlusion, all of which are critical for robust performance in FEA systems. It outlines the remaining challenges in overcoming partial occlusion and discusses opportunities for advancing the technology. To the best of our knowledge, it is the first FEA survey dedicated to occlusion, and it aims to promote better-informed and better-benchmarked future work.

  134. X. Yu-Li et al. 2006. Beihang University facial expression database and multiple facial expression recognition. In Proceedings of the International Conference on Machine Learning and Cybernetics. IEEE, Dalian, China, 3282--3287.Google ScholarGoogle Scholar
  135. S. Zafeiriou et al. 2016. Facial affect “in-the-wild”: A survey and a new database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. IEEE, Las Vegas, USA, 1487--1498.Google ScholarGoogle ScholarCross RefCross Ref
  136. S. Zafeiriou et al. 2015. A survey on face detection in the wild: Past, present and future. Computer Vision and Image Understanding 138, 1--24. Google ScholarGoogle ScholarDigital LibraryDigital Library
  137. Z. Zeng et al. 2009. A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence 31, 1, 39--58. Google ScholarGoogle ScholarDigital LibraryDigital Library
  138. S. Zhalehpour et al. 2016. BAUM-1: A spontaneous audio-visual face database of affective and mental states. IEEE Transactions on Affective Computing 8, 3, 300--313.Google ScholarGoogle ScholarDigital LibraryDigital Library
  139. K. Zhang et al. 2017. Facial expression recognition based on deep evolutional spatial-temporal networks. IEEE Transactions on Image Processing 26, 9, 4193--4203.Google ScholarGoogle ScholarDigital LibraryDigital Library
  140. L. Zhang et al. 2015. Adaptive facial point detection and emotion recognition for a humanoid robot. Computer Vision and Image Understanding 140, 93--114. Google ScholarGoogle ScholarDigital LibraryDigital Library
  141. L. Zhang et al. 2011. Toward a more robust facial expression recognition in occluded images using randomly sampled Gabor based templates. In Proceedings of the IEEE International Conference on Multimedia and Expo. IEEE, Barcelona, Spain, 1--6. Google ScholarGoogle ScholarDigital LibraryDigital Library
  142. L. Zhang et al. 2014a. Facial expression recognition experiments with data from television broadcasts and the World Wide Web. Image and Vision Computing 32, 2, 107--119. Google ScholarGoogle ScholarDigital LibraryDigital Library
  143. L. Zhang et al. 2014b. Random Gabor based templates for facial expression recognition in images with facial occlusion. Neurocomputing 145, 0, 451--464.Google ScholarGoogle ScholarCross RefCross Ref
  144. L. Zhang et al. 2016. Towards robust automatic affective classification of images using facial expressions for practical applications. Multimedia Tools and Applications 75, 8, 4669--4695. Google ScholarGoogle ScholarDigital LibraryDigital Library
  145. S. Zhang et al. 2012. Robust facial expression recognition via compressive sensing. Sensors 12, 3, 3747--3761.Google ScholarGoogle ScholarCross RefCross Ref
  146. X. Zhang et al. 2014c. BP4D-Spontaneous: A high-resolution spontaneous 3D dynamic facial expression database. Image and Vision Computing 32, 10, 692--706.Google ScholarGoogle ScholarCross RefCross Ref
  147. X. Zhao et al. 2011. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 41, 5, 1417--1428. Google ScholarGoogle ScholarDigital LibraryDigital Library
  148. X. Zhao et al. 2016. Automatic 2.5-D facial landmarking and emotion annotation for social interaction assistance. IEEE Transactions on Cybernetics 46, 9, 2042--2055.Google ScholarGoogle ScholarCross RefCross Ref
  149. R. Zhi et al. 2011. Graph-preserving sparse nonnegative matrix factorization with application to facial expression recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 41, 1, 38--52. Google ScholarGoogle ScholarDigital LibraryDigital Library
  150. Y. Zhou et al. 2017. Pose-independent facial action unit intensity regression based on multi-task deep transfer learning. In Proceedings of the 12th IEEE International Conference on Automatic Face 8 Gesture Recognition. IEEE, Washington, USA, 872--877.Google ScholarGoogle ScholarCross RefCross Ref
  151. W. Ziheng et al. 2013. Capturing complex spatio-temporal relations among facial muscles for facial expression recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, Portland, USA, 3422--3429. Google ScholarGoogle ScholarDigital LibraryDigital Library

Published in ACM Computing Surveys, Volume 51, Issue 2 (March 2019), 748 pages.

  • ISSN: 0360-0300
  • EISSN: 1557-7341
  • DOI: 10.1145/3186333
  • Editor: Sartaj Sahni

Copyright © 2018 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

  • Received: 1 March 2017
  • Revised: 1 November 2017
  • Accepted: 1 November 2017
  • Published: 18 April 2018
