Facial Expression Analysis under Partial Occlusion: A Survey

Abstract
Automatic machine-based Facial Expression Analysis (FEA) has made substantial progress in the past few decades, driven by its importance for applications in psychology, security, health, entertainment, and human–computer interaction. The vast majority of completed FEA studies are based on nonoccluded faces collected in controlled laboratory environments. Automatic expression recognition tolerant to partial occlusion remains less understood, particularly in real-world scenarios. In recent years, research into techniques for handling partial occlusion in FEA has increased, and the time is right for a comprehensive review of these developments and of the state of the art. This survey provides such a review of recent advances in dataset creation, algorithm development, and investigations of the effects of occlusion critical for robust performance in FEA systems. It outlines existing challenges in overcoming partial occlusion and discusses possible opportunities for advancing the technology. To the best of our knowledge, it is the first FEA survey dedicated to occlusion, and it aims to promote better-informed and better-benchmarked future work.
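Many of the studies covered by this survey evaluate robustness by synthetically occluding face images (e.g. masking the eye or mouth region) before classification. The helper below is a minimal illustrative sketch of that evaluation protocol, not any specific method from the surveyed literature; the function name and parameters are our own for illustration.

```python
import numpy as np

def occlude_block(face, top, left, height, width, fill=0):
    """Return a copy of `face` with a rectangular region set to `fill`,
    simulating a partial occlusion (e.g. a scarf or a hand over the mouth)."""
    occluded = face.copy()
    occluded[top:top + height, left:left + width] = fill
    return occluded

# Example: mask the lower half of a synthetic 64x64 grayscale "face",
# roughly emulating the mouth-region occlusions studied in this literature.
face = np.random.default_rng(0).random((64, 64))
masked = occlude_block(face, top=32, left=0, height=32, width=64)
assert np.all(masked[32:, :] == 0)                 # lower half occluded
assert np.array_equal(masked[:32, :], face[:32, :])  # upper half untouched
```

Feeding such masked images to a recognizer trained on clean faces is a common way to measure how much each facial region contributes to expression recognition accuracy.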