
2011 | Original Paper | Book Chapter

19. Facial Expression Analysis

Authors: Fernando De la Torre, Jeffrey F. Cohn

Published in: Visual Analysis of Humans

Publisher: Springer London


Abstract

The face is one of the most powerful channels of nonverbal communication. Facial expression provides cues about emotion, intention, alertness, pain, and personality; it regulates interpersonal behavior and communicates psychiatric and biomedical status, among other functions. Within the past 15 years, interest in automated facial expression analysis has grown within the computer vision and machine learning communities. This chapter reviews fundamental approaches to facial measurement by behavioral scientists and current efforts in automated facial expression recognition. We consider challenges, review databases available to the research community, and discuss approaches to feature detection, tracking, and representation, together with both supervised and unsupervised learning.


Footnotes
1
Bold uppercase letters denote matrices (e.g., \(\mathbf{D}\)), and bold lowercase letters denote column vectors (e.g., \(\mathbf{d}\)). \(\mathbf{d}_{j}\) represents the jth column of the matrix \(\mathbf{D}\), and \(d_{ij}\) denotes the scalar in the ith row and jth column of \(\mathbf{D}\). Non-bold letters represent scalar variables. \(\operatorname{tr}(\mathbf{D}) = \sum_{i} d_{ii}\) is the trace of the square matrix \(\mathbf{D}\), and \(\|\mathbf{d}\|_{2} = \sqrt{\mathbf{d}^{T}\mathbf{d}}\) designates the Euclidean norm of \(\mathbf{d}\).
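As a minimal illustration of these conventions (a hypothetical 2×2 example, not drawn from the chapter), consider:

\[
\mathbf{D} = \begin{bmatrix} d_{11} & d_{12} \\ d_{21} & d_{22} \end{bmatrix},
\qquad
\mathbf{d}_{2} = \begin{bmatrix} d_{12} \\ d_{22} \end{bmatrix},
\qquad
\operatorname{tr}(\mathbf{D}) = d_{11} + d_{22},
\qquad
\|\mathbf{d}_{2}\|_{2} = \sqrt{d_{12}^{2} + d_{22}^{2}}.
\]

Here \(\mathbf{d}_{2}\) is the second column of \(\mathbf{D}\), the trace of \(\mathbf{D}\) sums its diagonal scalars, and the Euclidean norm follows from \(\mathbf{d}_{2}^{T}\mathbf{d}_{2}\).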
 
Metadata
Title
Facial Expression Analysis
Authors
Fernando De la Torre
Jeffrey F. Cohn
Copyright Year
2011
Publisher
Springer London
DOI
https://doi.org/10.1007/978-0-85729-997-0_19