Psychiatry Research

Volume 168, Issue 3, 15 August 2009, Pages 242-249

The NimStim set of facial expressions: Judgments from untrained research participants

https://doi.org/10.1016/j.psychres.2008.05.006

Abstract

A set of face stimuli called the NimStim Set of Facial Expressions is described. The goal in creating this set was to provide facial expressions that untrained individuals, characteristic of research participants, would recognize. The set is large in number, multiracial, and available to the scientific community online. The results of psychometric evaluations of these stimuli are presented and lend empirical support for the validity of the set (accurate identification of the intended expressions) and its reliability (high intra-participant agreement across two testing sessions).

Introduction

The literature on human social and emotional behavior is rich with studies of face processing. To date, an online search of databases such as PubMed for “face perception” or “face processing” returns over 7000 relevant papers, and a similar search for “facial expression” returns over 5000. Integral to these studies is the availability of valid and reliable face stimuli. This paper describes a set of face stimuli (the NimStim Set of Facial Expressions, available to the scientific community at http://www.macbrain.org/resources.htm) and reports validity and reliability for this set based on ratings from healthy adult participants.

Facial expressions and their interpretation are of interest to researchers because of the links between emotional experience and facial expression. It has been argued that facial expressions of emotion have communicative value (Darwin, 1872) with phylogenetic roots (Izard, 1971). Both the production (Ekman and Friesen, 1978) and the interpretation of facial expressions (Schlosberg, 1954, Russell and Bullock, 1985) have been examined empirically. Previous research using existing sets (Ekman and Friesen, 1976) has established much of what is known about face processing. However, the parameters of existing sets may not always satisfy the objectives of an experiment (Erwin et al., 1992). For example, the number of stimuli may be too few (Winston et al., 2002), the stimulus set may not have enough racial or ethnic diversity, or there may be no appropriate comparison or baseline expression (Phillips et al., 1998). Issues like these have motivated researchers to create their own stimuli (Hart et al., 2000, Phelps et al., 2000, Gur et al., 2002, Batty and Taylor, 2003, Phelps et al., 2003, Tanaka et al., 2004). The goal of creating the NimStim Set of Facial Expressions was to provide a set of facial expressions that addresses these issues.

A number of features of the set are advantageous for researchers who study face expression processing. Perhaps the most important is the racial diversity of the actors. Studies often show that the race or ethnicity of a model impacts face processing both behaviorally (Elfenbein and Ambady, 2002, Herrmann et al., 2007) and in terms of the underlying neurobiology of face processing (Hart et al., 2000, Phelps et al., 2000, Golby et al., 2001, Lieberman et al., 2005). This modulation by race or ethnicity is not identified for all populations (Beaupre and Hess, 2005) and may be driven by experience (Elfenbein and Ambady, 2003) and bias (Phelps et al., 2000, Hugenberg and Bodenhausen, 2003).

To address such findings, face sets have been developed that consist of models from various backgrounds. For example, the JACFEE set (Ekman and Matsumoto, 1993–2004) consists of Japanese and Caucasian models, Mandal's (1987) set consists of Indian models, Mandal et al.'s (2001) set consists of Japanese models, the Montreal Set of Facial Displays of Emotion (Beaupre et al., 2000) consists of French Canadian, Chinese, and sub-Saharan African models, and Wang and Markham's (1999) set consists of Chinese models. Unlike these other sets, the NimStim set provides one uniform set of Asian-American, African-American, European-American, and Latino-American actors, all photographed under identical conditions. Additionally, because these actors all live in the same metropolitan city within the U.S., the subtle differences that accompany expressions when posed by individuals from different countries are minimized (Ekman and Friesen, 1969, Matsumoto et al., 2005). The NimStim set has attributes, described below, that are not typically found in other sets that include models from non-European populations.

There are four distinguishing attributes of the set that build on previously established sets. First, the NimStim set is available in color and contains a large number of stimuli and a large variety of facial expressions. Whereas most sets contain roughly 100 or fewer stimuli (Mandal, 1987, Ekman and Matsumoto, 1993–2004, Wang and Markham, 1999, Mandal et al., 2001), the NimStim set contains 672 stimuli, consisting of 43 professional actors, each modeling 16 different facial poses, including different examples of happy, sad, disgusted, fearful, angry, surprised, neutral, and calm. Second, a neutral expression is included in this set. The neutral expression is sometimes included in other facial expression sets (Ekman and Friesen, 1976, Erwin et al., 1992, Ekman and Matsumoto, 1993–2004), but is often omitted, particularly in sets that include models from different racial and ethnic backgrounds (Mandal, 1987, Wang and Markham, 1999, Beaupre et al., 2000, Mandal et al., 2001). The inclusion of the neutral expression is important because neutral is often a comparison condition, particularly in neuroimaging studies (Breiter et al., 1996, Thomas et al., 2001). Third, the NimStim set contains open- and closed-mouth versions of each expression, which can be used to experimentally control for perceptual differences (e.g., toothiness) that vary from one expression to another, as this featural difference may bias responses (Kestenbaum and Nelson, 1990). Having closed- and open-mouth versions also facilitates morphing between expressions by reducing blending artifacts. Finally, because positively valenced faces are generally lower in arousal than negatively valenced faces and can therefore present an arousal/valence confound, this set includes three degrees of happy faces (closed-mouth, open-mouth, and high-arousal/exuberant).

A final distinguishing feature of this set is the inclusion of a calm face. Studies examining the perception of facial expressions often use neutral faces as the comparison face (Breiter et al., 1996, Vuilleumier et al., 2001). However, there is evidence to suggest that neutral faces may not always be perceived as emotionally neutral (Donegan et al., 2003, Somerville et al., 2004, Iidaka et al., 2005), especially for children (Thomas et al., 2001, Lobaugh et al., 2006). Researchers have artificially generated other comparison faces (i.e., 25% happy) to address this concern (Phillips et al., 1997). Within the NimStim Set, a calm expression category is provided, which is perceptually similar to neutral, but may be perceived as having a less negative valence. Here we provide data validating the use of the calm face.

Validation of the entire set was accomplished by asking participants to label each stimulus. A different method of rating face stimuli is to have highly trained raters judge the expressions using facial action units (Ekman and Friesen, 1977, Ekman and Friesen, 1978); that method is very useful for establishing the uniformity of expression exemplars. The merit of the method used in this article is that the ratings were obtained from untrained volunteers, who are characteristic of participants in face expression processing studies. In other words, to establish whether a given expression was perceived as the intended expression, we measured concordance between participants' labels and the expressions the actors intended to pose (the validity measure), as well as intra-participant agreement across two testing sessions (test–retest reliability). We hypothesized that participants' judgments of these stimuli would provide empirical support for the reliability and validity of this new set of facial expressions.
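To make the reliability measure concrete, the sketch below shows one way intra-participant test–retest agreement could be tallied from a long-format table of labels collected in two sessions. This is a rough illustration, not the authors' analysis code; the column names, stimulus codes, and toy data are assumptions.

```python
# Hypothetical sketch: intra-participant test-retest agreement, i.e. how often
# a participant assigns the same label to the same stimulus in both sessions.
# Column names and toy data are assumptions, not the published data format.
import pandas as pd

ratings = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "stimulus":    ["A", "A", "B", "B", "A", "A", "B", "B"],
    "session":     [1, 2, 1, 2, 1, 2, 1, 2],
    "label":       ["happy", "happy", "fear", "surprise",
                    "happy", "happy", "fear", "fear"],
})

# One row per (participant, stimulus) pair, one column per session.
wide = ratings.pivot(index=["participant", "stimulus"],
                     columns="session", values="label")

# Proportion of (participant, stimulus) pairs labeled identically in both sessions.
agreement = (wide[1] == wide[2]).mean()
print(f"test-retest agreement: {agreement:.2f}")  # 0.75 for this toy table
```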

Section snippets

Participants

Data were collected from two groups of participants, producing a total N of 81 participants. The first group included 47 undergraduate students from a liberal arts college located in the Midwestern United States. Mean age was 19.4 years (18–22, S.D. = 1.2), and the majority (39/47) of these respondents were female. The participants received course credit for their participation in the study. Based on the participant pool data, it was estimated that 81% (38/47) were European-American, 6% (3/47)

Validity

Two validity measures (proportion correct and kappa) were computed for each stimulus, yielding 672 proportion-correct scores and 672 kappa scores. These scores are presented individually in Supplemental Tables 2a and 2b, in aggregate by actor (43 actors × 2 mouth positions = 86 scores) in Supplemental Table 3, and, for succinctness within this manuscript, by expression (17 scores; Table 1, closed and open mouth separately). The overall proportion correct was
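For readers who want to compute comparable scores on their own rating data, a minimal sketch of the two validity measures is given below. It applies raw proportion correct and the standard two-rater Cohen's kappa (Cohen, 1960) to participants' labels versus the intended expressions; the exact chance correction used for the published per-stimulus kappas may differ, and the example data are invented.

```python
# Sketch of the two validity measures: raw proportion correct and Cohen's
# kappa, which corrects agreement for chance given the marginal frequency of
# each response category. Example data are invented for illustration only.
from collections import Counter

def proportion_correct(labels, intended):
    """Fraction of participant labels matching the intended expressions."""
    return sum(a == b for a, b in zip(labels, intended)) / len(labels)

def cohens_kappa(labels, intended):
    """Standard two-rater Cohen's kappa between labels and intended expressions."""
    n = len(labels)
    observed = sum(a == b for a, b in zip(labels, intended)) / n
    # Chance agreement: product of the two marginal category frequencies,
    # summed over categories.
    freq_a, freq_b = Counter(labels), Counter(intended)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

labels   = ["happy", "fear", "fear", "calm", "angry", "surprise"]
intended = ["happy", "fear", "surprise", "calm", "angry", "surprise"]
print(proportion_correct(labels, intended))  # 0.83
print(cohens_kappa(labels, intended))        # ~0.79
```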

Discussion

The purpose of this article was to present a set of facial expression stimuli and to describe the judgments of these stimuli by untrained, healthy adult research participants. The face set is large in number, is multiracial, features contemporary-looking professional actors, contains a variety of expressions, and is available to the scientific community online. For these reasons, this set may be a useful resource for scientists who study face perception.

Validity was indexed as how accurate

Acknowledgements

The research reported was supported by the John D. & Catherine T. MacArthur Foundation Research Network on Early Experience and Brain Development (C.A. Nelson, Chair).

References (50)

  • Strauss, E., et al., 1981. Perception of facial expressions. Brain and Language.
  • Thomas, K.M., et al., 2001. Amygdala response to facial expressions in children and adults. Biological Psychiatry.
  • Vuilleumier, P., et al., 2001. Effects of attention and emotion on face processing in the human brain: an event-related fMRI study. Neuron.
  • Ambadar, Z., et al., 2005. Deciphering the enigmatic face: the importance of facial dynamics in interpreting subtle facial expressions. Psychological Science.
  • Beaupre, M.G., et al., 2005. Cross-cultural emotion recognition among Canadian ethnic groups. Journal of Cross-Cultural Psychology.
  • Beaupre, M.G., et al., 2000. The Montreal Set of Facial Displays of Emotion.
  • Biehl, M., et al., 1997. Matsumoto and Ekman's Japanese and Caucasian facial expressions of emotion (JACFEE): reliability data and cross-national differences. Journal of Cross-Cultural Psychology.
  • Cohen, J., 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement.
  • Cohen, J.D., et al., 1993. PsyScope: a new graphic interactive environment for designing psychology experiments. Behavioral Research Methods, Instruments, and Computers.
  • Darwin, C.R., 1872. The Expression of the Emotions in Man and Animals.
  • Ekman, P., et al., 1969. The repertoire of nonverbal behavior: categories, origins, usage, and coding. Semiotica.
  • Ekman, P., et al., 1976. Pictures of Facial Affect.
  • Ekman, P., et al., 1977. Manual for the Facial Action Coding System.
  • Ekman, P., et al., 1978. Facial Action Coding System: A Technique for the Measurement of Facial Movement.
  • Ekman, P., et al., 1993–2004. Japanese and Caucasian Facial Expressions of Emotion (JACFEE).