Published by De Gruyter Mouton, January 13, 2018

“Too many Americans are trapped in fear, violence and poverty”: a psychology-informed sentiment analysis of campaign speeches from the 2016 US Presidential Election

  • Thomas Hoffmann
From the journal Linguistics Vanguard

Abstract

Most automatic sentiment analyses of texts tend to employ only a simple positive-negative polarity to classify emotions. In this paper, I illustrate a more fine-grained automatic sentiment analysis [Jockers, Matthew. 2016. Introduction to the Syuzhet package. https://cran.r-project.org/web/packages/syuzhet/vignettes/syuzhet-vignette.html (accessed 07 March 2017); Mohammad, Saif M. & Peter D. Turney. 2013. Crowdsourcing a word-emotion association lexicon. Computational Intelligence 29(3). 436–465.] that is based on a classification of human emotions put forward by psychological research [Plutchik, Robert. 1994. The psychology and biology of emotion. New York, NY: HarperCollins College Publishers.]. The advantages of this approach are illustrated by a sample study that analyses the emotional sentiment of the campaign speeches of the two main candidates in the 2016 US presidential election.

1 Introduction

It’s become cliche to decry each election as the most negative of our lives; the atmospherics of this race, though, are completely unique in recent American history – and uniquely conducive to negativity. (Aaron Blake, The Washington Post, July 29, 2016)[1]

Right after Hillary Clinton had officially accepted her nomination as the presidential candidate of the Democratic Party for the 2016 US presidential election, The Washington Post published the above negative prediction. As a recent study by the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy showed (Patterson 2016), the US media themselves contributed significantly to the perceived negativity of the campaign through a largely negative tone in their coverage of the two main candidates, Hillary Clinton and Donald Trump. In addition to that, however, the language of the two candidates themselves has also been described as fairly negative: a pilot study by undergraduates from the University of Michigan (Bayagich et al. 2016) indicated that the tone of both candidates in the 2016 presidential debates was considerably negative. Both Clinton and Trump showed negativity scores higher than any other presidential candidate of their respective party in the last 30 years – with Trump exhibiting considerably more negative language than Clinton. In a similar vein, a linguistic analysis of the two candidates’ Twitter posts (Crockett 2016) revealed, perhaps rather unsurprisingly, that Trump’s tweets were considerably more negative than Clinton’s posts.

All of these analyses certainly yield interesting results concerning the emotional language used by the two candidates, yet they also suffer from a simplistic dichotomy of language as either negative or positive (or neutral, if a third category is included in the analysis). Bayagich et al. (2016), for example, relied on the Lexicoder Sentiment Dictionary (Young and Soroka 2012), which classifies lexical items only as either ‘positive’ or ‘negative’ (Young and Soroka 2012: 212). Such binary classifications are still the dominant approach in sentiment analysis (Nissim and Patti 2017: 32–34; Pozzi et al. 2017: 3) and do allow researchers to get a first glimpse of the emotional language of a speaker. Yet, they obviously fail to provide a detailed analysis of the complex set of emotions that speakers have. Sadness and fear, for example, are both negative emotions, yet, as we all know, they crucially differ in their physiological as well as psychological effects. Similarly, we consider joy and trust positive emotions, but are well aware of how different we feel when we experience the two.

From a psychological perspective, emotions are a set of complex physiological, cognitive and behavioral reactions to a stimulus, which have evolved as functional adaptations to particular situations and as impulses for specific types of situation-adequate behavior (Plutchik 2001: 345–348; Becker-Carus and Wendt 2017: 541–542): while fear might trigger an impulse to flee, sadness can lead individuals to seek social support and solace. This evolutionary perspective obviously raises the question of whether there are basic, universal emotions that all humans share as part of their genetic makeup. One piece of evidence for the existence of such universal human emotions comes from research by Paul Ekman and colleagues, who argue for the existence of unconscious cross-cultural facial expressions of emotions (inter alia Ekman and Friesen 1969; Ekman and Friesen 1971; Becker-Carus and Wendt 2017: 551–552; though see Russell 1994 for a critical review of these results).[2] Based on the results of this line of research, Ekman (1992, 2005) postulated the following six basic human emotions, for all of which he identified specific corresponding facial expressions: joy, sadness, anger, fear, disgust and surprise. Yet, despite receiving some empirical support, Ekman’s classification is far from generally accepted, with competing approaches arguing for anything from three up to eleven universal emotions (Plutchik 2001: 349).[3]

An alternative approach based on psychological and physiological research that has received considerable support was advocated by Plutchik (1980, 1994), who postulated the following eight basic human emotions: joy, sadness, anger, fear, trust, disgust, anticipation and surprise. As can be seen, Plutchik’s list subsumes Ekman’s six emotions and only adds trust and anticipation to them (Mohammad and Turney 2013). In contrast to Ekman’s list, which largely comprises negative emotions (i.e. sadness, anger, fear and disgust), Plutchik assumes an identical number of positive and negative emotions, which can be further subdivided into four opposing pairs, namely anger–fear, anticipation–surprise, joy–sadness, and trust–disgust (cf., e.g. Mohammad and Turney 2010; Mohammad and Turney 2013).
In addition to this, in Plutchik’s approach, specific types of emotions that go beyond the basic ones are straightforwardly accounted for either by different degrees of intensity (e.g., terror is seen as a stronger form of fear, while apprehension is a weaker form) or by a combination of emotions (e.g. hatred is analysed as a combination of disgust plus anger; Plutchik 2001: 348–350).

Currently, there is thus no single generally accepted psychological theory of basic human emotions. Nevertheless, what all psychological research agrees on is that a simple positive-negative dichotomy is far too simplistic to capture the full range of human emotions. Consequently, automatic sentiment analyses should also employ more fine-grained emotional classifications. In light of this, following Mohammad and Turney (2010, 2013), the present study aims for a psychologically more detailed analysis of the emotional language employed by Clinton and Trump. Adopting a corpus-based approach, the presidential campaign speeches of the two main candidates were subjected to an automatic sentiment analysis (Mohammad and Turney 2013; Jockers 2016) that not only identified positive or negative lexical items, but also detected lexis associated with Plutchik’s eight basic human emotions. In particular, the analysis draws on the NRC Word-Emotion Association Lexicon (Mohammad and Turney 2013), which contains 10,170 lexical items that are coded for Plutchik’s basic human emotions as well as positive or negative polarity (cf. also Mohammad and Turney 2010; Schweinberger 2016). Alternative models of human emotions such as Ekman’s approach could, of course, also form the basis for automatic sentiment analysis. Yet, as pointed out above, Plutchik’s categories have the advantage of not only subsuming Ekman’s six emotions but also of providing a more balanced list of positive and negative emotions (and, potentially, forming the basis for more fine-grained future analyses that also incorporate the intensity parameter as well as the combination of basic emotions to create specific emotional feelings).

The 10,170 lexical items of the NRC comprise the following (n.b.: 189 items occur in more than one set, which is why the numbers below add up to 10,359; Mohammad and Turney 2013):

  • the 1,587 most frequent noun, verb, adverb and adjective uni- and bigrams from the Macquarie Thesaurus (with frequency being assessed via the Google n-gram corpus; for details see Mohammad and Turney 2013),

  • all 640 words of the Ekman subset of the WordNet Affect Lexicon,

  • 8,132 terms from the General Inquirer.

The emotional ratings for all these items were then collected via the crowd-sourced Amazon Mechanical Turk service and are based on 38,726 ratings from 2,216 subjects. All lexical items of the NRC were rated by five different individuals using a Likert-scale design: subjects were, e.g., asked whether a given word was not positive, weakly positive, moderately positive or strongly positive. Similar judgments had to be made for all other parameters (negativity as well as the eight Plutchik categories). 85% of all items had a rating on which at least four raters agreed (assignments deviating by two or more standard deviations from the mean were discarded; for more details see Mohammad and Turney 2013).

The NRC is available via open access and has been implemented (method = “nrc”) as one type of sentiment analysis in the syuzhet package version 1.0.1 (Jockers 2016), which is a freely available library for the program R (R Core Team 2016). For the present analysis, using the openNLP sentence tokenizer implemented in the syuzhet library, the data from each candidate were first split into individual sentences. Then, for each of these sentences, the words that according to the NRC express positive or negative sentiment or one of Plutchik’s eight basic emotions were identified and counted (via the get_nrc_sentiment() function). Earlier versions of the syuzhet package had attracted several points of criticism (Swafford 2015)[4] concerning the accuracy of the sentence-splitting algorithm, the treatment of ironic or sarcastic language, and the lack of modifier impact. In response, Jockers (2015) improved the sentence tokenizer in the version used for this paper. In addition to that, he pointed out that irony and sarcasm are currently not only problematic for any automatic sentiment analysis program, but that even humans sometimes fail to detect these in a text. Nevertheless, since the large-scale use of irony and sarcasm is normally only associated with certain types of spoken texts, and not with texts such as political speeches as a whole, this was not considered a problem for the present study. More pressing is the issue of collocating modifiers: lexicon-based sentiment analyses such as the syuzhet package adopt a bag-of-words approach that only assigns emotional values to individual words, but ignores the context in which they occur (for a criticism and a more sophisticated approach, see Kennedy and Inkpen 2006). A sentence such as He is not bad., however, is, of course, clearly less negative than He is bad. (though still not as positive as He is good.; Mohammad and Turney 2013; Jockers 2015).
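To illustrate the pipeline just described, the following minimal sketch shows how the per-sentence NRC scores can be obtained (the file name is hypothetical; the function names follow the syuzhet documentation):

    library(syuzhet)

    speeches  <- get_text_as_string("trump_speeches.txt")  # one candidate's speeches
    sentences <- get_sentences(speeches)                   # sentence tokenization
    nrc       <- get_nrc_sentiment(sentences)              # one row per sentence with word counts for
                                                           # anger, anticipation, disgust, fear, joy,
                                                           # sadness, surprise, trust, negative, positive
    colSums(nrc)                                           # overall counts per emotion category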

The syuzhet library offers no option for assessing the influence of co-occurring ‘shifters’, i.e. words that affect the degree or even the polarity of emotional words. It was therefore decided to draw on the sentimentr package (Rinker 2016) to test the effect of shifters on the present results. This R library has the advantage of also offering the NRC as one of its emotional lexicons. In contrast to the syuzhet library, sentimentr does not provide an in-depth analysis of Plutchik’s basic emotions, but only yields a single polarity score, thus only allowing us to check the accuracy of the initial binary emotion analysis presented below:[5] The sentimentr score is calculated by first assigning a positive (‘+1’) or negative (‘−1’) value to an emotion word in accordance with its classification in the NRC. This score S1 is then further modified by checking whether a shifter occurs within a context window specified by the user (the default window is 5 words before and 2 words after the polarized word). Sentimentr distinguishes four types of shifters with the following default values (all of which can be modified and whose optimal settings remain to be identified by future research):

  • Amplifiers (currently comprising 50 items, such as certain, totally, or very) and de-amplifiers (14 items, e.g. barely, hardly or rarely), respectively, increase and decrease the original score S1 by 1.8 to yield a score S2,

  • Negators (26 items such as didn’t, never or not), if present, then flip the sign of S2 to give the score S3,

  • Adversative conjunctions (3 items: although, but, and however) lower or increase the score S3 by 1.85, depending on whether they follow or precede the polarized item, to yield S4.

Finally, all the weighted S4 scores of a sentence are summed and divided by the square root of the total number of words of the sentence. This procedure thus gives a normalized score for each sentence that is slightly different from the one calculated by the syuzhet package. Nevertheless, the sentimentr package made it possible to run two NRC-based analyses on the data: one excluding any shifter effect (an analysis identical in approach to the syuzhet package) and one taking into account the effect of contextual shifters (using the default settings mentioned above). As a statistical analysis of the correlation of these two models showed, both approaches yielded similar results, which offers some corroboration for the validity of the results from the ‘simpler’ bag-of-words method provided by the syuzhet package.
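To make this procedure concrete, the following is a minimal sketch of a shifter-aware sentimentr run; it assumes that sentences is the sentence vector from the syuzhet step above and that the NRC polarity table is available as lexicon::hash_sentiment_nrc (the object name follows the current sentimentr/lexicon documentation):

    library(sentimentr)
    library(lexicon)

    # Shifter-aware run: NRC polarity values, default context window of
    # 5 words before and 2 words after each polarized word
    with_shift <- sentiment(sentences,
                            polarity_dt = lexicon::hash_sentiment_nrc,
                            n.before = 5, n.after = 2)

    # One row per sentence; the 'sentiment' column holds the normalized
    # score, i.e. sum(S4) / sqrt(number of words in the sentence)
    head(with_shift)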

Note, however, that neither the bag-of-words approach nor the more sophisticated shifter approach can be considered a perfect model of human understanding. Take, e.g., the following sentences:

  • (1) He is stupid.

  • (2) He is not just stupid …

  • (3) He is stupid and dangerous.

  • (4) He is not just stupid and dangerous …

  • (5) He is not just stupid but dangerous.

The sentences in (1)–(5) are all clearly negative and the not just construction in (2), (4) and (5) only serves to amplify this negative meaning. Yet, while a context-blind syuzhet method correctly identifies (1–5) as negative, the sentimentr analysis, e.g., ‘blindly’ treats not as a negator that reverses the polarity of stupid and gives an overall positive or neutral score for (2), (4) and (5) (see the Appendix analysis in the accompanying R script for details).[6] In order to provide a more adequate model of human sentiment word processing, future automatic sentiment analysis will therefore have to draw on more elaborate syntactic analyses that account for the sophisticated interaction of complex form-meaning correspondences (as advocated, e.g., in Construction Grammar theories; see, e.g. Hoffmann 2017).
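The problematic reading of (2), for instance, can be reproduced directly (a sketch; the exact score depends on the polarity table used, here again assumed to be lexicon::hash_sentiment_nrc):

    # 'not' falls within the default context window of 'stupid', so sentimentr
    # treats it as a negator and flips the negative score of 'stupid',
    # yielding a neutral-to-positive score for a clearly negative sentence
    sentiment("He is not just stupid.",
              polarity_dt = lexicon::hash_sentiment_nrc)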

All in all, the present analysis confirms that Trump does indeed use statistically significantly more negative lexical items in his campaign speeches than Clinton. Moreover, the results also show that the most frequent emotion evoked by both candidates, as expected for campaign speeches, is trust. In addition to this, the statistical analysis of the basic emotions detects significant preferences for each candidate: while lexical items from the emotional fields of anticipation, trust and joy appear significantly more often in Clinton’s speeches, Trump’s speeches exhibit a significant preference for emotional lexis evoking disgust, anger, fear and sadness. The present results thus provide further empirical support for the view that Trump’s campaign was much more negative than Clinton’s, adding that there is a particular set of negative emotions that were more frequently expressed by the former. Finally, the study also shows how automatic sentiment analyses of political speech can benefit in general from complementing simple binary polarity analyses of emotions with a psychologically more fine-grained and relevant emotion word classification.

2 Methodology

The data for the present study comes from the American Presidency Project website (http://www.presidency.ucsb.edu), which is hosted at the University of California, Santa Barbara. The site contains more than 120,000 official presidential documents, including the documents of the 2016 Presidential Election (http://www.presidency.ucsb.edu/2016_election.php). The relevant campaign speeches were scraped from this site and statistically analysed using the program R version 3.4.1 (R Core Team 2016). All scripts used in the extraction and analysis of the data are provided together with this article.

In a first step, the presidential campaign speeches by Clinton and Trump were downloaded from the American Presidency Project website as text files following the procedure outlined by Francom (2015). At the time the study was carried out, the website with Clinton’s data[7] also contained 106 speeches from her 2008 campaign, which were excluded from further analysis. The remaining data from each candidate were then saved in a single text file each. These files were then submitted to an automatic sentiment analysis using the syuzhet package (Jockers 2016). Additionally, as outlined in the previous section, the sentimentr package (Rinker 2016) was used to test the validity of the binary syuzhet sentiment analysis against a model that also takes into account the influence of co-occurring shifter items.

Finally, the data were subjected to two separate statistical analyses: first, for each sentence exhibiting words identified as either positive or negative, a negativity score NEG was calculated. This ratio variable is simply the proportion of negative words among all emotionally polarized words of a sentence, that is, NEG = (frequency of negative words in a sentence) / (frequency of negative words in a sentence + frequency of positive words in a sentence). The NEG scores for Clinton and Trump were then subjected to a Wilcoxon rank sum test (alternatively known as Mann-Whitney U test) in R, since an F-test had revealed that the two distributions did not meet the criterion of variance homogeneity, thus precluding an independent t-test (Oakes 1998: 17; Gries 2009: 208, 218–226). In a second step, the overall frequencies of emotion words per Plutchik category were calculated for each candidate. Both variables included in this second analysis are nominal: the factor CANDIDATE has two levels (‘Clinton’ and ‘Trump’), the factor EMOTIONS has eight levels (‘joy’, ‘sadness’, ‘anger’, ‘fear’, ‘trust’, ‘disgust’, ‘anticipation’ and ‘surprise’). In order to find out whether any of the 16 (2×8) resulting variable combinations are statistically associated, that is, whether any CANDIDATE significantly uses more or fewer words of a particular EMOTIONS category, these data were subjected to a “configural frequency analysis” (CFA; cf. Bortz et al. 1990: 155–157; Gries 2009: 240–252) using the HCFA 3.2 script (Gries 2004). For each CANDIDATE × EMOTIONS variable combination, the script calculated an exact binomial test and adjusted the significance of all tested factor associations (so-called ‘configurations’) for multiple testing (using the Holm method; see Gries 2009: 249 for details). Since p-values are dependent on sample size, CFAs also include a sample size-independent measure of effect size ranging from 0 to 1 labeled “coefficient of pronouncedness” (“Q”, a measure equivalent to r²; cf. Bortz et al. 1990: 156; Gries 2009: 249).[8]
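A minimal base-R sketch of the first of these two steps might look as follows (object names are hypothetical; nrc_clinton and nrc_trump stand for the per-sentence data frames returned by get_nrc_sentiment(); the CFA itself was run with Gries’s HCFA 3.2 script):

    # NEG = negative / (negative + positive), computed only for sentences
    # containing at least one positive or negative word
    neg_score <- function(nrc) {
      polarized <- nrc[nrc$negative + nrc$positive > 0, ]
      polarized$negative / (polarized$negative + polarized$positive)
    }

    neg_clinton <- neg_score(nrc_clinton)
    neg_trump   <- neg_score(nrc_trump)

    var.test(neg_clinton, neg_trump)     # F-test for variance homogeneity
    wilcox.test(neg_clinton, neg_trump)  # Wilcoxon rank sum (Mann-Whitney U) test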

3 Results

The data retrieved from the American Presidency Project website yielded the following two corpora: the campaign speeches of Clinton with 249,185 words and the campaign speeches of Trump, which amounted to 183,521 words. Note that this difference in corpus size does not affect the following statistical analyses since all methods employed in this paper correct for unequal sample size.

Let us first look at the distribution of negative words in the two corpora. Overall, the data yield the following results:

Table 1 already shows that, overall, Trump’s campaign speeches contain proportionally more negative words than Clinton’s (40% vs. 30%). This, however, is obviously only a very coarse summary of the two candidates’ use of emotional words. As mentioned in section 2, a NEG score was therefore additionally calculated for both candidates, which indicates, for each sentence containing at least one word identified as positive and/or negative by the Word-Emotion Association Lexicon, the degree to which that sentence is negative. This measurement thus allows us to track the proportion of negative words for each individual sentence and is less affected by single outliers (of particularly positive or negative words in a single utterance). Figure 1 gives a visual representation of the distribution of NEG scores for both candidates:

Table 1:

Raw frequencies of positive and negative words in the campaign speeches of Clinton and Trump.

Candidate   Negative words   Positive words   Sum
Clinton     7,137 (30%)      16,680 (70%)     23,817
Trump       6,939 (40%)      10,455 (60%)     17,394

Figure 1: NEG scores for Trump and Clinton.

Figure 1 provides further support for the claim that Clinton (median = 0%, mean = 30%) uses considerably fewer negative words than Trump (median = 33%, mean = 38%). Since an F-test indicated that the two distributions have a significantly different variance (F = 1.1013, numerator df = 5,849, denominator df = 10,502, p < 0.001), the NEG scores of Clinton and Trump were subjected to a Wilcoxon rank sum test. This test showed that the difference between the two candidates is highly statistically significant (W = 34,961,000, p < 0.001).

How reliable are the above results in light of the fact that the syuzhet package does not take into account the potential effect of co-occurring shifters? As Figure 2 shows, using the sentimentr package, we actually find a strong correlation between the simple syuzhet-type of analysis and a more sophisticated analysis including the effect of shifters:

Figure 2: Comparison of NRC-based binary sentiment analysis excluding and including co-occurring shifter items for Trump and Clinton.

For both data sets, Figure 2 shows a strong positive correlation between the analyses without and with shifters (with r = 0.66 for the Trump data and r = 0.99 for the Clinton data). These results thus corroborate the validity of the syuzhet analysis presented above. Nevertheless, as mentioned in the preceding section, neither model should be considered perfect, and future research on automatic sentiment analyses will definitely have to draw on far more sophisticated models to better approximate human sentiment word understanding.
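The correlation check itself is a single line, assuming no_shift and with_shift are (hypothetical) numeric vectors holding the per-sentence scores of the two sentimentr runs:

    # Pearson correlation between shifter-free and shifter-aware scores
    cor(no_shift, with_shift)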

As mentioned in the introduction, one step in this direction would be to move beyond a simple binary classification of words as positive or negative. Such an approach can, of course, yield interesting first results concerning the sentiment of a text. Yet, in order to provide a psychologically more revealing analysis, in a next step the data were also analysed for Plutchik’s eight basic human emotions. This more detailed sentiment analysis yielded the following results:

Plutchik’s eight emotions can be subdivided into four complementary pairs, namely anger–fear, anticipation–surprise, joy–sadness, and trust–disgust (cf., e.g. Mohammad and Turney 2010). In Table 2, we find a similar distribution for the last of these pairs in the two corpora: both Clinton and Trump most frequently use emotion words evoking trust (26% and 23%, respectively), while words with the meaning of disgust appear least frequently in both corpora (with 4% and 7%, respectively). All the other emotions, however, are employed to different degrees by the two candidates. Figure 3 gives a visual representation of the data distribution provided in Table 2:

Table 2:

EMOTIONS by CANDIDATE.

CANDIDATE   Anger   Anticipation   Disgust   Fear    Joy     Sadness   Surprise   Trust
Clinton     3,723   7,695          1,829     4,553   6,334   3,541     3,826      11,192
(%)         9%      18%            4%        11%     15%     8%        9%         26%
Trump       3,897   4,426          2,201     4,638   4,329   3,548     3,113      7,602
(%)         12%     13%            7%        14%     13%     11%       9%         23%

As Figure 3 reveals, Clinton seems to use considerably more words from positive emotional categories, with joy and anticipation in second and third place. In contrast to this, fear is the second most frequent emotion category evoked in Trump’s data. Moreover, while sadness and anger are in penultimate and antepenultimate position in Clinton’s range of emotional words, they are ranked above surprise in Trump’s data.

Figure 3: Ranking of Plutchik emotion categories by CANDIDATE (in %). The colors of the various emotion categories are adopted from Plutchik (2001: 349).[9]

In order to statistically test the relative preferences of the two candidates for particular emotions, the raw data from Table 2 were subjected to a CFA. The results from this analysis are summarized in Figure 4:

Figure 4: CFA analysis of CANDIDATE × EMOTIONS combinations.

For each CANDIDATE × EMOTIONS combination, Figure 4 provides the raw frequencies (column ‘Freq’), the expected frequencies (‘Exp’), how much the former deviate from the latter as calculated by a binomial test (‘Cont.chisq’), the direction of the deviation (‘Obs-exp’, with ‘<’ indicating fewer and ‘>’ indicating more observed data points than expected), the Holm-adjusted significance value of the combination (‘P.adj.Holm’) as well as the coefficient of pronouncedness (‘Q’).
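The logic behind these columns can be sketched in a few lines of base R; note that this is not Gries’s HCFA 3.2 script itself, but merely a hand-rolled approximation of the exact binomial tests and Holm correction described above, using the raw counts from Table 2:

    obs <- matrix(c(3723, 7695, 1829, 4553, 6334, 3541, 3826, 11192,
                    3897, 4426, 2201, 4638, 4329, 3548, 3113, 7602),
                  nrow = 2, byrow = TRUE,
                  dimnames = list(CANDIDATE = c("Clinton", "Trump"),
                                  EMOTIONS = c("anger", "anticipation", "disgust", "fear",
                                               "joy", "sadness", "surprise", "trust")))
    n        <- sum(obs)
    expected <- outer(rowSums(obs), colSums(obs)) / n  # expected frequency per configuration

    # Exact binomial test per CANDIDATE x EMOTIONS configuration,
    # Holm-adjusted for multiple testing
    p_raw  <- mapply(function(o, e) binom.test(o, n, e / n)$p.value, obs, expected)
    p_holm <- matrix(p.adjust(p_raw, method = "holm"), nrow = 2,
                     dimnames = dimnames(obs))
    sign(obs - expected)  # direction of deviation (the 'Obs-exp' column)
    p_holm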

In Figure 4, I have marked by color (blue for the Democratic candidate Clinton and red for the Republican candidate Trump) those factor combinations that a candidate uses statistically significantly more often than their opponent. As the CFA analysis reveals, we do, indeed, find the expected emotional bias: compared to her opponent’s speeches, Hillary Clinton’s campaign speeches show a significant preference for positive emotions such as anticipation, trust and joy (and a significant dispreference for words evoking disgust, anger, fear and sadness). Trump’s campaign speeches, in contrast, show a relative statistical preference for negative emotions such as disgust, anger, fear and sadness (and a significant dispreference for positive emotions such as anticipation, trust and joy). The only category for which there is no statistical difference between the two candidates is surprise.

4 Conclusion

The findings of the sentiment analysis of the campaign speeches by the main candidates of the 2016 US presidential election largely confirm what has previously been suggested in the media: Donald Trump’s language in the analysed documents is, on the whole, relatively more negative than that of Hillary Clinton. More importantly, however, by adopting a psychological classification of emotions (Plutchik 1980, Plutchik 1994), the present study was able to provide a much more detailed and fine-grained analysis of the data. This analysis revealed important findings that cannot be detected by simplistic binary models of human emotions: as we have seen, for both candidates trust is the most frequently evoked emotion (making up 26% and 23% of all emotional lexis, respectively). Considering that the major function of campaign speeches is to gather support for and develop trust in a candidate, that in itself is not surprising. Yet, it is noteworthy that this positive emotion is also the most frequent one in Donald Trump’s speeches – the candidate who is normally described as someone who relied on a largely negative rhetoric. What the statistical analysis showed, however, is that there are specific negative emotions that are significantly favored in Trump’s campaign speeches (namely disgust, anger, fear and sadness) and that probably contributed to the overall assessment of his campaign as predominantly negative. Still, an open question that remains to be addressed by future studies is whether the 2016 campaign was indeed more negative in tone than previous campaigns (as suggested, e.g., by the quote from The Washington Post at the start of the paper or the initial findings from Bayagich et al. 2016). As argued in the present paper, the syuzhet package (Jockers 2016) is an ideal tool to address this question.

The package, which carries out an automatic sentiment analysis drawing on a lexicon that is based on Plutchik’s eight basic human emotions (the Word-Emotion Association Lexicon; Mohammad and Turney 2013), is freely available online, as is the software program R (R Core Team 2016), with which all analyses for the present study have been conducted. The resources necessary for similar types of studies are therefore easily available, and it is hoped that future research will make much more use of this psychologically more plausible type of sentiment analysis. Yet, as pointed out earlier, this automatic model of sentiment detection is only a first attempt at modeling human sentiment word processing. More adequate models will have to be developed in the future that not only draw on psychological theories of human emotions, but that also incorporate the complex effect of co-occurring words and constructions that shift the sentiment score of an emotional word (and which go beyond a blind application of contextual information).

Acknowledgments

I would like to thank all three anonymous reviewers for their critical feedback, which has greatly improved the quality of the final paper.

References

Bayagich, Megan, Laura Cohen, Lauren Farfel, Andrew Krowitz, Emily Kuchman, Sarah Lindenberg, Natalie Sochacki, Hannah Suh & Stuart Soroka. 2016. Exploring the tone of the 2016 campaign. http://cpsblog.isr.umich.edu/?p=1884 (accessed 07 March 2017).

Becker-Carus, Christian & Mike Wendt. 2017. Emotionen. In Christian Becker-Carus & Mike Wendt (eds.), Allgemeine Psychologie: Eine Einführung, 539–568. Berlin: Springer. doi:10.1007/978-3-662-53006-1_12

Bortz, Jürgen, Gustav A. Lienert & Klaus Boehnke. 1990. Verteilungsfreie Methoden in der Biostatistik. Berlin: Springer. doi:10.1007/978-3-662-22593-6

Calvo, Rafael A. & Sunghwan M. Kim. 2013. Emotions in text: Dimensional and categorical models. Computational Intelligence 29(3). 527–543. doi:10.1111/j.1467-8640.2012.00456.x

Crockett, Zachary. 2016. What I learned reading 4,000 Trump and Clinton tweets. http://www.vox.com/2016/11/7/13550796/clinton-trump-twitter (accessed 07 March 2017).

Ekman, Paul. 1992. An argument for basic emotions. Cognition and Emotion 6(3). 169–200. doi:10.1080/02699939208411068

Ekman, Paul. 2005. Emotion in the human face. Oxford: Oxford University Press.

Ekman, Paul & Wallace V. Friesen. 1969. The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica 1. 49–98. doi:10.1515/semi.1969.1.1.49

Ekman, Paul & Wallace V. Friesen. 1971. Constants across cultures in the face and emotion. Journal of Personality and Social Psychology 17. 124–129. doi:10.1037/h0030377

Francom, Jerid. 2015. Web scraping with ‘rvest’ in R. http://francojc.github.io/web-scraping-with-rvest/ (accessed 07 March 2017).

Gendron, Maria, Debi Roberson, Jacoba Marietta van der Vyver & Lisa Feldman Barrett. 2014. Cultural relativity in perceiving emotion from vocalizations. Psychological Science 25. 911–920. doi:10.1177/0956797613517239

Gries, St. Th. 2004. HCFA 3.2 – A program for hierarchical configural frequency analysis for R for Windows. http://www.linguistics.ucsb.edu/faculty/stgries/research (accessed 17 November 2014).

Gries, St. Th. 2009. Statistics for linguistics with R: A practical introduction. Berlin: Mouton de Gruyter. doi:10.1515/9783110216042

Hoffmann, Thomas. 2017. Construction Grammars. In Barbara Dancygier (ed.), The Cambridge handbook of cognitive linguistics, 310–329. Cambridge: Cambridge University Press. doi:10.1017/9781316339732.020

Jockers, Matthew. 2015. Some thoughts on Annie’s thoughts … about Syuzhet. http://www.matthewjockers.net/2015/03/04/some-thoughts-on-annies-thoughts-about-syuzhet/ (accessed 17 July 2017).

Jockers, Matthew. 2016. Introduction to the Syuzhet package. https://cran.r-project.org/web/packages/syuzhet/vignettes/syuzhet-vignette.html (accessed 07 March 2017).

Kennedy, Alistair & Diana Inkpen. 2006. Sentiment classification of movie reviews using contextual valence shifters. Computational Intelligence 22. 110–125. doi:10.1111/j.1467-8640.2006.00277.x

Mohammad, Saif M. & Peter D. Turney. 2010. Emotions evoked by common words and phrases: Using Mechanical Turk to create an emotion lexicon. In Proceedings of the NAACL-HLT 2010 workshop on computational approaches to analysis and generation of emotion in text, 26–34. Los Angeles, CA, June 2010. http://aclweb.org/anthology/W10-02 (accessed 26 September 2017).

Mohammad, Saif M. & Peter D. Turney. 2013. Crowdsourcing a word-emotion association lexicon. Computational Intelligence 29(3). 436–465. doi:10.1111/j.1467-8640.2012.00460.x

Nissim, Malvina & Viviana Patti. 2017. Semantic aspects in sentiment analysis. In Federico Alberto Pozzi, Elisabetta Fersini, Enza Messina & Bing Liu (eds.), Sentiment analysis in social networks, 31–48. Amsterdam: Elsevier. doi:10.1016/B978-0-12-804412-4.00003-6

Oakes, Michael P. 1998. Statistics for corpus linguistics (Edinburgh Textbooks in Empirical Linguistics). Edinburgh: Edinburgh University Press.

Patterson, Thomas E. 2016. News coverage of the 2016 general election: How the press failed the voters. https://shorensteincenter.org/news-coverage-2016-general-election/ (accessed 07 March 2017). doi:10.2139/ssrn.2884837

Plutchik, Robert. 1980. A general psychoevolutionary theory of emotion. In Robert Plutchik & Henry Kellerman (eds.), Emotion: Theory, research, and experience, Volume 1: Theories of emotion, 3–33. New York, NY: Academic Press. doi:10.1016/B978-0-12-558701-3.50007-7

Plutchik, Robert. 1994. The psychology and biology of emotion. New York, NY: HarperCollins College Publishers.

Plutchik, Robert. 2001. The nature of emotions. American Scientist 89. 344–350. doi:10.1511/2001.4.344

Pozzi, Federico Alberto, Elisabetta Fersini, Enza Messina & Bing Liu. 2017. Challenges of sentiment analysis in social networks: An overview. In Federico Alberto Pozzi, Elisabetta Fersini, Enza Messina & Bing Liu (eds.), Sentiment analysis in social networks, 1–11. Amsterdam: Elsevier. doi:10.1016/B978-0-12-804412-4.00001-2

R Core Team. 2016. R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. https://www.R-project.org/.

Rinker, Tyler. 2016. sentimentr v0.4.0. https://www.rdocumentation.org/packages/sentimentr/versions/0.4.0 (accessed 17 July 2017).

Russell, James A. 1994. Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychological Bulletin 115(1). 102–141. doi:10.1037/0033-2909.115.1.102

Sauter, Disa A., Frank Eisner, Paul Ekman & Sophie K. Scott. 2010. Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations. Proceedings of the National Academy of Sciences of the United States of America 107(6). 2408–2412. doi:10.1073/pnas.0908239106

Sauter, Disa A., Frank Eisner, Paul Ekman & Sophie K. Scott. 2015. Emotional vocalizations are recognized across cultures regardless of the valence of distractors. Psychological Science 26(3). 354–356. doi:10.1177/0956797614560771

Schrepp, Martin. 2006. The use of configural frequency analysis for explorative data analysis. British Journal of Mathematical and Statistical Psychology 59(1). 59–73. doi:10.1348/000711005X66761

Schweinberger, Martin. 2016. A sociolinguistic analysis of emotives in Irish English. Poster presented at the Annual Meeting of the Society for Text & Discourse 2016, Kassel, 18–20 July 2016.

Swafford, Annie. 2015. Problems with the syuzhet package. https://annieswafford.wordpress.com/2015/03/02/syuzhet/ (accessed 17 July 2017).

Young, Lori & Stuart Soroka. 2012. Affective news: The automated coding of sentiment in political texts. Political Communication 29. 205–231. doi:10.1080/10584609.2012.671234


Article note

Donald Trump, “Remarks in Virginia Beach, Virginia,” July 11, 2016. Source: http://www.presidency.ucsb.edu/ws/?pid=117815 (accessed 8 March 2017).


Received: 2017-03-08
Accepted: 2017-10-21
Published Online: 2018-01-13

©2018 Walter de Gruyter GmbH, Berlin/Boston
