1 Introduction
Online consumer reviews are a regular feature on most consumer websites such as Amazon or Yelp and have attracted much attention in the information systems community in recent years (e.g., Li et al. 2019). In particular, research has highlighted that certain properties of reviews determine their effects on review helpfulness, purchase intention, and product sales. In this regard, apart from the effects of review ratings (e.g., Chevalier and Mayzlin 2006; Clemons et al. 2006), a number of studies are concerned with review language, i.e., length (e.g., Pan and Zhang 2011; Schindler and Bickart 2012), content (e.g., Willemsen et al. 2011; Yin et al. 2014), and linguistic style (e.g., Li et al. 2019; Liu et al. 2008), which are arguably at least as important for review quality and effectiveness as purely numerical ratings (Archak et al. 2011; Pavlou and Dimoka 2006).
Despite the prominence of research on the effects of review language, however, little is known about why reviewers vary in the ways they use language, build arguments, and expend effort on their reviews. While some nascent research has emerged in this field (e.g., Hu et al. 2008; Willemsen et al. 2011), only one study (Safi and Yu 2017) has so far specifically illuminated the influence of reviewer personality on the characteristics of online consumer review language. This is surprising given that personality has long been considered an important factor in explaining differences in e-commerce behavior (e.g., Gefen 2000) and information systems use in general (Zmud 1979). It therefore appears reasonable to expect that personality characteristics might help explain why people vary in the way they compose reviews.
Our paper aims to establish a novel link between reviewer personality and online reviews. Specifically, we draw on the concept of political ideology, i.e., individuals’ leanings on a continuum between liberal and conservative. Political ideology is a particularly intriguing concept because strong evidence exists that it reflects various stable, underlying personality characteristics (see Jost et al. 2003, 2009 for reviews). In addition, political ideology contains an explicitly motivational component and thus “helps to explain why people do what they do” (Jost 2006, p. 653). As a result, the implications of ideology have often been studied in research related to information systems (Flaxman et al. 2016; Gentzkow and Shapiro 2011), e.g., with regard to its consequences for IT investment (Pang 2016), technology adoption (e.g., Chen 2010; Smith 2013), user behavior on social networking sites (Yang et al. 2017), and engagement in online piracy (Graf-Vlachy et al. 2017).
We introduce political ideology to online review research because we expect several of the associated personality characteristics and motivations to predict differences in review language. Building on previous research, we theorize that individuals’ pro-social behavior and altruism (e.g., Zettler and Hilbig 2010; Van Lange et al. 2012), cognitive complexity (e.g., Van Hiel and Mervielde 2003; Jost et al. 2003), and sensitivity to negative stimuli (e.g., Hibbing et al. 2014; Joel et al. 2013) are related to the way reviews are composed. We then link these personality characteristics associated with political ideology to three of the most-studied properties of review language, which have been suggested to have a pivotal impact on review helpfulness and sales: review depth (e.g., Mudambi and Schuff 2010; Schindler and Bickart 2012), multifacetedness (e.g., Ghose and Ipeirotis 2006, 2011; Willemsen et al. 2011), and valence (e.g., Cao et al. 2011; Yin et al. 2014). Overall, our research thus addresses the following research question:
How is reviewers’ political ideology related to the language they use when composing online reviews?
We view technology – specifically websites using online reviews – as a socially “embedded system” (Orlikowski and Iacono 2001, p. 126) and aim to contribute to research on how “different user groups [engage] with that technology” (Orlikowski and Iacono 2001, p. 127). To the best of our knowledge, our study is the first to show that the differences in review language described in extant literature are associated with differences in the personality of the reviewers, as reflected in their political ideology. By adding the factor of reviewers’ political ideology, our study goes beyond prior research, which was limited to situational antecedents such as experience or expertise (e.g., Liu et al. 2008; Smith et al. 2005), and we reach a more granular understanding of the determinants of review language. In addition, we provide evidence for the potential of political ideology as an important construct in information systems research at large. In particular, we highlight that the political ideology of system users is closely related to how they engage with information technology, which has critical implications for the design of IT systems and user interactions.
5 Estimation Approach and Results
Table 1 contains summary statistics and pairwise correlations for all variables used in our analyses. To test for multicollinearity, we calculated the mean variance inflation factors, which, at values between 1.70 (Model 9) and 1.81 (Model 4) for the saturated models, are well below the suggested threshold of 10.0 (Hair et al. 2009; Kutner et al. 2004).
Table 1
Descriptives and correlations (n = 245)
| Variable | Mean | SD | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11) |
| (1) Cognitively complex language | 0.00 | 3.11 | 1.00 | | | | | | | | | | |
| (2) Argument diversity | 0.24 | 0.39 | 0.07 | 1.00 | | | | | | | | | |
| | | | (0.29) | | | | | | | | | | |
| (3) Language valence | 0.01 | 0.03 | − 0.06 | 0.21 | 1.00 | | | | | | | | |
| | | | (0.38) | (0.00) | | | | | | | | | |
| (4) Wordcount | 95.75 | 106.19 | − 0.00 | − 0.04 | − 0.20 | 1.00 | | | | | | | |
| | | | (0.97) | (0.49) | (0.00) | | | | | | | | |
| (5) Number of arguments | 2.96 | 2.47 | − 0.08 | − 0.23 | − 0.20 | 0.64 | 1.00 | | | | | | |
| | | | (0.23) | (0.00) | (0.00) | (0.00) | | | | | | | |
| (6) Political ideology^a | 0.43 | 0.05 | − 0.14 | − 0.08 | − 0.17 | − 0.16 | − 0.20 | 1.00 | | | | | |
| | | | (0.03) | (0.22) | (0.01) | (0.01) | (0.00) | | | | | | |
| (7) Age | 44.89 | 17.95 | − 0.21 | 0.11 | 0.05 | 0.03 | 0.05 | 0.09 | 1.00 | | | | |
| | | | (0.00) | (0.08) | (0.41) | (0.60) | (0.41) | (0.18) | | | | | |
| (8) Gender^b | 0.44 | 0.50 | 0.16 | 0.21 | 0.03 | − 0.11 | − 0.27 | 0.17 | − 0.13 | 1.00 | | | |
| | | | (0.01) | (0.00) | (0.61) | (0.09) | (0.00) | (0.01) | (0.04) | | | | |
| (9) Internet usage | 2.43 | 0.56 | 0.04 | − 0.08 | − 0.26 | 0.18 | 0.21 | 0.15 | − 0.04 | − 0.14 | 1.00 | | |
| | | | (0.58) | (0.20) | (0.00) | (0.00) | (0.00) | (0.02) | (0.58) | (0.03) | | | |
| (10) Income | 6.64 | 3.16 | − 0.07 | 0.03 | 0.20 | − 0.08 | − 0.10 | 0.00 | − 0.04 | 0.12 | − 0.73 | 1.00 | |
| | | | (0.24) | (0.61) | (0.00) | (0.23) | (0.13) | (0.95) | (0.55) | (0.07) | (0.00) | | |
| (11) Review stars | 4.26 | 1.25 | − 0.19 | − 0.09 | 0.10 | − 0.27 | − 0.17 | 0.03 | − 0.02 | − 0.10 | − 0.03 | 0.08 | 1.00 |
| | | | (0.00) | (0.17) | (0.13) | (0.00) | (0.01) | (0.59) | (0.77) | (0.12) | (0.62) | (0.20) |
p values in parentheses
To test H1a, H1b, and H2a, we use panel random effects regression models to accommodate the panel structure of our dataset. To test H2b and H2c, we employ a pooled fractional probit model, as argument diversity, the dependent variable, is a fractional outcome variable (Baum 2008; Papke and Wooldridge 2008). To test H3, we use a pooled Tobit model, as language valence, the dependent variable, is a censored variable (Wooldridge 2001). To account for the fact that our observations are not independent but nested within reviewers, we clustered standard errors at the reviewer level in all models. The results for all models are presented in Table 2. Models 1, 3, 5, 7, and 10 are the control models for H1a, H1b, H2a, H2b, and H3, respectively.
Table 2
Regression results
We find marginal support for H1a in Model 2 and support for H1b in Model 4. As anticipated, the results in Model 2 show that the word count, i.e., the review length, is higher for reviews submitted by less conservative reviewers (p < 0.10). Model 4 supports our hypothesis that less conservative reviewers also make use of more arguments in their online reviews (p < 0.05). On average, reviews by less conservative reviewers (political ideology score < 0.5) contain 97 words and 3 arguments, while reviews by more conservative reviewers (political ideology score > 0.5) contain only 71 words and 2 arguments.
Model 6 provides support for H2a: we find a negative and significant (p < 0.01) coefficient for political ideology, suggesting that the more conservative a reviewer, the less cognitively complex the language in their online reviews. Similarly, we find support for H2b in Model 8, albeit with a coefficient for political ideology that is significant only at a lower level (p < 0.05). Contrary to our expectations, we do not find that cognitively complex language mediates the effect of political ideology on argument diversity. As depicted in Model 9, when cognitively complex language is added to Model 8, political ideology still has a significant effect on the dependent variable (p < 0.05). A possible explanation could be that the greater argument diversity exhibited by less conservative reviewers is not only a result of greater cognitive complexity but also directly of greater ambiguity tolerance (Jost et al. 2003).
Finally, the results of Model 11 lend support to H3, as political ideology has a negative and significant (p < 0.05) coefficient. This suggests that more conservative individuals tend to use language with a less positive valence in online reviews.
We ran several robustness checks to assess whether our results hold under alternative estimators. Using OLS models with clustered standard errors provided consistent results, and we obtained very similar results when we re-ran our analyses using multilevel models. While both approaches neglect the fact that our dependent variables are not always continuous, the consistent results further increase our confidence in the reported models.