Distinctions in the attributes of niche versus mainstream brands are leveraged to explain differences in the drivers of online review ratings. Specifically, we examine how customer review valence, professional critics' review valence, community characteristics, location similarity, and reviewer characteristics may impact a reviewer's rating. We use a unique dataset on the U.S. beer product category to address our research questions and find that niche brands are more affected by online word-of-mouth (OWOM) activity across the board because consumers are less likely to have formed brand awareness and brand imagery for them. Likewise, a reviewer is prone to rate a local niche brand more favorably. Professional critics are generally less influential than the online community for the typical focal reviewer. A prior review from the online community becomes particularly influential when the prior reviewer's expertise is high and/or when the prior reviewer shares geographic traits with the focal reviewer. Reviewers who engage more with products and brands tend to align their sentiment with professional critics, while those who engage more with the online community tend to align their sentiment with that community. Drawing on these results, we provide several guidelines for brand managers devising appropriate social media strategies.
An alternative explanation is that as reviewers gain more expertise, their opinions start to mirror or track those of critics (experts), as highly experienced reviewers become de facto experts in their own right. This pattern may result not from critics' influence but from an evolution in taste, or a refining of the palate, that allows an experienced reviewer to detect product attributes that less experienced reviewers miss. To address this possibility, our econometric model controls for the experience of the reviewer, which should at least partially capture the reviewer's evolving taste. In addition, we include time effects, which control for overall changes in tastes and preferences over time. However, it is also important to show that reviewers are exposed to critics' ratings. From www.beeradvocate.com, we obtained evidence suggesting that reviewers are exposed to critics' ratings for the products. First, the website posts the winners of critics' rating competitions. For example, the following link showcases the winners for 2019: https://www.beeradvocate.com/community/threads/2019-great-american-beer-festival-gabf-winners.624310/. Second, the beer reviewers on the online forum discuss critics' ratings. Consider the following example of a reviewer discussing the influence of ratings received from critics at the Great American Beer Festival (GABF): "Setting aside the fact that "best" is subjective, I'm not sure how that's the same logic. GABF is judged by a panel of judges, and I would hope that they are people with refined palates and extensive beer knowledge. In other words, what they think should carry some weight."
Nevertheless, we also estimated an ordered probit model as an alternative analysis approach and found no substantive changes in the results. Prior researchers have reported similar consistency across specifications (e.g., Goldfarb & Tucker, 2011; Koçaş & Akkan, 2016).