
Open Access 03.03.2021 | Original Empirical Research

# Online program engagement and audience size during television ads

Authors: Beth L. Fossen, Alexander Bleier

Published in: Journal of the Academy of Marketing Science | Issue 4/2021


## Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1007/s11747-021-00769-z.
Beth L. Fossen and Alexander Bleier contributed equally to this work.

## Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Introduction

Watching live television is currently the most popular form of media consumption, to which the average U.S. consumer devotes about four hours per day (Nielsen 2019). Given the abundance of television content available, viewers typically engage in substantial channel switching once screened content, such as ads, ceases to meet their preferences (Wilbur 2016). Since audience size—the number of households watching a program at a specific time—is the core performance metric for the television industry, understanding what drives dynamics in viewership over the course of an episode and especially during ads is important to networks and advertisers.
In this research, we leverage the continued megatrend of mobile devices becoming an integral part of consumers’ lives to shed light on this issue. In particular, 85% of U.S. consumers use their mobile devices while watching television (eMarketer 2017; Nielsen 2015a). Moreover, online program engagement (OPE)—viewers’ engagement in social media conversations about television programs, also known as social TV activity (Benton and Hill 2012)—is among the most widespread multiscreen behaviors in which nearly 40% of U.S. multiscreeners engage (IAB 2015). Given that OPE has recently drawn interest as a readily observable indicator for viewers’ program involvement, a measure that is typically difficult to obtain (Nielsen 2015b), we propose that observing viewers’ OPE during live television episodes can help explain audience size during ads.
Prior research has begun to investigate how OPE before and after a program’s airing relates to aggregate1 episode viewership. While the results indicate a link between OPE and consumers’ television consumption, the evidence is mixed: some studies suggest that episode audience size may rise with increasing volumes of OPE prior to a program’s airing (e.g., Gong et al. 2017; Liu et al. 2016), while others find that post-program OPE is more important (Seiler et al. 2017). However, research has so far not examined within-episode fluctuations in audience size and thus offers little guidance in explaining audience size during individual ads. We therefore expand prior work by investigating how viewers’ OPE during television episodes relates to ad audience size in those episodes. Investigating this relationship is important because it provides advertisers and networks with insights on how to strategically place ads to increase ad audience size. Such insights are valuable because (1) ad audience size is vitally important to the television advertising industry and (2) viewers’ channel-changing, as explored in previous research, renders aggregate episode viewership unrepresentative of ad audience size.
To answer our research questions, we build a multisource dataset including second-level data on national primetime television ad instances, audience size during these ads, OPE, that is, social media conversations mentioning the programs in which the ads air, and a rich set of ad and program characteristics. Altogether, the data comprise 8417 ad instances for 248 brands that aired on 83 television programs during the fall 2013 television season.
Our research contributes to the literature in three key ways. First, previous research shows that OPE can relate to aggregate audience size of television programs (e.g., Gong et al. 2017; Liu et al. 2016; Seiler et al. 2017) but has not accounted for changes in audience size over the course of an episode. Building on this work, we present the first examination of the relationship between OPE and audience size during individual ads. Leveraging OPE as an observable measure of viewers’ involvement with an ongoing program, we provide an enhanced understanding of how OPE can serve as an indicator of viewers’ ad response at specific moments of an episode. These results extend insights from extant work on OPE and provide television networks and advertisers with actionable insights on determining ad placements (within social episodes and after social moments) to improve ad audience size.
Second, we enrich the growing debate on whether or not social episodes are beneficial for advertisers (e.g., Nielsen 2015b; Nielsen 2015c). Specifically, we extend Fossen and Schweidel (2017, 2019), who find that airing ads during social episodes impacts the volume of online WOM about the ad and increases online shopping behavior on the advertiser’s website, by showing that ads airing during social episodes see heightened audience size. We further contribute to this research by showing that OPE volume and deviation are distinct measures (correlation r = −.001) to identify social episodes and social moments in episodes. To the best of our knowledge, we are the first to distinguish between these two measures and show that both are individually important to identifying attractive ad opportunities.
Third, technological advances of the evolving digital economy have created opportunities for viewers to engage with programs online and for firms to leverage this data to learn about consumer preferences and markets with unprecedented degrees of accuracy. In this regard, we further emerging research efforts that examine how firms can listen in on social media to generate cost-effective insights about consumer behavior in real-time (e.g., Lamberton and Stephen 2016; Schweidel and Moe 2014). Based on publicly available OPE data (Tweets), we gain a better understanding of audience size dynamics during television episodes and individual ads, information that is currently slow and costly for firms to obtain (e.g., Friedman 2012; Tuchman et al. 2018; expert interviews in Web Appendix 1). As the ongoing process of digitalization will continue to push the frontier on what firms can achieve by leveraging social media data, we hope this research fuels further exploration of innovative approaches for learning about consumer preferences at scale.
We proceed as follows. In the next section, we review relevant extant work on online WOM, OPE, and television advertising as a foundation for our research. We then describe the data and discuss our modeling approach. Last, we present the results of our empirical investigation, discuss corresponding implications, and provide directions for further research.

## Background

Previous research has investigated the relationship between OPE and aggregate measures of audience size at the episode level. We extend this work, as summarized in Table 1, by (1) investigating audience size during individual ads rather than aggregate audience size at the episode level, (2) considering OPE during, instead of before or after, programs, and (3) examining OPE volume and deviation as two distinct measures to identify promising advertising opportunities.
Table 1
Summary of research on the relationship between OPE and audience size

| Study | Measures of OPE | Level of analysis | Dependent variable | Main findings |
|---|---|---|---|---|
| Cadario (2015) | Post-program volume (measured through IMDb votes) and valence (measured through IMDb reviews) | Episode | Aggregate audience size at episode level | Program WOM volume does not significantly relate to DV for early episodes but does increase over time, eventually declining for later episodes; valence of program WOM is not significant |
| Godes and Mayzlin (2004) | Pre-program volume and dispersion (measured through blog conversations) | Episode | Aggregate audience size at episode level | Dispersion of program WOM is associated with higher DV levels; volume of program WOM is not significant |
| Gong et al. (2017) | Pre-program volume (measured through media company’s Tweets about programs and influentials’ Retweets on Sina Weibo) | Episode | Aggregate audience size at episode level | Program WOM volume of company Tweets and influentials’ Retweets before a program’s airing increases program viewership |
| Liu et al. (2016) | Pre-program volume, sentiment, and content (measured using program-related Tweets) | Episode | Aggregate audience size at episode level | Program WOM volume and sentiment improve prediction of DV minimally while content significantly improves prediction of DV |
| Lovett and Staelin (2016) | Pre- and post-program volume (measured through self-reported exposure to online conversations) | Episode | Aggregate audience size at episode level | Program WOM volume increases DV |
| Seiler et al. (2017) | Pre- and post-program volume (measured through Sina Weibo comments about the program) | Episode | Aggregate audience size at episode level | Post-program WOM volume increases DV while pre-program volume is not significant |
| This research | Volume and deviation during episode (measured using program-related Tweets) | Ad | Audience size during individual ads | OPE volume and positive OPE deviation are associated with increases in DV; stronger relationship between OPE deviation and DV for earlier ads in an ad break |
Previous research finds mixed results regarding the relationship between OPE before or after an episode and aggregate episode audience size, as shown in Table 1. Some studies indicate a positive relationship, such as Lovett and Staelin (2016) and Gong et al. (2017), who show that customer-driven offline and online WOM as well as firm-initiated Tweets, respectively, have positive impacts on aggregate program audience size. Also, Liu et al. (2016) find that the volume of Tweets about a program prior to its airing predicts aggregate program audience size better than other online data such as search trends or reviews.
Yet, other studies point to a less clear relationship. Cadario (2015), for instance, shows that online program WOM volume affects aggregate program audience size only for episodes in the middle of a program’s lifetime, while Godes and Mayzlin (2004) and Seiler et al. (2017) find no effect of pre-program online WOM volume on aggregate program audience size. However, Seiler and coauthors show that online program WOM levels after the airing of a program exert positive effects on aggregate program audience size, as viewers anticipate positive utility from post-show online WOM and therefore tune in. Extant work thus offers mixed evidence on the relationship between OPE volume and aggregate episode audience size. Moreover, because this research is based on data aggregated at the program episode level, its insights may be limited. In our study, we further explore this relationship and are the first to investigate how OPE volume and deviation, as observable proxies of viewers’ program involvement, relate to ad audience size and thereby to viewers’ channel-changing behavior over the course of an episode.

### Conceptual framework

We propose that OPE is a measurable representation of viewers’ involvement with television programs. We then build on extant research about how program involvement influences viewers’ response to ensuing advertisements to explain the relationship between OPE and audience size during ads. Involvement is a motivational state elicited by one’s interest in and arousal caused by a specific subject in a particular situation (e.g., Bloch and Richins 1983; Celsi and Olson 1988; Richins et al. 1992). Television viewing is regarded as an involving experience where viewers’ program involvement represents an active, motivated state (Moorman et al. 2007). Overall, involvement is comprised of two core components: (1) enduring involvement, a general sustained care or concern with a subject, and (2) situational involvement, an ephemeral involvement with a subject in a given moment (Houston and Rothschild 1978; Richins et al. 1992). We next present support for OPE as a measure of viewers’ program involvement and then explain how such involvement influences viewers’ response to ensuing advertisements.
#### Program involvement and OPE
Our argumentation about the relationship between OPE and ad audience size rests on the notion that OPE reflects viewers’ program involvement. We provide two key arguments for this. First, and most importantly, viewers’ neurological involvement with a television program exhibits a substantial positive correlation with the volume of program-related Tweets for the program’s general audience (Nielsen 2015b). That is, increases in neurological involvement of non-Tweeting viewers accurately predict increases in OPE. Building on this, prior research has employed OPE as a measurable operationalization of viewer involvement with a television program (Fossen and Schweidel 2019). Extant work thus supports using OPE as a publicly available metric for an audience’s program involvement.
Second, we find further evidence for the relationship between program involvement and OPE in prior research that shows how involvement relates to WOM. For example, involvement motivates consumers to share their personal experience with others (Richins and Root-Shafer 1988). This notion aligns with Dichter’s (1966) observation that involvement in an experience can produce a tension that must be channeled by way of talking about it. Moreover, involvement can determine the extent to which customer satisfaction translates into WOM (Von Wangenheim and Bayón 2007). Research on WOM activity provides further support for the involvement-OPE link. Specifically, several studies show that interest and arousal, factors that directly relate to involvement (e.g., Bloch and Richins 1983; Celsi and Olson 1988; Richins et al. 1992), drive WOM activities. For example, more interesting products, news, and subjects generate more WOM (Berger and Iyengar 2013; Berger and Milkman 2012; Berger and Schwartz 2011). Similarly, several investigations illustrate that topics are more likely to be discussed if they are more arousing (e.g., Berger 2011, 2014; Huang et al. 2017; Ladhari 2007; Luminet et al. 2000).
To accurately capture involvement as the key mechanism underlying the relationship between OPE and ad audience size, we investigate the effects of two focal OPE measures: (1) volume, referring to the relative volume of program-related online WOM in an episode leading up to an ad’s airing and (2) deviation, referring to the difference between OPE volume right before an ad airs and the average per-minute OPE volume in an episode leading up to the ad’s airing. These two measures follow previous work showing that involvement comprises an enduring and a situational component (Houston and Rothschild 1978; Richins et al. 1992). We detail each OPE measure in turn and visually summarize our conceptual framework and corresponding hypotheses in Fig. 1.
#### OPE volume and audience size during ads

H1a: OPE volume in an episode leading up to an ad’s airing relates positively to ad audience size.

H1b: The positive relationship between OPE volume and ad audience size strengthens as ad position increases.
#### OPE deviation and audience size during ads
In addition to a general, enduring level of involvement with a program, particular moments of a show may elicit a sudden situational deviation in involvement. Such ephemeral deviations may constitute an increase in involvement, that is, a positive deviation (e.g., following unexpected, happy, or controversial events) or a decrease in involvement, that is, a negative deviation (e.g., following uneventful or foreseeable events). Building on past involvement research which is typically concerned with the effects of ephemeral surges in involvement (e.g., Feltham and Arnold 1994; Richins et al. 1992; Zaichkowsky 1985), we focus on positive deviations of involvement in our conceptual framework.
Research on the excitation-transfer paradigm proposes that sudden increases in involvement in one situation can influence consumers’ behavior in a subsequent situation. This occurs due to a short-term misattribution of involvement across situations until an individual returns to a normal, baseline level of involvement (Zillmann 1971). Research has investigated this phenomenon in the television context. For instance, Mattes and Cantor (1982) find that viewers perceive ads following highly involving moments of a television program as more enjoyable than the same ads following less involving program content. Further studies find that surges in television viewers’ involvement improve attention to and response toward ads in the short term (e.g., McGrath and Mahood 2004; Wang and Lang 2012). A key aspect of these findings is the short-lived nature of the observed deviations in situational involvement, which only persist until an individual returns to a normal, baseline level of involvement or arousal, a process that does not take long to occur (Zillmann 1971).
In line with this research, we posit that a sudden increase in situational involvement right before the airing of an ad2 will positively impact viewers’ response to the ad, leading them to more readily stay with the ad instead of changing the channel. We thus expect that positive OPE deviations, reflecting heightened situational involvement, will be associated with fewer viewers changing the channel during a subsequent ad and, thus, higher ad audience size.
H2a: Positive OPE deviations prior to an ad relate positively to ad audience size.
We further anticipate the relationship between positive OPE deviations and audience size to also be moderated by ad position. Specifically, the effects of a sudden positive deviation in program-induced interest and/or arousal are likely to decrease over the course of an ad break (Mattes and Cantor 1982; Zillmann 1971). Wang and Lang (2012), for instance, show attitudinal responses toward ads that air after an increase in program involvement to be more positive for ads that air earlier in an ad break. This may occur because the activating effects of a sudden burst in involvement from program content are likely to decay over time, as the temporal distance from the program content increases, and will thus exert less influence on viewers’ response to ads that air later in the ad break. Accordingly, we expect the relationship between positive OPE deviation and ad audience size to be more pronounced for earlier ads in an ad break.
H2b: The positive relationship between positive OPE deviation and audience size during ads weakens as ad position increases.

## Data and measures

### Audience size data

We obtain data on audience size during ads from Comscore, Inc.’s TV Essentials database (Comscore 2020), which includes set-top box tuning data from more than 30 million U.S. households. We use the most granular measure of audience size provided, which is the estimated average number of televisions in U.S. households (extrapolated from the Comscore panel to the U.S. population of TV households) tuned into a given telecast throughout a 30-second interval. For example, this measure provides the number of U.S. households tuned into Dancing with the Stars on a given date from 8:00:00 PM to 8:00:30 PM, 8:00:30 PM to 8:01:00 PM, etc.
While these data provide a granular measure of audience size, the 30-second intervals do not always line up exactly with an ad’s airing. As a remedy, we determine ad audience size using three different approaches. We estimate our model with each of these approaches and find consistent results. For our primary approach, we treat the audience size measures from the data as constant such that viewership for a given episode recorded at 8:00:00 PM would be used to represent the number of households tuned into that episode from 8:00:00 PM–8:00:29 PM. We detail the two alternative approaches to measuring audience size and the corresponding analyses in Web Appendix 3.1.
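To make the primary carry-forward approach concrete, it can be sketched in a few lines. The function names, timestamps, and audience figures below are invented for illustration and do not reflect Comscore’s actual schema:

```python
from datetime import datetime, timedelta

def interval_start(ts: datetime, interval_seconds: int = 30) -> datetime:
    """Floor a timestamp to the start of its 30-second reporting interval."""
    surplus = (ts.second % interval_seconds) + ts.microsecond / 1e6
    return ts - timedelta(seconds=surplus)

def audience_at(ts: datetime, interval_audience: dict) -> int:
    """Carry-forward approach: the audience size recorded at an interval's
    start is treated as constant until the next interval begins."""
    return interval_audience[interval_start(ts)]

# Hypothetical panel: audience recorded at 8:00:00 PM applies through 8:00:29 PM.
panel = {datetime(2013, 10, 7, 20, 0, 0): 4_600_000,
         datetime(2013, 10, 7, 20, 0, 30): 4_580_000}
audience_at(datetime(2013, 10, 7, 20, 0, 17), panel)  # 4_600_000
```

An ad airing mid-interval is thus assigned the audience size recorded at the start of the interval in which it begins.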

### OPE data

We supplement these data with Tweets referring to the programs that the ads air in. We focus on Twitter mentions because the vast majority of public social media conversations about television programming occurs on this platform (Schreiner 2013). For the 83 television programs in our data, we collect the volume of Twitter mentions via Topsy Pro at the minute-level (the most granular level provided by Topsy Pro).4 We collect program-related Twitter mentions by tallying the number of Tweets that contain a program’s name or nickname (e.g., Dancing with the Stars or DWTS), hashtag(s) with a program’s name or nickname, and/or the program’s Twitter handle.
We utilize this data to construct our two OPE measures: volume and deviation. We operationalize OPE volume by first dividing the number of program-related Tweets between the start of the episode and the beginning of the focal ad’s airing by the minutes in the episode up until the ad airs. This creates the average per-minute volume of Tweets about the program. We then divide this number by the number of viewers at the beginning of the focal ad to create a relative measure of volume.5 For example, if the focal ad begins airing 10 min into an episode with 200 viewers and received 100 program-related Tweets during those 10 min, our OPE volume measure would equal .05 ((100/10)/200 = .05).
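The two OPE measures can be sketched as simple functions; the function names and the deviation inputs below are ours, not the authors’ code, and reproduce the worked example from the text:

```python
def ope_volume(tweets_so_far: int, minutes_elapsed: float, viewers: float) -> float:
    """Relative OPE volume: average per-minute program-related Tweets in the
    episode up to the ad, scaled by audience size at the ad's start."""
    return (tweets_so_far / minutes_elapsed) / viewers

def ope_deviation(tweets_last_minute: float, tweets_so_far: int,
                  minutes_elapsed: float) -> float:
    """OPE deviation: Tweets in the minute right before the ad minus the
    episode's average per-minute Tweet volume up to the ad."""
    return tweets_last_minute - tweets_so_far / minutes_elapsed

# Worked example from the text: 100 Tweets in 10 minutes, 200 viewers.
ope_volume(100, 10, 200)    # .05
# Hypothetical: 25 Tweets in the last minute vs. a 10-Tweet-per-minute average.
ope_deviation(25, 100, 10)  # 15.0, a positive deviation
```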
Table 2 summarizes our primary operationalizations of OPE. We moreover illustrate the robustness of our findings by exploring 23 alternative operationalizations of our OPE measures, as detailed in Web Appendices 3.5 and 3.6. Furthermore, while we focus on OPE volume and deviation in our main analysis, we explore OPE valence in the Additional analyses section.
Table 2
Summary of variables

| Variable | Description | Freq. (%) | Mean | SD | Min | Max |
|---|---|---|---|---|---|---|
| AudienceSizeBegi | Audience size at the beginning of ad i | | 4,632,236 | 2,409,515 | 561,563 | 12,603,068 |
| AudienceSizeEndi | Audience size at the end of ad i | | 4,585,324 | 2,380,155 | 561,563 | 12,190,214 |
| OPE | | | | | | |
| | ProgramWOMVolumei is the number of Tweets about the program in which ad i airs from the start of the episode until ad i airs, divided by the minutes in the episode up until ad i airs. We then divide ProgramWOMVolumei by the number of viewers at the beginning of ad i to create our relative volume measure. We take the log of this measure plus one to create LogProgramWOMVolumei (descriptives are shown for ProgramWOMVolumei) | | 266.59 | 510.48 | .00 | 8778.50 |
| | ProgramWOMDeviationi is the difference between the number of Tweets about the program from one minute before ad i airs until ad i airs and ProgramWOMVolumei. We take the log of this measure plus one (see table notes for log transformation) to create LogProgramWOMDeviationi (descriptives are shown for ProgramWOMDeviationi) | | 23.91 | 667.86 | −2271.52 | 32,690.37 |
| | | | 3.53 | 1.94 | 1.00 | 12.00 |
| | | | 25.33 | 8.17 | 5.00 | 120.00 |
| | | 62.10% | | | | |
| | Number of ads aired by the brand in ad i in primetime on the same network and day ad i airs, before ad i airs | | .27 | .62 | .00 | 5.00 |
| | | | .88 | 1.54 | .00 | 13.00 |
| Break position | Relative ad break position in episode, calculated as (position of the ad break in episode)/(number of ad breaks in episode) | | .59 | .29 | .07 | 1.00 |
| Day (Baseline: Sunday) | | 19.26%, 19.18%, 16.94%, 19.39%, 14.30%, .10% | | | | |
| Viewer episode rating | IMDb rating for the episode in which the ad airs | | 7.69 | .88 | 3.70 | 9.80 |
| Half-hour break | Ad airs within two minutes of a half-hour break | 13.24% | | | | |
| Program | Fixed effects for the program ad i airs in | | | | | |
| Special episode | Ad airs on a season premiere | 9.43% | | | | |
| | | 10.57% | | | | |
| Time | Fixed effects for time ad i airs (5-min intervals) | | | | | |
| Model interactions | | | | | | |

Notes: For the log transformation of OPE deviation, if OPE deviation is negative, we take the log of the absolute value of ProgramWOMDeviationi plus 1 and then multiply this value by −1
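The signed log transformation described in the table notes can be written as a one-liner; this is a sketch of the stated rule, not the authors’ code:

```python
import math

def signed_log1p(x: float) -> float:
    """Log-plus-one transform that preserves sign, per the table notes:
    negative deviations map to -log(|x| + 1), non-negative to log(x + 1)."""
    return math.copysign(math.log1p(abs(x)), x)
```

This keeps the transform monotonic and symmetric around zero, so positive and negative deviations of equal magnitude receive values of equal magnitude and opposite sign.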

### Data on advertising and program characteristics

We also collect data on program characteristics that can impact viewers’ channel-changing behavior (e.g., Deng and Mela 2017; Schweidel et al. 2014; Schweidel and Kent 2010; Wilbur 2016). In particular, we capture the program the ad airs in, season premieres, fall finales, and viewers’ ratings of episodes on IMDb (henceforth referred to as viewer episode ratings).
Table 2 summarizes the variables in our empirical analysis. We present the correlations between these variables in Web Appendix 2. Importantly, we find no correlation between our two measures of OPE (r = −.001).

### Descriptive statistics

Table 3
Descriptive statistics for audience size during ads

| Variable | Mean | SD | Min | 1st quartile | Median | 3rd quartile | Max |
|---|---|---|---|---|---|---|---|
| AudienceSizeBegi | 4,632,236 | 2,409,515 | 561,563 | 2,853,649 | 4,589,262 | 6,272,361 | 12,603,068 |
| AudienceSizeEndi | 4,585,324 | 2,380,155 | 561,563 | 2,837,303 | 4,552,178 | 6,213,937 | 12,190,214 |
| Absolute change in audience size during ad | −46,912 | 120,460 | −1,205,684 | −49,177 | −1133 | 4327 | 256,571 |
| Percentage change in audience size during ad | −.95% | 1.99% | −12.10% | −1.39% | −.04% | .14% | 5.67% |
| *by ad position:* 1 | −4.19% | 1.90% | −12.10% | −5.29% | −4.05% | −2.98% | 1.60% |
| 2 | −1.40% | 1.55% | −9.15% | −2.00% | −1.03% | −.27% | 3.08% |
| 3 | −.26% | .63% | −5.58% | −.50% | −.13% | .00% | 4.20% |
| 4 | .05% | .53% | −3.67% | −.13% | .00% | .21% | 5.67% |
| 5 | .23% | .45% | −4.78% | .00% | .13% | .41% | 3.29% |
| 6 | .35% | .51% | −1.70% | .00% | .24% | .53% | 3.72% |
| 7+ | .39% | .60% | −4.76% | .00% | .30% | .65% | 3.12% |

Notes: AudienceSizeBegi (AudienceSizeEndi) is the audience size at the beginning (end) of ad i. Absolute change in audience size is calculated as (AudienceSizeEndi − AudienceSizeBegi); percentage change in audience size is calculated as (AudienceSizeEndi − AudienceSizeBegi)/(AudienceSizeBegi)
Table 4 shows the key descriptive statistics for OPE. The average ad airs in a program that receives 267 Twitter mentions per minute prior to its airing; yet, some ads appear in programs that average as many as 8779 program mentions per minute prior to their airing. We also observe substantial variation in OPE deviation. While the average ad sees an increase of 24 program-related Tweets in the minute before its airing relative to average per-minute program-related Tweets in the episode up until the ad airs, some ads see Tweets increase in the tens of thousands. We further illustrate average OPE volume and average OPE deviation across ad positions in Panel (B) of Fig. 2. While the volume measure generally increases over the break until around the last ad positions, the deviation measure shows an inverted U-shaped relationship that starts low, increases steadily to the midpoint of the break, and then decreases.
Table 4
Descriptive statistics for OPE

| Variable | Mean | SD | Min | 1st quartile | Median | 3rd quartile | Max |
|---|---|---|---|---|---|---|---|
| ProgramWOMVolumei | 266.59 | 510.48 | .00 | 44.63 | 115.17 | 289.31 | 8778.50 |
| *by ad position:* 1 | 266.44 | 440.42 | .00 | 46.07 | 115.83 | 314.93 | 5366.52 |
| 2 | 257.45 | 489.21 | .25 | 41.85 | 110.08 | 273.42 | 7291.29 |
| 3 | 266.32 | 544.56 | .00 | 40.54 | 103.17 | 288.16 | 7291.29 |
| 4 | 263.42 | 585.28 | .46 | 42.36 | 106.70 | 262.90 | 8778.50 |
| 5 | 265.73 | 532.77 | .35 | 46.65 | 119.06 | 284.37 | 8778.50 |
| 6 | 283.40 | 504.61 | 2.20 | 52.00 | 129.59 | 304.34 | 4776.16 |
| 7+ | 275.55 | 421.95 | 2.20 | 50.67 | 141.34 | 341.45 | 3728.68 |
| ProgramWOMDeviationi | 23.91 | 667.86 | −2271.52 | −27.23 | −4.78 | 13.25 | 32,690.37 |
| *by ad position:* 1 | −18.61 | 513.21 | −2271.52 | −51.95 | −15.45 | −0.57 | 17,291.02 |
| 2 | −3.88 | 528.39 | −2250.05 | −33.62 | −7.04 | 6.91 | 18,897.20 |
| 3 | 32.17 | 718.47 | −1920.20 | −18.11 | −1.43 | 18.34 | 26,168.95 |
| 4 | 62.74 | 903.27 | −1847.92 | −13.84 | .94 | 28.83 | 32,690.37 |
| 5 | 70.09 | 798.49 | −391.20 | −16.08 | −.75 | 29.37 | 19,048.00 |
| 6 | 34.34 | 625.73 | −547.84 | −26.74 | −4.18 | 16.88 | 17,221.11 |
| 7+ | −10.03 | 136.05 | −911.91 | −37.83 | −7.90 | 5.78 | 1359.64 |

Notes: See Table 2 for variable descriptions
We next develop our empirical model to examine how OPE relates to audience size.

### Model

To investigate the relationship between OPE and audience size during ads, we model the log of audience size at the end of ad instance i (LogAudienceSizeEndi) as follows6:
$${LogAudienceSizeEnd}_i={\beta}_0+{\beta}_1{LogAudienceSizeBeg}_i+{\beta}_2{LogProgramWOMVolume}_i+{\beta}_3{LogProgramWOMDeviation}_i+{\beta}_4{AdPosition}_i+{\beta}_5{LogProgramWOMVolume}_i\times {AdPosition}_i+{\beta}_6{LogProgramWOMDeviation}_i\times {AdPosition}_i+{\gamma X}_i+{\varepsilon}_i,$$
(1)
where LogAudienceSizeBegi is the log of audience size at the beginning of ad i. We assess the relationship between OPE and LogAudienceSizeEndi by considering both OPE volume (LogProgramWOMVolumei) and OPE deviation (LogProgramWOMDeviationi), as defined in the previous section and in Table 2. We also explore the interactions of our two OPE measures with ad position (AdPositioni). Importantly, our data show that both OPE measures exhibit substantial variation across episodes of a program as well as across programs and are, thus, not solely driven by fixed program or episode characteristics (see Web Appendix 2). This suggests sufficient variation to identify the relationships between our OPE measures and ad audience size.
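The structure of Eq. (1), a lagged-dependent-variable regression with two interaction terms, can be sketched on simulated data. All numbers below are invented for illustration, the controls Xi and fixed effects are omitted for brevity, and this is not the paper’s estimation code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated regressors (illustrative, not the paper's data)
log_beg = rng.normal(15.0, 0.5, n)          # LogAudienceSizeBeg_i
vol = rng.normal(0.0, 1.0, n)               # LogProgramWOMVolume_i (mean-centered)
dev = rng.normal(0.0, 1.0, n)               # LogProgramWOMDeviation_i (mean-centered)
pos = rng.integers(1, 8, n).astype(float)   # AdPosition_i
pos -= pos.mean()                           # mean-centered, as in the paper

# Design matrix for Eq. (1): intercept, main effects, and the two interactions
X = np.column_stack([np.ones(n), log_beg, vol, dev, pos, vol * pos, dev * pos])
true_beta = np.array([0.2, 0.98, 0.01, 0.005, -0.002, 0.0, -0.001])
y = X @ true_beta + rng.normal(0.0, 0.01, n)  # LogAudienceSizeEnd_i

# OLS via least squares recovers the coefficients
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With low noise and ample observations, `beta_hat` closely recovers `true_beta`, mirroring how the interaction coefficients β5 and β6 capture the moderating role of ad position.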

### Key identification arguments

Given that we control for the audience size at the beginning of ad i (LogAudienceSizeBegi), any relationship between OPE and LogAudienceSizeEndi is above and beyond the effect of LogAudienceSizeBegi. In this way, LogAudienceSizeBegi functions as a strong predictor that allows us to control for the factors not captured by our model variables that influence audience size prior to ad i’s airing. This approach of using very narrow time windows to investigate our focal relationships leverages an identification strategy employed in extant research to study the causal effects of television advertising (e.g., Fossen and Schweidel 2017; Liaukonyte et al. 2015). The core argument of this identification strategy is that the use of granular time windows makes it unlikely that variables outside the model will impact the outcome measure in such a short time window. Thus, provided a strong and appropriate set of model controls is employed (e.g., relevant and granular fixed effects), the relationship between a predictor and an outcome can be examined during the granular time window. In our setting, we leverage LogAudienceSizeBegi and the fixed effects in Xi, discussed above, as our set of strong model controls.

### Model estimation

Endogeneity may arise if one or both of our OPE measures is correlated with the error term in Eq. (1). One reason this may occur is omitted variable bias. Specifically, if our OPE measures are not strong operationalizations of viewer involvement, the portion of viewer involvement not captured by our measures (or our other model variables) would be correlated with the error term. To mitigate this concern, we (1) construct our OPE measures using data (program-related Tweets) that accurately measure viewers’ neurological involvement with a program (Nielsen 2015b) and (2) include a rich set of model controls as noted above.
Nevertheless, to also formally rule out this potential endogeneity concern and to provide a robust test of our hypothesized relationships, we estimate our model using two different estimation procedures: OLS and Gaussian copulas. The latter is an endogeneity-correcting estimation approach that does not require instrumental variables (Park and Gupta 2012). Instead, this approach explicitly estimates and models the correlation between potentially endogenous variables and the error term by building on their joint distribution function. We leverage the control function approach in which an additional variable is added to the model based on the Gaussian copulas for each suspected endogenous regressor. This approach has been used in recent marketing studies to test and adjust for endogeneity if necessary (e.g., Bornemann et al. 2020; Guitart et al. 2018; Mathys et al. 2016; Wetzel et al. 2018).
Utilizing Gaussian copulas estimation, we test if either or both of our OPE measures are subject to significant endogeneity. For this, we estimate Eq. (1) with the Gaussian copulas approach using bootstrapped standard errors with 1000 replications and find that neither OPE measure is endogenous as the copula terms for both OPE measures, which test the correlations between the measures and the error term, are not significant (copula term for LogProgramWOMVolumei (SE) .002 (.001) with 95% confidence interval [−.000, .005]; copula term for LogProgramWOMDeviationi (SE) .002 (.001) with 95% confidence interval [−.001, .004]). Since endogeneity correction is not needed in our model, we estimate our main analysis of Eq. (1) using OLS and present the results from this estimation in the next section.
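Assuming the standard Park and Gupta (2012) construction, the copula control-function term for a continuous regressor is the regressor pushed through its empirical CDF and then through the inverse standard-normal CDF. The sketch below is a generic illustration of that construction, not the authors’ implementation:

```python
import numpy as np
from statistics import NormalDist

def copula_term(p) -> np.ndarray:
    """Gaussian copula control-function term (Park and Gupta 2012 style):
    transform the suspected endogenous regressor through its empirical CDF,
    then through the inverse standard-normal CDF. The resulting term is added
    to the regression; a significant coefficient on it signals correlation
    between the regressor and the error term."""
    p = np.asarray(p, dtype=float)
    ranks = p.argsort().argsort() + 1   # 1..n ranks (assumes no ties)
    ecdf = ranks / (len(p) + 1)         # keep values strictly inside (0, 1)
    inv = NormalDist().inv_cdf
    return np.array([inv(u) for u in ecdf])
```

One such term would be computed for each OPE measure and appended as an extra regressor, with bootstrapped standard errors used for inference as in the paper.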

## Results

Table 5 presents the relationship between OPE and ad audience size. A positive (negative) coefficient indicates that a variable is associated with higher (lower) audience size at the end of the ad. Because our data are not at the individual viewer level, we do not observe whether a given viewer tunes into or out of a program. Thus, if a variable (e.g., variable X) relates positively to audience size at the ad's end, this reflects (1) attenuated channel-changing behavior and/or (2) more individuals tuning into the program. Note that (2) is an improbable explanation given the short time frame of ads: it would be very unlikely for anyone not watching a show, let alone enough individuals for an effect to become significant, to become aware of X occurring and then tune into the show because of X before the ad completes. Thus, as our data show that audience size typically drops over the course of an ad (see Table 3), a finding supported by the literature (e.g., Danaher 1995; Fossen et al. 2021; Schweidel and Kent 2010), we argue that a variable's positive relationship with ad audience size likely stems from viewers' attenuated channel-changing behavior. This argument further aligns with recent research on the effects of television advertising characteristics on ad audience size (Fossen et al. 2021).
Table 5
Relationship between OPE, ad characteristics, and program characteristics with audience size during ads
Notes: Measures for OPE and ad position are mean-centered for ease of interpretation
* p < .10, ** p < .05
For ease of interpretation, we mean-center the variables that are part of our key interactions, namely OPE volume and deviation and ad position. The effects of these three measures in Table 5 are thus estimated at the mean levels of the variables they interact with.

### Relationship between OPE and audience size during ads

We find support for only one of our two hypotheses about the moderating role of ad position. H1b is not supported: OPE volume leading up to an ad's airing relates positively to audience size during that ad regardless of ad position, as the interaction between OPE volume and ad position is not significant (β = −.358, p = .637). By contrast, the interaction between OPE deviation prior to an ad's airing and ad position is negative and significant (β = −.000, p < .001), supporting H2b. Positive OPE deviations are thus more strongly associated with increased audience size for ads that air earlier in an ad break.

### Exploration of viewer involvement mechanism

The results provide strong evidence that increases in OPE volume and positive OPE deviations relate positively to audience size during ads. In our conceptual framework, we propose that viewers’ program involvement may be the key mechanism underlying these relationships. As such, we might anticipate that OPE would have a weaker (stronger) relationship with audience size in programming conditions where viewer involvement is generally lower (higher) and OPE might be less (more) reflective of true involvement. To investigate this proposal, we use three alternative operationalizations of viewer involvement beyond OPE based on three external programming contexts with lower- and higher-involvement conditions. We re-estimate our model in each context separately for the lower- and higher-involvement conditions, and then compare the respective results.
In the first context, following Nielsen (2011), we argue that viewers’ program involvement may be higher during peak primetime (9 PM–10 PM; higher-involvement condition) compared to other times (lower-involvement condition). In the second context, we propose that viewers may be more involved toward the end of an episode as the show approaches its conclusion (higher-involvement condition) than at its beginning (lower-involvement condition) (e.g., Page 2017). In the third context, we argue that viewers may be more involved with programs that have been on air longer (higher-involvement condition) than newer programs (lower-involvement condition) (e.g., Derrick 2013; Russell and Levy 2012). Consistent with our proposal, we find weaker associations between OPE and audience size during ads in all three lower-involvement conditions than in the corresponding higher-involvement conditions. We present the key results from these analyses in Web Appendix 3.3.
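The split-sample logic of this mechanism check can be sketched as follows. The data are simulated and the effect sizes are illustrative only, with involvement assumed to strengthen the OPE slope as the paper's pattern of results suggests.

```python
# Sketch: estimate the OPE slope separately in a lower- and a higher-
# involvement subsample and compare; data and effect sizes are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

def ols_slope(x, y):
    """Slope from a simple OLS of y on x with an intercept."""
    X = np.column_stack([np.ones(len(x)), x])
    return float(np.linalg.lstsq(X, y, rcond=None)[0][1])

ope = rng.normal(0.0, 1.0, n)
# Assumed pattern: a weaker OPE-audience link when involvement is lower.
audience_low = 0.2 * ope + rng.normal(0.0, 1.0, n)   # lower-involvement condition
audience_high = 0.8 * ope + rng.normal(0.0, 1.0, n)  # higher-involvement condition

slope_low = ols_slope(ope, audience_low)
slope_high = ols_slope(ope, audience_high)
print(f"lower involvement: {slope_low:.2f}, higher involvement: {slope_high:.2f}")
```

Comparing the two subsample slopes mirrors the paper's procedure of re-estimating the model separately in each condition.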

### Might results differ depending on OPE valence?

Research has extensively analyzed WOM valence, producing mixed results on its effects (e.g., Babić Rosario et al. 2016). In the context of movie sales, for example, Chintagunta et al. (2010) show the importance of accounting for valence information when predicting box office sales. By contrast, Liu (2006) finds valence to be less important to this end. In an extension of our main analysis, we thus explore the valence of viewers' OPE. We obtain valence data for each program-related Tweet in our sample from Topsy Pro, which codes Tweets as positive, neutral, or negative by analyzing the weighted sentiment of words and phrases.
We estimate two alternative specifications of our main model where we operationalize our OPE measures using (1) only non-negative Tweets (positive and neutral Tweets) and (2) only negative Tweets. These analyses allow us to determine whether negatively-valenced versus non-negatively-valenced OPE exhibits a different relationship with ad audience size. The key results, shown in Table 6, are consistent with our main findings and illustrate that, regardless of valence, increases in OPE volume and positive OPE deviations relate positively to ad audience size.7 Thus, in line with Liu (2006), valence plays only a limited role in the relationship between OPE and audience size during ads. Moreover, these findings also align with research showing benefits of both positive and negative WOM (Berger et al. 2010; Han et al. 2020).
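The valence split can be sketched as follows, assuming a simple per-ad log of program-related Tweets with sentiment labels; the field names and records are illustrative, not the paper's data.

```python
# Sketch: recompute an OPE volume count separately for non-negative and
# negative Tweets before re-estimating the model; field names are illustrative.
from collections import defaultdict

tweets = [
    {"ad_id": 1, "sentiment": "positive"},
    {"ad_id": 1, "sentiment": "negative"},
    {"ad_id": 1, "sentiment": "neutral"},
    {"ad_id": 2, "sentiment": "negative"},
]

def volume_by_valence(tweets, keep):
    """Count program-related Tweets per ad, keeping only the given sentiments."""
    counts = defaultdict(int)
    for t in tweets:
        if t["sentiment"] in keep:
            counts[t["ad_id"]] += 1
    return dict(counts)

non_negative = volume_by_valence(tweets, {"positive", "neutral"})
negative = volume_by_valence(tweets, {"negative"})
print(non_negative)  # {1: 2}
print(negative)      # {1: 1, 2: 1}
```

The two resulting measures would then each be substituted into the model in place of the overall volume measure.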
Table 6
Key results on the relationship between valence of OPE and audience size during ads

| Variable | Non-negative WOM: Estimate (SE) | Negative WOM: Estimate (SE) |
|---|---|---|
|  | .007 (.000)** | .007 (.000)** |
| **OPE** |  |  |
| LogProgramWOMVolumei | 10.480 (3.689)** | 38.540 (11.550)** |
| LogProgramWOMDeviationi | .001 (.000)** | .001 (.000)** |
| LogProgramWOMVolumei × Ad position | .250 (.929) | −8.974 (3.304)** |
| LogProgramWOMDeviationi × Ad position | −.000 (.000)** | −.000 (.000)* |
| R² | .9995 | .9995 |

Notes: Measures for OPE and ad position are mean-centered for ease of interpretation
* p < .10, ** p < .05

### Do consumers pay less attention to ads when OPE is high?

The results provide convincing evidence that increases in OPE volume and positive OPE deviations relate positively to ad audience size. While a higher audience size is crucial to advertisers and television networks, viewers’ attention to ads is also important. That is, while OPE relates to increased ad audience size, these additional viewers may also ignore the television in favor of their second screens.

## Discussion

This research contributes to extant literature on television advertising and online WOM by presenting the first examination of the relationship between OPE and audience size during ads. We use a multisource dataset of national primetime television ads, audience size during these ads, and social media conversations mentioning the programs in which the ads air. The results show that OPE has a meaningful relationship with ad audience size over and above conventionally studied ad and program characteristics. In particular, we find that increases in OPE volume and positive OPE deviations associate positively with ad audience size and that ad position is an important moderator of the deviation-audience size relationship.

### Managerial implications

#### Social episodes create attractive advertising environments
A lively debate in research and practice revolves around whether social episodes (episodes with high OPE volume) provide attractive ad inventory. This question is relevant for advertisers and networks alike, with important implications for their ad buy negotiations. On the one hand, social episodes may offer attractive ad inventory because they provide more engaged, tuned-in audiences that may respond more favorably to subsequent ads (e.g., Feltham and Arnold 1994; Flomenbaum 2016; Fossen and Schweidel 2019; Mattes and Cantor 1982; Nielsen 2015b). On the other hand, increased involvement with social episodes may hurt viewers' ad response, making the corresponding ad inventory less attractive (e.g., Norris and Colman 1993; Pavelchak et al. 1988; Tavassoli et al. 1995; Teixeira et al. 2014).
Altogether, our research provides evidence in favor of airing ads during social episodes. Specifically, we find that episodes with higher levels of OPE volume prior to the airing of ads yield improved ad audience sizes. Armed with these insights, networks may be better able to negotiate higher rates for ad inventory in their programs that generate social episodes, in both the upfront advertising market, which occurs before the start of the television season, and in the scatter advertising market, which occurs throughout the television season. The results may also help networks incorporate program engagement metrics into the ad buy process, which has proven difficult thus far due to remaining uncertainty about the value of social episodes as worthwhile ad environments (e.g., Calder et al. 2009; Fossen and Schweidel 2019).
For advertisers, our results suggest that buying ad inventory in programs that generate social episodes may be a promising strategy for improved audience size. Even though current television ad-buying practice restricts ad placement requests to, at best, the quarter-hour level of granularity, advertisers can negotiate ad placements in specific programs if desired (e.g., Katz 2013) and thus implement this strategy immediately. A helpful resource to identify programs that create social episodes is Nielsen's (2020) Social Content Ratings, a weekly listing of the best-performing programs with respect to the total OPE activity generated across different social networks.
#### Social moments create attractive advertising environments
Our results further show that ads airing after social moments, characterized by positive OPE deviations, see higher audience sizes, especially when those ads air early in an ad break. These results help television networks and advertisers determine the most attractive ad placements within episodes for improved ad audience size. The findings can moreover guide networks to strategically initiate ad breaks following social moments, for instance during live events, where the timing of ad breaks is typically more flexible.
Social episodes and social moments represent distinct constructs, and social moments can occur in any episode at any time. As such, important additional questions revolve around how to identify episodes that generate social moments and how to predict social moments in specific episodes. To probe these questions, we regress our OPE deviation measure on several program characteristics. The results, shown in Web Appendix 3.7, underscore the independence of social episodes and social moments8 and illustrate two key takeaways. First, several fixed episode characteristics relate to OPE deviation. Specifically, fall final episodes, episodes that air on Tuesdays (relative to Fridays), and episodes with higher viewer episode ratings associate with positive OPE deviations and may thus generate more social moments. By contrast, season premiere episodes, episodes that air on Wednesdays (relative to Fridays), and programs of the drama/adventure, suspense/mystery, or comedy genre (relative to slice-of-life genre) associate with negative OPE deviations and may thus produce fewer social moments.
Second, several within-episode characteristics relate to OPE deviations. For instance, social moments are more likely to occur close to half-hour breaks as well as later in an episode, especially during episodes of the drama/adventure and suspense/mystery genre (relative to slice-of-life genre). By contrast, earlier moments in an episode, especially in episodes of the comedy genre (relative to slice-of-life genre), exhibit comparably less potential to be social.
#### Program content strategies
Our findings may moreover help television networks manage the creation of new program content and ad opportunities more strategically. We show at which point in an ad break a positive OPE deviation associates most strongly with higher ad audience size. These results should allow networks to experiment with and optimize program content strategies more purposefully, such as how to pace program content and/or time ad breaks to viewers' involvement levels at relevant points during an episode.
#### Implications for the future of television advertising in a programmatic world

### Limitations and directions for future research

Our study uses OPE as a scalable measure of program involvement to help explain audience size during ads. Future work may build on our results to overcome its limitations. First, while we focus on how OPE volume and deviation relate to ad audience size, future work may delve more deeply into OPE content. In supplementary analyses, we show that our key results are consistent after considering OPE valence. Yet, further research could investigate under which conditions positive versus negative OPE may relate differently to viewer behavior. Additional dimensions of OPE content could also be the subject of future work. For example, researchers could extract topics that viewers discuss related to a program and see how they relate to ad audience size or other measures of ad effectiveness. Investigating characteristics of the poster of OPE (e.g., number of followers, network centrality, user- versus firm-generated content9) might also be worthwhile. Relatedly, future research may explore the relationship of OPE with other important outcomes in the short- and long-term, such as viewers’ loyalty to specific programs, or explore OPE in non-linear television viewing contexts.
Second, our data do not allow for the direct observation of both an individual viewer’s OPE and channel-changing behavior. Such individual-level data could allow examination of viewer characteristics and within-viewer variation of OPE with channel-changing. Moreover, while we provide evidence for program involvement as the key driver of our investigated relationships, individual-level data as well as lab experiments could allow further exploration of the psychological processes behind the key relationships we investigate.
Last, future research could also expand our exploration of what spurs social moments in episodes. For this, researchers could obtain videos of the episodes and employ manual coding or image/audio recognition technology to identify important content aspects. Such research could also complement our findings by helping television networks produce more involving shows which should also see higher audience size during ad breaks.

## Acknowledgments

The authors thank the Marketing Science Institute, Carroll School of Management at Boston College, Goizueta Business School at Emory University, and Kelley School of Business at Indiana University for providing financial support to help conduct this research. They are also thankful for the helpful suggestions and comments from Girish Mallapragada, David Schweidel, Simone Wies, and participants at the 2019 JAMS Thought Leaders’ Conference on Innovating in the Digital Economy.

## Footnotes

1. We use the term “aggregate” to indicate an episode-level measure of audience size, as opposed to a more granular, within-episode measure of audience size.

2. We detail our operationalization of OPE deviation in the next section. Furthermore, we explore several alternative operationalizations of this measure, including varying the time window relative to the focal ad, in Web Appendix 3.6.

3. The advertising data contain only live programming in the Eastern and Central Time Zones, which account for 76% of the U.S. population (based on U.S. Census Bureau 2013 State Population Estimates). Programming in the Pacific Time Zone is not deemed an initial broadcast since it airs three hours after Eastern/Central programming. The granular level of the audience size and social media data, discussed next, allows us to attribute these measures to the Eastern/Central Time Zone programming.

4. Topsy Pro was a certified Twitter partner with comprehensive access to the public firehose of Twitter posts at the time of data collection. It was later acquired by Apple.
5. Using a relative volume measure allows us to investigate changes in OPE intensity, as our volume measure is then not simply driven by changes in audience size during the episode, which would naturally relate positively to ad audience size. Yet, a relative volume measure may create difficulties for practitioners trying to use our approach, since audience size data are not available in real time. We thus also explore an alternative operationalization of volume where we do not divide by the number of viewers at the beginning of the focal ad and find consistent results (see Web Appendix 3.5).
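The two volume operationalizations described here can be sketched as follows; the numbers are illustrative only, not the paper's data.

```python
# Sketch of the relative vs. absolute OPE volume operationalizations
# (names and numbers are illustrative, not the paper's data).
tweet_count = 480              # program-related Tweets in the window before the ad
viewers_at_ad_start = 1.2e6    # audience size at the beginning of the focal ad

absolute_volume = tweet_count
relative_volume = tweet_count / viewers_at_ad_start  # Tweets per viewer
print(f"{relative_volume:.6f}")  # 0.000400
```

The absolute variant avoids the practitioner's problem that audience size data are not available in real time.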

6. For robustness, we test five alternative dependent variables, all of which yield consistent results. We detail these analyses in Web Appendix 3.2.

7. In addition, consistent with our main analysis, for only non-negative Tweets we find no significant interaction between ad position and OPE volume and a significant negative interaction between ad position and OPE deviation. For only negative Tweets, however, both the OPE volume-ad position interaction and the OPE deviation-ad position interaction are negative and significant. While these results largely align with our main findings, as we explain in the discussion section, we believe that further analyses of OPE content are an important area for future research.

8. That is, OPE volume does not exhibit a significant relationship with OPE deviation.

9. The OPE data we gathered from Topsy Pro do not allow us to distinguish user- from firm-generated content. For one of the programs in our data, we thus re-collected these data for every episode from a second data source, Crimson Hexagon, to probe the ratio of user-generated versus firm-generated content. We find that firm-generated content makes up only a very small share of program-related Tweets (4.7% on average) during its airing.

Literatur
Babić Rosario, A., Sotgiu, F., Valck, K. D., & Bijmolt, T. H. A. (2016). The effect of electronic word of mouth on sales: A meta-analytic review of platform, product, and metric factors. Journal of Marketing Research, 53(June), 297–318. CrossRef
Benton, A., & S. Hill (2012). The spoiler effect? Designing social TV content that promotes ongoing WOM,  Conference on Information Systems and Technology (CIST). Phoenix, AZ.
Berger, J. (2011). Arousal increases social transmission of information. Psychological Science, 22(7), 891–893. CrossRef
Berger, J. (2014). Word of mouth and interpersonal communication: A review and directions for future research. Journal of Consumer Psychology, 24(4), 586–607. CrossRef
Berger, J., & Iyengar, R. (2013). Communication channels and word of mouth: How the medium shapes the message. Journal of Consumer Research, 40(3), 567–579. CrossRef
Berger, J., & Milkman, K. L. (2012). What makes online content viral? Journal of Marketing Research, 49(2), 192–205. CrossRef
Berger, J., & Schwartz, E. M. (2011). What drives immediate and ongoing word of mouth? Journal of Marketing Research, 48(5), 869–880. CrossRef
Berger, J., Sorensen, A. T., & Rasmussen, S. J. (2010). Positive effects of negative publicity: When negative reviews increase sales. Marketing Science, 29(5), 815–827. CrossRef
Bloch, P. H., & Richins, M. L. (1983). A theoretical model for the study of product importance perceptions. Journal of Marketing, 47(3), 69–81. CrossRef
Bornemann, T., Hattula, C., & Hattula, S. (2020). Successive product generations: Financial implications of industry release rhythm alignment. Journal of the Academy of Marketing Science, forthcoming , 48, 1174–1191. CrossRef
Cadario, R. (2015). The impact of online word-of-mouth on television show viewership: An inverted U-shaped temporal dynamic. Marketing Letters, 26(4), 411–422. CrossRef
Calder, B. J., Malthouse, E. C., & Schaedel, U. (2009). An experimental study of the relationship between online engagement and advertising effectiveness. Journal of Interactive Marketing, 23(4), 321–331. CrossRef
Celsi, R. L., & Olson, J. C. (1988). The role of involvement in attention and comprehension processes. Journal of Consumer Research, 15(2), 210–224. CrossRef
Chintagunta, P. K., Gopinath, S. S., & Venkataraman, S. (2010). The effects of online user reviews on movie box office performance: Accounting for sequential rollout and aggregation across local markets. Marketing Science, 29(5), 944–957. CrossRef
Chordia, A. (2018). Programmatic TV future coming into focus slowly, but surely, (May 6). https://​www.​mediapost.​com/​publications/​article/​318812/​programmatic-tv-future-coming-into-focus-slowly-b.​html. Accessed 30 Dec 2020.
Comscore TV Essentials (2020), Average audience, September-December 2013, US.  https://​www.​comscore.​com/​.
Danaher, P. J. (1995). What happens to television ratings during commercial breaks? Journal of Advertising Research, 35(1), 37–37.
Deng, Y., & Mela, C. F. (2017). TV viewing and advertising targeting. Journal of Marketing Research, 55(1), 99–118. CrossRef
Derrick, J. L. (2013). Energized by television: Familiar fictional worlds restore self-control. Social Psychological and Personality Science, 4(3), 299–307. CrossRef
Dichter, E. (1966). How word-of-mouth advertising works. Harvard Business Review, 44((November–December)), 147–166.
eMarketer (2017). Few viewers are giving the TV set their undivided attention, (Nov. 7), https://​www.​emarketer.​com/​Article/​Few-Viewers-Giving-TV-Set-Their-Undivided-Attention/​1016717?​ecid=​NL1001. Accessed 30 Dec 2020.
Feltham, T. S., & Arnold, S. J. (1994). Program involvement and ad/program consistency as moderators of program context effects. Journal of Consumer Psychology, 3(1), 51–77. CrossRef
Flomenbaum, A. (2016). Exclusive: Nielsen study shows that TV advertising drives earned media for brands on Twitter,  The Drum, (February 21).  http://​www.​thedrum.​com/​news/​2016/​02/​21/​exclusive-nielsen-study-shows-tvadvertising-drives-earned-media-brands-twitter. Accessed 30 Dec 2020.
Fossen, B. L., & Schweidel, D. A. (2017). Television advertising and online word-of-mouth: An empirical investigation of social TV activity. Marketing Science, 36(1), 105–123. CrossRef
Fossen, B. L., & Schweidel, D. A. (2019). Social TV, advertising, and sales: Are social shows good for advertisers? Marketing Science, 38(2), 274–295. CrossRef
Friedman, W. (2012). Why TV networks want to move from C3 to C7 ratings, MediaPost (Nov). https://​www.​mediapost.​com/​publications/​article/​187080/​why-tv-networks-want-to-move-from-c3-to-c7-ratings.​html. Accessed 30 Dec 2020.
Friedman, W. (2019). Viacom starts CFlight for Upfront TV Marketers, MediaPost (April).  https://​www.​mediapost.​com/​publications/​article/​334580/​viacom-starts-cflight-for-upfront-tv-marketers.​html. Accessed 30 Dec 2020.
Godes, D., & Mayzlin, D. (2004). Using online conversations to study word-of-mouth communication. Marketing Science, 23(4), 545–560. CrossRef
Gong, S., Zhang, J., Zhao, P., & Jiang, X. (2017). Tweeting as a marketing tool: A field experiment in the TV industry. Journal of Marketing Research, 54(6), 833–850. CrossRef
Guitart, I. A., Gonzalez, J., & Stremersch, S. (2018). Advertising non-premium products as if they were premium: The impact of advertising up on advertising elasticity and brand equity. International Journal of Research in Marketing, 35(3), 471–489. CrossRef
Gustafson, P., & Siddarth, S. (2007). Describing the dynamics of attention to TV commercials: A hierarchical Bayes analysis of the time to zap an ad. Journal of Applied Statistics, 34(5), 585–609. CrossRef
Han, J. A., McDonnell Feit, E., & Srinivasan, S. (2020). Can negative buzz increase awareness and purchase intent? Marketing Letters, 31(1), 89–104. CrossRef
Houston, M. J., & Rothschild, M. L. (1978). Conceptual and methodological perspectives on involvement. In Jain, S. (Ed.),  Research Frontiers in Marketing: Dialogues and Directions (pp.  184–187). Chicago.
Huang, M., Ali, R., & Liao, J. (2017). The effect of user experience in online games on word of mouth: A pleasure-arousal-dominance (PAD) model perspective. Computers in Human Behavior, 75, 329–338. CrossRef
IAB (2015). The changing TV experience: Attitudes and usage across multiple screens, (April).  http://​www.​iab.​com/​insights/​the-changing-tv-experience-attitudes-and-usage-across-multiple-screens/​. Accessed 30 Dec 2020.
Katz, H. (2013). The media handbook: A complete guide to advertising media selection, planning, research, and buying. New York: Routledge.
Kent, R. J., & Schweidel, D. A. (2011). Introducing the ad ECG: How the set-top box tracks the lifeline of television. Journal of Advertising Research, 51(4), 586–593. CrossRef
Kilger, M., & Romer, E. (2007). Do measures of media engagement correlate with product purchase likelihood? Journal of Advertising Research, 47(3), 313–325. CrossRef
Ladhari, R. (2007). The effect of consumption emotions on satisfaction and word-of-mouth communications. Psychology & Marketing, 24(12), 1085–1108. CrossRef
Lafayette, J. (2018a). C3 Primetime ratings dropped 12% during August, Broadcasting+Cable (Sept).  https://​www.​broadcastingcabl​e.​com/​news/​c3-primetime-ratings-dropped-12-during-august. Accessed 30 Dec 2020.
Lafayette, J. (2018b). Most networks plan to use C7 in Upfront, Broadcasting+Cable (Mar).  https://​www.​broadcastingcabl​e.​com/​news/​most-networks-plan-use-c7-upfront-156967. Accessed 30 Dec 2020.
Lamberton, C., & Stephen, A. T. (2016). A thematic exploration of digital, social media, and Mobile marketing: Research evolution from 2000 to 2015 and an agenda for future inquiry. Journal of Marketing, 80(Nov), 146–172. CrossRef
Liaukonyte, J., Teixeira, T., & Wilbur, K. C. (2015). Television advertising and online shopping. Marketing Science, 34(3), 311–330. CrossRef
Liu, Y. (2006). Word of mouth for movies: Its dynamics and impact on box office revenue. Journal of Marketing, 70(July), 74–89. CrossRef
Liu, X., Singh, P. V., & Srinivasan, K. (2016). A structured analysis of unstructured big data by leveraging cloud computing. Marketing Science, 35(3), 363–388. CrossRef
Lovett, M. J., & Staelin, R. (2016). The role of paid, earned, and owned Media in Building Entertainment Brands: Reminding, informing, and enhancing enjoyment. Marketing Science, 35(1), 142–157. CrossRef
Luminet IV, O., Bouts, P., Delie, F., Manstead, A. S., & Rimé, B. (2000). Social sharing of emotion following exposure to a negatively Valenced situation. Cognition & Emotion, 14(5), 661–688. CrossRef
Mathys, J., Burmester, A. B., & Clement, M. (2016). What drives the market popularity of celebrities? A longitudinal analysis of consumer interest in film stars. International Journal of Research in Marketing, 33(2), 428–448. CrossRef
Mattes, J., & Cantor, J. (1982). Enhancing responses to television advertisements via the transfer of residual arousal from prior programming. Journal of Broadcasting & Electronic Media, 26(2), 553–566. CrossRef
McGrath, J. M., & Mahood, C. (2004). The impact of arousing programming and product involvement on advertising effectiveness. Journal of Current Issues & Research in Advertising, 26(2), 41–52. CrossRef
McSherry, J. (1985). The current scope of channel switching. Marketing and Media Decisions, 20(8), 144–146.
Moorman, M., Neijens, P. C., & Smit, E. G. (2007). The effects of program involvement on commercial exposure and recall in a naturalistic setting. Journal of Advertising, 36(1), 121–137. CrossRef
Nielsen (2011). What time is really primetime, (September 14).  https://​www.​nielsen.​com/​us/​en/​insights/​article/​2011/​what-time-is-really-primetime. Accessed 30 Dec 2020.
Nielsen (2015a). New America. New Consumers, (July 21).  https://​www.​nielsen.​com/​us/​en/​insights/​article/​2014/​whats-empowering-the-new-digital-consumer/​. Accessed 30 Dec 2020.
Nielsen (2015b). Brain activity predicts cocial TV engagement, (March 9).  https://​www.​nielsen.​com/​us/​en/​insights/​report/​2015/​brain-activity-predicts-social-tv-engagement. Accessed 30 Dec 2020.
Nielsen (2015c). Live TV + Social Media = Engaged Viewers, (April 6).  http://​www.​nielsen.​com/​us/​en/​insights/​news/​2015/​live-tv-social-media-engaged-viewers.​html. Accessed 30 Dec 2020.
Nielsen (2019). Nielsen total audience report | Q1 2019, (June 28).  https://​www.​nielsen.​com/​us/​en/​insights/​report/​2019/​the-nielsen-total-audience-report-september-2019/​. Accessed 30 Dec 2020.
Nielsen (2020). Social content ratings, (July 28).  https://​www.​nielsensocial.​com/​socialcontentrat​ings. Accessed 30 Dec 2020.
Norris, C. E., & Colman, A. M. (1993). Context effects on memory for television advertisements. Social Behavior and Personality, 21(4), 279–296. CrossRef
Page, D. (2017). What happens to your brain when you Binge-Watch a TV Series, (November 4). https://​www.​nbcnews.​com/​better/​amp/​ncna816991. Accessed 30 Dec 2020.
Park, S., & Gupta, S. (2012). Handling endogenous Regressors by joint estimation using copulas. Marketing Science, 31(4), 567–586. CrossRef
Pavelchak, M. A., Antil, J. H., & Munch, J. M. (1988). The super bowl: An investigation into the relationship among program context, emotional experience, and ad recall. Journal of Consumer Research, 15(3), 360–367. CrossRef
Phalen, P. F. (1998). The market information system and personalized exchange: Business practices in the market for television audiences. Journal of Media Economics, 11(4), 17–34. CrossRef
Richins, M. L., & Root-Shaffer T. (1988). The role of evolvement and opinion leadership in consumer word-of-mouth: An implicit model made explicit, in Houston. In M. J. (Ed.),  Advances in Consumer Research, (vol. 15, pp. 32–36). Provo: Association for Consumer Research. https://​www.​scienceopen.​com/​document?​vid=​ad126ce7-850b-4869-8772-b11d0f66f5f1. Accessed 30 Dec 2020.
Richins, M. L., Bloch, P. H., & McQuarrie, E. F. (1992). How enduring and situational involvement combine to create involvement responses. Journal of Consumer Psychology, 1(2), 143–153. CrossRef
Russell, C. A., & Levy, S. J. (2012). The temporal and focal dynamics of volitional Reconsumption: A phenomenological investigation of repeated hedonic experiences. Journal of Consumer Research, 39(2), 341–359. CrossRef
Schreiner, T. (2013). Amplifiers study: The Twitter users who are most likely to retweet and how to engage them, Twitter, (January 10).  https://​blog.​twitter.​com/​2013/​amplifiers-study-the-twitter-users-who-are-most-likely-to-retweet-and-how-to-engage-them. Accessed 30 Dec 2020.
Schwarz, T. (2019). TV Long View: A guide to the ever-expanding world of ratings data, The Hollywood Reporter, (October 5). https://​www.​hollywoodreporte​r.​com/​live-feed/​tv-ratings-explained-a-guide-what-data-all-means-1245591. Accessed 30 Dec 2020.
Schweidel, D. A., & Kent, R. J. (2010). Predictors of the gap between program and commercial audiences: An investigation using live tuning data. Journal of Marketing, 74(3), 18–33. CrossRef
Schweidel, D. A., & Moe, W. W. (2014). Listening in on social media: A joint model of sentiment and venue format choice. Journal of Marketing Research, 51(4), 387–384. CrossRef
Schweidel, D. A., Foutz, N. Z., & Tanner, R. J. (2014). Synergy or interference: The effect of product placement on commercial break audience decline. Marketing Science, 33(6), 763–780.
Seiler, S., Yao, S., & Wang, W. (2017). Does online word-of-mouth increase demand? (and how?): Evidence from a natural experiment. Marketing Science, 36(6), 838–861.
Siddarth, S., & Chattopadhyay, A. (1998). To zap or not to zap: A study of the determinants of channel switching during commercials. Marketing Science, 17(2), 124–138.
Statista (2019). TV advertising spending in the United States from 2011 to 2020 (in billion U.S. dollars), (February 11). https://www.statista.com/statistics/272404/tv-advertising-spending-in-the-us. Accessed 30 Dec 2020.
Story, L. (2007). Assigning ratings to commercials turns out to be a tricky task, New York Times (March 13). https://www.nytimes.com/2007/03/13/business/media/13adco.html. Accessed 30 Dec 2020.
Tavassoli, N. T., Shultz II, C. J., & Fitzsimons, G. J. (1995). Program involvement: Are moderate levels best for ad memory and attitude toward the ad? Journal of Advertising Research, 35(5), 61–72.
Teixeira, T. S., Wedel, M., & Pieters, R. (2010). Moment-to-moment optimal branding in TV commercials: Preventing avoidance by pulsing. Marketing Science, 29(5), 783–804.
Teixeira, T., Picard, R., & el Kaliouby, R. (2014). Why, when, and how much to entertain consumers in advertisements? A web-based facial tracking field study. Marketing Science, 33(6), 809–827.
Tuchman, A. E., Nair, H. S., & Gardete, P. M. (2018). Television ad-skipping, consumption complementarities and the consumer demand for advertising. Quantitative Marketing and Economics, 16(2), 111–174.
Wang, Z., & Lang, A. (2012). Reconceptualizing excitation transfer as motivational activation changes and a test of the television program context effects. Media Psychology, 15(1), 68–92.
Wangenheim, F. v., & Bayón, T. (2007). The chain from customer satisfaction via word-of-mouth referrals to new customer acquisition. Journal of the Academy of Marketing Science, 35(2), 233–249.
Wetzel, H. A., Hattula, S., Hammerschmidt, M., & van Heerde, H. J. (2018). Building and leveraging sports brands: Evidence from 50 years of German professional soccer. Journal of the Academy of Marketing Science, 46(4), 591–611.
Wilbur, K. C. (2016). Advertising content and television advertising avoidance. Journal of Media Economics, 29(2), 51–72.
Wilbur, K. C., Xu, L., & Kempe, D. (2013). Correcting audience externalities in television advertising. Marketing Science, 32(6), 892–912.
Zaichkowsky, J. L. (1985). Measuring the involvement construct. Journal of Consumer Research, 12(3), 341–352.
Zillmann, D. (1971). Excitation transfer in communication-mediated aggressive behavior. Journal of Experimental Social Psychology, 7(4), 419–434.
Title: Online program engagement and audience size during television ads
Authors: Beth L. Fossen, Alexander Bleier
Publication date: 03.03.2021
Publisher: Springer US
Published in: Journal of the Academy of Marketing Science, Issue 4/2021
Print ISSN: 0092-0703
Electronic ISSN: 1552-7824
DOI: https://doi.org/10.1007/s11747-021-00769-z