
Open Access 01-10-2021

Rationalizing forecast inefficiency

Authors: Charles G. Ham, Zachary R. Kaplan, Zawadi R. Lemayian

Published in: Review of Accounting Studies | Issue 1/2022


Abstract

We show analysts’ own earnings forecasts predict error in their own forecasts of earnings at other horizons, which we argue provides a measure of the extent to which analysts inefficiently use information. We construct our measure by exploiting two sources of variation in analysts’ incentives: (i) more recent forecasts have greater salience at the time of the earnings release so accuracy incentives are higher (lower) at shorter (longer) forecast horizons and (ii) analysts have greater incentives for optimism (pessimism) at longer (shorter) horizons. Consistent with these incentives affecting the incorporation of information into forecasts, we document (i) current year forecasts underweight (overweight) information in shorter (longer) horizon forecasts and (ii) the mis-weighting is more pronounced when recent news is negative—when analysts have greater (weaker) incentives to incorporate the news into shorter (longer) horizon forecasts. Finally, returns tests suggest that forecasts adjusted for the inefficiency we document better represent market expectations of earnings.
Notes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Predicting earnings is a central function of accounting research. Early studies on the properties of earnings relied on time-series models to predict earnings (e.g., Ball and Watts 1972). As analyst forecasts became more widespread, studies documented that analysts provide more accurate forecasts than time-series models and their forecasts generate larger earnings response coefficients, suggesting they better proxy for market expectations, due to both a timing and information advantage (Collins and Hopwood 1980; Fried and Givoly 1982; Brown et al. 1987; Stickel 1990; Schipper 1991). While this evidence fueled the use of analyst forecasts as proxies for market expectations, subsequent studies have documented a range of predictable errors related to publicly available information (e.g., DeBondt and Thaler 1990; Lys and Sohn 1990; Abarbanell 1991; Mendenhall 1991; Abarbanell and Bernard 1992; Easterwood and Nutt 1999; So 2013). Moreover, analyst forecasts’ superiority over time-series models dissipates as the forecast horizon increases, and, in some cases, analysts’ forecasts even become less accurate (Brown et al. 1987; Kross et al. 1990; Lys and Soo 1995; Bradshaw et al. 2012).1
We take a novel approach to predict forecast error: using a regression model that combines analyst forecasts of earnings at multiple horizons into a more accurate forecast than the published version. Whereas prior studies demonstrate that analysts’ forecasts are inefficient by predicting forecast errors using publicly available information, we show that analysts are inefficient in using the information they possess. We argue analysts’ incentives to use information efficiently will vary with horizon, because forecast inaccuracy exposes analysts to greater reputational risk at shorter horizons, given the outstanding forecasts can be more readily compared to actual earnings. If the quality of analysts’ forecasts deteriorates at longer forecast horizons, we hypothesize users of analyst forecasts face a trade-off when obtaining information about expectations of longer horizon earnings.2 Shorter horizon forecasts contain higher quality information about changes in fundamentals, but that information has not been calibrated for longer horizons. In contrast, longer horizon forecasts have horizon-specific information about future earnings but are of lower quality. Therefore we predict more accurate forecasts can be created by combining analysts’ forecasts of earnings at multiple horizons. The model we employ regresses future earnings on earnings forecasts at multiple horizons and allows the weights on those forecasts to vary to minimize squared forecast error.
We find there is substantial information about future earnings in analysts’ forecasts of earnings at other forecast horizons. This suggests that analysts use the information in their own forecasts inefficiently. A simple model regressing the current year’s forecast error on (i) the current quarter’s earnings forecast, (ii) the remainder of the current year’s forecast (i.e., quarters two through four), and (iii) next year’s forecast predicts over 10% of the error in the current year’s forecast.3 The significantly positive coefficient on the current quarter’s forecast suggests analysts underweight the information in that horizon’s forecast. The significantly negative coefficient on next year’s forecast suggests analysts overweight the information in that horizon’s forecast. These results suggest that forecast accuracy could improve by placing more weight on the information in the shorter horizon forecast (which is relatively higher quality) and by placing less weight on the information in the longer horizon forecast (which is relatively lower quality).
Our next set of tests examines whether analyst optimism increases the overweighting of long-horizon information. A long line of literature on the forecast walkdown (e.g., Bradshaw et al. 2016) suggests that managers and investors prefer optimistic forecasts at longer horizons yet beatable expectations at shorter horizons. This suggests analysts’ longer horizon forecasts are of even lower quality when they are more optimistic. We thus expect our model to re-weight analysts’ earnings forecasts to a greater degree when analysts’ incentives for optimism are greater.
We use two proxies to capture analyst optimism. First, we use past returns to capture the horizon-dependent variation in analysts’ incentives to incorporate news into forecasts. We expect analysts will incorporate negative (positive) news more fully into shorter (longer) horizon forecasts, which will tend to make longer horizon forecasts more informationally inefficient in the presence of negative news. Second, we measure the analyst’s actual optimism as the difference between the analyst’s share price target and the outstanding share price. Share price target forecasts, which are often based on a discounted cash flow model, differ significantly from the market’s expectation of cash flows (e.g., Bradshaw et al. 2013). We expect this optimism (or pessimism) will largely be reflected in longer horizon earnings expectations, causing the information in those forecasts to be overweighted. We interact both proxies with the shorter horizon forecast (current quarter’s) and the longer horizon forecast (next year’s). In both cases, we find significantly greater re-weighting in the presence of incentives for optimism—the model places even greater weight on shorter horizon forecasts and even less weight on longer horizon forecasts—along with increased explanatory power.
We then benchmark the ability of analysts’ own forecasts to predict error against a firm characteristic-based model. We find that analysts’ own forecasts predict a similar magnitude of forecast error relative to an extensive set of firm characteristics used by prior studies to predict forecast errors (Hou et al. 2012; Larocque 2013; So 2013). Specifically, analysts’ own forecasts predict 6.5% (10.5%) of forecast error in individual (consensus) forecasts, whereas firm characteristics, defined as the combination of variables used by Larocque (2013) and So (2013), predict 6.2% (10.7%) of forecast error. After incorporating the information in both past returns and share price target optimism, analysts’ own forecasts explain 11.3% of the current year’s forecast error and the incremental predictability from firm characteristics is only 2.2%. These findings are consistent with analysts’ own forecasts explaining much of the error predicted by firm characteristics and suggest that analysts do not have sufficiently strong incentives to efficiently incorporate the information they collect into all of their forecasts.
Next, we assess whether forecasts adjusted for the documented inefficiency provide better proxies for market expectations of earnings (Li and Mohanram 2014). We do so by regressing future returns on the earnings surprise, calculated using both unadjusted forecasts and forecasts adjusted for predictable errors (Gu and Wu 2003; Hughes et al. 2008). Measurement error in expectations will bias the earnings response coefficient toward zero, leading the expectation with the least measurement error to have the greatest coefficient (Brown et al. 1987). The earnings response coefficient is significantly larger when using the adjusted forecast to calculate the earnings surprise, suggesting it better captures market expectations, relative to the unadjusted forecast. We also find that the earnings response coefficient from our adjusted model outperforms the coefficient from a model adjusted for firm characteristics. We thus argue our methodology improves measurement of earnings response coefficients by purging error from earnings expectations (Brown et al. 1987; Kothari and Sloan 1992).4
Finally, we examine forecast frequency to provide evidence consistent with our assumption that analysts’ accuracy incentives decline with the forecast’s horizon. We argue the issuance of a forecast provides a measure of analyst effort and that less frequent forecasting is consistent with weaker accuracy incentives. We find that analysts issue forecasts less frequently at longer forecast horizons, consistent with lower accuracy incentives. We also find analysts’ forecasts are asymmetric—analysts issue shorter horizon forecasts more in response to negative news and longer horizon forecasts more in response to positive news—consistent with stronger incentives to incorporate positive (negative) news at longer (shorter) horizons.
We contribute to the literature by providing an improved measure of earnings expectations. Our approach predicts over 10% of forecast error at the current year’s forecast horizon, a commonly used benchmark that research has shown to be highly accurate (Basu and Markov 2004; Bradshaw et al. 2012). The improvement is comparable to that of more conventional methods of correcting forecast errors, which regress earnings or forecast errors on firm characteristics known to predict forecast biases (So 2013; Larocque 2013). Moreover, our method is more parsimonious, as it requires only the analyst’s own forecasts at multiple horizons. Finally, the estimates we produce have a stronger association with future returns, suggesting they serve as a better measure of earnings expectations.
Second, our analysis relates to how analysts map information into forecasts. Given that analysts possess both timing and information advantages over time-series models, it is unclear why analyst forecasts should be inferior to time-series models as expectations of earnings at long (or indeed any) horizons (Schipper 1991). By using the analyst’s own forecasts to predict forecast error, our evidence suggests much of the predictable error in forecasts arises from a failure to incorporate information into all of the analyst’s forecasts. We find analysts’ own forecasts can predict much of the forecast error explained by firm characteristics, inconsistent with these predictable errors resulting from a failure to collect information or an inability to incorporate collected information into any earnings forecast. We construct our regression model using two simple trade-offs that affect accuracy incentives (e.g., Chen and Jiang 2006; Bagnoli et al. 2008), so we believe our findings are most consistent with incentive-based explanations. This does not rule out behavioral explanations, such as forecasting difficulty increasing with horizon or analysts better understanding the implications of information for shorter horizon earnings. However, these explanations are potentially related: if long-horizon forecast accuracy is sufficiently unimportant to investors, analysts may not have sufficiently strong incentives to learn how information maps into long-horizon earnings.

2 Prior literature

We contend that analysts face trade-offs in producing accurate forecasts. Producing accurate earnings forecasts requires effort to properly calibrate information to the horizon of the forecast. If there are benefits to biasing forecasts—for example, overweighting private information or catering to managers (Bernhardt et al. 2006; Berger et al. 2019)—then analysts will trade off their incentives to bias with their incentives for accuracy. We argue accuracy incentives are stronger (weaker) at shorter (longer) horizons. Our main result documents substantial accuracy improvements by using a regression model that re-weights analysts’ own forecasts according to their incentives to produce accurate forecasts. In this section, we place our study in the context of the literature.

2.1 Forecast inefficiency

In the 1990s and early 2000s, there was significant academic debate as to whether analysts use information efficiently. Several scholars argued that available information does not predict forecast errors (e.g., Givoly 1985; Keane and Runkle 1998; Gu and Wu 2003; Basu and Markov 2004) and that statistical evidence of an association between firm characteristics and forecast errors arose because of outliers or analysts’ loss functions. However, the evidence from more recent studies is more challenging to reconcile with the view that analysts use information efficiently: these studies document larger biases, show evidence of median bias (e.g., Hou et al. 2012, Table 3), and have greater statistical power because of a longer time series.
The intuition behind the studies arguing against “forecast inefficiency” remains persuasive—highly paid professionals engaged in the repetitive task of forecasting earnings are unlikely to make costly and easily identifiable errors (e.g., Kahneman 2011). However, given analysts have weak accuracy incentives (Bagnoli et al. 2008; Brown et al. 2015; Brown et al. 2016), the cost of effort and/or incentives to issue non-Bayesian forecasts could rationally lead to predictable errors. Our study builds on the intuition of the studies arguing for forecast inefficiency by attempting to “rationalize inefficiency.” In particular, we exploit two sources of variation in analysts’ incentives to produce accurate forecasts. First, because more recent forecasts have greater salience at the time of the earnings release, we argue accuracy incentives are higher (lower) at shorter (longer) forecast horizons (e.g., DeBondt and Thaler 1990). Second, analysts have incentives to provide beatable forecasts at shorter horizons and optimistic forecasts at longer horizons (e.g., Ke and Yu 2006; Berger et al. 2019). Thus their incentives for optimism increase with the forecast’s horizon, which decreases accuracy incentives by introducing bias.
There are a number of more specific explanations that our tests cannot differentiate between. For example, motivated reasoning could explain greater overweighting of private information at longer horizons. As Kunda (1990) explains, motivated reasoning suggests that “the motivation to be accurate enhances the use of those beliefs and strategies that are considered most appropriate, whereas the motivation to arrive at particular conclusions enhances use of those that are considered most likely to yield the desired conclusion.” The wider (narrower) range of possible outcomes at longer (shorter) horizons allows analysts to (disciplines analysts not to) overweight their private information and underweight firm characteristics (Bradshaw et al. 2016). Motivated reasoning thus predicts that analysts will use information less efficiently in forecasts where they are less constrained (i.e., longer horizon forecasts). Alternatively, analysts could be cognizant of the reason for biasing forecasts. For example, analysts could strategically overweight private information and may not comply with Bayes’ rule because sales incentives make such a strategy optimal. Further, weak accuracy incentives could simply lead analysts to exert less effort calibrating their short-horizon information to longer horizons. We cannot pinpoint the precise strategic explanation for the inefficiency documented herein.

3 Sample selection and descriptive statistics

3.1 Sample selection

We use samples from both the I/B/E/S detail file and the I/B/E/S consensus file. The detail file analysis allows us to hold the information set of the analyst constant by comparing forecasts across horizons issued by the same analyst on the same date. This ensures each forecast was made with an identical information set. A downside of the detail file analysis is that we require the analyst to forecast each horizon, which imposes somewhat restrictive sampling procedures and introduces generalizability concerns, because analysts sometimes do not respond to information at each forecast horizon (as we illustrate in Section 4.5). The consensus file analysis, which uses the average forecast across analysts, mitigates this concern while allowing us to benchmark our results against the literature on forecast error predictability, which predominantly uses the consensus file (Hou et al. 2012; So 2013; Larocque 2013). Therefore we view both sets of analyses as complements.
In both the detail and summary file samples, we use all firm-years over the years 1985 to 2018 from the unadjusted file. In the detail file analysis, we retain analyst-firm-years with analyst forecasts for the first, second, third, and fourth quarter earnings of the current year as well as next year earnings, all issued on the same date between the prior year earnings announcement and the first quarter earnings announcement. In the consensus file analysis, we retain firm-years with consensus forecasts for the first, second, third, and fourth quarter earnings of the current year as well as next year earnings, all with a statistical period end-date between the prior year earnings announcement and the first quarter earnings announcement. If the forecasts are available on the same day more than once for the same analyst-firm-year (firm-year), we retain only the final set of forecasts for the detail file (consensus file). We scale all earnings forecasts and forecast errors by price computed on the trading day before we record the forecast. We require data from CRSP to split-adjust forecasts and actuals to the forecast announcement date (Payne and Thomas 2003). We use data from Compustat to calculate control variables, with all control variables coming from the prior fiscal year. We remove observations with a price below $5 to avoid scaling by small denominators. After imposing these restrictions, the detail (consensus) file sample includes 166,030 (54,073) analyst-firm-years (firm-years). We detail how we select our sample in Appendix B. All continuous variables are winsorized at the top and bottom 1%. Below, we introduce a simple timeline to help clarify when we calculate the variables (Fig. 1). The periods in the timeline correspond to the time subscripts used in the variable names. The forecasts used in the analysis are all made between the prior year’s earnings announcement, EA(YRt-1), and the earnings announcement for the first quarter of the current year, EA(Q1). The control variables all correspond to values from YRt-1 unless stated otherwise.
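The screens above can be summarized in a short pandas sketch. It is illustrative only: the DataFrame `det` and its column names (analyst, firm, year, horizon, fcst_date, eps_fcst, prc) are hypothetical stand-ins rather than I/B/E/S field names, and the split-adjustment and earnings-announcement-window screens are omitted.

```python
import pandas as pd

# Hypothetical detail-file layout: one row per analyst-firm-year-horizon forecast.
HORIZONS = ["Q1", "Q2", "Q3", "Q4", "YR1"]

def build_detail_sample(det: pd.DataFrame) -> pd.DataFrame:
    det = det[det["prc"] >= 5].copy()              # drop observations priced below $5
    det["fore"] = det["eps_fcst"] / det["prc"]     # scale forecasts by prior-day price

    # keep analyst-firm-years where all five horizons were forecast on the same date
    wide = det.pivot_table(index=["analyst", "firm", "year", "fcst_date"],
                           columns="horizon", values="fore").dropna(subset=HORIZONS)

    # if an analyst-firm-year qualifies on several dates, keep the final set of forecasts
    wide = (wide.reset_index()
                .sort_values("fcst_date")
                .groupby(["analyst", "firm", "year"], as_index=False)
                .last())

    # winsorize continuous variables at the top and bottom 1%
    for col in HORIZONS:
        lo, hi = wide[col].quantile([0.01, 0.99])
        wide[col] = wide[col].clip(lo, hi)
    return wide
```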

3.2 Descriptive statistics

Table 1, Panel A, reports descriptive statistics for the detail file sample. The first quarter forecasts are slightly pessimistically biased as the mean forecast error (actual minus forecast) scaled by price equals 0.0005. The forecasts become more optimistic as the horizon increases, moving from a forecast error of −0.0003 for the second quarter forecasts to −0.0014 for the third quarter forecasts to −0.0028 for the fourth quarter forecasts, and to −0.0134 for the next year forecasts (−0.0034 per quarter). These statistics confirm prior findings that analysts’ estimates are optimistic and their errors increase with the forecast horizon (O’Brien 1988; Richardson et al. 2004).
Table 1
Descriptive Statistics and Correlations
Panel A: Detail File Descriptive Statistics

Variable  N  Mean  StdDev  Min  P25  Median  P75  Max
FE(Q1)  166,030  0.0005  0.0053  −0.0245  −0.0006  0.0005  0.0020  0.0197
FE(Q2)  166,030  −0.0003  0.0080  −0.0388  −0.0019  0.0002  0.0021  0.0275
FE(Q3)  166,030  −0.0014  0.0102  −0.0500  −0.0033  −0.0001  0.0019  0.0324
FE(Q4)  166,030  −0.0028  0.0132  −0.0700  −0.0051  −0.0006  0.0019  0.0374
FE(YRt+1)  166,030  −0.0134  0.0530  −0.2478  −0.0276  −0.0049  0.0079  0.1512
FORE(Q1)  166,030  0.0084  0.0145  −0.0591  0.0049  0.0105  0.0153  0.0411
FORE(Q2)  166,030  0.0116  0.0138  −0.0505  0.0074  0.0129  0.0179  0.0501
FORE(Q3)  166,030  0.0135  0.0139  −0.0474  0.0088  0.0142  0.0197  0.0554
FORE(Q4)  166,030  0.0148  0.0143  −0.0468  0.0098  0.0152  0.0210  0.0588
FORE(YRt+1)  166,030  0.0626  0.0463  −0.1470  0.0463  0.0649  0.0841  0.1897
ACCRUALS+  166,030  0.0169  0.0354  0.0000  0.0000  0.0011  0.0176  0.2197
ACCRUALS−  166,030  −0.0148  0.0337  −0.2136  −0.0136  0.0000  0.0000  0.0000
AG  166,030  0.1774  0.3796  −0.3242  0.0012  0.0801  0.2116  2.3740
NODIV  166,030  0.5017  0.5000  0.0000  0.0000  1.0000  1.0000  1.0000
DIV  166,030  0.0119  0.0191  0.0000  0.0000  0.0000  0.0182  0.1063
BTM  166,030  0.4247  0.3108  −0.1343  0.2119  0.3597  0.5651  1.6150
PRC  166,030  43.6163  39.8940  5.5800  18.3750  33.0200  54.6250  261.9200
ACT(YRt-1)+  166,030  0.0521  0.0374  0.0000  0.0294  0.0492  0.0684  0.2060
ACT(YRt-1)−  166,030  0.1064  0.3084  0.0000  0.0000  0.0000  0.0000  1.0000
RETPRE  166,030  0.0736  0.4799  −0.7416  −0.2045  0.0001  0.2361  2.3479
FE(YRt-1)  166,030  0.0002  0.0092  −0.0486  −0.0007  0.0005  0.0020  0.0372
Panel B: Consensus File Descriptive Statistics

Variable  N  Mean  StdDev  Min  P25  Median  P75  Max
FE(Q1)  54,073  0.0001  0.0057  −0.0277  −0.0009  0.0004  0.0018  0.0194
FE(Q2)  54,073  −0.0012  0.0089  −0.0444  −0.0027  0.0000  0.0018  0.0273
FE(Q3)  54,073  −0.0028  0.0115  −0.0590  −0.0047  −0.0005  0.0015  0.0297
FE(Q4)  54,073  −0.0050  0.0166  −0.0958  −0.0071  −0.0010  0.0015  0.0344
FE(YRt+1)  54,073  −0.0213  0.0600  −0.2920  −0.0364  −0.0082  0.0054  0.1413
FORE(Q1)  54,073  0.0083  0.0149  −0.0588  0.0047  0.0104  0.0155  0.0425
FORE(Q2)  54,073  0.0120  0.0143  −0.0500  0.0076  0.0130  0.0184  0.0534
FORE(Q3)  54,073  0.0139  0.0146  −0.0481  0.0090  0.0144  0.0204  0.0569
FORE(Q4)  54,073  0.0154  0.0149  −0.0469  0.0102  0.0156  0.0219  0.0608
FORE(YRt+1)  54,073  0.0661  0.0467  −0.1499  0.0493  0.0679  0.0884  0.1864
ACCRUALS+  54,073  0.0266  0.0534  0.0000  0.0000  0.0039  0.0288  0.3260
ACCRUALS−  54,073  −0.0181  0.0444  −0.2863  −0.0147  0.0000  0.0000  0.0000
AG  54,073  0.1979  0.4078  −0.3171  0.0060  0.0887  0.2333  2.5229
NODIV  54,073  0.5287  0.4992  0.0000  0.0000  1.0000  1.0000  1.0000
DIV  54,073  0.0127  0.0204  0.0000  0.0000  0.0000  0.0196  0.1055
BTM  54,073  0.4683  0.3223  −0.1217  0.2437  0.4059  0.6230  1.6860
PRC  54,073  31.0045  24.3776  5.3125  13.8300  24.4100  40.0800  139.2100
ACT(YRt-1)+  54,073  0.0496  0.0352  0.0000  0.0267  0.0476  0.0672  0.1796
ACT(YRt-1)−  54,073  0.1221  0.3274  0.0000  0.0000  0.0000  0.0000  1.0000
RETPRE  54,073  0.1084  0.5705  −0.7569  −0.2120  0.0048  0.2671  3.0058
FE(YRt-1)  54,073  −0.0007  0.0105  −0.0600  −0.0011  0.0004  0.0019  0.0346
Panel C: Detail File Correlations

Column numbers (1) through (20) refer to the variables as numbered in the rows below.
(1) FE(Q1)  1.000
(2) FE(Q2)  0.410  1.000
(3) FE(Q3)  0.290  0.562  1.000
(4) FE(Q4)  0.210  0.403  0.605  1.000
(5) FE(YRt+1)  0.206  0.370  0.494  0.573  1.000
(6) FORE(Q1)  −0.004  0.038  0.047  0.060  0.030  1.000
(7) FORE(Q2)  0.004  −0.081  −0.047  −0.022  −0.037  0.790  1.000
(8) FORE(Q3)  0.013  −0.065  −0.138  −0.089  −0.076  0.658  0.820  1.000
(9) FORE(Q4)  0.015  −0.046  −0.115  −0.172  −0.112  0.640  0.668  0.717  1.000
(10) FORE(YRt+1)  −0.002  −0.085  −0.139  −0.148  −0.190  0.696  0.812  0.844  0.830  1.000
(11) ACCRUALS+  −0.023  −0.052  −0.073  −0.090  −0.089  0.038  0.081  0.105  0.122  0.135  1.000
(12) ACCRUALS−  −0.034  −0.014  0.016  0.025  0.015  0.109  0.059  0.039  0.038  0.014  0.209  1.000
(13) AG  −0.025  −0.030  −0.028  −0.036  −0.066  −0.021  −0.052  −0.057  −0.054  −0.056  0.153  0.032  1.000
(14) NODIV  0.005  −0.020  −0.033  −0.055  −0.071  −0.242  −0.243  −0.203  −0.166  −0.191  0.070  −0.040  0.177  1.000
(15) DIV  −0.020  0.000  0.016  0.023  0.026  0.188  0.175  0.147  0.112  0.131  −0.046  −0.011  −0.095  −0.627  1.000
(16) BTM  −0.014  −0.067  −0.105  −0.126  −0.121  0.006  0.062  0.119  0.131  0.165  0.153  −0.139  −0.106  −0.031  0.052  1.000
(17) PRC  0.016  0.068  0.097  0.113  0.126  0.143  0.088  0.041  0.006  −0.007  −0.109  0.103  0.068  −0.151  0.034  −0.304  1.000
(18) ACT(YRt-1)+  0.032  −0.050  −0.080  −0.075  −0.073  0.534  0.588  0.585  0.557  0.617  0.098  0.066  −0.075  −0.209  0.168  0.280  −0.067  1.000
(19) ACT(YRt-1)−  −0.044  −0.036  −0.040  −0.060  −0.063  −0.583  −0.563  −0.524  −0.489  −0.502  −0.014  −0.157  0.011  0.223  −0.128  0.018  −0.175  −0.481  1.000
(20) RETPRE  0.049  0.096  0.109  0.117  0.098  0.039  0.000  −0.029  −0.051  −0.059  0.069  −0.152  0.166  0.086  −0.025  −0.253  0.168  −0.216  0.027  1.000
(21) FE(YRt-1)  0.134  0.107  0.105  0.103  0.077  0.077  0.046  0.025  0.039  0.019  0.011  0.025  −0.005  0.003  −0.022  −0.026  0.022  0.128  −0.143  0.050
Panel D: Consensus File Correlations

Column numbers (1) through (20) refer to the variables as numbered in the rows below.
(1) FE(Q1)  1.000
(2) FE(Q2)  0.410  1.000
(3) FE(Q3)  0.302  0.568  1.000
(4) FE(Q4)  0.238  0.430  0.611  1.000
(5) FE(YRt+1)  0.230  0.385  0.504  0.580  1.000
(6) FORE(Q1)  0.030  0.044  0.057  0.081  0.054  1.000
(7) FORE(Q2)  −0.001  −0.063  −0.060  −0.028  −0.039  0.747  1.000
(8) FORE(Q3)  −0.021  −0.080  −0.146  −0.101  −0.089  0.600  0.781  1.000
(9) FORE(Q4)  −0.033  −0.093  −0.156  −0.174  −0.133  0.599  0.604  0.680  1.000
(10) FORE(YRt+1)  −0.043  −0.124  −0.180  −0.168  −0.202  0.678  0.800  0.829  0.824  1.000
(11) ACCRUALS+  −0.047  −0.078  −0.104  −0.121  −0.121  0.067  0.118  0.143  0.169  0.187  1.000
(12) ACCRUALS−  −0.033  −0.021  0.013  0.030  0.022  0.093  0.035  0.013  0.004  −0.014  0.203  1.000
(13) AG  −0.032  −0.033  −0.034  −0.042  −0.081  −0.027  −0.055  −0.059  −0.055  −0.065  0.199  0.055  1.000
(14) NODIV  0.002  −0.026  −0.049  −0.071  −0.099  −0.272  −0.252  −0.207  −0.153  −0.193  0.079  −0.036  0.179  1.000
(15) DIV  −0.011  0.015  0.038  0.045  0.063  0.246  0.200  0.163  0.103  0.155  −0.061  −0.010  −0.128  −0.660  1.000
(16) BTM  −0.039  −0.094  −0.131  −0.151  −0.122  0.053  0.134  0.193  0.204  0.262  0.133  −0.150  −0.151  −0.058  0.101  1.000
(17) PRC  0.056  0.118  0.157  0.171  0.191  0.165  0.098  0.042  −0.004  −0.034  −0.122  0.099  0.025  −0.269  0.134  −0.282  1.000
(18) ACT(YRt-1)+  −0.014  −0.077  −0.097  −0.074  −0.073  0.562  0.612  0.603  0.563  0.651  0.124  0.069  −0.067  −0.241  0.226  0.256  −0.002  1.000
(19) ACT(YRt-1)−  −0.035  −0.045  −0.053  −0.089  −0.090  −0.594  −0.570  −0.518  −0.478  −0.505  −0.054  −0.144  0.004  0.246  −0.157  −0.006  −0.218  −0.525  1.000
(20) RETPRE  0.085  0.140  0.146  0.141  0.120  −0.010  −0.064  −0.091  −0.105  −0.133  0.067  −0.154  0.127  0.115  −0.057  −0.219  0.147  −0.221  0.047  1.000
(21) FE(YRt-1)  0.145  0.130  0.136  0.150  0.111  0.116  0.060  0.023  0.032  0.002  0.000  0.064  0.009  0.002  −0.016  −0.080  0.075  0.165  −0.208  0.104
Panel A (B) reports descriptive statistics for the detail (consensus) file sample. N is the number of observations. StdDev is the standard deviation. Min is the minimum. Max is the maximum. P25 (P75) is the 25th (75th) percentile of the variable’s distribution. Panel C (D) reports correlations for the detail (consensus) file sample. Variable definitions are in Appendix A
Table 1, Panel B, reports descriptive statistics for the consensus file sample, which are generally consistent with the corresponding figures in the detail sample. The first quarter forecasts are slightly pessimistically biased on average, as evidenced by the mean forecast error scaled by price of 0.0001. As in the detail file sample, the consensus forecasts become optimistically biased at the second quarter horizon, and the optimistic bias increases monotonically with horizon through the next year’s forecast. Panels C and D of Table 1 report correlations for the detail and consensus files, respectively.

4 Research design and empirical results

4.1 Testing for cross-horizon forecast error predictability

Studies have examined whether analyst forecasts efficiently use information by regressing forecast error on the forecast for the same horizon (DeBondt and Thaler 1990; Keane and Runkle 1998)—for example, by regressing the current year’s forecast error on the current year’s forecast. Coefficients above (below) zero provide evidence that analysts underweight (overweight) the information incorporated into the forecast.5 We adjust this methodology to test whether analysts efficiently use the information they incorporate into forecasts at any horizon. Specifically, we regress forecast error on earnings forecasts from a variety of horizons, allowing us to assess whether analysts assign Bayesian or non-Bayesian weights to information in their own forecasts at other horizons. Evidence that a different horizon’s earnings forecast has a positive (negative) coefficient would suggest that analysts underweight (overweight) the information in that horizon’s forecast. We interpret the R-squared as the amount analysts can reduce forecast bias by more efficiently weighting the information they have collected and incorporated into their own forecasts. Because accuracy incentives decrease with horizon, we expect forecast bias can be reduced by placing greater weight on the shorter horizon forecasts and less weight on the longer horizon forecasts. We estimate the following model.
$$ FE={\beta}_0+{\beta}_1 FORE(Shorter)+{\beta}_2 FORE(Contemporaneous)+{\beta}_3 FORE(Longer)+\varepsilon $$
(1)
FE is the analyst’s forecast error and is calculated as the firm’s actual earnings per share less the analyst’s forecasted earnings per share scaled by price. We study forecast errors at five horizons: the four quarterly forecasts of the current year (FE(Q1), FE(Q2), FE(Q3), FE(Q4)), and next year’s annual forecast (FE(YRt+1)). Consistent with studies on forecast efficiency, we control for the contemporaneous forecast, but we are primarily interested in the forecasts at shorter and longer horizons. FORE(Contemporaneous) is the analyst’s forecast of earnings per share for the same horizon as the forecast error in the dependent variable. FORE(Shorter) is the sum of the shorter horizon earnings per share forecasts. FORE(Longer) is the sum of the longer horizon earnings per share forecasts. For instance, when the dependent variable is FE(Q3), FORE(Contemporaneous) is the third quarter forecast, FORE(Shorter) is the sum of the first and second quarter forecasts, and FORE(Longer) is the sum of the fourth quarter and next year’s forecasts. All of the forecast variables are scaled by price. Again, the forecasts used to calculate each of the above variables are all made on the same day to hold the analyst’s information set constant (Kang et al. 1994).
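As a concrete illustration, a minimal statsmodels sketch of Eq. (1) at the third-quarter horizon follows. The DataFrame `df` and its price-scaled columns (fe_q3, fore_q1 through fore_yr1, quarter) are hypothetical placeholders rather than variables defined in the paper.

```python
import statsmodels.formula.api as smf

# FORE(Shorter) and FORE(Longer) for the Q3 horizon, per the definitions above
df["fore_shorter"] = df["fore_q1"] + df["fore_q2"]    # first- and second-quarter forecasts
df["fore_longer"] = df["fore_q4"] + df["fore_yr1"]    # fourth-quarter and next-year forecasts

# Eq. (1): regress the Q3 forecast error on same-, shorter-, and longer-horizon forecasts,
# clustering standard errors by calendar quarter
eq1 = smf.ols("fe_q3 ~ fore_shorter + fore_q3 + fore_longer", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["quarter"]})
print(eq1.summary())
# A positive coefficient on fore_shorter (underweighting) and a negative coefficient on
# fore_longer (overweighting) is the pattern documented in the results below.
```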
The results from estimating Eq. (1) are reported in Table 2. In column (1), we regress the first quarter forecast error on a contemporaneous forecast (the first quarter forecast) and a longer horizon forecast (the sum of the second, third, and fourth quarter forecasts as well as next year’s forecast). We observe no forecast error predictability as both of the independent variables have insignificant coefficients and the adjusted R-squared is very low at 0.0001. In column (2), we regress the second quarter forecast error on a shorter horizon forecast (the first quarter forecast), a contemporaneous forecast (the second quarter forecast), and a longer horizon forecast (the sum of the third and fourth quarter forecasts as well as next year’s forecast). The coefficient on the shorter horizon forecast is significantly positive, whereas the coefficient on the longer horizon forecast is significantly negative. Thus the second quarter forecast bias could decrease by placing greater weight on the information in the shorter horizon forecast and less weight on the information in the longer horizon forecasts. We observe similar results in columns (3) to (5) where the dependent variable is the third quarter, fourth quarter, and next year’s forecast error, respectively. In each column, the shorter horizon forecast loads positively, and the longer horizon forecast loads negatively. Further, the predictability of forecast error increases with horizon, as evidenced by the adjusted R-squared increasing monotonically with horizon, from 0.037 at the second quarter horizon to 0.104 at the next year horizon.6
Table 2
Testing for Cross-Horizon Forecast Error Predictability
 
Each cell reports the coefficient with its t-statistic in parentheses; blank cells indicate the variable is not included in that specification.

Variable | (1) FE(Q1) | (2) FE(Q2) | (3) FE(Q3) | (4) FE(Q4) | (5) FE(YRt+1)
FORE(Shorter) | | 0.1592*** (12.7094) | 0.1393*** (11.3024) | 0.1464*** (9.3435) | 0.6207*** (10.8635)
FORE(Contemporaneous) | −0.0061 (−1.3317) | −0.1355*** (−6.9062) | −0.1550*** (−7.9184) | −0.1794*** (−10.1349) | −0.8199*** (−13.3631)
FORE(Longer) | 0.0011 (1.1895) | −0.0101*** (−3.0337) | −0.0436*** (−6.1504) | −0.1009*** (−7.3597) |
CONS | 0.0004*** (5.5826) | 0.0008*** (7.3765) | 0.0013*** (7.7775) | 0.0013*** (4.9005) | 0.0079*** (6.7361)
Observations | 166,030 | 166,030 | 166,030 | 166,030 | 166,030
Adjusted R2 | 0.0001 | 0.0367 | 0.0641 | 0.0778 | 0.1035
This table reports OLS regression results. The dependent variable is the forecast error at five horizons: the four quarters of the current year and next year. The explanatory variables of interest include the contemporaneous forecast, the sum of the shorter-horizon forecasts, and the sum of the longer-horizon forecasts. Robust standard errors are clustered by quarter. T-statistics are reported in parentheses. ***, **, and * denote statistical significance at the 1%, 5%, and 10% levels for two-tailed tests, respectively. Variable definitions are in Appendix A
There are several important takeaways from these analyses. First, analysts’ own forecasts from other horizons do not predict forecast error at the current quarter’s forecast horizon. This suggests the shorter horizon of the forecast disciplines the analyst to incorporate information efficiently. However, we begin to observe forecast inefficiency at the second quarter horizon, and it increases monotonically at longer horizons. We observe shorter horizon forecasts are positively associated with forecast error, while longer horizon forecasts are negatively associated with forecast error.

4.2 Measuring the ability of other horizon forecasts to explain current year forecast error

Next, we examine the ability of other horizon forecasts to explain current year forecast error and contrast the predictability with other sources of predictable error identified in the literature. We focus on current year forecast error predictability, because (i) it is a horizon commonly used by both academics and practitioners to forecast earnings (e.g., Martin et al. 2018; Call et al. 2021) and (ii) the strong forecast error predictability for the third and fourth quarters in Table 2 suggests we can plausibly explain a significant proportion of forecast error at the annual horizon.7 We regress the current year forecast error on the first quarter forecast, the remainder of the current year forecast, and next year’s forecast via the following model.
$$ FE(YR_t)=\beta_0+\beta_1 FORE(Q_1)+\beta_2 FORE(Q_{2\text{-}4})+\beta_3 FORE(YR_{t+1})+\varepsilon . $$
(2)
FE(YRt) is the analyst’s current year forecast error. FORE(Q1) is the first quarter forecast, FORE(Q2–4) is the remainder of the current year forecast (i.e., quarters two through four), and FORE(YRt+1) is next year’s forecast.8
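A minimal sketch of the baseline same-horizon efficiency test and of Eq. (2) follows; the column names (fe_yr0 for the current-year forecast error, fore_yr0, fore_q1, fore_q2_4, fore_yr1) are hypothetical placeholders for the price-scaled variables.

```python
import statsmodels.formula.api as smf

# Baseline: regress the current-year forecast error on the same-horizon forecast
base = smf.ols("fe_yr0 ~ fore_yr0", data=df).fit()

# Eq. (2): the cross-horizon model
eq2 = smf.ols("fe_yr0 ~ fore_q1 + fore_q2_4 + fore_yr1", data=df).fit()

# The cross-horizon model should explain substantially more of the forecast error
print(base.rsquared_adj, eq2.rsquared_adj)
```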
The results from estimating Eq. (2) using the detail file sample are reported in Panel A of Table 3. To present a baseline specification, column (1) reports the results from regressing current year forecast error on the current year forecast, similar to prior forecast efficiency studies (DeBondt and Thaler 1990; Keane and Runkle 1998). The coefficient on the current year forecast is negative and significant, but the model explains minimal variation in forecast error as evidenced by the adjusted R-squared of 0.004. In other words, the traditional approach to detect forecast inefficiency, by regressing forecast error on earnings forecasts for the same horizon, explains little of the error in the current year’s forecast (DeBondt and Thaler 1990; Keane and Runkle 1998).
Table 3
Cross-Horizon Forecast Error Predictability
 
Dependent variable: FE(YRt) in columns (1) through (5) of both panels. Each cell reports the coefficient with its t-statistic in parentheses; blank cells indicate the variable is not included in that specification.

Panel A: Detail File
Variable | (1) | (2) | (3) | (4) | (5)
FORE(YRt) | −0.0387*** (−4.3395) | | | |
FORE(Q1) | | 0.6691*** (12.8813) | | 0.5155*** (9.6217) | 0.5814*** (8.1096)
FORE(Q2–4) | | −0.0842*** (−2.6139) | | −0.1336*** (−3.9286) | −0.2661*** (−9.8370)
FORE(YRt+1) | | −0.1715*** (−5.8713) | | −0.1239*** (−5.0498) | −0.0807*** (−3.9491)
ACCRUALS+ | | | −0.0594*** (−6.0734) | −0.0323*** (−3.3985) | −0.0047 (−0.6952)
ACCRUALS− | | | 0.0175* (1.6639) | −0.0079 (−0.7516) | −0.0166** (−2.1474)
AG | | | −0.0040*** (−6.4097) | −0.0049*** (−7.2153) | −0.0016*** (−3.4926)
NODIV | | | −0.0023*** (−3.7913) | −0.0026*** (−4.7211) | 0.0016** (2.4239)
DIV | | | −0.0077 (−0.3624) | −0.0305 (−1.5687) | 0.0103 (0.5280)
BTM | | | −0.0041*** (−3.3106) | −0.0029*** (−2.8645) | −0.0059*** (−2.8912)
PRC | | | 0.0000*** (5.1567) | 0.0000*** (3.8476) | −0.0001*** (−4.5874)
ACT(YRt-1)+ | | | −0.0702*** (−4.5697) | −0.0068 (−0.3266) | −0.0756*** (−3.9194)
ACT(YRt-1)− | | | −0.0067*** (−6.4623) | −0.0088*** (−6.6664) | −0.0022 (−1.5948)
RETPRE | | | 0.0066*** (8.1497) | 0.0062*** (7.2031) | 0.0040*** (5.6605)
FE(YRt-1) | | | 0.4409*** (8.9222) | 0.3726*** (7.3518) | 0.2280*** (6.7295)
CONS | −0.0024*** (−3.1703) | 0.0042*** (8.1702) | 0.0030*** (2.8836) | 0.0085*** (9.0878) | 0.0142*** (9.1807)
Observations | 166,030 | 166,030 | 166,030 | 166,030 | 166,030
Adjusted R2 | 0.0041 | 0.0647 | 0.0619 | 0.1054 | 0.3223
Out of Sample Adjusted R2 | 0.0014 | 0.0546 | 0.0457 | 0.0836 | NA
Panel B: Consensus File
Variable | (1) | (2) | (3) | (4) | (5)
FORE(YRt) | −0.0528*** (−5.2571) | | | |
FORE(Q1) | | 0.8867*** (16.0396) | | 0.6559*** (13.3663) | 0.8710*** (12.2787)
FORE(Q2–4) | | 0.0080 (0.2869) | | −0.0817** (−2.5986) | −0.2699*** (−7.8975)
FORE(YRt+1) | | −0.3380*** (−12.5360) | | −0.2224*** (−11.0711) | −0.1227*** (−4.7704)
ACCRUALS+ | | | −0.0614*** (−8.6521) | −0.0337*** (−5.6520) | −0.0199*** (−3.4922)
ACCRUALS− | | | 0.0162* (1.9355) | −0.0132* (−1.8890) | −0.0180*** (−2.8560)
AG | | | −0.0049*** (−9.1472) | −0.0060*** (−10.4367) | −0.0026*** (−5.3458)
NODIV | | | −0.0016*** (−3.0729) | −0.0020*** (−4.3077) | 0.0044*** (5.1311)
DIV | | | 0.0486*** (2.7177) | −0.0077 (−0.5334) | 0.0342* (1.8709)
BTM | | | −0.0062*** (−3.5770) | −0.0025* (−1.7849) | −0.0059*** (−2.8778)
PRC | | | 0.0001*** (11.2820) | 0.0001*** (9.3183) | −0.0000*** (−2.6261)
ACT(YRt-1)+ | | | −0.1281*** (−6.7510) | −0.0422* (−1.7636) | −0.1266*** (−7.5485)
ACT(YRt-1)− | | | −0.0103*** (−8.3827) | −0.0114*** (−8.0521) | −0.0018 (−1.2346)
RETPRE | | | 0.0077*** (8.7669) | 0.0065*** (7.4573) | 0.0056*** (8.7284)
FE(YRt-1) | | | 0.5538*** (9.1360) | 0.4351*** (7.2012) | 0.2863*** (5.8168)
CONS | −0.0066*** (−8.3096) | 0.0054*** (7.4018) | −0.0000 (−0.0491) | 0.0078*** (8.6524) | 0.0107*** (7.8001)
Observations | 54,073 | 54,073 | 54,073 | 54,073 | 54,073
Adjusted R2 | 0.0054 | 0.1046 | 0.1067 | 0.1587 | 0.3102
Out of Sample Adjusted R2 | 0.0078 | 0.1016 | 0.0988 | 0.1427 | NA
Panels A and B report OLS regression results. The dependent variable is the current year forecast error. The explanatory variables of interest are the first quarter forecast and next year’s forecast. In Panel A (B), the detail (consensus) file sample is used, and the forecast variables are detail (consensus) forecasts. Column (5) in both Panels includes firm and quarter fixed effects. Robust standard errors are clustered by quarter. T-statistics are reported in parentheses. ***, **, and * denote statistical significance at the 1%, 5%, and 10% levels for two-tailed tests, respectively. Variable definitions are in Appendix A
Column (2) reports the results from estimating Eq. (2) without control variables. The coefficient on the first quarter (next year) forecast is significantly positive (significantly negative). This suggests, in the current year forecast, the analyst underweights (overweights) the information included in the shorter (longer) horizon forecasts. The model substantially increases forecast error predictability as evidenced by the adjusted R-squared of 0.065.9
In column (3), we regress the current year forecast error on a series of firm characteristics that studies have shown to predict forecast error, to compare the power of the forecast inefficiency we document to models used in the literature. The control variables follow both So (2013) and Larocque (2013). Following So (2013), we include controls for prior earnings, accruals, asset growth, dividends, book to market, and price. ACT(YRt-1)+ is the prior year earnings per share scaled by price when positive and zero otherwise. ACT(YRt-1)− is an indicator variable set equal to one when the prior year earnings per share scaled by price is negative and zero otherwise. ACCRUALS is the change in current assets minus the change in cash and cash equivalents minus the change in current liabilities plus the change in current debt, scaled by beginning market value of equity. ACCRUALS+ (ACCRUALS−) is equal to ACCRUALS when ACCRUALS is positive (negative) and zero otherwise. AG is asset growth. DIV is dividends scaled by beginning market value of equity, and NODIV is an indicator set equal to one if no dividends were paid and zero otherwise. BTM is the book value of common equity scaled by the market value of equity. PRC is the share price the day before the forecasts are issued. Following Larocque (2013), we also control for past forecast error and past returns. FE(YRt-1) is the prior year forecast error, and RETPRE is the 12-month market-adjusted return ending the month before the forecast date. Specific variable definitions are reported in Appendix A, and descriptive statistics are reported in Table 1. The adjusted R-squared of 0.062 in column (3) resembles the adjusted R-squared of 0.065 in column (2). Thus the forecast inefficiency we document predicts a similar level of the current year’s forecast error, relative to an extensive set of firm characteristics used in prior studies.
In column (4), we estimate a model combining the firm characteristics and other horizon forecasts to examine whether the two models predict similar errors. The incremental R-squared from adding firm characteristics to the forecast inefficiency model is 0.041. To measure the ability of the forecast inefficiency we document to explain the predictable errors from firm characteristics, we compute the ratio of the incremental R-squared (from incorporating firm characteristics into the forecast inefficiency model) to the R-squared of the model with just the firm characteristics included. We find roughly 34% of the information explained by firm characteristics is also explained by analysts’ forecasts at other horizons (1 − 0.041/0.062). Finally, in column (5), we include firm and quarter fixed effects, to ensure our results are not driven by time or firm-invariant characteristics. In both columns (4) and (5), the coefficient on the first quarter forecast remains significantly positive, and the coefficient on next year’s forecast remains significantly negative.10
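Using the adjusted R-squared values from columns (2), (3), and (4) of Panel A, the 34% overlap figure works out as
$$ 1-\frac{R^2_{(4)}-R^2_{(2)}}{R^2_{(3)}}=1-\frac{0.105-0.065}{0.062}\approx 0.34 . $$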
Importantly, we also demonstrate out-of-sample forecast bias reductions to more fully ensure all of the information used to adjust the forecasts would be available at the time the forecasts are issued. To do so, we estimate rolling, out-of-sample regressions for each of the specifications in Panel A of Table 3 (except column (5), because this specification includes fixed effects). For each year t in the sample, we estimate the regression using the prior three years of data (years t-3 to t-1). We then use the coefficients from these regressions to determine the predicted forecast error in year t. Finally, we report the out-of-sample adjusted R-squared values to gauge the level of forecast bias reduction on an out-of-sample basis. Similar to the in-sample values, the out-of-sample adjusted R-squared is larger in column (2) than in column (3) (0.055 versus 0.046, respectively), and the difference is even larger on a percentage basis.
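The rolling procedure can be sketched as follows; the DataFrame, column names, and helper function are hypothetical, and the formula shown is the column (2) specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

def rolling_oos_error(df: pd.DataFrame, formula: str) -> pd.Series:
    """For each year t, fit the model on years t-3 through t-1 and predict year-t errors."""
    preds = []
    for t in sorted(df["year"].unique()):
        train = df[df["year"].between(t - 3, t - 1)]
        test = df[df["year"] == t]
        if train.empty or test.empty:
            continue
        fit = smf.ols(formula, data=train).fit()
        preds.append(pd.Series(fit.predict(test), index=test.index))
    return pd.concat(preds)

df["pred_fe_oos"] = rolling_oos_error(df, "fe_yr0 ~ fore_q1 + fore_q2_4 + fore_yr1")
# The out-of-sample R-squared compares (fe_yr0 - pred_fe_oos) against the variation in fe_yr0.
```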
In Panel B of Table 3, we conduct a similar set of tests using consensus analyst forecasts rather than individual analyst forecasts. These analyses have several purposes. First, they allow us to assess whether identifying forecast inefficiency yields significant improvements in the consensus forecast, which is more commonly used in the literature to predict earnings because of its greater accuracy (Bradshaw et al. 2012). Second, they allow us to assuage concerns that the results are not generalizable beyond the restrictive sample used in the detail file analyses (which requires the same analyst to forecast earnings at five horizons for the same firm on the same day).
In column (2), using the consensus file sample, we document higher forecast error predictability than in the detail file sample, inconsistent with idiosyncratic errors leading to forecast inefficiency. Specifically, the coefficient on the first quarter forecast increases from 0.669 in the detail file to 0.887 in the consensus file, the coefficient on next year’s forecast decreases from −0.172 to −0.338, and the adjusted R-squared increases from 0.065 to 0.105. Contrasting the forecast inefficiency model in column (2) with the firm characteristic model in column (3), both models continue to predict similar levels of forecast error (adjusted R-squared of 0.105 versus 0.107). In column (4), we see that adding the firm characteristics increases the adjusted R-squared by only 0.054, relative to the model including only forecasts at other horizons, so the information in other forecasts explains nearly 50% of the error predicted by firm characteristics (1 − 0.054/0.107). In column (5), we again include firm and quarter fixed effects. In both columns (4) and (5), the coefficient on the first quarter forecast remains significantly positive, and the coefficient on next year’s forecast remains significantly negative. Overall, information from forecasts at other horizons explains much of the error predicted by firm characteristics, which is consistent with analysts processing the information that predicts errors in at least one horizon’s forecast.

4.3 Analyst optimism and forecast inefficiency

In this section, we examine how optimism affects the forecast inefficiency documented in the prior section. Analysts have incentives to issue optimistically biased longer horizon forecasts (Kang et al. 1994; Ke and Yu 2006; Jackson 2005), because doing so stimulates trading volume and enhances their access to firm managers by catering to the reporting preferences of those managers. In contrast, we expect analysts have strong reputational incentives to provide an accurate forecast for shorter horizon forecasts, given the proximity of the earnings realization. We thus suspect that optimism will lead to greater inefficiency and greater forecast re-weighting in the presence of more optimistic forecasts (i.e., incrementally more weight on the first quarter forecast and incrementally less weight on next year’s forecast). We test this via the following model.
$$ FE(YR_t)=\beta_0+\beta_1 FORE(Q_1)+\beta_2 FORE(Q_{2\text{-}4})+\beta_3 FORE(YR_{t+1})+\beta_4 OPTIMISM+\beta_5 FORE(Q_1)\times OPTIMISM+\beta_6 FORE(YR_{t+1})\times OPTIMISM+\varepsilon . $$
(3)
We use two proxies to capture distinct aspects of optimism (OPTIMISM). First, we use past firm news, because analysts are incentivized to incorporate (not incorporate) negative firm news into their shorter (longer) horizon forecasts.11 We measure past news by multiplying prior returns compounded over the 12 months ending the month before the forecast by negative one, percentile ranking the variable, and rescaling the variable between zero and one (NEWS). The lowest (highest) values of NEWS represent firm-years with the most positive (negative) past returns. Second, we use analysts’ share price target optimism as a measure of the observed optimism, because (i) analysts are incentivized to issue optimistic price targets to generate trading commissions and (ii) analysts’ price targets convey little information about future price changes (e.g., Bradshaw et al. 2013). We measure share price target optimism as the share price target on the I/B/E/S consensus file, divided by the outstanding share price, minus one. We then percentile rank the variable and rescale it between zero and one (SPT). The lowest (highest) values of SPT represent firm-years with the least (greatest) share price target optimism. NEWS captures analysts’ incentives for optimism, whereas SPT captures analysts’ realized optimism.
We interact each optimism variable with the shortest horizon forecast (the current year’s first quarter forecast) and the longest horizon forecast (next year’s annual forecast).12 The coefficients on the main effects of the forecasts can be interpreted as the re-weighting effect when analyst optimism is weakest, and the coefficients on the interaction terms can be interpreted as the incremental re-weighting effect when analyst optimism is strongest. The main effects on the optimism proxies capture their relations with forecast bias, but our interest is in the effect of optimism on forecast inefficiency (thus the interactions).
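A sketch of the proxy construction and the NEWS version of Eq. (3) follows; the column names (ret_pre, price_target, prc, and the forecast variables) are hypothetical, and pandas' rank(pct=True) is used to approximate the percentile-rank-and-rescale step described above.

```python
import statsmodels.formula.api as smf

# NEWS: percentile rank of the negative prior 12-month return (high = most negative news)
df["news"] = (-df["ret_pre"]).rank(pct=True)
# SPT: percentile rank of price-target optimism (target / price - 1)
df["spt"] = (df["price_target"] / df["prc"] - 1.0).rank(pct=True)

# Eq. (3) with the NEWS interactions; replacing news with spt gives the SPT version
eq3 = smf.ols(
    "fe_yr0 ~ fore_q1 + fore_q2_4 + fore_yr1 + news + news:fore_q1 + news:fore_yr1",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["quarter"]})
print(eq3.summary())  # expect positive news:fore_q1 and negative news:fore_yr1 coefficients
```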
In column (1) of Table 4, we report the results from estimating eq. (3), using the optimism proxy NEWS. Consistent with prior research, we find a significantly negative coefficient on the main effect of NEWS (i.e., analysts underreact to past news). We also find a significantly positive coefficient on FORE(Q1)*NEWS and a significantly negative coefficient on FORE(YRt+1)*NEWS. The model incorporating both accuracy incentives related to horizon and the variation in those incentives associated with recent public information predicts 13.6% of forecast error. In column (2) of Table 4, we report the results from estimating eq. (3), using the optimism proxy SPT. The main effect on SPT loads with a significantly negative coefficient, suggesting analysts are consistent in their optimism across both types of forecasts. More importantly, its interaction with FORE(Q1) is significantly positive, and its interaction with FORE(YRt+1) is significantly negative. As predicted, these results suggest that incrementally more (less) weight should be placed on the shorter (longer) horizon forecast when analyst optimism is greater.
Table 4
Incentives for Optimism & Cross-Horizon Forecast Error Predictability
 
Dependent variable: FE(YRt) in columns (1) through (6). Each cell reports the coefficient with its t-statistic in parentheses; blank cells indicate the variable is not included in that specification.

Variable | (1) | (2) | (3) | (4) | (5) | (6)
FORE(Q1) | 0.4097*** (6.0052) | 0.4570*** (3.9256) | 0.2408** (2.2912) | 0.2425*** (3.2599) | 0.3029*** (2.7096) | 0.1243 (1.0647)
FORE(Q2–4) | −0.0189 (−0.6935) | −0.0425 (−1.0338) | −0.0470 (−1.1145) | −0.0948*** (−2.9620) | −0.0932* (−1.9854) | −0.0966** (−2.0144)
FORE(YRt+1) | −0.0981*** (−3.4364) | −0.0286 (−0.6587) | 0.0674 (1.5756) | −0.0354 (−1.3049) | 0.0390 (1.0169) | 0.1080*** (2.7849)
NEWS | −0.0041** (−2.1969) | | −0.0053*** (−2.9759) | −0.0004 (−0.1866) | | −0.0024 (−0.9883)
NEWS*FORE(Q1) | 0.6472*** (6.6168) | | 0.3698*** (2.7433) | 0.6410*** (6.7235) | | 0.3371** (2.5580)
NEWS*FORE(YRt+1) | −0.3155*** (−9.9874) | | −0.1725*** (−4.3912) | −0.2909*** (−9.7925) | | −0.1533*** (−3.9863)
SPT | | −0.0052*** (−2.8117) | −0.0037** (−2.2209) | | 0.0004 (0.2217) | −0.0004 (−0.2706)
SPT*FORE(Q1) | | 0.3160** (2.1142) | 0.2677 (1.5925) | | 0.3844** (2.5187) | 0.3441** (2.0908)
SPT*FORE(YRt+1) | | −0.2536*** (−6.3332) | −0.2273*** (−5.3683) | | −0.2731*** (−7.1028) | −0.2437*** (−5.8622)
CONS | 0.0051*** (6.1723) | 0.0030*** (3.0466) | 0.0041*** (3.4800) | 0.0070*** (5.0898) | 0.0037*** (3.7608) | 0.0052*** (3.1217)
Controls | No | No | No | Yes | Yes | Yes
Observations | 54,073 | 35,509 | 35,509 | 54,073 | 35,509 | 35,509
Adjusted R2 | 0.1364 | 0.0991 | 0.1132 | 0.1705 | 0.1305 | 0.1348
This table reports OLS regression results. The dependent variable is the current year forecast error. The explanatory variables of interest are the first quarter forecast, next year’s forecast, and the corresponding interactions. Results are reported for the consensus file sample, and the forecast variables are consensus forecasts. The control variables in columns (4) to (6) are identical to those included in Table 3. Robust standard errors are clustered by quarter. T-statistics are reported in parentheses. ***, **, and * denote statistical significance at the 1%, 5%, and 10% levels for two-tailed tests, respectively. Variable definitions are in Appendix A
In column (3), we combine both sets of interactions and find the interactions with both NEWS and SPT remain significant. Moreover, the main effect on the first quarter forecast declines from 0.887 (in column (2) of Table 3, Panel B) to 0.241, and the main effect on next year’s forecast becomes insignificantly positive. Thus, when optimism is lowest, we find little evidence of forecast inefficiency.
In columns (4) to (6), we include the firm characteristics from Table 3 to document two main findings. First, after including this series of control variables, the interactions from columns (1) to (3) remain significant. Second, including the interactions, which identify when analysts have incentives to use information less efficiently, substantially attenuates the forecast error predictability from firm characteristics. We measure the ability of our model to explain the predictable forecast errors from firm characteristics as the ratio of the incremental R-squared (the increase in R-squared attributable to incorporating firm characteristics into a model that already includes the forecast variables and interactions) to the R-squared from a model that includes only the firm characteristics. The model that includes the NEWS interactions explains 68% of the forecast error predictability from firm characteristics.13 When estimating our most extensive model including both NEWS and SPT, the incremental R-squared from including firm characteristics is only 2.2% (column (6) versus column (3)), suggesting our most extensive forecast inefficiency model explains 80% of the predictable errors from firm characteristics.
Overall, we conclude that accuracy incentives drive forecast inefficiency and that forecast inefficiency explains a substantial portion of the predictable errors in firm characteristics. Thus we argue that incentives likely explain the majority of the cross-section of predictable forecast errors.14

4.4 Do market expectations adjust for the predictability of forecast errors?

In the next set of analyses, we test whether market expectations adjust for predictable errors. Specifically, we test whether unexpected earnings computed using analyst forecasts adjusted for forecast inefficiency have a stronger association with future returns than published analyst forecasts (Gu and Wu 2003; Hughes et al. 2008). The strength of the association between earnings surprises and future returns should allow us to identify the model the market uses to compute expected earnings. This is because measurement error in the calculation of the earnings surprise will bias the regression coefficient toward zero and generate a smaller R-squared value. If market expectations adjust for predictable errors, removing the predictable errors from the earnings surprise would increase the coefficient estimate on the earnings surprise as well as the R-squared value. We examine whether an adjusted forecast better represents the market’s expectations of earnings, relative to the unadjusted forecast, by estimating the following set of equations.
$$ RET={\beta}_0+{\beta}_1 FE{\left({YR}_t\right)}_{UNADJ}+\varepsilon . $$
(4a)
$$ RET={\theta}_0+{\theta}_1 FE{\left({YR}_t\right)}_{ADJ}+\varepsilon . $$
(4b)
The dependent variable is either market adjusted returns over the 12 months beginning the first month after the first-quarter earnings announcement (RETPOST) or market-adjusted returns over the three days surrounding the current fiscal-year earnings announcement (RETEA).
FE(YRt)UNADJ is the current-year forecast error calculated using analysts’ published forecasts, and FE(YRt)ADJ is the current-year forecast error adjusted for predictable errors. We compare the unadjusted forecast error to the forecast error adjusted for three sources of predictable errors. First, we adjust for predictable errors from analysts’ own forecasts. FE(YRt)ADJ-FORECASTS is the adjusted forecast error, calculated as FE(YRt)UNADJ less the predicted forecast error from estimating column (2) in Panel B of Table 3. Second, we adjust for predictable errors from firm characteristics. FE(YRt)ADJ-CONTROLS is the adjusted forecast error, calculated as FE(YRt)UNADJ less the predicted forecast error from estimating column (3) in Panel B of Table 3. The control variables are measured as of year t-1 to avoid look-ahead bias. Third, we adjust for predictable errors from both sources. FE(YRt)ADJ-BOTH is the adjusted forecast error, calculated as FE(YRt)UNADJ less the predicted forecast error from estimating column (4) in Panel B of Table 3. If the adjusted forecast better represents the market’s expectations of earnings, we expect it to have a stronger association with future returns. Thus the coefficient θ1 would be significantly greater than the coefficient β1 (and the R-squared would significantly increase). We calculate the predicted component of forecast error on an out-of-sample basis, using the same methodology used to calculate the out-of-sample R-squared values in Table 3. Specifically, we estimate the first-stage model on years t-3 to t-1 and then apply the coefficient estimates to the data in year t to calculate the predicted forecast error.
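A minimal sketch of Eqs. (4a) and (4b) follows; the column names are hypothetical, and pred_fe_oos stands for an out-of-sample predicted forecast error such as the one produced by the rolling sketch above.

```python
import statsmodels.formula.api as smf

# Adjusted surprise: published forecast error less the predictable component
df["fe_adj"] = df["fe_unadj"] - df["pred_fe_oos"]

erc_unadj = smf.ols("ret_post ~ fe_unadj", data=df).fit()   # Eq. (4a)
erc_adj = smf.ols("ret_post ~ fe_adj", data=df).fit()       # Eq. (4b)

# A larger slope and R-squared for the adjusted surprise is the pattern consistent with
# the adjusted forecast lying closer to the market's expectation of earnings.
print(erc_unadj.params["fe_unadj"], erc_unadj.rsquared)
print(erc_adj.params["fe_adj"], erc_adj.rsquared)
```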
Table 5 reports the results from estimating eqs. (4a) and (4b). Panel A reports descriptive statistics. Panel B reports the results for the long-window returns. In column (2), the coefficient on the earnings surprise adjusted for forecasts at other horizons is significantly greater than the coefficient on the earnings surprise computed using published forecasts in column (1). The difference is statistically significant at the 1% level. Further, the adjusted R-squared increases by roughly 21% from 0.061 to 0.074. A Vuong (1989) test, which compares R-squared values, indicates this difference is statistically significant at the 1% level.
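The interpretation of these comparisons rests on a standard errors-in-variables intuition (the notation here is ours, added for exposition): if the measured surprise equals the surprise the market actually priced plus uncorrelated noise u, the OLS slope is attenuated toward zero,
$$ {\widehat{\beta}}_1\ \overset{p}{\to }\ {\beta}_1\frac{\sigma_{FE}^2}{\sigma_{FE}^2+{\sigma}_u^2}, $$
so removing predictable error (shrinking the noise variance) should raise both the coefficient and the R-squared, which is the pattern observed in columns (1) and (2).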
Table 5
Do Adjusted Forecasts Better Approximate the Market’s Expectation of Earnings?
Panel A: Descriptive Statistics

Variable                N        Mean      StdDev   Min       P25       Median    P75      Max
RETPOST                 53,536   −0.0008   0.4261   −0.8356   −0.2592   −0.0432   0.1852   1.7091
RETEA                   53,536   0.0029    0.0864   −0.2515   −0.0427   0.0013    0.0470   0.2733
FE(YRt)UNADJ            53,536   −0.0089   0.0359   −0.1956   −0.0140   −0.0010   0.0047   0.0830
FE(YRt)ADJ-FORECASTS    53,536   0.0007    0.0341   −0.2457   −0.0063   0.0039    0.0132   0.2405
FE(YRt)ADJ-CONTROLS     53,536   0.0008    0.0343   −0.2180   −0.0078   0.0029    0.0141   0.1824
FE(YRt)ADJ-BOTH         53,536   0.0007    0.0335   −0.2410   −0.0083   0.0019    0.0131   0.2157
Panel B: Long-Run Return ERC Tests

                               (1)          (2)          (3)          (4)
                               RETPOST      RETPOST      RETPOST      RETPOST
FE(YRt)UNADJ                   2.9408***
                               (18.8512)
FE(YRt)ADJ-FORECASTS                        3.4054***
                                            (20.6695)
FE(YRt)ADJ-CONTROLS                                      3.1933***
                                                         (20.8426)
FE(YRt)ADJ-BOTH                                                       3.4100***
                                                                      (21.0290)
CONS                           0.0255**     −0.0031      −0.0035      −0.0031
                               (2.3539)     (−0.2883)    (−0.3246)    (−0.2862)
Coef = UNADJ (p-val)                        0.0000       0.0024       0.0000
Coef = ADJ-CONTROLS (p-val)                 0.0002                    0.0000
Coef = ADJ-FORECASTS (p-val)                                          0.9219
Observations                   53,536       53,536       53,536       53,536
R2                             0.0614       0.0744       0.0661       0.0717
Panel C: Earnings Announcement Return ERC Tests

                               (1)          (2)          (3)          (4)
                               RETEA        RETEA        RETEA        RETEA
FE(YRt)UNADJ                   0.1335***
                               (8.9678)
FE(YRt)ADJ-FORECASTS                        0.1682***
                                            (9.8660)
FE(YRt)ADJ-CONTROLS                                      0.1508***
                                                         (9.8604)
FE(YRt)ADJ-BOTH                                                       0.1687***
                                                                      (10.2036)
CONS                           0.0041***    0.0028***    0.0027***    0.0028***
                               (6.4351)     (4.3654)     (4.2475)     (4.2929)
Coef = UNADJ (p-val)                        0.0000       0.0343       0.0001
Coef = ADJ-CONTROLS (p-val)                 0.0270                    0.0000
Coef = ADJ-FORECASTS (p-val)                                          0.9395
Observations                   53,536       53,536       53,536       53,536
R2                             0.0031       0.0044       0.0036       0.0043
Panel A reports descriptive statistics for the earnings response coefficient tests. N is the number of observations. StdDev is the standard deviation. Min is the minimum. Max is the maximum. P25 (P75) is the 25th (75th) percentile of the variable’s distribution. Panels B and C report OLS regression results. The explanatory variable of interest is either the published forecast error, FE(YRt)UNADJ, the forecast error adjusted for cross-horizon forecast-error predictability, FE(YRt)ADJ-FORECASTS, the forecast error adjusted for firm characteristics, FE(YRt)ADJ-CONTROLS, or the forecast error adjusted for both, FE(YRt)ADJ-BOTH. In Panel B, the dependent variable is returns over the 12 months beginning the first month after the first quarter earnings announcement (RETPOST). In Panel C, the dependent variable is returns over the three days surrounding the current year earnings announcement (RETEA). P-values are reported for several F-tests, which test the equality of coefficients across the models. Robust standard errors are clustered by quarter. T-statistics are reported in parentheses. ***, **, and * denote statistical significance at the 1%, 5%, and 10% levels for two-tailed tests, respectively. Variable definitions are in Appendix A
In column (3), we benchmark this improvement against that obtained by adjusting the earnings surprise using firm characteristics and lagged news variables (i.e., the regression model used to estimate column (3) in Panel B of Table 3). Consistent with prior research (e.g., Larocque 2013), we find these adjustments substantially improve explanatory power, relative to the unadjusted forecasts. However, the improvement is lower than that obtained by using the adjustment for other horizon forecasts in column (2).
In column (4), we adjust the earnings surprise using the combination of both firm characteristics and other horizon forecasts. As expected, the coefficient on the adjusted earnings surprise is significantly larger than the coefficient on the unadjusted earnings surprise. More importantly, the coefficient on this adjusted earnings surprise is significantly larger than the coefficient adjusted for firm characteristics only (column (3)), but it is not significantly different from the coefficient adjusted for other horizon forecasts only (column (2)). Thus adding the other horizon forecasts to the firm characteristic model improves its explanatory power, but adding the firm characteristics to the other horizon forecast model does not. Untabulated Vuong (1989) tests yield similar inferences as the F-tests.
In Panel C of Table 5, we obtain similar results estimating earnings response coefficients using returns over the three-day window centered on the current fiscal year’s earnings announcement. Earnings surprises adjusted using other horizon forecasts have larger earnings response coefficients than both unadjusted forecasts and forecasts adjusted using a series of firm characteristics.15
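As a minimal sketch of how eqs. (4a) and (4b) can be estimated and compared, assuming a DataFrame with hypothetical columns ret_post, fe_unadj, fe_adj, and quarter (the paper clusters standard errors by quarter; the column names are ours):

import statsmodels.formula.api as smf

def erc(df, surprise_col, ret_col='ret_post', cluster_col='quarter'):
    """Earnings response coefficient: regress returns on a surprise
    measure with standard errors clustered by calendar quarter."""
    d = df.dropna(subset=[ret_col, surprise_col, cluster_col])
    res = smf.ols(f'{ret_col} ~ {surprise_col}', data=d).fit(
        cov_type='cluster', cov_kwds={'groups': d[cluster_col]})
    return res.params[surprise_col], res.rsquared

# beta_unadj, r2_unadj = erc(panel, 'fe_unadj')  # eq. (4a)
# theta_adj, r2_adj = erc(panel, 'fe_adj')       # eq. (4b)
# A larger slope and R-squared for the adjusted surprise is the pattern
# the paper interprets as the adjusted forecast lying closer to the
# market's expectation of earnings.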

4.5 Do analysts issue longer-horizon forecasts less frequently in response to bad news?

Two assumptions underlie our analyses: (i) analysts’ accuracy incentives decrease with horizon and (ii) analysts’ incentives for optimism increase with horizon. Although prior findings are consistent with these assumptions, in this section, we provide evidence on both assumptions by examining the frequency of forecast issuances. While forecast accuracy reflects many factors, the decision to issue a forecast has an intuitive link to effort. Evidence that analysts exert less effort forecasting longer horizon earnings, and that this pattern strengthens when incentives for optimism are greater, would support our assumptions. This analysis also contributes to prior research that finds analysts selectively respond to good news (McNichols and O’Brien 1997; Scherbina 2008) by showing the decision to forecast varies with horizon. That is, longer (shorter) horizon forecasts are more responsive to good (bad) news (Berger et al. 2019).16
We first present descriptive statistics in Table 6, Panel A. As in the previous analyses, the sample is limited to forecasts issued between the prior fiscal year’s earnings announcement and the first fiscal quarter’s earnings announcement. We also require market-adjusted returns from CRSP. Analysts issue more forecasts for the current year horizon than the next year horizon (before the log transformation, 7.8 versus 6.6 forecasts, respectively). For the quarterly forecasts, analysts issue more forecasts for the first (i.e., current) quarter than for each of the next three quarters (before the log transformation, 6.4 forecasts for the first quarter versus 5.4, 5.3, and 5.5 forecasts for the second, third, and fourth quarters, respectively). These findings suggest that analyst effort declines with horizon, consistent with our assumption that accuracy incentives decline with horizon.
Table 6
Analyst Forecast Frequency
Panel A: Descriptive Statistics

Variable        N         Mean     StdDev   Min       P25       Median    P75      Max
RETQTR          120,023   0.0091   0.2034   −0.5246   −0.1009   −0.0009   0.1029   0.7556
RETEAt-1        120,023   0.0020   0.0759   −0.2400   −0.0337   0.0011    0.0379   0.2443
#FORE(YRt)      120,023   1.7452   0.9233   0.0000    1.0986    1.6094    2.3979   3.8918
#FORE(YRt+1)    120,023   1.5932   0.9438   0.0000    0.6931    1.6094    2.3026   3.7377
#FORE(Q1)       120,023   1.5562   0.9481   0.0000    0.6931    1.6094    2.1972   3.7377
#FORE(Q2)       120,023   1.3704   0.9677   0.0000    0.6931    1.3863    2.0794   3.6376
#FORE(Q3)       120,023   1.3646   0.9644   0.0000    0.6931    1.3863    2.0794   3.6376
#FORE(Q4)       120,023   1.3788   0.9774   0.0000    0.6931    1.3863    2.0794   3.6636
Panel B: Quarterly Return Regression Analyses

                (1)           (2)            (3)          (4)          (5)          (6)
                #FORE(YRt)    #FORE(YRt+1)   #FORE(Q1)    #FORE(Q2)    #FORE(Q3)    #FORE(Q4)
RETQTR          −0.1214***    0.0245*        −0.0737***   −0.0079**    0.0004       0.0171**
                (−7.2156)     (1.7655)       (−7.1091)    (−2.1293)    (0.0933)     (2.4457)
#FORE(YRt)                    0.7520***
                              (59.1379)
#FORE(YRt+1)    0.7454***
                (48.4270)
#FORE(Q1)                                                 0.1250***    0.0492***    0.1147***
                                                          (17.7332)    (6.9964)     (5.7194)
#FORE(Q2)                                    0.3763***                 0.5643***    0.3379***
                                             (30.3502)                 (19.9673)    (38.4675)
#FORE(Q3)                                    0.1629***    0.6204***                 0.5412***
                                             (7.1735)     (27.3093)                 (36.6135)
#FORE(Q4)                                    0.2574***    0.2521***    0.3672***
                                             (8.2931)     (13.2560)    (13.7700)
CONS            0.5588***     0.2806***      0.4639***    −0.0182***   0.0082       −0.0014
                (19.7052)     (13.4537)      (16.9631)    (−2.9665)    (0.7718)     (−0.0732)
Fixed Effects   Firm          Firm           Firm         Firm         Firm         Firm
Observations    120,023       120,023        120,023      120,023      120,023      120,023
Adjusted R2     0.8191        0.8253         0.8799       0.9617       0.9649       0.9497
Panel C: Earnings Announcement Return Regression Analyses

                (1)           (2)            (3)          (4)          (5)          (6)
                #FORE(YRt)    #FORE(YRt+1)   #FORE(Q1)    #FORE(Q2)    #FORE(Q3)    #FORE(Q4)
RETEAt-1        −0.2173***    0.0795***      −0.0797***   −0.0492***   −0.0039      0.0671***
                (−11.1117)    (4.5146)       (−4.4386)    (−6.4858)    (−0.5045)    (6.1408)
#FORE(YRt)                    0.7518***
                              (59.1189)
#FORE(YRt+1)    0.7467***
                (48.7045)
#FORE(Q1)                                                 0.1250***    0.0492***    0.1144***
                                                          (17.7089)    (7.0093)     (5.7329)
#FORE(Q2)                                    0.3770***                 0.5643***    0.3382***
                                             (30.3379)                 (19.9606)    (38.5487)
#FORE(Q3)                                    0.1631***    0.6202***                 0.5411***
                                             (7.1990)     (27.2875)                 (36.6643)
#FORE(Q4)                                    0.2575***    0.2523***    0.3673***
                                             (8.3357)     (13.2531)    (13.7622)
CONS            0.5559***     0.2810***      0.4622***    −0.0181***   0.0083       −0.0013
                (19.7020)     (13.5540)      (16.9222)    (−2.9679)    (0.7830)     (−0.0676)
Fixed Effects   Firm          Firm           Firm         Firm         Firm         Firm
Observations    120,023       120,023        120,023      120,023      120,023      120,023
Adjusted R2     0.8187        0.8254         0.8797       0.9617       0.9649       0.9497
Panel A reports descriptive statistics for the analyst forecast frequency tests. N is the number of observations. StdDev is the standard deviation. Min is the minimum. Max is the maximum. P25 (P75) is the 25th (75th) percentile of the variable’s distribution. Panels B and C report OLS regression results. The dependent variable is the log of the number of forecasts issued during the quarter for the respective horizon. In Panel B, the explanatory variable of interest is market-adjusted returns between the last fiscal year’s EA and the first quarter’s EA. In Panel C, the explanatory variable of interest is market-adjusted returns computed over the three-day interval centered on the last fiscal year’s EA. Robust standard errors are clustered by quarter. T-statistics are reported in parentheses. ***, **, and * denote statistical significance at the 1%, 5%, and 10% levels for two-tailed tests, respectively. Variable definitions are in Appendix A.
Next, we regress the log of the number of forecasts during a firm-quarter on market-adjusted returns (calculated during the quarter and at the prior year earnings announcement) while controlling for the number of other horizon forecasts and firm fixed effects. We measure the number of forecasts at both longer and shorter horizons. Using market-adjusted returns together with firm fixed effects removes time-invariant and economy-wide shocks that affect both returns and the information analysts respond to. We use returns to measure news because they provide a comprehensive measure of firm news. We control for forecast activity at other horizons because this captures the average response to news and lets us examine into which horizon forecasts analysts most intensively map information. We estimate the following model.
$$ Log\left(\# Forecasts\right)={\beta}_0+{\beta}_1\ Returns+{\beta}_k\ Log\left(\# Other\ Horizon\ Forecasts\right)+\sum Firm\ Fixed\ Effects+\varepsilon . $$
(5)
If analysts respond more to negative (positive) news in their shorter (longer) horizon forecasts, we expect a smaller coefficient (β1) when the dependent variable is the number of shorter horizon forecasts and a larger coefficient when the dependent variable is the number of longer horizon forecasts. We present the results from estimating eq. (5) in Table 6, Panels B and C. Our main result is that analysts revise shorter horizon forecasts relatively more in response to negative news and longer horizon forecasts relatively more in response to positive news.
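A sketch of one way to estimate eq. (5) follows, absorbing firm fixed effects with a within (firm-demeaning) transformation rather than explicit dummies; the column names (n_fore_yr_t, n_fore_yr_t1, ret_qtr, firm_id) are hypothetical placeholders, and the quarter clustering used in the tables is omitted for brevity.

import pandas as pd
import statsmodels.api as sm

def forecast_frequency_regression(df, dep, news, controls, firm='firm_id'):
    """Eq. (5): log forecast count on returns and other-horizon counts,
    with firm fixed effects absorbed by demeaning within firm."""
    cols = [dep, news] + controls
    d = df.dropna(subset=cols + [firm]).copy()
    within = d.groupby(firm)[cols].transform(lambda x: x - x.mean())
    X = sm.add_constant(within[[news] + controls])
    return sm.OLS(within[dep], X).fit()

# Column (1) of Panel B: current-year forecast counts on quarterly
# returns, controlling for next-year forecast counts.
# res = forecast_frequency_regression(panel, 'n_fore_yr_t', 'ret_qtr', ['n_fore_yr_t1'])
# res.params['ret_qtr']  # expected to be negative for the shortest horizon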
In Panel B, the returns are measured over the corresponding quarter. In column (1), the dependent variable is the log of the number of current fiscal year forecasts, and we control for the number of forecasts for the next fiscal year. We find that returns in this specification load with a significantly negative coefficient, suggesting that forecasts of current year earnings are more responsive to negative news. In column (2), the dependent variable is the log of the number of next fiscal year forecasts, and we find a significantly positive coefficient, suggesting that forecasts of next year earnings respond more to positive news.
In columns (3) through (6), we estimate a similar set of equations, except that the dependent and control variables are the numbers of forecasts for the four quarters of the current fiscal year. Specifically, in column (3), the dependent variable is the number of current quarter forecasts, and the control variables include the number of forecasts issued for the subsequent three quarters. We find a significantly negative coefficient on returns, consistent with the number of forecasts issued for the nearest quarter being more sensitive to negative news. In column (4), we replace the dependent variable with the number of next quarter forecasts and use the number of forecasts issued at all other quarterly horizons as control variables. We find the coefficient on returns remains significantly negative, although it is smaller in magnitude than in column (3). In columns (5) and (6), the coefficient on returns continues to increase monotonically as the forecast horizon lengthens. In column (6), we find a significantly positive coefficient on returns, suggesting that, relative to shorter horizon forecasts, longer horizon forecasts are more sensitive to positive news.
Finally, in Panel C, we re-estimate eq. (5), using as our independent variable of interest market-adjusted returns over the three-day window centered on the prior year earnings announcement. Because earnings announcement returns tend to be driven by public information, this allows cleaner identification of how the sign of news influences analysts’ responses (eliminating concerns about reverse causation, such as analysts’ forecasts themselves moving returns during the quarter). Similar to Panel B, we find (i) significantly negative coefficients on the shortest horizon forecasts (in both annual and quarterly forecast regressions), (ii) significantly positive coefficients on the longest horizon forecasts, and (iii) a monotonic increase in the coefficient on returns as the horizon lengthens. Collectively, these results support our contention that analysts’ incentives for optimism increase with the forecast horizon.

5 Conclusion

We propose a novel measure of forecast inefficiency: the extent to which analysts’ earnings forecasts predict error in their own earnings forecasts at other horizons. Because the analyst has by definition processed the information we use to predict forecast errors, our methodology isolates information that analysts use inefficiently (as opposed to information the analyst did not collect or was unaware of). We document that current year forecasts underweight information in shorter horizon forecasts (the first quarter of the current year) and overweight information in longer horizon forecasts (next year’s forecast). We also document that information from other forecasts predicts errors similar to those predicted by an extensive set of firm characteristics, and a similar percentage of the error, suggesting a failure to incorporate information in other forecasts explains much of the cross-section of predictable errors. Finally, we measure earnings response coefficients to demonstrate that forecasts adjusted for the inefficiency we document better represent the market’s earnings expectations, relative both to unadjusted analysts’ forecasts and to analysts’ forecasts adjusted for predictable errors from firm characteristics.
Because we construct our model using variation in analysts’ accuracy incentives, we argue our analysis identifies rational forecast inefficiency. While we highlight that this is potentially consistent with several specific interpretations (e.g., less effort or more bias), these findings suggest incentives explain a substantial portion of the cross-section of analyst forecast errors.

Acknowledgments

The authors thank Peter Easton (editor), two anonymous reviewers, Phil Berger, Jonathan Black (discussant), Wei Chen, Jimmy Downes (discussant), Rich Frankel, Jeremiah Green, Jared Jennings, Stephannie Larocque, Xiumin Martin, Joe Pacelli, Oded Rozenbaum, Sunny Yang, as well as seminar participants at University of Connecticut, University of Notre Dame, Villanova University, Washington University in St. Louis, the AAA Annual Meeting, and the AAA Midwest Meeting for helpful discussions and comments.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix

Appendix A
Variable Definitions

Forecast Error & Forecast Variables

FE(Qn): Quarter n (first, second, third, or fourth quarter) realized earnings per share minus quarter n forecasted earnings per share, scaled by price. Price is measured the day before the forecast is issued (statistical period end date) in the detail file (consensus file).

FE(YRn): Year n (prior, current, next) realized earnings per share minus year n forecasted earnings per share, scaled by price. Price is measured the day before the forecast is issued (statistical period end date) in the detail file (consensus file).

FORE(Qn): Quarter n (first, second, third, or fourth quarter) forecasted earnings per share scaled by price. Quarter 2–4 denotes the sum of the second, third, and fourth quarter forecasts. Price is measured the day before the forecast is issued (statistical period end date) in the detail file (consensus file).

FORE(YRn): Year n (prior, current, next) forecasted earnings per share scaled by price. Price is measured the day before the forecast is issued (statistical period end date) in the detail file (consensus file).

#FORE(Qn / YRn): Log of one plus the number of quarter n or year n forecasts issued during the quarter.
Other Variables

ACT(YRn): Year n (prior, current, next) earnings per share scaled by price. ACT(YRt-1)+ is equal to ACT(YRt-1) when ACT(YRt-1) is positive, zero otherwise. ACT(YRt-1)− is an indicator set equal to one when ACT(YRt-1) is negative, zero otherwise. Price is measured the day before the forecast is issued (statistical period end date) in the detail file (consensus file).

ACCRUALS: Change in current assets minus the change in cash and cash equivalents minus the change in current liabilities plus the change in current debt, scaled by beginning market value of equity. ACCRUALS+ (ACCRUALS−) is equal to ACCRUALS when ACCRUALS is positive (negative), zero otherwise. Measured as of year t-1.

AG: Change in total assets scaled by beginning total assets. Measured as of year t-1.

NODIV: Indicator set equal to one if no dividends were paid, zero otherwise. Measured as of year t-1.

DIV: Dividends scaled by beginning market value of equity. Measured as of year t-1.

BTM: Book value of common equity scaled by market value of equity. Measured as of year t-1.

PRC: Share price as of the day before the forecast is issued (statistical period end date) in the detail file (consensus file).

NEWS: Prior returns (RETPRE) multiplied by negative one, percentile ranked, and rescaled between zero and one. The lowest (highest) values represent firm-years with the most positive (negative) past return news.

SPT: Consensus share price target divided by the outstanding share price (as of the price target issuance date), minus one, then percentile ranked and rescaled between zero and one. The lowest (highest) values represent firm-years with the least (most) share price target optimism. We include share price targets issued within 365 days before the earnings forecasts (including the day of the earnings forecasts).

RETEAn: Three-day market-adjusted return centered on the firm’s year n earnings announcement (EA(YRt) or EA(YRt-1) in the timeline in Section 3.1).

RETPOST: Twelve-month market-adjusted return beginning the month after the first quarter earnings announcement (EA(Q1) in the timeline in Section 3.1).

RETPRE: Twelve-month market-adjusted return ending the month before the forecast date.

RETQTR: Market-adjusted returns between the prior year’s earnings announcement (EA(YRt-1) in the timeline in Section 3.1) and the first quarter’s earnings announcement (EA(Q1) in the timeline in Section 3.1).
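As a small illustration of the percentile-rank constructions for NEWS and SPT described above, the following pandas sketch rescales a ranked series to the unit interval; the DataFrame column names are hypothetical placeholders.

import pandas as pd

def rank_to_unit_interval(s: pd.Series) -> pd.Series:
    """Percentile-rank a series and rescale it to lie between zero and one."""
    r = s.rank(method='average')
    return (r - r.min()) / (r.max() - r.min())

# NEWS: prior returns times minus one, so the highest values capture the
# most negative past-return news.
# df['news'] = rank_to_unit_interval(-df['ret_pre'])
# SPT: consensus price target divided by price, minus one, so higher
# values capture greater price-target optimism.
# df['spt'] = rank_to_unit_interval(df['price_target'] / df['price'] - 1.0)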
Appendix B
Sample Selection
Data Restrictions — Observations

Detail File
Analyst announcements of forecasts of Q1–Q4 earnings for FY1 and annual earnings for FY2: 401,237
Observations without actuals for Q1–Q4 of FY1 earnings and FY2 earnings (requiring Q1 EA before forecast announcement date): (92,963)
Limit sample to last observation of the quarter for each analyst: (38,027)
Require forecast and actual for prior fiscal year: (26,343)
Require a link to CRSP, past 12 month returns, and split factors for all forecasts: (18,662)
Require forecast after prior year EA, after 1985, and with share price above $5: (18,745)
Require controls and nonmissing data: (40,467)
Maximum sample size with IBES/CRSP data: 166,030

Consensus File
Statistical period end dates with consensus forecasts of Q1–Q4 earnings for FY1 and annual earnings for FY2: 261,958
Observations without actuals for Q1–Q4 of FY1 earnings and FY2 earnings (requiring Q1 EA before forecast announcement date): (58,347)
Limit sample to last statistical period end date of the quarter for each firm: (117,849)
Require forecast and actual for prior fiscal year: (1,787)
Require a link to CRSP, past 12 month returns, and split factors for all forecasts: (4,704)
Require forecast after prior year EA, after 1985, and with share price above $5: (6,536)
Require controls and nonmissing data: (18,662)
Maximum sample size with IBES/CRSP data: 54,073
Footnotes
1. Recent studies have also shown that earnings forecasts and stock recommendations generated using machine learning techniques can outperform analysts’ forecasts or forecasts based on time-series models (e.g., Anand et al. 2019; Hunt et al. 2019; Coleman et al. 2020).

2. At longer forecast horizons, analyst forecasts are too extreme, less accurate, and underweight firm characteristics to a larger degree (DeBondt and Thaler 1990; Bradshaw et al. 2012; Raedy et al. 2006).

3. We predict forecast error instead of earnings in the model because we can then interpret the R-squared values as the percentage decrease in bias, but our inferences would remain if we predicted earnings instead.

4. In untabulated analyses, we find the difference between our predicted earnings and published earnings forecasts cannot be used to trade profitably, providing additional evidence our methodology helps identify expected earnings.

5. We predict forecast errors (rather than earnings) so that we can interpret the R-squared value as the percentage reduction in forecast error. In contrast, DeBondt and Thaler (1990) predict realized earnings as opposed to forecast errors. Our inferences hold if predicting earnings instead of forecast errors.

6. We sum shorter horizon and longer horizon forecasts to make the model more tractable. In untabulated analyses, the inferences are similar if we (i) include all five forecasts as regressors or (ii) include only the first quarter forecast and next year’s forecast. That is, the coefficient on the first quarter forecast increases monotonically, the coefficient on the next year forecast decreases monotonically (becomes more negative), and the R-squared increases monotonically.

7. Current quarter forecasts are also a common horizon to forecast future earnings, but we do not conduct further analysis, because other horizon forecasts do not predict error in the current quarter’s forecast (per Table 2).

8. We divide the annual forecast into two variables (rather than all four quarters) to limit the number of forecast horizons modeled, and we group the second, third, and fourth quarters, because we are primarily interested in the shortest and the longest horizon forecast (not the intermediate forecasts).

9. We also estimate the model in column (2), using median regression (Basu and Markov 2004) with robust standard errors, and our inferences remain unchanged.

10. The coefficient on the first quarter forecast remains significantly positive and the coefficient on next year’s forecast remains significantly negative if we include analyst and quarter fixed effects or analyst, firm, and quarter fixed effects.

11. In Section 4.5, we use the frequency of forecasts to provide evidence consistent with this assertion. We show that analysts are more responsive to negative (positive) news in their shorter (longer) horizon forecasts, which is consistent with their having greater incentives to incorporate positive (negative) news in longer (shorter) horizon forecasts.

12. We do not include an interaction with the second through fourth quarter forecast to reduce multi-collinearity and because we make no hypothesis related to this intermediate horizon forecast.

13. For example, to compute the ratio for the NEWS model we take the incremental R-squared from adding firm characteristics (the difference in the column (1) and column (4) R-squared, 17.1% minus 13.6%) and then divide the incremental R-squared by the R-squared from just firm characteristics, 10.7% (i.e., 3.4%/10.7% = 32%). We then subtract the 32% from one, to obtain the percentage of the error predictability explained by our forecast inefficiency model including the NEWS interactions.

14. In untabulated analyses, we interact the first quarter forecast and the next year forecast with an indicator variable denoting the presence of a management forecast; neither interaction is significant.

15. In untabulated analyses, we find adjusting earnings forecasts using other horizon forecasts does not improve cost of capital estimates.

16. Berger et al. (2019) show that, after the initial forecast of the quarter, analysts tend to respond to good news by revising share price targets and bad news by revising the current quarter’s earnings forecast. In this section, we document that longer horizon earnings forecast revisions respond more to good news than shorter horizon earnings forecasts.
Literature
Abarbanell, J.S. (1991). Do analysts’ earnings forecasts incorporate information in prior stock price changes? Journal of Accounting and Economics 14 (2): 147–165.
Abarbanell, J.S., and L. Bernard. (1992). Tests of analysts’ overreaction/underreaction to earnings information as an explanation for anomalous stock price behavior. The Journal of Finance 47 (3): 1181–1207.
Anand, V., R. Brunner, K. Ikegwu, and T. Sougiannis. (2019). Predicting profitability using machine learning. Working paper, University of Illinois at Urbana-Champaign.
Bagnoli, M., S.G. Watts, and Y. Zhang. (2008). Reg-FD and the competitiveness of all-star analysts. Journal of Accounting and Public Policy 27 (4): 295–316.
Ball, R., and R. Watts. (1972). Some time series properties of accounting income. The Journal of Finance 27 (3): 663–681.
Basu, S., and S. Markov. (2004). Loss function assumptions in rational expectations tests on financial analysts’ earnings forecasts. Journal of Accounting and Economics 38: 171–203.
Berger, P.G., C.G. Ham, and Z.R. Kaplan. (2019). Do analysts say anything about earnings without revising their earnings forecasts? The Accounting Review 94 (2): 29–52.
Bernhardt, D., M. Campello, and E. Kutsoati. (2006). Who herds? Journal of Financial Economics 80 (3): 657–675.
Bradshaw, M.T., M.S. Drake, J.N. Myers, and L.A. Myers. (2012). A re-examination of analysts’ superiority over time-series forecasts of annual earnings. Review of Accounting Studies 17 (4): 944–968.
Bradshaw, M.T., L.D. Brown, and K. Huang. (2013). Do sell-side analysts exhibit differential target price forecasting ability? Review of Accounting Studies 18 (4): 930–955.
Bradshaw, M.T., L.F. Lee, and K. Peterson. (2016). The interactive role of difficulty and incentives in explaining the annual earnings forecast walkdown. The Accounting Review 91 (4): 995–1021.
Brown, L.D., A.C. Call, M.B. Clement, and N.Y. Sharp. (2015). Inside the “black box” of sell-side financial analysts. Journal of Accounting Research 53 (1): 1–47.
Brown, L.D., A.C. Call, M.B. Clement, and N.Y. Sharp. (2016). The activities of buy-side analysts and the determinants of their stock recommendations. Journal of Accounting and Economics 62 (1): 139–156.
Brown, L.D., R.L. Hagerman, P.A. Griffin, and M. Zmijewski. (1987). Security analyst superiority relative to univariate time-series models in forecasting quarterly earnings. Journal of Accounting and Economics 9 (1): 61–87.
Call, A.C., J. Donovan, and J.N. Jennings. (2021). Private lenders’ use of analyst earnings forecasts when establishing debt covenant thresholds. Working paper, Arizona State University, University of Notre Dame, and Washington University in St. Louis.
Chen, Q., and W. Jiang. (2006). Analysts’ weighting of private and public information. Review of Financial Studies 19 (1): 319–355.
Coleman, B., K. Merkley, and J. Pacelli. (2020). Man versus machine: A comparison of Robo-analyst and traditional research analyst investment recommendations. Working paper, Indiana University.
Collins, W.A., and W.S. Hopwood. (1980). A multivariate analysis of annual earnings forecasts generated from quarterly forecasts of financial analysts and univariate time-series models. Journal of Accounting Research 18 (2): 390–406.
DeBondt, W.F.M., and R.H. Thaler. (1990). Do security analysts overreact? The American Economic Review 80 (2): 52–57.
Easterwood, J.C., and S.H. Nutt. (1999). Inefficiency in analysts' earnings forecasts: Systematic misreaction or systematic optimism? The Journal of Finance 54 (5): 1777–1797.
Fried, D., and D. Givoly. (1982). Financial analysts' forecasts of earnings: A better surrogate for market expectations. Journal of Accounting and Economics 4 (2): 85–107.
Givoly, D. (1985). The formation of earnings expectations. The Accounting Review 60 (3): 372–386.
Gu, Z., and J.S. Wu. (2003). Earnings skewness and analyst forecast bias. Journal of Accounting and Economics 35 (1): 5–29.
Hou, K., M.A. Van Dijk, and Y. Zhang. (2012). The implied cost of capital: A new approach. Journal of Accounting and Economics 53 (3): 504–526.
Hughes, J., J. Liu, and W. Su. (2008). On the relation between predictable market returns and predictable analyst forecast errors. Review of Accounting Studies 13 (2–3): 266–291.
Hunt, J., J. Myers, and L. Myers. (2019). Improving earnings predictions with machine learning. Working paper, Mississippi State University and University of Tennessee.
Jackson, A.R. (2005). Trade generation, reputation, and sell-side analysts. Journal of Finance 60: 673–717.
Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus, and Giroux.
Kang, S., J. O’Brien, and K. Sivaramakrishnan. (1994). Analysts' interim earnings forecasts: Evidence on the forecasting process. Journal of Accounting Research 32 (1): 103–112.
Ke, B., and Y. Yu. (2006). The effect of issuing biased earnings forecasts on analysts' access to management and survival. Journal of Accounting Research 44 (5): 965–999.
Keane, M.P., and D.E. Runkle. (1998). Are financial analysts' forecasts of corporate profits rational? Journal of Political Economy 106 (4): 768–805.
Kothari, S.P., and R.G. Sloan. (1992). Information in prices about future earnings: Implications for earnings response coefficients. Journal of Accounting and Economics 15 (2): 143–171.
Kross, W., B. Ro, and D. Schroeder. (1990). Earnings expectations: The analysts' information advantage. The Accounting Review 65 (2): 461–476.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin 108 (3): 480–498.
Larocque, S. (2013). Analysts’ earnings forecast errors and cost of equity capital estimates. Review of Accounting Studies 18 (1): 135–166.
Li, K.K., and P. Mohanram. (2014). Evaluating cross-sectional forecasting models for implied cost of capital. Review of Accounting Studies 19 (3): 1152–1185.
Lys, T., and S. Sohn. (1990). The association between revisions of financial analysts' earnings forecasts and security-price changes. Journal of Accounting and Economics 13 (4): 341–363.
Lys, T., and L.G. Soo. (1995). Analysts' forecast precision as a response to competition. Journal of Accounting, Auditing & Finance 10 (4): 751–765.
Martin, X., H. Seo, J. Yang, and D.S. Kim. (2018). Target performance goals in CEO compensation contracts and management earnings guidance. Working paper, Indiana University, National University of Singapore, Peking University, and Washington University in St. Louis.
McNichols, M., and P.C. O’Brien. (1997). Self-selection and analyst coverage. Journal of Accounting Research 35: 167–199.
Mendenhall, R.R. (1991). Evidence on the possible underweighting of earnings-related information. Journal of Accounting Research 29 (1): 170–179.
O’Brien, P.C. (1988). Analysts' forecasts as earnings expectations. Journal of Accounting and Economics 10 (1): 53–83.
Payne, J.L., and W.B. Thomas. (2003). The implications of using stock-split adjusted I/B/E/S data in empirical research. The Accounting Review 78 (4): 1049–1067.
Raedy, J.S., P. Shane, and Y. Yang. (2006). Horizon-dependent underreaction in financial analysts’ earnings forecasts. Contemporary Accounting Research 23 (1): 291–322.
Richardson, S.A., S.H. Teoh, and P.D. Wysocki. (2004). The walk-down to beatable analyst forecasts: The role of equity issuance and insider trading incentives. Contemporary Accounting Research 21 (3): 885–924.
Scherbina, A. (2008). Suppressed negative information and future underperformance. Review of Finance 12 (3): 533–565.
Schipper, K. (1991). Commentary on analysts' forecasts. Accounting Horizons 5 (4): 105–121.
So, E.C. (2013). A new approach to predicting analyst forecast errors: Do investors overweight analyst forecasts? Journal of Financial Economics 108 (3): 615–640.
Stickel, S. (1990). Predicting individual analyst earnings forecasts. Journal of Accounting Research 28: 409–417.
Vuong, Q.H. (1989). Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica 57 (2): 307–333.
Metadata
Title
Rationalizing forecast inefficiency
Authors
Charles G. Ham
Zachary R. Kaplan
Zawadi R. Lemayian
Publication date
01-10-2021
Publisher
Springer US
Published in
Review of Accounting Studies / Issue 1/2022
Print ISSN: 1380-6653
Electronic ISSN: 1573-7136
DOI
https://doi.org/10.1007/s11142-021-09622-8
