
2015 | Book

Handbook of Financial Econometrics and Statistics

Editors: Cheng-Few Lee, John C. Lee

Publisher: Springer New York

About this book

The Handbook of Financial Econometrics and Statistics provides, in four volumes and over 100 chapters, a comprehensive overview of the primary methodologies in econometrics and statistics as applied to financial research. Including overviews of key concepts by the editors and in-depth contributions from leading scholars around the world, the Handbook is the definitive resource for both classic and cutting-edge theories, policies, and analytical techniques in the field. Volume 1 (Parts I and II) covers all of the essential theoretical and empirical approaches. Volumes 2, 3, and 4 feature contributed entries that showcase the application of financial econometrics and statistics to such topics as asset pricing, investment and portfolio research, option pricing, mutual funds, and financial accounting research. Throughout, the Handbook offers illustrative case examples and applications, worked equations, and extensive references, and includes both subject and author indices.

Table of Contents

Frontmatter
1. Introduction to Financial Econometrics and Statistics

The main purposes of this introductory chapter are (i) to discuss the important financial econometric and statistical methods that have been used in finance and accounting research and (ii) to present an overview of the 98 chapters included in this handbook. Sections 1.2 and 1.3 briefly review and discuss financial econometrics and statistics. Sections 1.4 and 1.5 discuss applications of financial econometrics and statistics. Section 1.6 first classifies the 98 chapters into 14 groups according to subject and topic, and then classifies the keywords from each chapter into two groups: finance and accounting topics and methodology topics. Overall, this chapter gives readers a guideline for applying this handbook to their research.

Cheng-Few Lee, John C. Lee
2. Experience, Information Asymmetry, and Rational Forecast Bias

We use a Bayesian model of updating forecasts in which the forecast bias endogenously determines how the forecaster’s own estimates are weighted in the posterior beliefs. Our model predicts a concave relationship between forecast accuracy and the posterior weight placed on the forecaster’s self-assessment. We then use a panel regression to test our analytical findings and find that an analyst’s experience is indeed concavely related to the forecast error. This study examines whether it is ever rational for analysts to post biased estimates and how information asymmetry and analyst experience factor into the decision. Using a construct where analysts wish to minimize their forecasting error, we model forecasted earnings when analysts combine private information with consensus estimates to determine the optimal forecast bias, i.e., the deviation from the consensus. We show that the analyst’s rational bias increases with information asymmetry but is concavely related to experience. Novice analysts post estimates similar to the consensus, but as they become more experienced and develop private information channels, their estimates become biased as they deviate from the consensus. Highly seasoned analysts, who have superior analytical skills and valuable relationships, need not post biased forecasts.

April Knill, Kristina L. Minnick, Ali Nejadmalayeri
3. An Appraisal of Modeling Dimensions for Performance Appraisal of Global Mutual Funds

A number of studies have been conducted to examine the investment performance of mutual funds in developed capital markets. Grinblatt and Titman (1989, 1994) found that small mutual funds perform better than large ones and that performance is negatively correlated with management fees but not with fund size or expenses. Hendricks, Patel, and Zeckhauser (1993), Goetzmann and Ibbotson (1994), and Brown and Goetzmann (1995) present evidence of persistence in mutual fund performance. Grinblatt and Titman (1992) and Elton, Gruber, and Blake (Journal of Financial Economics 42:397–421, 1996) show that past performance is a good predictor of future performance. Blake, Elton, and Gruber (1993), Detzler (1999), and Philpot, Hearth, Rimbey, and Schulman (1998) find that performance is negatively correlated with fund expenses and that past performance does not predict future performance. However, Philpot, Hearth, and Rimbey (2000) provide evidence of short-term performance persistence in high-yield bond mutual funds. In their studies of money market mutual funds, Domian and Reichenstein (1998) find that the expense ratio is the most important factor in explaining net return differences. Christoffersen (2001) shows that fee waivers matter to performance. Smith and Tito (1969) conducted a study of 38 funds for 1958–1967 and obtained similar results. Treynor (1965) advocated the use of the beta coefficient instead of total risk.

G. V. Satya Sekhar
4. Simulation as a Research Tool for Market Architects

Financial economists have three primary research tools at their disposal: theoretical modeling, statistical analysis, and computer simulation. In this chapter, we focus on using simulation to gain insights into trading and market structure topics, which are growing in importance for practitioners, policy-makers, and academics. We show how simulation can be used to gather data on trading decision behavior and to analyze performance in securities markets under controlled yet competitive conditions. We find that controlled simulations with participants are a flexible and reliable research tool for studying issues involving traders and market architecture. The role of the discrete event simulation model we have developed is to create a backdrop, or a controlled stochastic environment, for running market experiments with live subjects. Simulations enable us to gather data on trading participants’ decision making and to ascertain the ability of incentives and market structures to influence outcomes. The statistical methods we use include experimental design and careful controls over experimental parameters such as the instructions given to participants. Furthermore, results are assessed both at the individual level, to understand how participants respond to incentives in a trading setting, and at the market level, to determine whether the predicted outcomes are achieved and how well the market operated. There are two statistical methods described in the chapter. The first is discrete event simulation and the model of computer-generated trade order flow that we describe in Sect. 4.3. To create a realistic, but not ad hoc, market background, we use draws from a log-normal returns distribution to simulate changes in a stock’s fundamental value, or P*. The model uses price-dependent Poisson distributions to generate a realistic flow of computer-generated buy and sell orders whose intensity and supply-demand balance vary over time. The order flow fluctuations depend on the difference between the current market price and the P* value. In Sect. 4.4, we illustrate the second method, which is experimental control used to create groupings of participants in our simulations that have the same trading “assignment.” The result is the ability to make valid comparisons of traders’ performances in the simulations.
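
The order-flow background process is described here only at a high level; the following is a minimal Python sketch of how such a process might be simulated, with a log-normal fundamental value and price-dependent Poisson order arrivals. All parameter values (volatility, arrival intensities, price-impact coefficient) are invented for illustration and are not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (not the chapter's calibration)
n_periods = 390                       # e.g., one trading day in minutes
sigma = 0.02 / np.sqrt(n_periods)     # per-period volatility of the fundamental value
base_rate = 2.0                       # baseline Poisson order-arrival intensity per side
sensitivity = 50.0                    # how strongly mispricing tilts the buy/sell balance

p_star = 100.0                        # unobserved fundamental value P*
price = 100.0                         # last transaction price

for t in range(n_periods):
    # Fundamental value follows a log-normal (geometric) random walk
    p_star *= np.exp(sigma * rng.standard_normal())

    # Price-dependent Poisson intensities: orders on the "cheap" side arrive faster
    mispricing = (p_star - price) / price
    lam_buy = base_rate * np.exp(sensitivity * mispricing)
    lam_sell = base_rate * np.exp(-sensitivity * mispricing)
    n_buy, n_sell = rng.poisson(lam_buy), rng.poisson(lam_sell)

    # Crude price update: the imbalance of machine-generated orders moves the price
    price *= 1.0 + 0.0005 * (n_buy - n_sell)

print(f"final P* = {p_star:.2f}, final price = {price:.2f}")
```

In a live experiment this background flow would run alongside the human participants' orders, keeping the market active and anchored to P* without dictating its path.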

Robert A. Schwartz, Bruce W. Weber
5. Motivations for Issuing Putable Debt: An Empirical Analysis

This paper examines the motivations for issuing putable bonds in which the embedded put option is not contingent upon a company-related event. We find that the market views favorably the issue announcement of these bonds, which we refer to as bonds with European put options or European putable bonds. This response is in contrast to the response documented in the literature to other bond issues (straight, convertible, and, in most studies, poison puts) and to the response documented in the current paper to the issue announcements of poison put bonds. Our results suggest that the market views issuing European putable bonds as helping mitigate security mispricing. Our study is an application of important statistical methods in corporate finance, namely, event studies and the use of the generalized method of moments for cross-sectional regressions.

Ivan E. Brick, Oded Palmon, Dilip K. Patro
6. Multi-Risk Premia Model of US Bank Returns: An Integration of CAPM and APT

Interest rate sensitivity of bank stock returns has been studied using an augmented CAPM: a multiple regression model with market returns and an interest rate as independent variables. In this paper, we test an asset-pricing model in which the CAPM is augmented by three orthogonal factors that proxy for the innovations in inflation, maturity risk, and default risk. The proposed model is an integration of the CAPM and the APT. The results of the two models are compared to shed light on the sources of interest rate risk. Our results using the integrated model indicate that the inflation beta is statistically significant. Hence, innovations in short-term interest rates contain valuable information regarding the inflation premium; as a result, the interest rate risk is priced with respect to short-term interest rates. The results also indicate that innovations in long-term interest rates contain valuable information regarding the maturity premium; consequently, the interest rate risk is priced with respect to long-term interest rates. Using the traditional augmented CAPM, our investigation of the pricing of interest rate risk is inconclusive. It shows that interest rate risk was priced from 1979 to 1984 irrespective of the choice of interest rate variable. However, during the periods 1974–1978 and 1985–1990, bank stock returns were sensitive only to the innovations in long-term interest rates.
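
One standard way to build the orthogonal factors mentioned above is to regress each candidate factor on the market return and keep the residuals; the sketch below illustrates that construction and the augmented regression on simulated data. The regression-residual approach and all numbers are assumptions for illustration, not necessarily the authors' exact procedure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 240  # months of simulated data (illustrative only)

mkt = 0.005 + 0.04 * rng.standard_normal(T)    # market excess returns
infl = 0.002 + 0.01 * rng.standard_normal(T)   # inflation innovations (placeholder)
term = 0.001 + 0.01 * rng.standard_normal(T)   # maturity (term) premium proxy (placeholder)
bank = 0.004 + 1.1 * mkt + 0.3 * infl + 0.02 * rng.standard_normal(T)  # bank stock returns

def orthogonalize(y, x):
    """Residual of y regressed on x (plus a constant): the part of y orthogonal to x."""
    return sm.OLS(y, sm.add_constant(x)).fit().resid

# Orthogonal factors: strip out the market component so the market beta is not contaminated
infl_orth = orthogonalize(infl, mkt)
term_orth = orthogonalize(term, mkt)

X = sm.add_constant(np.column_stack([mkt, infl_orth, term_orth]))
res = sm.OLS(bank, X).fit()
print(res.summary(xname=["const", "market", "inflation_orth", "maturity_orth"]))
```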

Suresh Srivastava, Ken Hung
7. Nonparametric Bounds for European Option Prices

Much research has been devoted to discovering the distributional defects in the Black-Scholes model, which are known to cause severe biases. However, with a free specification for the distribution, one can only find upper and lower bounds for option prices. In this paper, we derive a new nonparametric lower bound and provide an alternative interpretation of Ritchken’s (1985) upper bound on the price of a European option. In a series of numerical examples, our new lower bound is substantially tighter than previous lower bounds, especially for out-of-the-money (OTM) options, where the previous lower bounds perform badly. Moreover, in an empirical study we show that our bounds can be derived from histograms, which are completely nonparametric. We first construct histograms from realizations of S&P 500 index returns following Chen, Lin, and Palmon (2006); calculate the dollar beta of the option and the expected payoffs of the index and the option; and finally obtain our bounds. We discover violations of our lower bound and show that those violations represent arbitrage profits. In particular, our empirical results show that out-of-the-money calls are substantially overpriced (they violate the lower bound).

Hsuan-Chu Lin, Ren-Raw Chen, Oded Palmon
8. Can Time-Varying Copulas Improve the Mean-Variance Portfolio?

Research in structuring asset return dependence has become an indispensable element of wealth management, particularly after the experience of the recent financial crises. In this paper, we evaluate whether constructing a portfolio using time-varying copulas yields superior returns under various weight updating strategies. Specifically, minimum-risk portfolios are constructed based on various copulas and the Pearson correlation, and a 250-day rolling window technique is adopted to derive a sequence of time-varying dependencies for each dependence model. Using daily data on the G7 countries, our empirical findings suggest that portfolios using time-varying copulas, particularly the Clayton dependence, outperform those constructed using Pearson correlations. The above results still hold under different weight updating strategies and portfolio rebalancing frequencies.
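
The copula estimation itself is involved; the sketch below shows only the rolling-window minimum-variance step using the Pearson-correlation benchmark with a 250-day window, on simulated stand-in data. Under the copula-based models, the dependence estimate fed into min_var_weights would be replaced accordingly; everything else here is an illustrative assumption.

```python
import numpy as np
import pandas as pd

def min_var_weights(cov):
    """Closed-form minimum-variance weights: w = inv(S) 1 / (1' inv(S) 1)."""
    inv = np.linalg.inv(cov)
    ones = np.ones(cov.shape[0])
    w = inv @ ones
    return w / w.sum()

def rolling_min_var(returns: pd.DataFrame, window: int = 250) -> pd.DataFrame:
    """Re-estimate dependence on a rolling window and rebalance at each step."""
    weights = {}
    for t in range(window, len(returns)):
        cov = returns.iloc[t - window:t].cov().values   # Pearson-based benchmark
        weights[returns.index[t]] = min_var_weights(cov)
    return pd.DataFrame(weights, index=returns.columns).T

# Example with simulated G7-like index returns (illustrative data only)
rng = np.random.default_rng(1)
rets = pd.DataFrame(rng.normal(0.0003, 0.01, size=(1000, 7)),
                    columns=["US", "JP", "DE", "FR", "UK", "IT", "CA"])
w = rolling_min_var(rets)
print(w.tail())
```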

Chin-Wen Huang, Chun-Pin Hsu, Wan-Jiun Paul Chiou
9. Determinations of Corporate Earnings Forecast Accuracy: Taiwan Market Experience

Individual investors are actively involved in the stock market and make investment decisions based on publicly available, nonproprietary information such as corporate earnings forecasts from management and from financial analysts; the management forecast, in particular, is an important index investors might use. To examine the accuracy of these earnings forecasts, the following test methodology has been employed. Multiple regression models are used to examine the effect of six factors: firm size, market volatility, trading volume turnover, corporate earnings variances, type of industry, and experience. If the two sample groups are related, the Wilcoxon two-sample test is used to determine relative earnings forecast accuracy. The results indicate that firm size has no effect on management forecasts, voluntary management forecasts, mandatory management forecasts, or analysts’ forecasts. There are some indications that forecasting accuracy is affected by market ups and downs. The results also reveal that the relative accuracy of earnings forecasts is not a function of trading volume turnover. However, management’s earnings forecasts and analysts’ forecasts are sensitive to earnings variances. Readers are referred to the chapter appendix for methodological issues such as sample selection, variable definitions, the regression model, and the Wilcoxon two-sample test.

Ken Hung, Kuo-Hao Lee
10. Market-Based Accounting Research (MBAR) Models: A Test of ARIMAX Modeling

The purpose of this study is to provide evidence, drawn from publicly traded companies in Greece, on some of the standard models of the relation between accounting earnings and returns collected mainly from the literature. Standard models such as the earnings-level and earnings-changes models are investigated in this study. Models that best fit the data drawn from companies listed on the Athens Stock Exchange have been selected by employing autoregressive integrated moving average with exogenous variables (ARIMAX) models. Models I (price on earnings), II (returns on change in earnings divided by beginning-of-period price and prior period), V (returns on change in earnings over opening market value), VII (returns deflated by a lag of 2 years on earnings over opening market value), and IX (differenced-price model) have statistically significant coefficients on the explanatory variables. In addition, model II (returns on change in earnings divided by beginning-of-period price and prior period, with an MSE (minimum squared error) loss function in ARIMAX(2,0,2)) performs best. These models rely on backward-looking information rather than the forward-looking information assessed in the recent literature. Application of generalized autoregressive conditional heteroscedasticity (GARCH) models is suggested for further research.
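
A regression with ARMA(2,0,2) disturbances of the kind labeled ARIMAX(2,0,2) above can be fit with statsmodels' SARIMAX by passing the accounting variable as an exogenous regressor; the sketch below uses simulated placeholder data, and the variable names are assumptions rather than the chapter's series.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Simulated stand-in for returns and a deflated earnings-change regressor
rng = np.random.default_rng(7)
n = 200
d_earn = rng.normal(0, 1, n)                  # change in earnings / lagged price (placeholder)
ret = 0.5 * d_earn + rng.normal(0, 1, n)      # returns (placeholder)
df = pd.DataFrame({"ret": ret, "d_earn": d_earn})

# ARIMAX(2,0,2): ARMA(2,2) errors around a regression on the exogenous variable
model = SARIMAX(df["ret"], exog=df[["d_earn"]], order=(2, 0, 2), trend="c")
fit = model.fit(disp=False)
print(fit.summary())
```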

Anastasia Maggina
11. An Assessment of Copula Functions Approach in Conjunction with Factor Model in Portfolio Credit Risk Management

In credit risk modeling, factor models, either static or dynamic, are often used to account for correlated defaults among a set of financial assets. Within the realm of factor models, default dependence is due to a set of common systemic factors; conditional on these common factors, defaults are independent. The benefit of a factor model is that it couples straightforwardly with a copula function to give an analytic formulation of the joint distribution of default times. However, factor models fail to account for the contagion mechanism of defaults, in which a firm’s default risk increases due to its commercial or financial counterparties’ defaults. This study considers a mixture of the dynamic factor model of Duffee (Review of Financial Studies 12, 197–226, 1999) and a contagious effect specified as a Hawkes process, a class of counting processes that allows intensities to depend on the timing of previous events (Hawkes, Biometrika 58(1), 83–90, 1971). Using the mixture factor-contagious-effect model, Monte Carlo simulation is performed to generate default times for two hypothesized firms. The goodness-of-fit of the joint distributions based on the copula functions most often used in the literature, including the normal, t-, Clayton, Frank, and Gumbel copulas, is assessed against the simulated default times. It is demonstrated that as the contagious effect increases, the goodness-of-fit of the joint distribution functions based on copula functions decreases, which highlights the deficiency of the copula function approach.

Lie-Jane Kao, Po-Cheng Wu, Cheng-Few Lee
12. Assessing Importance of Time-Series Versus Cross-Sectional Changes in Panel Data: A Study of International Variations in Ex-Ante Equity Premia and Financial Architecture

In the study of economic and financial panel data, it is often important to differentiate between time-series and cross-sectional effects. We present two estimation procedures that can do so and illustrate their application by examining international variations in expected equity premia and financial architecture, where a number of variables vary across time but not cross-sectionally, while other variables vary cross-sectionally but not across time. Using the two estimation procedures, we find a preference for market financing to be negatively associated with the size of expected premia. However, we also find that US corporate bond spreads negatively determine financial architecture according to the first procedure but not according to the second, as US corporate bond spreads change value each year but have the same value across countries. Similarly, some measures that change across countries but not across time, such as cultural dimensions and the index of measures against self-dealing, are significant determinants of financial architecture according to the second estimation but not according to the first. Our results show that using these two estimation procedures together can help assess time-series versus cross-sectional variations in panel data. This research should be of considerable interest to empirical researchers. We illustrate with simultaneous-equation modeling. Following a Hausman test to determine whether to report fixed- or random-effects estimates, we first report random-effects estimates based on the estimation procedure of Baltagi (Baltagi 1981; Baltagi and Li 1995; Baltagi and Li 1994). We consider the error component two-stage least squares (EC2SLS) estimator of Baltagi and Li (1995) to be more efficient than the generalized two-stage least squares (G2SLS) estimator of Balestra and Varadharajan-Krishnakumar (1987). For our second estimation procedure, for comparative purposes we use the dynamic panel modeling estimates recommended by Blundell and Bond (1998). We employ the model of Blundell and Bond (1998), as these authors argue that their estimator is more appropriate than the Arellano and Bond (1991) model when the time dimension is small relative to the size of the panels. We use the two-step procedure, include the first lag of the dependent variable as an independent variable, and report the robust standard errors of Windmeijer (2005). Thus, our two panel estimation techniques place differing emphases on cross-sectional and time-series effects, with the Baltagi-Li estimator emphasizing cross-sectional effects and the Blundell-Bond estimator emphasizing time-series effects.

Raj Aggarwal, John W. Goodell
13. Does Banking Capital Reduce Risk? An Application of Stochastic Frontier Analysis and GMM Approach

In this chapter, we thoroughly analyze the relationship between capital and bank risk-taking. We collect a cross section of bank holding company data from 1993 to 2008. To deal with the endogeneity between risk and capital, we employ stochastic frontier analysis to create a new type of instrumental variable. The unrestricted frontier model determines the highest possible profitability based solely on the book value of assets employed. We develop a second frontier based on the level of bank holding company capital as well as the amount of assets. The implication of using the unrestricted model is that we are measuring the unconditional inefficiency of the banking organization. We further apply generalized method of moments (GMM) regression to avoid the problems caused by departures from normality. To control for the impact of size on a bank’s risk-taking behavior, the book value of assets is included in the model. The relationship between the variables specifying bank behavior and the use of equity is analyzed by GMM regression. Our results support the theory that banks respond to higher capital ratios by increasing the risk in their earning asset portfolios and off-balance-sheet activity. This perverse result suggests that bank regulation should be thoroughly reexamined and alternative tools developed to ensure a stable financial system.

Wan-Jiun Paul Chiou, Robert L. Porter
14. Evaluating Long-Horizon Event Study Methodology

We describe the fundamental issues that long-horizon event studies face in choosing the proper research methodology and summarize findings from existing simulation studies about the performance of commonly used methods. We document in detail how to implement a simulation study and report our own findings for large samples. The findings have important implications for future research. We examine the performance of more than 20 different testing procedures that fall into two categories. First, the buy-and-hold benchmark approach uses a benchmark to measure the abnormal buy-and-hold return for every event firm and tests the null hypothesis that the average abnormal return is zero. Second, the calendar-time portfolio approach forms a portfolio in each calendar month consisting of firms that have had an event within a certain time period prior to the month and tests the null hypothesis that the intercept is zero in the regression of monthly portfolio returns against the factors in an asset-pricing model. We find that using the sign test with the single most correlated firm as the benchmark provides the best overall performance for various sample sizes and long horizons. In addition, the Fama-French three-factor model performs better in our simulation study than the four-factor model, as the latter leads to serious over-rejection of the null hypothesis. We also evaluate the performance of the bootstrapped Johnson’s skewness-adjusted t-test. This computation-intensive procedure is considered because the distribution of long-horizon abnormal returns tends to be highly skewed to the right. The bootstrapping method uses repeated random sampling to measure the significance of the relevant test statistics. Due to the nature of random sampling, the resultant measurement of significance varies each time such a procedure is used. We also evaluate simple nonparametric tests, such as the Wilcoxon signed-rank test and Fisher’s sign test, which are free from random sampling variation.
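
As a simple illustration of the buy-and-hold benchmark approach, the sketch below computes buy-and-hold abnormal returns against a matched benchmark and a bootstrapped skewness-adjusted t-statistic. The adjustment formula follows the standard Johnson form used in the long-horizon event-study literature; the data are simulated and the implementation is a stripped-down sketch, not the chapter's full simulation design.

```python
import numpy as np

def bhar(firm_prices, bench_prices):
    """Buy-and-hold abnormal return: compounded firm return minus compounded benchmark return."""
    return (firm_prices[-1] / firm_prices[0]) - (bench_prices[-1] / bench_prices[0])

def skew_adjusted_t(x):
    """Johnson's skewness-adjusted t-statistic for a sample of abnormal returns."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = x.mean() / x.std(ddof=1)
    gamma = ((x - x.mean()) ** 3).mean() / x.std(ddof=1) ** 3   # sample skewness
    return np.sqrt(n) * (s + gamma * s ** 2 / 3.0 + gamma / (6.0 * n))

def bootstrap_p_value(x, n_boot=2000, seed=0):
    """Bootstrapped two-sided p-value for the skewness-adjusted t-test under H0: mean = 0."""
    rng = np.random.default_rng(seed)
    t_obs = skew_adjusted_t(x)
    centered = np.asarray(x) - np.mean(x)            # impose the null by centering
    t_boot = np.array([skew_adjusted_t(rng.choice(centered, size=len(x), replace=True))
                       for _ in range(n_boot)])
    return np.mean(np.abs(t_boot) >= abs(t_obs))

# Illustrative right-skewed abnormal returns with mean roughly zero
rng = np.random.default_rng(3)
abnormal = rng.lognormal(mean=0.0, sigma=0.5, size=200) - np.exp(0.125)
print(skew_adjusted_t(abnormal), bootstrap_p_value(abnormal))
```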

James S. Ang, Shaojun Zhang
15. The Effect of Unexpected Volatility Shocks on Intertemporal Risk-Return Relation

We suggest that an unexpected volatility shock is an important risk factor inducing the intertemporal relation, and that the conflicting findings on the relation could be attributable to an omitted-variable bias resulting from ignoring the effect of an unexpected volatility shock on the relation. With the effect of an unexpected volatility shock incorporated in estimation, we find a strong positive intertemporal relation for US monthly excess returns for 1926:12–2008:12. We reexamine the relation for the sample period studied by Glosten, Jagannathan, and Runkle (Journal of Finance 48, 1779–1801, 1993) and find that their sample period is indeed characterized by a positive (negative) relation under a positive (negative) volatility shock once the effect of a volatility shock is incorporated in estimation. We also find a significant link between the asymmetric mean reversion and the intertemporal relation, in that the quicker reversion of negative returns is attributed to the negative intertemporal relation under a prior negative return shock. For estimation we employ the ANST-GARCH model, which is capable of capturing the asymmetric volatility effect of positive and negative return shocks. The key feature of the model is the regime-shift mechanism that allows a smooth, flexible transition of the conditional volatility between different states of volatility persistence. The regime-switching mechanism is governed by a logistic transition function that changes value depending on the level of the previous return shock. With a negative (positive) return shock, the conditional variance process is described as a high (low)-persistence-in-volatility regime. The ANST-GARCH model describes the heteroskedastic return dynamics more accurately and generates better volatility forecasts.
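
The exact ANST-GARCH parameterization is given in the chapter; the stylized sketch below only simulates a conditional-variance recursion with a logistic regime-transition function driven by the lagged return shock, to illustrate the smooth-transition mechanism described above. All coefficients are invented, and the specification should be read as a sketch rather than the authors' model.

```python
import numpy as np

def logistic_transition(eps_lag, gamma=5.0):
    """Smooth transition weight in (0, 1), driven by the previous return shock."""
    return 1.0 / (1.0 + np.exp(-gamma * eps_lag))

def simulate_smooth_transition_garch(T=1000, seed=0):
    # Invented coefficients: base regime (a0, a1, b1) plus smooth shifts (c0, c1, d1)
    a0, a1, b1 = 0.02, 0.05, 0.80
    c0, c1, d1 = 0.03, 0.05, 0.05
    rng = np.random.default_rng(seed)
    h = np.empty(T); eps = np.empty(T)
    h[0], eps[0] = 0.3, 0.0
    for t in range(1, T):
        F = logistic_transition(eps[t - 1])
        # Conditional variance blends smoothly between two persistence regimes
        h[t] = (a0 + a1 * eps[t - 1] ** 2 + b1 * h[t - 1]
                + F * (c0 + c1 * eps[t - 1] ** 2 + d1 * h[t - 1]))
        eps[t] = np.sqrt(h[t]) * rng.standard_normal()
    return eps, h

eps, h = simulate_smooth_transition_garch()
print("average conditional variance:", round(h.mean(), 4))
```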

Kiseok Nam, Joshua Krausz, Augustine C. Arize
16. Combinatorial Methods for Constructing Credit Risk Ratings

This study uses a novel method, the Logical Analysis of Data (LAD), to reverse engineer and construct credit risk ratings that represent the creditworthiness of financial institutions and countries. LAD is a data mining method based on combinatorics, optimization, and Boolean logic that utilizes combinatorial search techniques to discover various combinations of attribute values characteristic of the positive or negative character of observations. The proposed methodology is applicable in the general case of inferring an objective rating system from archival data, given that the rated objects are characterized by vectors of attributes taking numerical or ordinal values. The proposed approaches are shown to generate transparent, consistent, self-contained, and predictive credit risk rating models, closely approximating the risk ratings provided by some of the major rating agencies. The scope of applicability of the proposed method extends beyond the rating problems discussed in this study and covers many other contexts where ratings are relevant. We use multiple linear regression to derive the logical rating scores.

Alexander Kogan, Miguel A. Lejeune
17. Dynamic Interactions Between Institutional Investors and the Taiwan Stock Returns: One-Regime and Threshold VAR Models

This paper constructs a six-variable VAR model (including NASDAQ returns, TSE returns, NT/USD returns, net foreign purchases, net domestic investment company (dic) purchases, and net registered trading firm (rtf) purchases) to examine (i) the interaction among the three types of institutional investors, particularly whether net foreign purchases lead net domestic purchases by dic and rtf (the so-called demonstration effect); (ii) whether net institutional purchases lead market returns or vice versa; and (iii) whether the corresponding lead-lag relationship is positive or negative. The results of unrestricted VAR, structural VAR, and multivariate threshold autoregression models show that net foreign purchases lead net purchases by domestic institutions and that the relation between them is not always unidirectional. In certain regimes, depending on whether the previous day’s TSE returns are negative or the previous day’s NASDAQ returns are positive, we find ample evidence of a feedback relation between net foreign purchases and net domestic institutional purchases. The evidence also supports strong positive-feedback trading by institutional investors in the TSE. In addition, it is found that net dic purchases negatively lead market returns in Period 4. The MVTAR results indicate that net foreign purchases lead market returns when the previous day’s NASDAQ returns are positive and have a positive influence on returns. Readers are referred to the chapter appendix for a detailed discussion of the unrestricted VAR model, the structural VAR model, and the threshold VAR analysis.
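
An unrestricted one-regime VAR of the kind described can be estimated, and the lead-lag ("demonstration effect") hypothesis probed with a Granger-causality test, using statsmodels; the six series below are simulated placeholders for the chapter's data, and the lag choice and test are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder data standing in for the six series used in the chapter
rng = np.random.default_rng(11)
cols = ["nasdaq_ret", "tse_ret", "ntusd_ret", "net_foreign", "net_dic", "net_rtf"]
df = pd.DataFrame(rng.normal(size=(500, 6)), columns=cols)

model = VAR(df)
res = model.fit(maxlags=5, ic="aic")          # lag order chosen by AIC
print(res.summary())

# Does net foreign buying lead domestic institutional buying (the demonstration effect)?
gc = res.test_causality(caused=["net_dic", "net_rtf"], causing=["net_foreign"], kind="wald")
print(gc.summary())
```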

Bwo-Nung Huang, Ken Hung, Chien-Hui Lee, Chin W. Yang
18. Methods of Denoising Financial Data

Denoising imposes new challenges for financial data mining due to the irregularities and roughness observed in financial data, particularly for the massive amounts of tick-by-tick data collected instantaneously from financial markets for information analysis and knowledge extraction. Inefficient decomposition of the systematic pattern (the trend) and the noise in financial data will lead to erroneous conclusions, since the irregularities and roughness of the data make the application of traditional methods difficult. In this chapter, we review some methods applied to the denoising of financial data.

Thomas Meinl, Edward W. Sun
19. Analysis of Financial Time Series Using Wavelet Methods

This chapter presents a set of tools that allow gathering information about the frequency components of a time series. In a first step, we discuss spectral analysis and filtering methods. Spectral analysis can be used to identify and quantify the different frequency components of a data series. Filters permit capturing specific components (e.g., trends, cycles, seasonalities) of the original time series. Both spectral analysis and standard filtering methods have two main drawbacks: (i) they impose strong restrictions on the possible processes underlying the dynamics of the series (e.g., stationarity), and (ii) they lead to a pure frequency-domain representation of the data, i.e., all information from the time-domain representation is lost in the operation. In a second step, we introduce wavelets, which are relatively new tools in economics and finance. They take their roots from filtering methods and Fourier analysis but overcome most of the limitations of these two methods. Their principal advantages derive from (i) combining information from both the time domain and the frequency domain and (ii) their flexibility, as they do not make strong assumptions about the data-generating process of the series under investigation.
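
As a concrete illustration of the time-frequency decomposition discussed here, the sketch below uses the PyWavelets package (a software choice assumed for illustration; the chapter does not prescribe one) to split a noisy simulated series into a smooth trend and a band of detail fluctuations.

```python
import numpy as np
import pywt

# Noisy series: slow trend + seasonal cycle + noise (illustrative only)
rng = np.random.default_rng(5)
t = np.arange(512)
series = 0.01 * t + np.sin(2 * np.pi * t / 64) + 0.5 * rng.standard_normal(512)

# Multi-level discrete wavelet transform: [cA4, cD4, cD3, cD2, cD1]
coeffs = pywt.wavedec(series, wavelet="db4", level=4)

# Keep only the coarsest approximation to extract the trend (zero out all detail levels)
trend_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
trend = pywt.waverec(trend_coeffs, wavelet="db4")[:len(series)]

# Reconstruct one detail level in isolation: fluctuations in a particular frequency band
detail_coeffs = [np.zeros_like(coeffs[0]), coeffs[1]] + [np.zeros_like(c) for c in coeffs[2:]]
cycle_band = pywt.waverec(detail_coeffs, wavelet="db4")[:len(series)]

print(len(series), len(trend), len(cycle_band))
```

Unlike a pure Fourier decomposition, each reconstructed component above is still indexed by time, so a shift in the cycle halfway through the sample remains visible.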

Philippe Masset
20. Composite Goodness-of-Fit Tests for Left-Truncated Loss Samples

In many financial models, such as those addressing value at risk and ruin probabilities, the accuracy of the fitted loss distribution in the upper tail of the loss data is crucial. In such situations, it is important to test the fitted loss distribution for goodness of fit in the upper quantiles, while giving less importance to the fit in the low quantiles and the center of the distribution. Additionally, in many loss models the recorded data are left truncated with the number of missing observations unknown. We address this gap in the literature by proposing appropriate goodness-of-fit tests. We derive the exact formulae for several goodness-of-fit statistics that should be applied to loss models with left-truncated data where the fit of a distribution in the right tail is of central importance. We apply the proposed tests to real financial losses, using a variety of distributions fitted to operational loss data and natural catastrophe insurance claims data, which are subject to recording thresholds of $1 million and $25 million, respectively.

Anna Chernobai, Svetlozar T. Rachev, Frank J. Fabozzi
21. Effect of Merger on the Credit Rating and Performance of Taiwan Security Firms

The effect of a merger on credit rating is investigated by testing the significance of the change in a firm’s rank based on a comprehensive performance score and synergistic gains. We extract principal component factors from a set of financial ratios. The percentage of variability explained and the factor loadings are adjusted to obtain a modified average weight for each financial ratio. This weight is multiplied by the standardized Z value of the variable, and the weighted values are summed over the set of variables to obtain a firm’s performance score. Performance scores are used to rank the firms. The statistical significance of the difference in pre- and post-merger ranks is tested using the Wilcoxon signed-rank test (two-tailed). We studied the mergers of financial firms after the enactment of Taiwan’s Merger Law for Financial Institutions in November 2000 to examine the synergies produced by mergers. Synergistic gains affect corporate credit ratings. After taking into account the large Taiwan market decline from 1999 to 2000, the test results show no significant operating, market, or financial synergy produced by the merged firms. The most likely explanations for the insignificant rank changes are the short observation period and the lack of an adequate sample in this investigation. We identify and define variables for merger synergy analysis, followed by principal component factor analysis, variability percentage adjustment, and performance score calculation. Finally, the Wilcoxon signed-rank test is used for hypothesis testing. Readers are referred to the appendix for details.
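
The scoring-and-testing pipeline can be sketched as follows. The weighting scheme shown (loadings averaged with explained-variance weights) is a simplified stand-in for the chapter's modified average weights, and the financial ratios are simulated, so this is an illustrative sketch rather than the authors' exact procedure.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
n_firms, n_ratios = 20, 8
pre = rng.normal(size=(n_firms, n_ratios))            # pre-merger financial ratios (simulated)
post = pre + rng.normal(scale=0.3, size=pre.shape)    # post-merger ratios (simulated)

def performance_scores(X):
    """Composite score: standardized ratios weighted by PCA loadings and explained variance."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardized Z values
    pca = PCA(n_components=3).fit(Z)
    # Weight each ratio by its |loading| on each factor, scaled by explained variance
    weights = np.abs(pca.components_).T @ pca.explained_variance_ratio_
    weights /= weights.sum()
    return Z @ weights

pre_rank = stats.rankdata(performance_scores(pre))
post_rank = stats.rankdata(performance_scores(post))

# Two-tailed Wilcoxon signed-rank test on the paired pre- and post-merger ranks
stat, p = stats.wilcoxon(pre_rank, post_rank)
print(f"Wilcoxon statistic = {stat:.2f}, p-value = {p:.3f}")
```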

Suresh Srivastava, Ken Hung
22. On-/Off-the-Run Yield Spread Puzzle: Evidence from Chinese Treasury Markets

In this chapter, we document a negative on-/off-the-run yield spread in Chinese Treasury markets. This is in contrast with the positive on-/off-the-run yield spread found in most other countries and could be called an “on-/off-the-run yield spread puzzle.” To explain this puzzle, we introduce a latent factor in the pricing of Chinese off-the-run government bonds and use this factor to model the yield difference between Chinese on-the-run and off-the-run issues. We use the nonlinear Kalman filter approach to estimate the model. Regression results suggest that the liquidity difference, market-wide liquidity conditions, and a disposition effect (unwillingness to sell old bonds) could help explain the dynamics of the latent factor in Chinese Treasury markets. The empirical results of this chapter show evidence of phenomena that are quite specific to emerging markets such as China. The Kalman filter is a mathematical method named after Rudolf E. Kalman. It is a set of mathematical equations that provides an efficient computational (recursive) means to estimate the state of a process in a way that minimizes the mean of the squared error. The nonlinear Kalman filter is the nonlinear version of the Kalman filter, which linearizes about the current mean and covariance. The filter is very powerful in several respects: it supports estimation of past, present, and even future states, and it can do so even when the precise nature of the modeled system is unknown.
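
To make the filtering idea concrete, here is a minimal linear Kalman filter for a single latent factor following an AR(1) process that drives an observed yield spread. The model, parameters, and data are purely illustrative; the chapter's own specification is nonlinear and richer than this sketch.

```python
import numpy as np

def kalman_filter_ar1(y, phi, q, h, r, x0=0.0, p0=1.0):
    """
    Filter a latent AR(1) state x_t = phi * x_{t-1} + w_t,  w ~ N(0, q),
    observed through     y_t = h * x_t + v_t,               v ~ N(0, r).
    Returns the filtered state means.
    """
    x, p = x0, p0
    filtered = []
    for obs in y:
        # Predict
        x_pred = phi * x
        p_pred = phi * p * phi + q
        # Update with the new observation
        k = p_pred * h / (h * p_pred * h + r)      # Kalman gain
        x = x_pred + k * (obs - h * x_pred)
        p = (1.0 - k * h) * p_pred
        filtered.append(x)
    return np.array(filtered)

# Simulate a latent factor and a noisy observed spread, then recover the factor
rng = np.random.default_rng(2)
T, phi, q, h, r = 300, 0.95, 0.05, 1.0, 0.10
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = phi * x_true[t - 1] + np.sqrt(q) * rng.standard_normal()
spread = h * x_true + np.sqrt(r) * rng.standard_normal(T)
x_hat = kalman_filter_ar1(spread, phi, q, h, r)
print("correlation(true, filtered):", np.corrcoef(x_true, x_hat)[0, 1].round(3))
```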

Rong Chen, Hai Lin, Qianni Yuan
23. Factor Copula for Defaultable Basket Credit Derivatives

In this article, we consider a factor copula approach for evaluating basket credit derivatives with issuer default risk and demonstrate its application to a basket credit linked note (BCLN). We generate correlated Gaussian random numbers using the Cholesky decomposition, and the correlated default times are then determined by these random numbers and the reduced-form model. Finally, the fair BCLN coupon rate is obtained by Monte Carlo simulation. We also discuss the effect of issuer default risk on the BCLN. We show that the effect of issuer default risk cannot be accounted for thoroughly by treating the issuer as a new reference entity in the widely used one-factor copula model, in which a constant default correlation is often assumed. A different default correlation between the issuer and the reference entities affects the coupon rate greatly and must be taken into account in the pricing model.
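
A minimal sketch of the simulation step described above: correlated standard normals are generated via a Cholesky factor of an equicorrelation matrix (equivalent to a one-factor structure) and mapped to default times through constant-intensity reduced-form survival curves. Intensities, correlation, and horizon are illustrative assumptions; the BCLN coupon calculation itself is omitted.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n_names, n_sims = 5, 100_000
rho = 0.3                                        # pairwise Gaussian correlation (illustrative)
lam = np.array([0.02, 0.03, 0.025, 0.04, 0.02])  # constant hazard rates (illustrative)
horizon = 5.0                                    # years

# Correlation matrix and its Cholesky factor
corr = np.full((n_names, n_names), rho)
np.fill_diagonal(corr, 1.0)
L = np.linalg.cholesky(corr)

# Correlated Gaussians -> uniforms (Gaussian copula) -> default times tau = -ln(U) / lambda
z = rng.standard_normal((n_sims, n_names)) @ L.T
u = norm.cdf(z)
tau = -np.log(u) / lam

# Monte Carlo estimate of, e.g., the probability of at least one default before the horizon
p_any_default = np.mean((tau < horizon).any(axis=1))
print(f"P(at least one default within {horizon:.0f}y) ~ {p_any_default:.4f}")
```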

Po-Cheng Wu, Lie-Jane Kao, Cheng-Few Lee
24. Panel Data Analysis and Bootstrapping: Application to China Mutual Funds

Thompson (Journal of Financial Economics 99, 1–10, 2011) argues that double clustering the standard errors of parameter estimators matters most when the numbers of firms and time periods are not too different. Using panel data on China’s mutual funds with similar numbers of firms and time periods, we estimate double- and single-clustered standard errors by a wild-cluster bootstrap procedure. To obtain the wild bootstrap samples in each cluster, we reuse the regressors (X) but modify the residuals by transforming the OLS residuals with weights that follow the popular two-point distribution suggested by Mammen (Annals of Statistics 21, 255–285, 1993) and others. We then compare them with other estimates in a set of asset pricing regressions. The comparison indicates that bootstrapped standard errors from double clustering outperform those from single clustering. Our findings support Thompson’s argument. They also suggest that bootstrapped critical values are preferred to standard asymptotic t-test critical values to avoid misleading test results.
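 
The single-clustered version of the wild-cluster bootstrap step can be sketched as follows (the double-clustered case layers a second grouping on top). The regression, the clustering variable, and the data are placeholders; only the Mammen two-point weights and the "reuse X, perturb residuals by cluster" logic reflect the description above.

```python
import numpy as np

def mammen_weights(n_clusters, rng):
    """Two-point wild-bootstrap weights (Mammen 1993): mean 0, variance 1."""
    a, b = (1 - np.sqrt(5)) / 2, (1 + np.sqrt(5)) / 2
    p_a = (np.sqrt(5) + 1) / (2 * np.sqrt(5))
    return rng.choice([a, b], size=n_clusters, p=[p_a, 1 - p_a])

def wild_cluster_bootstrap_se(y, X, clusters, n_boot=999, seed=0):
    """Bootstrap standard errors of OLS coefficients, perturbing residuals cluster by cluster."""
    rng = np.random.default_rng(seed)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    ids = np.unique(clusters)
    betas = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        w = mammen_weights(len(ids), rng)
        w_obs = w[np.searchsorted(ids, clusters)]      # same weight for every obs in a cluster
        y_star = X @ beta + resid * w_obs              # reuse X, modify the residuals only
        betas[b] = np.linalg.lstsq(X, y_star, rcond=None)[0]
    return betas.std(axis=0, ddof=1)

# Illustrative panel: 50 funds x 40 periods, clustered by fund
rng = np.random.default_rng(6)
n_funds, n_t = 50, 40
fund_id = np.repeat(np.arange(n_funds), n_t)
x = rng.normal(size=n_funds * n_t)
y = 0.5 * x + np.repeat(rng.normal(size=n_funds), n_t) + rng.normal(size=n_funds * n_t)
X = np.column_stack([np.ones_like(x), x])
print("bootstrap SEs:", wild_cluster_bootstrap_se(y, X, fund_id))
```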

Win Lin Chou, Shou Zhong Ng, Yating Yang
25. Market Segmentation and Pricing of Closed-End Country Funds: An Empirical Analysis

This paper finds that for closed-end country funds, the international CAPM can be rejected for the underlying securities (NAVs) but not for the share prices. This finding indicates that country fund share prices are determined globally, whereas the NAVs reflect both global and local prices of risk. Cross-sectional variations in the discounts or premiums for country funds are explained by differences in the risk exposures of the share prices and the NAVs. Finally, this paper shows that the share price and NAV returns exhibit predictable variation and that country fund premiums vary over time due to time-varying risk premiums. The paper employs the generalized method of moments (GMM) to estimate stochastic discount factors and examines whether the price of risk of closed-end country fund shares and NAVs is identical. GMM is an econometric method developed by Hansen (Econometrica 50, 1029–1054, 1982) that generalizes the method of moments. Essentially, GMM finds the values of the parameters so that the sample moment conditions are satisfied as closely as possible.

Dilip K. Patro
26. A Comparison of Portfolios Using Different Risk Measurements

In order to find out which risk measure is the best indicator of efficiency in a portfolio, this study considers three different risk measurements: the mean-variance model, the mean absolute deviation model, and the downside risk model. Short selling is also taken into account, since it is an important strategy that can bring a portfolio much closer to the efficient frontier by improving its risk-return trade-off. Therefore, six portfolio rebalancing models, namely the MV, MAD, and downside risk models, each with and without short selling, are compared to determine which is the most efficient. All models simultaneously consider the criteria of return and risk measurement. When short selling is allowed, the models also consider minimizing the proportion of short selling. Therefore, multiple objective programming is employed to transform the multiple objectives into a single objective in order to obtain a compromise solution. An example is used to perform a simulation, and the results indicate that the MAD model, incorporated with a short selling model, has the highest market value and lowest risk.

Jing Rung Yu, Yu Chuan Hsu, Si Rou Lim
27. Using Alternative Models and a Combining Technique in Credit Rating Forecasting: An Empirical Study

Credit rating forecasting has long been important for bond classification and loan analysis. In particular, under the Basel II environment, regulators in Taiwan have requested that banks estimate the default probability of a loan based on its credit classification. A proper forecasting procedure for the credit rating of a loan is therefore crucial for complying with the rule. A credit rating is an ordinal scale on which the credit category of a firm can be ranked from high to low, but the scale of the difference between categories is unknown. To model the ordinal outcomes, this study first utilizes the ordered logit and the ordered probit models, respectively. Then, we use the ordered logit combining method to weigh the probability measures of the different techniques, as described in Kamstra and Kennedy (International Journal of Forecasting 14, 83–93, 1998), to form the combining model. The samples consist of firms in the TSE and the OTC market and are divided into three industries for analysis. We consider financial variables, market variables, as well as macroeconomic variables and estimate their parameters for out-of-sample tests. By means of the cumulative accuracy profile, the receiver operating characteristic, and McFadden’s R2, we measure the goodness-of-fit and the accuracy of each prediction model. Performance evaluations are conducted to compare the forecasting results, and we find that the combining technique does improve the predictive power.

Cheng-Few Lee, Kehluh Wang, Yating Yang, Chan-Chien Lien
28. Can We Use the CAPM as an Investment Strategy?: An Intuitive CAPM and Efficiency Test

The aim of this chapter is to check whether certain trading rules, based on the undervaluation concept arising from the CAPM, could be useful as investment strategies and could therefore be used to beat the market. If such strategies work, we will have a useful tool for investors; otherwise, we will obtain a test whose results are connected with the Efficient Market Hypothesis (EMH) and with the CAPM. The basic strategies were set out in Gómez-Bezares, Madariaga, and Santibáñez (Análisis Financiero 68:72–96, 1996). Our purpose now is to reconsider them, to improve the statistical analysis, and to examine a more recent period in our study. The methodology used is both intuitive and rigorous: we analyze how many times we beat the market with different strategies in order to check whether beating the market happens by chance. Furthermore, we set out to study, statistically, when and by how much we beat it, and to analyze whether this is significant.
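
The "how many times do we beat the market, and could that happen by chance?" question maps naturally onto a binomial sign test; the counts below are invented for illustration, and the sketch assumes SciPy 1.7+ for scipy.stats.binomtest.

```python
from scipy.stats import binomtest

# Suppose (hypothetically) a strategy beat the market in 68 of 100 periods
wins, trials = 68, 100

# Under the null of no skill, beating the market each period is a fair coin flip (p = 0.5)
result = binomtest(wins, trials, p=0.5, alternative="greater")
print(f"P(>= {wins} wins out of {trials} by chance) = {result.pvalue:.4f}")
```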

Fernando Gómez-Bezares, Luis Ferruz, Maria Vargas
29. Group Decision-Making Tools for Managerial Accounting and Finance Applications

To deal with today’s uncertain and dynamic business environments, in which decision makers with different backgrounds compute trade-offs among multiple organizational goals, our series of papers adopts an analytic hierarchy process (AHP) approach to solve various accounting or finance problems, such as developing business and banking performance evaluation systems. AHP uses a hierarchical schema to incorporate nonfinancial and external performance measures. Our model has a broader set of measures that can examine external and nonfinancial performance as well as internal and financial performance. While AHP is one of the most popular multiple-goal decision-making tools, the multiple-criteria and multiple-constraint (MC2) linear programming approach can also be used to solve group decision-making problems such as transfer pricing and capital budgeting. This model is rooted in two facts. First, from the linear system structure’s point of view, the criteria and constraints may be “interchangeable.” Thus, like multiple criteria, multiple-constraint (resource availability) levels can be considered. Second, from the application’s point of view, it is more realistic to consider multiple resource availability levels (discrete right-hand sides) than a single resource availability level in isolation. The philosophy behind this perspective is that the availability of resources can fluctuate depending on the forces of the decision situation, such as the desirability levels believed by the different managers. A solution procedure is provided to show, step by step, how to obtain possible solutions that reach the best compromise value for the multiple goals and multiple-constraint levels.
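
The AHP step of turning a pairwise-comparison matrix into priority weights is commonly done with the principal eigenvector; here is a minimal sketch with an invented three-criterion comparison matrix. The criteria names and judgments are assumptions, not the chapter's application.

```python
import numpy as np

# Hypothetical pairwise comparisons of three performance criteria on Saaty's 1-9 scale
# (e.g., financial vs. customer vs. internal-process perspectives; numbers invented)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                       # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                          # normalized priority weights

# Consistency index CI = (lambda_max - n) / (n - 1); CI / RI < 0.1 is the usual acceptance rule
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
print("priorities:", weights.round(3), " consistency index:", round(ci, 3))
```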

Wikil Kwak, Yong Shi, Cheng-Few Lee, Heeseok Lee
30. Statistics Methods Applied in Employee Stock Options

This study presents model-based and compensation-based approaches to determining the price-subjective value of employee stock options (ESOs). In the model-based approach, we consider a utility-maximizing model in which employees allocate their wealth among company stock, a market portfolio, and risk-free bonds, and we then derive ESO formulas that take into account illiquidity and sentiment effects. By using the method of change of measure, the derived formulas are simply like those of the market value with altered parameters. To calculate the compensation-based subjective value, we group employees by hierarchical clustering with a K-means approach and back out the option value in a competitive equilibrium employment market. Further, we test illiquidity and sentiment effects on ESO values by running regressions that account for the problem of standard errors in finance panel data. Using executive stock options and compensation data paid between 1992 and 2004 for firms covered by the Compustat Executive Compensation Database, we find that subjective value is positively related to sentiment and negatively related to illiquidity in all specifications, consistent with the offsetting roles of sentiment and risk aversion. Moreover, executives value ESOs at a 48 % premium to the Black-Scholes value, and ESO premiums are explained by a sentiment level of 12 % in risk-adjusted, annualized excess return, suggesting a high level of executive overconfidence.

Li-jiun Chen, Cheng-der Fuh
31. Structural Change and Monitoring Tests

This chapter focuses on various structural change and monitoring tests for a class of widely used time series models in economics and finance, including I(0), I(1), and I(d) processes and the cointegration relationship. A structural break appears as a change in endogenous relationships. This break could be caused by a shift in the mean or variance or by a persistent change in the data property. In general, structural change tests can be categorized into two types: one is the classical approach to testing for structural change, which employs retrospective tests using a historical data set of a given length; the other is the fluctuation-type test in a monitoring scheme, which means that, given a history period for which a regression relationship is known to be stable, we test whether incoming data are consistent with the previously established relationship. Several structural change tests, such as the CUSUM of squares test, the QLR test, the prediction test, the multiple break test, bubble tests, cointegration breakdown tests, and the monitoring fluctuation test, are discussed in this chapter, and we further illustrate the details and usefulness of these tests.

Cindy Shin-Huei Wang, Yi Meng Xie
32. Consequences for Option Pricing of a Long Memory in Volatility

Conditionally heteroscedastic time series models are used to describe the volatility of stock index returns. Volatility has a long memory property in the most general models, in which case the autocorrelations of volatility decay at a hyperbolic rate; contrasts are made with popular, short memory specifications whose autocorrelations decay more rapidly at a geometric rate. Options are valued for ARCH volatility models by calculating the discounted expectations of option payoffs under an appropriate risk-neutral measure. Monte Carlo methods provide the expectations. The speed and accuracy of the calculations are enhanced by two variance reduction methods, which use antithetic and control variables. The economic consequences of a long memory assumption about volatility are documented by comparing implied volatilities for option prices obtained from short and long memory volatility processes. Results are given for options on the S&P 100 index, with lives of up to 2 years. The long memory assumption is found to have a significant impact upon the term structure of implied volatilities and a relatively minor impact upon smile shapes. These conclusions are important because evidence for long memory in volatility has been found in the prices of many assets.
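
To illustrate the antithetic-variates idea in a setting simpler than the long-memory ARCH models of the chapter, the sketch below prices a European call by risk-neutral Monte Carlo under constant volatility; the long-memory case would replace the constant sigma with a simulated volatility path. Parameters are illustrative assumptions.

```python
import numpy as np

def mc_call_price(s0, k, r, sigma, t, n_paths=100_000, seed=0):
    """European call via risk-neutral Monte Carlo with antithetic variates."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    z = np.concatenate([z, -z])                              # antithetic pairs
    st = s0 * np.exp((r - 0.5 * sigma ** 2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

# Illustrative parameters; the exact Black-Scholes value here is about 10.45
print(round(mc_call_price(s0=100, k=100, r=0.05, sigma=0.2, t=1.0), 3))
```

Pairing each draw with its negative removes much of the odd-moment simulation noise, which is why the estimate settles near the closed-form value with relatively few paths.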

Stephen J. Taylor
33. Seasonal Aspects of Australian Electricity Market

Australian electricity spot prices differ considerably from equity spot prices in that they contain an extremely rapid mean-reversion process. The electricity spot price can increase to the market cap price of AU$12,500 per megawatt hour (MWh) and revert to its mean level (about AU$30) within a half-hour interval. This has implications for derivative pricing and risk management. For example, while the Black and Scholes option pricing model works reasonably well for equity market-based securities, it performs poorly for commodities like electricity. Understanding the dynamics of electricity spot prices and demand is also important in order to correctly forecast electricity prices. We develop econometric models for seasonal patterns in both price returns and proportional changes in demand for electricity. We also model extreme spikes in the data. Our study identifies both seasonality effects and dramatic price reversals in the Australian electricity market. The pricing seasonality effects include time-of-day, day-of-week, monthly, and yearly effects. There is also evidence of seasonality in demand for electricity.

Vikash Ramiah, Stuart Thomas, Richard Heaney, Heather Mitchell
34. Pricing Commercial Timberland Returns in the United States

Commercial timberland assets have attracted more attention in recent decades. One unique feature of this asset class is rooted in biological growth, which is independent of traditional financial markets. Using both parametric and nonparametric approaches, we evaluate private- and public-equity timberland investments in the United States. Private-equity timberland returns are proxied by the NCREIF Timberland Index, whereas public-equity timberland returns are proxied by the value-weighted returns on a dynamic portfolio of US publicly traded forestry firms that had or have been managing timberlands. The results from the parametric analysis reveal that private-equity timberland investments outperform the market and have low systematic risk, whereas public-equity timberland investments fare similarly to the market. The nonparametric stochastic discount factor analyses reveal that both private- and public-equity timberland assets have higher excess returns. Static estimations of the capital asset pricing model and the Fama-French three-factor model are obtained by ordinary least squares, whereas dynamic estimations are obtained by state-space specifications with the Kalman filter. In estimating the stochastic discount factors, linear programming is used.

Bin Mei, Michael L. Clutter
35. Optimal Orthogonal Portfolios with Conditioning Information

Optimal orthogonal portfolios are a central feature of tests of asset pricing models and are important in active portfolio management problems. These portfolios combine with a benchmark portfolio to form ex ante mean-variance efficient portfolios. This paper derives and characterizes optimal orthogonal portfolios in the presence of conditioning information in the form of a set of lagged instruments. In this setting, studied by Hansen and Richard (1987), the conditioning information is used to optimize with respect to the unconditional moments. We present an empirical illustration of the properties of the optimal orthogonal portfolios. From an asset pricing perspective, a standard stock market index is far from efficient when portfolios trade based on lagged interest rates and dividend yields. From an active portfolio management perspective, the example shows that a strong tilt toward bonds improves the efficiency of equity portfolios. The methodology in this paper includes regression and maximum likelihood parameter estimation, as well as method of moments estimation. We form maximum likelihood estimates of nonlinear functions as the functions evaluated at the maximum likelihood parameter estimates. Our analytical results also provide economic interpretation for test statistics like the Wald test or the multivariate F test used in asset pricing research.

Wayne E. Ferson, Andrew F. Siegel
36. Multifactor, Multi-indicator Approach to Asset Pricing: Method and Empirical Evidence

This paper uses a multifactor, multi-indicator approach to test the capital asset pricing model (CAPM) and the arbitrage pricing theory (APT). This approach is able to solve the measurement problem of the market portfolio in testing the CAPM, and it is also able to test the APT directly by linking the common factors to macroeconomic indicators. Our results from testing the CAPM support Stambaugh’s (Journal of Financial Economics, 10, 237–268, 1982) argument that inference about tests of the CAPM is insensitive to alternative market indexes. We propose a MIMIC approach to test the CAPM and the APT. The beta estimated from the MIMIC model by allowing measurement error on the market portfolio does not significantly improve on the OLS beta, while the MLE estimator does a better job than the OLS and GLS estimators in the cross-sectional regressions because the MLE estimator takes care of the measurement error in beta. Therefore, the measurement error problem in beta is more serious than that in the market portfolio.

Cheng-Few Lee, K. C. John Wei, Hong-Yi Chen
37. Binomial OPM, Black–Scholes OPM, and Their Relationship: Decision Tree and Microsoft Excel Approach

This chapter first demonstrates how Microsoft Excel can be used to create the decision trees for the binomial option pricing model. At the same time, it discusses the binomial option pricing model in a less mathematical fashion. All the mathematical calculations are taken care of by the Microsoft Excel program presented in this chapter. Finally, the chapter also uses the decision tree approach to demonstrate the relationship between the binomial option pricing model and the Black–Scholes option pricing model.
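
The same decision-tree logic the chapter builds in Excel can be expressed in a few lines of code: a Cox-Ross-Rubinstein binomial tree whose price converges to the Black-Scholes value as the number of steps grows. The parameter choices below are illustrative and the CRR parameterization is one common choice, not necessarily the one used in the chapter's workbook.

```python
import math

def binomial_call(s0, k, r, sigma, t, n):
    """Cox-Ross-Rubinstein binomial price of a European call (backward induction on the tree)."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))        # up factor
    d = 1.0 / u                                # down factor
    p = (math.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs at each node of the last level of the decision tree
    values = [max(s0 * u ** j * d ** (n - j) - k, 0.0) for j in range(n + 1)]
    # Roll back through the tree, discounting the risk-neutral expectation at each node
    for _ in range(n):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j]) for j in range(len(values) - 1)]
    return values[0]

# Convergence toward the Black-Scholes value (about 10.45 for these inputs)
for n in (5, 50, 500):
    print(n, round(binomial_call(100, 100, 0.05, 0.2, 1.0, n), 4))
```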

John C. Lee
38. Dividend Payments and Share Repurchases of US Firms: An Econometric Approach

The analyses of dividends paid by firms and of decisions to repurchase their own shares require an econometric approach because of the complex dynamic interrelationships. This chapter begins, first, by highlighting the importance of developing comprehensive econometric models for these interrelationships. It is common in finance research to spell out “specific hypotheses” and conduct empirical research to investigate the validity of the hypotheses. However, such an approach can be misleading in situations where variables are simultaneously determined, as is often the case in financial applications. Second, financial and accounting databases such as Compustat are complex and contain useful longitudinal information on variables that display considerable heterogeneity across firms. Empirical analyses of financial databases demand the use of econometric and computational methods in order to draw robust inferences. For example, using longitudinal data on the same US firms, it was found that dividends were neither “disappearing” nor “reappearing” but were relatively stable in the period 1992–2007 (Bhargava, Journal of the Royal Statistical Society A, 173, 631–656, 2010). Third, the econometric methodology tackled the dynamics of the relationships and investigated the endogeneity of certain explanatory variables. Identification of the model parameters is achieved in such models by exploiting the cross-equation restrictions on the coefficients in different time periods. Moreover, the estimation entails using nonlinear optimization methods to compute the maximum likelihood estimates of the dynamic random effects models and to test statistical hypotheses using likelihood ratio tests. For example, share repurchases were treated as an endogenous explanatory variable in the models for dividend payments, and dividends were treated as endogenous variables in the models for share repurchases. The empirical results showed that dividends are decided quarterly at the first stage and that higher dividend payments lowered share repurchases, which firms make at longer intervals. These findings cast some doubt on the evidence for a simple “substitution” hypothesis between dividends and share repurchases. The appendix outlines some of the econometric estimation techniques and tests that are useful for research in finance.

Alok Bhargava
39. Term Structure Modeling and Forecasting Using the Nelson-Siegel Model

In this chapter, we illustrate some recent developments in yield curve modeling by introducing a latent factor model called the dynamic Nelson-Siegel model. This model not only provides a good in-sample fit but also produces superior out-of-sample performance. Beyond the Treasury yield curve, the model can also be useful for other assets, such as corporate bonds and volatility. Moreover, the model suggests generalized duration components corresponding to the level, slope, and curvature risk factors. The dynamic Nelson-Siegel model can be estimated via a one-step procedure, such as the Kalman filter, which can also easily accommodate other variables of interest. Alternatively, we can estimate the model through a two-step process by fixing one parameter and estimating the rest with ordinary least squares. The model is flexible and capable of replicating a variety of yield curve shapes: upward sloping, downward sloping, humped, and inverted humped. Forecasting the yield curve is achieved by forecasting the factors, and we can impose either a univariate autoregressive structure or a vector autoregressive structure on the factors.
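
The two-step estimation mentioned above (fix the decay parameter, then run OLS on the three Nelson-Siegel loadings) can be sketched as follows. The yields are invented, and the lambda value is the commonly used Diebold-Li choice for maturities in months, taken here as an assumption.

```python
import numpy as np

def ns_loadings(maturities, lam):
    """Nelson-Siegel level, slope, and curvature loadings for a fixed decay parameter lambda."""
    m = np.asarray(maturities, dtype=float)
    slope = (1 - np.exp(-lam * m)) / (lam * m)
    curvature = slope - np.exp(-lam * m)
    return np.column_stack([np.ones_like(m), slope, curvature])

# Illustrative cross-section of yields (maturities in months, yields in percent)
maturities = np.array([3, 6, 12, 24, 36, 60, 84, 120])
yields = np.array([4.2, 4.4, 4.7, 5.0, 5.2, 5.4, 5.5, 5.6])

lam = 0.0609          # fixes the curvature loading's peak near 30 months (Diebold-Li convention)
X = ns_loadings(maturities, lam)
beta, *_ = np.linalg.lstsq(X, yields, rcond=None)   # step 2: OLS for level, slope, curvature
fitted = X @ beta
print("level, slope, curvature factors:", beta.round(3))
print("fitted yields:", fitted.round(2))
```

Repeating the OLS step date by date produces the factor time series, on which an AR or VAR forecasting model can then be imposed as described above.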

Jian Hua
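
To make the two-step procedure in the abstract above concrete, the following Python sketch fits Nelson-Siegel factors by cross-sectional least squares with the decay parameter held fixed and then forecasts the curve from univariate AR(1) factor dynamics. The maturities, the decay value 0.0609 (a value commonly used when maturities are measured in months), and the simulated yields are illustrative assumptions, not data or estimates from the chapter.
```python
import numpy as np

# Nelson-Siegel factor loadings for maturities tau (months) and decay parameter lam
def ns_loadings(tau, lam):
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    curv = slope - np.exp(-x)
    return np.column_stack([np.ones_like(tau), slope, curv])

rng = np.random.default_rng(0)
maturities = np.array([3, 6, 12, 24, 36, 60, 84, 120], dtype=float)  # months
lam = 0.0609        # fixed decay parameter (illustrative, common choice)
T = 200             # number of monthly cross sections (synthetic)

# Simulate level/slope/curvature factors as AR(1) processes and build yields
true_beta = np.zeros((T, 3))
for t in range(1, T):
    true_beta[t] = 0.95 * true_beta[t - 1] + rng.normal(scale=[0.3, 0.4, 0.5])
true_beta += [5.0, -1.0, 0.5]
X = ns_loadings(maturities, lam)
yields = true_beta @ X.T + rng.normal(scale=0.05, size=(T, len(maturities)))

# Step 1: cross-sectional OLS each month with lam fixed -> estimated factors
beta_hat = np.linalg.lstsq(X, yields.T, rcond=None)[0].T   # T x 3

# Step 2: univariate AR(1) per factor, iterated one step ahead to forecast
def ar1_forecast(series):
    y, ylag = series[1:], series[:-1]
    Z = np.column_stack([np.ones_like(ylag), ylag])
    c, phi = np.linalg.lstsq(Z, y, rcond=None)[0]
    return c + phi * series[-1]

beta_fcst = np.array([ar1_forecast(beta_hat[:, j]) for j in range(3)])
yield_fcst = X @ beta_fcst
print("Forecast yield curve (%):", np.round(yield_fcst, 3))
```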
40. The Intertemporal Relation Between Expected Return and Risk on Currency

The literature has so far focused on the risk-return trade-off in equity markets and ignored alternative risky assets. This paper examines the presence and significance of an intertemporal relation between expected return and risk in the foreign exchange market. The paper provides new evidence on the intertemporal capital asset pricing model by using high-frequency intraday data on currency and by presenting significant time variation in the risk aversion parameter. Five-minute returns on the spot exchange rates of the US dollar vis-à-vis six major currencies (the euro, Japanese yen, British pound sterling, Swiss franc, Australian dollar, and Canadian dollar) are used to test the existence and significance of a daily risk-return trade-off in the FX market based on the GARCH, realized, and range volatility estimators. The results indicate a positive but statistically weak relation between risk and return on currency. Our empirical analysis relies on the maximum likelihood estimation of the GARCH-in-mean models as described in Appendix 1. We also use seemingly unrelated regressions (SUR) and panel data estimation to investigate the significance of a time-series relation between expected return and risk on currency as described in Appendix 2.

Turan G. Bali, Kamil Yilmaz
41. Quantile Regression and Value at Risk

This paper studies quantile regression (QR) estimation of Value at Risk (VaR). VaRs estimated by the QR method display some attractive properties. In this paper, different QR models for estimating VaR are introduced. In particular, VaR estimations based on quantile regression of QAR models, copula models, ARCH models, GARCH models, and CaViaR models are systematically introduced. Compared with traditional methods based on distributional assumptions, the QR method has the important property that it is robust to non-Gaussian distributions. Quantile estimation is influenced only by the local behavior of the conditional distribution of the response near the specified quantile. As a result, the estimates are not sensitive to outlier observations. Such a property is especially attractive in financial applications since many financial series, such as portfolio returns (or log returns), are usually not normally distributed. To highlight the importance of the QR method in estimating VaR, we apply QR techniques to estimate VaRs in international equity markets. Numerical evidence indicates that QR is a robust estimation method for VaR.

Zhijie Xiao, Hongtao Guo, Miranda S. Lam
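
As a minimal illustration of the QR approach to VaR discussed above, the sketch below fits a 5% quantile regression of returns on the lagged absolute return using statsmodels and reads the one-day VaR off the fitted quantile line. The simulated returns and the single-regressor specification are assumptions made only for illustration; the chapter's QAR, copula, and CaViaR specifications are richer.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 1500
# Synthetic daily returns with volatility clustering (GARCH-like simulation)
ret = np.zeros(T)
sigma2 = np.full(T, 1e-4)
for t in range(1, T):
    sigma2[t] = 1e-6 + 0.08 * ret[t - 1] ** 2 + 0.90 * sigma2[t - 1]
    ret[t] = np.sqrt(sigma2[t]) * rng.standard_t(df=6)

# Quantile regression of today's return on yesterday's absolute return:
# the fitted 5% conditional quantile is (minus) the one-day 95% VaR.
y = ret[1:]
X = sm.add_constant(np.abs(ret[:-1]))
res = sm.QuantReg(y, X).fit(q=0.05)
var_95 = -res.predict(X)            # positive numbers: predicted loss thresholds
print(res.params)                   # intercept and slope of the 5% quantile line
print("Average 95% VaR estimate:", var_95.mean())
print("In-sample violation rate:", np.mean(y < -var_95))  # close to 5% by design
```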
42. Earnings Quality and Board Structure: Evidence from South East Asia

Using a sample of listed firms in Southeast Asian countries, this paper examines how board structure and corporate ownership structure jointly affect earnings quality. I find that the negative association between the separation of control rights from cash flow rights and earnings quality varies systematically with board structure. In particular, this negative association is less pronounced in firms with high equity ownership by outside directors. I also document that, among firms with high separation of control rights from cash flow rights, those with a higher proportion of outside directors on the board have higher earnings quality. Overall, my results suggest that outside directors’ equity ownership and board independence are associated with better financial reporting outcomes, especially in firms with high expected agency costs arising from the misalignment of control rights and cash flow rights. The econometric method employed is panel data regression. In a panel data setting, I address both cross-sectional and time-series dependence. Gow et al. (2010, The Accounting Review 85(2), 483–512) find that in the presence of both cross-sectional and time-series dependence, the two-way clustering method, which allows for both forms of dependence, produces well-specified test statistics. Following Gow et al. (2010), I employ the two-way clustering method, clustering standard errors by both firm and year in my panel regressions. Johnston and DiNardo (1997, Econometric methods. New York: McGraw-Hill) and Greene (2000, Econometric analysis. Upper Saddle River: Prentice-Hall) are two econometric textbooks that contain a detailed discussion of the econometric issues relating to panel data.

Kin-Wai Lee
43. Rationality and Heterogeneity of Survey Forecasts of the Yen-Dollar Exchange Rate: A Reexamination

This chapter examines the rationality and diversity of industry-level forecasts of the yen-dollar exchange rate collected by the Japan Center for International Finance. In several ways we update and extend the seminal work by Ito (1990, American Economic Review 80, 434–449). We compare three specifications for testing rationality: the “conventional” bivariate regression, the univariate regression of a forecast error on a constant and other information set variables, and an error correction model (ECM). We find that the bivariate specification, while producing consistent estimates, suffers from two defects: first, the conventional restrictions are sufficient but not necessary for unbiasedness; second, the test has low power. However, before we can apply the univariate specification, we must conduct pretests for the stationarity of the forecast error. We find a unit root in the 6-month horizon forecast error for all groups, thereby rejecting unbiasedness and weak efficiency at the pretest stage. For the other two horizons, we find much evidence in favor of unbiasedness but not weak efficiency. Our ECM rejects unbiasedness for all forecasters at all horizons. We conjecture that these results, too, occur because the restrictions test sufficiency, not necessity. We extend the analysis of industry-level forecasts to a SUR-type structure using an innovative GMM technique (Bonham and Cohen 2001, Journal of Business & Economic Statistics, 19, 278–291) that allows for forecaster cross-correlation due to the existence of common shocks and/or herd effects. Our GMM tests of micro-homogeneity uniformly reject the hypothesis that forecasters exhibit similar rationality characteristics.

Richard Cohen, Carl S. Bonham, Shigeyuki Abe
44. Stochastic Volatility Structures and Intraday Asset Price Dynamics

The behavior of financial asset price data observed intraday is quite different from that of the same processes observed at daily and longer sampling intervals. Volatility estimates obtained from intraday data can be badly distorted if anomalies and intraday trading patterns are not accounted for in the estimation process. In this paper I consider conditional volatility estimators as special cases of a general stochastic volatility structure. The theoretical asymptotic distribution of the measurement error process for these estimators is considered for particular features observed in intraday financial asset price processes. Specifically, I consider the effects of (i) induced serial correlation in returns processes, (ii) excess kurtosis in the underlying unconditional distribution of returns, (iii) market anomalies such as market opening and closing effects, and (iv) failure to account for intraday trading patterns. These issues are considered with applications to option pricing/trading strategies and constant/dynamic hedging frameworks in mind. Empirical examples are provided from transactions data sampled into 5-, 15-, 30-, and 60-min intervals for heavily capitalized stock market, market index, and index futures price processes.

Gerard L. Gannon
45. Optimal Asset Allocation Under VaR Criterion: Taiwan Stock Market

Value at risk (VaR) measures the worst expected loss over a given time horizon under normal market conditions at a specific level of confidence. These days, VaR is the benchmark for measuring, monitoring, and controlling downside financial risk. VaR is determined by the left tail of the cumulative probability distribution of expected returns. The expected return distribution can be generated by assuming a normal distribution, by historical simulation, or by Monte Carlo simulation. Further, a VaR-efficient frontier is constructed, and an asset allocation model subject to a target VaR constraint is examined. This paper examines the riskiness of the Taiwan stock market by determining the VaR from the expected return distribution generated by historical simulation. Our result indicates that the cumulative probability distribution has a fatter left tail than that of a normal distribution, which implies a riskier market. We also examine a two-sector asset allocation model subject to a target VaR constraint. The VaR-efficient frontier of TAIEX-traded stocks mostly recommended a corner portfolio.

Ken Hung, Suresh Srivastava
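
A minimal sketch of the historical-simulation VaR computation described above, with a fat-tailed synthetic series standing in for TAIEX returns; the confidence level and sample are assumptions for illustration only.
```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
# Placeholder for TAIEX daily returns; a fat-tailed synthetic series is used here.
returns = 0.01 * rng.standard_t(df=4, size=2500)

confidence = 0.99
# Historical-simulation VaR: the loss at the (1 - confidence) quantile of past returns
var_hs = -np.quantile(returns, 1 - confidence)

# Normal VaR for comparison (same mean and standard deviation)
var_normal = -(returns.mean() + returns.std(ddof=1) * norm.ppf(1 - confidence))

print(f"99% historical-simulation VaR: {var_hs:.4f}")
print(f"99% normal VaR:               {var_normal:.4f}")
# A fatter left tail shows up as var_hs exceeding var_normal.
```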
46. Alternative Methods for Estimating Firm’s Growth Rate

The most common valuation model is the dividend growth model, in which the growth rate is found by taking the product of the retention rate and the return on equity. What is less well understood are the basic assumptions of this model. In this paper, we demonstrate that the model makes strong assumptions regarding the financing mix of the firm. In addition, we discuss several methods suggested in the literature for estimating growth rates and analyze whether these approaches are consistent with the use of a constant discount rate to evaluate the firm’s assets and equity. The literature has also suggested estimating the growth rate using the average percentage change method, the compound-sum method, and/or regression methods. We demonstrate that the average percentage change method is very sensitive to extreme observations and that, on average, the regression method yields similar but somewhat smaller estimates of the growth rate than the compound-sum method. We also discuss the inferred method suggested by Gordon and Gordon (1997) for estimating the growth rate. Advantages, disadvantages, and the interrelationships among these estimation methods are discussed in detail.

Ivan E. Brick, Hong-Yi Chen, Cheng-Few Lee
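
The growth-rate estimators compared in the chapter can be illustrated in a few lines. The dividend series, retention rate, and return on equity below are hypothetical numbers chosen for illustration, not figures from the chapter.
```python
import numpy as np

# Dividends per share over six years (illustrative numbers)
d = np.array([1.00, 1.08, 1.15, 1.25, 1.32, 1.45])
n = len(d) - 1  # number of growth periods

# (1) Average percentage change: arithmetic mean of period-by-period growth rates
avg_pct = np.mean(d[1:] / d[:-1] - 1)

# (2) Compound-sum (geometric) method: solve d_n = d_0 * (1 + g)^n
compound = (d[-1] / d[0]) ** (1 / n) - 1

# (3) Regression method: regress ln(d_t) on t; the slope approximates the growth rate
t = np.arange(len(d))
slope = np.polyfit(t, np.log(d), 1)[0]
regression = np.exp(slope) - 1

# (4) Sustainable growth: retention rate times return on equity (g = b * ROE)
retention_rate, roe = 0.6, 0.125
sustainable = retention_rate * roe

print(f"average % change: {avg_pct:.4f}, compound: {compound:.4f}, "
      f"regression: {regression:.4f}, b*ROE: {sustainable:.4f}")
```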
47. Econometric Measures of Liquidity

A security is liquid to the extent that an investor can trade significant quantities of it quickly, at or near the current market price, and at low transaction cost. As such, liquidity is a multidimensional concept. In this chapter, I review several widely used econometrics- or statistics-based measures that researchers have developed to capture one or more dimensions of a security’s liquidity, for example, the limited dependent variable model of Lesmond et al. (Review of Financial Studies, 12(5), 1113–1141, 1999) and the autocovariance of price changes (Roll, R., Journal of Finance, 39, 1127–1139, 1984). These alternative proxies have been designed to be estimated using either low-frequency or high-frequency data, so I discuss four liquidity proxies that are estimated using low-frequency data and two proxies that require high-frequency data. Low-frequency measures permit the study of liquidity over relatively long time horizons; however, they do not reflect actual trading processes. To overcome this limitation, high-frequency liquidity proxies are often used as benchmarks to determine the best low-frequency proxy. In this chapter, I find that estimates from the effective tick measure perform best among the four low-frequency measures tested.

Jieun Lee
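
As one example of the low-frequency proxies surveyed above, the sketch below computes Roll's (1984) autocovariance-based spread estimator on simulated trade prices. The bid-ask bounce simulation and its parameter values are assumptions made only for illustration.
```python
import numpy as np

rng = np.random.default_rng(3)
# Simulate trade prices under Roll's (1984) model: efficient price + bid-ask bounce
n, half_spread = 5000, 0.05
efficient = np.cumsum(rng.normal(scale=0.02, size=n)) + 100.0
direction = rng.choice([-1.0, 1.0], size=n)          # buyer- vs. seller-initiated
prices = efficient + half_spread * direction

dp = np.diff(prices)
autocov = np.cov(dp[1:], dp[:-1])[0, 1]

# Roll's estimator of the effective spread; defined only when the autocovariance is negative
roll_spread = 2 * np.sqrt(-autocov) if autocov < 0 else np.nan
print(f"serial covariance of price changes: {autocov:.5f}")
print(f"Roll spread estimate: {roll_spread:.4f} (true spread = {2 * half_spread:.4f})")
```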
48. A Quasi-Maximum Likelihood Estimation Strategy for Value-at-Risk Forecasting: Application to Equity Index Futures Markets

We present the first empirical evidence for the validity of the ARMA-GARCH model with tempered stable innovations to estimate 1-day-ahead value at risk in futures markets for the S&P 500, DAX, and Nikkei. We also provide empirical support that GARCH models based on normal innovations appear not to be as well suited as infinitely divisible models for predicting financial crashes. The results are compared with the predictions based on data in the cash market. We also provide the first empirical evidence on how adding trading volume to the GARCH model improves its forecasting ability. In our empirical analysis, we forecast 1 % value at risk in both spot and futures markets using normal and tempered stable GARCH models following a quasi-maximum likelihood estimation strategy. In order to determine the accuracy of forecasting for each specific model, backtesting using Kupiec’s proportion of failures test is applied. For each market, the model with a lower number of violations is preferred. Our empirical result indicates the usefulness of classical tempered stable distributions for market risk management and asset pricing.

Oscar Carchano, Young Shin (Aaron) Kim, Edward W. Sun, Svetlozar T. Rachev, Frank J. Fabozzi
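
A minimal sketch of the Kupiec proportion-of-failures backtest mentioned above; the violation count and sample size in the example are hypothetical.
```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations, n_obs, var_level=0.01):
    """Kupiec proportion-of-failures likelihood ratio test for VaR backtesting."""
    x, p = violations, var_level
    phat = x / n_obs
    # Log-likelihood under the null (violation probability = p) and the alternative (= phat)
    ll_null = (n_obs - x) * np.log(1 - p) + x * np.log(p)
    ll_alt = (n_obs - x) * np.log(1 - phat) + x * np.log(phat) if 0 < x < n_obs else 0.0
    lr = -2 * (ll_null - ll_alt)
    return lr, chi2.sf(lr, df=1)

# Example: 250 out-of-sample days, 1 % VaR, 6 observed violations (2.5 expected)
lr_stat, p_value = kupiec_pof(violations=6, n_obs=250, var_level=0.01)
print(f"LR statistic = {lr_stat:.3f}, p-value = {p_value:.3f}")
# A small p-value rejects the VaR model; this example is borderline at the 5% level.
```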
49. Computer Technology for Financial Service

Securities trading is one of the few business activities where a few seconds of processing delay can cost a company a fortune. Growing competition in the market exacerbates the situation and pushes trading toward near-instantaneous, split-second execution. The key lies in the performance of the underlying information system. Computing in financial services began as a centralized process and has gradually been decentralized, distributing application logic across service networks. Financial services have a tradition of doing most of their heavy-duty financial analysis in overnight batch cycles. In securities trading, however, batch processing cannot satisfy the need, given the ad hoc nature of the workload and the requirement for fast response. New computing paradigms, such as grid and cloud computing, which aim at scalable and virtually standardized distributed computing resources, are well suited to the challenge posed by capital market practices. Both consolidate computing resources by introducing a layer of middleware to orchestrate the use of geographically distributed powerful computers and large storage via fast networks. It is nontrivial to harvest the most from this kind of architecture. The Wiener process plays a central role in modern financial modeling, and its scaled random walk feature, in essence, allows millions of financial simulations to be conducted simultaneously. This sheer scale can only be tackled via grid or cloud computing. In this study the core computing competence for financial services is examined. Grid and cloud computing are briefly described, and we show how the underlying algorithms for financial analysis can take advantage of a grid environment. Monte Carlo simulation, one of the most widely practiced algorithms, is used in our case study for option pricing and risk management. Various distributed computational platforms are carefully chosen to demonstrate the performance issues for financial services.

Fang-Pang Lin, Cheng-Few Lee, Huimin Chung
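
To illustrate why Monte Carlo option pricing parallelizes so naturally across grid or cloud workers, the sketch below splits independent GBM paths for a European call across local processes. The option parameters are assumed for illustration, and a local process pool merely stands in for the distributed nodes discussed in the chapter.
```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Illustrative parameters (assumed, not from the chapter)
S0, K, r, sigma, T_MAT = 100.0, 105.0, 0.03, 0.2, 1.0

def mc_call_chunk(args):
    """Price contribution of one chunk of independent GBM paths (embarrassingly parallel)."""
    n_paths, seed = args
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = S0 * np.exp((r - 0.5 * sigma ** 2) * T_MAT + sigma * np.sqrt(T_MAT) * z)
    payoff = np.maximum(st - K, 0.0)
    return np.exp(-r * T_MAT) * payoff.sum(), n_paths

if __name__ == "__main__":
    chunks = [(250_000, seed) for seed in range(8)]   # 8 independent work units
    with ProcessPoolExecutor() as pool:               # stands in for grid/cloud workers
        results = list(pool.map(mc_call_chunk, chunks))
    total_payoff = sum(v for v, _ in results)
    total_paths = sum(n for _, n in results)
    print(f"Monte Carlo call price: {total_payoff / total_paths:.4f}")
```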
50. Long-Run Stock Return and the Statistical Inference

This article introduces long-run stock return methodologies and their statistical inference. The long-run stock return is usually computed using a holding period of more than 1 year and up to 5 years. Two categories of long-run return methods are illustrated in this article: the event-time approach and the calendar-time approach. The event-time approach includes the cumulative abnormal return, the buy-and-hold abnormal return, and abnormal returns around earnings announcements. For the former two methods, it is recommended to apply the empirical distribution (from the bootstrapping method) to conduct statistical inference, whereas the last one uses a classical t-test. In addition, benchmark selection in the long-run return literature is introduced. The calendar-time approach contains the mean monthly abnormal return, factor models, and Ibbotson’s RATS, which can be tested using time-series volatility. Generally, the calendar-time approach is more prevalent due to its robustness, yet the event-time method is still popular for its ease of implementation in practice.

Yanzhi Wang
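
A minimal sketch of the buy-and-hold abnormal return calculation with a bootstrapped empirical distribution for inference, as recommended above. The firm and benchmark returns are simulated placeholders, not an actual event sample.
```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 36-month returns for 200 event firms and their matched benchmarks
n_firms, n_months = 200, 36
firm_ret = rng.normal(0.01, 0.10, size=(n_firms, n_months))
bench_ret = rng.normal(0.01, 0.08, size=(n_firms, n_months))

# Buy-and-hold abnormal return per firm
bhar = np.prod(1 + firm_ret, axis=1) - np.prod(1 + bench_ret, axis=1)
t_stat = bhar.mean() / (bhar.std(ddof=1) / np.sqrt(n_firms))

# Bootstrapped empirical null: resample demeaned BHARs and recompute the t-statistic
demeaned = bhar - bhar.mean()
boot_t = np.empty(5000)
for b in range(5000):
    sample = rng.choice(demeaned, size=n_firms, replace=True)
    boot_t[b] = sample.mean() / (sample.std(ddof=1) / np.sqrt(n_firms))

p_value = np.mean(np.abs(boot_t) >= abs(t_stat))   # two-sided empirical p-value
print(f"mean BHAR = {bhar.mean():.4f}, t = {t_stat:.2f}, bootstrap p = {p_value:.3f}")
```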
51. Value-at-Risk Estimation via a Semi-parametric Approach: Evidence from the Stock Markets

This study utilizes the parametric approach (GARCH-based models) and the semi-parametric approach of Hull and White (Journal of Risk 1: 5–19, 1998) (HW-based models) to estimate Value-at-Risk (VaR) and evaluates the accuracy of the estimates for eight stock indices in European and Asian stock markets. The measures of accuracy include the unconditional coverage test of Kupiec (Journal of Derivatives 3: 73–84, 1995) as well as two loss functions, the quadratic loss function and unexpected loss. As to the parametric approach, the parameters of the generalized autoregressive conditional heteroskedasticity (GARCH) model are estimated by the method of maximum likelihood, and the quantiles of an asymmetric distribution such as the skewed generalized Student’s t (SGT) can be solved by the composite trapezoid rule. Subsequently, the VaR is evaluated within the framework proposed by Jorion (Value at Risk: the new benchmark for managing financial risk. New York: McGraw-Hill, 2000). Turning to the semi-parametric approach of Hull and White (1998), before performing the traditional historical simulation, the raw return series is scaled by a volatility ratio, where the volatility is estimated by the same procedure as in the parametric approach. Empirical results show that the choice of VaR approach is more influential on the VaR estimate than the choice of return distribution. Moreover, under the same return distribution setting, the HW-based models have better VaR forecasting performance than the GARCH-based models. Furthermore, irrespective of whether the GARCH-based or HW-based model is employed, the SGT has the best VaR forecasting performance, followed by the Student’s t, while the normal has the worst. In addition, all models tend to underestimate the real market risk in most cases, but the non-normal distributions (Student’s t and SGT) and the semi-parametric approach mitigate this underestimation.

Cheng-Few Lee, Jung-Bin Su
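
The Hull-White scaling step can be sketched compactly: past returns are rescaled to the current volatility level before ordinary historical simulation is applied. An EWMA volatility filter is used here purely to keep the example self-contained, whereas the chapter estimates volatility with GARCH-type models; the return data are synthetic.
```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic index returns with time-varying volatility (placeholder for real index data)
T = 2000
sigma = 0.01 * np.exp(0.4 * np.sin(np.linspace(0, 12, T)))
returns = sigma * rng.standard_normal(T)

# Volatility estimate per day (EWMA used here in place of GARCH, for brevity)
lam = 0.94
ewma_var = np.empty(T)
ewma_var[0] = returns[:50].var()
for t in range(1, T):
    ewma_var[t] = lam * ewma_var[t - 1] + (1 - lam) * returns[t - 1] ** 2
vol = np.sqrt(ewma_var)

# Hull-White adjustment: rescale past returns to today's volatility level, then
# apply ordinary historical simulation to the scaled series.
scaled = returns * vol[-1] / vol
alpha = 0.01
var_hw = -np.quantile(scaled, alpha)
var_plain = -np.quantile(returns, alpha)
print(f"1% VaR, plain historical simulation: {var_plain:.4f}")
print(f"1% VaR, Hull-White scaled:           {var_hw:.4f}")
```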
52. Modeling Multiple Asset Returns by a Time-Varying t Copula Model

We illustrate a framework for modeling joint distributions of multiple asset returns using a time-varying Student’s t copula model. We model the marginal distributions of individual asset returns by a variant of GARCH models and then use a Student’s t copula to connect all the margins. To build a time-varying structure for the correlation matrix of the t copula, we employ a dynamic conditional correlation (DCC) specification. We illustrate the two-stage estimation procedure for the model and apply the model to the returns of 45 major US stocks selected from nine sectors. As it is quite challenging to find a copula function with a parameter structure flexible enough to account for the different dependence features among all pairs of random variables, our time-varying t copula model tends to be a good working tool for modeling multiple asset returns for risk management and asset allocation purposes. Our model can capture time-varying conditional correlation and some degree of tail dependence, while it also has the limitations of featuring only symmetric dependence and of being unable to generate high tail dependence when used to model a large number of asset returns.

Long Kang
53. Internet Bubble Examination with Mean-Variance Ratio

To evaluate the performance of prospects X and Y, financial professionals are interested in testing the equality of their Sharpe ratios (SRs), the ratios of their excess expected returns to their standard deviations. Bai et al. (Statistics and Probability Letters 81, 1078–1085, 2011d) have developed the mean-variance-ratio (MVR) statistic to test the equality of their MVRs, the ratios of their excess expected returns to their variances. They have also provided theoretical reasoning for using the MVR and proved that their proposed statistic is uniformly most powerful unbiased. Rejecting the null hypothesis implies that X has either a smaller variance or a larger excess mean return, or both, leading to the conclusion that X is the better investment. In this paper, we illustrate the superiority of the MVR test over the traditional SR test by applying both tests to analyze the performance of the S&P 500 index and the NASDAQ 100 index after the bursting of the Internet bubble in the 2000s. Our findings show that while the traditional SR test concludes that the two indices are indistinguishable in their performance, the MVR test statistic shows that the NASDAQ 100 index underperformed the S&P 500 index, which reflects the actual situation after the bursting of the bubble. This shows the superiority of the MVR test statistic in revealing short-term performance and, in turn, enables investors to make better investment decisions.

Zhidong D. Bai, Yongchang C. Hui, Wing-Keung Wong
54. Quantile Regression in Risk Calibration

Financial risk control has always been challenging and has become an even harder problem now that joint extreme events occur more frequently. For decision makers and government regulators, it is therefore important to obtain accurate information on the interdependency of risk factors. Given a stressful situation for one market participant, one would like to measure how this stress affects other factors. The CoVaR (Conditional VaR) framework has been developed for this purpose. The basic technical elements of CoVaR estimation are two levels of quantile regression: one on market risk factors and another on the individual risk factor. Tests on the functional form of the two-level quantile regression reject linearity. A flexible semiparametric modeling framework for CoVaR is proposed, and a partial linear model (PLM) is analyzed. When the technology is applied to stock data covering the crisis period, the PLM outperforms during the crisis, as justified by the backtesting procedures. Moreover, using data on global stock market indices, the analysis of the marginal contribution of risk (MCR), defined as the local first-order derivative of the quantile curve, sheds some light on the source of global market risk.

Shih-Kang Chao, Wolfgang Karl Härdle, Weining Wang
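
A stripped-down sketch of the two-level quantile regression behind CoVaR, using the linear specification that the chapter's semiparametric PLM relaxes, on synthetic institution and system returns; the conditioning state variables are omitted here for brevity, so this is an assumption-laden illustration rather than the chapter's estimator.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T, q = 2000, 0.05

# Synthetic daily returns: one individual institution and the market (system) return
x_inst = 0.02 * rng.standard_t(df=5, size=T)
x_sys = 0.6 * x_inst + 0.015 * rng.standard_t(df=5, size=T)

# Level 1: VaR of the institution from a constant-only quantile regression
var_inst = sm.QuantReg(x_inst, np.ones((T, 1))).fit(q=q).params[0]

# Level 2: quantile regression of the system return on the institution's return
X = sm.add_constant(x_inst)
res = sm.QuantReg(x_sys, X).fit(q=q)
alpha_hat, beta_hat = res.params

# CoVaR: the system's q-quantile conditional on the institution being at its VaR,
# and Delta-CoVaR relative to the institution at its median state
covar = alpha_hat + beta_hat * var_inst
delta_covar = beta_hat * (var_inst - np.median(x_inst))
print(f"VaR(inst) = {var_inst:.4f}, CoVaR = {covar:.4f}, Delta-CoVaR = {delta_covar:.4f}")
```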
55. Strike Prices of Options for Overconfident Executives

We explore via simulations the impact of managerial overconfidence on the optimal strike prices of executive incentive options. Although it has been shown that, optimally, managerial incentive options should be awarded in-the-money, in practice most firms award them at-the-money. We show that the optimal strike prices of options granted to overconfident executives are directly related to their overconfidence level and that this bias brings the optimal strike prices closer to the institutionally prevalent at-the-money prices. Our results thus support the viability of the common practice of awarding managers at-the-money incentive options. We also show that overoptimistic CEOs receive lower compensation than their realistic counterparts and that the stockholders benefit from their managers’ bias. The combined welfare of the firm’s stakeholders is, however, positively related to managerial overconfidence. The Monte Carlo simulation procedure described in Sect. 55.3 uses a Mathematica program to find the optimal effort by managers and the optimal (for stockholders) contract parameters. An expanded discussion of the simulations, including the choice of the functional forms and the calibration of the parameters, is provided in Appendix 1.

Oded Palmon, Itzhak Venezia
56. Density and Conditional Distribution-Based Specification Analysis

The technique of using densities and conditional distributions to carry out consistent specification testing and model selection amongst multiple diffusion processes has received considerable attention from both financial theoreticians and empirical econometricians over the last two decades. In this chapter, we discuss advances to this literature introduced by Corradi and Swanson (J Econom 124:117–148, 2005), who compare the cumulative distribution (marginal or joint) implied by a hypothesized null model with corresponding empirical distributions of observed data. We also outline and expand upon further testing results from Bhardwaj et al. (J Bus Econ Stat 26:176–193, 2008) and Corradi and Swanson (J Econom 161:304–324, 2011). In particular, parametric specification tests in the spirit of the conditional Kolmogorov test of Andrews (Econometrica 65:1097–1128, 1997) that rely on block bootstrap resampling methods in order to construct test critical values are first discussed. Thereafter, extensions due to Bhardwaj et al. (J Bus Econ Stat 26:176–193, 2008) for cases where the functional form of the conditional density is unknown are introduced, and related continuous time simulation methods are introduced. Finally, we broaden our discussion from single process specification testing to multiple process model selection by discussing how to construct predictive densities and how to compare the accuracy of predictive densities derived from alternative (possibly misspecified) diffusion models. In particular, we generalize simulation steps outlined in Cai and Swanson (J Empir Financ 18:743–764, 2011) to multifactor models where the number of latent variables is larger than three. We finish the chapter with an empirical illustration of model selection amongst alternative short-term interest rate models.

Diep Duong, Norman R. Swanson
57. Assessing the Performance of Estimators Dealing with Measurement Errors

We describe different procedures to deal with measurement error in linear models and assess their performance in finite samples using Monte Carlo simulations and data on corporate investment. We consider the standard instrumental variable approach proposed by Griliches and Hausman (Journal of Econometrics 31:93–118, 1986) as extended by Biorn (Econometric Reviews 19:391–424, 2000) [OLS-IV], the Arellano and Bond (Review of Economic Studies 58:277–297, 1991) instrumental variable estimator, and the higher-order moment estimator proposed by Erickson and Whited (Journal of Political Economy 108:1027–1057, 2000, Econometric Theory 18:776–799, 2002). Our analysis focuses on characterizing the conditions under which each of these estimators produce unbiased and efficient estimates in a standard “errors-in-variables” setting. In the presence of fixed effects, under heteroscedasticity, or in the absence of a very high degree of skewness in the data, the EW estimator is inefficient and returns biased estimates for mismeasured and perfectly measured regressors. In contrast to the EW estimator, IV-type estimators (OLS-IV and AB-GMM) easily handle individual effects, heteroscedastic errors, and different degrees of data skewness. The IV approach, however, requires assumptions about the autocorrelation structure of the mismeasured regressor and the measurement error. We illustrate the application of the different estimators using empirical investment models. Our results show that the EW estimator produces inconsistent results when applied to real-world investment data, while the IV estimators tend to return results that are consistent with theoretical priors.

Heitor Almeida, Murillo Campello, Antonio F. Galvao
58. Realized Distributions of Dynamic Conditional Correlation and Volatility Thresholds in the Crude Oil, Gold, and Dollar/Pound Currency Markets

This chapter proposes a modeling framework for the study of co-movements in price changes among crude oil, gold, and dollar/pound currencies that are conditional on volatility regimes. Methodologically, we extend the dynamic conditional correlation (DCC) multivariate GARCH model to examine the volatility and correlation dynamics depending on the variances of price returns involving a threshold structure. The results indicate that the periods of market turbulence are associated with an increase in co-movements in commodity (gold and oil) prices. By contrast, high market volatility is associated with a decrease in co-movements between gold and the dollar/pound or oil and the dollar/pound. The results imply that gold may act as a safe haven against major currencies when investors face market turmoil. By looking at different subperiods based on the estimated thresholds, we find that the investors’ behavior changes in different subperiods. Our model presents a useful tool for market participants to engage in better portfolio allocation and risk management.

Tung-Li Shih, Hai-Chin Yu, Der-Tzon Hsieh, Chia-Ju Lee
59. Pre-IT Policy, Post-IT Policy, and the Real Sphere in Turkey

We estimate two structural vector error correction (SVECM) models for the Turkish economy based on imposing short-run and long-run restrictions that allow us to examine the behavior of the real sphere in the pre-IT (before inflation-targeting adoption) and post-IT (after inflation-targeting adoption) policy periods. Impulse responses reveal that an expansionary interest rate policy shock leads to a decrease in the price level, a fall in output, an appreciation of the exchange rate, and an improvement in share prices in the very short run for most of the pre-IT period. The Central Bank of the Republic of Turkey (CBT) has stabilized output fluctuations in the short run while maintaining a medium-run inflation target since January 2006. One of the most important results of this study is that the impact of a monetary policy shock on the real sphere is insignificant during the post-IT policy period.

Ahmed Hachicha, Cheng-Few Lee
60. Determination of Capital Structure: A LISREL Model Approach

Most previous studies investigate the theoretical variables that affect the capital structure of a firm; however, these latent variables are unobservable and are generally proxied by accounting items measured with error. The use of these observed accounting variables as theoretical explanatory latent variables causes errors-in-variables problems in the analysis of the determinants of capital structure. Since Titman and Wessels (Journal of Finance 43, 1–19, 1988) first utilized the LISREL system to analyze the determinants of capital structure choice based on a structural equation modeling (SEM) framework, Chang et al. (The Quarterly Review of Economics and Finance 49, 197–213, 2009) and Yang et al. (The Quarterly Review of Economics and Finance 50, 222–233, 2010) have extended the empirical work on capital structure research and obtained more convincing results by using the multiple indicators and multiple causes (MIMIC) model and structural equation modeling with confirmatory factor analysis (CFA), respectively. In this chapter, we employ structural equation modeling in the LISREL system to address the measurement error problems in the analysis of the determinants of capital structure and to identify the important factors consistent with capital structure theory, using data from 2002 to 2010. The purpose of this chapter is to investigate whether the influences of accounting factors on capital structure change over time and whether the important factors are consistent with the previous literature.

Cheng-Few Lee, Tzu Tai
61. Evidence on Earning Management by Integrated Oil and Gas Companies

The objective of this chapter is to demonstrate a specific test methodology for the detection of earnings management in the oil and gas industry. The study utilizes several parametric and nonparametric statistical methods to test for such earnings management. The oil and gas industry was chosen given earlier evidence that such firms manage earnings in order to soften the public view of the significant swings that occur in oil and gas prices. In this chapter, our focus is on total accruals as the primary means of earnings management. The prevailing view is that total accruals account for a greater amount of earnings management and should be more readily detected. The model considered is the Jones model (Journal of Accounting Research 29, 193–228, 1991), which projects the expected level of accruals. By comparing actual with projected accruals, we are able to compute total unexpected accruals. Next, we correlate unexpected total accruals with several difficult-to-manipulate indicators that reflect the company’s level of activity. The significant positive correlations between unexpected total accruals and these variables are an indication that oil and gas firms do not manage income before extraordinary items and discontinued operations. A second test is conducted by focusing on the possible use of special items to reduce reported net income, comparing the mean levels of several special items in 2008 with those of the pre-2008 period. The test results indicate a significant difference between the 2008 means and those of the pre-2008 period.

Raafat R. Roubi, Hemantha Herath, John S. Jahera Jr.
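
A minimal sketch of the Jones-model accrual decomposition described above: total accruals scaled by lagged assets are regressed on the inverse of lagged assets, the change in revenues, and gross PPE, and the residuals serve as unexpected (discretionary) accruals. The firm-year data below are simulated placeholders, and the exact specification used in the chapter may differ.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 120  # firm-year observations for an industry estimation sample (synthetic)

assets_lag = rng.uniform(500, 5000, n)          # lagged total assets
d_rev = rng.normal(0.05, 0.15, n) * assets_lag  # change in revenues
ppe = rng.uniform(0.2, 0.7, n) * assets_lag     # gross property, plant & equipment
total_accruals = 0.03 * d_rev - 0.08 * ppe + rng.normal(0, 25, n)

# Jones (1991) regression, with all variables scaled by lagged assets
y = total_accruals / assets_lag
X = np.column_stack([1 / assets_lag, d_rev / assets_lag, ppe / assets_lag])
res = sm.OLS(y, X).fit()

# Fitted values proxy for expected (normal) accruals; the residuals are the
# unexpected accruals used to test for earnings management.
unexpected_accruals = res.resid
print(res.params)
print("mean |unexpected accruals| / lagged assets:",
      np.abs(unexpected_accruals).mean().round(4))
```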
62. A Comparative Study of Two Models SV with MCMC Algorithm

This paper examines two asymmetric stochastic volatility models used to describe the volatility dependencies found in most financial returns. The first is the autoregressive stochastic volatility model with Student’s t-distribution (ARSV-t), and the second is the basic Svol model of JPR (Journal of Business and Economic Statistics 12(4), 371–417, 1994). In order to estimate these models, our analysis is based on the Markov Chain Monte Carlo (MCMC) method; the techniques used are the Metropolis-Hastings algorithm (Hastings, Biometrika 57, 97–109, 1970) and the Gibbs sampler (Casella and George, The American Statistician 46(3), 167–174, 1992; Gelfand and Smith, Journal of the American Statistical Association 85, 398–409, 1990; Gilks et al. 1993). The empirical results, based on the Standard and Poor’s 500 Composite Index (S&P), CAC 40, Nasdaq, Nikkei, and Dow Jones stock price indexes, reveal that the ARSV-t model provides a better performance than the Svol model in terms of the mean squared error (MSE) and the maximum likelihood function.

Ahmed Hachicha, Fatma Hachicha, Afif Masmoudi
63. Internal Control Material Weakness, Analysts’ Accuracy and Bias, and Brokerage Reputation

We examine the impact of internal control material weaknesses (ICMW hereafter) on sell-side analysts. Using matched firms, we find that ICMW-reporting firms have less accurate analyst forecasts relative to non-reporting firms when the reported ICMWs belong to the Pervasive type. ICMW-reporting firms also have more optimistically biased analyst forecasts than non-reporting firms. The optimistic bias exists only in the forecasts issued by analysts affiliated with less highly reputable brokerage houses. The differences in accuracy and bias between ICMW and non-ICMW firms disappear when ICMW-disclosing firms stop disclosing ICMWs. Collectively, our results suggest that weaknesses in internal control increase forecasting errors and upward bias for financial analysts. However, a good brokerage reputation can curb the optimistic bias. We use the Ordinary Least Squares (OLS) methodology in the main tests to examine the impact of ICMWs on sell-side analysts. We match our ICMW firms with non-ICMW firms based on industry, sales, and assets. We reestimate the models using the rank regression technique to assess the sensitivity of the results to the underlying functional form assumption made by OLS, and we use Cook’s distance to test for outliers.

Li Xu, Alex P. Tang
64. What Increases Banks’ Vulnerability to Financial Crisis: Short-Term Financing or Illiquid Assets?

It is not clear whether short-term financing increases banks’ vulnerability to financial crisis or merely reflects the weakness of banks’ balance sheets. This chapter examines the role of short-term financing and of assets with deteriorated quality in the 2007–2009 financial crisis. We apply logit and OLS econometric techniques to analyze Federal Reserve Y-9C report data. We show that short-term financing is a response to adverse economic shocks rather than a cause of the recent crisis; the likelihood of financial crisis instead stems from the illiquidity and low creditworthiness of the investments. Our results are robust to endogeneity concerns when we use a difference-in-differences (DiD) approach with the Lehman bankruptcy in 2008 proxying for an exogenous shock.

Gang Nathan Dong, Yuna Heo
65. Accurate Formulas for Evaluating Barrier Options with Dividends Payout and the Application in Credit Risk Valuation

To price stock options with discrete dividend payouts reasonably and consistently, the fall in the stock price due to the dividend payout must be faithfully modeled. However, this significantly increases the mathematical difficulty, since the post-dividend stock price process, the stock price process after the price falls due to the dividend payout, no longer follows the lognormal diffusion process; analytical pricing formulas are hard to derive even for the simplest vanilla options. This chapter approximates the discrete dividend payout by a stochastic continuous dividend yield, so the post-dividend stock price process can be approximated by another lognormally diffusive stock process with a stochastic continuous payout ratio up to the ex-dividend date. Accurate approximate analytical pricing formulas for barrier options are derived by repeatedly applying the reflection principle. Besides, our formulas can be applied to extend the applicability of the first passage model, a branch of structural credit risk models. The fall in the stock price due to the dividend payout in the option pricing problem is analogous to selling the firm’s assets to finance loan repayments or dividend payouts in the first passage model. Thus, our formulas can evaluate vulnerable bonds or equity values given that the firm’s future loan/dividend payments are known.

Tian-Shyr Dai, Chun-Yuan Chiu
66. Pension Funds: Financial Econometrics on the Herding Phenomenon in Spain and the United Kingdom

This work examines the impact of Spanish and UK pension fund investment on market efficiency; specifically, we analyze whether managers’ behavior fosters the existence of herding phenomena. To implement this study, we apply a less common methodology: the estimated cross-sectional standard deviations of betas. We also estimate the betas with an econometric technique less applied in the financial literature: state-space models and the Kalman filter. Additionally, in order to obtain a robust estimation, we apply the Huber estimator. Finally, we apply several models and study the existence of herding towards the market, size, book-to-market, and momentum factors. The results are similar for the two countries and style factors, revealing the existence of herding; nonetheless, herding is weaker towards the size, book-to-market, and momentum factors.

Mercedes Alda García, Luis Ferruz
67. Estimating the Correlation of Asset Returns: A Quantile Dependence Perspective

In the practice of risk management, an important consideration in the portfolio choice problem is the correlation structure across assets. However, the correlation is an extremely challenging parameter to estimate as it is known to vary substantially over the business cycle and respond to changing market conditions. Focusing on international stock markets, I consider a new approach of estimating correlation that utilizes the idea that the condition of a stock market is related to its return performance, particularly to the conditional quantile of its return, as the lower return quantiles reflect a weak market while the upper quantiles reflect a bullish one. Combining the techniques of quantile regression and copula modeling, I propose the copula quantile-on-quantile regression (C-QQR) approach to construct the correlation between the conditional quantiles of stock returns. The C-QQR approach uses the copula to generate a regression function for modeling the dependence between the conditional quantiles of the stock returns under consideration. It is estimated using a two-step quantile regression procedure, where in principle, the first step is implemented to model the conditional quantile of one stock return, which is then related in the second step to the conditional quantile of another return. The C-QQR approach is then applied to study how the US stock market is correlated with the stock markets of Australia, Hong Kong, Japan, and Singapore.

Nicholas Sim
68. Multi-criteria Decision Making for Evaluating Mutual Funds Investment Strategies

Investors often need to evaluate investment strategies in terms of numerical values based upon various criteria when making investments. This situation can be regarded as a multiple criteria decision-making (MCDM) problem, and this approach is often the basic assumption in applying a hierarchical system for evaluating strategies for selecting an investment style. We employ criteria measurements to evaluate investment style. To achieve this objective, first, we employ factor analysis to extract independent common factors from those criteria. Second, we construct the evaluation frame using a hierarchical system composed of the above common factors with evaluation criteria and then derive the relative weights with respect to the considered criteria. Third, the synthetic utility value corresponding to each investment style is aggregated from the weights and performance values. Finally, we compare the results with empirical data and find that the MCDM model predicts the rate of return.

Shin Yun Wang, Cheng-Few Lee
69. Econometric Analysis of Currency Carry Trade

The carry trade is a popular strategy in the currency markets whereby investors fund positions in high interest rate currencies by selling low interest rate currencies to earn the interest rate differential. In this article, we first provide an overview of the risk and return profile of the currency carry trade; second, because carry trade returns are highly regime dependent, we introduce two popular models, the regime-switching model and the logistic smooth transition regression model, to analyze carry trade returns. Finally, an empirical example is illustrated.

Yu-Jen Wang, Huimin Chung, Bruce Mizrach
70. Evaluating the Effectiveness of Futures Hedging

This chapter examines Ederington hedging effectiveness (EHE) comparisons between the unconditional OLS hedge strategy and other, conditional hedge strategies. It is shown that the OLS hedge strategy outperforms most of the optimal conditional hedge strategies when EHE is used as the hedging effectiveness criterion. Before concluding that the OLS hedge is better than the others, however, we need to understand under what circumstances the result is derived. We explain why OLS is the best hedge strategy under the EHE criterion in most cases and how most conditional hedge strategies come to be judged inferior to the OLS hedge strategy in an EHE comparison.

Donald Lien, Geul Lee, Li Yang, Chunyang Zhou
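
The two quantities at the center of the comparison above, the OLS hedge ratio and the Ederington effectiveness measure, can be computed as follows; the spot and futures return series are simulated for illustration only.
```python
import numpy as np

rng = np.random.default_rng(8)
T = 500
# Synthetic spot and futures returns with imperfect correlation
futures = 0.012 * rng.standard_normal(T)
spot = 0.9 * futures + 0.006 * rng.standard_normal(T)

# Unconditional OLS hedge ratio: h* = Cov(spot, futures) / Var(futures)
h_ols = np.cov(spot, futures)[0, 1] / np.var(futures, ddof=1)

# Ederington hedging effectiveness: proportional variance reduction of the hedged
# position relative to the unhedged spot position
hedged = spot - h_ols * futures
ehe = 1 - np.var(hedged, ddof=1) / np.var(spot, ddof=1)

print(f"OLS hedge ratio: {h_ols:.3f}")
print(f"Ederington hedging effectiveness: {ehe:.3f}")
# By construction, the in-sample OLS ratio maximizes this variance-reduction measure,
# which is one reason conditional strategies rarely beat it under the EHE criterion.
```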
71. Analytical Bounds for Treasury Bond Futures Prices

The pricing of delivery options, particularly timing options, in Treasury bond futures is prohibitively expensive. Recursive use of the lattice model is unavoidable for valuing such options, as Boyle (1989) demonstrates. As a result, the main purpose of this study is to derive upper bounds and lower bounds for Treasury bond futures prices. This study employs a maximum likelihood estimation technique presented by Chen and Scott (1993) to estimate the parameters for two-factor Cox-Ingersoll-Ross models of the term structure. Following the estimation, the factor values are solved for by matching the short rate with the cheapest-to-deliver bond price. Then, upper bounds and lower bounds for Treasury bond futures prices can be calculated. This study first shows that the popular preference-free, closed-form cost of carry model is an upper bound for the Treasury bond futures price. Then, the next step is to derive analytical lower bounds for the futures price under one- and two-factor Cox-Ingersoll-Ross models of the term structure. The bound under the two-factor Cox-Ingersoll-Ross model is then tested empirically using weekly futures prices from January 1987 to December 2000.

Ren-Raw Chen, Shih-Kuo Yeh
72. Rating Dynamics of Fallen Angels and Their Speculative Grade-Rated Peers: Static vs. Dynamic Approach

This study adopts the survival analysis framework (Allison, P. D. (1984). Event history analysis. Beverly Hills: Sage) to examine issuer-heterogeneity and time-heterogeneity in the rating migrations of fallen angels (FAs) and their speculative grade-rated peers (FA peers). Cox’s hazard model is considered the preeminent method to estimate the probability that an issuer survives in its current rating grade at any point in time t over the time horizon T. In this study, estimation is based on two Cox’s hazard models, including a proportional hazard model (Cox, Journal of the Royal Statistical Society Series B (Methodological) 34:187–220, 1972) and a dynamic hazard model. The first model employs a static estimation approach and time-independent covariates, whereas the second uses a dynamic estimation approach and time-dependent covariates. To allow for any dependence among rating states of the same issuer, the marginal event-specific method (Wei et al., Journal of The American Statistical Association 84:1065–1073, 1989) was used to obtain robust variance estimates. For validation purposes, the Brier score (Brier, Monthly Weather Review 78(1):1–3, 1950) and its covariance decomposition (Yates, Organizational Behaviour and Human Performance 30:132–156, 1982) were applied to assess the forecast performance of the estimated models in forming time-varying survival probability estimates for issuers out of sample. It was found that FAs and their peers exhibit strong but markedly different dependences on rating history, industry sectors, and macroeconomic conditions. These factors jointly, and in several cases separately, are more important than the current rating in determining future rating changes. A key finding is that past rating behaviors persist even after controlling for the industry sector and the evolution of the macroeconomic environment over the time for which the current rating persists. Switching from a static to a dynamic estimation framework markedly improves the forecast performance of the upgrade model for FAs. The results suggest that rating history provides important diagnostic information and that different rating paths require different dynamic migration models.

Huong Dang
73. Creation and Control of Bubbles: Managers’ Compensation Schemes, Risk Aversion, and Wealth and Short Sale Constraints

Persistent divergence of an asset price from its fundamental value has been a subject of much theoretical and empirical discussion. This chapter takes an alternative approach to inquiry – that of using laboratory experiments – to study the creation and control of speculative bubbles. The following three factors are chosen for analysis: the compensation scheme of portfolio managers, wealth and supply constraints, and the relative risk aversion of traders. Under a short investment horizon induced by a tournament compensation scheme, speculative bubbles are observed in markets of speculative traders and in mixed markets of conservative and speculative traders. These results hold with super-experienced traders who are aware of the presence of a bubble. A binding wealth constraint dampens the bubbles, as does an increased supply of securities. These results are unchanged when traders risk their own money in lieu of initial endowments provided by the experimenter. The primary method of analysis is to use live subjects in a laboratory setting to generate original trading data, which are compared to fundamental values. Standard statistical techniques are used to supplement the analysis in explaining the divergence of asset prices from their fundamental values.

James S. Ang, Dean Diavatopoulos, Thomas V. Schwarz
74. Range Volatility: A Review of Models and Empirical Studies

The literature on range volatility modeling has been expanding rapidly because of its importance and applications. This chapter provides alternative price range estimators and discusses their empirical properties and limitations. In addition, we review some relevant financial applications of range volatility, such as value-at-risk estimation, hedging, spillover effects, portfolio management, and microstructure issues. In this chapter, we survey the significant developments in range-based volatility models, beginning with the simple random walk model and continuing up to the conditional autoregressive range (CARR) model. For the extension to range-based multivariate volatilities, some recently developed approaches are adopted, such as the dynamic conditional correlation (DCC) model, the double smooth transition conditional correlation (DSTCC) GARCH model, and the copula method. Finally, we introduce different approaches to building a bias-adjusted realized range in order to obtain a more efficient estimator.

Ray Yeutien Chou, Hengchih Chou, Nathan Liu
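
Two of the classical range-based estimators reviewed above, Parkinson and Garman-Klass, are sketched below on simulated open-high-low-close data; the simulation settings are assumptions chosen only to check the estimators against a known volatility.
```python
import numpy as np

rng = np.random.default_rng(9)
# Simulate intraday paths to obtain daily open, high, low, close (synthetic data)
n_days, n_steps, sigma_true = 500, 390, 0.20
dt = 1 / (252 * n_steps)
increments = sigma_true * np.sqrt(dt) * rng.standard_normal((n_days, n_steps))
log_paths = np.cumsum(increments, axis=1)

op = np.zeros(n_days)                        # log open normalized to zero
cl = log_paths[:, -1]
hi = np.maximum(log_paths.max(axis=1), 0.0)
lo = np.minimum(log_paths.min(axis=1), 0.0)

# Daily variance estimators based on the price range (annualized below)
parkinson = (hi - lo) ** 2 / (4 * np.log(2))
garman_klass = 0.5 * (hi - lo) ** 2 - (2 * np.log(2) - 1) * (cl - op) ** 2
close_to_close = np.var(cl, ddof=1)          # classical estimator for comparison

ann = lambda daily_var: np.sqrt(252 * daily_var)
print(f"true sigma:     {sigma_true:.3f}")
print(f"close-to-close: {ann(close_to_close):.3f}")
print(f"Parkinson:      {ann(parkinson.mean()):.3f}")
print(f"Garman-Klass:   {ann(garman_klass.mean()):.3f}")
```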
75. Business Models: Applications to Capital Budgeting, Equity Value, and Return Attribution

This chapter describes a business model in a contingent claim modeling framework. The model defines a “primitive firm” as the underlying risky asset of a firm. The firm’s revenue is generated from a fixed capital asset, and the firm incurs both fixed operating costs and variable costs. In this context, the shareholders hold a retention option (paying the fixed operating costs) on the core capital asset together with a series of growth options on capital investments. In this framework of two interacting options, we derive the firm value. The chapter then provides three applications of the business model. First, the chapter determines the optimal capital budgeting decision in the presence of fixed operating costs and shows how the fixed operating cost should be accounted for in an NPV calculation. Second, the chapter determines the equity value, the growth option value, and the retention option value as the building blocks of the primitive firm value; using a sample of firms, the chapter illustrates a method for comparing the equity values of firms in the same business sector. Third, the chapter relates the change in revenue to the change in equity value, showing how combined operating leverage and financial leverage may affect firm valuation and risks.

Thomas S. Y. Ho, Sang Bin Lee
76. VAR Models: Estimation, Inferences, and Applications

Vector autoregression (VAR) models have been used extensively in finance and economic analysis. This paper provides a brief overview of the basic VAR approach, focusing on model estimation and statistical inference. Applications of VAR models in several areas of finance are discussed, including asset pricing, international finance, and market microstructure. It is shown that the approach provides a powerful tool for studying financial market efficiency, stock return predictability, exchange rate dynamics, and the information content of stock trades and market quality.

Yangru Wu, Xing Zhou
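
A minimal sketch of VAR estimation and the associated inference tools (lag selection, a Granger-causality test, impulse responses) using statsmodels; the bivariate return/predictor system and its coefficients are simulated assumptions for illustration only.
```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(10)
# Simulate a bivariate VAR(1): e.g., a stock return and a persistent predictor
T = 400
A = np.array([[0.10, 0.30],
              [0.00, 0.95]])
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=[0.05, 0.01])

data = pd.DataFrame(y, columns=["ret", "dy"])
model = VAR(data)
results = model.fit(maxlags=4, ic="aic")   # lag length chosen by AIC

print(results.coefs[0])                    # estimated first-lag coefficient matrix
print(results.test_causality("ret", ["dy"]).summary())  # Granger-causality test
irf = results.irf(10)                      # impulse responses up to 10 periods
print(irf.irfs.shape)
```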
77. Model Selection for High-Dimensional Problems

High-dimensional data analysis is becoming more and more important to both academics and practitioners in finance and economics but is also very challenging because the number of variables or parameters in connection with such data can be larger than the sample size. Recently, several variable selection approaches have been developed and used to help us select significant variables and construct a parsimonious model simultaneously. In this chapter, we first provide an overview of model selection approaches in the context of penalized least squares. We then review independence screening, a recently developed method for analyzing ultrahigh-dimensional data where the number of variables or parameters can be exponentially larger than the sample size. Finally, we discuss and advocate multistage procedures that combine independence screening and variable selection and that may be especially suitable for analyzing high-frequency financial data. Penalized least squares seek to keep important predictors in a model while penalizing coefficients associated with irrelevant predictors. As such, under certain conditions, penalized least squares can lead to a sparse solution for linear models and achieve asymptotic consistency in separating relevant variables from irrelevant ones. Independence screening selects relevant variables based on certain measures of marginal correlations between candidate variables and the response.

Jing-Zhi Huang, Zhan Shi, Wei Zhong
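
The multistage idea advocated above, independence screening followed by penalized least squares, can be sketched as follows with a Lasso at the second stage. The data-generating process, the screening cutoff of n/log(n), and the use of scikit-learn's LassoCV are illustrative assumptions, not the chapter's exact procedure.
```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(11)
n, p, n_true = 100, 2000, 5          # p far larger than the sample size
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:n_true] = [3.0, -2.0, 1.5, 2.5, -1.0]
y = X @ beta + rng.standard_normal(n)

# Stage 1: independence screening -- rank predictors by absolute marginal
# correlation with the response and keep the top d of them.
d = int(n / np.log(n))
marginal_corr = np.abs(np.corrcoef(X.T, y)[-1, :-1])
screened = np.argsort(marginal_corr)[::-1][:d]

# Stage 2: penalized least squares (Lasso with cross-validated penalty) on the
# screened set to select the final sparse model.
lasso = LassoCV(cv=5).fit(X[:, screened], y)
selected = screened[np.abs(lasso.coef_) > 1e-8]

print(f"kept {d} variables after screening; Lasso selected {len(selected)}")
print("selected indices:", np.sort(selected)[:10])
```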
78. Hedonic Regression Models

This chapter provides a basic overview of the nature and variety of hedonic empirical pricing models employed in the economics literature. It explores the history of hedonic modeling and summarizes the field’s utility-theory-based, microeconomic foundations. It also provides a discussion of, and potential solutions for, common problems associated with hedonic modeling. The chapter then examines three specific hedonic specifications, the linear, semilog, and Box–Cox transformed hedonic models, and applies them to real estate data. It also discusses recent innovations related to hedonic models and how these models are being used in contemporary studies.

Ben J. Sopranzetti
79. Optimal Payout Ratio Under Uncertainty and the Flexibility Hypothesis: Theory and Empirical Evidence

We theoretically extend DeAngelo and DeAngelo’s (Journal of Financial Economics 79, 293–315, 2006) proposition on optimal payout policy in terms of the flexibility dividend hypothesis. We also introduce growth rate, systematic risk, and total risk variables into the theoretical model. Our empirical findings show that, based on flexibility considerations, a company will reduce its payout when the growth rate increases. In addition, a nonlinear relationship exists between the payout ratio and risk; in other words, the relationship between the payout ratio and risk is negative (or positive) when the growth rate is higher (or lower) than the rate of return on total assets. We use panel data collected in the USA from 1969 to 2009 to empirically investigate the impact of growth rate, systematic risk, and total risk on the optimal payout ratio in terms of the fixed-effects model. Furthermore, we implement the moving estimates process to find the empirical breakpoint of the structural change in the relationship between the payout ratio and risk and confirm that the empirical breakpoint is not different from our theoretical breakpoint. Our theoretical model and empirical results can therefore be used to identify whether flexibility or the free cash flow hypothesis should be used to determine dividend policy.

Cheng-Few Lee, Manak C. Gupta, Hong-Yi Chen, Alice C. Lee
80. Modeling Asset Returns with Skewness, Kurtosis, and Outliers

This chapter uses an exponential generalized beta distribution of the second kind (EGB2) to model the returns on 30 Dow Jones industrial stocks. The model accounts for stock return characteristics including fat tails, peakedness (leptokurtosis), skewness, clustered conditional variance, and the leverage effect. The evidence suggests that an error assumption based on the EGB2 distribution is capable of accounting for skewness, kurtosis, and peakedness and is therefore also capable of making good predictions of extreme values. The goodness-of-fit statistic provides supporting evidence in favor of the EGB2 distribution in modeling stock returns. This chapter also finds evidence that the leverage effect is diminished when higher moments are considered. The EGB2 distribution used in this chapter is a four-parameter distribution. It has a closed-form density function, and its higher-order moments are finite and explicitly expressed by its parameters. The EGB2 distribution nests many widely used distributions such as the normal, log-normal, Weibull, and standard logistic distributions.

Thomas C. Chiang, Jiandong Li
81. Does Revenue Momentum Drive or Ride Earnings or Price Momentum?

This study examines the profits of revenue, earnings, and price momentum strategies in an attempt to understand investor reactions when facing multiple pieces of information on firm performance in various scenarios. We first offer evidence that there is no dominating strategy among the revenue, earnings, and price momentum strategies, suggesting that revenue surprises, earnings surprises, and prior returns each carry some exclusive unpriced information content. We next show that the profits of momentum driven by firm fundamental performance information (revenue or earnings) depend upon the accompanying firm market performance information (price), and vice versa. The robust monotonicity in multivariate momentum returns is consistent with the argument that the market underestimates not only the individual pieces of information but also their joint implications for firm performance, particularly when they point in the same direction. A three-way combined momentum strategy may offer a monthly return as high as 1.44 %. The information conveyed by revenue surprises and earnings surprises combined accounts for about 19 % of price momentum effects, a finding that adds to the large literature on tracing the sources of price momentum.

Hong-Yi Chen, Sheng-Syan Chen, Chin-Wen Hsin, Cheng-Few Lee
82. A VG-NGARCH Model for Impacts of Extreme Events on Stock Returns

This article compares two types of GARCH models, namely, the VG-NGARCH and the GARCH-jump model with autoregressive conditional jump intensity, i.e., the GARJI model, to make inferences on log stock returns when there are irregular, substantial price fluctuations. The VG-NGARCH model imposes a nonlinear asymmetric structure on the conditional shape parameters in a variance-gamma process, which describes the arrival rates for news with different degrees of influence on price movements and provides an ex ante probability for the occurrence of large price movements. On the other hand, the GARJI model, a mixed GARCH-jump model proposed by Chan and Maheu (Journal of Business & Economic Statistics 20:377–389, 2002), adopts two independent autoregressive processes to model the variances corresponding to moderate and large price movements, respectively. An empirical study using daily stock prices of four major banks, namely, Bank of America, J.P. Morgan Chase, Citigroup, and Wells Fargo, from 2006 to 2009 is performed to compare the two models. The goodness of fit of the VG-NGARCH model versus the GARJI model is demonstrated.
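The full VG-NGARCH specification is more involved, but the asymmetric NGARCH variance recursion at its core has the familiar Engle–Ng form. The sketch below simulates that recursion under Gaussian shocks purely to illustrate the asymmetry parameter; it is not the chapter's VG-NGARCH model, and the parameter values are illustrative.

```python
import numpy as np

def simulate_ngarch(n, omega=1e-6, alpha=0.05, beta=0.90, gamma=0.5, seed=0):
    """Simulate an NGARCH(1,1): h_t = omega + beta*h_{t-1} + alpha*h_{t-1}*(z_{t-1} - gamma)^2.
    A positive gamma makes negative shocks raise variance more than positive ones."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    h, r = np.empty(n), np.empty(n)
    h[0] = omega / (1.0 - beta - alpha * (1.0 + gamma**2))   # unconditional variance
    r[0] = np.sqrt(h[0]) * z[0]
    for t in range(1, n):
        h[t] = omega + beta * h[t - 1] + alpha * h[t - 1] * (z[t - 1] - gamma) ** 2
        r[t] = np.sqrt(h[t]) * z[t]
    return r, h

returns, variances = simulate_ngarch(5000)
print(returns.std(), np.sqrt(variances.mean()))
```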

Lie-Jane Kao, Li-Shya Chen, Cheng-Few Lee
83. Risk-Averse Portfolio Optimization via Stochastic Dominance Constraints

We consider the problem of constructing a portfolio of finitely many assets whose return rates are described by a discrete joint distribution. We present a new approach to portfolio selection based on stochastic dominance. The portfolio return rate in the new model is required to stochastically dominate a random benchmark. We formulate optimality conditions and duality relations for these models and construct equivalent optimization models with utility functions. Two different formulations of the stochastic dominance constraint, primal and inverse, lead to two dual problems which involve von Neumann–Morgenstern utility functions for the primal formulation and rank-dependent (or dual) utility functions for the inverse formulation. We also discuss the relations of our approach to value at risk and conditional value at risk. A numerical illustration is provided.
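For a discrete distribution, the second-order (primal) dominance constraint can be written as a system of expected-shortfall inequalities and embedded in a linear program. The sketch below maximizes expected return subject to dominance over an equally weighted benchmark; the scenario data, the equal-probability assumption, and the use of cvxpy are illustrative choices, not the chapter's formulation.

```python
import numpy as np
import cvxpy as cp

# Illustrative scenario data: m equally likely scenarios, n assets
rng = np.random.default_rng(1)
m, n = 100, 4
R = rng.normal(0.01, 0.05, size=(m, n))           # asset returns per scenario
p = np.full(m, 1.0 / m)                           # scenario probabilities
y = R @ np.full(n, 1.0 / n)                       # benchmark: equally weighted portfolio

x = cp.Variable(n, nonneg=True)                   # portfolio weights
s = cp.Variable((m, m), nonneg=True)              # shortfall variables: s[i, j] >= y[j] - (R x)[i]
Y = np.tile(y, (m, 1))                            # Y[i, j] = y[j]
port = cp.reshape(R @ x, (m, 1)) @ np.ones((1, m))  # portfolio return repeated across columns
rhs = np.array([p @ np.maximum(yj - y, 0.0) for yj in y])  # benchmark shortfalls E[(y_j - Y)_+]

constraints = [cp.sum(x) == 1,
               s >= Y - port,
               p @ s <= rhs]                      # E[(y_j - R x)_+] <= E[(y_j - Y)_+] for all j
problem = cp.Problem(cp.Maximize(p @ (R @ x)), constraints)
problem.solve()
print(np.round(x.value, 3), problem.value)
```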

Darinka Dentcheva, Andrzej Ruszczynski
84. Implementation Problems and Solutions in Stochastic Volatility Models of the Heston Type

In the Heston stochastic volatility framework, the main problem in implementing the Heston semi-analytic formulae for European-style financial claims is the inverse Fourier integration. Without good implementation procedures, the numerical results obtained from the Heston formulae cannot be robust, even for customarily used Heston parameters, as the time to maturity is increased. We compare three major approaches to solving the numerical instability problem inherent in the fundamental solution of the Heston model and show that the simple adjusted-formula method is much simpler than the rotation-corrected angle method of Kahl and Jäckel and also greatly superior to the direct integration method of Shaw when computing time is taken into consideration. In this chapter, we use the fundamental transform method proposed by Lewis to reduce the number of variables from two to one and to separate the payoff function from the calculation of the Green function for option pricing. Furthermore, the simple adjusted formula is shown to be a robust option pricer, as no complex discontinuities arise in this formulation even without the rotation-corrected angle.
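The instability comes from the complex logarithm in the Heston characteristic function. For reference, one widely cited trap-free ("adjusted") form of the characteristic function of ln S_T entering the second Heston probability is shown below in the standard parameterization (kappa, theta, sigma, rho, v_0); it is given as a sketch of the sign convention involved, not as the chapter's exact formula.

$$
\phi_2(u,\tau)=\exp\Big\{\,iu\,(\ln S_0+r\tau)
+\frac{\kappa\theta}{\sigma^{2}}\Big[(\kappa-\rho\sigma iu-d)\tau-2\ln\frac{1-g\,e^{-d\tau}}{1-g}\Big]
+\frac{v_0}{\sigma^{2}}\,\frac{(\kappa-\rho\sigma iu-d)\,(1-e^{-d\tau})}{1-g\,e^{-d\tau}}\Big\},
$$

$$
d=\sqrt{(\rho\sigma iu-\kappa)^{2}+\sigma^{2}(iu+u^{2})},\qquad
g=\frac{\kappa-\rho\sigma iu-d}{\kappa-\rho\sigma iu+d}.
$$

With this sign convention the argument of the logarithm avoids the branch-cut crossings that plague the original formulation at long maturities, so the principal branch can be used directly in the inverse Fourier integration.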

Jia-Hau Guo, Mao-Wei Hung
85. Stochastic Change-Point Models of Asset Returns and Their Volatilities

We begin with an overview of frequentist and Bayesian approaches to incorporating change points in time series models of asset returns and their volatilities. It has been found in many empirical studies of stock returns and exchange rates that ignoring the possibility of parameter changes yields time series models with long-memory features, such as unit-root nonstationarity and high volatility persistence. We therefore focus on the ARX-GARCH model and introduce two timescales, using the “short” timescale to define GARCH dynamics and the “long” timescale to incorporate parameter jumps. This leads to a Bayesian change-point ARX-GARCH model, whose unknown parameters may undergo occasional changes at unspecified times and can be estimated by explicit recursive formulas when the hyperparameters of the Bayesian model are specified. We describe efficient estimators of the hyperparameters of the Bayesian model, leading to empirical Bayes estimators of the piecewise constant parameters with relatively low computational complexity. We also show how the computationally tractable empirical Bayes approach can be applied to the frequentist problem of partitioning the time series into segments under sparsity assumptions on the change points.

Tze Leung Lai, Haipeng Xing
86. Unspanned Stochastic Volatilities and Interest Rate Derivatives Pricing

This chapter first reviews the recent literature on the unspanned stochastic volatilities (USV) documented in the interest rate derivatives markets. USV refers to volatility factors implied in interest rate derivatives prices that have little correlation with the yield curve factors. We then present the result in Li and Zhao (Journal of Finance, 62, 345–382, 2006) that a sophisticated dynamic term structure model (DTSM) without the USV feature can have serious difficulties in hedging caps and cap straddles, even though it captures bond yields well. Furthermore, at-the-money straddle hedging errors are highly correlated with cap implied volatilities and can explain a large fraction of the hedging errors of all caps and straddles across moneyness and maturities. These findings strongly suggest that unmodified dynamic term structure models, which assume the same set of state variables for both bonds and derivatives, are seriously challenged in capturing term structure volatilities. We also present a multifactor term structure model with stochastic volatility and jumps that yields a closed-form formula for cap prices from Jarrow et al. (Journal of Finance, 61, 341–378, 2007). The three-factor stochastic volatility model with Poisson jumps can price interest rate caps well across moneyness and maturity. Last, we present the nonparametric estimation results from Li and Zhao (2009). Specifically, the forward densities depend significantly on the slope and volatility of LIBOR rates, and mortgage market activities have strong impacts on the shape of the forward densities. These results provide nonparametric evidence of unspanned stochastic volatility and suggest that the unspanned factors could be partly driven by activities in the mortgage markets. These findings reinforce the claim that term structure models need to accommodate unspanned stochastic volatilities in pricing and hedging interest rate derivatives. The econometric methods in this chapter include extended Kalman filtering, maximum likelihood estimation with latent variables, the local polynomial method, and nonparametric density estimation.

Feng Zhao
87. Alternative Equity Valuation Models

This chapter examines alternative equity valuation models and their ability to forecast future stock prices. The equity valuation models include Ohlson's (1995) Model, Feltham and Ohlson's (1995) Model, and Warren and Shelton's (1971) Model. Five research hypotheses are developed to examine whether different estimation techniques, earnings measures, and combined forecasting methods can improve the ability to predict future stock prices. We find that the simultaneous equation estimation procedure can produce more accurate future stock price forecasts than the traditional single-equation estimation method in terms of smaller prediction errors. In addition, the combined forecast method can further reduce prediction errors by using combinations of individual forecasts. Empirical evidence also shows that investors can use comprehensive earnings to forecast future stock prices more accurately in these valuation models. We use a simultaneous equations estimation technique to investigate the stock price forecasting ability of Ohlson's Model, Feltham and Ohlson's Model, and Warren and Shelton's (1971) Model. Moreover, we use the combined forecasting methods proposed by Granger and Newbold (1973) and Granger and Ramanathan (1984) to form combined stock price forecasts from the individual models. Finally, we examine whether comprehensive earnings can provide incremental price-relevant information beyond net income.
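A combination of the Granger–Ramanathan type amounts to regressing realized values on the individual model forecasts and using the fitted values as the combined forecast. The sketch below shows that mechanic with simulated series standing in for the model forecasts; the unrestricted-weights-plus-intercept variant and the data are illustrative assumptions, not the chapter's estimates.

```python
import numpy as np
import statsmodels.api as sm

def combine_forecasts(actual, forecasts):
    """Regress realizations on individual forecasts (unrestricted weights plus
    an intercept) and return the weights and the combined (fitted) forecast."""
    X = sm.add_constant(np.column_stack(forecasts))
    fit = sm.OLS(actual, X).fit()
    return fit.params, fit.fittedvalues

# Illustrative data: two competing forecasts of the same price series
rng = np.random.default_rng(2)
true_price = np.cumsum(rng.normal(0, 1, 100)) + 50
f1 = true_price + rng.normal(0.0, 2.0, 100)   # e.g., one valuation model's forecast
f2 = true_price + rng.normal(0.5, 1.0, 100)   # e.g., another model's (biased) forecast
weights, combined = combine_forecasts(true_price, [f1, f2])
print(weights)
```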

Hong-Yi Chen, Cheng-Few Lee, Wei K. Shih
88. Time Series Models to Predict the Net Asset Value (NAV) of an Asset Allocation Mutual Fund VWELX

This research examines the use of various forms of time series models to predict the total NAV of an asset allocation mutual fund. In particular, the mutual fund used is the Vanguard Wellington Fund, which maintains a balance between relatively conservative stocks and bonds. The period of the study on which the prediction of the total NAV is based is the 24-month period of 2010 and 2011, and the forecasting period is the first 3 months of 2012. Forecasting the total NAV of a massive conservative allocation fund, composed of an extremely large number of investments, requires a method that produces accurate results. Achieving this accuracy does not necessarily require the complex variety of methods typically found in financial forecasting studies. Various types of methods and models were used to predict the total NAV of the Vanguard Wellington Fund. The first set of model structures included simple exponential smoothing, double exponential smoothing, and Winters' method of smoothing. The second set of predictive models represented trend models, developed using regression estimation; they included a linear trend model, a quadratic trend model, and an exponential model. The third type of method used was a moving-average method. The fourth set of models incorporated the Box-Jenkins method, including an autoregressive model, a moving-average model, and an unbounded autoregressive and moving-average model.
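To make the simplest of these smoothers concrete, here is simple exponential smoothing written out directly rather than with a forecasting library; the smoothing constant and the NAV numbers are illustrative choices, not the Vanguard Wellington data.

```python
def simple_exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1}.
    Returns one-step-ahead forecasts; the last entry is the forecast for the
    next, unobserved period."""
    level = series[0]
    forecasts = []
    for y in series[1:]:
        forecasts.append(level)                   # forecast of y made before observing it
        level = alpha * y + (1 - alpha) * level   # update the level with the new observation
    forecasts.append(level)                       # forecast for the next period
    return forecasts

# Illustrative monthly total NAV observations
nav = [31.2, 31.5, 31.1, 31.8, 32.0, 32.4, 32.1, 32.6]
print(simple_exponential_smoothing(nav)[-1])
```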

Kenneth D. Lawrence, Gary Kleinman, Sheila M. Lawrence
89. Discriminant Analysis and Factor Analysis: Theory and Method

Three multivariate techniques, namely, discriminant analysis, factor analysis, and principal component analysis, are discussed in detail. In addition, the stepwise discriminant analysis of Pinches and Mingo (1973) is improved using a goal programming technique. These methodologies are applied to determine useful financial ratios and the subsequent bond ratings. The analysis shows that stepwise discriminant analysis is not an efficient solution: it is outperformed by the hybrid approach based on the goal programming technique, which provides a compromise solution that jointly maximizes explanatory power and discriminant power.
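As a minimal point of reference for the classification step, the sketch below runs plain (non-stepwise) linear discriminant analysis on financial ratios and rating classes using scikit-learn. The data are random placeholders, and the stepwise selection and goal-programming refinement discussed in the chapter are not shown.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Placeholder data: rows are firms, columns are financial ratios
# (e.g., coverage, leverage, profitability); labels are rating classes.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))
ratings = rng.integers(0, 3, size=300)     # 0 = high, 1 = medium, 2 = low grade

lda = LinearDiscriminantAnalysis()
lda.fit(X, ratings)
print("in-sample accuracy:", lda.score(X, ratings))
print("discriminant coefficients:\n", lda.coef_)
```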

Lie-Jane Kao, Cheng-Few Lee, Tzu Tai
90. Implied Volatility: Theory and Empirical Method

The estimation of implied volatility is one of the most important topics in option pricing research. The main purpose of this chapter is to review the different theoretical methods used to estimate the implied standard deviation and to show how implied volatility can be estimated in empirical work. The OLS method for estimating the implied standard deviation is first introduced, and the formulas derived by applying a Taylor series expansion to the Black–Scholes option pricing model are also described. Three approaches to estimating implied volatility are derived from one, two, and three options, respectively. For these formulas with remainder terms, accuracy depends on how close the underlying asset price is to the present value of the option's exercise price. The formula utilizing three options is more accurate than the other two approaches. In empirical work, we use call options on S&P 500 index futures in 2010 and 2011 to illustrate how MATLAB can be used to deal with the issue of convergence in estimating the implied volatility of futures options. The results show that the time series of implied volatility significantly violates the assumption of constant volatility in the Black–Scholes option pricing model. The volatility parameter in the option pricing model fluctuates over time and therefore should be estimated by a time series and cross-sectional model.
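Setting the closed-form approximations aside, implied volatility is usually backed out numerically by inverting the Black–Scholes price. A minimal sketch using a bracketing root finder follows; the contract numbers and the bracket are illustrative, and this is not the chapter's MATLAB implementation.

```python
from math import log, sqrt, exp
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Implied volatility via Brent's method; the bracket [1e-6, 5] is an ad hoc choice."""
    return brentq(lambda sigma: bs_call(S, K, T, r, sigma) - price, 1e-6, 5.0)

# Illustrative at-the-money call (numbers are made up)
print(implied_vol(price=25.0, S=1300.0, K=1300.0, T=0.25, r=0.01))
```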

Cheng-Few Lee, Tzu Tai
91. Measuring Credit Risk in a Factor Copula Model

In this chapter, we provide a new approach to estimating future credit risk on a target portfolio based on the CreditMetrics™ framework of J.P. Morgan. However, we adopt the perspective of a factor copula and bring the concept of principal component analysis into the factor structure to construct a more appropriate dependence structure among credits. To examine the proposed method, we use real market data rather than simulated data. We also develop a tool for risk analysis that is convenient to use, especially for banking loan businesses. The results show that assuming a normally distributed dependence structure does indeed lead to an underestimation of risk. In contrast, our proposed method captures the features of risk better and exhibits the fat-tail effects conspicuously even when the factors are assumed to be normally distributed.
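The chapter's factor copula is built on principal components, but the basic mechanics can be seen in a plain one-factor Gaussian copula simulation of portfolio defaults, sketched below. The default probability, correlation, exposures, and simulation size are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def simulate_losses(n_obligors=100, pd=0.02, rho=0.3, exposure=1.0,
                    n_sims=50_000, seed=4):
    """One-factor Gaussian copula: X_i = sqrt(rho)*M + sqrt(1-rho)*Z_i,
    with default when X_i < Phi^{-1}(pd). Returns simulated portfolio losses."""
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(pd)
    M = rng.standard_normal((n_sims, 1))              # common factor
    Z = rng.standard_normal((n_sims, n_obligors))     # idiosyncratic factors
    X = np.sqrt(rho) * M + np.sqrt(1 - rho) * Z
    defaults = X < threshold
    return exposure * defaults.sum(axis=1)

losses = simulate_losses()
print("expected loss:", losses.mean(), " 99.9% loss quantile:", np.quantile(losses, 0.999))
```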

Jow-Ran Chang, An-Chi Chen
92. Instantaneous Volatility Estimation by Nonparametric Fourier Transform Methods

Malliavin and Mancino (2009) proposed a nonparametric Fourier transform method to estimate the instantaneous volatility under the assumption that the underlying asset price process is a semi-martingale. Based on this theoretical result, this chapter first conducts some simulation tests to justify the effectiveness of the Fourier transform method. Two correction schemes are proposed to improve the accuracy of volatility estimation. By means of these Fourier transform methods, some documented phenomena such as volatility daily effect and multiple risk factors of volatility can be observed. Then, a linear hypothesis between the instantaneous volatility and VIX derived from Zhang and Zhu (2006) is investigated. We extend their result and adopt a general linear test for empirical analysis.

Chuan-Hsiang Han
93. A Dynamic CAPM with Supply Effect Theory and Empirical Results

Breeden (1979), Grinols (1984), and Cox et al. (1985) have described the importance of the supply side for capital asset pricing. Black (1976) derives a dynamic, multiperiod CAPM, integrating endogenous demand and supply. However, Black's theoretically elegant model has never been empirically tested for its implications in dynamic asset pricing. We first theoretically extend Black's CAPM. Then, we use price, dividend per share, and earnings per share to test for the existence of the supply effect with US equity data. We find the supply effect is important in US domestic stock markets. This finding holds when we break the companies listed in the S&P 500 into ten portfolios by different levels of payout ratio. It also holds consistently if we use individual stock data. A simultaneous equation system is constructed through a standard structural form of a multiperiod equation to represent the dynamic relationship between supply and demand for capital assets. The equation system is exactly identified under our specification. Two hypotheses related to the supply effect are then tested with respect to the parameters in the reduced-form system. The equation system is estimated by the seemingly unrelated regression (SUR) method, since SUR allows one to estimate the system simultaneously while accounting for correlated errors.
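For readers unfamiliar with SUR estimation, the sketch below implements the feasible GLS steps for a two-equation system with numpy and scipy: equation-by-equation OLS residuals give the cross-equation error covariance, which is then used in a stacked GLS step. The simulated system and dimensions are illustrative, not the chapter's structural model.

```python
import numpy as np
from scipy.linalg import block_diag

def sur_fgls(ys, Xs):
    """Feasible GLS for a seemingly unrelated regression system.
    ys: list of (n,) dependent vectors; Xs: list of (n, k_i) regressor matrices."""
    n = len(ys[0])
    # Step 1: equation-by-equation OLS residuals -> cross-equation covariance
    resid = np.column_stack([y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
                             for y, X in zip(ys, Xs)])
    Sigma = resid.T @ resid / n
    # Step 2: GLS on the stacked system with covariance Sigma kron I_n
    X_big = block_diag(*Xs)
    y_big = np.concatenate(ys)
    Omega_inv = np.kron(np.linalg.inv(Sigma), np.eye(n))
    return np.linalg.solve(X_big.T @ Omega_inv @ X_big, X_big.T @ Omega_inv @ y_big)

# Illustrative two-equation system with correlated errors
rng = np.random.default_rng(8)
n = 300
e = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=n)
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])
y1 = X1 @ np.array([1.0, 2.0]) + e[:, 0]
y2 = X2 @ np.array([0.5, -1.0]) + e[:, 1]
print(sur_fgls([y1, y2], [X1, X2]))   # approximately [1, 2, 0.5, -1]
```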

Cheng-Few Lee, Chiung-Min Tsai, Alice C. Lee
94. A Generalized Model for Optimum Futures Hedge Ratio

Under martingale and joint-normality assumptions, various optimal hedge ratios are identical to the minimum variance hedge ratio. As empirical studies usually reject the joint-normality assumption, we propose the generalized hyperbolic distribution as the joint log-return distribution of the spot and futures. Using the parameters of this distribution, we derive several of the most widely used optimal hedge ratios: minimum variance, maximum Sharpe measure, and minimum generalized semivariance. Under mild assumptions on the parameters, we find that these hedge ratios are identical. Regarding the equivalence of these optimal hedge ratios, our analysis suggests that the martingale property plays a much more important role than the joint distribution assumption. To estimate these optimal hedge ratios, we first write down the log-likelihood functions for symmetric hyperbolic distributions. Then we estimate the parameters by maximizing the log-likelihood functions. Using these MLE parameters for the generalized hyperbolic distributions, we obtain the minimum variance hedge ratio and the optimal Sharpe hedge ratio. Based on the MLE parameters and numerical methods, we can also calculate the minimum generalized semivariance hedge ratio.
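The benchmark to which these ratios collapse is the minimum-variance hedge ratio h* = Cov(dS, dF) / Var(dF). The sketch below estimates it from sample log-returns; the simulated spot/futures series are illustrative, and the generalized-hyperbolic MLE version in the chapter is not shown.

```python
import numpy as np

def min_variance_hedge_ratio(spot_returns, futures_returns):
    """Minimum-variance hedge ratio h* = Cov(dS, dF) / Var(dF), from sample returns."""
    cov = np.cov(spot_returns, futures_returns)
    return cov[0, 1] / cov[1, 1]

# Illustrative correlated spot and futures log-returns
rng = np.random.default_rng(5)
f = rng.normal(0, 0.012, 500)
s = 0.95 * f + rng.normal(0, 0.004, 500)
print(min_variance_hedge_ratio(s, f))   # close to 0.95 by construction
```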

Cheng-Few Lee, Jang-Yi Lee, Kehluh Wang, Yuan-Chung Sheu
95. Instrumental Variables Approach to Correct for Endogeneity in Finance

The endogeneity problem has received a mixed treatment in corporate finance research. Although many studies implicitly acknowledge its existence, the literature does not consistently account for endogeneity using formal econometric methods. This chapter reviews the instrumental variables (IV) approach to endogeneity from the point of view of a finance researcher implementing instrumental variables methods in empirical studies. The chapter is organized into two parts. Part I discusses the general procedure of the instrumental variables approach, including two-stage least squares (2SLS) and the generalized method of moments (GMM), the related diagnostic statistics for assessing the validity of instruments, which are important but not often used in finance applications, and some recent advances in econometrics research on weak instruments. Part II surveys corporate finance applications of instrumental variables. We find that the instrumental variables used in finance studies are usually chosen arbitrarily and that very few diagnostic statistics are computed to assess the adequacy of IV estimation. The resulting IV estimates are therefore questionable.
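A minimal numpy sketch of the two-stage mechanics follows: project the endogenous regressor on the instruments and exogenous controls, then run the second stage on the fitted values. The simulated data are illustrative, and standard errors and the diagnostic tests discussed in the chapter are omitted.

```python
import numpy as np

def two_stage_least_squares(y, X_endog, X_exog, Z):
    """2SLS: first stage regresses the endogenous regressors on the instruments and
    exogenous controls; the second stage uses the fitted values. Returns the
    second-stage coefficients (endogenous first, then exogenous, then constant)."""
    n = len(y)
    const = np.ones((n, 1))
    W = np.column_stack([Z, X_exog, const])                    # instrument set
    X_hat = W @ np.linalg.lstsq(W, X_endog, rcond=None)[0]     # first-stage fitted values
    X2 = np.column_stack([X_hat, X_exog, const])               # second-stage design
    return np.linalg.lstsq(X2, y, rcond=None)[0]

# Illustrative data: one endogenous regressor, one valid instrument, one control
rng = np.random.default_rng(6)
n = 1000
z = rng.normal(size=(n, 1))
u = rng.normal(size=n)
x = 0.8 * z[:, 0] + 0.5 * u + rng.normal(size=n)   # endogenous: correlated with the error u
ctrl = rng.normal(size=(n, 1))
y = 1.0 + 2.0 * x + 0.5 * ctrl[:, 0] + u
print(two_stage_least_squares(y, x.reshape(-1, 1), ctrl, z))  # approximately [2.0, 0.5, 1.0]
```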

Chia-Jane Wang
96. Application of Poisson Mixtures in the Estimation of Probability of Informed Trading

This research first discusses the evolution of the probability of informed trading in the finance literature. Motivated by asymmetric effects, e.g., in returns and trading volume in up and down markets, this study modifies the mixture-of-Poisson-distributions model by allowing different arrival rates for informed buys and sells to measure the probability of informed trading proposed by Easley et al. (Journal of Finance 51:1405–1436, 1996). By applying the expectation–maximization (EM) algorithm to estimate the parameters of the model, we derive a set of equations for maximum likelihood estimation; these equations are encoded in a SAS macro utilizing SAS/IML to implement the methodology.
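The sketch below writes the likelihood of the original Easley et al. (1996) mixture-of-Poissons model, with no-event, bad-news, and good-news regimes, and maximizes it numerically with scipy rather than EM, purely to show the structure being modified; the simulated buy/sell counts and starting values are illustrative.

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize
from scipy.special import logsumexp

def neg_log_likelihood(theta, buys, sells):
    """Mixture of Poissons: regimes mixed with probabilities (1-a), a*d, a*(1-d),
    where a is the event probability and d the probability of bad news."""
    a, d, eb, es, mu = theta
    log_no   = np.log(1 - a) + poisson.logpmf(buys, eb) + poisson.logpmf(sells, es)
    log_bad  = np.log(a) + np.log(d) + poisson.logpmf(buys, eb) + poisson.logpmf(sells, es + mu)
    log_good = np.log(a) + np.log(1 - d) + poisson.logpmf(buys, eb + mu) + poisson.logpmf(sells, es)
    return -np.sum(logsumexp(np.vstack([log_no, log_bad, log_good]), axis=0))

def estimate_pin(buys, sells):
    theta0 = np.array([0.4, 0.5, buys.mean(), sells.mean(), buys.mean()])
    bounds = [(1e-4, 1 - 1e-4), (1e-4, 1 - 1e-4), (1e-4, None), (1e-4, None), (1e-4, None)]
    res = minimize(neg_log_likelihood, theta0, args=(buys, sells),
                   method="L-BFGS-B", bounds=bounds)
    a, d, eb, es, mu = res.x
    return a * mu / (a * mu + eb + es)   # probability of informed trading

# Illustrative daily buy and sell counts
rng = np.random.default_rng(7)
buys, sells = rng.poisson(40, 60), rng.poisson(40, 60)
print(estimate_pin(buys, sells))
```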

Emily Lin, Cheng-Few Lee
97. CEO Stock Options and Analysts’ Forecast Accuracy and Bias

In this study, we investigate the relations between CEO stock options and analysts' earnings forecast accuracy and bias. We argue that a higher level of stock options may induce managers to undertake riskier projects, to change and/or reallocate their effort, and to possibly engage in gaming (such as opportunistic earnings and disclosure management), and we hypothesize that these managerial behaviors will result in an increase in the complexity of forecasting and, hence, in less accurate analysts' forecasts. We also posit that analysts' optimistic forecast bias will increase as the level of stock options pay increases. We reason that as forecast complexity increases with stock options pay, analysts, needing greater access to management's information to produce accurate forecasts, have incentives to increase the optimistic bias in their forecasts. Alternatively, a higher level of stock options pay may lead to improved disclosure because it better aligns managers' and shareholders' interests. The improved disclosure, in turn, may result in more accurate and less biased analysts' forecasts. Using ordinary least squares estimation, we test these hypotheses relating the level of CEO stock options pay to analysts' forecast accuracy and bias on a sample of firms from the Standard & Poor's ExecuComp database over the period 1993–2003. Our OLS models relate forecast accuracy and forecast bias (the dependent variables) to CEO stock options (the independent variable) and controls for earnings characteristics, firm characteristics, and forecast characteristics. We measure forecast accuracy as negative one times the absolute value of the difference between forecasted and actual earnings scaled by beginning-of-period stock price, and forecast bias as forecasted minus actual earnings scaled by beginning-of-period stock price. We control for differences in earnings characteristics by including earnings volatility, whether the firm has a loss, and earnings surprise; for differences in firm characteristics by including firm size, growth (measured as book-to-market ratio, percentage change in total assets, and percentage change in annual sales), and corporate governance quality (measured as percentage of shares outstanding owned by the CEO, whether the CEO is also chairman of the board of directors, number of annual board meetings, and whether directors are awarded stock options); and for differences in forecast characteristics by including analyst following and analyst forecast dispersion. In addition, the models include controls for industry and year. We use four measures of options: new options, existing exercisable options, existing unexercisable options, and total options (the sum of the previous three), all scaled by the total number of shares outstanding, and we estimate two models for each dependent variable, one including total options and the other including new options, existing exercisable options, and existing unexercisable options. We also use both contemporaneous and lagged values of options in our main tests. Our results indicate that analysts' earnings forecast accuracy decreases and forecast optimism increases as the level of stock options (particularly new options and existing exercisable options) in CEO pay increases. These findings suggest that the incentive alignment effects of stock options are more than offset by the investment, effort allocation, and gaming incentives induced by stock option grants to CEOs.
Given that analysts’ forecasts are an important source of information to capital markets, our finding of a decline in the quality of the information provided by analysts has implications for the level and variability of stock prices. It also has implications for information asymmetry and cost of capital, as well as for valuation models that rely on analysts’ earnings forecasts.

Kiridaran Kanagaretnam, Gerald J. Lobo, Robert Mathieu
98. Option Pricing and Hedging Performance Under Stochastic Volatility and Stochastic Interest Rates

Recent studies have extended the Black–Scholes model to incorporate either stochastic interest rates or stochastic volatility, but there is not yet a comprehensive empirical study demonstrating whether, and by how much, each generalized feature improves option pricing and hedging performance. This chapter fills this gap by first developing an implementable option model in closed form that admits both stochastic volatility and stochastic interest rates and that is parsimonious in the number of parameters. The model includes many known models as special cases. Based on the model, both delta-neutral and single-instrument minimum-variance hedging strategies are derived analytically. Using S&P 500 option prices, we then compare the pricing and hedging performance of this model with that of three existing ones that respectively allow for (i) constant volatility and constant interest rates (the Black–Scholes model), (ii) constant volatility but stochastic interest rates, and (iii) stochastic volatility but constant interest rates. Overall, incorporating stochastic volatility and stochastic interest rates produces the best performance in pricing and hedging, with the remaining pricing and hedging errors no longer systematically related to contract features. The second performer in the horse race is the stochastic volatility model, followed by the stochastic interest rate model and then by the Black–Scholes model.

Charles Cao, Gurdip S. Bakshi, Zhiwu Chen
99. The Le Châtelier Principle of the Capital Market Equilibrium

This chapter provides a theoretical underpinning for the problem of the Investment Company Act. The Le Châtelier principle is well known in thermodynamics: a constrained system tends to adjust itself to a new equilibrium as far as possible. In capital market equilibrium, added constraints on portfolio investment in each stock can lead to inefficiency, manifested in a rightward shift of the efficient frontier. According to the empirical study, the potential loss can amount to millions of dollars, coupled with a higher risk-free rate and greater transaction and information costs.
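The frontier shift can be illustrated with a small mean-variance exercise: for the same target return, capping the weight in each stock can only raise (weakly) the attainable portfolio variance. The sketch below compares a long-only minimum-variance portfolio with and without a per-stock cap; the covariance matrix, expected returns, and the 25 % cap are illustrative assumptions, not the chapter's data.

```python
import numpy as np
from scipy.optimize import minimize

def min_variance(cov, mu, target_ret, cap=None):
    """Minimum portfolio variance for a target expected return, long-only,
    with an optional per-stock upper bound mimicking an investment-limit constraint."""
    n = len(mu)
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w: w @ mu - target_ret}]
    bounds = [(0.0, cap if cap is not None else 1.0)] * n
    res = minimize(lambda w: w @ cov @ w, np.full(n, 1.0 / n),
                   bounds=bounds, constraints=cons)
    return res.fun   # portfolio variance at the target return

rng = np.random.default_rng(9)
A = rng.normal(size=(6, 6))
cov = A @ A.T / 6 + np.eye(6) * 0.01       # positive-definite covariance matrix
mu = rng.uniform(0.05, 0.15, 6)
target = mu.mean()                         # attainable under both settings

# The capped variance is weakly larger: the frontier shifts to the right.
print("uncapped variance:      ", min_variance(cov, mu, target))
print("25% cap per stock:      ", min_variance(cov, mu, target, cap=0.25))
```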

Chin W. Yang, Ken Hung, Matthew D. Brigida
Backmatter
Metadata
Title: Handbook of Financial Econometrics and Statistics
Editors: Cheng-Few Lee, John C. Lee
Copyright Year: 2015
Publisher: Springer New York
Electronic ISBN: 978-1-4614-7750-1
Print ISBN: 978-1-4614-7749-5
DOI: https://doi.org/10.1007/978-1-4614-7750-1