
This volume provides practical solutions and introduces recent theoretical developments in risk management, pricing of credit derivatives, quantification of volatility and copula modeling. This third edition is devoted to modern risk analysis based on quantitative methods and textual analytics to meet the current challenges in banking and finance. It includes 14 new contributions and presents a comprehensive, state-of-the-art treatment of cutting-edge methods and topics, such as collateralized debt obligations, the high-frequency analysis of market liquidity, and realized volatility.

The book is divided into three parts: Part 1 revisits important market risk issues, while Part 2 introduces novel concepts in credit risk and its management along with updated quantitative methods. The third part discusses the dynamics of risk management and includes risk analysis of energy markets and of cryptocurrencies. Digital assets, such as blockchain-based currencies, have become popular but are theoretically challenging to analyze with conventional methods. Among other topics, this part introduces a modern text-mining method called dynamic topic modeling in detail and applies it to a Bitcoin message board.

The unique synthesis of theory and practice supported by computational tools is reflected not only in the selection of topics, but also in the fine balance of scientific contributions on practical implementation and theoretical concepts. This link between theory and practice offers theoreticians insights into considerations of applicability and, vice versa, provides practitioners convenient access to new techniques in quantitative finance. Hence the book will appeal both to researchers, including master and PhD students, and practitioners, such as financial engineers. The results presented in the book are fully reproducible and all quantlets needed for calculations are provided on an accompanying website.

The Quantlet platform (quantlet.de, quantlet.com, quantlet.org) is an integrated QuantNet environment consisting of different types of statistics-related documents and program code. Its goal is to promote reproducibility and to offer a platform for sharing validated knowledge native to the social web. QuantNet and the corresponding Data-Driven Documents (D3) based visualization allow readers to reproduce the tables, figures and calculations in this Springer book.

Erratum to: Copulae in High Dimensions: An Introduction

Ostap Okhrin, Alexander Ristig, Ya-Fei Xu

Chapter 1. VaR in High Dimensional Systems: A Conditional Correlation Approach

Abstract
In empirical finance, multivariate volatility models are widely used to capture both volatility clustering and contemporaneous correlation of asset return vectors. In higher dimensional systems, parametric specifications often become intractable for empirical analysis owing to large parameter spaces. In contrast, feasible specifications impose strong restrictions that may not be met by financial data, for instance, constant conditional correlation (CCC). Recently, dynamic conditional correlation (DCC) models have been introduced as a means to solve the trade-off between model feasibility and flexibility. Here, we employ alternatively the CCC and the DCC modeling frameworks to evaluate the Value-at-Risk associated with portfolios comprising major U.S. stocks. In addition, we compare their performance with corresponding results obtained from modeling portfolio returns directly via univariate volatility models.
H. Herwartz, B. Pedrinha, F. H. C. Raters
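As a small illustration of the CCC framework sketched in this abstract (not the chapter's actual implementation), the following numpy snippet estimates univariate volatilities with a simple EWMA recursion as a stand-in for GARCH fits, combines them with a constant correlation matrix of standardized residuals, and reads off a Gaussian one-day Value-at-Risk. All data and parameters are simulated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 500, 3
returns = rng.standard_normal((T, n)) * 0.01   # simulated daily returns

def ewma_vol(x, lam=0.94):
    """Recursive EWMA variance, a simple stand-in for a GARCH(1,1) fit."""
    var = np.empty_like(x)
    var[0] = x.var()
    for t in range(1, len(x)):
        var[t] = lam * var[t - 1] + (1 - lam) * x[t - 1] ** 2
    return np.sqrt(var)

vols = np.column_stack([ewma_vol(returns[:, i]) for i in range(n)])
std_resid = returns / vols
R = np.corrcoef(std_resid, rowvar=False)   # constant conditional correlation
D = np.diag(vols[-1])                      # today's volatilities
H = D @ R @ D                              # conditional covariance H_t = D_t R D_t
w = np.full(n, 1 / n)                      # equally weighted portfolio
sigma_p = np.sqrt(w @ H @ w)
VaR_99 = 2.326 * sigma_p                   # Gaussian 99% one-day VaR
print(f"portfolio vol: {sigma_p:.4%}, 99% VaR: {VaR_99:.4%}")
```

A DCC model would additionally let the correlation matrix R evolve over time; the CCC case above fixes it once.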

Chapter 2. Multivariate Volatility Models

Abstract
Multivariate volatility models are widely used in finance to capture both volatility clustering and contemporaneous correlation of asset return vectors. Here, we focus on multivariate GARCH models. In this common model class, it is assumed that the covariance of the error distribution follows a time dependent process conditional on information which is generated by the history of the process. To provide a particular example, we consider a system of exchange rates of two currencies measured against the US Dollar (USD), namely the Deutsche Mark (DEM) and the British Pound Sterling (GBP). For this process, we compare the dynamic properties of the bivariate model with univariate GARCH specifications where cross sectional dependencies are ignored. Moreover, we illustrate the scope of the bivariate model by ex-ante forecasts of bivariate exchange rate densities.
M. R. Fengler, H. Herwartz, F. H. C. Raters
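The conditional covariance recursion at the heart of such bivariate models can be sketched with a multivariate EWMA, a simplified stand-in for the chapter's bivariate GARCH specification; the two simulated series below merely play the role of the DEM/USD and GBP/USD returns.

```python
import numpy as np

rng = np.random.default_rng(1)
# simulated FX returns with positive cross-correlation
T = 750
C = np.linalg.cholesky(np.array([[1.0, 0.6], [0.6, 1.0]]))
eps = rng.standard_normal((T, 2)) @ C.T * 0.005

lam = 0.94
H = np.cov(eps, rowvar=False)          # initialize with the sample covariance
for t in range(T):                     # multivariate EWMA recursion
    H = lam * H + (1 - lam) * np.outer(eps[t], eps[t])

corr = H[0, 1] / np.sqrt(H[0, 0] * H[1, 1])
print("one-step-ahead covariance forecast:\n", H)
print(f"implied conditional correlation: {corr:.2f}")
```

Ignoring the off-diagonal term would amount to the univariate treatment the chapter compares against.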

Chapter 3. Portfolio Selection with Spectral Risk Measures

Abstract
In this chapter, a portfolio selection problem with spectral risk measures is considered. Spectral risk measures form a general family of coherent risk measures capable of reflecting an investor's risk preference. A multivariate conditional heteroscedastic model with vine copulae is employed to describe the dynamics and dependence of the underlying asset returns. The technique of linear programming is used to determine the optimal asset allocations accurately and quickly. Simulation studies are conducted to investigate the impact of the magnitude of tail dependence among the underlying assets and of the degree of risk aversion on the performance of the optimal portfolio. An empirical study is conducted using the stock prices of constituents of the FTSE TWSE Taiwan 100 Index. Numerical results indicate that the optimal portfolios react differently to different economic situations.
S. F. Huang, H. C. Lin, T. Y. Lin
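A discretized spectral risk measure is just a weighted average of sorted losses under an admissible spectrum that places increasing weight on the worst outcomes. The sketch below uses an exponential spectrum on simulated losses; the functional form and the risk-aversion parameter k are assumptions of this illustration, not the chapter's specification.

```python
import numpy as np

rng = np.random.default_rng(2)
losses = rng.standard_normal(10_000) * 0.02    # simulated portfolio losses

def spectral_risk(losses, k=10.0):
    """Discretized spectral risk measure with exponential spectrum
    phi(p) = k * exp(-k * (1 - p)) / (1 - exp(-k)),
    non-decreasing in p, so the worst losses get the largest weights."""
    x = np.sort(losses)                 # ascending: worst losses last
    n = len(x)
    p = (np.arange(n) + 0.5) / n
    phi = k * np.exp(-k * (1 - p)) / (1 - np.exp(-k))
    w = phi / phi.sum()                 # normalized discrete weights
    return float(np.sum(w * x))

rho = spectral_risk(losses)
print(f"spectral risk: {rho:.4f}, plain mean loss: {losses.mean():.4f}")
```

Because the measure is a weighted sum of order statistics, minimizing it over portfolio weights can be cast as a linear program, the route taken in the chapter.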

Chapter 4. Implementation of Local Stochastic Volatility Model in FX Derivatives

Abstract
In this paper, we present our implementation of the Local Stochastic Volatility (LSV) model for pricing exotic options in the FX market. First, we briefly discuss the limitations of the Black-Scholes model, the Local Volatility (LV) model and the Stochastic Volatility (SV) model. To overcome the drawbacks of these three models, a more general LSV model has been proposed to describe the dynamics of implied volatilities. Second, we present the details of LSV model calibration in terms of the forward Kolmogorov equation. Third, we introduce numerical methods for option pricing under the LSV model, including both a backward Partial Differential Equation (PDE) method and a forward Monte Carlo method. Finally, based on our implementation, we compare the calibration and pricing results of the LSV model with those of the LV and SV models; lower calibration errors and more accurate pricing results are achieved, which demonstrates the effectiveness of the methods presented in the paper.
J. Zheng, X. Yuan

Chapter 5. Estimating Distance-to-Default with a Sector-Specific Liability Adjustment via Sequential Monte Carlo

Abstract
Distance-to-Default (DTD), a widely adopted corporate default predictor, arises from the classical structural credit risk model of Merton (1974). The modern way of estimating DTD applies the model to an observed time series of equity values along with the default point definition made popular by the commercial KMV model. The default point is meant to be a default trigger level one year from the evaluation time, and is assumed to be the short-term debt plus 50% of the long-term debt. This assumption, however, leaves out other corporate liabilities, which can be substantial, particularly for financial firms. Duan et al. (2012) rectified this by adding other liabilities after applying an unknown but estimable haircut. Typical DTD estimation uses a one-year daily time series. With at most four quarterly balance sheets, the estimated haircut is bound to be highly unstable. Post-estimation averaging of the haircuts over a sector of firms is thus sensible for practical applications. Instead of relying on post-estimation averaging, we assume a common haircut for all firms in a sector and devise a novel density-tempered expanding-data sequential Monte Carlo method to jointly estimate this common and other firm-specific parameters. Joint estimation is challenging due to the large number of parameters, but the benefits are manifold: for example, rigorous statistical inference on the common parameter becomes possible, and estimates of asset correlations are a by-product. Four industry groups of US firms in 2009 and 2014 are used to demonstrate this estimation method. Our results suggest that this haircut is materially important, and varies over time and across industries; for example, the estimates are 78.97% in 2009 and 66.4% in 2014 for 40 randomly selected insurance firms, and 0.76% for all 31 engineering and construction firms and 83.92% for 40 randomly selected banks in 2014.
J.-C. Duan, W.-T. Wang
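The DTD formula itself is compact; the sketch below computes it under the KMV default point with and without a haircut on other liabilities. All balance-sheet numbers are hypothetical, and the sequential Monte Carlo estimation of the haircut is well beyond this snippet.

```python
import math

def distance_to_default(V, sigma_V, mu, short_debt, long_debt,
                        other_liab, haircut, T=1.0):
    """Merton-style DTD with the KMV default point (short-term debt plus
    50% of long-term debt), augmented by a haircut on other liabilities
    in the spirit of Duan et al. (2012)."""
    default_point = short_debt + 0.5 * long_debt + haircut * other_liab
    num = math.log(V / default_point) + (mu - 0.5 * sigma_V ** 2) * T
    return num / (sigma_V * math.sqrt(T))

# hypothetical balance-sheet numbers, for illustration only
dtd_no_adj = distance_to_default(120, 0.25, 0.05, 40, 30, 25, haircut=0.0)
dtd_adj = distance_to_default(120, 0.25, 0.05, 40, 30, 25, haircut=0.8)
print(f"DTD without liability adjustment: {dtd_no_adj:.2f}")
print(f"DTD with an 80% haircut on other liabilities: {dtd_adj:.2f}")
```

Raising the default point by including other liabilities shrinks the DTD, which is why ignoring them overstates the creditworthiness of liability-heavy firms.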

Chapter 6. Risk Measurement with Spectral Capital Allocation

Abstract
Spectral risk measures provide the framework to formulate the risk aversion of a firm specifically for each quantile of the loss distribution of a portfolio. More precisely, the risk aversion is codified in a weight function weighting each quantile. Since the basic coherent building blocks of spectral risk measures are expected shortfall measures, the most intuitive approach is to combine those. For investment decisions, the marginal risk or the capital allocation is the sensible quantity. Since spectral risk measures are coherent, a sensible capital allocation also exists, based on the notion of derivatives or, more in the spirit of the coherency approach, as an expectation under a generalized maximal scenario.
L. Overbeck, M. Sokolova

Chapter 7. Market Based Credit Rating and Its Applications

Abstract
Credit rating plays a critical role in financial risk management. It is like a name tag indicating a firm's health condition. Generally, ratings involve a lot of firm-specific information that is hard to obtain or only available quarterly. In this chapter, we propose a two-step algorithm involving ARIMA-GARCH modelling and clustering to obtain a market based credit rating from easily obtained public information. The algorithm is applied to 3-year CDS spreads of 247 publicly listed firms. Empirical results and comparisons of the obtained ratings with the ratings given by agencies show that such a market based credit rating performs quite well.
R. S. Tsay, H. Zhu

Chapter 8. Using Public Information to Predict Corporate Default Risk

Abstract
Corporate defaults are often affected by many factors, which are roughly divided into two types: internal and external. Internal factors can be measured precisely with firm-specific financial statistics, while external factors contain qualitative data, such as related news. News provides a large amount of timely information that affects the default probability of corporations, and the efficient extraction of the information contained in the news is the main focus of this study; we propose to use empirical Bayes and Bayesian networks to achieve this goal. First, we retrieve both macroeconomic and firm-specific news published by major newspapers in Taiwan. Then, word segmentation is applied, keywords are extracted and the news variables are computed. Instead of adding the news variables to the logistic regression model, we convert them into prior distributions for the parameters of the corporate default model. Finally, we compute the posterior distribution of the model parameters to predict corporate default. The estimation is performed using integrated nested Laplace approximations, which, we believe, perform better than traditional Markov chain Monte Carlo for our model. Empirical analysis using Taiwanese data finds that news has a significant impact on corporate default rate prediction. Adding the news variables improves forecast precision, proving their usefulness.
C. N. Peng, J. L. Lin

Chapter 9. Stress Testing in Credit Portfolio Models

Abstract
As, in light of the recent financial crises, stress tests have become an integral part of risk management and banking supervision, the analysis and understanding of risk model behaviour under stress has become ever more important. In this paper, we present a general approach to implementing stress scenarios in a multi-factor credit portfolio model and analyse asset correlations, default probabilities and default correlations under stress. We use our results to study the implications for credit reserves and capital requirements and illustrate the proposed methodology by stressing a large investment banking portfolio. Although our stress testing approach is developed in a particular credit portfolio model, the main concept, stressing risk factors through a truncation of their distributions, is independent of the model specification and can be applied to other risk types as well.
M. Kalkbrener, L. Overbeck
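The truncation idea can be illustrated in a one-factor Gaussian credit model: conditioning the systematic factor on its worst outcomes raises default probabilities. The parameters below are illustrative assumptions, not taken from the chapter.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
nd = NormalDist()

# one-factor model: A_i = sqrt(rho)*Z + sqrt(1-rho)*eps_i, default if A_i < c
rho, pd_uncond, sims = 0.2, 0.02, 200_000
c = nd.inv_cdf(pd_uncond)              # threshold matching the unconditional PD

Z = rng.standard_normal(sims)
eps = rng.standard_normal(sims)
A = np.sqrt(rho) * Z + np.sqrt(1 - rho) * eps
pd_base = np.mean(A < c)

# stress scenario: truncate the systematic factor to its worst 10% of outcomes
stressed = Z < nd.inv_cdf(0.10)
pd_stress = np.mean(A[stressed] < c)

print(f"baseline PD: {pd_base:.3%}, stressed PD: {pd_stress:.3%}")
```

The same truncation mechanism carries over to multi-factor models and, as the chapter argues, to other risk types.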

Chapter 10. Penalized Independent Factor

Abstract
We propose a penalized independent factor (PIF) method to extract independent factors via sparse estimation. Compared to conventional independent component analysis, each PIF depends only on a subset of the measured variables and is assumed to follow a realistic distribution. Our main theoretical result establishes that the sparse loading matrix is consistent. We detail the PIF algorithm, investigate its finite sample performance and illustrate a possible application in risk management. We apply the PIF to daily probability of default data from 1999 to 2013. The proposed method provides a good interpretation of the dynamic structure of 14 economies' global default probability from the pre-Dot-Com bubble to the post-Subprime crisis.
Y. Chen, R. B. Chen, Q. He

Chapter 11. Term Structure of Loss Cascades in Portfolio Securitisation

Abstract
We report on the term structure of loss cascades generated through portfolio tranching. The results are based on the analytical form of the loss distribution for uniform loan portfolios and show that the expected loss of the first loss position increases roughly linearly, whereas the expected losses of the more senior tranches increase exponentially over time, depending on the relation between the mean default probability and the tranching limits.
L. Overbeck, C. Wagner
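A minimal simulation under the Vasicek large-pool approximation (an assumption of this sketch, not necessarily the chapter's exact model) reproduces the qualitative pattern: first-loss expected losses grow quickly, while senior tranche expected losses start near zero and grow much faster in relative terms.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
rng = np.random.default_rng(4)

def tranche_expected_loss(attach, detach, T, hazard=0.02, rho=0.15, sims=20_000):
    """Expected loss of an [attach, detach) tranche on a large homogeneous
    portfolio (Vasicek large-pool approximation), horizon T in years."""
    pd_T = 1.0 - np.exp(-hazard * T)    # term default probability
    c = nd.inv_cdf(pd_T)
    Z = rng.standard_normal(sims)
    # conditional loss fraction of the pool, given the systematic factor Z
    L = np.array([nd.cdf(x) for x in (c - np.sqrt(rho) * Z) / np.sqrt(1 - rho)])
    tranche_loss = np.clip(L - attach, 0.0, detach - attach) / (detach - attach)
    return float(tranche_loss.mean())

results = {T: (tranche_expected_loss(0.00, 0.03, T),
               tranche_expected_loss(0.09, 0.12, T)) for T in (1, 3, 5)}
for T, (el_eq, el_sr) in results.items():
    print(f"T={T}y  first-loss EL={el_eq:.3f}  senior EL={el_sr:.5f}")
```

The attachment points, hazard rate and asset correlation are illustrative; the chapter derives the term structure analytically rather than by simulation.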

Chapter 12. Credit Rating Score Analysis

Abstract
We analyse a sample of funds and other securities, each assigned a total rating score by an unknown expert entity. The scores are based on a number of risk and complexity factors, each assigned a category (factor score) of Low, Medium or High by the expert entity. A principal component analysis of the data reveals that, based on the chosen risk factors alone, we cannot identify a single underlying latent source of risk in the data. Conversely, the chosen complexity factors are clearly related to one or two underlying sources of complexity. For the sample we find a clear positive relation between the first principal component and the total expert score. An attempt to match the securities' expert scores by a linear projection of their individual factor scores yields a best-case correlation between expert score and projection of 0.9952. However, the sum of squared differences is, at 46.5552, still notable.
Wolfgang Karl Härdle, K. F. Phoon, D. K. C. Lee

Chapter 13. Copulae in High Dimensions: An Introduction

Abstract
This paper reviews recent research on high dimensional copulae. Bivariate copulae are presented first as a foundation, followed by multivariate copulae, which are the focus of the paper. In the multivariate copula sections, the hierarchical Archimedean copula, the factor copula and the vine copula are introduced. The following section presents estimation methods for multivariate copulae, including parametric and nonparametric routines, and introduces goodness-of-fit tests in the copula context. An empirical study of multivariate copulae in risk management concludes the paper.
Ostap Okhrin, Alexander Ristig, Ya-Fei Xu
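As a small companion example, the following sketch samples from a Gaussian copula, the simplest multivariate copula: correlate standard normals with a Cholesky factor, then map each margin through the normal CDF to obtain uniforms. The correlation matrix is illustrative.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(5)
nd = NormalDist()

def gaussian_copula_sample(R, n):
    """Draw n samples with uniform margins and Gaussian copula dependence R."""
    C = np.linalg.cholesky(R)
    Z = rng.standard_normal((n, R.shape[0])) @ C.T
    return np.array([[nd.cdf(z) for z in row] for row in Z])

R = np.array([[1.0, 0.7, 0.3],
              [0.7, 1.0, 0.3],
              [0.3, 0.3, 1.0]])
U = gaussian_copula_sample(R, 5_000)

# margins are uniform on [0, 1]; all dependence sits in the copula
print("sample means (should be near 0.5):", U.mean(axis=0).round(3))
print("correlation of U1, U2:", round(np.corrcoef(U[:, 0], U[:, 1])[0, 1], 2))
```

Hierarchical Archimedean, factor and vine copulae, the paper's focus, replace the single Gaussian dependence structure with richer, higher-dimensional constructions built from such building blocks.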

Chapter 14. Measuring and Modeling Risk Using High-Frequency Data

Abstract
Measuring and modelling financial volatility is the key to derivative pricing, asset allocation and risk management. The recent availability of high-frequency data allows for refined methods in this field. In particular, more precise measures of the daily or lower frequency volatility can be obtained by summing over squared high-frequency returns. In turn, this so-called realized volatility can be used for more accurate model evaluation and for describing the dynamic and distributional structure of volatility. Moreover, non-parametric measures of systematic risk are attainable, which can straightforwardly be used to model the commonly observed time variation in the betas. The discussion of these new measures and methods is accompanied by an empirical illustration using high-frequency data on the IBM stock and the DJIA index.
Wolfgang Karl Härdle, N. Hautsch, U. Pigorsch
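The realized volatility estimator described here is essentially a one-liner: sum the squared intraday returns. This sketch simulates a day of 5-minute returns with known volatility and recovers it; the interval count and volatility level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
# simulated 5-minute log returns for one trading day (78 intervals),
# constructed so that the true daily volatility is 1%
m, true_daily_vol = 78, 0.01
intraday = rng.standard_normal(m) * (true_daily_vol / np.sqrt(m))

rv = np.sum(intraday ** 2)             # realized variance
realized_vol = np.sqrt(rv)
print(f"realized volatility: {realized_vol:.4%} (true: {true_daily_vol:.2%})")
```

In practice microstructure noise limits how finely one can sample, a central theme of the high-frequency literature the chapter surveys.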

Chapter 15. Measuring Financial Risk in Energy Markets

Abstract
We investigate the relative performance of a wide array of Value-at-Risk (VaR) and Expected Tail Loss (ETL) risk models in the energy commodities markets. The risk models are tested on a sample of daily spot prices of WTI oil, Brent oil, natural gas, heating oil, coal and uranium yellow cake during the recent global financial crisis. The analysed sample includes periods of backwardation and contango. After obtaining the VaR and ETL estimates, we evaluate the statistical significance of the differences in performance of the analysed risk models. We employ a novel methodology for comparing VaR performance that allows us to rank competing models. Our simulation results show that for a significant number of the VaR models there is no statistically significant difference in performance.
S. Žiković
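Historical-simulation VaR and ETL, the simplest members of the model array studied in the chapter, can be sketched as order statistics of the empirical loss distribution; the return series below is simulated, not actual energy price data.

```python
import numpy as np

rng = np.random.default_rng(7)
returns = rng.standard_normal(1_000) * 0.02   # stand-in for daily spot returns

def hist_var_etl(returns, alpha=0.99):
    """Historical-simulation VaR and Expected Tail Loss at level alpha,
    reported as positive loss numbers."""
    losses = np.sort(-returns)                # losses, ascending
    k = int(np.ceil(alpha * len(losses))) - 1
    var = losses[k]
    etl = losses[k:].mean()                   # mean loss at or beyond the VaR
    return float(var), float(etl)

var99, etl99 = hist_var_etl(returns)
print(f"99% VaR: {var99:.4f}, 99% ETL: {etl99:.4f}")
```

ETL always sits at or above VaR at the same level, since it averages the losses in the tail beyond the VaR cutoff.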

Chapter 16. Risk Analysis of Cryptocurrency as an Alternative Asset Class

Abstract
The purpose of this study is to analyze the risk of cryptocurrencies as an alternative investment. In particular, we find the wealth distribution of the cryptocurrency market, evaluate its corresponding effects on the market and analyze other risk factors resulting in the death of altcoins. The paper concludes that the closer the right tail of the wealth distribution is to a power law, the more stable the market will be. This result is quite useful for investors making decisions about investing in cryptocurrencies.
L. Guo, X. J. Li
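A standard way to quantify how close a right tail is to a power law is the Hill estimator of the tail index. The sketch fits it to simulated Pareto-tailed wealth data; the estimator choice and the cut-off k are assumptions of this illustration, not necessarily the chapter's method.

```python
import numpy as np

rng = np.random.default_rng(8)
# simulated wealth with a Pareto (power-law) right tail, tail index 1.5
alpha_true = 1.5
wealth = rng.pareto(alpha_true, 20_000) + 1.0

def hill_estimator(x, k=500):
    """Hill estimator of the power-law tail index from the k largest values."""
    tail = np.sort(x)[-k:]
    return 1.0 / np.mean(np.log(tail / tail[0]))

alpha_hat = hill_estimator(wealth)
print(f"estimated tail index: {alpha_hat:.2f} (true: {alpha_true})")
```

A smaller estimated tail index signals a heavier tail, i.e. wealth more concentrated in a few large holders.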

Chapter 17. Time Varying Quantile Lasso

Abstract
In the present chapter we study the dynamics of the penalization parameter $$\lambda$$ of the least absolute shrinkage and selection operator (Lasso) method proposed by Tibshirani (J Roy Stat Soc Series B 58:267–288, 1996) and extended into the quantile regression context by Li and Zhu (J Comput Graph Stat 17:1–23, 2008). The dynamic behaviour of the parameter $$\lambda$$ can be observed when the model is assumed to vary over time and the fitting is therefore performed with moving windows. The proposal of investigating the time series of $$\lambda$$ and its dependency on model characteristics was brought into focus by Härdle et al. (J Econom 192:499–513, 2016), the foundation of the FinancialRiskMeter. Following the ideas behind the two aforementioned projects, we use the derivation of the formula for the penalization parameter $$\lambda$$ as a result of the optimization problem. This reveals three possible effects driving $$\lambda$$: the variance of the error term, the correlation structure of the covariates and the number of nonzero coefficients of the model. Our aim is to disentangle these three effects and to investigate their relationship with the tuning parameter $$\lambda$$, which we do in a simulation study. After treating the theoretical impact of the three model characteristics on $$\lambda$$, an empirical application is performed and the idea of implementing the parameter $$\lambda$$ in a systemic risk measure is presented.
Wolfgang Karl Härdle, W. Wang, L. Zboňáková
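The plain mean-regression Lasso of Tibshirani (1996) can be sketched via coordinate descent with soft-thresholding; the quantile variant of Li and Zhu would replace the squared loss with a check loss, which this illustration omits. The example shows the sparsity effect that links the number of nonzero coefficients to $$\lambda$$.

```python
import numpy as np

rng = np.random.default_rng(9)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent with soft-thresholding,
    minimizing (1/2n)||y - X beta||^2 + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]        # partial residual
            rho_j = X[:, j] @ r / n
            z_j = X[:, j] @ X[:, j] / n
            beta[j] = np.sign(rho_j) * max(abs(rho_j) - lam, 0.0) / z_j
    return beta

# sparse ground truth: only 2 of 10 covariates matter
n, p = 200, 10
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_normal(n) * 0.5

beta_small = lasso_cd(X, y, lam=0.05)
beta_large = lasso_cd(X, y, lam=1.0)
print("nonzero coefficients at small lambda:", int(np.sum(beta_small != 0)))
print("nonzero coefficients at large lambda:", int(np.sum(beta_large != 0)))
```

Refitting such a model over moving windows and tracking the selected $$\lambda$$ produces the time series whose dynamics the chapter studies.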

Chapter 18. Dynamic Topic Modelling for Cryptocurrency Community Forums

Abstract
Cryptocurrencies are increasingly used in official cash flows and the exchange of goods. Bitcoin and the underlying blockchain technology have attracted the attention of big companies that are adopting and investing in this technology. The CRIX index of cryptocurrencies, http://hu.berlin/CRIX, indicates a wider acceptance of cryptos. One reason for this prosperity is certainly the security aspect, since the underlying network of cryptos is decentralized. Cryptos are, however, unregulated and highly volatile, making risk assessment at any given moment difficult. Message boards offer a huge source of information in the form of unstructured text written by, e.g., Bitcoin developers and investors. We collect texts, user information and associated time stamps from a popular cryptocurrency message board. We then provide an indicator for fraudulent schemes, constructed using dynamic topic modelling, text mining and unsupervised machine learning. We study how opinions and the evolution of topics are connected with big events in the cryptocurrency universe. Furthermore, the predictive power of these techniques is investigated by comparing the results to known events in the cryptocurrency space. We also use the results to test the hypotheses of self-fulfilling prophecies and herding behaviour.
M. Linton, E. G. S. Teo, E. Bommes, C. Y. Chen, Wolfgang Karl Härdle