
About this Book

The risk of counterparty default in banking, insurance, institutional, and pension-fund portfolios is an area of ongoing and increasing importance for finance practitioners. It is, unfortunately, a topic with a high degree of technical complexity. Addressing this challenge, this book provides a comprehensive and accessible mathematical and statistical discussion of a broad range of existing default-risk models. Model description and derivation, however, are only part of the story. Through exhaustive practical examples and extensive code illustrations in the Python programming language, this work also shows explicitly how these models are implemented. Bringing these complex approaches to life by combining the technical details with working Python code reduces the burden of model complexity and opens up this decidedly specialized field of study. The entire work is also liberally supplemented with model-diagnostic, calibration, and parameter-estimation techniques to assist the quantitative analyst in day-to-day implementation as well as in mitigating model risk. Written by an active and experienced practitioner, it is an invaluable learning resource and reference text for financial-risk practitioners and an excellent source for advanced undergraduate and graduate students seeking to acquire knowledge of the key elements of this discipline.



Chapter 1. Getting Started

The first order of business is to establish the scope of the book, its organization, and the underlying principles guiding its construction. This chapter thus begins by delimiting the credit-risk perspective adopted in the following discussion. The subsequent chapters, to be clear, consider static structural and reduced-form credit-risk models in a portfolio setting from a predominantly risk-management perspective. Most of the ideas, however, with a few changes such as the choice of probability measure, apply readily in the pricing context. The chapter then highlights the three thematic elements of the book’s organization: models, diagnostic tools, and parameter estimation. This structure is designed to address the challenges of model selection and management through multiplicity of perspective, while promoting transparency, accessibility of presentation, and practical concreteness. The chapter concludes with a brief introduction to a compact portfolio example, which is used throughout the remainder of the book. Preliminary analysis is performed on this portfolio to illuminate its characteristics and foreshadow the forthcoming analytic elements.
David Jamieson Bolder
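The book’s actual portfolio example is not reproduced in this summary; as a stand-in, the following sketch builds a small hypothetical portfolio in Python (the book’s implementation language) and computes the kind of preliminary diagnostics the chapter describes. All parameter values and distributional choices here are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for the book's compact portfolio example: N obligors
# with random exposures (normalized to a total of 1,000) and random
# unconditional default probabilities.
rng = np.random.default_rng(seed=42)
N = 100
exposures = rng.gamma(shape=2.0, scale=1.0, size=N)
exposures *= 1000.0 / exposures.sum()            # normalize total exposure
default_probs = rng.uniform(0.001, 0.05, size=N)

# Preliminary diagnostics: expected loss and exposure concentration.
expected_loss = (exposures * default_probs).sum()
herfindahl = ((exposures / exposures.sum()) ** 2).sum()  # concentration index

print(f"Total exposure:   {exposures.sum():8.1f}")
print(f"Expected loss:    {expected_loss:8.2f}")
print(f"Herfindahl index: {herfindahl:8.4f}")
```

A Herfindahl index near its lower bound of 1/N indicates an evenly spread portfolio; values well above it signal the kind of concentration that later chapters revisit.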

Part I


Chapter 2. A Natural First Step

Starting from first principles, this chapter follows a chain of common-sense logic to construct an initial model. Beginning from a sequence of independent Bernoulli trials, a binomial default-loss distribution emerges. A critical assumption of this introductory approach, however, is statistical independence between the defaults of the various obligors in one’s portfolio. While of great mathematical convenience, this is an indefensible supposition. Not only is it economically questionable, but the imposition of even modest amounts of default dependence has dramatic effects on risk estimates; the independence assumption thus lacks conservatism. All realistic and reasonable credit-risk models, therefore, place the dependence structure at the centre of their methodology. That said, many lessons can still be drawn from this simple approach, which underpins many extant default-risk models. Maintaining the presumption of default independence, the following discussion thus walks through its implementation, both numerically and analytically, in the context of our concrete portfolio example. It also explores its asymptotic properties and provides an alternative entry point, which will aid in expanding this foundation in the following chapters.
David Jamieson Bolder
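A minimal sketch of the independent-default model described above. The homogeneous portfolio, the parameter values, and the 99.9% confidence level are illustrative assumptions; under independence the loss count is exactly Binomial(N, p), so the numerical and analytic implementations can be compared directly.

```python
import numpy as np
from scipy.stats import binom

# Independent-default model: a homogeneous portfolio of N obligors, each
# with unit exposure and common default probability p. With independent
# Bernoulli trials, the default count is Binomial(N, p).
N, p, M = 100, 0.02, 50_000
rng = np.random.default_rng(seed=1)

# Numerical implementation: simulate M portfolio outcomes obligor by obligor.
defaults = rng.uniform(size=(M, N)) < p
simulated_losses = defaults.sum(axis=1)

# Analytic implementation: the 99.9th percentile of the binomial loss count.
var_999_analytic = binom.ppf(0.999, N, p)
var_999_simulated = np.quantile(simulated_losses, 0.999)

print(f"Analytic 99.9% VaR : {var_999_analytic:.0f} defaults")
print(f"Simulated 99.9% VaR: {var_999_simulated:.0f} defaults")
```

The thinness of this tail, only a handful of defaults even at the 99.9% level, is precisely the weakness the next chapter addresses.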

Chapter 3. Mixture or Actuarial Models

The independent-default model is deeply flawed. Not only is it fair to argue that dependence is the single most important aspect of credit-risk modelling, but the tails of the associated loss distribution are overly thin and its asymptotic behaviour is simply too well behaved. This chapter offers a family of approaches, generally referred to as mixture or actuarial models, to address each of these shortcomings. The principal idea behind this new methodology is the randomization of the default probability. Practically, a common state variable is introduced, which induces default dependence among all obligors. Conditionally, default events remain independent, but unconditionally they are related through the realization of the systematic state variable. The structure of the default-loss distribution thus depends on the statistical properties of one’s state-variable choice mixed with the underlying binomial-default structure. A variety of possible choices are investigated, convergence properties are explored, and our portfolio example is examined from both analytic and numerical perspectives. Through the law of rare events, a separate class of Poisson-mixture models is explored, ultimately leading to the celebrated CreditRisk+ model, widely used in practice and first suggested by Wilde (1997, CreditRisk+: A credit risk management framework, Credit Suisse First Boston).
David Jamieson Bolder
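The randomization idea can be illustrated with a beta-binomial mixture, one of the simpler members of this model family. The Beta(0.5, 24.5) choice below is purely illustrative; it is picked so that the mean default probability matches a 2% independent-default benchmark, isolating the effect of the mixing on the tail.

```python
import numpy as np

# Beta-binomial mixture: the common default probability is randomized as
# p ~ Beta(a, b), inducing default dependence. Conditional on a draw of p,
# defaults remain independent Bernoulli trials. Here E[p] = a/(a+b) = 0.02,
# matching the independent-default benchmark.
N, M = 100, 200_000
a, b = 0.5, 24.5
rng = np.random.default_rng(seed=7)

p_draws = rng.beta(a, b, size=M)                     # systematic state variable
mixture_losses = rng.binomial(N, p_draws)            # conditionally binomial
independent_losses = rng.binomial(N, 0.02, size=M)   # benchmark

# Randomizing p thickens the tail: compare the 99.9% quantiles.
print("Mixture 99.9% VaR    :", np.quantile(mixture_losses, 0.999))
print("Independent 99.9% VaR:", np.quantile(independent_losses, 0.999))
```

Both models share the same expected loss; the entire difference between the two quantiles is attributable to the dependence induced by the common state variable.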

Chapter 4. Threshold Models

The binomial- and Poisson-mixture models offer a useful range of possible credit-risk implementations. Alternative approaches are worth seeking in their own right, but the mixture models are also silent on the ultimate reason for default; in other words, the actuarial or mixture methodology is reduced form. A competing structural modelling family, referred to as the set of threshold models, is offered in this chapter. This approach is, in fact, a clever combination of a pragmatic, latent-variable approach and the classic Merton (1974, Journal of Finance, 29, 449–470) model. Default occurs when a statistical proxy of the firm’s asset value falls below a predetermined threshold; the eponymous threshold is inferred from the obligor’s unconditional default probability. The basic structure and convergence properties of this technique are first examined in the Gaussian setting. This initial logic is then generalized, allowing for both thicker tails and tail dependence, through the introduction of the class of normal-variance-mixture models. Parameter-calibration techniques are also reviewed, and all models are exhaustively applied to our ongoing portfolio example.
David Jamieson Bolder
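A compact sketch of the one-factor Gaussian threshold model described above. The asset correlation, default probability, and portfolio size are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# One-factor Gaussian threshold model: obligor n defaults when a latent
# creditworthiness variable falls below a threshold inferred from its
# unconditional default probability p. rho is the common asset correlation.
N, M, p, rho = 100, 50_000, 0.02, 0.20
rng = np.random.default_rng(seed=3)

threshold = norm.ppf(p)                               # the eponymous threshold
G = rng.standard_normal((M, 1))                       # global systematic factor
eps = rng.standard_normal((M, N))                     # idiosyncratic shocks
latent = np.sqrt(rho) * G + np.sqrt(1.0 - rho) * eps  # latent asset-value proxy

losses = (latent < threshold).sum(axis=1)
print("Mean defaults:", losses.mean())                # close to N * p = 2
print("99.9% VaR    :", np.quantile(losses, 0.999))
```

Setting rho to zero recovers the independent-default binomial model, which makes this construction a natural superset of the approach in Chapter 2.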

Chapter 5. The Genesis of Credit-Risk Modelling

The path-breaking work of Merton (1974, Journal of Finance, 29, 449–470) not only addressed a number of important asset-pricing and corporate-finance questions, but was also the genesis of the field of credit-risk modelling. This chapter focuses exclusively on this approach. Not only would it be an injustice to ignore this still-pertinent model, but it also offers a range of useful insights into the class of threshold models. The Merton (1974) framework was conceived and developed in a continuous-time, mathematical-finance setting. To address this complicating factor, a significant amount of effort is allocated to the basic intuition, notation, and mathematical structure, leading to a motivating discussion of the notion of geometric Brownian motion. Armed with this detail, the chapter proceeds to investigate two possible implementations of Merton’s (1974) model, which we term the indirect and direct approaches. The indirect approach will turn out to be quite familiar, whereas the direct method requires a significant amount of heavy lifting for its implementation. As in previous chapters, parameter-calibration options are explored and both methods are applied to practical portfolio examples.
David Jamieson Bolder
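The core of the Merton (1974) machinery, a default probability derived from geometric Brownian motion for the firm’s assets, can be sketched briefly. All inputs below are illustrative, and the simulation serves only as a cross-check of the closed-form distance-to-default calculation.

```python
import numpy as np
from scipy.stats import norm

# Merton-style default probability: firm assets follow geometric Brownian
# motion and default occurs if A_T falls below the face value of debt K at
# horizon T. A0, K, mu, sigma, and T are illustrative inputs.
A0, K, mu, sigma, T = 100.0, 70.0, 0.05, 0.25, 1.0

# Under GBM, ln(A_T) ~ Normal(ln A0 + (mu - sigma^2/2) T, sigma^2 T), so
# the default probability is Phi(-d), with d the distance to default.
d = (np.log(A0 / K) + (mu - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
pd_analytic = norm.cdf(-d)

# Cross-check by simulating terminal asset values.
rng = np.random.default_rng(seed=11)
Z = rng.standard_normal(500_000)
A_T = A0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
pd_simulated = (A_T < K).mean()

print(f"Analytic default probability : {pd_analytic:.4f}")
print(f"Simulated default probability: {pd_simulated:.4f}")
```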

Part II


Chapter 6. A Regulatory Perspective

Quantitative analysts seek to construct useful models to assess the magnitude of credit risk in their portfolios and to inform associated management decisions. Regulators, in a slightly different context, perform a similar task. They face, however, a rather different set of constraints and objectives. Regulators, in fact, use standardized models to promote fairness and a level playing field, to monitor the solvency of individual entities, and to enhance overall economic stability. Much can be learned from the regulatory perspective; indeed, actions and trends in the regulatory field are an important diagnostic for internal modellers. To underscore this point, this chapter focuses principally on the widely used internal-ratings-based (IRB) approach proposed by the Basel Committee on Banking Supervision. Closer inspection reveals that this approach is founded on a portfolio-invariant version of the Gaussian threshold model. Portfolio invariance implies that the capital charge for each exposure is independent of the overall portfolio structure and depends only on that exposure’s own characteristics. Expedient rather than realistic, this choice reduces the computational and system burden on regulated organizations, but ignores concentration risk. After applying the IRB approach to our portfolio example, we then carefully explore the granularity adjustment of Gordy (2003, Journal of Financial Intermediation, 12, 199–232), which was proposed to address this shortcoming.
David Jamieson Bolder
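The portfolio-invariant Gaussian-threshold foundation of the IRB approach reduces, per unit of exposure, to a closed-form capital charge. The sketch below implements that core formula; the regulatory maturity adjustment and Basel’s prescribed correlation function of p are omitted for brevity, and the inputs are illustrative.

```python
import numpy as np
from scipy.stats import norm

def irb_capital(p, lgd, rho, alpha=0.999):
    """Core IRB unexpected-loss capital charge per unit of exposure.

    This is the portfolio-invariant Gaussian-threshold formula at
    confidence level alpha: the conditional default probability in the
    alpha-worst systematic state, less the unconditional p, scaled by LGD.
    """
    conditional_pd = norm.cdf(
        (norm.ppf(p) + np.sqrt(rho) * norm.ppf(alpha)) / np.sqrt(1.0 - rho)
    )
    return lgd * (conditional_pd - p)

# Illustrative inputs: 1% default probability, 45% loss-given-default, and
# a flat 20% asset correlation.
charge = irb_capital(p=0.01, lgd=0.45, rho=0.20)
print(f"Capital charge per unit exposure: {charge:.4f}")
```

Because the charge is a function of the exposure’s own (p, LGD, rho) alone, summing it across obligors ignores how concentrated the portfolio is, which is exactly the gap the granularity adjustment targets.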

Chapter 7. Risk Attribution

VaR and expected shortfall are extremely useful portfolio risk measures. They are, however, reasonably difficult to compute and often challenging to interpret. Effective use and communication of one’s risk measures nonetheless requires significant insight into their underlying structure. In particular, it is inordinately useful to understand how individual obligors contribute to one’s risk estimates. This is referred to as risk attribution and is, it must be admitted, a non-trivial undertaking. To address this important area and do justice to its complexity, we allocate the entire chapter to this task. We begin with a surprising relationship between risk attribution and conditional expectation, which subsequently motivates the development of a general-purpose numerical algorithm. To offer alternatives to this computationally intensive and often noisy approach, we examine two analytical techniques. The first, termed the normal approximation, provides insight into the underlying problem, but is unfortunately not a robust solution. The saddlepoint approximation, conversely, offers an accurate and fast alternative in the one-factor setting; it also enhances our technical understanding of the underlying risk measures. As always, the analysis of all approaches is accompanied by detailed derivations and concrete examples.
David Jamieson Bolder
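The conditional-expectation idea behind the numerical algorithm can be sketched as follows: obligor-level VaR contributions are estimated by averaging obligor losses over simulated paths whose total loss lands near the VaR estimate. The portfolio, window width, and all parameters are illustrative assumptions, and this naive estimator exhibits exactly the noisiness mentioned above.

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo risk attribution via conditional expectation: obligor n's VaR
# contribution is approximated by E[L_n | L = VaR_alpha], estimated by
# averaging obligor losses over paths whose total loss falls in a small
# window around the VaR estimate.
rng = np.random.default_rng(seed=5)
N, M, p, rho, alpha = 50, 100_000, 0.02, 0.20, 0.99
exposures = rng.uniform(0.5, 2.0, size=N)

# One-factor Gaussian threshold simulation of obligor-level losses.
threshold = norm.ppf(p)
G = rng.standard_normal((M, 1))
latent = np.sqrt(rho) * G + np.sqrt(1.0 - rho) * rng.standard_normal((M, N))
obligor_losses = (latent < threshold) * exposures
total_losses = obligor_losses.sum(axis=1)

var_alpha = np.quantile(total_losses, alpha)
window = np.abs(total_losses - var_alpha) < 0.5   # paths near the VaR level
contributions = obligor_losses[window].mean(axis=0)

# The contributions are additive: they sum (approximately) to the VaR.
print("VaR estimate        :", round(var_alpha, 2))
print("Sum of contributions:", round(contributions.sum(), 2))
```

Only a small fraction of the simulated paths fall inside the conditioning window, which is why this estimator is noisy and why the analytical alternatives are attractive.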

Chapter 8. Monte Carlo Methods

Stochastic-simulation, or Monte Carlo, methods are used extensively in the area of credit-risk modelling; indeed, this technique has been employed repeatedly in previous chapters. Care and caution are always advisable when employing a complex numerical technique. Prudence is particularly appropriate, in this context, because default is a rare event. Unlike the asset-pricing setting, where we typically estimate expectations in the central part of the distribution, credit risk operates in the tails. As a consequence, this chapter is dedicated to a closer examination of the intricacies of the Monte Carlo method. Working from first principles, the importance of convergence analysis and confidence intervals is highlighted. The principal shortcoming of this method, its inherent slowness, is also explained and demonstrated. This naturally leads to a discussion of the set of variance-reduction techniques employed to enhance the speed of these estimators. The chapter concludes with the investigation and implementation of the Glasserman and Li (2005, Management Science, 51(11), 1643–1656) importance-sampling method for the t-threshold model. This method, which employs the so-called Esscher transform, has close conceptual links to the saddlepoint technique introduced in the previous chapter.
David Jamieson Bolder
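A small illustration of the convergence analysis and confidence intervals discussed above, applied to a rare-event tail probability. The Binomial(100, 0.02) loss model and the chosen tail level are illustrative; the point is that the interval half-width shrinks only at the usual O(1/sqrt(M)) rate.

```python
import numpy as np

# Convergence diagnostics for a rare-event Monte Carlo estimator: estimate
# the tail probability P(L >= 6) for a Binomial(100, 0.02) loss count and
# report a 95% confidence interval at several sample sizes.
rng = np.random.default_rng(seed=13)
N, p, level = 100, 0.02, 6

results = {}
for M in (1_000, 10_000, 100_000):
    hits = rng.binomial(N, p, size=M) >= level
    estimate = hits.mean()
    half_width = 1.96 * hits.std(ddof=1) / np.sqrt(M)
    results[M] = (estimate, half_width)
    print(f"M={M:>7d}  P(L>={level}) ~ {estimate:.5f} +/- {half_width:.5f}")
```

A hundredfold increase in the sample size buys only a tenfold reduction in the half-width; for deeper tail levels, plain Monte Carlo becomes impractical, motivating the variance-reduction and importance-sampling techniques of this chapter.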

Part III


Chapter 9. Default Probabilities

Unconditional default probabilities are the coin of the realm for credit and quantitative analysts. These assessments of the relative creditworthiness of individual obligors are of enormous value. This information, however, does not come for free. There are, in fact, two broad approaches used in the determination of default probabilities: estimation and calibration. This chapter examines both. Estimation exploits credit-counterparty transition and default history, employing statistical techniques to approximate one’s desired values. The general framework is presented along with, quite importantly, two approaches used to assess the uncertainty in one’s estimates. The rarity of default and relatively modest historical data make interval estimation an essential practice; its efficacy is examined in a simulation study. Calibration, conversely, examines market instruments, such as bond obligations or credit-default swaps, and seeks to extract implied default probabilities from their observed prices. Using credit-default swaps, the theory and practice of this default-probability calibration method are carefully investigated. Although tempting, it is a mistake to treat these two approaches as equivalent: reconciling physical and market-implied default-probability values involves grappling with risk preferences. The pitfalls and limits of this thorny task are the final topic of this chapter.
David Jamieson Bolder
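A first-order version of the CDS calibration idea is the so-called credit triangle: under a flat hazard rate and constant recovery, the spread, hazard rate, and loss-given-default are linked by s ≈ λ(1 − R). This is a deliberate simplification of the bootstrapping procedure a full calibration requires; all inputs below are illustrative.

```python
import math

# The "credit triangle": a CDS spread s (in decimal) with recovery rate R
# implies a flat hazard rate lambda ~ s / (1 - R). Survival over t years is
# then exp(-lambda * t), giving implied default probabilities by horizon.
spread = 0.0150          # 150 basis points per annum
recovery = 0.40

hazard = spread / (1.0 - recovery)
pd_1y = 1.0 - math.exp(-hazard * 1.0)   # implied one-year default probability
pd_5y = 1.0 - math.exp(-hazard * 5.0)

print(f"Implied hazard rate: {hazard:.4f}")
print(f"Implied 1-year PD  : {pd_1y:.4f}")
print(f"Implied 5-year PD  : {pd_5y:.4f}")
```

These market-implied values are risk-neutral quantities; equating them with the physical estimates obtained from historical data is precisely the mistake the chapter warns against.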

Chapter 10. Default and Asset Correlation

If default dependence is the heart of credit-risk modelling, then an empirical estimate of its magnitude is of primordial importance. Unlike the estimation of default probabilities, addressed in the previous chapter, the characterization of default dependence is model dependent, rendering this task more difficult. Since different models incorporate the relationship between obligor defaults in alternative ways, dependence is governed by some subset of a model’s parameters. As usual, a variety of techniques are presented, examined, and concretely implemented. The first, based on the method of moments, applies quite generally and is conceptually similar to the calibration techniques employed in previous chapters. A second approach, using observed default outcomes, exploits conditional independence to build a likelihood function and applies in both mixture and threshold settings. This permits use of the maximum-likelihood framework for the production of both point and interval estimates. The final, somewhat complex and fragile, approach is applicable only to the family of threshold models. It enjoys the advantage of using all transition data, but simultaneously requires inference of the unobservable global state-variable values. The robustness of the final two techniques is assessed within separate simulation studies.
David Jamieson Bolder
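The method-of-moments idea can be sketched on synthetic data: simulate cohort default counts from a beta-binomial model and recover the default probability and default correlation from the first two moments of the annual default rates. The data-generating parameters are illustrative, and the moment condition used is the standard beta-binomial variance identity.

```python
import numpy as np

# Method-of-moments sketch: n_obligors per yearly cohort, with the common
# default probability drawn as p_t ~ Beta(a, b) each year. The variance of
# the annual default rate then identifies the default correlation rho_D via
# Var(rate) = p(1-p)/n * (1 + (n-1) * rho_D).
rng = np.random.default_rng(seed=17)
n_obligors, n_years = 1000, 40
a, b = 1.0, 49.0                        # true E[p] = a/(a+b) = 0.02

p_t = rng.beta(a, b, size=n_years)      # yearly systematic draws
defaults = rng.binomial(n_obligors, p_t)
rates = defaults / n_obligors

p_hat = rates.mean()
var_rate = rates.var(ddof=1)
rho_hat = (var_rate - p_hat * (1 - p_hat) / n_obligors) / (
    p_hat * (1 - p_hat) * (1 - 1 / n_obligors)
)

true_rho = 1.0 / (a + b + 1.0)          # beta-binomial default correlation
print(f"Estimated p    : {p_hat:.4f}  (true 0.0200)")
print(f"Estimated rho_D: {rho_hat:.4f}  (true {true_rho:.4f})")
```

With only a few decades of annual observations, the variance (and hence rho_D) is estimated quite imprecisely, which is exactly why the chapter emphasizes interval estimates and simulation-based robustness checks.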

