
1997 | Book

Fractals and Scaling in Finance

Discontinuity, Concentration, Risk. Selecta Volume E

Author: Benoit B. Mandelbrot

Publisher: Springer New York


About this book

In 1959-61, while the huge Saarinen-designed research laboratory at Yorktown Heights was being built, much of IBM's Research was housed nearby. My group occupied one of the many little houses on the Lamb Estate complex, which had been a sanatorium housing wealthy alcoholics. The picture below was taken about 1960. It shows, from right to left, T. C. Hu, now at the University of California, Santa Barbara. I am next, staring at a network I have just written on the blackboard. Then comes Paul Gilmore, late of the University of British Columbia, then (seated) Richard Levitan, now retired, and at the left is Benoit Mandelbrot. Even in a Lamb Estate populated exclusively with bright research-oriented people, Benoit always stood out. His thinking was always fresh, and I enjoyed talking with him about any subject, whether technical, political, or historical. He introduced me to the idea that distributions having infinite second moments could be more than a mathematical curiosity and a source of counter-examples. This was a foretaste of the line of thought that eventually led to fractals and to the notion that major pieces of the physical world could be, and in fact could only be, modeled by distributions and sets that had fractional dimensions. Usually these distributions and sets were known to mathematicians, as they were known to me, as curiosities and counter-intuitive examples used to show graduate students the need for rigor in their proofs.

Table of Contents

Frontmatter

Nonmathematical Presentations

E1. Introduction
Abstract
A pragmatic attitude towards the slippery notion of randomness is described in Section 1. A premise of my work is that graphics must not be spurned; Section 2 argues that it remains indispensable even beyond the first stage of investigation.
A more pointed premise of my work is that the rules of price variation are not the same on all markets, hence a single statistical model may not describe every market without unacceptable complication. I agree that “it is better to be approximately right than certifiably wrong,” and worked, in succession or in parallel, with several distinct fractal models of finance of increasing generality. Adapting the style of reference used throughout this book, those models will be denoted by the letter M followed by the year of original publication or announcement.
The core of this chapter is made of Sections 5 to 9. A first glance at scaling is taken in Section 5. Section 6 introduces the M 1963 model, which deals with tail-driven variability and the “Noah” effect and is based on L-stable processes. Section 7 introduces the M 1965 model, which deals with dependence-driven variability and the “Joseph” effect and is based on fractional Brownian motion. Old-timers recall these models as centered on “cotton prices and the River Nile.” Section 8 introduces the M 1972 combined Noah-Joseph model, which this book expands beyond the original fleeting reference in M 1972j{N14}. That model is based on fractional Brownian motion of multifractal trading time.
The M 1965 and M 1972 models have a seemingly peculiar but essential feature: they account for the “bunching” of large price changes indirectly, by invoking unfamiliar forms of serial dependence with an infinite memory. All the alternative models (as sketched and criticized in Section 4 of Chapter E2) follow what seems to be common sense, and seek to reach the same goal by familiar short memory fixes. Infinite memory and infinite variance generate many paradoxes that are discussed throughout this book, beginning with Section 8.3.
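The unfamiliar serial dependence with infinite memory can be made concrete with a small numerical sketch, not taken from the book. It samples fractional Gaussian noise (the increment process of fractional Brownian motion, as in the M 1965 model) by Cholesky factorization of its exact covariance; the Hurst exponent H = 0.8 and the path length are arbitrary illustrative choices.

```python
import numpy as np

H, n = 0.8, 400        # Hurst exponent H > 1/2 gives the "Joseph" effect

# Exact autocovariance of fractional Gaussian noise, the increment
# process of fractional Brownian motion.
k = np.arange(n)
rho = 0.5 * ((k + 1.0) ** (2 * H) - 2.0 * k ** (2 * H)
             + np.abs(k - 1.0) ** (2 * H))

# rho(k) ~ H(2H-1) k^(2H-2): positive and non-summable -- the
# "infinite memory" at issue.
cov = rho[np.abs(k[:, None] - k[None, :])]

# Sample one path by Cholesky factorization (adequate at this length).
rng = np.random.default_rng(6)
noise = np.linalg.cholesky(cov) @ rng.normal(size=n)
fbm = np.cumsum(noise)          # one fractional Brownian motion path
```

Even at lag 100 the correlation remains of order 10^-1, which is what "infinite memory" means in practice: no short-memory fix reproduces it.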
Here are the remaining topics of this chapter: Brownian motion (the 1900 model!) and martingales are discussed in Section 3. The inadequacies of Brownian motion are listed in Section 4. Section 9 gives fleeting indications on possible future directions of research. Finally, the notions of “creative model” and of “understanding without explanation” are the topics of Section 10.
Benoit B. Mandelbrot
E2. Discontinuity and scaling: their scope and likely limitations
Abstract
Chapter E1 stated emphatically my view that Gaussianity, random walks and martingales are attractive hypotheses, but disagree with the evidence concerning price variation. This chapter presents, in largely non-mathematical style, the processes I propose as replacements for Brownian motion. Their foundations include an evidence-based theme and a conceptual tool.
The theme is discussed in Section 1: it is discontinuity and the related notions of concentration and cyclicity. The tool, scaling, is discussed in Section 2. The possible limitations of scaling expressed by cutoffs and crossovers are discussed in Section 3. Section 4 comments on alternative approaches that contradict scaling, and instead replace Brownian motion by a “patchwork” of step-by-step “fixes.” Section 5 describes some paradoxes of scaling.
Stationarity and scaling express invariances with respect to translation in time and change in the unit of time. Diverse principles of invariance are essential to my work, in economics as well as in physics.
Benoit B. Mandelbrot
E3. New methods in statistical economics
Abstract
An interesting relationship between the methods in this chapter and renormalization as understood by physicists is described in the Annotation for the physicists that follows this text.
Benoit B. Mandelbrot
E4. Sources of inspiration and historical background
Abstract
This chapter is written in the style of an acknowledgement of broad intellectual debts. All my scientific work fell under the influence of the branch of physics called thermodynamics, and of other independent traditions ranging from deep to very shallow. I came to scaling and renormalization by cross-fertilizing the influences of probability theory (Lévy) and the social sciences (Pareto, Zipf and the economists’ idea of aggregation.)
At a point where my views on scaling were already formulated, I became aware that this notion is also fundamental in the study of turbulence (Richardson, Kolmogorov.) The theories of disorder and chaos, which also make extensive use of scaling and renormalization, arose from a different and independent tradition, and did not influence my work until quite late. Furthermore, diverse scaling rules were recorded in geology, but not appreciated, and the biologists’ allometry is yet another expression of scaling.
As my study of scaling became increasingly visual and grew into fractal geometry, it became widely agreed that fractal aspects are present in many fields; their importance is limited in some and fundamental in others — including finance.
Benoit B. Mandelbrot

Mathematical Presentations

E5. States of randomness from mild to wild, and concentration from the short to the long run
Abstract
An innovative and useful metaphor is put forward in this chapter, and described in several increasingly technical stages. Section 1 is informal, but Sections 4 and 5 are specialized beyond the concerns of most readers; in fact, the mathematical results they use are new.
At the core is a careful examination of three well-known distributions: the Gaussian, the lognormal and the scaling with infinite variance (α < 2). They differ deeply from one another from the viewpoint of the addition of independent addends in small or large numbers, and this chapter proposes to view them as “prototypes,” respectively, of three distinct “states of randomness:” mild, slow and wild. Slow randomness is a complex intermediate state between two states of greater simplicity. It too splits more finely, and there are probability distributions beyond the wild.
Given N addends, portioning concerns the relative contribution of the addends \({U_n}\) to their sum \( \sum\nolimits_1^N {{U_n}} \). Mildness and wildness are defined by criteria that distinguish between even portioning, meaning that the addends are roughly equal, ex-post, and concentrated portioning, meaning that one or a “few” of the addends predominate, ex-post. This issue is especially important in the case of dependent random variables (Chapter E6), but this chapter makes a start by tackling the simplest circumstances: it deals with independent and identically distributed addends.
Classical mathematical arguments concerning the long-run (N→∞) will suffice to distinguish between the “wild” state of randomness and the remaining states, jointly called “preGaussian.”
Novel mathematical arguments will be needed to tackle the short-run (N = 2 or “a few”). The resulting criterion will be used to distinguish between a “mild” or “tail-mixing” state of randomness, and the remaining states, jointly called “long-tailed” or “tail-preserving.” This discussion of long-tailedness may be of interest even to readers reluctant to follow me in describing the levels of randomness as “states.”
In short-run partition, short-run concentration will be defined in two ways. The criterion needed for “concentration in mode” will involve the convexity of log p(u), where p(u) is the probability density of the addends. The concept of “concentration in probability” is more meaningful but more delicate, and will involve a limit theorem of a new kind. Long-tailed distributions will be defined by the very important “tail-preservation criterion” under addition; it is written in shorthand as \({P_N} \sim NP\).
Randomness that is “preGaussian” but “tail-preserving” will be called “slow.” Its study depends heavily on middle-run arguments (N = “many”) that involve delicate transients.
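The tail-preservation shorthand \({P_N} \sim NP\) admits a quick numerical check. The sketch below is mine, not the book's: it uses Pareto addends with α = 1.5 as a wild prototype (exact tail \(u^{-\alpha}\)) and Gaussian addends as a mild one; all parameter values are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
N, trials = 4, 500_000

# Scaling addends (Pareto, alpha = 1.5): survival P(U > u) = u**(-alpha).
alpha, u = 1.5, 50.0
scaling_sums = (rng.pareto(alpha, size=(trials, N)) + 1.0).sum(axis=1)
p_sum = np.mean(scaling_sums > u)      # empirical P(sum of N addends > u)
p_one = u ** -alpha                    # exact tail of a single addend
ratio_scaling = p_sum / (N * p_one)    # tail preservation: ratio of order 1

# Gaussian addends: the sum's tail is NOT N times one addend's tail.
u_g = 4.0
gauss_sums = rng.normal(size=(trials, N)).sum(axis=1)
q_sum = np.mean(gauss_sums > u_g)
q_one = 0.5 * math.erfc(u_g / math.sqrt(2.0))
ratio_gauss = q_sum / (N * q_one)      # far above 1: the tails "mix" away
```

For the scaling addends, a large sum is overwhelmingly the work of one large addend, so the sum's tail is roughly N copies of one addend's tail; for the Gaussian, every addend contributes and the shorthand fails by orders of magnitude.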
Benoit B. Mandelbrot
E6. Self-similarity and panorama of self-affinity
Abstract
This long and essential chapter provides this book with two of its multiple alternative introductions. The mathematically ambitious reader who enters here may simply glance through Section 1, which distinguishes between self-similarity and self-affinity, and Section 2, which is addressed to the reader new to fractals and takes an easy and very brief look at self-similarity. Later sections approach subtle and diverse facets of self-affine scaling from two distinct directions, each with its own significant assets and liabilities.
Section 3 begins with WBM, the Wiener Brownian motion. In strict adherence to the scaling principle of economics described in Chapter E2, WBM is self-affine in a statistical sense. This is true with respect to an arbitrary reduction ratio r, and there is no underlying grid, hence WBM can be called grid-free. Repeating in more formal terms some material in Sections 6 to 8 of Chapter E1, Section 3 discusses generalizations that share the scaling properties of WBM, namely, Wiener or fractional Brownian motion of fractal or multifractal time.
Section 4 works within grids, hence limits the reduction ratio r to certain particular values. Being grid-bound weakens the scaling principle of economics, but this is the price to pay in exchange for a significant benefit, namely the availability of a class of self-affine non-random functions whose patterns of variability include and exceed those of Section 3. Yet, those functions fall within a unified overall master structure. They are simplified to such an extent that they can be called “toy models” or “cartoons.”
The cartoons are grid-bound because they are constructed by recursive multiplicative interpolation, proceeding in a self-affine grid that is the simplest case prescribed in advance. The value of grid-bound non-random fractality is that it proves for many purposes to be an excellent surrogate for randomness. The properties of the models in Section 3 can be reproduced with differences that may be viewed as elements of either indeterminacy or increased versatility. Both the close relations and the differences between the cartoons could have been baffling, but they are pinpointed immediately by the enveloping master structure. At some cost, that structure can be randomly shuffled or more deeply randomized. Its overall philosophy also suggests additional implementations, of which some are dead-ends, but others deserve to be explored.
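Recursive multiplicative interpolation can be sketched in a few lines. The generator's turning points below are an illustrative assumption: a three-piece "up-down-up" shape whose pieces each satisfy |Δx| = Δt^(1/2), giving a Brownian-like cartoon; tuning these points is what lets the book's cartoons range from mild to wild.

```python
import numpy as np

# A three-piece "up-down-up" generator: (t, x) turning points in the
# unit square. Each piece obeys |dx| = dt**0.5 (a Brownian-like choice;
# the values here are illustrative assumptions).
GEN = [(0.0, 0.0), (4 / 9, 2 / 3), (5 / 9, 1 / 3), (1.0, 1.0)]

def cartoon(levels):
    """Recursively replace each segment by an affine copy of the generator."""
    pts = [(0.0, 0.0), (1.0, 1.0)]
    for _ in range(levels):
        new = [pts[0]]
        for (t0, x0), (t1, x1) in zip(pts, pts[1:]):
            dt, dx = t1 - t0, x1 - x0
            # Squeeze the whole generator into this segment's grid cell.
            new.extend((t0 + gt * dt, x0 + gx * dx) for gt, gx in GEN[1:])
        pts = new
    return np.array(pts)

curve = cartoon(5)   # 3**5 segments: a grid-bound, self-affine "price" record
```

The construction is grid-bound in exactly the chapter's sense: the reduction ratios are fixed by the generator's cells, not arbitrary as for WBM.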
Wiener Brownian motion and its cartoons belong to the mild state of variability or noisiness, while the variability or noisiness of the other functions of Section 3 and cartoons of Section 4 is wild. The notions of states of mild and wild randomness, as put forward in Chapter E5, are generalized in Section 5 from independent random variables to dependent random processes and non-random cartoons. Section 5.4 ends by describing an ominous scenario of extraordinary wildness.
Being constrained to scaling functions, this chapter leaves no room for slow variability.
Benoit B. Mandelbrot
E7. Rank-size plots, Zipf’s law, and scaling
Abstract
Rank-size plots, also called Zipf plots, have a role to play in representing statistical data. The method is somewhat peculiar, but throws light on one aspect of the notion of concentration. This chapter’s first goals are to define those plots and show that they are of two kinds. Some are simply an analytic restatement of standard tail distributions, but other cases stand by themselves. For example, in the context of word frequencies in natural discourse, rank-size plots provide the most natural and most direct way of expressing scaling.
Of greatest interest are the rank-size plots that are rectilinear in log-log coordinates. In most cases, this rectilinearity is shown to simply rephrase an underlying scaling distribution, by exchanging its coordinate axes. This rephrasing would hardly seem to deserve attention, but continually proves its attractiveness. Unfortunately, it is all too often misinterpreted and viewed as significant beyond the scaling distribution drawn in the usual axes. These are negative but strong reasons why rank-size plots deserve to be discussed in some detail. They throw fresh light on the meaning and the pitfalls of infinite expectation, and occasionally help understand upper and lower cutoffs to scaling.
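The exchange of coordinate axes can be checked numerically. In this sketch of mine (α = 1.2 and the sample size are arbitrary choices), the size of rank r satisfies r/N ≈ P(U > u) = u^(-α), so log size is linear in log rank with slope −1/α: the rank-size line merely rephrases the scaling tail.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.2                               # illustrative tail exponent
sample = rng.pareto(alpha, 10_000) + 1.0  # scaling variable, P(U > u) = u**-alpha

sizes = np.sort(sample)[::-1]             # largest first: size of rank r
ranks = np.arange(1, sizes.size + 1)

# Rank r of size u satisfies r/N ~ P(U > u) = u**-alpha, hence
# log(size) ~ const - (1/alpha) * log(rank): rectilinear in log-log.
slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), 1)
```

The fitted slope sits near −1/α ≈ −0.83, confirming that nothing beyond the underlying scaling distribution is being revealed.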
Benoit B. Mandelbrot
E8. Proportional growth with or without diffusion, and other explanations of scaling
Abstract
However useful and “creative” scaling may be, it is not accepted as an irreducible scientific principle. Several isolated instances of scaling are both unquestioned and easy to reduce to more fundamental principles, as will be seen in Section 1. There also exists a broad class of would-be universal explanations, many of them variants of proportional growth of U, with or without diffusion of log U. This chapter shows why, countering widely accepted opinion, I view those explanations as unconvincing and unacceptable.
The models to be surveyed and criticized in this expository text were scattered in esoteric and repetitive references. Those I quote are the earliest I know. Many were rephrased in terms of the distribution of the sizes of firms. They are easily translated into terms of other scaling random variables that are positive. The two-tailed scaling variables that represent change of speculative prices (M 1963b{E14}) pose a different problem, since the logarithm of a negative change has no meaning.
Benoit B. Mandelbrot
E9. A case against the lognormal distribution
Abstract
The lognormal distribution is, in some respects, of great simplicity. This is one reason why, next to the Gaussian, it is widely viewed as the practical statistician’s best friend. From the viewpoint described in Chapter E5, it is short-run concentrated and long-run even. This makes it the prototype of the state of slow randomness, the difficult middle ground between the wild and mild states of randomness. Metaphorically, every lognormal resembles a liquid, and a very skew lognormal resembles a glass, which physicists view as a very viscous liquid.
A hard look at the lognormal reveals a new phenomenon of delocalized moments. This feature implies several drawbacks, each of which suffices to make the lognormal dangerous to use in scientific research. Population moments depend overly on exact lognormality. Small sample sequential moments oscillate to excess as the sample size increases. A non-negligible concentration rate can only represent a transient that vanishes for large samples.
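The misbehavior of sequential moments can be seen in a small simulation of mine (σ = 3 is chosen to make the lognormal very skew; the figures are purely illustrative): the population second moment exists, yet even large samples typically sit far below it, because the moment is carried by rare, enormous draws.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 3.0                         # a very skew lognormal (illustrative)
x = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)

# Sequential sample second moments at growing sample sizes.
checkpoints = [100, 1_000, 10_000, 100_000]
seq_m2 = np.array([np.mean(x[:n] ** 2) for n in checkpoints])

pop_m2 = np.exp(2.0 * sigma ** 2)   # population E[U^2] = exp(2 sigma^2)

# Typical samples fall far short of the population moment: the moment
# is "delocalized" into rare, enormous addends.
shortfall = seq_m2 / pop_m2
```

Between checkpoints the sequential moment jumps whenever a new extreme draw arrives, which is exactly the oscillation the chapter warns about.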
Benoit B. Mandelbrot

Personal Incomes and Firms

E10. L-stable model for the distribution of income
Abstract
This paper introduces a new model for the distribution of income, and hopes to draw attention to the great potential of the “L-stable” family of nonGaussian probability distributions. This new tool may be as important as the specific application to be discussed. In other words, the same approach translates immediately to analogous quantities for which it may be more reasonable, or give a better fit. One might even paraphrase Einstein’s cagey comment about Brownian motion: it is possible that the properties studied in that paper are identical to those of income; however, the information available regarding incomes is so lacking in precision that one cannot really form a judgement on the matter.
Benoit B. Mandelbrot
E11. L-stability and multiplicative variation of income
Abstract
This paper describes a theory of the stationary stochastic variation of income based on a new family of nonGaussian random functions, U(t). This approach is intimately connected with random walks of log U(t), but no use is made of the “principle of proportionate effect.” Instead, the model is based upon the fact that there exist limits for sums of random functions, in which the effect of chance in time is multiplicative. This feature provides a new type of motivation for the widespread, convenient, and frequently fruitful use of the logarithm of income, considered as a “moral wealth.”
I believe that these new stochastic processes will play a role in linear economics, for example in certain problems of aggregation. The reader will find that the results are easily rephrased in terms of diverse economic quantities other than income. As a result, the tools to be introduced may be as important as the immediate results to be achieved. In particular, the distribution and variation of city sizes raises very similar problems.
Benoit B. Mandelbrot
E12. Scaling distributions and income maximization
Abstract
Judged by the illustrations, this chapter is an exercise in something close to linear programming. The formulas show that this exercise is carried out in a random context ruled by scaling distributions, including the Pareto law for the distribution of personal income. To help both concepts become “broken in” and better understood, I investigated this and other standard topics afresh with the Gaussian replaced by the scaling distribution. For many issues, major “qualitative” changes follow. As is often the case, the root reason lies in a sharp contrast in the convexity of probability isolines, between the circles of the form \({x^2} + {y^2} = {\rm{constant}}\), which characterize independent Gaussian coordinates, and the hyperbolas of the form \(xy = {\rm{constant}}\), which characterize independent scaling coordinates. The resulting changes concern an issue central to this book and especially in Chapter E5: evenness of distribution associated with the Gaussian, versus concentration associated with the scaling.
Benoit B. Mandelbrot
E13. Industrial concentration and scaling
Abstract
Since the theme of concentration is mentioned in the title of this book, and this book boasts several alternative entrances, it is appropriate that yet another introductory chapter, even a very brief one, should take up the phenomenon of industrial concentration, and the meaning — or absence of meaning — of the notion of average firm size in an industry.
Benoit B. Mandelbrot

The 1963 Model of Price Change

E14. The variation of certain speculative prices
Abstract
The classic model of the temporal variation of speculative prices (Bachelier 1900) assumes that successive changes of a price Z(t) are independent Gaussian random variables. But, even if Z(t) is replaced by log Z(t), this model is contradicted by facts in four ways, at least:

(1) Large price changes are much more frequent than predicted by the Gaussian; this reflects the “excessively peaked” (“leptokurtic”) character of price relatives, which has been well-established since at least 1915.

(2) Large practically instantaneous price changes occur often, contrary to prediction, and it seems that they must be explained by causal rather than stochastic models.

(3) Successive price changes do not “look” independent, but rather exhibit a large number of recognizable patterns, which are, of course, the basis of the technical analysis of stocks.

(4) Price records do not look stationary, and statistical expressions such as the sample variance take very different values at different times; this nonstationarity seems to put a precise statistical model of price change out of the question.
Benoit B. Mandelbrot
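The leptokurtosis of point (1) can be illustrated numerically. The sketch below is not from the chapter: it draws symmetric L-stable variables with α = 1.7 by the Chambers-Mallows-Stuck method (a later algorithm, used here purely as a sampling convenience) and counts large deviations against a Gaussian benchmark; α and the threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha = 1_000_000, 1.7   # alpha < 2: L-stable with infinite variance

# Chambers-Mallows-Stuck sampler for a symmetric L-stable variable.
U = rng.uniform(-np.pi / 2, np.pi / 2, n)
W = rng.exponential(1.0, n)
stable = (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
          * (np.cos(U - alpha * U) / W) ** ((1 - alpha) / alpha))

gauss = rng.normal(0.0, 1.0, n)

# Frequency of changes larger than 5 scale units.
big_stable = np.mean(np.abs(stable) > 5.0)
big_gauss = np.mean(np.abs(gauss) > 5.0)
```

In a Gaussian million, 5-unit moves essentially never occur; in the L-stable million they number in the tens of thousands, which is the orders-of-magnitude discrepancy behind point (1).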
E15. The variation of the prices of cotton, wheat, and railroad stocks, and of some financial rates
Abstract
M 1963b{E14} argues that the description of price variation requires probability models less special than the widely used Brownian, because the price relatives of certain price series have a variance so large that it may in practice be assumed infinite. This theme is developed further in the present chapter, which covers the following topics.
Benoit B. Mandelbrot
E16. Mandelbrot on price variation
Abstract
There has been a tradition among economists which holds that prices in speculative markets, such as grain and securities markets, behave very much like random walks. References include Bachelier 1900, Kendall 1953, Osborne 1959, Roberts 1959, Cootner 1962, and Moore 1962. The random walk theory is based on two assumptions: (1) price changes are independent random variables, and (2) the changes conform to some probability distribution. This paper will be concerned with the nature of the distribution of price changes rather than with the assumption of independence. Attention will be focused on an important new hypothesis concerning the form of the distribution which has recently been advanced by Benoit Mandelbrot. We shall see later that if Mandelbrot’s hypothesis is upheld, it will radically revise our thinking concerning both the nature of speculative markets and the proper statistical tools to be used when dealing with speculative prices.
Benoit B. Mandelbrot
E17. Comments by P. H. Cootner, E. Parzen, and W. S. Morris (1960s) and responses
Abstract
While teaching economics at Harvard, I spent part of the 1962 Christmas vacation in Pittsburgh PA, where the Econometric Society held its Annual Meeting. M 1962i, which was to provide the substance of M 1963b{E14} and 1967b{E15}, was honored by being made the sole topic of a session. Instead of three talks, each followed by a brief discussion, this session included my talk followed by several discussions. The two that were written down were thoughtful, but demanded a response. Sections 1 and 2 reproduce some telling points of those comments, in italics and between quote marks, and followed by my responses. Two other discussants, Lawrence Fisher and George Hadley of the University of Chicago, left no record. The source of the quite separate contribution by W. S. Morris will be mentioned in Section 3.
Benoit B. Mandelbrot
E18. Computation of the L-stable distributions
Abstract
The first tables, M & Zarnfaller 1959, were quite difficult to compute, and the resulting figures were drawn by hand. Today, several dependable programs are available. This brief and casual chapter is neither a systematic discussion nor a full bibliography, only a collection of hopefully useful odds and ends that are close at hand.
Benoit B. Mandelbrot

Beyond the M 1963 Model

E19. Nonlinear forecasts, rational bubbles, and martingales
Abstract
Two terms are found in the title of this reprint, but not of the originals, namely, “nonlinear” and “rational bubble.” They express the two main points of this paper in words that were not available to me in 1966.
Benoit B. Mandelbrot
E20. Limitations of efficiency and of martingale models
Abstract
In the moving average process
$$C(t) = \sum\limits_{s = - \infty }^t {L(t - s)N(s)} $$
the quantities N(s), called “innovations,” are random variables with finite variance and are orthogonal (uncorrelated) but are not necessarily Gaussian. Knowing the value of C(s) for s ≤ t, that is, knowing the present and past “innovations” N(s), the optimal least-squares estimator of C(t + n) is the conditional expected value \({E_c}C(t + n)\). In terms of the N(s),
$${E_c}C(t + n) = \sum\limits_{s = - \infty }^t {L(t + n - s)N(s)} $$
which is a linear function of the N(s) for s ≤ t. This paper shows that the large-n behavior of \({E_c}C(t + n)\) depends drastically on the value of \(\Lambda = \sum\nolimits_{m = 0}^\infty {L(m)} \).
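The forecast formula is easy to evaluate numerically. In this sketch of mine (both kernels are illustrative assumptions, not the paper's), a geometric kernel with finite Λ makes the long-horizon forecast vanish, while a slowly decaying kernel with divergent Λ leaves the forecast pinned to the remote past even at horizon 100.

```python
import numpy as np

rng = np.random.default_rng(4)
M = 200
past = rng.normal(size=M)       # innovations N(t-M+1), ..., N(t)

def forecast(L, innovations, n):
    """E_c C(t+n) = sum over m >= n of L(m) * N(t+n-m), truncated to the data."""
    return sum(L[m] * innovations[-(m - n + 1)]
               for m in range(n, len(L))
               if m - n + 1 <= len(innovations))

# Slowly decaying kernel: Lambda = sum of L(m) diverges (long memory).
L_long = 1.0 / np.arange(1, M + 1) ** 0.6
# Geometric kernel: Lambda finite (short memory).
L_short = 0.5 ** np.arange(M)

f_long = [forecast(L_long, past, n) for n in (1, 10, 100)]
f_short = [forecast(L_short, past, n) for n in (1, 10, 100)]
```

At n = 100 the short-memory forecast is numerically zero, while the long-memory forecast remains of order one: the drastic dependence on the kernel's tail that the paper analyzes.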
Benoit B. Mandelbrot
E21. Self-affine variation in fractal time
Abstract
Since the number of transactions in any time period is random, different distributions are needed to represent price changes over fixed numbers of transactions and fixed time periods. It appears that the former follow a Gaussian distribution, while the latter follow an L-stable distribution. M & Taylor 1967 shows that those two distributions are by no means contradictory: a scenario based on a fractal subordination time is proposed by Taylor (Section 1), then shown by Mandelbrot (Section 2) to be intimately related to an earlier discussion of the specialists’ function of “ensuring the continuity of the market.” Note that this scenario is only compatible with the M 1963 model restricted to a symmetric distribution of price changes. Section 3 — reproducing M 1973c — elaborates by responding to Clark 1973.
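The subordination scenario admits a quick sketch. For simplicity the trading-time increments below are Pareto rather than the L-stable subordinator of M & Taylor 1967, so all figures are purely illustrative; the qualitative point — Gaussian per unit of trading time, long-tailed per unit of clock time — survives.

```python
import numpy as np

rng = np.random.default_rng(5)
days = 100_000

# Trading time elapsed in each clock day: a heavy-tailed positive
# increment (Pareto here for simplicity; M & Taylor 1967 use an
# L-stable subordinator).
trading_time = rng.pareto(0.85, days) + 1.0

# Conditionally on its trading time, a day's price change is Gaussian
# with variance proportional to that trading time.
daily_change = rng.normal(0.0, np.sqrt(trading_time))

# Per unit of *clock* time the changes are long-tailed: count moves
# beyond 10 times the typical (median) absolute change.
scale = np.median(np.abs(daily_change))
frac_big = np.mean(np.abs(daily_change) > 10 * scale)

# A pure Gaussian benchmark of the same size has essentially none.
bench = rng.normal(0.0, 1.0, days)
frac_gauss = np.mean(np.abs(bench) > 10 * np.median(np.abs(bench)))
```

Randomizing the variance through trading time is what turns per-transaction Gaussianity into long-tailed clock-time changes, which is the heart of the Taylor-Mandelbrot reconciliation.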
Howard M. Taylor, Peter K. Clark
Backmatter
Metadata
Title
Fractals and Scaling in Finance
Author
Benoit B. Mandelbrot
Copyright Year
1997
Publisher
Springer New York
Electronic ISBN
978-1-4757-2763-0
Print ISBN
978-1-4419-3119-1
DOI
https://doi.org/10.1007/978-1-4757-2763-0