2011 | OriginalPaper | Chapter
High-Frequency Simulations of an Order Book: a Two-scale Approach
Authors : Charles-Albert Lehalle, Olivier Guéant, Julien Razafinimanana
Published in: Econophysics of Order-driven Markets
Publisher: Springer Milan
Models of market microstructure at the order book scale can be split into two families:
First, the agent-based models [5], aiming at simulating a large number of agents, each with its own utility function or feedback rule. The philosophy of this kind of modelling is similar to Minsky's paradigm in artificial intelligence in the eighties: build each agent so that if you stealthily replace, one by one, each real person interacting in the market with such a virtual ersatz, you finally obtain a full synthetic replica of a real market. The actual limits faced by this research programme are: first, the difficulty of rationalising and quantifying the utility functions of real persons, and second, the computational capabilities of today's computers. Last but not least, the lack of analytical results from this fully non-parametric approach is also a problem for many applications. It is, for instance, usually difficult to know how to choose the parameters of such models to reach a given intra-day volatility, jump sizes, average bid-ask spread, etc.
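The chapter does not spell out an explicit agent-based algorithm; the following is a hypothetical sketch of the idea, in which each agent applies a simple feedback rule (here, fundamentalists versus trend-followers) and the aggregate order imbalance moves the price through a linear impact term. All parameter names and values are illustrative assumptions, not the authors' model.

```python
import random

random.seed(7)

class Agent:
    """A toy trader with a simple feedback rule (hypothetical example)."""
    def __init__(self, kind):
        self.kind = kind  # "fundamentalist" or "chartist"

    def order(self, price, fundamental, last_return):
        if self.kind == "fundamentalist":
            # Buy below the perceived fundamental value, sell above it.
            return 1 if price < fundamental else -1
        # Chartists follow the most recent price move.
        return 1 if last_return > 0 else -1

def simulate(n_agents=100, n_steps=50, fundamental=100.0, impact=0.01):
    agents = [Agent("fundamentalist" if i % 2 == 0 else "chartist")
              for i in range(n_agents)]
    price, last_return = fundamental, 0.0
    path = [price]
    for _ in range(n_steps):
        # Net demand of all agents at the current price.
        imbalance = sum(a.order(price, fundamental, last_return)
                        for a in agents)
        # Linear price impact plus small exogenous noise.
        new_price = price + impact * imbalance + random.gauss(0, 0.1)
        last_return = new_price - price
        price = new_price
        path.append(price)
    return path

path = simulate()
```

Even this minimal version illustrates the calibration difficulty mentioned above: the mapping from the parameters (`impact`, the mix of agent types) to macroscopic quantities such as intra-day volatility is only available through simulation.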
Second, the “zero intelligence” models [9], aiming at reproducing stylised facts (the Epps effect on correlations, the signature plot of volatility, order book shapes, etc.) using random number generators for the times between orders, order sizes, prices, etc. This approach is more oriented towards “knowledge extraction” from existing recordings than the agent-based one. Its focus on stylised facts, and on the capability to emulate them using generators as simple as possible, is close to the usual definition of “knowledge” (following, for instance, Kolmogorov or Shannon in terms of complexity reduction). It succeeds in identifying features such as short-term memory, the Epps effect on correlations, signature plots for high-frequency volatility estimates, the dominance of power laws [25], and the general profile of market impact [11], among others, which are now part of the usual benchmarks used to validate any microscopic market model. The limits of this approach are: first, the usual stationarity assumptions that are made, and second, the difficulty of linking microscopic characteristics with macroscopic ones, for instance linking characteristics of the underlying probability distributions to market volatility (even if recent advances have been made in this direction using Hawkes models [2] or usual distributions [7]). The search for such links is motivated by the fact that, as these models are probability-based, their diffusive limits (or equivalents) should behave similarly to usual quantitative models on a large scale (for instance Lévy processes [24]).
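As an illustration of the zero-intelligence principle, the sketch below draws order sides, sizes and prices at random: limit orders rest at a uniform offset from the mid price, and occasional market orders execute against the best opposite quote. This is a hypothetical toy implementation under assumed parameters, not the specific model of [9] or of this chapter.

```python
import random

random.seed(0)

def zi_simulation(n_orders=2000, p_market=0.2, band=10):
    """Minimal zero-intelligence order book: all order attributes are
    drawn from fixed random generators, with no agent optimisation."""
    bids, asks = {}, {}          # price -> resting size
    mid, trades = 100, []
    for _ in range(n_orders):
        side = random.choice(("buy", "sell"))
        size = random.randint(1, 5)
        book, opposite = (bids, asks) if side == "buy" else (asks, bids)
        if random.random() < p_market and opposite:
            # Market order: execute against the best opposite quote.
            best = min(opposite) if side == "buy" else max(opposite)
            filled = min(size, opposite[best])
            opposite[best] -= filled
            if opposite[best] == 0:
                del opposite[best]
            trades.append((best, filled))
        else:
            # Limit order at a random offset from the mid price;
            # clamp so the book never becomes crossed.
            offset = random.randint(1, band)
            price = mid - offset if side == "buy" else mid + offset
            if side == "buy" and asks:
                price = min(price, min(asks) - 1)
            elif side == "sell" and bids:
                price = max(price, max(bids) + 1)
            book[price] = book.get(price, 0) + size
        if bids and asks:
            mid = (max(bids) + min(asks)) // 2
    return bids, asks, trades

bids, asks, trades = zi_simulation()
```

From such a simulated history one can then compute the stylised facts listed above (signature plot, book shape, impact profile) and compare them to empirical recordings, which is precisely the validation loop this family of models supports.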