2009 | Original Paper | Book Chapter
Discrete State Space Methods
Published in: Dynamic General Equilibrium Modeling
Publisher: Springer Berlin Heidelberg
In this chapter we explore methods that replace the original model by a model whose state space consists of a finite number of discrete points. In this case, the value function is a finite-dimensional object. For instance, if the state space is one-dimensional with elements X = {x_1, x_2, …, x_n}, the value function is just a vector of n elements, where each element gives the value attained by the optimal policy if the initial state of the system is x_j ∈ X. We can start with an arbitrary vector of values representing our initial guess of the value function and then obtain a new vector by solving the maximization problem on the right-hand side of the Bellman equation. Iterating on this step converges to the true value function of the discrete-valued problem. Though simple in principle, this approach has a serious drawback: it suffers from the curse of dimensionality. On a one-dimensional state space, the maximization step is simple; we merely search for the maximal element among n candidates. Yet the value function of an m-dimensional problem with n points in each dimension is an array of n^m elements, and the computation time needed to search this array may be prohibitively high.
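The iteration described above can be sketched in a few lines. The grid, the discount factor, and the one-period return function below are illustrative assumptions, not taken from the chapter: R[i, j] is a hypothetical payoff from moving the state from x_i to x_j, and the loop applies the maximization step of the Bellman equation until successive value vectors agree to a small tolerance.

```python
import numpy as np

beta = 0.95                       # discount factor (assumed for illustration)
grid = np.linspace(0.1, 1.0, 50)  # discrete state space X = {x_1, ..., x_n}
n = grid.size

# Hypothetical one-period return R[i, j] for choosing next state x_j from
# current state x_i; infeasible transitions could instead be set to -inf.
R = np.sqrt(np.maximum(grid[:, None] - 0.5 * grid[None, :], 0.0))

V = np.zeros(n)                   # arbitrary initial guess of the value function
for _ in range(1000):
    # Maximization step on the right-hand side of the Bellman equation:
    # V_new(i) = max_j [ R(i, j) + beta * V(j) ]
    V_new = (R + beta * V[None, :]).max(axis=1)
    if np.abs(V_new - V).max() < 1e-8:
        break
    V = V_new

# Optimal policy: index of the chosen next state for each current state.
policy = (R + beta * V[None, :]).argmax(axis=1)
```

Because the Bellman operator is a contraction with modulus beta, the loop converges from any initial guess; note that V is stored as a plain vector of n elements, exactly the finite-dimensional object described in the text, and that an m-dimensional problem would require an array of n^m such entries.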