
About this book

Probabilistic graphical models and decision graphs are powerful modeling tools for reasoning and decision making under uncertainty. As modeling languages they allow a natural specification of problem domains with inherent uncertainty, and from a computational perspective they support efficient algorithms for automatic construction and query answering. This includes belief updating, finding the most probable explanation for the observed evidence, detecting conflicts in the evidence entered into the network, determining optimal strategies, analyzing for relevance, and performing sensitivity analysis.

The book introduces probabilistic graphical models and decision graphs, including Bayesian networks and influence diagrams. The reader is introduced to the two types of frameworks through examples and exercises, which also instruct the reader on how to build these models.

The book is a new edition of Bayesian Networks and Decision Graphs by Finn V. Jensen. The new edition is structured into two parts. The first part focuses on probabilistic graphical models. Compared with the previous edition, it also includes a thorough description of recent extensions to the Bayesian network modeling language, advances in exact and approximate belief updating algorithms, and methods for learning both the structure and the parameters of a Bayesian network. The second part deals with decision graphs, and in addition to the frameworks described in the previous edition, it also introduces Markov decision processes and partially ordered decision problems. The authors also

provide a well-founded practical introduction to Bayesian networks, object-oriented Bayesian networks, decision trees, influence diagrams (and variants thereof), and Markov decision processes.

give practical advice on the construction of Bayesian networks, decision trees, and influence diagrams from domain knowledge.


give several examples and exercises that exploit computer systems for dealing with Bayesian networks and decision graphs.

present a thorough introduction to state-of-the-art solution and analysis algorithms.

The book is intended as a textbook, but it can also be used for self-study and as a reference book.

Table of contents

Frontmatter

Prerequisites on Probability Theory

1. Prerequisites on Probability Theory

Abstract
In this chapter we review some standard results and definitions from probability theory. The reader is assumed to have had some contact with probability theory before, and the purpose of this section is simply to brush up on some of the basic concepts and to introduce some of the notation used in the later chapters. Sections 1.1–1.3 are prerequisites for Section 2.3 onward, Section 1.4 is a prerequisite for Chapter 4, and Section 1.5 is a prerequisite for Chapter 6 and Chapter 7.
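As a brief reminder of the kind of standard results reviewed here (stated in the book's inline notation; this is orientation, not a substitute for the chapter), the definition of conditional probability and Bayes' rule:

\[
P(A \mid B) = \frac{P(A, B)}{P(B)}, \qquad
P(B \mid A) = \frac{P(A \mid B)\,P(B)}{P(A)}.
\]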

Probabilistic Graphical Models

Frontmatter

2. Causal and Bayesian Networks

Abstract
In this chapter we introduce causal networks, which are the basic graphical feature for (almost) everything in this book. We give rules for reasoning about relevance in causal networks: is knowledge of A relevant for my belief about B? These sections deal with reasoning under uncertainty in general. Next, Bayesian networks are defined as causal networks with the strength of the causal links represented as conditional probabilities. Finally, the chain rule for Bayesian networks is presented. The chain rule is the property that makes Bayesian networks a very powerful tool for representing domains with inherent uncertainty. The sections on Bayesian networks assume knowledge of probability calculus as laid out in Sections 1.1–1.4.
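The chain rule mentioned here is a standard identity: for a Bayesian network over a universe \( \mathcal{U} \), with \( \mathrm{pa}(A) \) denoting the parents of a variable A,

\[
P(\mathcal{U}) = \prod_{A \in \mathcal{U}} P\bigl(A \mid \mathrm{pa}(A)\bigr),
\]

so the full joint distribution factorizes into the conditional probability tables attached to the network.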

3. Building Models

Abstract
The framework of Bayesian networks is a very efficient language for building models of domains with inherent uncertainty. However, as can be seen from the calculations in Section 2.6, it is a tedious job to perform evidence transmission even for very simple Bayesian networks. Fortunately, software tools that can do the calculation job for us are available. In the rest of this book, we assume that the reader has access to such a system (some URLs are given in the Preface). Therefore, we can start by concentrating on how to use Bayesian networks in model building and defer a presentation of methods for probability updating to Chapter 4.

4. Belief Updating in Bayesian Networks

Abstract
In this chapter, we present algorithms for probability updating. An efficient updating algorithm is fundamental to the applicability of Bayesian networks. As shown in Chapter 2, access to \( P(\mathcal{U}, e) \) is sufficient for the calculations. However, because the joint probability table grows exponentially with the number of variables, we look for more efficient methods. Unfortunately, no method guarantees a tractable calculation task. Still, the method presented here represents a substantial improvement, and it is among the most efficient methods known.
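To see why the joint-table approach breaks down, here is a minimal brute-force sketch (a toy illustration with made-up numbers, not the algorithm presented in this chapter): it computes \( P(A \mid e) \) by summing the full joint table, which is exponential in the number of variables.

```python
# A minimal sketch (not the book's algorithm): brute-force belief updating
# on the full joint table. Illustrates the exponential blowup that motivates
# the more efficient methods of this chapter.
import itertools

def joint(assignment):
    # Hypothetical toy network A -> B: P(A, B) = P(A) * P(B | A).
    pA = {0: 0.6, 1: 0.4}
    pB_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}
    a, b = assignment
    return pA[a] * pB_given_A[a][b]

def posterior(query_var, evidence):
    # P(query_var | e) by summing the joint over all full assignments
    # consistent with the evidence -- O(2^n) for n binary variables.
    scores = {0: 0.0, 1: 0.0}
    for assignment in itertools.product([0, 1], repeat=2):
        if all(assignment[i] == v for i, v in evidence.items()):
            scores[assignment[query_var]] += joint(assignment)
    z = sum(scores.values())  # normalize by P(e)
    return {state: s / z for state, s in scores.items()}

print(posterior(0, {1: 1}))  # P(A | B = 1)
```

For n binary variables the loop visits \( 2^n \) assignments, which is exactly the blowup the chapter's methods are designed to avoid.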

5. Analysis Tools for Bayesian Networks

Abstract
The main reason for building a Bayesian network is to estimate the state of certain variables given some evidence. In Chapter 4, we gave methods that make it easy to access \( P(A \mid e) \) for any variable A. However, this may not be sufficient: it may be crucial to establish the joint probability for a set of variables. Section 5.2 gives a general method for calculating \( P(\mathcal{V} \mid e) \) for any set \( \mathcal{V} \) of variables.

6. Parameter Estimation

Abstract
Assume that you know the structure of a Bayesian network model over the variables \( \mathcal{U} \), but you do not have any estimates for the conditional probabilities. On the other hand, you have access to a database of cases, i.e., a set of simultaneous values for some of the variables in \( \mathcal{U} \). You can now use these cases to estimate the parameters of the model, namely the conditional probabilities. In this chapter we consider two approaches for handling this problem: First we show how a database of cases can be used to estimate the parameters once and for all (so-called batch learning). After that we shall investigate the situation where the cases are accumulated sequentially, i.e., we would like to adapt the model as each new case arrives. The reader is expected to be familiar with Section 1.5.
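As a concrete illustration of the batch approach, maximum likelihood estimation of a conditional probability table reduces to counting. A minimal sketch with hypothetical cases (the chapter covers refinements this sketch omits, such as handling incomplete cases):

```python
# A minimal sketch (assumed example, not the book's code): maximum likelihood
# ("batch") estimation of P(B | A) from a database of complete cases.
from collections import Counter

cases = [  # hypothetical complete cases over two binary variables (a, b)
    (0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 0),
]

# Count joint occurrences N(a, b) and marginal occurrences N(a).
joint_counts = Counter(cases)
a_counts = Counter(a for a, _ in cases)

# Estimate P(B = b | A = a) as N(a, b) / N(a).
p_b_given_a = {
    (a, b): joint_counts[(a, b)] / a_counts[a]
    for (a, b) in joint_counts
}
print(p_b_given_a)  # e.g. P(B=0 | A=0) = 2/3
```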

7. Learning the Structure of Bayesian Networks

Abstract
Consider the following situation. Some agent produces samples of cases from a Bayesian network N over the universe \( \mathcal{U} \). The cases are handed over to you, and you are asked to reconstruct the Bayesian network from the cases. This is the general setting for structural learning of Bayesian networks. In the real world you cannot be sure that the cases are actually sampled from a “true” network, but this we will assume. We will also assume that the sample is fair: the set \( \mathcal{D} \) of cases reflects the distribution \( P_N(\mathcal{U}) \) determined by N, i.e., the distribution \( P^{\#}_{\mathcal{D}}(\mathcal{U}) \) of the cases is very close to \( P_N(\mathcal{U}) \). Furthermore, we assume that all links in N are essential, i.e., if you remove a link, then the resulting network cannot represent \( P(\mathcal{U}) \). Mathematically, this can be expressed as follows: if pa(A) are the parents of A, and B is any of them, then there are two states \( b_1 \) and \( b_2 \) of B and a configuration c of the other parents such that \( P(A \mid b_1, c) \ne P(A \mid b_2, c) \).

8. Bayesian Networks as Classifiers

Abstract
You receive an email and wish to determine whether it is spam; you see a bird and wish to determine its species; you examine a patient and wish to make a diagnosis. These are only a few examples of the very common human task of classification.
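One of the simplest Bayesian network classifiers is the naive Bayes model, in which the features are assumed independent given the class. A minimal sketch with made-up numbers (illustrative only, not the book's code):

```python
# A minimal naive Bayes sketch: classify by picking the class C that maximizes
# P(C) * prod_i P(f_i | C), i.e. the chain rule under the naive assumption
# that features are independent given the class. All numbers are hypothetical.

def classify(features, prior, likelihood):
    scores = {}
    for c in prior:
        score = prior[c]
        for name, value in features.items():
            score *= likelihood[c][name][value]
        scores[c] = score
    z = sum(scores.values())  # normalize to get P(C | features)
    return {c: s / z for c, s in scores.items()}

prior = {"spam": 0.3, "ham": 0.7}  # hypothetical priors
likelihood = {  # hypothetical P(feature | class) tables
    "spam": {"has_link": {True: 0.8, False: 0.2}},
    "ham":  {"has_link": {True: 0.3, False: 0.7}},
}
print(classify({"has_link": True}, prior, likelihood))
```

Despite the crude independence assumption, naive Bayes classifiers often perform well in practice.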

Decision Graphs

Frontmatter

9. Graphical Languages for Specification of Decision Problems

Abstract
A Bayesian network serves as a model for a part of the world, and the relations in the model reflect causal impact between events. The reason for building these computer models is to use them when making decisions. In other words, the probabilities provided by the network are used to support some kind of decision making. In principle, there are two kinds of decisions, namely test decisions and action decisions.
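Both kinds of decisions are typically evaluated by the principle of maximum expected utility, a standard notion from decision theory (the notation here is generic, not necessarily the book's): given evidence e, choose

\[
d^{*} = \arg\max_{d} \sum_{u} P(u \mid d, e)\, U(u, d),
\]

where U is a utility function over outcomes u and decisions d.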

10. Solution Methods for Decision Graphs

Abstract
In Chapter 9 we presented graphical languages for modeling decision problems. The languages ease the burden of specifying the problem and transfer the complexity of the problem to the computer. For problems with a finite time horizon, the computer may fold out the specification to a decision tree and determine an optimal strategy by averaging out and folding back as described in Section 9.3.3. However, the calculations may be intractable, and in this chapter we present alternative methods exploiting symmetries in the decision problem. Sections 10.1–10.3 are devoted to solution methods for influence diagrams. Section 10.4 presents a method for solving unconstrained influence diagrams. In Section 10.5 we consider decision theoretic troubleshooting, which has next to no temporal ordering, and for which the decision trees tend to be intractably large. In Section 10.6 we present two methods for solving MDPs, and a method for solving POMDPs is indicated. The last section presents LIMIDs, a way of approximating influence diagrams by limiting the memory of the decision maker.
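For orientation, one classical MDP solution method is value iteration (possibly among the methods covered here; the sketch below is a generic illustration on a made-up two-state MDP, not the book's code):

```python
# A minimal value-iteration sketch for a tiny hypothetical MDP.
# Bellman update: V(s) <- max_a sum_s' P(s'|s,a) * (R(s,a,s') + gamma * V(s'))

# transitions[s][a] = list of (next_state, probability, reward) triples.
transitions = {
    0: {"stay": [(0, 1.0, 0.0)], "go": [(1, 0.8, 1.0), (0, 0.2, 0.0)]},
    1: {"stay": [(1, 1.0, 2.0)], "go": [(0, 1.0, 0.0)]},
}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in transitions}
for _ in range(100):  # iterate until (approximately) converged
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for s2, p, r in outcomes)
            for outcomes in actions.values()
        )
        for s, actions in transitions.items()
    }

# Extract a greedy policy from the converged value function.
policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2])
                                      for s2, p, r in actions[a]))
    for s, actions in transitions.items()
}
print(V, policy)
```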

11. Methods for Analyzing Decision Problems

Abstract
The primary issue in dealing with a decision problem is to determine an optimal strategy, but other issues may be relevant. This chapter deals with value of information, the relevant past and future for a decision, and the sensitivity of decisions with respect to parameters.
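For the value-of-information part, the standard quantity (stated in generic notation, not necessarily the book's) is the expected benefit of observing a variable T before deciding:

\[
\mathrm{EV}(T) = \sum_{t} P(t)\,\max_{d} \mathrm{EU}(d \mid t) \;-\; \max_{d} \mathrm{EU}(d),
\]

which is always nonnegative and is weighed against the cost of performing the observation.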

Backmatter
