
Measuring the overall incoherence of credence functions


Abstract

Many philosophers hold that the probability axioms constitute norms of rationality governing degrees of belief. This view, known as subjective Bayesianism, has been widely criticized for being too idealized. It is claimed that the norms on degrees of belief postulated by subjective Bayesianism cannot be followed by human agents, and hence have no normative force for beings like us. This problem is especially pressing since the standard framework of subjective Bayesianism only allows us to distinguish between two kinds of credence functions—coherent ones that obey the probability axioms perfectly, and incoherent ones that don’t. An attractive response to this problem is to extend the framework of subjective Bayesianism in such a way that we can measure differences between incoherent credence functions. This lets us explain how the Bayesian ideals can be approximated by humans. I argue that we should look for a measure that captures what I call the ‘overall degree of incoherence’ of a credence function. I then examine various incoherence measures that have been proposed in the literature, and evaluate whether they are suitable for measuring overall incoherence. The competitors are a qualitative measure that relies on finding coherent subsets of incoherent credence functions, a class of quantitative measures that measure incoherence in terms of normalized Dutch book loss, and a class of distance measures that determine the distance to the closest coherent credence function. I argue that one particular Dutch book measure and a corresponding distance measure are particularly well suited for capturing the overall degree of incoherence of a credence function.


Notes

  1. In the examples I consider in this paper, I will mostly be concerned with unconditional credences. However, the measure I end up favoring can measure the incoherence of conditional as well as unconditional credences, so nothing here depends on focusing on unconditional credences for most of the paper. For a more detailed overview of the requirements on rational credences, see Weisberg (2011). For an excellent discussion of the status of the rules of probability as requirements of rationality, see Ch. 1–4 of Titelbaum (2013).

  2. The Brier score is a proper scoring rule that can be used to measure the accuracy, or epistemic utility, of an agent’s credence, or credence function, at a given world. It corresponds to the squared distance measure, and it is essentially a way of measuring how far away the agent’s credences are from the truth (or some other privileged credence function)—the larger the difference between an agent’s credence in some proposition p and the truth value of p at that world, the higher the inaccuracy, or epistemic disutility, of that credence. More formally: suppose \(c\) is a credence function defined over a set of propositions \(F\), and the function \(I\) indicates the truth values of the propositions in \(F\) at world \(w\) by mapping them onto {0,1}. Then the following function gives us the Brier score of \(c\) at \(w\):

    $$\begin{aligned} Brier (c,w)=\sum _{A\in F} {(c(A)-I_w(A))^{2}} \end{aligned}$$
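    For readers who want to experiment with the definition, here is a minimal sketch of the Brier score in Python. The dictionary-based representation of the credence function and the world is my own illustrative choice, not the paper's formalism.

```python
# Minimal sketch of the Brier score. A credence function c and a world w
# are both represented as dicts over the same set of propositions; this
# encoding is an illustrative assumption, not the paper's notation.

def brier(c, w):
    """Sum over propositions of the squared distance between credence and truth value."""
    return sum((c[a] - w[a]) ** 2 for a in c)

# Credence 0.8 in p and 0.3 in q, evaluated at a world where p is true and q is false:
print(brier({"p": 0.8, "q": 0.3}, {"p": 1, "q": 0}))  # ≈ 0.13 (= 0.2² + 0.3²)
```

    The score is 0 exactly when every credence matches the truth value at the world, and grows as the credences move away from the truth.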
  3. Thanks to an anonymous referee for pointing me to this result.

  4. Source: http://www.worldbank.org/depweb/beyond/global/chapter2.html, accessed on November 26, 2013.

  5. A set of propositions has the structure of a Boolean algebra just in case it contains every logically distinct proposition that can be expressed by combining the atomic propositions in the set with the standard logical connectives.

  6. Zynda is in fact aware of this kind of result of his measure, and he comments on it in a footnote of his paper.

    Consider, for example, a person whose degree of belief function \(f\) is thoroughly incoherent but is everywhere numerically close to a probability function. [...] Intuitively, there is a sense in which such a person’s state of opinion is very ‘close’ to being coherent, but it would come out very badly on my account, since very little of it is actually coherent. [...] This is a distinct sense of comparative coherence from the one offered above; in my view, both senses are interesting and worth developing in greater detail. (Zynda 1996, p. 215)

    Interestingly, Zynda acknowledges that his measure does not capture the very intuitive idea that the degree of incoherence of a credence function depends on numerical closeness to a probability function. Yet, he claims that there is a different graded notion of incoherence, which only depends on which parts of a credence function are actually coherent. I don’t think that this notion is what we are aiming for when we try to find a measure of probabilistic incoherence. If we want to know how much an agent diverges from being perfectly coherent, it seems natural and important to take numerical differences between agents’ credences into account. Zynda’s measure may still be of technical interest, but I think it fails to capture our most natural and interesting judgments about degrees of incoherence.

  7. For a good overview of the ongoing debate about Dutch book arguments, see Hájek (2008).

  8. The name for this problem is due to Schervish, Seidenfeld, and Kadane.

  9. SSK use the opposite terminology, i.e. the bookie is the incoherent person and the agent sets up the Dutch book. However, I’ve found it to be more common that the person who sets up the Dutch book is called the bookie, so I am diverging from SSK’s terminology here.

  10. As SSK point out, there can also be intermediate normalizations that lie in between the ‘sum’ and the ‘max’ proposals. I won’t discuss them separately here, since the two extreme proposals are most interesting for our purposes.

  11. I choose an informal presentation of SSK’s measures here to make my discussion more reader-friendly. Readers who are interested in the formal details of SSK’s proposals are encouraged to consult the appendices, as well as their presentations of the material. SSK’s proposals can be applied to probability intervals as well; I am focusing on precise probabilities to simplify the discussion.

  12. The notation has been slightly altered from the original, which in no way affects the meaning.

  13. The paper by Osherson and Vardi (2006) contains a more sophisticated, theoretical discussion of using distance measures to measure incoherence. I’d like to thank an anonymous referee for making me aware of it.

  14. To see how SSK’s measures relate to metrics, the reader is also invited to consult Sect. 6 of Schervish et al. (2002a).

  15. An interesting overview of the properties of different distance measures can be found in Cha (2007).

  16. Thanks to Branden Fitelson and Richard Pettigrew for raising this question.

  17. For the Brier score, this was shown by de Finetti (1974). For strictly proper scoring rules, the result is sketched by Savage (1971). More detailed discussions can be found in Joyce (1998, 2009), and Predd et al. (2009). See Schervish et al. (2009) for a demonstration that this result does not hold for proper scoring rules that are not continuous.

  18. Thanks to an anonymous reviewer for helping me clarify this section.

  19. Their incoherence measure is defined in terms of upper and lower previsions and it uses random variables instead of propositions. The version I present here is somewhat simplified, because I use indicator functions instead of random variables, and I take credences to determine both the buying and selling price of a bet. That means that the measure I discuss is strictly speaking a special case of their more general measure. My criticisms of the measure are independent of these simplifying assumptions.

  20. The function “sup” picks out the least upper bound of a set. In this context, it selects the highest value from the combined payoffs in all worlds in S. Thus, if the highest possible payoff is still negative, the agent can be Dutch-booked.

  21. The “min” function is used here to select the smallest number of the numbers in a set. It ensures that if no Dutch book can be made against an agent, the guaranteed loss she faces is 0. If a Dutch book can be made, the “min” function selects it, and the negative sign in front guarantees that we end up with a positive number that indicates the agent’s guaranteed loss.

  22. If we set \(\upalpha _{1}\), \(\upalpha _2 <0\), the combined payoff would be guaranteed to be positive.

  23. Recall that this is the relevant equation:

    $$\begin{aligned} G(Y)=-\min \left\{ 0,\mathop {\sup }\limits _{s\in S} \sum _{i=1}^n {\alpha _i } (I_{A_i } (s)-Cr(A_i ))\right\} \end{aligned}$$
  24. Here’s a proof of the result (thanks to Kenny Easwaran for pointing this out to me): We want to prove that if we combine two fractions by adding their respective numerators and denominators, the resulting fraction lies in between the two original fractions. Suppose that \(a/b<c/d\), with \(a, b, c, d\) positive. Then it is the case that \(ad<bc\).

    First, we show that the combined fraction is greater than \(a/b\):

    $$\begin{aligned} a/b&=\left( ab+ad \right) /b\left( b+d \right) \\ \left( a+c \right) /\left( b+d \right)&=\left( ab+cb \right) /b\left( b+d \right) \end{aligned}$$

    If you compare the two right-hand sides, you notice that they are the same except for the right summand in the numerator. And since \(ad<cb\), we can conclude that \(a/b<(a+c)/(b+d)\).

    Then we show that the combined fraction is smaller than \(c/d\):

    $$\begin{aligned} c/d&=\left( cb+cd \right) /d\left( b+d \right) \\ \left( a+c \right) /\left( b+d \right)&=\left( ad+cd \right) /d\left( b+d \right) \end{aligned}$$

    Again the right-hand sides are the same except for the left summand in the numerator. And since \(ad<cb\), we can conclude that \((a+c)/(b+d)<c/d\).

References

  • Cha, S.-H. (2007). Comprehensive survey on distance/similarity measures between probability density functions. International Journal of Mathematical Models and Methods in Applied Sciences, 1(4), 300–307.

  • Christensen, D. (2004). Putting logic in its place. Oxford: Oxford University Press.

  • De Bona, G., & Finger, M. (2014). Notes on measuring inconsistency in probabilistic logic. Technical report RT-MAC-2014-02, Department of Computer Science, IME/USP. http://www.ime.usp.br/~mfinger/www-home/papers/DBF2014-reltec.pdf.

  • De Finetti, B. (1974). Theory of probability (Vol. 1). New York: Wiley.

  • Earman, J. (1992). Bayes or bust? A critical examination of Bayesian confirmation theory. Cambridge, MA: MIT Press.

  • Hacking, I. (1967). Slightly more realistic personal probability. Philosophy of Science, 34(4), 311–325.

  • Hájek, A. (2008). Dutch book arguments. In P. Anand, P. Pattanaik, & C. Puppe (Eds.), The Oxford handbook of rational and social choice (pp. 173–195). Oxford: Oxford University Press.

  • Harman, G. (1986). Change in view. Cambridge, MA: MIT Press.

  • Joyce, J. M. (1998). A nonpragmatic vindication of probabilism. Philosophy of Science, 65(4), 575–603.

  • Joyce, J. M. (2009). Accuracy and coherence: Prospects for an alethic epistemology of partial belief. In F. Huber & C. Schmidt-Petri (Eds.), Degrees of belief (pp. 263–297). Dordrecht: Springer.

  • Osherson, D., Lane, D., Hartley, P., & Batsell, R. R. (2001). Coherent probability from incoherent judgment. Journal of Experimental Psychology: Applied, 7(1), 3–12.

  • Osherson, D., & Vardi, M. Y. (2006). Aggregating disparate estimates of chance. Games and Economic Behavior, 56(1), 148–173.

  • Predd, J., Seiringer, R., Lieb, E. H., Osherson, D., Poor, H. V., & Kulkarni, S. (2009). Probabilistic coherence and proper scoring rules. IEEE Transactions on Information Theory, 55(10), 4786–4792.

  • Savage, L. J. (1971). Elicitation of personal probabilities and expectations. Journal of the American Statistical Association, 66(336), 783–801.

  • Schervish, M. J., Seidenfeld, T., & Kadane, J. B. (2000). How sets of coherent probabilities may serve as models for degrees of incoherence. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 8, 347–355.

  • Schervish, M. J., Seidenfeld, T., & Kadane, J. B. (2002a). Measuring incoherence. Sankhya: The Indian Journal of Statistics, 64(Series A, Pt. 3), 561–587.

  • Schervish, M. J., Seidenfeld, T., & Kadane, J. B. (2002b). Measures of incoherence: How not to gamble if you must. In J. M. Bernardo, et al. (Eds.), Bayesian statistics 7 (pp. 385–401). Oxford: Oxford University Press.

  • Schervish, M. J., Seidenfeld, T., & Kadane, J. B. (2009). Proper scoring rules, dominated forecasts, and coherence. Decision Analysis, 6(4), 202–221.

  • Titelbaum, M. (2013). Quitting certainties: A Bayesian framework modeling degrees of belief. Oxford: Oxford University Press.

  • Wang, G., Kulkarni, S. R., Poor, H. V., & Osherson, D. N. (2011). Aggregating large sets of probabilistic forecasts by weighted coherent adjustment. Decision Analysis, 8(2), 128–144.

  • Weisberg, J. (2011). Varieties of Bayesianism. In D. Gabbay, S. Hartmann, & J. Woods (Eds.), Handbook of the history of logic (Vol. 10). Amsterdam: Elsevier.

  • Zynda, L. (1996). Coherence as an ideal of rationality. Synthese, 109(2), 175–216.


Acknowledgments

I would like to thank Horacio Arló-Costa, Brad Armendt, Glauber De Bona, Kenny Easwaran, Branden Fitelson, Alan Hájek, Adam Joel Keeney, Hanti Lin, Anya Plutynski, Jacob Ross, Mark Schroeder, Teddy Seidenfeld, Brian Talbot, Lyle Zynda, and two anonymous referees for helpful feedback and discussion. Part of this paper was written while I was a postdoctoral fellow at the Australian National University, thanks to the Australian Research Council Grant for the Discovery Project ‘The Objects of Probabilities’, DP 1097075.


Corresponding author

Correspondence to Julia Staffel.

Appendix: SSK’s neutral/sum and neutral/max measures

In a series of papers, SSK have developed a class of measures of degrees of incoherence based on Dutch books (2000, 2002a, 2002b). In this appendix, I focus on the neutral/sum measure, to give a more detailed formal exposition of my arguments in Sect. 6.2. To see how this Dutch book measure works, suppose there is an agent who has a credence function P that assigns credences to a set of propositions \(\{A_1 ,\ldots ,A_n \}\).Footnote 19 We can represent a bet on or against one of these propositions according to the agent’s credences in the following way:

$$\begin{aligned} \hbox {Bet: }\alpha \left( I\left( A_i \right) -P\left( A_i \right) \right) \end{aligned}$$

In this case, \(I(A_i )\) is the indicator function of \(A_i \), which assigns a value of 1 if \(A_i \) is true and a value of 0 if \(A_i \) is false. \(P(A_i )\) is the credence the agent assigns to the proposition \(A_i \). The coefficient \(\alpha \) determines both the size of the bet and whether the agent is betting on or against \(A_i \). If \(\alpha >0\), then the agent bets on the truth of \(A_i \), whereas if \(\alpha <0\), the agent bets against the truth of \(A_i \).

In the following, it will be assumed that an agent who assigns a precise credence to a proposition thereby evaluates as fair the bet on and the bet against that proposition at the price that is fixed by her credence.

An agent is incoherent if there is a collection of gambles she evaluates as fair that together guarantee a loss. Formally, we can represent this as follows: Let \(A_1 ,\ldots ,A_n \) be the propositions that some agent assigns credences to, let Cr be the agent’s credence function, which may or may not be probabilistically coherent, and let S be the set of possible world states. If there is some choice of coefficients \(\alpha _1 ,\ldots ,\alpha _n \) such that the sum of the payoffs of the bets on or against \(A_1 ,\ldots ,A_n \) is negative for every world state \(s\in S\), then the agent is vulnerable to a Dutch book. Thus, there is a Dutch book iff Footnote 20

$$\begin{aligned} \mathop {\sup }\limits _{s\in S} \sum _{i=1}^n {\alpha _i } (I_{A_i } (s)-Cr(A_i ))<0 \end{aligned}$$

This formula tells us how to determine whether a Dutch book can be made against an agent who has a given credence function in a given set of propositions. We can capture the guaranteed loss an agent faces from a collection of gambles of the form \(Y_i =\alpha _i \left( I\left( A_i \right) -Cr\left( A_i \right) \right) \) as follows:Footnote 21

$$\begin{aligned} G(Y)=-\min \left\{ 0,\mathop {\sup }\limits _{s\in S} \sum _{i=1}^n {\alpha _i } (I_{A_i } (s)-Cr(A_i ))\right\} \end{aligned}$$

In order to normalize the guaranteed loss to be able to measure an agent’s degree of incoherence, we can divide the guaranteed loss by the sum of the coefficients of the individual bets. This normalization is called the “neutral/sum” normalization by SSK. We can thus compute the rate of loss H(Y):

$$\begin{aligned} H(Y)=\frac{G(Y)}{\sum \limits _{i=1}^n {\left| {\mathop \alpha \nolimits _i } \right| } } \end{aligned}$$

The degree of incoherence can be determined for a set of propositions and a credence function over these propositions by choosing the coefficients \(\upalpha _{1},\ldots , \upalpha _{\mathrm{n}}\) in such a way that H(Y) is maximized. To maximize H(Y), it may be necessary not to include certain propositions in the Dutch book, which can be achieved by setting the relevant coefficients \(\upalpha _\mathrm{i}\) to 0.

We can illustrate how the measure works with an example. Suppose an agent has credences in two propositions, q and \(\sim \hbox {q}\). Her credence assignment \(f\) is incoherent, since she assigns \(f\left( \hbox {q} \right) =0.5\) and \(f({\sim }\hbox {q})=0.6\). In order to measure her rate of incoherence, we will first set up two bets with her, one for each proposition, and sum them in order to determine their combined payoff: \(Y=\alpha _1 (I_q (s)-0.5)+\alpha _2 (I_{\lnot q} (s)-0.6)\)

Since we can either be in a world where q is true or in a world where q is false, we can get two values for Y:

  • If q, then \(Y=0.5\alpha _1 -0.6\alpha _2 \)

  • If \(\sim \)q, then \(Y=0.4\alpha _2 -0.5\alpha _1 \)

Thus, we can calculate G(Y) as follows, where \(\upalpha _{1}\), \(\upalpha _2 >0\): Footnote 22

if \(\upalpha _2 \ge 1.25 \upalpha _1\) or \(\upalpha _1 \ge 1.2 \upalpha _2\), then the second term in the braces in the G(Y) equationFootnote 23 (the supremum) is nonnegative, which means that \(G(Y)=0\)

otherwise, \(G(Y)=-\mathop {\sup }\limits _{s\in S} \{\alpha _1 (I_q (s)-0.5)+\alpha _2 (I_{\lnot q} (s)-0.6)\}\)

Thus, when a Dutch book can be made (i.e. when \(G(Y)>0\)), we can measure the rate of incoherence by choosing the coefficients in such a way that H(Y) is maximized:

$$\begin{aligned} H(Y)=\frac{\mathop {-\sup }\limits _{s\in S} \{\alpha _1 (I_q (s)-0.5)+\alpha _2 (I_{\lnot q} (s)-0.6)\}}{\left| {\alpha _1 } \right| +\left| {\alpha _2 } \right| } \end{aligned}$$

The rate of incoherence is maximized in this example if we choose \(\upalpha _1 =\upalpha _2 \), which results in a rate of incoherence of 0.05.
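The result of this example can be checked numerically. The following Python sketch is my own illustration: it searches a grid of coefficients rather than deriving the maximum analytically, as SSK do.

```python
# Numerical check of the example: the agent assigns f(q) = 0.5 and
# f(~q) = 0.6. We search a grid of coefficients for the choice that
# maximizes the neutral/sum rate of loss H(Y); the grid search is an
# illustrative shortcut, not SSK's analytic method.

def rate_of_loss(a1, a2):
    """Neutral/sum rate of loss for bets with coefficient a1 on q and a2 on ~q."""
    payoff_if_q = a1 * (1 - 0.5) + a2 * (0 - 0.6)      # world where q is true
    payoff_if_not_q = a1 * (0 - 0.5) + a2 * (1 - 0.6)  # world where q is false
    guaranteed_loss = -min(0, max(payoff_if_q, payoff_if_not_q))
    return guaranteed_loss / (abs(a1) + abs(a2))

grid = [i / 50 - 1 for i in range(101)]  # coefficients from -1 to 1
best = max(rate_of_loss(a1, a2)
           for a1 in grid for a2 in grid
           if abs(a1) + abs(a2) > 0)
print(round(best, 6))  # 0.05, attained with equal positive coefficients
```

The search confirms the rate of incoherence of 0.05 reported above, reached when the two coefficients are equal and positive.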

I will now move on to the problems with the measure discussed in Sect. 6.2. The first example involved a comparison of the following two credence functions:

$$\begin{aligned} \begin{array}{lll} f\left( \hbox {p} \right) =0.6&{}\quad \quad &{} g\left( \hbox {p} \right) =0.6 \\ f\left( {{\sim }\hbox {p}} \right) =0.6&{}\quad \quad &{} g\left( {{\sim }\hbox {p}} \right) =0.6 \\ f\left( \hbox {q} \right) =0.5, &{}\quad \quad &{} g\left( \hbox {q} \right) =0.6 \\ f\left( {{\sim }\hbox {q}} \right) =0.5&{}\quad \quad &{} g\left( {{\sim }\hbox {q}} \right) =0.6 \\ \end{array} \end{aligned}$$

We noted that intuitively, \(g\) is overall more incoherent than \(f\). However, this is not the result we get from the neutral/sum measure. According to this measure, the agent would be equally incoherent in both cases. Here’s how that result comes about. First, consider the case in which the agent adopts \(f\). The formula to calculate the degree of incoherence is the following (as the reader can easily verify, including bets on q and \(\sim \)q couldn’t possibly lead to a higher guaranteed loss, so they can and should be left out):

$$\begin{aligned} H(Y)=\frac{\mathop {-\sup }\limits _{s\in S} \{\alpha _1 (I_p (s)-0.6)+\alpha _2 (I_{\lnot p} (s)-0.6)\}}{\left| {\alpha _1 } \right| +\left| {\alpha _2 } \right| } \end{aligned}$$

As before, the agent’s rate of loss is maximized when we set \(\upalpha _{1}=\upalpha _{2}>0\). We can thus simplify the calculation as follows:

$$\begin{aligned} H(Y)=\frac{0.2\alpha _1 }{2\alpha _1 }=0.1 \end{aligned}$$

Thus, if the agent adopts \(f\) as her credence function, her rate of loss is 0.1. Let us now compare this to what happens if the agent adopts \(g\). If her credence function is \(g\), we can calculate the rate of loss as follows:

$$\begin{aligned}&H(Y)\\&\quad =\!\frac{\mathop {\!-\!\sup }\limits _{s\in S} \{\alpha _1 (I_p (s)-0.6)+\alpha _2 (I_{\lnot p} (s)-0.6)+\alpha _3 (I_q (s)-0.6)+\alpha _4 (I_{\lnot q} (s)-0.6)\}}{\left| {\alpha _1 } \right| +\left| {\alpha _2 } \right| +\left| {\alpha _3 } \right| +\left| {\alpha _4 } \right| } \end{aligned}$$

If we try to find values for \(\upalpha _{1}-\upalpha _{4}\) that maximize H(Y), we get an interesting result. There is no way of picking values for \(\upalpha _{1}-\upalpha _{4}\) that leads to a higher rate of incoherence than 0.1. Rather, we get exactly the same rate of incoherence as before, as long as we choose \(\upalpha _{1}=\upalpha _{2}\) and \(\upalpha _{3}=\upalpha _{4}\), and we choose \(\upalpha _1 >0\) and/or \(\upalpha _3 >0\). However, this result is in tension with the intuition that an agent who adopts \(g\) is more incoherent than an agent who adopts \(f\). We can see more easily how this result arises if we strip our example down to its essential parts. In the calculation of the rate of loss of \(g\), we are essentially combining two normalized Dutch books of the same kind into one, by adding the numerators and adding the denominators of each one. The two normalized Dutch books are the same as the one Dutch book we made against the agent who adopts \(f\). Thus, to make it simple, the calculation for the rate of loss of the agent who adopts \(g\) goes as follows:

$$\begin{aligned} H(Y)=\frac{0.2\alpha _1 +0.2\alpha _3 }{2\alpha _1 +2\alpha _3 } \end{aligned}$$

If two fractions are combined in such a way that the numerators and the denominators are added, the value of the resulting fraction is always in between or equal to the two original fractions. Thus, if we combine two normalized Dutch books with the same rate of loss in the neutral/sum measure, the resulting rate of loss is the same as before.Footnote 24
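This mediant property is easy to verify numerically. The following sketch uses Python's exact fractions; the example values are my own, chosen to mirror the rates of loss in the text.

```python
# Numerical illustration of the fact used above: the mediant
# (a + c)/(b + d) of two fractions a/b <= c/d lies between them
# (and equals them when they are equal). Example values are assumed.

from fractions import Fraction

def mediant(x, y):
    """Combine two fractions by adding their numerators and their denominators."""
    return Fraction(x.numerator + y.numerator, x.denominator + y.denominator)

equal = mediant(Fraction(1, 10), Fraction(1, 10))
print(equal)  # 1/10: combining two equally bad Dutch books leaves the rate unchanged

lo, hi = Fraction(1, 10), Fraction(1, 5)
m = mediant(lo, hi)
print(lo < m < hi)  # True: with unequal rates, the mediant lies strictly in between
```

This is exactly why the neutral/sum rate of loss for \(g\) cannot exceed the rate of the single Dutch book made against \(f\).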

Moreover, when employing the neutral/sum measure, it can even be beneficial not to make certain Dutch books at all in order to maximize the rate of loss. Remember that we are allowed to choose \(\upalpha _\mathrm{i}\) = 0 if necessary to maximize the rate of loss. In a case in which there are two Dutch books that can be made against an agent, but one of them leads to a greater rate of loss on its own, the total rate of loss can be maximized by setting the relevant coefficients to 0. For example, suppose an agent has the credence function \(f\), defined as follows: \(f\left( \hbox {p} \right) =0.6, f\left( {{\sim }\hbox {p}} \right) =0.5, f\left( \hbox {T} \right) =0.9\), where T is a tautology. An agent who adopts \(f\) can be Dutch booked in two ways: on her incoherent credences in the partition \(\left\{ {\hbox {p}, {\sim }\hbox {p}} \right\} \), and on her less-than-certain credence in the tautology. In this case, the rate of loss comes down to:

$$\begin{aligned} H(Y)=\frac{0.1\alpha _1 +0.1\alpha _3}{2\alpha _1 +\alpha _3} \end{aligned}$$

The rate of loss in this case reaches its maximum value of 0.1 if we set \(\upalpha _{1}\) = 0. This amounts to only Dutch booking the agent on her credence in the tautology, but refraining from Dutch booking her on her incoherent credences in the partition \(\left\{ {\hbox {p}, {\sim }\hbox {p}} \right\} \).
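A quick numerical check confirms this. The grid search below is my own illustrative shortcut over the last displayed equation, not SSK's method of locating the maximum.

```python
# Grid search over the coefficients in the tautology example, where
# H(Y) = (0.1*a1 + 0.1*a3)/(2*a1 + a3). The search is an illustrative
# shortcut; the analytic maximum is discussed in the text.

def rate(a1, a3):
    return (0.1 * a1 + 0.1 * a3) / (2 * a1 + a3)

grid = [i / 100 for i in range(101)]  # nonnegative coefficients from 0 to 1
best_rate, best_a1, best_a3 = max(
    (rate(a1, a3), a1, a3)
    for a1 in grid for a3 in grid
    if 2 * a1 + a3 > 0)
print(round(best_rate, 6), best_a1)  # 0.1 0.0
```

The maximal rate of 0.1 is reached only when \(\upalpha _{1}=0\), i.e. when the Dutch book on the partition is dropped entirely.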

This feature of the measure’s normalization is the source of the swamping problem. Since only the worst Dutch book determines the agent’s degree of incoherence, incoherencies in other parts of the credence function get swamped and are not reflected in the agent’s degree of incoherence. This also gives rise to the problematic example in Sect. 6.2, in which the measure orders two credence functions in a way that seems intuitively exactly backwards, which makes the measure unsuitable to evaluate reasoning processes.

In order to determine the degree of incoherence of a credence function according to the neutral/max measure instead, we must make the following adjustment: instead of normalizing the Dutch book loss by dividing the betting loss by the sum of the stakes of the individual bets, we must normalize it by dividing the Dutch book loss by the stake of the largest bet included in the gamble. Hence, the rate of loss that must be maximized becomes:

$$\begin{aligned} H(Y)=\frac{G(Y)}{\max \{\left| {\alpha _1 } \right| ,\ldots ,\left| {\alpha _n } \right| \}} \end{aligned}$$
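To see how the two normalizations can come apart, here is an illustrative computation of my own, applying both to the credence functions \(f\) and \(g\) from the first example, with the coefficients fixed at 1 in advance rather than optimized.

```python
# Neutral/sum vs. neutral/max on the earlier example:
# f(p) = f(~p) = 0.6, f(q) = f(~q) = 0.5, while g assigns 0.6 to all four.
# Coefficients are fixed at 1 on each included bet; this is an
# illustrative computation, not a full optimization over coefficients.

def rates(cr, worlds, alphas):
    """Return the (neutral/sum, neutral/max) rates of loss for fixed coefficients."""
    payoffs = [sum(a * (w[p] - cr[p]) for p, a in alphas.items())
               for w in worlds]
    loss = -min(0, max(payoffs))  # guaranteed loss G(Y)
    stakes = [abs(a) for a in alphas.values()]
    return loss / sum(stakes), loss / max(stakes)

# The four world states over the atomic propositions p and q.
worlds = [{"p": i, "~p": 1 - i, "q": j, "~q": 1 - j}
          for i in (0, 1) for j in (0, 1)]

f = {"p": 0.6, "~p": 0.6, "q": 0.5, "~q": 0.5}
g = {"p": 0.6, "~p": 0.6, "q": 0.6, "~q": 0.6}

print(rates(f, worlds, {"p": 1, "~p": 1}))  # ≈ (0.1, 0.2): bet only on the p-pair
print(rates(g, worlds, {p: 1 for p in g}))  # ≈ (0.1, 0.4): bet on all four
```

With these coefficients the neutral/sum rate is the same for \(f\) and \(g\), while the neutral/max rate is twice as high for \(g\), since it normalizes by the largest stake rather than the total stakes.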

I hope this appendix helpfully supplements the informal presentation of the arguments in the main body of the paper, and I will leave it to the reader to verify the results in the remaining sections of the main text.


Cite this article

Staffel, J. Measuring the overall incoherence of credence functions. Synthese 192, 1467–1493 (2015). https://doi.org/10.1007/s11229-014-0640-x
