Abstract
In the previous chapter we developed a qualitative (classificatory and comparative) theory of deductive confirmation, guided by the success perspective. In this chapter we will present, in Section 3.1., the corresponding quantitative theory of confirmation, more specifically, the corresponding probabilistic theory of confirmation of a Bayesian nature, with a decomposition into deductive and non-deductive confirmation. It is again pure in the sense that all equally successful hypotheses profit from their success to the same degree. It is inclusive in the sense that it leaves room for confirmation of hypotheses with zero probability (p-zero hypotheses). In Section 3.2. the resulting qualitative theory of (general) confirmation, encompassing the qualitative theory of deductive confirmation, will be indicated. Finally, in Section 3.3. we will briefly discuss the acceptance of hypotheses in the light of quantitative confirmation. In Appendix 1, it will be argued that Popper’s quantitative theory of corroboration amounts to an inclusive and impure Bayesian theory of confirmation. In Appendix 2 the quantitative treatment of the raven paradoxes resulting from our quantitative theory is compared in detail with an analysis in terms of the standard Bayesian solution as presented by Horwich.
Notes
Standard versions of Bayesian philosophy of science, leaving no room for confirmation of p-zero hypotheses, can be found in (Horwich 1982, Earman 1992, Howson and Urbach 1989, Schaffner 1993, Ch. 5). These non-inclusive versions are pure or impure depending on whether they support the difference degree or the ratio degree of confirmation (see below), respectively.
That is, without assuming, as statisticians do, that H and ¬H are simple hypotheses in the sense of generating a certain probability distribution. Hence, H and ¬H may well be disjunctions of such simple hypotheses, in which case p is based on a prior distribution over the latter hypotheses and their corresponding conditional probability distributions. To be sure, H itself is primarily thought of as a non-statistical hypothesis. For the extrapolation of the Bayesian approach to statistical hypotheses, see e.g., (Howson and Urbach 1989) and (Schaffner 1993).
Note that p(E) is equal to p(H)p(E/H) + p(¬H)p(E/¬H). Note also that p(E/¬H) is equal to p(E)p(¬H/E)/p(¬H).
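Both identities in this note are easy to check numerically. The following minimal sketch uses purely illustrative probability values (not taken from the text) for the prior p(H) and the two likelihoods:

```python
# Illustrative (assumed) values, chosen only to exercise the identities.
p_H = 0.3             # prior probability of H
p_E_given_H = 0.9     # likelihood p(E/H)
p_E_given_notH = 0.2  # likelihood p(E/¬H)

p_notH = 1 - p_H

# Law of total probability: p(E) = p(H)p(E/H) + p(¬H)p(E/¬H)
p_E = p_H * p_E_given_H + p_notH * p_E_given_notH  # 0.3*0.9 + 0.7*0.2 = 0.41

# Bayes' theorem gives p(¬H/E) = p(E/¬H)p(¬H)/p(E), hence the second
# identity p(E/¬H) = p(E)p(¬H/E)/p(¬H) by rearrangement.
p_notH_given_E = p_E_given_notH * p_notH / p_E
assert abs(p_E_given_notH - p_E * p_notH_given_E / p_notH) < 1e-9
```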
The formal possibility of including conditional probabilities with ‘p-zero’ conditions in a probability calculus was introduced by Rényi (1955) and was, for instance, followed by Popper (1959, Appendix *iv).
In view of the F(orward)-criterion, see below, the S-criterion might also be called the B(ackward)-criterion.
This formulation applies, strictly speaking, only to a language with a finite domain. However, in many cases it can be extended to infinite domains, provided E deals with a finite number of individuals.
Popper and Miller (1983) have tried to argue that inductive probabilities, supposedly realizing the idea of extrapolation or ‘going beyond the evidence’, cannot exist. However, in the next chapter we will see that there exist probability functions which are clearly inductive in a straightforward sense, and hence that the explication of Popper and Miller was an unlucky mistake.
See (Festa 1999a) for a lucid survey. Added in proof: see also Fitelson (1999) for a comparative survey.
Popper’s arguments (Popper 1959) against p(H/E) as degree of confirmation convinced even Carnap (1963, the new foreword) that the ‘genuine’ degree of confirmation should be identified with, or at least be proportional to, p(H/E) − p(H) or p(H/E)/p(H).
This ratio of likelihoods of H and ¬H might be called the ‘likelihood ratio’, but we will not do so because this expression has a different meaning in statistics. There it means the ratio of the likelihoods of two alternative (but usually non-exhaustive) hypotheses assuming one underlying statistical model. However, the ratio p(E/H)/p(E/¬H) is also (unconditionally) equivalent to the ratio of the posterior odds, p(H/E)/p(¬H/E), and the prior odds, p(H)/p(¬H). For this reason, this ratio could also be conceived as an inclusive (and impure) degree of confirmation.
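The claimed equivalence of p(E/H)/p(E/¬H) with the ratio of posterior odds to prior odds can be verified numerically. A minimal sketch, again with illustrative probability values of our own choosing:

```python
# Illustrative (assumed) values for the prior and the two likelihoods.
p_H, p_E_H, p_E_nH = 0.3, 0.9, 0.2
p_nH = 1 - p_H

p_E = p_H * p_E_H + p_nH * p_E_nH        # total probability of E
p_H_E = p_H * p_E_H / p_E                # posterior p(H/E) by Bayes
p_nH_E = 1 - p_H_E                       # posterior p(¬H/E)

ratio = p_E_H / p_E_nH                                  # p(E/H)/p(E/¬H)
odds_ratio = (p_H_E / p_nH_E) / (p_H / p_nH)            # posterior/prior odds
assert abs(ratio - odds_ratio) < 1e-9    # the two expressions coincide
```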
The term is used in this strict sense by Festa (1999a).
Since p(H/E) itself is a function of p(H), viz., p(H) • p(E/H)/p(E), this does not exclude that some P-incremental degrees of confirmation, e.g., the d-measure, increase under certain conditions with increasing p(H). E.g., for deductive confirmation of H and H* by E, d(H, E) = p(H)(1/p(E) − 1) < d(H*, E) iff p(H) < p(H*).
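For deductive confirmation (H entails E, so p(E/H) = 1), the formula d(H, E) = p(H)(1/p(E) − 1) and its monotonic dependence on p(H) can be checked directly. A sketch with assumed values:

```python
def d_deductive(p_H, p_E):
    """d(H, E) for the deductive case: p(E/H) = 1, so p(H/E) = p(H)/p(E)
    and d(H, E) = p(H/E) - p(H) = p(H)*(1/p(E) - 1)."""
    return p_H / p_E - p_H

p_E = 0.5  # assumed probability of the entailed evidence
# The two algebraic forms agree:
assert abs(d_deductive(0.1, p_E) - 0.1 * (1 / p_E - 1)) < 1e-9
# d increases with the prior: p(H) < p(H*) implies d(H, E) < d(H*, E).
assert d_deductive(0.1, p_E) < d_deductive(0.2, p_E)
```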
Note that this ratio may be defined for two p-zero hypotheses and that values for p(E/¬H1) and p(E/¬H2) are not needed.
This definition has some complications. Strictly speaking, it provides only a necessary condition for independence. It is nevertheless plausible to call, in general, the probabilistic expression p(A & B)/(p(A) • p(B)) the degree of mutual or inter-dependence of A and B. Carnap (1950/63, par. 66) has called it the (mutual) relevance quotient.
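Carnap's relevance quotient is straightforward to compute; it equals 1 when A and B are probabilistically independent (the necessary condition mentioned above) and exceeds 1 when they are positively relevant. A sketch with assumed joint and marginal probabilities:

```python
def relevance_quotient(p_AB, p_A, p_B):
    """Carnap's (mutual) relevance quotient p(A & B)/(p(A) * p(B));
    equal to 1 iff A and B are probabilistically independent."""
    return p_AB / (p_A * p_B)

# Independent case (assumed values): p(A & B) = p(A) * p(B) = 0.3 * 0.4.
assert abs(relevance_quotient(0.12, 0.3, 0.4) - 1.0) < 1e-9
# Positively relevant case: joint probability above the independence value.
assert relevance_quotient(0.2, 0.3, 0.4) > 1.0
```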
The term ‘neutral’ is already used within the presented theory of confirmation, viz., in the phrase ‘neutral evidence’, which makes that term less attractive for our present purposes.
See (Jeffrey 1975) for a comparison of a couple of measures, including r, d, and d’. His emphasis is on r and d, and on second thoughts, that is, in his “Replies” he favors d over r, mainly because of its ‘impure’ character. In our opinion (see also the next section), the impact of different prior probabilities is perfectly accounted for in the resulting different posterior probabilities.
Note in particular that assigning the probability value 0 to “for all E: M iff G” amounts to adding the strong irrelevance assumption (SIA) as described in Subsection 2.2.2. In that case, the posterior probability and the posterior odds of the grue hypothesis are and remain 0.
Note that, since d(H, E) = p(H)(r(H, E) − 1), r(H, E) is also the crucial expression in calculating the relevant d-values.
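The identity relating the d-measure and the r-measure holds for any coherent assignment, as a quick numerical sketch (with illustrative, assumed values) confirms:

```python
# Assumed prior and likelihoods, as in the earlier sketches.
p_H, p_E_H, p_E_nH = 0.3, 0.9, 0.2
p_nH = 1 - p_H

p_E = p_H * p_E_H + p_nH * p_E_nH
p_H_E = p_H * p_E_H / p_E      # posterior p(H/E)

r = p_H_E / p_H                # r-degree: p(H/E)/p(H)
d = p_H_E - p_H                # d-degree: p(H/E) - p(H)

# d(H, E) = p(H)(r(H, E) - 1): the r-value suffices to recover d.
assert abs(d - p_H * (r - 1)) < 1e-9
```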
The assumption that c is some positive number when RH is false is of course a simplification. However, it is easy to check that the proofs can be refined by conditionalization on the hypotheses that c = 1, 2, 3, ..., with the result that the claims remain valid, independent of the prior distribution.
Popper calls the r-value in general the ‘explanatory power’ of H with respect to E. Although Popper does not do so, it would have been plausible for him to call it the ‘explanatory success’ as soon as E has turned out to be the result of the test. We simply call it the degree of success.
A more general version of this defence can be obtained by generalizing the success-criterion of confirmation to: “E confirms H” iff p(E/H) > p(E) for all p for which 0 < p(E) < 1.
In view of the nature of our analysis and the relation d(H, E) = p(H)(r(H, E) − 1), it is clear that d(H, E) also realizes the severity intuitions dealt with, though in a somewhat less transparent way.
Note that the principle of partial entailment, suggested at the end of Subsection 3.1.1., may be seen as a special case of SDC or RPC as soon as we assume that “H partially entails E” implies that H makes E more plausible or that E makes H more plausible, respectively. Similarly, the principle of inductive extrapolation, also suggested there, is realized by RPC as soon as we assume that “H inductively extrapolates upon E” implies that E makes H more plausible. In this case, the implication that H makes E more plausible does not seem as natural as the reverse implication, hence, the principle is not as easy to see as a special case of SDC.
Think of a context in which not only the evidence results from some kind of random sampling, but also the population resulted from some earlier random sampling in a larger universe, and hence the division of true and false hypotheses. The urn-model argument in favor of the P.2-feature of the r-degree of confirmation in Subsection 3.1.3. and the urn-model illustration of the superiority of new tests in Subsection 3.2.3. were of this kind.
Of course, r(H, E) may be conceived as depending on p(H), by using p(E) = p(H)p(E/H) + p(¬H)p(E/¬H) to calculate p(E). However, nothing forces us to use this particular ‘decomposition’ of p(E). The relative independence of r(H, E) from p(H) may be conceived as an additional, pragmatic advantage of the r-degree: people may agree about it, without having to agree about p(H).
In fact Popper formulates (v) in one respect more restricted and in another respect somewhat more general, but the present formulation is more suitable for our purposes. Popper’s formulation is more restricted in the sense that he starts with the condition, between brackets, that H* entails H. But if this restricted version is plausible, then so is (v) itself, or so it seems. However this may be, the resulting degree of corroboration, see below, satisfies (v) in the unrestricted sense. (A similar remark applies to (vi).) On the other hand, Popper’s formulation of (v) is more general in the sense that, assuming that H* entails H, he requires that there is a statement H’ such that c(H, H’) > c(H*, H’). However, he mentions as example that H’ may be H*, in which case the requirement amounts to c(H, H*) > c(H*, H*). Since c(H, H) is the maximal value for H, according to (ii), this implies that c(H, H) > c(H*, H*), i.e., our requirement. Since Popper’s motivation is entirely in terms of this example, we prefer this restricted formulation.
Moreover, there is the interesting suggestion of Jeffrey (1975, p. 150) to assign infinitesimal numbers, developed in non-standard analysis, to p-zero hypotheses in the standard sense.
Note that from the informal expositions of Popper one might sometimes get the idea that he is pleading for the opposite of P.2lcor, favoring less plausible hypotheses when equally successful, that is, the stronger hypothesis should be praised more by the corroborating evidence than the weaker one. However, he is well aware of this consequence of (vi), for he speaks (Popper 1983, p. 251) of an aspect in which degree of corroboration resembles probability. Hence, it may be assumed that Popper, at least on second thoughts, subscribed to P.2lcor.
He does not explicitly deal with the second paradox, but implicitly the situation is clear.
Copyright information
© 2000 Springer Science+Business Media Dordrecht
Cite this chapter
Kuipers, T.A.F. (2000). Quantitative Confirmation, and Its Qualitative Consequences. In: From Instrumentalism to Constructive Realism. Synthese Library, vol 287. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-1618-5_3
Publisher Name: Springer, Dordrecht
Print ISBN: 978-90-481-5369-5
Online ISBN: 978-94-017-1618-5