Immoderately rational

Published in Philosophical Studies


Notes

  1. This is often called “Uniqueness”. See White (2007), Feldman (2007), Christensen (2007), Kelly (forthcoming), Cohen (forthcoming), Schoenfield (2013) for various formulations of the thesis.

  2. Bayesianism also provides a natural setting for the discussion here: many people find my main target view, moderate permissivism, much more compelling for credences than for full belief.

  3. My uses of “extreme” and “moderate” in this context follow Meacham (forthcoming). White (2007) uses the same terms to mark a different distinction.

  4. I don’t mean to build in much by “rule” here; we can think of an epistemic rule as a mapping from evidence to belief states. “Following” a rule, for present purposes, just involves adopting the belief state that corresponds to your total evidence.

  5. To head off a couple of worries here: first, you might be concerned that, because it dictates how one should regard one’s own beliefs and epistemic methods, Immodesty implausibly requires us to form beliefs about all of these matters. To avoid that, we can think of Immodesty as a principle of propositional, rather than doxastic, justification: it tells us what opinions would make sense for a rational agent to have, should she happen to form opinions at all. Second, one might worry that, since it’s framed in terms of maximizing a certain value, Immodesty appears to commit us to consequentialism about epistemic rationality. For reasons of space, I won’t get into this issue in depth here. But it’s not clear to me that non-consequentialists should be concerned that the Immodesty demand requires consequentialism. In general, we can think of Immodesty as a kind of “internal harmony” among one’s beliefs about the world and one’s beliefs about truth. If a rational agent believes P, she should also regard believing P as a better way to align one’s beliefs with the truth than disbelieving P or suspending judgment.

  6. See Joyce (2009). Joyce (1998) provides a slightly different argument for coherence. If you have incoherent credences, he argues, they will be dominated by a coherent credence function: that is, some coherent credence function will be at least as accurate as yours in every world, and more accurate in some. I prefer expected accuracy arguments rather than dominance in this context because they can get us a stronger immodesty principle: that your credences aren’t just best, but uniquely best. (Realizing that your coherent credences are non-dominated gives you no reason to think that they are better than other coherent credences, which are also non-dominated.) One wrinkle here is that it’s a bit hard to know what to say about probabilistically incoherent believers if we focus on expected accuracy rather than dominance. It’s true that probabilistically incoherent agents won’t be immodest in the sense I’m working with, but that’s because our definition of immodesty refers to expected value, which is not defined for incoherent credence functions. It might not seem so bad if incoherent agents fail to be immodest for this reason. In response to this worry, one could define a broader notion of expected value meant to apply to both coherent and incoherent credence functions (see Leitgeb and Pettigrew (2010), p. 214–215 for a view like this) and use that to make arguments about what incoherent agents should believe. Less contentiously, one might just point out that, on a Subjective Bayesian view, we now have an argument for immodesty on behalf of coherent believers, but that we have no such argument for incoherent believers. This still seems to bring out something good about being coherent. Thanks to Kenny Easwaran and Richard Pettigrew for drawing my attention to these issues.
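
    A minimal numerical sketch of the dominance point above, using the Brier score as the inaccuracy measure (my choice for illustration; the argument is not tied to it). The particular credence assignments are hypothetical examples: an incoherent pair of credences in P and ~P summing to more than 1, and a coherent alternative that is strictly more accurate at every world.

    ```python
    # Sketch of Joyce-style dominance over {P, ~P}, using the Brier score
    # (an illustrative assumption; the note does not fix an inaccuracy measure).
    # Inaccuracy at a world = sum of squared distances from the truth values.

    def brier_inaccuracy(credences, world):
        """Brier inaccuracy of (c_P, c_notP) at `world` ("P" or "~P")."""
        c_p, c_notp = credences
        truth_p, truth_notp = (1.0, 0.0) if world == "P" else (0.0, 1.0)
        return (c_p - truth_p) ** 2 + (c_notp - truth_notp) ** 2

    incoherent = (0.6, 0.6)  # credences sum to 1.2: probabilistically incoherent
    coherent = (0.5, 0.5)    # a coherent alternative

    # The coherent function is strictly more accurate in every world,
    # so the incoherent one is dominated.
    for world in ("P", "~P"):
        assert brier_inaccuracy(coherent, world) < brier_inaccuracy(incoherent, world)
    ```

    This also brings out why dominance alone yields no immodesty: (0.5, 0.5) is non-dominated, but so are many other coherent functions, exactly as the note observes.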

  7. Greaves and Wallace (2006). Accuracy in this context is assessed by a scoring rule, which assigns greater value to one’s credences the closer they are to the truth. A complication in these arguments is that not all scoring rules will do the job of supporting immodesty; those that do form a class called “strictly proper scoring rules”. So one way of seeing these arguments is as defending two things simultaneously: the Subjective Bayesian constraints, as rules of rationality, and proper scoring rules as a refinement of our notion of accuracy. I won’t get into the details of this issue here.
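
    As a numerical illustration of strict propriety, take the Brier score (one member of the class the note describes, chosen here only as an example). A coherent credence p in a single proposition expects its own value to be uniquely most accurate among the alternatives, which is the immodesty result:

    ```python
    # Check that the Brier score is strictly proper for a single proposition P:
    # credence p expects credence c = p to be uniquely most accurate.

    def expected_brier_inaccuracy(p, c):
        """Expected Brier inaccuracy of credence c in P, assessed by credence p."""
        # World where P is true (assessor's probability p):
        inacc_if_true = (c - 1) ** 2 + (1 - c) ** 2
        # World where P is false (assessor's probability 1 - p):
        inacc_if_false = c ** 2 + ((1 - c) - 1) ** 2
        return p * inacc_if_true + (1 - p) * inacc_if_false

    p = 0.7
    alternatives = [i / 100 for i in range(101)]  # grid of candidate credences
    best = min(alternatives, key=lambda c: expected_brier_inaccuracy(p, c))
    assert best == p  # p's own credence uniquely minimizes expected inaccuracy
    ```

    An improper rule (e.g., linear distance from the truth) would fail this check: it would tell a middling credence to expect an extreme credence to do better, which is exactly the immodesty-undermining behavior that motivates restricting attention to strictly proper rules.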

  8. For example, Lewis (1971) argues that inductive methods are only eligible for rational use if they “recommend themselves”, or take themselves to be optimal. Field (2000) adopts a similar notion of immodesty, arguing that we should take our own epistemic rules to be good guides to the truth. Gibbard (2008) defends something like immodesty—he argues that epistemic rationality involves seeing oneself as believing optimally, in some sense—but objects to interpreting it in terms of expected accuracy. Many epistemologists working in the Bayesian framework take up Immodesty, spelled out in expected accuracy terms, as a datum; one clear example is Moss (2011).

  9. See White (2007) for a defense of impermissivism. See also Feldman (2007), Christensen (2007). Williamson (2000)’s notion of “evidential probability” is a version of Objective Bayesianism, though its relation to justified beliefs or degrees of confidence is not straightforward.

  10. Dogramaci (2012) makes a similar point, in explaining why we should defer to testimony from other rational agents: “…[Y]our own beliefs can serve as bases for inferred conclusions that I can then acquire by testimony. And this is all possible because, when we share rules, I can trust that you will draw the same conclusion from an evidential basis that I would.” (p. 524) Dogramaci does not explicitly endorse either impermissivism or permissivism, but his defense of “epistemic communism” is similar to the account of epistemic value that I offer on behalf of the impermissivist.

  11. While I’m calling this a “truth-guiding” account of the value of rationality, there’s a sense in which it isn’t “really” truth-guiding; the connection to truth is cast in subjective, rather than objective, terms. So it’s possible on this account to have a rational agent who is vastly mistaken about the world. An impermissivist should not say that rationality guarantees that this is not our situation. But she can say that such a situation is very unlikely. Cohen (1984) raises some worries for a subjective connection between rationality and truth, mainly targeting the view that one must believe that one’s epistemic rules are reliable in order to use them. Requiring us to form all of these higher-order beliefs, Cohen argues, is an unrealistic and undue cognitive burden. We can sidestep many of these worries by thinking of this view as one about propositional, rather than doxastic justification: the idea here is that if one were to form beliefs about the reliability of one’s methods, one would be justified in taking the attitudes described by Immodesty.

  12. See, e.g. Rosen (2001), Douven (2009), Cohen (forthcoming), Kelly (forthcoming), Teller (2011), Schoenfield (2013), Meacham (forthcoming) for objections to impermissivism along these lines.

  13. Kelly (2010), p. 11. Douven makes a similar observation: “[M]ost Bayesians nowadays think rational degrees of belief are to satisfy additional constraints [beyond satisfying the axioms of probability]… Still, to the best of my knowledge no one calling him- or herself a Bayesian thinks that we could reasonably impose additional constraints that would fix a unique degrees-of-belief function to be adopted by any rational person and would thereby turn Bayesianism into a fully objective confirmation theory.” (Douven (2009), p. 348).

  14. See especially Kelly (forthcoming), Schoenfield (2013) for examples of the types of view I have in mind.

  15. Moderately permissive views can be more or less moderate. Pettigrew (2012), Meacham (forthcoming), for example, discuss versions of moderate permissivism on which the only substantive constraint is compliance with the Principal Principle. This proposal is very close to Subjective Bayesianism, and Pettigrew argues that it can be defended on similar lines. It might turn out, then, that this view can answer the value question along similar lines as well. Because it is so permissive, though, it will not be attractive to anyone who is worried about things like skepticism and grue-projection. For the present discussion, I will set aside this type of view and focus on less-permissive moderate views.

  16. It’s true that moderate permissivists can’t say that rational belief maximizes expected accuracy as assessed from the rational perspective. But couldn’t they say, instead, that rational belief maximizes expected accuracy from a rational perspective? (Each rational credence function will maximize expected accuracy relative to one rational credence function: itself.) Moderate permissivists could say this, but they also will have to say more: extreme permissivists, after all, can also say that rational belief maximizes expected accuracy as assessed from a rational perspective. So moderate permissivists will need to say why their view goes beyond extreme permissivism. Once we add something to this view, however, our answer to the value question becomes less unified. If we wanted to identify a unique, valuable property of rational belief, this kind of strategy won’t give us one. Though that isn’t a knock-down objection, I think it is at least reason to worry that this strategy won’t yield a satisfactory answer to the value question. Thanks to Dennis Whitcomb for suggesting this response on behalf of moderate permissivism, and for helpful discussion on this point.

  17. Though this position isn’t explicitly endorsed by permissivists as far as I can tell, it is often in the background. For example, Kelly writes, “the Permissivist might think that what permissive cases there are, aren’t all that permissive. … [Suppose] you and I agree on the basis of our common evidence that the Democrat is more likely than not to be elected. … The only difference between us is this: [you] give a bit less credence to the proposition that the Democrat will win than I do. Here there seems little pressure for me to conclude that you are less reasonable than I am.” (Kelly, forthcoming, pp. 2–3).

  18. This is an intuitively plausible constraint on acceptable ways of measuring both closeness and accuracy of credence functions. For a proof that this constraint is true of strictly proper scoring rules, see Pettigrew (2013) (appendix).

  19. For example, suppose your credence in P is 0.5, and your credence in ~P is also 0.5. It might be rational to have 0.6 credence in P, and 0.4 in ~P—suppose that coherent assignment of credences has high expected accuracy as assessed by your credences and your scoring rule, so it meets the required threshold. But then some incoherent assignments of credences will plausibly also meet the threshold: for instance, 0.51 credence in P and 0.5 credence in ~P. Thanks to Jennifer Carr and Brian Hedden.
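
    The example in the note just above can be checked numerically. Using the Brier score (an assumption on my part; the note leaves the scoring rule open), the incoherent 0.51/0.5 assignment in fact earns a *better* expected-accuracy score from the 0.5/0.5 perspective than the coherent 0.6/0.4 assignment, so any threshold the latter meets, the former meets too:

    ```python
    # Numerical version of the note-19 example, with Brier inaccuracy
    # assessed from the perspective of the 0.5/0.5 credence function.

    def expected_inaccuracy(assessor, candidate):
        """Expected Brier inaccuracy of (c_P, c_notP), assessed by (a_P, a_notP)."""
        a_p = assessor[0]  # assessor's probability that P is true
        c_p, c_notp = candidate
        inacc_if_p = (c_p - 1) ** 2 + c_notp ** 2        # world where P is true
        inacc_if_notp = c_p ** 2 + (c_notp - 1) ** 2     # world where P is false
        return a_p * inacc_if_p + (1 - a_p) * inacc_if_notp

    mine = (0.5, 0.5)
    coherent_alt = (0.6, 0.4)     # rational by hypothesis: meets the threshold
    incoherent_alt = (0.51, 0.5)  # sums to 1.01: probabilistically incoherent

    # The incoherent credences are closer to mine, so their expected
    # inaccuracy is lower (better) than the coherent alternative's:
    assert expected_inaccuracy(mine, incoherent_alt) < expected_inaccuracy(mine, coherent_alt)
    ```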

  20. While many permissivists accept that there can be cases like this, some do not. For instance, Cohen (forthcoming) argues that rationality is only permissive when one is unaware of the other permissible options on one’s evidence. A view like this might be able to avoid many of the problems I raise here for moderate permissivism. I don’t have the space to give this view the attention it deserves, but I will mention a few reasons to think that permissivists should be hesitant to adopt it. First of all, Cohen’s view commits us to the claim that there can be widespread rational ignorance, or rational false beliefs, about what rationality requires: and indeed, that this kind of ignorance is rationally required in all permissive cases. This is a very strong conclusion (as Ballantyne and Coffman (2012) point out). Second, this view undermines some popular motivations for permissivism: for example, it implies that you and I could never rationally recognize our own situation as a “reasonable disagreement” of the type supposedly found on juries or among scientists.

    Cohen embraces this conclusion, and argues that when we find out which other credences are rational (perhaps through disagreement with others) we should conciliate. But this means that if a rational agent takes any view of her own epistemic situation at all, she must believe that if her evidence is permissive, her own credence in response to it is near the middle of the permissible range (at least in expectation). (If she thought her credence was near the lower bound of the permissible range, say, she would have pressure to move towards the middle.) Why is the middle of the range so special, and why should we try to approach it in response to disagreement? See Cohen (forthcoming), Christensen (2009) for further comments. Thanks to Louis deRosset for helpful discussion on this topic.

  21. I’m simplifying a little here; depending on which scoring rule we’re using here, it might turn out that the “middle” of the rational range isn’t actually where we should put Alice in order to make the threshold view come out true from her perspective.

  22. Bob should think that his own beliefs maximize expected accuracy, again, because of Immodesty. Schoenfield (2013) argues that this is why someone in Bob’s situation should not regard the choice between his credence and Alice’s as “arbitrary”, contra White (2007); we should stick to our own credences in permissive cases because we see those credences as maximizing expected accuracy.

  23. Again, I’m glossing over some issues about “closeness” here (see fn. 7). But I think we can make sense of the general point without going into detail on this issue.

  24. Field (2000), p. 141.

  25. Meacham (forthcoming), Teller (2011) each raise versions of this worry. Similar complaints have been raised against the extreme permissivist’s defense of Immodesty. Subjective Bayesians argue that coherent agents should regard their credences as maximizing expected accuracy according to a proper scoring rule—but why should we think that this is the right way to measure accuracy? One of the main motivations for using proper scoring rules is that they allow coherent agents to be immodest. So the extreme permissivist’s answer to the value question might be similarly unconvincing to those who don’t already hold the view. See Maher (2002), Gibbard (2008) for two versions of this worry about Subjective Bayesianism.

References

  • Ballantyne, N., & Coffman, E. J. (2012). Conciliationism and uniqueness. Australasian Journal of Philosophy, 90(4), 657–670.

  • Christensen, D. (2007). Epistemology of disagreement: The good news. Philosophical Review, 116(2), 187–217.

  • Christensen, D. (2009). Disagreement as evidence: The epistemology of controversy. Philosophy Compass, 4(5), 756–767.

  • Cohen, S. (1984). Justification and truth. Philosophical Studies, 46(3), 279–295.

  • Cohen, S. (forthcoming). A defense of the (almost) equal weight view. In D. Christensen & J. Lackey (Eds.), The epistemology of disagreement: New essays. Oxford: Oxford University Press.

  • Dogramaci, S. (2012). Reverse engineering epistemic evaluations. Philosophy and Phenomenological Research, 84(3), 513–530.

  • Douven, I. (2009). Uniqueness revisited. American Philosophical Quarterly, 46(4), 347–361.

  • Feldman, R. (2007). Reasonable religious disagreements. In L. Antony (Ed.), Philosophers without gods: Meditations on atheism and the secular life (pp. 194–214). Oxford: Oxford University Press.

  • Field, H. (2000). Apriority as an evaluative notion. In P. A. Boghossian & C. Peacocke (Eds.), New essays on the a priori. Oxford: Oxford University Press.

  • Gibbard, A. (2008). Rational credence and the value of truth. Oxford Studies in Epistemology, 2, 224.

  • Greaves, H., & Wallace, D. (2006). Justifying conditionalization: Conditionalization maximizes expected epistemic utility. Mind, 115(459), 607–632.

  • Joyce, J. (1998). A nonpragmatic vindication of probabilism. Philosophy of Science, 65(4), 575–603.

  • Joyce, J. (2009). Accuracy and coherence: Prospects for an alethic epistemology of partial belief. In F. Huber & C. Schmidt-Petri (Eds.), Degrees of belief. Dordrecht: Springer.

  • Kelly, T. (2010). Peer disagreement and higher-order evidence. In R. Feldman & T. Warfield (Eds.), Disagreement. Oxford: Oxford University Press.

  • Kelly, T. (forthcoming). How to be an epistemic permissivist. In M. Steup & J. Turri (Eds.), Contemporary debates in epistemology. Oxford: Blackwell.

  • Leitgeb, H., & Pettigrew, R. (2010). An objective justification of Bayesianism I: Measuring inaccuracy. Philosophy of Science, 77(2), 201–235.

  • Lewis, D. (1971). Immodest inductive methods. Philosophy of Science, 38(1), 54–63.

  • Maher, P. (2002). Joyce’s argument for probabilism. Philosophy of Science, 69(1), 73–81.

  • Meacham, C. (forthcoming). Impermissive Bayesianism. Erkenntnis.

  • Moss, S. (2011). Scoring rules and epistemic compromise. Mind, 120(480), 1053–1069.

  • Pettigrew, R. (2012). Accuracy, chance, and the principal principle. Philosophical Review, 121(2), 241–275.

  • Pettigrew, R. (2013). A new epistemic utility argument for the principal principle. Episteme, 10(1), 19–35.

  • Rosen, G. (2001). Nominalism, naturalism, philosophical relativism. Philosophical Perspectives, 15, 69–91.

  • Schoenfield, M. (2013). Permission to believe: Why permissivism is true and what it tells us about irrelevant influences on belief. Noûs, 47(1).

  • Teller, P. (2011). Learning to live with voluntarism. Synthese, 178(1), 49–66.

  • White, R. (2007). Epistemic permissiveness. Philosophical Perspectives, 19(1), 445–459.

  • Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.

Acknowledgments

For helpful comments, questions, and suggestions, thanks to Rachael Briggs, Alex Byrne, Jennifer Carr, David Christensen, Louis deRosset, Brendan Dill, Sinan Dogramaci, Ryan Doody, Tom Dougherty, Kenny Easwaran, Katie Finley, David Gray, Caspar Hare, Brian Hedden, Sam Fox Krauss, Jack Marley-Payne, Chris Meacham, Richard Pettigrew, Damien Rochford, Bernhard Salow, Mike Titelbaum, Katia Vavova, Jonathan Vogel, Kenny Walden, Fritz Warfield, Dennis Whitcomb, Steve Yablo, and audiences at the UT Austin Graduate Conference, the Notre Dame/Northwestern Graduate Epistemology Conference, the Bellingham Summer Philosophy Conference, and several work in progress venues at MIT. Special thanks to Miriam Schoenfield, Paulina Sliwa, and Roger White.

Author information

Correspondence to Sophie Horowitz.

Cite this article

Horowitz, S. Immoderately rational. Philos Stud 167, 41–56 (2014). https://doi.org/10.1007/s11098-013-0231-6
