In defence of the value free ideal

European Journal for Philosophy of Science

A comment to this article was published on 07 August 2014.

Abstract

The ideal of value free science states that the justification of scientific findings should not be based on non-epistemic (e.g. moral or political) values. It has been criticized on the grounds that scientists have to employ moral judgements in managing inductive risks. The paper seeks to defuse this methodological critique. Allegedly value-laden decisions can be systematically avoided, it argues, by making uncertainties explicit and articulating findings carefully. Such careful uncertainty articulation, understood as a methodological strategy, is exemplified by the current practice of the Intergovernmental Panel on Climate Change (IPCC).

Notes

  1. A third, rather empirical critique challenges the ideal of value free science with regard to its (potentially) harmful side-effects. Adopting the ideal in scientific policy advice, the argument goes, might have the effect that worse decisions are eventually taken, or that scientific advice, for being too nuanced or careful, is completely ignored in policy making. See, for example, Cranor (1990, p. 139) or Elliott (2011, pp. 55–80, in particular pp. 67–68). Both Cranor and Elliott, though, don’t provide detailed empirical evidence to support the claim that providing value-free advice is socially harmful. Moreover, Elliott reconstructs the argument as a defence of the methodological critique (ibid., pp. 63–64, 68). I don’t agree: If it’s really harmful to give value-free advice, that’s clearly a reason in its own right not to do it—quite independently of any further sophisticated, methodological reasoning. Moreover, stressing that value-free advice is harmful does not invalidate the refutation of the methodological critique advanced below. This said, the charge that adopting the value-free ideal might be socially harmful—at least in some contexts and under certain conditions—has to be taken seriously and calls for further philosophical and empirical investigation.

  2. A term coined by Williams (1985).

  3. See Kitcher (2011, pp. 31–40). Besides endorsing the methodological critique reconstructed in this paper, Kitcher sets forth a further argument against value freedom which is based on the pervasiveness of so-called probative values in scientific inquiry (Kitcher 2011, pp. 37–40). While this argument deserves a more thorough discussion in its own right, I suspect that it hinges on an ambiguity. If probative values (e.g. worthiness, policy-relevance) are simply used to determine detailed research questions, their being non-epistemic doesn’t undermine value freedom. If probative values (e.g. burdens of proof), however, are used to infer scientific results, they represent plain epistemic values, and the value free ideal is left intact, as well.

  4. Compare Longino (1990, 2002).

  5. See Wilholt (2009).

  6. See Winsberg (2010, pp. 93–119).

  7. See Elliott (2011, pp. 55–80). Elliott further distinguishes two versions of what I call the methodological critique: the “gap argument” (ibid., pp. 62–66) and the “error argument” (ibid., pp. 66–70). While he sees clearly that these arguments are not completely independent but rely on joint premisses (e.g., ibid., p. 70), I’d go even a bit further: The “error argument”, on the one hand, applies only in situations where one faces inductive risks, i.e., where there is a gap to be bridged in the scientific inference chain. On the other hand, the “gap argument” stresses that non-epistemic values have to be alluded to in order to bridge (evidential or logical) gaps in scientific reasoning—and the methodological handling of errors seems to be just one such gap. In sum, it’s not clear to me whether we have two distinct (albeit closely related) arguments at all. The reconstruction unfolded in this paper merges the “gap argument” and the “error argument” into one line of argumentation.

  8. In a series of philosophical studies, Douglas (2000, 2007, 2009) has revived and improved upon Rudner’s original argument.

  9. So the critique, in particular as set forth by Heather Douglas, does not refer to Duhem-style underdetermination as discussed, e.g., in Quine (1975) or Laudan (1991), although it was originally articulated in such terms (cf. Rudner 1953).

  10. This restriction to policy-relevant science is crucial, as Mitchell (2004) has stressed. Only if results are communicated and (potentially) influence policy decisions does the following claim hold.

  11. This is not to say that value judgements themselves are arbitrary in the sense of being irrational or unjustifiable. The critique merely maintains that every epistemically underdetermined decision relies on non-epistemic reasons—and is hence non-arbitrary from a broader, non-epistemic perspective. The argument unfolded in this paper is consistent with the view that non-epistemic normative claims can be supported and justified through (e.g. moral) reasoning.

  12. See specifically Douglas (2007, pp. 122–126) and Douglas (2009, pp. 80–81).

  13. This general claim entails that errors committed by scientists are highly consequential, too. Error probabilities and inductive risks will have a more specific argumentative rôle to play in the following section.

  14. A somewhat less specific, and hence weaker, analogue to premiss P1 figures prominently in Rudner’s exposition of the methodological critique. Rudner (1953) claims, explicitly, that “the scientist as scientist accepts or rejects hypotheses”, and any satisfactory methodological account should explain how (ibid., p. 2). Such a rough analogue to premiss P1 is rather implicitly (but no less importantly) assumed in the writings of Douglas, too. Thus, Douglas (2000) explicitly affirms P2 by claiming that the scientists who study the health effects of dioxins (i) have to set a specific level of statistical significance when drawing the inference (ibid., p. 567), (ii) have to agree on an unequivocal interpretation of the available data (ibid., p. 571), and (iii) have to choose between two alternative (causal) models, the threshold and the linear extrapolation model (ibid., p. 573). Echoing her earlier analysis in a discussion of risk assessments, Douglas (2009, p. 142) claims that scientists frequently have to choose from equally plausible models in order to “bridge the gaps” and complete the assessment. But why are scientists required to make these choices? Presumably in order to arrive at policy-relevant results (that is, roughly, P1). However, a careful reading of the critics seems to reveal that they are not exactly committed to P1. The discussion in the next section will account for this observation.

  15. See also footnote 14.

  16. In the sense of Knight (1921).

  17. Curiously, conditionalizing on normative assumptions is exactly the strategy favoured by Douglas (2009, p. 153) herself. It should be noted that such conditional scientific results comply with the value free ideal, because once uncertain or (non-epistemic) normative assumptions are placed in the antecedent of a hypothesis, they are clearly not maintained by the scientist anymore.
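    Schematically, and in illustrative notation that is not Douglas’s own: let H be a hypothesis and A an uncertain or non-epistemic normative assumption. Conditionalization then replaces the outright assertion of H with the assertion of the conditional

    $$A \rightarrow H$$

    Whoever asserts this conditional is not committed to A itself, which is why the reported result remains value free.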

  18. This is the bottom line of Jeffrey’s criticism (Jeffrey 1956), too. However, my argument deviates substantially from Jeffrey’s in allowing that uncertainties be made explicit otherwise than through probability assignments. More generally, it seems to me a shortcoming of the current debate about value freedom that uncertainty articulation is, short-sightedly, identified with the assignment of probabilities. Not only does Douglas ignore non-probabilistic uncertainty statements, as I will argue below; Winsberg (2010, pp. 96, 119), Biddle and Winsberg (2010) and Kitcher (2011, p. 34), too, wrongly assume that giving value free policy advice requires one to make uncertainties explicit through probabilities.

    As an additional point, note that reporting hedged hypotheses is not identical with merely stating the (e.g., limited, partially conflicting) evidence and letting the policy makers decide whether the plain hypothesis should be adopted or not. That, however, is what Elliott (2011) seems to have in mind when referring to value-free scientific advice in the face of uncertainty, as formulations like scientists “passing the buck” (ibid., pp. 55, 64), “withholding their judgment or providing uninterpreted data to decision makers” (ibid., p. 55), and letting “the users of information decide whether or not to accept the hypotheses” (ibid., p. 67) suggest. The point about making uncertainties fully explicit and reporting hedged hypotheses is (i) to enable policy makers to take the actual scientific uncertainty into account and (ii) to allow their normative risk preferences (level of risk aversion) to bear on the decision. Justifying decisions under uncertainty obviously does not require one to fully adopt an uncertain prediction or to act as if one of the uncertain forecasts were true; see also footnotes 20 and 21.
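    As a schematic illustration (the notation and the numerical values are merely hypothetical): instead of asserting a plain hypothesis H outright, or merely handing over uninterpreted data, the scientist reports a hedged hypothesis such as

    $$P(H) \geq 0.9, \qquad P(H) \in [0.6, 0.9], \qquad \text{or: ``It is possible that } H\text{.''}$$

    The first two forms articulate the uncertainty probabilistically (as a precise or an imprecise probability); the third is a non-probabilistic uncertainty statement of the very kind that, as argued above, should not be ignored.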

  19. Besides systematic objections, the following modification also addresses the hermeneutic issue considered in footnote 14.

  20. Cf. Hansson (2011).

  21. As discussed by Sunstein (2005), Gardiner (2006) and Shue (2010).

  22. Note that, in refuting P1 or, respectively, P2’, it suffices to say that scientists can weaken their empirical claims such that they are warranted beyond reasonable doubt; we are not, at this stage, committed to the view that they should do so. Only if the value free ideal is accepted, e.g. for reasons indicated in the very first paragraph, might the analysis unfolded in this section entail that scientists should carry out epistemic qualification or conditionalization, because this might be the only way to achieve value freedom.

  23. Following David Hume (2000, p. 119), I tend to regard the invocation of such uncertainties as a sort of unreasonable skepticism, one which would also comprise doubting the existence of the external world or questioning our fundamental cognitive capacities.

  24. See also Kitcher (2011, p. 34) for a similar “frank explanation” of a climate scientist. By carefully stressing the uncertainties, Kitcher’s hypothetical climatologist is—in contrast to what Kitcher seems to believe—almost fully complying with this paper’s methodological recommendations.

  25. See, for example, the papers by Schneider (2002), Dessai and Hulme (2004), Risbey (2007) and Stainforth et al. (2007).

  26. Volume 109, Numbers 1–2/November 2011.

  27. Compare also Risbey and Kandlikar (2007) for a detailed discussion.

  28. Note that, according to the Guidance Note, the explication of uncertainties does not necessarily depend on “our best climate models”, as Winsberg (2010, p. 111) assumes.

  29. One may raise the question whether, by recommending the use of expert judgement and surveys of expert views, the Guidance Note in fact tolerates non-epistemic value judgements and represents, accordingly, a counterexample to this paper’s line of thought. The relation between expert judgements and non-epistemic value judgements is intriguing and clearly deserves closer attention. At this point, however, we have to be content with the following brief remarks: First of all, the Guidance Note prescribes the use of expert surveys in order to gauge the degree of agreement within the scientific community (cf. Mastrandrea et al. 2010, pp. 2–3), e.g. with a view to probability estimates. Such surveys hence represent one way to make the prevailing uncertainties explicit, fully in line with this paper. Secondly, I take it that inferring a hypothesis from expert judgement is not necessarily more uncertain than a well-founded inference from empirical evidence: In cases where we know that experts have acquired tacit knowledge and where many experts agree, expert judgement, too, can establish a hypothesis beyond reasonable doubt. Finally, whenever these conditions aren’t met, i.e. whenever expert judgement doesn’t establish a hypothesis beyond reasonable doubt, the Guidance Note (in my understanding) recommends switching the category, e.g.: if experts don’t agree on a range of a variable (category D), one should attempt to provide an order of magnitude (category C) rather than reporting an uncertain and poorly founded quantitative range; if experts don’t agree on a probability for a variable (category E), they should try to estimate a range of possible values for that variable without assigning probabilities. So, on the one hand, the Guidance Note can be interpreted in such a way that it remains an example of how to eliminate non-epistemic value judgements, although it relies on expert views. On the other hand, the unspecific reference to expert judgements in the Guidance Note leaves room for different interpretations. Note, however, that this brief case study does not hinge on the Guidance Note being perfect or flawless in every single aspect, for it in any case illustrates the general methodological strategy of removing policy-relevant inductive risks by making uncertainties explicit.

  30. The Guidance Note, by the way, endorses that ideal explicitly (cf. Mastrandrea et al. 2010, p. 2).

  31. Or, in other words, the ideal of value free science can and should be understood as a regulative principle in a Kantian sense. Note that Kitcher (2011), too, conceives the ideal of well-ordered science (ibid., p. 125ff.) and the ideal of transparency (ibid., p. 151) along the same lines.

References

  • Biddle, J., & Winsberg, E. (2010). Value judgements and the estimation of uncertainty in climate modelling. In P.D. Magnus, & J. Busch (Eds.), New waves in philosophy of science (pp. 172–197). Basingstoke: Palgrave Macmillan.

  • Cranor, C.F. (1990). Some moral issues in risk assessment. Ethics, 101(1), 123–143.

  • Dessai, S., & Hulme, M. (2004). Does climate adaptation policy need probabilities? Climate Policy, 4(2), 107–128.

  • Douglas, H.E. (2000). Inductive risk and values in science. Philosophy of Science, 67(4), 559–579.

  • Douglas, H.E. (2007). Rejecting the ideal of value-free science. In H. Kincaid, J. Dupré, A. Wylie (Eds.), Value-free science? Ideals and illusions (pp. 120–139). New York: Oxford University Press.

  • Douglas, H.E. (2009). Science, policy, and the value-free ideal. Pittsburgh: University of Pittsburgh Press.

  • Dupré, J. (2007). Fact and value. In H. Kincaid, J. Dupré, A. Wylie (Eds.), Value-free science? Ideals and illusions (pp. 27–41). New York: Oxford University Press.

  • Elliott, K.C. (2011). Is a little pollution good for you? Incorporating societal values in environmental research. New York: Oxford University Press.

  • Gardiner, S.M. (2006). A core precautionary principle. The Journal of Political Philosophy, 14(1), 33–60.

  • Hansson, S.O. (2011). Coping with the unpredictable effects of future technologies. Philosophy & Technology, 24(2), 137–149.

  • Hume, D. (2000). An enquiry concerning human understanding. The Clarendon edition of the works of David Hume. Oxford: Clarendon Press.

  • Jeffrey, R.C. (1956). Valuation and acceptance of scientific hypotheses. Philosophy of Science, 23(3), 237–246.

  • Kitcher, P. (2011). Science in a democratic society. Amherst: Prometheus Books.

  • Knight, F. (1921). Risk, uncertainty and profit. Boston: Houghton Mifflin.

  • Laudan, L. (1991). Empirical equivalence and underdetermination. The Journal of Philosophy, 88(9), 449–472.

  • Levi, I. (1960). Must the scientist make value judgments? The Journal of Philosophy, 57(11), 345–357.

  • Longino, H.E. (1990). Science as social knowledge: Values and objectivity in scientific inquiry. Princeton: Princeton University Press.

  • Longino, H.E. (2002). The fate of knowledge. Princeton: Princeton University Press.

  • Mastrandrea, M.D., Field, C.B., Stocker, T.F., Edenhofer, O., Ebi, K.L., Frame, D.J., Held, H., Kriegler, E., Mach, K.J., Matschoss, P.R., Plattner, G.-K., Yohe, G.W., Zwiers, F.W. (2010). Guidance note for lead authors of the IPCC fifth assessment report on consistent treatment of uncertainties. Technical report, Intergovernmental Panel on Climate Change (IPCC).

  • Mitchell, S.D. (2004). The prescribed and proscribed values in science policy. In G. Wolters, & P. Machamer (Eds.), Science, values, and objectivity (pp. 245–255). Pittsburgh: University of Pittsburgh Press.

  • Putnam, H. (2002). The collapse of the fact/value dichotomy. Cambridge: Harvard University Press.

  • Quine, W.V.O. (1975). On empirically equivalent systems of the world. Erkenntnis, 9(3), 313–328.

  • Risbey, J. (2007). Subjective elements in climate policy advice. Climatic Change, 85(1), 11–17.

  • Risbey, J., & Kandlikar, M. (2007). Expressions of likelihood and confidence in the IPCC uncertainty assessment process. Climatic Change, 85(1), 19–31.

  • Rudner, R. (1953). The scientist qua scientist makes value judgements. Philosophy of Science, 20(1), 1–6.

  • Sartori, G. (1962). Democratic theory. Westport: Greenwood Press.

  • Schneider, S.H. (2002). Can we estimate the likelihood of climatic changes at 2100? Climatic Change, 52, 441–451.

  • Shue, H. (2010). Deadly delays, saving opportunities: Creating a more dangerous world? In S.M. Gardiner (Ed.), Climate ethics: Essential readings (pp. 146–162). New York: Oxford University Press.

  • Stainforth, D.A., Allen, M.R., Tredger, E.R., Smith, L.A. (2007). Confidence, uncertainty and decision-support relevance in climate predictions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 365(1857), 2145–2161.

  • Sunstein, C.R. (2005). Laws of fear: Beyond the precautionary principle. The Seeley lectures. Cambridge: Cambridge University Press.

  • Weber, M. (1949). The meaning of “ethical neutrality” in sociology and economics. In E.A. Shils, & H.A. Finch (Eds.), Methodology of social sciences (pp. 1–48). Glencoe: Free Press.

  • Wilholt, T. (2009). Bias and values in scientific research. Studies in History and Philosophy of Science, 40, 92–101.

  • Williams, B.A.O. (1985). Ethics and the limits of philosophy. Cambridge: Harvard University Press.

  • Winsberg, E. (2010). Science in the age of computer simulation. Chicago: University of Chicago Press.

Acknowledgements

I’d like to thank two anonymous reviewers of EJPS and its editor in chief for their numerous and extremely valuable comments on an earlier version of this paper.

Author information

Corresponding author

Correspondence to Gregor Betz.

Additional information

A comment to this article is available at http://dx.doi.org/10.1007/s13194-014-0095-4.

About this article

Cite this article

Betz, G. In defence of the value free ideal. Euro Jnl Phil Sci 3, 207–220 (2013). https://doi.org/10.1007/s13194-012-0062-x
