7.2 Dennett and the Intentional Stance
Dennett introduces the (non-folk-psychology-using) Martian super-scientist as a way of expressing an objection to the intentional stance (that is, to his intentional systems theory, which is based on the intentional stance): the objection that the intentional stance is observer-relative. The objection goes something like this: in order to deal with an earlier objection (the lectern objection), Dennett stipulates that a belief-and-desire-possessing system is one for which the intentional stance is required in order to reliably and voluminously predict its behaviour. On such a view, we come out as “true believers”, while a lectern does not: even though the intentional stance can be used to predict a lectern’s behaviour, the physical stance does just as well, if not better. But the Martian super-scientist is stipulated to be such that it would not need the intentional stance to predict a typical person P’s behaviour, even though we would. So is P a true believer or not? It seems that Dennett’s theory makes it an observer-relative matter whether a subject is a true believer: P is a true believer relative to us, but not relative to the Martian super-scientist.
Dennett’s counter to this objection is that there are patterns that the super-scientist would not be able to explain without folk psychology:
“The Earthling and the Martian observe (and observe each other observing) a particular bit of local physical transaction. From the Earthling’s point of view, this is what is observed. The telephone rings in Mrs. Gardner’s kitchen. She answers, and this is what she says: “Oh, hello dear. You’re coming home early? Within the hour? And bringing the boss to dinner? Pick up a bottle of wine on the way home then, and drive carefully.” On the basis of this observation, our Earthling predicts that a large metallic vehicle with rubber tires will come to a stop on the drive within one hour, disgorging two human beings, one of whom will be holding a paper bag containing a bottle containing an alcoholic fluid. The prediction is a bit risky, perhaps, but a good bet on all counts. The Martian makes the same prediction, but has to avail himself of much more information about an extraordinary number of interactions of which, so far as he can tell, the Earthling is entirely ignorant. For instance, the deceleration of the vehicle at intersection A, five miles from the house, without which there would have been a collision with another vehicle - whose collision course had been laboriously calculated over some hundreds of meters by the Martian. The Earthling’s performance would look like magic! How did the Earthling know that the human being who got out of the car and got the bottle in the shop would get back in? The coming true of the Earthling’s prediction, after all the vagaries, intersections, and branches in the paths charted by the Martian, would seem to anyone bereft of the intentional strategy as marvelous and inexplicable as the fatalistic inevitability of the appointment in Samarra”.
A full understanding of what is going on in Dennett’s counter is beyond the scope of this paper, but we can note here that our use of the ultra-scientist does have some similarities to Dennett’s Martian. The main question of our paper is whether the ultra-scientist can make sense of the blob’s ability to predict the behaviour of the ball, which is structurally similar to the question of whether Dennett’s Martian can predict/explain/understand Mrs. Gardner’s ability to predict the behaviour of the person on the phone (presumably Mr. Gardner). One idea of how these relate to each other is shown in Table 1.
But there is an important disanalogy: Mr. Gardner is a person, with beliefs and desires, whereas the ball in our example is not. The ball is playing the role of the kinds of things that intentional agents reason about, not that of an intentional agent itself. Dennett’s example conflates these two roles. Presumably, his example would work just as well if Mrs. Gardner were reasoning about inanimate objects (like balls), since it is the Earthling’s (as opposed to Mrs. Gardner’s) successful ascription of beliefs and desires to others (as opposed to the Earthling’s own possession of beliefs and desires) that is supposed to puzzle the Martian. Our example eliminates this confusing conflation.
Table 1. One understanding of how our thought experiment relates to Dennett’s

Dennett’s example       | Our example
------------------------|-------------------------------------------------------
Martian super-scientist | Ultra-scientist using only forward reasoning
Earthling               | Ultra-scientist using temporal-interpolation reasoning
Mrs. Gardner            | Blob
Mr. Gardner             | Ball
But there’s another way of seeing how the two examples relate. Dennett argues that the Martian super-scientist would not be able to make sense of the Earthling’s ability to predict Mrs. Gardner’s behaviour without seeing the Earthling as possessing the ability to ascribe the concepts of folk psychology to others. Similarly, we argue that we cannot make sense of the ultra-scientist’s ability to predict the blob’s behaviour without seeing the ultra-scientist as possessing the ability to ascribe proto-intentional relations to others. This interpretation is shown in Table 2.
Table 2. Another understanding of how our thought experiment relates to Dennett’s

Dennett’s example       | Our example
------------------------|-------------------------------------------------------
Martian super-scientist | Us
Earthling               | Ultra-scientist using temporal-interpolation reasoning
Mrs. Gardner            | Blob
Mr. Gardner             | Ball
Seeing things this way reveals an important difference of explanatory strategy between Dennett’s example and ours. In stressing the conceptual distinctness of the intentional stance as an explanatory enterprise, Dennett at best failed to show how the things posited by the intentional stance are metaphysically continuous with those postulated by the physical stance. At worst, he introduced a dualism from which there is no recovery. Our goal, on the other hand, is to emphasise the continuity between intentional and non-intentional explanations. Thus, our strategy (as clarified in Sect. 2.1) was to present the blob, the blob’s behaviour, and the ultra-scientist’s mode of reasoning about the blob in physically-grounded terms, without explicitly stipulating that any of them are intentional, so that such a designation might be a conclusion of our analysis, rather than a presupposition of it.
It should be noted that Dennett seems to allow his Martian only mechanism-forward reasoning (like the stereotypical version of Laplace’s demon). This weakens his point considerably, because that somewhat arbitrary restriction has other consequences that undermine Dennett’s points concerning intentionality. For example, for a Martian limited to mechanism-forward reasoning, we can construct a scenario in which it is equally puzzled by the Earthling’s ability to predict the behaviour of a rock rolling down a rugged slope. The Earthling takes one look at the slope, sees that there is nothing on which the rock can get stuck, and, knowing that the rock will end up at the bottom, predicts that it will cross a line painted horizontally across the slope, half-way down. The Martian simulates the rock’s precise trajectory from a detailed knowledge of its shape and the slope’s surface, and concludes that the Earthling is miraculously correct, despite the Earthling being ignorant of the details that permit a mechanism-forward prediction. To avoid this situation, we need to grant the Martian the ability to engage in temporal-interpolation reasoning as well. Fortunately, doing so does not force us to attribute to the Martian naturalistically problematic concepts (see Sect. 1.2). Further, even with enough temporal-interpolation reasoning ability to find the Earthling’s reasoning about the rock, the slope, and the painted line non-miraculous, the Martian does not thereby necessarily have the kind of temporal-interpolation reasoning that is intentionality-invoking, in the way our ultra-scientist does.
Put another way, Dennett individuates explanatory practices by the concepts they use, which is fine if such concepts are already well-established, but dubious if there is no such prior grounding, and circular if one attempts to clarify those concepts by reference to the very explanatory practices which they are meant to help individuate. Such circularity is on display in Dennett’s treatment of the intentional stance; the stance itself is defined in terms of belief and desire:
“Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do.”
But then the notion of belief is clarified in terms of the intentional stance:
“What it is to be a true believer is to be an intentional system, a system whose behaviour is reliably and voluminously predictable via the intentional strategy.”
By contrast, in this paper we individuate explanatory practices in the first instance by the kinds of physical systems and physical behaviours they can and cannot predict/explain. This is why we take such care to describe what the blob does in non-intentional terms, rather than saying something like “Suppose the ultra-scientist is trying to explain the behaviour of an intentional agent”. This allows us to stipulate an empirical discontinuity within the class of physically characterised systems, rather than a conceptual discontinuity between physical and intentional modes of explanation.
There are several advantages to taking our approach. Its non-circularity allows it to at least potentially provide a true naturalisation of intentionality. It would also permit (future) investigation of what it would take for an understanding-system/understood-system dyad to transition from one in which the understanding system does not require teleological concepts to predict the understood system’s behaviour, to one in which it does. That is, we could finally begin to offer a non-question-begging answer to the question “what is it about a system that makes the intentional stance useful in predicting its behaviour?”.
Further, by weakening the intentional concepts involved from beliefs and desires to belief-like and desire-like states, our approach is better suited to exploring the points at which intentionality and intentional accounts first start to get a grip. Such exploration can be done in a way that is data-led, rather than motivated by top-down, a priori constraints from the special case of full-fledged beliefs and desires. Accordingly, we are not forced to use complex, difficult-to-naturalise semantic structures, relying instead only on the notion of non-actual situations.