Abstract
In this paper, we introduce what can be called the “standard account of implementation” and briefly mention some objections raised against it. Then we carefully examine Chalmers’ account of implementation and show that without a notion of “legitimate grouping of physical states” all sorts of physical systems would implement unintended computations. Specifically, we show how, despite Chalmers’ attempts to overcome the difficulties inherent in defining physical state types, his definition of implementation still allows for unwanted implementations.
Keywords
implementation, computation, computationalism
Introduction: The Standard Account of Implementation
Over the years, several proposals of what it means to
“We require first that the states of the system can be interpreted as representing the elements of the domain and range of the function, and we require that (in certain circumstances) if the system is in a state representing an element of the domain of the function, physical laws guarantee that it will go into a state representing the corresponding element of the range of the function.”
Formally, this can be written as follows:
1. there is an “interpretation” function I from a set
2. physical laws guarantee that (in certain specifiable circumstances) if the system is in a state i in
While this definition is very general in that it includes various other accounts of implementation and physical realization as special cases, the standard account of implementation, as it stands, unfortunately does not quite work. It has been pointed out that it is “too liberal,” for it does not put any restriction on the interpretation function: without any restrictions and constraints,
Figure 1. The standard account of physical realization of a function:
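The two conditions of the standard account can be sketched as a small check. The function names and the toy negation device below are illustrative assumptions, not part of the original account:

```python
# Sketch of the standard account: a physical system realizes a function f
# under an interpretation I if, whenever it is in a state representing an
# element x of f's domain, its dynamics carry it into a state representing f(x).

def realizes(f, domain, step, I, input_states):
    """Check conditions 1 and 2 over the listed input states:
    I interprets states as values, `step` is the law-governed transition."""
    for s in input_states:
        x = I[s]                      # condition 1: state s represents the value x
        if x in domain:
            if I[step(s)] != f(x):    # condition 2: dynamics must track f
                return False
    return True

# Toy "device" with four states realizing negation on {0, 1}.
I = {"s0": 0, "s1": 1, "t0": 0, "t1": 1}   # interpretation function
dynamics = {"s0": "t1", "s1": "t0"}        # physically guaranteed transitions
ok = realizes(lambda x: 1 - x, {0, 1}, lambda s: dynamics[s], I, ["s0", "s1"])
```

The point of the sketch is that nothing in it constrains I: any relabelling of states that happens to commute with the dynamics counts, which is exactly the liberality problem discussed next.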
One
Then
“The trick used in this example is obviously to get the object to compute the function by somehow building the function
To see what needs to be done in order to “exclude tricks of this sort,” one needs to analyze different kinds of “tricks” and hope to be able to detect “common patterns.” It is clear that there must be “some restrictions on the interpretation function used in any empirically substantial computational claim”
“This raises a problem for the idea that an ordinary calculator might realize the addition function on the natural numbers. The problem is that the addition function is infinite (in the sense that it has an infinite domain): any pair of numbers can be added, and there are infinitely many different pairs. To realize the addition function, then, a physical system would have to have infinitely many computationally relevant physical states (since our interpretation
Besides the question of whether physical systems can realize infinite functions at all (e.g., using infinitely many states), it seems that one has to account at least for cases like the calculator and revise the standard account to allow it to realize an infinite function
1. there is an “interpretation” function
2. physical laws guarantee that (in certain circumstances
Note that the revised standard account has made two crucial transitions: an explicit transition from (potentially) infinite sets of physical states to finite such sets, and another
There are other modifications to Definition 2 that become necessary upon further analysis. For example, it has been pointed out by
“[…] the machine is a finite object, accepting only finitely many numbers as inputs and yielding only finitely many as outputs—others are simply too big. Indefinitely many programs extend the actual behavior of the machine. Usually, this is just ignored because the designer of the machine intended it to fulfill just one program […] Second, in practice it is hardly likely that I really intend to entrust the values of a function to the operation of a physical machine, even for that part of the function for which the machine can operate. Actual machines can malfunction: through melting wires or slipping gears they may give the wrong answer. How is it determined when a malfunction occurs? By reference to the program of the machine, as intended by its designer, not simply by reference to the machine itself.”
Kripke’s objection to the general idea of a physical system realizing an infinite function rests on the claim that “indefinitely many programs extend the actual behavior of the machine” and that reference to intention of the designer (of a computing device) is needed to fix one particular program. To counter objections of that sort, Stabler suggests modifying part 2 of Definition 2 by adding the counterfactual clause “if the system satisfied conditions of normal operation N for long enough”:
2.’ Physical laws guarantee that (in certain circumstances
By adding this clause, reference to the designer’s intentions is replaced by conditions of normal operation, thus avoiding the problems that the former raises for the functionalist (see
“The counterfactuals needed by such accounts [e.g. realization of simple computations such as the identity function realized by a wire] are of a sort that can be supported by our understanding of physical laws. There is no reason to suppose that the more complicated counterfactuals needed to support claims about the functioning of organisms will be different in kind. […] we need only consider what would be the case if the antecedents of our counterfactuals held, and as in our examples [e.g., the wire example mentioned above] sometimes our understanding of physical laws can guide us quite clearly in that consideration.” (
Although it would be preferable to dispense with counterfactuals in an account of physical realization altogether (e.g., see
Thus, with the modified standard account of implementation (Definition 2 with 2’), we are now in a position to analyze Chalmers’ notion of implementation.
An Analysis of Chalmers’ Definition of Implementation
“A physical system implements a given computation when the causal structure of the physical system mirrors the formal structure of the computation.
In a little more detail, this comes to:
A physical system implements a given computation when there exists a grouping of physical states of the system into state-types and a one-to-one mapping from formal states of the computation to physical state-types, such that formal states related by an abstract state-transition relation are mapped onto physical state-types related by a corresponding causal state-transition relation.”
There is a little ambiguity in the two phrasings as to what the exact meaning of “mirrors” is supposed to be.
[iso⇒]
[iso⇐]
Yet, the second phrasing does not seem to imply structural sameness, since it requires only [iso⇒] (i.e., that “formal states related by an abstract state-transition relation are mapped onto physical state-types related by a corresponding causal state-transition relation”), but not the other direction [iso⇐] (i.e., that physical state-types related by the corresponding causal state-transition relation have to be mapped onto formal states related by an abstract state-transition relation, too). Hence, with “mirrors” Chalmers seems to mean only [iso⇒]. At a different place in the text, however, he suggests [iso⇐] when he writes “… that the formal state-transitional structure of the computation mirrors the causal state-transitional structure of the physical system” (1994, p. 393). So, it seems that “mirrors” is to be understood as “isomorphic to,” and, indeed, he later writes: “the relation between an implemented computation and an implementing system is one of isomorphism between the formal structure of the former and the causal structure of the latter” (1994, p. 396).
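Reading “mirrors” as “isomorphic to” means the mapping must be one-to-one and must preserve transitions in both directions. A minimal sketch of this check (the helper and the tiny example are illustrative, not from the text):

```python
def is_isomorphism(phi, formal_trans, phys_trans):
    """phi: dict from formal states to physical state-types.
    formal_trans / phys_trans: sets of (state, state) transition pairs."""
    if len(set(phi.values())) != len(phi):
        return False                                   # not one-to-one
    forward = all((phi[a], phi[b]) in phys_trans
                  for (a, b) in formal_trans)          # [iso=>]
    inv = {v: k for k, v in phi.items()}
    backward = all((inv[p], inv[q]) in formal_trans
                   for (p, q) in phys_trans)           # [iso<=]
    return forward and backward

formal = {("A", "B"), ("B", "A")}
phi = {"A": "P1", "B": "P2"}
mirror = is_isomorphism(phi, formal, {("P1", "P2"), ("P2", "P1")})
# An extra causal transition among the state-types breaks [iso<=],
# even though [iso=>] still holds:
extra = is_isomorphism(phi, formal, {("P1", "P2"), ("P2", "P1"), ("P1", "P1")})
```

The second call returns `False`: a causal transition with no formal counterpart is exactly what [iso⇒] alone would fail to rule out.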
Hence, the mapping between physical state types and computational states has to be bijective (
Before we discuss Chalmers’ second (formal) definition, a remark on the use of the term “state” seems appropriate. The term “state” in this context is sometimes used in the sense of “token of a particular state type,” although this is at best ambiguous and misleading. “Automaton state,” for example, could denote a state in the set of states in the definition of the automaton as well as a state in a particular run of the automaton—the former is a type, the latter a token.
Yet, some philosophers still use “state” as if it referred to a unique particular physical occurrence, a constellation of a physical system which obtains only once at a particular moment in time, and thus once in the life-time of the system.
To avoid terminological (and consequently conceptual) confusion, we will use the term “instantiation of a state” (or maybe “state token,” e.g., see
After having explicated the overall structure of “implementation” in the first definition, Chalmers spells out the details in a more formal definition in which he uses a finite state automaton (FSA) as a representative for other computational formalisms:
“A physical system
Note that nothing in this definition requires that the mapping
This is where the idea of a grouping of physical states (mentioned in the former definition) comes in to reconcile two seemingly incompatible ideas: 1) that no one-to-one correspondence can be established at the level of physical states (for the above and similar reasons), and 2) that the computational description is supposed to mirror the causal structure of the physical system. By relating not the physical states themselves, but more complex types of these states (formed by the “particular grouping” of states into types), it becomes possible to establish a one-to-one and onto relationship between these types and computational states, which is the prerequisite of an “isomorphism” (the mathematical term expressing structural identity, i.e., “mirroring”).
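The role of grouping can be illustrated with a small sketch (names hypothetical): many maximally specific physical states are partitioned into a few state-types, and only at the level of types can a one-to-one correspondence with the formal states be set up.

```python
def group_states(phys_states, type_of):
    """Partition maximally specific physical states into state-types
    according to the labelling function `type_of`."""
    groups = {}
    for s in phys_states:
        groups.setdefault(type_of(s), set()).add(s)
    return groups

# Six micro-states of a switch, grouped into the two types "up"/"down";
# a 2-state automaton can then be mapped bijectively onto the two types.
micro = ["u-hot", "u-cold", "u-dusty", "d-hot", "d-cold", "d-dusty"]
types = group_states(micro, lambda s: "up" if s.startswith("u") else "down")
```

Nothing in this sketch constrains `type_of`, which is precisely the gap the paper presses on: without criteria for legitimate groupings, almost any partition will do.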
Chalmers, although never explicitly, seems to suggest that it is possible to obtain a bijective mapping
To see why Chalmers’ definition fails to capture his view about “mirroring” (i.e., isomorphism between physical and formal structures), let us examine how his definition works in detail, starting with a simple physical system
Input to
Figure 2. The simple physical system
Figure 3. The automaton
Automaton
Note, however, that
In the examples so far, every automaton state type corresponded to one and only one physical state type in a direct way, hence there was no need for a complex grouping of physical states types. Now, consider the physical system
Figure 4. The physical system
The abstract structure of
Now suppose switch
Figure 5. The causal structure of physical system
In the end, we establish that for the above-defined state types
Figure 6. A graph of the states and transitions in the two-switch system
For the following, assume in addition that the input alphabet of
It can be seen that for “(A, a)→(B, 1)” and “(B, b)→(A, 0)” (i.e., for every state transition) the conditional of Chalmers’ definition of implementation (i.e., “if the system were in state …, it would transit into …”) is true: take the first transition “(A, a)→(B, 1).” Two states of
Together, we get that
The former does correspond to our intuition that the same physical system (i.e., the same spatio-temporal region) can be described at different levels according to different notions of physical state appropriate for that level. Note, however, that in the above case one is limited to groupings (that is, combinations) of the given physical states, and of course only to those groupings that allow for reliable transitions between them.
The latter conclusion points in another direction: if computational descriptions are to “mirror” the causal structure of physical systems, then the causal topology of a set of physical states
A Challenge for Chalmers’ Notion of Implementation
Unfortunately, Chalmers neither provides criteria for the formation of (basic) physical types nor for more abstract types, but is rather satisfied with the mere existence of a grouping of types into more complex types. In his second definition, he simply notes that his “[…] definition uses maximally specific physical states s rather than the grouped state-types”
Consider system
Figure 7. A graph of the causal structure of the physical system
This construction can be generalized to allow a switch system
Proof: Consider the physical switch system with
Figure 8. A graph depicting the physical system
Pick an arbitrary FSA
Figure 9. Transitions in
Thus, for every transition (
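Since the details of the proof are only partially preserved here, the following is a rough, assumed sketch of the slicing idea: the switch system’s state-types are individuated temporally (“switch up during slice k”), which yields as many distinct physical state-types as the target FSA has reachable states, so any such FSA can be mapped onto them.

```python
def slice_states(fsa_states):
    """Assign each FSA state its own temporally individuated physical
    state-type of the one-switch system (one time slice per state).
    The pairing below is an illustrative assumption, not the paper's
    exact construction."""
    return {q: ("switch-up", k) for k, q in enumerate(fsa_states)}

# Any 3-state FSA gets three pairwise distinct physical state-types:
mapping = slice_states(["q0", "q1", "q2"])
```

The mapping is trivially injective, which is all the correspondence-style definition asks for once the state-types have been individuated this finely.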
There is an obvious weak spot in the above argument, marked by ‘(*).’ One could argue that even though other input/output states are not applicable (because they have been just so defined), they should not be excluded
Figure 10. An automaton which could be used to argue against the validity of the Slicing Theorem.
Define
In checking whether the switch system so defined implements the automaton under
Whether this kind of argument is valid or not is left for the reader to decide. Even if the objection is correct and the theorem has to be restricted to cases where no two transitions use the same symbol, a strange aftertaste remains: the simple switch system will still implement a restricted, yet infinite class of automata. Besides, every language accepted by an automaton with
Before we discuss the consequences of the Slicing Theorem for general theories of implementation, note that this result can also be extended to CSAs, “which differ from FSAs only in that an internal state is specified not by a monadic label
“A physical system
Chalmers points out that “a natural requirement for such a decomposition is that each element correspond to a distinct physical region within the system […] the same goes for the complex structure of inputs and outputs.” Again, he claims (wrongly) that “state-transition relations are isomorphic in the obvious way.” Furthermore, he is convinced that his CSA model protects the notion of implementation from the threat of vacuity: “What counts is that a given system does not implement every computation […] This is what is required for a substantial foundation for AI and cognitive science, and it is what the account I have given provides” (1994, p. 397). This can be contrasted with the following theorem:
A physical state transition corresponding to a combinatorial state transition can then be defined as the transition taking place by pressing all 3 switches of the system during subinterval
Figure 11. Transitions in the 3-switch system that are mapped onto all n automata transitions.
This kind of result that one physical system can implement “too many” computations is exactly what Chalmers tried to avoid when he proposed his definition against the background of Putnam’s construction showing that every ordinary open system implements every finite state automaton without input and output
1) While Putnam’s construction shows how to implement a particular “run” of a computation, i.e., a particular sequence of state transitions, the above construction models the complete state-transitional structure of the automaton. That is why it only needs a finite (i.e., bounded) number of states to perform arbitrarily long computations (i.e., all possible computational sequences), whereas Putnam’s construction requires an unbounded number of states (depending on the length of the respective computational sequence). The above construction exploits the fact that at some level of description physical configurations can be viewed as
2) Tokens of state types such as “switch up on Mondays” can be easily individuated (we can check if it is—currently or at some other time—Monday and we can check whether the switch is in position “up” or “down”). These input states have the predictive capacity that Putnam’s construction was criticized for lacking; it is known ahead of time what tokens of these types will look like and how they can be produced (a necessary requirement if one wants to control the input to a system!).
3) State transitions are reliable. Pressing switches is certainly as reliable an action as any one can imagine (unless the switch is defective, etc., which can be accounted for by adding conditions of normal operations to the definition of implementation as in the case of Definition 2). The same holds for the light bulbs being lit or not lit under the respective circumstances: given a setup corresponding to the wire diagram, the respective light bulbs will be lit reliably if the switches are in a position such that current can flow (and neither the battery, nor the wires, nor the light bulbs are defective). Transitions from one time interval into the next are obviously reliable as well, as they happen without further ado (and rely on the physical laws regarding the permanence of objects which are not subject to internal decay processes and/or external influences).
4) State transitions support counterfactuals. To see this, note that input occurs only “within time slices,” i.e., whenever a switch is pressed down on Monday, say, the system will reliably change state into “switch down on Monday.” If it is pushed up again, the system will return to state “switch up on Monday.” It can never be the case that the system receives input on Monday and ends up in a state on Tuesday. Thus, any counterfactual of the form “had the system been in state
5) Because of the above and the laws of physics (i.e., circuit theory), the
It seems that the only objection left to the above construction is the nature of the involved physical states, since the main charges (see
There is a better reply to the objection that the unwanted results of the Slicing Theorems result from temporally individuated states. Instead of individuating these states temporally, one could slightly modify the switch system by adding “a clock,” which reliably goes through a fixed cycle of physical states in a given amount of time (12·60·60 states per 12 hours, say). Then one can form the same “slices” that were used in the switch system to implement arbitrarily complex computations, except that the
3 There could be more nodes, if some of them are unreachable from the start state. In that case, one needs to divide the time interval further to obtain new states which can be mapped onto the unreachable ones, but this presents no difficulty.
4 I corrected the misprint in Chalmers’ article by substituting ‘
5 For example, the CSA could compute the “carry” in an addition: it would take the input as a binary number and add it to the number represented by the current state, then transit into a state which represents the sum (modulo 8) and report whether a carry-over has occurred during the addition (by outputting 1, otherwise 0). To illustrate this, assume the automaton is in state [1,0,1] and receives input [1,1]. This makes it transit into state [0,0,0] with output [1]. Had it been in state [1,0,0], then the output would have been [0] and it would have transited to state [1,1,1].
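The arithmetic in this footnote can be checked directly; the helper below is just a sketch of the described behavior (3-bit state, 2-bit input, sum modulo 8, carry as output):

```python
def csa_step(state_bits, input_bits):
    """Add the input to the state (both read as binary numbers),
    keep the sum modulo 8 as the new state, output the carry."""
    to_int = lambda bits: int("".join(map(str, bits)), 2)
    total = to_int(state_bits) + to_int(input_bits)
    new_state = [int(b) for b in format(total % 8, "03b")]
    carry = [1] if total >= 8 else [0]
    return new_state, carry

# [1,0,1] = 5 plus input [1,1] = 3 gives 8: new state [0,0,0], carry 1.
# [1,0,0] = 4 plus 3 gives 7: new state [1,1,1], carry 0.
```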
6 For
7 Note that plausibility of the principle of non-cyclical behavior was one of the main points of attack of Putnam’s argument, see, for example,
Discussion and Conclusion
The analysis of Chalmers’ definitions of implementation for FSAs and CSAs showed that they do not view computation as an abstract way of specifying the causal structure of a physical system such that “the formal state-transitional structure of the computation mirrors the causal state-transitional structure of the physical system”
If one were to go with the first alternative, then isomorphism might still be too strong a requirement, because there are infinitely many equivalent redundant computational descriptions of any given computational system that should all legitimately count as being isomorphic to the causal structure of the physical system. Hence, physical systems in that case would have to be viewed as implementing all computations that are
And if one were to go with the second alternative, it should not be the case that simpler computations which fail to capture and may even violate important parts of the causal structure of a physical system can be viewed as being implemented by that physical system (as is the case with Chalmers’ definitions as shown by the switch system where the switch could be pressed only once). Moreover, the fact that even very simple physical systems can be viewed as implementing very
It is important to note that the Slicing Theorems pose a threat not only to Chalmers’ definitions of implementation, but also to any “(state-to-state) correspondence view of implementation” (CVI) as well as any “semantic view of implementation” (SVI) (e.g., Copeland 1996,
Two possible, non-exclusive conclusions are implied: computation is not the right kind of explanatory device for causal organizations of physical systems (and as a consequence for theories of mind), and/or CVI/SVIs are not the right kind of approach to a general theory of implementation (as they will fail as soon as a physical theory does not provide a well-defined notion of physical state). Note that CVI/SVIs are certainly applicable if physical states are given, which in real-world hardware design, for example, is obviously the case. Yet, the success of CVI/SVIs with systems that have been purposefully designed so as to allow for an easy state-to-state correspondence between physical states (which are given) and more abstract states should not distract from their failure in the general case (e.g., analog computers are a good example of designed devices that do not provide a clear-cut notion of physical state). Because computations have to be linked to concrete systems (which are described at a certain level) in order to
We are left with various questions unanswered: which (lower) level is the
8 This worry was pointed out by an anonymous reviewer.
References

- (1994). “On Implementing a Computation.” Minds and Machines 4: 391–402. doi:10.1007/BF00974166
- (1996). “Does a Rock Implement Every Finite-State Automaton?” Synthese 108: 310–333. doi:10.1007/BF00413692
- (1994). “Why Everything Doesn’t Realize Every Computation.” Minds and Machines 4: 391–402. doi:10.1007/BF00974167
- (1996). “What is Computation?” Synthese 108: 403–420.
- (1981). Wittgenstein on Rules and Private Language. Blackwell.
- (1996). “Searle’s Abstract Argument Against Strong AI.” Synthese 108: 391–419. doi:10.1007/BF00413696
- (1988). Representation and Reality. MIT Press.
- (1999). “Implementation is Semantic Interpretation.” The Monist 82: 109–130. doi:10.5840/monist19998212
- (1999). “When Physical Systems Realize Functions….” Minds and Machines 9(2): 161–196. doi:10.1023/A:1008364332419
- (2001). “Causal vs. Computational Complexity?” Minds and Machines 11: 534–566. doi:10.1023/A:1011855915651
- (1990). “Is the Brain a Digital Computer?” Proceedings and Addresses of the American Philosophical Association 64: 21–36. doi:10.2307/3130074
- (1992). The Rediscovery of Mind. MIT Press.
- (1987). “Kripke on Functionalism and Automata.” Synthese 70: 1–22. doi:10.1007/BF00414025
1 Example: “The automaton transits from state A to state B on input a producing output b” for the type and “After five inputs the automaton is in state A” for the token interpretation.
2 A reason for this conceptual conflation might be that some physical systems might assume or instantiate any state (type) only at most once. For example, the construction in Putnam (1988) is essentially based on a physical principle that guarantees that (open) physical systems are always in different physical states at different times.