Published in: Minds and Machines 2/2014

01.05.2014

On the Claim that a Table-Lookup Program Could Pass the Turing Test

By: Drew McDermott

Abstract

The claim has often been made that passing the Turing Test would not be sufficient to prove that a computer program was intelligent because a trivial program could do it, namely, the “Humongous-Table (HT) Program”, which simply looks up in a table what to say next. This claim is examined in detail. Three ground rules are argued for: (1) That the HT program must be exhaustive, and not be based on some vaguely imagined set of tricks. (2) That the HT program must not be created by some set of sentient beings enacting responses to all possible inputs. (3) That in the current state of cognitive science it must be an open possibility that a computational model of the human mind will be developed that accounts for at least its nonphenomenological properties. Given ground rule 3, the HT program could simply be an “optimized” version of some computational model of a mind, created via the automatic application of program-transformation rules [thus satisfying ground rule 2]. Therefore, whatever mental states one would be willing to impute to an ordinary computational model of the human psyche one should be willing to grant to the optimized version as well. Hence no one could dismiss out of hand the possibility that the HT program was intelligent. This conclusion is important because the Humongous-Table Program Argument is the only argument ever marshalled against the sufficiency of the Turing Test, if we exclude arguments that cognitive science is simply not possible.
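The core of the Humongous-Table idea can be made concrete in a few lines. Below is a minimal, purely illustrative sketch (not from the paper): the program's entire "intelligence" is a finite table mapping the sequence of judge inputs so far to the next reply. The names and sample utterances are invented; an actual HT table would need an entry for every sensible conversation prefix, a number far beyond 10^100.

```python
# Illustrative sketch of a table-lookup (Humongous-Table) responder.
# The key is the tuple of all judge utterances so far; the value is
# the examinee's next reply. Entries here are invented examples.

HT_TABLE = {
    (): "Hello, I'm Bertha.",
    ("Hi Bertha. How are you?",): "Can't complain. And you?",
    ("Hi Bertha. How are you?", "Fine. What do you do all day?"):
        "I mostly garden and spoil my nephews.",
}

def ht_reply(judge_inputs):
    """Look up the reply for the conversation so far.

    The examinee's own prior contributions need not be part of the
    key, since they are determined by the judge's inputs.
    """
    return HT_TABLE[tuple(judge_inputs)]
```

The point of the sketch is that the lookup involves no reasoning at all; whatever intelligence the replies display must already be baked into the table.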


Footnotes
1
I will capitalize the word “test” when referring to the Turing Test as a concept, and use lower case when referring to particular test occurrences.
 
2
I realize that the use of this slang term makes the paper sound a bit frivolous. I take this risk because the size of the required table will easily be seen to be beyond comprehension, and it’s important to keep this in mind. I don’t think words like “vast”, “stupendous”, “gigantic” really do the job. In (Dennett 1995, Ch. 1) the word “Vast” with a capital “v” is used for numbers in the range I discuss in this paper, numbers of magnitude 10^100 and up.
 
3
Or some other arbitrary time limit fixed in advance; but I’ll use an hour as the limit throughout this paper. The importance of this rule will be seen below.
 
4
Further clerical details: Turns end when the person enters two newlines in a row, or exceeds time or character limits (including further constraints imposed later). As explained in section “The Argument and Its Role”, the judge gets a chance to edit their entries before any part of them is sent to the interlocutor. (I will use third-person plural pronouns to refer to a singular person of unimportant, unknown, or generic gender, to avoid having to say “him or her” repeatedly.) Judge inputs that violate constraints such as character limits must be edited until the constraints are satisfied. The two newlines between turns don’t count as part of the utterance on either side. We’ll always let the judge go first, but they can type the empty string to force the interlocutor to be the first to “speak”. The interview ends after an hour or if the judge and interlocutor successively type the empty string (in either order). Note that I’ll sometimes use words like “speak” or “say” when I mean “type”, only because the latter sounds awkward in some contexts.
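The two clerical rules above (turns delimited by a blank line, interview ending on two successive empty turns) can be sketched as follows. This is my own illustrative rendering, not code from the paper; the function names are invented.

```python
def split_turns(transcript):
    """Split a typed transcript into turns.

    Two newlines in a row end a turn; per the rules, they count as
    part of the utterance on neither side.
    """
    return transcript.split("\n\n")

def interview_over(turns):
    """The interview ends when the judge and the interlocutor
    successively type the empty string (in either order)."""
    return len(turns) >= 2 and turns[-2] == "" and turns[-1] == ""
```

Note that an empty turn is still a turn: typing the empty string is how the judge forces the interlocutor to speak first.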
 
5
Block’s term for what I am calling the “judge”.
 
6
It’s obviously necessary to insert something like the ## marks because otherwise there would be many possible interchanges that could begin ABC. It’s not clear whose turn it is to speak after a conversation beginning “Veni … Vidi … Vici. Ave Caesar! A fellow Latin scholar. Great!” Block probably just assumed some such marker would end A, B, and C. I’m making it explicit.
 
7
Actually, just to get the chronology right, it’s important to note that Block described a slightly different version of the program in Block (1978, p. 281) in order to make a somewhat different point. Very confusingly, an anthology published 2 years later included a slightly condensed version of the paper under the same title (Block 1980), a version that lacks any mention of the Humongous-Table Program.
 
8
Shannon and McCarthy (1956) require that a definition of “thinking”, in the case of an allegedly intelligent machine, “must involve something relating to the manner in which the machine arrives at its responses”.
 
9
Block talks as though the “programmers” might emulate his Aunt Bertha. Actually, they can be somewhat more creative if they want to. On different branches of the tree, different “personalities” might emerge. But it will be much simpler, and sacrifice no generality, to speak as though each tree emulated one personality, and we’ll go along with calling her “Aunt Bertha” or “AB”. I have my doubts that we will ever be able to simulate a particular person in enough detail to fool their close friends. But that’s not necessary. If someone creates a program to compete in a Turing test and bases it on their aunt, it doesn’t have to mimic her that closely. If it sounds to the judges like it might be someone’s aunt, that’s good enough.
 
10
Equivalently, odd-length lists of strings.
 
11
No time can be greater than the number of milliseconds in an hour, but at “run time” the actual time left determines whether the interview comes to an end before the judge and examinee give the signal.
 
12
If we want to allow interlocutors to edit lines before they are seen by the judge, then times should be associated with completed lines, not individual characters. If we really want to avoid reaction times completely, then we can introduce random delays (as we do for the judge; see below) or we could have two sets of judges, one to conduct the interviews and another to review the transcripts and decide who’s human. But that’s a rather drastic change to the rules.
 
13
One more restriction: timed strings can’t have times so short that the typing speed exceeds the rate at which a plausible human can type. Of course, if the examinee types at blinding speed it will be easy for the judge to identify, but if we’re considering the set of all possible examinees, as we will in section “Argument Two: Why the Possibility of HTPLs Proves Nothing”, it’s necessary to set bounds on their abilities to keep the set finite.
 
14
We could do the same with the interlocutor’s output, but it’s traditional to put the burden of replicating human timing and error patterns on the examinees.
 
15
For now, I will be casual about the distinction between a strategy tree—a mathematical object—and the incarnation of a strategy tree in a physical medium. How the latter might work is discussed in section “Argument One: Why the Possibility of HTPSs Proves Nothing”.
 
16
Braddon-Mitchell and Jackson seem oddly oblivious to the fact that real people grow and then wither over their lifespans. Perhaps “behavior” for them includes changes in body shape. For our purposes the robot’s lifespan need merely be an hour.
 
17
This test is a blend of what Harnad calls T3 and T4 in (Harnad 2000), depending on whether the automaton has to be able to do things like blush or not.
 
18
If we opt instead for all mathematically possible input sequences, then for all but a vanishingly small fraction scientific induction does not work; the universe is mostly white noise. In the ones where scientific induction does work, all but a vanishingly small fraction have different laws of nature from those in the real world. At this point I no longer believe that the game tree has been specified precisely enough for me to conceive of it.
 
19
Of course, a truly intelligent examinee would have to have delusional beliefs about its physical appearance, so as to be able to answer questions such as “How tall are you, Bertha?”, and “Are you left- or right-handed?” (And about its surroundings; see “If We Neglect Phenomenology, Computational Models of People are Possible”.) It will also have to have delusional memories of, say, having eaten strawberries and cream, or having ridden a snowmobile, or having done some real-world thing, or the judges will get suspicious. Whether we can attribute even delusional beliefs to the HT program is an issue we take up in section “If We Neglect Phenomenology, Computational Models of People are Possible”.
Strategically optimal or not, is it ethical to create a model of a person, run it for an hour so it can take a test, reset it to its original state, run it again a few times, then junk it?
 
20
Jorge Luis Borges’s vision (Borges 2000) of a library of all possible books of a certain size conveys the idea.
 
21
For the exact rules, see Appendix A in Supplementary Material.
 
22
See Appendix A in Supplementary Material.
 
23
And perhaps a cognitive psychologist.
 
24
I allude once again to “The Library of Babel”.
 
25
Cf. (Culbertson 1956), although Culbertson was talking about a somewhat different set of robot-control mechanisms. He pointed out that they were “uneconomical”, which must be the greatest understatement of all time.
 
26
In (Block 1978), Block points out that “… If it [the strategy tree] is to ‘keep up’ with current events, the job [of rebuilding it] would have to be done often” (p. 295). How such a huge thing is to be rebuilt “often” is not clear.
 
27
There might be issues of wide vs. narrow content here (Botterill and Carruthers 1999), but they probably take a back seat to problems raised by the fact that x and her world are fictional.
 
28
It’s odd that no one has, as far as I know, raised this issue before. If the surroundings of the participants are not made uniform the judge might be able to figure out who’s who by asking the participants to describe the location where they’re sitting.
 
29
When a leaf state is reached, the FSM halts.
 
30
This is related to the function TS described in section “If We Neglect Phenomenology, Computational Models Of People Are Possible”, but that one ignored O, and took a series of inputs as argument.
 
31
It is, of course, just a coincidence that Turing’s name is on both the Turing Test and the Turing machine; he never linked the two, if you don’t count vague allusions.
 
32
Using multiple tapes is a convenient device that doesn’t change the computational power of Turing machines (Homer and Selman 2011, Ch. 2).
 
33
Another example is Searle’s (1980) “Chinese Room” argument. One reason it is so easy to fall into this trap is that the inventors of the first computers resorted so often to words such as “memory” to describe pieces of these new things, and we’ve been stuck with them ever since. But I confess that in teaching intro programming I get students into the right mindset by pretending the computer is a “literal-minded assistant” or some such thing, that variables are “boxes” this assistant “puts numbers into”, and so on.
 
34
This may or may not be the “real” machine, depending on whether machine language is executed by a microcode interpreter. And if the computer has several “cores”, should we think of it as a committee?
 
35
Recall that in section “The Argument and Its Role” we “optimized” keys by removing the examinee’s contributions to the dialogue.
 
36
Of course, some people contend that it is absurd to deny a creature phenomenal consciousness if it doesn’t seem to believe it lacks anything (Dennett 1978; McDermott 2001).
 
37
For the syntax of the programming language used in what follows, see Appendix 1.
 
38
A set of deterministic processors acting asynchronously in parallel would be nondeterministic, and this nondeterminism would be eliminated when we switch to a single processor. But I argued above (section “If We Neglect Phenomenology, Computational Models of People are Possible”) that a judge would be unable to tell the difference between a deterministic and nondeterministic program.
 
39
It may seem unusual to compute a new knowledge base rather than make changes to the old one, but it’s a standard move made for technical reasons; the compiler is supposed to eliminate any inefficiencies that result from this device. I will take this opportunity to insert the standard disclaimer about the term “knowledge base”: It should really be called the “belief base”, but for some reason that term hasn’t caught on.
 
40
One might object that a person sentenced to capital punishment could always get a last-minute reprieve from the governor; their hopes and dreams are never necessarily futile. So imagine someone poisoned by an irreversible infusion of nanobots that snip out pieces of brain one by one until after an hour the victim is dead.
 
41
Of course, she can discuss them, and probably will if the judge brings them up.
 
42
So the state of remembering the name of the judge is mediated by the disjunctive state consisting of all string sequences in which the judge tells AB their name and AB is able to recite it correctly later.
 
43
If we supply a special input channel from which random numbers are read, analogous to a tape containing random bits for a Turing machine (section The Sensible-String Table Must Not Have Been Built by Enacting All Possible Conversations), then we can treat randomness elimination as a special case of input anticipation.
 
44
In this appendix I use the word “branch” to mean something different from the meaning explained in section “The Argument and Its Role”. Here it means a decision point in a program: an “if” statement, conditional jump, or the like.
 
45
Although it’s hard to be completely sure of what happens in 10^445 branches.
 
46
How come I haven’t had to treat \({\tt KB}_{{\tt new}}\) and \({\tt T}_C\) the same way I handled R ? I could have, but it’s not necessary, because the name reuse doesn’t actually cause any confusion.
 
47
If you really, really want the program to be isomorphic to the HTPL, you could transform it once again by converting it to a loop with an iteration-counting variable, adding a test for the appropriate value of this variable to every test of the “if”, and replacing the semicolons with “else”s. A transformation to accomplish this (“loop imposition”?) is left as an exercise for the reader.
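The “loop imposition” transformation sketched in this footnote can be shown on a toy example. This is my own illustration, with invented statement names: a straight-line sequence of three statements is rewritten as a counter-driven loop in which each original statement is guarded by a test on the counter, the semicolons having become “else”s.

```python
# Straight-line program to transform: s1(); s2(); s3()
log = []
s1 = lambda: log.append("greet")
s2 = lambda: log.append("answer")
s3 = lambda: log.append("sign-off")

# After "loop imposition": one loop over an iteration counter, with
# each original statement guarded by a test on the counter's value.
# The behavior is identical to running s1(); s2(); s3() in sequence.
i = 0
while i < 3:
    if i == 0:
        s1()
    elif i == 1:
        s2()
    else:
        s3()
    i += 1
```

The transformed program does exactly the same work, just routed through a dispatch on the counter, which is what makes it structurally isomorphic to a table-lookup loop.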
 
48
See “The Argument and Its Role” for why the length of a key string = the number of judge inputs so far.
 
References
Allen, R., & Kennedy, K. (2001). Optimizing compilers for modern architectures: A dependence-based approach. San Francisco: Morgan Kaufmann.
Bertsekas, D. P. (1987). Dynamic programming, deterministic and stochastic models. Englewood Cliffs, NJ: Prentice-Hall.
Binmore, K. (2007). Playing for real: A text on game theory. Oxford: Oxford University Press.
Block, N. (1978). Troubles with functionalism. In C. W. Savage (Ed.), Perception and cognition: Issues in the foundation of psychology, Minnesota studies in the philosophy of science (pp. 261–325). USA: University of Minnesota Press.
Block, N. (Ed.) (1980). Readings in the philosophy of psychology (Vol. 2). Cambridge, MA: Harvard University Press.
Block, N. (1981). Psychologism and behaviorism. The Philosophical Review, 90(1), 5–43.
Borges, J. L. (2000). The library of Babel. In The total library: Non-fiction, 1922–1986 (pp. 214–216) (trans: Weinberger, E.).
Botterill, G., & Carruthers, P. (1999). The philosophy of psychology. Cambridge: Cambridge University Press.
Braddon-Mitchell, D. (2009). Behaviourism. In J. Symons & P. Calvo (Eds.), The Routledge companion to philosophy of psychology (pp. 90–98). London: Routledge.
Braddon-Mitchell, D., & Jackson, F. (2007). Philosophy of mind and cognition (2nd ed.). Oxford: Blackwell Publishing.
Braithwaite, R., Jefferson, G., Newman, M., & Turing, A. (1952). Can automatic machines be said to think? (BBC radio broadcast). Also in (Copeland 2004).
Chisholm, R. (1957). Perceiving. Ithaca: Cornell University Press.
Christian, B. (2011). The most human human: What talking with computers teaches us about what it means to be alive. New York: Doubleday.
Copeland, B. J., & Proudfoot, D. (2009). Turing’s test: A philosophical and historical guide. In Epstein et al. 2008 (pp. 119–138).
Culbertson, J. T. (1956). Some uneconomical robots. In Shannon and McCarthy 1956 (pp. 99–116).
Davidson, D. (1987). Knowing one’s own mind. In Proceedings and addresses of the American Philosophical Association (Vol. 60, pp. 441–458). (Also in Davidson, D. (2001). Subjective, intersubjective, objective. New York and Clarendon: Oxford University Press, pp. 15–38).
Dennett, D. C. (1978). Toward a cognitive theory of consciousness. In D. C. Dennett (Ed.), Brainstorms (pp. 149–173). Cambridge, MA: Bradford Books/MIT Press. (Originally in Savage 1978).
Dennett, D. C. (1985). Can machines think? In M. Shafto (Ed.), How we know (pp. 121–145). San Francisco: Harper and Row.
Dennett, D. C. (1995). Darwin’s dangerous idea: Evolution and the meanings of life. New York: Simon and Schuster.
Dowe, D. L., & Hájek, A. R. (1997). A computational extension to the Turing test. Technical Report 97/322, Department of Computer Science, Monash University.
Dowe, D. L., & Hájek, A. R. (1998). A non-behavioural, computational extension to the Turing Test. In Proceedings of the international conference on computational intelligence and multimedia applications (pp. 101–106). Gippsland, Australia.
Epstein, R., Roberts, G., & Beber, G. (2008). Parsing the Turing test: Philosophical and methodological issues in the quest for the thinking computer. New York: Springer.
Fodor, J. (1975). The language of thought. New York: Thomas Y. Crowell.
French, R. M. (1990). Subcognition and the limits of the Turing Test. Mind, 99(393), 53–65. [Reprinted in (Shieber 2004), pp. 183–197].
Furht, B., & Escalante, A. (Eds.) (2010). Handbook of cloud computing. New York: Springer.
Geach, P. (1957). Mental acts. London: Routledge and Kegan Paul.
Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42, 335–346.
Harnad, S. (1991). Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines, 1(1), 43–54.
Hayes, P., & Ford, K. (1995). Turing Test considered harmful. In Proceedings of IJCAI (Vol. 14, pp. 972–977).
Hodges, A. (1983). Alan Turing: The enigma. New York: Simon and Schuster.
Homer, S., & Selman, A. L. (2011). Computability and complexity theory. New York: Springer.
Humphrys, M. (2008). How my program passed the Turing Test. In Epstein et al. 2008 (pp. 237–260).
Jones, N., Gomard, C., & Sestoft, P. (1993). Partial evaluation and automatic program generation. With chapters by L. O. Andersen and T. Mogensen. Prentice Hall International.
Kam, T. (1997). Synthesis of finite state machines: Functional optimization. Boston: Kluwer Academic.
Kirk, R. (1995). How is consciousness possible? In T. Metzinger (Ed.), Conscious experience (pp. 391–408). Paderborn: Ferdinand Schöningh. (English edition published by Imprint Academic).
Knuth, D. E. (1998). The art of computer programming: Seminumerical algorithms (3rd ed.). Reading, MA: Addison-Wesley.
Leigh, J. (2006). Applied digital control: Theory, design and implementation (2nd ed.). New York: Dover.
Lenat, D. B. (2009). Building a machine smart enough to pass the Turing Test: Could we, should we, will we? In Epstein et al. 2008 (pp. 261–282).
McDermott, D. (2001). Mind and mechanism. Cambridge, MA: MIT Press.
Millican, P., & Clark, A. (1996). The legacy of Alan Turing. Oxford: Clarendon Press.
Perlis, D. (2005). Hawkins on intelligence: Fascination and frustration. Artificial Intelligence, 169, 184–191.
Purtill, R. (1971). Beating the imitation game. Mind, 80(318), 290–294. [Reprinted in (Shieber 2004), pp. 165–171].
Rothschild, L. (1986). The distribution of English dictionary word lengths. Journal of Statistical Planning and Inference, 14(2), 311–322.
Russell, S., & Norvig, P. (2010). Artificial intelligence: A modern approach (3rd ed.). Englewood Cliffs, NJ: Prentice Hall.
Searle, J. R. (1980). Minds, brains, and programs. The Behavioral and Brain Sciences, 3, 417–424.
Shannon, C. (1950a). A chess-playing machine. Scientific American, 182(2), 48–51. (Reprinted in Newman, J. R. (1956). The world of mathematics (Vol. 4, pp. 2124–2133). New York: Simon and Schuster).
Shannon, C. (1950b). Programming a computer for playing chess. Philosophical Magazine, 7–41(314), 256–275. (Reprinted in Levy, D. N. L. (Ed.) (1988). Computer chess compendium. New York: Springer).
Shannon, C. E., & McCarthy, J. (Eds.) (1956). Automata studies. Annals of Mathematics Studies (Vol. 34). Princeton: Princeton University Press.
Sloman, A., & Chrisley, R. (2003). Virtual machines and consciousness. Journal of Consciousness Studies, 10(4–5), 6–45. [Reprinted in (Holland 2003), pp. 133–172].
Smith, S., & Di, J. (2009). Designing asynchronous circuits using NULL conventional logic (NCL). San Rafael: Morgan and Claypool Publishers.
Wegener, I. (1991). The complexity of Boolean functions. London: Wiley.
Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. San Francisco: W. H. Freeman.
Metadata
Title: On the Claim that a Table-Lookup Program Could Pass the Turing Test
Author: Drew McDermott
Publication date: 01.05.2014
Publisher: Springer Netherlands
Published in: Minds and Machines / Issue 2/2014
Print ISSN: 0924-6495
Electronic ISSN: 1572-8641
DOI: https://doi.org/10.1007/s11023-013-9333-3
