
18-07-2018

Computers Aren’t Syntax All the Way Down or Content All the Way Up

Author: Cem Bozşahin

Published in: Minds and Machines | Issue 3/2018


Abstract

This paper argues that the idea of a computer is unique. Calculators and analog computers are not different ideas about computers, and nature does not compute by itself. Computers, once clearly defined in all their terms and mechanisms rather than enumerated by behavioral examples, can be more than instrumental tools in science, and more than a source of analogies and taxonomies in philosophy. They can help us understand semantic content and its relation to form. This can be achieved because they have the potential to do more than calculators, which are computers designed not to learn. Today’s computers are not designed to learn; rather, they are designed to support learning; therefore, any theory of content tested by computers that currently exist must be of an empirical, rather than a formal, nature. If they are someday designed to learn, we will see a change in roles, requiring an empirical theory of the Turing architecture’s content, using the primitives of learning machines. This way of thinking, which I call the intensional view of computers, avoids the problems of analogies between minds and computers. It focuses on the constitutive properties of computers, showing clearly how they can help us avoid the infinite regress in interpretation, and how we can clarify the terms of the suggested mechanisms to facilitate a useful debate. Within the intensional view, syntax and content in the context of computers become two ends of physically realizing correspondence problems in various domains.


Footnotes
1
The test compares two physical-medium-independent verbal exchanges, one with a machine and one with a human; a human questioner decides which is which.
 
2
The combinators S and K are respectively the lambda terms \(\lambda x\lambda y\lambda z.xz(yz)\) and \(\lambda x\lambda y.x\).
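As an illustration (not part of the original footnote), the two combinators can be written directly as functions; the following Python sketch mirrors the lambda terms above and checks that S K K behaves as the identity:

```python
# S and K as curried Python lambdas, mirroring footnote 2.
S = lambda x: lambda y: lambda z: x(z)(y(z))   # S = \x.\y.\z. x z (y z)
K = lambda x: lambda y: x                      # K = \x.\y. x

I = S(K)(K)          # S K K reduces to the identity combinator
print(I(42))         # 42
print(I("syntax"))   # syntax
```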
 
3
A compiler is a program that translates from one programming language to another. Typically, the target language is lower-level than the source language. The final stage of compiling is called code generation, in which all intermediate representations from the levels of translation are dispensed with, and the code is left solely in the form of the primitive instruction set of the intended architecture. There is no syntactic translation at run-time for compiled programs; there is just execution.
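As an illustrative sketch (not the author's example), the following toy code generator targets a hypothetical two-instruction stack machine; after generation, nothing of the source syntax remains, and running the program is pure execution of primitive instructions:

```python
# Toy code generation: arithmetic expressions -> a hypothetical stack machine
# with only PUSH and ADD. No source syntax survives into run-time.

def generate(expr):
    """Translate a nested ('+', left, right) / number tree into instructions."""
    if isinstance(expr, tuple):
        _, left, right = expr
        return generate(left) + generate(right) + [("ADD",)]
    return [("PUSH", expr)]

def execute(program):
    """Run-time: no translation, just execution of the instruction set."""
    stack = []
    for instr in program:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        else:  # ADD
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

code = generate(("+", 1, ("+", 2, 3)))  # source expression: 1 + (2 + 3)
print(code)           # [('PUSH', 1), ('PUSH', 2), ('PUSH', 3), ('ADD',), ('ADD',)]
print(execute(code))  # 6
```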
 
4
Take f to be the representation relation. It can be, for example, \(f(\text{low voltage})=0\), meaning we map the physical property of low voltage to bit 0. Decoding and encoding are different uses of a representation: decoding is essentially using f, and encoding is using \(f^{-1}\). For example, when we type 'a' in our word processor, we encode whatever physical property (some voltage levels) is assigned to it down below. Good representations are therefore needed technologically, in both digital and analog computers, to be sure about the physical component.
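To make the roles of f and \(f^{-1}\) concrete, here is a minimal sketch (an illustration, not the author's code) of a representation relation as a finite map from physical properties to bits:

```python
# The representation relation f of footnote 4 as a finite map.
f = {"low voltage": 0, "high voltage": 1}          # f(low voltage) = 0
f_inverse = {bit: prop for prop, bit in f.items()}

def decode(physical_property):
    """Decoding: an application of f, physical -> abstract."""
    return f[physical_property]

def encode(bit):
    """Encoding: an application of f^-1, abstract -> physical."""
    return f_inverse[bit]

print(decode("low voltage"))  # 0
print(encode(1))              # high voltage
```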
 
5
The authors note that the compute cycle is the reverse of the experiment cycle in, for example, physics. In physics we let the abstract level do its work, then encode its result in some physical object to see whether the theory predicted its presence, location, etc. correctly, as in Fig. 2a. The physical level does not do the work; its work is predicted. In the case of computers, the physical layer does the work (therefore it is not a simulation), and we try to predict its result by an algorithm or an analogy (the implicitly computes relation of Fig. 1), as in Fig. 2b. Notice that what allows the physical layer to carry out its work is a series of translations of, say, the averaging algorithm or equations into the terms of the computer. We compare its physical work with the prediction at the abstract level, i.e. by an algorithm or equation.
Notice also that a computer scientist seeking an explanation has to reverse the modeling relation too, after it is established, because it only serves to establish the correctness of the algorithm, assuming the underlying physical machinery has been confirmed. What we do with the corrected algorithm afterwards is much like the diagram in Fig. 2a. Physics and computer science may therefore differ in how they use the model-theory-technology cycles, but it is clear that what amounts to theory in physics and in computer science is based on the same process of thinking.
 
6
One implication of this result is that nothing computes in nature unless we map it to a computational problem in our thinking. Pancomputationalism and born-again computationalism seem to be some form of analogical (if not romantic) reasoning. ACM’s (2012) centenary celebration of Turing by all the living Turing-award winners makes the point quite clear: “This development [algorithmic thinking outside computer science] is an exquisite unintended consequence of the fact that there is latent computation underlying each of these phenomena [cells, brains, market, universe, etc.], or the ways in which science studies them.” [emphasis added]
Nature-inspired computing is not to be confused with the ‘nature computes’ movement. Personally, I would not lose too much sleep if some problems are not amenable to computationalist understanding. Discovery by method has its limits.
 
7
Algorithmic complexity theory is based on Turing’s notion of ‘next’, such as the next step, next state, and next input. Problem size in the theory is measured with this concept. It is not a physical concept, but obviously physically realizable; see Bozşahin (2016) for more philosophical implications.
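The notion of ‘next’ can be made concrete with a small sketch (an illustration with a made-up machine, not from the paper): each call of the step function yields the next configuration, and complexity theory counts how many such steps are needed as the input grows.

```python
# Counting Turing-style 'next' steps for a toy machine that scans a block of
# 1s and halts at the first blank; the step count grows linearly with input size.

def step(state, tape, head, delta):
    """Produce the next configuration from the current one."""
    symbol = tape.get(head, "_")                   # '_' marks a blank cell
    new_state, write, move = delta[(state, symbol)]
    tape[head] = write
    return new_state, head + (1 if move == "R" else -1)

delta = {
    ("scan", "1"): ("scan", "1", "R"),
    ("scan", "_"): ("halt", "_", "R"),
}

tape = {i: "1" for i in range(5)}                  # input of size n = 5
state, head, steps = "scan", 0, 0
while state != "halt":
    state, head = step(state, tape, head, delta)
    steps += 1
print(steps)                                       # 6 steps: n + 1, linear in n
```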
 
8
Problems in P have polynomial-time solutions, where the polynomial is in the problem size in the sense of footnote 7. For NP problems, a given solution can be checked with polynomial effort. Whether P and NP are the same is an open problem in computer science, with serious implications in economics, the social sciences and the natural sciences; see Fortnow (2013) for an entertaining coverage of these aspects.
 
9
To avoid a cryptic mathematical exposition of the problem, I follow the authors’ informal description: the Steiner Tree Problem is equivalent in real life to building a road system of minimum length for n towns, possibly making intersections outside of towns. If the number of vertices that can be formed outside of towns (called Steiner vertices) is not known, the problem is NP-hard. If the question is whether we have a solution of length at most m, then it is NP-complete. We are discussing the latter problem.
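To illustrate the polynomial check (a sketch with hypothetical town names, not the authors' formulation): given a proposed road system and the bound m, verification only requires summing the lengths and testing that all towns are connected, both of which take polynomial effort.

```python
# Verify a proposed road system for the decision version of the problem:
# does this network connect all towns with total length at most m?

def check_solution(towns, roads, m):
    total = sum(length for _, _, length in roads)
    vertices = set(towns) | {v for a, b, _ in roads for v in (a, b)}
    parent = {v: v for v in vertices}              # simple union-find
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for a, b, _ in roads:
        parent[find(a)] = find(b)
    connected = len({find(t) for t in towns}) == 1
    return connected and total <= m

towns = ["A", "B", "C"]
roads = [("A", "X", 1.0), ("B", "X", 1.0), ("C", "X", 1.0)]  # X: a Steiner vertex
print(check_solution(towns, roads, 3.5))   # True: connected, total length 3.0 <= 3.5
```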
 
10
For example, winning at chess every time in a maximum of n moves from any starting move would be as easy as testing for a checkmate on the board. If the opponent also knows the algorithm, then winning the game reduces to who starts the game. Even a non-player can undertake the test (the checkmate condition) without understanding the game, if the rules are given on a piece of paper.
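To underline how mechanical the test is, here is a minimal sketch assuming the third-party python-chess package; the package and the chosen position are my illustration, not the author's:

```python
# Checking the checkmate condition needs no understanding of strategy,
# only the rules, which the python-chess library encodes.
import chess

# Position after fool's mate (1. f3 e5 2. g4 Qh4#), given as a FEN string.
board = chess.Board("rnb1kbnr/pppp1ppp/8/4p3/6Pq/5P2/PPPPP2P/RNBQKBNR w KQkq - 0 3")
print(board.is_checkmate())   # True, decided mechanically
```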
 
11
For semantic content, we can take the perspective of the Language of Thought (LOT) hypothesis of Fodor (1975), which requires primitives (or core concepts) to support it, or of its rival, map theory, which states that we steer our cognitive functions by a system of belief maps rather than by the individuated sentential statements of LOT. Something has to support the system of belief and the maps. “The brain can do this” is no more an empirical theory than Turing’s “being suitably programmed” was for intelligence and understanding. A good place to start for both is to give them a process ontology. Although map theorists shun the computer analogy, the intensional view is not an analogy but a constitutive principle, so it might also help those theorists.
 
12
See also Searle’s objection to the original replies in the (1980) article, and the volume of Preston and Bishop (2002), which contains further new responses. Although Searle also contributed to the volume, he was not given the opportunity to respond to the articles (Preston 2002:46).
 
13
Rogers (1959):115 provided the first Chinese Room-like argument, in which the input-output materials are not Chinese symbols but numbers. He was interested in the computation of number-theoretic functions by a human. He placed a man in the room, equipped with a finite set of instructions to compute numbers, who output them for checking. He did not suggest a test to decide between man and machine; his test was the ability to compute any number-theoretic function in a finite amount of time. He assumed that the person inside the room was inexhaustible.
 
14
Pace Copeland (2002), who considers super-Turing computing to be possible, we cannot compute and violate the laws of physics at the same time; see Cockshott et al. (2012) for discussion.
 
15
The Robot Reply suggested that a computer with a body would be like a child learning language.
 
16
I first saw the use of this mythological ‘world turtle’ idea, in the sense closest to the current discussion, in Ross (1967), who attributes it to William James. Its significance for the intensional view is that Ross recalls it as having a “bull’s eye relevance to the study of syntax.”
 
17
Without this assumption, we would not be too far from linguistic relativism of the Sapir variety, in which we would think in a language. Although there are many cases in which linguistic terminology is deeply cultural (for example, the basicness of color terms), this is not a causal link from language to thought, because we have seen tragic cases of having thought but not language, or at least quite unexpected cases of inferential ability (if language caused thought), in the light of little (almost no) linguistic exposure up to puberty. One such case is Genie (Fromkin et al. 1974; Curtiss et al. 1974). Assuming that language is an expression of thought seems to avoid these problems. In the current argument, it is not only on better empirical grounds, but also a necessary consequence of thinking that language is a computational mechanism. Syntactic semantics, on the other hand, requires it as an extra assumption; but let me stress that nobody is denying the role of compositional semantics in syntax. The question is whether syntax alone can cause semantics.
 
18
Ford (2011):70, who defended Searle’s views against Rapaport, is less apathetic, but still analogical in thinking about computers: “if we can get a computer to have meaningful conscious experiences—the road to natural language acquisition and understanding would be clear (as far as Searle is concerned).”
 
19
Keller’s case is different from that of a child who is deaf or blind in the critical period of acquisition. A child in these circumstances can have some access to meanings out there by their own initiative, to relate them to forms. In fact, blind children create form differences for the semantic distinction of look and see, although they cannot experience visual looking or visual seeing. Deaf or hearing children who are born to deaf parents acquire their sign language in the normal time course of language acquisition. Blind children follow a normal course too, as long as they are exposed to language; see Gleitman and Newport (1995) for a summary.
 
Literature
Aaronson, S. (2005). Guest column: NP-complete problems and physical reality. ACM SIGACT News, 36(1), 30–52.
Aaronson, S. (2013). Why philosophers should care about computational complexity. In B. J. Copeland, C. J. Posy, & O. Shagrir (Eds.), Computability: Turing, Gödel, Church, and beyond. Cambridge: MIT Press.
Abend, O., Kwiatkowski, T., Smith, N., Goldwater, S., & Steedman, M. (2017). Bootstrapping language acquisition. Cognition, 164, 116–143.
Bickhard, M. H. (1996). Troubles with computationalism. In W. O’Donohue & R. Kitchener (Eds.), Philosophy of psychology (pp. 173–183). London: Sage.
Block, N. (1978). Troubles with functionalism. In C. W. Savage (Ed.), Minnesota studies in the philosophy of science. Minneapolis: University of Minnesota Press.
Bozşahin, C. (2016). What is a computational constraint? In V. C. Müller (Ed.), Computing and philosophy, Synthese Library 375 (pp. 3–16). Heidelberg: Springer.
Bryant, P. E. (1974). Perception and understanding in young children. New York: Basic Books.
Burgin, M. (2001). How we know what technology can do. Communications of the ACM, 44(11), 82–88.
Cariani, P. (1998). Epistemic autonomy through adaptive sensing. In Intelligent control (ISIC), held jointly with the IEEE international symposium on computational intelligence in robotics and automation (CIRA) and intelligent systems and semiotics (ISAS) (pp. 718–723).
Cockshott, P., Mackenzie, L. M., & Michaelson, G. (2012). Computation and its limits. Oxford: Oxford University Press.
Copeland, B. J., & Shagrir, O. (2011). Do accelerating Turing machines compute the uncomputable? Minds and Machines, 21(2), 221–239.
Curtiss, S., Fromkin, V., Krashen, S., Rigler, D., & Rigler, M. (1974). The linguistic development of Genie. Language, 50(3), 528–554.
Dennett, D. C. (1971). Intentional systems. The Journal of Philosophy, 68(4), 87–106.
Dennett, D. C. (1991). Consciousness explained. New York: Little Brown & Co.
Dewdney, A. K. (1984). On the spaghetti computer and other analog gadgets for problem solving. Scientific American, 250(6), 19–26.
Fodor, J. (1975). The language of thought. Cambridge, MA: Harvard University Press.
Ford, J. (2011). Helen Keller was never in a Chinese Room. Minds and Machines, 21(1), 57–72.
Fortnow, L. (2013). The golden ticket: P, NP, and the search for the impossible. Princeton: Princeton University Press.
Fromkin, V., Krashen, S., Curtiss, S., Rigler, D., & Rigler, M. (1974). The development of language in Genie: A case of language acquisition beyond the “critical period”. Brain and Language, 1(1), 81–107.
Gandy, R. (1980). Church’s thesis and principles for mechanisms. Studies in Logic and the Foundations of Mathematics, 101, 123–148.
Gleitman, L. R., & Newport, E. L. (1995). The invention of language by children: Environmental and biological influences on the acquisition of language. In L. R. Gleitman & M. Liberman (Eds.), Language: An invitation to cognitive science (2nd ed., pp. 1–24). Cambridge, MA: MIT Press.
Graham, P. (1994). On Lisp. Englewood Cliffs, NJ: Prentice Hall.
Horsman, C., Stepney, S., Wagner, R. C., & Kendon, V. (2013). When does a physical system compute? Proceedings of the Royal Society A, 470, 20140182.
Hoyte, D. (2008). Let over lambda. HCSW and Hoytech: Doug Hoyte. ISBN 9781435712751.
Keller, H. (1905). The story of my life. Garden City, NY: Doubleday.
Knuth, D. E. (1973). Sorting and searching, The art of computer programming, Vol. 3. Reading, MA: Addison-Wesley.
Knuth, D. E. (1996). Selected papers on computer science. Cambridge: Cambridge University Press.
Lenneberg, E. H. (1967). The biological foundations of language. New York: Wiley.
Lewis, H. R., & Papadimitriou, C. H. (1998). Elements of the theory of computation (2nd ed.). New Jersey: Prentice-Hall.
Newell, A., & Simon, H. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19(3), 113–126.
Pask, G. (1968). Colloquy of mobiles. London: ICA.
Piccinini, G. (2008). Computers. Pacific Philosophical Quarterly, 89, 32–73.
Pitowsky, I. (1990). The physical Church thesis and physical computational complexity. Iyyun: The Jerusalem Philosophical Quarterly, 39, 81–99.
Preston, J. (2002). Introduction. In Preston and Bishop (2002).
Preston, J., & Bishop, M. (Eds.). (2002). Views into the Chinese room: New essays on Searle and artificial intelligence. Oxford: Oxford University Press.
Rapaport, W. J. (1988). Syntactic semantics: Foundations of computational natural-language understanding. In J. H. Fetzer (Ed.), Aspects of artificial intelligence (pp. 81–131). Holland: Kluwer.
Rapaport, W. J. (2006). How Helen Keller used syntactic semantics to escape from a Chinese Room. Minds and Machines, 16(4), 381–436.
Rogers, H., Jr. (1959). The present theory of Turing machine computability. Journal of the Society for Industrial and Applied Mathematics, 7(1), 114–130.
Ross, J. R. (1967). Constraints on variables in syntax. Ph.D. dissertation, MIT. Published as Ross 1986.
Ross, J. R. (1986). Infinite syntax! Norwood, NJ: Ablex.
Searle, J. R. (1980). Minds, brains and programs. The Behavioral and Brain Sciences, 3, 417–424.
Searle, J. R. (1990). Is the brain’s mind a digital computer? Proceedings of the American Philosophical Association, 64(3), 21–37.
Searle, J. R. (2001). Chinese Room argument. In R. A. Wilson & F. C. Keil (Eds.), The MIT encyclopedia of the cognitive sciences (pp. 115–116). Cambridge, MA: MIT Press.
Searle, J. R. (2002). Twenty-one years in the Chinese Room. In Preston and Bishop (2002).
Shagrir, O. (1999). What is computer science about? The Monist, 82(1), 131–149.
Simon, H. (1969). The sciences of the artificial. Cambridge: MIT Press.
Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42 (series 2), 230–265.
Metadata
Title
Computers Aren’t Syntax All the Way Down or Content All the Way Up
Author
Cem Bozşahin
Publication date
18-07-2018
Publisher
Springer Netherlands
Published in
Minds and Machines / Issue 3/2018
Print ISSN: 0924-6495
Electronic ISSN: 1572-8641
DOI
https://doi.org/10.1007/s11023-018-9469-2
