2013 | Book

Computing Nature

Turing Centenary Perspective

Edited by: Gordana Dodig-Crnkovic, Raffaela Giovagnoli

Publisher: Springer Berlin Heidelberg

Book series: Studies in Applied Philosophy, Epistemology and Rational Ethics

About this book

This book is about nature considered as the totality of physical existence, the universe, and our present day attempts to understand it. If we see the universe as a network of networks of computational processes at many different levels of organization, what can we learn about physics, biology, cognition, social systems, and ecology expressed through interacting networks of elementary particles, atoms, molecules, cells, (and especially neurons when it comes to understanding of cognition and intelligence), organs, organisms and their ecologies?

Regarding our computational models of natural phenomena Feynman famously wondered: “Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do?” Phenomena themselves occur so quickly and automatically in nature. Can we learn how to harness nature’s computational power as we harness its energy and materials?

This volume includes a selection of contributions from the Symposium on Natural Computing/Unconventional Computing and Its Philosophical Significance, organized during the AISB/IACAP World Congress 2012, held in Birmingham, UK, on July 2-6, on the occasion of the centenary of Alan Turing’s birth. In this book, leading researchers investigate questions of computing nature by exploring various facets of computation as we find it in nature: relationships between different levels of computation, cognition with learning and intelligence, mathematical background, relationships to classical Turing computation, and Turing’s ideas about computing nature - unorganized machines and morphogenesis. It addresses questions of information, representation and computation, interaction as communication, and concurrency and agent models; in short, this book presents natural computing and unconventional computing as an extension of the idea of computation as symbol manipulation.

Table of Contents

Frontmatter
Computing Nature – A Network of Networks of Concurrent Information Processes
Abstract
The articles in the volume Computing Nature present a selection of works from the Symposium on Natural/Unconventional Computing at AISB/IACAP (British Society for the Study of Artificial Intelligence and the Simulation of Behaviour and The International Association for Computing and Philosophy) World Congress 2012, held at the University of Birmingham, on the occasion of the centenary of Alan Turing’s birth.
Gordana Dodig-Crnkovic, Raffaela Giovagnoli
A Framework for Computing Like Nature
Abstract
We address the context within which ‘Natural’ computation can be carried out, and conclude that a birational ecosystemic hierarchical framework would provide for computation which is closer to Nature. This presages a major philosophical change in the way Science can be carried out. A consequence is that all system properties appear as intermediates between unattainable dimensional extremes; even existence itself. We note that Classical and Quantum mechanical paradigms make up a complementary pair. What we wish to do is to bring all of Science under a generalized umbrella of entity and its ecosystem, and then characterize different types of entity by their relationships with their relevant ecosystems. The most general way to do this is to move the ecosystemic paradigm up to the level of its encompassing logic, creating a complementary pair of conceivably different logics – one for the entity we are focusing on; one for the ecosystem within which it exists – and providing for their quasi-autonomous birational interaction.
Ron Cottam, Willy Ranson, Roger Vounckx
The Coordination of Probabilistic Inference in Neural Systems
Abstract
Life, thought of as adaptively organised complexity, depends upon information and inference, which is nearly always inductive, because the world, though lawful, is far from being wholly predictable. There are several influential theories of probabilistic inference in neural systems, but here I focus on the theory of Coherent Infomax, and its relation to the theory of free energy reduction. Coherent Infomax shows, in principle, how life can be preserved and improved by coordinating many concurrent inferences. It argues that neural systems combine local reliability with flexible, holistic, context-sensitivity. What this perspective contributes to our understanding of neuronal inference is briefly outlined by relating it to cognitive and neurophysiological studies of context-sensitivity and gain-control, psychotic disorganization, theories of the Bayesian brain, and predictive coding. Limitations of the theory and unresolved issues are noted, emphasizing those that may be of interest to philosophers, and including the possibility of major transitions in the evolution of inferential capabilities.
William A. Phillips
Neurobiological Computation and Synthetic Intelligence
Abstract
When considering the ongoing challenges faced by cognitivist approaches to artificial intelligence, differences in perspective emerge when the synthesis of intelligence turns to neurobiology for principles and foundations. Cognitivist approaches to the development of engineered systems having properties of autonomy and intelligence are limited by their lack of grounding and their emphasis upon linguistically derived models of the nature of intelligence. The alternative of taking inspiration more directly from biological nervous systems can go far beyond twentieth century models of artificial neural networks (ANNs), which greatly oversimplified brain and neural functions. The synthesis of intelligence based upon biological foundations must draw upon and become part of the ongoing rapid expansion of the science of biological intelligence. This includes an exploration of broader conceptions of information processing, including different modalities of information processing in neural and glial substrates. The medium of designed intelligence must also expand to include biological, organic and inorganic molecular systems capable of realizing asynchronous, analog and self-* architectures that digital computers can only simulate.
Craig A. Lindley
A Behavioural Foundation for Natural Computing and a Programmability Test
Abstract
What does it mean to claim that a physical or natural system computes? One answer, endorsed here, is that computing is about programming a system to behave in different ways. This paper offers an account of what it means for a physical system to compute based on this notion. It proposes a behavioural characterisation of computing in terms of a measure of programmability, which reflects a system’s ability to react to external stimuli. The proposed measure of programmability is useful for classifying computers in terms of the apparent algorithmic complexity of their evolution in time. I make some specific proposals in this connection and discuss this approach in the context of other behavioural approaches, notably Turing’s test of machine intelligence. I also anticipate possible objections and consider the applicability of these proposals to the task of relating abstract computation to nature-like computation.
Hector Zenil
Alan Turing’s Legacy: Info-computational Philosophy of Nature
Abstract
Alan Turing’s pioneering work on computability, and his ideas on morphological computing, support Andrew Hodges’ view of Turing as a natural philosopher. Turing’s natural philosophy differs importantly from Galileo’s view that the book of nature is written in the language of mathematics (The Assayer, 1623). Computing is more than a language used to describe nature, as computation produces real-time physical behaviors. This article presents the framework of Natural info-computationalism as a contemporary natural philosophy that builds on the legacy of Turing’s computationalism. The use of info-computational conceptualizations, models and tools makes possible, for the first time in history, the modeling of complex self-organizing adaptive systems, including basic characteristics and functions of living systems, intelligence, and cognition.
Gordana Dodig-Crnkovic
Dualism of Selective and Structural Manifestations of Information in Modelling of Information Dynamics
Abstract
Information can be defined in terms of the categorical opposition of one and many, leading to two manifestations of information, selective and structural. These manifestations of information are dual in the sense that one is always associated with the other. The dualism can be used to model and explain the dynamics of information processes. The analysis involving selective-structural duality is applied in two domains: computation and the foundations of living systems. The similarity of these two types of information processing, which allows a common way of modelling them, becomes more evident in the naturalistic perspective on computing, based on the observation that every computation is inherently analogue and that the distinction between analogue and digital information is only a matter of its meaning. In conclusion, it is proposed that the similar dynamics of information processes allows considering computational systems of increased hierarchical complexity resembling living systems.
Marcin J. Schroeder
Intelligence and Reference
Formal Ontology of the Natural Computation
Abstract
In a seminal work published in 1952, “The chemical basis of morphogenesis”, A. M. Turing established the core of what today we call “natural computation” in biological systems, intended as self-organizing dissipative systems. In this contribution we show that a proper implementation of Turing’s seminal idea cannot be based on diffusive processes, but on the coherence states of condensed matter according to the dissipative Quantum Field Theory (QFT) principles. This foundational theory is consistent with the intentional approach in cognitive neuroscience, as far as it is formalized in the appropriate ontological interpretation of the modal calculus (formal ontology). This interpretation is based on the principle of the “double saturation” between a singular argument and its predicate that has its dynamical foundation in the principle of the “doubling of the degrees of freedom” between a brain state and the environment, as an essential ingredient of the mathematical formalism of dissipative QFT.
Gianfranco Basti
Representation, Analytic Pragmatism and AI
Abstract
Our contribution aims at identifying a valid philosophical strategy for a fruitful comparison between human and artificial representation. The ground for this theoretical option resides in the necessity to find a solution that overcomes, on the one side, strong AI (e.g. Haugeland) and, on the other side, the view that rules out AI as an explanation of human capacities (e.g. Dreyfus). We try to argue for Analytic Pragmatism (AP) as a valid strategy to present arguments for a form of weak AI and to explain a notion of representation common to human and artificial agents.
Raffaela Giovagnoli
Salient Features and Snapshots in Time: An Interdisciplinary Perspective on Object Representation
Abstract
Faced with a vast, dynamic environment, some animals and robots often need to acquire and segregate information about objects. The form of their internal representation depends on how the information is utilised. Sometimes it should be compressed and abstracted from the original, often complex, sensory information, so it can be efficiently stored and manipulated, for deriving interpretations, causal relationships, functions or affordances. We discuss how salient features of objects can be used to generate compact representations, later allowing for relatively accurate reconstructions and reasoning. Particular moments in the course of an object-related process can be selected and stored as ‘key frames’. Specifically, we consider the problem of representing and reasoning about a deformable object from the viewpoint of both an artificial and a natural agent.
Veronica E. Arriola-Rios, Zoe P. Demery, Jeremy Wyatt, Aaron Sloman, Jackie Chappell
Toward Turing’s A-Type Unorganised Machines in an Unconventional Substrate: A Dynamic Representation in Compartmentalised Excitable Chemical Media
Abstract
Turing presented a general representation scheme by which to achieve artificial intelligence - unorganised machines. Significantly, these were a form of discrete dynamical system, and yet such representations remain relatively unexplored. Further, while suggesting that natural evolution may provide inspiration for search mechanisms to design machines, he also noted that mechanisms inspired by the social aspects of learning may prove useful. This paper presents initial results from consideration of using Turing’s dynamical representation within an unconventional substrate - networks of Belousov-Zhabotinsky vesicles - designed by an imitation-based, i.e., cultural, approach. Turing’s representation scheme is also extended to include a fuller set of Boolean functions at the nodes of the recurrent networks.
Larry Bull, Julian Holley, Ben De Lacy Costello, Andrew Adamatzky
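Turing’s A-type unorganised machines are recurrent networks of two-input NAND nodes with randomly chosen wiring, updated synchronously as a discrete dynamical system. As an illustrative sketch only (the network size, seed and helper names are hypothetical, not taken from the chapter), such a machine can be simulated in a few lines of Python:

```python
import random

def make_a_type(n_nodes, seed=0):
    """Randomly wire each node to two (not necessarily distinct) source
    nodes; every node computes NAND, as in Turing's A-type machines."""
    rng = random.Random(seed)
    return [(rng.randrange(n_nodes), rng.randrange(n_nodes))
            for _ in range(n_nodes)]

def step(state, wiring):
    # Synchronous update: each node outputs the NAND of its two
    # input nodes' previous states.
    return [int(not (state[a] and state[b])) for a, b in wiring]

wiring = make_a_type(8)
state = [0, 1, 0, 1, 1, 0, 1, 0]
for _ in range(5):
    state = step(state, wiring)
```

Extending the node function beyond NAND, as the chapter proposes, would amount to replacing the fixed NAND in `step` with a per-node Boolean lookup table.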
Learning to Hypercompute? An Analysis of Siegelmann Networks
Abstract
This paper consists of a further analysis (continuing that of [11]) of the hypercomputing neural network model of Hava Siegelmann ([21]).
Keith Douglas
Oracle Hypermachines Faced with the Verification Problem
Abstract
One of the main current issues about hypercomputation concerns the claim that it is possible to build a physical device that hypercomputes. To prove this claim, one possible strategy is to physically build an oracle hypermachine, namely a device which is able to use external information from nature to go beyond the limits of Turing machines. However, there is an epistemological problem affecting this strategy, which may be called the “verification problem”. This problem arises in the presence of an oracle hypermachine and may be set out as follows: even if we were able to build such a hypermachine, we would not be able to claim that it hypercomputes, because it would be impossible to verify that the machine can compute a non-Turing-computable function. In this paper, I propose an analysis of the verification problem in order to determine whether it is a genuine problem for oracle hypermachines.
Florent Franchette
Does the Principle of Computational Equivalence Overcome the Objections against Computationalism?
Abstract
Computationalism has been variously defined as the idea that the human mind can be modelled by means of mechanisms broadly equivalent to Turing Machines. Computationalism’s claims have been hotly debated, and arguments for and against have drawn extensively from mathematics, the cognitive sciences and philosophy, although the debate is hardly settled. On the other hand, in his 2002 book A New Kind of Science, Stephen Wolfram advanced what he called the Principle of Computational Equivalence (PCE), whose main contention is that fairly simple systems can easily reach very complex behaviour and become as powerful as any possible system based on rules (that is, they are computationally equivalent). He also claimed that any natural (and even human) phenomenon can be explained as the interaction of very simple rules. Of course, given the universality of Turing Machine-like mechanisms, PCE could be considered simply a particular brand of computationalism, subject to the same objections as previous attempts. In this paper we analyse in depth whether this view of PCE is justified, and hence whether PCE can overcome some criticisms and be a different and better model of the human mind.
Alberto Hernández-Espinosa, Francisco Hernández-Quiroz
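Wolfram’s canonical illustration of simple rules reaching very complex behaviour is the elementary cellular automaton Rule 110, which is known to be computationally universal. A minimal sketch of one such system (the array size, boundary choice and step count are arbitrary illustrative choices, not from the chapter):

```python
def rule110_step(cells):
    """One synchronous update of elementary cellular automaton Rule 110
    on a ring. Bit i of the number 110 gives the successor state for
    the 3-cell neighbourhood whose bits encode the integer i."""
    n = len(cells)
    rule = 110
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

# Start from a single live cell and record the evolution.
cells = [0] * 31
cells[15] = 1
history = [cells]
for _ in range(15):
    cells = rule110_step(cells)
    history.append(cells)
```

Despite the eight-entry rule table, the evolution from a single cell already produces a growing, non-repeating structure, which is the intuition behind PCE.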
Some Constraints on the Physical Realizability of a Mathematical Construction
Abstract
Mathematical constructions of abstract entities are normally done disregarding their actual physical realizability. The definition and limits of the physical realizability of these constructions are controversial issues at the moment and the subject of intense debate.
In this paper, we consider a simple and particular case, namely, the physical realizability of the enumeration of rational numbers by Cantor’s diagonalization by means of an Ising system.
We contend that uncertainty in determining a particular state in an Ising system makes it impossible to have a reliable implementation of Cantor’s diagonal method, and therefore a stronger physical system is required. We also point out the particular limitations of this system from the perspective of physical realizability.
Francisco Hernández-Quiroz, Pablo Padilla
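The abstract construction whose physical realizability the chapter questions, Cantor’s enumeration of the rationals, walks the diagonals p + q = d of the grid of positive pairs (p, q), skipping duplicate values. A sketch in Python (the function name and the duplicate-handling detail are my own, for illustration):

```python
from fractions import Fraction

def enumerate_rationals(limit):
    """List the first `limit` positive rationals in Cantor's diagonal
    order, walking diagonals p + q = d and skipping repeated values
    such as 2/2 = 1/1."""
    seen, out = set(), []
    d = 2  # smallest diagonal: p = q = 1
    while len(out) < limit:
        for p in range(1, d):
            q = d - p
            r = Fraction(p, q)
            if r not in seen:
                seen.add(r)
                out.append(r)
                if len(out) == limit:
                    break
        d += 1
    return out
```

For example, the enumeration begins 1, 1/2, 2, 1/3, 3, … — it is this unbounded, perfectly reliable indexing of states that the chapter argues an Ising system cannot physically implement.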
From the Closed Classical Algorithmic Universe to an Open World of Algorithmic Constellations
Abstract
In this paper we analyze the methodological and philosophical implications of algorithmic aspects of unconventional computation. First, we describe how the classical algorithmic universe developed and analyze why it became closed in the conventional approach to computation. Then we explain how new models of algorithms turned the classical closed algorithmic universe into the open world of algorithmic constellations, allowing higher flexibility and expressive power, and supporting constructivism and creativity in mathematical modeling. As Gödel’s undecidability theorems demonstrate, the closed algorithmic universe restricts essential forms of mathematical cognition. In contrast, the open algorithmic universe, and even more so the open world of algorithmic constellations, removes such restrictions and enables a new, richer understanding of computation.
Mark Burgin, Gordana Dodig-Crnkovic
What Makes a Computation Unconventional?
Abstract
Turing’s standard model of computation, and its physical counterpart, has given rise to a powerful paradigm. There are assumptions underlying the paradigm which constrain our thinking about the realities of computing, not least when we doubt the paradigm’s adequacy.
There are assumptions concerning the logical structure of computation, and the character of its reliance on the data it feeds on. There is a corresponding complacency spanning theoretical – but not experimental – thinking about the complexity of information, and its mathematics. We point to ways in which classical computability can clarify the nature of apparently unconventional computation. At the same time, we seek to expose the devices used in both theory and practice to try to extend the scope of the standard model. This involves a drawing together of different approaches, in a way that validates the intuitions of those who question the standard model, while providing them with a unifying vision of diverse routes “beyond the Turing barrier”.
S. Barry Cooper
Backmatter
Metadata
Title
Computing Nature
Edited by
Gordana Dodig-Crnkovic
Raffaela Giovagnoli
Copyright year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-37225-4
Print ISBN
978-3-642-37224-7
DOI
https://doi.org/10.1007/978-3-642-37225-4