
2017 | Book

Advances in Unconventional Computing

Volume 1: Theory


About this Book

Unconventional computing is a niche of interdisciplinary science, a cross-breed of computer science, physics, mathematics, chemistry, electronic engineering, biology, material science and nanotechnology. The aim of this book is to uncover and exploit principles and mechanisms of information processing in, and functional properties of, physical, chemical and living systems, in order to develop efficient algorithms, design optimal architectures and manufacture working prototypes of future and emergent computing devices.
This first volume presents theoretical foundations of the future and emergent computing paradigms and architectures. The topics covered are computability, (non-)universality and complexity of computation; physics of computation, analog and quantum computing; reversible and asynchronous devices; cellular automata and other mathematical machines; P-systems and cellular computing; infinity and spatial computation; chemical and reservoir computing.
The book is an encyclopedic work, the first complete authoritative account of theoretical and experimental findings in unconventional computing, written by world leaders in the field. All chapters are self-contained; no specialist background is required to appreciate the ideas, findings, constructs and designs presented. This treatise on unconventional computing appeals to readers from all walks of life, from high-school pupils to university professors, and from mathematicians, computer scientists and engineers to chemists and biologists.

Table of Contents

Frontmatter
Chapter 1. Nonuniversality in Computation: Fifteen Misconceptions Rectified

In January 2005 I showed that universality in computation cannot be achieved. Specifically, I exhibited a number of distinct counterexamples, each of which suffices to demonstrate that no finite and fixed computer can be universal in the sense of being able to simulate successfully any computation that can be executed on any other computer. The number and diversity of the counterexamples attest to the general nature of the nonuniversality result. This not only put to rest the “Church–Turing Thesis” (CTT) by proving it to be a false conjecture, but also was seen to apply, in addition to the Turing machine, to all computational models, past, present, and future, conventional and unconventional. While ten years have now passed since nonuniversality in computation was established, the result remains largely misunderstood. There appear to be at least two main reasons for this state of affairs. As often happens to new ideas, the nonuniversality result was confronted with a stubborn entrenchment in a preconceived, deeply held, and quasi-religious belief in the existence of a universal computer. This was exacerbated by a failure to read the literature that demonstrates why such a belief is unfounded. Behavior of this sort, sadly not uncommon in science, explains the enduring mirage of the universal computer. The purpose of this chapter is to rectify the most notorious misconceptions associated with nonuniversality in computation. These misconceptions were expressed to the author in personal communications, orally, by email, and in referee reports. Each misconception is quoted verbatim and a detailed response to it is provided. The chapter concludes by inviting the reader to take a computational challenge.

Selim G. Akl
Chapter 2. What Is Computable? What Is Feasibly Computable? A Physicist’s Viewpoint

In this chapter, we show how the questions of what is computable and what is feasibly computable can be viewed from the standpoint of physics: What is computable within current physics? What is computable if we assume—as many physicists do—that no final physical theory is possible? What is computable if we consider data processing, i.e., computations based on physical inputs? Our physics-based analysis of these questions leads to some unexpected answers, both positive and negative. For example, we show that under the no-physical-theory-is-perfect assumption, almost all problems are feasibly solvable—but not all of them.

Vladik Kreinovich, Olga Kosheleva
Chapter 3. The Ideal Energy of Classical Lattice Dynamics

We define, as local quantities, the least energy and momentum allowed by quantum mechanics and special relativity for physical realizations of some classical lattice dynamics. These definitions depend on local rates of finite-state change. In two example dynamics, we see that these rates evolve like classical mechanical energy and momentum.

Norman Margolus
Chapter 4. An Analogue-Digital Model of Computation: Turing Machines with Physical Oracles

We introduce an abstract analogue-digital model of computation that couples Turing machines to oracles that are physical processes. Since any oracle has the potential to boost the computational power of a Turing machine, the effect on the power of the Turing machine of adding a physical process raises interesting questions. Do physical processes add significantly to the power of Turing machines; can they break the Turing Barrier? Does the power of the Turing machine vary with different physical processes? Specifically, here, we take a physical oracle to be a physical experiment, controlled by the Turing machine, that measures some physical quantity. There are three protocols of communication between the Turing machine and the oracle that simulate the types of error propagation common to analogue-digital devices, namely: infinite precision, unbounded precision, and fixed precision. These three types of precision introduce three variants of the physical oracle model. On fixing one archetypal experiment, we show how to classify the computational power of the three models by establishing the lower and upper bounds. Using new techniques and ideas about timing, we give a complete classification.

Tânia Ambaram, Edwin Beggs, José Félix Costa, Diogo Poças, John V. Tucker
Chapter 5. Physical and Formal Aspects of Computation: Exploiting Physics for Computation and Exploiting Computation for Physical Purposes

Achieving greater speeds and densities in the post-Moore’s Law era will require computation to be more like the physical processes by which it is realized. Therefore we explore the essence of computation, that is, what distinguishes computational processes from other physical processes. We consider such issues as the topology of information processing, programmability, and universality. We summarize general characteristics of analog computation, quantum computation, and field computation, in which data is spatially continuous. Computation is conventionally used for information processing, but since the computation governs physical processes, it can also be used as a way of moving matter and energy on a microscopic scale. This provides an approach to programmable matter and programmed assembly of physical structures. We discuss artificial morphogenesis, which uses the formal structure of embryological development to coordinate the behavior of a large number of agents to assemble complex hierarchical structures. We explain that this close correspondence between computational and physical processes is characteristic of embodied computation, in which computation directly exploits physical processes for computation, or for which the physical consequences of computation are the purpose of the computation.

Bruce J. MacLennan
Chapter 6. Computing in Perfect Euclidean Frameworks

This chapter presents what kind of computation can be carried out using a Euclidean space—as input, memory, output...—with dedicated primitives. Various understandings of computing are encountered in such a setting, allowing classical (Turing, discrete) computations as well as, for some models, hyper- and analog computations thanks to the continuity of space. The time scales encountered are discrete or hybrid (continuous evolution between discrete transitions). The first half of the chapter presents three models of computation based on geometric concepts, namely: ruler and compass; local constraints and the emergence of polyhedra; and piecewise constant derivatives. The other half concentrates on signal machines: line segments are extended and, when they meet, are replaced by others. Not only are these machines capable of classical computation; using the continuous nature of space and time, they can also perform hyper-computation and analog computation. It is possible to build fractals, and to go one step further and use their partial generation to solve, e.g., quantified SAT in “constant space and time”.

Jérôme Durand-Lose
Chapter 7. Unconventional Computers and Unconventional Complexity Measures

Unconventional computers—for example those exploiting chemical, analogue or quantum phenomena in order to compute, as opposed to those employing the standard, digital-computer approach of electronically implementing discrete-value logic gates—are widely studied both theoretically and experimentally. One notable motivation driving this study is the desire to solve classically difficult problems efficiently—we recall, for example, a chemical-computer approach to the $$\mathsf{NP}$$-complete Travelling Salesperson Problem—with computational complexity theory providing the criteria for judging this efficiency. However, care must be taken: conventional (Turing-machine-style) complexity analyses are in many cases inappropriate for assessing unconventional computers; new, non-standard computational resources, with correspondingly new complexity measures, may well be consumed during unconventional computation, and yet are overlooked by conventional analyses. Accordingly, we discuss in this chapter various resources beyond merely the conventional time and space, advocating such resources' consideration during analysis of the complexity of unconventional computers (and, more fundamentally, we discuss various interpretations of the term ‘resource’ itself). We hope that this acts as a useful starting point for practitioners of unconventional computing and computational complexity.

Ed Blakey
Chapter 8. Decreasing Complexity in Inductive Computations

Kolmogorov, or algorithmic, complexity has found applications in many areas including medicine, biology, neurophysiology, physics, economics, hardware and software engineering. Conventional Kolmogorov/algorithmic complexity and its modifications are based on the application of conventional, i.e., recursive, algorithms, such as Turing machines. Inductive complexity, studied in this chapter, is based on the application of unconventional algorithms such as inductive Turing machines, which are super-recursive in that they can compute much more than recursive algorithms can. It is possible to apply inductive complexity in all cases where Kolmogorov complexity is used. In particular, inductive complexity has been used in the study of mathematical problem complexity. The main goal of this work is to show how inductive algorithms can reduce the complexity of programs and problems. In Sect. 8.2, we build the constructive hierarchy of inductive Turing machines and study the corresponding hierarchy of inductively computable functions. Inductive Turing machines from the constructive hierarchy are very powerful because they can build (compute) the whole arithmetical hierarchy. In Sect. 8.3, it is proved that inductive algorithms from the constructive hierarchy can essentially reduce the complexity of programs and problems, and that the more powerful the inductive algorithms utilized, the larger the reduction in complexity achievable.

Mark Burgin
Chapter 9. Asymptotic Intrinsic Universality and Natural Reprogrammability by Behavioural Emulation

We advance a Bayesian concept of intrinsic asymptotic universality, taking to its final conclusions previous conceptual and numerical work based upon the concept of a reprogrammability test and an investigation of the complex qualitative behaviour of computer programs. Our method may quantify the trust and confidence of the computing capabilities of natural and classical systems, and quantify computers by their degree of reprogrammability. We test the method to provide evidence in favour of a conjecture concerning the computing capabilities of Busy Beaver Turing machines as candidates for Turing universality. The method has recently been used to quantify the number of intrinsically universal cellular automata, with results that point towards the pervasiveness of universality due to a widespread capacity for emulation. Our method represents an unconventional approach to the classical and seminal concept of Turing universality, and it may be extended and applied in a broader context to natural computation, by (in something like the spirit of the Turing test) observing the behaviour of a system under circumstances where formal proofs of universality are difficult, if not impossible, to come by.

Hector Zenil, Jürgen Riedel
Chapter 10. Two Small Universal Reversible Turing Machines

We study the problem of constructing small universal Turing machines (UTMs) under the constraint of reversibility, which is a property closely related to physical reversibility. Let URTM(m,n) denote an m-state n-symbol universal reversible Turing machine (URTM). Then, the problem is to find URTM(m,n) with small m and n. So far, several kinds of small URTMs have been given. Here, we newly construct two small URTMs. They are URTM(13,7) and URTM(10,8) that can simulate cyclic tag systems, a kind of universal string rewriting systems proposed by Cook. We show how these URTMs can be designed, and compare them with other existing URTMs.
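Cyclic tag systems, the universal string rewriting systems that the two URTMs above simulate, are simple to state: at each step the leftmost symbol of the word is removed, and if it was a 1, the production at the head of the cyclic production list is appended. The interpreter and example productions below are an illustrative sketch, not taken from the chapter.

```python
def cyclic_tag(productions, word, steps):
    """Run a cyclic tag system for a given number of steps.

    At each step, remove the first symbol of the word; if it was '1',
    append the current production. The production list advances
    cyclically on every step, regardless of the removed symbol.
    """
    history = [word]
    for k in range(steps):
        if not word:
            break
        head, word = word[0], word[1:]
        if head == "1":
            word += productions[k % len(productions)]
        history.append(word)
    return history

# Illustrative example: two productions, starting word "1"
for w in cyclic_tag(["11", "0"], "1", 6):
    print(w)
```

Rule 110's universality proof, due to Cook, works by emulating exactly this kind of system with glider interactions.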

Kenichi Morita
Chapter 11. Percolation Transition and Related Phenomena in Terms of Grossone Infinity Computations

In this chapter, a number of traditional models related to percolation theory are taken into consideration: site percolation, gradient percolation, and the forest-fire model. They are studied by means of a new computational methodology that makes it possible to work numerically with finite, infinite, and infinitesimal quantities, using a new kind of computer—the Infinity Computer—introduced recently. It is established that, in light of the new arithmetic using grossone-based numerals, the phase transition point in site percolation and gradient percolation appears as a critical interval rather than a critical point. Depending on the ‘microscope’ we use, this interval can be regarded as finite, infinite, or infinitesimal. By applying the new approach we show that in the vicinity of the percolation threshold we have many different infinite clusters, instead of the single infinite cluster that appears in traditional considerations. With respect to the cellular automaton forest-fire model, two traditional versions of the model are studied: a real forest-fire model where fire catches adjacent trees in a step-by-step manner, and a simplified version with instantaneous combustion. The new approach shows that in both situations we deal with the same model but with different time resolutions. We show that, depending on the ‘microscope’ we use, the same cellular automaton forest-fire model reveals either instantaneous forest combustion or step-by-step firing.
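For readers who want to experiment with the conventional finite-precision picture that the grossone-based analysis refines, a minimal Monte Carlo site-percolation sketch is given below. The lattice size, occupation probabilities, and top-to-bottom spanning criterion are illustrative choices, not the chapter's.

```python
import random
from collections import deque

def spans(grid):
    """Does an open cluster (True cells) connect the top row to the bottom?"""
    L = len(grid)
    seen = {(0, j) for j in range(L) if grid[0][j]}
    queue = deque(seen)
    while queue:
        i, j = queue.popleft()
        if i == L - 1:
            return True
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < L and 0 <= nj < L and grid[ni][nj] and (ni, nj) not in seen:
                seen.add((ni, nj))
                queue.append((ni, nj))
    return False

def spanning_probability(L, p, trials, seed=0):
    """Monte Carlo estimate of the spanning probability at site density p."""
    rng = random.Random(seed)
    hits = sum(
        spans([[rng.random() < p for _ in range(L)] for _ in range(L)])
        for _ in range(trials)
    )
    return hits / trials

# Well below / above the square-lattice threshold p_c ~ 0.5927:
print(spanning_probability(16, 0.3, 200), spanning_probability(16, 0.8, 200))
```

On a finite lattice the transition is smeared over a window of densities, which is the conventional counterpart of the critical interval discussed in the chapter.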

Dmitry I. Iudin, Yaroslav D. Sergeyev
Chapter 12. Spacetime Computing: Towards Algorithmic Causal Sets with Special-Relativistic Properties

Spacetime computing is undoubtedly one of the most ambitious and less explored forms of unconventional computing. Totally unconventional is the medium on which the computation is expected to take place—the elusive texture of physical spacetime—and unprecedentedly wide its scope, since the emergent properties of these computations are expected to ultimately reproduce everything we observe in nature. First we discuss the distinguishing features of this peculiar form of unconventional computing, and survey a few pioneering approaches. Then we illustrate some novel ideas and experiments that attempt to establish stronger connections with advances in quantum gravity and the physics of spacetime. We discuss techniques for building algorithmic causal sets—our proposed deterministic counterpart of the stochastic structures adopted in the Causal Set programme for discrete spacetime modeling—and investigate, in particular, the extent to which they can reflect an essential feature of continuous spacetime: Lorentz invariance.

Tommaso Bolognesi
Chapter 13. Interaction-Based Programming in MGS

The modeling and simulation of morphogenetic phenomena require taking into account the coupling between the processes that take place in a space and the modification of that space due to those processes, leading to a chicken-and-egg problem. To cope with this issue, we propose to consider a growing structure as the byproduct of a multitude of interactions between its constitutive elements. An interaction-based model of computation relying on spatial relationships is then developed, leading to an original style of programming implemented in the MGS programming language. While MGS seems at first glance to be a domain-specific programming language, its underlying interaction-based paradigm is also relevant to supporting the development of generic programming mechanisms. We show how the specification of space-independent computations achieves polytypism, and we develop a direct interpretation of well-known differential operators in terms of data movements.

Antoine Spicher, Jean-Louis Giavitto
Chapter 14. Cellular Automata in Hyperbolic Spaces

The chapter presents a bit more than fifteen years of research on cellular automata in hyperbolic spaces. After a short historical section, we remind the reader of what needs to be known about hyperbolic geometry. Then we sum up the results which were obtained during the considered period. We focus on results about universal cellular automata, giving the main ideas which were used in the quest for universal hyperbolic cellular automata with as small a number of states as possible.

Maurice Margenstern
Chapter 15. A Computation in a Cellular Automaton Collider Rule 110

A cellular automaton collider is a finite state machine built from rings of one-dimensional cellular automata. We show how a computation can be performed on the collider by exploiting interactions between gliders (particles, localisations). The constructions proposed are based on the universality of elementary cellular automaton rule 110, cyclic tag systems, supercolliders, and computing on rings.
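A minimal simulation of elementary rule 110 on a ring, the setting underlying the collider construction, can be written directly from the rule's truth table; the initial configuration below is an illustrative choice.

```python
def rule110_ring(cells, steps):
    """Evolve elementary CA rule 110 with periodic boundary (a ring).

    110 = 0b01101110, read over neighbourhoods (1,1,1) down to (0,0,0).
    """
    table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
             (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    n = len(cells)
    rows = [list(cells)]
    for _ in range(steps):
        cells = [table[cells[(i - 1) % n], cells[i], cells[(i + 1) % n]]
                 for i in range(n)]
        rows.append(cells)
    return rows

# Render a few generations of a single seed on a ring of 8 cells
for row in rule110_ring([0, 0, 0, 1, 0, 0, 0, 0], 3):
    print("".join("#" if c else "." for c in row))
```

Gliders emerge from suitable backgrounds on much larger rings; the chapter's constructions arrange their collisions to implement cyclic tag systems.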

Genaro J. Martínez, Andrew Adamatzky, Harold V. McIntosh
Chapter 16. Quantum Queries Associated with Equi-partitioning of States and Multipartite Relational Encoding Across Space-Time

In the first part of this paper we analyze possible quantum computational capacities due to quantum queries associated with equi-partitions of pure orthogonal states. Special emphasis is given to the parity of product states and to functional parity. The second part is dedicated to a critical review of the relational encoding of multipartite states across (space-like separated) space-time regions; a property often referred to as “quantum nonlocality”.

Karl Svozil
Chapter 17. Solving the Broadcast Time Problem Using a D-wave Quantum Computer

We illustrate how the D-Wave Two quantum computer is programmed and works by solving the Broadcast Time Problem. We start from a concise integer program formulation of the problem and apply some simple transformations to arrive at the QUBO form which can be run on the D-Wave quantum computer. Finally, we explore the feasibility of this method on several well-known graphs.
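The QUBO form mentioned above asks for a binary vector x minimising x^T Q x. For small instances this can be checked by exhaustive search, a useful sanity test before submitting a problem to quantum hardware. The toy instance below, encoding "choose exactly one of two variables", is illustrative and not from the chapter.

```python
import itertools

def solve_qubo_brute_force(Q):
    """Exhaustively minimise x^T Q x over binary vectors (small n only).

    Q is a dict {(i, j): weight}; diagonal entries act as linear terms,
    since x_i * x_i = x_i for binary x_i.
    """
    n = 1 + max(max(i, j) for i, j in Q)
    best_x, best_e = None, float("inf")
    for x in itertools.product((0, 1), repeat=n):
        e = sum(w * x[i] * x[j] for (i, j), w in Q.items())
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy instance: minimise (x0 + x1 - 1)^2, i.e. "pick exactly one variable".
# Expanding and dropping the constant term gives the Q below.
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}
print(solve_qubo_brute_force(Q))
```

Real instances, such as the Broadcast Time Problem encoding in the chapter, have far too many variables for brute force, which is where the annealer comes in.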

Cristian S. Calude, Michael J. Dinneen
Chapter 18. The Group Zoo of Classical Reversible Computing and Quantum Computing

By systematically inflating the group of $$n \times n$$ permutation matrices to the group of $$n \times n$$ unitary matrices, we can see how classical computing is embedded in quantum computing. In this process, an important role is played by two subgroups of the unitary group U(n), i.e. XU(n) and ZU(n). Here, XU(n) consists of all $$n \times n$$ unitary matrices with all line sums (i.e. the n row sums and the n column sums) equal to 1, whereas ZU(n) consists of all $$n \times n$$ diagonal unitary matrices with upper-left entry equal to 1. As a consequence, quantum computers can be built from NEGATOR gates and PHASOR gates. The NEGATOR is a 1-qubit circuit that is a natural generalization of the 1-bit NOT gate of classical computing. In contrast, the PHASOR is a 1-qubit circuit not related to classical computing.
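A small numerical check makes the XU(n) structure concrete. The NEGATOR parameterization below is a commonly used one, assumed here rather than quoted from the chapter: it interpolates unitarily between the identity (theta = 0) and the classical NOT (theta = pi), and all of its line sums equal 1.

```python
import numpy as np

def negator(theta):
    """NEGATOR gate: unitary interpolation between the identity (theta = 0)
    and the classical NOT gate (theta = pi). This parameterization is a
    common one, assumed here rather than quoted from the chapter."""
    a = (1 + np.exp(1j * theta)) / 2
    b = (1 - np.exp(1j * theta)) / 2
    return np.array([[a, b], [b, a]])

N = negator(np.pi / 2)                         # a "square root of NOT"
assert np.allclose(N @ N, [[0, 1], [1, 0]])    # N squared is NOT
assert np.allclose(N.conj().T @ N, np.eye(2))  # N is unitary
assert np.allclose(N.sum(axis=0), 1)           # all column sums equal 1 ...
assert np.allclose(N.sum(axis=1), 1)           # ... and all row sums: N lies in XU(2)
print("NEGATOR checks passed")
```

Composition adds the angles, so repeated application of a fractional NEGATOR eventually reaches the classical NOT, illustrating how classical permutation gates sit inside the unitary group.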

Alexis De Vos, Stijn De Baerdemacker
Chapter 19. Fault Models in Reversible and Quantum Circuits

In this chapter we describe faults that can occur in reversible circuits, as compared to faults that can occur in classical irreversible circuits. Because many approaches from classical irreversible circuits are being adapted to reversible circuits, it is necessary to analyze which faults that exist in irreversible circuits can appear in reversible circuits as well. Thus we focus on comparing faults that can appear in classical circuit technology with faults that can appear in reversible and quantum circuit technology. The comparison is made from the point of view of information-reversible and information-irreversible circuit technologies. We show that the shift to reversible computing and quantum technology strongly modifies the fault types that can appear, and thus the fault models that should be considered. Unlike in classical non-reversible transistor-based circuits, in reversible circuits it is necessary to specify what type of implementation technology is used, as different technologies can be affected by different faults. Moreover, the level of faults and their analysis must be revised to precisely capture the effects and properties of quantum gates and quantum circuits, which share several similarities with reversible circuits. Without such a revision, the available testing approaches adapted from classical circuits would not be able to properly detect relevant faults. In addition, if classical faults are applied directly, without revision and modification, the resulting testing procedure would be testing for faults that cannot physically occur in the given implementation of reversible circuits. The observation and analysis of the various faults presented in this chapter clearly demonstrate what faults can and cannot occur in various reversible technologies. Consequently, the results from this chapter can be used to design more precise tests for reversible logic circuits.
Moreover, the clearly described differences between faults occurring in reversible and irreversible circuits mean that new algorithms for fault detection should be implemented specifically for particular reversible technologies.

Martin Lukac, Michitaka Kameyama, Marek Perkowski, Pawel Kerntopf, Claudio Moraga
Chapter 20. A Class of Non-optimum-time FSSP Algorithms

Synchronization of large-scale networks is an important and fundamental computing primitive in parallel and distributed systems. Synchronization in cellular automata, known as the firing squad synchronization problem (FSSP), has been studied extensively for more than fifty years, and a rich variety of synchronization algorithms has been proposed. In the present chapter, we give a survey of a class of non-optimum-time 3n-step FSSP algorithms for synchronizing one-dimensional (1D) cellular automata of length n in $$3n \pm O(\log n)$$ steps, and present a comparative study of a relatively large number of their implementations. We also propose the two smallest-state implementations of the 3n-step algorithm known at present. This chapter gives the first complete transition rule sets implemented on finite state automata for the class of non-optimum-time 3n-step FSSP algorithms developed so far.

Hiroshi Umeo
Chapter 21. Universality of Asynchronous Circuits Composed of Locally Reversible Elements

Reversible computing reflects a fundamental law of microscopic physics, and research in the area usually attempts to establish an equivalence between global reversibility and local reversibility in computational systems. Hitherto this equivalence has implicitly assumed that the underlying systems are synchronously timed. Alternative systems include delay-insensitive circuits, a special type of asynchronous circuit whose operations are robust to arbitrary delays in the transmission of signals. This chapter aims at exploring the universal input and output behavior of delay-insensitive circuits composed of reversible elements, with globally non-reversible behavior arising from locally invertible operations.

Jia Lee
Chapter 22. Reservoir Computing as a Model for In-Materio Computing

Research in substrate-based computing has shown that materials contain rich properties that can be exploited to solve computational problems. One such technique known as Evolution-in-materio uses evolutionary algorithms to manipulate material substrates for computation. However, in general, modelling the computational processes occurring in such systems is a difficult task and understanding what part of the embodied system is doing the computation is still fairly ill-defined. This chapter discusses the prospects of using Reservoir Computing as a model for in-materio computing, introducing new training techniques (taken from Reservoir Computing) that could overcome training difficulties found in the current Evolution-in-Materio technique.

Matthew Dale, Julian F. Miller, Susan Stepney
Chapter 23. On Reservoir Computing: From Mathematical Foundations to Unconventional Applications

In a typical unconventional computation setup the goal is to exploit a given dynamical system, which cannot be easily adjusted or programmed, for information processing applications. While one has some intuition of how to use the system, it is often the case that it is not entirely clear how to achieve this in practice. Reservoir computing represents a set of approaches that could be useful in such situations. As a paradigm, reservoir computing harbours enormous technological potential, which can be naturally released in the context of unconventional computation. In this chapter several key concepts of reservoir computing are reviewed, re-interpreted, and synthesized to aid in realizing the unconventional computation agenda, and to illustrate what unconventional computation might be. Some philosophical aspects are discussed too, e.g. the strongly related implementation problem. The focus is on understanding reservoir computing in the classical setup, where a single fixed dynamical system is used. To this end, mathematical foundations of reservoir computing are presented, in particular the Stone–Weierstrass approximation theorem, with a mixture of rigor and intuitive explanation. To make the synthesis it was crucial to thoroughly analyze both the differences and similarities between Liquid State Machines and Echo State Networks, and to find a common, context-insensitive base. The result of the synthesis is the suggested Reservoir Machine model. The model could be used to analyze how to build unconventional information processing devices and to understand their computing capacity.
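As a deliberately minimal instance of the classical reservoir setup discussed here, the sketch below trains only the linear readout of a small Echo State Network on a one-step-recall task; the reservoir size, spectral radius, and task are illustrative choices, not the chapter's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, washout, T = 50, 50, 500

# Random reservoir, rescaled to spectral radius 0.9 (a typical, assumed choice)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=n_res)

u = rng.uniform(-1, 1, T)        # input stream
target = np.roll(u, 1)           # task: recall the previous input
target[0] = 0.0

# Drive the fixed dynamical system and record its states
x = np.zeros(n_res)
states = []
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states.append(x.copy())
states = np.array(states)

# Only the linear readout is trained (plain least squares, after a washout)
w_out, *_ = np.linalg.lstsq(states[washout:], target[washout:], rcond=None)
mse = np.mean((states[washout:] @ w_out - target[washout:]) ** 2)
print(f"readout MSE on one-step recall: {mse:.4f}")
```

The reservoir itself is never adjusted, which is exactly what makes the paradigm attractive when the dynamical system is a physical substrate rather than a simulated network.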

Zoran Konkoli
Chapter 24. Computational Properties of Cell Regulatory Pathways Through Petri Nets

The paper develops a Petri net model of a negative feedback oscillator, Case 2a from Tyson et al. (Curr. Opin. Cell Biol. 15, 221–231, (2003), [48]), in order to be able to perform the holonomy decomposition of the automaton derived from its token markings and allowed transitions for a given initial state. The objective is to investigate the algebraic structure of the cascade product obtained from its holonomy components and to relate it to the behaviour of the physical system, in particular to the oscillations. The analysis is performed in two steps, first focusing on one of its component systems, the Goldbeter–Koshland ultrasensitive switch (Case 1c from Tyson et al. Curr. Opin. Cell Biol. 15, 221–231, (2003), [48]), in order to verify the validity of its differential model and, from this, to validate the corresponding Petri net through a stochastic simulation. The paper does not present new original results but, rather, discusses and critiques existing results from the different points of view of continuous and discrete mathematics and stochasticity. The style is one of a review paper or tutorial, specifically to make the material and the concepts accessible to a wide interdisciplinary audience. We find that the Case 2a model widely reported in the literature violates the assumptions of the Michaelis–Menten quasi-steady-state approximation. However, we are still able to show oscillations of the full rate equations and of the corresponding Petri net for a different set of parameters and initial conditions. We find that even the automata derived from very coarse Petri nets of Case 1c and Case 2a, with places of capacity 1, are able to capture meaningful biochemical information in the form of algebraic groups, in particular the reversibility of the phosphorylation reactions. 
Significantly, it appears that the algebraic structures uncovered by holonomy decomposition are a larger set than what may be relevant to a specific physical problem with specific initial conditions, although they are all physically possible. This highlights the role of physical context in helping select which algebraic structures to focus on when analysing particular problems. Finally, the interpretation of Petri nets as positional number systems provides an additional perspective on the computational properties of biological systems.
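The automaton derived from token markings, the object whose holonomy decomposition the chapter studies, can be sketched for a capacity-1 Petri net as a reachability graph. The toy phosphorylation/dephosphorylation net below is illustrative and much coarser than the Tyson et al. models.

```python
from collections import deque

def reachability_graph(initial, transitions):
    """Build the automaton of token markings of a capacity-1 Petri net.

    A marking is a frozenset of marked places. A transition (pre, post)
    is enabled when every pre-place is marked and firing would not put a
    second token on any post-place (capacity 1).
    """
    graph, frontier = {}, deque([initial])
    while frontier:
        m = frontier.popleft()
        if m in graph:
            continue
        graph[m] = []
        for name, (pre, post) in transitions.items():
            if pre <= m and not (post & (m - pre)):
                m2 = (m - pre) | post
                graph[m].append((name, m2))
                frontier.append(m2)
    return graph

# Toy cycle: a kinase phosphorylates S -> Sp, a phosphatase reverses it
# (place and transition names are illustrative, not the chapter's model)
transitions = {
    "kinase":      (frozenset({"S"}),  frozenset({"Sp"})),
    "phosphatase": (frozenset({"Sp"}), frozenset({"S"})),
}
g = reachability_graph(frozenset({"S"}), transitions)
print(len(g), "reachable markings")
```

The resulting two-state automaton already exhibits the reversibility of the phosphorylation reactions as a permutation of markings, the kind of group structure the holonomy decomposition uncovers at scale.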

Paolo Dini
Chapter 25. Kernel P Systems and Stochastic P Systems for Modelling and Formal Verification of Genetic Logic Gates

P systems are the computational models of membrane computing, a computing paradigm within the natural computing area inspired by the structure and behaviour of the living cell. In this chapter, we discuss two variants of this model, a non-deterministic one, called kernel P (kP) systems, and a stochastic one, called stochastic P (sP) systems. For both we present specification languages and associated tools, including simulation and verification components. The expressivity and analysis power of these natural computing models will be used to illustrate the behaviour of two genetic logic gates.

Marian Gheorghe, Savas Konur, Florentin Ipate
Chapter 26. On Improving the Expressive Power of Chemical Computation

The term chemical computation describes information processing setups where an arbitrary reaction system is used to perform information processing. The reaction system consists of a set of reactants and a reaction volume that harbours all chemicals. It has been argued that this type of computation is in principle Turing complete: for any computable function a suitable chemical system can be constructed that implements it. Turing completeness cannot be strictly guaranteed due to the inherent stochasticity of chemical reaction dynamics. The computation process can end prematurely or branch off in the wrong direction. The frequency of such errors defines the so-called fail rate of chemical computation. In this chapter we review recent advances in the field, and also suggest a few novel generic design principles which, when adhered to, should enable engineers to build accurate chemical computers.
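The fail rate of a chemical computation can be illustrated with a toy race between a correct reaction channel and an error channel. For two competing exponential clocks with rates k_ok and k_err, the error branch wins with probability k_err / (k_ok + k_err); the Monte Carlo sketch below samples this directly. Species names and rates are invented for illustration, not taken from the chapter.

```python
import random

K_OK, K_ERR = 9.0, 1.0   # illustrative rate constants, not from the chapter

def run_once(rng):
    """One stochastic 'computation': molecule A converts either to the
    correct product B (rate K_OK) or to the wrong product C (rate K_ERR).
    For competing exponential clocks, the correct branch wins with
    probability K_OK / (K_OK + K_ERR)."""
    return "B" if rng.random() < K_OK / (K_OK + K_ERR) else "C"

def fail_rate(trials=10000, seed=0):
    """Estimate the fraction of runs that branch off in the wrong direction."""
    rng = random.Random(seed)
    return sum(run_once(rng) == "C" for _ in range(trials)) / trials

print(fail_rate())   # close to K_ERR / (K_OK + K_ERR) = 0.1
```

Design principles of the kind the chapter advocates amount to engineering the rate ratios (and reaction network structure) so that this error probability is driven toward zero.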

Erik Bergh, Zoran Konkoli
Chapter 27. Conventional and Unconventional Approaches to Swarm Logic

We consider two possible ways of logically formalizing swarm behaviour: the conventional way, via classical automata, and an unconventional one, via labelled transition systems coded by p-adic integers. Swarm intelligence is one of the directions in emergent computing. We show that the computational complexity of the conventional way, connected to implementations of Kolmogorov–Uspensky machines, Schönhage's storage modification machines, and random-access machines on swarms, is very high. The point is that computable functions can be simulated by swarm behaviours only with low accuracy, because of the following two main features: (i) in swarms we observe propagation in all possible directions; (ii) there are some emergent patterns. These features cannot be defined conventionally by inductive sets. However, we can consider swarms in the universe of streams, which is permanently being expanded and can be coded by p-adic numbers. In this universe we can define functions and relations for the algorithmization of swarm intelligence in an unconventional way.

Andrew Schumann
Chapter 28. On the Inverse Pattern Recognition Problem in the Context of the Time-Series Data Processing with Memristor Networks

The implementation problem deals with identifying the computations that can be performed by a given physical system; this issue is strongly related to the problem of describing the computing capacity of the system. In this chapter, these issues are addressed in the context of on-line (real-time) pattern recognition of time-series data, where memristor networks are used in a reservoir computing setup to perform information processing. Instead of designing a network that can solve a particular task, the inverse question is addressed: given a network of a certain design, which signals might it be particularly adept at recognizing? Several key theoretical concepts have been identified and formalised, which enabled us to approach the problem in a rigorous mathematical way: it has been cast as an optimization problem, and a suitable algorithm for solving it has been suggested. The algorithm has been implemented as computer software: for a given network description, the software produces the time-dependent voltage patterns (signals) that the network recognizes best. These patterns are found by performing a directed random search in the space of input signals. As an illustration of how to use the algorithm, we systematically investigated all networks containing up to four memristors.
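A directed random search of this kind can be sketched generically as hill climbing over discretised input signals (a minimal sketch under assumed interfaces; the memristor-network model, which would supply the `score` function, is beyond its scope):

```python
import random

def directed_random_search(score, n_steps=200, sig_len=32, step=0.1, seed=0):
    """Hill-climb in the space of discretised input signals.

    `score(signal)` is a caller-supplied black-box figure of merit,
    e.g. how well a given network separates this signal from others;
    the network simulation itself is assumed, not implemented here.
    """
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(sig_len)]
    best_score = score(best)
    for _ in range(n_steps):
        # Propose a small Gaussian perturbation of the current best signal.
        cand = [v + rng.gauss(0, step) for v in best]
        s = score(cand)
        if s > best_score:  # keep the perturbation only if it helps
            best, best_score = cand, s
    return best, best_score
```

Because candidates are only accepted when they improve the score, the search is monotone; more elaborate variants (restarts, annealed step sizes) follow the same skeleton.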

Christopher Bennett, Aldo Jesorka, Göran Wendin, Zoran Konkoli
Chapter 29. Self-Awareness in Digital Systems: Augmenting Self-Modification with Introspection to Create Adaptive, Responsive Circuitry

The question of augmenting self-modification with introspection to create flexible, responsive digital circuitry is discussed. A specific self-configurable architecture—the Cell Matrix—is introduced, and features that support introspection and self-modification are described. Specific circuits and mechanisms that utilize these features are discussed, and sample applications that make use of these capabilities are presented. Conclusions are presented, along with comments about future work.

Nicholas J. Macias, Lisa J. K. Durbeck
Chapter 30. Looking for Computers in the Biological Cell. After Twenty Years

This is a personal, to a great extent autobiographical, view of natural computing, especially DNA and membrane computing, against the background of the author's work in these research areas over the last (more than) two decades. The discussion ranges from precise (though informal) computer-science and mathematical issues to very general ones related, e.g., to the history of natural computing, its tendencies, and questions (deemed to remain questions, debatable) of a, say, philosophical flavor.

Gheorghe Păun
Chapter 31. Unconventional Computing: A Brief Subjective History

In this chapter we present a few stages of the evolution of the emerging area of unconventional computing from a personal perspective.

Cristian S. Calude
Backmatter
Metadata
Title
Advances in Unconventional Computing
Edited by
Andrew Adamatzky
Copyright year
2017
Electronic ISBN
978-3-319-33924-5
Print ISBN
978-3-319-33923-8
DOI
https://doi.org/10.1007/978-3-319-33924-5