
2012 | Book

Handbook of Natural Computing

Editors: Grzegorz Rozenberg, Thomas Bäck, Joost N. Kok

Publisher: Springer Berlin Heidelberg


About this book

Natural Computing is the field of research that investigates both human-designed computing inspired by nature and computing taking place in nature. That is, it investigates models and computational techniques inspired by nature, and it also investigates phenomena taking place in nature in terms of information processing.

Examples of the first strand of research covered by the handbook include neural computation inspired by the functioning of the brain; evolutionary computation inspired by Darwinian evolution of species; cellular automata inspired by intercellular communication; swarm intelligence inspired by the behavior of groups of organisms; artificial immune systems inspired by the natural immune system; artificial life systems inspired by the properties of natural life in general; membrane computing inspired by the compartmentalized ways in which cells process information; and amorphous computing inspired by morphogenesis. Other examples of natural-computing paradigms are molecular computing and quantum computing, where the goal is to replace traditional electronic hardware, e.g., by bioware in molecular computing. In molecular computing, data are encoded as biomolecules and then molecular biology tools are used to transform the data, thus performing computations. In quantum computing, one exploits quantum-mechanical phenomena to perform computations and secure communications more efficiently than classical physics and, hence, traditional hardware allows.

The second strand of research covered by the handbook, computation taking place in nature, is represented by investigations into, among others, the computational nature of self-assembly, which lies at the core of nanoscience, the computational nature of developmental processes, the computational nature of biochemical reactions, the computational nature of bacterial communication, the computational nature of brain processes, and the systems biology approach to bionetworks where cellular processes are treated in terms of communication and interaction, and, hence, in terms of computation.

We are now witnessing exciting interaction between computer science and the natural sciences. While the natural sciences are rapidly absorbing notions, techniques and methodologies intrinsic to information processing, computer science is adapting and extending its traditional notion of computation, and computational techniques, to account for computation taking place in nature around us. Natural Computing is an important catalyst for this two-way interaction, and this handbook is a major record of this important development.

Table of Contents

Frontmatter

Section I: Cellular Automata

Frontmatter
1. Basic Concepts of Cellular Automata

This chapter reviews some basic concepts and results of the theory of cellular automata (CA). Topics discussed include classical results from the 1960s, relations between various concepts of injectivity and surjectivity, and dynamical system concepts related to chaos in CA. Most results are reported without full proofs but sometimes examples are provided that illustrate the idea of a proof. The classical results discussed include the Garden-of-Eden theorem and the Curtis–Hedlund–Lyndon theorem, as well as the balance property of surjective CA. Different variants of sensitivity to initial conditions and mixing properties are introduced and related to each other. Also, algorithmic aspects and undecidability results are mentioned.

Jarkko J. Kari
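
As a concrete companion to these concepts, the following minimal Python sketch (not taken from the chapter; rule 110 and the lattice size are arbitrary illustrative choices) implements one synchronous update of a one-dimensional binary CA with periodic boundary, using Wolfram's rule-numbering convention.

```python
def ca_step(cells, rule=110):
    """One synchronous update of a 1D binary CA with periodic boundary conditions."""
    n = len(cells)
    table = [(rule >> k) & 1 for k in range(8)]  # output bit for each 3-cell neighborhood
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

cells = [0] * 40
cells[20] = 1                                    # single seed cell
for _ in range(10):                              # print a short space-time diagram
    print("".join(".#"[c] for c in cells))
    cells = ca_step(cells)
```
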
2. Cellular Automata Dynamical Systems

We present recent studies on cellular automata (CAs) viewed as discrete dynamical systems. In the first part, we illustrate the relations between two important notions: subshift attractors and signal subshifts, measure attractors and particle weight functions. The second part of the chapter considers some operations on the space of one-dimensional CA configurations, namely, shifting and lifting, showing that they conserve many dynamical properties while reducing complexity. The final part reports recent investigations on two-dimensional CA. In particular, we report a construction (slicing construction) that allows us to see a two-dimensional CA as a one-dimensional one and to lift some one-dimensional results to the two-dimensional case.

Alberto Dennunzio, Enrico Formenti, Petr Kůrka
3. Algorithmic Tools on Cellular Automata

This chapter is dedicated to classic tools and methods involved in cellular transformations and constructions of signals and of functions by means of signals, which will be used in subsequent chapters. The term “signal” is widely used in the field of cellular automata (CA). But, as it arises from different levels of understanding, a general definition is difficult to formalize. This chapter deals with a particular notion of signal, which is a basic and efficient tool in cellular algorithmics.

Marianne Delorme, Jacques Mazoyer
4. Language Recognition by Cellular Automata

Cellular automata (CA) comprise a simple and well-formalized model of massively parallel computation, which is known to be capable of universal computation. Because of their parallel behavior, CA have rich abilities of information processing; however, it is not easy to define their power limitations. A convenient approach to characterizing the computation capacity of CA is to investigate their complexity classes. This chapter discusses the CA complexity classes and their relationships with other models of computations.

Véronique Terrier
5. Computations on Cellular Automata

This chapter shows how simple, common algorithms (multiplication and a prime number sieve) lead to very natural cellular automata implementations. All these implementations are built with some natural basic tools: signals and grids. Attention is first focused on the concept of signals and on how simple and rich a tool they are for realizing computations. Looking closely at the space–time diagrams and the dependencies induced by the computations reveals the concept of grids, and shows how powerful they are in the sense of computability theory.

Jacques Mazoyer, Jean-Baptiste Yunès
6. Universalities in Cellular Automata*

This chapter is dedicated to computational universalities in cellular automata, essentially Turing universality, the ability to compute any recursive function, and intrinsic universality, the ability to simulate any other cellular automaton. Constructions of Boolean circuits simulation in the two-dimensional case are explained in detail to achieve both kinds of universality. A detailed chronology of seminal papers is given, followed by a brief discussion of the formalization of universalities. The more difficult one-dimensional case is then discussed. Seminal universal cellular automata and encoding techniques are presented in both dimensions.

Nicolas Ollinger
7. Reversible Cellular Automata

A reversible cellular automaton (RCA) is a special type of cellular automaton (CA) in which every configuration has exactly one predecessor, so its evolution can be traced backward uniquely. Here, we discuss how RCAs are defined, their properties, how one can find and design them, and their computing abilities. After giving the definitions of RCAs, a survey of basic properties concerning the injectivity and surjectivity of their global functions is presented. Three design methods for RCAs are then given: using CAs with block rules, partitioned CAs, and second-order CAs. Next, the computing ability of RCAs is discussed. In particular, we present methods for simulating irreversible CAs, reversible Turing machines, and some other universal systems by RCAs, in order to clarify the universality of RCAs. In spite of the strong constraint of reversibility, RCAs have rich abilities in computing and information processing, and even very simple RCAs have universal computing ability.

Kenichi Morita
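
The second-order construction mentioned above is easy to sketch: XOR-ing the image of the current configuration with the configuration one step earlier yields a dynamics that is invertible for any local rule. Below is a minimal illustration (rule 90 and the periodic boundary are arbitrary assumptions).

```python
def second_order_step(prev, cur, rule=90):
    """x_{t+1}(i) = f(x_t)(i) XOR x_{t-1}(i); invertible regardless of the local rule f."""
    n = len(cur)
    table = [(rule >> k) & 1 for k in range(8)]
    nxt = [table[(cur[(i - 1) % n] << 2) | (cur[i] << 1) | cur[(i + 1) % n]] ^ prev[i]
           for i in range(n)]
    return cur, nxt                # the new (previous, current) pair of configurations

# Backward evolution uses the very same rule with the pair swapped,
# since x_{t-1}(i) = f(x_t)(i) XOR x_{t+1}(i).
```
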
8. Conservation Laws in Cellular Automata

A conservation law in a cellular automaton (CA) is the statement of the invariance of a local and additive energy-like quantity. This chapter reviews the basic theory of conservation laws in cellular automata. A general mathematical framework for formulating conservation laws in cellular automata is presented and several characterizations of them are summarized. Computational problems regarding conservation laws (verification and existence problems) are discussed. Microscopic explanations of the dynamics of the conserved quantities in terms of flows and particle flows are explored. The related concept of dissipating energy-like quantities is also addressed.

Siamak Taati
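
A classic concrete example (chosen here for illustration, not taken from the chapter) is the number-conserving “traffic” rule 184, whose conserved additive quantity is the particle count:

```python
import random

def rule184_step(cells):
    """Rule 184: a particle (1) moves one cell right iff the cell ahead is empty."""
    n = len(cells)
    return [int((cells[(i - 1) % n] == 1 and cells[i] == 0) or
                (cells[i] == 1 and cells[(i + 1) % n] == 1))
            for i in range(n)]

cells = [random.randint(0, 1) for _ in range(50)]
total = sum(cells)
for _ in range(100):
    cells = rule184_step(cells)
    assert sum(cells) == total   # particle number is invariant under the dynamics
```
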
9. Cellular Automata and Lattice Boltzmann Modeling of Physical Systems

Cellular automata (CA) and lattice Boltzmann (LB) methods provide a natural modeling framework to describe and study many physical systems composed of interacting components. The reason for this success is the close relation between these methods and a mesoscopic abstraction of many natural phenomena. The theoretical basis of the CA and LB approaches are introduced and their potential is illustrated for several applications in physics, biophysics, environmental sciences, traffic models, and multiscale modeling.

Bastien Chopard

Section II: Neural Computation

Frontmatter
10. Computing with Spiking Neuron Networks

Spiking Neuron Networks (SNNs) are often referred to as the third generation of neural networks. Strongly inspired by natural computing in the brain and by recent advances in neuroscience, they derive their strength and interest from an accurate modeling of synaptic interactions between neurons that takes into account the time of spike firing. SNNs exceed the computational power of neural networks made of threshold or sigmoidal units. Based on dynamic event-driven processing, they open up new horizons for developing models with an exponential capacity to memorize and a strong ability to adapt quickly. Today, the main challenge is to discover efficient learning rules that take advantage of the specific features of SNNs while keeping the nice properties (general-purpose nature, ease of use, available simulators, etc.) of traditional connectionist models. This chapter relates the history of the “spiking neuron” in Sect. 1 and summarizes the models of neurons and synaptic plasticity most currently in use in Sect. 2. The computational power of SNNs is addressed in Sect. 3, and the problem of learning in networks of spiking neurons is tackled in Sect. 4, with insights into the approaches currently explored for solving it. Finally, Sect. 5 discusses application domains and implementation issues, and proposes several simulation frameworks.

Hélène Paugam-Moisy, Sander Bohte
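
For a minimal taste of spiking-neuron modeling, here is a sketch of a leaky integrate-and-fire neuron (a simplification relative to the models surveyed in the chapter; all constants are illustrative).

```python
import numpy as np

def lif(current, dt=0.1, tau=10.0, v_rest=-65.0, v_th=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + I(t), spike at threshold."""
    v, trace, spike_times = v_rest, [], []
    for step, i_t in enumerate(current):
        v += dt / tau * (-(v - v_rest) + i_t)    # Euler step of the membrane equation
        if v >= v_th:                            # threshold crossing: emit a spike
            spike_times.append(step * dt)
            v = v_reset                          # hard reset after the spike
        trace.append(v)
    return np.array(trace), spike_times

trace, spikes = lif(np.full(1000, 20.0))         # constant input current (arbitrary units)
```
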
11. Image Quality Assessment — A Multiscale Geometric Analysis-Based Framework and Examples

This chapter is about objective image quality assessment (IQA), which has been recognized as an effective and efficient way to predict the visual quality of distorted images. IQA methods are commonly classified by their degree of dependence on the original image into full-reference (FR), no-reference (NR), and reduced-reference (RR) approaches. To begin with, we introduce the fundamentals of IQA and give a broad treatment of the state of the art. We focus on a novel framework for IQA that mimics the human visual system (HVS) by incorporating the merits of multiscale geometric analysis (MGA), the contrast sensitivity function (CSF), and Weber's law of just noticeable difference (JND). Thorough empirical studies carried out on the laboratory for image and video engineering (LIVE) database against subjective mean opinion scores (MOS) demonstrate that the proposed framework is consistent with subjective perception, and that the objective assessment results reflect the visual quality of the images well.

Xinbo Gao, Wen Lu, Dacheng Tao, Xuelong Li  
12. Nonlinear Process Modelling and Control Using Neurofuzzy Networks

This chapter presents neurofuzzy networks for nonlinear process modeling and control. The neurofuzzy network uses local linear models to model a nonlinear process and the local linear models are combined using center of gravity (COG) defuzzification. In order to be able to provide accurate long-range predictions, a recurrent neurofuzzy network structure is developed. An advantage of neurofuzzy network models is that they are easy to interpret. Insight about the process characteristics at different operating regions can be easily obtained from the neurofuzzy network parameters. Based on the neurofuzzy network model, nonlinear model predictive controllers can be developed as a nonlinear combination of several local linear model predictive controllers that have analytical solutions. Applications to the modeling and control of a neutralization process and a fed-batch process demonstrate that the proposed recurrent neurofuzzy network is very effective in the modeling and control of nonlinear processes.

Jie Zhang
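
The following sketch illustrates the core idea of blending local linear models with center-of-gravity (COG) defuzzification, here with Gaussian membership functions; the centers, widths, and local-model coefficients are hypothetical, and the chapter's recurrent structure is omitted.

```python
import numpy as np

def gaussian(x, c, s):
    """Membership degree of input x in a fuzzy set centered at c with width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def neurofuzzy_predict(x, centers, widths, local_models):
    """Blend local linear models y_i = a_i * x + b_i by normalized firing strengths (COG)."""
    mu = np.array([gaussian(x, c, s) for c, s in zip(centers, widths)])
    w = mu / mu.sum()                               # normalized membership weights
    local = np.array([a * x + b for a, b in local_models])
    return float(w @ local)                         # weighted average of local outputs

# Hypothetical two-region model of a nonlinear process.
y = neurofuzzy_predict(0.3, centers=[0.0, 1.0], widths=[0.5, 0.5],
                       local_models=[(2.0, 0.1), (-1.0, 2.5)])
```
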
13. Independent Component Analysis

Independent component analysis (ICA) is a statistical method whose goal is to decompose multivariate data into a linear sum of non-orthogonal basis vectors with coefficients (also known as encoding variables, latent variables, or hidden variables) that are statistically independent. ICA generalizes widely used subspace analysis methods such as principal component analysis (PCA) and factor analysis, allowing latent variables to be non-Gaussian and basis vectors to be non-orthogonal in general. ICA is a density-estimation method in which a linear model is learned such that the probability distribution of the observed data is best captured, whereas factor analysis aims at best modeling the covariance structure of the observed data. We begin with the fundamental theory and then present various principles and algorithms for ICA.

Seungjin Choi
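
As one concrete algorithmic instance, here is a sketch of one-unit FastICA with deflation on a synthetic two-source mixture (the sources, mixing matrix, and tanh nonlinearity are illustrative assumptions; recovered sources are determined only up to sign, scale, and permutation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two non-Gaussian sources: a sine wave and a square wave.
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(3 * t), np.sign(np.sin(5 * t))]
A = np.array([[1.0, 0.5], [0.6, 1.0]])             # hypothetical mixing matrix
X = S @ A.T                                        # observed mixtures

# Center and whiten so that cov(Z) = I.
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X, rowvar=False))
Z = X @ E @ np.diag(d ** -0.5) @ E.T

# One-unit FastICA with tanh nonlinearity; deflation for subsequent components.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(Z @ w)
        w_new = (Z * g[:, None]).mean(axis=0) - (1 - g ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)         # remove already-found directions
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1) < 1e-8
        w = w_new
        if done:
            break
    W[i] = w

S_est = Z @ W.T                                    # estimated sources
```
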
14. Neural Networks for Time-Series Forecasting

Neural networks have become an important method for time-series forecasting, and there is increasing interest in using them to model and forecast time series. This chapter provides a review of some recent developments in time-series forecasting with neural networks, a brief description of neural networks, their advantages over traditional forecasting models, and some recent applications. Several important data and modeling issues for time-series forecasting are highlighted. In addition, recent developments in several methodological areas, such as seasonal time-series modeling, multi-period forecasting, and the ensemble method, are reviewed.

G. Peter Zhang
15. SVM Tutorial — Classification, Regression and Ranking

Support vector machines (SVMs) have been extensively researched in the data mining and machine learning communities for the last decade, and applied in various domains. They represent a set of supervised learning techniques that create a function from training data, which usually consists of pairs of input objects (typically vectors) and desired outputs. SVMs learn a function that generates the desired output given the input, and the learned function can be used to predict the output for a new object. They belong to a family of generalized linear classifiers in which the classification (or boundary) function is a hyperplane in the feature space. This chapter introduces the basic concepts and techniques of SVMs for learning classification, regression, and ranking functions.

Hwanjo Yu, Sungchul Kim
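
To give a flavor of how such a separating hyperplane can be learned, here is a primal sub-gradient sketch in the style of the Pegasos algorithm (an illustrative choice of solver, not the chapter's presentation); labels are assumed to lie in {-1, +1}.

```python
import numpy as np

def linear_svm(X, y, lam=0.01, epochs=20, seed=0):
    """Minimize lam/2 * ||w||^2 + mean(hinge loss) by stochastic sub-gradient descent."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                  # decreasing step size
            w *= (1 - eta * lam)                   # gradient of the regularizer
            if y[i] * (w @ X[i]) < 1:              # margin violated: hinge term active
                w += eta * y[i] * X[i]
    return w                                       # predict with sign(w @ x)
```
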
16. Fast Construction of Single-Hidden-Layer Feedforward Networks

In this chapter, two major issues are addressed: (i) how to obtain a more compact network architecture and (ii) how to reduce the overall computational complexity. An integrated analytic framework is introduced for the fast construction of single-hidden-layer feedforward networks (SLFNs) with two sequential phases. The first phase of the algorithm focuses on the computational efficiency for fast computation of the unknown parameters and fast selection of the hidden nodes. The second phase focuses on improving the performance of the network obtained in the first phase. The proposed algorithm is evaluated on several benchmark problems.

Kang Li, Guang-Bin Huang, Shuzhi Sam Ge
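
One well-known instance of fast SLFN construction, sketched below for orientation, fixes random hidden-layer parameters and computes the output weights analytically by least squares (an extreme-learning-machine-style illustration under simplified assumptions, not the chapter's specific two-phase algorithm).

```python
import numpy as np

def slfn_fit(X, y, n_hidden=50, seed=0):
    """Random input weights and biases; analytic least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                          # hidden-layer activation matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # solve min ||H @ beta - y||
    return W, b, beta

def slfn_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```
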
17. Modeling Biological Neural Networks

In recent years, many new experimental studies on emerging phenomena in neural systems have been reported. The high efficiency of living neural systems to encode, process, and learn information has stimulated an increased interest among theoreticians in developing mathematical approaches and tools to model biological neural networks. In this chapter we review some of the most popular models of neurons and neural networks that help us understand how living systems perform information processing. Beyond the fundamental goal of understanding the function of the nervous system, the lessons learned from these models can also be used to build bio-inspired paradigms of artificial intelligence and robotics.

Joaquin J. Torres, Pablo Varona
18. Neural Networks in Bioinformatics

Over the last two decades, neural networks (NNs) gradually became one of the indispensable tools in bioinformatics. This was fueled by the development and rapid growth of numerous biological databases that store data concerning DNA and RNA sequences, protein sequences and structures, and other macromolecular structures. The size and complexity of these data require the use of advanced computational tools. Computational analysis of these databases aims at exposing hidden information that provides insights which help with understanding the underlying biological principles. The most commonly explored capability of neural networks that is exploited in the context of bioinformatics is prediction. This is due to the existence of a large body of raw data and the availability of a limited amount of data that are annotated and can be used to derive the prediction model. In this chapter we discuss and summarize applications of neural networks in bioinformatics, with a particular focus on applications in protein bioinformatics. We summarize the most often used neural network architectures, and discuss several specific applications including prediction of protein secondary structure, solvent accessibility, and binding residues.

Ke Chen, Lukasz A. Kurgan
19. Self-organizing Maps

A topographic map is a two-dimensional, nonlinear approximation of a potentially high-dimensional data manifold, which makes it an appealing instrument for visualizing and exploring high-dimensional data. The self-organizing map (SOM) is the most widely used algorithm, and it has led to thousands of applications in very diverse areas. In this chapter we introduce the SOM algorithm, discuss its properties and applications, and also discuss some of its extensions and new types of topographic map formation, such as those that can be used for processing categorical data, time series, and tree-structured data.

Marc M. Van Hulle
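
A minimal online SOM training loop is sketched below (the grid size, decay schedules, and Gaussian neighborhood are conventional illustrative choices):

```python
import numpy as np

def train_som(data, rows=10, cols=10, iters=5000, lr0=0.5, sigma0=3.0, seed=0):
    """Online SOM: move the best-matching unit and its grid neighbors toward each sample."""
    rng = np.random.default_rng(seed)
    w = rng.random((rows, cols, data.shape[1]))     # one weight vector per map node
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(iters):
        frac = t / iters
        lr = lr0 * (1 - frac)                       # linearly decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5           # shrinking neighborhood radius
        x = data[rng.integers(len(data))]
        dist = np.linalg.norm(w - x, axis=2)
        bmu = np.unravel_index(dist.argmin(), dist.shape)   # best-matching unit
        h = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=2) / (2 * sigma ** 2))
        w += lr * h[..., None] * (x - w)            # neighborhood-weighted update
    return w

som = train_som(np.random.rand(500, 3))             # e.g., mapping RGB colors to a 2D grid
```
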

Section III: Evolutionary Computation

Frontmatter
20. Generalized Evolutionary Algorithms

People have been inventing and tinkering with various forms of evolutionary algorithms (EAs) since the 1950s when digital computers became more readily available to scientists and engineers. Today we see a wide variety of EAs and an impressive array of applications. This diversity is both a blessing and a curse. It serves as strong evidence for the usefulness of these techniques, but makes it difficult to see “the big picture” and make decisions regarding which EAs are best suited for new application areas. The purpose of this chapter is to provide a broader “generalized” perspective.

Kenneth De Jong
21. Genetic Algorithms — A Survey of Models and Methods

This chapter first reviews the simple genetic algorithm. Mathematical models of the genetic algorithm are also reviewed, including the schema theorem, exact infinite-population models, and exact Markov models for finite populations. The use of bit representations, including Gray encodings and binary encodings, is discussed. Selection, including roulette-wheel selection, rank-based selection, and tournament selection, is also described. The chapter then reviews other forms of genetic algorithms, including the steady-state Genitor algorithm and the CHC (cross-generational elitist selection, heterogeneous recombination, and cataclysmic mutation) algorithm. Finally, it examines landscape structures that can cause genetic algorithms to fail and presents an application of genetic algorithms to resource scheduling, a domain in which they have been highly successful.

Darrell Whitley, Andrew M. Sutton
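
As a concrete illustration of several of the reviewed components (tournament selection, one-point crossover, bit-flip mutation), here is a simple GA on the OneMax problem; all parameter settings are illustrative rather than recommended.

```python
import random

def tournament(pop, fitness, k=2):
    """k-way tournament selection: the best of k uniformly drawn individuals."""
    return max(random.sample(pop, k), key=fitness)

def simple_ga(n_bits=50, pop_size=40, gens=100, pc=0.9):
    pm = 1.0 / n_bits                                 # per-bit mutation rate
    fitness = sum                                     # OneMax: count the 1 bits
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(pop, fitness), tournament(pop, fitness)
            if random.random() < pc:                  # one-point crossover
                cut = random.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            nxt += [[b ^ (random.random() < pm) for b in c] for c in (p1, p2)]
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = simple_ga()
```
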
22. Evolutionary Strategies

Evolutionary strategies (ES) are part of the all-embracing class of evolutionary algorithms (EA). This chapter presents a compact tour through the developments regarding ES from the early beginning up to the recent past before advanced techniques and the state-of-the-art techniques are explicated in detail. The characterizing features that distinguish ES from other subclasses of EA are discussed as well.

Günter Rudolph
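
A canonical and historically important member of this class is the (1+1)-ES with the 1/5th success rule for step-size adaptation, sketched below (the adaptation constants are conventional illustrative values).

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=1.0, iters=2000, seed=0):
    """(1+1)-ES minimizing f, with 1/5th-success-rule step-size control."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, successes = f(x), 0
    for t in range(1, iters + 1):
        y = x + sigma * rng.normal(size=x.shape)   # isotropic Gaussian mutation
        fy = f(y)
        if fy < fx:                                # plus-selection: keep the better point
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:                            # adapt sigma every 20 mutations
            sigma *= 1.5 if successes / 20 > 0.2 else 0.82
            successes = 0
    return x, fx

x, fx = one_plus_one_es(lambda v: float(np.sum(v ** 2)), np.ones(5))
```
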
23. Evolutionary Programming

Evolutionary programming (EP) has a long history of development and application. This chapter provides a basic background in EP in terms of its procedure, history, extensions, and application. As one of the founding approaches in the field of evolutionary computation (EC), EP shares important similarities and differences with other EC approaches.

Gary B. Fogel
24. Genetic Programming — Introduction, Applications, Theory and Open Issues

Genetic programming (GP) is an evolutionary approach that extends genetic algorithms to allow the exploration of the space of computer programs. Like other evolutionary algorithms, GP works by defining a goal in the form of a quality criterion (or fitness) and then using this criterion to evolve a set (or population) of candidate solutions (individuals) by mimicking the basic principles of Darwinian evolution. GP breeds the solutions to problems using an iterative process involving the probabilistic selection of the fittest solutions and their variation by means of a set of genetic operators, usually crossover and mutation. GP has been successfully applied to a number of challenging real-world problem domains. Its operations and behavior are now reasonably well understood thanks to a variety of powerful theoretical results. In this chapter, we introduce the main definitions and features of GP and describe its typical operations. We then survey some of its applications. We also review some important theoretical results in this field, including some very recent ones, and discuss some of the most challenging open issues and directions for future research.

Leonardo Vanneschi, Riccardo Poli
25. The Dynamical Systems Approach — Progress Measures and Convergence Properties

This chapter considers local progress and the dynamical systems approach. The approach can be used for a quantitative analysis of the behavior of evolutionary algorithms with respect to the question of convergence and the working mechanism of these algorithms. Results obtained so far for evolution strategies on various fitness functions are described and discussed before presenting drawbacks and limitations of the approach. Furthermore, a comparison with other analysis methods is given.

Silja Meyer-Nieberg, Hans-Georg Beyer
26. Computational Complexity of Evolutionary Algorithms

When applying evolutionary algorithms to the task of optimization it is important to have a clear understanding of their capabilities and limitations. By analyzing the optimization time of various variants of evolutionary algorithms for classes of concrete optimization problems, important insights can be gained about what makes problems easy or hard for these heuristics. Still more important than the derivation of such specific results is the development of methods that facilitate rigorous analysis and enable researchers to derive such results for new variants of evolutionary algorithms and more complex problems. The development of such methods and analytical tools is a significant and very active area of research. An overview of important methods and their foundations is presented together with exemplary applications. This enables one to apply these methods to concrete problems and participate in the theoretical foundation of evolutionary computing.

Thomas Jansen
27. Stochastic Convergence

Since the state transitions of an evolutionary algorithm (EA) are of a stochastic nature, the deterministic concept of “convergence to the optimum” is not appropriate. In order to clarify the exact semantics of a phrase like “the EA converges to the global optimum,” one must first establish the connection between EAs and stochastic processes before distinguishing between the various modes of stochastic convergence of stochastic processes. Subsequently, this powerful framework is applied to derive convergence results for EAs.

Günter Rudolph
28. Evolutionary Multiobjective Optimization

This chapter provides an overview of the branch of evolutionary computation that is dedicated to solving optimization problems with multiple objective functions. On the one hand, it sketches the foundations of multiobjective optimization and discusses general approaches to deal with multiple optimization criteria. On the other hand, it summarizes algorithmic concepts that are employed when designing corresponding search methods and briefly comments on the issue of performance assessment. This chapter concludes with a summary of the main application areas of evolutionary multiobjective optimization, namely, learning/decision making and multiobjectivization.

Eckart Zitzler
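
The central ordering notion behind these methods is Pareto dominance. A tiny sketch (for minimization) of the dominance test and of extracting the nondominated set from a list of objective vectors:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse everywhere, strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """The Pareto front: points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = nondominated([(1, 5), (2, 2), (3, 1), (4, 4)])   # -> [(1, 5), (2, 2), (3, 1)]
```
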
29. Memetic Algorithms

Memetic algorithms (MAs) have become one of the key methodologies behind solvers that are capable of tackling very large, real-world optimization problems. They are being actively investigated in research institutions as well as broadly applied in industry. This chapter provides a pragmatic guide to the key design issues underpinning the engineering of memetic algorithms. It begins with a brief contextual introduction to memetic algorithms and then moves on to define a pattern language for MAs. For each pattern, an associated design issue is tackled and illustrated with examples from the literature. The last section of this chapter “fast forwards” to the future and mentions what, in our view, are the key challenges that scientists and practitioners will need to face if memetic algorithms are to remain a relevant technology in the next 20 years.

Natalio Krasnogor
30. Genetics-Based Machine Learning

This is a survey of the field of genetics-based machine learning (GBML): the application of evolutionary algorithms (EAs) to machine learning. We assume readers are familiar with evolutionary algorithms and their application to optimization problems, but not necessarily with machine learning. We briefly outline the scope of machine learning, introduce the more specific area of supervised learning, contrast it with optimization, and present arguments for and against GBML. Next we introduce a framework for GBML, which includes ways of classifying GBML algorithms and a discussion of the interaction between learning and evolution. We then review the following areas with emphasis on their evolutionary aspects: GBML for subproblems of learning, genetic programming, evolving ensembles, evolving neural networks, learning classifier systems, and genetic fuzzy systems.

Tim Kovacs
31. Coevolutionary Principles

Coevolutionary algorithms approach problems for which no function for evaluating potential solutions is present or known. Instead, algorithms rely on the aggregation of outcomes from interactions among evolving entities in order to make selection decisions. Given the lack of an explicit yardstick, understanding the dynamics of coevolutionary algorithms, judging whether a given algorithm is progressing, and designing effective new algorithms present unique challenges unlike those faced by optimization or evolutionary algorithms. The purpose of this chapter is to provide a foundational understanding of coevolutionary algorithms and to highlight critical theoretical and empirical work done over the last two decades. This chapter outlines the ends and means of coevolutionary algorithms: what they are meant to find, and how they should find it.

Elena Popovici, Anthony Bucci, R. Paul Wiegand, Edwin D. De Jong
32. Niching in Evolutionary Algorithms

Niching techniques extend standard evolutionary algorithms (EAs) to multimodal domains, in scenarios where the location of multiple optima is targeted and where EAs tend to lose population diversity and converge to a solitary basin of attraction. The development and investigation of EA niching methods have been carried out for several decades, primarily within the branches of genetic algorithms (GAs) and evolution strategies (ES). Altogether, this research has yielded a long list of algorithmic approaches, some of which are bio-inspired by various concepts of organic speciation and ecological niches, while others are more computationally oriented. This chapter lays the theoretical foundations for niching, from the perspectives of both biology and optimization, provides a summary of the main contemporary niching techniques within EAs, and discusses experimental methodology for niching techniques. This is accompanied by discussions of specific case studies, including the employment of the popular covariance matrix adaptation ES within a niching framework, the application to real-world problems, and the treatment of the so-called niche radius problem.

Ofer M. Shir
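
One of the earliest GA niching techniques is fitness sharing, which divides an individual's raw fitness by a niche count so that crowded basins are penalized; a generic sketch (sigma_share and alpha are the usual problem-dependent parameters):

```python
import numpy as np

def shared_fitness(pop, fitness, sigma_share=1.0, alpha=1.0):
    """Raw fitness divided by the niche count of each individual (maximization setting)."""
    pop = np.asarray(pop, dtype=float)
    d = np.linalg.norm(pop[:, None] - pop[None, :], axis=-1)   # pairwise distances
    sh = np.where(d < sigma_share, 1 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)        # self-distance 0 contributes 1, so count >= 1
    return np.array([fitness(x) for x in pop]) / niche_count
```
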

Section IV: Molecular Computation

Frontmatter
33. DNA Computing — Foundations and Implications

DNA computing is an area of natural computing based on the idea that molecular biology processes can be used to perform arithmetic and logic operations on information encoded as DNA strands. The first part of this review outlines the basic molecular biology notions necessary for understanding DNA computing, recounts the first experimental demonstration of DNA computing (Adleman's 7-vertex Hamiltonian Path Problem), and describes the milestone wet-laboratory experiment that first demonstrated the potential of DNA computing to outperform the computational ability of an unaided human (a 20-variable instance of 3-SAT). The second part of the review describes how the properties of DNA-based information, and in particular the Watson–Crick complementarity of DNA single strands, have influenced areas of theoretical computer science such as formal language theory, coding theory, automata theory, and combinatorics on words. More precisely, we describe the problem of DNA encoding design, present an analysis of intramolecular bonds, define and characterize languages that avoid certain undesirable intermolecular bonds, and investigate languages whose words avoid even imperfect bindings between their constituent strands. We also present another, vectorial, representation of DNA strands, and two computational models based on this representation: sticker systems and Watson–Crick automata. Lastly, we describe the influence that properties of DNA-based information have had on research in combinatorics on words, by enumerating several natural generalizations of classical concepts: pseudopalindromes, pseudoperiodicity, Watson–Crick conjugate and commutative words, involutively bordered words, and pseudoknot-bordered words. In addition, we outline natural extensions in this context of two of the most fundamental results in combinatorics on words, namely Fine and Wilf's theorem and the Lyndon–Schützenberger result.

Lila Kari, Shinnosuke Seki, Petr Sosík
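
Watson–Crick complementarity, which underlies the encoding-design questions above, is straightforward to express in code; a small sketch with illustrative helper names:

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def wc_complement(strand):
    """Watson–Crick complement: swap A<->T and C<->G, then reverse the strand."""
    return strand.translate(COMPLEMENT)[::-1]

def is_wc_palindrome(strand):
    """A strand equal to its own WC complement can bind to copies of itself."""
    return strand == wc_complement(strand)

print(wc_complement("ACCTG"))       # CAGGT
print(is_wc_palindrome("ACGT"))     # True
```
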
34. Molecular Computing Machineries — Computing Models and Wet Implementations

This chapter presents both the theoretical results of molecular computing models mainly from the viewpoint of computing theory and the biochemical implementations of those models in wet lab experiments. Selected topics include a variety of molecular computing models with computabilities ranging from finite automata to Turing machines, and the associated issues of molecular implementation, as well as some applications to logical controls for circuits and medicines.

Masami Hagiya, Satoshi Kobayashi, Ken Komiya, Fumiaki Tanaka, Takashi Yokomori
35. DNA Computing by Splicing and by Insertion–Deletion

This chapter is devoted to two of the most developed theoretical computing models inspired by DNA biochemistry, computing by splicing (a formal operation with strings that models the recombination of DNA molecules under the influence of restriction enzymes and ligase) and by insertion–deletion. Only basic ideas and results are presented, as well as a comprehensive – although not complete – list of titles where further information can be found.

Gheorghe Păun
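
The splicing operation itself is easy to state on strings. In the sketch below, a rule r = (u1, u2; u3, u4) cuts x between u1 and u2 and y between u3 and u4, then recombines the fragments (simplified: only the first occurrence of each site is used; the example strings are made up):

```python
def splice(x, y, rule):
    """One splicing step: x = x1 u1 u2 x2 and y = y1 u3 u4 y2
    yield x1 u1 u4 y2 and y1 u3 u2 x2."""
    u1, u2, u3, u4 = rule
    i, j = x.find(u1 + u2), y.find(u3 + u4)
    if i < 0 or j < 0:
        return set()
    return {x[:i + len(u1)] + u4 + y[j + len(u3) + len(u4):],
            y[:j + len(u3)] + u2 + x[i + len(u1) + len(u2):]}

print(splice("caTTGGat", "ggTTCCtt", ("TT", "GG", "TT", "CC")))
# {'caTTCCtt', 'ggTTGGat'}
```
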
36. Bacterial Computing and Molecular Communication

Emerging technologies that enable the engineering of nano- or cell-scale systems using biological and/or artificially synthesized molecules as computing and communication devices have been receiving increased attention. This chapter focuses on “bacterial computing,” which attempts to create an autonomous cell-based Turing machine, and “molecular communication,” which attempts to create non-electromagnetic-wave-based communication paradigms by using molecules as an information medium. Section 2 introduces seminal works for constructing in vivo logic circuits, and focuses on research into implementing in vitro and in vivo finite automata in the framework of DNA-based computing. Furthermore, the first experimental development of a programmable in vivo computer that executes a finite-state automaton in bacteria is highlighted. Section 3 reports on the system design, experimental results, and research trends of molecular communication components (senders, molecular communication interfaces, molecular propagation systems, and receivers) that use bacteria, lipids, proteins, and DNA as communication devices.

Yasubumi Sakakibara, Satoshi Hiyama
37. Computational Nature of Gene Assembly in Ciliates

Ciliates are a very diverse and ancient group of unicellular eukaryotic organisms. A feature that is essentially unique to ciliates is the nuclear dualism, meaning that they have two functionally different types of nuclei, the macronucleus and the micronucleus. During sexual reproduction a micronucleus is transformed into a macronucleus – this process is called gene assembly, and it is the most involved naturally occurring example of DNA processing that is known to us. Gene assembly is a fascinating research topic from both the biological and the computational points of view. In this chapter, several approaches to the computational study of gene assembly are considered. This chapter is self-contained in the sense that the basic biology of gene assembly as well as mathematical preliminaries are introduced. Two of the most studied molecular models for gene assembly, intermolecular and intramolecular, are presented and the main mathematical approaches used in studying these models are discussed. The topics discussed in more detail include the string and graph rewriting models, invariant properties, template-based DNA recombination, and topology-based models. This chapter concludes with a brief discussion of a number of research topics that, because of the space restrictions, could not be covered in this chapter.

Robert Brijder, Mark Daley, Tero Harju, Nataša Jonoska, Ion Petre, Grzegorz Rozenberg
38. DNA Memory

This chapter summarizes the efforts that have been made so far to build a huge memory using DNA molecules. These efforts are targeted at increasing the size of the address space of a molecular memory and making operations on a specified word in the address space more efficient and reliable. The former issue should be solved by careful design of the base sequences of the address portions. The latter issue depends on the architecture of a molecular memory and the available memory operations. Concrete examples of molecular memories described in this chapter are classified into in vitro DNA memory, DNA memory on surfaces, and in vivo DNA memory. This chapter also describes the technology for designing base sequences of DNA molecules.

Masanori Arita, Masami Hagiya, Masahiro Takinoue, Fumiaki Tanaka
39. Engineering Natural Computation by Autonomous DNA-Based Biomolecular Devices

This chapter overviews the past and current state of a selected part of the emerging research area of DNA-based biomolecular devices. We particularly emphasize molecular devices that are autonomous (executing steps with no exterior mediation after starting) and programmable (the tasks executed can be modified without entirely redesigning the nanostructure). We discuss work in this area that makes use of synthetic DNA to self-assemble into DNA nanostructure devices. Recently, there has been a series of impressive experimental results that have taken the technology from a state of intriguing possibilities into demonstrated capabilities of quickly increasing scale. We discuss various such programmable molecular-scale devices that achieve: computation, 2D patterning, amplified sensing, and molecular or nanoscale transport. This article is written for a general audience, and particularly emphasizes the interdisciplinary aspects of this quickly evolving and exciting field.

John H. Reif, Thomas H. LaBean
40. Membrane Computing

This chapter introduces the basic ideas, results, and applications of membrane computing, a branch of natural computing inspired by the structure and the functioning of the living cell, as well as by the cooperation of cells in tissues, colonies of cells, and neural nets.

Gheorghe Păun

Section V: Quantum Computation

Frontmatter
41. Mathematics for Quantum Information Processing

Information processing in the physical world must be based on a physical system representing the information. If such a system has a quantum physical description, then one talks about quantum information processing. The mathematics needed to handle quantum information is somewhat more complicated than that used for classical information. The purpose of this chapter is to introduce the basic mathematical machinery used to describe quantum systems with finitely many potential observable values. A typical example of such a system is the quantum bit (or qubit, for short), which has at most two potential values for any physical observable. The mathematics for quantum information processing is based on vector spaces over complex numbers, and can be seen as a straightforward generalization of linear algebra of real vector spaces.

Mika Hirvensalo
42. Bell's Inequalities — Foundations and Quantum Communication

For individual events, quantum mechanics makes only probabilistic predictions. Can one go beyond quantum mechanics in this respect? This question has been a subject of debate and research since the early days of the theory. Efforts to construct a deeper, realistic level of physical description, in which individual systems have, as in classical physics, preexisting properties revealed by measurements, are known as hidden-variable programs. Demonstrations that a hidden-variable program necessarily requires outcomes of certain experiments to disagree with the predictions of quantum theory are called “no-go theorems.” The Bell theorem excludes local hidden-variable theories. The Kochen–Specker theorem excludes non-contextual hidden-variable theories. In local hidden-variable theories, faster-than-light influences are forbidden, so the results for a given measurement (actual, or just potentially possible) are independent of the settings of other measurement devices at space-like separation. In non-contextual hidden-variable theories, the predetermined results of a (degenerate) observable are independent of any other observables that are measured jointly with it. It is a fundamental doctrine of quantum information science that quantum communication and quantum computation outperform their classical counterparts. If this is to be true, some fundamental quantum characteristics must be behind the better-than-classical performance of information-processing tasks. This chapter aims at establishing connections between certain quantum information protocols and foundational issues in quantum theory. After a brief discussion of the most common misinterpretations of Bell's theorem and of what its real meaning is, we demonstrate how quantum contextuality and the violation of local realism can be used as resources in quantum information applications. In any case, readers should bear in mind that this chapter is not a review of the literature on the subject, but rather a quick introduction to it.

Časlav Brukner, Marek Żukowski
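
A small numeric illustration of the Bell theorem (assuming the standard singlet-state correlation E(a, b) = −cos(a − b) from quantum mechanics): at suitably chosen analyzer angles, the CHSH combination reaches 2√2, exceeding the bound of 2 obeyed by every local hidden-variable theory.

```python
import numpy as np

def E(a, b):
    """Quantum-mechanical correlation of spin measurements on a singlet pair."""
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2                   # Alice's two analyzer settings
b, b2 = np.pi / 4, 3 * np.pi / 4         # Bob's two analyzer settings
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)                                 # 2.828... = 2*sqrt(2) > 2
```
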
43. Algorithms for Quantum Computers

This chapter surveys the field of quantum computer algorithms. It gives a taste of both the breadth and the depth of the known algorithms for quantum computers, focusing on some of the more recent results. It begins with a brief review of quantum Fourier transform-based algorithms, followed by quantum searching and some of its early generalizations. It continues with a more in-depth description of two more recent developments: algorithms developed in the quantum walk paradigm, followed by tensor network evaluation algorithms (which include approximating the Tutte polynomial).

Jamie Smith, Michele Mosca
44. Physical Implementation of Large-Scale Quantum Computation

The development of large-scale quantum computing started rapidly in 1994 after the presentation of the factoring algorithm by Peter Shor. In this review, the basic requirements for the successful implementation of quantum algorithms on physical systems are first discussed and then a few basic concepts in actual information processing are presented. After that, the current situation is evaluated, concentrating on the most promising methods for which actual experimental progress has taken place. Among these are trapped ions, nuclear spins, and various solid-state structures such as quantum dots and Josephson junctions.

Kalle-Antti Suominen
45. Quantum Cryptography

Several results in quantum cryptography will be surveyed in this chapter. After a brief introduction to classical cryptography, we provide some cryptographic primitives from the viewpoint of quantum computational complexity theory, which are helpful to get an idea of quantum cryptographic protocols. We then examine cryptographic protocols of quantum key distribution, quantum bit commitment, quantum oblivious transfer, quantum zero-knowledge, quantum public-key encryption, quantum digital signature, and their security issues.

Takeshi Koshiba
46. BQP-Complete Problems

The concept of completeness is one of the most important notions in theoretical computer science. PromiseBQP-complete problems are those in PromiseBQP to which all other PromiseBQP problems can be reduced in classical probabilistic polynomial time. Studies of PromiseBQP-complete problems can deepen our understanding of both the power and the limitations of efficient quantum computation. In this chapter we give a review of known PromiseBQP-complete problems, including various problems related to the eigenvalues of sparse Hamiltonians and problems about additive approximation of Jones polynomials and Tutte polynomials.

Shengyu Zhang

Section VI: Broader Perspective – Nature-Inspired Algorithms

Frontmatter
47. An Introduction to Artificial Immune Systems

The field of artificial immune systems (AIS) comprises two threads of research: the employment of mathematical and computational techniques in the modeling of immunology, and the incorporation of immune system metaphors in the development of engineering solutions. The former permits the integration of immunological data and sub-models into a coherent whole, which can be of value to immunologists in the facilitation of immunological understanding, hypothesis testing, and the direction of future research. The latter attempts to harness the perceived properties of the immune system in the solving of engineering problems. This chapter concentrates on the latter: the development and application of immune inspiration to engineering solutions.

Mark Read, Paul S. Andrews, Jon Timmis
48. Swarm Intelligence

Increasing numbers of books, websites, and articles are devoted to the concept of “swarm intelligence.” Meanwhile, a perhaps confusing variety of computational techniques are seen to be associated with this term, such as “agents,” “emergence,” “boids,” “ant colony optimization,” and so forth. In this chapter, we attempt to clarify the concept of swarm intelligence and its associations, and to provide a perspective on its inspirations, history, and current state. We focus on the most popular and successful algorithms that are associated with swarm intelligence, namely, ant colony optimization, particle swarm optimization, and (more recently) foraging algorithms, and we cover the sources of natural inspiration with these foci in mind. We then round off the chapter with a brief review of current trends.

David W. Corne, Alan Reynolds, Eric Bonabeau
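
As a concrete example of one of these algorithms, here is a compact particle swarm optimization sketch in the standard inertia-weight formulation (all hyperparameters are conventional illustrative values, not prescriptions from the chapter):

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=1):
    """Minimize f with a particle swarm: inertia + cognitive pull + social pull."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

best, val = pso(lambda z: float(np.sum(z ** 2)), dim=5)   # sphere function
```
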
49. Simulated Annealing

Since its introduction as a generic heuristic for discrete optimization in 1983, simulated annealing (SA) has become a popular tool for tackling both discrete and continuous problems across a broad range of application areas. This chapter provides an overview of the technique with the emphasis being on the use of simulated annealing in the solution of practical problems. A detailed statement of the algorithm is given, together with an explanation of its inspiration from the field of statistical thermodynamics. This is followed by a brief overview of the theory with emphasis on those results that are important to the decisions that need to be made for a practical implementation. It then goes on to look at some of the ways in which the basic algorithm has been modified in order to improve its performance in the solution of a variety of problems. It also includes a brief section on application areas and concludes with general observations and pointers to other sources of information such as survey articles and websites offering downloadable simulated annealing code.

Kathryn A. Dowsland, Jonathan M. Thompson
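
The core of the method is its acceptance rule: always accept improvements, and accept uphill moves with a temperature-dependent Boltzmann probability. A minimal sketch with a geometric cooling schedule (one of several schedules in use; all parameters are illustrative):

```python
import math, random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.995, steps=10000):
    """Minimize `cost` starting from x0, cooling the temperature geometrically."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        # Metropolis acceptance: downhill always, uphill with prob. exp(-(fy - fx) / t).
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha
    return best, fbest

best, val = simulated_annealing(cost=lambda v: (v - 3.0) ** 2,
                                neighbor=lambda v: v + random.uniform(-0.5, 0.5),
                                x0=0.0)
```
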
50. Evolvable Hardware

This chapter surveys the field of evolvable hardware. After a brief overview of the reconfigurable devices used in evolvable hardware, elementary principles of evolvable hardware, corresponding terminology, and open problems in the field are introduced. Then, the chapter is divided into three main parts: extrinsic evolution, intrinsic evolution, and adaptive hardware. Extrinsic evolution (i.e., evolution using simulators) covers evolutionary design of digital circuits, analog circuits, antennas, optical systems, and microelectromechanical systems (MEMS). Intrinsic evolution conducted in field programmable gate arrays (FPGAs), field programmable transistor arrays (FPTAs), field programmable analog arrays (FPAAs), and some unconventional devices is discussed together with a description of the most successful applications. Examples of real-world adaptive hardware systems are also presented. Finally, an overview of major achievements and problems is given.

Lukáš Sekanina
51. Natural Computing in Finance – A Review

The field of natural computing (NC) has advanced rapidly over the past decade. One significant offshoot of this progress has been the application of NC methods in finance. This chapter provides an introduction to a wide range of financial problems to which NC methods have been usefully applied. The chapter also identifies open issues and suggests future directions for the application of NC methods in finance.

Anthony Brabazon, Jing Dang, Ian Dempsey, Michael O'Neill, David Edelman
52. Selected Aspects of Natural Computing

In this chapter we will discuss a selection of application areas in which natural computation shows its value in real-world enterprises. For the purposes of demonstrating the significant impact and potential of natural computation in practice, there is certainly no shortage of documented examples that could be selected. We present just ten applications, ranging from specific problems to specific domains, and ranging from cases familiar to the authors to highlights known well in the general natural computation community. Each displays the proven promise or great potential of nature-inspired computation in high-profile and important real-world applications, and we hope that these applications inspire both students and practitioners.

David W. Corne, Kalyanmoy Deb, Joshua Knowles, Xin Yao

Section VII: Broader Perspective – Alternative Models of Computation

Frontmatter
53. Artificial Life

Artificial life has now become a mature inter-discipline. In this contribution, its roots are traced, its key questions are raised, its main methodological tools are discussed, and finally its applications are reviewed. As part of the growing body of knowledge at the intersection between the life sciences and computing, artificial life will continue to thrive and benefit from further scientific and technical progress on both sides, the biological and the computational. It is expected to take center stage in natural computing.

Wolfgang Banzhaf, Barry McMullin
54. Algorithmic Systems Biology — Computer Science Propels Systems Biology

The convergence between computer science and biology occurred in successive waves involving deeper and deeper concepts of computing. The current situation makes computer science a suitable candidate to become a philosophical foundation for systems biology with the same importance as mathematics, chemistry, and physics. Systems biology is a complex and expanding applicative domain that can open completely new avenues of research in computing and eventually help it become a natural, quantitative science. This chapter highlights the benefits of relying on an algorithmic approach to model, simulate, and analyze biological systems. The key techniques surveyed are related to programming languages and concurrency theory as they are the main tools to express in an executable form the key feature of computing: algorithms and the coupling executor/execution of descriptions. The concentration here is on conceptual tools that are also supported by computational platforms, thus making them suitable candidates to tackle real problems.

Corrado Priami
55. Process Calculi, Systems Biology and Artificial Chemistry

Knowledge about life processes develops through the interplay of theoretical speculation and experimental investigation. Both speculation and experiments present several difficulties that call for the development of faithful and accessible abstract models of the phenomena investigated. Several theories and techniques born in computer science have been proposed for the development of models that rely on solid formal bases and allow virtual experiments to be carried out computationally in silico. This chapter surveys the basics of process calculi and their applications to the modeling of biological phenomena at a system level. Process calculi were born within the theory of concurrency for describing and proving properties of distributed interacting systems. Their application to biological phenomena relies on an interpretation of systems as made of interacting components exhibiting a computational kind of behavior, “cells as computation.” The first seminal proposals and the subsequent enhancements for best adapting computer science theories to the domain of biology (with particular reference to chemical, biochemical, and cellular phenomena) are surveyed.

Pierpaolo Degano, Andrea Bracciali
56. Reaction–Diffusion Computing

A reaction–diffusion computer is a spatially extended chemical system, which processes information by transforming an input concentration profile to an output concentration profile in a deterministic and controlled manner. In reaction–diffusion computers, the data are represented by concentration profiles of reagents, information is transferred by propagating diffusive and phase waves, computation is implemented via the interaction of these traveling patterns (diffusive and excitation waves), and the results of the computation are recorded as a final concentration profile. Chemical reaction–diffusion computing is among the leaders in providing experimental prototypes in the fields of unconventional and nature-inspired computing. This chapter provides a case-study introduction to the field of reaction–diffusion computing, and shows how selected problems and tasks of computational geometry, robotics, and logics can be solved by encoding data within transient states of a chemical medium and by programming the dynamics and interactions of chemical waves.

Andrew Adamatzky, Benjamin De Lacy Costello
57. Rough–Fuzzy Computing

In recent years, a rapid growth of interest in rough set theory, fuzzy set theory, and their hybridization and applications has been witnessed worldwide. In this chapter, the basic concepts of rough/fuzzy computing are presented. The role of rough/fuzzy computing in the development of Wisdom Technology (Wistech) is also emphasized.

Andrzej Skowron
58. Collision-Based Computing

Collision-based computing is an implementation of logical circuits, mathematical machines, or other computing and information-processing devices in homogeneous, uniform, and unstructured media with traveling mobile localizations. A quantum of information is represented by a compact propagating pattern (gliders in cellular automata, solitons in optical systems, wave fragments in excitable chemical systems). Logical truth corresponds to the presence of a localization, logical falsity to its absence; logical values can also be represented by a particular state of the localization. When two or more traveling localizations collide, they change their velocity vectors and/or states. Post-collision trajectories and/or states of the localizations represent the results of the logical operations implemented by the collision. One of the principal advantages of the collision-based computing medium – hidden in 1D systems but obvious in 2D and 3D media – is that the medium is architecture-less: nothing is hardwired, there are no stationary wires or gates, and the trajectory of a propagating information quantum can be seen as a momentary wire. The basics of collision-based computing are introduced, and collision-based computing schemes in 1D and 2D cellular automata and continuous excitable media are overviewed. A survey of collision-based schemes in which particles/collisions are dimensionless is also provided.

Andrew Adamatzky, Jérôme Durand-Lose
59. Nonclassical Computation — A Dynamical Systems Perspective

In this chapter, computation is investigated from a dynamical systems perspective. A dynamical system is described in terms of its abstract state space, the system’s current state within its state space, and a rule that determines its motion through its state space. In a classical computational system, that rule is given explicitly by the computer program; in a physical system, that rule is the underlying physical law governing the behavior of the system. Therefore, a dynamical systems approach to computation allows one to take a unified view of computation in classical discrete systems and in systems performing nonclassical computation. In particular, it gives a route to a computational interpretation of physical embodied systems exploiting the natural dynamics of their material substrates.

Susan Stepney
Backmatter
Metadata
Title
Handbook of Natural Computing
Editors
Grzegorz Rozenberg
Thomas Bäck
Joost N. Kok
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-92910-9
Print ISBN
978-3-540-92909-3
DOI
https://doi.org/10.1007/978-3-540-92910-9
