
2007 | Book

Unconventional Computation

6th International Conference, UC 2007, Kingston, Canada, August 13-17, 2007. Proceedings

Edited by: Selim G. Akl, Cristian S. Calude, Michael J. Dinneen, Grzegorz Rozenberg, H. Todd Wareham

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Invited Papers

How Neural Computing Can Still Be Unconventional After All These Years
Abstract
Attempts to infer a technology from the computing style of the brain have often focused on general learning styles, such as Hebbian learning, supervised learning, and reinforcement learning. The present talk will place such studies in a broader context based on the diversity of structures in the mammalian brain – not only does the cerebral cortex have many regions with their own distinctive characteristics, but their architecture differs drastically from that of basal ganglia, cerebellum, hippocampus, etc. We will discuss all this within a comparative, evolutionary context. The talk will make the case for a brain-inspired computing architecture which complements the bottom-up design of diverse styles of adaptive subsystem with a top-level design which melds a variety of such subsystems to best match the capability of the integrated system to the demands of a specific range of physical or informational environments.
This talk will be a sequel to Arbib, M.A., 2003, Towards a neurally-inspired computer architecture, Natural computing, 2:1-46, but the exposition will be self-contained.
Michael A. Arbib
Optimal Algorithmic Cooling of Spins
Abstract
Algorithmic Cooling (AC) of Spins is potentially the first near-future application of quantum computing devices. Straightforward quantum algorithms combined with novel entropy manipulations can result in a method to improve the identification of molecules.
We introduce here several new exhaustive cooling algorithms, such as the Tribonacci and k-bonacci algorithms. In particular, we present the “all-bonacci” algorithm, which appears to reach the maximal degree of cooling obtainable by the optimal AC approach.
Yuval Elias, José M. Fernandez, Tal Mor, Yossi Weinstein
Nanocomputing by Self-assembly
Abstract
Biomolecular (DNA) computing is an emergent field of unconventional computing, lying at the crossroads of mathematics, computer science and molecular biology. The main idea behind biomolecular computing is that data can be encoded in DNA strands, and techniques from molecular biology can be used to perform arithmetic and logic operations. The birth of this field was the 1994 breakthrough experiment of Len Adleman, who solved a hard computational problem solely by manipulating DNA strands in test-tubes. This led to the possibility of envisaging a DNA computer that could be a thousand to a million times faster, trillions of times smaller, and a thousand times more energy efficient than today’s electronic computers.
I will present one of the most active directions of research in DNA computing, namely DNA nanocomputing by self-assembly. In particular, I will discuss the computational potential of self-assembly, the process by which objects autonomously come together to form complex structures. I will bring forth evidence that self-assembly of DNA molecules can be used to perform computational tasks. Lastly, I will address the problem of self-assembly of arbitrarily large super-shapes, its solution and implications.
Lila Kari
Organic User Interfaces (Oui!): Designing Computers in Any Way Shape or Form
Abstract
Over the past few years, there has been a quiet revolution in display manufacturing technology, one that is comparable in scope only to the invention of the first LCD, which led to the DynaBook and the modern laptop. E-ink electrophoretic pixel technology, combined with advances in organic thin-film circuit substrates, has led to displays that are so thin and flexible they are beginning to resemble paper. Soon displays will completely mimic the high contrast, low power consumption and flexibility of printed media. As with the invention of the first LCD, this means we are on the brink of a new paradigm in computer user interface design: one in which computers can have any organic form or shape. One where any object, no matter how complex, dynamic or flexible its structure, may display information. One where the deformation of shape is a main source of input.
This new paradigm of Organic User Interface (Oui!) requires a new set of design guidelines, which I will discuss in this presentation. These guidelines were inspired by architecture, which went through a similar transformation decades ago. In Oui!, The Input Device Is The Output Device (TIDISTOD), Form dynamically follows the Flow of activities of the human body, and Function equals Form. I will give an overview of technologies that led to Organic UI, such as Tangible UI and Digital Desks, after which I will discuss some of the first real Oui! interfaces, which include Gummi and PaperWindows. PaperWindows, which was developed at HML, is the first real paper computer. It uses computer vision to track sheets of real paper in real time. Page shape is modeled in 3D, textured with windows and projected back onto the paper, making for a wireless, high-resolution, flexible color display. Interactions with PaperWindows take place through hand gestures and paper folding techniques.
Roel Vertegaal

Regular Papers

Unconventional Models of Computation Through Non-standard Logic Circuits
Abstract
The classical (Boolean) circuit model of computation is generalized via polynomial ring calculus, an algebraic proof method adequate for non-standard logics (namely, all truth-functional propositional logics and some non-truth-functional logics). Such a generalization allows us to define models of computation based on non-standard logics in a natural way by using ‘hidden variables’ in the constitution of the model. Paraconsistent circuits for the paraconsistent logic mbC (and for some extensions) are defined as an example of such models. Some potentialities are explored with respect to computability and computational complexity.
Juan C. Agudelo, Walter Carnielli
Amoeba-Based Nonequilibrium Neurocomputer Utilizing Fluctuations and Instability
Abstract
We employ a photosensitive amoeboid cell known as a model organism for studying cellular information processing, and construct an experimental system for exploring the amoeba’s ability to process information on environmental light stimuli. The system enables us to examine the amoeba’s capacity to solve various problems imposed by optical feedback, where the feedback is implemented with a neural network algorithm. We discovered that the amoeba solves the problems by positively exploiting fluctuations and instability in its components. Thus, our system works as a neurocomputer with flexible properties. The elucidation of the amoeba’s dynamics may lead to the development of unconventional computing devices based on nonequilibrium media that utilize fluctuations and instability.
Masashi Aono, Masahiko Hara
Unconventional “Stateless” Turing–Like Machines
Abstract
We refer to Turing machines (TMs) with only one state as “stateless” TMs. These machines remain in the same state throughout the computation. The only way they can “remember” anything is by writing on their tapes. Stateless TMs have limited computing power: They cannot compute all computable functions. However, this handicap of a stateless TM can be overcome if we modify the TM a little by adding an extra feature. We propose a modified Turing–like machine called a JTM—it is stateless (has only one internal state) by definition—and show that it is as powerful as any TM. That is, a JTM does not switch states, and yet can compute all computable functions. JTMs differ from conventional TMs in the following way: The tape head spans three consecutive cells on the tape; it can read/write a string of three (tape) symbols all at once. However, the movement of the head is not in terms of “blocks of three cells”: in the next move, the head does not “jump” to the adjacent (entirely new) block of three cells; it only shifts itself by one cell—to the right or to the left. A JTM is more than a product of theoretical curiosity: it can serve as a “normal form” and might lead to simpler proofs for certain theorems in computation theory.
Joshua J. Arulanandham
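To make the head mechanism concrete, here is a minimal Python sketch of a single JTM step as described in the abstract: the head reads and rewrites three consecutive cells at once, then shifts by exactly one cell. The rule-table encoding and the toy rule are illustrative assumptions, not taken from the paper.

# Illustrative sketch of one JTM step: the three-cell head rewrites its
# whole window and then moves by a single cell.  The rule-table format
# below is an assumption made for illustration.
def jtm_step(tape, head, rules, blank="B"):
    """tape: dict cell index -> symbol; head: index of the leftmost cell
    under the head; rules: dict mapping a 3-symbol tuple to
    (3-symbol tuple, shift) with shift in {-1, +1}.  Returns the new head
    position, or None if no rule applies (the machine halts)."""
    window = tuple(tape.get(head + i, blank) for i in range(3))
    if window not in rules:
        return None
    new_symbols, shift = rules[window]
    for i, s in enumerate(new_symbols):
        tape[head + i] = s
    return head + shift   # one-cell move, not a jump to the next block

# Toy rule: rewrite a window "111" as "101" and move right.
rules = {("1", "1", "1"): (("1", "0", "1"), +1)}
tape = {0: "1", 1: "1", 2: "1", 3: "1"}
head = 0
while head is not None:
    head = jtm_step(tape, head, rules)
print(tape)   # inspect the tape after the machine halts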
Polarizationless P Systems with Active Membranes Working in the Minimally Parallel Mode
Abstract
We investigate the computing power and the efficiency of P systems with active membranes without polarizations, working in the minimally parallel mode. Such systems are shown to be computationally complete even when using only rules handling single objects in the membranes and avoiding the division of non-elementary membranes. Moreover, we elaborate an algorithm for solving NP-complete problems, yet in this case we need evolution rules generating at least two objects as well as rules for non-elementary membrane division.
Rudolf Freund, Gheorghe Păun, Mario J. Pérez-Jiménez
On One Unconventional Framework for Computation
Abstract
My main objective is to point out a fundamental weakness in the conventional conception of computation and suggest a way out. This weakness is directly related to a gross underestimation of the role of object representation in a computational model, hence confining such models to an unrealistic (input) environment, which, in turn, leads to “unrealistic” computational models. This lack of appreciation of the role of structural object representation has been inherited from logic and partly from mathematics, where the centuries-old tradition is to represent objects as unstructured “points”. I also discuss why the appropriate fundamental reorientation in the conception of computational models will bring the resulting study of computation closer to the “natural” computational constraints. An example of the pertinent, class-oriented, representational formalism developed by our group over many years—the Evolving Transformation System (ETS)—is briefly outlined here, and several general lines of research are suggested.
Lev Goldfarb
Computing Through Gene Assembly
Abstract
The intramolecular gene assembly model [1] uses three molecular recombination operations: ld, dlad, and hi. A computing model with two contextual recombination operations, del and trl, based on ld and dlad, respectively, is considered in [6], and its computational power is investigated. In the present paper, we extend the computing model with a new molecular operation, cpy (copy). We then prove that the extended contextual intramolecular gene assembly model is both computationally universal and efficient.
Tseren-Onolt Ishdorj, Ion Petre
Learning Vector Quantization Network for PAPR Reduction in Orthogonal Frequency Division Multiplexing Systems
Abstract
A major drawback of Orthogonal Frequency Division Multiplexing (OFDM) is its high Peak-to-Average Power Ratio (PAPR), which causes intermodulation noise when the signal has to be amplified with a nonlinear high-power amplifier (HPA). This paper proposes an efficient PAPR reduction technique that takes advantage of the classification capability of a Learning Vector Quantization (LVQ) network. The symbols are classified into different classes and multiplied by different phase sequences to achieve minimum PAPR before they are transmitted. This technique achieves a significant reduction in the number of computations.
Seema Khalid, Syed Ismail Shah, Jamil Ahmad
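For concreteness, the PAPR of a discrete-time OFDM signal is the ratio of its peak instantaneous power to its average power. Below is a minimal NumPy sketch of that computation and of the generic phase-sequence idea mentioned in the abstract (multiply the frequency-domain symbols by candidate phase sequences and transmit the lowest-PAPR candidate); the LVQ classification step itself is not modeled, and the subcarrier count and candidate sequences are assumptions.

# Minimal sketch: PAPR of an OFDM symbol and selection among candidate
# phase sequences.  The LVQ classifier from the paper is not modeled;
# the parameters below are illustrative assumptions.
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(0)
n = 64                                                    # subcarriers (assumed)
data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n)  # QPSK symbols
phases = rng.choice([1, -1], size=(4, n))                 # candidate phase sequences

# Transmit the candidate whose time-domain signal has the lowest PAPR.
best = min((np.fft.ifft(data * p) for p in phases), key=papr_db)
print(f"selected PAPR: {papr_db(best):.2f} dB")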
Binary Ant Colony Algorithm for Symbol Detection in a Spatial Multiplexing System
Abstract
While optimal Maximum Likelihood (ML) detection using an exhaustive search is prohibitively complex, we show that a binary Ant Colony Optimization (ACO) based Multi-Input Multi-Output (MIMO) detection algorithm gives near-optimal Bit Error Rate (BER) performance with reduced computational complexity. The simulation results suggest that the reported unconventional detector gives an acceptable performance-complexity trade-off in comparison with conventional ML and non-linear Vertical Bell Labs Layered Space-Time (VBLAST) detectors. The proposed technique yields a 7-dB improvement in BER performance with an acceptable increase in computational complexity in comparison with VBLAST. The reported algorithm reduces the computation time by as much as 94% relative to the exhaustive search method while maintaining reasonable BER performance.
Adnan Khan, Sajid Bashir, Muhammad Naeem, Syed Ismail Shah, Asrar Sheikh
Quantum Authenticated Key Distribution
Abstract
Quantum key distribution algorithms use a quantum communication channel for quantum information and a classical communication channel for binary information. The classical channel, in all algorithms to date, was required to be authenticated. Moreover, Lomonaco [8] claimed that authentication is not possible by quantum means alone. This paper refutes that claim. We design an algorithm for quantum key distribution that performs authentication by quantum means only. Although a classical channel is still used, there is no need for the channel to be authenticated. The algorithm relies on two protected public keys to authenticate the communication partner.
Naya Nagy, Selim G. Akl
The Abstract Immune System Algorithm
Abstract
In this paper we present an Abstract Immune System Algorithm, based on the model introduced by Farmer et al. and inspired by the theory of Clonal Selection and the Idiotypic Network due to Niels Jerne. The proposed algorithm can be used to solve problems much in the way that Evolutionary Algorithms or Artificial Neural Networks do. Besides presenting the algorithm itself, we briefly discuss its various parameters, how to encode input data, and how to extract the output from its outcome. Readers can run their own experiments using the workbench found at http://ctp.di.fct.unl.pt/~jddp/immune/ .
José Pacheco, José Félix Costa
Taming Non-compositionality Using New Binders
Abstract
We propose an extension of the traditional λ-calculus in which terms are used to control an outside computing device (quantum computer, DNA computer...). We introduce two new binders: ν and ρ. In νx.M, x denotes an abstract resource of the outside computing device, whereas in ρx.M, x denotes a concrete resource. These two binders have different properties (in terms of α-conversion, scope extrusion, and convertibility) from those of the standard λ-binder. We illustrate the potential benefits of our approach with a study of a quantum computing language in which these new binders prove meaningful. We introduce a typing system for this quantum computing framework in which linearity is required only for concrete quantum bits, offering greater expressiveness than previous proposals.
Frédéric Prost
Using River Formation Dynamics to Design Heuristic Algorithms
Abstract
Finding the optimal solution to an NP-hard problem is, in general, believed to require exponential time. Thus, heuristic methods are usually applied to obtain acceptable solutions to such problems. In this paper we propose a new type of heuristic algorithm to solve these complex problems. Our algorithm is based on river formation dynamics and provides some advantages over other heuristic methods, such as ant colony optimization. We present our basic scheme and illustrate its usefulness by applying it to a concrete example: the Traveling Salesman Problem.
Pablo Rabanal, Ismael Rodríguez, Fernando Rubio
Principles of Stochastic Local Search
Abstract
We set up a generic framework for local search algorithms. We then show, in this generic setting, how heuristic, problem-specific information can be used to improve the success probability of local search by focusing the search process on specific neighbor states. Our main contribution is a result stating that stochastic local search using restarts has a provable complexity advantage over deterministic local search. An important side aspect is the insight that restarting (starting the search process all over, without using any information computed before) is a useful concept that has mostly been ignored before.
Uwe Schöning
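A minimal, generic Python sketch of the restart idea highlighted above: each run starts from a fresh random assignment and keeps no information from earlier runs. The toy objective, neighborhood, and parameter values are illustrative assumptions, not the paper's framework.

# Generic stochastic local search with restarts: every restart begins from
# scratch and reuses nothing computed before.  Objective and neighborhood
# below are toy examples.
import random

def local_search_with_restarts(cost, random_state, neighbors,
                               restarts=20, steps_per_run=500):
    best, best_cost = None, float("inf")
    for _ in range(restarts):
        x = random_state()                     # fresh random starting point
        for _ in range(steps_per_run):
            y = random.choice(neighbors(x))    # random neighbor
            if cost(y) <= cost(x):             # accept if not worse
                x = y
        if cost(x) < best_cost:
            best, best_cost = x, cost(x)
    return best, best_cost

# Toy problem: minimize the number of 1s in a bit vector of length 20.
n = 20
sol, c = local_search_with_restarts(
    cost=sum,
    random_state=lambda: [random.randint(0, 1) for _ in range(n)],
    neighbors=lambda x: [x[:i] + [1 - x[i]] + x[i + 1:] for i in range(n)])
print(sol, c)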
Spatial and Temporal Resource Allocation for Adaptive Parallel Genetic Algorithm
Abstract
Adaptive parameter control in evolutionary computation is achieved by a method of computational resource allocation, both spatial and temporal. Spatially, it is achieved for the parallel genetic algorithm by partitioning the search space into many subspaces. The search for a solution is performed in each subspace by a genetic algorithm whose domain of chromosomes is restricted to that particular subspace. This spatial allocation of computational resources takes advantage of exhaustive search, which avoids duplicated effort, and combines it with the parallel nature of the genetic-algorithm search in disjoint subspaces. In each subspace, temporal resource allocation is also made among different competing evolutionary algorithms, so that more CPU time is allocated to those algorithms which showed better performance in the past. This general idea is implemented in a new adaptive genetic algorithm using the formalism of a mutation matrix, which removes the need to set a survival probability. The temporal resource allocation is introduced and implemented on a single computer using a quasi-parallel time-sharing algorithm for the solution of the zero/one knapsack problem. The mutation matrix M(t) is constructed using the locus statistics and the fitness distribution in a population A_t with N rows and L columns, where N is the size of the population and L is the length of the encoded chromosomes. The mutation matrix is parameter-free and adaptive, as it is time-dependent and captures the information accumulated in past generations. Two competing strategies of evolution, mutation by row (chromosome) and mutation by column (locus), compete for CPU time. A time-sharing experiment on these two strategies is performed on a single computer to solve the knapsack problem. Based on the investment frontier of time allocation, the optimal configuration for solving the knapsack problem is found.
K. Y. Szeto
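As one possible illustration of the mutation-matrix idea, the sketch below builds an N x L matrix M(t) from the current population: a row factor derived from the fitness ranking (worse chromosomes mutate more) and a column factor derived from locus statistics (loci on which the population already agrees mutate less). The specific formulas are assumptions made for this sketch and are not the construction used in the paper.

# Simplified, assumed construction of a time-dependent N x L mutation
# matrix from the population's fitness ranking and locus statistics.
import numpy as np

def mutation_matrix(population, fitness):
    """population: (N, L) binary array; fitness: length-N array (higher is better)."""
    N, L = population.shape
    rank = np.argsort(np.argsort(-fitness))     # 0 = best, N-1 = worst
    row = (rank + 1) / N                        # row factor in (0, 1]
    p1 = population.mean(axis=0)                # fraction of 1s at each locus
    col = 4 * p1 * (1 - p1)                     # 0 if locus is fixed, 1 if maximally mixed
    return np.outer(row, col)                   # M(t), shape (N, L)

rng = np.random.default_rng(1)
pop = rng.integers(0, 2, size=(8, 10))
fit = pop.sum(axis=1).astype(float)             # toy fitness for a knapsack-like problem
M = mutation_matrix(pop, fit)
flips = rng.random(pop.shape) < M               # element-wise mutation decisions
new_pop = np.where(flips, 1 - pop, pop)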
Gravitational Topological Quantum Computation
Abstract
A new model of topological quantum computing, named Gravitational Topological Quantum Computing (GTQC), is introduced as an alternative to Anyonic Topological Quantum Computing and DNA Computing. In the new model, the quantum computer is quantum space-time itself, and the corresponding quantum algorithms compute topological invariants of knots, links, and tangles. Some applications of GTQC in quantum complexity theory and computability theory are discussed; in particular, it is conjectured that computing the Khovanov polynomial for knots and links is harder than #P-hard, and that the homeomorphism problem, which is non-computable, may after all be computable via a hypercomputer based on GTQC.
Mario Vélez, Juan Ospina
Computation in Sofic Quantum Dynamical Systems
Abstract
We analyze how measured quantum dynamical systems store and process information, introducing sofic quantum dynamical systems. Using recently introduced information-theoretic measures for quantum processes, we quantify their information storage and processing in terms of entropy rate and excess entropy, giving closed-form expressions where possible. To illustrate the impact of measurement on information storage in quantum processes, we analyze two spin-1 sofic quantum systems that differ only in how they are measured.
Karoline Wiesner, James P. Crutchfield
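For readers unfamiliar with the two measures named above, the standard block-entropy definitions are given below, where H(L) is the Shannon entropy of length-L observation blocks of the (measured) process; these are the usual definitions from the information-theory literature, not formulas quoted from the paper.

% Standard definitions; H(L) is the Shannon entropy of length-L blocks.
\[
  h_\mu \;=\; \lim_{L \to \infty} \frac{H(L)}{L},
  \qquad
  \mathbf{E} \;=\; \lim_{L \to \infty} \bigl[\, H(L) - h_\mu L \,\bigr]
  \;=\; \sum_{L=1}^{\infty} \bigl[\, h_\mu(L) - h_\mu \,\bigr],
\]
% where h_\mu(L) = H(L) - H(L-1) is the length-L estimate of the entropy rate.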
Bond Computing Systems: A Biologically Inspired and High-Level Dynamics Model for Pervasive Computing
Abstract
Aiming to model the high-level dynamics of pervasive computing systems, we introduce Bond Computing Systems (BCS), consisting of objects, bonds, and rules. Objects are typed but addressless representations of physical or logical (computing and communicating) entities. Bonds are typed multisets of objects. In a BCS, a configuration is specified by a multiset of bonds, called a collection. Rules specify how a collection evolves into a new one. A BCS is a variation of the P systems introduced by Gheorghe Paun in which, roughly, there is no maximal parallelism but the membranes are typed and unbounded in number; hence, our model is also biologically inspired. In this paper, we focus on regular bond computing systems (RBCS), where bond types are regular, and study their computational power and verification problems. Among other results, we show that the computing power of RBCS lies between LBA (linearly bounded automata) and LBC (a form of bounded multicounter machines); hence, the regular bond-type reachability problem (given an RBCS, whether there is some initial collection that can reach some collection containing a bond of a given regular type) is undecidable. We also study some restricted models of RBCS (namely, B-boundedness and 1-transfer) where the reachability problem becomes decidable.
Linmin Yang, Zhe Dang, Oscar H. Ibarra
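A minimal data-structure sketch of the notions defined above, assuming multisets are represented with Python's collections.Counter: a bond is a multiset of typed objects, and a configuration (collection) is a multiset of bonds. The object types and the rewriting step are purely illustrative.

# Sketch of BCS configurations: bonds are multisets of typed, addressless
# objects; a collection is a multiset of bonds.  Types and the toy rule
# below are illustrative assumptions.
from collections import Counter

def bond(*object_types):
    """A bond as a hashable multiset of object types."""
    return frozenset(Counter(object_types).items())

# A collection: multiset of bonds.
collection = Counter([
    bond("sensor", "sensor", "gateway"),
    bond("phone", "gateway"),
    bond("phone", "gateway"),
])

# Toy rewriting step: one {phone, gateway} bond becomes {phone, gateway, sensor}
# (a full BCS rule would also account for where the sensor comes from).
before, after = bond("phone", "gateway"), bond("phone", "gateway", "sensor")
if collection[before] > 0:
    collection[before] -= 1
    collection[after] += 1
print(collection)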
Backmatter
Metadata
Title
Unconventional Computation
Edited by
Selim G. Akl
Cristian S. Calude
Michael J. Dinneen
Grzegorz Rozenberg
H. Todd Wareham
Copyright year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-73554-0
Print ISBN
978-3-540-73553-3
DOI
https://doi.org/10.1007/978-3-540-73554-0