
About this book

This book is the outcome of a NATO Advanced Research Workshop (ARW) held in Neuss (near Dusseldorf), Federal Republic of Germany, from 28 September to 2 October 1987. The workshop assembled some 50 invited experts from Europe, America, and Japan representing the fields of Neuroscience, Computational Neuroscience, Cellular Automata, Artificial Intelligence, and Computer Design; more than 20 additional scientists from various countries attended as observers. The 50 contributions in this book cover a wide range of topics, including: Neural Network Architecture, Learning and Memory, Fault Tolerance, Pattern Recognition, and Motor Control in Brains versus Neural Computers. Twelve of these contributions are review papers.

The readability of this book was enhanced by a number of measures:

* The contributions are arranged in seven chapters.
* A separate List of General References helps newcomers to this rapidly growing field to find introductory books.
* The Collection of References from all Contributions provides an alphabetical list of all references quoted in the individual contributions.
* Separate Reference Author and Subject Indices facilitate access to various details.

Group Reports (following the seven chapters) summarize the discussions regarding four specific topics relevant to the 'state of the art' in Neural Computers.

Table of contents

Frontmatter

Introductory Lectures

The Role of Adaptive and Associative Circuits in Future Computer Designs

The fifth-generation computers have essentially been designed around knowledge data bases. Originally they were supposed to accept natural data such as speech and images; these functions, however, have now been abandoned. Some vistas to sixth- and later-generation computers have already been presented, too. The latter are supposed to be even more “natural”. Certainly the fifth-generation computers have applied very traditional logic programming, but what could the alternative be? What new functions can evolve from the “neural computer” principles? This presentation expounds a few detailed questions of this kind. Also some more general problem areas and motivations for their handling are described.

Teuvo Kohonen

Faust, Mephistopheles and Computer

Even ancient religious conceptions involve good and bad supernatural forces. The Old Testament is not only based on the idea of a single God, but also introduces the concept of Satan, embodying the bad and evil. The antagonism between the good and the bad principle dominates all our life. There is always a difference between pleasant and unpleasant events.

Konrad Zuse

Architecture and Topology of Neural Networks: Brain vs. Computer

Structured Neural Networks in Nature and in Computer Science

Recent advances in several disciplines have led to rapidly growing interest in massively parallel computation. There are considerable potential scientific and practical benefits from this approach, but positive results will be based on existing knowledge of computational structures, not on mystical emergent properties of unstructured networks.

Jerome A. Feldman

Goal and Architecture of Neural Computers

The ultimate goal of neural computers is the construction of flexible robots, based on massively parallel structures and on self-organization. This goal includes construction of a generalized scene. Some of the necessary sub-goals have been demonstrated on the basis of neural architecture, but this architecture has to be further developed. Major issues are the reduction of learning times, the integration of subsystems and the introduction of syntactical structure.

Christoph von der Malsburg

Fault Tolerance in Biological vs. Technical Neural Networks

Conventional Fault-Tolerance and Neural Computers

Fault-tolerance is used in conventional computer systems and in VLSI circuits in order to fulfil reliability, dependability and/or cost of manufacture objectives. A wide range of techniques has been used according to the particular objectives and the system architecture. Almost all of these techniques can be observed in biological neural networks and may even be in use simultaneously. This paper suggests that VLSI designers may wish to incorporate several of these approaches into digital neural computers.

Will R. Moore

Fault-Tolerance in Imaging-Oriented Systolic Arrays

Image processing often involves convolutions and Fourier transforms (DFT and FFT); these specific operations are well implemented by means of a systolic multi-pipeline structure. Practical implementations require large pipelines, adopting highly integrated circuits that are prone to production defects and run-time faults; efficient fault-tolerance through reconfiguration is then required. Still, the basic problem of concurrent (or semi-concurrent) testing must be solved prior to any reconfiguration step. Here, we show how these structures allow testing to be performed by a simple technique (based on the classical LSSD method) so that the added circuitry required for testing functions is kept very limited.

R. Negrini, M. G. Sami, N. Scarabottolo, R. Stefanelli

Parallelism and Redundancy in Neural Networks

Recent interest in neural networks is largely triggered by the idea that knowledge from this field may be advantageously applied to problems arising in massively parallel computing. Besides increasing processing velocity, parallelism provides possibilities for improved fault tolerance and graceful degradation. In technical devices, this has been achieved by back-up hardware added in parallel to the main system. Biological systems take advantage of parallelism in many other respects including natural implementations, minimization of the number of computation steps, exploitation of signal redundancy, and a balanced distribution of processing tasks between all subsystems. As a result, reliability and accuracy of computation become exchangeable. We present examples of these principles of biological information processing and discuss how parallelism is used for their implementation.

Werner von Seelen, Hanspeter A. Mallot

Visual Pattern Recognition Systems

Relational Models in Natural and Artificial Vision

In the last few years, there has been considerable interest in information processing models inspired by the architecture and functioning principles of the brain (see for instance Rumelhart and McClelland 1986). The general features which characterize these models are the following.

Elie Bienenstock

Image Segmentation with Neurocomputers

Our ultimate application interest is the automated understanding of images, especially non-biological images such as range or synthetic aperture radar. In this report, we describe one aspect of the processing of such images, segmentation, and show that one can design a neural network to perform this computation. An important step [14] in image analysis is segmentation. An image is an array of discrete sampled values called pixels. An image is understood when the objects portrayed in that image are recognized [3] or at least characterized. An object is characterized in terms of the segments of the image that it subtends. An image segment [1,6] is a collection of pixels that satisfies certain conditions of adjacency and similarity.

G. L. Bilbro, Mark White, Wesley Snyder

A Hierarchical Neural Network Model for Selective Attention

A neural network model, which has the function of selective attention in visual pattern recognition and in associative recall, is proposed and simulated on a digital computer. The model has the function of segmentation and pattern recognition. When a composite stimulus consisting of two patterns or more is presented, the model focuses its attention selectively on one of them, segments it from the rest, and recognizes it. After that, the model switches its attention to recognize another pattern. The model also has the ability to restore an imperfect pattern, and can recall the complete pattern in which the defects have been corrected and the noise eliminated. These functions are performed successfully even if the input patterns are deformed in shape or shifted in position.

Kunihiko Fukushima

Mapping Images to a Hierarchical Data Structure — A Way to Knowledge-Based Pattern Recognition

The Hierarchical Structure Code (HSC) provides a transition between the signal space of a gray-scale image and the space of symbolic description. Continuous objects are mapped to code-trees of the HSC-network by hierarchical linking operations. The HSC-network is controlled by a system of inhibition mechanisms, extracting invariant features from the code trees. Extracted features are compared with modelled features in the knowledge base.

Georg Hartmann

Computing Motion in the Presence of Discontinuities: Algorithm and Analog Networks

In this paper we will describe recent developments in the theory of early vision which lead from the formulation of the motion problem as an ill-posed problem to its solution by minimizing certain “cost” functions. These cost or energy functions can be mapped onto very simple analog and binary resistive networks. Thus, we will see how the optical flow can be computed by injecting currents into “neural” networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in CMOS VLSI circuits and represent plausible candidates for biological vision systems.

Christof Koch
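
For orientation, one common discrete form of such a cost function (this particular functional and notation are an illustration of the general approach, not necessarily the exact one used in the contribution) combines a brightness-constancy data term, a smoothness term, and binary line processes that switch smoothness off across motion discontinuities:

```latex
E(u,v,\ell) \;=\; \sum_{i} \bigl(I_{x,i}\,u_i + I_{y,i}\,v_i + I_{t,i}\bigr)^2
\;+\; \lambda \sum_{\langle i,j\rangle} (1-\ell_{ij})\bigl[(u_i-u_j)^2 + (v_i-v_j)^2\bigr]
\;+\; \gamma \sum_{\langle i,j\rangle} \ell_{ij}
```

Here $(u_i, v_i)$ is the optical flow at pixel $i$ and $I_x, I_y, I_t$ are spatial and temporal image derivatives. The quadratic terms map directly onto a resistive network (node voltages represent the flow, injected currents the data term), while the binary line processes correspond to switches that open across discontinuities.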

Design Principles for a Front-End Visual System

Although one has no data sheets relating to the design of organic visual systems, much can be said on a priori grounds. The possible design principles for successful vision systems depend on the physics of the environment and can be formulated in mathematical terms. Physiology gives us glimpses of actual implementations and allows us to glean which of the potentially useful features are actualized in the different species. In this paper I outline some of the a priori principles relating to the very front end of vision systems and try to correlate these with current knowledge concerning the primate visual system.

Jan J. Koenderink

Towards a Primal Sketch of Real World Scenes in Early Vision

The problem of symbolic representation of intensity variations in gray-value pictures of real scenes is studied. The goal is to relate the responses of a filter bank of different gradient filters to the structure of the picture which is determined by the physics of the image generation process. A simple criterion is proposed for the selection of a suitable center frequency of the involved band-pass filters. The gradient vectors of the image function give the direction of maximal intensity changes with high resolution (8 bit) which can be used for an invariant shape description by corner points of a contour. The picture is segmented by closed contour lines into regions which form a topographic representation in the picture domain.

Axel F. Korn

Why Cortices? Neural Computation in the Vertebrate Visual System

We propose three high level structural principles of neural networks in the vertebrate visual cortex and discuss some of their computational implications for early vision: a) Lamination, average axonal and dendritic domains, and intrinsic feedback determine the spatio-temporal interactions in cortical processing. Possible applications of the resulting filters include continuous motion perception and the direct measurement of high-level parameters of image flow. b) Retinotopic mapping is an emergent property of massively parallel connections. Combined with a local intrinsic operation in the target area, mapping yields a space-variant image processing system, as would be useful in the analysis of optical flow. c) Further space-variance is brought about both by discrete (patchy) connections between areas and by the periodic (columnar) arrangement of specialized neurons within the areas. We present preliminary results on the significance of these principles for neural computation.

Hanspeter A. Mallot

A Cortical Network Model for Early Vision Processing

We present an isotropic neural network model for processing by layer IVc of the primate primary visual cortex. It describes how this layer can reconstruct fine local details in an image which have been lost in the low-capacity retinal-LGN pathway, while at the same time narrowing the effective spatial-frequency bandwidth of the response to sinusoidal patterns. We also investigate the circumstances under which such a network can act as an elementary feature extractor by responding preferentially to striped, checked or other high-symmetry patterns. We find that the model can act in something like this way in a particular region of its parameter space, but that such behaviour is incompatible with the reconstruction of local detail.

P. Møller, M. Nylén, J. A. Hertz

Image Segregation by Motion: Cortical Mechanisms and Implementation in Neural Networks

The experimental evidence suggesting that at an early visual cortical level neurones signal differences in speed or direction of motion is reviewed. The functional significance of these findings is examined from the point of view of higher processing in visual parallel networks. We suggest that elementary visual parameters are processed in a dual way, in a ‘discontinuity’ and in a ‘continuous’ stream and that the power of ‘visual routines’ is due in part to the interplay between these two streams.

G. A. Orban, B. Gulyás

On the Acquisition of Object Concepts from Sensory Data

We review psychological evidence that shows properties distinguishing object descriptions and sensory feature maps. We then outline a neurocomputational approach to the computation of object features from the sensory data and for learning these descriptions. We concentrate on acquiring object concepts that generalise across position on the sensory surface.

W. A. Phillips, P. J. B. Hancock, N. J. Willson, L. S. Smith

Neural Computers in Vision: Processing of High Dimensional Data

Both biological and computer vision systems have to process in real time a vast amount of data. Mechanisms of automatic gain control, realized in biological systems by multilevel feedback loops, coupled with selective channeling of data, reorganize and reduce the dimensionality of signals as they flow along the retinotopic pathway. These principles of organization are applied to VLSI-based highly parallel neural computer architecture.

Yehoshua Y. Zeevi, Ran Ginosar

Computational Networks in Early Vision: From orientation selection to optical flow

Orientation selection is the process of extracting the tangents to piecewise smooth curves from a two-dimensional image. The analysis of orientation selection begins by resolving the question of representation with reference to geometric, biological and computational constraints. The structure of a relaxation network is then derived from a discretization of the differential geometry of curves in the plane, and considerations about endstopped neurons suggest a robust method for estimating curvature. Experimental results from a simulation are presented. In addition to its uses in computational vision, the relaxation network can be interpreted as a rough model of some of the interactive circuitry underlying orientation selection in the early visual cortex at about the resolution of receptive fields.

Steven W. Zucker, Lee Iverson

Learning and Memory in Neural Network Systems

Logical Connectionist Systems

A universal node model is assumed in this general analysis of connectionist nets. It is based on a logic truth-table with a probabilistic element. It is argued that this covers other definitions. Algorithms are developed for training and testing techniques that involve reducing amounts of noise, giving a new perspective on annealing. The principle is further applied to ‘hard’ learning and shown to be achievable on the notorious parity-checking problem. The performance of the logic-probabilistic system is shown to be two orders of magnitude better than known back-error propagation techniques which have used this task as a benchmark.

I. Aleksander
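
As a minimal sketch of what a truth-table node with a probabilistic element can look like (my own hypothetical illustration, not the paper's exact node definition), each binary input address stores a firing probability, and training moves that probability toward the target output, i.e. reduces the noise at that address:

```python
import random

class ProbabilisticLogicNode:
    """A 'soft' truth table: each input address stores a firing probability."""

    def __init__(self, n_inputs, seed=0):
        self.rng = random.Random(seed)
        # Start fully undecided: every address fires with probability 0.5.
        self.table = {addr: 0.5 for addr in range(2 ** n_inputs)}

    def _address(self, bits):
        addr = 0
        for b in bits:
            addr = (addr << 1) | b
        return addr

    def output(self, bits):
        """Stochastic output: fire with the probability stored at this address."""
        return 1 if self.rng.random() < self.table[self._address(bits)] else 0

    def train(self, bits, target, step=0.1):
        """Nudge the stored probability toward the target, reducing noise."""
        addr = self._address(bits)
        self.table[addr] += step * (target - self.table[addr])
```

In this reading, 'reducing amounts of noise' corresponds to moving the stored probabilities away from 0.5 towards 0 or 1, which is where the annealing perspective mentioned in the abstract comes in.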

Backpropagation in Perceptrons with Feedback

Backpropagation has been shown to be an efficient learning rule for graded perceptrons. However, as initially introduced, it was limited to feedforward structures. The extension of backpropagation to systems with feedback was done by this author in [4]. In this paper, this extension is presented, and the error propagation circuit is interpreted as the transpose of the linearized perceptron network. The error propagation network is shown to always be stable during training, and a sufficient condition for the stability of the perceptron network is derived. Finally, potentially useful relationships with Hopfield networks and Boltzmann machines are discussed.

Luís B. Almeida
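
The construction summarized above can be written compactly as follows (my notation, which may differ from the paper's): the network relaxes to a fixed point, the error is propagated through the transpose of the network linearized around that fixed point, and the weights are updated from the two resulting activity patterns.

```latex
\begin{aligned}
\text{forward relaxation:}\quad & y = f(Wy + u), \qquad F' = \operatorname{diag}\!\bigl(f'(Wy+u)\bigr)\\
\text{error relaxation:}\quad & z = F'\bigl(W^{\mathsf T} z + e\bigr), \qquad e = t - y\\
\text{weight update:}\quad & \Delta w_{ij} = \eta\, z_i\, y_j
\end{aligned}
```

The error relaxation is a linear fixed-point equation in $z$, run on the transposed, linearized network; its benign behaviour is consistent with the stability result stated in the abstract.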

A Neural Model with Multiple Memory Domains

Previous studies with probabilistic neural nets constructed of formal neurons have assumed that all neurons have the same probability of connection with any other neuron in the net. However, in this new study we incorporate a restriction according to which the neural connections are made up by means of chemical markers carried by the individual cells. Results obtained with this new approach show simple and multiple hysteresis phenomena. Such hysteresis loops may be considered to represent the basis for short-term memory.

Photios Anninos

The Never-Ending Learning

A processing principle supported by a dynamic memory is presented, which makes learning an integral part of the overall processing. By emphasizing the operational constraints of this principle, and taking into account the concrete tasks to be performed, a modular and parallel architecture is gradually defined. It is shown that this architecture arises in the course of processing, through two complementary mechanisms: the long-term reinforcement or dissolution of memory pathways, and the episodic sprouting of new pathways. The resulting system basically detects coincidences between a cross flow of internal signals and an afferent flow of incoming signals.

Dominique Béroule

Storing Sequences of Biased Patterns in Neural Networks with Stochastic Dynamics

A network of spin-like neurons with asymmetric exchange interactions and stochastic spike response is proposed. The network can store and recall time sequences of regular and random biased patterns. The patterns can overlap. The performance of the suggested network is described by Monte Carlo simulation, in terms of a Fokker-Planck equation and, for a very large number N of neurons, in terms of a Liouville equation. We provide analytical expressions for the timing of the recall and analyze the scatter of the recall around the limit of precise recall N → ∞.

Joachim Buhmann, Klaus Schulten
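
A toy illustration of the asymmetric-coupling idea follows (a deliberately simplified sketch with unbiased random patterns, parallel updates and an ad hoc relative strength for the asymmetric term, not the biased-pattern model analyzed in the contribution):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 5                              # neurons, length of the stored sequence
xi = rng.choice([-1, 1], size=(T, N))      # unbiased random patterns (a simplification)

# Symmetric couplings stabilize each pattern; asymmetric couplings push the
# state from pattern mu toward pattern mu+1 (cyclic sequence).
W_sym = sum(np.outer(p, p) for p in xi) / N
W_asym = sum(np.outer(xi[(m + 1) % T], xi[m]) for m in range(T)) / N

def step(s, lam=1.3, beta=5.0):
    """One parallel stochastic update of the spin-like neurons."""
    h = W_sym @ s + lam * (W_asym @ s)
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
    return np.where(rng.random(N) < p_up, 1, -1)

s = xi[0].copy()
for t in range(20):
    s = step(s)
    print(t, int(np.argmax(xi @ s / N)))   # index of the stored pattern closest to the state
```

With the asymmetric term stronger than the symmetric one (here lam > 1) and parallel updates, the state hops from each pattern to its successor roughly once per time step; the overlaps xi @ s / N are the natural quantities to monitor.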

The Inverse Problem for Linear Boolean Nets

The inverse problem, that is, the design of a synchronous deterministic net of binary mathematical neurons that will perform any sequence of states prescribed a priori, was exactly solved for arbitrary Boolean nets (of which cellular automata are particular cases). Nets of linearly separable Boolean functions, far more restricted in their possible behaviours, are better treated with an approach which specifically exploits their linear aspects. It is shown how to do so. Most considerations do not require synchronicity; they should be of interest also for stochastic treatments.

E. R. Caianiello, M. Marinaro, R. Tagliaferri

Training with Noise: Application to Word and Text Storage

We describe local iterative training algorithms, which maximise the number of stored patterns and their content-addressability in the Hopfield net and generalisations of it. Provided a solution exists to the problem of retrieving prescribed patterns from any initial configuration with a given number of wrong bits, the algorithms are shown to converge to one such solution. We describe an application to the storage of words and continuous text, exploiting the Distributed Array Processor.

E. Gardner, N. Stroud, D. J. Wallace
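
The following sketch conveys the flavour of local, iterative training with noise in a Hopfield-type net (a generic perceptron-style variant written for illustration; the precise algorithms and convergence statements are those of the contribution): corrupted versions of each pattern are presented, and a local Hebbian correction is applied at every unit whose field disagrees with the corresponding bit of the clean pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 64, 8
patterns = rng.choice([-1, 1], size=(P, N))
W = np.zeros((N, N))

def corrupt(xi, flips):
    """Flip a fixed number of randomly chosen bits of a stored pattern."""
    noisy = xi.copy()
    idx = rng.choice(N, size=flips, replace=False)
    noisy[idx] *= -1
    return noisy

# Local, iterative rule: wherever a unit's field on the noisy input disagrees
# with the target bit of the clean pattern, reinforce that unit's couplings
# in the Hebbian direction.
for sweep in range(200):
    for xi in patterns:
        s = corrupt(xi, flips=6)
        h = W @ s
        wrong = (h * xi) <= 0              # units whose field points the wrong way
        W += np.outer(wrong * xi, s) / N
        np.fill_diagonal(W, 0.0)
```

Training on corrupted inputs rather than on the clean patterns alone is what buys content-addressability: provided a solution exists, the clean patterns end up being retrieved from initial states with the given number of wrong bits.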

Of Points and Loops

New learning rules for the storage and retrieval of temporal sequences, in neural networks with parallel synchronous dynamics, are presented. They allow either one-shot, non-local learning, or slow, local learning. Sequences with bifurcation points, i.e. sequences in which a given state appears twice, or in which a given state belongs to two distinct sequences, can be stored without errors and retrieved.

I. Guyon, L. Personnaz, G. Dreyfus

On the Asymptotic Information Storage Capacity of Neural Networks

Neural networks can be useful and economic as associative memories, even in technical applications. The asymptotic information storage capacity of such neural networks is defined and then calculated and compared for various local synaptic rules. It turns out that among these rules the simple Hebb rule is optimal in terms of its storage capacity. Furthermore, the capacity of the clipped Hebb rule (C = ln 2) is even higher than the capacity of the unclipped Hebb rule (C = 1/(8 ln 2)).

G. Palm
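
Written out numerically, and assuming the capacity is measured in bits per synapse (my reading of the quantities named above), the comparison is:

```latex
C \;=\; \lim_{n\to\infty} \frac{\text{stored Shannon information [bits]}}{\text{number of synapses}},
\qquad
C_{\text{clipped Hebb}} = \ln 2 \approx 0.69
\;>\;
C_{\text{unclipped Hebb}} = \frac{1}{8\ln 2} \approx 0.18 .
```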

Learning Networks of Neurons with Boolean Logic

Through a training procedure based on simulated annealing, Boolean networks can ‘learn’ to perform specific tasks. As an example, a network implementing a binary adder has been obtained after a training procedure based on a small number of examples of binary addition, thus showing a generalization capability. Depending on problem complexity, network size, and number of examples used in the training, different learning regimes occur. For small networks an exact analysis of the statistical mechanics of the system shows that learning takes place as a phase transition. The ‘simplicity’ of a problem can be related to its entropy. Simple problems are those that are thermodynamically favored.

Stefano Patarnello, Paolo Carnevali
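
A minimal sketch of training a Boolean network by simulated annealing is given below (a toy version for illustration only: a small gate set, random rewiring moves, and 3-bit parity as the target instead of the binary adder studied in the contribution):

```python
import math
import random

# Each gate reads two earlier signals (primary inputs or previous gate outputs)
# and applies one of a few two-input Boolean functions.
FUNCS = [lambda a, b: a & b, lambda a, b: a | b,
         lambda a, b: a ^ b, lambda a, b: 1 - (a & b)]

def run(net, inputs):
    signals = list(inputs)
    for f, i, j in net:
        signals.append(FUNCS[f](signals[i], signals[j]))
    return signals[-1]                       # the last gate is the network output

def cost(net, examples):
    return sum(run(net, x) != y for x, y in examples)

def anneal(examples, n_in=3, n_gates=12, steps=20000, T0=1.0):
    rng = random.Random(0)
    rand_gate = lambda g: (rng.randrange(len(FUNCS)),
                           rng.randrange(n_in + g), rng.randrange(n_in + g))
    net = [rand_gate(g) for g in range(n_gates)]
    E = cost(net, examples)
    for t in range(steps):
        T = T0 * (1.0 - t / steps) + 1e-3    # simple linear cooling schedule
        g = rng.randrange(n_gates)
        old = net[g]
        net[g] = rand_gate(g)                # propose a random change of one gate
        dE = cost(net, examples) - E
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            E += dE                          # accept the move (Metropolis rule)
        else:
            net[g] = old                     # reject: restore the previous gate
    return net, E

# Toy task: learn 3-bit parity from its full truth table.
examples = [((a, b, c), a ^ b ^ c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
net, errors = anneal(examples)
```

Learning consists of random changes to gates and wiring accepted with the Metropolis rule at a slowly decreasing temperature; the cost is simply the number of training examples the network gets wrong, and generalization can be probed by training on a subset of the truth table.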

Neural Network Learning Algorithms

The earliest network models of associative memory were based on correlations between input and output patterns of activity in linear processing units. These models have several features that make them attractive: the synaptic strengths are computed from information available locally at each synapse in a single trial; the information is distributed in a large number of connection strengths; the recall of stored information is associative; and the network can generalize to new input patterns that are similar to stored patterns. There are also severe limitations with this class of linear associative matrix models, including interference between stored items, especially between ones that are related, and an inability to make decisions that are contingent on several inputs. New neural network models and neural network learning algorithms have been introduced recently that overcome some of the shortcomings of the associative matrix models of memory. These learning algorithms require many training examples to create the internal representations needed to perform a difficult task and generalize properly. They share some properties with human skill acquisition.

Terrence J. Sejnowski
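
The class of models described in the first half of the abstract can be summarized by the standard correlation-matrix memory (written in my notation): the weights are formed from local input/output correlations in a single pass, and recall returns the stored association plus a crosstalk term that grows when the stored input patterns are correlated, which is the interference referred to above.

```latex
W \;=\; \sum_{\mu=1}^{P} y^{(\mu)}\, x^{(\mu)\mathsf T},
\qquad
W x^{(\nu)} \;=\;
\underbrace{y^{(\nu)}\bigl(x^{(\nu)\mathsf T} x^{(\nu)}\bigr)}_{\text{recalled association}}
\;+\;
\underbrace{\sum_{\mu\neq\nu} y^{(\mu)}\bigl(x^{(\mu)\mathsf T} x^{(\nu)}\bigr)}_{\text{crosstalk (interference)}}
```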

Exploring Three Possibilities in Network Design: Spontaneous Node Activity, Node Plasticity and Temporal Coding

The relationship between two massively parallel techniques, namely relaxation labelling and synaptically-based neural learning, is sketched. Within this framework, neuron-centered learning can be viewed as second-order relaxation. This type of learning has been explored through simulation of a lateral-inhibition toroidal network of 10 × 10 plastic pacemaker neurons. Two frequencies successively presented to the network have been encoded in separate groups of neurons, without learning of the second frequency disrupting memory of the first one. Some implications of these results for both Computational Neuroscience and Computer Network Design are pointed out.

Carme Torras i Genís

Motor Program Generation and Sensorimotor Coordinate Transformation

Schemas and Neurons: Two Levels of Neural Computing

For much of neural computing, the emphasis has been on tasks which can be solved by networks of simple units. In this paper I will argue that neural computing can learn from the study of the brain at many levels, and in particular will argue for schemas as appropriate functional units into which the solution of complex tasks may be decomposed. We may then exploit neural layers as structural units intermediate between structures subserving schemas and small neural circuits. The emphasis in this paper will be on Rana computatrix, modelling the frog as a biological robot, rather than on the use of schemas and neural networks in the design of brain-inspired devices. It is hoped that the broader implications will be clear to the reader.

Michael A. Arbib

Applications of Concurrent Neuromorphic Algorithms for Autonomous Robots

This article provides an overview of studies at the Oak Ridge National Laboratory (ORNL) of neural networks running on parallel machines applied to the problems of autonomous robotics. The first section provides the motivation for our work in autonomous robotics and introduces the computational hardware in use. Section 2 presents two theorems concerning the storage capacity and stability of neural networks. Section 3 presents a novel load-balancing algorithm implemented with a neural network. Section 4 introduces the robotics test bed now in place. Section 5 concerns navigation issues in the test-bed system. Finally, Section 6 presents a frequency-coded network model and shows how Darwinian techniques are applied to issues of parameter optimization and on-line design.

J. Barhen, W. B. Dress, C. C. Jorgensen

The Relevance of Mechanical Sensors for Neural Generation of Movements in Space

Biological joints perform movements in space under a variety of conditions by means of a redundant set of oblique compliant muscles. The neural control systems of joints therefore face the problem of ensuring uniqueness of motor commands while taking the geometry of the joint and external conditions into account. As muscles are supplied with two classes of sensors, which monitor the mechanical state of each individual muscle and thus provide necessary and sufficient information, a mechanism is proposed for how the control problem may be solved by exploiting the mechano-sensory signals.

Wolfgang J. Daunicht

Spatial and Temporal Transformations in Visuo-Motor Coordination

The superior colliculus and its main brainstem premotor projections play an important role in the generation of visually-guided eye and head movements. The morphological and behavioral study of the tecto-reticulo-spinal network revealed that this system is intimately involved in a number of neuronal operations which are required to control an orienting movement, in particular: the choice of the appropriate motor strategy and the geometrical and temporal transformations which are likely performed by the branching pattern of the tectal axons. In this paper, we propose a theoretical model of this system in which the motor command, the desired eye velocity profile, is generated by a retinotopic updated memory map.

J. Droulez, A. Berthoz

Neural Networks for Motor Program Generation

Because the vertebrate central nervous system can be considered to be a federation of ‘special purpose computers’, ‘intelligent’ robots are currently being conceived and developed that have neural network architecture and consist of various modules with special purpose functions, such as: A) Pattern Recognition (visual, auditory, tactile, etc.); B) Associative or Content-Addressable Memories; C) Internal Representation of Spatio-Temporal Patterns and Trajectories; D) Generation of Motor Programs; E) Sensory Coordinate Transformation and Motor Coordinate Transformation.

Rolf Eckmiller

Innate and Learned Components in a Simple Visuo-Motor Reflex

In this review a model is proposed that explains the differences in a simple visuo-motor reflex, the optokinetic reflex (OKR), in various mammals by the specific interactions between retinal and cortical projections to the nucleus of the optic tract (NOT) in the pretectum. The model is based on the following assumptions: 1. A genetically prespecified retinal input reaches the NOT first during ontogeny. 2. Thereafter, information flow via cortical connections is accepted only if it agrees with the complements of the retinal input. 3. After the cortico-pretectal connections have been established, the retino-pretectal connections gradually lose their influence and are replaced by cortical afferents. This model explains why, after the loss of visual cortex, the optokinetic reflex is much weaker and asymmetric, and why wrong instructions during early visual experience lead to a loss of binocularity in the NOT and, as a consequence, to an impaired and asymmetric OKR.

K.-P. Hoffmann

Tensor Geometry: A Language of Brains & Neurocomputers. Generalized Coordinates in Neuroscience & Robotics

Neurocomputers are implementations of mathematical paradigms performed by real neuronal networks. Thus, it is essential for their construction that the mathematical language of brain function be made explicit. Based on the philosophy that the brain, as a product of natural evolution, is a geometrical object (not a machine that is a product of engineering), tensor geometry is used to describe multidimensional general (tensor) transformations of natural coordinates that are intrinsic to the organism. Such an approach uses a formalism that not only generalizes existing Cartesian vector-matrix paradigms, but can unite neuroscience with robotics: general frames include both natural coordinate systems (found by quantitative computerized anatomy) and those simple artificial ones that are selected in engineering for convenience. Utilizations of the tensor approach center on natural and artificial sensorimotor operations, promoting a co-evolution of coordinated (and intelligent) robots with Nature’s systems such as adaptive cerebellar compensatory reflexes. Such a sensorimotor-based strategy also enables cross-fertilization, e.g. employing neurocomputers to implement a coordination algorithm of cerebellar networks, to be used for functional neuromuscular stimulation of paraplegics.

A. Pellionisz

Extending Kohonen’s Self-Organizing Mapping Algorithm to Learn Ballistic Movements

Rapid limb movements are known to be initiated by a brief torque pulse at the joints and to proceed freely thereafter (ballistic movements). To initiate such movements with a desired starting velocity u requires knowledge of the relation between the torque pulse and the desired velocity of the limb. We show for a planar two-link arm model that this relationship can be learnt with the aid of a self-organizing mapping of the type proposed earlier by Kohonen. To this end we extend Kohonen’s algorithm by a suitable learning rule for the individual units and show that this approach results in a significant improvement in the convergence properties of the learning rule used.

Helge Ritter, Klaus Schulten
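
A compact sketch of this kind of extension is shown below (a simplified, hypothetical variant: a one-dimensional lattice and teacher pairs of desired velocity and torque pulse supplied directly, whereas the contribution derives the torque corrections from a two-link arm model with a more elaborate per-unit learning rule):

```python
import numpy as np

rng = np.random.default_rng(2)
GRID = 15                                     # 1-D lattice of units, for simplicity
w_in  = rng.uniform(-1, 1, size=(GRID, 2))    # input weights: desired starting velocity u
w_out = np.zeros((GRID, 2))                   # output values: torque pulse per unit

def neighborhood(winner, sigma):
    d = np.arange(GRID) - winner
    return np.exp(-d**2 / (2 * sigma**2))

def train(samples, epochs=50, eps=0.3, sigma0=4.0):
    for e in range(epochs):
        sigma = sigma0 * (0.1 / sigma0) ** (e / epochs)      # shrink the neighbourhood
        for u, torque in samples:
            s = np.argmin(np.linalg.norm(w_in - u, axis=1))  # best-matching unit
            h = neighborhood(s, sigma)[:, None]
            w_in  += eps * h * (u - w_in)         # ordinary Kohonen map update
            w_out += eps * h * (torque - w_out)   # output learned with the same topology
```

Each unit carries both an input vector in velocity space and an output value (the torque pulse); the winner is selected on the input side and the same neighbourhood function adapts both, so the output map inherits the topology of the self-organizing map.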

Parallel Computers and Cellular Automata

Limited Interconnectivity in Synthetic Neural Systems

If designers of integrated circuits are to make a quantum jump forward in the capabilities of microchips, the development of a coherent, parallel type of processing that provides robustness and is not sensitive to failure of a few individual gates is needed. The problem of using arrays of devices, highly integrated within a chip and coupled to each other, is not one of making the arrays, but is one of introducing the hierarchical control structure necessary to fully implement the various system or computer algorithms necessary. In other words, how are the interactions between the devices orchestrated so as to map a desired architecture onto the array itself? We have suggested in the past that these arrays could be considered as local cellular automata [1], but this does not alleviate the problem of global control which must change the local computational rules in order to implement a general algorithm. Huberman [2,3] has studied the nature of attractors on finite sets in the context of iterative arrays, and has shown in a simple example how several inputs can be mapped into the same output. The ability to change the function during processing has allowed him to demonstrate adaptive behavior in which dynamical associations are made between different inputs, which initially produced sharply distinct outputs. However, these remain only the initial small steps toward the required design theory to map algorithms into network architecture. Hopfield and coworkers [4,5], in turn, have suggested using a quadratic cost function, which in truth is just the potential energy surface commonly used for Lyapunov stability trials, to formulate a design interconnection for an array of neuron-like switching elements. This approach puts the entire foundation of the processing into the interconnections.

L. A. Akers, M. R. Walker, D. K. Ferry, R. O. Grondin
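
For reference, the quadratic cost function referred to in the last sentence has the standard Hopfield form for binary units $s_i = \pm 1$:

```latex
E(s) \;=\; -\tfrac{1}{2}\sum_{i\neq j} w_{ij}\, s_i s_j \;-\; \sum_i I_i\, s_i
```

With symmetric couplings $w_{ij} = w_{ji}$ and asynchronous updates, $E$ never increases, so it acts as a Lyapunov (potential energy) function; designing the interconnections then means choosing the $w_{ij}$ so that the desired configurations sit at minima of $E$.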

Nonlinear Optical Neural Networks: Dynamic Ring Oscillators

We analyze the dynamics of a simple nonlinear optical circuit and draw upon analogies with neural network models. A key element in the circuit is a dynamic holographic medium, a photorefractive medium, which serves as a long-term memory. We discuss associative recall with the device in the context of competition among the memory states of the system. When the system is not being addressed, it will forget in time the stored items. At the same time, the system can display an unlearning phenomenon whereby long-term memory traces tend to equalize their strengths. Under other conditions the system will become obsessive, forgetting all memory traces except for one.

Dana Z. Anderson

Dynamical Properties of a New Type of Neural Network

A new type of neural network is presented. It comprises both dense short-range and more sparsely distributed long-range synaptic connections between a two-dimensional array of cells that represents a portion of the cortex. All of these connections are excitatory, but inhibitory effects locally limit the number of cells that can be active at any time. Small areas of the cellular assembly are designated as input and output regions, and the dynamical response of the system to a variety of inputs is investigated.

Rodney M. J. Cotterill

A Dedicated Computer for Simulation of Large Systems of Neural Nets

A dedicated neurocomputer has been designed for high speed parallel simulation of large systems of neural networks. The machine consists of a 3-dimensional array of autonomous simulators, each capable of solving rectangular analog nets at a rate of 4 million synapses per second and learning at a rate of 1.3 million synaptic updates per second. The simulators are connected to their nearest neighbours in 3 dimensions and communication between them is performed at 10 Mbit/s. The machine is designed around an industry-standard development environment for ease of programming.

Simon Garth

Neurocomputer Applications

Neurocomputing is the engineering discipline concerned with non-programmed adaptive information processing systems called neural networks that develop their own algorithms in response to their environment. Neurocomputing is a fundamentally new and different information processing paradigm. It is an alternative to the programming paradigm. This paper discusses the nature of neurocomputing, surveys some specific neural network information processing capabilities, and discusses applications of neurocomputing.

Robert Hecht-Nielsen

Asynchrony and Concurrency

The problem of synchrony versus asynchrony in concurrent computation falls neatly along the lines of granularity level in parallel computers. We discuss both SIMD and MIMD architectures, provide some examples, and discuss the asynchrony of open systems, a most advanced form of concurrent computation which is now emerging.

B. A. Huberman

Implementations of Neural Network Models in Silicon

Is it possible for a computer to perform real time speech or visual pattern recognition? We know the tasks are possible; existence proofs are all around us. Most humans are able to perform these tasks seemingly effortlessly, yet, to date, artificial machines designed to perform these functions have fallen far short of human performance. At first this seems puzzling, since today’s computers operate with instruction times of 10-100ns, a million times faster than typical switching speeds in the brain. As far as we know, the only advantage of the brain is that it processes information in a massively parallel fashion as opposed to a typical computer which deals with one instruction and one or two pieces of data at a time. It would seem, therefore, that it would be useful to emulate the brain in machines to gain some of its advantages.

Stuart Mackie, Hans P. Graf, Daniel B. Schwartz

The Transputer

VLSI technology allows a large number of identical devices to be manufactured cheaply. For this reason, it is attractive to implement a concurrent system using a number of identical components, each programmed with the appropriate process. A transputer [2] is such a component, and is designed to execute the parallel programming language occam.

David May, Roger Shepherd

Parallel Architectures for Neural Computers

Recent advances in “neural” computation models [1] will only demonstrate their true value with the introduction of parallel computer architectures designed to optimise the computation of these models. Many special-purpose neural network hardware implementations are currently underway [2,3,4]. While these machines may solve the problem of realising the potential of specific models, the problem of designing a “general-purpose” Neural Computer has not really been addressed. This Neural Computer should provide a framework for executing neural models in much the same way that traditional computers address the problems of number crunching for which they are best suited. This framework must include a means of programming (i.e. operating system and programming languages) and the hardware must be reconfigurable in some manner.

M. Recce, P. C. Treleaven

Control of the immune response

When foreign substances (macromolecules, bacteria or viruses, further called antigens and abbreviated as Ag) attempt to invade our body, a strong reaction, specific to the antigen, is triggered (we shall not describe here the mechanisms that are not specific to the Ag). The so-called immune reaction consists of the secretion of macromolecules (the antibodies, abbreviated as Ab) and cells in the blood and the lymph (the lymphocytes) which participate in the recognition and the destruction of the antigens. Recognition is the process by which a site on the surface of an antigen is bound by the specific site of an immunoglobulin (an Ab) or the receptor on the membrane of a lymphocyte. Specificity is ensured by the steric complementarity of the van der Waals bond. The transformation of the Ab or of the cell receptor then gives rise to a series of cellular transformations, secretions, and multiplications which result in the subsequent destruction of the foreign antigens. At the very simple level of our description, everything we say about one mechanism, whether molecular with Ab or cellular with lymphocytes, is valid for the other one, and we shall not distinguish further between the two immune responses.

Gérard Weisbuch, Henri Atlan

Structure in Neural Networks

Group Report 1: Structure in Neural Networks

J. A. Feldman, H. Mallot

General Software for the Simulation of Neural Nets

Group Report 2: General Software for the Simulation of Neural Nets

The brain is a quintessentially multi-purpose machine, and there is a natural tendency on the part of those who simulate brain function to build a certain degree of flexibility into their computer models. It invariably transpires, however, that this is not a particularly desirable approach. It was the clear consensus that one inevitably finishes up with a compromise in which no single aspect of brain function is being reproduced in an optimal fashion. The message therefore appears to be that one should work with dedicated programmes, aimed at a relatively specific structure, and we had the opportunity of examining in detail two examples of this type: the Caltech vision modeller and the Rochester connectionist simulator. The vision modeller, which has been developed by Fox, Bower, Zemansky and Koch, faithfully follows the anatomy of a small part of the visual system, right down to the geometrical detail of the dendrites. One of its aims is to show how this results in the specialization displayed by the simple cells, and possibly even the complex cells. The connectionist simulator, which was developed by Feldman, will permit one to follow the functioning of a single cell in greater detail than heretofore. It appears to represent the state of the art with this particular problem.

C. v. d. Malsburg, R. M. J. Cotterill

Hardware (e.g. Transputer Based) for Neural Computing

Group Report 3: Hardware (e.g. Transputer Based) for Neural Computing

The interests of workers involved in neural computing are exceedingly varied and consequently, their specific computational requirements cannot be universally defined. Nevertheless it is possible to draw some broad generalizations which help to classify the problem.

R. Shepherd, S. Garth

Typical Applications of Neural Computers

Group Report 4: Typical Applications of Neural Computers

The first part of the discussion centered on the application of NCs to language translation and speech recognition. The need for major breakthroughs towards these aims was commonly agreed. An intermediate and less distant goal could be provided by translation between very similar languages such as Danish and Swedish (Hertz). However, many felt that even this requires a capability close to language understanding (Bienenstock) and, at best, still very complicated rules. The situation might be better for vision, the rules there being given by geometry and physics (Omohundro).

R. Eckmiller, H. Ritter

Backmatter
