About this Book

The NATO workshop on Disordered Systems and Biological Organization was attended, in March 1985, by 65 scientists representing a large variety of fields: Mathematics, Computer Science, Physics and Biology. The purpose of this interdisciplinary workshop was to shed light on the conceptual connections between fields of research as apparently different as automata theory, combinatorial optimization, spin glasses and the modeling of biological systems, all of them concerned with the global organization of locally interconnected complex systems. Common to many contributions in this volume is the underlying analogy between biological systems and spin glasses: they share the same properties of stability and diversity. This is the case, for instance, of primary sequences of biopolymers like proteins and nucleic acids considered as the result of mutation-selection processes [P. W. Anderson, 1983], or of evolving biological species [G. Weisbuch, 1984]. Some of the most striking aspects of our cognitive apparatus, involved in learning and recognition [J. Hopfield, 1982], can also be described in terms of stability and diversity in a suitable configuration space. These interpretations and preoccupations merge with those of theoretical biologists like S. Kauffman [1969] (genetic networks) and of mathematicians working in automata theory: the dynamics of networks of automata can be interpreted in terms of the organization of a system into multiple possible attractors. The present introduction outlines the relationships between the contributions presented at the workshop and briefly discusses each paper in its particular scientific context.

Table of Contents

Frontmatter

Automata Theory

Frontmatter

1. Cellular Automata Models of Disorder And Organization

Cellular automata are mathematical objects introduced in 1948 by J. von Neumann and S. Ulam to “abstract the logical structure of life” [1]. Since then, they have established themselves as unique tools to analyze the emergence of global organization, complexity, and pattern formation from the iteration of local operations between simple elements. They have also been extensively used as models of universal computation, and are being increasingly applied to a variety of concepts from physics and chemistry [2]. They are in fact versatile enough to offer analogies with almost all the themes discussed at this meeting (in particular: self-organization, dissipative systems, spatial vs. thermal fluctuations, neural networks, optimization, ergodicity-breaking, and ultrametricity).

Gérard Y. Vichniac

2. Dynamics and Self-Organization in One-Dimensional Arrays

A one-dimensional (1-D for short) array is a collection of identical finite state machines indexed by the integers x of ℤ, where any cell x can directly receive information from its neighbours x + i, i = -n, …, n, where n is a positive integer called the scope of the array. Each machine synchronously changes its state at discrete time steps as a function of its own state and the states of its neighbouring machines.

Maurice Tchuente
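
To make the definition above concrete, here is a minimal sketch (not taken from the chapter) of one synchronous update of such an array in Python; the binary state set, the periodic boundary and the example rule are illustrative assumptions.

```python
# Minimal sketch of a synchronous 1-D array update (illustrative, not from the chapter).
# Each cell x sees its neighbours x-n, ..., x+n (scope n); boundaries are taken
# periodic here purely for convenience.

def step(states, rule, n):
    """One synchronous time step of a 1-D array of finite state machines.

    states : list of cell states
    rule   : function mapping a tuple of 2n+1 states to the new state
    n      : scope of the array
    """
    size = len(states)
    return [rule(tuple(states[(x + i) % size] for i in range(-n, n + 1)))
            for x in range(size)]

# Example: elementary rule 90 (XOR of the two neighbours), scope n = 1.
rule90 = lambda neigh: neigh[0] ^ neigh[2]
config = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    config = step(config, rule90, 1)
    print(config)
```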

3. Basic Results for the Behaviour of Discrete Iterations

For about 20 years there has been a growing effort to study the dynamical behaviour of discrete, iterative systems. The reason for this is probably that different (but conceptually similar) discrete models are presently of interest in various domains of science, such as: Physics (spin glass problems, see (10), (12), (14), for example), Chemistry (diffusion reactions (9)), Biomathematics (neural networks, genetic nets, (10), (11), (14), (18), (20)), and Computer Science (pattern recognition, associative memories (7), (10), cellular automata (1), (2), (9), (19), (21), (22), (23), cellular arrays for systolic computation in V.L.S.I. systems (13), (15), (17), (19), and so on; see especially (4), (5), (6)).

François Robert

4. On Some Dynamical Properties of Monotone Networks

Cellular automata were first introduced in the 1950’s by John von Neumann. Informally, a cellular automaton can be viewed as a discrete dynamical system whose global behaviour is generated by the (simple) local interactions of its elementary cells, hereafter called the sites of the automaton. As pointed out by Wolfram (1984), cellular automata have arisen in several disciplines, because they provide examples in which the generation of complex behaviour by the cooperative effects of simple components may be studied. No unified mathematical framework has yet been developed to model the iterative behaviour of general networks of automata, but some tools such as algebraic operators and Lyapunov functions (Goles (1985), Fogelman (1985)), modular arithmetic and polynomial algebra (Martin (1983)), and arithmetic in finite fields (Gill (1966), Tchuente (1982)) have been introduced to analyze special classes of cellular automata. In this paper, we characterize the dynamics of some monotone cellular automata. First we use a morphism technique to derive the iterative behaviour of automata whose transition functions are generalized majority functions. Then, we study some relationships between the connection-graph and the iteration-graph of Boolean automata with memory, assuming that the transition function of each site is monotone with regard to its inputs.

Yves Robert, Maurice Tchuente

5. Inhomogeneous Cellular Automata (INCA)

A cellular automaton in 2 dimensions, as considered in this paper, consists of a regular array of sites (or cells), with a value, or state (0 or 1) at each site. With an evolution law in discrete time, a cellular automaton is a fully discrete dynamical system. The law is local: the value assumed by any one site at time t + 1 is determined by the values taken at time t by the neighboring sites. (For reviews, see Vichniac, this volume, and references therein.) These systems are homogeneous in the sense that all sites evolve according to the same transition rule.

H. Hartman, G. Y. Vichniac

6. The Ising Model And The Rudin-Shapiro Sequence

Provided purely imaginary temperatures are allowed, the 1-dimensional cyclic Ising model is identified with the Rudin-Shapiro sequence.

Jean-Paul Allouche
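
For readers unfamiliar with the sequence, the short sketch below generates the first terms of the Rudin-Shapiro sequence from its standard combinatorial definition (the number of occurrences, with overlaps, of the block “11” in the binary expansion of n); the identification with the Ising model itself is not reproduced here.

```python
# Illustrative generator for the Rudin-Shapiro sequence (standard definition, not
# taken from the chapter): a_n = (-1)**(number of occurrences of "11", counted with
# overlaps, in the binary expansion of n).

def rudin_shapiro(n):
    bits = bin(n)[2:]
    pairs = sum(1 for i in range(len(bits) - 1) if bits[i] == bits[i + 1] == '1')
    return (-1) ** pairs

print([rudin_shapiro(n) for n in range(16)])
# -> [1, 1, 1, -1, 1, 1, -1, 1, 1, 1, 1, -1, -1, -1, 1, -1]
```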

7. Dynamical Properties of An Automaton with Memory

Automata networks have often been used to model elementary dynamical properties of neuronal networks (McCulloch and Pitts (1943)). Caianiello et al. (1967) generalized the above model in order to introduce the refractory character of the neural response. They proposed to associate a memory with the system and obtained the following discrete iteration scheme, simulating the response of a single neuron: $$ x_{n+1} = \mathbf{1}\left( \sum_{i=0}^{k-1} a_i\, x_{n-i} - b \right) $$ where x_n belongs to {0,1}, a_i and b are real parameters, k, the size of the memory, is a given integer, and 1(u) is 1 if u is nonnegative and 0 otherwise.

Michel Cosnard, Eric Goles Chacc
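
A small, self-contained sketch of the Caianiello-type iteration quoted above; the particular weights, threshold and initial history used in the example are arbitrary illustrative values.

```python
# Minimal sketch (illustrative parameters) of the single-neuron iteration with memory
# described above: x_{n+1} = 1( sum_{i=0}^{k-1} a_i * x_{n-i} - b ),
# where 1(u) is 1 if u >= 0 and 0 otherwise.

def iterate(history, a, b, steps):
    """history: the last k states (most recent first); a: weights a_0..a_{k-1}; b: threshold."""
    x = list(history)
    out = []
    for _ in range(steps):
        u = sum(ai * xi for ai, xi in zip(a, x)) - b
        x_new = 1 if u >= 0 else 0
        out.append(x_new)
        x = [x_new] + x[:-1]      # shift the memory window
    return out

# Hypothetical example with memory size k = 3.
print(iterate(history=[1, 0, 1], a=[0.5, -0.3, 0.4], b=0.2, steps=10))
```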

8. Dynamics of Random Boolean Networks

Random Boolean networks have been studied in order to model complex systems composed of many parts. S. Kauffman (1970) bases his choice of such models upon a comparison between genetic nets and networks of randomly interconnected binary elements. He gives important statistical results for nets with connectivity k = 1, 2, 3, and N (the number of elements in the net).

Didier Pellegrin
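
The following sketch (illustrative, with arbitrary parameters) builds a random Boolean network with connectivity k in the spirit of the Kauffman models discussed above and measures the period of the cycle reached from a random initial state.

```python
# Illustrative sketch of a Kauffman-style random Boolean network with connectivity k,
# together with a brute-force search for the cycle reached from a random initial state.
# Parameters and helper names are hypothetical.
import random

def random_net(N, k):
    inputs = [random.sample(range(N), k) for _ in range(N)]                      # k inputs per element
    tables = [[random.randint(0, 1) for _ in range(2 ** k)] for _ in range(N)]   # random Boolean functions
    return inputs, tables

def step(state, inputs, tables):
    new = []
    for inp, tab in zip(inputs, tables):
        index = sum(state[j] << p for p, j in enumerate(inp))
        new.append(tab[index])
    return tuple(new)

def cycle_length(state, inputs, tables):
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    return t - seen[state]          # period of the attractor

random.seed(0)
N, k = 12, 2
net = random_net(N, k)
s0 = tuple(random.randint(0, 1) for _ in range(N))
print("cycle period:", cycle_length(s0, *net))
```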

9. Random Fields And Spatial Renewal Potentials

By using an approach similar to that used for Markov random fields, we propose a spatial version of renewal processes, generalizing the usual notion in dimension 1. We characterize the potentials of such renewal random fields and we give a theorem about the presence of phase transition. Finally, we study the problem of sampling renewal fields by means of a random automaton; we show simulations and discuss the stopping rules of the sampling process.

J. Demongeot, J. Fricot

10. Lyapunov Functions and Their Use in Automata Networks

In this paper, we introduce the concept of Lyapunov function to study the dynamical behavior of automata networks. This notion, classical in continuous dynamical systems, has proved very useful for discrete systems as well.

F. Fogelman Soulie
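
As an illustration of the concept, the sketch below uses the classical energy function of a symmetric threshold network, E(x) = -1/2 Σ w_ij x_i x_j + Σ θ_i x_i, which is non-increasing under asynchronous threshold updates; this is a standard example and not necessarily one of the specific Lyapunov functions constructed in the chapter.

```python
# A standard example of a Lyapunov (energy) function for a symmetric threshold network,
# given here as an illustration of the concept; the chapter's own constructions may differ.
# E(x) = -1/2 * sum_ij w_ij x_i x_j + sum_i theta_i x_i decreases (weakly) under
# asynchronous updates x_i <- 1[ sum_j w_ij x_j - theta_i >= 0 ] when W is symmetric
# with zero diagonal.
import random

def energy(x, W, theta):
    N = len(x)
    quad = sum(W[i][j] * x[i] * x[j] for i in range(N) for j in range(N))
    return -0.5 * quad + sum(theta[i] * x[i] for i in range(N))

def async_update(x, W, theta, i):
    field = sum(W[i][j] * x[j] for j in range(len(x))) - theta[i]
    x[i] = 1 if field >= 0 else 0

random.seed(1)
N = 6
W = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        W[i][j] = W[j][i] = random.uniform(-1, 1)   # symmetric couplings, zero diagonal
theta = [random.uniform(-0.5, 0.5) for _ in range(N)]
x = [random.randint(0, 1) for _ in range(N)]

for t in range(20):
    async_update(x, W, theta, t % N)
    print(round(energy(x, W, theta), 3))            # non-increasing sequence
```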

11. Positive Automata Networks

Automata networks have been introduced as a modeling tool in several fields: neurophysiology, Fogelman (1985), Hopfield (1982), McCulloch (1943); self-reproducing cellular arrays, Von Neumann (1966); group dynamics, Goles (1985a); and, more recently, simulations of spin glass structures and other physical systems, Demongeot, Goles, Tchuente (1985), Fogelman (1983, 1985), Goles (1985a). In most of those applications, the study of the automata dynamics was carried out by computer simulations, and theoretical results were scarce. This last aspect is due, principally, to the hard combinatorial analysis which is needed in order to handle the discrete nature of the problem: finite set of states, discrete cellular array, discrete time evolution, etc.

E. Goles

12. Directional Entropies of Cellular Automaton-Maps

Consider a fixed lattice L in n-dimensional Euclidean space, and a finite set K of symbols. A correspondence a which assigns a symbol a(x) ∈ K to each lattice point x ∈ L will be called a configuration. An n-dimensional cellular automaton can be described as a map which assigns to each such configuration a some new configuration a′ = f(a) by a formula of the form $$ a'(x) = F\left(a(x + v_1), \ldots, a(x + v_r)\right) $$ where v_1, ⋯, v_r are fixed vectors in the lattice L, and where F is a fixed function of r symbols in K. I will call f the cellular automaton-map which is associated with the local map F. If the alphabet K has k elements, then the number of distinct local maps F is equal to $$ k^{k^{r}} $$. This is usually an enormous number, so that it is not possible to examine all of the possible F. Depending on the particular choice of F and of the v_i, such an automaton may display behavior which is simple and convergent, or chaotic and random looking, or behavior which is very complex and difficult to describe. (Compare [Wolfram].)

John Milnor
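
As a quick worked instance of the count above (added for concreteness): with a binary alphabet, k = 2, and a neighbourhood of r = 3 sites, $$ k^{k^{r}} = 2^{2^{3}} = 2^{8} = 256, $$ the familiar number of elementary one-dimensional rules; already for r = 5 the count is 2^32, more than four billion local maps.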

Physical Disordered Systems

Frontmatter

13. On the Statistical Physics of Spin Glasses

These notes are intended to provide a brief introduction to recent progress in the physical understanding of the equilibrium properties of spin glasses (in mean field theory). In order to try to make them more accessible, I have systematically avoided all the computations, which are described in many good recent reviews (see for instance /1–4/), in which one will also find more extended references. An introduction to the replica method, from which a large part of the following results have been deduced, can be found in /4/. Parts of the following text are inspired by a “pedagogical” article I recently wrote in French /18/.

M. Mézard

14. Symbolic Computation Methods for Some Spin Glasses Problems

The purpose of this paper is the computation of exact expressions (symbolic expressions) of two fundamental functions used in Statistical Physics. In the first part we study the partition functions of finite two-dimensional and three-dimensional Ising models. These partition functions can be expressed with polynomials and we want to compute all the coefficients of these polynomials exactly. In the second part we deal with the free energy of some regular two-dimensional Ising models. We write these functions in the form of double integrals and we carry out the calculation of the kernels of these integrals.

Bernard Lacolle

15. Order and Defects in Geometrically Frustrated Systems

In many physical (and perhaps biological) systems, there is a contradiction between the local order (local building rule) and the possibility for this local configuration to generate a perfect tiling of Euclidean space. This is the case, for instance, for amorphous metals, where long range order is absent while the local configuration is rather well defined, with the presence of fivefold (icosahedral) symmetry1. Our aim is to provide a definition for the notions of “order” and “defects” in these materials. In a first step the underlying space geometry is modified in order for the local configuration to propagate freely (without “frustration”). This is often possible by allowing for curvature in space (either positive or negative). The configuration obtained in this space is called the “Constant Curvature Idealization” of the disordered material. Let us take a simple example. In 2D a perfect pentagon tiling of the Euclidean plane is impossible (fig. 1-a). It is however possible to tile a positively curved surface, a sphere, with pentagons, leading to a pentagonal dodecahedron. Let us now try to densely pack spheres in 3D (this is an approximation to amorphous metal structures). Four spheres build a regular tetrahedron, but a perfect tetrahedral packing is impossible because the tetrahedron dihedral angle is not a submultiple of 2π (fig. 1-b). A perfect tetrahedral packing becomes possible if space is curved (Fig. 1: geometrical frustration in 2 and 3 dimensions).

R. Mosseri, J. F. Sadoc
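
A one-line version of the three-dimensional frustration argument above (standard geometry, added for concreteness): the dihedral angle of a regular tetrahedron is $$ \arccos\left(\tfrac{1}{3}\right) \approx 70.53^{\circ}, $$ so five tetrahedra sharing an edge leave a gap of about 7.4°; since this angle is not a submultiple of 2π, a perfect tetrahedral packing of flat space is impossible.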

Formal Neural Networks

Frontmatter

16. Collective Computation With Continuous Variables

The model on which collective computation in neural networks was originally shown to produce a content addressable memory was based on neurons with two discrete states (1) (or levels of activity) which might be thought of as “0” and “1”. That model involved an asynchronous updating of the state of each neuron, and could be closely related to aspects of magnetic Ising systems. The role of exchange in the magnetic system is played by the connections between the neurons in the biological system.

J. J. Hopfield, D. W. Tank

17. Collective Properties of Neural Networks

A neural network is basically made of interconnected threshold automata i (i = 1, …, N), collecting the signals S_j produced by a set of upstream units j and delivering an output signal S_i to a set of downstream units k. The evolution of the dynamic variables is driven by the following equation: $$ S_i(t) = F\left( V_i\left(\{ S_j(t - \Delta_{ij}) \}\right) - e_i \right) \qquad (1) $$ where F is a sigmoid function, V_i the membrane potential of i, Δ_ij the transmission delay between the units j and i, and e_i the threshold of i.

P. Peretto, J. J. Niez
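
A rough sketch of the dynamics written above, under simplifying assumptions that are not in the abstract: all transmission delays Δ_ij are taken equal to one time step, and the membrane potential V_i is taken linear in the incoming signals.

```python
# Sketch of the network dynamics above under simplifying assumptions that are NOT in
# the abstract: unit transmission delays (Delta_ij = 1 time step) and a membrane
# potential taken linear in the incoming signals, V_i = sum_j C_ij * S_j.
import math, random

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def step(S, C, e):
    """Synchronous update S_i <- F( sum_j C_ij S_j - e_i )."""
    return [sigmoid(sum(C[i][j] * S[j] for j in range(len(S))) - e[i])
            for i in range(len(S))]

random.seed(2)
N = 5
C = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]   # connection strengths
e = [0.0] * N                                                        # thresholds
S = [random.random() for _ in range(N)]
for _ in range(5):
    S = step(S, C, e)
print([round(s, 3) for s in S])
```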

18. Determining The Dynamic Landscape Of Hopfield Networks

The purpose of this paper is to describe in a semi-quantitative manner the basins of attraction of the original Hopfield (1982) networks.

Gérard Weisbuch, Dominique d’Humières

19. High Resolution Microfabrication and Neural Networks

This paper reviews basic lithographic considerations for current integrated circuit fabrication at the 1 micron level and illustrates methods for making structures with dimensions smaller than 0.1 micron. Many of these methods are well suited to fabricating high-density, resistive neural-networks.

L. D. Jackel, R. E. Howard, H. P. Graf, J. Denker, B. Straughn

20. Ultrametricity, Hopfield Model and all that

It is part of our everyday experience that when we try to memorize new information we look for all possible relationships with previously stored patterns. If we can classify the new pattern, that is, place it in a hierarchical tree of categories, we do it with so much eagerness that sometimes we simply censor the data so as to eliminate some exceptional features. Hopfield’s model (1) for the brain seems, on the contrary, to be quite happy crunching uncorrelated orthogonal words. This is all the more surprising given the fact that otherwise the model simulates quite nicely other features of Human Memory.

M. A. Virasoro

21. The Emergence of Hierarchical Data Structures in Parallel Computation

We think of the world in a hierarchical manner. When dealing with the objects around us, however, we usually concentrate on one level of the hierarchy at a time. The very possibility of such a separation is in fact a precondition for knowledge. When several levels of the hierarchy are simultaneously active, as in economics — where natural conditions, individual, corporate and governmental decisions all interact — we say that the situation is “complex”…

Michel Kerszberg

22. Cognitive Capabilities of a Parallel System

There has been recent interest in parallel, distributed, associative models as ways of organizing powerful computing systems and of handling noisy and incomplete data. There is no doubt such systems are effective at doing some interesting kinds of computations. Almost certainly they are intrinsically better suited to many kinds of computations than traditional computer architecture.

James A. Anderson

23. Neural Network Design for Efficient Information Retrieval

The ability of neural networks to store and retrieve information has been investigated for many years. A renewed interest has been triggered by the analogy between neural networks and spin glasses which was pointed out by W.A. Little et al.1 and J. Hopfield2. Such systems would be potentially useful autoassociative memories “if any prescribed set of states could be made the stable states of the system”2; however, the storage prescription (derived from Hebb’s law) which was used by both authors did not meet this requirement, so that the information retrieval properties of neural networks based on this law were not fully satisfactory. In the present paper, a generalization of Hebb’s law is derived so as to guarantee, under fairly general conditions, the retrieval of the stored information (autoassociative memory). Illustrative examples are presented.

L. Personnaz, I. Guyon, G. Dreyfus
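
To illustrate the issue raised above, the toy script below compares the Hebb prescription with the projection (pseudo-inverse) rule, one well-known generalization that makes every stored pattern a fixed point; whether it coincides with the rule derived in the chapter is not asserted here.

```python
# Toy comparison (illustrative only) between the Hebb prescription W = (1/N) * sum_m x^m (x^m)^T
# and the projection (pseudo-inverse) rule W = X X^+, one well-known generalization that
# makes every stored pattern a fixed point of x -> sign(W x).
import numpy as np

rng = np.random.default_rng(0)
N, M = 20, 4
X = rng.choice([-1, 1], size=(N, M))          # M patterns of N +-1 units, stored as columns

W_hebb = (X @ X.T) / N
W_proj = X @ np.linalg.pinv(X)                # projection onto the span of the patterns

def is_fixed_point(W, x):
    return np.array_equal(np.sign(W @ x), x)

print("Hebb rule fixed points :", sum(is_fixed_point(W_hebb, X[:, m]) for m in range(M)), "/", M)
print("Projection rule        :", sum(is_fixed_point(W_proj, X[:, m]) for m in range(M)), "/", M)
```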

24. Learning Process in an Asymmetric Threshold Network

Threshold functions and related operators are widely used as basic elements of adaptive and associative networks [Nakano 72, Amari 72, Hopfield 82]. There exist numerous learning rules for finding a set of weights that achieves a particular correspondence between input-output pairs. But early work in the field has shown that the number of threshold functions (or linearly separable functions) in N binary variables is small compared to the number of all possible Boolean mappings in N variables, especially if N is large. This problem is one of the main limitations of most neural network models where the state is fully specified by the environment during learning: they can only learn linearly separable functions of their inputs. Moreover, a learning procedure which requires the outside world to specify the state of every neuron during the learning session can hardly be considered a general learning rule, because in real-world conditions only partial information about the “ideal” network state for each task is available from the environment. It is possible to use a set of so-called “hidden units” [Hinton, Sejnowski, Ackley 84], without direct interaction with the environment, which can compute intermediate predicates. Unfortunately, the global response depends on the output of a particular hidden unit in a highly non-linear way; moreover, the nature of this dependence is influenced by the states of the other cells.

Yann Le Cun

25. Layered Networks for Unsupervised Learning

Among several models of neural networks (1–4), layered structures are particularly appealing as they lead naturally to a hierarchical representation of the input sets, along with a reduced connectivity between individual cells. In Refs. 3 and 4, it was shown that such layered networks are able to memorize complicated input patterns, such as alphabetic characters, during unsupervised learning. On top of that, the filtering properties of the network can be continuously tuned from very sharp discrimination between similar patterns to broad class aggregation when the selectivity of the cells is decreased. Unfortunately, it was also shown (4) that these properties come with a reduced stability of the learning (the learning process does not converge for some values of the selectivity).

D. d’Humières

26. Statistical Coding and Short-Term Synaptic Plasticity: A Scheme for Knowledge Representation in the Brain

This work is a theoretical investigation of some consequences of the hypothesis that transmission efficacies of synapses in the Central Nervous System (CNS) undergo modification on a short time-scale. Short-term synaptic plasticity appears to be an almost necessary condition for the existence of activity states in the CNS which are stable for about 1 sec., the time-scale of psychological processes. It gives rise to joint “activity-and-connectivity” dynamics. This dynamics selects and stabilizes particular high-order statistical relationships in the timing of neuronal firing; at the same time, it selects and stabilizes particular connectivity patterns. In analogy to statistical mechanics, these stable states, the attractors of the dynamics, can be viewed as the minima of a hamiltonian, or cost function. It is found that these low-cost states, termed synaptic patterns, are topologically organized. Two important properties of synaptic patterns are demonstrated: (i) synaptic patterns can be “memorized” and later “retrieved”, and (ii) synaptic patterns have a tendency to assemble into compound patterns according to simple topological rules. A model of position-invariant and size-invariant pattern recognition based on these two properties is briefly described. It is suggested that the scheme of a synaptic pattern may be more adapted than the classical cell-assembly notion for explaining cognitive abilities such as generalization and categorization, which pertain to the notion of invariance.

Christoph von der Malsburg, Elie Bienenstock

27. A Physiological Neural Network as an Autoassociative Memory

We consider a neural network model in which the single neurons are chosen to closely resemble known physiological properties. The neurons are assumed to be linked by synapses which change their strength according to Hebbian rules [1] on a short time scale (100 ms) [2]. Each nerve cell receives input from a primary set of receptors, which offer learning and test patterns without changing their own properties. The activity of the neurons is interpreted as the output of the network (see Fig. 1). The backward-bent arrows in Fig. 1 indicate the feedback due to the effect of the neuron activity on the synaptic strengths S_ik between neurons k and i in the neural network.

J. Buhmann, K. Schulten

Combinatorial Optimization

Frontmatter

28. Configuration Space Analysis for Optimization Problems

An interesting analogy between frustrated disordered systems studied in condensed matter physics and combinatorial optimization problems [1] has led to the use of simulated annealing (a stochastic algorithm based on the Monte Carlo method) to find approximate solutions to complex optimization problems. A common feature of these systems is the competition between objectives which favor different and incompatible types of ordering. Such “frustration” leads to the existence of a large number of nearly degenerate solutions which are not related by symmetry.

Sara A. Solla, Gregory B. Sorkin, Steve R. White

29. Statistical Mechanics: a General Approach to Combinatorial Optimization

The aim of this work was to investigate the possibility of using statistical mechanics as an alternative framework for studying complex combinatorial optimization problems. This deep connection, suggested in ref. [1] by analogy with the annealing of a solid, makes it possible to design a search procedure which generates a sequence of admissible solutions. This sequence may be viewed as the random evolution of a physical system in contact with a heat bath. As the temperature is lowered, the solutions approach the optimal solution in such a way that a well-organized structure is brought out of a very large number of unpredictable outcomes. To speed up the convergence of this process, we propose a selection procedure which favours the choice of neighbouring trial solutions. This improved version of simulated annealing is illustrated by two examples: the N-city Travelling Salesman Problem [2] and the Minimum Perfect Euclidean Matching Problem [3].

E. Bonomi, J. L. Lutton
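
A generic simulated-annealing sketch in the spirit of the procedure described above; the cost function, neighbourhood move and cooling schedule are placeholder choices, not the chapter's specific Travelling Salesman or matching implementations.

```python
# Generic simulated-annealing sketch (Metropolis moves at a slowly decreasing
# temperature); the cost, move set and cooling schedule are illustrative placeholders.
import math, random

def anneal(cost, neighbour, x0, T0=1.0, alpha=0.95, sweeps=200, moves_per_sweep=100):
    x, cx, T = x0, cost(x0), T0
    best, cbest = x, cx
    for _ in range(sweeps):
        for _ in range(moves_per_sweep):
            y = neighbour(x)
            cy = cost(y)
            # Metropolis acceptance: always accept improvements, accept uphill
            # moves with probability exp(-(cy - cx)/T).
            if cy <= cx or random.random() < math.exp(-(cy - cx) / T):
                x, cx = y, cy
                if cx < cbest:
                    best, cbest = x, cx
        T *= alpha                      # geometric cooling schedule
    return best, cbest

# Toy usage: minimize a rough 1-D landscape over the integers in [-50, 50].
random.seed(3)
f = lambda n: (n / 10.0) ** 2 + 3 * math.sin(n)
move = lambda n: max(-50, min(50, n + random.choice([-1, 1])))
print(anneal(f, move, x0=40))
```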

30. Bayesian Image Analysis

In [8] we introduced a class of image models for various tasks in digital image processing. These models are multi-level or “hierarchical” Markov Random Fields (MRFs). Here we pursue this approach to image modelling and analysis along some different lines, involving segmentation, boundary finding, and computer tomography. Similar models and associated optimization algorithms appear regularly in other work involving immense spatial systems; some examples are the studies in these proceedings on statistical mechanical systems (e.g. ferromagnets, spin-glasses and random fields), the work of Hinton and Sejnowski [14], Hopfield [15], and von der Malsburg and Bienenstock [19], in neural modeling and perceptual inference, and other work in image analysis, e.g. Besag [2], Kiiveri and Campbell [17], Cross and Jain [5], Cohen and Cooper [4], Elliott and Derin [7], Deviver [6], Grenander [11], and Marroquin [20]. The use of MRFs and related stochastic processes as models for intensity data has been prevalent in the image processing literature for some time now; we refer the reader to [8] and standard references for a detailed account of the genealogy.

Donald Geman, Stuart Geman

31. The Langevin Equation as a Global Minimization Algorithm

During the past two years a great deal of attention has been given to simulated annealing as a global minimization algorithm in combinatorial optimization problems [11], image processing problems [2], and other problems [9]. The first rigorous result concerning the convergence of the annealing algorithm was obtained in [2]. In [4], the annealing algorithm was treated as a special case of non-stationary Markov chains, and some optimal convergence estimates and an ergodic theorem were established. Optimal estimates for the annealing algorithm have recently been obtained by nice intuitive arguments in [7].

Basilis Gidas
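
A rough sketch of the idea discussed above: gradient descent driven by Gaussian noise whose amplitude is set by a slowly decreasing temperature; the potential, step size and cooling schedule below are illustrative assumptions, not the estimates of the paper.

```python
# Rough sketch of Langevin-type global minimization: discretized dX = -grad U dt + sqrt(2T) dW
# with a slowly lowered temperature T. All numerical choices here are illustrative only.
import math, random

def langevin_minimize(grad, x0, steps=20000, dt=1e-3, T0=2.0):
    x = x0
    for k in range(1, steps + 1):
        T = T0 / math.log(k + 1)                        # slow, logarithmic-type cooling
        noise = random.gauss(0.0, math.sqrt(2 * T * dt))
        x = x - dt * grad(x) + noise                    # one discretized Langevin step
    return x

# Double-well potential U(x) = (x^2 - 1)^2 + 0.3 x, whose global minimum is near x = -1.
U_grad = lambda x: 4 * x * (x * x - 1) + 0.3
random.seed(4)
print(langevin_minimize(U_grad, x0=1.0))
```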

32. Spin Glass and Pseudo-Boolean Optimization

The minimization of pseudo-boolean functions started in the early sixties and since then has been studied as a discrete optimization problem in operations research, mathematical programming and combinatorics. The author was quite surprised to learn that more recently the problem emerged as the spin glass problem in statistical mechanics. The probabilistic approach developed there has been applied to various combinatorial problems and even to biology or cellular automata.

I. G. Rosenberg

33. Local Versus Global Minima, Hysteresis, Multiple Meanings

Although the macroscopic behaviour of a system is quite different depending on whether its states are described by the global or by the local minima of a potential, the actual simulation of the evolution of physical systems shows the unity of the mechanisms. The absence of exterior information with which to compare the different states (which would not be true of, e.g., some economic agents) and the existence of fluctuations make the choice a matter of time scale and temperature. We shall use some concepts of catastrophe theory (although this theory has not so far been concerned with microscopic mechanisms), like the hysteresis loop, to study some deterministic ways of finding global minima by acting on a parameter of the system in the context of pattern matching (such as multiresolution), and shall then compare, for both alternatives (global or local minima), the functional possibilities relative to A.I.

Y. L. Kergosien

Models of Biological Organization

Frontmatter

34. Boolean Systems, Adaptive Automata, Evolution

The past decade has seen renewed interest in non-von Neumann computation by parallel processing systems. This interest on the part of solid state physicists and others has led to models of pattern recognition and associative memory (1,2,3). In these models, it is largely the dynamical attractors which are of interest as the classes, or memories, stored in the systems. Further, the mathematical tractability of threshold systems with symmetric coupling, that is, in which each binary device “fires” if a weighted sum of excitation minus inhibition exceeds some threshold, and couplings between two binary devices are symmetrical, has focused particular attention on this subclass of automata. The marked advantage of this subclass of automata is the existence of a potential function allowing prescription of weightings on inputs to each binary device in order to choose steady state attractors with desired properties, such as location in state space and stability to perturbation (1,2,3).

Stuart A. Kauffman

35. Invariant Cycles in the Random Mapping of N Integers Onto Themselves. Comparison with Kauffman Binary Network

According to Kauffman’s idea [Kauffman 1970a,b, 1979], one considers an ensemble of P genes which may be found in two possible states s_i, labelled 0 and 1. An overall state S of the ensemble is the set {s_1, s_2, …, s_P}, which is an element of {0,1}^P. Given a mapping {0,1}^P → {0,1}^P, the iteration of this mapping defines the dynamics of any initial S. In the Kauffman model, s_i at time (t+1) is determined by the states of k genes at time t, possibly including s_i itself. Therefore the dynamics is defined by the set of all gene connections and, for each gene, by the data of a Boolean function, that is, by an array of 2^k elements whose values are either 0 or 1 (there are $$ 2^{2^{k}} $$ possible Boolean functions). The dynamics drives any S towards a cycle of period m (1 ⩽ m ⩽ 2^P), and the problem is to find out the number and the periods of those cycles when S is varied over the various possible states. A numerical study has been performed by Kauffman for k = 2, choosing at random the set of gene connections and the P Boolean functions. It appeared that: (i) the average number of cycles is of the order of $$ \sqrt{P} $$; (ii) the average period of the cycles is also of the order of $$ \sqrt{P} $$. This remarkable result shows up some amazing simplicity in the dynamics of a large system and, in particular, helps one to understand how such a large number of interacting genes can produce only a few cellular types.

Jean Coste, Michel Henon
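
A small Monte Carlo sketch of the object studied above: random mappings of {0, …, N-1} into itself, with the number of invariant cycles and of cyclic points measured for each sample; the sample size and the value of N are arbitrary illustrative choices.

```python
# Illustrative Monte Carlo sketch: draw random mappings of {0, ..., N-1} into itself and
# measure, for each, the number of distinct invariant cycles and the total number of
# cyclic points. Parameter values and the averaging procedure are arbitrary choices.
import random

def cycles_of_mapping(f):
    """Return the list of cycle lengths in the functional graph of the mapping f."""
    N = len(f)
    visited = [False] * N
    lengths = []
    for start in range(N):
        path, x = [], start
        while not visited[x]:
            visited[x] = True
            path.append(x)
            x = f[x]
        if x in path:                     # the walk closed on itself: a new cycle
            lengths.append(len(path) - path.index(x))
    return lengths

random.seed(5)
N, samples = 1000, 50
n_cycles, n_cyclic = [], []
for _ in range(samples):
    f = [random.randrange(N) for _ in range(N)]
    lens = cycles_of_mapping(f)
    n_cycles.append(len(lens))
    n_cyclic.append(sum(lens))
print("mean number of cycles       :", sum(n_cycles) / samples)
print("mean number of cyclic points:", sum(n_cyclic) / samples)
```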

36. Fibroblasts, Morphogenesis and Cellular Automata

Cellular automata provide models for various biological processes. Threshold automata were invented to simulate neurons and their electrophysiological behaviour in nervous centres (McCulloch and Pitts, 1943; Hopfield, 1982). Complex behaviours such as recognition and learning can even be simulated (Minsky and Papert, 1969; Le Cun, 1986). Remarkable morphologies are also produced by cellular automata, and there are numerous examples (Ulam, 1962; Conway’s game “Life”, see Gardner, 1971–1972). The examination of pictures obtained with cellular automata suggests new mechanisms for biological pattern formation (Wolfram, 1984).

Y. Bouligand

37. Percolation and Frustration in Neural Networks

There are reasons to believe that the theoretical and computational techniques of solid state physics can be used to get an understanding of the dynamics of the brain. Here I shall approach along similar lines a related, but less ambitious set of problems stemming from tissue-culture experiments with foetal rat cortex neurons [1]. Starting from a planar array of disconnected cells, dendrites and axons grow out slowly, leading via the formation of clusters of connected cells to a percolating network. After a week, synapses develop at the contacts, allowing the cells to transmit spike-signals by a discrete stochastic process. Growth of the network is much slower than spike-propagation. Two main cell types exist: excitatory cells that increase the spike rate of their target cells, and inhibitory cells that decrease it.

A. J. Noest

38. Self Organizing Mathematical Models: Nonlinear Evolution Equations with a Convolution term

The simplest example of a dynamical system which organizes itself through cooperation and competition has been given in this conference [8]; I shall formalize it as follows: let A be a linear operator in the plane R², and consider the ordinary differential system $$ \dot{x} = Ax, \quad x(0) = x_0 \qquad (1) $$ where x is constrained to remain in the unit square $$ x \in K = [-1,1] \times [-1,1]. \qquad (2) $$

Michelle Schatzman

39. Recurrent Collateral Inhibition Simulated in a Simple Neuronal Automata Assembly

One of the central problems that has to be resolved in view of a better understanding of the functioning of CNS structures is that of the dynamic properties of its constituent neuronal nets. Indeed, it clearly appears to experimental as well as theoretical neurobiologists that, despite the wealth of knowledge accumulated on the properties of neurons at the molecular, membrane and cellular levels, a great degree of cooperativity is present between neuronal elements, and that it is therefore the collective properties of groups of neurons inside the structure that must be unveiled. This has at least two important consequences. The first is that it may be difficult to use general mathematical “models” to derive precise existing properties. Indeed, there does not exist such a thing as a “general” CNS structure, but on the contrary very differently organized morphological structures. The second is that if one takes into account the ultimate purpose of the CNS, which is certainly related to an “adequate behaviour” of the animal in its environment, then the temporal constraints imposed on the systems must surely be stressed.

H. Axelrad, C. Bernard, B. Giraud, M. E. Marc

40. Cerebellum Models: an Interpretation of Some Features

Many authors have studied the functional capacity of the cerebellar cortex and its implications for the behaviour of the organism. A basic observation is the extremely regular organization of cells in the cortex, with their distribution within two layers and the possible functional unit around the Purkinje cell. The first theoretical model following Eccles’s experimental works [1] was that of D. Marr [2]. Indeed, Marr used a possible synaptic modification between the Purkinje cell and parallel fibres as his fundamental hypothesis, but also numerous imaginative suppositions which permitted him to conceive a functional and qualitative model. The regular geometry and possible analogies with electronic devices and computing organs suggest simple ideas about the functioning of a cerebellar cortex unit. After Marr, J.S. Albus [3] created a quantitative model of cerebellar cortex based on similar properties — but only three — so it was a computer approach rather than a physiological explanation of cerebellar function. That is the reason why he called his model CMAC, the Cerebellum Model Arithmetic Computer. More recent models have been considered by S. Grossberg [4] and J.C.C. Boylls [5].

G. Chauvet

Backmatter
