
2014 | Book

Neural Fields

Theory and Applications

Edited by: Stephen Coombes, Peter beim Graben, Roland Potthast, James Wright

Publisher: Springer Berlin Heidelberg


About this Book

Neural field theory has a long-standing tradition in the mathematical and computational neurosciences. Beginning almost 50 years ago with seminal work by Griffith and culminating in the 1970s with the models of Wilson and Cowan, Nunez and Amari, this important research area experienced a renaissance during the 1990s through the groups of Ermentrout, Robinson, Bressloff, Wright and Haken. Since then, much progress has been made both in the development of mathematical and numerical techniques and in physiological refinement and understanding.

In contrast to large-scale neural network models described by huge connectivity matrices that are computationally expensive in numerical simulations, neural field models described by connectivity kernels allow for analytical treatment by means of methods from functional analysis. Thus, a number of rigorous results on the existence of bump and wave solutions, and on inverse kernel construction problems, are now available. Moreover, neural fields provide an important interface for coupling neural activity to experimentally observable data, such as the electroencephalogram (EEG) or functional magnetic resonance imaging (fMRI). Finally, neural fields over rather abstract feature spaces, also called dynamic fields, have found successful applications in the cognitive sciences and in robotics.

Until now, research results in neural field theory have been scattered across a number of distinct journals from mathematics, computational neuroscience, biophysics, cognitive science and other disciplines, with no comprehensive collection of results or reviews available. This book aims to fill that gap, gathering contributions from leading scientists in the field, among them two of the founding fathers of neural field theory: Shun-ichi Amari and Jack Cowan.

Table of Contents

Frontmatter
Chapter 1. Tutorial on Neural Field Theory
Abstract
The tools of dynamical systems theory are having an increasing impact on our understanding of patterns of neural activity. In this tutorial chapter we describe how to build tractable tissue level models that maintain a strong link with biophysical reality. These models typically take the form of nonlinear integro-differential equations. Their non-local nature has led to the development of a set of analytical and numerical tools for the study of spatiotemporal patterns, based around natural extensions of those used for local differential equation models. We present an overview of these techniques, covering Turing instability analysis, amplitude equations, and travelling waves. Finally, we address inverse problems for neural fields to train synaptic weight kernels from prescribed field dynamics.
Stephen Coombes, Peter beim Graben, Roland Potthast
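As a point of reference for this tutorial and the chapters that follow, a representative instance of the nonlinear integro-differential equations mentioned above is the Amari-type neural field (one standard form among the several variants treated in the book):

```latex
\frac{\partial u(\mathbf{x},t)}{\partial t} = -u(\mathbf{x},t)
  + \int_{\Omega} w(\mathbf{x}-\mathbf{y})\, f\bigl(u(\mathbf{y},t)\bigr)\,\mathrm{d}\mathbf{y}
  + I(\mathbf{x},t)
```

Here u is the average synaptic activity at position x of the tissue Ω, w is the nonlocal synaptic connectivity kernel, f is a nonlinear (typically sigmoidal) firing rate function, and I is an external input.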

Theory of Neural Fields

Frontmatter
Chapter 2. A Personal Account of the Development of the Field Theory of Large-Scale Brain Activity from 1945 Onward
Abstract
In this chapter I give my personal perspective on the development of a field theory of large-scale brain activity. I review early work by Pitts, Wiener, Beurle and others, and give an account of the development of the mean-field Wilson-Cowan equations. I then outline my reasons for trying to develop a stochastic version of these equations, and recall the steps leading to the discovery that one can use field theory and the van Kampen system-size expansion of a neural master equation to obtain stochastic Wilson-Cowan equations. I then describe how stochastic neural field theory led to the discovery that there is a directed percolation phase transition in large-scale brain activity, and how the stochastic Wilson-Cowan equations can provide insight into many aspects of large-scale brain activity, such as the generation of fluctuation-driven avalanches and oscillations.
Jack Cowan
Chapter 3. Heaviside World: Excitation and Self-Organization of Neural Fields
Abstract
Mathematical treatments of the dynamics of neural fields become much simpler when the Heaviside function is used as an activation function. This is because the dynamics of an excited or active region reduce to the dynamics of the boundary. We call this regime the Heaviside world. Here, we visit the Heaviside world and briefly review bump dynamics in the 1D, 1D two-layer, and 2D cases. We further review the dynamics of forming topological maps by self-organization. The Heaviside world is useful for studying the learning or self-organization equation of receptive fields. The stability analysis shows the formation of a continuous map or the emergence of a block structure responsible for columnar microstructures. The stability of the Kohonen map is also discussed.
Shun-ichi Amari
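To make the Heaviside world concrete: taking f(u) = H(u − θ) in the standard scalar Amari model reduces questions about bumps to conditions on the kernel at the boundary of the active region. For a stationary 1D bump of width a one obtains the classical existence condition (a sketch, assuming an even kernel w and zero resting input):

```latex
u(x) = \int_0^a w(x-y)\,\mathrm{d}y, \qquad
W(a) \equiv \int_0^a w(y)\,\mathrm{d}y = \theta
```

The bump is stable when w(a) < 0, i.e. when the boundary points of the excited region sit in the inhibitory range of the kernel.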
Chapter 4. Spatiotemporal Pattern Formation in Neural Fields with Linear Adaptation
Abstract
We study spatiotemporal patterns of activity that emerge in neural fields in the presence of linear adaptation. Using an amplitude equation approach, we show that bifurcations from the homogeneous rest state can lead to a wide variety of stationary and propagating patterns on one- and two-dimensional periodic domains, particularly in the case of lateral-inhibitory synaptic weights. Other typical solutions are stationary and traveling localized activity bumps; however, we observe exotic time-periodic localized patterns as well. Using linear stability analysis that perturbs about stationary and traveling bump solutions, we study conditions for the activity to lock to a stationary or traveling external input on both periodic and infinite one-dimensional spatial domains. Hopf and saddle-node bifurcations can signify the boundary beyond which stationary or traveling bumps fail to lock to external inputs. Just beyond a Hopf bifurcation point, activity bumps often begin to oscillate, becoming breather or slosher solutions.
G. Bard Ermentrout, Stefanos E. Folias, Zachary P. Kilpatrick
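One common form of the model class studied here augments the scalar field with a linear adaptation (recovery) variable v, in the style of the Pinto-Ermentrout equations (a representative form; the chapter covers one- and two-dimensional versions):

```latex
\frac{\partial u}{\partial t} = -u + \int_{\Omega} w(x-y)\, f\bigl(u(y,t)\bigr)\,\mathrm{d}y - \beta v + I(x,t),
\qquad
\tau \frac{\partial v}{\partial t} = u - v
```

Here β sets the adaptation strength and τ its (slow) time scale; the interplay of these parameters with a lateral-inhibitory kernel w underlies the travelling, breathing and sloshing solutions described above.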
Chapter 5. PDE Methods for Two-Dimensional Neural Fields
Abstract
We consider neural field models in both one and two spatial dimensions and show how for some coupling functions they can be transformed into equivalent partial differential equations (PDEs). In one dimension we find snaking families of spatially-localised solutions, very similar to those found in reversible fourth-order ordinary differential equations. In two dimensions we analyse spatially-localised bump and ring solutions and show how they can be unstable with respect to perturbations which break rotational symmetry, thus leading to the formation of complex patterns. Finally, we consider spiral waves in a system with purely positive coupling and a second slow variable. These waves are solutions of a PDE in two spatial dimensions, and by numerically following these solutions as parameters are varied, we can determine regions of parameter space in which stable spiral waves exist.
Carlo R. Laing
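The transformation to PDEs relies on coupling functions whose Fourier transforms are rational functions of k²; a minimal one-dimensional instance (an illustrative assumption, the chapter treats more general kernels): for w(x) = (b/2)e^{−b|x|} one has ŵ(k) = b²/(b² + k²), so the nonlocal term ψ = w ∗ f(u) satisfies

```latex
\left(b^2 - \frac{\partial^2}{\partial x^2}\right)\psi(x,t) = b^2\, f\bigl(u(x,t)\bigr)
```

which replaces the convolution by a local second-order PDE to which standard numerical continuation tools apply.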
Chapter 6. Numerical Simulation Scheme of One- and Two-Dimensional Neural Fields Involving Space-Dependent Delays
Abstract
Neural fields describe the spatiotemporal dynamics of neural populations with spatial axonal connections between neurons. These connections are delayed by the finite axonal transmission speed along the fibers, inducing a distance-dependent delay between any two spatial locations. Simulating one-dimensional neural fields is demanding but can be performed in reasonable run time with standard numerical techniques. Two-dimensional neural fields, however, require a more sophisticated numerical technique to compute solutions in reasonable time. The work presented here describes a recently developed numerical iteration scheme that speeds up standard implementations by a factor of 10–20. Applications to some pattern-forming systems illustrate the power of the technique.
Axel Hutt, Nicolas Rougier
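The chapter's iteration scheme itself is built around the space-dependent delay structure; as generic background on why the convolutional form of the field equation admits fast implementations at all, the following is a minimal, delay-free Euler/FFT simulation of a 1D field on a periodic domain (kernel and firing-rate parameters are illustrative assumptions, not those of the chapter):

```python
# Minimal Euler/FFT simulation of the delay-free 1D neural field
#   du/dt = -u + (w * f(u))(x)
# on a periodic domain; the FFT evaluates the spatial convolution in
# O(N log N) per step instead of O(N^2).
import numpy as np

L, N = 20.0, 512                       # half-length of domain, grid points
x = np.linspace(-L, L, N, endpoint=False)
dx = x[1] - x[0]

def f(u, beta=5.0, theta=0.1):
    """Sigmoidal firing rate (assumed parameters)."""
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

# Mexican-hat kernel: short-range excitation, longer-range inhibition
w = np.exp(-x**2) - 0.5 * np.exp(-x**2 / 4.0)
w_hat = np.fft.fft(np.fft.ifftshift(w)) * dx   # kernel transform, fixed once

u = 0.5 * np.exp(-x**2)                # localized initial condition
dt, steps = 0.05, 2000
for _ in range(steps):
    conv = np.fft.ifft(w_hat * np.fft.fft(f(u))).real   # w * f(u) via FFT
    u += dt * (-u + conv)              # forward-Euler update
```

With transmission delays, u(y, t − |x−y|/v) enters the integral and each update must draw on the field's history, which is exactly where the chapter's specialised scheme earns its factor 10–20 speed-up.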
Chapter 7. Spots: Breathing, Drifting and Scattering in a Neural Field Model
Abstract
Two-dimensional neural field models with short-range excitation and long-range inhibition can exhibit localised solutions in the form of spots. Moreover, with the inclusion of a spike frequency adaptation current, these models can also support breathers and travelling spots. In this chapter we show how to analyse the properties of spots in a neural field model with linear spike frequency adaptation. For a Heaviside firing rate function we use an interface description to derive a set of four nonlinear ordinary differential equations to describe the width of a spot, and show how a stationary solution can undergo a Hopf instability leading to a branch of periodic solutions (breathers). For smooth firing rate functions we develop numerical codes for the evolution of the full space-time model and perform a numerical bifurcation analysis of radially symmetric solutions. An amplitude equation for analysing breathing behaviour in the vicinity of the bifurcation point is determined. The condition for a drift instability is also derived, and a centre manifold reduction is used to describe a slowly moving spot in the vicinity of this bifurcation. This analysis is extended to cover the case of two slowly moving spots, and establishes that these will reflect from each other in a head-on collision.
Stephen Coombes, Helmut Schmidt, Daniele Avitabile
Chapter 8. Heterogeneous Connectivity in Neural Fields: A Stochastic Approach
Abstract
One of the traditional approximations applied in Amari-type neural field models is that of a homogeneous isotropic connection function. Incorporation of heterogeneous connectivity into this type of model has taken many forms, from the addition of a periodic component to a crystal-like inhomogeneous structure. In contrast, here we consider stochastic inhomogeneous connections, a scheme which necessitates a numerical approach. We consider both local inhomogeneity, a local stochastic variation of the strength of the input to different positions in the medium, and long-range inhomogeneity, the addition of connections between distant points. This leads to changes in the well-known solutions such as travelling fronts and pulses, which (where these solutions still exist) now move with fluctuating speed and shape, and it also gives rise to a new type of behaviour: persistent fluctuations in activity. We show that persistent activity can arise from different mechanisms depending on the connection model, and that there is an increase in coherence between fluctuations at distant regions as long-range connections are introduced.
Chris A. Brackley, Matthew S. Turner
Chapter 9. Stochastic Neural Field Theory
Abstract
We survey recent work on extending neural field theory to take into account synaptic noise. We begin by showing how mean field theory can be used to represent the macroscopic dynamics of a local population of N spiking neurons, driven by Poisson inputs, as a rate equation in the thermodynamic limit N → ∞. Finite-size effects are then used to motivate the construction of stochastic rate-based models that in the continuum limit reduce to stochastic neural fields. The remainder of the chapter illustrates how methods from the analysis of stochastic partial differential equations can be adapted to analyze the dynamics of stochastic neural fields. First, we consider the effects of extrinsic noise on front propagation in an excitatory neural field. Using a separation of time scales, it is shown how the fluctuating front can be described in terms of a diffusive-like displacement (wandering) of the front from its uniformly translating position on long time scales, and fluctuations in the front profile around its instantaneous position on short time scales. Second, we investigate rare noise-driven transitions in a neural field with an absorbing state, which signals the extinction of all activity. In this case, the most probable path to extinction can be obtained by solving the classical equations of motion that dominate a path integral representation of the stochastic neural field in the weak noise limit. These equations take the form of nonlocal Hamilton equations in an infinite-dimensional phase space.
Paul C. Bressloff
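A representative Langevin form of the stochastic neural fields analysed in the chapter (notation chosen here for illustration) is

```latex
\mathrm{d}u(x,t) = \Bigl[-u(x,t) + \int_{\Omega} w(x-y)\, f\bigl(u(y,t)\bigr)\,\mathrm{d}y\Bigr]\mathrm{d}t
 + \epsilon^{1/2}\, g\bigl(u(x,t)\bigr)\,\mathrm{d}W(x,t)
```

where W(x,t) is a spatiotemporal Wiener process, g determines how the noise couples to the local activity (multiplicative in general), and ε is the weak-noise parameter in which the path-integral analysis of rare transitions is carried out.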
Chapter 10. On the Electrodynamics of Neural Networks
Abstract
We present a microscopic approach for the coupling of cortical activity, as resulting from proper dipole currents of pyramidal neurons, to the electromagnetic field in extracellular fluid in the presence of diffusion and Ohmic conduction. Starting from a full-fledged three-compartment model of a single pyramidal neuron, including shunting and dendritic propagation, we derive an observation model for dendritic dipole currents in extracellular space and thereby for the dendritic field potential that contributes to the local field potential of a neural population. Under reasonable simplifications, we then derive a leaky integrate-and-fire model for the dynamics of a neural network, which facilitates comparison with existing neural network and observation models. In particular, we compare our results with a related model by means of numerical simulations. Performing a continuum limit, neural activity becomes represented by a neural field equation, while an observation model for electric field potentials is obtained from the interaction of cortical dipole currents with charge density in non-resistive extracellular space as described by the Nernst-Planck equation. Our work consistently satisfies the widespread dipole assumption discussed in the neuroscientific literature.
Peter beim Graben, Serafim Rodrigues
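For reference, the Nernst-Planck description of ionic transport invoked here combines Fickian diffusion with Ohmic drift in the extracellular potential φ; for each ion species i with concentration c_i, valence z_i and diffusion coefficient D_i:

```latex
\frac{\partial c_i}{\partial t} = \nabla \cdot \left( D_i \nabla c_i
 + \frac{D_i z_i e}{k_B T}\, c_i\, \nabla \phi \right)
```

It is through this coupling of charge density to the potential that the dendritic dipole currents contribute to the measurable field potentials.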

Applications of Neural Fields

Frontmatter
Chapter 11. Universal Neural Field Computation
Abstract
Turing machines and Gödel numbers are important pillars of the theory of computation. Thus, any computational architecture needs to show how it could relate to Turing machines and how stable implementations of Turing computation are possible. In this chapter, we implement universal Turing computation in a neural field environment. To this end, we employ the canonical symbologram representation of a Turing machine obtained from a Gödel encoding of its symbolic repertoire and generalized shifts. The resulting nonlinear dynamical automaton (NDA) is a piecewise affine-linear map acting on the unit square that is partitioned into rectangular domains. Instead of looking at point dynamics in phase space, we then consider functional dynamics of probability distribution functions (p.d.f.s) over phase space. This is generally described by a Frobenius-Perron integral transformation that can be regarded as a neural field equation over the unit square as feature space of a Dynamic Field Theory (DFT). Solving the Frobenius-Perron equation shows that uniform p.d.f.s with rectangular support are again mapped onto uniform p.d.f.s with rectangular support. We call the resulting representation a dynamic field automaton.
Peter beim Graben, Roland Potthast
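The Gödel encoding underlying the symbologram maps a one-sided symbol sequence onto a number in the unit interval; splitting a Turing machine tape at the head position then yields a point in the unit square. A minimal sketch (alphabet and tape contents are hypothetical examples):

```python
# Goedel encoding of a one-sided sequence s_1 s_2 s_3 ... over an
# N-symbol alphabet onto [0, 1):  x = sum_k index(s_k) * N**(-k)
def goedel_encode(sequence, alphabet):
    N = len(alphabet)
    index = {a: i for i, a in enumerate(alphabet)}
    return sum(index[s] * N ** -(k + 1) for k, s in enumerate(sequence))

# Splitting a tape at the head gives two one-sided sequences and hence a
# point (x, y) in the unit square -- the "symbologram" representation.
left, right = "ba", "abb"                 # hypothetical tape halves
x = goedel_encode(reversed(left), "ab")   # left half is read toward the head
y = goedel_encode(right, "ab")
print(x, y)                               # here: (0.25, 0.375)
```

The generalized shift of the Turing machine then acts on such points as the piecewise affine-linear map on the unit square described above.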
Chapter 12. A Neural Approach to Cognition Based on Dynamic Field Theory
Abstract
How may cognitive function emerge from the different dynamic properties, regimes, and solutions of neural field equations? To date, this question has received much less attention than the purely mathematical analysis of neural fields. Dynamic Field Theory (DFT) aims to bridge the ensuing gap by bringing together neural field dynamics with principles of neural representation and fundamentals of cognition. This chapter provides a review of each of these aspects. We show how dynamic fields can be viewed as mathematical descriptions of activation patterns in neural populations that arise due to sensory and motor events; how field dynamics in DFT give rise to a set of stable states and associated instabilities that provide the elementary building blocks for cognitive processes; and how these properties can be brought to bear in the construction of neurally grounded process models of cognition. We conclude that DFT provides a valuable framework for linking mathematical descriptions of neural activity to actual sensory, motor, and cognitive functionality and behavioral signatures thereof.
Jonas Lins, Gregor Schöner
Chapter 13. A Dynamic Neural Field Approach to Natural and Efficient Human-Robot Collaboration
Abstract
A major challenge in modern robotics is the design of autonomous robots that are able to cooperate with people in their daily tasks in a human-like way. We address the challenge of natural human-robot interactions by using the theoretical framework of Dynamic Neural Fields (DNFs) to develop processing architectures that are based on neuro-cognitive mechanisms supporting human joint action. By explaining the emergence of self-stabilized activity in neuronal populations, Dynamic Field Theory provides a systematic way to endow a robot with crucial cognitive functions such as working memory, prediction and decision making. The DNF architecture for joint action is organized as a large scale network of reciprocally connected neuronal populations that encode in their firing patterns specific motor behaviors, action goals, contextual cues and shared task knowledge. Ultimately, it implements a context-dependent mapping from observed actions of the human onto adequate complementary behaviors that takes into account the inferred goal of the co-actor. We present results of flexible and fluent human-robot cooperation in a task in which the team has to assemble a toy object from its components.
Wolfram Erlhagen, Estela Bicho
Chapter 14. Neural Field Modelling of the Electroencephalogram: Physiological Insights and Practical Applications
Abstract
The aim of this chapter is to outline a mean field approach to modelling brain activity that has been particularly successful in articulating the genesis of rhythmic electroencephalographic activity in the mammalian brain. In addition to providing a physiologically consistent explanation for the genesis of the alpha rhythm, as well as expressing an array of complex dynamical phenomena that may be of relevance to understanding cognition, the model is also capable of accounting for many of the macroscopic electroencephalographic effects associated with anaesthetic action, a feature often missing in similar formulations. The chapter concludes with an example of how the physiological insights afforded by this mean field modelling approach can be translated into improved methods for the clinical monitoring of depth of anaesthesia.
David T. J. Liley
Chapter 15. Equilibrium and Nonequilibrium Phase Transitions in a Continuum Model of an Anesthetized Cortex
Abstract
In this chapter we investigate a range of dynamic behaviors accessible to a continuum model of the cerebral cortex placed close to the anesthetic phase transition. If the anesthetic transition from the high-firing (conscious) to the low-firing (comatose) state can be modeled as a jump between two equilibrium states of the cortex, then we can draw an analogy with the vapor-to-liquid phase transition of the van der Waals gas of classical thermodynamics. In this analogy, specific volume (inverse density) of the gas maps to cortical activity, with pressure and temperature being the analogs of anesthetic concentration and subcortical excitation. It is well known that at the thermodynamic critical point, large fluctuations in specific volume are observed; we find analogous critically-slowed fluctuations in cortical activity at its critical point. Unlike the van der Waals system, the cortical model can also exhibit nonequilibrium phase transitions in which the homogeneous equilibrium can destabilize in favor of slow global oscillations (Hopf temporal instability), stationary structures (Turing spatial instability), and chaotic spatiotemporal activity patterns (Hopf–Turing interactions). We comment on possible physiological and pathological interpretations for these dynamics. In particular, the turbulent state may correspond to the cortical slow oscillation between “up” and “down” states observed in nonREM sleep and clinical anesthesia.
D. Alistair Steyn-Ross, Moira L. Steyn-Ross, Jamie W. Sleigh
Chapter 16. Large Scale Brain Networks of Neural Fields
Abstract
Neural fields describe neural activations continuous in space and time. Neurons at a particular location in the brain receive input from their local neighbors and from far distant neuronal populations. Both types of connectivity, local and global, contribute approximately equally to the complete connectivity, but differ qualitatively in their connection topology. The local connectivity is characterized by a connection density that decreases monotonically with distance, typically independent of the location in the brain, whereas the global connectivity is characterized by sparse long-range connections (the connectome) between brain areas. In this chapter I discuss some developments of local-global descriptions of neural fields, culminating in the international neuroscience project The Virtual Brain.
Viktor Jirsa
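Schematically, the local-global decomposition discussed here combines a homogeneous, translation-invariant local kernel with discrete long-range connectome entries, each with its own transmission delay (a sketch with assumed notation):

```latex
\frac{\partial u(x,t)}{\partial t} = -u(x,t)
 + \int_{\Gamma} w_{\mathrm{loc}}(x-y)\, f\!\bigl(u(y,\, t - |x-y|/v)\bigr)\,\mathrm{d}y
 + \sum_{j} c(x, y_j)\, f\!\bigl(u(y_j,\, t - \tau_{xj})\bigr)
```

where v is the local propagation speed, c(x, y_j) the heterogeneous long-range coupling strengths taken from the connectome, and τ_{xj} the corresponding inter-areal delays.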
Chapter 17. Neural Fields, Masses and Bayesian Modelling
Abstract
This chapter considers the relationship between neural field and mass models and their application to modelling empirical data. Specifically, we consider neural masses as a special case of neural fields, when conduction times tend to zero and focus on two exemplar models of cortical microcircuitry; namely, the Jansen-Rit and the canonical microcircuit model. Both models incorporate parameters pertaining to important neurobiological attributes, such as synaptic rate constants and the extent of lateral connections. We describe these models and show how Bayesian inference can be used to assess the validity of their field and mass variants, given empirical data. Interestingly, we find greater evidence for neural field variants in analyses of LFP data but fail to find more evidence for such variants, relative to their neural mass counterparts, in MEG (virtual electrode) data. The key distinction between these data is that LFP data are sensitive to a wide range of spatial frequencies and the temporal fluctuations that these frequencies contain. In contrast, the lead fields, inherent in non-invasive electromagnetic recordings, are necessarily broader and suppress temporal dynamics that are expressed in high spatial frequencies. We present this as an example of how neuronal field and mass models (hypotheses) can be compared formally.
Dimitris A. Pinotsis, Karl J. Friston
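Both exemplar models build on the standard second-order synaptic dynamics of neural mass modelling: a postsynaptic potential y driven by an input firing rate x obeys (a representative form, as in the Jansen-Rit model)

```latex
\ddot{y}(t) + 2a\,\dot{y}(t) + a^2\, y(t) = A\, a\, x(t)
```

equivalent to convolving x with the alpha-function kernel h(t) = A a t e^{−at}, where A scales the maximal postsynaptic amplitude and a is the synaptic rate constant. The field variants reintroduce finite conduction times on top of this temporal structure.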
Chapter 18. Neural Field Dynamics and the Evolution of the Cerebral Cortex
Abstract
We describe principles for cortical development which may apply both to the evolution of species and to the antenatal development of the cortex of individuals. Our account depends upon the occurrence of synchronous oscillation in the neural field during embryonic development, and the assumption that synchrony is linked to cell survival during apoptosis. This leads to selection of arrays of neurons with ultra-small-world characteristics. The "degree of separation" power law is supplied by the combination of neuron sub-populations with differing exponential axonal tree distributions; consequently, in the visual cortex, connections emerge in anatomically realistic patterns, with an antenatal arrangement that projects signals from the surrounding cortex onto each macrocolumn, in a form analogous to the projection of a Euclidean plane onto a Möbius strip. Simulations of signal flow explain cortical responses to moving lines as functions of stimulus velocity, length and orientation. With the introduction of direct visual inputs, under the operation of Hebbian learning, development of mature selective response "tuning" to stimulus "features" then takes place, overwriting the earlier antenatal configuration. If similar developmental principles apply to inter-areal interactions in the developing cortex, a general principle for the evolution of increasingly complicated sensory-motor sequences, at both species-evolution and individual time scales, is implicit.
James J. Wright, Paul D. Bourke
Backmatter
Metadata
Title
Neural Fields
Edited by
Stephen Coombes
Peter beim Graben
Roland Potthast
James Wright
Copyright Year
2014
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-54593-1
Print ISBN
978-3-642-54592-4
DOI
https://doi.org/10.1007/978-3-642-54593-1
