
2013 | Book

Computational Neuroscience

A First Course

Author: Hanspeter A Mallot

Publisher: Springer International Publishing

Book series: Springer Series in Bio-/Neuroinformatics


About this book

Computational Neuroscience - A First Course provides an essential introduction to computational neuroscience and equips readers with a fundamental understanding of modeling the nervous system at the membrane, cellular, and network levels. The book, which grew out of a lecture series taught regularly for more than ten years to graduate students in neuroscience with backgrounds in biology, psychology, and medicine, takes its readers on a journey through three fundamental domains of computational neuroscience: membrane biophysics, systems theory, and artificial neural networks. The required mathematical concepts are kept as intuitive and simple as possible throughout, making the book fully accessible to readers who are less familiar with mathematics. Overall, Computational Neuroscience - A First Course is an essential reference for all neuroscientists who use computational methods in their daily work, as well as for any theoretical scientist approaching the field of computational neuroscience.

Table of contents

Frontmatter
Excitable Membranes and Neural Conduction
Abstract
Neural information processing is based on three cellular mechanisms: the excitability of neural membranes, the spatio-temporal integration of activities on dendritic trees, and synaptic transmission. The basic element of neural activity is the action potential, a binary event that is either present or absent, much like the electrical signals in digital circuit technology. In this chapter, we discuss the formation of the action potential as a result of the dynamics of electrical and chemical processes in the neural membrane. In order to infer the closed-loop dynamics from the individual processes of voltage-sensitive channels and the resulting resistive and capacitive currents, a mathematical theory is needed, known as the Hodgkin-Huxley theory. The propagation of neural signals along axons and dendrites is based on the cable equation, which is also discussed in this chapter. The mathematical background is mostly from the theory of dynamical systems.
Hanspeter A Mallot
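As a numerical illustration of the membrane dynamics sketched in this abstract, the following Python snippet integrates the Hodgkin-Huxley equations with a simple forward Euler scheme. It is a minimal sketch, not code from the book; the rate functions and parameter values are the standard squid-axon set from Hodgkin and Huxley's 1952 work, and the step current is an arbitrary choice.

```python
import numpy as np

# Standard squid-axon parameters (Hodgkin & Huxley, 1952); illustrative values.
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3       # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4                # reversal potentials, mV

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                                 # time step and duration, ms
steps = int(T / dt)
V = -65.0                                          # resting potential, mV
m = alpha_m(V) / (alpha_m(V) + beta_m(V))          # gating variables at rest
h = alpha_h(V) / (alpha_h(V) + beta_h(V))
n = alpha_n(V) / (alpha_n(V) + beta_n(V))

trace = np.empty(steps)
for i in range(steps):
    I_ext = 10.0 if 5.0 <= i * dt <= 45.0 else 0.0     # step current, uA/cm^2
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V += dt * (I_ext - I_ion) / C_m                    # membrane equation
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)   # channel kinetics
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    trace[i] = V

print(f"peak membrane potential: {trace.max():.1f} mV")  # spikes overshoot ~+40 mV
```

Under the sustained step current, the model fires repetitively; the closed loop of voltage-dependent channel kinetics and capacitive charging produces the stereotyped action potential described above.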
Receptive Fields and the Specificity of Neuronal Firing
Abstract
Neuronal firing does not occur at random. In the sensory parts of the brain, firing is triggered by properties of various input stimuli, such as the position of a light stimulus in the visual field, the pitch of a tone, or the appearance of a familiar face. In "associative" areas of the brain, specificities for more abstract concepts have been found, including cells representing place (e.g., in rodent hippocampus) or numerosity (e.g., in primate prefrontal cortex). In the motor parts of the brain, neurons have been found that fire preferentially prior to pointing movements of the arm in a certain direction; that is to say, these neurons are specific for particular motor actions. In the sensory domain, specificities are quantified in terms of the receptive field, which can be defined as the totality of all stimuli driving a given neuron. The receptive field is measured by correlating the activity of a single neuron with externally measurable parameters of the stimulus. This approach is known as reverse correlation, since stimuli always precede the neuronal activity. The concept of correlation between neuronal activity and external measurables, however, generalizes easily to the motor system, leading to the concept of the motor field of a neuron. In this sense, visual receptive fields can be considered an example of neuronal specificity at large. In this chapter, we discuss the basic theory of visual receptive fields, which can be extended to similar concepts in other sensory, motor, or associative areas. The theory is closely related to linear systems theory applied to spatio-temporal signals, i.e., image sequences. Mathematically, it rests on integral equations of the convolution type, which will be introduced in due course.
Hanspeter A Mallot
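To make the reverse-correlation idea concrete, here is a minimal Python sketch: a hypothetical linear-nonlinear model neuron is probed with white-noise stimuli, and the spike-triggered average recovers its receptive field. The Gabor-like filter shape, the sigmoid nonlinearity, and all numbers are illustrative assumptions, not values from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D model neuron: linear filtering followed by a static
# nonlinearity and Bernoulli spiking. The receptive field is "unknown"
# to the experimenter and estimated by spike-triggered averaging.
n_pix, n_trials = 40, 20000
x = np.arange(n_pix)
rf_true = np.exp(-((x - 20) ** 2) / 18.0) * np.cos(0.6 * (x - 20))  # Gabor-like

stimuli = rng.standard_normal((n_trials, n_pix))       # white-noise frames
drive = stimuli @ rf_true                              # linear stage
p_spike = 1.0 / (1.0 + np.exp(-(drive - 2.0)))         # static nonlinearity
spikes = rng.random(n_trials) < p_spike                # stochastic spiking

sta = stimuli[spikes].mean(axis=0)                     # spike-triggered average
corr = np.corrcoef(sta, rf_true)[0, 1]
print(f"correlation between STA and true receptive field: {corr:.2f}")
```

Because the stimulus is Gaussian white noise, averaging the frames that precede spikes yields a scaled copy of the linear receptive field, exactly the logic of reverse correlation described above.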
Fourier Analysis for Neuroscientists
Abstract
In this chapter, we introduce a piece of mathematical theory that is of importance in many different fields of theoretical neurobiology and, indeed, for scientific computing in general. It is included here not so much because it is a genuine part of computational neuroscience, but because computational and systems neuroscience make extensive use of it. It is closely related to systems theory as introduced in the previous chapter, but is also useful in the analysis of local field potentials, EEGs, or other brain-imaging data, in the generation of psychophysical stimuli in computational vision, and, of course, in analyzing the auditory system. After some instructive examples, the major results of Fourier theory will be addressed in two steps:
  • Sinusoidal inputs to linear, translation-invariant systems yield sinusoidal outputs, differing from the input only in amplitude and phase, but not in frequency or overall shape. Sinusoidals are therefore said to be the "eigen-functions" of linear shift-invariant systems. Responses to sinusoidal inputs, or combinations thereof, are thus reduced to simple multiplications and phase shifts. This is the mathematical reason for the prominent role of sinusoidals in scientific computing.
  • The second idea of this chapter is that any continuous function (and also some non-continuous functions) can be represented as a linear combination of sine and cosine functions of various frequencies. As an alternative to sine and cosine functions, one may also use sinusoidals with a phase value for each frequency, or complex exponentials from the theory of complex numbers.
Both ideas combine in the convolution theorem, which states that the convolution of two functions can also be expressed as the simple product of their respective Fourier transforms. This is also the reason why linear shift-invariant systems are often considered "filters" that remove some frequency components from a signal and pass others.
Hanspeter A Mallot
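Both points above can be checked numerically. The following Python sketch first verifies the convolution theorem (convolution in the signal domain equals pointwise multiplication of the transforms) and then the eigenfunction property (a sinusoid filtered by a linear shift-invariant system remains a sinusoid of the same frequency, with changed amplitude and phase). The test signals and the exponential filter kernel are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# (1) Convolution theorem: convolving two signals equals multiplying
# their Fourier transforms (zero-padded to avoid circular wrap-around).
f = rng.standard_normal(128)                    # arbitrary test signal
g = np.exp(-np.arange(128) / 8.0)               # exponential low-pass kernel
n = len(f) + len(g) - 1                         # full linear-convolution length
direct = np.convolve(f, g)                      # convolution in the signal domain
via_fft = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)
print(np.allclose(direct, via_fft))             # True

# (2) Eigenfunction property: a sinusoid passed through a linear
# shift-invariant system (here, circular convolution with kernel h)
# stays a sinusoid of the same frequency; only amplitude and phase change.
N = 256
t = np.arange(N)
s = np.cos(2 * np.pi * 5 * t / N)               # input at frequency bin 5
h = np.exp(-t / 4.0)                            # system impulse response
y = np.fft.irfft(np.fft.rfft(s) * np.fft.rfft(h), N)   # circular filtering
H5 = np.fft.rfft(h)[5]                          # transfer function at bin 5
expected = np.abs(H5) * np.cos(2 * np.pi * 5 * t / N + np.angle(H5))
print(np.allclose(y, expected))                 # True
```

The single complex number H5 summarizes everything the filter does to that frequency, which is the "multiplications and phase shifts" reduction named in the first bullet point.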
Artificial Neural Networks
Abstract
In models of large networks of neurons, the behavior of individual neurons is treated much more simply than in the Hodgkin-Huxley theory presented in Chapter 1: activity is usually represented by a binary variable (1 = firing; 0 = silent), and time is modeled by a discrete sequence of time steps running in synchrony for all neurons in the net. Besides activity, the most interesting state variable of such networks is synaptic strength, or weight, which determines the influence of one neuron on its neighbors in the network. Synaptic weights may change according to so-called "learning rules", which make it possible to find network connectivities optimized for the performance of various tasks. The networks are thus characterized by two state variables: a vector of neuron activities per time step, and a matrix of neuron-to-neuron transmission weights describing the connectivity, which also depends on time. In this chapter, we will discuss the basic approach and a number of important network architectures for tasks such as pattern recognition, learning of input-output associations, or the self-organization of representations that are optimal in a certain, well-defined sense. The mathematical treatment is largely based on linear algebra (vectors and matrices) and, as in the other chapters, will be explained "on the fly".
Hanspeter A Mallot
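A minimal Python sketch of the two state variables described above, using a Hopfield-style associative memory as one concrete architecture. The choice of this particular architecture, and the use of ±1 rather than 0/1 activities, are assumptions made here for brevity: binary neurons updated synchronously in discrete time, with a weight matrix set by a Hebbian outer-product learning rule.

```python
import numpy as np

rng = np.random.default_rng(2)

# State variable 1: a binary activity vector, updated in discrete time.
# State variable 2: a weight matrix, set once by a Hebbian learning rule.
n_neurons, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))  # stored patterns

W = patterns.T @ patterns / n_neurons      # Hebbian outer-product rule
np.fill_diagonal(W, 0.0)                   # no self-connections

state = patterns[0].copy()
flip = rng.choice(n_neurons, 15, replace=False)
state[flip] *= -1                          # corrupt 15 of 100 bits

for _ in range(20):                        # synchronous discrete-time updates
    state = np.where(W @ state >= 0, 1, -1)

print("bits recovered:", int((state == patterns[0]).sum()), "/", n_neurons)
```

With only five stored patterns in a hundred neurons, the corrupted input typically relaxes back to the stored pattern, illustrating pattern recognition as a network-level computation emerging from the learned weight matrix.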
Coding and Representation
Abstract
Neuronal activity generally does not in itself contain information about the stimulus. Spikes elicited by different stimuli look much the same, but generally occur in different cells. The information is contained in the specificity, or tuning curve, of the cell generating the spike. This may be considered a modern account of the notion of "specific sense energies" formulated by Johannes Müller as early as 1826. In effect, any stimulation of the eye (or visual pathways) is perceived as visual, and any stimulation of the ear (or auditory pathways) as auditory. The information encoded by each neuron is described by its tuning curve. Often, neurons are tuned simultaneously to different parameters, such as position in visual space, edge orientation, and color, albeit to various extents (i.e., with sharper or coarser tuning curves). Tuning curves of different neurons overlap, leading to population coding, where each stimulus is represented by the activities of a group, or population, of neurons. The first part of this chapter explores the consequences of population coding for neural information processing. In the second section, we study the fact that neighboring neurons on the cortical surface tend to have similar receptive fields and tuning curves. This similarity is defined over a combination of many parameters, including position in the visual field as well as the well-known "perceptual dimensions": orientation, spatial frequency, color, motion, and depth. It is particularly evident in the tuning to visual field position, i.e., in the retinotopic mapping of the visual field onto the cortical surface in the centimeter range.
Hanspeter A Mallot
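The following Python sketch illustrates population coding with the classic population-vector readout: a bank of neurons with overlapping cosine tuning curves for movement direction responds with Poisson spike counts, and the stimulus direction is decoded from the activity-weighted sum of preferred directions. All numbers (cell count, baseline, gain, tuning shape) are illustrative assumptions, not values from the book.

```python
import numpy as np

rng = np.random.default_rng(3)

# A population of neurons with overlapping cosine tuning curves; the
# stimulus direction is recovered from the population vector.
n_cells = 32
preferred = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)  # preferred directions

def rates(theta):
    """Cosine tuning with baseline; mean firing rates in Hz."""
    return np.maximum(10.0 + 20.0 * np.cos(theta - preferred), 0.0)

theta_true = 1.3                                  # stimulus direction, radians
r = rng.poisson(rates(theta_true)).astype(float)  # noisy spike counts

# Population vector: preferred directions weighted by observed activity.
pv_x = np.sum(r * np.cos(preferred))
pv_y = np.sum(r * np.sin(preferred))
theta_hat = np.arctan2(pv_y, pv_x) % (2 * np.pi)
print(f"true: {theta_true:.2f} rad, decoded: {theta_hat:.2f} rad")
```

No single cell specifies the stimulus; the overlapping tuning curves together do, which is the point of the population-coding discussion in the first part of the chapter.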
Backmatter
Metadata
Title
Computational Neuroscience
Author
Hanspeter A Mallot
Copyright year
2013
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-00861-5
Print ISBN
978-3-319-00860-8
DOI
https://doi.org/10.1007/978-3-319-00861-5