
Open Access 19.06.2018

Dynamics of spontaneous activity in random networks with multiple neuron subtypes and synaptic noise

Spontaneous activity in networks with synaptic noise

Authors: Rodrigo F. O. Pena, Michael A. Zaks, Antonio C. Roque

Published in: Journal of Computational Neuroscience | Issue 1/2018


Abstract

Spontaneous cortical population activity exhibits a multitude of oscillatory patterns, which often display synchrony during slow-wave sleep or under certain anesthetics and stay asynchronous during quiet wakefulness. The mechanisms behind these cortical states and transitions among them are not completely understood. Here we study spontaneous population activity patterns in random networks of spiking neurons of mixed types modeled by Izhikevich equations. Neurons are coupled by conductance-based synapses subject to synaptic noise. We localize the population activity patterns on the parameter diagram spanned by the relative inhibitory synaptic strength and the magnitude of synaptic noise. In the absence of noise, networks display transient activity patterns, either oscillatory or at a constant level. The effect of noise is to turn transient patterns into persistent ones: for weak noise, all activity patterns are asynchronous and non-oscillatory, independently of synaptic strengths; for stronger noise, patterns have oscillatory and synchrony characteristics that depend on the relative inhibitory synaptic strength. In the region of parameter space where the inhibitory synaptic strength exceeds the excitatory synaptic strength, and for moderate noise magnitudes, networks feature intermittent switches between oscillatory and quiescent states with characteristics similar to those of synchronous and asynchronous cortical states, respectively. We explain these oscillatory and quiescent patterns by combining a phenomenological global description of the network state with local descriptions of individual neurons in their partial phase spaces. Our results point to a bridge from events at the molecular scale of synapses to the cellular scale of individual neurons to the collective scale of neuronal populations.
Notes
Action Editor: Alain Destexhe

Electronic supplementary material

The online version of this article (https://doi.org/10.1007/s10827-018-0688-6) contains supplementary material, which is available to authorized users.
Michael A. Zaks and Antonio C. Roque contributed equally to the work.

1 Introduction

Simultaneous recordings from large neuronal populations disclose complex spatio-temporal firing patterns characterized by rhythmic oscillations with variable degrees of synchrony (Buzsáki and Draguhn 2004; Bonifazi et al. 2009; Uhlhaas et al. 2009; Colgin 2011). Recent evidence suggests that in the cortex these patterns range from a “synchronized” state, characterized by low-frequency oscillation in the population firing rate and up/down switching in the single-neuron membrane potential, to a “desynchronized” state, marked by a roughly constant population firing rate and irregular single-neuron firing (Harris and Thiele 2011; Vyazovskiy et al. 2011; Sachidhanandam et al. 2013; Miller et al. 2014; Okun et al. 2015; Jercog et al. 2017). Synchronous states are more prominent during slow-wave sleep and anesthesia whereas asynchronous firing activity is prevalent in the states of wakefulness and REM sleep (Steriade et al. 2001; El Boustani et al. 2007; Greenberg et al. 2008; Sanchez-Vives et al. 2017). Notably, the degree of synchrony in cortical and subcortical regions varies with time, often with intermittent switches between synchronous and asynchronous states (Ahn and Rubchinsky 2013, 2017; Hahn et al. 2017).
There is a widespread assumption that prevalence of synchrony or asynchrony in the network activity depends on the relative strength of excitatory and inhibitory synaptic inputs (van Vreeswijk et al. 1996; Amit and Brunel 1997; Renart et al. 2010; Landau et al. 2016). In the context of networks of leaky integrate-and-fire (LIF) neurons, the balance between average excitatory and inhibitory synaptic inputs is known to result in quantitative characteristics of network activity that resemble those of asynchronous cortical states (Brunel 2000; Mattia and Del Giudice 2002; Cessac and Viéville 2008; Vogels and Abbott 2005a; Kumar et al. 2008; Wang et al. 2011; Litwin-Kumar and Doiron 2012; Kriener et al. 2014; Ostojic 2014; Potjans and Diesmann 2014). In the absence of such balance, the network displays behaviors akin to synchronous cortical states (Vogels et al. 2005b).
Networks in which the nodes feature more complicated dynamics than LIF neurons and are able to reproduce intrinsic firing patterns of contrasting cortical neurons, e.g. based on the Izhikevich (2003, 2007) or the AdEx (Brette and Gerstner 2005; Gerstner et al. 2014) models, demonstrate higher diversity of temporal patterns. In the region of parameter space where inhibitory synaptic strength exceeds excitatory synaptic strength, mixtures of neurons with different individual firing characteristics perform collective spontaneous oscillations that resemble the alternation of up and down states observed in the synchronized cortical state (Tomov et al. 2014, 2016). This suggests that not only synaptic balance of excitation/inhibition but also heterogeneities in the neuronal composition of the network may have an impact on the dynamic pattern of the network.
Yet another factor, capable of influencing the interplay between oscillatory and non-oscillatory states, is the intrinsic randomness of synaptic channels. More specifically, stochasticity expressed by synaptic noise originates from spontaneous neurotransmitter release in the synaptic cleft, which generates miniature excitatory (inhibitory) postsynaptic potentials, the so-called mEPSPs (mIPSPs) or simply minis (Kavalali 2015; Pulido and Marty 2017). Characteristics of miniature postsynaptic potentials such as amplitude and frequency have been demonstrated to depend on the sleep/wake state (Liu et al. 2010). From the theoretical point of view, synaptic noise has been used in cortical models as a source of transitions between different dynamical network states (Compte et al. 2000; Renart et al. 2003; Holcman and Tsodyks 2006; Moreno-Bote et al. 2007; Parga and Abbott 2007).
Previous work has shown that up-down oscillations can appear in different setups. One of them considers neurons with an adaptive variable, within e.g. AdEx (Destexhe 2009) or Izhikevich (Tomov et al. 2014) formalism. Another setup uses noise to provoke the switching between the two states (Holcman and Tsodyks 2006; Jercog et al. 2017). Here, by combining adaptation with noise, we show that noise is not mandatory for the up-down oscillations but favors their occurrence when it is present. In this study we demonstrate that a network of Izhikevich neurons with stochastic synaptic inputs displays a rich variety of dynamic states with different levels of oscillations and degrees of synchrony. We locate these states in the parameter space spanned by the ratio between inhibitory and excitatory synaptic increments and the synaptic noise magnitude. As expected, noise transforms the transient dynamics observed in previous studies into persistent states with well established properties. Independently of network composition and relative inhibitory synaptic strength, for low intensities of synaptic noise the persistent states are asynchronous and non-oscillatory. For higher noise magnitudes, the type of persistent state depends on the relative inhibitory synaptic strength.
Remarkably, in the region of the parameter space where inhibitory synaptic increments are greater than excitatory synaptic increments the persistent state displays intermittent spontaneous transitions between two dynamic regimes: an active state characterized by rhythmic alternations of tonic firing and silence, and a quiescent state characterized by low-rate irregular network firing. In the active state, the average neuronal membrane voltage oscillates between depolarized and hyperpolarized states in a manner that resembles cortical up/down oscillations, whereas in the quiescent state the average membrane potential remains close to the resting value. We characterize this intermittent state by means of firing rates, power spectra, voltage series, and explain the observed phenomena in terms of the behavior of network-embedded neurons viewed in their single-neuron phase subspaces.
This work extends previous studies on activity pattern dynamics in random networks of LIF neurons to random networks with more involved on-site dynamics. To test the validity of our observations against the change of the chosen neuronal model, we performed similar computations for the same networks composed of AdEx neurons, reproducing all basic effects found for the Izhikevich model. This paves the way to a broader conjecture that two-dimensional neuron models with a slow recovery variable can naturally account for oscillations between depolarized and hyperpolarized states, mimicking up/down states. In this context, synaptic noise can transform transient oscillatory network activity into a persistent complex state with intermittent switches between two different dynamic regimes.

2 Materials and methods

2.1 Neuron and network model

Our work is based on a recent model that describes self-sustained oscillations across high (up) and low (down) global activity states (Tomov et al. 2014, 2016). This is the standard random network model where directed connections between every two nodes exist with a fixed probability p. To keep cortical sparseness we have chosen a low connection probability p = 0.01 and size \(N = 2^{10}\). This renders the expected number of incoming connections per node (average in-degree) p(N − 1) ≈ 10. The network is mixed: it includes both excitatory and inhibitory nodes. The sizes of excitatory and inhibitory subpopulations are taken in the proportion 4 : 1 (Brunel 2000). Each network node is a neuron modeled by the Izhikevich formalism (Izhikevich 2003) with parameters that ensure diverse dynamics on the individual level. Every neuron is described by two variables: voltage v(t) and membrane recovery variable u(t), which follow the coupled differential equations
$$\begin{array}{@{}rcl@{}} \left\{\begin{array}{ll} \dot{v} & =\alpha v^{2}+\beta v+\gamma - u + I(t) \\ \dot{u} & =a(bv-u), \end{array}\right. \end{array} $$
(1)
with a fire-and-reset rule. Every time v(t) reaches the threshold value \(v(t) = v_{\text{peak}}\), both variables are instantaneously updated:
$$\begin{array}{@{}rcl@{}} \left\{\begin{array}{ll} v(t) & \rightarrow c,\\ u(t) & \rightarrow u(t)+d. \end{array}\right. \end{array} $$
(2)
Our choice of the Izhikevich neuronal model is based on its ability to mimic, by setting appropriate values of the parameters a, b, c, d, the behavior of neurons from different electrophysiological classes (Nowak et al. 2003; Contreras 2004). Among those, we concentrate in this study on the excitatory regular spiking (RS) and chattering (CH) neurons, and on the inhibitory fast spiking (FS) and low-threshold spiking (LTS) neurons. Figure 1 shows examples of individual dynamics for the different classes: the neuronal types differ in frequency, adaptation, and rheobase current. We consider network compositions where all inhibitory neurons belong to the same class: either the LTS type or the FS type. In the excitatory subpopulation we consider the case in which all neurons belong to the RS type and the case in which RS neurons are mixed with CH neurons. A thorough discussion of different aspects of the Izhikevich neuron model can be found in Izhikevich (2007).
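To make the single-neuron dynamics concrete, the following minimal sketch integrates Eqs. (1)-(2) for one RS neuron with the parameters of Table 1 under a constant suprathreshold current. It is an illustration only (Python with a simple forward-Euler step and an arbitrary input current), not the C++/Heun code used for the actual simulations.

```python
import numpy as np

# Minimal sketch of Eqs. (1)-(2): a single Izhikevich neuron with the common
# parameters and the RS parameters of Table 1.  The forward-Euler step and the
# constant input current are illustrative choices (the paper uses a Heun scheme
# and synaptic currents).
alpha, beta, gamma, v_peak = 0.04, 5.0, 140.0, 30.0   # common parameters
a, b, c, d = 0.02, 0.2, -65.0, 8.0                    # excitatory RS neuron

def simulate_rs_neuron(I_ext=10.0, T=300.0, dt=0.1):
    steps = int(T / dt)
    v, u = -70.0, b * (-70.0)        # resting state for these parameters
    spikes = []
    for k in range(steps):
        v += dt * (alpha * v**2 + beta * v + gamma - u + I_ext)
        u += dt * (a * (b * v - u))
        if v >= v_peak:              # fire-and-reset rule, Eq. (2)
            v, u = c, u + d
            spikes.append(k * dt)
    return np.array(spikes)

print(f"{len(simulate_rs_neuron())} spikes in 300 ms")
```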
The last term in the first equation of Eq. (1) describes the synaptic current which, for a neuron j, reads:
$$ I_{syn,j}(t)=G^{\text{ex}}_{j}(t)\left( E_{\text{ex}}-v_{j}\right) + G^{\text{in}}_{j}(t)\left( E_{\text{in}}-v_{j}\right). $$
(3)
The current is controlled by conductances \(G_{j}^{\text{ex/in}}\) and reversal potentials \(E_{\text{ex/in}}\), responsible for excitatory/inhibitory effects. Whenever an excitatory (inhibitory) neuron spikes, an increment \(g_{\text{ex}}\) (\(g_{\text{in}}\)) is added to the conductances \(G^{\text{ex}}\) (\(G^{\text{in}}\)) of all its postsynaptic neurons; thereafter the conductances decay exponentially with time constant \(\tau_{\text{ex/in}}\). This is the well-known conductance-based synaptic model, described by the differential equation
$$\begin{array}{@{}rcl@{}} \frac{dG_{j}^{\text{ex/in}}(t)}{dt}=-\frac{G_{j}^{\text{ex/in}}(t)}{\tau_{\text{ex/in}}}+g_{\text{ex/in}} \sum\limits_{i} \delta(t - t_{i})\\ + \sqrt{2D n_{j}}\xi_{j}(t), \end{array} $$
(4)
where summation is performed over all time instants ti of preceding presynaptic spikes. We adopt the same parameters as in Tomov et al. (2016): Eex = 0 mV, Ein = −80 mV, τex = 5 ms and τin = 6 ms.
The last term in Eq. (4) is the synaptic noise source. Since, for simplicity, the noise sources are treated as independent or weakly correlated, a superposition of a large number of such inputs is approximated by a simple Gaussian white noise process. We assume that \(\xi_{j}\) is Gaussian with zero mean and unit variance: \(\left \langle \xi (t)\right \rangle = 0\) and \(\left \langle \xi (t)\xi (s)\right \rangle =\delta (t-s)\). Note that in spite of the zero mean of the Gaussian process, the mean value of the synaptic input current \(I_{syn,j}\) stays non-zero; it is determined by both \(G_{j}^{\text {ex/in}}\) and \(E_{\text {ex/in}}\). Thus the Gaussian process merely displaces the synaptic current; it does not act as a driving force on its own. Concerning the variance: since the sum of independent normally distributed random variables is itself normally distributed, the overall variance of the stochastic process for a neuron j is chosen to be proportional to the total number of excitatory/inhibitory inputs \(n_{j}\) that this neuron receives. Thereby, for neurons with different numbers of presynaptic partners, the intensity of the noisy input is different. Altogether, the evolution of the conductances of each neuron consists of a stochastic Ornstein-Uhlenbeck process (Uhlenbeck and Ornstein 1930) in the time intervals between presynaptic spikes and of discontinuous upward jumps of size \(g_{\text {ex/in}}\) at the arrival times of those spikes. This stochastic model, similar to the point-conductance model described in Destexhe et al. (2001), has its power spectral density and variance completely determined (Gillespie 1996). In distinction to Destexhe et al. (2001), in our case randomness is generated within the synapses and is, in general, non-Poissonian.
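As a concrete illustration of Eq. (4), the sketch below propagates a single excitatory conductance through exponential decay, increments at (hypothetical) presynaptic spike times, and the additive Gaussian term of intensity \(\sqrt{2Dn_{j}}\). The Euler-Maruyama step, the noise intensity, the in-degree and the spike times are all illustrative assumptions; the non-negativity floor mimics the reflecting condition used later for the isolated neuron.

```python
import numpy as np

# Minimal sketch of Eq. (4) for one excitatory conductance G_ex of neuron j:
# exponential decay, jumps of size g_ex at presynaptic spike times, and additive
# Gaussian noise of intensity sqrt(2*D*n_j).  Euler-Maruyama stepping and the
# presynaptic spike times are illustrative, not the paper's implementation.
rng = np.random.default_rng(0)

tau_ex, g_ex = 5.0, 0.15                    # ms, conductance increment (Table 1)
D, n_j = 1e-5, 10                           # noise intensity and in-degree (illustrative)
dt, T = 0.05, 200.0                         # ms
presyn_spikes = np.array([20.0, 22.0, 90.0, 150.0])   # hypothetical input spike times

steps = int(T / dt)
G = np.zeros(steps)
for k in range(1, steps):
    t = k * dt
    G[k] = G[k-1] - G[k-1] / tau_ex * dt \
           + np.sqrt(2 * D * n_j * dt) * rng.standard_normal()
    if np.any((presyn_spikes > t - dt) & (presyn_spikes <= t)):
        G[k] += g_ex                        # increment on presynaptic spike arrival
    G[k] = max(G[k], 0.0)                   # keep the conductance non-negative
```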
The complete set of parameter values used in the simulations of this study is summarized in Table 1. Note that the values of the parameters \(\alpha, \beta, \gamma\) in the voltage equation are shared by all neuronal types.
Table 1
Parameters used in the simulations

Common parameters in Eq. (1):
  \(\alpha\) = 0.04, \(\beta\) = 5, \(\gamma\) = 140, \(v_{\text{peak}}\) = 30 mV

Parameters of Eq. (1) for different firing patterns:
                    a      b      c [mV]   d
  Excitatory RS     0.02   0.2    -65      8
  Excitatory CH     0.02   0.2    -50      2
  Inhibitory FS     0.1    0.2    -65      2
  Inhibitory LTS    0.02   0.25   -65      2

Synaptic parameters:
  \(g_{\text{ex}}^{\max}\) = 0.15, \(g_{\text{in}}^{\max}\) = 1, \(\tau_{\text{ex}}\) = 5 ms, \(\tau_{\text{in}}\) = 6 ms, \(E_{\text{ex}}\) = 0 mV, \(E_{\text{in}}\) = -80 mV

Network parameters:
  size \(N = 2^{10}\), ratio exc:inh = 4:1, connectivity p = 0.01

2.2 Measures

In this subsection, we introduce neuron and network measures that will be used below for characterization of the results.
We start by defining the network time-dependent firing rate as
$$ r(t; {\Delta} t) = \frac{1}{N {\Delta} t} \sum\limits_{j = 1}^{N} {\int}_{t}^{t+{\Delta} t} x_{j}(t^{\prime}) dt^{\prime}, $$
(5)
where the spike train \(x_{j}\) for each neuron j is viewed as a series of \(\delta \) functions: \(x_{j}(t) = {\sum }_{{t^{f}_{j}}} \delta (t-{t^{f}_{j}})\), with \(\{{t^{f}_{j}}\}\) being the set of times when neuron j fired. We fix the time window Δt = 1 ms.
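A minimal numerical counterpart of Eq. (5), assuming the spike times of all neurons have been pooled into one array (the function name and interface are ours, for illustration):

```python
import numpy as np

# Sketch of the time-dependent network firing rate r(t; dt_win), Eq. (5):
# pooled spikes of all N neurons are counted in windows of width dt_win.
def network_firing_rate(pooled_spike_times, N, T, dt_win=1.0):
    """pooled_spike_times: spike times (ms) of all neurons in one 1-D array.
    Returns r(t; dt_win) in spikes per ms per neuron."""
    edges = np.arange(0.0, T + dt_win, dt_win)
    counts, _ = np.histogram(pooled_spike_times, bins=edges)
    return counts / (N * dt_win)
```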
We will use two power spectra: the spike train power spectrum and the voltage time series power spectrum. The first one is defined for each neuron j as
$$ S_{xx,j}(f) = \frac{\langle\tilde{x}_{j}\tilde{x}^{*}_{j}\rangle}{T}, $$
(6)
where \(\tilde {x}_{j}(f)\) is the Fourier transform \(\tilde {x}_{j}(f) = {{\int }_{0}^{T}}dt\,e^{2\pi ift}x_{j}(t)\), \(\tilde {x}_{j}^{*}\) is the complex conjugate and T is the length of the time interval. Note that 〈.〉 represents an ensemble average. The power spectrum of the voltage time series is obtained in the same way, replacing in Eq. (6) the spike train \(x_{j}(t)\) by the voltage time series \(v_{j}(t)\). In the case of the spike train power spectrum, the units are 1/s whereas the units of the voltage spectrum are mV\(^{2}\)/Hz.
An average over a subset that includes K neurons renders the average power spectrum:
$$ \bar{S}(f) = \frac{1}{K} \sum\limits_{j \in K} S_{xx,j}(f). $$
(7)
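For illustration, Eqs. (6)-(7) can be estimated from binned spike trains with the FFT; the bin width and the discretization below are our choices, not taken from the paper:

```python
import numpy as np

# Sketch of the spike-train power spectrum (Eq. 6) and its average over a
# subset of neurons (Eq. 7).  Spike trains are binned at dt_bin, which sets an
# upper frequency limit; T and dt_bin are in ms, frequencies in Hz.
def spike_train_spectrum(spike_times, T, dt_bin=1.0):
    edges = np.arange(0.0, T + dt_bin, dt_bin)
    x, _ = np.histogram(spike_times, bins=edges)
    xtilde = np.fft.rfft(x)                          # discrete analogue of the Fourier integral
    freqs = np.fft.rfftfreq(len(x), d=dt_bin * 1e-3)
    return freqs, np.abs(xtilde) ** 2 / (T * 1e-3)   # units of 1/s

def average_spectrum(spike_trains, T):               # Eq. (7)
    freqs, _ = spike_train_spectrum(spike_trains[0], T)
    spectra = [spike_train_spectrum(st, T)[1] for st in spike_trains]
    return freqs, np.mean(spectra, axis=0)
```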
We quantify the degree of oscillatory activity in the network via the spectral entropy \(H_{s}\) (Blanco et al. 2013; Sahasranamam et al. 2016). Spectral entropy is computed from the time-dependent firing rate Eq. (5) as
$$ H_{s} = \frac{-{\sum}_{k} S_{rr}(f_{k}) \log S_{rr}(f_{k})}{\log N_{b}}, $$
(8)
where \(N_{b}\) is the number of frequency bins and \(S_{rr}(f_{k})\) is the value of the normalized (i.e. \({\sum }_{k} S_{rr}(f_{k}) = 1\)) power spectrum of the network time-dependent firing rate \(r(t; {\Delta } t)\) at the k th bin. In our simulations we use \(N_{b}\)= 1000. In the case of broadband noise activity, the power spectrum of the network firing rate is flat and the spectral entropy is maximal: \(H_{s}= 1\). If, in contrast, all power is concentrated at one frequency, a case of single-frequency network oscillations, the spectral entropy vanishes: \(H_{s}= 0\).
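A hedged numerical sketch of Eq. (8), computing the spectral entropy from the network firing rate; subtracting the mean before the FFT and the way the spectrum is reduced to \(N_{b}\) bins are our illustrative choices:

```python
import numpy as np

# Sketch of the spectral entropy H_s, Eq. (8): the power spectrum of r(t; dt)
# is normalized to unit sum over N_b frequency bins and its Shannon entropy is
# divided by log(N_b).  H_s -> 1 for a flat (broadband) spectrum, H_s -> 0 when
# all power sits in a single bin.
def spectral_entropy(rate, n_bins=1000):
    power = np.abs(np.fft.rfft(rate - rate.mean())) ** 2
    power = power[:n_bins]                     # keep N_b frequency bins (illustrative)
    S = power / power.sum()
    S = S[S > 0]                               # avoid log(0)
    return float(-np.sum(S * np.log(S)) / np.log(n_bins))
```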
To quantify the degree of synchrony in the network, we use the phase locking value (PLV ) which is a standard measure to evaluate phase synchronization (Lachaux et al. 1999; Celka 2007; Rosenblum et al. 2001; Aydore et al. 2013; Lowet et al. 2016). Unless otherwise stated, the time average used to calculate the PLV is always taken over a simulation interval of T = 2000 ms. We define the PLV as the average over K neuron pairs and T sample time points:
$$ PLV = \frac{1}{K} \sum\limits_{\{ij\}}^{K}\left|\frac{1}{T}\sum\limits_{t}^{T} e^{i {\Delta} {\Phi}_{xy}(t)} \right|, $$
(9)
where \({\Delta } {\Phi }_{xy}(t) = \rho _{x}{\Phi }_{x}(t)-\rho _{y}{\Phi }_{y}(t)\) is the phase difference between two randomly chosen spike trains \(\left (x(t),y(t)\right )\), with the instantaneous phases obtained using the Hilbert transform. The values \(\rho _{x}\) and \(\rho _{y}\) define the frequency ratio and, expecting similar firing rates, we set \(\rho _{x}=\rho _{y}= 1\). The PLV is bounded between 0 (asynchrony) and 1 (synchrony).
In our simulations, we constructed parameter space plots of the synchrony index PLV (like the ones shown in Results) for different numbers K of neuron pairs and observed a saturation in the plots for increasing values of K above 50. This indicates that PLV becomes independent of the number of neuron pairs for K ≥ 50. To ensure this independence, in computations we took K = 60.
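The PLV computation can be sketched as follows; the phases are extracted from binned (and in practice lightly smoothed) spike trains with the Hilbert transform, and the number of pairs K = 60 follows the saturation argument above. Binning and smoothing details are illustrative choices:

```python
import numpy as np
from scipy.signal import hilbert

# Sketch of Eq. (9) with rho_x = rho_y = 1: PLV averaged over K random pairs
# of binned spike trains; instantaneous phases come from the Hilbert transform.
def pair_plv(x, y):
    phi_x = np.angle(hilbert(x - x.mean()))
    phi_y = np.angle(hilbert(y - y.mean()))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

def network_plv(binned_trains, K=60, seed=1):
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(K):
        i, j = rng.choice(len(binned_trains), size=2, replace=False)
        vals.append(pair_plv(binned_trains[i], binned_trains[j]))
    return float(np.mean(vals))
```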
Numerical integration of the differential equations was performed by means of the Heun algorithm (Mannella 2002). We used C++ to write the computational code, and Matlab and xmgrace to visualize and analyze the results.
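For completeness, one step of a stochastic Heun (predictor-corrector) scheme for an equation with additive noise, in the spirit of Mannella (2002), can be sketched as follows; this is a generic illustration, not the paper's C++ implementation:

```python
import numpy as np

# One Heun step for dx = f(x) dt + sigma dW: an Euler predictor followed by a
# trapezoidal corrector for the drift, with the same noise increment reused.
def heun_step(x, f, sigma, dt, rng):
    dW = np.sqrt(dt) * rng.standard_normal(np.shape(x))
    x_pred = x + f(x) * dt + sigma * dW
    return x + 0.5 * (f(x) + f(x_pred)) * dt + sigma * dW
```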

3 Results

3.1 Preliminaries and the deterministic setup

To single out the effects caused by the introduction of synaptic noise, we first characterize the system in the non-perturbed state, i.e. in the absence of noise. Below, we refer to this case as the deterministic setup.
At the chosen parameter values the global state of rest is stable. Since in the deterministic setup no activity can be excited from that state without an initial disturbance, we start simulations by applying brief electric stimulation to arbitrarily selected neurons. Different stimuli are constructed by varying
  • the amplitude of the input current from \(I_{\text {stim}} = 10\) to \(I_{\text {stim}} = 20\);
  • the duration of the input current from \(t_{\text {stim}} = 50\) ms to \(t_{\text {stim}} = 300\) ms; and
  • the proportion of stimulated neurons: 1, 1/2, 1/4, 1/8, 1/16.
The initial kick provided by the brief stimulation has the sole role of putting the system into a state other than rest. After the stimulation ends, the network is left to evolve freely and its dynamics is recorded. Eventually all trials end up in the state of rest. In most cases the evolution is not a straightforward decay but a long dynamical transient; its duration varies strongly (by several orders of magnitude), depending on the initial conditions. After discarding the cases where the free activity lasted less than 400 ms, we are left with a set of trials in which the network displayed long-living self-sustained activity; the duration of this latter stage justifies a closer look at its intrinsic characteristics.
We have studied different combinations of the conductance increments (gin, gex) and observed rather distinct behavior as shown in Fig. 2. The choice of \(g_{\text {ex}}\) and \(g_{\text {in}}\) directly affects the network balance and shapes thereby its dynamics (Vogels et al. 2005b).
Depending on the ratio \(g_{\text {in}}/g_{\text {ex}}\), the self-sustained activity displayed by the network belongs to one of two categories outlined in Tomov et al. (2014). The first one, shown in the left column of Fig. 2, is a relatively constant network activity state where neurons spike in an asynchronous and non-oscillatory fashion. For the given example, this is confirmed by the high value of the spectral entropy (Hs = 0.87) and the low phase locking value (PLV = 0.39). The reason for the constant network activity can be seen from the behavior of the voltage traces for two randomly selected neurons at the bottom of the left column of Fig. 2: the neurons fire irregularly, but their firing rates are so high that the collective activity is constant.
The second category, shown in the right column of Fig. 2, is an oscillatory state (Hs = 0.39) characterized by regular periods of high mean firing rate intercalated with periods of very low firing. The average voltage indicates that the bulk of neurons is fluctuating between depolarization and hyperpolarization. The PLV for this state is higher (PLV = 0.60) indicating that neurons fire/stay silent with a higher degree of synchrony. Voltage traces for two randomly picked neurons (see bottom plot in the right column of Fig. 2) show bursts of closely spaced spikes during high activity phases intercalated with periods of hyperpolarization below the reset value during low activity phases. This behavior was explained by us earlier (Tomov et al. 2016) in terms of the dynamics of the recovery variable u in the single neuron phase plane of the Izhikevich neuron.
The example given in the middle column of Fig. 2 illustrates the transition between the two categories above. This transition occurs when the inhibitory synaptic increment overcomes the excitatory synaptic increment, as reported in Tomov et al. (2014). The network activity in the transition region looks like a mixture of constant and oscillatory activity, with intermediate values of the synchrony (PLV = 0.45) and oscillatory activity (Hs = 0.56) indexes. Voltage traces for two randomly chosen neurons (bottom of middle column) show high firing rates as in the first category (a tendency for constant activity), but now there are short breaks in activity as in the second category (oscillatory activity).
Naively, \(g_{\text {in}}/g_{\text {ex}}= 4\) may seem to be a balanced situation, as reported elsewhere (Brunel 2000). Here, however, we are dealing with neurons from different electrophysiological classes, and their firing rates differ as well. In addition, we are using a conductance based synaptic model where the synaptic current is voltage-dependent. In that sense, the mean time-averaged synaptic input for a given neuron j can be roughly estimated as
$$\begin{array}{@{}rcl@{}} I_{j}(t) \approx g_{\text{ex}}C_{\text{ex}}\nu_{\text{ex}} \tau_{\text{ex}} (E_{\text{ex}} - \langle v \rangle) \\ - g_{\text{in}}C_{\text{in}}\nu_{\text{in}} \tau_{\text{in}} (\langle v \rangle - E_{\text{in}}), \end{array} $$
(10)
where \(C_{\text {ex/in}}\) are the numbers of excitatory/inhibitory inputs to neuron j, \(\nu _{\text {ex/in}}\) are the mean firing rates of the excitatory/inhibitory populations, and \(\langle v \rangle \) is a representative voltage. The expression in Eq. (10) elucidates that the notion of “balance” is subtle, and its reduction to just \(g_{\text {ex/in}}\) and \(C_{\text {ex/in}}\) may be misleading. Usually, when LIF neurons are considered, equal mean firing rates of excitatory and inhibitory neurons, as well as equal relaxation times \(\tau _{\text {ex,in}}\) are assumed, hence the balance requires only that \(g_{\text {in}}/g_{\text {ex}}=C_{\text {ex}}/C_{\text {in}}\), which, in the widely studied situation with the number of excitatory connections four times higher, results in \(g_{\text {in}}/g_{\text {ex}}= 4\). In contrast, in a network like ours, with \(\nu _{\text {in}}>\nu _{\text {ex}}\), there is no balance at \(g_{\text {in}}/g_{\text {ex}}= 4\), instead there is a voltage dependent input current: if \(\langle v \rangle \) is depolarized (hyperpolarized), negative (positive) currents drive the neuron.
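To illustrate the voltage dependence expressed by Eq. (10), the snippet below evaluates the two terms for a neuron with the average in-degree of the studied networks at the naive ratio \(g_{\text {in}}/g_{\text {ex}} = 4\). The firing rates \(\nu _{\text {ex}}\), \(\nu _{\text {in}}\) and the representative voltages are assumed values chosen only for the illustration, not measurements from the simulations:

```python
# Rough numerical illustration of Eq. (10).  nu_ex, nu_in and the representative
# voltages are assumed values (inhibitory neurons taken faster, as in the text).
g_ex, g_in = 0.15, 0.60          # g_in/g_ex = 4, the naive "balanced" ratio
C_ex, C_in = 8, 2                # in-degree ~10, split 4:1
tau_ex, tau_in = 5.0, 6.0        # ms
E_ex, E_in = 0.0, -80.0          # mV
nu_ex, nu_in = 0.005, 0.010      # spikes/ms (assumed)

for v_mean in (-70.0, -55.0):    # hyperpolarized vs. depolarized <v>
    I_exc = g_ex * C_ex * nu_ex * tau_ex * (E_ex - v_mean)
    I_inh = g_in * C_in * nu_in * tau_in * (v_mean - E_in)
    print(f"<v> = {v_mean} mV: net input = {I_exc - I_inh:+.2f}")
```

With these assumed rates the net input is positive for the hyperpolarized \(\langle v \rangle\) and negative for the depolarized one, illustrating why the naive ratio does not yield balance here.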
Altogether, these preliminary examples confirm that the deterministic setup, depending on the ratio \(g_{\text {in}}/g_{\text {ex}}\), is able to generate oscillatory or constant activity. In the following, we concentrate on the oscillatory situation, when inhibition overcomes excitation.
In Fig. 3 we present an exemplary simulation in the deterministic setup and extended statistics from the set of long-lived realizations with synaptic increments \(g_{\text {ex}} = 0.15\) and \(g_{\text {in}} = 1\) (this set contains 487 simulations, thus allowing good statistics). In this case the majority of neurons oscillates between a depolarized state and a hyperpolarized state, well visible in Fig. 3B and on the bimodal distribution in Fig. 3F, computed from the entire set of simulations with varied initial stimulation. For individual neurons these preferred subthreshold membrane potentials are known as “up” and “down” states (Wilson 2008), and in the context of the ensemble of neurons it seems natural to view these two states as collective “up” and “down”, respectively. As seen in Figs. 3A-E, a typical period of oscillations is close to 100 ms (f ≈ 10 Hz).
For a typical neuron in the ensemble, Figs. 3D-E illustrate the temporal evolution of the voltage and the membrane recovery variable, respectively, during the same simulation. There is strong correlation between firing of this neuron and the periods of high activity of the whole network, although some other neurons also fire when the network activity is low. These latter are inhibited during the high activity epochs and become disinhibited when the overall network activity is low. We have shown elsewhere the importance of this disinhibitory effect to sustain the long-lived activity of the network in the oscillatory situation (Tomov et al. 2016).
Remarkably, not only does the voltage series in Fig. 3D feature two different states (a hyperpolarized one and a depolarized one), but so does the membrane recovery variable, which clearly grows when the network activity is high and slowly relaxes when the activity is low. This is a global phenomenon: in all simulations there are peaks of the variable u. In the distribution shown in Fig. 3G, the maximum is broad due to the time-scale separation of the variables: u is slower than v and its relaxation takes much longer. In Tomov et al. (2016) we have shown the importance of the recovery variable for the breakdown of global high-activity epochs, which produces the up and down oscillatory pattern.
Figure 3H presents a histogram of durations of the collective up and down states. Here, the term “up” refers to epochs in which the network activity exceeds 20% of its average value while the voltage of the majority of neurons is at a depolarized value. A collective “down” state is identified whenever the bulk of the neurons reaches a hyperpolarized state close to \(-80\) mV.
Recall that eventually the system ceases to oscillate, and voltages of all neurons invariably converge to the rest value.

3.2 Setup with synaptic noise

Introduction of synaptic noise drastically changes one important aspect, both in the individual and in the collective dynamics: the state of rest, albeit formally stable, ceases to be the ultimate attractor. A neuron is an excitable system, and in the noisy setup it is just a matter of time when a sufficiently strong fluctuation (or a cumulative effect of many fluctuations) drives it across the spiking threshold. For an ensemble this implies disordered sporadic firing of its members, which, under favorable conditions, can turn into ordered collective activity. If deterministic aspects dominate in dynamics, this activity will temporarily end in the state of rest, only to be recreated by new fluctuations.

3.2.1 Isolated neurons

Consider an individual neuron that obeys Eq. (1) with the synaptic current I given by Eq. (3) and synaptic conductances \(G^{\text {ex/in}}\) governed by Eq. (4) with noisy input. An isolated neuron, by definition, has no synaptic inputs; nevertheless, stochastic fluctuations of its synaptic conductances can result in action potentials. In this situation, to study the influence of noise on the resting neuron we, without loss of generality, set \(n_{j}= 1\) in Eq. (4). Take the initial conditions for the neuron at its state of rest and set its synaptic conductances to zero, so that the initial current is absent. As time goes on, the conductance evolves stochastically; to ensure that it stays positive, we impose a reflecting condition at zero (which, in the long run, very slightly shifts upwards the mean value of \(\xi (t)\)). As a result, a stochastic current \(I(t)\) is generated. As long as \(I(t)\) is absent or sufficiently small, the neuron stays at rest. As soon as the instantaneous current I exceeds the critical value \(I_{\text {crit}}(t)=\displaystyle \frac {(\beta -b)^{2}}{4\alpha }-\gamma \), with \(\alpha ,\beta ,\gamma ,b\) being the parameters of the Izhikevich model Eq. (1), the state of rest disappears (the mechanism is explained below in Section 3.4), the voltage variable v starts to grow monotonically, and the neuron fires.
Since presynaptic inputs are absent in this isolated neuron description (see Eq. (4)), computation of the first firing time for an isolated neuron turns into a variant of the mean first passage time problem (Siegert 1951) for the Ornstein-Uhlenbeck process. Numerically, we find this quantity by averaging over a sufficient number of trials.
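A minimal Monte-Carlo sketch of this first-passage computation is given below: the excitatory conductance of an isolated RS neuron follows the noisy Eq. (4) with \(n_{j} = 1\) and no presynaptic spikes, reflected at zero, and the time of the first spike is averaged over trials. The stepping scheme, time step and the (rather large) value of D are illustrative simplifications:

```python
import numpy as np

# Sketch: mean time to the first spike of an isolated RS neuron driven only by
# the stochastic excitatory conductance of Eq. (4) (n_j = 1, no presynaptic
# spikes, reflecting floor at zero).  Euler stepping and D are illustrative.
rng = np.random.default_rng(2)
alpha, beta, gamma, v_peak = 0.04, 5.0, 140.0, 30.0
a, b = 0.02, 0.2                               # RS; c, d are irrelevant before the first spike
tau_ex, E_ex = 5.0, 0.0
dt, T_max = 0.05, 5000.0                       # ms

def first_spike_time(D):
    v = -70.0                                  # resting state for these parameters
    u, G = b * v, 0.0
    for k in range(int(T_max / dt)):
        G += -G / tau_ex * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
        G = abs(G)                             # reflecting condition at zero
        v += dt * (alpha * v**2 + beta * v + gamma - u + G * (E_ex - v))
        u += dt * (a * (b * v - u))
        if v >= v_peak:
            return k * dt
    return np.nan                              # no spike within T_max

times = [first_spike_time(D=1e-3) for _ in range(50)]
print("mean first-spike time ≈", np.nanmean(times), "ms")
```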
Regarding dependency of \(I_{\text {crit}}(t)\) on the electrophysiological class, we note that the parameters \(\alpha \), \(\beta \) and \(\gamma \) are common for all classes, leaving b as the only parameter that matters. In this context, b determines the current threshold value. Furthermore, three of the four considered neuronal classes share the same value of b, whereas the LTS neuron has a higher value of b, ensuring early initiation of spikes. Hence, it suffices to compare two neurons: LTS and e.g. RS. In Fig. 4 we plot the computed dependences of the time of first spike on the synaptic noise intensity.
Notably, from the point of view of the random network, each curve in Fig. 4 shows the behavior for all neurons of its respective kind, regardless of their in-degree: according to Eq. (4), an increase of the in-degree (in other words, of the number of independent Gaussian noises acting upon the synapse) rescales the variance and is therefore equivalent to the corresponding increase of D at constant degree. Recall that in the studied networks most of the neurons have in-degree ≈10. Altogether, the influence of the number of synaptic connections is clear: the higher the in-degree, the higher the variance of the input noise, the faster the neuron crosses the threshold and emits a spike.

3.2.2 Network with weak synaptic noise

We begin the discussion of synaptic noise in the network by presenting a case where its introduction induces activity with properties strongly different from those in the deterministic setup. For the same set of parameter values as in the deterministic case of Fig. 3, instead of initial stimuli, we add in accordance with Eq. (4) small (\(D = 2.5 \times 10^{-6}\)) stochastic fluctuations to the synaptic variables. This results in activity with very low firing rates, exemplified in panels A-C of Fig. 5. The high spectral entropy (Hs = 0.82) and the very low synchrony (PLV = 0.0298) indicate a non-oscillatory and asynchronous type of activity. The voltage distribution in Fig. 5D stands in contrast to the deterministic case: it is unimodal, the maximum lies at the mean, and the relevant voltage values are close to the resting potential. The firing rates in Fig. 5E are close either to 1 Hz (excitatory neurons) or to 8 Hz (inhibitory neurons). The state of the network in the weak synaptic noise setup corresponds well to the so-called asynchronous irregular (AI) state (Brunel 2000; Vogels et al. 2005b).
Up-down oscillations can occur in the weak synaptic noise setup, but only for short transient periods like in the deterministic case. After the transient, the persistent activity is asynchronous irregular like the one in Fig. 5. An example is shown in Fig. S1.

3.3 Onset and classification of intermittent oscillatory and quiescent activity in the synaptic noise setup

Here we describe various collective states induced in the network by synaptic noise. Experience gained from the study of the deterministic setup allows us to expect that, along with the synaptic noise intensity D, the crucial parameter in this context is the ratio \(g_{\text {in}}/g_{\text {ex}}\): the proportion between inhibitory and excitatory synaptic strengths (Brunel 2000; Girones and Destexhe 2016). We start by exploring the behavior of the spectral entropy \(H_{s}\) and the synchrony measure PLV in the two-dimensional diagram spanned by parameters \(g_{\text {in}}/g_{\text {ex}}\) and D (Fig. 6).
As seen in the diagrams in Fig. 6, both \(g_{\text {in}}/g_{\text {ex}}\) and D are responsible for shaping the activity pattern of the network. Let us begin with the diagram for spectral entropy in Fig. 6 A. For weak synaptic noise (\(D \lessapprox 5 \times 10^{-6}\)) the network displays non-oscillatory behavior independently of the ratio \(g_{\text {in}}/g_{\text {ex}}\). For the narrow horizontal band defined by \(5 \times 10^{-6} \lessapprox D \lessapprox 10^{-5}\), the state of the network is oscillatory and the degree of oscillatory activity is higher for \(g_{\text {in}}/g_{\text {ex}} \lessapprox 2\). On the other hand, for \(D \gtrapprox 10^{-5}\) the situation is inverted and the region determined by \(g_{\text {in}}/g_{\text {ex}} \lessapprox 2\) displays non-oscillatory activity while most of the remainder of the diagram features oscillatory activity. Within this latter part of the diagram, increase of both noise and inhibitory synaptic strength lowers the degree of oscillatory activity until in the upper right corner the activity turns non-oscillatory.
Now let us turn to the diagram for the synchrony PLV in Fig. 6B. The region of weak synaptic noise (\(D \lessapprox 5 \times 10^{-6}\)) displays asynchronous behavior independently of \(g_{\text {in}}/g_{\text {ex}}\). Under such weak noise firing remains an individual event for noise-perturbed neurons, rather than a collective effect. Along the narrow horizontal band of the diagram determined by \(5 \times 10^{-6} \lessapprox D \lessapprox 10^{-5}\), the synchrony index has mostly intermediate values with a narrow high-synchrony region around \(g_{\text {in}}/g_{\text {ex}} \approx 4\). In the remainder of the diagram the behavior along horizontal scans in the diagram is roughly the same: in the entire region determined by \(g_{\text {in}}/g_{\text {ex}} \lessapprox 2.5\) the activity is asynchronous, whereas outside that region the degree of synchrony has intermediate values.
The combined information in the two diagrams of Fig. 6 is qualitatively summarized in a schematic diagram drawn in Fig. 7. The states in this diagram are denoted in accordance with two measures of network activity in Fig. 6: \(H_{s}\) quantifies the degree of oscillatory activity and PLV quantifies the degree of synchrony. Selected samples from the different regions are also displayed on the right of Fig. 7 to show the time-dependent network firing rates for the corresponding combinations of \(H_{s}\) and PLV.
The region of weak synaptic noise lies at the bottom of the diagram in Fig. 7. The type of network activity there is asynchronous non-oscillatory, already described in Section 3.2.2. It is similar to the asynchronous irregular (AI) activity observed in networks of LIF neurons (Brunel 2000; Vogels et al. 2005b). The region stretches along the full length of the horizontal axis, indicating that the generic features of the network activity for weak synaptic noise are insensitive to the ratio between excitation and inhibition.
For stronger synaptic noise the structure of the diagram in Fig. 7 is more complex. The network displays synchronous oscillatory activity within an irregularly shaped region in the center of the diagram, adjoined by a narrow horizontal strip in the bottom part. This is similar to the synchronous regular (SR) type of activity found in networks of LIF neurons (Brunel 2000; Vogels et al. 2005b). In the remainder of the third of the diagram where \(g_{\text {in}}/g_{\text {ex}} < 2\), the activity is asynchronous non-oscillatory. Its pattern is similar to the constant pattern shown in the left column of Fig. 2. On the other hand, throughout the upper two-thirds of the diagram for \(g_{\text {in}}/g_{\text {ex}} > 2\) the activity is synchronous non-oscillatory. Thus, for very strong synaptic noise the network activity is non-oscillatory and can be synchronous or asynchronous depending on the \(g_{\text {in}}/g_{\text {ex}}\) ratio.
Finally, the diagram in Fig. 7 includes the region marked as “transition”. It contains most of the right third of the diagram, with the exception of the regions of weak and strong synaptic noise mentioned above, and extends to the central part of the diagram where it separates the synchronous oscillatory from the asynchronous non-oscillatory regions. This corresponds to a region with intermediate degrees of oscillatory activity (the greenish region in the diagram for \(H_{s}\) in Fig. 6A) and synchrony (red-orange to yellow-orange colors in the diagram for PLV in Fig. 6B). Therefore, states in the transition region should occupy intermediate position between constant and oscillatory states like the state in the middle column of Fig. 2.
Interested in the behavior of the network in the transition region, we focus here on a part of the diagram in Fig. 7 determined by \((g_{\text {in}},g_{\text {ex}})=(1,0.15)\), which implies \(g_{\text {in}}/g_{\text {ex}}\approx 6.66\), and \(10^{-5} \lessapprox D \lessapprox 10^{-4}\). This corresponds to the greenish (light orange) region on the lower right-hand side of the diagram for \(H_{s}\) (PLV ) in Fig. 6A (B). Spectral entropy and PLV here are both close to 0.5 meaning that states with intermediate levels of oscillatory activity and synchrony may be encountered.
In Fig. 8 we illustrate dynamics for the point given by \(D = 1\times 10^{-5}\) and \((g_{\text {in}},g_{\text {ex}})=(1,0.15)\) in the diagram in Fig. 7. This point is in the transition region on the lower right-hand side of the diagram described above, which is characterized by intermediate values of \(H_{s}\) and PLV.
Remarkably, a typical record of a long simulation trial in this region of the diagram consists of alternating states (Fig. 8): an oscillatory one, akin to oscillations presented in the deterministic setup in Fig. 3, and a state with very low firing rates similar to the one in Fig. 5. From time to time transitions between these states occur, seemingly without any precursors. Compared to deterministic simulations, an additional feature is distinct in the histogram of mean voltage: a pronounced maximum at the state of rest. Accordingly, the temporal evolution of voltage is organized around three characteristic values, instead of two known from the deterministic setup. Three red dashed lines in Fig. 8B-C mark three relevant states; from top to bottom, they denote depolarization, the state of rest and hyperpolarization. Note that the histogram in B can be viewed as a combination of the voltage histograms from Figs. 3 and 5.
The average spectral entropy calculated over the quiescent/oscillatory states in Fig. 8 is \(H_{s} = 0.74\)/\(H_{s} = 0.37\), indicating non-oscillatory activity in the first case and oscillatory activity in the second.
We classify the observed states based on two attributes: network activity and average voltage. As before, the average voltage series was used to detect the up and down states (see Fig. 3H), and the states close to rest are identified through their very low network activity.
In terms of activity, we introduce the following distinction:
  • quiescent period is the time interval when the time-dependent firing rate of the network \(r(t,{\Delta } t)\) is below its maximum by at least 20%, and most of the single neurons have voltage values close to the resting state. During a quiescent period there can be sporadic noise-induced spikes but no collective dynamics. The state is similar to an asynchronous irregular (AI) state of networks of LIF neurons (Brunel 2000; Vogels and Abbott 2005a) with low firing rate, and to a desynchronized cortical state as described in the Introduction.
  • active period is the time interval when the network exhibits oscillatory activity, alternating between high depolarized and hyperpolarized mean voltage values: collective up and down states. Such behavior can be related to the self-sustained activity developed in in vivo cortical slice preparations and during slow-wave sleep and anesthesia (Steriade et al. 2001; Tomov et al. 2016; Sanchez-Vives et al. 2017).
These definitions, in combination with the values of the average voltage, facilitate identification of different collective states. Certain states that look very similar on the raster plot turn out to differ in typical voltage values. For instance, both the down state and the quiescent period feature in the raster plot almost no activity, but can be easily discerned in terms of the average voltage.
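The classification can be sketched as follows, labelling each time bin from the network rate and the mean voltage. The thresholds mirror the criteria given above (rate relative to its average for active epochs, mean voltage near \(-80\) mV for down states, mean voltage near rest with very low rate for quiescence); the exact tolerance values are illustrative:

```python
import numpy as np

# Sketch of the state labelling used in the text: "up", "down" or "quiescent"
# per time bin, from the network firing rate and the population-mean voltage.
# v_rest and the tolerance are illustrative values.
def classify_states(rate, mean_v, v_rest=-70.0, v_down=-80.0, tol=4.0):
    labels = np.empty(len(rate), dtype=object)
    rate_thr = 0.2 * rate.mean()
    for k, (r, v) in enumerate(zip(rate, mean_v)):
        if r > rate_thr and v > v_rest + tol:
            labels[k] = "up"            # depolarized, collectively firing
        elif v < v_down + tol:
            labels[k] = "down"          # hyperpolarized tail of an active period
        else:
            labels[k] = "quiescent"     # near rest, only sporadic spikes
    return labels
```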
In Fig. 9 we show various regimes at different values of D. Three samples corresponding to a time interval of 2 s are, from top to bottom: \(D = 0.5\times 10^{-5}\), \(D = 1.5\times 10^{-5}\), and \(D = 4.5\times 10^{-5}\). In panels A1, B1, and C1 green dots denote states with instantaneous voltage values close to the resting state, blue dots denote hyperpolarized voltage (down state), and red dots denote depolarized voltage (up state). The plot highlights the crucial role of the synaptic noise level in setting the typical duration of each of these states. It is easier to generate oscillatory states (alternating between up and down states) when the network is subjected to stronger synaptic noise. In contrast, the “green” states close to rest (quiescent periods), prevalent at low synaptic noise amplitudes, occupy a much smaller proportion of time when the synaptic noise becomes sufficiently intense.
Comparison of raster plots in Fig. 9 indicates that when noise intensity D is increased, the waves of activity start to merge. This hinders identification of states, based on the raster plot alone. The spectral entropy and the synchrony index increase with the noise intensity. We expect that at very high levels of noise the activity becomes constant (synchronous non-oscillatory), with rather high firing frequencies (see the schematic diagram in Fig. 7).
In the frequency domain, variation of the noise level leads to redistribution of power in the Fourier spectra of both the spike trains and the voltage series. Figure 10 presents spectra for the same noise intensities as in Fig. 9: from top to bottom, \(D = 0.5\times 10^{-5}\), \(D = 1.5\times 10^{-5}\), and \(D = 4.5\times 10^{-5}\). All spectra were averaged over ensembles of 200 neurons, see Eq. (7). The shapes of the spectral curves for spike trains and for voltage values are similar; the only noticeable difference is the somewhat faster decay at high frequencies in the voltage spectra. The left column shows mixtures of RS and LTS neurons; the right column corresponds to networks with RS and FS neurons. Under low levels of noise, spectral power is concentrated at very low frequencies, waves of collective activity are quite rare and, when they occur, they are mostly isolated events. On increasing the intensity D, waves of collective activity become more frequent whereas the periods of quiescence get shorter. During the periods of oscillatory activity, neurons are either firing at high frequency in the up state or rarely firing in the down state. This results in the increase of spectral power at low frequencies, with a distinct maximum near 10 Hz.
Comparison of the left and right columns in Fig. 10 shows that the spectral curves for networks with inhibitory LTS and FS neurons are similar. Comparing the peak values indicated in the panels, we see that the spectral power in the networks with FS neurons is slightly higher.
Remarkably, these power spectra, computed for single neurons, bear resemblance to experimentally obtained spectral curves. In the case of the voltage spectra (Fig. 10B and D), the \(1/f^{n}\) behavior is reported in experiments on up-down states with n in the range 1 to 3 (Bédard and Destexhe 2009; Millman et al. 2010; Baranauskas et al. 2011). Furthermore, our results match the experimental observation that the spike-train power spectra have striking differences in comparison to the voltage-series power spectra (Bair et al. 1994).
In the spike-train power spectra (Fig. 10A and C), there is no \(1/f^{n}\) scaling. As the noise intensity D is raised, the value related to the zeroth frequency bin of the spectra decreases. This indicates that irregularity is becoming less apparent given that \(\lim \limits _{f\to 0}\bar {S}(f)\) is related to the Fano factor which is a measure of irregularity (Middleton et al. 2003; Pena et al. 2018).
Regarding the \(1/f^{n}\) scaling, observed both experimentally and theoretically (Beggs and Plenz 2003; Kinouchi and Copelli 2006), in our case we see that noise acts upon the scaling (cf. n values in Fig. 10B, D). It has been shown elsewhere (Baranauskas et al. 2011) that the shape of up-down transitions in the membrane potentials could be a determining factor for modulation of the \(1/f^{n}\) scaling with \(n = 2\). Our observations support this experimental evidence. As D grows without bound, the transitions should vanish and, as a consequence, n decreases.
Additionally, an increase of noise shifts the peak values and peak frequencies in both the spike-train and voltage power spectra; compare the peak values in the different subpanels. Such spectral differences, with peaks that appear or disappear, are well documented in the cerebral cortex across states such as slow-wave sleep and wakefulness (Buzsaki 2006).
We have seen that synaptic noise enforces alternation of collective states and influences durations of stay in each of them. Below, we explain how the dynamics of a single neuron, embedded in the synaptic noise setup, is reflected in the collective properties of activity, how the transitions are affected by the composition of the network, and how the picture changes at different levels of noise.

3.4 Single neuron phase plane description of the synaptic noise setup

A deeper understanding of the single neuron behavior in the synaptic noise setup can be gained from analysis of the course of its phase plane dynamics during the simulation. Setting the derivatives \(\dot {v}\) and \(\dot {u}\) in Eq. (1) to zero renders the nullclines of the voltage and the membrane recovery variable which we denote below as \(\bar {u}\) and \(u^{*}\), respectively.
$$\begin{array}{@{}rcl@{}} \left\{\begin{array}{ll} u & =\bar{u} = \alpha v^{2}+\beta v+\gamma + I(t), \\ u & = u^{*}= bv. \end{array}\right. \end{array} $$
(11)
with \(\bar {u}\) being a (time-dependent) quadratic parabola and \(u^{*}\) a straight line. Synaptic noise enters this configuration implicitly, through its contribution to the current I.
Under the employed parameter values (see Table 1 above) and \(I = 0\), the nullclines intersect at two points of the phase plane. These points correspond to equilibria of the system; the left one is stable: without input current, the neuron exhibits no activity. When the instantaneous value of the current is increased, the nullcline \(\bar {u}\) is shifted upwards on the phase plane, and the equilibria move toward each other. At the value \(I_{\text {sn}}(t)=\frac {(\beta -b)^{2}}{4\alpha }-\gamma \) they merge and disappear in a saddle-node bifurcation. Absence of equilibria is sufficient to ignite a spike: the voltage grows until it reaches the threshold. In fact, if the value of the parameter b exceeds that of the parameter a (this holds for all four considered neuronal types), spiking starts at an even weaker current: at \(I_{H}=\frac {(\beta -b)^{2}-(a-b)^{2}}{4\alpha }-\gamma \) a subcritical Andronov-Hopf bifurcation takes place, the equilibrium loses stability and the solutions spiral out from its vicinity toward the spiking threshold. Recall that the values of \(\alpha ,\,\beta ,\,\gamma \) are common for all neuronal types (cf. Table 1). Hence, the onset of spiking at \(I_{H}\) is dictated for each type of neuron by the pertaining a and b (the remaining parameters c and d characterize the reset and are irrelevant in this context: a neuron that has made it to the reset is already in the spiking state).
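The two threshold currents can be evaluated directly from the parameters in Table 1; the short sketch below does this for the four neuronal classes and shows, in particular, the lower spiking threshold of the LTS neurons mentioned in Section 3.2.1:

```python
# Threshold currents of Section 3.4 evaluated with the Table 1 parameters:
# I_sn (saddle-node) and I_H (subcritical Andronov-Hopf, at which spiking
# already starts since b > a for all four classes).
alpha, beta, gamma = 0.04, 5.0, 140.0

def I_sn(b):
    return (beta - b) ** 2 / (4 * alpha) - gamma

def I_H(a, b):
    return ((beta - b) ** 2 - (a - b) ** 2) / (4 * alpha) - gamma

for name, a, b in [("RS", 0.02, 0.20), ("CH", 0.02, 0.20),
                   ("FS", 0.10, 0.20), ("LTS", 0.02, 0.25)]:
    print(f"{name}: I_sn = {I_sn(b):.2f}, I_H = {I_H(a, b):.2f}")
# RS/CH: I_sn = 4.00, I_H = 3.80; FS: I_H = 3.94; LTS: I_sn = 1.02, I_H = 0.69
```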
Evolution of every individual neuron is governed by its instantaneous location on the phase plane with respect to the nullclines; its dynamics is affected not only by its own state, but by the time-dependent (due to external and synaptic currents) position of the nullcline \(\bar {u}\). This allows us to see the collective dynamics from the local point of view of its individual participant; for it, the rest of the network is a background mechanism that moves the nullcline \(\bar {u}\) upwards and downwards.
Remarkably, this motion is not always negligible in comparison to the dynamics of the neuron on the phase plane: on arrival of synaptic input, the nullcline \(\bar {u}\) is swiftly shifted in the vertical direction. Sometimes this leads to spectacular effects: a rapid fall of \(\bar {u}\) may drag it across the instantaneous position of the neuron on the plane and thereby halt and reverse a developing action potential. Such events, however, are rare in a network like ours with its moderate connectivity, therefore most of the time the vertical displacements of the nullcline \(\bar {u}\) stay noticeably slower than the motion of the neuron.
With this local view in mind, we present in Figs. 11 and 12 the same simulation as in Fig. 8 focusing on the individual dynamics of two representative neurons, arbitrarily picked among the populations of, respectively, the neurons that fire only during the active periods and the neurons that fire throughout all stages of evolution. As we will see, distinctions in the behavioral patterns can be traced down to the phase planes of the neurons.
We begin with neuron #240, which fires only during the active periods, showing it in the time range between 3800 ms and 4400 ms. We split this range, which contains both active and quiescent states, into 6 smaller intervals \({\Delta } t_{i}\), each of either 50 or 100 ms duration. The upper panel in Fig. 11 shows the entire range and its breakdown into the set of \({\Delta } t_{i}\). The lower panels present for every \({\Delta } t_{i}\) the voltage series and the trajectory on the phase plane. Notably, in the hyperpolarized (down) state below reset, the neuron typically stays close to the instantaneous location of \(\bar {u}\), hence its motion is slow.
We summarize our observations as follows (for a clearer visualization of the moving trajectory we refer to the video in Online Resource 1):
  • Interval \({\Delta } t_{1}\): in the beginning, the neuron has just ended its evolution in an up state and passes through a down state. There, the trajectory mostly stays inside the parabola of the voltage nullcline \(\bar {u}\) below the reset value. Since the system is located above nullcline \(u^{*}\) of the recovery variable, the latter decreases. The down state can be viewed as a period of relaxation where the voltage is hyperpolarized. The trajectory slowly moves toward the state of rest (marked as a black square in Fig. 11).
  • Interval \({\Delta } t_{2}\): Before the trajectory arrives at the resting state, the neuron receives excitatory input from its presynaptic partners and the voltage nullcline \(\bar {u}\) is shifted upwards, then the neuron resumes the up state and fires several times. The dynamics of \({\Delta } t_{1} + {\Delta } t_{2}\) is largely repeated every \(\approx 100\) ms.
  • Interval \({\Delta } t_{3}\): Since most of the neurons are firing, their recovery variables are growing (recall that at every spike, d is added to the value of the variable u). At very high u the negative feedback to the voltage variable v is so strong that the neuron is forced to stop firing and follows the same path as in \({\Delta } t_{1}\) (see Tomov et al. 2016 for a description of this effect). The majority of the neurons in the network stops firing for the same reason, and the network does not supply synaptic input, hence the conductances \(G^{ex/in}\) relax. As a result, the nullcline \(\bar {u}\) lowers and the neuron approaches the state of rest.
  • Interval \({\Delta } t_{4}\): This is the middle of a quiescent period. The zoomed image shows how the neuron slowly moves toward the state of rest. The membrane recovery variable u monotonically decays. Synaptic noise perturbs the trajectory, but falls short of initiating a new up state.
  • Interval \({\Delta } t_{5}\): the neuron crosses the nullcline \(u^{*}\) of the recovery variable u. The latter does not decrease anymore while the voltage is fluctuating due to noisy synaptic input.
  • Interval \({\Delta } t_{6}\): finally the noise and/or arrival of inputs from presynaptic neurons are able to initiate a new active period.
The sequence of events in Fig. 11 discloses a major role of the membrane recovery variable u both in the transition from up state to down state and in the subsequent initiation of the new active phase by the noisy input. Because of high tonic firing during an up state, the total synaptic current into a neuron like #240 is very intense and roughly constant (its fluctuation amplitude depends on the synaptic noise level). Hence, the nullcline \(\bar {u}\) stays close to its highest position in the u-v diagram while the neuron climbs toward it due to the increments received by its recovery variable u after each spike. Finally, the neuron gets inside the parabolic nullcline \(\bar {u}\), has its firing probability decreased and eventually stops firing. The fact that the whole network enters a down state when this happens suggests that most neurons behave like #240, i.e. they dominate dynamics in the network. Excursion of the neuron to the left from the reset line while it is inside the parabola \(\bar {u}\) is the mechanism responsible for the hyperpolarized voltages seen in the down states of oscillatory regimes both in the deterministic (cf. Fig. 2) and noisy (cf. Fig. 8) setups. During a quiescent period, the nullcline \(\bar {u}\) is dragged to the bottom of the diagram putting the neuron close to rest. This explains the absence of hyperpolarized voltages during quiescent periods (cf. Fig. 8). In this situation the neuron is also close to the nullcline \(u^{*}\), so its eventual high jump to the region of the diagram below the nullcline \(u^{*}\) makes the neuron fire again and a new active period begins.
The behavior of the neuron #240 in Fig. 11 somewhat mimics the overall behavior of the network: it is highly active during up states of active periods and silent during down states of active periods and quiescent periods. In the following, we will refer to neurons of this type as “typical” in the sense that they represent the behavior of the majority of the network nodes.
The firing pattern of typical neurons contrasts with the behavior displayed in Fig. 12. There, we show the dynamics of the neuron #69, chosen because of its atypical behavior: it fires at all stages, in the up and down states of the active period and during the quiescent period. The dynamical features of this neuron are complementary to those of the typical neuron in Fig. 11, and combining the two views offers a deeper understanding of the mechanisms responsible for the intermittent changes between active and quiescent states.
A summary of our observations for the “atypical” neuron reads as follows (we refer the reader to the video in Online Resource 2 for a dynamical illustration of the effect):
  • Interval \({\Delta } t_{1}\): Contrary to the typical neuron, #69 starts its evolution with a low value of the recovery variable u. This indicates that during the previous up state the neuron did not fire much. The nullcline \(\bar {u}\) also begins this time interval at a low position, meaning that the neuron did not receive much excitatory input. This suggests that the neuron is heavily inhibited when the network is in a high firing state, possibly because it is postsynaptic to a large pool of inhibitory neurons. Hence, it is more likely that the neuron emits spikes during down states: there it receives less inhibition from its presynaptic neurons, which, like the typical neuron in Fig. 11, are relaxing toward rest. Due to synaptic noise or occasional inputs from other similar neurons, the neuron #69 fires at a low rate during the down state.
  • Interval \({\Delta } t_{2}\): When the network enters the up state (second half of the time interval), the neuron is again strongly inhibited and emits fewer spikes than a typical neuron.
  • Interval \({\Delta } t_{3}\): The network up state continues and then ends, while the neuron keeps a low probability of firing.
  • Interval \({\Delta } t_{4}\): This is the middle of the quiescent period. Note that by the end of the time interval the nullcline \(\bar {u}\) moves down, indicating a net inhibitory input to the neuron (an early sign of the recovery of network activity which will come in the next time steps). Even weak noisy synaptic inputs can make the neuron fire. Since its firing rate depends on the synaptic noise level, the duration of the quiescent period depends on it as well.
  • Interval \({\Delta } t_{5}\): The situation is still as in the previous time interval, but now we see a clear sign of the strong inhibition received by the neuron. After a spike in the first half of the time interval, when the neuron is close to emitting a new spike, it receives a strong inhibitory kick which hyperpolarizes its voltage and prevents the spike. The voltage grows again, but another strong inhibitory impulse causes the next setback. The inhibitory inputs come from the pool of presynaptic inhibitory neurons of #69, which are starting to “wake up” on the eve of a new active period. As a consequence of these inhibitory inputs, the nullcline \(\bar {u}\) moves further down.
  • Interval \({\Delta } t_{6}\): The network enters the up state of an active period and most neurons are active again (like the typical #240 in Fig. 11). This makes #69 fire, but because of the heavy inhibition not at the high rate of a typical neuron. Evidence of the strong increase in the inhibitory input received by this neuron comes from the dramatic downward movement of the nullcline \(\bar {u}\), out of the scale of the plot.
Excitatory neurons like the one in Fig. 12, which fire at low rates during all periods, will be called here “quiet” neurons (elsewhere, in the context of the deterministic setup, we called them “moderately active neurons”; Tomov et al. 2016). Quiet neurons are fewer than typical neurons; for the network of Fig. 8 they constitute, on average, about a quarter of the population.
The sequence of events depicted in Fig. 12 underscores the importance of inhibition and synaptic noise in shaping the network activity during both down states and quiescent periods. Strongly inhibited during up states, the quiet neurons become disinhibited by the end of those states and serve as a source for most of the spikes occurring during down states and quiescent periods. Thus, the firing pattern in the down states and quiescent periods is basically due to the recurrent excitatory synaptic connections among quiet neurons. The weak noise limit (cf. Fig. 5) discloses the nature of the intrinsic activity pattern generated by the population of quiet neurons: it is highly asynchronous and non-oscillatory; remarkably, it is also weak. This confirms, on the one hand, that the population of quiet neurons is small, and explains, on the other hand, why the network activity during down states and quiescent periods is asynchronous and irregular.
Due to the weakness of the intrinsic activity of quiet neurons, the likelihood that their pool can trigger a high firing (up) state in the network is low, and the synaptic noise level plays a pivotal role in controlling this likelihood. At a low synaptic noise level, the weak activity of the quiet neurons can restore the up state when the network is in a down state, but this can be repeated, generating a sequence of up-down oscillations, only for a short transient time. An example can be seen in Fig. S1. After the transient the network enters a quiescent period: a persistent low-activity regime characterized by asynchronous non-oscillatory activity. When the network is in a quiescent period, the activity of the quiet neurons is too weak to start a high firing state in the network; a certain minimal synaptic noise level is necessary to trigger this state. Below this minimal synaptic noise level, the network activity remains in the quiescent regime, as seen in the diagrams of Figs. 6 and 7. When the synaptic noise intensity increases above the minimal level, the recurrent excitation among quiet neurons gets stronger, as do the synaptic noise inputs to typical neurons, and the probability of the network exiting a quiescent period increases.
The above discussion highlights a fundamental difference between down states and quiescent periods. In the weak synaptic noise regime, when the network activity is dictated by quiet neurons, their weak agitation is able to restore a high firing state in the network when the latter is in a down state but not when it is in a quiescent period. This phenomenon bears some similarity to the behavior observed previously by us in deterministic networks of two-dimensional nonlinear integrate-and-fire neurons in the absence of external inputs (Tomov et al. 2014, 2016). There, the network state oscillates for a transient time between up and down states, before decaying to rest (cf. the behavior of the network in the deterministic setup in Section 3.1). The decay to rest always occurs when the state of the network in its high-dimensional deterministic phase space passes through a particular region of the phase space (a “hole”) which, when represented in the two-dimensional space of average voltage \(\langle v \rangle \) and recovery \(\langle u \rangle \) variables, overlaps with the region traversed by the network when it is in a down state (Tomov et al. 2016). The analogy between down/rest state for the deterministic network without external input and down/quiescent state for the network in the synaptic noise setup suggests a further analogy between the hole in the high-dimensional phase space of the deterministic network and a hole in the high-dimensional phase space of the stochastic network. The difference is that when the network state in the stochastic high-dimensional phase space falls into its corresponding hole it escapes to a quiescent state instead of the resting state, and it can leave this quiescent state when the synaptic noise intensity is above a minimum level.
To show that the recovery variable u has a stronger impact on the cessation of activity than the inhibitory neurons do, in Fig. 13 we compare the effects of this variable and of the synaptic currents \(I_{syn}\) for the same simulation as in Fig. 8: at selected time points we present the variables (v, u) of 200 neurons randomly picked from the network, together with their total synaptic input \(I_{syn}\).
The first row in Fig. 13 refers to an up-down transition. For \(T = 3880\) ms, which is the middle of the up state, some neurons have high values of u (due to the constant increments the u variable receives after each spike, cf. Eq. (2)) and consequently a strong negative feedback. The consequence of this negative feedback is to hyperpolarize the neurons, which can be seen in the graphs for \(T = 3900\) and 3920 ms where the voltages progressively move to the left of the graph. As to the \(I_{syn}\) histograms, they are mostly dispersed around positive values (with a reduction in the dispersion as T increases), indicating low inhibitory activity. This confirms an earlier observation that the inhibitory neurons are not primarily responsible for the up-down transition (Tomov et al. 2016). The second row in Fig. 13 refers to the down-up transition: for \(T = 3940\) ms, most neurons are hyperpolarized and the synaptic currents are sharply concentrated around zero, confirming that very few neurons (the quiet ones) are spiking, as shown in Fig. 8. As time increases, the distribution of neurons in the (v, u) plane becomes more dispersed and the voltages v move to depolarized values. This indicates that the neurons are free (without negative feedback) to spike again. Meanwhile, the distribution of synaptic currents widens and is dominantly excitatory (although there are some inhibitory currents). The third row in Fig. 13 refers to the quiescent state: from \(T = 4000\) to 4040 ms, the variable u moves down and v moves to hyperpolarized values. As observed in Fig. 8 for the same condition, there are very few spikes. Only after about 300 ms do the voltages start to grow again and firing restarts in a new active period.

3.5 Influence of synaptic noise upon different states

Having demonstrated in the previous section that synaptic noise affects different phases of activity, we now proceed to a quantitative description. We compute the average duration of active and quiescent periods in sufficiently long trials (we take \(6\times 10^{5}\) ms). Mean duration is an important measure for characterizing and modeling alternating states, e.g. in the course of transitions between brain rhythms (Lo et al. 2002; Ahn and Rubchinsky 2013).
Results of simulations confirm that the duration of stay in both active and quiescent periods is affected by the synaptic noise level (Fig. 14), but in a twofold way: the growth of noise intensity lengthens active periods and shortens the quiescent ones. This implies that synaptic noise influences transitions between the states. Remarkably, the average duration of stay in the quiescent state falls rapidly as the noise amplitude increases from small values, but seems to saturate at moderate noise intensities. Apparently, the minimal time that the neurons need to organize a new collective activity is dynamically constrained by the network topology and the deterministic characteristic times in the phase space: in the studied case it cannot be made lower than ≈ 80-100 ms.
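As an illustration of how such statistics can be extracted, the sketch below segments a population firing-rate time series \(r(t; {\Delta } t)\) into active and quiescent runs by thresholding and returns the mean run lengths; the threshold and the synthetic rate trace are assumptions made for the example, not the criterion used to produce Fig. 14.

```python
import numpy as np

def mean_durations(rate, dt_bin, threshold):
    """Mean durations (ms) of active (rate > threshold) and quiescent runs.

    rate: 1D array with the population firing rate, one value per bin.
    dt_bin: bin width in ms. threshold: rate level separating the two regimes
    (an illustrative choice; the criterion used in the paper may differ).
    """
    active = rate > threshold
    flips = np.flatnonzero(np.diff(active.astype(int))) + 1   # run boundaries
    edges = np.concatenate(([0], flips, [active.size]))
    lengths = np.diff(edges) * dt_bin                          # run lengths in ms
    states = active[edges[:-1]]                                # state of each run
    return lengths[states].mean(), lengths[~states].mean()

# Usage with a synthetic rate trace standing in for a 6x10^5 ms trial in 1-ms bins
rng = np.random.default_rng(0)
rate = rng.poisson(5.0, 600000)
mean_active, mean_quiescent = mean_durations(rate, dt_bin=1.0, threshold=10.0)
```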
Depending on the network composition, the action of synaptic noise upon the average duration of active and quiescent periods can be weaker or stronger. Although the same qualitative tendencies persist, quantitative aspects depend on the types of participating neurons as well as on the proportions between them. An exemplary comparison is shown in Fig. 15. Simulations with two types of inhibitory neurons indicate that the LTS neurons, compared to the FS ones, seem to postpone the termination of the active period (top left panel): at low noise the duration of oscillatory activity is higher if LTS neurons are present. This implies that inhibitory neurons influence the transition from the active to the quiescent period. In contrast, the duration of the quiescent period (bottom right panel) displays no dependence on the type of inhibitory neuron: the corresponding curves in the plot nearly coincide. This indicates that the transition from the quiescent to the active period is regulated exclusively by excitatory neurons. Indeed, since inhibitory neurons cannot excite a network, every stay in the quiescent period must be terminated by an excitatory neuron, or by a group of excitatory neurons.
Introducing diversity among excitatory neurons, we observe certain quantitative changes as well. By replacing 20% of the RS neurons by CH neurons, we obtain a network built of 16% CH, 64% RS and 20% LTS neurons. This composition is much less sensitive to the action of synaptic noise. The tendency of active periods to grow under an increase of noise is practically absent (see top left panel in Fig. 15), and at all values of D the average active period is shorter than the corresponding silent one. As for the latter, however, there is a systematic shift. Compared to the case when the whole excitatory population is of the RS type, in the mixture with CH neurons the mean duration of quiescent periods decreases to lower values, below \(10^{2}\) ms. This decrease is a combination of synaptic noise- and network-related effects. A quiescent period ends whenever synaptic noise, or some of the few quiet excitatory neurons which fire during the quiescent period, or both, drive one of the resting majority of typical excitatory neurons across the firing threshold, provided that this neuron is able to activate its postsynaptic neighbors and thereby initiate a wave of activity. If the neighbors fail to fire, the quiescent period continues. The mean time required for the first neuron to fire is the same for the RS and the CH neuron (see Section 3.2.1). However, the RS neuron issues just one isolated action potential, whereas the CH neuron generates a series of spikes, raising with each of them the conductances of its postsynaptic neighbors and thereby creating conditions for their activation and subsequent collective spiking. In this sense, a burst of a CH neuron has a higher chance of initiating collective activity than a single spike of an RS neuron. Therefore, in a network with CH neurons the quiescent periods end earlier. This confirms our conjecture that excitatory neurons influence the length of quiescent periods.
Histograms of the duration of stay in the active period in Fig. 15C1-2 show exponential distributions, but are somewhat fractured (cf. the logarithmic representations in the insets). Distributions of this kind have been reported previously (Duc et al. 2015; Tomov et al. 2016). In the former case the authors related cessation of activity to passages through a specific region in the phase space of their deterministic network (the “hole” mentioned above), thereby explaining the quantization of cessation times. In our case, the behavior of the system is similar. Assuming the picture of a hole in the network phase space through which the network can escape from the active to the quiescent state, and a synaptic noise level high enough to allow multiple transitions from the quiescent to the active state, the quantization of active period durations can be explained keeping in mind that an active period is made of up-down cycles, each one with approximately the same period T. Since the escapes from the active to the quiescent state always occur at the end of an up-down cycle, the duration of an active state can only increase by integer multiples of T. The distributions of the duration of stay in the quiescent period, shown in Fig. 15D1-2, possess an exponential character as well, but without a fractured shape. This can be explained by the non-oscillatory nature of the quiescent periods.
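The fractured exponential shape can be illustrated with a toy model built on the assumptions just stated: if every up-down cycle lasts approximately T and the escape to quiescence occurs with a fixed probability at the end of each cycle, the active-period durations are geometrically distributed over integer multiples of T, i.e. an exponential envelope with gaps between the allowed values. In the sketch below, the cycle period and escape probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
T_cycle = 100.0   # approximate duration of one up-down cycle in ms (assumption)
p_escape = 0.15   # probability of escaping to quiescence at the end of a cycle (assumption)

# The number of completed cycles before escape is geometrically distributed,
# so active-period durations can only take the values k * T_cycle, k = 1, 2, ...
n_cycles = rng.geometric(p_escape, size=10000)
durations = n_cycles * T_cycle

counts, bin_edges = np.histogram(durations, bins=np.arange(0.0, 2000.0, 25.0))
# Only bins containing multiples of T_cycle are populated, and the envelope of
# those peaks decays exponentially -- a "fractured" exponential histogram.
```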
It is important to note that the results concerning the influence of the synaptic noise intensity on the mean duration of the different regimes can be translated to other manipulations that ultimately enhance the synaptic effect. For instance, if the noise intensity is kept constant but the network size is enlarged, the same effect is expected: the mean degree increases and consequently so does the synaptic effect (cf. Eq. (4)). Our simulations with increased network size and connection probability have confirmed this effect; an example is presented in Fig. S2, where we show that increasing the number of neurons to 5125 (5 times the standard network in this work) at constant p creates a synchronous non-oscillatory activity type. In contrast, if p is lowered, the network goes back to intermittent dynamics, although in Fig. S2 the quiescent activity is rather rare (two periods are identified in a 60-second simulation) and for most of the time the network remains in the active state of up/down oscillations. Finally, a variation of the network size N compensated by a simultaneous change in the connectivity p, so that the mean degree \(p N\) stays constant, keeps the synaptic effects (and, hence, the prevalent types of dynamics in the network) largely unchanged.
Let us have a look at shorter timescales: what happens inside the active periods? How does noise influence the collective up and down states? In Fig. 16 we show the dependence of the average durations of stay in the up and down states on the noise intensity D, over the same range of D as in Figs. 14 and 15. Whereas the average stay in the down state gets shorter as the noise grows, the lifetime in the up state barely changes.
The down state is the only state that is sensitive to the type of inhibitory neuron. There is a clear shift upwards (see red and black curves) if the LTS neurons are replaced with FS ones. This means that transitions from the down state happen faster in the presence of LTS neurons. The sensitivity of the down state can be related to the interpretation in Tomov et al. (2016) where the cessation of self-sustained oscillatory activity was assigned to passages through a small region of instability (the hole), located in the phase space close to the down state. In our current synaptic noise setup, the more noise, the higher the disturbance in the region of instability at the down state and the shorter its lifetime.

3.6 Comparison with other neuron models

We expect the above results on intermittent transitions between active and quiescent states, and on the role of noise in these transitions, to stay qualitatively valid in networks based on other two-variable integrate-and-fire-type neuron models. To support this conjecture, below we apply the same procedure as in Fig. 8 to the adaptive exponential integrate-and-fire (AdEx) model (Brette and Gerstner 2005; Gerstner et al. 2014).
The AdEx model is a two-variable neuron model that differs from the Izhikevich model in the equation for the voltage: instead of a polynomial dependence on v, the AdEx features an exponential one. To run the AdEx network under the same conditions as the Izhikevich one, without having to re-scale either the synaptic variables or the noise amplitude, we write the AdEx equations so that the input-frequency relationship and the nullclines \(\bar {u}\) and \(u^{*}\) are similar to those of the Izhikevich model, leading to:
$$ \left\{\begin{array}{ll} \dot{v} &= -g_{L}(v - E_{L}) + g_{L}\,{\Delta}_{T} \exp\!\left(\dfrac{v-v_{T}}{{\Delta}_{T}}\right) - 46 - u + I(t), \\ \dot{u} &= a(b\,v-u), \end{array}\right. \qquad (12) $$
where \({\Delta }_{T}= 30\), \(g_{L}\)= 1, \(v_{T}=-65\), and \(E_{L}=c\). The parameters (a, b, c, d) are the same as in the Izhikevich model. Along with Eq. (12), the model includes the fire-and-reset rule given by Eq. (2). A comparison of the Izhikevich nullclines to the AdEx nullclines is performed in Fig. 17A and B where one can see that, for these chosen parameters and at the resting state (I = 0), the fixed points for the two models are very close and the shape of the nullcline \(\bar {u}\) is similar.
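A direct numerical sketch of Eq. (12), integrated with a simple Euler scheme under the fire-and-reset rule of Eq. (2), is given below; the RS-like values of (a, b, c, d), the spike cutoff, the time step and the constant test input are illustrative assumptions.

```python
import numpy as np

# AdEx neuron written as in Eq. (12); Delta_T = 30, g_L = 1, v_T = -65, E_L = c.
# The (a, b, c, d) values below are RS-like parameters used only as an example.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
Delta_T, g_L, v_T = 30.0, 1.0, -65.0
E_L = c
v_cut = 30.0          # spike cutoff, assumed equal to the Izhikevich threshold of Eq. (2)

dt, T = 0.05, 1000.0  # integration step and total time in ms (assumptions)
n = int(T / dt)
v, u = c, b * c
I = 10.0              # constant test input (assumption)
V = np.empty(n)

for k in range(n):
    dv = -g_L * (v - E_L) + g_L * Delta_T * np.exp((v - v_T) / Delta_T) - 46.0 - u + I
    du = a * (b * v - u)
    v += dt * dv
    u += dt * du
    if v >= v_cut:    # fire-and-reset rule: reset v and increment u by d
        v = c
        u += d
    V[k] = v
```

Swapping this voltage equation into the Izhikevich sketch given earlier, while keeping the same reset rule and synaptic drive, is all that is needed to repeat the comparison at the single-neuron level.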
In Fig. 17C-F we see behavior qualitatively close to that of Fig. 8A-D: raster plots with \(r(t; {\Delta } t)\) indicate that active (oscillatory) and quiescent (non-oscillatory) behaviors switch sporadically, with voltages fluctuating among three different positions (hyperpolarized, resting, and depolarized) and the recovery variable u oscillating and accumulating depending on the period. A noticeable difference, however, occurs when the average voltage in Fig. 17D for the AdEx model is compared to the one in Fig. 8B for the Izhikevich model: for the AdEx model the voltage does not stay long enough in the hyperpolarized or depolarized states to create correspondingly prominent peaks in the histogram (the peak for the resting voltage is much more prominent). This difference is related to the voltage dynamics of the AdEx neuron model, whose growth follows an exponential law, which is much faster than the quadratic one. The effect is also reflected in the average voltage: the peaks and troughs are sharper than those of the Izhikevich neuron model.

4 Discussion

Networks of LIF neurons have been extensively scrutinized in the literature to understand their properties under different conditions (Brunel 2000; Mattia and Del Giudice 2002; Vogels and Abbott 2005a; Cessac and Viéville 2008; Wang et al. 2011; Litwin-Kumar and Doiron 2012; Kriener et al. 2014; Ostojic 2014; Potjans and Diesmann 2014; Yim et al. 2014; Landau et al. 2016; Jercog et al. 2017; Tartaglia and Brunel 2017). Far fewer works have been devoted to systematic investigations of networks of other spiking neuron models. Here we have studied networks of Izhikevich neurons in the presence of synaptic noise. We have found in these networks a rich variety of activity patterns, consisting of synchronous and asynchronous non-oscillatory states and oscillatory states with variable degrees of synchrony. Moreover, these networks exhibit intermittent noise-induced transitions between oscillatory and quiescent states. These transitions are irregular and affected by the synaptic noise level and the network composition.
A systematic analysis of time series, plots of neuron spikes, firing rates, average voltage and membrane recovery variable, and power spectra revealed the characteristics of the oscillatory and quiescent states, similar to observed cortical states (Steriade et al. 2001; El Boustani et al. 2007; Greenberg et al. 2008; Harris and Thiele 2011; Sanchez-Vives et al. 2017): during oscillations the membrane voltages of the neurons fluctuate between hyperpolarized (down) and depolarized (up) states like in the so-called “synchronized” states seen in in vivo preparations and during slow-wave sleep and anesthesia; in the quiescent state neurons display very low and irregular spiking activity like in the so-called “desynchronized” states seen in quiet rest. As far as we know, phenomena like oscillations between hyperpolarized and depolarized states, and noise-induced intermittent transitions between oscillatory and low activity regimes have not been reported in networks of LIF neurons.
By using the single-neuron phase space representation of the network dynamics, combined with statistical assessments of the duration of stay in the oscillatory and quiescent states, we were able to explain the roles played by synaptic noise and network composition in the durations of these states and the transitions between them. We were also able to explain the origin of the up and down oscillations and the asynchronous non-oscillatory nature of the quiescent states.
Up and down states, in which the average voltage of network neurons is, respectively, depolarized and hyperpolarized, occur during oscillatory (active) periods in the network. They can be understood in terms of the single neuron phase space in the same way as explained in the noiseless case (Tomov et al. 2016). During an up state, when most of the neurons fire tonically, the parabolic-shaped voltage nullcline is kept in the upper part of the phase plane while the recovery variable moves steadily upwards due to neuronal firing. Eventually the neuron finds itself inside the area bounded by the voltage nullcline; it is forced to move to the hyperpolarized region of the phase plane and then downwards, relaxing toward rest. This corresponds to a down state. In the latter, the activity of the network is sustained by quiet neurons, which were inhibited during the up state and became disinhibited during the down state. In the course of time, firing of the quiet neurons is able to excite some of the relaxing post-active neurons; this triggers a new wave of excitation in the network, starting the next up state. This mechanism strongly depends on the recovery variable and its instantaneous increment (cf. Eq. (2)), which causes spike-dependent adaptation (Izhikevich 2007). Together, they constitute a sort of intrinsic negative feedback mechanism which decreases network excitability during the up state, as proposed by other authors in different contexts (Contreras et al. 1996; Sanchez-Vives and McCormick 2000; Bazhenov et al. 2002; Compte et al. 2003; Hill and Tononi 2005; Holcman and Tsodyks 2006; Parga and Abbott 2007; Benita et al. 2012; Chen et al. 2012; Ghorbani et al. 2012; Mattia and Sanchez-Vives 2012; Jercog et al. 2017; Tartaglia and Brunel 2017; Levenstein et al. 2018).
The basic mechanism behind up and down oscillations acts in both the deterministic and the synaptic noise setups. Thus, up-down oscillations are caused not by synaptic noise but by the intrinsic dynamics of the network. The disclosure of the same basic dynamical properties in the network with AdEx neurons (Section 3.6) leads us to expect that this mechanism is common to networks populated by neurons with adaptation variables. In line with what has been pointed out elsewhere (Harris and Thiele 2011; Mattia and Sanchez-Vives 2012; Jercog et al. 2017), the up/down oscillations result from an interaction between recurrent synaptic connections and adaptation.
Interestingly, the comparison between networks populated with Izhikevich and AdEx neurons indicates some differences between them: although the global dynamical behavior of the two networks is similar, the local voltage profile of their neurons is different (cf. Figs. 8B and 17D). To the best of our knowledge, this is one of the first times in which the Izhikevich and AdEx neuron models are compared through their effects on the network.
The major difference between the deterministic and the synaptic noise setups is that in the deterministic case the oscillations are transient, while in the presence of noise they become persistent. But the durations of the up and down phases and of an up-down cycle are approximately the same, depending only on the characteristics of the network neurons.
The up-down oscillations can be seen as a sort of default activity mode (Sanchez-Vives et al. 2017) of the system (at least in the region of the parameter space considered here). In the deterministic, noiseless, setup this activity eventually dies out, preceded, as we have shown, by the passage of the system through a specific region of its phase space we called a “hole” (Tomov et al. 2016). Through the hole, located close to the domain traversed by the system during a down phase, the system can escape the up-down oscillations and decay to rest. In the noiseless case the system sooner or later gets into the hole and the network activity dies out. In the synaptic noise setup, this hole-like region in the network’s high-dimensional phase space still exists but because of the noise the system does not decay to rest when it passes through it; instead, the system is dragged to the quiescent state.
As in the down state, in the quiescent state the network sustains activity, internally generated by quiet neurons via their recurrent synaptic connections and regulated by the synaptic noise level: it is weak for weak synaptic noise, and strong for strong synaptic noise. Being dictated by noise, activity during a quiescent period is asynchronous and irregular. Because of the passage through the hole the quiescent state has, in general, a longer duration than the down state. Hence, typical neurons which are relaxing in the hyperpolarized region of the single neuron phase space have time to decay to the phase space region around rest. This explains why during quiescent periods the average voltage is close to the resting voltage and is not hyperpolarized as in the down states. For weak synaptic noise, activity generated by the quiet neurons is insufficient to take the network out of the quiescent state: the system remains inactive. For moderate to high synaptic noise intensities, activity of quiet neurons gets stronger and even the neurons that are close to rest can fire, so eventually the global activity is reignited and an up state commences.
The basic effect of the synaptic noise level is to increase or decrease the average duration of the quiescent periods. In other words, synaptic noise can act as a facilitator of transitions between quiescent and active states, and the intermittency between these states results from the stochastic nature of the neuronal firing during quiescent periods as well as from the irregularity of the trajectory of the system in its high-dimensional phase space (which determines whether it will hit the hole). Once the system enters the hole, the duration of stay in the quiescent state depends on the noise intensity. For very low noise, the system stays in the quiescent state essentially forever, displaying only residual activity (see Fig. 5). For moderate to high noise, the system eventually leaves the quiescent state and the up-down oscillations resume. The residence time in the quiescent state gets smaller as the synaptic noise intensity increases. For very strong noise the system may not even enter the hole because, in such a case, both typical and quiet neurons have a high probability of firing at all moments. This explains the disappearance of quiescent periods in the high noise regime. For still higher levels of synaptic noise intensity, even the down periods disappear and the network features constant activity.
Our study also indicates that inhibition affects transitions from active to quiescent periods and the duration of down states. The average stay in the down states is shorter when the inhibitory neurons of the network are of the LTS type than when they are of the FS type (cf. Fig. 16). This may be related to experimental evidence showing that inhibitory neurons control cortical oscillatory up and down states (Sanchez-Vives et al. 2010). The authors of that study progressively blocked inhibitory cells during a spontaneous up state and showed that this blockage shortened the duration of up states and lengthened the duration of down states. Since the LTS neurons respond to noise faster, a replacement of all LTS neurons in the network by FS neurons can be viewed as a reduction of inhibition; thereby, the corresponding increase in the average duration of down states relates our observations in the model to the experimental evidence. Similar transitions from the up to the down states have been studied before (Holcman and Tsodyks 2006; Xu et al. 2016).
One of the objectives of our study was to check whether dynamics in a network of neurons with adaptation is sensitive to the composition of the network and to the electrophysiological types of individual neurons. Generic qualitative features of the dynamics, like intermittent oscillations between active (up/down) and quiescent states, the shape of the power spectra, etc., turned out to be persistent for all neuronal subtypes as well as for their mixtures; at the individual level this can be traced back to the common shape of the nullclines. At the same time, we established that certain quantitative measures (like average durations) depend on the proportions of neuron types.
There are many ways for noise to enter a neural network model (Faisal et al. 2008; Longtin 2013; Destexhe and Rudolph-Lilith 2012; Brochini et al. 2016; McDonnell et al. 2016). In this work we considered the variant in which it affects the synaptic variables. By doing so, we were able to study the effect of noise at the molecular level on the behavior of the system at the network level. Since noise at the synaptic level is related to fluctuations in the release of neurotransmitters and in the amplitude of miniature postsynaptic currents (Rao et al. 2007; Liu et al. 2010; Tononi and Cirelli 2014; Kavalali 2015), which are phenomena at scales much smaller than that of voltage changes, the weak noise intensities D we considered here capture very small noisy events. Furthermore, because synaptic noise is filtered by the conductance variables, its effect upon neuronal voltages is akin to a colored noise input, which is more biologically realistic than adding noise, via e.g. Poisson processes, directly to the neuron voltages.
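The filtering argument can be made concrete with a toy conductance equation: white noise injected into an exponentially relaxing conductance produces an Ornstein-Uhlenbeck-type (colored) process, so the fluctuations that reach the voltage are already low-pass filtered. In the Euler-Maruyama sketch below, the decay time constant and the noise intensity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.05        # integration step in ms (assumption)
tau_g = 5.0      # synaptic decay time constant in ms (assumption)
D = 0.1          # noise intensity (assumption)
n_steps = 200000

g = np.empty(n_steps)
g[0] = 0.0
for k in range(1, n_steps):
    # Euler-Maruyama step: the white noise enters the conductance equation,
    # not the voltage; the resulting g(t) is an Ornstein-Uhlenbeck process
    g[k] = g[k - 1] - dt * g[k - 1] / tau_g + np.sqrt(2.0 * D * dt) * rng.standard_normal()

# The autocorrelation of g decays over ~tau_g, so the input seen by the voltage
# is temporally correlated (colored) rather than white.
```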
Our work captures mechanisms at different levels of neural processing, with a potential contribution to current endeavors to model multiscale brain mechanisms and their role in normal and pathological function (Mejias et al. 2016; Neymotin et al. 2016; Lytton et al. 2017; Schwalger et al. 2017). As an example, the synaptic noise-induced switches between periods of oscillatory and irregular activity might support the fast formation and destruction of cell assemblies.

Acknowledgements

This paper was developed within the scope of the IRTG 1740 / TRP 2015/50122-0, funded by DFG / FAPESP. RP and ACR are also part of the Research, Innovation and Dissemination Center for Neuromathematics (FAPESP grant 2013/07699-0). RP is supported by a FAPESP scholarship (2013/25667-8). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0). We are thankful to P. Tomov for stimulating discussions.

Compliance with Ethical Standards

Conflict of interests

The authors declare that they have no conflict of interest.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Appendix

Electronic supplementary material

Below is the link to the electronic supplementary material.
References
Zurück zum Zitat Ahn, S., & Rubchinsky, L.L. (2017). Potential mechanisms and functions of intermittent neural synchronization. Frontiers in Computational Neuroscience, 11, 44.CrossRefPubMedPubMedCentral Ahn, S., & Rubchinsky, L.L. (2017). Potential mechanisms and functions of intermittent neural synchronization. Frontiers in Computational Neuroscience, 11, 44.CrossRefPubMedPubMedCentral
Zurück zum Zitat Amit, D.J., & Brunel, N. (1997). Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex, 7, 237–252.CrossRefPubMed Amit, D.J., & Brunel, N. (1997). Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex, 7, 237–252.CrossRefPubMed
Zurück zum Zitat Bair, W., Koch, C., Newsome, W., Britten, K. (1994). Power spectrum analysis of MT neurons in the behaving monkey. Journal of Neuroscience, 14, 2870–2892.CrossRefPubMed Bair, W., Koch, C., Newsome, W., Britten, K. (1994). Power spectrum analysis of MT neurons in the behaving monkey. Journal of Neuroscience, 14, 2870–2892.CrossRefPubMed
Zurück zum Zitat Baranauskas, G., Maggiolini, E., Vato, A., Angotzi, G., Bonfanti, A., Zambra, G., Fadiga, L. (2011). Origins of 1/f 2 scaling in the power spectrum of intracortical local field potential. Journal of Neurophysiology, 107, 984–994.CrossRefPubMed Baranauskas, G., Maggiolini, E., Vato, A., Angotzi, G., Bonfanti, A., Zambra, G., Fadiga, L. (2011). Origins of 1/f 2 scaling in the power spectrum of intracortical local field potential. Journal of Neurophysiology, 107, 984–994.CrossRefPubMed
Zurück zum Zitat Bazhenov, M., Timofeev, I., Steriade, M., Sejnowski, T.J. (2002). Model of thalamocortical slow-wave sleep oscillations and transitions to activated states. Journal of Neuroscience, 22, 8691–8704.CrossRefPubMed Bazhenov, M., Timofeev, I., Steriade, M., Sejnowski, T.J. (2002). Model of thalamocortical slow-wave sleep oscillations and transitions to activated states. Journal of Neuroscience, 22, 8691–8704.CrossRefPubMed
Zurück zum Zitat Bédard, C, & Destexhe, A. (2009). Macroscopic models of local field potentials and the apparent 1/f noise in brain activity. Biophysical Journal, 96, 2589–2603.CrossRefPubMedPubMedCentral Bédard, C, & Destexhe, A. (2009). Macroscopic models of local field potentials and the apparent 1/f noise in brain activity. Biophysical Journal, 96, 2589–2603.CrossRefPubMedPubMedCentral
Zurück zum Zitat Beggs, J.M., & Plenz, D. (2003). Neuronal avalanches in neocortical circuits. Journal of Neuroscience, 23, 11167–11177.CrossRefPubMed Beggs, J.M., & Plenz, D. (2003). Neuronal avalanches in neocortical circuits. Journal of Neuroscience, 23, 11167–11177.CrossRefPubMed
Zurück zum Zitat Benita, J.M., Guillamon, A., Deco, G., Sanchez-Vives, M.V. (2012). Synaptic depression and slow oscillatory activity in a biophysical network model of the cerebral cortex. Frontiers in Computational Neuroscience, 6, 64.CrossRefPubMedPubMedCentral Benita, J.M., Guillamon, A., Deco, G., Sanchez-Vives, M.V. (2012). Synaptic depression and slow oscillatory activity in a biophysical network model of the cerebral cortex. Frontiers in Computational Neuroscience, 6, 64.CrossRefPubMedPubMedCentral
Zurück zum Zitat Blanco, S., Garay, A., Coulombie, D. (2013). Comparison of frequency bands using spectral entropy for epileptic seizure prediction. ISRN Neurology, 2013, 287327.CrossRefPubMedPubMedCentral Blanco, S., Garay, A., Coulombie, D. (2013). Comparison of frequency bands using spectral entropy for epileptic seizure prediction. ISRN Neurology, 2013, 287327.CrossRefPubMedPubMedCentral
Zurück zum Zitat Bonifazi, P., Goldin, M., Picardo, M.A., Jorquera, I., Cattani, A., Bianconi, G., Represa, A., Ben-Ari, Y., Cossart, R. (2009). Gabaergic hub neurons orchestrate synchrony in developing hippocampal networks. Science, 326, 1419–1424.CrossRefPubMed Bonifazi, P., Goldin, M., Picardo, M.A., Jorquera, I., Cattani, A., Bianconi, G., Represa, A., Ben-Ari, Y., Cossart, R. (2009). Gabaergic hub neurons orchestrate synchrony in developing hippocampal networks. Science, 326, 1419–1424.CrossRefPubMed
Zurück zum Zitat Brette, R., & Gerstner, W. (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology, 94, 3637–3642.CrossRefPubMed Brette, R., & Gerstner, W. (2005). Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology, 94, 3637–3642.CrossRefPubMed
Zurück zum Zitat Brochini, L., de Andrade Costa, A., Abadi, M., Roque, A.C., Stolfi, J., Kinouchi, O. (2016). Phase transitions and self-organized criticality in networks of stochastic spiking neurons. Scientific Reports, 6, 35831.CrossRefPubMedPubMedCentral Brochini, L., de Andrade Costa, A., Abadi, M., Roque, A.C., Stolfi, J., Kinouchi, O. (2016). Phase transitions and self-organized criticality in networks of stochastic spiking neurons. Scientific Reports, 6, 35831.CrossRefPubMedPubMedCentral
Zurück zum Zitat Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8, 183–208.CrossRefPubMed Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8, 183–208.CrossRefPubMed
Zurück zum Zitat Buzsaki, G. (2006). Rhythms of the Brain. Oxford: Oxford University Press.CrossRef Buzsaki, G. (2006). Rhythms of the Brain. Oxford: Oxford University Press.CrossRef
Zurück zum Zitat Buzsáki, G., & Draguhn, A. (2004). Neuronal oscillations in cortical networks. Science, 304, 1926–1929.CrossRefPubMed Buzsáki, G., & Draguhn, A. (2004). Neuronal oscillations in cortical networks. Science, 304, 1926–1929.CrossRefPubMed
Zurück zum Zitat Celka, P. (2007). Statistical analysis of the phase-locking value. IEEE Signal Processing Letters, 14, 577–580.CrossRef Celka, P. (2007). Statistical analysis of the phase-locking value. IEEE Signal Processing Letters, 14, 577–580.CrossRef
Zurück zum Zitat Cessac, B., & Viéville, T. (2008). On dynamics of integrate-and-fire neural networks with conductance based synapses. Frontiers in Computational Neuroscience, 2, 2.CrossRefPubMedPubMedCentral Cessac, B., & Viéville, T. (2008). On dynamics of integrate-and-fire neural networks with conductance based synapses. Frontiers in Computational Neuroscience, 2, 2.CrossRefPubMedPubMedCentral
Zurück zum Zitat Chen, J.Y., Chauvette, S., Skorheim, S., Timofeev, I., Bazhenov, M. (2012). Interneuron-mediated inhibition synchronizes neuronal activity during slow oscillation. Journal of Physiology (London), 590, 3987–4010.CrossRef Chen, J.Y., Chauvette, S., Skorheim, S., Timofeev, I., Bazhenov, M. (2012). Interneuron-mediated inhibition synchronizes neuronal activity during slow oscillation. Journal of Physiology (London), 590, 3987–4010.CrossRef
Zurück zum Zitat Compte, A., Brunel, N., Goldman-Rakic, P.S., Wang, X.J. (2000). Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cerebral Cortex, 10, 910–923.CrossRefPubMed Compte, A., Brunel, N., Goldman-Rakic, P.S., Wang, X.J. (2000). Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cerebral Cortex, 10, 910–923.CrossRefPubMed
Zurück zum Zitat Compte, A., Sanchez-Vives, M.V., McCormick, D.A., Wang, X.J. (2003). Cellular and network mechanisms of slow oscillatory activity (< 1 Hz) and wave propagations in a cortical network model. Journal of Neurophysiology, 89, 2707–2725.CrossRefPubMed Compte, A., Sanchez-Vives, M.V., McCormick, D.A., Wang, X.J. (2003). Cellular and network mechanisms of slow oscillatory activity (< 1 Hz) and wave propagations in a cortical network model. Journal of Neurophysiology, 89, 2707–2725.CrossRefPubMed
Zurück zum Zitat Contreras, D., Timofeev, I., Steriade, M. (1996). Mechanisms of long-lasting hyperpolarizations underlying slow sleep oscillations in cat corticothalamic networks. Journal of Physiology (London), 494, 251–264.CrossRef Contreras, D., Timofeev, I., Steriade, M. (1996). Mechanisms of long-lasting hyperpolarizations underlying slow sleep oscillations in cat corticothalamic networks. Journal of Physiology (London), 494, 251–264.CrossRef
Zurück zum Zitat Contreras, D. (2004). Electrophysiological classes of neocortical neurons. Neural Networks, 17, 633–646.CrossRefPubMed Contreras, D. (2004). Electrophysiological classes of neocortical neurons. Neural Networks, 17, 633–646.CrossRefPubMed
Zurück zum Zitat Destexhe, A., Rudolph, M., Fellous, J.M., Sejnowski, T.J. (2001). Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons. Neuroscience, 107, 13–24.CrossRefPubMedPubMedCentral Destexhe, A., Rudolph, M., Fellous, J.M., Sejnowski, T.J. (2001). Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons. Neuroscience, 107, 13–24.CrossRefPubMedPubMedCentral
Zurück zum Zitat Destexhe, A. (2009). Self-sustained asynchronous irregular states and up–down states in thalamic, cortical and thalamocortical networks of nonlinear integrate-and-fire neurons. Journal of Computational Neuroscience, 27, 493.CrossRefPubMed Destexhe, A. (2009). Self-sustained asynchronous irregular states and up–down states in thalamic, cortical and thalamocortical networks of nonlinear integrate-and-fire neurons. Journal of Computational Neuroscience, 27, 493.CrossRefPubMed
Zurück zum Zitat Destexhe, A., & Rudolph-Lilith, M. (2012). Neuronal noise. New York: Springer.CrossRef Destexhe, A., & Rudolph-Lilith, M. (2012). Neuronal noise. New York: Springer.CrossRef
Zurück zum Zitat Duc, K.D., Parutto, P., Chen, X., Epsztein, J., Konnerth, A., Holcman, D. (2015). Synaptic dynamics and neuronal network connectivity are reflected in the distribution of times in up states. Frontiers in Computational Neuroscience, 9, 96. Duc, K.D., Parutto, P., Chen, X., Epsztein, J., Konnerth, A., Holcman, D. (2015). Synaptic dynamics and neuronal network connectivity are reflected in the distribution of times in up states. Frontiers in Computational Neuroscience, 9, 96.
Zurück zum Zitat El Boustani, S., Pospischil, M., Rudolph-Lilith, M., Destexhe, A. (2007). Activated cortical states: experiments, analyses and models. Journal of Physiology (Paris), 101, 99–109.CrossRef El Boustani, S., Pospischil, M., Rudolph-Lilith, M., Destexhe, A. (2007). Activated cortical states: experiments, analyses and models. Journal of Physiology (Paris), 101, 99–109.CrossRef
Zurück zum Zitat Gerstner, W., Kistler, W.M., Naud, R., Paninski, L. (2014). Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge: Cambridge University Press.CrossRef Gerstner, W., Kistler, W.M., Naud, R., Paninski, L. (2014). Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge: Cambridge University Press.CrossRef
Zurück zum Zitat Ghorbani, M., Mehta, M., Bruinsma, R., Levine, A.J. (2012). Nonlinear-dynamics theory of up-down transitions in neocortical neural networks. Physical Review E, 85, 021908.CrossRef Ghorbani, M., Mehta, M., Bruinsma, R., Levine, A.J. (2012). Nonlinear-dynamics theory of up-down transitions in neocortical neural networks. Physical Review E, 85, 021908.CrossRef
Zurück zum Zitat Gillespie, D.T. (1996). The mathematics of brownian motion and johnson noise. American Journal of Physics, 64, 225–240.CrossRef Gillespie, D.T. (1996). The mathematics of brownian motion and johnson noise. American Journal of Physics, 64, 225–240.CrossRef
Zurück zum Zitat Girones, Z., & Destexhe, A. (2016). Enhanced responsiveness in asynchronous irregular neuronal networks. arXiv:161109089. Girones, Z., & Destexhe, A. (2016). Enhanced responsiveness in asynchronous irregular neuronal networks. arXiv:161109089.
Zurück zum Zitat Greenberg, D.S., Houweling, A.R., Kerr, J.N. (2008). Population imaging of ongoing neuronal activity in the visual cortex of awake rats. Nature Neuroscience, 11, 749–751.CrossRefPubMed Greenberg, D.S., Houweling, A.R., Kerr, J.N. (2008). Population imaging of ongoing neuronal activity in the visual cortex of awake rats. Nature Neuroscience, 11, 749–751.CrossRefPubMed
Zurück zum Zitat Hahn, G., Ponce-Alvarez, A., Monier, C., Benvenuti, G., Kumar, A., Chavane, F., Deco, G., Frégnac, Y. (2017). Spontaneous cortical activity is transiently poised close to criticality. PLoS Computational Biology, 13, e1005543.CrossRefPubMedPubMedCentral Hahn, G., Ponce-Alvarez, A., Monier, C., Benvenuti, G., Kumar, A., Chavane, F., Deco, G., Frégnac, Y. (2017). Spontaneous cortical activity is transiently poised close to criticality. PLoS Computational Biology, 13, e1005543.CrossRefPubMedPubMedCentral
Zurück zum Zitat Hill, S., & Tononi, G. (2005). Modeling sleep and wakefulness in the thalamocortical system. Journal of Neurophysiology, 93, 1671–1698.CrossRefPubMed Hill, S., & Tononi, G. (2005). Modeling sleep and wakefulness in the thalamocortical system. Journal of Neurophysiology, 93, 1671–1698.CrossRefPubMed
Zurück zum Zitat Izhikevich, E.M. (2003). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14, 1569–1572.CrossRefPubMed Izhikevich, E.M. (2003). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14, 1569–1572.CrossRefPubMed
Zurück zum Zitat Izhikevich, E.M. (2007). Dynamical systems in neuroscience Cambridge. MA: MIT Press. Izhikevich, E.M. (2007). Dynamical systems in neuroscience Cambridge. MA: MIT Press.
Zurück zum Zitat Jercog, D., Roxin, A., Barthó, P, Luczak, A., Compte, A., de la Rocha, J. (2017). Up-down cortical dynamics reflect state transitions in a bistable network. eLife, 6, e22425.CrossRefPubMedPubMedCentral Jercog, D., Roxin, A., Barthó, P, Luczak, A., Compte, A., de la Rocha, J. (2017). Up-down cortical dynamics reflect state transitions in a bistable network. eLife, 6, e22425.CrossRefPubMedPubMedCentral
Zurück zum Zitat Kavalali, E.T. (2015). The mechanisms and functions of spontaneous neurotransmitter release. Nature Reviews Neuroscience, 16, 5–16.CrossRefPubMed Kavalali, E.T. (2015). The mechanisms and functions of spontaneous neurotransmitter release. Nature Reviews Neuroscience, 16, 5–16.CrossRefPubMed
Zurück zum Zitat Kinouchi, O., & Copelli, M. (2006). Optimal dynamical range of excitable networks at criticality. Nature Physics, 2, 348–351.CrossRef Kinouchi, O., & Copelli, M. (2006). Optimal dynamical range of excitable networks at criticality. Nature Physics, 2, 348–351.CrossRef
Zurück zum Zitat Kriener, B., Enger, H., Tetzlaff, T., Plesser, H.E., Gewaltig, M.O., Einevoll, G.T. (2014). Dynamics of self-sustained asynchronous-irregular activity in random networks of spiking neurons with strong synapses. Frontiers in Computational Neuroscience, 8, 136.CrossRefPubMedPubMedCentral Kriener, B., Enger, H., Tetzlaff, T., Plesser, H.E., Gewaltig, M.O., Einevoll, G.T. (2014). Dynamics of self-sustained asynchronous-irregular activity in random networks of spiking neurons with strong synapses. Frontiers in Computational Neuroscience, 8, 136.CrossRefPubMedPubMedCentral
Zurück zum Zitat Kumar, A., Schrader, S., Aertsen, A., Rotter, S. (2008). The high-conductance state of cortical networks. Neural Computation, 20, 1–43.CrossRefPubMed Kumar, A., Schrader, S., Aertsen, A., Rotter, S. (2008). The high-conductance state of cortical networks. Neural Computation, 20, 1–43.CrossRefPubMed
Zurück zum Zitat Lachaux, J.P., Rodriguez, E., Martinerie, J., Varela, F.J. (1999). Measuring phase synchrony in brain signals. Human Brain Mapping, 8, 194–208.CrossRefPubMed Lachaux, J.P., Rodriguez, E., Martinerie, J., Varela, F.J. (1999). Measuring phase synchrony in brain signals. Human Brain Mapping, 8, 194–208.CrossRefPubMed
Zurück zum Zitat Landau, I.D., Egger, R., Dercksen, V.J., Oberlaender, M., Sompolinsky, H. (2016). The impact of structural heterogeneity on excitation-inhibition balance in cortical networks. Neuron, 92, 1106–1121.CrossRefPubMedPubMedCentral Landau, I.D., Egger, R., Dercksen, V.J., Oberlaender, M., Sompolinsky, H. (2016). The impact of structural heterogeneity on excitation-inhibition balance in cortical networks. Neuron, 92, 1106–1121.CrossRefPubMedPubMedCentral
Zurück zum Zitat Litwin-Kumar, A., & Doiron, B. (2012). Slow dynamics and high variability in balanced cortical networks with clustered connections. Nature Neuroscience, 15, 1498–1505.CrossRefPubMedPubMedCentral Litwin-Kumar, A., & Doiron, B. (2012). Slow dynamics and high variability in balanced cortical networks with clustered connections. Nature Neuroscience, 15, 1498–1505.CrossRefPubMedPubMedCentral
Zurück zum Zitat Liu, Z.W., Faraguna, U., Cirelli, C., Tononi, G., Gao, X.B. (2010). Direct evidence for wake-related increases and sleep-related decreases in synaptic strength in rodent cortex. Journal of Neuroscience, 30, 8671–8675.CrossRefPubMed Liu, Z.W., Faraguna, U., Cirelli, C., Tononi, G., Gao, X.B. (2010). Direct evidence for wake-related increases and sleep-related decreases in synaptic strength in rodent cortex. Journal of Neuroscience, 30, 8671–8675.CrossRefPubMed
Zurück zum Zitat Lo, C.C., Amaral, L.N., Havlin, S., Ivanov, P.C., Penzel, T., Peter, J.H., Stanley, H.E. (2002). Dynamics of sleep-wake transitions during sleep. Europhysics Letters, 57, 625–631.CrossRef Lo, C.C., Amaral, L.N., Havlin, S., Ivanov, P.C., Penzel, T., Peter, J.H., Stanley, H.E. (2002). Dynamics of sleep-wake transitions during sleep. Europhysics Letters, 57, 625–631.CrossRef
Zurück zum Zitat Lytton, W.W., Arle, J., Bobashev, G., Ji, S., Klassen, T.L., Marmarelis, V.Z., Schwaber, J., Sherif, M.A., Sanger, T.D. (2017). Multiscale modeling in the clinic: diseases of the brain and nervous system. Brain Informatics, 4, 219–230.CrossRefPubMedPubMedCentral Lytton, W.W., Arle, J., Bobashev, G., Ji, S., Klassen, T.L., Marmarelis, V.Z., Schwaber, J., Sherif, M.A., Sanger, T.D. (2017). Multiscale modeling in the clinic: diseases of the brain and nervous system. Brain Informatics, 4, 219–230.CrossRefPubMedPubMedCentral
Zurück zum Zitat Lowet, E., Roberts, M.J., Bonizzi, P., Karel, J., De Weerd, P. (2016). Quantifying neural oscillatory synchronization: a comparison between spectral coherence and phase-locking value approaches. PloS One, 11, e0146443.CrossRefPubMedPubMedCentral Lowet, E., Roberts, M.J., Bonizzi, P., Karel, J., De Weerd, P. (2016). Quantifying neural oscillatory synchronization: a comparison between spectral coherence and phase-locking value approaches. PloS One, 11, e0146443.CrossRefPubMedPubMedCentral
Zurück zum Zitat Mannella, R. (2002). Integration of stochastic differential equations on a computer. International Journal of Modern Physics C, 13, 1177–1194.CrossRef Mannella, R. (2002). Integration of stochastic differential equations on a computer. International Journal of Modern Physics C, 13, 1177–1194.CrossRef
Zurück zum Zitat Mattia, M., & Del Giudice, P. (2002). Population dynamics of interacting spiking neurons. Physical Review E, 66, 051917.CrossRef Mattia, M., & Del Giudice, P. (2002). Population dynamics of interacting spiking neurons. Physical Review E, 66, 051917.CrossRef
Zurück zum Zitat Mattia, M., & Sanchez-Vives, M.V. (2012). Exploring the spectrum of dynamical regimes and timescales in spontaneous cortical activity. Cognitive Neurodynamics, 6, 239–250.CrossRefPubMed Mattia, M., & Sanchez-Vives, M.V. (2012). Exploring the spectrum of dynamical regimes and timescales in spontaneous cortical activity. Cognitive Neurodynamics, 6, 239–250.CrossRefPubMed
Zurück zum Zitat McDonnell, M.D., Goldwyn, J.H., Lindner, B. (2016). Neuronal stochastic variability: influences on spiking dynamics and network activity. Frontiers in Computational Neuroscience, 10, 38.CrossRefPubMedPubMedCentral McDonnell, M.D., Goldwyn, J.H., Lindner, B. (2016). Neuronal stochastic variability: influences on spiking dynamics and network activity. Frontiers in Computational Neuroscience, 10, 38.CrossRefPubMedPubMedCentral
Zurück zum Zitat Mejias, J.F., Murray, J.D., Kennedy, H., Wang, X.J. (2016). Feedforward and feedback frequency-dependent interactions in a large-scale laminar network of the primate cortex. Science Advances, 2, e1601335.CrossRefPubMedPubMedCentral Mejias, J.F., Murray, J.D., Kennedy, H., Wang, X.J. (2016). Feedforward and feedback frequency-dependent interactions in a large-scale laminar network of the primate cortex. Science Advances, 2, e1601335.CrossRefPubMedPubMedCentral
Zurück zum Zitat Middleton, J.W., Chacron, M.J., Lindner, B., Longtin, A. (2003). Firing statistics of a neuron model driven by long-range correlated noise. Physical Review E, 68, 021920.CrossRef Middleton, J.W., Chacron, M.J., Lindner, B., Longtin, A. (2003). Firing statistics of a neuron model driven by long-range correlated noise. Physical Review E, 68, 021920.CrossRef
Zurück zum Zitat Miller, J.K., Ayzenshtat, I., Carrillo-Reid, L., Yuste, R. (2014). Visual stimuli recruit intrinsically generated cortical ensembles. Proceedings of the National Academy of Sciences (USA), 111, E4053–E4061.CrossRef Miller, J.K., Ayzenshtat, I., Carrillo-Reid, L., Yuste, R. (2014). Visual stimuli recruit intrinsically generated cortical ensembles. Proceedings of the National Academy of Sciences (USA), 111, E4053–E4061.CrossRef
Zurück zum Zitat Millman, D., Mihalas, S., Kirkwood, A., Niebur, E. (2010). Self-organized criticality occurs in non-conservative neuronal networks during up states. Nature Physics, 6, 801–805.CrossRefPubMedPubMedCentral Millman, D., Mihalas, S., Kirkwood, A., Niebur, E. (2010). Self-organized criticality occurs in non-conservative neuronal networks during up states. Nature Physics, 6, 801–805.CrossRefPubMedPubMedCentral
Zurück zum Zitat Moreno-Bote, R., Rinzel, J., Rubin, N. (2007). Noise-induced alternations in an attractor network model of perceptual bistability. Journal of Neurophysiology, 98, 1125–1139.CrossRefPubMedPubMedCentral Moreno-Bote, R., Rinzel, J., Rubin, N. (2007). Noise-induced alternations in an attractor network model of perceptual bistability. Journal of Neurophysiology, 98, 1125–1139.CrossRefPubMedPubMedCentral
Neymotin, S.A., McDougal, R.A., Bulanova, A.S., Zeki, M., Lakatos, P., Terman, D., Hines, M.L., Lytton, W.W. (2016). Calcium regulation of HCN channels supports persistent activity in a multiscale model of neocortex. Neuroscience, 316, 344–366.
Nowak, L.G., Azouz, R., Sanchez-Vives, M.V., Gray, C.M., McCormick, D.A. (2003). Electrophysiological classes of cat primary visual cortical neurons in vivo as revealed by quantitative analyses. Journal of Neurophysiology, 89, 1541–1566.
Okun, M., Steinmetz, N.A., Cossell, L., Iacaruso, M.F., Ko, H., Barthó, P., Moore, T., Hofer, S.B., Mrsic-Flogel, T.D., Carandini, M., et al. (2015). Diverse coupling of neurons to populations in sensory cortex. Nature, 521, 511–515.
Ostojic, S. (2014). Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nature Neuroscience, 17, 594–600.
Parga, N., & Abbott, L.F. (2007). Network model of spontaneous activity exhibiting synchronous transitions between up and down states. Frontiers in Neuroscience, 1, 57–66.
Pena, R.F.O., Vellmer, S., Bernardi, D., Roque, A.C., Lindner, B. (2018). Self-consistent scheme for spike-train power spectra in heterogeneous sparse networks. Frontiers in Computational Neuroscience, 12, 9.
Potjans, T.C., & Diesmann, M. (2014). The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cerebral Cortex, 24, 785–806.
Pulido, C., & Marty, A. (2017). Quantal fluctuations in central mammalian synapses: Functional role of vesicular docking sites. Physiological Reviews, 97, 1403–1430.
Rao, Y., Liu, Z.W., Borok, E., Rabenstein, R.L., Shanabrough, M., Lu, M., Picciotto, M.R., Horvath, T.L., Gao, X.B. (2007). Prolonged wakefulness induces experience-dependent synaptic plasticity in mouse hypocretin/orexin neurons. Journal of Clinical Investigation, 117, 4022–4033.
Renart, A., Brunel, N., Wang, X.J. (2003). Mean-field theory of recurrent cortical networks: Working memory circuits with irregularly spiking neurons (pp. 432–490). Boca Raton: CRC Press.
Renart, A., De La Rocha, J., Bartho, P., Hollender, L., Parga, N., Reyes, A., Harris, K.D. (2010). The asynchronous state in cortical circuits. Science, 327, 587–590.
Rosenblum, M., Pikovsky, A., Kurths, J., Schäfer, C., Tass, P.A. (2001). Phase synchronization: from theory to data analysis. In Handbook of biological physics (pp. 279–321). North-Holland.
Sachidhanandam, S., Sreenivasan, V., Kyriakatos, A., Kremer, Y., Petersen, C.C. (2013). Membrane potential correlates of sensory perception in mouse barrel cortex. Nature Neuroscience, 16, 1671–1677.
Sahasranamam, A., Vlachos, I., Aertsen, A., Kumar, A. (2016). Dynamical state of the network determines the efficacy of single neuron properties in shaping the network activity. Scientific Reports, 6, 26029.
Sanchez-Vives, M.V., & McCormick, D.A. (2000). Cellular and network mechanisms of rhythmic recurrent activity in neocortex. Nature Neuroscience, 3, 1027–1034.
Sanchez-Vives, M.V., Mattia, M., Compte, A., Perez-Zabalza, M., Winograd, M., Descalzo, V.F., Reig, R. (2010). Inhibitory modulation of cortical up states. Journal of Neurophysiology, 104, 1314–1324.
Sanchez-Vives, M.V., Massimini, M., Mattia, M. (2017). Shaping the default activity pattern of the cortical network. Neuron, 94, 993–1001.
Schwalger, T., Deger, M., Gerstner, W. (2017). Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLoS Computational Biology, 13, e1005507.
Siegert, A.J. (1951). On the first passage time probability problem. Physical Review, 81, 617–623.
Steriade, M., Timofeev, I., Grenier, F. (2001). Natural waking and sleep states: a view from inside neocortical neurons. Journal of Neurophysiology, 85, 1969–1985.
Tartaglia, E.M., & Brunel, N. (2017). Bistability and up/down state alternations in inhibition-dominated randomly connected networks of LIF neurons. Scientific Reports, 7, 11916.
Tomov, P., Pena, R.F., Zaks, M.A., Roque, A.C. (2014). Sustained oscillations, irregular firing, and chaotic dynamics in hierarchical modular networks with mixtures of electrophysiological cell types. Frontiers in Computational Neuroscience, 8, 103.
Tomov, P., Pena, R.F., Roque, A.C., Zaks, M.A. (2016). Mechanisms of self-sustained oscillatory states in hierarchical modular networks with mixtures of electrophysiological cell types. Frontiers in Computational Neuroscience, 10, 23.
Tononi, G., & Cirelli, C. (2014). Sleep and the price of plasticity: from synaptic and cellular homeostasis to memory consolidation and integration. Neuron, 81, 12–34.
Uhlenbeck, G.E., & Ornstein, L.S. (1930). On the theory of the Brownian motion. Physical Review, 36, 823–841.
Uhlhaas, P.J., Pipa, G., Lima, B., Melloni, L., Neuenschwander, S., Nikolić, D., Singer, W. (2009). Neural synchrony in cortical networks: history, concept and current status. Frontiers in Integrative Neuroscience, 3, 17.
Vogels, T.P., & Abbott, L.F. (2005a). Signal propagation and logic gating in networks of integrate-and-fire neurons. Journal of Neuroscience, 25, 10786–10795.
Vogels, T.P., Rajan, K., Abbott, L.F. (2005b). Neural network dynamics. Annual Review of Neuroscience, 28, 357–376.
van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274, 1724–1726.
Wang, S.J., Hilgetag, C.C., Zhou, C. (2011). Sustained activity in hierarchical modular neural networks: self-organized criticality and oscillations. Frontiers in Computational Neuroscience, 5, 30.
Xu, X., Ni, L., Wang, R. (2016). A neural network model of spontaneous up and down transitions. Nonlinear Dynamics, 84, 1541–1551.
Yim, M.Y., Kumar, A., Aertsen, A., Rotter, S. (2014). Impact of correlated inputs to neurons: modeling observations from in vivo intracellular recordings. Journal of Computational Neuroscience, 37, 293–304.