
A Statistical Model for In Vivo Neuronal Dynamics

  • Simone Carlo Surace ,

    surace@ini.uzh.ch

    Affiliations Department of Physiology, University of Bern, Bern, Switzerland, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland

  • Jean-Pascal Pfister

    Affiliation Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland

Abstract

Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. However, they are of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptation, and arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. Finally, we show that this model can be used to characterize, and therefore precisely compare, various intracellular in vivo recordings from different animals and experimental conditions.

Introduction

During the last decade, there has been an increasing number of studies providing intracellular in vivo recordings. From the first intracellular recordings performed in awake cats [1, 2] to more recent recordings in cats [3], monkeys [4], mice [5], and even in freely behaving rats [6], it has been shown that the membrane potential displays large fluctuations and is very rarely at the resting potential. Recent findings in the cat visual cortex have also suggested that the statistical properties of spontaneous activity are comparable to the neuronal dynamics when the animal is exposed to natural images [7]. Similar results have been found in extracellular recordings in the ferret [8]. These data are typically characterized by simple quantifications such as the firing rate or the mean subthreshold membrane potential [5], but a more comprehensive quantification is often missing. The increasing amount of intracellular data from awake animals, as well as the need to compare data under various recording conditions in a rigorous way, thus calls for a model of spontaneous activity in single neurons.

Single neuron models have been studied for more than a century. Simple models such as the integrate-and-fire model [9, 10] and its more recent nonlinear versions [11–13] describe the relationship between the input current and the membrane potential in terms of a small number of parameters and are therefore convenient for analytical treatment, but do not provide much insight about the underlying biophysical processes. On the other end of the spectrum, biophysical models such as the Hodgkin-Huxley model [14, 15] relate the input current to the membrane potential through a detailed description of the various transmembrane ion channels, but estimating the model parameters remains challenging [16, 17]. Despite the success of those types of models, none of them can be directly applied to intracellular in vivo recordings for the simple reason that the input current is not known.

Another reason why a precise model of spontaneous activity is needed is that several theories have been proposed that critically depend on the statistical properties of spontaneous activity. For example, Berkes et al. validate their Bayesian treatment of the visual system by comparing the spontaneous activity and the averaged evoked activity [8]. Another Bayesian theory proposed the idea that short-term plasticity acts as a Bayesian estimator of the presynaptic membrane potential [18]. To validate this theory, it is also necessary to characterize spontaneous activity with a statistical model that describes the subthreshold as well as the spiking dynamics.

The last motivation for a model that describes both the subthreshold and the suprathreshold dynamics is the possibility to separate those two dynamics in a principled way. Indeed, it is interesting to know which aspects of the recordings reflect the input dynamics and which aspects come from the neuron itself (or rather are associated with the spiking dynamics). Of course a simple voltage threshold can separate the sub- and suprathreshold dynamics, but the value of the threshold is somewhat arbitrary and could lead to undesirable artifacts. Therefore a computationally sound model that decides by itself what belongs to the subthreshold and what belongs to the suprathreshold dynamics is highly desirable.

Here, we propose a single neuron model that describes intracellular in vivo recordings as a sum of sub- and suprathreshold dynamics. The model is flexible enough to capture the large diversity of neuronal dynamics while remaining tractable, i.e. it can be fitted to data in a reasonable time. More precisely, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the firing intensity is expressed as a nonlinear function of the membrane potential. Since we further include refractoriness and adaptation mechanisms, our model, which we call the Adaptive Gaussian Point Emission process (AGAPE), can be seen as an extension of both the log Gaussian Cox process [19] and the generalized linear model [20–22].

Results

Here we present a statistical model of the subthreshold membrane potential and firing pattern of a single neuron in vivo (see Fig 1A for such an in vivo membrane potential recording). We first provide a formal definition of the model and then show a range of different results. 1) The model is flexible and supports arbitrary autocorrelation structures and adaptation kernels; the range of possible statistical features is therefore very large. 2) The model can be fitted efficiently, and the learning procedure is validated on synthetic data. 3) The model can be fitted to in vivo datasets. 4) All the features included in the model are required to provide a good description of in vivo data.

Fig 1.

(A) A sample in vivo membrane potential trace from an intracellular recording of a neuron in HVC of a Zebra Finch. (B) The generative AGAPE model can generate a trace of subthreshold membrane potential u (top trace). Based on this potential, a spike train s is generated (middle, dashed vertical lines). Finally, a stereotypic spike-related kernel is convolved with the spike train and added to u, giving rise to usom (bottom, thick line). This quantity is the synthetic analog of the recorded, preprocessed in vivo membrane potential.

https://doi.org/10.1371/journal.pone.0142435.g001

Definition of the AGAPE model

The AGAPE model is a single neuron model where the input to the neuron is not known, which is typically the case under in vivo conditions. The acronym AGAPE stands for Adaptive GAussian Point Emission process since the subthreshold membrane potential follows a Gaussian process and since the spike emission process is adaptive.

More formally, the AGAPE process defines a probability distribution p(usom, s) over the somatic membrane voltage trace usom(t) and the spike train s(t) = Σi δ(t − ti), where ti, i = 1, …, ns are the nominal spike times (decision times), which occur a fixed time period δ > 0 before the peak of the action potential. From this probability distribution (or generative model) it is possible to draw samples that look like intracellular in vivo activity (for practical purposes, the samples will be compared to the preprocessed recordings, see explanations below). The AGAPE model assumes that the somatic membrane voltage as a function of time, usom(t), is given by (see Fig 1)

usom(t) = ur + u(t) + uspike(t), (1)

where ur is a constant (the reference potential) and u(t) describes the subthreshold membrane potential as a stochastic function drawn from a stationary Gaussian process (GP) [23],

u ∼ GP(0, k), (2)

with covariance function k(t − t′) (which can be parametrized by a weighted sum of exponential decays with inverse time constants θi, see Materials and Methods). For small values of δ (e.g. 1–3 ms), u(t) can be seen as the net contribution from the unobserved synaptic inputs, and uspike(t) is the spike-related contribution (see Fig 1B), which consists of the causal convolution of a stereotypical spike-related kernel α with the spike train s(t), i.e.

uspike(t) = (α ∗ s)(t) = Σi α(t − ti), (3)

where α can be parametrized by a weighted sum of basis functions with weights ai, see Materials and Methods. Here, we have made a separation into subthreshold and suprathreshold layers, in that whatever is stereotypic and triggered by the point-like spikes s(t) is attributed to uspike(t), and the rest belongs to the fluctuating signal u(t). This separation need not correspond to the biophysical distinction between synaptic inputs and active processes of the recorded cell (i.e. the positive feedback loop of the spiking mechanism). Indeed, especially for a large choice of δ (e.g. ∼ 20 ms), uspike(t) also contains large depolarizations due to strong synaptic input which cannot be explained by the GP signal u(t).

Note that this model could easily be extended by including an additional term in Eq (1) which depends on an external input, e.g. a linear filter of the input (see also Discussion). However, since this input current was not accessible in our recordings, its contribution was assumed to be part of u(t) or uspike(t).

Now we proceed to the coupling between the subthreshold potential u(t) and the spiking output, as well as adaptive effects associated with spike generation. These effects are summarized by an instantaneous firing rate r(t), as in the generalized linear model (GLM) [20–22] or escape-rate models [24], which is computed from the value of the subthreshold membrane potential at time t, u(t), and the spike history as

r(t) = g[A(t) + βu(t)], (4)

where β ≥ 0 is the coupling strength between u and the spikes, and A(t) = (η ∗ s)(t) is the adaptation variable, i.e. the convolution of an adaptation kernel η (which can be parametrized by a weighted sum of basis functions with weights wi, see Materials and Methods) with the past spike train. Also note that we choose not to model adaptation currents explicitly, since they would simultaneously impact the membrane potential and the firing probability (see Discussion). The function g is called the gain function, and here we use an exponential one, i.e. g[A(t) + βu(t)] = elog r0+A(t)+βu(t). Other functional forms such as rectified linear or sigmoidal could be used depending on the structure of the data. However, this choice has important implications for the efficiency of learning the model parameters [25]. We define the probability density for s on an interval [0, T] conditioned on u as

p(s | u) = exp(−∫0T r(t) dt) Πi r(ti). (5)

The parameter β connects the subthreshold membrane potential u to the rate fluctuations. The magnitude of the rate fluctuations depends on the variance σ2 of u, and therefore we use βσ as a measure of the effective coupling strength. When β > 0, the quantity θ(t) = −A(t)/β can be regarded as a soft threshold variable which is modulated after a spike, and u(t) − θ(t) is the effective membrane potential relevant for spike generation. This spiking process is a point process which generalizes the log Gaussian Cox process: when A = 0, Eq (5) describes an inhomogeneous Poisson process with rate g[βu(t)].

Practically, to draw a sample from the AGAPE process, we first draw a sample u from the Gaussian process (see S1 Text for how to do this efficiently); then, for each time t, we draw spikes s(t) with intensity r(t) and update the adaptation variable A(t). Finally, the somatic membrane potential is calculated using Eq (1).
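
The sampling procedure just described can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the OU covariance, the exponential adaptation kernel, the toy spike-related kernel, and all parameter values are assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1e-3, 5000                          # 1 ms bins, 5 s of data

# 1) Subthreshold potential u: stationary GP with an assumed OU covariance
#    k(s) = sigma^2 exp(-|s|/tau), sampled exactly as an AR(1) process.
sigma, tau = 2.0, 20e-3                     # mV, s
rho = np.exp(-dt / tau)
u = np.empty(n)
u[0] = sigma * rng.standard_normal()
for i in range(1, n):
    u[i] = rho * u[i - 1] + sigma * np.sqrt(1 - rho**2) * rng.standard_normal()

# 2) Spikes from the intensity r(t) = exp(log r0 + A(t) + beta*u(t)), with an
#    assumed exponential adaptation kernel eta(t) = eta0 * exp(-t/tau_r).
r0, beta = 5.0, 0.4                         # Hz, 1/mV
eta0, tau_r = -3.0, 50e-3
s = np.zeros(n)
A = 0.0
for i in range(n):
    A *= np.exp(-dt / tau_r)                # decay of the adaptation variable
    r = np.exp(np.log(r0) + A + beta * u[i])
    if rng.random() < r * dt:               # Bernoulli approximation per bin
        s[i] = 1.0
        A += eta0                           # each spike adds eta(0) to A

# 3) Somatic potential, Eq (1): u_som = u_r + u + (alpha * s), with a toy
#    exponential spike-related kernel alpha.
u_r = -60.0                                 # mV
alpha = 30.0 * np.exp(-np.arange(100) * dt / 5e-3)
u_som = u_r + u + np.convolve(s, alpha)[:n]
```

With these (assumed) parameters the trace shows a fluctuating subthreshold potential around −60 mV with occasional spike-shaped excursions, qualitatively like Fig 1B.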

It is important to emphasize at this point that while the model could in principle be fitted directly to the raw membrane potential uraw as recorded by an intracellular electrode, we median-filter the data in order to avoid artifacts and downsample it for computational efficiency (see 'Materials and Methods' for the details of the preprocessing steps). In this study the model is always fitted to the preprocessed recordings; this is reflected, e.g., in the shape of α, which is the quantity most strongly affected by the preprocessing, and should be kept in mind when interpreting the results of model fitting.

The model has a rich dynamical repertoire

The AGAPE provides a flexible framework which can be adjusted in complexity to model a wide range of dynamics. While for the datasets presented here a covariance function was used which consists of a sum of Ornstein-Uhlenbeck (OU) kernels, the Gaussian process (GP) allows for arbitrary covariance functions to be used. This includes simple exponential decay (as produced by a leaky integrate-and-fire neuron driven by white-noise current), but it can also produce more interesting covariance functions such as power-law covariances, as reported in [7, 26], or subthreshold oscillations, as reported in [27].

The model is also able to reproduce a wide range of firing statistics. A common measure of firing irregularity is the coefficient of variation (CV, i.e. the ratio of standard deviation and mean) of the inter-spike interval distribution. In the absence of adaptation, the AGAPE is a Cox process and therefore has a coefficient of variation CV ≥ 1 [28]. The precise value of the CV is a function of the coupling strength (βσ) as well as the autocorrelation of the GP. To illustrate this, we sampled synthetic data from a simple version of the AGAPE where the subthreshold potential u is an OU process with time-constant τ. As shown in Fig 2A, the CV is an increasing function of the membrane time-constant τ, baseline firing rate r0, and dimensionless coupling parameter βσ between membrane potential and firing rate. Moreover, the range of the CV extends from 1 to ≈ 8 within a range of βσ ∈ [0, 2] and r0τ ∈ [2^−2, 2^8]. In the presence of adaptation, firing statistics are markedly different and can produce values of CV < 1 [24, 29]. To illustrate this point, we considered an exponential adaptation kernel, i.e. η(t) = η0 e−t/τr. While the CV increases as a function of βσ and r0τ as before, the range of values of the CV now also covers the interval (0, 1), which is not accessible to the Cox process but which is observed in many neurons across the brain [30]. In order to study the influence of the parameters of the adaptation mechanism, we fix βσ = r0τ = 1 and plot the CV as a function of r0τr and η0 (see Fig 2B). Within the parameter region explored in Fig 2B, the CV spans values from 0.1 up to 1.6.
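
The CV computation behind Fig 2A can be reproduced in a few lines for the non-adaptive (Cox-process) case. All parameter values here are illustrative assumptions, and the mean-correction term in the rate simply keeps the average firing rate at r0.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1e-3, 300_000                       # 1 ms bins, 300 s
tau = 100e-3                                # assumed OU time constant
rho = np.exp(-dt / tau)

# OU subthreshold potential with unit variance (so beta*sigma = beta_sigma)
u = np.zeros(n)
for i in range(1, n):
    u[i] = rho * u[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

r0, beta_sigma = 10.0, 1.5                  # Hz, dimensionless coupling
rate = r0 * np.exp(beta_sigma * u - 0.5 * beta_sigma**2)  # mean rate = r0
spikes = rng.random(n) < rate * dt          # Bernoulli thinning per bin
isi = np.diff(np.flatnonzero(spikes)) * dt  # inter-spike intervals

cv = isi.std() / isi.mean()                 # exceeds 1 for a Cox process
```

Increasing beta_sigma or tau pushes the CV up, as in Fig 2A; setting beta_sigma to zero recovers a homogeneous Poisson process with CV near 1.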

Fig 2. The model has a rich dynamical repertoire (A, B) and can be correctly fitted to synthetic data (C-F).

(A, B) The coefficient of variation (CV) of the inter-spike interval distribution is computed for parameter values shown as black dots and then linearly interpolated. (A) The CV of a simple version of the AGAPE (k(t) = σ2 e−|t|/τ, α = η = 0) as a function of the model parameters (membrane time-constant τ, baseline firing rate r0 and coupling strength βσ). (B) CV of the AGAPE model with an exponential adaptation kernel and fixed membrane time-constant, firing rate and coupling (βσ = r0τ = 1) as a function of the parameters describing adaptation, namely the adaptation strength η0 and time-constant τr. (C, D, E, F) Synthetic data is sampled from the AGAPE model with GP (D), spike-related (E), and adaptation (F) kernels as depicted in black, and δ = 4 ms, r0 = 4.15 Hz, β = 0.374 mV−1. Then the AGAPE is fitted to the synthetic data by maximum likelihood (ML). (C) The maximum log likelihood per bin as a function of the parameter δ has its maximum at the ground truth value δ = 4 ms. (D, E, F) The ML estimates (red) of the GP, spike-related and adaptation kernels lie within two standard deviations (red shaded regions, estimated by means of the observed Fisher information) from the ground truth.

https://doi.org/10.1371/journal.pone.0142435.g002

The model can be fitted efficiently

The parameters of the AGAPE model are learned through a maximum likelihood approach. More precisely, we fit the model to an in vivo sample (highlighted by a '*') of preprocessed somatic membrane potential u*som and spike train s*,δ by maximizing the log likelihood of the joint data set over the parameter space of the model (i.e. ur, log r0, β, the coefficients of the kernels k, η, and α, and the delay parameter δ). The empirical spike train s*,δ depends on the parameter δ because the formal spike times are assigned to be a time period δ before the recorded peak of the action potential. The joint probability of the data can be expressed as a product

p(u*som, s*,δ) = ps*,δ(u*som) p(s*,δ | u*som). (6)

The subscript s*,δ of the first factor denotes its explicit dependence on the spike train. The individual terms on the r.h.s. will be given below. The function we are optimizing is the logarithm of the above joint probability, which we can write as

L(Θ) = log ps*,δ(u*som) + log p(s*,δ | u*som). (7)

It should be noted that the presence of the spike-related kernel α in both terms produces a trade-off: removing the spike-related trajectory improves the Gaussianity of the membrane potential u (and therefore boosts the first term) at the cost of the second term, by removing the short upward fluctuation that leads to the spike. This trade-off makes maximum likelihood parameter estimation a non-concave optimization problem. Moreover, the evaluation of the GP likelihood of n samples comes at a high computational cost. Two important techniques make the parameter learning both tractable and fast: the first is the use of a circulant approximation of the GP covariance matrix, which makes the evaluation of the likelihood function fast. The second is the use of an alternating fitting algorithm which (under an appropriate parametrization, see 'Materials and Methods') replaces the non-concave optimization in the full parameter space with two concave optimizations and a non-concave one in suitable parameter subspaces. These two techniques are described in the next section.

Efficient likelihood computation.

The log-likelihood function is evaluated in its discrete-time form with n time points separated by a time-step Δt. The GP variable u (which leads to usom through Eq (1)) is multivariate Gaussian distributed with covariance matrix Kij = k(ti − tj), where ti = iΔt. The matrix K is symmetric and, by virtue of stationarity, Toeplitz. Evaluation of the GP likelihood requires inversion of K, which is computationally expensive (the time required to invert a matrix typically scales with n3). For this reason we approximate this Toeplitz matrix by the circulant matrix C which minimizes the Kullback-Leibler divergence (see [31–33] and S1 Text)

DKL(K ∥ D) = (1/2) [tr(D−1K) − n + log det D − log det K] (8)

between the two multivariate Gaussian distributions with the same mean but covariance matrices K and D. This optimization problem can be solved by calculating the derivative of DKL with respect to D and using the diagonalization of D by a Fourier transform matrix [33]. After a bit of algebra (see S1 Text), denoting ki = K1i and kn+1 ≡ 0, the optimal circulant matrix can be written as Cij = c((i−j) mod n)+1, where i, j = 1, …, n and

ci = [(n − i + 1) ki + (i − 1) kn+2−i] / n. (9)
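
Translating the optimal circulant first row to 0-based indexing (so it reads c[i] = ((n − i) k[i] + i k[n − i]) / n with k[n] := 0), the construction takes a few lines. The OU covariance used as input is an assumed example; note that the diagonal (marginal variance) is preserved, c[0] = k[0], and that c is palindromic, which makes the resulting circulant matrix symmetric.

```python
import numpy as np

def optimal_circulant_first_row(k):
    """First row c of the KL-optimal circulant approximation of the
    symmetric Toeplitz covariance with first row k (0-based indexing)."""
    n = len(k)
    k_ext = np.append(k, 0.0)               # convention k[n] = 0
    i = np.arange(n)
    return ((n - i) * k_ext[i] + i * k_ext[n - i]) / n

# Example: discretized OU covariance k(t) = exp(-|t|/tau)
n, dt, tau = 512, 1e-3, 20e-3
k = np.exp(-np.arange(n) * dt / tau)
c = optimal_circulant_first_row(k)

# Build the full circulant matrix C[i, j] = c[(j - i) mod n];
# it is symmetric because c[i] = c[n - i] by construction.
C = np.array([np.roll(c, i) for i in range(n)])
```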

The replacement of K by C is equivalent to imposing periodic boundary conditions on u, which has a small effect under the assumption that the time interval spanned by the data is much longer than the largest temporal autocorrelation length of k. The first term on the r.h.s. of Eq (6) is thus a multivariate Gaussian density with covariance matrix C. The determinant of C is the product of its eigenvalues, which for a circulant matrix are conveniently given by the entries of ĉ, the discrete Fourier transform of c (see the S1 Text for our conventions regarding discrete Fourier transforms). The scalar product uT C−1 u can also be written in terms of ĉ. Together, the first term on the r.h.s. of Eq (6) takes the simple form

log ps*,δ(u*som) = −(1/2) Σi [log(2π ĉi) + |û*i|2 / (n ĉi)], (10)

where û*i are the components of the discrete Fourier transform of u*. The Gaussian component of the membrane potential is implicitly given by the discretized somatic voltage modified by a discrete-time version of the spike-related kernel convolution,

u*i = u*som,i − ur − Σj αj s*i−j, (11)

where s* is the binned spike train (see below) and αi is a discretized version of the spike-related kernel. The time required to compute û* is determined by the complexity of the Fourier transform, which is of the order of n log n. This dramatic reduction in complexity (compared to n3) allows a fast evaluation of the log-likelihood.
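
The O(n log n) evaluation of the Gaussian term can be sketched as follows. NumPy's unnormalized DFT convention is an assumption here (the paper defers its conventions to S1 Text), and the direct O(n3) dense evaluation is included only as a sanity check on a small problem.

```python
import numpy as np

def gp_loglik_circulant(u, c):
    """log N(u; 0, C) for a circulant covariance C with first row c,
    evaluated via the FFT in O(n log n)."""
    n = len(u)
    lam = np.fft.fft(c).real                # eigenvalues of C (c is symmetric)
    u_hat = np.fft.fft(u)
    quad = np.sum(np.abs(u_hat) ** 2 / lam) / n   # u^T C^{-1} u
    logdet = np.sum(np.log(lam))
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

# Sanity check against the direct dense evaluation on a small problem
rng = np.random.default_rng(2)
n = 64
c = np.exp(-np.minimum(np.arange(n), n - np.arange(n)) / 5.0)  # wrapped OU
C = np.array([np.roll(c, i) for i in range(n)])                # circulant
u = rng.standard_normal(n)
direct = -0.5 * (n * np.log(2 * np.pi) + np.linalg.slogdet(C)[1]
                 + u @ np.linalg.solve(C, u))
```

The two evaluations agree to machine precision, while the FFT version avoids any matrix inversion.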

The spiking distribution is approximated by a Poisson distribution with constant rate within one time bin. For each bin i, s*i counts the number of spikes that occur in that bin, and the conditional likelihood of the spikes therefore reads

log p(s*,δ | u*som) = Σi [s*i log(ri Δt) − ri Δt − log(s*i!)], (12)

where ri = g[Ai + βu*i] with u*i as defined in Eq (11). If s* contains only zeros and ones (which can be accomplished given small enough bins), the last term vanishes.
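
The discrete-time Poisson likelihood is straightforward to implement. The helper below is a sketch (its name is mine, not from the paper); the lgamma term implements log(s_i!), which indeed vanishes for binary spike trains.

```python
import numpy as np
from math import lgamma

def spike_loglik(s, r, dt):
    """Discrete-time Poisson log-likelihood of binned spike counts s
    given per-bin rates r (Hz) and bin width dt (s)."""
    s = np.asarray(s, dtype=float)
    r = np.asarray(r, dtype=float)
    log_fact = np.array([lgamma(si + 1.0) for si in s])  # log(s_i!)
    return float(np.sum(s * np.log(r * dt) - r * dt - log_fact))

# For a binary spike train the factorial term contributes nothing:
ll = spike_loglik([0, 1, 0], [1.0, 2.0, 3.0], 0.1)
# ll = log(0.2) - 0.6
```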

Efficient parameter estimation.

Except for the parameter δ, which takes discrete values of multiples of the discretization step Δt, it is possible to analytically calculate the first and second partial derivatives of the likelihood function defined in Eq (6) with respect to the model parameters (ur, k, α, log r0, β, η) (see S1 Text) to facilitate the use of gradient ascent, Newton, and pseudo-Newton optimization algorithms. A desirable feature of an optimization problem is concavity of the objective function (in our case, the log-likelihood function). Even though the problem of finding optimal parameters for the AGAPE process is not concave, the optimization can be done in three alternating subspaces (see Fig 3). The full set of parameters Θ is divided into three parts: θGP for the GP parameters (ur, parameters of k), θspike kernel for the spike-related kernel parameters, and θspiking for the parameters controlling spike emission (log r0, β and parameters of η). The optimization then proceeds according to the following cycle: (1) the GP parameters are learned, (2) the spike-related kernel parameters are learned, and lastly (3) the spiking parameters are learned. In each step the remaining parameters are held fixed. The cycle is repeated until the parameters reach a region where the log likelihood is locally concave in the full parameter space, after which the optimization can be run in the full parameter space until it converges. Joint concavity of the log likelihood holds if all the eigenvalues of the Hessian matrix are strictly negative. As shown in [25], step (3) is concave for a certain class of gain functions g, including the exponential function, and linear parametrizations of the adaptation kernel. The same holds for the spiking term of the log-likelihood in step (2). The voltage term of the log likelihood of step (2) is concave by numerical inspection in the cases we considered. 
To summarize, steps (2) and (3) are concave and Newton’s method can be used in these steps as well as for the final concave optimization in the full space. Step (1) is non-concave and therefore a simple gradient ascent algorithm is used.
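
The alternating subspace scheme can be illustrated with a generic block-coordinate ascent loop. The toy two-parameter objective below is a stand-in for the AGAPE log-likelihood (which is only block-wise concave); everything in this sketch is an assumption for illustration, and in practice Newton steps would replace plain gradient steps in the concave blocks.

```python
import numpy as np

def block_ascent(grad, blocks, theta, lr=0.05, cycles=200):
    """Maximize an objective by cycling over parameter blocks,
    taking one gradient-ascent step per block per cycle."""
    for _ in range(cycles):
        for idx in blocks:
            g = grad(theta)
            theta[idx] += lr * g[idx]       # update only this block
    return theta

# Toy concave objective f(x, y) = -(x - 1)^2 - 2 (y + 0.5)^2
grad = lambda th: np.array([-2.0 * (th[0] - 1.0), -4.0 * (th[1] + 0.5)])

theta = block_ascent(grad, blocks=[[0], [1]], theta=np.zeros(2))
# theta converges toward the maximizer (1, -0.5)
```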

Fig 3. This schematic shows the optimization scheme that is used to learn the parameters of the AGAPE model when it is fitted to the data (for a given δ).

As long as the current parameter estimate sits in a non-concave region of the likelihood function, the top cycle optimizes over different subspaces of the parameter space. If and when a concave point is reached, the optimization proceeds in the full parameter space. This whole scheme is repeated for each value of δ in order to find the optimal one.

https://doi.org/10.1371/journal.pone.0142435.g003

The optimization over (ur, k, α, log r0, β, η) is repeated for every δ = 0, Δt, 2Δt, …, δmax in order to select the δ that maximizes the log-likelihood. The value of δmax is chosen such that it is less than the least upper bound of the support of the basis of the spike-related kernel α. Since the parameters ur, k, α, log r0, β, η are expected to change only a little when going from one δ to the next, the parameters learned for δ can be used as initial guesses for the neighboring values δ + Δt or δ − Δt. We thus get two different initializations, which we can exploit by starting e.g. with δ = 0, ascending through the sequence of candidate δ's up to δmax, and descending back to zero.
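
The warm-started up-then-down sweep over δ can be sketched generically. Here `fit` is a hypothetical stand-in for the full optimization of the previous section, returning a parameter estimate and a log-likelihood; the toy version peaks at an assumed δ = 4.

```python
import numpy as np

def sweep_delta(fit, theta0, deltas):
    """Up-then-down sweep over candidate delays, warm-starting each fit
    from the previous one; returns the best (delta, theta, loglik)."""
    results = {}
    theta = theta0
    for d in list(deltas) + list(reversed(deltas)):
        theta, ll = fit(d, theta)           # warm start from neighboring delta
        if d not in results or ll > results[d][1]:
            results[d] = (theta, ll)
    best = max(results, key=lambda d: results[d][1])
    return best, results[best][0], results[best][1]

# Toy stand-in whose log-likelihood peaks at delta = 4 (assumed)
toy_fit = lambda d, th: (th, -(d - 4.0) ** 2)
best, theta, ll = sweep_delta(toy_fit, np.zeros(3), deltas=[0, 2, 4, 6, 8])
# best == 4
```

Keeping the better of the ascending and descending fits for each δ gives each candidate two initializations, as described above.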

Validation with synthetic data

Despite this improvement in speed and tractability, the optimization is still riddled with multiple local optima, which requires the use of multiple random initializations. In order to demonstrate the validity of the fitting method, synthetic data of length 270.112 seconds (n = 270112, the same length as the in vivo dataset, see below) was generated with known parameters (δ = 4 ms, r0 = 4.15 Hz, β = 0.374 mV−1, and GP, spike-related and adaptation kernels as depicted in Fig 2D–2F). The learning algorithm was initialized with least-squares estimates of the covariance function parameters based on the empirical autocorrelation function of usom, and with the spike-related and adaptation kernels set to zero. The true underlying δ can be recovered from the synthetic data (Fig 2C). Moreover, the algorithm converges after a few dozen iterations (taking only three minutes on an ordinary portable computer) and, with δ set to 4 ms, recovers the correct GP, spike-related, and adaptation kernels (Fig 2D–2F). All ML estimates lie within a region of two standard deviations around the ground truth, where standard deviations are estimated from the observed Fisher information [34].
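
The two-standard-deviation error regions quoted throughout come from the observed Fisher information, i.e. the negative Hessian of the log-likelihood at the ML estimate. Below is a generic finite-difference sketch on a toy quadratic log-likelihood; in practice the analytic second derivatives (S1 Text) would be used instead.

```python
import numpy as np

def observed_fisher_se(loglik, theta_ml, eps=1e-4):
    """Standard errors from the inverse observed Fisher information,
    with the Hessian estimated by central finite differences."""
    p = len(theta_ml)
    H = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            ei = np.eye(p)[i] * eps
            ej = np.eye(p)[j] * eps
            H[i, j] = (loglik(theta_ml + ei + ej) - loglik(theta_ml + ei - ej)
                       - loglik(theta_ml - ei + ej)
                       + loglik(theta_ml - ei - ej)) / (4.0 * eps ** 2)
    cov = np.linalg.inv(-H)                 # inverse observed Fisher information
    return np.sqrt(np.diag(cov))

# Toy quadratic log-likelihood with curvature A: covariance should be A^{-1}
A = np.array([[4.0, 0.0], [0.0, 1.0]])
ll = lambda th: -0.5 * th @ A @ th
se = observed_fisher_se(ll, np.zeros(2))
# se is approximately [0.5, 1.0]
```

Doubling these standard errors gives the ± 2 standard-deviation regions shown in the figures.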

The model can fit in vivo data

We fitted the model to a number of in vivo traces from different animals and conditions (see 'Materials and Methods' for a detailed description of the data sets). We would like to remind the reader at this point that the model is never fitted to the raw membrane potential, but to a preprocessed, i.e. median-filtered and downsampled, dataset (see Materials and Methods). Because of this preprocessing, the model only sees the truncated action potentials which emerge from the median filter. This is reflected in the extracted spike-related kernel α, which has a smaller amplitude than the original action potential in the raw membrane potential data.

We show the detailed results of the model fitting for an example songbird HVC dataset. The optimal value of δ for this dataset was δ = 18 ms (see S3 Fig), with which the model captures the subthreshold and suprathreshold statistics (smaller values of δ compromise both the subthreshold and suprathreshold description, because the large upward fluctuations which precede spikes in this dataset are unlikely to arise from a GP). In particular, the stationary distribution of the membrane potential u is well approximated by a Gaussian (Fig 4B) and a pronounced after-hyperpolarization is seen in the spike-related kernel (Fig 4D). The subthreshold autocorrelation structure is well reproduced by the parametric autocorrelation function k (Fig 4C). The adaptation kernel reveals an interesting structure in the way the spiking statistics deviate from a Poisson process (Fig 4E). This feature of the spiking statistics is also reflected in the inter-spike interval (ISI) distribution (Fig 4F): both the data and the fitted model first show an increased, and then a significantly decreased, probability density when compared to a pure Poisson process. The remaining parameters are listed in Table 1 (errors denote two standard deviations, estimated from the Fisher information, see Materials and Methods). The model can be used to generate synthetic data, which is shown in Fig 4H.

Fig 4. The results of maximum likelihood (ML) parameter fitting to the example HVC dataset.

After fitting, we see (A) the removal of the spike-related kernel through the difference between the recorded trace and the subthreshold membrane potential u + ur; (B) the match between the stationary distribution of the subthreshold potential u and a Gaussian. We also observe that (C) the autocorrelation function of the data, Eq (14), is well reproduced by k(t) in Eq (15); (D) the spike-related kernel α(t) starts at −δ = −18 ms relative to the peak of the action potential, and the difference between the spike-triggered average (STA) and the spike-related kernel is attributed to the GP; and (E) that the adaptation kernel η(t) shows a distinct modulation of the firing rate which produces firing statistics significantly different from a Poisson process. This is also reflected in the inter-spike interval density ρ(τ) (F) of the data, which shows good qualitative agreement with a simulated AGAPE with the adaptation kernel as in (E) (thick red line), but not with a non-adaptive (i.e. Poisson) process (thin red line). After fitting, a two-second sample of synthetic data (H) looks similar to the in vivo data (G). In (G, H) vertical lines are drawn at the spike times. All red shaded regions denote ± 2 standard deviations, estimated from the observed Fisher information.

https://doi.org/10.1371/journal.pone.0142435.g004

Table 1. The values (± two standard deviations, estimated from the observed Fisher information) of the fitted parameters not shown in Fig 5 for the in vivo datasets described in the main text.

The last row shows the effective coupling strength between the membrane potential and the firing rate, given by β times the standard deviation σ of the membrane potential.

https://doi.org/10.1371/journal.pone.0142435.t001

In order to show the generality of the model, we fitted it to two more datasets: another HVC neuron and a neuron from mouse visual cortex. The parameter δ was found to take optimal values of 12 ms and 32 ms, respectively (to see how the fitted parameters change as a function of δ, see S3 and S4 Figs). The comparison of the GP, spike-related and adaptation kernels is shown in Fig 5, and the remaining parameters are listed in Table 1. The three cells show pronounced differences in autocorrelation structure, spike-related kernel and spike-history effects. In particular, two of the datasets show rather long autocorrelation lengths of the membrane potential and asymmetric spike-related kernels, whereas the third cell has a comparatively short autocorrelation length and very pronounced hyperpolarization. Adaptation strength also differs markedly between cells, balancing the differences in baseline firing rate r0 (see Table 1). The error bars on the adaptation kernel are small for the datasets with an abundance of spikes; in contrast, the adaptation kernel of the dataset consisting of very short trials with very few spikes is poorly constrained by the available data. Despite this fact, good agreement is achieved between the distribution of inter-spike intervals of the in vivo data and ISI statistics sampled from the AGAPE (see Fig 5, bottom row) for all datasets.

Fig 5. Fitting results for three different datasets.

The first dataset is the same as in Fig 4, i.e. an HVC neuron from an anesthetized Zebra Finch; the second is from HVC in an awake Zebra Finch, and the third is from visual cortex in an awake mouse. The panels show the results after fitting: the first row shows the GP covariance function k(t) (red) and the empirical autocorrelation (black), Eq (14); the second row the spike-related kernel α(t); the third row the adaptation kernel η(t); and the fourth row the inter-spike interval density ρ(t) (data ISI histogram in gray, simulated ISI distribution from the AGAPE in red). There are pronounced differences between datasets in all three kernels, showing the flexibility of the AGAPE model in describing a wide range of statistics. All red shaded regions denote ± 2 standard deviations, estimated from the observed Fisher information.

https://doi.org/10.1371/journal.pone.0142435.g005

The model does not overfit in vivo data

The AGAPE process has a fairly large number of parameters. Therefore it is important to check whether the model overfits the data, compromising its generalization performance. In short, a model with too many parameters tends to be poorly constrained by the data: when it is trained on one part of the data and then tested on another part on which it was not trained, the test performance will be significantly worse than the training performance.

Here, we use cross-validation to perform a factorial model comparison on an exemplary dataset in order to validate the different structural parts of the model. The procedure is described in detail in the Materials and Methods.

Model comparison is performed on the dataset and the results are shown in Fig 6, where the mean difference of per-bin log-likelihood, Eq (13) (see ‘Materials and Methods’), is shown for all models i ∈ {0, …, Gαβη} (here, 〈⋅〉j denotes the average over chunks j of the cross-validation). The results are very similar for the validation data (which was left out during training, but appeared in other training runs) and the test data, which was never seen during training. The most complex model () performs significantly better than any one of the simpler models on validation data, except where the difference is too small and lies inside a region of two standard errors of the mean. This confirms that most of the model features are required to provide an accurate description of the experimental data.
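The per-bin log-likelihood difference of Eq (13) and its standard error across cross-validation chunks can be computed as in the following sketch. The function name and the toy numbers are illustrative assumptions, not from the paper.

```python
import numpy as np

def per_bin_loglik_difference(ll_model, ll_full):
    """Mean per-bin log-likelihood difference (Delta p) between a candidate
    model and the most complex model, together with the standard error of
    the mean (S.E.M.) across cross-validation chunks."""
    diff = np.asarray(ll_model) - np.asarray(ll_full)
    mean = diff.mean()
    sem = diff.std(ddof=1) / np.sqrt(len(diff))
    return mean, sem

# Toy numbers: 8 CV chunks; the candidate model is worse by 0.05 nats/bin.
ll_full = np.array([-1.20, -1.18, -1.25, -1.22, -1.19, -1.21, -1.24, -1.20])
ll_candidate = ll_full - 0.05
mean, sem = per_bin_loglik_difference(ll_candidate, ll_full)
# a difference more negative than about two S.E.M.s is deemed significant
```
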

Fig 6. Comparison of the different models on dataset .

The relative measure of model performance, i.e. the per-bin log-likelihood difference Δp (see Eq (13)) between any model and the most complex model (), is significantly negative (with the exception of , and trivially ), implying that the added complexity improves the model fit without overfitting. This holds for both validation scores Δpvalid (black) and scores from unseen test data Δptest (red). Error bars denote one standard error of the mean (S.E.M.). The biggest improvement in fit quality is achieved by including the spike-related kernel (upper vs. lower part of the figure).

https://doi.org/10.1371/journal.pone.0142435.g006

Discussion

In this study, we introduced the AGAPE generative model for single-neuron statistics in order to describe the spontaneous dynamics of the somatic potential without reference to an input current. We showed that this model has a rich dynamical repertoire and can be fitted to data efficiently. By fitting a heterogeneous set of data, we finally demonstrated that the AGAPE model can be used for the systematic characterization and comparison of in vivo intracellular recordings.

Flexibility and tractability of the model

The AGAPE model provides a unified description of intracellular dynamics, offering a large degree of flexibility in accounting for the distinct statistical features of a neuron. As the example datasets demonstrate, the model readily teases apart the statistical differences between cells recorded in different animals (see Fig 5). This shows that the model is sensitive enough to distinguish even between datasets which are otherwise very similar.

We used a set of approximations and techniques to make the model fitting tractable, despite the non-concavity of the log-likelihood function. Multiple local maxima of the likelihood function can still make the fitting hard, especially when the quantity of data available for fitting is low. However, since one run of the fitting takes only a few minutes even on a portable computer, multiple initializations can be tried out in a relatively short amount of time.

Comparison with existing models

From an operational perspective, existing spiking neuron models can be divided into three main categories: stimulus-driven, current-driven and input-free spiking neurons. The first category contains phenomenological models that relate sensory stimuli to the spiking output of the neuron. The linear-nonlinear-Poisson model (LNP) [35], the generalized linear model (GLM) [20–22] and the GLM with additional latent variables [36] are typical examples in this category. Even though the spike generation of the AGAPE shares some similarities with those models, there is an important distinction to make. In those models, the convolved input (i.e. the output of the ‘L’ step of the LNP or the input filter of the GLM) is an internal variable that does not need to be mapped to the somatic membrane potential, whereas in our case, the detailed modeling of the membrane potential dynamics is an important part of the AGAPE. Consequently, those phenomenological models are descriptions of extracellular spiking recordings, whereas the AGAPE models the dynamics of the full membrane potential accessible with intracellular methods.

The second class of spiking models aims at bridging the gap between the input current and the spiking output. The rather simple integrate-and-fire types of models, such as the exponential integrate-and-fire [13] or the spike-response model [24, 37], as well as more biophysical models such as the Hodgkin-Huxley model [15], fall within this category. In contrast to those models, where the action potentials are caused by the input current, the AGAPE produces a fluctuating membrane potential and stochastic spikes without reference to an input current.

The last category of models aims at producing spontaneous spiking activity without an explicit dependence on a given input [18, 38, 39]. For example, Cunningham et al. propose a doubly stochastic process where the spike generation is given by a gamma interval process and the firing intensity by a rectified Gaussian process, which provides a flexible description of the firing statistics [38]. However, the membrane potential dynamics is not modeled. In contrast, the neuronal dynamics assumed by Pfister et al. [18] explicitly models the membrane potential (as a simple Ornstein-Uhlenbeck process), but is not flexible enough to capture the dynamics of in vivo recordings. Moreover, any of the current-driven spiking neuron models mentioned above can be turned into an input-independent model by assuming some additional input noise. So why is there a need to go beyond stochastic versions of those models? An integrate-and-fire model with additive Gaussian white noise is certainly fittable, but does not have the flexibility to model arbitrary autocorrelations of the membrane potential. At the other end of the spectrum, a Hodgkin-Huxley model with some colored noise would certainly exhibit a richer dynamical repertoire, but fitting it remains challenging [16] (but see [17]). The main advantage of the AGAPE is that it is at the same time very flexible and easily fittable. The flexibility mostly comes from the fact that any covariance function can be assumed for the GP; the relative ease of fitting comes from the circulant approximation as well as from the presence of concave subspaces in the full parameter space.

Another feature that distinguishes our model from existing ones is the explicit modeling of the spike-related trajectory instead of the spike-triggered average (as e.g. in [40]). Even though both concepts share similarities (both capture a sudden and strong input that leads to a spike), there is an important distinction. The spike-triggered average also captures the (possibly smaller) upward fluctuations of the membrane potential which cause the spike, while the spike-related kernel α precisely avoids capturing those fluctuations, letting the GP kernel explain them.

If we removed the spike-triggered average, e.g. in synthetic data where the true coupling parameter β is large, we would also remove the characteristic upward fluctuation of the membrane potential which causes the spike. The fitting procedure would then fail to find the correct relation between the values of the membrane potential and the observed spike patterns, and would therefore choose a β close to zero. Thus, if something has to be removed around an action potential (and our model comparison, Fig 6, demonstrates convincingly that this is necessary), the formulation of the model demands that it be parametrically adjustable. This is the main reason why, in our model framework, the spike-triggered average has to be rejected as a viable extraction method. Note that if the true coupling parameter β is close to zero, the spike-triggered average is close to the extracted spike-related kernel α. For data where the action potential shape shows considerable variability, the model could be generalized to include a stochastic or a history-dependent spike-related kernel.

Extensions and future directions

Despite the focus of the present work on single-neuron spontaneous dynamics, the AGAPE model admits a straightforward inclusion of both stimulus-driven and recurrent input. The inclusion of stimulus-driven input is similar to that of the GLM and allows the model to capture the neuronal correlate of stimulus-specific computation. The recurrent input makes the framework adaptable to multi-neuron recordings in vivo. While intracellular recordings from many neurons in vivo are very hard to perform, the rapid development of new recording techniques (e.g. voltage-sensitive dyes) makes the future availability of subthreshold data with sufficient time resolution at least conceivable. The full-fledged model would allow questions regarding the relative importance of background activity, recurrent activity due to computation in the circuit, and activity directly evoked by sensory stimuli to be answered in a systematic way. In this setup, the contribution of the GP-distributed membrane potential to the overall fluctuations would be reduced (since it would have to account for fewer unrecorded neurons) while the contribution of the recorded neurons would increase. This modified model can be seen as a generalization of the stochastic spike-response model [24] or of the GLM (if the internal variable of the GLM is interpreted as the membrane potential).

So far, we assumed that weak synaptic inputs are captured by the Gaussian process while the strong inputs that lead to spikes are captured by the spike-related kernel α. A straightforward extension of the model would be to consider additional intermediate inputs that can be captured neither by the GP nor by the spike-related kernel α, but that can drive the neuron to emit (with a given probability) an action potential. Those intermediate inputs could be modeled as filtered Poisson events. The inclusion of those latent events would increase the complexity of the model and at the same time change some of the fitted parameters. In particular, we expect that it would increase the coupling β between the membrane potential and the firing rate and reduce the optimal delay δ between the decision time and the peak of the action potential. This could also provide a better way to separate the subthreshold dynamics (which depends on the input activity) from the suprathreshold dynamics (which would depend only on the neuron dynamics, and not on the strong inputs that it receives, as is the case now).

A central assumption of our model is that of a Gaussian marginal distribution of the subthreshold potential. Although it is remarkably valid for the dataset considered here (i.e. the HVC dataset, see also Fig 4B), datasets characterized by a distinctly non-Gaussian voltage distribution even after spike-related kernel removal are beyond the scope of the current model. In order to address this limitation, the Gaussian process could be replaced by a different stochastic process, e.g. a nonlinear diffusion process, permitting non-Gaussian and in fact arbitrary marginal distributions. Moreover, a reset behavior similar to the one exhibited by an integrate-and-fire model [13] could be achieved with a non-stationary GP whose mean is reset after a spike. Both modifications would have a severe impact on the technical difficulty of model fitting. Therefore, the Gaussian assumption can be regarded as a useful compromise, preferable to a perfect account of the skewness of the marginal distribution.

The spike-related kernel is an important feature of the model: it separates subthreshold and suprathreshold dynamics by ridding the membrane potential recording of the stereotypical waveforms associated with a spike. The spike-related kernel as modeled in the AGAPE has no bearing on the probability of the spikes, whereas the adaptation kernel η, which modulates the firing rate after a spike, is not visible in the somatic membrane potential dynamics. A simple extension of the model could include spike-triggered adaptation currents which affect both the somatic membrane potential and the firing intensity. Another possible extension is to allow the firing probability to depend on a filtered version of the subthreshold potential u instead of the instantaneous value of u at a time δ before the peak of the action potential. Both of these extensions would improve the biophysical interpretability of the AGAPE, but they would also vastly increase the number of parameters. Therefore, a model comparison would be required to determine what level of model complexity is needed to characterize the statistics of the recording.

In the present study, the AGAPE was fit to datasets from two different animals and brain regions. A systematic fitting to in vivo intracellular data from a wide range of animals and brain regions would constitute a classification scheme which would not only complement existing classifications of neurons based on electrophysiological, morphological, histological, and biochemical data, such as the one in [41], but would also relate directly to the computational tasks the brain faces in vivo.

Another application of the AGAPE could be in the context of a normative theory of short-term plasticity. Indeed, it has been recently hypothesized that short-term plasticity performs Bayesian inference of the presynaptic membrane potential based on the observed spike-timing [18, 42]. According to this theory, short-term plasticity properties have to match the in vivo statistics of the presynaptic neuron. Since the AGAPE provides a realistic generative model of presynaptic activity under which inference is supposedly performed, our model can be used to make testable predictions on the dynamical properties of downstream synapses.

Materials and Methods

Description of the datasets used

  1. Dataset is a recording from an HVC neuron of an anesthetized Zebra Finch (Ondracek and Hahnloser, unpublished recordings). The recording has a total length of 270 seconds at 32 kHz (see Fig 1A for a snippet of this recording) and contains 2281 action potentials.
  2. Dataset is another recording from a projection cell in HVC of Zebra Finch, but this time the animal is awake (Vallentin and Long, unpublished recordings). It consists of 6 individual recordings which together have a length of 152.5 seconds at 40 kHz. This dataset is used for model comparison (see below).
  3. Dataset is from similar conditions as (Vallentin and Long, unpublished recordings, see [43, 44, 45] for similar recordings) and has a length of 60 seconds.
  4. Dataset consists of 19 individual trials of 4.95 s duration at 20 kHz. The recording was obtained from a pyramidal neuron in layer 2/3 of awake mouse visual cortex [46].

Preprocessing

Intracellular voltage traces are often recorded at a rate between 20 and 40 kHz. This allows the action potentials to be resolved very clearly and precise spike timings to be extracted. However, for the study of the subthreshold regime, this high sampling rate is not required, and the data may therefore be down-sampled to roughly 1 kHz after obtaining the precise spike timings. Prior to down-sampling, we smooth with a median filter of 1 ms width in order to truncate the sharp action potential peaks and avoid artifacts (see details below).

We define the spike peak times operationally as the times at which the local maximum of the action potential is reached. This means that occurs after action potential onset, and hence the spike-related kernel has to extend into the past of . The spike-related kernel starts at the nominal spike time, which is shifted from the peak time by a fixed amount δ, i.e. . The nominal spike times are then binned to 1 ms, yielding a binary spike train si = 0, 1.

For usom(t) we use a preprocessed version of the recorded trace which has been median-filtered with a filter width of 1 ms and then down-sampled to 1 kHz, making it the same length as the binary spike train. This procedure preserves the relevant correlation structure of the membrane potential while reducing the computational demands of fitting as much as possible. In the data we examined, the median-filtered membrane potential has a dip after , but unless down-sampling is done carefully, this dip sometimes occurs one time step after and sometimes right at in the down-sampled usom. Since this dip has to be captured by the spike-related kernel, which has a fixed shape for all action potentials, the down-sampling procedure has to ensure that the dip always occurs in the first time step. We solved this problem by setting the down-sampled value of usom at (rounded to 1 ms) to the value of usom at before down-sampling.

While applying the model to the raw recording uraw directly (without first filtering and down-sampling it) is possible in principle, it comes at a massively increased computational cost. Given the time required to fit the model and the amount of data to be handled, it is therefore sensible to include this preprocessing stage.
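The preprocessing described above (a roughly 1 ms median filter followed by down-sampling to 1 kHz) can be sketched as follows. This is a minimal illustration, not the authors' code; the window size and edge handling are assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def preprocess(u_raw, fs_raw, fs_out=1000.0, medfilt_ms=1.0):
    """Median-filter the raw trace (truncating sharp spike peaks), then
    down-sample to fs_out. Window and edge handling are assumptions."""
    win = max(1, int(round(medfilt_ms * 1e-3 * fs_raw)))
    if win % 2 == 0:
        win += 1  # odd window so the median is centred on each sample
    pad = win // 2
    padded = np.pad(u_raw, pad, mode='edge')
    filtered = np.median(sliding_window_view(padded, win), axis=-1)
    step = int(round(fs_raw / fs_out))
    return filtered[::step]

# 1 s of a 32 kHz trace with a one-sample artificial "spike" at 0.5 s
fs = 32000
t = np.arange(fs) / fs
u = -60.0 + 2.0 * np.sin(2 * np.pi * 5 * t)
u[fs // 2] = 20.0
u_ds = preprocess(u, fs)  # 1000 samples at 1 kHz; the peak is truncated
```

Note that the single-sample peak is removed by the median filter before down-sampling, which is exactly the artifact-avoidance motivation given in the text.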

Parametrizations and initializations

We already introduced the parameters ur, r0 and β. Additional parameters are needed to describe the autocorrelation k(t), the spike-related kernel α(t) and the adaptation kernel η(t).

The covariance function of the GP has to be parametrized such that it can explain the autocorrelation structure of the data. Therefore, an initial examination of the empirical autocovariance of usom, i.e. Eq (14) for j = 0, …, jmax, is done in order to determine a suitable basis. Here, we used a sum of Ornstein-Uhlenbeck (OU) kernels, i.e. Eq (15), where nk = 10 and θi = 2i ms−1. The autocovariance has to remain positive definite, which induces the linear constraints of Eq (16) on , where are the discrete Fourier transforms of the circulant basis vectors. The optimization problem is non-concave in the subspace of , and multiple local maxima and saddle points can occur. Therefore, multiple initializations have to be made in order to find a potential global optimum. In general, the least-squares fit of k(t) to the empirical autocovariance function Eq (14) yields a good starting point for the optimization.
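Under the circulant approximation, the positive-definiteness constraint of Eq (16) reduces to a sign check on the discrete Fourier transform of the covariance row. The sketch below illustrates this for a sum of OU kernels; the grid size, rate spacing and wrap-around construction are illustrative assumptions rather than the paper's exact parametrization.

```python
import numpy as np

def ou_kernel_sum(weights, thetas, n, dt=1.0):
    """k(t) = sum_i w_i exp(-theta_i |t|) evaluated on a circular grid of
    n bins of width dt (the circulant approximation)."""
    lags = np.arange(n) * dt
    circ = np.minimum(lags, n * dt - lags)      # wrap-around lag
    basis = np.exp(-np.outer(thetas, circ))     # shape (n_k, n)
    return weights @ basis

def is_positive_definite(k_row):
    """A circulant covariance matrix is diagonalized by the DFT, so
    positive definiteness reduces to all Fourier coefficients > 0."""
    return bool(np.all(np.fft.rfft(k_row).real > 0))

# ten OU components; the rate spacing here is illustrative, not the paper's
thetas = 2.0 ** np.arange(1, 11) / 1000.0   # ms^-1
k = ou_kernel_sum(np.ones(10), thetas, n=512)
# positive weights give a positive spectrum, hence a valid covariance
```

Flipping the sign of the weights makes the spectrum negative, which is exactly the kind of parameter combination the linear constraints exclude.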

The spike-rate adaptation kernel is chosen to be a linear combination of ten different alpha shapes, Eq (17), where we chose nη = 10, νi = 2ωi and ωi = 2i ms−1.
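As an illustration, such a kernel can be built as a weighted sum of alpha shapes. Since Eq (17) is not reproduced here, the sketch below uses one common normalization, ν²t e^(−νt), and an assumed rate spacing.

```python
import numpy as np

def alpha_basis(t, nus):
    """Alpha shapes nu^2 * t * exp(-nu*t); this normalization is an
    assumption, since Eq (17) is not reproduced in the text."""
    t = np.asarray(t, dtype=float)[None, :]
    nus = np.asarray(nus, dtype=float)[:, None]
    return nus ** 2 * t * np.exp(-nus * t)

def adaptation_kernel(t, coeffs, nus):
    """eta(t) as a linear combination of n_eta alpha shapes."""
    return np.asarray(coeffs) @ alpha_basis(t, nus)

t = np.arange(0.0, 200.0)                # lag after a spike, in ms
nus = 2.0 ** np.arange(1, 11) / 1000.0   # assumed rate spacing, ms^-1
eta = adaptation_kernel(t, -np.ones(10), nus)  # negative: suppression
# eta(0) = 0, eta dips below zero, then relaxes back towards zero
```
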

Since the median filter time constant is short, the voltage change around the spike can be fast, requiring a flexible spike-related kernel basis. Most of this flexibility is required around t = δ. Because δ is adapted, we choose a discrete parametrization which has equal flexibility from t = 0 up to a maximum time; in our case, this maximum is at t = 60 ms, and therefore our parametrization of the spike-related kernel reads Eq (18), where ai, i = 1, …, 60, are the free parameters. Since the spike-related kernel fitting is concave, the large number of parameters does not lead to a dramatic increase in computational time. Nor does it lead to overfitting, as evidenced by the smoothness of the fitted kernels (see Figs 4 and 5 in the main text and S2–S4 Figs) and by the model comparison results (see Fig 6).

Model validation

We performed a factorial model comparison (see Fig 6) where the four factors were the presence or absence of each of the following: multiple OU components in the GP autocorrelation function (see Eq (15); as opposed to only one OU kernel with a variable time constant), the spike-related kernel α, the coupling between u and s (through β), and adaptation η, giving a total of 16 different models. We use the nomenclature that is the simplest model, i.e. α = β = η = 0 and only one OU component, having only four parameters (ur, θ, σ and r0). A subscript G (for GP) indicates that we use the multiple-OU basis, and any other subscript indicates that the corresponding parameter is adjustable in addition to the parameters already present in and those associated with the other subscripts. E.g. indicates that we use the multiple-OU basis and allow a non-zero spike-related kernel, and that there are now 73 parameters (δ, ur, θi, ai for i = 1, …, 60, and log r0). The parameter δ is optimized only for the 12 out of 16 models which depend on it, i.e. those with at least β ≠ 0 or α ≠ 0.

For each of the models, we performed eight-fold cross-validation [47] on dataset in order to assess the models’ generalization performance. The entire dataset was cut into eight equally-sized chunks dj, j = 1, …, 8, each of length 15 s (n = 15000), and six chunks of 3 s, j = 1, …, 6, were set aside as a test set (n′ = 3000). Each model was then trained on seven of the eight chunks (treating them as independent samples), giving an optimal set of parameters and a training per-bin log-likelihood . Then the validation likelihood of the left-out chunk #j was evaluated. The unseen test data is used for a final benchmark of model performance, where the best set of parameters is selected for each model, i.e. .
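The leave-one-chunk-out scheme above can be sketched as follows. The training and likelihood routines are placeholders standing in for the actual AGAPE fitting; the toy "model" (a training-set mean scored by a unit-variance Gaussian log-density) is purely illustrative.

```python
import numpy as np

def eightfold_cv(chunks, train_fn, loglik_fn):
    """Leave-one-chunk-out CV: train on all chunks but chunk j, evaluate
    the per-bin log-likelihood on the left-out chunk. train_fn and
    loglik_fn stand in for the actual AGAPE fitting and likelihood."""
    scores, params = [], []
    for j in range(len(chunks)):
        train = [c for i, c in enumerate(chunks) if i != j]
        theta = train_fn(train)
        params.append(theta)
        scores.append(loglik_fn(theta, chunks[j]))
    best = params[int(np.argmax(scores))]  # parameters of the best fold
    return np.array(scores), best

# Toy stand-ins on 8 chunks of n = 15000 "bins" each
rng = np.random.default_rng(0)
chunks = [rng.normal(0.0, 1.0, 15000) for _ in range(8)]
train_fn = lambda tr: float(np.mean(np.concatenate(tr)))
loglik_fn = lambda mu, d: float(np.mean(-0.5 * (d - mu) ** 2
                                        - 0.5 * np.log(2 * np.pi)))
scores, best = eightfold_cv(chunks, train_fn, loglik_fn)
```

The best fold's parameters are then the ones carried forward to the held-out test set, mirroring the selection step described in the text.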

Supporting Information

S1 Text. Supplementary Text.

Information on Discrete Fourier Transforms, circulant matrices, the circulant approximation, sampling from GPs using Fast Fourier Transforms, details of the optimization scheme, and the gradients and Hessians used for the optimization.

https://doi.org/10.1371/journal.pone.0142435.s001

(PDF)

S1 Fig. Supplementary Figure.

Comparison of in vivo and artificial data snippets for datasets and , analogous to Fig 3G and 3H. The scale (shown on panel D) is the same for all four panels. Vertical lines are drawn at the spiking times. (A) A 2-second sample of in vivo activity from dataset (Zebra Finch HVC). (B) Artificial data sampled from AGAPE with parameters learned from dataset . (C) A 2-second sample of in vivo activity from dataset (mouse visual cortex). (D) Artificial data sampled from AGAPE with parameters learned from dataset .

https://doi.org/10.1371/journal.pone.0142435.s002

(PDF)

S2 Fig. Supplementary Figure.

The fitting result as a function of the parameter δ for dataset ; see the color code next to the plot of the marginal distribution of u in the second row of the left column. The top left panel shows that the log likelihood peaks at δ = 18 ms, and the bottom right panel shows the decrease of the effective coupling strength as δ increases.

https://doi.org/10.1371/journal.pone.0142435.s003

(EPS)

S3 Fig. Supplementary Figure.

The fitting result as a function of the parameter δ for dataset ; see the color code next to the plot of the marginal distribution of u in the second row of the left column. The top left panel shows that the log likelihood peaks at δ = 12 ms, and the bottom right panel shows the decrease of the effective coupling strength as δ increases.

https://doi.org/10.1371/journal.pone.0142435.s004

(EPS)

S4 Fig. Supplementary Figure.

The fitting result as a function of the parameter δ for dataset ; see the color code next to the plot of the marginal distribution of u in the second row of the left column. The top left panel shows that the log likelihood peaks at δ = 32 ms, and the bottom right panel shows the decrease of the effective coupling strength as δ increases.

https://doi.org/10.1371/journal.pone.0142435.s005

(EPS)

Acknowledgments

We would like to thank Janie Ondracek and Richard Hahnloser in Zürich, Switzerland, Daniela Vallentin and Michael Long at NYU, Bilal Haider and Matteo Carandini at UCL for kindly providing the data for this study. We also thank Máté Lengyel, Christian Pozzorini and Johanni Brea for helpful discussions.

Author Contributions

Analyzed the data: SCS. Wrote the paper: SCS JPP. Designed the software used in analysis: SCS. Developed the mathematical methods: SCS JPP. Performed the simulations: SCS.

References

  1. Woody CD, Gruen E. Characterization of electrophysiological properties of intracellularly recorded neurons in the neocortex of awake cats: a comparison of the response to injected current in spike overshoot and undershoot neurons. Brain Research; 1978;158(2):343–357. pmid:709370
  2. Baranyi A, Szente MB, Woody CD. Electrophysiological characterization of different types of neurons recorded in vivo in the motor cortex of the cat. II. Membrane parameters, action potentials, current-induced voltage responses and electrotonic structures. Journal of Neurophysiology; 1993;69(6):1865–1879. pmid:8350127
  3. Steriade M, Timofeev I, Grenier F. Natural waking and sleep states: a view from inside neocortical neurons. Journal of Neurophysiology; 2001;85(5):1969–1985. pmid:11353014
  4. Matsumura M, Cope T, Fetz EE. Sustained excitatory synaptic input to motor cortex neurons in awake animals revealed by intracellular recording of membrane potentials. Experimental Brain Research; 1988;70(3):463–469. pmid:3384047
  5. Poulet JFA, Petersen CCH. Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice. Nature; 2008;454(7206):881–885. pmid:18633351
  6. Lee AK, Manns ID, Sakmann B, Brecht M. Whole-cell recordings in freely moving rats. Neuron; 2006;51(4):399–407. pmid:16908406
  7. El Boustani S, Marre O, Béhuret S, Baudot P, Yger P, Bal T, et al. Network-state modulation of power-law frequency-scaling in visual cortical neurons. PLoS Computational Biology; 2009;5(9):e1000519. pmid:19779556
  8. Berkes P, Orbán G, Lengyel M, Fiser J. Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science; 2011;331(6013):83–87. pmid:21212356
  9. Lapicque L. Recherches quantitatives sur l’excitation électrique des nerfs traitée comme une polarisation. Journal de Physiologie et de Pathologie Générale; 1907;9(1):620–635.
  10. Stein RB. The information capacity of nerve cells using a frequency code. Biophysical Journal; 1967;7(6):797–826. pmid:19210999
  11. Latham PE, Richmond BJ, Nelson PG, Nirenberg S. Intrinsic dynamics in neuronal networks. I. Theory. Journal of Neurophysiology; 2000;83(2):808–827. pmid:10669496
  12. Fourcaud-Trocmé N, Hansel D, van Vreeswijk C, Brunel N. How spike generation mechanisms determine the neuronal response to fluctuating inputs. The Journal of Neuroscience; 2003;23(37):11628–11640. pmid:14684865
  13. Brette R, Gerstner W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology; 2005;94(5):3637–3642. pmid:16014787
  14. Hille B. Ion channels of excitable membranes. Sinauer Associates Incorporated; 2001.
  15. Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology; 1952;117(4):500. pmid:12991237
  16. Gerstner W, Naud R. How good are neuron models? Science; 2009;326(5951):379–380. pmid:19833951
  17. Druckmann S, Banitt Y, Gidon A, Schürmann F, Markram H, Segev I. A novel multiple objective optimization framework for constraining conductance-based neuron models by experimental data. Frontiers in Neuroscience; 2007;1(1):7–18. pmid:18982116
  18. Pfister JP, Dayan P, Lengyel M. Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials. Nature Neuroscience; 2010;13(10):1271–1275. pmid:20852625
  19. Møller J, Syversveen AR, Waagepetersen RP. Log Gaussian Cox processes. Scandinavian Journal of Statistics; 1998;25(3):451–482.
  20. Truccolo W, Eden UT, Fellows MR, Donoghue JP, Brown EN. A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. Journal of Neurophysiology; 2005;93(2):1074–1089. pmid:15356183
  21. Pillow J, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky EJ, et al. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature; 2008;454(7207):995–999. pmid:18650810
  22. Paninski L, Ahmadian Y, Ferreira DG, Koyama S, Rahnama Rad K, Vidne M, et al. A new look at state-space models for neural data. Journal of Computational Neuroscience; 2009;29(1–2):107–126. pmid:19649698
  23. Rasmussen CE, Williams CKI. Gaussian processes for machine learning. The MIT Press; 2006.
  24. Gerstner W, Kistler WM. Spiking neuron models. Single Neurons, Populations, Plasticity. Cambridge University Press; 2002.
  25. Paninski L. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems; 2004;15(4):243–262.
  26. Pozzorini C, Naud R, Mensi S, Gerstner W. Temporal whitening by power-law adaptation in neocortical neurons. Nature Neuroscience; 2013;16(7):942–948. pmid:23749146
  27. Buzsáki G. Theta oscillations in the hippocampus. Neuron; 2002;33(3):325–340. pmid:11832222
  28. Shinomoto S, Tsubo Y. Modeling spiking behavior of neurons with time-dependent Poisson processes. Physical Review E; 2001;64(4):041910.
  29. Lindner B, Schimansky-Geier L, Longtin A. Maximizing spike train coherence or incoherence in the leaky integrate-and-fire model. Physical Review E; 2002;66(3):031916.
  30. Softky WR, Koch C. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. The Journal of Neuroscience; 1993;13(1):334–350. pmid:8423479
  31. Katsaggelos AK, Lay KT. Maximum likelihood blur identification and image restoration using the EM algorithm. IEEE Transactions on Signal Processing; 1991;39(3):729–733.
  32. Bach FR, Jordan MI. Learning graphical models for stationary time series. IEEE Transactions on Signal Processing; 2004;52(8):2189–2199.
  33. Gray RM. Toeplitz and circulant matrices: A review. Foundations and Trends in Communications and Information Theory; 2006;2(3):155–239.
  34. Efron B, Hinkley DV. Assessing the accuracy of the maximum likelihood estimator: observed versus expected Fisher information. Biometrika; 1978;65(3):457–487.
  35. Chichilnisky EJ. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems; 2001;12(2):199–213.
  36. Vidne M, Ahmadian Y, Shlens J, Pillow J, Kulkarni JE, Litke AM, et al. Modeling the impact of common noise inputs on the network activity of retinal ganglion cells. Journal of Computational Neuroscience; 2012;33(1):97–121. pmid:22203465
  37. Jolivet R, Rauch A, Lüscher HR, Gerstner W. Predicting spike timing of neocortical pyramidal neurons by simple threshold models. Journal of Computational Neuroscience; 2006;21(1):35–49. pmid:16633938
  38. Cunningham JP, Byron MY, Shenoy KV, Sahani M. Inferring neural firing rates from spike trains using Gaussian processes. In: Advances in Neural Information Processing Systems; 2007. p. 329–336.
  39. Macke JH, Buesing L, Cunningham JP. Empirical models of spiking in neural populations. Advances in Neural Information Processing Systems; 2011;24:1350–1358.
  40. Mensi S, Naud R, Pozzorini C, Avermann M, Petersen CCH, Gerstner W. Parameter extraction and classification of three cortical neuron types reveals two distinct adaptation mechanisms. Journal of Neurophysiology; 2012;107(6):1756–1775. pmid:22157113
  41. Markram H, Toledo-Rodriguez M, Wang Y, Gupta A, Silberberg G, Wu C. Interneurons of the neocortical inhibitory system. Nature Reviews Neuroscience; 2004;5(10):793–807. pmid:15378039
  42. Pfister JP, Dayan P, Lengyel M. Know thy neighbour: A normative theory of synaptic depression. Advances in Neural Information Processing Systems; 2009;22:1464–1472.
  43. Long MA, Jin DZ, Fee MS. Support for a synaptic chain model of neuronal sequence generation. Nature; 2010;468(7322):394–399. pmid:20972420
  44. Hamaguchi K, Tschida KA, Yoon I, Donald BR, Mooney R. Auditory synapses to song premotor neurons are gated off during vocalization in zebra finches. eLife; 2014;3:e01833. pmid:24550254
  45. Vallentin D, Long MA. Motor origin of precise synaptic inputs onto forebrain neurons driving a skilled behavior. The Journal of Neuroscience; 2015;35(1):299–307. pmid:25568122
  46. Haider B, Häusser M, Carandini M. Inhibition dominates sensory responses in the awake cortex. Nature; 2013;493(7430):97–100. pmid:23172139
  47. Arlot S, Celisse A. A survey of cross-validation procedures for model selection. Statistics Surveys; 2010;4:40–79.