
The Sign Rule and Beyond: Boundary Effects, Flexibility, and Noise Correlations in Neural Population Codes

  • Yu Hu ,

    huyu@uw.edu

    Affiliation Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America

  • Joel Zylberberg,

    Affiliation Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America

  • Eric Shea-Brown

    Affiliations Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America, Program in Neurobiology and Behavior, University of Washington, Seattle, Washington, United States of America, Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America

Abstract

Over repeat presentations of the same stimulus, sensory neurons show variable responses. This “noise” is typically correlated between pairs of cells, and a question with a rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem – neural tuning curves, etc. – held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR) — if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. The same generality holds for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions, under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all.

Author Summary

Sensory systems communicate information to the brain — and brain areas communicate between themselves — via the electrical activities of their respective neurons. These activities are “noisy”: repeat presentations of the same stimulus do not yield identical responses every time. Furthermore, the neurons' responses are not independent: the variability in their responses is typically correlated from cell to cell. How does this change the impact of the noise — for better or for worse? Our goal here is to classify (broadly) the sorts of noise correlations that are either good or bad for enabling populations of neurons to transmit information. This is helpful as there are many possibilities for the noise correlations, and the set of possibilities becomes large for even modestly sized neural populations. We prove mathematically that, for larger populations, there are many highly diverse ways that favorable correlations can occur. These often differ from the noise correlation structures that are typically identified as beneficial for information transmission – those that follow the so-called “sign rule.” Our results help in interpreting some recent data that seems puzzling from the perspective of this rule.

Introduction

Neural populations typically show correlated variability over repeated presentations of the same stimulus [1]–[4]. These are called noise correlations, to differentiate them from correlations that arise when neurons respond to similar features of a stimulus. Such signal correlations are measured by observing how pairs of mean (trial-averaged) neural responses co-vary as the stimulus is changed [3], [5].

How do noise correlations affect the population's ability to encode information? This question is well-studied [3], [5]–[16], and prior work indicates that the presence of noise correlations can either improve stimulus coding, diminish it, or have little effect (Fig. 1). Which case occurs depends richly on details of the signal and noise correlations, as well as the specific assumptions made. For example, [8], [9], [14] show that a classical picture — wherein positive noise correlations prevent information from increasing linearly with population size — does not generalize to heterogeneously tuned populations. Similar results were obtained by [17], and these examples emphasize the need for general insights.

Figure 1. Different structures of correlated trial-to-trial variability lead to different coding accuracies in a neural population.

(Modified and extended from [5].) We illustrate the underlying issues via a three neuron population, encoding two possible stimulus values (yellow and blue). Neurons' mean responses are different for each stimulus, representing their tuning. Trial-to-trial variability (noise) around these means is represented by the ellips(oid)s, which show 95% confidence levels. This noise has two aspects: for each individual neuron, its trial-to-trial variance; and at the population level, the noise correlations between pairs of neurons. We fix the former (as well as mean stimulus tuning), and ask how noise correlations impact stimulus coding. Different choices (A–D) of noise correlations affect the orientation and shape of response distributions in different ways, yielding different levels of overlap between the full (3D) distributions for each stimulus. The smaller the overlap, the more discriminable are the stimuli and the higher the coding performance. We also show the 2D projections of these distributions, to facilitate the comparison with the geometrical intuition of [5]. First, Row A is the reference case where neurons' noise is independent: zero noise correlations. Row B illustrates how noise correlations can increase overlap and worsen coding performance. Row C demonstrates the opposite case, where noise correlations are chosen consistently with the sign rule (SR) and information is enhanced compared to the independent noise case. Intriguingly, Row D demonstrates that there are more favorable possibilities for noise correlations: here, these violate the SR, yet improve coding performance vs. the independent case. Detailed parameter values are listed in Methods Section “Details for numerical examples and simulations”.

https://doi.org/10.1371/journal.pcbi.1003469.g001

Thus, we study a more general mathematical model, and investigate how coding performance changes as the noise correlations are varied. Figure 1, modified and extended from [5], illustrates this process. In this figure, the only aspect of the population responses that differs from case to case is the noise correlations, resulting in differently shaped distributions. These different noise structures lead to different levels of stimulus discriminability, and hence coding performance. The different cases illustrate our approach: given any set of tuning curves and noise variances, we study how encoded stimulus information varies with respect to the set of all pairwise noise correlations.

Compared to previous work in this area, there are two key differences that make our analysis novel: we make no particular assumptions about the structure of the tuning curves, and we do not restrict ourselves to any particular correlation structure, such as the “limited-range” correlations often used in prior work [5], [7], [8]. Our results still apply to the previously studied cases, but also hold much more generally. This approach leads us to derive mathematical theorems relating encoded stimulus information to the set of pairwise noise correlations. We prove the same theorems for several common measures of coding performance: the linear Fisher information, the precision of the optimal linear estimator (OLE [18]), and the mutual information between Gaussian stimuli and responses.

First, we prove that coding performance is always enhanced – relative to the case of independent noise – when the noise and signal correlations have opposite signs for all cell pairs (see Fig. 1). This “sign rule” (SR) generalizes prior work. Importantly, the converse is not true: noise correlations that perfectly violate the SR – and thus have the same signs as the signal correlations – can yield better coding performance than does independent noise. Thus, as previously observed [8], [9], [14], the SR does not provide a necessary condition for correlations to enhance coding performance.

Since experimentally observed noise correlations often have the same signs as the signal correlations [3], [6], [19], new theoretical insights are needed. To that effect, we develop a new organizing principle: optimal coding will always be obtained on the boundary of the set of allowed correlation coefficients. As we discuss, this boundary can be defined in flexible ways that incorporate constraints from statistics or biological mechanisms.

Finally, we identify conditions under which appropriately chosen noise correlations can yield coding performance as good as would be obtained with deterministic neural responses. For large populations, these conditions are satisfied with high probability, and the set of such correlation matrices is very high-dimensional. Many of them also strongly violate the SR.

Results

The layout of our Results section is as follows. We will begin by describing our setup, and the quantities we will be computing, in Section “Problem setup”.

In Section “The sign rule revisited”, we will then discuss our generalized version of the “sign rule”, Theorem 1: noise correlations whose signs are opposite to those of the corresponding (pairwise) signal correlations always improve encoded information compared with the independent case. Next, in Section “Optimal correlations lie on boundaries”, we use the fact that all of our information quantities are convex functions of the noise correlation coefficients to conclude that the optimal noise correlation structure must lie on the boundary of the allowed set of correlation matrices, Theorem 2.

We will further observe that there will typically be a large set of correlation matrices that all yield optimal (or near-optimal) coding performance, in a numerical example of heterogeneously tuned neural populations in Section “Heterogeneously tuned neural populations”.

We prove that these observations are general in Section “Noise cancellation” by studying the noise canceling correlations (those that yield the same high coding fidelity as would be obtained in the absence of noise). We will provide a set of necessary and sufficient conditions for correlations to be “noise canceling”, Theorem 3, and for a system to allow for these noise canceling correlations, Theorem 4. Finally, we will prove a result that suggests that, in large neural populations with randomly chosen stimulus response characteristics, these conditions are likely to be satisfied, Theorem 5.

A summary of the most frequently used notation is given in Table 1.

Problem setup

We will consider populations of neurons that generate noisy responses to a stimulus. The response vector – wherein each component represents one cell's response – can consist of continuous-valued firing rates, discrete spike counts, or binary “words”, wherein each neuron's response is a 1 (“spike”) or 0 (“no spike”). The only exception is that, when we consider the mutual information for Gaussian stimuli and responses (discussed below), the responses must be continuous-valued. We consider arbitrary tuning for the neurons: each neuron's mean response is an arbitrary function of the stimulus. For scalar stimuli, this definition of “tuning” corresponds to the notion of a tuning curve. In the case of more complex stimuli, it is similar to the typical notion of a receptive field. Recall that the signal correlations are determined by the co-variation of the mean responses of pairs of neurons as the stimulus is varied, and thus they are determined by the similarity of the tuning functions.

As for the structure of noise across the population, our analysis allows for the general case in which the noise covariance matrix depends on the stimulus. This generality is particularly interesting given the observations of Poisson-like variability [20], [21] in neural systems, and the fact that correlations can vary with stimuli [3], [16], [19], [22]. We will assume that the diagonal entries of the conditional covariance matrix – which describe each cell's variance – are fixed, and then ask how coding performance changes as we vary the off-diagonal entries, which describe the covariance between the cells' responses (recall that the noise correlations are the pairwise covariances divided by the geometric means of the corresponding variances).

We quantify the coding performance with the following measures, which are defined more precisely in the Methods Section “Defining the information quantities, signal and noise correlations”, below. First, we consider the linear Fisher information (Eq. (5)), which measures how easy it is to separate, with a linear discriminant, the response distributions that result from two similar stimuli. This is equivalent to the quantity used by [11] and [10]. While Fisher information is a measure of local coding performance, we are also interested in global measures.

We will consider two such global measures: the OLE information (Eq. (12)) and the mutual information for Gaussian stimuli and responses (Eq. (13)). The OLE information quantifies how well the optimal linear estimator (OLE) can recover the stimulus from the neural responses: a large OLE information corresponds to a small mean squared error of the OLE, and vice versa. For the OLE, there is one set of read-out weights used to estimate the stimulus, and those weights do not change as the stimulus is varied. By contrast, with linear Fisher information, there is generally a different set of weights for each (small) range of stimuli within which the discrimination is being performed.

Consequently, in the case of the OLE information and the mutual information, we will be considering the average noise covariance matrix, where the expectation is taken over the stimulus distribution. Here we overload the notation, letting the covariance matrix refer to whichever matrix one varies during the optimization — either local (the conditional covariance at a particular stimulus) or global (the stimulus-averaged covariance), depending on the information measure under consideration.

While the linear Fisher information and the OLE information are concerned with the performance of linear decoders, the mutual information between stimuli and responses describes how well the optimal read-out could recover the stimulus from the neural responses, without any assumptions about the form of that decoder. However, we emphasize that our results for the mutual information only apply to jointly Gaussian stimulus and response distributions, which is a less general setting than the conditionally Gaussian cases studied in many places in the literature. An important exception is that Theorem 2 additionally applies to the case of conditionally Gaussian distributions (see discussion in Section “Convexity of information measures”).

For simplicity, we describe most results for a scalar stimulus unless stated otherwise, but the theory holds for multidimensional stimuli (see Methods Section “Defining the information quantities, signal and noise correlations”).
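To make the setup concrete, here is a minimal numpy sketch of the two linear measures for a toy population. The numerical values, the linear tuning model used to build the signal covariance, and the use of the OLE mean squared error as a proxy for the OLE information of Eq. (12) are all illustrative assumptions of ours, not quantities taken from the paper.

```python
import numpy as np

# Toy 3-neuron population (all values are made up for illustration).
dmu = np.array([1.0, 0.5, -0.8])        # tuning-curve derivatives d(mean response)/ds
C_noise = np.array([[1.0, 0.2, -0.1],
                    [0.2, 1.0,  0.3],
                    [-0.1, 0.3, 1.0]])  # noise covariance with fixed unit variances

# Linear Fisher information for a scalar stimulus: dmu' * C_noise^{-1} * dmu (cf. Eq. (5)).
I_fisher = dmu @ np.linalg.solve(C_noise, dmu)

# For the OLE we need stimulus-averaged quantities.  Assume, for simplicity,
# linear tuning mu(s) = dmu * s and a scalar stimulus with variance var_s.
var_s = 1.0
C_signal = var_s * np.outer(dmu, dmu)   # covariance of the mean responses
L = var_s * dmu                         # cross-covariance cov(responses, stimulus)
C_total = C_signal + C_noise            # total response covariance

# OLE mean squared error; a smaller error corresponds to a larger OLE information.
mse_ole = var_s - L @ np.linalg.solve(C_total, L)
print(I_fisher, mse_ole)
```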

The sign rule revisited

Arguments about pairs of neurons suggest that coding performance is improved – relative to the case of independent, or trial-shuffled, data – when the noise correlations have the opposite sign from the signal correlations [5], [7], [10], [13]: we dub this the “sign rule” (SR). This notion has been explored and demonstrated in many places in the experimental and theoretical literature, and formally established for homogeneous positive correlations [10]. However, its applicability in general cases is not yet known.

Here, we formulate this SR property as a theorem without restrictions on homogeneity or population size.

Theorem 1. If, for each pair of neurons, the signal and noise correlations have opposite signs, the linear Fisher information is greater than in the case of independent noise (trial-shuffled data). In the opposite situation, where the signs are the same, the linear Fisher information is decreased compared to the independent case, in a regime of very weak correlations. Similar results hold for the OLE information and the mutual information, with a modified definition of signal correlations given in Section “Defining the information quantities, signal and noise correlations”.

In the case of Fisher information, the signal correlation between two neurons is defined via the product of the derivatives of their tuning curves, taken with respect to the stimulus (Section “Defining the information quantities, signal and noise correlations”). This definition recalls the notion of the alignment of the changes in the neurons' mean responses in, e.g., [11]. It is important to note that this definition of signal correlation is local, defined near a stimulus value; thus, it differs from some other notions of “signal correlation” in the literature, which quantify how similar the whole tuning curves of two neurons are (see the discussion of this alternative in Section “Defining the information quantities, signal and noise correlations”). We choose to define signal correlations for the linear Fisher information, the OLE information, and the mutual information as described in Section “Defining the information quantities, signal and noise correlations” to reflect precisely the mechanism behind the examples in [5], among others.

It is a consequence of Theorem 1 that the SR holds pairwise; different pairs of neurons may have different signs of noise correlations, as long as each is consistent with its (pairwise) signal correlation. The result holds as well for heterogeneous populations. The essence of our proof of Theorem 1 is to calculate the gradient of the information function in the space of noise correlations. We compute this gradient at the point representing the case where the noise is independent. The gradient itself is determined by the signal correlations, and will have a positive dot product with any direction of changing noise correlations that obeys the sign rule. Thus, information is increased by following the sign rule, and the gradient points (locally) in the direction of changing noise correlations that maximally improves the information, for a given strength of correlations. A detailed proof is included in Methods Section “Proof of Theorem 1: the generality of the sign rule”; this includes a formula for the gradient direction (Remark 1 in Section “Proof of Theorem 1: the generality of the sign rule”). We have proven the same result for all three of our coding metrics, and for both scalar and multi-dimensional stimuli.
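As a quick numerical illustration of Theorem 1 (not the paper's proof), the sketch below compares the linear Fisher information obtained with independent noise against that obtained with weak noise correlations whose signs oppose the signal-correlation signs; the tuning derivatives and variances are invented values.

```python
import numpy as np

dmu = np.array([1.0, 0.6, -0.4, -0.9])   # hypothetical tuning derivatives
var = np.array([1.0, 1.5, 0.8, 1.2])     # fixed single-neuron noise variances

def fisher(C):
    return dmu @ np.linalg.solve(C, dmu)

C_indep = np.diag(var)

# Weak noise correlations obeying the sign rule: opposite in sign to the
# "signal correlation", whose sign for Fisher information is sign(dmu_i * dmu_j).
eps = 0.05
C_sr = C_indep.copy()
n = len(dmu)
for i in range(n):
    for j in range(i + 1, n):
        c = -eps * np.sign(dmu[i] * dmu[j]) * np.sqrt(var[i] * var[j])
        C_sr[i, j] = C_sr[j, i] = c

assert np.all(np.linalg.eigvalsh(C_sr) > 0)   # still a valid covariance matrix
print(fisher(C_indep), fisher(C_sr))          # the sign-rule case is larger
```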

Intriguingly, there exists an asymmetry between the result on improving information (above) and the (converse) question of what noise correlations are worst for population coding. As we will show later, the information quantities are convex functions of the noise correlation coefficients (see Fig. 2). As a consequence, performance will keep increasing as one continues to move along a “good” direction, for example the direction indicated by the SR. This is what one expects when climbing a parabolic landscape in which the second derivative is always nonnegative. The same convexity result indicates that performance need not decrease monotonically along a “bad” direction, such as the anti-SR direction. For example, if, while following the anti-SR direction, the system passed the minimum of the information quantity, then continued increases in correlation magnitude would yield increases in the information. In fact, it is even possible for anti-SR correlations to yield better coding performance than would be achieved with independent noise. An example of this is shown in Fig. 2, where the arrow points in the direction in correlation space predicted by the SR, but performance that is better than with independent noise can also be obtained by choosing noise correlations in the opposite direction.

Figure 2. The “sign rule” may fail to identify the globally optimal correlations.

The optimal linear estimator (OLE) information (Eq. (12)), which is maximized when the OLE produces minimum-variance signal estimates, is shown as a function of all possible choices of noise correlations (enclosed within the dashed line). The two axes are the two free noise correlation coefficients for a 3-neuron population (see Section “Optimal correlations lie on boundaries”). The bowl shape exemplifies the general fact that the OLE information is a convex function and thus must attain its maximum on the boundary (Theorem 2) of the allowed region of noise correlations. The independent-noise case and the globally optimal noise correlations are labeled by a black dot and a triangle, respectively. The arrow shows the gradient vector of the OLE information, evaluated at zero noise correlations. It points into the quadrant in which noise correlations and signal correlations have opposite signs, as suggested by Theorem 1. Note that this gradient vector, derived from the “sign rule”, does not point towards the global maximum, and actually misses the entire quadrant containing that maximum. This plot is a two-dimensional slice of the cases considered in Fig. 3, obtained by restricting two of the three correlation coefficients to be equal (see Methods Section “Details for numerical examples and simulations” for further parameters).

https://doi.org/10.1371/journal.pcbi.1003469.g002

Thus, the result that anti-SR noise correlations harm coding is only a “local” result – near the point of zero correlations – and therefore requires the assumption of weak correlations. We emphasize that this asymmetry of the SR is intrinsic to the problem, due to the underlying convexity.

One obvious limitation of Theorem 1, and of the “sign rule” results in general, is that they only compare information in the presence of correlated noise with the baseline case of independent noise. This approach does not address the issue of finding the optimal noise correlations, nor does it provide much insight into experimental data that do not obey the SR. Does the sign rule describe optimal configurations? What are the properties of the global optima? How should we interpret noise correlations that do not follow the SR? We will address these questions in the following sections.

Optimal correlations lie on boundaries

Let us begin by considering a simple example to see what can happen for the optimization problem we described in Section “Problem setup”, when the baseline of comparison is no longer restricted to the case of independent noise. This example is for a population of 3 neurons. In order to better visualize the results, we further require that two of the three correlation coefficients be equal, so that the configuration of correlations is two dimensional. In Fig. 2, we plot the OLE information as a function of the two free correlation coefficients (in this example the variances are all equal to one, so covariances and correlations coincide).

First, notice that there is a parabola-shaped region of all attainable correlations (in Fig. 2, enclosed by the black dashed lines and the upper boundary of the square). The region is determined not only by the entry-wise constraint that each correlation lie between −1 and 1 (the square), but also by a global constraint: the covariance matrix must be positive semidefinite. For the linear Fisher information and the mutual information for Gaussian distributions, we further assume that the covariance matrix is strictly positive definite, so that these quantities remain finite (see also Section “Defining the information quantities, signal and noise correlations”). As we will see again below, this important constraint leads to many complex properties of the optimization problem. The constraint can be understood by noting that correlations must be chosen to be “consistent” with each other and cannot be set freely and independently. For example, if the correlations between cell 1 and each of cells 2 and 3 are large and positive, then cells 2 and 3 will be positively correlated – since they both covary positively with cell 1 – and their correlation may thus not take negative values. In the extreme case where both of those correlations equal 1, the correlation between cells 2 and 3 is fully determined to be 1. Cases like this are reflected in the corner shape at the upper right of the allowed region in Fig. 2.
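A small numerical check of this consistency constraint, with made-up correlation values: a triple of pairwise correlations is attainable only if the resulting correlation matrix is positive semidefinite.

```python
import numpy as np

def is_valid_corr(r12, r13, r23):
    R = np.array([[1.0, r12, r13],
                  [r12, 1.0, r23],
                  [r13, r23, 1.0]])
    return bool(np.all(np.linalg.eigvalsh(R) >= -1e-12))

# Large positive correlations with cell 1 force cells 2 and 3 to be positively correlated:
print(is_valid_corr(0.9, 0.9, -0.5))   # False: an inconsistent triple
print(is_valid_corr(0.9, 0.9,  0.7))   # True
print(is_valid_corr(1.0, 1.0,  1.0))   # True: the extreme "corner" case
```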

The case of independent noise is denoted by a black dot in the middle of Fig. 2, and the gradient vector of the information points into a quadrant that is guaranteed to increase information vs. the independent case (Theorem 1). The direction of this gradient satisfies the sign rule, as also guaranteed by Theorem 1. However, the gradient direction and the quadrant of the SR both fail to capture the globally optimal correlations, which lie at the upper right corner of the allowed region and are indicated by the red triangle. This is typically what happens for larger, less symmetric populations, as we will demonstrate next.

Since the sign rule cannot be relied upon to indicate the global optimum, what other tools do we have at hand? A key observation, which we prove in the Methods Section “Proof of Theorem 2: optima lie on boundaries”, is that information is a convex function of the noise correlations (the off-diagonal elements of the noise covariance matrix). This immediately implies:

Theorem 2. The optimal noise correlations that maximize information must lie on the boundary of the region of correlations considered in the optimization.

As we saw in Fig. 2, mathematically feasible noise correlations may not be chosen arbitrarily, but are constrained by the fact that the noise covariance matrix must be positive semidefinite, a condition equivalent to all of its eigenvalues being non-negative. According to our problem setup, the diagonal elements of the covariance matrix, which are the individual neurons' response variances, are fixed. It can be shown that this diagonal constraint specifies a linear slice through the cone of all positive semidefinite matrices, resulting in a bounded convex region — called a spectrahedron — in the space of the N(N−1)/2 pairwise correlations of a population of N neurons. These spectrahedra are the largest possible regions of noise correlation matrices that are physically realizable, and are the set over which we optimize, unless stated otherwise.
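The sketch below illustrates Theorem 2 on a two-dimensional slice like that of Fig. 2, substituting linear Fisher information and invented tuning derivatives for simplicity: a brute-force grid search places the maximizing correlations at the edge of the feasible (positive semidefinite) region.

```python
import numpy as np

# Slice of the 3-neuron spectrahedron with rho_12 = x and rho_13 = rho_23 = y,
# unit variances, and made-up tuning derivatives.
dmu = np.array([1.0, 1.0, -0.5])

def fisher(x, y):
    C = np.array([[1.0, x, y],
                  [x, 1.0, y],
                  [y, y, 1.0]])
    if np.min(np.linalg.eigvalsh(C)) < 1e-9:   # outside the allowed region
        return -np.inf
    return dmu @ np.linalg.solve(C, dmu)

grid = np.linspace(-0.99, 0.99, 199)
vals = np.array([[fisher(x, y) for x in grid] for y in grid])
iy, ix = np.unravel_index(np.argmax(vals), vals.shape)
best = np.array([[1.0, grid[ix], grid[iy]],
                 [grid[ix], 1.0, grid[iy]],
                 [grid[iy], grid[iy], 1.0]])

# The maximizer sits at the edge of the feasible region (nearly singular matrix),
# consistent with Theorem 2.
print(grid[ix], grid[iy], np.min(np.linalg.eigvalsh(best)))
```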

Importantly for biological applications, Theorem 2 continues to apply when additional constraints define smaller allowed regions of noise correlations within the spectrahedron. These constraints may come from circuit- or neuron-level factors. For example, in the case where correlations are driven by common inputs [22], [23], one could imagine a restriction on the maximal value of any individual correlation. In other settings, one might consider a global constraint by restricting the maximum Euclidean norm (2-norm) of the noise correlations (defined in Eq. (18) in Methods).

For a population of N neurons, there are N(N−1)/2 possible correlations to consider; naturally, as N increases, the optimal structure of noise correlations can become more complex. Thus we illustrate the Theorem above with an example of 3 neurons encoding a scalar stimulus, in which there are 3 noise correlations to vary. In Fig. 3, we demonstrate two different cases, each with a distinct choice of tuning and noise parameters (values are given in Methods Section “Details for numerical examples and simulations”). In the first case, there is a unique optimum (panel A; the largest information is associated with the lightest color). In the second case, there are 4 disjoint optima (panel B), all of which lie on the boundary of the spectrahedron.

Figure 3. Optimal coding is obtained on the boundary of the allowed region of noise correlations.

For fixed neuronal response variances and tuning curves, we compute coding performance – quantified by information values – for different values of the pairwise noise correlations. To be physically realizable, the correlation coefficients must form a positive semidefinite matrix. This constraint defines a spectrahedron — a swollen tetrahedron — for the three cells used. The colors of the points represent information values. With different parameter choices (see values in Methods Section “Details for numerical examples and simulations”), the optimal configuration can appear at different locations, either unique (A) or attained at multiple disjoint places (B), but always on the boundary of the spectrahedron. In both panels, the plot titles give the maximum value of the OLE information attained over the allowed space of noise correlations, and the value that would be obtained with the given tuning curves and perfectly deterministic neural responses; the latter provides an upper bound on the attainable information (see text Section “Noise cancellation”). Interestingly, in panel (A), the noisy population achieves this upper bound on performance, but this is not the case in (B). Details of parameters used are in Methods Section “Details for numerical examples and simulations”.

https://doi.org/10.1371/journal.pcbi.1003469.g003

In the next section, we will build from this example to a more complex one including more neurons. This will suggest further principles that govern the role of noise correlations in population coding.

Heterogeneously tuned neural populations

We next follow [8], [9], [15] and study a numerical example of a larger (20-neuron), heterogeneously tuned neural population. The encoded stimulus is the direction of motion, described by a 2-D vector. We used the same parameters and functional form for the shape of the tuning curves as in [8], the details of which are provided in Methods Section “Details for numerical examples and simulations”. The tuning curve for each neuron was allowed to have randomly chosen width and magnitude, and the trial-to-trial variability was assumed to be Poisson: the variance is equal to the mean. As shown in Fig. 4 A, under our choice of parameters the neural tuning curves – and by extension, their responses to the stimuli – are highly heterogeneous. Once again, we quantify coding by the OLE information (see the definition in Section “Problem setup” or Eq. (12) in Methods).

Figure 4. Heterogeneous neural population and violations of the sign rule with increasing correlation strength.

We consider signal encoding in a population of 20 neurons, each of which has a different dependence of its mean response on the stimulus (heterogeneous tuning curves shown in A). We optimize the coding performance of this population with respect to the noise correlations, under several different constraints on the magnitude of the allowed noise correlations. Panel (B) shows the resultant – optimal given the constraint – values of the OLE information, for different noise correlation strengths (blue circles). The strength of correlations is quantified by the Euclidean norm (Eq. (18)). For comparison, the red crosses show the information obtained for correlations that obey the sign rule (in particular, pointing along the gradient giving greatest information for weak correlations); this information is always less than or equal to the optimum, as it must be. Note that correlations that follow the sign rule fail to exist for large correlation strengths, as the defining vector points outside of the allowed region (spectrahedron) beyond a critical length (labeled (ii)). For correlation strengths beyond this point, distinct optimized noise correlations continue to exist; the information values they obtain eventually saturate at the noise-free level (see text) for the example shown here. This occurs over a wide range of correlation strengths. Panel (C) shows how well these optimized noise correlations are predicted from the corresponding signal correlations (by the sign rule), as quantified by the R² statistic (between 0 and 1, see Fig. 5). For small magnitudes of correlations, the R² values are high, but they decline when the noise correlations are larger.

https://doi.org/10.1371/journal.pcbi.1003469.g004

Our goal with this example is to illustrate two distinct regimes, with different properties of the noise correlations that lead to optimal coding. In the first regime, which occurs closest to the case of independent noise, the SR determines the optimal correlation structure. In the second, further away from the independent case, the optimal correlations may disobey the SR. (A related effect was found by [8]; we return to this in the Discussion.) We accomplish this in a very direct way: gradually increasing the (additional) constraint on the Euclidean norm of the correlations (Eq. (18) in Methods Section “Defining the information quantities, signal and noise correlations”), we numerically search for optimal noise correlation matrices and compare them to predictions from the SR.
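A rough, simplified stand-in for this procedure is sketched below: a random search over correlation matrices with a capped Euclidean norm, using linear Fisher information and invented tuning and variance values in place of the paper's 20-neuron OLE setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
dmu = rng.normal(size=N)                    # hypothetical tuning derivatives
var = rng.uniform(0.5, 2.0, size=N)         # fixed single-neuron variances
iu = np.triu_indices(N, k=1)                # indices of the N*(N-1)/2 correlations

def info(corrs):
    C = np.diag(var).copy()
    C[iu] = corrs * np.sqrt(var[iu[0]] * var[iu[1]])
    C = C + np.triu(C, 1).T
    if np.min(np.linalg.eigvalsh(C)) < 1e-9:   # outside the spectrahedron
        return -np.inf
    return dmu @ np.linalg.solve(C, dmu)

def best_under_norm(max_norm, n_samples=5000):
    best = info(np.zeros(len(iu[0])))          # independent-noise baseline
    for _ in range(n_samples):
        r = rng.normal(size=len(iu[0]))
        r *= rng.uniform(0, max_norm) / np.linalg.norm(r)   # cap the Euclidean norm
        best = max(best, info(r))
    return best

for cap in (0.1, 0.5, 1.0, 2.0):
    print(cap, best_under_norm(cap))   # the best information found typically grows with the cap
```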

In Fig. 4 B we show the results, comparing the information attained with noise correlations that obey the sign rule with those that are optimized, for a variety of different noise correlation strengths. As they must be, the optimized correlations always produce information values as high as, or higher than, the values obtained with the sign rule.

In the limit where the correlations are constrained to be small, the optimized correlations agree with the sign rule; an example of these “local” optimized correlations is shown in Fig. 5 ADG, corresponding to the point labeled (i) in Fig. 4 BC. This is predicted by Theorem 1. In this “local” region of near-zero noise correlations, we see a linear alignment of signal and noise correlations (Fig. 5 D). As larger correlation strengths are reached (points (ii) and (iii) in Fig. 4 BC), we observe a gradual violation of the sign rule for the optimized noise correlations. This is shown by the gradual loss of the linear relationship between signal and noise correlations in Fig. 5 D vs. E vs. F, as quantified by the R² statistic. Interestingly, this can happen even when the correlation coefficients continue to have reasonably small values, broadly consistent with the ranges of noise correlations seen in physiology experiments [3], [8], [24].

Figure 5. In our larger neural population, the sign rule governs optimal noise correlations only when these correlations are forced to be very small in magnitude; for stronger correlations, optimized noise correlations have a diverse structure.

Here we investigate the structure of the optimized noise correlations obtained in Fig. 4; we do this for three examples with increasing correlation strength, indicated by the labels in that figure. First, panels (ABC) show scatter plots of the noise correlations of the neural pairs as a function of their signal correlations (defined in Methods Section “Defining the information quantities, signal and noise correlations”). For each example, we also show (DEF) a version of the scatter plot in which the signal correlations have been rescaled in a manner discussed in Section “Parameters for Fig. 1, Fig. 2 and Fig. 3”, which highlights the linear relationship (wherever it exists) between signal and noise correlations. In both sets of panels, we see the same key effect: the sign rule is violated as the (Euclidean) strength of noise correlations increases. In (ABC), this is seen by noting the quadrants where the dots are located: the sign rule predicts they should only be in the second and fourth quadrants. In (DEF), we quantify agreement with the sign rule by the R² statistic. Finally, (GHI) display histograms of the noise correlations; these are concentrated around 0, with low average values in each case.

https://doi.org/10.1371/journal.pcbi.1003469.g005

The two different regimes of optimized noise correlations arise because, at a certain correlation strength, correlations can no longer be increased along the direction defined by the sign rule without leaving the region of positive semidefinite covariance matrices. However, correlation matrices still exist that allow for more informative coding at larger correlation strengths. This reflects the geometrical shape of the spectrahedron, wherein the optima may lie in the “corners”, as shown in Fig. 3. For these larger-magnitude correlations, the sign rule no longer describes the optimized correlations, as shown with an example of optimized correlations in Fig. 5 CF.

Fig. 5 illustrates another interesting feature. There is a diverse set of correlation matrices, with different Euclidean norms beyond the value of (roughly) 1.2, that all achieve the same globally optimal information level. As we see in the next section, this phenomenon is actually typical for large populations, and can be described precisely.

Noise cancellation

For certain choices of tuning curves and noise variances, including the examples in Fig. 3 A and Section “Heterogeneously tuned neural populations”, we can state precisely the value of the globally optimized information quantities — that is, the information levels obtained with optimal noise correlations. For the OLE, this global optimum is the noise-free upper bound on the OLE information. This is shown formally in Lemma 8, but it simply translates into an intuitive lower bound on the OLE error, similar to the data processing inequality for mutual information. This bound states that the OLE error cannot be smaller than the OLE error when there is no noise in the responses, i.e. when the neurons produce a deterministic response conditioned on the stimulus. This upper bound may — and often will (Theorem 5) — be achievable by populations of noisy neurons.

Let us first consider an extremely simple example: two neurons with identical tuning curves, so that their responses are x_1 = μ(s) + η_1 and x_2 = μ(s) + η_2, where η_i is the noise in the response of neuron i, and μ(s) is the common mean response under stimulus s. In this case, “noise free” coding corresponds to η_1 = η_2 = 0 on all trials, and the inference accuracy is determined by the shape of the tuning curve (whether or not it is invertible, for example). Now let us consider the case where the noise in the neurons' responses is non-zero but perfectly anti-correlated, so that η_1 = −η_2 on all trials. We can then choose the read-out to be the average of the two responses, (x_1 + x_2)/2 = μ(s), which cancels the noise and achieves the same coding accuracy as the “noise free” case.
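A quick numerical version of this two-neuron example, with an arbitrary linear tuning curve chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.uniform(0.0, 1.0, size=100_000)   # toy scalar stimulus values
mu = 2.0 * s + 1.0                        # common (invertible) tuning curve
eta = rng.normal(0.0, 0.5, size=s.size)   # noise amplitude on each trial

x1 = mu + eta                             # perfectly anti-correlated noise:
x2 = mu - eta                             # eta_1 = -eta_2 on every trial

readout = 0.5 * (x1 + x2)                 # averaging cancels the noise exactly
print(np.max(np.abs(readout - mu)))       # ~0 (up to floating point): noise-free accuracy
```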

The preceding example shows that, at least in some cases, one can choose noise correlations in such a way that a linear decoder achieves “noise-free” performance. One is naturally left to wonder whether this observation applies more generally.

First, we state the conditions on the noise covariance matrix under which the noise-free coding performance is obtained. We will then identify the conditions on the parameters of the problem, i.e. the tuning curves (or receptive fields) and noise variances, under which this condition can be satisfied. Recall that the OLE is based on a fixed (across stimuli) linear readout coefficient vector, defined in Eq. (9).

Theorem 3. A covariance matrix attains the noise-free bound for the OLE information (and hence is optimal) if and only if the noise covariance matrix maps the OLE readout vector to zero. Here the relevant quantities are the cross-covariance between the stimulus and the responses (Eq. (11)), the covariance of the mean responses (Eq. (10)), and the linear readout vector for the OLE, which is the same as in the noise-free case when the condition is satisfied.

We note that when the condition is satisfied, the conditional variance of the OLE estimate is zero. This indicates that all of the error comes from the bias, if, as usual, we write the mean squared error (for a scalar stimulus) as the sum of a variance term and a squared bias term. The condition obtained here can also be interpreted as the “signal/readout being orthogonal to the noise.” While this perspective gives useful intuition about the result, we find that other ideas are more useful for constructing proofs of this and other results. We discuss this issue more thoroughly in Section “The geometry of the covariance matrix”.
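The toy check below illustrates this orthogonality reading of Theorem 3 (our paraphrase of the condition above, not the paper's symbolic statement): if the noise covariance annihilates the readout vector, the readout has zero trial-to-trial variance. The matrices are arbitrary, and, unlike in the paper's setup, the single-neuron variances are not held fixed here.

```python
import numpy as np

w = np.array([1.0, -1.0, 0.5])              # hypothetical noise-free OLE readout vector

# Build a positive semidefinite noise covariance with w in its null space,
# by projecting an arbitrary factor onto the subspace orthogonal to w.
P = np.eye(3) - np.outer(w, w) / (w @ w)
A = np.array([[0.8, 0.1, 0.0],
              [0.1, 0.6, 0.2],
              [0.0, 0.2, 0.9]])
C_noise = P @ A @ A.T @ P                   # PSD, and C_noise @ w = 0

print(np.allclose(C_noise @ w, 0.0))        # the (paraphrased) condition of Theorem 3
print(w @ C_noise @ w)                      # trial-to-trial variance of the readout: 0
```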

In general, this condition may not be satisfiable by any choice of pairwise correlations. The above theorem implies that, given the tuning curves, whether or not such “noise free” coding is achievable is determined only by the relative magnitudes, or heterogeneity, of the noise variances of the neurons – the diagonal entries of the noise covariance matrix. The following theorem outlines precisely the conditions under which such “noise-free” coding performance is possible, a condition that can easily be checked for given parameters of a model system, or for experimental data.

Theorem 4. For a scalar stimulus, associate with each neuron a non-negative quantity combining its noise standard deviation and the corresponding entry of the readout vector for the OLE in the noise-free case. Noise correlations may be chosen so that coding performance matches that which could be achieved in the absence of noise if and only if no single one of these quantities exceeds the sum of all the others (Eq. (1)). When the inequality in Eq. (1) is strict, all optimal correlations attaining the maximum form a high-dimensional convex set on the boundary of the spectrahedron. When equality is attained, the dimension of that set is reduced, by an amount that depends on the number of the above quantities that equal zero.

We pause to make three observations about this Theorem. First, the set of optimal correlations, when it exists, is high-dimensional. This bears out the notion that there are many different, highly diverse noise correlation structures that all give the same (optimal) level of the information metrics. Second, and more technically, we note that the (convex) set of optimal correlations is flat (contained in an affine subspace of the same dimension), as viewed in the higher-dimensional space of all pairwise correlations. A third intriguing implication of the theorem is that when noise cancellation is possible, all optimal correlations are connected, since the set is convex (any two points are connected by a line segment that also lies in the set); thus the case of disjoint optima, as in Fig. 3 B, can never happen when optimal coding achieves noise-free levels. Indeed, in Fig. 3 B, the noise-free bound is not attained.

The high dimension of the convex set of noise-canceling correlations explains the diversity of optimal correlations seen in Fig. 4 B (i.e., with different Euclidean norms). Such a property is nontrivial from a geometric point of view. One might conclude prematurely that the dimension result is obvious, by algebraically counting the number of free variables and constraints in the condition of Theorem 3; this argument gives the dimension of the resulting linear space. However, as shown in the proof, there is another nontrivial step: showing that this linear space has some finite part that also satisfies the positive semidefinite constraint. Otherwise, many dimensions may shrink to zero in size, as happens at the corners of the spectrahedron, resulting in a smaller dimension.

The optimization problem can be thought of as finding the level set of the information function with the largest possible value that still intersects the spectrahedron. The level sets are collections of all points where the information takes the same value. They form high-dimensional surfaces, nested inside one another much like the layers of an onion. Here these surfaces are also guaranteed to be convex, as the information function itself is. Next, note from Fig. 3 that the spectrahedron has sharp corners. Combining this with our view of the level sets, one might guess that the set of optimal solutions — i.e. the intersection — should be very low-dimensional. Such intuition is often used in mathematics and computer science, e.g. with regards to the sparsity-promoting tendency of L1 optimization. The high dimensionality shown by our theorem therefore reflects a nontrivial relationship between the shape of the spectrahedron and the level sets of the information quantities.

Although our theorem only characterizes the abundance of the set of exactly optimal noise correlations, it is not hard to imagine that the same, if not greater, abundance should also hold for correlations that approximately achieve the maximal information level. This is indeed what we see in numerical examples. For example, note the long, curved level-set curves in Fig. 2 near the boundaries of the allowed region. Along these lines lie many different noise correlation matrices that all achieve nearly-optimal values of the OLE information. The same is true of the many dots in Fig. 3 A that share a similar “bright” color corresponding to large information values.

One may worry that the noise cancellation discussed above is rarely achievable, and thus somewhat spurious. The following theorem suggests that the opposite is true. In particular, it gives one simple condition under which the noise cancellation phenomenon, and resultant high-dimensional sets of optimal noise correlation matrices, will almost surely be possible in large neural populations.

Theorem 5. If the quantities defined in Theorem 4 are independent and identically distributed (i.i.d.) draws of a non-negative random variable with positive, finite mean, then the probability that the condition of Eq. (1) is satisfied approaches 1 as the population size grows (Eq. (2)). In actual populations, these quantities might not be well described as i.i.d.. However, we believe that the inequality of Eq. (1) is still likely to be satisfied, as the contrary seems to require one neuron with highly outlying tuning and noise variance values (a few comparable outliers won't necessarily violate the condition, as their magnitudes enter on the right-hand side of the condition; the condition only breaks with a single “outlier of outliers”).
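Under our reading of the condition (no single quantity exceeding the sum of the rest), a quick Monte Carlo sketch shows the probability rising toward 1 with population size; the lognormal distribution used here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def prob_condition_holds(N, trials=2000):
    # i.i.d. non-negative quantities, one per neuron (distribution chosen arbitrarily).
    v = rng.lognormal(mean=0.0, sigma=1.0, size=(trials, N))
    largest = np.max(v, axis=1)
    return np.mean(largest <= v.sum(axis=1) - largest)

for N in (3, 5, 10, 20, 50):
    print(N, prob_condition_holds(N))   # approaches 1 as N grows
```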

Discussion

Summary

In this paper, we considered a general mathematical setup in which we investigated how coding performance changes as noise correlations are varied. Our setup made no assumptions about the shapes (or heterogeneity) of the neural tuning curves (or receptive fields), or the variances in the neural responses. Thus, our results – which we summarize below – provide general insights into the problem of population coding. These are as follows:

  • We proved that the sign rule — if signal and noise correlations have opposite signs, then the presence of noise correlations will improve encoded information vs. the independent case — holds for any neural population. In particular, we showed that this holds for three different metrics of encoded information, and for arbitrary tuning curves and levels of heterogeneity. Furthermore, we showed that, in the limit of weak correlations, the sign rule predicts the optimal structure of noise correlations for improving encoded information.
  • However, as also found in the literature (see below), the sign rule is not a necessary condition for good coding performance to be obtained. We observed that there will typically be a diverse family of correlation matrices that yield good coding performance, and these will often violate the sign rule.
  • There is significantly more structure to the relationship between noise correlations and encoded information than that given by the sign rule alone. The information metrics we considered are all convex functions with respect to the entries in the noise correlation matrix. Thus, we proved that the optimal correlation structures must lie on boundaries of any allowed set. These boundaries could come from mathematical constraints – all covariance matrices must be positive semidefinite – or mechanistic/biophysical ones.
  • Moreover, boundaries containing optimal noise correlations have several important properties. First, they typically contain correlation matrices that lead to the same high coding fidelity that one could obtain in the absence of noise. Second, when this occurs there is a high-dimensional set of different correlation matrices that all yield the same high coding fidelity – and many of these matrices strongly violate the sign rule.
  • Finally, for reasonably large neural populations, we showed that both the noise-free, and more general SR-violating optimal, correlation structures emerge while the average noise correlations remain quite low — with values comparable to some reports in the experimental literature.

Convexity of information measures

Convexity of information with respect to noise correlations arises conceptually throughout the paper, and specifically in Theorem 2. We have shown that such convexity holds for all three measures of information studied above (the linear Fisher information, the OLE information, and the mutual information for Gaussian stimuli and responses). Here, we show that these observations may reflect a property intrinsic to the concept of information, so that our results could apply more generally.

It is well known that mutual information is convex with respect to conditional distributions. Specifically, consider two random variables (or vectors) X_1 and X_2, with conditional distributions p_1(x|s) and p_2(x|s) (with respect to the random “stimulus” variable(s) s). Suppose another variable X_λ has a conditional distribution given by a nonnegative linear combination of the two, p_λ(x|s) = λ p_1(x|s) + (1 − λ) p_2(x|s), with 0 ≤ λ ≤ 1. The mutual information must then satisfy I(X_λ; s) ≤ λ I(X_1; s) + (1 − λ) I(X_2; s). Notably, this fact can be proved using only the axiomatic properties of mutual information (the chain rule for conditional information and nonnegativity) [25].

It is easy to see how this convexity in conditional distributions is related to the convexity in noise correlations we use. To do this, we further assume that the two conditional means are the same, E[X_1|s] = E[X_2|s], and let X_1, X_2 be random vectors. Introduce an auxiliary Bernoulli random variable B that is independent of s, with probability λ of being 1. The variable X_λ can then be explicitly constructed using B: for any s, draw X_λ according to p_1(x|s) if B = 1 and according to p_2(x|s) otherwise. Using the law of total covariance, the covariance (conditioned on s) between the elements of X_λ is Cov(X_λ|s) = λ Cov(X_1|s) + (1 − λ) Cov(X_2|s). This shows that the noise covariances are expressed as the corresponding linear combinations. If the information depends only on the covariances (besides the fixed means), as for the three measures we consider, the two notions of convexity become equivalent. A direct corollary of this argument is that the convexity result of Theorem 2 also holds in the case of mutual information for conditionally Gaussian distributions (i.e., such that the response given the stimulus is Gaussian distributed).
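A small simulation of this construction (with arbitrary covariances and a single fixed stimulus) confirms that mixing two equal-mean conditional distributions via a Bernoulli switch yields the matching convex combination of noise covariances:

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 0.3
n_trials = 500_000

# Two conditional response distributions with the same mean but different
# noise covariances (two neurons, one fixed stimulus, values made up).
mean = np.array([1.0, 2.0])
A1 = np.array([[1.0, 0.0], [0.8, 0.6]])
A2 = np.array([[0.5, 0.0], [-0.4, 1.2]])
C1, C2 = A1 @ A1.T, A2 @ A2.T

# Draw from the Bernoulli(lam) mixture described in the text.
pick = rng.random(n_trials) < lam
z = np.where(pick[:, None],
             mean + rng.standard_normal((n_trials, 2)) @ A1.T,
             mean + rng.standard_normal((n_trials, 2)) @ A2.T)

print(np.cov(z.T))                 # empirical covariance of the mixture
print(lam * C1 + (1 - lam) * C2)   # matches the convex combination of covariances
```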

Sensitivity and robustness of the impact of correlations on encoded information

One obvious concern about our results, especially those related to the “noise-free” coding performance, is that this performance may not be robust to small perturbations in the covariance matrix – and thus, for example, real biological systems might be unable to exploit noise correlations in signal coding. This issue was recently highlighted, in particular, by [26].

At first, concerns about robustness might appear to be alleviated by our observation that there is typically a large set of possible correlation structures that all yield similar (optimal) coding performance (Theorem 4). However, if the correlation matrix were perturbed along a direction orthogonal to the level set of the information quantity at hand, this could still lead to arbitrary changes in information. To address this matter directly, we explicitly calculated the following upper bound for the sensitivity of information, or condition number, with respect to (sufficiently small) perturbations. The condition number is defined as the ratio of the relative change in the function to that in its variables. For example, the condition number corresponding to perturbing the noise covariance matrix is the smallest number such that the relative change in information is at most that number times the relative change in the covariance matrix. Similarly, one can define a condition number for perturbing the tuning of the neurons.

Proposition 6. The local condition number of the OLE information under perturbations of the covariance matrix (with magnitude quantified by the 2-norm) is bounded in terms of the ratio of the largest to the smallest eigenvalue of that covariance matrix (Eq. (3)); this ratio is the condition number of the matrix itself with respect to the 2-norm.

Similarly, the condition number for perturbing the tuning is bounded by an analogous expression (Eq. (4)), which involves the columns of the cross-covariance between stimulus and responses (assumed to be nonzero) and the dimension of the stimulus.

Though stated for the OLE information, the same results also hold for the linear Fisher information, after replacing the corresponding covariance matrix in Eqs. (3) and (4). We believe that a similar property could be derived for the mutual information, but the expression could be quite cumbersome; we do not pursue this further here.

To interpret this Proposition, we make the following observations, which explain when the sensitivity or condition numbers will (or will not) themselves be of reasonable size, for given noise correlations. In our setup, the diagonal of the relevant covariance matrix is fixed, and therefore its largest eigenvalue is bounded (Gershgorin circle theorem). As long as the covariance matrix is not close to singular, the information should therefore be robust, i.e. have a reasonably bounded condition number. For the OLE, because the relevant matrix includes the (fixed) covariance of the mean responses, we always have a universal bound determined only by that signal covariance. For the linear Fisher information, however, a nearly singular noise covariance can more typically occur near optimal solutions; in these cases, the condition numbers will be very large.

Relationship to previous work

Much prior work has investigated the relationship between noise correlations and the fidelity of signal coding [3], [5]–[11], [13]–[16]. Two aspects of our current work complement and generalize those studies.

The first are our results on the sign rule (Section “The sign rule revisited”). Here, we find that, if each cell pair has noise correlations that have the opposite sign vs. their signal correlations, the encoded information is always improved, and that, at least in the case of weak noise correlations, noise correlations that have the same sign as the signal correlations will diminish encoded information. This effect was observed by [6] for neural populations with identically tuned cells. Since the tuning was identical in their work, all signal correlations were positive. Thus, their observation that positive noise correlations diminish encoded information is consistent with the SR results described above.

Relaxing the assumption of identical tuning, several studies followed [6] that used cell populations with tuning that differed from cell to cell, but maintained some homogeneous structure – i.e., identically shaped and evenly spaced (along the stimulus axis) tuning curves, e.g., [5], [7]. The models that were investigated then assumed that the noise correlation between each cell pair was a decaying function of the displacement between the cells' tuning curve peaks. The amplitude of the correlation function – which determines the maximal correlation over all cell pairs, attained for “nearby” cells – was the independent variable in the numerical experiments. Recall that these nearby (in tuning-curve space) cells, with overlapping tuning curves, will have positive signal correlations. These authors found that positive noise correlations diminished encoded information, while negative noise correlations enhanced it. This is once again broadly consistent with the sign rule, at least for nearby cells, which have the strongest correlations. Finally, we note that [5], [10], [12] give a crisp geometrical interpretation of the sign rule in the case of two cells.

At the same time, experiments typically show noise correlations that are stronger for cell pairs with higher signal correlations [3], [6], [19], which is certainly not in keeping with the sign rule. This underscores the need for new theoretical insights. To this effect, we demonstrated that, while noise correlations that obey the sign rule are guaranteed to improve encoded information relative to the independent case, this improvement can also occur for a diverse range of correlation structures that violate it. (Recall the asymmetry of our findings for the sign rule: noise correlations that violate the sign rule are only guaranteed to diminish encoded information if those noise correlations are very weak).

This finding is anticipated by the work of [8], [9], [14], who used elegant analytical and numerical studies to reveal improvements in coding performance in cases where the sign rule was violated. They studied heterogeneous neural populations, with, for example, different maximal firing rates for different neurons. In particular, these authors show how heterogeneity can simultaneously improve the accuracy and capacity of stimulus encoding [14], or can create coding subspaces that are nearly orthogonal to directions of noise covariance [8], [9]. Taken together, these studies show that the same noise correlation structure discussed above – with nearby cells correlated – could lead to improved population coding, so long as the noise correlations are sufficiently strong. [8] also demonstrated that the magnitude of correlations needed to satisfy the “sufficiently strong” condition decreases as the population size increases, and that in the large-population limit, certain coding properties become invariant to the structure of noise correlations. Overall, these findings agree with our observations about a large diversity of SR-violating noise correlation structures that improve encoded information.

One final study requires its own discussion. Whereas the current study (and those discussed above) investigated how coding relates to noise correlations without concern for the biophysical origin of those correlations, [17] studied a semi-mechanistic model in which noise correlations were generated by inter-neuronal coupling. They observed that coupling that generates anti-SR correlations is beneficial for population coding when the noise level is very high, but that at low noise levels, the optimal population follows the SR. Understanding why different mechanistic models can display different trends in their noise correlations is important, and we are currently investigating that issue.

The geometry of the covariance matrix

One geometrical, and intuitively helpful, way to think about problems involving noise correlations is to ask when the noise is "orthogonal to the signal": in such cases, the noise can be separated from the signal, and high coding performance is obtained. This geometrical view is equally valid for the cases we study (e.g., the conditions we derive in Theorem 3), and is implicit in the diagrams in Figure 1. To make the approach explicit, one could perform an eigenvector analysis of the covariance matrices at hand, rewriting quantities like the linear Fisher information as a sum of projections of the tuning vector onto the eigenbasis of the covariance matrix, weighted appropriately by the eigenvalues.
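To make this concrete, here is a minimal numerical sketch (our own toy example, not taken from the paper; the population size, tuning vector, and covariance are invented for illustration) showing that the linear Fisher information equals the sum of squared projections of the tuning-derivative vector onto the covariance eigenvectors, each weighted by the inverse of the corresponding eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: tuning-derivative vector and a positive-definite noise covariance.
N = 4
mu_prime = rng.normal(size=N)              # d mu_i / d s at the stimulus of interest
G = rng.normal(size=(N, N))
Cn = G @ G.T + np.eye(N)                   # generic positive-definite noise covariance

# Direct computation: I_F,lin = mu'^T (Cn)^{-1} mu'
I_direct = mu_prime @ np.linalg.solve(Cn, mu_prime)

# Eigenbasis view: sum over eigenvectors of (projection)^2 / eigenvalue
eigvals, eigvecs = np.linalg.eigh(Cn)
projections = eigvecs.T @ mu_prime
I_eigen = np.sum(projections ** 2 / eigvals)

print(I_direct, I_eigen)                   # the two expressions agree
```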

This invites the question of whether a simpler route to the results in our paper would be to manipulate the covariance eigenvectors and eigenvalues directly. For example, if one could simply "rotate" the eigenvectors of the covariance matrix out of the signal direction – or shrink the eigenvalues in that direction – one would necessarily improve coding performance. So why don't we simply do this when exploring spaces of covariance matrices? The reason is that these eigenvalue and eigenvector manipulations are not as easy to enact as they might at first sound (to us, and possibly to the reader). Recall that we asked how noise correlations affect coding subject to the specific constraint that the noise variance of each neuron is fixed; this translates, in general, into rather complex constraints on the eigenvalues and eigenvectors. For example, the eigenvalues of a fixed-diagonal covariance matrix are not equivalently described by simply having a fixed sum (a necessary condition for the diagonals to be constant, but not a sufficient one). These facts limit the insight that directly adjusting eigenvalues and eigenvectors can provide for our problem, and emphasize the non-trivial nature of our results.

An exception arises in special cases where the covariance matrix has a circulant structure, and consequently always has the Fourier basis as its eigenvectors. These cases include many situations considered in the literature [8], [10]. By contrast, the covariance matrices we studied were allowed to change freely, as long as the diagonals remained fixed.

Limitations and extensions

We have developed a rich picture of how correlated noise impacts population coding. For our results on noise cancellation in particular, this was done by allowing noise correlations to be chosen from the largest mathematically possible space (i.e., the entire spectrahedron). This describes the fundamental structure of the problem at hand, but are conclusions derived in this way important for biology? It is not hard to imagine biological constraints that further limit the range of possible noise correlations (e.g., limits on the strength of recurrent connections or shared inputs). On the one hand, the likelihood that the underlying phenomena could be found in biological systems seems increased by the fact that many different correlation matrices suffice for noise-free coding and that, as we discuss in Proposition 6, information levels appear to have some robustness under perturbations of the underlying correlation matrices.

However, care must still be taken in interpreting what we mean by "noise free." As emphasized by, e.g., [8], [27], noise upstream from the neural population in question can never be removed in subsequent processing. Therefore, the "noise free" bound we discuss in Lemma 8 should not allow for a higher information level than that determined by this upstream noise. In some cases, this fact could lead to a consistency requirement on the set of signal correlations, on the set of allowed noise correlations, or both. To specify these constraints, and to avoid possible over-interpretation of the abstract coding model we study, one could combine an explicit mechanistic model with the present approach.

On another note, we have asked what noise correlations allow for linear decoders to best recover the stimulus from the set of neural population responses. At the same time, there is reason to be wary of linear decoders [28] (see also [16]), as they might miss significant information that is only accessible via a non-linear read-out. Furthermore, given the non-linearity inherent in dendritic processing and spike generation [29], there is added motivation to consider information without assuming linearity.

Furthermore, we have restricted ourselves here to pairwise noise correlations, while many studies identify higher-order correlations (HOC) in neural data [30], [31], and some numerical results [32] hint at when those HOC are beneficial for coding. In light of the present study, it is interesting to ask whether a similarly general theory can be derived for HOC, and to investigate how the optimal pairwise and higher-order correlations interrelate. Note that this issue is closely tied to the type of decoder that is assumed: the performance of a linear decoder (as measured by mean squared error) depends on the pairwise correlations, but not on HOC. The effect of HOC must therefore be studied in the context of nonlinear decoding.

Finally, we note that here we used an abstract coding model that evaluates information based on summary statistics (tuning, noise variances, and correlations). For generality, we made no assumptions about the structure of these statistics, or about any links among them. This suggests two questions for future work: whether an arbitrary set of such statistics is realizable by a constructive model of random variables, and whether there are typical relationships among these statistics when they arise from tuned neural populations. As a preliminary investigation, we partially confirmed a positive answer to the first question, except for a "zero measure" set of statistics, under generic assumptions (data not shown).

Experimental implications

Recall that we observed that, in general, for a given set of tuning curves and noise variances, there will be a diverse family of noise correlation matrices that will yield good (optimal, or near-optimal) performance. This effect can be observed in Figs. 2, 3, and 5, as well as our result about the dimension of the set of correlation matrices that yield (when it is possible) noise-free coding performance (Theorem 4).

At least compared with the alternative of a unique optimal noise correlation structure, our findings imply that it could be relatively “easy” for the biological system to find good correlation matrices. At the same time, since the set of good solutions is so large, we should not be surprised to see heterogeneity in the correlation structures exhibited by biological systems. Similar observations have previously been made in the context of neural oscillators: Prinz and colleagues [33] observed that neuronal circuits with a variety of different parameter values could produce the types of rhythmic activity patterns displayed by the crab stomatogastric ganglion. Consequently, there is much animal-to-animal variability in this circuit [34], even though the system's performance is strongly conserved.

At the same time, the potential diversity of solutions could present a serious challenge for analyzing data (cf. [26]). Notice, for example in Figs. 2 and 3, how much the performance can vary as one of the correlation coefficients is changed while the others are kept fixed. If this phenomenon is general, it means that, in an experiment where we observe a (possibly small) subset of the correlation coefficients, it may be very hard to know how those correlations actually affect coding: the answer depends strongly on all of the other (unobserved) correlation coefficients. As our recording technologies improve [35], and as we make more use of optical methods, these "gaps" in our datasets will shrink, and this issue may be resolved; further theoretical work to gauge the seriousness of the underlying issue is also needed. In the meantime, caution seems wise when analyzing noise correlations in sparsely sampled data.

Finally, recall that the optimal noise correlations will always lie on the boundary of the allowed region of such correlations. Importantly, what we mean by that boundary is flexible. It may be the mathematical requirement of positive semidefinite covariance matrices – the loosest possible requirement – or there may be tighter constraints that further restrict the set of correlation coefficients. Since biophysical mechanisms determine noise correlations, we expect that there will be identifiable regions of correlation coefficients that are attainable in a given circuit or system. Understanding those "allowed" regions will, we anticipate, be important for relating noise correlations to coding performance, and ultimately for untangling the relationship between structure and function in sensory systems.

Methods

In the Methods below, we will first revisit the problem set-up, and define our metrics of coding quality. We will then prove the theorems from the main text. Finally, we will provide the details of our numerical examples. A summary of our most frequently used notation is listed in Table 1.

Summary of the problem set-up

We consider populations of N neurons that encode a stimulus s by their noisy responses x. For simplicity, we suppress the vector notation in the Methods. Unless otherwise stated, most of our results apply equally well to either scalar or multi-dimensional stimuli.

The mean activity, or "tuning", of the neurons is described by μ(s). In the case of a scalar stimulus, this corresponds to the notion of a tuning curve. For more complex stimuli, it is more aligned with the idea of a receptive field.

The trial-to-trial noise in the responses, given a fixed stimulus, is described by the conditional covariance Cn (the superscript n denotes "noise"). In particular, the diagonal entries Cnii are the noise variances of the individual neurons.

We ask questions of the following type: given fixed tuning curves μ(s) and noise variances Cnii, how does the choice of the noise covariance structure Cn affect the linear Fisher information IF,lin (see Section "Defining the information quantities, signal and noise correlations")?

Besides this local information measure, which quantifies coding near a specific stimulus, we also consider global measures that describe coding of the entire ensemble of stimuli. These are IOLE and Imut,G, described in Section "Defining the information quantities, signal and noise correlations". For these quantities, the relevant noise covariance is the conditional covariance averaged over the stimulus ensemble; we overload the notation Cn for this average in the global coding contexts. The optimization problem can then be stated identically for IOLE and Imut,G.

Defining the information quantities, signal and noise correlations

Linear Fisher information.

Linear Fisher information quantifies how accurately the stimulus near a value s can be decoded by a local linear unbiased estimator, and is given by (5) IF,lin(s) = μ′(s)T(Cn)−1μ′(s), where μ′(s) is the vector of tuning-curve derivatives dμi(s)/ds. In the case of a K-dimensional stimulus the same definition holds, with (6) μ′(s) the N×K matrix with entries [μ′(s)]ik = ∂μi(s)/∂sk. In order for IF,lin to be defined by Eq. (5), we assume Cn is invertible and hence positive definite: Cn ≻ 0. It can be shown that (IF,lin)−1 is the (attainable) lower bound of the covariance matrix of the error of any local linear unbiased estimator. Here the term lower bound is used in the sense of positive semidefiniteness, that is, the ordering M ⪰ M′ holds if and only if M − M′ is positive semidefinite. To obtain a scalar information quantity in the multi-dimensional case, we consider the trace of the matrix above and, unless stated otherwise, also denote it by IF,lin.
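As a sanity check on this definition, the following sketch (our own; all numbers are invented for illustration) computes IF,lin for a toy population and verifies that the locally optimal linear readout, proportional to (Cn)−1μ′, attains the error-variance bound 1/IF,lin.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 5
mu_prime = rng.normal(size=N)              # tuning-curve derivatives at the reference stimulus
G = rng.normal(size=(N, N))
Cn = G @ G.T + 0.5 * np.eye(N)             # positive-definite noise covariance

# Linear Fisher information, Eq. (5): I = mu'^T (Cn)^{-1} mu'
I_F = mu_prime @ np.linalg.solve(Cn, mu_prime)

# Locally optimal linear unbiased readout: w ~ (Cn)^{-1} mu', scaled so that w^T mu' = 1.
w = np.linalg.solve(Cn, mu_prime)
w = w / (w @ mu_prime)

# Its error variance w^T Cn w attains the bound 1 / I_F.
print(w @ Cn @ w, 1.0 / I_F)
```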

Optimal linear estimator.

To quantify the global ability of the population to encode the stimulus (instead of locally, as for discrimination tasks involving small deviations from a particular stimulus value), we follow [18] and consider a linear estimator of the stimulus, given responses x: (7) ŝOLE(x) = ATx + b, with fixed parameters A and b that do not change with x. The readout coefficients that minimize the mean square error for a scalar random stimulus s, i.e. (8) E[(ŝOLE(x) − s)2], can be solved for analytically as in [18], yielding: (9) A = (Cn + Cμ)−1L, where (10) Cμ = Covs(μ(s)) is the covariance of the mean response across stimuli, and L is a column vector with entries Li = Cov(xi, s). Here the expectation generally means averaging over both noise and stimulus (except in Cμ, where averaging is only over the stimulus).

For multidimensional stimuli s, similar to the case of linear Fisher information, the lower bound (in the sense of positive semidefiniteness) of the error covariance is given by Cov(s) − LT(Cn + Cμ)−1L. Here L is extended to form an N×K matrix (11) Lik = Cov(xi, sk). Furthermore, a corresponding lower bound for the sum of squared errors is the scalar version tr[Cov(s) − LT(Cn + Cμ)−1L].

When minimizing the OLE error with respect to noise correlations, Cov(s), Cμ and L are constants with respect to the optimization. Minimizing the OLE error is therefore equivalent to maximizing the second term above, given by LT(Cn + Cμ)−1L. This motivates us to define what we call "the information for OLE," which is simply that second term, i.e., the term that is subtracted from the signal variance to yield the OLE error. Specifically, (12) IOLE = LT(Cn + Cμ)−1L. Thus, when IOLE is large, the decoding error is small, and vice versa. Comparing with the expression for IF,lin, we see a similar mathematical structure, which will enable almost identical proofs of our theorems for both of these measures of coding performance.
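The following sketch illustrates these OLE quantities on a toy linear-Gaussian model of our own construction (the linear "tuning" x = b·s + noise and all parameter values are assumptions made purely for illustration): it computes Cμ, L, the readout A of Eq. (9) and IOLE of Eq. (12), and then checks by Monte Carlo that the OLE mean squared error is approximately Var(s) − IOLE.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear-Gaussian model: x = b * s + noise, s ~ N(0, var_s), noise ~ N(0, Cn).
N, var_s = 4, 2.0
b = rng.normal(size=N)                      # linear "tuning" slopes
G = rng.normal(size=(N, N))
Cn = G @ G.T + np.eye(N)                    # noise covariance

Cmu = var_s * np.outer(b, b)                # covariance of the mean response, Eq. (10)
L = var_s * b                               # cross-covariance Cov(x_i, s), Eq. (11)

A = np.linalg.solve(Cn + Cmu, L)            # OLE readout, Eq. (9)
I_OLE = L @ A                               # Eq. (12): L^T (Cn + Cmu)^{-1} L

# Monte Carlo check: the OLE mean squared error should be close to var_s - I_OLE.
T = 200_000
s = rng.normal(scale=np.sqrt(var_s), size=T)
noise = rng.multivariate_normal(np.zeros(N), Cn, size=T)
x = s[:, None] * b + noise
s_hat = x @ A                               # all means are zero here, so the offset is 0
print(np.mean((s_hat - s) ** 2), var_s - I_OLE)
```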

As with IF,lin, we need Cn + Cμ to be invertible in order to calculate IOLE. Since the signal covariance matrix Cμ does not change as we vary Cn, this requirement is easy to satisfy. In particular, we assume Cμ is invertible (Cμ ≻ 0); then for every consistent – i.e., positive semidefinite – Cn, we have Cn + Cμ ⪰ Cμ ≻ 0, so that Cn + Cμ is invertible.

Mutual information for Gaussian distributions.

While the OLE and the linear Fisher information assume that a linear read-out of the population responses is used to estimate the stimulus, one may also be interested in how well the stimulus could be recovered by more sophisticated, nonlinear estimators. Mutual information, based on Shannon entropy, is a useful quantity of this sort. It has many desirable properties consistent with the intuitive notion of "information," and we will use it to quantify how well a non-linear estimator could recover the stimulus.

Assuming that the joint distribution of (x, s) is Gaussian (s can be multidimensional), the mutual information has a simple expression (13) Imut,G = (1/2) ln[det(Cov(s)) / det(Cov(s) − LT(Cn + Cμ)−1L)]. The quantities above are the same as in the definition of IOLE. Moreover, the logarithm is taken to base e, and hence the information is in units of nats. To convert to bits, one simply divides our values by ln 2.

There is a consistency constraint that must be satisfied by any joint distribution of (x, s), namely that (14) Cov(s) − LT(Cn + Cμ)−1L ⪰ 0. This guarantees that Imut,G is always defined and real (but could be +∞). To keep Imut,G finite, one needs to further assume Cov(s) − LT(Cn + Cμ)−1L ≻ 0, which is equivalent to Cn + Cμ − L Cov(s)−1LT ≻ 0. This can be seen by rewriting the mutual information with the positions of the two variables exchanged (since mutual information is symmetric), which expresses Imut,G in terms of det(Cn + Cμ) and det(Cn + Cμ − L Cov(s)−1LT).

It is easy to see that the formula for Imut,G contains terms similar to those in IF,lin and IOLE. In the scalar stimulus case, since ln is an increasing function, maximizing Imut,G is equivalent to maximizing IOLE. In fact, the leading term in the Taylor expansion of Imut,G with respect to IOLE is IOLE/(2 Var(s)), which is proportional to IOLE. In the case of multivariate stimuli, we note that the operation log det(·) preserves the ordering defined in the positive semidefinite sense, i.e., M ⪰ M′ ≻ 0 implies log det(M) ≥ log det(M′). This close relationship suggests a way of transforming IOLE to a comparable scale of information in nats (or bits), as for Imut,G.
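A small sketch (our own toy example, reusing the linear-Gaussian construction above and assuming the scalar-stimulus form of Eq. (13)) makes the monotone relationship between Imut,G and IOLE explicit.

```python
import numpy as np

rng = np.random.default_rng(3)

# Same toy linear-Gaussian construction as before (for illustration only).
N, var_s = 4, 2.0
b = rng.normal(size=N)
G = rng.normal(size=(N, N))
Cn = G @ G.T + np.eye(N)
Cmu = var_s * np.outer(b, b)
L = var_s * b

I_OLE = L @ np.linalg.solve(Cn + Cmu, L)

# Scalar-stimulus Gaussian mutual information (nats): an increasing function of I_OLE.
I_mut = 0.5 * np.log(var_s / (var_s - I_OLE))
print(I_mut, I_mut / np.log(2))             # in nats, and converted to bits
print(I_OLE / (2 * var_s))                  # leading Taylor term when I_OLE / var_s is small
```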

Signal and noise correlations.

Given the noise covariance matrix Cn, one can normalize it as usual by its diagonal elements (variances) to obtain correlation coefficients (15) ρnij = Cnij/√(Cnii Cnjj).

We next discuss signal correlations, which describe how similar the tuning of a pair of neurons is. For linear Fisher information, we define signal correlations as (16) ρsij = μ′i·μ′j/(‖μ′i‖ ‖μ′j‖). Here μ′i is the sensitivity vector describing how the mean response of neuron i changes with s. With the above normalization, ρsij takes values between −1 and 1.

For the other two information measures we use, IOLE and Imut,G, a similar signal correlation can be defined. Here, we first define analogous tuning sensitivity vectors for each neuron, which replace μ′i in Eq. (16); these vectors are given in Eq. (17), for IOLE and Imut,G respectively, and are expressed in terms of the diagonal matrix of noise variances.

The definitions of signal correlations above are chosen so that they are tied directly to the concept of the sign rule, as demonstrated in the proof of Theorem 1. As a consequence, for the case of IOLE and Imut,G, signal correlations are defined through the population readout vector. This has an important implication that we note here. Consider a case where only a subset of the total population is "read out" to decode a stimulus. Then, the population readout vector — and hence the signal correlations defined above — could vary in magnitude and even possibly change signs depending on which neurons are included in the subset.

A different definition of signal correlations for the OLE is sometimes used in the literature. Naturally, one should not expect our sign-rule results to apply exactly under that definition. However, when we redid our plots of signal vs. noise correlations using it for our major numerical example (Fig. 5 ABC), we observed the same qualitative trend (data not shown). This reflects the fact that, at least in this specific example, the signal correlations defined in the two ways are positively correlated. Understanding how general this phenomenon is would require further studies taking into account how the relevant statistics (Cμ, L, etc.) are generated from tuning curves or neuron models.

We next define the notion of the magnitude, or strength, of correlations, as used throughout the paper. In particular, in Section "Heterogeneously tuned neural populations", we considered restrictions on the magnitudes of noise correlations when finding their optimal values. We proceed as follows. Since ρnij = ρnji, the list of all N(N−1)/2 pairwise correlations of the population can be regarded as a single point in RN(N−1)/2. Unless stated otherwise, the vector 2-norm (Euclidean norm) in that space is what we call the "strength of correlations": (18) ‖ρn‖ = (Σi<j (ρnij)2)1/2.
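The sketch below collects these definitions into code (our own helper functions; the cosine-similarity form used for the signal correlations is our reading of Eq. (16), and the toy covariance is invented).

```python
import numpy as np

def noise_correlations(Cn):
    """Normalize a covariance matrix by its diagonal to get correlation coefficients (Eq. 15)."""
    d = np.sqrt(np.diag(Cn))
    return Cn / np.outer(d, d)

def signal_correlations(sensitivity):
    """Normalized inner products of per-neuron sensitivity vectors (rows of `sensitivity`).

    For a scalar stimulus each row has length 1, and the result reduces to the
    sign of the product of the tuning-curve derivatives.
    """
    unit = sensitivity / np.linalg.norm(sensitivity, axis=1, keepdims=True)
    return unit @ unit.T

def correlation_strength(R):
    """Euclidean norm of the vector of pairwise correlations (Eq. 18)."""
    iu = np.triu_indices_from(R, k=1)
    return np.linalg.norm(R[iu])

# Small usage example with made-up numbers.
rng = np.random.default_rng(4)
G = rng.normal(size=(5, 5))
Cn = G @ G.T + np.eye(5)
R = noise_correlations(Cn)
print(correlation_strength(R))
```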

Proof of Theorem 1: The generality of the sign rule

We will now restate and then prove Theorem 1, first for IF,lin and then for IOLE and Imut,G.

Theorem 1. If, for each pair of neurons, the signal and noise correlations have opposite signs, the linear Fisher information is greater than the case of independent noise (trial-shuffled data). In the opposite situation where the signs are the same, the linear Fisher information is decreased compared to the independent case, in a regime of very weak correlations. Similar results hold for IOLE and Imut;G, with a modified definition of signal correlations given in Section “Defining the information quantities, signal and noise correlations”.

The proof proceeds by showing that information increases along the direction indicated by the sign rule, and that the information quantities are convex, so that information is guaranteed to increase monotonically along that direction.

Proof. Consider the linear Fisher information (19) IF,lin = μ′T(Cn)−1μ′. Let C0 be the diagonal part of Cn, corresponding to the (noise) variance of each neuron. We change the off-diagonal entries of Cn along a constant direction B and consider a parameterization of the resultant covariance matrix with parameter t: Cn(t) = C0 + tB. We evaluate the directional derivative (d/dt) of IF,lin at t = 0, (20) dIF,lin/dt|t=0 = −μ′TC0−1BC0−1μ′ = −Σi≠j Bij μ′iμ′j/(Cnii Cnjj). Here we have used the identity d(Cn)−1/dt = −(Cn)−1(dCn/dt)(Cn)−1 and the fact that C0 is diagonal. Recalling the definition of signal correlations in Eq. (16), if the sign of Bij is chosen to be opposite to the sign of μ′iμ′j for all pairs i ≠ j, then Eq. (20) ensures that the directional derivative is positive at t = 0.

We now derive a global consequence of this local derivative calculation. IF,lin, as a function of t, satisfies dIF,lin/dt > 0 at t = 0. Since IF,lin(t) is smooth, there exists t0 > 0 such that dIF,lin/dt > 0 for 0 ≤ t < t0. For the corresponding Cn(t), applying the mean value theorem, we have IF,lin(Cn(t)) > IF,lin(C0) for 0 < t < t0. Similarly, for the opposite case where all the signs of the noise correlations are the same as the signs of the signal correlations, the information will be smaller than in the independent case (at least for weak enough correlations). This proves the local "sign rule".

Thus, at least for small noise correlations, choosing noise correlations that oppose signal correlations will always yield higher information values than the case of uncorrelated noise. To prove the "global" version of this theorem — that opponent signal and noise correlations always yield better coding than does independent noise — we will need to establish the convexity of the information. This is done in Theorem 2.

Note that, as we will soon prove, IF,lin is a convex function of Cn, and hence its derivative along the direction B is non-decreasing in t. This means that the t0 from our prior argument can be made arbitrarily large, and the same result – that performance improves when noise correlations are added, so long as they lie along this direction – will hold, provided that Cn(t) is still physically realizable (i.e., positive semidefinite). Thus, the improvement over the independent case is guaranteed globally for any magnitude of noise correlations.

Note that the arguments above do not guarantee that the globally optimal noise correlation structure will follow the sign rule. Indeed, we have seen concrete examples of this in Figs. 2 and 3.
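A quick numerical illustration of the global statement just argued (a toy example of our own, with invented tuning derivatives and variances): building a noise covariance whose off-diagonal signs oppose the signal-correlation signs increases IF,lin relative to the independent (diagonal) case.

```python
import numpy as np

rng = np.random.default_rng(5)

N = 6
mu_prime = rng.normal(size=N)                      # tuning-curve derivatives
variances = rng.uniform(0.5, 2.0, size=N)          # fixed noise variances

def fisher(Cn):
    return mu_prime @ np.linalg.solve(Cn, mu_prime)

Cn_indep = np.diag(variances)                      # independent (trial-shuffled) case

# Noise correlations with signs opposite to sign(mu'_i * mu'_j); small enough to stay valid.
rho = -0.1 * np.sign(np.outer(mu_prime, mu_prime))
np.fill_diagonal(rho, 1.0)
d = np.sqrt(variances)
Cn_sr = np.outer(d, d) * rho
assert np.all(np.linalg.eigvalsh(Cn_sr) > 0)       # still a valid covariance matrix

# The sign-rule covariance yields strictly higher linear Fisher information.
print(fisher(Cn_indep), fisher(Cn_sr))
```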

Remark 1. From Eq. (20), the gradient (steepest uphill direction) of IF,lin, evaluated with independent noise, has off-diagonal entries proportional to −μ′iμ′j/(Cnii Cnjj).

Remark 2. The same result can be shown for IOLE and Imut,G, replacing μ′i in Eq. (16) with the corresponding tuning sensitivity vectors defined in Eq. (17); the analogous gradient expressions follow from the same calculation.
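The explicit gradient expressions in Remarks 1 and 2 did not survive formatting here, but the underlying calculation for IF,lin is easy to check numerically. The sketch below (our own; a toy population with invented values) compares a central finite difference against the standard matrix-calculus result that, at independent noise, the derivative with respect to a symmetric perturbation of the off-diagonal pair (i, j) is −2μ′iμ′j/(Cnii Cnjj).

```python
import numpy as np

rng = np.random.default_rng(6)

N = 4
mu_prime = rng.normal(size=N)
variances = rng.uniform(0.5, 2.0, size=N)
C0 = np.diag(variances)                     # independent-noise covariance

def fisher(Cn):
    return mu_prime @ np.linalg.solve(Cn, mu_prime)

i, j, eps = 0, 2, 1e-6
E = np.zeros((N, N))
E[i, j] = E[j, i] = 1.0                     # symmetric perturbation of one off-diagonal pair

numeric = (fisher(C0 + eps * E) - fisher(C0 - eps * E)) / (2 * eps)
analytic = -2 * mu_prime[i] * mu_prime[j] / (variances[i] * variances[j])
print(numeric, analytic)                    # agree to finite-difference accuracy
```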

Proof of Theorem 2: Optima lie on boundaries

We begin by restating Theorem 2, which we then prove first for IOLE, and then for IF,lin and Imut,G.

Theorem 2. The optimal Cn that maximize information must lie on the boundary of the region of correlations considered in the optimization.

We will show that IOLE is a convex function of Cn, and hence it will either attain its maximum value only on the boundary of the allowed region, or be uniformly constant. The latter is a trivial case that only happens when L = 0, as we see below.

Proof. To show that a function is convex, it is sufficient to show that its second derivative along any linear direction is non-negative. For any constant direction B of change of the (off-diagonal entries of) Cn, we consider a straight-line perturbation, parameterized by t. Taking the derivative of IOLE with respect to t, (21) dIOLE/dt = −LT(Cn + Cμ)−1B(Cn + Cμ)−1L. We have used that d(Cn + Cμ)−1/dt = −(Cn + Cμ)−1B(Cn + Cμ)−1. Let A = (Cn + Cμ)−1L. Taking another derivative gives (22) d2IOLE/dt2 = 2(BA)T(Cn + Cμ)−1(BA) ≥ 0. The inequality follows from Lemma 7 (see below) and (Cn + Cμ)−1 being positive definite. Also, note that equality holds exactly when BA = 0.

For the case when IOLE is constant over the region, using Proposition 10 (below), BA = 0 for any direction of change B. Letting Bij = Bji = 1 for a single pair i ≠ j (and zero elsewhere), we see that Ai = Aj = 0; ranging over all pairs, A must be 0. This leads to A = 0 and, since L = (Cn + Cμ)A, to L = 0. This was the claim at the beginning. In other words, in the case where IOLE is constant with respect to the noise correlations, the optimal read-out is zero, regardless of the neurons' responses. With the exception of this (trivial) case, the optimal coding performance is obtained when the noise correlation matrix lies on a boundary of the allowed region.

Lemma 7. (Linear algebra fact) For any positive semidefinite matrix M, and any matrix B (assuming the dimensions match for matrix multiplication), BTMB is positive semidefinite and hence tr(BTMB) ≥ 0. If "=" is attained, then MB = 0.

Remark 3. When M ≻ 0, i.e. positive definite, MB = 0 leads to B = 0, as M is invertible.

Proof. For any vector x (with the same dimension as the number of columns of B), xT(BTMB)x = (Bx)TM(Bx) ≥ 0, since M ⪰ 0. Thus, by definition, BTMB ⪰ 0, and therefore tr(BTMB) ≥ 0.

For the second part, if tr(BTMB) = 0, all the eigenvalues of BTMB must be 0 (since none of them can be negative, as BTMB ⪰ 0), hence BTMB = 0. This in fact requires MB = 0. To see this, let M = QΛQT be an orthogonal diagonalization of M. For any vector x as above, xTBTMBx = 0. Since the eigenvalues in Λ are non-negative, let Λ1/2 be the diagonal matrix of their square roots. We have (23) 0 = xTBTQΛQTBx = ‖Λ1/2QTBx‖2. Therefore the vector Λ1/2QTBx = 0, and MBx = QΛ1/2(Λ1/2QTBx) = 0. Since x can be any vector, we must have MB = 0.

Remark 4. Because of the similarities in the formulae for IOLE and IF,lin, the same property can be shown for IF,lin. In order for Cn to be invertible, IF,lin is only defined over the open set of positive definite Cn. We therefore assume the closure of the allowed region is contained within this open set to state the boundary result.

A parallel version of Theorem 2 can also be established for Imut,G, as we next show.

Proof of Theorem 2 for Imut;G. Again consider the linear parameterization along a direction , as defined above. Let . The consistency constraint in Eq. (14) assures . To keep finite, we further assume . Then, the derivative of with respect to is(24)where we have used the identity . The second derivative is thusHere is the identity matrix, , and as defined below Eq. (9). being positive definite allows us to split it into its square root . Moreover, the identity , for any matrices , and , is used in deriving the last line in the above equation. For the last inequality, we apply Lemma 7 to the two terms with and being positive semidefinite.

We have thus shown that Imut,G is convex. For the special case in which Imut,G is constant, Proposition 10 shows that the same degenerate condition arises; with the same argument as for IOLE, we observe that, in this (trivial) case, the optimal read-out is zero.

Proof of Theorem 3: Conditions on the noise covariance matrix, under which noise-free coding is possible

We begin by showing that, for a given set of tuning curves, the maximum possible information – which may or may not be attainable in the presence of noise – is that which would be achieved if there were no noise in the responses. This is the content of Lemma 8. Next, we will introduce Lemma 9, which is a useful linear-algebraic fact that we will use repeatedly in our proofs.

We will then prove Theorem 3, which provides the conditions under which such noise-free performance can be obtained. One direction of the proof of Theorem 3 (sufficiency) is straightforward, while the other direction (necessity) relies on the observation of several conditions that are equivalent to the one in the theorem. We prove these equalities in Proposition 10.

For Theorem 3, we will only consider IOLE, since IF,lin and Imut,G will typically be infinite in the noise-free case (Cn becomes singular). If one takes all instances of infinite information as "equally optimal," a version of Theorem 3 can also be obtained; in that case, the condition in Theorem 3 becomes a sufficient but not necessary condition for infinite information.

Lemma 8 (Upper bound by noise-free information). (25) IOLE ≤ LT(Cμ)−1L. Here the noise-free information LT(Cμ)−1L refers to that which is obtained by plugging Cn = 0 in place of Cn in Eq. (12).

Proof. This follows essentially from the consistency between the information quantity and the positive semidefinite ordering of covariance matrices. First, we write (26) LT(Cμ)−1L − IOLE = LT[(Cμ)−1 − (Cn + Cμ)−1]L. Then, we note the fact that for two positive definite matrices, M ⪰ M′ if and only if M′−1 ⪰ M−1. From this, together with Cn + Cμ ⪰ Cμ, we have (Cμ)−1 ⪰ (Cn + Cμ)−1. Finally, applying Lemma 7 yields LT(Cμ)−1L − IOLE ≥ 0.

Lemma 9 (Useful linear algebra fact). If, for matrices X and Y such that Y and X + Y are invertible, and a vector v, we have X(X + Y)−1v = 0, then (X + Y)−1v = Y−1v.

Proof. Y(X + Y)−1v = (X + Y)(X + Y)−1v − X(X + Y)−1v = v, and hence (X + Y)−1v = Y−1v.

Proposition 10. (Equivalent conditions used in proving the noise-free coding Theorem 3).

Along a certain direction , the following conditions are equivalent.(27)The same also holds for and .

Proof for IOLE. “”:

We again consider parametrized deviations from , for some constant matrix B. Let , and recall (Eq. (22)),(28)Since is positive definite, according to the remark after Lemma 7, we have .

)”: If , by Lemma 9, . We have , for all in the allowed region, and hence .

”: immediate.

This concludes the proof for .

Proof for IF,lin. For IF,lin, we further assume Cn is positive definite to avoid infinite information. Identical arguments will prove the properties above, with Cn + Cμ replaced by Cn.

Proof for Imut, G. For , we similarly assume (as defined in the proof of Theorem 2). Let , then ,It is easy to see . When holds, using Lemma 7, each of the two terms must be 0. In particular, as we discussed in the proof of Theorem 2 for (above), each of the terms is non-negative. Thus, if their sum is , then each term must individually be . According to the remark after Lemma 7, the second term being 0 indicates that or , which is .

If holds, by Lemma 9, we have . We have , for all in the allowed region, and hence . Similarly . This proves the property for .

Theorem 3. A covariance matrix Cn attains the noise-free bound for OLE information (and hence is optimal), if and only if CnA = Cn(Cμ)−1L = 0. Here L is the cross-covariance between the stimulus and the responses (Eq. (11)), Cμ is the covariance of the mean response (Eq. (10)), and A is the linear readout vector for OLE, which is the same as in the noise-free case — that is, A = (Cn+Cμ)−1L = (Cμ)−1L — when the condition is satisfied.

Proof. If CnA = 0, then Lemma 9 implies that A = (Cn + Cμ)−1L = (Cμ)−1L, which means that IOLE = LTA = LT(Cμ)−1L attains the noise-free bound of Lemma 8, using the definition in Eq. (12).

For the other direction of the theorem, consider a function of , , whose values at the endpoints are equal, according to saturation of the information bound. The mean value theorem assures that there exists a such that(29)Since is positive semidefinite, according to Lemma 7, . Now using Lemma 9, we have that , and the readout vector .

Proof of Theorem 4: Conditions on tuning curves and variance, under which noise-free coding performance is possible

Next, we will restate, and then prove, Theorem 4. The proof will require using geometric ideas in Lemma 11, which we will state and prove below.

Theorem 4. For a scalar stimulus, let qi = |Ai|√(Cnii), i = 1⋅⋅⋅N, where A = (Cμ)−1L is the readout vector for OLE in the noise-free case. Noise correlations may be chosen so that coding performance matches that which could be achieved in the absence of noise if and only if (1) 2 maxi qi ≤ Σi qi. When "<" is satisfied, all optimal correlations attaining the maximum form a convex set of dimension N(N−1)/2 − N on the boundary of the spectrahedron. When "=" is attained, the dimension of that set is N0(N0−1)/2, where N0 is the number of zeros in {qi}.

The proof is based on the condition in Theorem 3. After taking several invertible transforms of the equation, the problem of finding a noise-canceling Cn is transformed into that of finding a set of vectors, whose lengths are specified by the qi, that sum to zero (the vectors form a closed loop when connected consecutively). This allows us to take a geometrical point of view, in which inequality Eq. (1) becomes the triangle inequality. This proves the "necessary" part of the Theorem. Lemma 11 establishes the opposite direction, by inductively constructing the set of vectors that sum to zero.

This procedure will yield one "particular" Cn with the noise-canceling property. Much as when finding the general solution of a linear ODE, we then add to our particular solution an arbitrary homogeneous solution, which belongs to a vector space of dimension N(N−1)/2 − N. In order for our perturbed solution to remain positive semidefinite, at least for small enough perturbations, the particular Cn we start with must be generic. In other words, it must satisfy a rank condition, which is guaranteed by the construction in Lemma 11. We can then conclude that the set of all noise-canceling Cn forms a convex region whose dimension equals that of the space of homogeneous solutions.

Finally, special treatment is given to the case of "=" in Eq. (1), as well as to cases where some qi are 0.

Proof. To establish the necessity direction of the Theorem, first let be a diagonal matrix with or , where vector . Note that(30)Let , a positive semidefinite matrix with diagonal .

can be diagonalized by an orthogonal matrix , . Without loss of generality, further assume that the first diagonal elements of are positive, with the rest being 0, where . Let be the first block of , and be the first rows of . Then we have(31)Let , a matrix, and be the -th column. As , the 2-norm of vector is . Let be the maximum of ,(32)This concludes the necessary direction of our proof.

To establish sufficiency, we first focus on the case of "<" in Eq. (1) with all qi > 0. We will construct a generic Cn of rank N − 1 satisfying CnA = 0. We will basically reverse the direction of the arguments in Eqs. (30)–(32). We will later deal with the "=" case, and with the case where qi = 0 for some i.

Lemma 11 Let , be an orthonormal basis of . Given a set of positive satisfying “” in Eq. (1), there exist vectors , such that , and the spanned linear subspace .

Proof. We prove this by induction. has to be at least 3 for the inequality to hold. For , this is the case of a triangle. There is a (unique) triangle , for which the length of the three sides , , are respectively. The altitude from intersects the line of at . Let be the origin of the coordinate system, with being the x-axis and aligned with , and the altitude being the y-axis aligned with . From such a picture, it is easy to verify the following: , , satisfies the lemma, where if lies within and otherwise.

For the case of , assume that is the largest of the . Because of the inequality, there will always exist some non-negative real number (not necessarily one of the ) such that(33)We can verify that the set satisfies the inequality as well. By the assumption of induction, there exist vectors that span the space of , such that and .

Note the choice of also guarantees that can be the edge lengths of a triangle. Applying the result at , the three sides , , correspond to respectively. Let , . It is easy to verify that these satisfy the lemma.

Using the lemma, we have a set of . Stacking them as column vectors gives a matrix ; moreover, . Let , which is positive semidefinite with diagonals . It is easy to show that , by comparing the null spaces of the matrices. Let , where is defined as above. Then .

Now consider the case where there are zeros in {qi}. Assume that the first entries contain all of the non-zero values. We apply the construction above for the first dimensions, and get a matrix such that , , where is part of with the first elements. The following block diagonal matrix(34)satisfies and .

We have shown that for the “” case in the theorem, there is always a noise canceling . Consider the direction , in which off-diagonal elements of vary, while keeping (temporarily ignoring the positive semidefinite constraint). The set of all such form a linear subspace of , determined by the linear system . Since there are equations, the dimension of is at least .

In the “” case, there must be at least 3 non-zero in order for the triangle inequality to be satisfied in Eq. 1. We will choose these three to be . Consider a block of the coefficient matrix associated with the system (note that the entries of are considered to be unknown variables), that are columns corresponding to variables (35)

Performing Gaussian elimination on the columns of this matrix, we obtain the following matrix, which will have the same rank.(36)This matrix – which determines the number of constraints that must be satisfied in order for – has rank , and hence is exactly .

For any direction in , we can always perturb the generic we found above by some finite amount , and still have be positive semidefinite. Let be the smallest non-zero eigenvalue of . Take any . For any vector , let be an orthogonal decomposition where is the projection along the direction of . Then(37)

This shows that the are positive semidefinite and they form a set of dimension as . We can always take the admissible values to their extremes, and the resulting matrices are all the possible noise canceling . For any , , and must be in . Note that the sets of positive semidefinite (spectrahedra) are convex. As a consequence, any point along the segment will be positive semidefinite. This shows we must have encompassed when considering the largest possible perturbations of , in any direction . Moreover, we note that the set of all noise-canceling is convex: if , , for any and is positive semidefinite, with the diagonal matching .

Thus, we have proved the claim about the dimension and convexity of the set of optimal correlations for the case of “” in Eq. (1).

Finally, for the special case of “” in Eq. (1), again first consider the case where all . As before, solving is equivalent to solving and there is an one to one correspondence between the two. Revisiting Eq. (32) in the proof above, the equality condition in the triangle inequality implies that all point along the same direction, and that is in the opposite direction, in order to cancel their sum. This fully determines , where , and(38)It is easy to verify that , and hence there is a unique noise canceling .

For the case when there are 0's among the , assume that the first coordinates are non-zero, so that . Next, we write in block matrix form, with blocks of dimension and :(39)Applying the previous argument from the case, there is a unique . Moreover, note that , following from the fact that in Eq. (38) has rank 1. Let be the orthogonal diagonalization and . Let be the identity matrix of dimension . Then we can take an orthogonal transform:With the notation , the original problem is therefore equivalent to finding all and such that,(40)while keeping the matrix in this equation positive semidefinite.

For any positive semidefinite matrix , it is easy to show that by considering the principle minor with indices , which must be non-negative. Note that since has only one non-zero diagonal entry, this forces the first columns of to be entirely 0. So we can rewrite the block matrix by dimension and as(41)where is the -th column of . Since , we have . It can be verified that, as long as the block structure of Eq. (41) is satisfied, Eq. (40) is always true. The positive semidefinite constraint becomes the constraint that the lower block be positive semidefinite; in turn, this corresponds to a spectrahedron (and hence a convex set) of dimension . Note that this dimensionality and convexity will be preserved when we undo the invertible linear transforms performed in prior steps to obtain the noise-canceling .

Proof of Theorem 5: Probability that noise-free coding is possible

In this subsection, we will restate, and then prove, Theorem 5.

Theorem 5. If the qi defined in Theorem 4 are independent and identically distributed (i.i.d.) as a random variable X on [0, ∞), then the probability that the condition Eq. (1) for noise-free coding is satisfied, given in Eq. (2), tends to 1 as the population size N grows.

Proof. We will use the following fact to establish a lower bound for the probability of the event in the theorem (below, we denote this event as ).(42)We choose the two events and as and . Note that implies C,(43)the event in concern. We will then show that, for large populations, and , and thus .

For , by the law of large numbers, the average should converge to the expectation (which is a positive number), hence(44)

We next consider event B. Let the cumulative distribution function of X be F. Then the cumulative distribution function of maxi qi is FN, by the assumption that these variables are drawn i.i.d. In the resulting bound, the first inequality is obtained via the lower bound of F over the interval of integration, and the second uses the fact that F ≤ 1.

As N → ∞, the last integral converges to 0 by the Lebesgue dominated convergence theorem. Hence P(B) → 1 as N → ∞.

Combining the limits of P(A) and P(B) using Eq. (42), together with the fact that probabilities cannot exceed 1, we conclude that P(C) must approach 1 as N → ∞.
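The limiting behavior in Theorem 5 is easy to probe by simulation. The sketch below (our own; it assumes the triangle-type form of Eq. (1), 2·max qi ≤ Σ qi, as described in the proof of Theorem 4, and uses an arbitrary log-normal distribution for X) estimates the probability that the noise-free condition holds as the population grows.

```python
import numpy as np

rng = np.random.default_rng(7)

def prob_noise_free(N, trials=20_000):
    """Monte Carlo estimate of P(2 * max(q) <= sum(q)) for i.i.d. nonnegative q_1..q_N."""
    q = rng.lognormal(size=(trials, N))      # an arbitrary choice for the distribution of X
    return np.mean(2 * q.max(axis=1) <= q.sum(axis=1))

for N in (3, 5, 10, 20, 50):
    print(N, prob_noise_free(N))
# The estimated probability rises toward 1 as the population size N grows.
```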

Proof of Proposition 6: Sensitivity to perturbations

Here, we will prove Proposition 6, which puts bounds on the condition numbers that define the sensitivity of our coding metrics to perturbations in noise correlations or the tuning curves. For our proof, we will require three different lemmas. We state and prove these, before moving on to Proposition 6.

Here, we will first consider the condition number for the case of a scalar stimulus , when is a vector. In the proof of the proposition, we show how to extend the results to the case of multivariate . As we mentioned in Section “Sensitivity and robustness of the impact of correlations on encoded information”, the same proof works for as well as .

Lemma 12. For any submultiplicative matrix norm and ,(45)

Proof. Since , exists and(46)

Lemma 13. For any positive definite matrix , vectors and such that ,(47)

Proof.Here, we have used , , and the assumed condition in the last line.

Lemma 14. For any positive definite matrix , vector and matrix where ,(48)

Proof. Here we have used . As , Lemma 12 is applied to obtain the last line.

Proposition 6. The local condition number of IF;lin under perturbations of Cn (where magnitude is quantified by 2-norm) is bounded by(3)where max and min are the largest and smallest eigenvalue of Cn respectively. Here is the condition number with respect to the 2-norm, as defined in the above equation.

Similarly, the condition number for perturbing of is bounded by(4)where is the i-th column of and assume for all i. Here K is the dimension of the stimulus s.

Proof. Note that(49)where is the -th unit vector (). Since the bound in Lemma 14 does not depend on , we apply the Lemma for and each respectively. For any perturbation satisfying , we haveHere . We then note that for positive semidefinite matrices , , where and are the smallest and largest eigenvalues of . This proves the bound on the condition number for perturbing .

Similarly, for a perturbation of with , . This guarantees that(50)Applying Lemma 13 for each and , we haveHere is the Frobenius norm and we have used the fact for any matrix , . The last inequality follows from the definition of .

Details for numerical examples and simulation

Here, we describe the parameters of our numerical models, and the numerical methods we used.

Parameters for Fig. 1, Fig. 2 and Fig. 3.

All parameters we use are dimensionless, unless stated otherwise.

In Fig. 1, the mean response for the three neurons under stimulus 1 (red) and 2 (blue) is and respectively:(51)For each case of correlation structure (i.e., for each row in Figure 1), the noise covariance matrix is the same for the two stimuli, and all neuron variances . In detail:(52)(53)

The confidence circles and spheres are calculated based on a Gaussian assumption for the response distributions.

In Fig. 2, the noise variances are all set to 1. Additionally,(54)

In Fig. 3, the noise variances are all set to 1. In panel A(55)For panel B(56)

Heterogeneous tuning curves.

For the results in Section “Heterogeneously tuned neural populations”, we use the same model and parameters as in [8] to set up a heterogeneous population with tuning curves of random amplitude and width. For completeness, we include the details of this setup as follows:

The shape of each tuning curve (specifying firing rates) is modeled by a von Mises function, an analog of the Gaussian distribution on the unit circle (Eq. (57)). Its parameters respectively control the magnitude, width and preferred direction of each neuron. We set the preferred directions to be equally spaced around the stimulus circle. The magnitude parameters are independently chosen from a chi-square distribution with 3 degrees of freedom, scaled to a mean of 19. The width parameter is similarly drawn from a log-normal distribution, with parameters giving mean 2 and standard deviation 2 for the underlying normal distribution.

We assume Poisson firing variability, so that each neuron's noise variance equals its mean spike count, and use a fixed spike-count window (in ms) in Figs. 4 and 5.
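For concreteness, here is a sketch of this heterogeneous-population setup (our own code; the exact von Mises parameterization of Eq. (57) and the length of the counting window are assumptions, flagged in the comments, while the other parameters follow the description above).

```python
import numpy as np

rng = np.random.default_rng(8)

N = 50
T = 0.1                                          # spike-count window in seconds (assumed value)

phi = np.linspace(0, 2 * np.pi, N, endpoint=False)      # preferred directions, equally spaced
amp = rng.chisquare(df=3, size=N) * (19.0 / 3.0)         # magnitudes: chi-square(3), scaled to mean 19
kappa = np.exp(rng.normal(loc=2.0, scale=2.0, size=N))   # widths: log-normal, underlying N(2, 2)

def rates(s):
    """One plausible von Mises parameterization: peak rate amp_i, width set by kappa_i."""
    return amp * np.exp(kappa * (np.cos(s - phi) - 1.0))

s0 = 1.0
mean_counts = rates(s0) * T
noise_var = mean_counts                          # Poisson variability: variance equals mean count
print(mean_counts[:5], noise_var[:5])
```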

Equivalence between penalty functions and constrained optimizations.

In this section we note a standard fact about implementing constrained optimization with penalty functions — i.e., the method of Lagrange multipliers.

Consider an optimization problem: maximize f(x). Now add a penalty term with constant c > 0 and consider the new optimization problem: maximize f(x) − c·g(x). If x* is one of the solutions to this new optimization problem, then it is also an optimal solution to the constrained optimization problem: maximize f(x) subject to g(x) ≤ g(x*).

To show this, let x be any point that satisfies g(x) ≤ g(x*). Further, note that x* is also a solution to the problem of maximizing f(x) − c·(g(x) − g(x*)), since we have simply added the constant c·g(x*). Therefore, f(x) ≤ f(x) − c·(g(x) − g(x*)) ≤ f(x*) − c·(g(x*) − g(x*)) = f(x*).

As x* also satisfies the constraint g(x) ≤ g(x*), we conclude that x* is an optimal solution to the constrained optimization problem.
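A tiny numerical illustration of this equivalence (our own example, with an arbitrary objective f and penalty g): for each penalty constant c, the maximizer of f − c·g also maximizes f among all points whose penalty does not exceed that of the maximizer.

```python
import numpy as np

# Toy objective and penalty: f(x) = 4x - x^2, g(x) = x^2.
f = lambda x: 4 * x - x ** 2
g = lambda x: x ** 2

xs = np.linspace(-5, 5, 100_001)
for c in (0.5, 1.0, 4.0):
    x_star = xs[np.argmax(f(xs) - c * g(xs))]            # penalized optimum
    feasible = xs[g(xs) <= g(x_star)]                     # points satisfying g(x) <= g(x*)
    x_constrained = feasible[np.argmax(f(feasible))]      # constrained optimum
    print(c, x_star, x_constrained)                       # the two coincide
```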

We use this fact to find the information-maximizing noise correlations, with the restriction that the noise correlations be small in magnitude. For a given penalty constant, we maximize the information minus the penalty term, where the information is one of our measures, the optimization is over the off-diagonal elements of the covariance matrix, and the penalty is based on the measure of correlation strength in Eq. (58). Thanks to the above result, we can be assured that the resulting covariance matrix will be the one that maximizes the information for a particular strength of correlations. By varying the penalty constant (or the target strength in Eq. (58)), we can thus parametrically explore how the optimal correlation structures change as one allows either larger or smaller correlations in the system.

Penalty function.

In Section “Heterogeneously tuned neural populations”, our aim is to plot optimized noise correlations at various levels of the correlation strength, as quantified by the Euclidean norm. This constrained optimization problem can be achieved, as shown in the previous section, by adding a term to the information that penalizes the Euclidean norm — that is, a constant times the sum-of-squares of correlations. This is precisely the procedure that we follow, ranging over a number of different values of the constant to produce the plot of Fig. 4.

In more detail, we choose these different values of the constant as follows. To force the correlations towards a fixed strength of , we optimize a modified objective function with an additional term:(58)As will become clear, the term before the sum is a constant with respect to the terms being optimized; from one optimization to the next, we adjust the value of in this term. Here the variance terms are constants to scale properly as correlation coefficients. Also, is the gradient vector of at (the diagonal matrix corresponding independent noise) with respect to off-diagonal entries of (see the remarks after the proof of Theorem 1). means the entry-wise product of the two vectors (of length indexed by ). Note that is the ordinary vector 2-norm.

To understand this choice of the constant in (58), note that the new optimal correlations with the penalty can be characterized by setting the gradient of the total objective function to 0. In a small neighborhood of , the gradient of is close to . With these substitutions, the equation for the gradient of the total objective function yields approximately:(59)where we took an entry-wise product with and rearranged terms to obtain the final equality. The final equality implies that the (vector) 2-norm of noise correlations (i.e., the Euclidean norm) is approximately . This is what we set out to achieve with the additional term in the objective function.

Rescaling signal correlation.

In Fig. 5 DEF, we make scatter plots comparing noise correlations with the rescaled signal correlations. Here, we explain how and why this rescaling was done.

First, we note that the rescaling is done by multiplying each signal correlation by a positive weight. This will not change its sign, the property associated with the sign rule (Fig. 5 ABC).

Next, recall that in deriving the sign rule (Eq. (20)), we calculated the gradient of the information with respect to noise correlations. One should expect alignment between this gradient and the optimal correlations when their magnitudes are small. In other words, if we make a scatter plot with dots whose y and x coordinates are entries of the gradient and noise correlation vectors, respectively (so that the number of dots is the length of these vectors), we expect to see that a straight line will pass through all the dots.

We next note that the entries of the gradient vector are not exactly the normalized signal correlations (see Eq. (59)). Instead, this vector has additional "weight factors" that differ for each entry (neuron pair), and hence for each dot in the scatter plot. Thus, to reveal a linear relationship between signal and noise correlations in a scatter plot, we must scale each signal correlation by a proper (positive) weight, determined below. We then redo the scatter plots with these rescaled values on the horizontal axis. As we will see, the weights do not depend on the noise correlations.

We now determine . Recall that our goal is to define such that, when it is used to rescale signal correlations as above, we will see a linear alignment between signal and noise correlations. In other words, if we choose correctly, we will have (for any ). Comparing the formulae for (from the remarks after the proof of Theorem 1) and (Eq. (17)), we see that satisfies this (with constant). Here .

Acknowledgments

This work was inspired by an ongoing collaboration with Fred Rieke and his colleagues on retinal coding, which suggested the importance of “mapping” the full space of possible signal and noise correlations. We gratefully acknowledge the ideas and insights of these scientists. We further wish to thank Andrea Barreiro, Fred Rieke, Kresimir Josic and Xaq Pitkow for helpful comments on this manuscript.

Author Contributions

Conceived and designed the experiments: YH JZ ESB. Performed the experiments: YH. Analyzed the data: YH. Contributed reagents/materials/analysis tools: YH JZ ESB. Wrote the paper: YH JZ ESB.

References

  1. 1. Mastronarde D (1983) Correlated firing of cat retinal ganglion cells. I. Spontaneously active inputs to X- and Y- cells. Journal of Neurophysiology 49: 303–324.
  2. 2. Alonso J, Usrey W, Reid R (1996) Precisely correlated firing of cells in the lateral geniculate nucleus. Nature 383: 815–819.
  3. 3. Cohen MR, Kohn A (2011) Measuring and interpreting neuronal correlations. Nature Neuroscience 14: 811–819.
  4. 4. Gawne T, Richmond B (1993) How independent are the messages carried by adjacent inferior temporal cortical neurons? Journal of Neuroscience 13: 2758–2771.
  5. 5. Averbeck BB, Latham PE, Pouget A (2006) Neural correlations, population coding and computation. Nature Reviews Neuroscience 7: 358–366.
  6. 6. Zohary E, Shadlen MN, Newsome WT (1994) Correlated neuronal discharge rate and its implications for psychophysical performance. Nature 370: 140–143.
  7. 7. Abbott LF, Dayan P (1999) The effect of correlated variability on the accuracy of a population code. Neural Computation 11: 91–101.
  8. 8. Ecker AS, Berens P, Tolias AS, Bethge M (2011) The Effect of Noise Correlations in Populations of Diversely Tuned Neurons. Journal of Neuroscience 31: 14272–14283.
  9. 9. Shamir M, Sompolinsky H (2006) Implications of neuronal diversity on population coding. Neural Comput 18: 1951–1986.
  10. 10. Sompolinsky H, Yoon H, Kang K, Shamir M (2001) Population coding in neuronal systems with correlated noise. Physical Review E 64: 051904.
  11. 11. Averbeck BB, Lee D (2006) Effects of noise correlations on information encoding and decoding. Journal of Neurophysiology 95: 3633–3644.
  12. 12. Latham P, Roudi Y (2011) Role of correlations in population coding. arXiv preprint arXiv:1109.6524.
  13. 13. Romo R, Hernandez A, Zainos A, Salinas E (2003) Correlated neuronal discharges that increase coding efficiency during perceptual discrimination. Neuron 38: 649–657.
  14. 14. da Silveira RA, Berry MJ II (2013) High-Fidelity Coding with Correlated Neurons. arXiv.org.
  15. 15. Wilke SD, Eurich CW (2002) Representational accuracy of stochastic neural populations. Neural Comput 14: 155–189.
  16. 16. Josić K, Shea-Brown E, Doiron B, de la Rocha J (2009) Stimulus-dependent correlations and population codes. Neural Computation 21: 2774–2804.
  17. 17. Tkacik G, Prentice J, Balasubramanian V, Schneidman E (2010) Optimal population coding by noisy spiking neurons. Proc Natl Acad Sci USA 107: 14419–14424.
  18. 18. Salinas E, Abbott L (1994) Vector reconstruction from firing rates. Journal of Computational Neuroscience 1: 89–107.
  19. 19. Kohn A, Smith M (2005) Stimulus Dependence of Neuronal Correlation in Primary Visual Cortex of the Macaque. Journal of Neuroscience 25: 3661–3673.
  20. 20. Softky W, Koch C (1993) The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. Journal of Neuroscience 13: 334–350.
  21. 21. Britten K, Shadlen M, Newsome W, Movshon J (1993) Responses of neurons in macaque MT to stochastic motion signals. Visual Neurosci 10: 1157–1169.
  22. 22. de la Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A (2007) Correlation between neural spike trains increases with firing rate. Nature 448: 802–806.
  23. 23. Binder M, Powers R (2001) Relationship between Simulated Common Synaptic Input and Discharge Synchrony in Cat Spinal Motoneurons. J Neurophysiol 86: 2266–2275.
  24. 24. Hansen B, Chelaru M, Dragoi V (2012) Correlated variability in laminar cortical circuits. Neuron 76: 590–602.
  25. 25. Cover TM, Thomas JA (2006) Elements of Information Theory. John Wiley & Sons.
  26. 26. Beck J, Kanitscheider J, Pitkow X, Latham P, Pouget A (2013) The perils of inferring information from correlations. Cosyne Abstracts 2013, Salt Lake City, USA.
  27. 27. Beck JM, Ma WJ, Pitkow X, Latham PE, Pouget A (2012) Not noisy, just wrong: the role of suboptimal inference in behavioral variability. Neuron 74: 30–39.
  28. 28. Shamir M, Sompolinsky H (2004) Nonlinear population codes. Neural Computation 16: 1105–1136.
  29. 29. Koch C (1999) Biophysics of Computation. Oxford University Press.
  30. 30. Ganmor E, Segev R, Schneidman E (2011) Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proceedings of the National Academy of Sciences of the United States of America 108: 9679.
  31. 31. Ohiorhenuan I, Mechler F, Purpura K, Schmid A, Hu Q, et al. (2010) Sparse coding and high-order correlations in fine-scale cortical networks. Nature 466: 617–621.
  32. 32. Zylberberg J, Shea-Brown E (2012) Input nonlinearities shape beyond-pairwise correlations and improve information transmission by neural populations. arXiv preprint arXiv:1212.3549.
  33. 33. Prinz A, Bucher D, Marder E (2004) Similar network activity from disparate circuit parameters. Nature Neuroscience 7: 1345–1352.
  34. 34. Marder E (2011) Variability, compensation, and modulation in neurons and circuits. Proceedings of the National Academy of Sciences USA 108: 15542–15548.
  35. 35. Stevenson I, Kording K (2011) How advances in neural recording affect data analysis. Nature Neuroscience 14: 139–142.