
Dysregulation of excitatory neural firing replicates physiological and functional changes in aging visual cortex

  • Seth Talyansky,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – review & editing

    ¤ Current address: Stanford University, Stanford, California, United States of America

    Affiliations Catlin Gabel School, Portland, Oregon, United States of America, Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America

  • Braden A. W. Brinkman

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    braden.brinkman@stonybrook.edu

    Affiliation Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America

Abstract

The mammalian visual system has been the focus of countless experimental and theoretical studies designed to elucidate principles of neural computation and sensory coding. Most theoretical work has focused on networks intended to reflect developing or mature neural circuitry, in both health and disease. Few computational studies have attempted to model changes that occur in neural circuitry as an organism ages non-pathologically. In this work we contribute to closing this gap, studying how physiological changes correlated with advanced age impact the computational performance of a spiking network model of primary visual cortex (V1). Our results demonstrate that deterioration of homeostatic regulation of excitatory firing, coupled with long-term synaptic plasticity, is a sufficient mechanism to reproduce features of observed physiological and functional changes in neural activity data, specifically declines in inhibition and in selectivity to oriented stimuli. This suggests a potential causality between dysregulation of neuron firing and age-induced changes in brain physiology and functional performance. While this does not rule out deeper underlying causes or other mechanisms that could give rise to these changes, our approach opens new avenues for exploring these underlying mechanisms in greater depth and making predictions for future experiments.

Author Summary

Understanding how the brain’s neural circuitry changes during healthy aging is important for uncovering mechanisms of normal decline in neural performance and distinguishing these from changes due to age-related diseases such as Alzheimer’s or Parkinson’s disease. In this work we propose a computational model of an aging-like process in primary visual cortex that reproduces several experimentally-observed changes in senescent cats. In particular, our model predicts that an age-induced increase in excitatory neural activity amid ongoing synaptic plasticity and homeostatic regulation leads to decreased strengths of synaptic connections and deterioration of neural receptive fields, which in turn lead to decreased network sensitivity to oriented features in visual stimuli. These results suggest a potential causality between dysregulation of neuron firing and age-induced changes in brain physiology and functional performance, pointing the way toward new directions for investigating the dynamics of aging in depth and making predictions for future experiments.

Introduction

While healthy aging, and in particular its impact on neurological performance, has been the focus of numerous experimental studies [1–8], there are comparatively few theoretical and computational studies that focus on normal aging. Most computational studies to date model the effects of diseases that often manifest in advanced age in humans [9, 10], such as Alzheimer’s [11–22] or Parkinson’s disease [23–30]. Only recently has theoretical and computational work on aging in non-pathological networks begun to emerge [31, 32]. Because advanced age is one of the most important risk factors for developing such neurological disorders [9, 10, 33], to fully understand the progression of these diseases we ought to have a baseline understanding of how the brain’s circuitry changes during healthy aging, both in terms of physiological properties and functional performance. This would help dissociate disease-related changes from those arising during normal aging, and thereby allow researchers to focus their attention on treating potential causes of the disease progression not directly related to normal aging. On the other hand, understanding how the healthy brain ages may enable us to treat declines in performance caused solely by aging, in both healthy subjects and those with neurological disorders or diseases.

In this work we seek to advance our understanding of potential mechanisms and consequences of age-induced changes in physiology and functional performance in visual cortex. We do so by using a V1-like network model to qualitatively replicate experimentally-observed differences in the physiology and function of neurons in the visual cortex of young and old cats. In particular, we are motivated by experimental work that finds that senescent brain tissue in cat V1 shows increased spontaneous firing [34], decreased GABAergic inhibition [35, 36], and decreased selectivity to the orientation of grating stimuli [34].

The logic of our approach is to mimic the experimental conditions by creating a V1-like spiking network model that we “age” in such a way that it develops the physiological changes observed in the experiments of [34–36]. Then, we can test the sensitivity of neurons to oriented grating stimuli at each age of our network to observe how neural selectivity changes as the networks become increasingly senescent—the strength of the orientation selectivity of the neurons will serve as our assessment of the functional performance of the network in this model. Importantly, because we model the aging process, rather than constructing two discrete “young” and “old” networks, we will not only qualitatively replicate the experimental findings in young and elderly cats, but will also probe the model at intermediate ages to understand changes in physiology and function throughout aging.

In order to model the experimental results we use E-I Net, a previously-developed spiking network model of mammalian V1 [37], as the basis for our study. The network structure of E-I Net is learned by training the model on pixel images of natural scenes, mimicking the early developmental processes [38] in which neurons develop Gabor-like receptive fields measured experimentally in the visual cortex of mammals [39]. Similarly, the recurrent connections develop such that neurons with similar receptive fields effectively inhibit one another through intermediary inhibitory neurons. The standard E-I Net model thus contains the important V1-like features we seek in a network model, and we use it to represent the young, mature network. In order to “age” the network we modify and extend the training procedure to create the aging-like process that qualitatively reproduces the experimental results of [34, 35], in addition to making predictions for other features of the aging process. Importantly, we show that the only modification we need to make to obtain a process that results in aged network phenotypes is a change to the training procedure that will promote increased firing activity within the network with age—the other physiological and functional performance changes emerge from this single change. We argue that this change can be interpreted as a breakdown in homeostatic-like mechanisms controlling the network’s target firing rate, leading to the increases in excitation, decreases in synaptic strength, and declines in orientation selectivity that mimic the experimentally observed changes. Although there may be other mechanisms that can replicate these experimental findings, our model makes several predictions for other yet-to-be-observed changes in physiology and functional performance that can potentially be used to further test our model and the interpretation of our results.

This paper proceeds as follows: in Results we first give a brief overview of E-I Net and the modifications we made to the model to implement an aging-like phase of the training dynamics. We then present our findings on the physiological changes that occurred in our network during this aging phase—i.e., the changes to receptive fields, lateral connections, and firing thresholds—followed by analyses of the changes in functional performance of the network, which we quantify by the changes in the selectivity of the neurons to oriented gratings. To gain a deeper understanding of how each set of our network parameters contributes to functional performance changes during the aging process, we perform a number of numerical experiments. In particular, i) we study how orientation selectivity evolves when one or more parameter sets are frozen to their young values and ii) we disambiguate whether receptive field structure or receptive field magnitude contributes more to the declines in orientation selectivity. Finally, in the Discussion we interpret our results and their relationship to recent work, the limitations of our approach, how our model’s predictions might be tested experimentally, and possible implications for understanding normal and pathological declines in neural performance with age.

Results

Modified E-I Net as a model of aging in V1-like networks

We begin by briefly reviewing E-I Net [37], the spiking network model of visual cortex we use in this work, and outlining how we modify this model to implement the aging-like process we use to replicate the experimental findings of [34, 35]. E-I Net comprises a population of recurrently-connected excitatory and inhibitory spiking neurons that receive external inputs driven by visual stimuli. The subthreshold dynamics of the neurons’ membrane potentials obey

τi dvi(t)/dt = −vi(t) + ∑k Qik Xk + ∑j Wij ∑s δ(t − tj(s)), (1)

where vi(t) is the membrane potential of neuron i at time t and the membrane time constant τi is τE (τI) if neuron i is excitatory (inhibitory). Neuron i fires a spike when vi(t) ≥ θi and is then reset to vi(t) = 0. As shown schematically in Fig 1, each of the N neurons receives visual input in the form of pixel images X from natural scenes; in this work, these scenes are taken from frames of movies in the CatCam database [40, 41]. The input current generated by these visual scenes, appearing in Eq (1) as ∑k Qik Xk, is determined by the intensity Xk of the kth pixel of the input patch X and the strengths Qik of the input connections from the kth pixel to neuron i. Neurons are connected by recurrent synaptic connections Wij, representing the amount of charge transferred when pre-synaptic neuron j transmits a spike to post-synaptic neuron i. While the individual Wij are distinct for all pairs ij, there are three classes of recurrent weights: inhibitory-to-excitatory weights WEI, excitatory-to-inhibitory weights WIE, and inhibitory-to-inhibitory weights WII; as in the original E-I Net model we assume excitatory-to-excitatory weights WEE are rare and may be neglected. The time at which the sth spike is emitted by neuron j is denoted tj(s) in Eq (1). In this paper we take Wij to be positive (negative) if the pre-synaptic neuron j is excitatory (inhibitory), in contrast to the original E-I Net paper, which makes the sign explicit such that Wij is non-negative for any cell type.
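As a concrete illustration, the leaky integrate-and-fire dynamics of Eq (1) can be sketched with forward-Euler updates. The function below is a minimal stand-in, not E-I Net's actual implementation, and the parameter values in the usage example are arbitrary:

```python
import numpy as np

def simulate_lif(Q, W, X, theta, tau, dt=0.1, n_steps=50):
    """Euler integration of the subthreshold dynamics in Eq (1), with
    threshold crossing and reset to v = 0. Returns per-neuron spike counts.
    Shapes: Q (N, n_pixels), W (N, N), X (n_pixels,), theta (N,), tau (N,)."""
    N = Q.shape[0]
    v = np.zeros(N)
    spikes = np.zeros(N, dtype=int)
    feedforward = Q @ X  # input current from the image patch, sum_k Q_ik X_k
    for _ in range(n_steps):
        fired = v >= theta                   # neurons that crossed threshold
        spikes += fired
        v = np.where(fired, 0.0, v)          # reset neurons that spiked
        recurrent = W @ fired.astype(float)  # charge from presynaptic spikes
        v = v + (dt / tau) * (-v + feedforward) + recurrent
    return spikes

# Toy usage: one strongly driven neuron and one silent neuron, no recurrence.
Q = np.array([[1.0], [0.0]])
spikes = simulate_lif(Q, W=np.zeros((2, 2)), X=np.array([1.0]),
                      theta=np.array([0.5, 0.5]), tau=np.array([1.0, 1.0]))
```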

Fig 1. Schematic of the network model.

We use E-I Net [37] as our model of visual cortex. A. E-I Net comprises a network of leaky integrate-and-fire neurons that receive visual input in the form of randomly-drawn patches from pixel images (X, highlighted region of natural scene), communicated to the neurons through input weights Q. Each neuron belongs to either the excitatory or inhibitory population. Neurons are synaptically connected by lateral weights W that obey Dale’s law, such that excitatory-to-inhibitory weights WIE are positive, while inhibitory-to-excitatory (WEI) and inhibitory-to-inhibitory (WII) weights are negative. Excitatory-to-excitatory connections are assumed to be negligible and are not included in this model. The input weights Q, lateral weights W, and firing threshold of each neuron θ (not depicted) are learned by training the network on natural pixel images. B. The input stimulus to the network, X, comprises a grid of pixels (4 × 4 shown here; 8 × 8 used in practice). The kth pixel generates a current in the ith neuron equal to the intensity of the pixel multiplied by a weight Qik, for a total current into neuron i of ∑k Qik Xk. C. The input weights Q can be arranged spatially by ordering them the same way as the pixel-input patches. This representation can be interpreted as neuron i’s receptive field.

https://doi.org/10.1371/journal.pcbi.1008620.g001

These parameters develop according to synaptic and homeostatic learning rules inspired by sparse coding theory and previously established rules such as Oja’s Hebbian plasticity rule [42] and Foldiak’s rule [43]. These learning rules are not specific to any particular species, and therefore the network is taken to be a general model of mammalian visual cortex. The learning rules are designed to enforce linear decoding of the image patches from the spike rates (Eq (2)), minimal correlation in spiking (Eq (3)), and sparseness in firing (Eq (4)):

ΔQik = α yi (Xk − yi Qik), (2)
Δ|Wij| = βij (yi yj − 〈yi〉〈yj〉), (3)
Δθi = γ (yi − pi(tloop)), (4)

where yi is the spiking rate of neuron i in response to the presented image patch over a duration T, 〈yi〉 is the long-time mean of the spike rate, and |Wij| denotes the magnitude of the lateral weight, whose sign is fixed by the pre-synaptic cell type. The parameters α, βij, and γ are the learning rates, which set how quickly the network parameter updates converge to a steady state. The synaptic learning rates βij ∈ {βEI, βIE, βII} depend on whether the neurons connected by the synapse are excitatory or inhibitory. The quantity pi(tloop) is the expected number of spikes emitted per time-unit at each “age” of the network tloop, quantified by the number of training loops that have taken place; one loop is 50 different image presentations, each for 50 time-steps. Sparsity in firing is enforced by choosing pi(tloop) ≪ 1. In the original E-I Net model pi(tloop) is a constant value pE (pI) if neuron i is excitatory (inhibitory), independent of the network age.
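The three learning rules can be sketched as vectorized updates. Treating the lateral-weight rule as acting on weight magnitudes, with Dale's-law signs preserved via `np.sign`, is our reading of the text rather than a line-for-line copy of E-I Net:

```python
import numpy as np

def update_parameters(Q, W, theta, X, y, y_mean, p, alpha, beta, gamma):
    """One pass of the Eq (2)-(4) learning rules, sketched. `y` holds spike
    rates for the current image patch, `y_mean` running mean rates, and `p`
    the target rates p_i(t_loop)."""
    # Eq (2): Oja-like rule driving linear decoding of X from spike rates
    Q_new = Q + alpha * y[:, None] * (X[None, :] - y[:, None] * Q)
    # Eq (3): Foldiak-like rule suppressing correlations above chance level
    dW_mag = beta * (np.outer(y, y) - np.outer(y_mean, y_mean))
    W_new = W + np.sign(W) * dW_mag  # update magnitudes, keep Dale's-law signs
    np.fill_diagonal(W_new, 0.0)     # no self-connections
    # Eq (4): homeostatic threshold rule pushing rates toward the target p
    theta_new = theta + gamma * (y - p)
    return Q_new, W_new, theta_new

# Toy usage with two neurons and a three-pixel patch.
Qn, Wn, tn = update_parameters(
    Q=np.zeros((2, 3)), W=np.array([[0.0, -1.0], [1.0, 0.0]]),
    theta=np.zeros(2), X=np.ones(3), y=np.array([1.0, 0.0]),
    y_mean=np.array([0.1, 0.1]), p=np.array([0.5, 0.5]),
    alpha=0.1, beta=0.1, gamma=0.1)
```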

In the original E-I Net model the learned network parameters Qik, Wij, and θi achieve a steady-state network structure in which the pixel-space arrangements of input weights Q resemble Gabor-like receptive fields typical of networks trained on natural scenes [39], as shown schematically in Fig 1C. The learned lateral weights are organized such that excitatory neurons with similar receptive fields effectively inhibit one another (see [39] and S1 Text). To study the effect of aging, and in particular how the network structure changes with age, we modify the target firing rate pE(tloop) in Eq (4), choosing it to be (5) where pE(0) is the expected number of spikes per time-unit for tloop ≤ 30 loops. That is, after the network has achieved its steady state (which occurs by 30 training loops) the target spike rate begins to increase with each loop. This change is designed to increase the spiking activity of the excitatory neurons in order to mimic the increased activity of neurons in aged cat brain tissue observed in [34]. Physiologically, this change may be interpreted as arising from a breakdown in homeostatic mechanisms that regulate the target spike rate, which we do not model explicitly (see Discussion).

As pE(tloop) increases, the network parameters will begin to depart from the previously stabilized values, and we can track their evolution with training time, or “age.” Moreover, at each of these ages we can assess both the physiological changes in Qik, Wij, and θi, as well as the functional performance of the network in response to different tasks. As outlined in the introduction, we seek to mimic the experimental conditions of [34], which measured the selectivity of young and elderly cat neurons to oriented grating stimuli and observed decreased selectivity in elderly cats. We therefore present patches of visual input from oriented grating images to our “aged” network models and quantify the selectivity of neurons to these edges.

We train our networks for an additional 50 loops, giving a total network age of 80 loops. The target spike rate pE(tloop) increases on every loop, and we record the aged set of parameters {Q, W, θ} and test the sensitivity to gratings every 5 loops. We report on the physiological changes during the aging process next (Physiological parameter changes), followed by the functional performance changes during the oriented grating tasks (Functional performance changes), and finally an additional set of numerical experiments that we run to further probe how the different network parameters contributed to the observed changes in physiology and functional performance (Numerical experiments to test contributions of different physiological parameters to declines in functional performance).
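The role of Eq (5) can be sketched as an age-dependent target-rate schedule. Since the published functional form is not reproduced here, the linear ramp and the numerical values below are hypothetical stand-ins that merely satisfy the stated constraints (constant until loop 30, increasing on every loop thereafter):

```python
def target_rate(t_loop, p_E0=0.05, mature_age=30, ramp=0.02):
    """Excitatory target spike rate p_E as a function of network age
    (training loops): constant until maturity at `mature_age` loops, then
    increasing on every subsequent loop. The linear form and the values of
    p_E0 and ramp are illustrative, not the paper's published schedule."""
    if t_loop <= mature_age:
        return p_E0
    return p_E0 * (1.0 + ramp * (t_loop - mature_age))
```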

Physiological parameter changes

Fig 2 displays the distributions of the learned physiological parameters in “(mature) youth” (30 training loops) compared to “old age” (80 training loops); see also S1 Fig for the histograms at intermediate ages and S2 Fig for results obtained in networks trained on different movies. We find the range of magnitudes of the synaptic and input weights decreases with age, while the range of the firing thresholds expands by an order of magnitude, from 0.46 to 9.2. Note that the network model is formulated in dimensionless units, so the parameter ranges do not correspond directly to dimensionful quantities measured in experiments.

Fig 2. Distributions of physiological network parameters in youth vs. old age.

We plot histograms of the magnitudes of network parameters learned during training on natural images: the input weights Q, lateral weights W, and firing thresholds θ. Yellow histograms correspond to the distributions of these parameters in the mature young network, the steady state of network development (reached by 30 training loops). After this age, the excitatory target firing rate begins to increase, causing a rearrangement of the network that we interpret as an aging-like process. The blue histograms are the distributions of network parameters after 80 training loops. Aging tends to decrease the magnitude of input and lateral weights, while it widens the distribution of firing thresholds (in young age, all the thresholds belong to a narrow range, relative to the old-age thresholds, and the bins are accordingly very narrow). A decrease in inhibitory weights is consistent with experimental work of [35, 36] and decreases in excitatory weights are consistent with experimental studies across several brain areas and species [5, 6, 8]. All quantities are expressed in dimensionless units.

https://doi.org/10.1371/journal.pcbi.1008620.g002

In addition to changes in magnitude, the input weights also undergo a change in their spatial organization in pixel space—i.e., the receptive field of the neuron is not preserved with aging. Fig 3A shows the receptive field of a particular neuron with a Gabor-like receptive field (RF) at age 30 (top) and a random-looking RF at age 80 (bottom). If we “vectorize” the receptive fields by rearranging the 8 × 8 pixel grids into 1 × 64 vectors, then we can quantify the “distance” between young and old receptive fields in pixel-space by computing the angular separation φi between the vectorized young and older receptive fields Qyoung and Qold for each neuron i:

φi = cos⁻¹( Qyoung · Qold / (‖Qyoung‖ ‖Qold‖) ). (6)

Fig 3. Angle between receptive fields in the input weight parameter space.

A. A particular neuron’s receptive field (RF) in youth versus old age, demonstrating that features of a neuron’s RF may not be preserved in our model’s aging process. B. Violin plot of the distribution of angles between each individual excitatory neuron’s young and older vectorized receptive fields (RFs) (Eq 6). As our network ages a neuron’s RF becomes increasingly orthogonal to its mature young RF, on average, indicated by the median of the RF distributions tending to approximately π/2. The colors of the violins represent the age indicated on the horizontal axis.

https://doi.org/10.1371/journal.pcbi.1008620.g003

This angle is 0 if the old RF is identical to the young RF and π/2 if the young and old RFs are orthogonal. We find that the mean angle between vectorized RFs steadily increases with age, to the point that at the oldest age of our network the mean angle between a neuron’s young and old receptive fields tends to π/2, i.e., on average the old-age RFs of a population of neurons become orthogonal to the RFs in youth, as shown in Fig 3B (see also S3 Fig for angle distributions obtained in networks trained on different movies). Orthogonality in high dimensions can be difficult to interpret, as random vectors in high-dimensional spaces are nearly orthogonal with high probability [44]. To gain some intuition into this result, we also compute the angles between the young (age 30) vectorized RFs and a random shuffling of this same set of RFs, i.e., the angular separations between random pairs of young RFs. This shuffling produces a distribution of angles that is statistically indistinguishable from the distribution of angles between the young receptive fields and their age-80 counterparts (final violin in Fig 3B), as evaluated by a two-sample Kolmogorov–Smirnov test (p-value of 0.36; failure to reject the null hypothesis that the empirical cumulative distribution functions come from the same distribution).
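The angle computation of Eq (6) and the shuffle control can be sketched as follows, using random stand-in receptive fields rather than trained weights:

```python
import numpy as np

def rf_angles(Q_a, Q_b):
    """Angular separation (Eq 6) between corresponding rows of two sets of
    vectorized receptive fields, each of shape (N, n_pixels)."""
    dots = np.sum(Q_a * Q_b, axis=1)
    norms = np.linalg.norm(Q_a, axis=1) * np.linalg.norm(Q_b, axis=1)
    return np.arccos(np.clip(dots / norms, -1.0, 1.0))

# Shuffle control from the text: angles between "young" RFs and a random
# permutation of the same set. With 8x8 = 64-pixel RFs, random pairs are
# nearly orthogonal, so the median angle sits near pi/2.
rng = np.random.default_rng(0)
Q_young = rng.standard_normal((400, 64))  # stand-in RFs, not trained weights
shuffled_angles = rf_angles(Q_young, Q_young[rng.permutation(400)])
```

In an analysis of the trained network, the rows of `Q_young` and `Q_old` would be the learned input weights at ages 30 and 80, and the two angle distributions could then be compared with a two-sample Kolmogorov–Smirnov test (e.g., `scipy.stats.ks_2samp`).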

This result suggests that the aging process could simply be shuffling the RFs over time: no individual neuron’s receptive field is preserved, but across the network the distribution does not change significantly. To show that this is not the case, we classify the receptive fields of excitatory neurons as either “Gabor-like,” showing characteristics of Gabor edge detectors, or “not-Gabor,” and show that the numbers of neurons in these categories change with age. The results are given in Table 1. As some RFs categorized as not-Gabor in youth become Gabor-like with age, we also record transitions between types (bottom subtable in Table 1). We assess Gabor-ness by fitting a spatial Gabor profile to each neuron’s receptive field and classify it as Gabor-like if the goodness-of-fit relative to the RF magnitude is sufficiently small; see Methods and [39]. We use a conservative goodness-of-fit criterion; nonetheless, under this criterion close to a third of the excitatory RFs are classified as Gabor-like in youth, while this fraction drops to nearly 1-in-20 in our oldest network, demonstrating that the aged networks undergo a net loss of neurons with Gabor-like RFs. Given the edge-detecting properties of Gabor-like receptive fields, we thus anticipate this loss plays a prominent role in the degradation of the sensitivity of the network to oriented grating stimuli, discussed in the next section.
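The classification step can be sketched as below; the Gabor parameterization and relative-residual threshold are illustrative choices, and the nonlinear fit itself is assumed to be done elsewhere (e.g., by least squares):

```python
import numpy as np

def gabor(params, xx, yy):
    """Spatial Gabor profile: a sinusoid under an isotropic Gaussian envelope.
    This parameterization is illustrative; see Methods of the paper and [39]."""
    A, x0, y0, sigma, theta, freq, phase = params
    xr = (xx - x0) * np.cos(theta) + (yy - y0) * np.sin(theta)
    envelope = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    return A * envelope * np.cos(2 * np.pi * freq * xr + phase)

def is_gabor_like(rf, fitted_params, rel_tol=0.5):
    """Classify an RF as Gabor-like when the residual of a fitted Gabor,
    relative to the RF's magnitude, is small. `rel_tol` is a placeholder
    threshold, not the paper's criterion."""
    yy, xx = np.indices(rf.shape)
    residual = np.linalg.norm(rf - gabor(fitted_params, xx, yy))
    return bool(residual / np.linalg.norm(rf) < rel_tol)

# Toy usage: an exact Gabor passes; white noise does not.
params = (1.0, 3.5, 3.5, 1.5, 0.7, 0.125, 0.0)
yy, xx = np.indices((8, 8))
rf_gabor = gabor(params, xx, yy)
rf_noise = np.random.default_rng(1).standard_normal((8, 8))
```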

Table 1. Quantification of Gabor-like receptive fields.

We quantify whether each excitatory neuron in our network model has a Gabor-like receptive field or not, in both our mature young network (30 loops) and the oldest network (80 loops). We assess Gabor-ness by fitting a spatial Gabor profile to each neuron’s receptive field and classify it as Gabor-like if the goodness-of-fit relative to the RF magnitude is sufficiently small; see Methods and [39]. In the bottom subtable, the numerator in each fraction corresponds to classification in old age and the denominator corresponds to classification in youth. For example, of the 123 excitatory neurons with Gabor-like RFs in youth, only 6 retain Gabor-like RFs in old age, while of the 400 − 123 = 277 neurons that are not Gabor-like (“not-Gabor”) in youth we find that 257 of those neurons are not-Gabor in old age.

https://doi.org/10.1371/journal.pcbi.1008620.t001

Functional performance changes

As a first basic check of network performance, we analyze the distribution of spike counts over image presentations. The single change we made to the network—increasing the excitatory target firing rate pE with training—is intended to produce increased neuron activity with age, as observed in experiments [34]. Indeed, as shown in Fig 4A, we find that not only do the mean excitatory spike counts increase with age, but the spread of the distribution does as well, confirming that our network exhibits growing excitation during the aging process. We do not increase the target firing rate pI of the inhibitory neurons, and as a result neither the mean spike counts nor variance of the spike count distribution of these neurons increase with age (Fig 4B). See also S4 Fig for results obtained in networks trained on different movies.

Fig 4. Activity of the network with age.

A. Violin plot of the excitatory spike count distributions with age. Markers indicate the median. Both median and variance of the distributions increase with age, indicating increased excitation in the network, as expected. B. Violin plot of the inhibitory spike count distributions. The inhibitory target spike rate does not increase with age, and thus the distributions do not change systematically during aging. The colors of the violins represent the age indicated on the horizontal axis.

https://doi.org/10.1371/journal.pcbi.1008620.g004

To evaluate functional performance at each age, quantified by the neurons’ orientation selectivity, every 5 training loops we pause training and simulate the network’s responses to patches drawn from pixel images of oriented gratings of spatial frequency 0.1 cycles per pixel, such that one 8 × 8 patch contains approximately 1 cycle. We quantify a neuron’s selectivity to each orientation using the same orientation selectivity index employed by Hua et al. [34] and others [45, 46]. As seen in Fig 5, the cumulative distribution plots of the orientation indices (the fraction of cells with indices less than a particular value) qualitatively agree with the experimental observations of [34] in young and old cats: selectivity decreases with age, demonstrated by the shift of the cumulative plots toward lower orientation indices. However, our model results also suggest this process may not be a straightforward monotonic degradation of selectivity distribution with age. In the first several epochs of aging our network we find that the mean selectivity of the network changes slowly, with changes in weakly- and strongly-selective neurons roughly balancing out (ages 30–50, model results in Fig 5B), and only later in age does the network begin to more quickly lose strongly-selective neurons, a prediction that could be tested experimentally in future studies. See also S5 Fig for results obtained in networks trained on different movies.
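A common form of the orientation selectivity index (the vector strength on the doubled angle, equivalent to one minus the circular variance) can be computed as below; the paper follows the index of Hua et al. [34], which may differ in detail:

```python
import numpy as np

def orientation_selectivity_index(rates, thetas):
    """Orientation selectivity from mean responses `rates` to gratings at
    orientations `thetas` (radians). Angles are doubled because orientation
    is periodic in pi rather than 2*pi. Returns 0 for an unselective cell
    and 1 for a perfectly selective one."""
    rates = np.asarray(rates, dtype=float)
    vector_sum = np.sum(rates * np.exp(2j * np.asarray(thetas)))
    return float(np.abs(vector_sum) / np.sum(rates))

# Toy usage: four evenly spaced orientations.
thetas = np.array([0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4])
osi_sharp = orientation_selectivity_index([1.0, 0.0, 0.0, 0.0], thetas)
osi_flat = orientation_selectivity_index([1.0, 1.0, 1.0, 1.0], thetas)
```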

Fig 5. Selectivity of cells to oriented gratings.

A. Schematic of oriented gratings presented to the network to evaluate selectivity to oriented features in visual input. B. Cumulative distribution plots of orientation selectivity indices of real neurons from Hua et al. [34] in young (yellow) and elderly (blue) cats, compared to our model results for a series of different “ages” of our network model. In both cases orientation selectivity decreases with age.

https://doi.org/10.1371/journal.pcbi.1008620.g005

Numerical experiments to test contributions of different physiological parameters to declines in functional performance

Given the variety of changes our model network undergoes during aging, can we identify which physiological changes contribute most to the observed changes in functional performance? We certainly expect the degradation of the Gabor-like structure of the receptive fields to play a strong role, at least later in the aging process when the RFs are no longer Gabor-like, as in the absence of these edge-detector filters in the network we naturally expect selectivity to drop. However, before the RF structure has completely degraded, we may also expect lateral weights to play a role, as several theoretical and experimental studies suggest that neurons with similar response properties will effectively mutually inhibit one another and thereby sharpen the selectivity of their responses [47–53]. Thus, a loss of inhibition could also contribute significantly to decreased orientation selectivity at intermediate older ages. To probe which parameter set dominates the contribution to the loss of orientation selectivity as a function of age, we performed several numerical experiments on our networks.

In our first set of simulation experiments, we age our network while shutting off the learning rule for one or more of the physiological parameters—freezing these parameters to their young values. In this way we can assess which parameter set, or combination of sets, has the strongest influence on preserving orientation selectivity. We perform three such tests: i) we freeze the input weight matrices Q, ii) we freeze the lateral weight matrices W (for both excitatory and inhibitory neurons), and iii) we freeze both Q and W. Note that we do not freeze the thresholds to their young values because our network aging process is driven by the increasing excitatory target spike rate pE(tloop), which only appears in the threshold learning rule. Freezing this learning rule would therefore preclude aging in our network.

We present the results of these tests in Fig 6, in which we plot the average selectivity as a function of age for each case. We see that both cases in which the input weights Q are frozen retain a relatively high mean orientation selectivity across ages, comparable to the baseline network in the early stages of aging and exceeding it in old age. When only the lateral weights are frozen and the input weights are allowed to adapt to the increasing pE(tloop), the mean selectivity stays close to its youthful value for most of the aging process before beginning to drop at the oldest ages we simulated. These results suggest that preserving either input weights or lateral weights may be sufficient to maintain the functional performance of the network, though the steepening slope of the frozen-W curves compared to the flat slope of the frozen-Q and frozen-Q-and-W curves in advanced age suggests that in the long run preserving input weights may be more important than preserving lateral weights for maintaining the functional performance of the network. The relative importance of input weights over lateral weights is further supported by a supplementary analysis in S1 Text; see also S6 Fig for results obtained in networks trained on different movies.
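The freezing experiments amount to simple bookkeeping in the training loop, sketched here with hypothetical parameter names:

```python
def apply_updates(params, updates, freeze=frozenset()):
    """Apply one round of learning-rule updates while skipping any parameter
    set named in `freeze` ('Q', 'W', or 'theta'), mimicking the freezing
    experiments. `params` and `updates` are dicts of like-shaped values; this
    schematic shows only the bookkeeping, with the updates themselves assumed
    to come from the learning rules."""
    return {name: (value if name in freeze else value + updates[name])
            for name, value in params.items()}

# Toy usage with scalar stand-ins for the parameter arrays.
params = {"Q": 1.0, "W": 2.0, "theta": 3.0}
updates = {"Q": 0.5, "W": 0.5, "theta": 0.5}
frozen_out = apply_updates(params, updates, freeze={"Q", "W"})
```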

Fig 6. Selectivity of cells when connection weights are frozen during training.

We test how network parameters contribute to orientation selectivity by shutting off learning of the input weights or lateral weights during the aging process. When only input weights Q are frozen to their young values, or both Q and W are frozen to their young values, the selectivity is only modestly impacted. However, when lateral weights are frozen to their young values, the selectivity remains relatively high for most ages until it begins to drop off near the oldest ages in our simulations.

https://doi.org/10.1371/journal.pcbi.1008620.g006

These tests so far demonstrate that input weight changes may be the dominant contributor to the decline in selectivity, but do not disambiguate the contribution of changes in input weight magnitude from the contribution of changes in the spatial organization (the receptive field structure). As observed in Figs 2 and 3, we know that both the receptive field structure and the distribution of input weight magnitudes are altered during our aging process. With our network model we can separate the effects of input weight magnitude versus structure to test whether one of these features contributes more to the changes in neural selectivity. We expect, for instance, that the loss of Gabor-like receptive field structure will result in diminished orientation selectivity. We test sensitivity to input weight structure and strength in two ways: i) we preserve each individual neuron’s young receptive field structure while remapping the input weight magnitudes such that the distribution matches that of the old-age input weights; ii) we preserve each individual neuron’s aged receptive field structure while remapping the input weight magnitudes to match the young input weight distribution. We find, as seen in Fig 7, that restoring the young RF structure gives a boost to the mean orientation selectivity at later ages. However, we also find that restoring the young magnitudes yields a greater improvement in selectivity across all ages, but is particularly effective in the later stages of aging. See also S7 Fig for results obtained in networks trained on different movies.
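One plausible way to realize such a remapping, sketched below, is rank matching: keep each pixel's sign and magnitude rank from the structure-donor RF while drawing the actual magnitudes from the other RF's distribution. This is our illustration, not necessarily the paper's exact procedure:

```python
import numpy as np

def remap_magnitudes(structure_rf, magnitude_rf):
    """Build a hybrid receptive field carrying the spatial structure (signs
    and rank order of pixel magnitudes) of `structure_rf` and the magnitude
    distribution of `magnitude_rf`."""
    flat = structure_rf.ravel()
    order = np.argsort(np.abs(flat))               # pixels ranked by |weight|
    donor = np.sort(np.abs(magnitude_rf.ravel()))  # donor magnitudes, ascending
    hybrid = np.empty_like(flat)
    hybrid[order] = np.sign(flat[order]) * donor   # match ranks, keep signs
    return hybrid.reshape(structure_rf.shape)

# Toy usage: young structure (signs, ranks) with old magnitude distribution.
structure = np.array([[1.0, -2.0], [3.0, -4.0]])
magnitudes = np.array([[40.0, 10.0], [20.0, 30.0]])
hybrid = remap_magnitudes(structure, magnitudes)
```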

Fig 7. Selectivity of cells with remapped receptive fields.

To disambiguate the relative impact that aging receptive field (RF) magnitude versus spatial structure has on orientation selectivity, we remap the magnitudes (input weight values) of each neuron’s RF to create two types of RFs: RFs with the original spatial structure they have in the young network but a magnitude distribution matching the aged distribution, and RFs with the aged spatial structure but young magnitude distribution. (See Methods for details). We create these hybrid RFs for each age and simulate the network model to measure the orientation selectivity of the neuron population. We find that restoring the young RF spatial structure improves selectivity in the late stages of aging, outperforming the baseline network (no remapping) as aging progresses, while the old RF spatial structure combined with young RF magnitudes maintains an overall higher level of selectivity over the entire age range.

https://doi.org/10.1371/journal.pcbi.1008620.g007

Discussion

We have demonstrated that if the target spike rate pE(tloop) in the learning rule for neural thresholds begins to increase after the network has reached maturity, the resulting training process produces networks that qualitatively replicate features observed in senescent brain tissue. The training period of E-I Net’s predecessor, SAILnet [39], has previously been used to model the experiential development of visual cortex in growing ferrets [38], and our work continues this line of thinking by extending the interpretation of the modified training dynamics to aging processes. Though we do not model the underlying mechanism that causes pE(tloop) to drift, we interpret this change as arising from a disruption in the homeostatic regulation of the target excitatory spiking rate. Coupled with ongoing synaptic plasticity, this drift of pE(tloop) reorganizes the physiological properties of the network in a multitude of ways. The first is an increase in excitation (Fig 4); this is the intended consequence of altering pE(tloop) to increase with continued training. All remaining changes in network physiology, including the weakening of synaptic weights, the broadening of the firing threshold distribution (Fig 2), and the deformation of the neural receptive fields (Fig 3), follow from this single modification of the network learning rules. These physiological changes result in declines in the functional performance of the network, such as reduced sensitivity to oriented features in visual input (Fig 5). Moreover, with access to all our model parameters, we can show that the declines in selectivity are primarily due to changes in input weight structure and strength (the receptive fields), rather than changes in lateral inhibition (Figs 6, 7, and S6; see also S1 Text). These changes are consistent with experimental studies probing aging visual cortex, suggesting that such firing rate dysregulation is a potential mechanism underlying senescence in neural circuitry.

To support this assessment of our results, below we compare our results to current experimental observations in detail and make predictions for future experiments that could lend more evidence toward or against this hypothesized mechanism (Comparison to experimental results and implications for future experiments), and then we discuss the limitations of the model; i.e., which features we expect to be reasonably realistic, albeit simplified, and which features of experimental observations we were not able to replicate with the current model (Limitations of the model), before giving a final summary and conclusion (Summary of results and implications for disease).

Comparison to experimental results and implications for future experiments

Our investigation was originally motivated by several experimental results in visual cortex: i) spontaneous and stimulus-evoked neural activity increases with age [34], ii) the proportion of GABAergic neurons in senescent brain tissue decreases with age (but total neuron density does not) [35, 36], and iii) aged neurons exhibit decreased orientation and direction selectivity [34].

It is not clear a priori what the causal relationships are between these observations; most likely, there is an underlying common cause, rather than several (approximately) independent causes leading to these changes. Intuitively, we might expect that decreases in GABAergic inhibition could lead to increased firing in the network. We explored this hypothesis in an early version of this work by training the network model to maturity, turning off learning, and then decreasing the magnitude of the inhibitory connection weights in the network by a global fraction. We were able to reproduce the loss in functional performance only when we added a tonic current to drive the neurons, and only for particular ranges of this tonic current and of the global reduction in the inhibitory weights. However, as the changes to the inhibitory weights in this early version of the model were implemented by hand, and not by changing the learning rules, they provided no plausible mechanism for explaining how the network aged. Moreover, because learning was turned off in this approach, it did not allow for the possibility that other network parameters might adjust to compensate.

We ultimately developed the hypothesis pursued in this work, that the causal relationship could be that a breakdown in homeostatic regulation of target excitatory firing, coupled with long-term synaptic plasticity, would lead to changes in synaptic strength and other physiological features, as well as changes in functional performance. Therefore, the fundamental change we made to the healthy mature network was intended to produce increased activity by steadily bringing up the excitatory target spike rate as training progressed past the phase at which the network parameters had reached a steady state. All other results follow from the coupling between this change and the learning dynamics of the model parameters. As expected, the mean firing rate of excitatory neurons increased with network age, as did variance of firing, as shown in Fig 4.

The consistency of the observed decreases in synaptic weight magnitudes in our model (Fig 2) with experimental observations in visual areas is more subtle and open to interpretation. The direct experimental observation in [35, 36] is that the density of GABAergic neurons measured by GABA-immunoreactive staining decreased in older subjects compared to younger subjects; however, the total density of neurons (measured by Nissl staining) did not decrease. These findings are consistent with other work [54], which finds minimal neuron loss with age and notes that genes involved in GABA-mediated neurotransmission are strongly down-regulated during aging in human and macaque pre-frontal cortex. Thus, the observed age-related changes are likely caused not by a loss of GABAergic neurons through cell death, but by a drop in GABA expression itself. In a conductance-based leaky-integrate-and-fire model this could be interpreted as a decrease in the maximum fraction of GABAergic receptors activated by a pre-synaptic spike; as E-I Net is a current-based leaky-integrate-and-fire model, the closest equivalent effect is a change in synaptic weight. Because the reversal potential of GABAergic synapses is typically much larger in magnitude than the membrane potential, it is often valid to approximate inhibitory conductance-based synapses as current-based synapses [55]. Therefore, we interpret the observed decrease in the range of inhibitory input and lateral synaptic weights in our model to be consistent with the experimental observations.

In addition to the observed decreases in GABAergic neurons [35], experimental work studying post-mortem tissue from human visual cortex [8] has also found evidence that the density of several types of glutamatergic receptors tends to decrease in old age, a trend which also appears to hold across several species and brain areas, with some exceptions [5, 6]. We interpret these decreases in glutamate concentration and/or receptor density as being consistent with the observed decrease in excitatory synaptic weights in our model (Fig 2). We were not aware of these results during the original formulation of our model, so the qualitative agreement between these studies and our model’s decrease in excitatory weights in addition to inhibitory weights may be viewed as a post-diction of our model.

As with the inhibitory synapses, there is no perfect translation of experimental observations on excitatory neurotransmission into expected changes in our current-based leaky-integrate-and-fire model, but it is reasonable to interpret a decrease in receptor count or neurotransmitter concentration as an effective decrease in synaptic conductance, which approximately maps onto the synaptic weights Wij of our model when the voltage-dependence of the conductance is neglected. Some additional caveats apply to this approximate mapping for excitatory neurons compared to inhibitory neurons, as there are several principal excitatory neurotransmitters to consider, each of which could change in different ways with age. The most consistent finding in [5] was a decrease in the density of NMDA receptors with age across several brain regions. Another caveat in our conductance-to-current mapping is that the true conductance-based nature of excitatory synapses could become relevant: excitatory reversal potentials are not significantly larger than membrane potentials, so a current-based approximation would not fully capture age-related changes in excitatory synapses if those changes caused membrane potentials to cross the excitatory synaptic reversal potentials. However, to the extent that a current-based integrate-and-fire model can qualitatively capture these changes, we believe it is reasonable to interpret our results as replications of these experimental findings.

One potentially interesting feature of the synaptic weight distributions is the apparent gap in inhibitory-inhibitory synaptic weights observed in Fig 2, suggesting a few connections between inhibitory neurons remain relatively strong. The most likely explanation for this observation is that a few pairs of neurons happened to develop very similar receptive fields, leading to strong inhibitory connections between them, as neurons with similar receptive fields tend to develop stronger mutual inhibition [39]. Indeed, we find the similarity φij (Eq (6), modified to compare the receptive fields of different neurons in old age, rather than a single neuron’s young and old RFs) between the receptive fields of two neurons with residual strong WII connections is about 0.5 radians, much more similar than the receptive fields of randomly chosen pairs of inhibitory neurons, which have φij ≈ π/2 radians. Moreover, when training the network on different sets of images (see S2 Fig), the weight distribution has smaller or even indiscernible gaps, suggesting the residual strong inhibitory connections are not a robust feature of our model’s aging-like process.
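The similarity measure can be illustrated as the angle between two flattened receptive fields; Eq (6) defines the exact quantity, so the form below is an assumption consistent with the values quoted above (identical RFs give 0 radians, unrelated RFs give roughly π/2):

```python
import numpy as np

def rf_angle(rf_i, rf_j):
    """Angle (radians) between two flattened receptive fields, used as
    a similarity measure: 0 for identical RFs, pi/2 for orthogonal
    (unrelated) RFs. A sketch of the phi_ij comparison; Eq (6) in the
    paper defines the exact measure."""
    cos = np.dot(rf_i, rf_j) / (np.linalg.norm(rf_i) * np.linalg.norm(rf_j))
    # Clip to guard against floating-point values just outside [-1, 1].
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```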

In addition to the weakening of synaptic connection weights, our model also predicts a change in the firing thresholds of neurons. Because E-I Net uses non-dimensionalized membrane potentials, changes in model thresholds could correspond to changes in physiological thresholds or resting potentials. Data on how firing thresholds and resting potentials change with age do not appear to show consistent trends across species or brain areas, with several studies finding no apparent changes in many cases [7, 56], some finding increases in firing threshold [7], and others finding increased hyperpolarization of resting potentials and firing thresholds [57]. Taken at face value, the increase in the spread of firing thresholds in our model (Fig 2) suggests that even within a brain area there may not be a consistent trend, though further study is needed to test this possibility.

In terms of the loss of functional performance in older brain tissue, our aged network model qualitatively replicates Ref. [34]’s observed loss of orientation selectivity to grated stimuli compared to the mature young network. Quantitative differences between our simulation results and the experimental observations of [34] may partially be traced to simplified features of the network model we use. For example, the selectivities of neurons in our model network are more polarized, with many neurons having vanishing orientation selectivity index (as they did not fire to any stimulus when presented with patches from the oriented grating images) and many having nearly maximal selectivity. These polarized orientation selectivities arise in part because the learning rules of E-I Net promote sparse firing yet demand decodability of inputs [39], resulting in highly selective neurons that are active essentially only when the feature they are selective for is presented, and silent otherwise. Moreover, because the network is deterministic (there are no noisy current inputs), the resulting activity levels tend to be highly polarized. Intermediate selectivities are likely generated by lateral coupling between neurons with similar but distinct receptive fields (see S1 Text), leading to imperfect selectivity. Another factor promoting polarized selectivities is that every neuron in the network responds to the same visual patch at a given time. Potentially, a very large network in which different clusters of neurons respond to inputs from different locations in the visual field would generate enough recurrent activity for recorded neurons to appear spontaneously active, which might smooth out the distributions of orientation selectivity. We leave training and simulating aging in such a network for future work.
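The polarization described here can be made concrete with a common orientation selectivity index; the paper's exact index may differ, so the definition below is an illustrative stand-in rather than the measure used in the study:

```python
import numpy as np

def osi(rates):
    """Orientation selectivity index from mean firing rates to a set of
    oriented gratings, using the common (r_pref - r_orth)/(r_pref +
    r_orth) definition as an illustrative stand-in. Silent neurons
    (no response to any grating) are assigned OSI = 0; a neuron firing
    only to its preferred orientation gets OSI = 1."""
    rates = np.asarray(rates, dtype=float)
    if rates.sum() == 0.0:  # neuron never fired to any grating
        return 0.0
    pref = int(np.argmax(rates))
    # Orientation 90 degrees away (half-way around the orientation set).
    orth = (pref + len(rates) // 2) % len(rates)
    r_p, r_o = rates[pref], rates[orth]
    return float((r_p - r_o) / (r_p + r_o))
```

With this definition, the "polarized" populations in the text correspond to many neurons at OSI = 0 (silent) and many near OSI = 1 (firing essentially only at the preferred orientation).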

As our model gives us full access to network parameters, we were able to demonstrate that this decline in orientation selectivity is primarily caused by the deterioration of the Gabor-like receptive fields of the model neurons as the runaway target excitation causes the network to reorganize, rather than, say, by changes in lateral weights that might also have been predicted to impact sensitivity [47–53]. In particular, we found both that the net proportion of Gabor-like receptive fields decreased as the network aged (Table 1) and that the receptive field of a given neuron was not preserved with age (Fig 3). The net loss of Gabor-like receptive fields impacting orientation selectivity makes intuitive sense, as such receptive fields are typically interpreted to be edge-detectors [58]. This is also a new prediction that could potentially be tested experimentally in longitudinal experiments on the same individuals, measuring the receptive fields of populations of neurons in visual cortex in both youth and old age, inferring the receptive fields using, for example, generalized linear model fits [59–64], and comparing the distributions of Gabor-like versus non-Gabor receptive fields at these different ages.

Other experimental work has found that contrast sensitivity is diminished with age [2, 65]. We did not perform a detailed investigation of whether our model could qualitatively replicate this result, for technical reasons: images on which the network model is trained are first “whitened” to remove the mean intensity of the images and normalize the variance across pixels in an image—operations interpreted within the context of E-I Net to be performed by the retina and lateral geniculate nucleus (LGN) en route to V1. However, this pre-processing step will also normalize the contrast of an image. To properly test contrast sensitivity would thus require a model of the aging retina and LGN, which is beyond the scope of this work, but could potentially be developed and implemented as a modification of the whitening step of this model, or perhaps studied with a similar setup, using recent deep network models of retinal circuitry [66, 67]. In addition to retinal changes, we have not attempted to account for the variety of potential age-related physical changes in the lens or vitreous fluid of the eye, which can also alter the image information transmitted to visual cortex. However, modeling how visual input transforms with age-related changes in the eye [68] may allow future work to better estimate how these effects might influence the aging process in models like ours.

Limitations of the model

Although our model successfully replicated several experimental observations and makes predictions for new experiments, there are of course some results the model as currently implemented could not replicate. Moreover, as a simplified model of network development and dynamics, there are features of the model that could be considered biologically implausible or oversimplified, and warrant discussion here.

One of Ref. [34]’s results that our model did not successfully reproduce is the loss of direction selectivity. We trained our network on sequential patches of image frames from the CatCam database [40, 41] for the purpose of evaluating direction selectivity, but found that our network was not complex enough to learn direction selectivity from training on these natural movies alone. Our network was able to learn some direction selectivity when trained on moving grating stimuli, but this began to overwrite the network structure learned from the natural scenes. Because we interpret the trained network model as a computational stand-in for mammalian V1 that developed in natural environments, we considered it invalid to induce direction selectivity by training on moving gratings before probing network responses.

The difficulty of learning direction selectivity could be because directed motion is not represented frequently enough in our training movies, but could also be because our network neurons exponentially filter inputs, all with the same single timescale. Potentially, a modified network model that allowed for different spatiotemporal history filters or even multiple membrane timescales might be better suited to learn direction selectivity from natural movies. A model that could successfully learn direction selectivity may also prove useful for testing how age impacts neural decoding of stimulus speed, which has also been shown to deteriorate with age [69].

The biological plausibility of training neural networks has been debated extensively in the literature [70, 71], casting doubt on whether training such networks can be an appropriate model for network development. Many, if not most, of these criticisms focus on training algorithms that employ non-local learning rules (i.e., rules which require global information about the network in order to update an individual neuron’s synaptic connections), such as backpropagation [72]. However, the learning rules employed by E-I Net and its predecessor SAILnet [37, 39] are local (Eqs (2)–(4)), meaning that updating a particular neuron’s parameters only requires information about its own incoming connections and firing threshold. Therefore, the learning rules used in our model may be considered a reasonable simplification of biologically plausible learning mechanisms, and thus can capture the essential minimal features of network development. However, it is worth noting that the learning rules of E-I Net do not give rise to any layered or modular network organization, so the resulting network is best interpreted as a particular layer of V1, such as layer 4, rather than an entire cortical column.

One subtle and potentially unintuitive feature of our model is that the time-course of training need not map straightforwardly onto the time-course of development. For instance, we use the same learning rates for the entire duration of our training procedure, including both the development phase (loops 1–30) and our aging phase (loops 30–80). However, experimental evidence indicates that there is a “critical period” early in development during which long-term synaptic plasticity is high, after which plasticity drops, allowing the nervous system to approximately lock in learned structure and prevent sudden drastic changes in sensory experience from overwriting this structure [73]. At first, a critical learning period seems at odds with our training procedure, which has no such period, but this is true only if one assumes a linear relationship between training time and true organism age. A nonlinear relationship would allow our results to be consistent with a critical period. In particular, we could assume that training time and organism age are approximately linearly related during development (loops 1–30) but shift to a different linear relationship after maturation that holds during the aging phase. We have checked that decreasing all learning rates by a factor of 10 and extending stimulus presentation time by a factor of 10 after 30 training loops indeed give results qualitatively similar to those of the baseline model (see Critical learning periods and time rescaling in S1 Text).

Finally, our results do not rule out other possible mechanisms for generating the observed changes in networks with age, nor do they explain what physiological mechanisms might cause a drift in target firing rate to begin with. Possible causes could be related, for example, to changes in the calcium dynamics controlling synaptic plasticity [74], but a detailed study of such possible breakdowns of this firing rate regulation is beyond the scope of the current work. Nevertheless, starting from the hypothesis that there is some breakdown of homeostatic regulation of target firing rates, causing them to increase with age in the presence of ongoing long-term synaptic plasticity, has allowed us to replicate several experimental results and make predictions for effects not yet observed experimentally. Future experiments testing these predictions would enable validation or refutation of this hypothesized mechanism.

Summary of results and implications for disease

We have shown that altering the target firing rate of neurons in a trained V1 network model—interpreted here as the result of a dysregulation of homeostatic mechanisms regulating network firing rates—amid ongoing long-term synaptic plasticity results in several physiological and functional changes symptomatic of aging in visual cortex. Our single modification to the learning rules of the E-I Net model of visual cortex is intended to promote increased firing with age in the network, and successfully reproduces this feature. The other features replicated or predicted by our approach follow from this single modification. The replicated features include

  • reduced synaptic strengths, in the form of weakened input connections to the excitatory and inhibitory subpopulations as well as diminished lateral connections between and within these subpopulations, and
  • impaired stimulus-feature detection, in the form of decreased selectivity for oriented grating stimuli,

while predicted features include

  • a net loss of Gabor-like receptive field structure of V1 neurons, and
  • the primary role of a decline in strength (i.e., input weight magnitude) and Gabor character of receptive fields in the loss of orientation selectivity; decreases in lateral weight strength also contribute, but may not be as significant.

Thus, our work suggests that dysregulation of network excitation may be one of the causal mechanisms that leads to physiological and functional changes in networks and their dynamics as an organism ages. While this insight does not rule out other possible mechanisms, it provides a basis for forming new hypotheses and devising experiments to test these ideas. Future studies could test this mechanism either by performing new recordings in naturally aged tissue to compare to our model predictions, or by direct experiments in which firing rate homeostasis in young tissue is actively disrupted by the experimenter, to assess whether features of old-age phenotypes develop. Should further studies provide evidence that this mechanism contributes to aging processes in vivo, our results also suggest possible ways some of these functional performance changes could be partly mitigated. For instance, our numerical experiment varying the structure and strength of input into visual cortex (Fig 7) suggests that improving orientation selectivity in aging patients could potentially be accomplished by coarsely increasing the gain of relevant clusters of neurons. In advanced age this approach could potentially be more effective than targeted treatments aiming to precisely restore the inputs to individual neurons, a more complex and potentially invasive intervention.

Beyond replicating previous experimental observations and making predictions for new experiments, modeling how neural circuitry changes with age will play an important role in understanding the progression of neurological disorders whose onset correlates with age, such as Alzheimer’s disease or Parkinson’s disease, as well as general age-related cognitive decline. In particular, having a baseline model for predicting changes in healthy neural circuitry is important for disambiguating which effects are symptomatic of disease from those likely to be present in a healthy aging individual, and thereby determining which aspects of the disease progression to focus on and develop treatments for.

Methods

Network model

We use E-I Net, a spiking network model of V1 trained on natural images, as the basis for our investigations. E-I Net is written in MATLAB, and all modifications and subsequent analysis code are also in MATLAB. We will give a brief overview of the model here, and then focus on modifications of the model we implemented for use in this study. Some of these modifications were non-default options already available in the code, while others are modifications we developed ourselves. For full details on E-I Net we direct readers to [37].

E-I Net is a network of leaky integrate-and-fire neurons that spike in response to pixel-image inputs Xk. In discrete time, the membrane dynamics of the network are given by Eq (7), together with a spiking condition followed by a reset.

These equations are a discrete-time formulation of Eq (1) used in practice by [37]. Here, Δt is the time-step size (equal to 0.1 time-units in E-I Net), τi = τE or τI is the membrane time constant of neuron i, and Δni(t) = ni(t) − ni(t − Δt) is the number of spikes fired by neuron i on time-step t. Neuron i fires a spike on time-step t if vi(t) ≥ θi, after which its membrane potential is reset to vi(t + Δt) = 0. The sum indexed by k is over the pixels of the input patch X, and the sum i = 1 to N is over the N neurons of the network. The network is partitioned into excitatory and inhibitory subpopulations; for the simulations reported in this work, we follow [37] in using NE = 400 excitatory neurons and NI = 49 inhibitory neurons. In this paper we take Wij to be positive (negative) if the pre-synaptic neuron j is excitatory (inhibitory), in contrast to the original E-I Net paper, which makes the sign explicit such that Wij is non-negative. We do not include excitatory-excitatory lateral connections (i.e., WEE = 0), as we expect such connections to be negligible physiologically; see [37] for additional details and justification of this choice and other aspects of the training of the model.
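The leak, feedforward drive, recurrent input, spiking condition, and reset described above can be sketched as one discrete time-step. This is a minimal Python translation of the prose, not the E-I Net MATLAB source; the exact update order, the lateral-input scaling, and the value tau = 1.0 are our assumptions:

```python
import numpy as np

def step(v, theta, Q, W, X, dn_prev, dt=0.1, tau=1.0):
    """One discrete time-step of the leaky integrate-and-fire dynamics
    described in the text: leak toward zero, feedforward input Q @ X,
    and recurrent input W @ dn_prev from spikes on the previous step.
    Returns the updated membrane potentials and this step's spikes."""
    v = v + (dt / tau) * (Q @ X - v) + W @ dn_prev  # integrate inputs
    dn = (v >= theta).astype(float)                 # spiking condition
    v = np.where(dn > 0, 0.0, v)                    # reset spiking neurons
    return v, dn
```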

The network learns input weights Qik, synaptic weights Wij, and firing thresholds θi according to the learning rules given in the main text (Eqs (2)–(4)). For the reader’s convenience, we repeat these learning rules here: (8) (9) (10)

These learning rules impose linear decoding of the image inputs (Eq 8), minimize redundancy in spiking (Eq 9), and promote sparsity in firing (Eq 10). As in the main text, yi is the spiking rate of neuron i in response to the presented image patch over a duration T, and 〈yi〉 is the long-time average of the spiking rate. The parameters α, βij ∈ {βEI, βIE, βII}, and γ are the learning rates. The target number of spikes per time-unit is pi(t), equal to a constant pI for inhibitory neurons and an “age”-dependent function pE(tloop) for excitatory neurons, given by Eq (11) in the main text. The “time” tloop corresponds to the number of training loops that have taken place (the network’s “age”).
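The age-dependent target rate of Eq (11) follows directly from the schedule given in Training the network: a baseline of pE = 0.01 spikes per time-unit through loop 30 (maturity), then an increase of 0.002 per subsequent loop. As a simple sketch:

```python
def p_E(t_loop, p0=0.01, dp=0.002, t_mature=30):
    """Excitatory target spike rate versus training loop, per the
    schedule in Training the network: constant at p0 through loop
    t_mature, then increasing by dp per loop (Eq (11))."""
    return p0 + dp * max(0, t_loop - t_mature)
```

At the end of the aging phase (loop 80), this schedule gives pE = 0.01 + 0.002 × 50 = 0.11 spikes per time-unit.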

The standard values used for the parameters in our network training and simulations are summarized in Table 2. For the learning rates we follow the original E-I Net model in our baseline cases; these rates are modified in the numerical experiments discussed in the section “Numerical experiments to test contribution of different physiological parameters to declines in functional performance” by setting one or more of them to 0. In the E-I Net code, these rates are combinations of several parameters that we do not list here; we note, however, that the numerical values we give include a global learning factor (a multiplier applied to all the learning rates), which is set to 0.4, the value used for accelerated training in examples in the E-I Net code repository [37].

Table 2. Parameters and standard values used in this work.

This table summarizes the parameters that appear in the equations we give in the main text, along with the standard values we use in our baseline simulations. Unless explicit units are given, quantities are measured in dimensionless units. The time-step Δt sets the standard unit of time during simulations, but is itself a dimensionless quantity as time has been non-dimensionalized in this model.

https://doi.org/10.1371/journal.pcbi.1008620.t002

The network learning rules are entirely local [37, 39] (i.e., changes to a neuron’s parameters depend only on inputs that neuron receives), and hence admit a straightforward biological interpretation as plastic or homeostatic mechanisms that shape network wiring or firing. When trained on natural image inputs the network has been shown to learn realistic Gabor-like receptive fields and produce activity that can be used to accurately reconstruct input [37, 39].

Training the network

Training our network model consists of two phases: the initial development phase, during which the network forms, and the aging phase during which we implement modifications to the learning rules to cause changes in network structure that mimic the effects of aging. The first phase of training is largely the same as the original E-I Net training procedure, while the aging phase is a novel modification that we implement after the first phase reaches a steady state.

SAILnet [39] and E-I Net [37] train their networks on static pixel images of natural scenes from the database of images used by Olshausen and Field in their seminal work on the emergence of Gabor-like receptive fields under sparse-coding principles [75]. Patches of these scenes are chosen randomly from a given image in this database, and each patch is presented to the network for 50 time-steps of simulation of the leaky integrate-and-fire network model. Following each image presentation, the network’s membrane potentials are reset. To speed up training, E-I Net creates 100 copies of the network, presenting each copy with a different set of randomly selected patches. At the end of the training epoch, the spiking data from all 100 copies are amalgamated to determine how the network parameters should be updated.
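The epoch structure described above (100 network copies whose spike statistics are pooled into a single parameter update) can be sketched as follows. Here `simulate_copy` and `update_params` are hypothetical stand-ins for the actual E-I Net routines, and mean-pooling of the statistics is our assumption about how the amalgamation works:

```python
import numpy as np

def training_epoch(simulate_copy, update_params, params, movie, n_copies=100):
    """Skeleton of one E-I Net-style training epoch: run n_copies
    identical copies of the network on different randomly chosen patch
    sets, pool their spike statistics, and apply a single parameter
    update. The callables are stand-ins for the real E-I Net code."""
    stats = [simulate_copy(params, movie) for _ in range(n_copies)]
    # Pool each recorded statistic (e.g., firing rates) across copies.
    pooled = {k: np.mean([s[k] for s in stats], axis=0) for k in stats[0]}
    return update_params(params, pooled)
```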

We modify this procedure slightly in training our networks. We train our networks on natural scenes from movies in the CatCam database, which depict the scenes encountered by a cat exploring a forest [40, 41]. We choose the CatCam database because our original intent was to investigate changes in both direction selectivity and orientation selectivity, as noted in the Discussion. For this purpose, it is necessary to train the network on temporal sequences of patches. We use 50 contiguous frames of 8 × 8 patches from these natural movies. We present each patch in this sequence for 50 time-steps of network simulation before presenting the next patch in the sequence. We do not reset the membrane potentials of the neurons between patches (an option available in E-I Net). We similarly use movies of grating images to evaluate the orientation selectivity of the network, taking 40 frames of gratings drifting perpendicularly to their orientation, covering one direction for each of the four different orientations; we ceased evaluating functional performance on both grating directions after finding that the networks did not develop significant direction selectivity. Although the networks we report on in this work do not ultimately develop significant direction selectivity when trained on natural movies, we keep the sequential presentation procedure for training our networks.

In our model we take the fully trained network to represent our mature “young” network, trained for 30 loops at the baseline target spike rates of pE = 0.01 expected spikes per time-unit for excitatory neurons and pI = 0.05 expected spikes per time-unit for inhibitory neurons, comparable to the values used in [37], which uses pE = pI = 0.05 for the entire duration of training. The goal of this work is to modify this model to reflect experimentally observed age-related physiological changes, such as increased neural activity and decreased inhibition, and thereby produce “aged” networks in which we can test how functional performance and coding have changed. Rather than implementing the physiological changes through several independent by-hand adjustments, we make a single, more parsimonious modification: we alter only the excitatory target spiking rate pE. If we increase pE with training time past the mature young stage of the network, the network will reorganize to achieve a higher mean firing rate close to this new target. We increase pE by 0.002 every subsequent loop, culminating in the “old” network after 80 total loops. This not only results in increased excitatory neuron activity (both in terms of mean firing and variance of firing), but also naturally gives rise to a decrease in inhibition (Fig 2). The changes in selectivity to oriented grating stimuli also emerge from this single by-hand change. To check how sensitive the aging procedure is to the initial value of pE, we also ran three additional network simulations starting from higher values, pE(0) = 0.05, 0.07, and 0.09 spikes per time-unit; see S1 Text. Because our method produces a sequence of aged networks that are gradual evolutions of earlier “ages,” we loosely interpret the training process as the “aging” process in our network. (As noted in the Discussion, there is no definite relationship between training loops and organism age.)
This is similar in spirit to previous work interpreting the initial training phase of the network as a model for the developmental period of visual cortex [38].
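The target-rate schedule described above is simple enough to state in a few lines. The following is an illustrative Python sketch (the function and argument names are ours): pE is held at its baseline through maturation at loop 30, then raised by 0.002 per loop until the "old" network at loop 80.

```python
def target_excitatory_rate(loop, p0=0.01, mature_loop=30, increment=0.002):
    """Target excitatory spike rate p_E as a function of training loop.

    Held at the baseline p0 through maturation (loop <= mature_loop),
    then raised by `increment` on every subsequent loop to drive the
    aging process described in the text.
    """
    return p0 + increment * max(0, loop - mature_loop)
```

The same schedule applies to the supplementary runs with higher starting values (e.g., p0 = 0.05), which only changes the baseline, not the increment.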

Convergence to steady-state

Our aging process does not begin until 30 training loops have occurred, at which point the network has reached a steady state at which continued training only causes the network parameters to fluctuate around a mean value. In practice, we determine whether the network has reached a steady state using the mean orientation selectivity of the network. We find that by approximately 20 loops the mean orientation selectivity of the network has converged to 0.7 and continues to fluctuate around this value. Thus, the additional 10 loops of training serve as a buffer to ensure the network has reached its mature steady state.

As explained above, after the network reaches this steady state we begin to increase the excitatory target spike rate pE(tloop) every loop, pausing training every 5 loops only to evaluate the current network’s orientation selectivity. One may therefore wonder whether the network is able to achieve a new steady state during each of these loops (though this is not a major concern, as it is entirely plausible that the aging effects induced by increasing pE(tloop) outpace the ability of the synaptic plasticity to keep up). To determine whether a single training loop at the increased value of pE(tloop) is enough for the network to remain close to a new steady state, we perform an additional test with a modified procedure: we increase pE(tloop) every loop as in our regular procedure, but at every 5th loop we hold pE(tloop) fixed for 5 subsequent loops. We find that these additional loops do not appreciably affect the course of degradation of the network, and network selectivity does not recover. This suggests that, at least after the initial maturation period, the network performance is approximately a state function of the target excitatory spike rate. This hypothesis is supported by a supplementary analysis in S1 Text.

Evaluating network responses to oriented gratings

Every 5 loops we pause training and evaluate the network’s responses to moving grating stimuli (see Fig 5) of 4 possible orientations in order to measure the selectivity of model neurons; during these evaluations we set the global network learning rate to zero so that network parameters do not adapt to the statistics of the gratings. We generate the gratings at spatial and temporal frequencies of 0.1 cycles/pixel and 0.1 cycles/frame, respectively, using the GratingStim module of VisualStimulusToolbox [76]. As in the training phase, we use 8 × 8 pixel patches of these grating images as visual input to our network. At this patch size there is approximately one cycle of the grating per patch.
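Our grating stimuli were generated with VisualStimulusToolbox in MATLAB; an illustrative numpy equivalent (function name and parameterization are ours) of a drifting-grating patch at these frequencies would be:

```python
import numpy as np

def grating_patch(orientation, frame, size=8, sf=0.1, tf=0.1):
    """A size x size patch of a sinusoidal grating drifting perpendicular
    to its orientation, at spatial frequency `sf` (cycles/pixel) and
    temporal frequency `tf` (cycles/frame)."""
    y, x = np.mgrid[0:size, 0:size].astype(float)
    # coordinate along the drift direction (perpendicular to the bars)
    u = x * np.cos(orientation) + y * np.sin(orientation)
    return np.cos(2 * np.pi * (sf * u - tf * frame))
```

At sf = 0.1 cycles/pixel an 8-pixel patch spans 0.8 of a cycle, consistent with "approximately one cycle of the grating per patch," and at tf = 0.1 cycles/frame the stimulus repeats every 10 frames.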

For each sequence of grating presentations we record the spiking activity of all neurons in response to grating patches of each orientation and use these responses to compute the orientation selectivity of each neuron. For each of the four orientations, we draw 100 patches from 10 frames of the grating movie. Each frame presentation lasts 200 time-steps of the leaky integrate-and-fire network dynamics (compared to 50 time-steps during training on the natural movies). We tally up the total number of spikes each neuron emitted to each orientation across all ten frames, and then compute an orientation selectivity index. While there are various methods of determining orientation selectivity [77], we follow [34, 45, 46], defining
OS_k = |∑_ϑ n_k(ϑ) e^{2iϑ}| / ∑_ϑ n_k(ϑ),
where n_k(ϑ) is the total number of spikes neuron k emitted in response to orientation ϑ ∈ {0, π/4, π/2, 3π/4}. If the neuron does not fire any spikes in response to any of the gratings, we take OS_k = 0. This measure assigns a complex number n_k(ϑ)e^{2iϑ} to the neuron’s response to each orientation and then sums these complex numbers. Taking the magnitude |∑_ϑ n_k(ϑ) e^{2iϑ}| and normalizing by the total number of spikes gives OS_k. A highly selective neuron will have OS_k ≈ 1, while a neuron that fires indiscriminately to orientations will have OS_k ≈ 0 (the factor of 2 in e^{2iϑ} is necessary for it to be possible that OS_k = 0).
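The selectivity index follows directly from this definition; a minimal illustrative Python sketch (not our analysis code) is:

```python
import numpy as np

def orientation_selectivity(spike_counts, orientations):
    """Orientation selectivity OS_k = |sum n(t) e^{2it}| / sum n(t).

    spike_counts: total spikes emitted to each orientation;
    orientations: the corresponding angles, here {0, pi/4, pi/2, 3pi/4}.
    Silent neurons are scored 0 by convention.
    """
    n = np.asarray(spike_counts, dtype=float)
    total = n.sum()
    if total == 0:
        return 0.0
    # the factor of 2 maps orientations (period pi) onto the full circle,
    # so indiscriminate firing over the four orientations cancels to 0
    z = np.sum(n * np.exp(2j * np.asarray(orientations)))
    return float(np.abs(z) / total)
```

A neuron firing only to one orientation scores 1, and equal firing to all four orientations scores 0, because the four phasors e^{2iϑ} sum to zero.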

“Gabor-ness” analysis

The receptive fields (RFs) of the neurons in the network are observed to evolve over the aging process (Fig 3). Here we define a neuron’s RF as the pixel-space arrangement of the weights Q on its connections from pixel-input patches (Fig 1C). We quantify change in RFs by fitting a Gabor wavelet profile G to each RF in both youth and old age. RFs with fits for which the normalized goodness-of-fit, ||G − RF||_2/||RF||_2, is less than 0.8 are classified as Gabor-like, while RFs with fits that exceed this threshold are classified as “not-Gabor.” This is the same procedure as performed in [39], except that we use a less strict classification threshold (0.8 vs. 0.5). The Gabor profile takes the following form:
G(x_p, y_p) = A exp(−x_p²/(2σ_x²) − y_p²/(2σ_y²)) cos(2πf x_p + ψ),
where A is the amplitude of the Gabor wavelet, f is the frequency of amplitude oscillation, ψ is the phase, and σ_x and σ_y are the spreads of the Gabor profile in the x and y directions. The Gabor wavelet is taken to be a function of independent coordinates x_p and y_p, related to the coordinates x and y of the pixel-grid representation of Q shown in Fig 1C by an angle of rotation θ (not related to the firing thresholds) and offsets x_0 and y_0 corresponding to the shifted center of the Gabor profile. In addition to classifying fits that fail our goodness-of-fit criterion as not-Gabor, we also assign this classification to fits that are centered outside the frame of the receptive field. In principle, we would wholly exclude from the statistics any receptive fields for which fits cannot be computed, but this was never the case for the neurons in our model (see Table 1).
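An illustrative Python sketch of the Gabor profile and the classification criterion follows; the parameterization mirrors the definitions above, but the function names are ours, the fitting step itself (a nonlinear least-squares problem) is omitted, and the Frobenius-norm convention in the goodness-of-fit is an assumption.

```python
import numpy as np

def gabor(params, size=8):
    """Gabor wavelet G evaluated on a size x size pixel grid.
    params = (A, f, psi, sigma_x, sigma_y, theta, x0, y0)."""
    A, f, psi, sx, sy, theta, x0, y0 = params
    y, x = np.mgrid[0:size, 0:size].astype(float)
    # rotate (x, y) by theta about the center (x0, y0) to get the
    # Gabor's own coordinates (x_p, y_p)
    xp = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yp = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return (A * np.exp(-xp**2 / (2 * sx**2) - yp**2 / (2 * sy**2))
              * np.cos(2 * np.pi * f * xp + psi))

def is_gabor_like(rf, fit, threshold=0.8):
    """Classify an RF by the normalized residual of its best Gabor fit."""
    resid = np.linalg.norm(fit - rf) / np.linalg.norm(rf)
    return resid < threshold
```

A perfect fit gives residual 0 (Gabor-like), while a fit with no overlap with the RF gives residual 1, which exceeds the 0.8 threshold (not-Gabor).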

Numerical experiments

Learning rate freezing.

To examine the contribution of each network property to the observed decline in neural orientation selectivity as a result of increase in pE, we conduct the following series of tests. We train several instances of the network to maturity (30 loops) under normal conditions. Upon the completion of the 30th loop, we set the learning rate to zero for different sets of connection weights: in one instance, we zero out the learning rates for only the input connections to the excitatory and inhibitory neurons; in another, we zero out the learning rates for only the lateral connections between neurons; and in the last instance, we zero out the learning rates for both input weights and lateral weights. There is no case in which we zero out the learning rates of the thresholds, as it is through this learning rule that our network ages. After zeroing out the learning rates, we continue training these variants until old age (80 loops), probing selectivity every five loops; see Fig 6.

Receptive field remapping.

To decouple the effects of the changing RF structure and magnitude on network selectivity, we examine, independent of the training procedure, the selectivity after remapping the young network’s input weights onto a later distribution of input weight magnitudes, for the networks after 35, 40, 45, …, 80 loops:
Q_young→old = Φ_old^{−1}(Φ_young(Q_young)),   (12)
where Φ_young(·) and Φ_old(·) are the cumulative distribution functions (CDFs) for the young and old input weight distributions. Eq (12) modifies the value of every input weight such that the distribution of remapped young input weight values matches the distribution of old input weight values, but the spatial pixel organization of each RF is not altered. We empirically estimate the CDFs and inverse CDFs from the data and use interpolating splines to obtain smooth estimates. Thus, we can insert these remapped weights into our old networks to test how RF structure contributes to selectivity. We find that networks with these remapped input weights exhibit consistently higher selectivity than their counterparts with the original aged input weights. We can also test the inverted remapping, Q_old→young = Φ_young^{−1}(Φ_old(Q_old)), in which the aged networks’ input weights are remapped onto the young distribution of magnitudes, retaining the old spatial pixel structure but with young magnitudes. Networks with these remapped weights are comparable in selectivity to their corresponding original (baseline) networks in the early stages of the aging process, and increasingly more selective late in aging. See Fig 7.
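The remapping of Eq (12) amounts to a quantile transform; a minimal illustrative Python sketch (linear interpolation stands in for the interpolating splines described above, and both weight arrays are assumed to have the same size, as for two snapshots of the same network) is:

```python
import numpy as np

def remap_weights(q_young, q_old):
    """Map each young input weight through Phi_old^{-1}(Phi_young(.)),
    so the remapped values follow the old magnitude distribution while
    the pixel arrangement of each RF is untouched."""
    young_sorted = np.sort(q_young.ravel())
    old_sorted = np.sort(q_old.ravel())
    # empirical CDF values at the sorted sample points
    quantiles = (np.arange(young_sorted.size) + 0.5) / young_sorted.size
    # Phi_young(q) for every weight, then Phi_old^{-1} of that value
    u = np.interp(q_young.ravel(), young_sorted, quantiles)
    remapped = np.interp(u, quantiles, old_sorted)
    return remapped.reshape(q_young.shape)
```

Because the map is monotone, the remapped array has exactly the old distribution of values while preserving the rank order (and hence the spatial structure) of the young weights; swapping the arguments gives the inverted remapping.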

Figures

Figures created in MATLAB were saved using the export_fig script, available on GitHub [78].

Supporting information

S1 Text. Supplementary analyses.

To support the main findings of our work we performed several additional analyses, including evaluating the orientation selectivity of hybrid networks created by mixing-and-matching young and old parameter sets, training the network with a critical learning period, demonstrating that excitatory neurons with similar receptive fields effectively inhibit one another (through inhibitory interneurons), and testing how the training history depends on the initial value of the excitatory target spike rate, pE(0).

https://doi.org/10.1371/journal.pcbi.1008620.s001

(PDF)

S1 Fig. Histograms of physiological network parameters during the aging process.

The empirical distributions of the input weights Q, lateral weights W, and firing thresholds θ at different ages during the aging process. A. 40 loops, B. 50 loops, C. 60 loops, D. 70 loops, and E. 80 loops (same data shown in Fig 2). This network was trained on frames from movie01 in the CatCam database [40, 41].

https://doi.org/10.1371/journal.pcbi.1008620.s002

(TIF)

S2 Fig. Histograms of physiological network parameters in youth vs. old age for different training movies.

The empirical distributions of the input weights Q, lateral weights W, and firing thresholds θ obtained in networks trained on A. movie01 (same as Fig 2), B. movie07, and C. movie16 from the CatCam database [40, 41].

https://doi.org/10.1371/journal.pcbi.1008620.s003

(TIF)

S3 Fig. Angles between young and old receptive fields in the input weight parameter space.

The distribution of angles between young and old receptive fields in networks trained on A. movie01 (same as Fig 3B), movie07 (panel B), and movie16 (panel C) from the CatCam database [40, 41].

https://doi.org/10.1371/journal.pcbi.1008620.s004

(TIF)

S4 Fig. Activity of the network with age.

The spike counts of neurons for networks trained on movie01 (panels A and B, same as Fig 4), movie07 (panels C and D), and movie16 (panels E and F) from the CatCam database [40, 41]. In movie16 there were some outlier spike counts (e.g., the largest excitatory spike count was 243 at loop 65), so we have restricted the vertical axis to match the range observed for the other movies.

https://doi.org/10.1371/journal.pcbi.1008620.s005

(TIF)

S5 Fig. Orientation selectivity changes.

The empirical cumulative distribution functions (CDFs) for the orientation selectivity measured in networks trained on images from A. movie01 (same as panel B in Fig 5), B. movie07, and C. movie16 from the CatCam database [40, 41].

https://doi.org/10.1371/journal.pcbi.1008620.s006

(TIF)

S6 Fig. Selectivity of cells when connection weights are frozen during training.

The mean orientation selectivity across neurons for networks trained on images from A. movie01 (same as Fig 6), B. movie07, and C. movie16 from the CatCam database [40, 41].

https://doi.org/10.1371/journal.pcbi.1008620.s007

(TIF)

S7 Fig. Selectivity of cells with remapped receptive fields.

The mean orientation selectivity across neurons for networks trained on images from A. movie01 (same as Fig 7), B. movie07, and C. movie16 from the CatCam database [40, 41].

https://doi.org/10.1371/journal.pcbi.1008620.s008

(TIF)

Acknowledgments

We thank Lindsey Czarnecki, Paul King, Arianna Maffei, Andrew Merrill, Lorna Role, and Joel Zylberberg for helpful discussions. BAWB thanks Barry Brinkman for providing the photograph used in Fig 1.

References

1. Kubanis P, Zornetzer SF. Age-related behavioral and neurobiological changes: A review with an emphasis on memory. Behavioral and Neural Biology. 1981; 31(2):115–172. https://doi.org/10.1016/S0163-1047(81)91195-X. pmid:6114732
2. Spear PD, Moore RJ, Kim CB, Xue JT, Tumosa N. Effects of aging on the primate visual system: spatial and temporal processing by lateral geniculate neurons in young adult and old rhesus monkeys. Journal of Neurophysiology. 1994; 72(1):402–420. https://doi.org/10.1152/jn.1994.72.1.402. pmid:7965023
3. Kelly KM, Nadon NL, Morrison JH, Thibault O, Barnes CA, Blalock EM. The neurobiology of aging. Epilepsy Research. 2006; 68:5–20. https://doi.org/10.1016/j.eplepsyres.2005.07.015. pmid:16386406
4. Grady C. The cognitive neuroscience of ageing. Nature Reviews Neuroscience. 2012; 13(7):491–505. https://doi.org/10.1038/nrn3256. pmid:22714020
5. Segovia G, Porras A, Arco AD, Mora F. Glutamatergic neurotransmission in aging: a critical perspective. Mechanisms of Ageing and Development. 2001; 122(1):1–29. https://doi.org/10.1016/S0047-6374(00)00225-6. pmid:11163621
6. Kaiser LG, Schuff N, Cashdollar N, Weiner MW. Age-related glutamate and glutamine concentration changes in normal human brain: 1H MR spectroscopy study at 4 T. Neurobiology of Aging. 2005; 26(5):665–672. https://doi.org/10.1016/j.neurobiolaging.2004.07.001. pmid:15708441
7. Rizzo V, Richman J, Puthanveettil SV. Dissecting mechanisms of brain aging by studying the intrinsic excitability of neurons. Frontiers in Aging Neuroscience. 2015; 6:337. https://doi.org/10.3389/fnagi.2014.00337. pmid:25610394
8. Siu CR, Beshara SP, Jones DG, Murphy KM. Development of Glutamatergic Proteins in Human Visual Cortex across the Lifespan. The Journal of Neuroscience. 2017; 37(25):6031–6042. https://doi.org/10.1523/JNEUROSCI.2304-16.2017. pmid:28554889
9. Johnson I. Age-related neurodegenerative disease research needs aging models. Frontiers in Aging Neuroscience. 2015; 7:168. https://doi.org/10.3389/fnagi.2015.00168. pmid:26388766
10. Hou Y, Dan X, Babbar M, Wei Y, Hasselbalch SG, Croteau DL, et al. Ageing as a risk factor for neurodegenerative disease. Nature Reviews Neurology. 2019; 15(10):565–581. https://doi.org/10.1038/s41582-019-0244-7. pmid:31501588
11. Horn D, Ruppin E, Usher M, Herrmann M. Neural Network Modeling of Memory Deterioration in Alzheimer’s Disease. Neural Computation. 1993; 5(5):736–749. https://doi.org/10.1162/neco.1993.5.5.736.
12. Hasselmo ME. Runaway synaptic modification in models of cortex: Implications for Alzheimer’s disease. Neural Networks. 1994; 7(1):13–40. https://doi.org/10.1016/0893-6080(94)90053-1.
13. Tippett LJ, Farah MJ. A computational model of naming in Alzheimer’s disease: Unitary or multiple impairments? Neuropsychology. 1994; 8(1):3–13. https://doi.org/10.1037/0894-4105.8.1.3.
14. Ruppin E, Reggia JA. A Neural Model of Memory Impairment in Diffuse Cerebral Atrophy. British Journal of Psychiatry. 1995; 166(1):19–28. https://doi.org/10.1192/bjp.166.1.19. pmid:7894871
15. Hasselmo ME. A computational model of the progression of Alzheimer’s disease. MD Computing: Computers in Medical Practice. 1997; 14(3):181–191. pmid:9151508
16. Menschik ED, Finkel LH. Neuromodulatory control of hippocampal function: towards a model of Alzheimer’s disease. Artificial Intelligence in Medicine. 1998; 13(1–2):99–121. https://doi.org/10.1016/s0933-3657(98)00006-2. pmid:9654381
17. Meeter M, Murre JMJ. Tracelink: A model of consolidation and amnesia. Cognitive Neuropsychology. 2005; 22(5):559–587. https://doi.org/10.1080/02643290442000194. pmid:21038266
18. Moustafa AA, Keri S, Herzallah MM, Myers CE, Gluck MA. A neural model of hippocampal–striatal interactions in associative learning and transfer generalization in various neurological and psychiatric patients. Brain and Cognition. 2010; 74(2):132–144. https://doi.org/10.1016/j.bandc.2010.07.013. pmid:20728258
19. Bhattacharya BS, Coyle D, Maguire LP. A thalamo–cortico–thalamic neural mass model to study alpha rhythms in Alzheimer’s disease. Neural Networks. 2011; 24(6):631–645. https://doi.org/10.1016/j.neunet.2011.02.009. pmid:21435838
20. Abuhassan K, Coyle D, Maguire LP. Investigating the Neural Correlates of Pathological Cortical Networks in Alzheimer’s Disease Using Heterogeneous Neuronal Models. IEEE Transactions on Biomedical Engineering. 2012; 59(3):890–896. https://doi.org/10.1109/TBME.2011.2181843. pmid:22207633
21. Rowan M, Neymotin S, Lytton W. Electrostimulation to reduce synaptic scaling driven progression of Alzheimer’s disease. Frontiers in Computational Neuroscience. 2014; 8:39. https://doi.org/10.3389/fncom.2014.00039. pmid:24765074
22. Bianchi D, Michele PD, Marchetti C, Tirozzi B, Cuomo S, Marie H, et al. Effects of increasing CREB-dependent transcription on the storage and recall processes in a hippocampal CA1 microcircuit. Hippocampus. 2014; 24(2):165–177. https://doi.org/10.1002/hipo.22212. pmid:24123649
23. Edwards R, Beuter A, Glass L. Parkinsonian Tremor and Simplification in Network Dynamics. Bulletin of Mathematical Biology. 1999; 61(1):157–177. https://doi.org/10.1006/bulm.1998.0086. pmid:10071513
24. Cutsuridis V, Perantonis S. A Neural Network Model of Parkinson’s Disease Bradykinesia. Neural Networks. 2006; 19(4):354–374. https://doi.org/10.1016/j.neunet.2005.08.016. pmid:16359844
25. Schiff SJ. Towards model-based control of Parkinson’s disease. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2010; 368(1918):2269–2308. https://doi.org/10.1098/rsta.2010.0050. pmid:20368246
26. Kerr C, Van Albada S, Neymotin S, Chadderdon G, Robinson P, Lytton W. Cortical information flow in Parkinson’s disease: a composite network/field model. Frontiers in Computational Neuroscience. 2013; 7:39. https://doi.org/10.3389/fncom.2013.00039. pmid:23630492
27. Dovzhenok A, Park C, Worth RM, Rubchinsky LL. Failure of Delayed Feedback Deep Brain Stimulation for Intermittent Pathological Synchronization in Parkinson’s Disease. PLOS ONE. 2013; 8(3):1–10. https://doi.org/10.1371/journal.pone.0058264. pmid:23469272
28. Pavlides A, Hogan SJ, Bogacz R. Computational Models Describing Possible Mechanisms for Generation of Excessive Beta Oscillations in Parkinson’s Disease. PLOS Computational Biology. 2015; 11(12):1–29. https://doi.org/10.1371/journal.pcbi.1004609. pmid:26683341
29. Kumaravelu K, Brocker DT, Grill WM. A biophysical model of the cortex-basal ganglia-thalamus network in the 6-OHDA lesioned rat model of Parkinson’s disease. Journal of Computational Neuroscience. 2016; 40(2):207–229. https://doi.org/10.1007/s10827-016-0593-9. pmid:26867734
30. Humphries MD, Obeso JA, Dreyer JK. Insights into Parkinson’s disease from computational models of the basal ganglia. Journal of Neurology, Neurosurgery & Psychiatry. 2018; 89(11):1181–1188. https://doi.org/10.1136/jnnp-2017-315922. pmid:29666208
31. Wang Z, Yu S, Fu Y, Tzvetanov T, Zhou Y. Aging Potentiates Lateral but Not Local Inhibition of Orientation Processing in Primary Visual Cortex. Frontiers in Aging Neuroscience. 2018; 10:14. https://doi.org/10.3389/fnagi.2018.00014. pmid:29459825
32. Ollé-Vila A, Seoane LF, Solé R. Aging, computation, and the evolution of neural regeneration processes. Journal of the Royal Society Interface. 2020; 17(168):10. https://doi.org/10.1098/rsif.2020.0181.
33. Kerchner GA, Wyss-Coray T. In: Sierra F, Kohanski R, editors. The Role of Aging in Alzheimer’s Disease. Cham: Springer International Publishing; 2016. p. 197–227. Available from: https://doi.org/10.1007/978-3-319-23246-1_7.
34. Hua T, Li X, He L, Zhou Y, Wang Y, Leventhal AG. Functional degradation of visual cortical cells in old cats. Neurobiology of Aging. 2006; 27(1):155–162. https://doi.org/10.1016/j.neurobiolaging.2004.11.012. pmid:16298251
35. Hua T, Kao C, Sun Q, Li X, Zhou Y. Decreased proportion of GABA neurons accompanies age-related degradation of neuronal function in cat striate cortex. Brain Research Bulletin. 2008; 75(1):119–125. https://doi.org/10.1016/j.brainresbull.2007.08.001. pmid:18158105
36. Ding Y, Zheng Y, Liu T, Chen T, Wang C, Sun Q, et al. Changes in GABAergic markers accompany degradation of neuronal function in the primary visual cortex of senescent rats. Scientific Reports. 2017; 7(1):14897. https://doi.org/10.1038/s41598-017-15006-3. pmid:29097694
37. King PD, Zylberberg J, DeWeese MR. Inhibitory Interneurons Decorrelate Excitatory Cells to Drive Sparse Code Formation in a Spiking Model of V1. Journal of Neuroscience. 2013; 33(13):5475–5485. https://doi.org/10.1523/JNEUROSCI.4188-12.2013. pmid:23536063
38. Zylberberg J, DeWeese MR. Sparse Coding Models Can Exhibit Decreasing Sparseness while Learning Sparse Codes for Natural Images. PLOS Computational Biology. 2013; 9(8):1–10. https://doi.org/10.1371/journal.pcbi.1003182. pmid:24009489
39. Zylberberg J, Murphy JT, DeWeese MR. A Sparse Coding Model with Synaptically Local Plasticity and Spiking Neurons Can Account for the Diverse Shapes of V1 Simple Cell Receptive Fields. PLOS Computational Biology. 2011; 7(10):1–12. https://doi.org/10.1371/journal.pcbi.1002250. pmid:22046123
40. Betsch BY, Einhäuser W, Körding KP, König P. The world from a cat’s perspective—statistics of natural videos. Biological Cybernetics. 2004; 90(1):41–50. https://doi.org/10.1007/s00422-003-0434-6. pmid:14762723
41. Betsch BY, Einhäuser W, Körding KP, König P. CatCam Dataset [Biol. Cybern. 90(1):41–50]; 2004. Available from: https://doi.org/10.5281/zenodo.46481.
42. Oja E. Simplified neuron model as a principal component analyzer. Journal of Mathematical Biology. 1982; 15(3):267–273. https://doi.org/10.1007/BF00275687. pmid:7153672
43. Földiák P. Learning Invariance from Transformation Sequences. Neural Computation. 1991; 3(2):194–200. https://doi.org/10.1162/neco.1991.3.2.194. pmid:31167302
44. Gorban AN, Tyukin IY. Blessing of dimensionality: mathematical foundations of the statistical physics of data. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2018; 376(2118):18. https://doi.org/10.1098/rsta.2017.0237. pmid:29555807
45. Levick WR, Thibos LN. Analysis of orientation bias in cat retina. The Journal of Physiology. 1982; 329(1):243–261. https://doi.org/10.1113/jphysiol.1982.sp014301. pmid:7143249
46. Mazurek M, Kager M, Van Hooser SD. Robust quantification of orientation selectivity and direction selectivity. Frontiers in Neural Circuits. 2014; 8:92. https://doi.org/10.3389/fncir.2014.00092. pmid:25147504
47. Carpenter RHS, Blakemore C. Interactions between orientations in human vision. Experimental Brain Research. 1973; 18(3):287–303. https://doi.org/10.1007/BF00234599. pmid:4769265
48. Wehr M, Zador AM. Balanced inhibition underlies tuning and sharpens spike timing in auditory cortex. Nature. 2003; 426(6965):442–446. https://doi.org/10.1038/nature02116. pmid:14647382
49. Isaacson JS, Scanziani M. How Inhibition Shapes Cortical Activity. Neuron. 2011; 72(2):231–243. https://doi.org/10.1016/j.neuron.2011.09.027. pmid:22017986
50. Harris KD, Mrsic-Flogel TD. Cortical connectivity and sensory coding. Nature. 2013; 503(7474):51–58. https://doi.org/10.1038/nature12654. pmid:24201278
51. Boerlin M, Machens CK, Denève S. Predictive Coding of Dynamical Variables in Balanced Spiking Networks. PLOS Computational Biology. 2013; 9(11):1–16. https://doi.org/10.1371/journal.pcbi.1003258. pmid:24244113
52. Schwemmer MA, Fairhall AL, Denéve S, Shea-Brown ET. Constructing Precisely Computing Networks with Biophysical Spiking Neurons. Journal of Neuroscience. 2015; 35(28):10112–10134. https://doi.org/10.1523/JNEUROSCI.4951-14.2015. pmid:26180189
53. Gutierrez GJ, Denève S. Population adaptation in efficient balanced networks. eLife. 2019; 8:e46926. https://doi.org/10.7554/eLife.46926.001. pmid:31550233
54. Bishop N, Lu T, Yankner B. Neural mechanisms of ageing and cognitive decline. Nature. 2010; 464:529–35. https://doi.org/10.1038/nature08983. pmid:20336135
55. Dayan P, Abbott LF. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. The MIT Press; 2005.
56. Luebke JI, Medalla M, Amatrudo JM, Weaver CM, Crimins JL, Hunt B, et al. Age-Related Changes to Layer 3 Pyramidal Cells in the Rhesus Monkey Visual Cortex. Cerebral Cortex. 2013; 25(6):1454–1468. https://doi.org/10.1093/cercor/bht336. pmid:24323499
57. Smithers HE, Terry JR, Brown JT, Randall AD. Aging-Associated Changes to Intrinsic Neuronal Excitability in the Bed Nucleus of the Stria Terminalis Is Cell Type-Dependent. Frontiers in Aging Neuroscience. 2017; 9:424. https://doi.org/10.3389/fnagi.2017.00424. pmid:29311907
58. Stevens CF. Novel neural circuit mechanism for visual edge detection. Proceedings of the National Academy of Sciences. 2015; 112(3):875–880. https://doi.org/10.1073/pnas.1422673112. pmid:25561549
59. Simoncelli EP, Paninski L, Pillow JW, Schwartz O. Characterization of Neural Responses with Stochastic Stimuli. In: Gazzaniga M, editor. The Cognitive Neurosciences. 3rd ed. MIT Press; 2004. p. 327–338.
60. Paninski L, Pillow JW, Simoncelli EP. Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model. Neural Computation. 2004; 16(12):2533–61. https://doi.org/10.1162/0899766042321797. pmid:15516273
61. Kulkarni JE, Paninski L. Common-input models for multiple neural spike-train data. Network: Computation in Neural Systems. 2007; 18(4):375–407. https://doi.org/10.1080/09548980701625173. pmid:17943613
62. Pillow JW, Shlens J, Paninski L, Sher A. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature. 2008; 454:995–999. https://doi.org/10.1038/nature07140. pmid:18650810
63. Vidne M, Ahmadian Y, Shlens J, Pillow JW, Kulkarni JE, Litke AM, et al. Modeling the impact of common noise inputs on the network activity of retinal ganglion cells. Journal of Computational Neuroscience. 2012; 33(1):97–121. https://doi.org/10.1007/s10827-011-0376-2. pmid:22203465
64. Paninski L. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems. 2004; 15(4):243–262. https://doi.org/10.1088/0954-898X/15/4/002.
65. Wang Z, Yao Z, Yuan N, Liang Z, Li G, Zhou Y. Declined contrast sensitivity of neurons along the visual pathway in aging cats. Frontiers in Aging Neuroscience. 2014; 6:163. https://doi.org/10.3389/fnagi.2014.00163. pmid:25071569
66. Maheswaranathan N, McIntosh LT, Kastner DB, Melander JB, Brezovec L, Nayebi A, et al. Deep learning models reveal internal structure and diverse computations in the retina under natural scenes. bioRxiv. 2018. https://doi.org/10.1101/340943.
67. Tanaka H, Nayebi A, Maheswaranathan N, McIntosh L, Baccus S, Ganguli S. From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction. In: Wallach H, Larochelle H, Beygelzimer A, d’Alché-Buc F, Fox E, Garnett R, editors. Advances in Neural Information Processing Systems 32. Curran Associates, Inc.; 2019. p. 8535–8545.
68. Navarro R. Adaptive model of the aging emmetropic eye and its changes with accommodation. Journal of Vision. 2014; 14(13):21. https://doi.org/10.1167/14.13.21. pmid:25761283
69. Mendelson JR, Wells EF. Age-related changes in the visual cortex. Vision Research. 2002; 42(6):695–703. https://doi.org/10.1016/S0042-6989(01)00307-8. pmid:11888535
70. Illing B, Gerstner W, Brea J. Biologically plausible deep learning — But how far can we go with shallow networks? Neural Networks. 2019; 118:90–101. https://doi.org/10.1016/j.neunet.2019.06.001. pmid:31254771
71. Taherkhani A, Belatreche A, Li Y, Cosma G, Maguire LP, McGinnity TM. A review of learning in biologically plausible spiking neural networks. Neural Networks. 2020; 122:253–272. https://doi.org/10.1016/j.neunet.2019.09.036. pmid:31726331
72. Mazzoni P, Andersen RA, Jordan MI. A More Biologically Plausible Learning Rule Than Backpropagation Applied to a Network Model of Cortical Area 7a. Cerebral Cortex. 1991; 1(4):293–307. https://doi.org/10.1093/cercor/1.4.293. pmid:1822737
73. Levelt CN, Hübener M. Critical-Period Plasticity in the Visual Cortex. Annual Review of Neuroscience. 2012; 35(1):309–330. https://doi.org/10.1146/annurev-neuro-061010-113813. pmid:22462544
74. Helias M, Rotter S, Gewaltig MO, Diesmann M. Structural plasticity controlled by calcium based correlation detection. Frontiers in Computational Neuroscience. 2008; 2:7. https://doi.org/10.3389/neuro.10.007.2008. pmid:19129936
75. Olshausen BA, Field DJ. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature. 1996; 381(6583):607–609. https://doi.org/10.1038/381607a0. pmid:8637596
76. Beyeler M. Visual Stimulus Toolbox: v1.0.0; 2016. Available from: https://doi.org/10.5281/zenodo.154061.
77. Bienenstock EL, Cooper LN, Munro PW. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. Journal of Neuroscience. 1982; 2(1):32–48. https://doi.org/10.1523/JNEUROSCI.02-01-00032.1982. pmid:7054394
78. Altman Y. export_fig, GitHub; retrieved Dec. 13, 2019. https://www.github.com/altmany/export_fig.