
Neuronal mechanisms for sequential activation of memory items: Dynamics and reliability

  • Elif Köksal Ersöz,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Current address: LTSI, INSERM U1099, University of Rennes 1, Rennes, France

    Affiliation Project Team MathNeuro, INRIA-CNRS-UNS, Sophia Antipolis, France

  • Carlos Aguilar,

    Roles Software, Visualization, Writing – review & editing

    Affiliation Lab by MANTU, Amaris Research Unit, Route des Colles, Biot, France

  • Pascal Chossat,

    Roles Conceptualization, Formal analysis, Methodology, Writing – original draft, Writing – review & editing

    Affiliations Project Team MathNeuro, INRIA-CNRS-UNS, Sophia Antipolis, France, Université Côte d’Azur, Laboratoire Jean-Alexandre Dieudonné, Nice, France

  • Martin Krupa,

    Roles Conceptualization, Formal analysis, Methodology, Writing – original draft, Writing – review & editing

    Affiliations Project Team MathNeuro, INRIA-CNRS-UNS, Sophia Antipolis, France, Université Côte d’Azur, Laboratoire Jean-Alexandre Dieudonné, Nice, France

  • Frédéric Lavigne

    Roles Conceptualization, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Université Côte d’Azur, CNRS-BCL, Nice, France

Abstract

In this article we present a biologically inspired model of activation of memory items in a sequence. Our model produces two types of sequences, corresponding to two different types of cerebral functions: activation of regular or irregular sequences. The switch between the two types of activation occurs through the modulation of biological parameters, without altering the connectivity matrix. Some of the parameters included in our model are neuronal gain, strength of inhibition, synaptic depression and noise. We investigate how these parameters enable the existence of sequences and influence the type of sequences observed. In particular we show that synaptic depression and noise drive the transitions from one memory item to the next and neuronal gain controls the switching between regular and irregular (random) activation.

Introduction

The processing of sequences of items in memory is a fundamental issue for the brain to generate sequences of stimuli necessary for goal-directed behavior [1], language processing [2, 3], musical performance [4, 5], thinking and decision making [6] and, more generally, prediction [7–9]. Those processes rely on priming mechanisms in which a triggering stimulus (e.g. a prime word) activates items in memory corresponding to stimuli not actually presented (e.g. target words) [10, 11]. A given triggering stimulus can generate two types of sequences: on the one hand, the systematic activation of the same sequence is required to repeat reliable behaviors [12–16]; on the other hand, the generation of variable sequences is necessary for the creation of new behaviors [17–21]. Hence the brain has to face two opposite constraints: generating repetitive sequences or generating new sequences. Satisfying both constraints raises the question of the link between the type of sequence generated by the brain and the relevant biological parameters. Can a neural network with a fixed synaptic matrix switch between reproducing a sequence and producing new sequences? And which neuronal mechanisms are sufficient for such a switch in the type of sequence generated? The question addressed here is how changes in neuronal noise, short-term synaptic depression and neuronal gain make possible either repetitive or variable sequences.

Neural correlates of sequence processing involve cerebral cortical areas from V1 [16, 22] and V4 [14] to prefrontal, associative, and motor areas [23, 24]. The neuronal mechanisms involve a distributed coding of information about items across a pattern of activity of neurons [25–29]. In priming studies, neuronal activity recorded after presentation of a prime image shifts from neurons active for that image to neurons active for another image not presented, hence beginning a sequence of neuronal patterns [30–33]. Those experiments report that a condition for the shift between neuronal patterns of activity is that the stimuli have previously been learned as being associated. Considering that the synaptic matrix codes the relation between items in memory [34, 35], computational models of priming have shown that the activation of sequences of two populations of neurons relies on the efficacy of the synapses between neurons from these two populations [10, 36–39].

Turning to longer sequences, many of the models studied to date rely on the existence of steady patterns (equilibria) of saddle type, which allow for transitions from one memory item to the next [40–42]. Such models are well suited for reproducing systematically the same unidirectional sequence: as time evolves, neuronal patterns are activated in a systematic order. These works show that the generation of directional sequences relies on the asymmetry of the relations between the populations of neurons that are activated successively. Regarding the order of populations n, n+1, n+2 in a sequence, the directionality of the sequence is obtained thanks to two properties of the synaptic matrix. First, the synaptic efficacy increases with the order of the populations, that is, efficacy is weaker between populations one and two than between populations two and three [15, 40]. Second, the amount of overlap increases with the order of the populations [42]. Indeed, individual neurons respond to several different stimuli [43–45] and two populations of neurons coding for two items can share some active neurons [46, 47]. Models have proposed a Hebbian learning mechanism that determines synaptic efficacy as a function of the overlap between the populations [48, 49]. In models, the amount of overlap codes for the association between the populations and determines their order of activation in a sequence [11, 40, 42, 50]. These works identify properties of the synaptic matrix sufficient to generate systematic sequences. However, such properties of the synaptic matrix may not be necessary, and neuronal mechanisms alone may be sufficient to generate sequences.

Neural network models have pointed to neuronal gain as a key parameter that determines the ease of state transitions and the stability of internal representations [51]. Further, a cortical network model has shown that neuronal gain determines the amount of activation between populations of neurons associated through potentiated synapses [52]. The latter has shown that variable values of gain reproduce the variable magnitude of the activation of associates in memory (semantic priming) reported in schizophrenic participants compared to healthy participants [53–55]. However, these models considered state stability or the amount of activation, but neither the reliability nor the length of the sequences that can be activated. This points to a possible effect of neuronal gain but leaves open whether it plays a role in the regularity or variability of the sequences that can be activated.

In this work we consider the case of fixed synaptic efficacy and fixed overlap in order to focus on sufficient neuronal mechanisms that underlie the type of sequence, reliable or variable. The present study mathematically analyses a new and more general type of sequence in which the states of the network need not pass near saddle points. The model is based on a more general mechanism of transition from one memory item to the next, with the saddle pattern replaced by a saddle-sink pair (see [56] for a prototype of this mechanism of transition). As time evolves the sink and saddle patterns become increasingly similar, so that even a small random perturbation can push the system past the saddle to the next memory item. In the model these new dynamics alleviate constraints on the synaptic matrix: sequences form spontaneously through transitions between populations related by fixed overlap, without theoretical or practical restriction on the length of the sequences. We show that, in addition to regular (predictable) sequences which follow the overlap between the populations, our system also supports sequences with random transitions between learned patterns. We investigate how changes in parameters with a clear biological meaning, such as neuronal noise, short-term synaptic depression (STD for short) and neuronal gain, can control the reliability of the sequences.

Our model is mainly deterministic; however, small noise is needed to facilitate transitions from one state to the next. As in [40] we used small noise to activate regular transitions, and, unlike in other contexts, e.g. [42], we also used small noise for random activations. In the context of large noise (stochastic systems), it is difficult to generate regular sequences if white noise is used. This is the main reason why we decided to take an almost deterministic approach. As our goal was to understand the possible effects of deterministic dynamics, we chose time-independent white noise.

Model

The focus of this paper is to present a mechanism of sequential activation of memory items in the absence of increasing overlap, increasing synaptic conductance, or any other feature forcing directionality of the sequences. We present this mechanism in the context of a simple system; the idea, however, is general and can be implemented in detailed models. We use the neural network model

$$\dot{x}_i = x_i(1-x_i)\Big(-\mu x_i + \sum_{j=1}^{N} J^{\max}_{ij}\, s_j x_j - \lambda \sum_{j \neq i} x_j - I\Big) + \eta_i, \tag{1}$$

$$\dot{s}_i = \frac{1-s_i}{\tau_r} - U s_i x_i, \tag{2}$$

as in [40], with the variables xi ∈ [0, 1] representing normalised averaged firing rates of excitatory neuronal populations (units), and si ∈ [0, 1] controlling STD. The limiting firing rates xi = 0 and xi = 1 correspond respectively to the resting and excited states of unit i. Any set (x1, …, xN) with xi = 0 or 1 (i = 1, …, N) defines a steady, or equilibrium, pattern for the network. In the classical paradigm the learning process results in the formation of stable patterns of the network. Retrieving memory occurs when a cue puts the network in a state which belongs to the basin of attraction of the learned pattern. Eq (1) is usually formulated using the activity variable ui (average membrane potential) rather than xi, with xi related to ui through a sigmoid transfer function. Our formulation, in which the inverse of the sigmoid is replaced by a linear function with slope μ, was shown to be convenient for finding sequential retrievals of learned patterns, see [40].

The parameters in Eq (1) are μ (or its inverse γ = μ−1, the gain, supposed identical across units, i.e. the slope of the activation function of the neuron [57]), λ, the strength of a non-selective inhibition (inhibitory feedback due to excitation of interneurons), and $J^{\max}_{ij}$, the maximum weight of the connection from unit j to unit i. The parameter I can be understood as feedforward inhibition [58] or as the distance to the excitability threshold. This parameter was used by [11, 40, 50]. Note that I controls the stability of the completely inactive state (xi = 0 for all i). In this work we set I to 0, which means that the inactive state is marginally stable (see Section “Marginal stability of the inactive state” in S1 Appendix). This is reminiscent of the up state [59], characterised by neurons being close to the firing threshold. Finally, ηi is a noise term which can be thought of as a fluctuation of the firing rate due to the random presence or suppression of spikes. In our simulations we considered white noise with the additional constraint of pointing towards the interior of the interval [0, 1]. Other types of noise can be chosen; this does not affect the mechanisms which we have investigated.

STD, reported in cortical synapses [60], rapidly decreases the efficacy of synapses that transmit the activity of the pre-synaptic neuron. This is modeled by Eq (2), where τr is the synaptic time constant of the synapse and U is the fraction of used synaptic resources. In order to be more explicit for the rest of the manuscript, we re-write Eq (2) as

$$\dot{s}_i = \frac{1 - s_i(1 + \rho x_i)}{\tau_r}, \tag{3}$$

where ρ = τr U. Eq (3) immediately shows the respective roles played by τr and the synaptic product ρ. The synaptic time constant τr produces slow dynamics when τr ≫ 1, while ρ determines the value of the limiting state of the synaptic strength. More precisely, for an active unit xi = 1 with initially maximal synaptic strength si = 1, si decays towards the value S = (1 + ρ)−1 by following $s_i(t) = S + (1 - S)\,e^{-(1+\rho)t/\tau_r}$, with the decay time constant τr/(1 + ρ), which depends on τr. For an inactive unit xi = 0, si recovers to si = 1 by following $s_i(t) = 1 + (s_i(0) - 1)\,e^{-t/\tau_r}$, with the recovery time constant τr, where si(0) is the synaptic value at the beginning of the recovery process.
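As a quick numerical check of these statements, the following minimal Python sketch (our illustration, not code from the paper) integrates Eq (3) for a single active unit and verifies that si settles near S = (1 + ρ)−1 after a few decay time constants:

```python
import numpy as np

# Forward-Euler integration of Eq (3) for one unit; rho = tau_r * U.
rho, tau_r, dt = 1.8, 900.0, 0.01
S = 1.0 / (1.0 + rho)                  # limiting synaptic strength of an active unit
t_decay = tau_r / (1.0 + rho)          # decay time constant

s, x = 1.0, 1.0                        # active unit (x = 1), fully recovered synapse
for _ in range(int(5 * t_decay / dt)): # integrate over five decay time constants
    s += dt * (1.0 - s * (1.0 + rho * x)) / tau_r
print(abs(s - S) < 1e-2)               # True: s has essentially reached S
```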

The main difference between the model of this paper and that of [40] is the form of the matrix of excitatory connections Jmax:

$$J^{\max} = \begin{pmatrix} 1 & 1 & 0 & \cdots & 0 \\ 1 & 2 & 1 & \ddots & \vdots \\ 0 & 1 & 2 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & 1 \\ 0 & \cdots & 0 & 1 & 1 \end{pmatrix}. \tag{4}$$

This matrix is derived by the application of the simplified Hebbian learning rule of [61] (details provided in [40]) using the collection of learned patterns

$$\xi^i = (0, \dots, 0, 1, 1, 0, \dots, 0), \qquad i = 1, \dots, N-1, \tag{5}$$

where the two excited units are i and i + 1. Conditions for the stability of these patterns in the absence of STD were derived in [40]. Note that the overlap between ξi and ξi+1 is constant (one unit). By the application of the learning rule the coefficients of Jmax are given by the formula

$$J^{\max}_{ij} = \sum_{k=1}^{P} \xi^k_i\, \xi^k_j, \qquad P = N - 1. \tag{6}$$

Consequently the matrix Jmax is made up of identical (1 2 1) blocks along the diagonal, so that there is no increase in either the overlap or the synaptic efficacy (weight) along any possible chain. We prove mathematically and verify numerically that Eq (1) admits a chain of latching dynamics passing through the patterns ξi, i = 1, …, N − 2, either in the forward or in the backward direction depending on the activation, as well as shorter chains. The simplest way to switch dynamically from the learned pattern ξi to ξi+1 is by having a mechanism such that unit i passes from the excited to the rest state, then unit i + 2 passes from the rest to the excited state. STD can clearly result in the inhibition of unit i. However, in the framework of [40] it was not possible to obtain the spontaneous excitation of unit i + 2 with the connectivity matrix Eq (4), because it was required that the upper and lower diagonal coefficients of Jmax be strictly increasing with the order i.
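The construction of Jmax from Eqs (5) and (6) can be reproduced in a few lines. The sketch below (our illustration) builds the P = N − 1 patterns and applies the Hebbian sum:

```python
import numpy as np

# Eqs (5)-(6): patterns xi^i with excited units i and i+1, and J_ij = sum_k xi^k_i xi^k_j.
N = 8
xi = np.zeros((N - 1, N))
for i in range(N - 1):
    xi[i, i] = xi[i, i + 1] = 1.0  # pattern xi^{i+1} (1-based) excites units i+1, i+2

Jmax = xi.T @ xi                   # Hebbian outer-product sum, Eq (6)
print(Jmax.astype(int))
# Tridiagonal: diagonal (1, 2, ..., 2, 1), sub- and super-diagonals equal to 1,
# i.e. identical (1 2 1) blocks, with no increase of weight or overlap along the chain.
```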

Connectionist models have shown the effects of fast synaptic depression on semantic memory [62] and on priming [11, 50]. Recall that fast synaptic depression is one of the aspects of short term synaptic plasticity [63]. For the sake of simplicity we neglect the other aspects of short term synaptic plasticity, but their effects on sequential activation of memory items are likely to be significant [64], and we intend to investigate them in further work. Fast synaptic depression contributes to the deactivation of neurons initially active in a pattern (because they activate each other less and less) in favor of the activation of neurons active in a different but overlapping pattern (because newly activated neurons can strongly activate their associates in a new pattern). The combination of neuronal noise and fast synaptic depression enables latching dynamics in any direction depending on the initial bias due to random noise. Indeed, when the parameters lie within a suitable range, the action of STD has the effect of creating a “dynamic equilibrium” with a small basin of attraction. This dynamic equilibrium could be the intermediate pattern in which only unit i + 1 is excited, denoted $\hat{\xi}^i$ in what follows, or a pattern for which the value of xi is between 0 and 1. Subsequently the noise allows the system to eventually jump to ξi+1, the process being repeated sequentially between all or part of the learned patterns. This noise-driven transition is what we call an excitable connection, by reference to a similar phenomenon discussed in [56]. Chains of excitable connections can also be activated or terminated by noise. Last but not least, we show that our system, depending on the value of the parameter μ, hence of the neuronal gain γ = 1/μ, will either follow the sequence indicated by the overlap or execute a random sequence of activations. Changes in neuronal gain change the sensitivity of a neuron to its incoming activation [57, 65, 66], and are reported to impact contextual processing [67], to enhance the quality of neuronal representations [68] and to modulate activation between populations of neurons so as to reproduce priming experiments [52]. Here we show how changes in neuronal gain switch the network’s behavior between repetitive (reliable) sequences and variable (new) sequences.

We proceed to present the results in more detail, as follows. In Sec. Case study: a system with N = 8 excitable units we present simulations for the network with N = 8, which serves as an example of the more general construction. In Sec. Analysis of the dynamics we sketch the methods we use to search for, or verify the existence of, the chains. In Sec. Irregular chains and additional numerical results we discuss irregular chains of random activations versus regular chains defined by the overlap. Simulations were run using the Euler-Maruyama method with time steps of 0.01 ms.
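For concreteness, here is a minimal Euler-Maruyama sketch of Eqs (1)-(3) with the matrix of Eq (4). It is our own illustration, assuming the form of the equations given above; the constraint that noise point towards the interior of [0, 1] is approximated here by clipping:

```python
import numpy as np

# Euler-Maruyama simulation of Eqs (1)-(3) with the tridiagonal Jmax of Eq (4).
N = 8
mu, lam, I = 0.41, 0.51, 0.0              # parameters of Fig 1
rho, tau_r, eta = 1.8, 900.0, 0.02
dt, n_steps = 0.01, 300_000               # 0.01 ms time steps, as in the simulations

J = (np.diag([1.0] + [2.0] * (N - 2) + [1.0])
     + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))

x = np.zeros(N); x[0] = x[1] = 1.0        # initialise at pattern A (units 1-2)
s = np.ones(N)                            # fully recovered synapses
rng = np.random.default_rng(0)

traj = np.empty((n_steps, N))
for k in range(n_steps):
    drift = x * (1 - x) * (J @ (s * x) - mu * x - lam * (x.sum() - x) - I)
    x = np.clip(x + dt * drift + eta * np.sqrt(dt) * rng.standard_normal(N), 0.0, 1.0)
    s += dt * (1 - s * (1 + rho * x)) / tau_r     # STD, Eq (3)
    traj[k] = x                           # record firing rates for later analysis
```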

Results

Case study: A system with N = 8 excitable units

We consider sequences of seven learned patterns ξ1, …, ξ7 (named A to G) encoded by eight units x1, …, x8. The sequence represents the sequential activation of the pairs of units 1-2, 2-3, 3-4, 4-5, 5-6, 6-7 and 7-8, corresponding to patterns A, B, C, D, E, F and G respectively, with an overlap of one unit between consecutive patterns (see Fig 1a). Learning is reported to rely on changes in the efficacy of the synapses between neurons [69] through long term potentiation (LTP) and long term depression (LTD) [70–72]. As a consequence, LTP/LTD potentiates/depresses synapses between units coding for patterns as a function of their overlap, that is, synapses between units coding for overlapping patterns are more potentiated. Due to the constant overlap, all synapses between overlapping patterns are equal. Note that the matrix Jmax is learned as a function of the overlap between patterns without imposing any sequence. A consequence is that the learning of independent pairs of patterns generates a matrix that allows for the activation of sequences.

Fig 1. Directional sequences of an endpoint stimulus-driven system.

(a) Each numbered circle represents a unit. Consecutive units encode a pattern. Except for x1 and x8, each unit participates in two patterns. A forward sequence is the activation of units in increasing order. A backward sequence is the activation of units in decreasing order. (b) Left panel: System initialised from the pattern A follows the forward sequence until the pattern F. Right panel: System initialised from the pattern G follows the backward sequence until the pattern B. The same colour code is used to represent unit indices in (a) and (b). Parameters: μ = 0.41, λ = 0.51, I = 0, ρ = 1.8, τr = 900, η = 0.02.

https://doi.org/10.1371/journal.pone.0231165.g001

A system of N = 8 excitatory units can encode P = N − 1 = 7 regular patterns in Jmax (see Fig 1a). Encoded memory items can be retrieved either spontaneously (in a noisy environment) or when the memory network is triggered by an external cue [42, 73]. Units x1 and x8 are the least self-excited units, with $J^{\max}_{11} = J^{\max}_{88} = 1$ (versus 2 for the other units); thus it is very unlikely to activate them unless they are part of the initial activity state. Hence, the longest chain has P − 1 = 6 consecutive patterns.

Directional sequences from a stimulus-driven pattern in the sequence.

Starting from the first pattern A, the directional activation corresponds to the sequence ABCDEFG (Fig 1b, left panel). The forward direction is imposed by Jmax because x1 is less self-excited, since $J^{\max}_{11} = 1 < J^{\max}_{22} = 2$. Hence, while the synaptic variables s1 and s2 are equal and decreasing together as the system lies in the vicinity of ξ1, x1 is deactivated before x2. In the same interval of time s2 < s3 and s3 − s2 increases, so that x2 becomes unstable before x3 and the system may now converge to ξ2. The process can be repeated between ξ2 and ξ3, etc. Similarly, starting from the last pattern G gives the reverse direction (GFEDCBA) to the system (Fig 1b, right panel).
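This ordering can be checked directly from Eqs (1) and (4): while the system sits at ξ1, the recurrent inputs to the two active units are

$$\sum_{j} J^{\max}_{1j} s_j x_j = s_1 + s_2, \qquad \sum_{j} J^{\max}_{2j} s_j x_j = s_1 + 2 s_2,$$

so unit 2 receives an extra s2 > 0 of recurrent excitation, and unit 1 is the first to lose stability as s1 and s2 decay.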

Initialising the system from a middle pattern ξi does not introduce any direction, since the two active units of ξi are equally excited. While their synaptic variables are decreasing together, depending on the noise at the moment when ξi becomes unstable, either ξi−1 or ξi+1 is activated with equal probabilities. Fig 2 shows the response of the system starting from a mid-point pattern D. The activated sequence can go in either direction DEFG or its reverse DCBA. The random choice for a sequence is driven by a bias in the noise at the time of stimulus-driven activation of the mid-point pattern.

Fig 2. Directional sequences of a midpoint stimulus-driven system.

(a) Each numbered circle represents a unit. Consecutive units encode a pattern. Except for x1 and x8, each unit participates in the encoding of two patterns. When the system is initialised from the pattern D, it follows either the “DCBA” or the “DEFG” sequence. (b) Left panel: System initialised from the pattern D follows the “DCBA” sequence until the pattern B. Right panel: System initialised from the pattern D follows the “DEFG” sequence until the pattern F. The same colour code is used to represent unit indices in (a) and (b). Parameters: μ = 0.414, λ = 0.51, I = 0, ρ = 1.8, τr = 900, η = 0.02.

https://doi.org/10.1371/journal.pone.0231165.g002

Noise-driven random sequence from a mid-point pattern in the sequence.

The units that participate in two patterns (overlapping units) have stronger self-excitation, as manifested by the diagonal of Jmax. These units (xi, i ∉ {1, 8}) are likely to be excited by random noise and can activate the other units with which they encode a pattern. After a pattern ξi or the associated intermediate pattern $\hat{\xi}^i$ is randomly excited by noise, the system can follow either ξi−1 or ξi+1. The robustness of the activity depends on the system parameters. Fig 3 shows an example of spontaneous activation of a mid-point pattern D, where the oriented sequence can be either DEFG or its reverse DCBA. As for the system initialised from a middle pattern, the random choice of a direction is driven by a bias in the noise at the time of the noise-driven activation.

Fig 3. Directional sequences of a spontaneously activated system.

A system activated spontaneously by random noise can move in the backward (a) or forward (b) direction. Parameters: μ = 0.21, λ = 0.51, I = 0, ρ = 1.8, τr = 300, η = 0.04.

https://doi.org/10.1371/journal.pone.0231165.g003

Sensitivity of the dynamics upon parameter values.

We have seen that patterns can be retrieved sequentially when the system is triggered by a cue or spontaneously by noise. However, the effectiveness of this process depends on the values of the parameters in Eqs (1) and (2). The dynamics of the system can follow part of the sequence and then either terminate on one pattern ξi with i < N − 1, or converge to a non-learned pattern. Moreover, we identified two different dynamical scenarios by which a sequence can be followed, depending mainly on the value of μ. This will be analyzed in Sec. Analysis of the dynamics. Here we comment on numerical simulations which highlight the dependence of the sequences on the parameter values.

The behavior of the model was tested on simulation data by measuring the length of the regular sequences generated by the network (chain length) and by computing the distance travelled from the initial pattern after irregular sequences (distance). Chain length and distance were analyzed by fitting linear mixed-effects models (LMM) to the data, using the lmer function from the lme4 package (Version 1.1–7) in R (Version R-3.1.3 [74]). All predictor parameters (inverse of the gain μ, inhibition λ, time constant τr, synaptic constant ρ and noise η) were defined as continuous variables and were centered on their mean. The optimal structure was determined by comparing the goodness of fit of a range of models using Akaike’s information criterion (AIC); the model with the smallest AIC was selected, corresponding to the model with main effects and interactions between all of the parameters. The significance of the effects was tested using the lmerTest package. For the sake of clarity of the text, we flag the levels of significance with one star (*) if p-value < 0.05, two stars (**) if p-value < 0.01, and three stars (***) if p-value < 0.001.
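The paper’s fits use R’s lmer; for readers working in Python, a rough analogue (a sketch with hypothetical column names and grouping, not the authors’ code) can be written with statsmodels:

```python
import statsmodels.formula.api as smf

# Hypothetical data frame `df`: one row per trial, with the measured chain_length,
# mean-centred predictors mu, lam, tau_r, rho, eta, and a column `cell` identifying
# the parameter combination the trial belongs to (the grouping is our assumption).
def fit_lmm(df):
    model = smf.mixedlm("chain_length ~ mu * lam * tau_r * rho * eta",
                        df, groups=df["cell"])
    result = model.fit(reml=False)      # ML fit, so log-likelihoods are comparable
    print(result.summary())             # effect estimates and p-values
    return -2 * result.llf + 2 * result.params.size   # AIC, for model selection
```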

Fig 4 shows time series of the full or partial completion of sequences of retrievals (for N = 8 units) for two different values of the noise amplitude, η = 0.02 (first row) and η = 0.04 (second and third rows). In each case the first two columns show time series with the STD parameters ρ = 1.8 and τr = 300, while the last column corresponds to the choice ρ = 1.8 and τr = 900. By fixing the synaptic constant ρ = 1.8, we ensure that the synaptic variables si decay to the same value S with a decay time depending on τr (see Sec. Model). The global inhibition coefficient λ is set at 0.51 in rows Fig 4a and 4b and at λ = 0.56 in row Fig 4c. For each choice of the STD parameters, μ takes two values, either μ = 0.41 or μ = 0.21.

Fig 4. Response of the system initialised from pattern A (units 1-2) to different levels of noise η and system parameters λ, μ, τr.

Synaptic variables are faster in the first two columns (τr = 300) than in the last column (τr = 900). In all simulations I = 0. Row (a) η = 0.02, λ = 0.51. The system with fast synapses can follow the longest sequence from A (units 1-2) to F (units 6-7) for μ = 0.41 (the first panel) but not for μ = 0.21 (the second panel), while the slow synapses can trigger the longest sequence (the third panel). Row (b) η = 0.04, λ = 0.51. Increasing the noise amplitude enables the activation of the whole sequence with fast synapses (the first two panels), whereas the slow synapses give either very short sequences or three co-active units (the third panel). Row (c) η = 0.04, λ = 0.56. Increasing the global inhibition λ regulates the transition for slow synapses (the third panel), whereas the system with fast synapses and μ = 0.41 (the first panel) randomly activates learned patterns and yields short regular and irregular sequences. On the other hand, the system with μ = 0.21 and fast synapses (the second panel) can preserve a regular sequence.

https://doi.org/10.1371/journal.pone.0231165.g004

Observe that the sequence and pattern durations are shorter in the system with fast synapses (τr = 300) than in the one with slow synapses (τr = 900). In the case of weaker noise (Fig 4a) and fast synapses, the system follows the sequence ABCDEF when μ = 0.41 whereas it stops at the pattern B when μ = 0.21. In other words, increasing μ (decreasing neural gain) in the system with fast synapses recruits more units sequentially. Another way to increase the chain length for μ = 0.21 is to slow down the synaptic variables. The system with slow synapses can follow the sequence ABCDEF for a wide range of μ values. In fact the two different values of μ in Fig 4 correspond to the two different dynamical scenarios evoked at the beginning of this section. This point will be developed in Sec. Analysis of the dynamics. When noise is stronger (Fig 4b) the picture is different: the full sequence can be completed with fast synapses even for μ = 0.21. However, the sequence is shorter with slow synapses and the system quickly explores unexpected patterns, like the one with three excited units 2, 3, 4 around t = 600 (which is not a learned pattern) in the third panel of Fig 4b. This type of activity can be observed for a wide range of μ values with slow synapses.

Comparison between Fig 4b and 4c exemplifies the effect of changing the inhibition λ and μ (inverse of neural gain) for the same noise amplitude. Increasing the inhibition coefficient λ regulates the transition for slow synapses, while fast synapses and high values of μ (low neural gain) randomly activate the learned patterns and yield short sequences. The latter is also due to the self-inhibition in the system given by the −μxi term in Eq (1), which facilitates the deactivation of an active unit but, if too high, makes it difficult for an inactive unit to become active. Notice that the self-inhibitory effect seen with fast synapses can be compensated by slow synapses and small μ (high neural gain), which can regularize sequences.

Length of a chain.

When the patterns in a chain are explored in the right order by the system we call the chain regular. As we saw in Sec. Sensitivity of the dynamics upon parameter values, it can happen that only part of the full regular chain is realised before the system stops or starts exploring patterns in a different order, hence activating an irregular chain. We call the partial regular chain a regular segment, and its length is the number of patterns it contains. Here we investigate the maximal length that a regular segment starting at pattern A can attain. This length is the rank of the last activated pattern over simulations. It depends on the noise η but also on the neuronal parameters (μ, λ) and on the synaptic parameters (τr, ρ). As can be read from Eq (3), ρ characterizes the limiting decay state of the synaptic variable. The possible impact of τr and ρ on the dynamics has been investigated in [40]. It has been shown, in a system with a structured connectivity matrix, that deactivation of a unit is harder when ρ is small, while too high a ρ prevents the recruitment of new units. Thus, ρ determines the reliability of a sequential activation. On the other hand, the noise threshold needed for the activation of a chain decreases with increasing τr at constant ρ.
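Operationally, the length of a regular segment can be read off the order in which the units are first activated. A sketch of such a measurement (our illustration; the 0.5 threshold is an assumption) is:

```python
import numpy as np

def regular_segment_length(traj, threshold=0.5):
    """Rank of the last pattern reached in order, for a run started at pattern A.

    traj: array of shape (n_steps, N) of firing rates, units 1 and 2 active at t = 0.
    """
    n = traj.shape[1]
    first = np.full(n, np.inf)            # first-activation time of each unit
    for i in range(n):
        hits = np.nonzero(traj[:, i] > threshold)[0]
        if hits.size:
            first[i] = hits[0]
    k = 2                                 # units 1 and 2 start active
    while k < n and np.isfinite(first[k]) and first[k] > first[k - 1]:
        k += 1                            # the next unit was recruited in order
    return k - 1                          # k units in order correspond to k - 1 patterns
```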

The relation between the model parameters and the sequential activation is more subtle with the learning matrix (4), derived from the most simplified Hebbian learning rule, than with a structured one as in [40]. In Figs 5 and 6 we present the mean chain length for two different noise intensities, η = 0.02 and η = 0.04, respectively. In each figure the synaptic parameters are ρ ∈ {1.2, 2.4} and τr ∈ {300, 900}. The neural parameters λ and μ are varied within a range ensuring the existence of chains of at least length 2. Details of the sequences are shown in S1 and S2 Figs. Statistics on chain length show main effects of each parameter (η (***), μ (**), λ (***), τr (***) and ρ (***)). They also show an interaction between the five parameters altogether (***).

Fig 5. Average chain length in a regular segment for noise η = 0.02.

Synaptic time constant τr = 300 in panels (a) and (c), and τr = 900 in panels (b) and (d). (a, b) Activity for ρ = 1.2. The chain length increases with μ (1/neural gain) and τr. The global inhibition value λ should be high enough for a sequential activation, but the chain length decreases if λ is too high. (c, d) Activity for ρ = 2.4. The chain length increases with μ and τr, but decreases with λ. See S1 Fig for the details of the activity.

https://doi.org/10.1371/journal.pone.0231165.g005

Fig 6. Average chain length in a regular segment for noise η = 0.04.

Synaptic time constant τr = 300 in panels (a) and (c), and τr = 900 in panels (b) and (d). (a, b) Activity for ρ = 1.2. The chain length increases with τr. Chains are longer for intermediate values of μ (intermediate values of neural gain), but shorter when μ is too high (low neural gain). Increasing the inhibition λ facilitates regular pattern activation for low values of μ, see for instance λ = 0.501 vs λ = 0.601 in (a), but this effect ceases if inhibition is too strong. (c, d) Activity for ρ = 2.4. The chain length increases with τr, but decreases with λ. Increasing μ lengthens the chains more with τr = 900 than with τr = 300. See S2 Fig for the details of the activity.

https://doi.org/10.1371/journal.pone.0231165.g006

In all panels of Fig 5, where η = 0.02, the average chain length increases with μ, unless μ is too large. In Fig 5a and 5b we observe a sharp increase in the chain length for λ ∈ {0.551, 0.601} (less pronounced for λ ∈ {0.501, 0.651}). The sharp increase in the average chain length occurs when the bifurcation scenario changes around μ ≈ μ* (for the definition of μ* see Sec. Analysis of the dynamics and section “Dynamic bifurcation scenarios” in S1 Appendix). While middle-range inhibition leads to longer sequences for ρ = 1.2, weak inhibition is more suitable for ρ = 2.4 (Fig 5c and 5d). Indeed, λ and ρ have interacting effects on chain length (***).

In Fig 6, the noise level is increased to η = 0.04. Generally speaking, increasing the noise level prolongs the chains by facilitating activation. Especially for τr = 900 and λ = 0.651, the average chain length is considerably higher with η = 0.04 (Fig 6) than with η = 0.02 (Fig 5). On the other hand, the relation between the average chain length and μ becomes more delicate. In Fig 6a and 6b, for ρ = 1.2, the average chain length peaks at intermediate values of μ. Strong inhibition λ = 0.651 prolongs the chains for small values of μ and slow synapses, whereas intermediate values of inhibition λ ∈ {0.551, 0.601} favor longer chains as μ increases. For ρ = 2.4 (Fig 6c and 6d), chains are longer under weak inhibition λ = 0.501. However, increasing μ under weak inhibition considerably shortens the chains. The average chain length increases with μ under strong inhibition, λ ∈ {0.601, 0.651}. Parameters λ and μ have interacting effects on chain length (***).

Our analysis unveils a nonlinear relation between μ and the chain length. When μ is small, an increase of μ provokes an increase in the length of the chain. However, in most cases we find that the chain lengths are maximal for intermediate values of μ. This is intuitively clear: large gain (small μ) prevents the units from deactivating, making the transition from one pattern to the next difficult, while small gain prevents the next unit from activating. Another factor is the occurrence of the transition from Scenario 1 to Scenario 2 (see Sec. Analysis of the dynamics and section “Dynamic bifurcation scenarios” in S1 Appendix). The synaptic product ρ and the global inhibition parameter λ also influence the system’s behaviour. For ρ = 1.2, inhibition in the middle range leads to longer sequences, whereas weak inhibition is more suitable for ρ = 2.4.

Analysis of the dynamics

Latching dynamics is defined as a sequence (chain) of activations of learned patterns that de-activate due to a slow process (e.g. adaptation, here synaptic depression), allowing for a transition to the next learned pattern in the sequence [11, 75]. Here we refine this description using the language of dynamics and multiple timescale analysis. The main idea is to treat the synaptic variables si as slowly varying parameters, so that the evolution of the system becomes a movie of the dynamical configurations of the units xi. The firing rate Eq (1) is, moreover, well adapted to the analysis of latching dynamics. Indeed, from the form of Eq (1) (assuming for the moment that noise is set to 0) one can immediately see that whenever xi is set to 0 or 1, this variable stays fixed for all time. Therefore any face of the hypercube [0, 1]N defined by two coordinates (xi, xj), the other coordinates being fixed at 0 or 1, is invariant under the flow of Eq (1). In other words, any trajectory starting on the face stays entirely on it. This is of course also true for the edges and vertices at the boundary of each face. Each vertex is an equilibrium of Eq (1) and connections between such equilibria can be realised through edges of the hypercube, which greatly simplifies the analysis.

When the couple (xi, si) of unit i is set at (1, 1), xi is fixed, as we have seen, but STD Eq (3) induces an asymptotic decrease of the synaptic variable towards the value S = (1 + ρ)−1. This in turn weakens the effective synaptic weight $J^{\max}_{ji} s_i$ in Eq (1), which may destabilize ξi in the direction of $\hat{\xi}^i$. Considering si as a slowly varying parameter, this can be seen as a dynamic bifurcation of an equilibrium along the edge from ξi to $\hat{\xi}^i$. The following scenario was described in [40]. For the sake of simplicity we now assume i = 1 (the same arguments hold for any i). The patterns ξ1, $\hat{\xi}^1$ and ξ2 lie at the vertices of a face, which we call Φ, generated by the coordinates x1 and x3, with x2 = 1 and the rest of the coordinates being set to 0.

Fig 7 shows three successive snapshots of the movie on Φ. The left panel illustrates the initial configuration, with the stable pattern ξ1 corresponding to the top left vertex. Then at some time T0 an equilibrium bifurcates out of $\hat{\xi}^1$ in the direction of ξ1 (here the ‘slow’ STD time plays the role of bifurcation parameter, see the middle panel). After a time T1 (right panel) this bifurcated equilibrium disappears in ξ1, which becomes unstable, and a connecting trajectory is created along the edge with $\hat{\xi}^1$. Simultaneously a trajectory connects $\hat{\xi}^1$ to ξ2 along the corresponding edge. It results that the following sequence of connecting trajectories is created: ξ1 → $\hat{\xi}^1$ → ξ2. As a result, any state of the system initially close to ξ1 will follow the ‘vertical’ edge towards $\hat{\xi}^1$, then the ‘horizontal’ edge towards ξ2. The process can repeat itself from ξ2 to ξ3 and so on. It was shown in [40] that, in order to work, this scenario requires that the coefficients of the matrix Jmax satisfy the relation $J^{\max}_{12} < J^{\max}_{23}$ (more generally $J^{\max}_{i,i+1} < J^{\max}_{i+1,i+2}$, i = 1, …, P − 1, for the existence of a chain of P patterns), a condition which does not hold with Eq (4).

Fig 7. Representative phase portraits of the fast dynamics on the face Φ corresponding to the scenario of [40].

The phase portraits are shown at three different ‘slow’ STD times. Green, orange and purple dots represent the stable, completely unstable and saddle equilibria, respectively. The blue lines illustrate segments of a trajectory starting near ξ1. (a) The learned patterns ξ1 and ξ2 are stable for t < T0. (b) The bifurcation of a saddle point on the edge between ξ1 and $\hat{\xi}^1$ happens for T0 < t < T1. (c) Pattern ξ1 has become unstable along the edge whilst ξ2 is still stable for t > T1. The saddle point in the interior of Φ merges with the bifurcated equilibrium before (c) is realised.

https://doi.org/10.1371/journal.pone.0231165.g007

The results of this paper rely on the observation that the existence of the connections for t > T1 (right panel of Fig 7) is not needed for the occurrence of chains. We will show below that for the connectivity matrix J given by Eq (4) the connections exist for at most a unique value of t = T1, and yet regular chains or segments can occur. For t > T1 the connecting trajectory along the edge is broken by a sink (stable equilibrium) close to $\hat{\xi}^1$. In such a case strong enough noise perturbations could push a trajectory out of the basin of attraction of the sink to the basin of attraction of ξ2. As a result the trajectory would get past $\hat{\xi}^1$ and converge towards ξ2, as expected. When such chains driven by noise exist, we call them excitable chains by reference to [57], who introduced the concept. In the case when the connections exist for t = T1, chains occur with noise of arbitrarily small amplitude, because as t approaches T1 from above the amplitude of noise needed to jump over to the basin of attraction of ξ2 converges to 0. We extend the terminology excitable chain to this case also.

Under this new scheme of excitable chains the number of possible transitions is much larger and multiple outcomes are possible. We have identified two scenarios (named 1 and 2) by which these excitable chains can occur in our problem. Typical cases are illustrated in Fig 8. As in Fig 7, snapshots of the dynamics at three different ‘slow’ times are shown. The red line marks the boundary of the basin of attraction of ξ2 and the dashed circles mark the closest distances for a possible stochastic jump out of it. In both scenarios a completely unstable equilibrium point exists on the edge from ξ1 to the unnamed vertex of Φ, which corresponds to the pattern (1, 1, 1, 0, …, 0) (not a learned pattern).

Fig 8. Phase portraits of the fast dynamics on the face Φ at three different ‘slow’ STD times.

Stable patterns are coloured in green. The red trajectories are separatrices between the basins of attraction of the stable equilibria. The blue lines illustrate segments of a trajectory starting near ξ1. Box (1), panels (a, b, c): phase portraits illustrating the mechanisms that can lead to the transition ξ1 → ξ2 with excitable connections in Scenario 1 as time evolves. (a) Trajectory starting near ξ1 converges to ξ1. (b) Trajectory follows the saddle between ξ1 and $\hat{\xi}^1$ on the x1-axis. (c) Trajectory “jumps” out of the basin of attraction of $\hat{\xi}^1$ under the effect of noise and converges towards ξ2. Box (2), panels (d, e, f): phase portraits illustrating the mechanisms that can lead to the transition ξ1 → ξ2 with excitable connections in Scenario 2 as time evolves. (d) Trajectory starting near ξ1 converges to ξ1. (e) Trajectory remains in the basin of attraction of ξ1 defined by the saddle on the x1-axis. (f) Trajectory “jumps” out of the basin of attraction of ξ1 under the effect of noise and converges towards ξ2.

https://doi.org/10.1371/journal.pone.0231165.g008

Under Scenario 1 the pattern ξ1 first loses stability by a dynamic bifurcation of a sink (stable equilibrium) along the edge joining ξ1 to $\hat{\xi}^1$. The trajectory, which was initially in the basin of attraction of ξ1, follows this sink while it travels along the edge towards $\hat{\xi}^1$ (Fig 8b). In this time interval the distance between the sink and the attraction boundary of ξ2 (the red line in Fig 8b) is decreasing. Hence it becomes more likely for the noise to carry the trajectory over the ξ2 stability boundary, activating a transition to ξ2. The noise level necessary for the jump becomes smaller as the sink approaches $\hat{\xi}^1$. The critical transition shown in Fig 9a occurs as the sink reaches $\hat{\xi}^1$. At this moment, noise of arbitrarily small amplitude can cause the transition. Subsequently $\hat{\xi}^1$ is transiently stable and the distance to the ξ2 attraction boundary increases. Therefore, it becomes likely that the trajectory remains trapped in the basin of attraction of $\hat{\xi}^1$, as shown in Fig 8c. A further decrease of s2 occurring with the passage of time produces a loss of stability of $\hat{\xi}^1$, and the trajectory leaves Φ towards the inactive state. This is the mechanism of the termination of the regular part of the chain (see Fig 2 for an example).

Fig 9. Phase portraits corresponding to critical transitions in Scenario 1 and Scenario 2.

These phase portraits occur for a special value of the slow variables (s1, s2) at which $\hat{\xi}^1$ changes from a saddle to a sink. (a) The phase portrait of the critical transition corresponding to Scenario 1 contains a sequence of connecting trajectories from ξ1 to ξ2, allowing for a transition from ξ1 to ξ2 with arbitrarily small noise (e.g. for ρ = 2.4, τr = 900, λ = 0.55, μ = 0.45, the critical transition occurs at s1 ≈ 0.45, s2 ≈ 0.56). (b) In the phase portrait of the critical transition corresponding to Scenario 2, a transition from ξ1 to ξ2 would fail unless the noise amplitude is sufficiently large (e.g. for ρ = 2.4, τr = 900, λ = 0.55, μ = 0.15, the critical transition occurs at s1 ≈ 0.37, s2 ≈ 0.49).

https://doi.org/10.1371/journal.pone.0231165.g009

In Scenario 2 the overlap equilibrium point $\hat{\xi}^1$ becomes stable before ξ1 loses stability. At the critical transition (Fig 9b), a sequence of connections from ξ1 to ξ2 does not exist, hence the trajectory cannot pass from ξ1 to ξ2 unless the noise is sufficiently large. In this scenario regular chains tend to be substantially shorter.

As we have described above, noise is indispensable for crossing the attraction boundary of the next pattern, hence it is crucial for the chains we study. The minimum noise level required for jumps is scenario-dependent. In Scenario 1, as s1 and s2 decrease along the trajectory, the distance between the sink and the ξ2 attraction boundary also decreases, becoming arbitrarily small as the critical transition is approached. Thus the noise amplitude required for a jump also decreases to 0. In Scenario 2 the noise needs to be stronger to make the trajectory cross over the excitability thresholds of ξ1 and $\hat{\xi}^1$. On the other hand, we should also keep in mind that too strong a noise can hinder regular chains. Simulations (and analysis, see S1 Appendix) identify μ as the main control parameter determining the choice between these scenarios: the system follows Scenario 1 for higher values of μ and Scenario 2 for lower values of μ. This explains the difference in behavior seen in Fig 4 at lower and higher values of μ. The boundary between the two regions is defined by the value μ = μ* for which ξ1 and $\hat{\xi}^1$ change stability at the same time. For an analytic definition of μ* and a more detailed analysis, see S1 Appendix. In the next section we will see how these scenarios affect irregular activation.

Irregular chains and additional numerical results

The question we address here is what happens after the last pattern of a regular segment has been reached. We let (i, i + 1) denote the last pattern of the regular sequence. It follows from the previous analysis that the trajectory will remain near $\hat{\xi}^i$ for a considerable amount of time. Subsequently $\hat{\xi}^i$ can lose stability, in which case the trajectory passes to the inactive state, or it can remain stable indefinitely. By our choice of parameters (setting I = 0, see Sec. Model) the inactive state is marginally stable, which means that an irregular activation will eventually happen due to small noise, and the next pattern will be chosen at random. The mechanism of random activation from $\hat{\xi}^i$ is similar, with the exception that a chain reversal is likely to occur in this case. The transition time to an irregular activation may be significantly longer than in the case of regular transitions, which allows for the recovery of the synaptic variables. Table 1 assembles the features of regular and irregular activations, as predicted by the analysis.

Table 1. Features of regular and irregular transitions as predicted by the analysis of Sec. Analysis of the dynamics.

https://doi.org/10.1371/journal.pone.0231165.t001

Our numerical results confirm the trends of the analytical predictions, while showing at the same time that the dynamics of the model are more complex and that other parameters play a very important role. In particular, and interestingly, the longest regular segments are observed for μ values in Scenario 1 close to μ = μ*, the boundary value separating Scenarios 1 and 2 (see Sec. Analysis of the dynamics). Regular segments typically become shorter if μ is increased significantly beyond μ*, see Figs 5 and 6. This means that there exists an optimal μ window for the existence of long regular segments; in other words, the neuronal gain must be neither too small nor too large.

In this section we will discuss features of irregular activation, based on numerical results. To show statistics of irregular activation we define a measure of ‘distance’ Δ as follows. Suppose that at a time t, xp and xq are the two most recently activated units, with xp activated before xq. We define

$$\Delta = q - p.$$

Note that a regular chain satisfies Δ = 1 for all t until the last pattern is reached.
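A sketch of this measure (our illustration): given the chronological list of activated unit indices, Δ compares the two most recent ones.

```python
def activation_distance(activated_units):
    """Delta = q - p for the two most recently activated units, p before q."""
    p, q = activated_units[-2], activated_units[-1]
    return q - p

print(activation_distance([3, 4, 5]))   # 1:  regular continuation
print(activation_distance([3, 4, 3]))   # -1: chain reversal
print(activation_distance([3, 4, 7]))   # 3:  random reactivation
```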

We distinguished two cases of irregular continuation of chains: reversing the chain (Δ = −1) and random reactivation of new chains (Δ ≠ −1). We allow the possibility of Δ = 1, as such chains can occur in an irregular way for the following reason: an irregular activation is typically preceded by a complete deactivation, usually with a prolonged passage time. The new activation is random, so that an activation to the pattern which is next in the overlap sequence can occur, for instance the sequence [1, 2]→[2]→[]→[3]→[2, 3]…. Such reactivations are documented in detail in S3 and S4 Figs.

Recall the Scenarios 1 and 2 for transitions from one pattern to the next (Fig 8). The former occurs for “large” values of μ and the latter for lower values of μ. Let $\hat{\xi}^p$ (only unit p + 1 active) be the last intermediate state at the end of the regular segment. In Scenario 2, $\hat{\xi}^p$ either remains stable indefinitely or destabilizes after some (long) time due to the repotentiation of sp. The latter case corresponds to a dynamic scenario for a chain’s reversal. We refer to the prolonged residence of the system at $\hat{\xi}^p$ as pending, and note that it likely leads to a reversal. However, for high values of noise, random activation (Δ ≠ −1) may also occur. Scenario 1 is more likely to yield random re-activation, as $\hat{\xi}^p$ loses stability in the xp+1 direction with the decrease of sp+1, so that a transition to the inactive state is possible. Notice that in this case too, other Δ values are possible when the noise is large. Statistics on the distance of irregular chains show main effects of the parameters η (*), μ (***), λ (*) and ρ (**). The four parameters η, μ, λ and τr also have interacting effects on the activation distance (***).

Figs 10 and 11 show the average activation distance Δ for η = 0.02 and η = 0.04, respectively. For λ ∈ {0.501, 0.551} and small values of μ, the system remains on the last activated pattern. Activity with Δ = −1 is generally supported for ρ = 1.2, and for ρ = 2.4 if (μ, λ) are small. Indeed, λ, ρ and μ have interacting effects on the activation distance (***). We do not see any activation for small values of μ in Fig 10, which indicates that the activity remains either on a pattern ξ or on an intermediate state $\hat{\xi}$. Increasing μ introduces a backward activation, except for λ = 0.651, for which the new activity is in the forward direction. For ρ = 2.4, we observe that the average distance increases with λ and μ if τr = 300, but decreases with μ if τr = 900.

Fig 10. Probability of a new activity and average distance Δ for noise η = 0.02.

Synaptic time constant τr = 300 in panels (b) and (e), and τr = 900 in panels (c) and (f). (a, b, c) Activity for ρ = 1.2. (a) The probability of a new activity after the initial sequence for τr = 300 (upper panel) and τr = 900 (lower panel). Small values of μ (1/neural gain) tend to keep the system on the last activated pattern. The minimum μ value required for a new sequence decreases with the inhibition λ and with τr. (b, c) Activated patterns mostly remain at negative distances, except for λ = 0.651 with τr ∈ {300, 900}, and for high values of μ for λ = 0.601 with τr = 300. (d, e, f) Activity for ρ = 2.4. (d) The probability of a new activity after the initial sequence for τr = 300 (upper panel) and τr = 900 (lower panel). The minimum μ value required for a new sequence decreases with the inhibition λ and the time constant τr. A new activation is always observed for τr = 300, λ = 0.651, and for τr = 900, λ ∈ {0.601, 0.651}. (e) The average distance Δ is positive and increases with μ. (f) The average distance Δ is negative and increases with μ for λ = 0.501; it is positive and decreases with μ for λ ∈ {0.551, 0.601, 0.651}.

https://doi.org/10.1371/journal.pone.0231165.g010

Fig 11. Probability of a new activity and average distance Δ for noise η = 0.04.

Synaptic time constant τr = 300 in panels (b) and (e), and τr = 900 in panels (c) and (f). (a, b, c) Activity for ρ = 1.2. (a) The probability of a new activity after the initial sequence for τr = 300 (upper panel) and τr = 900 (lower panel). The minimum μ (1/neural gain) value required for a new sequence decreases with the inhibition λ and the time constant τr. (b, c) The average distance is negative except for μ > 0.3, λ = 0.651 and τr = 300. (d, e, f) Activity for ρ = 2.4. (d) The probability of a new activity after the initial sequence for τr = 300 (upper panel) and τr = 900 (lower panel). A new activation is always observed for τr = 300, λ ∈ {0.601, 0.651}, and in almost all trials for τr = 900. (e) The average distance Δ is positive except for λ = 0.501, where it increases from negative to positive with increasing μ. (f) The average distance Δ decreases in all cases except for λ = 0.501, where it increases from negative to positive with increasing μ.

https://doi.org/10.1371/journal.pone.0231165.g011

Increasing the noise level η (Fig 11) facilitates the activation of new patterns. As Fig 11a demonstrates, the system remains on the last activated pattern only for λ ∈ {0.501, 0.551} and small values of μ. New activations mostly stay at negative distances for ρ = 1.2, except for high values of λ and μ (Fig 11b and 11c). Indeed, there are interacting effects between λ, η and ρ (**) and between λ, ρ and μ (**). Taking ρ = 2.4 considerably changes the average Δ for both synaptic time constants. The probability of a new activity is above 0.5 for all parameter combinations (Fig 11d). For τr = 300, the average stays in the positive region over the whole range of μ, except for λ = 0.501, for which the average Δ climbs from negative to positive values (Fig 11e). A similar pattern is observed with λ = 0.501 and the synaptic time constant τr = 900 (Fig 11f). However, the average Δ decreases with μ for the other values of λ when τr = 900.

Supporting materials S3 and S4 Figs show the percentage of Δ values after a new activation for η = 0.02 and η = 0.04, respectively. Recall that as the chains get longer with increasing μ, the regular segments get longer as well, especially when the noise is high (η = 0.04). Regarding the direction of irregular chains, high values of ρ and high values of μ (i.e. low gain) with low values of noise η (S3 Fig) increase the possibility of irregular chains in the forward direction, while the combination of high values of μ (i.e. low gain), ρ and noise η increases the possibility of irregular chains in both directions (S4 Fig). The difference between the percentages of Δ for τr = 300 and τr = 900 indicates the capability of slow synapses to yield longer chains.

In order to see the relation between the chain length and the distance Δ for each combination of (η, ρ, τr), we categorized the regular sequences with respect to the last activated pattern. Then we extracted the Δ values among the trials with activation and obtained a Δ set for each possible last activated pattern of the sequence ABCDEF, as sketched below. Fig 12 shows the distribution of Δ at patterns from A to F in a violin plot. The left side of each violin shows the results with η = 0.02, and the right side with η = 0.04. We remark at first glance that Δ depends on the chain length. Activation is in the forward direction for short chains whereas it is in the backward direction for long chains (still preserving a preference for Δ = −1). While this is related to the system being bounded (in terms of the system size N = 8), for the intermediate patterns, like D, negative and positive values of Δ are almost equally distributed (especially for ρ = 2.4, τr = 900). We also observe that after passing C, the system increasingly activates units at negative distances Δ ≠ −1. Increasing the noise spreads the distribution of Δ, most visibly for ρ = 1.2 at patterns A and B, and also for pattern F at ρ = 2.4, τr = 900. Finally, the distribution of Δ approaches a normal distribution for shorter chains.
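The categorization can be sketched as follows (our illustration, with hypothetical trial records):

```python
from collections import defaultdict

# Each trial record is (last_pattern, delta): the last regularly activated pattern
# of the run and the distance of the first irregular activation that followed.
def delta_sets(trials):
    sets = defaultdict(list)
    for last_pattern, delta in trials:
        sets[last_pattern].append(delta)
    return sets                           # one Delta distribution per pattern A..F

print(delta_sets([("B", -1), ("B", 2), ("D", 1)]))
# defaultdict(<class 'list'>, {'B': [-1, 2], 'D': [1]})
```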

Fig 12. Distribution of activation distance Δ at last activated patterns from A to F over all trials.

The distributions are bounded by the minima and maxima of the Δ sets. Each violin is colored with respect to the color code of the last activated units of a pattern in the forward direction, as in Figs 1 and 2. The left half of each violin corresponds to activity with noise η = 0.02 and the hashed right half corresponds to activity with noise η = 0.04. (a) Activity for ρ = 1.2, τr = 300. (b) Activity for ρ = 1.2, τr = 900. (c) Activity for ρ = 2.4, τr = 300. (d) Activity for ρ = 2.4, τr = 900.

https://doi.org/10.1371/journal.pone.0231165.g012

Perturbed connectivity matrix

In order to test the robustness of our model, we randomly perturbed the off-diagonal elements of the connectivity matrix (4) while preserving its symmetric structure. We considered two levels of perturbation (5% and 10%) and a parameter set for which we obtain a wide range of behaviour with the matrix (4) (namely η = 0.04, ρ = 1.2, τr ∈ {300, 900}). The simulation results are presented in Table 2 and in S5 Fig. Looking at Table 2, we see that the difference in chain length is less than 10%, and that in the probability of a new activity is around 15%. Furthermore, these two features follow patterns similar to those of the regular matrix, as S5(a)–S5(d) Fig show. The differences between the average distance Δ values are around 18% for τr = 300 and 32% for τr = 900. In particular, the difference in Δ increases with μ and λ for τr = 900 (S5(e) and S5(f) Fig).
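The perturbation procedure can be sketched as follows (our illustration; we assume multiplicative perturbations of the existing off-diagonal weights, so absent connections remain absent):

```python
import numpy as np

def perturb_offdiagonal(Jmax, level, rng):
    """Perturb off-diagonal entries by up to +/- level, keeping the matrix symmetric."""
    J = Jmax.astype(float).copy()
    upper = np.triu_indices_from(J, k=1)
    J[upper] *= 1.0 + level * rng.uniform(-1.0, 1.0, size=upper[0].size)
    lower = np.tril_indices_from(J, k=-1)
    J[lower] = J.T[lower]                 # mirror the upper triangle onto the lower one
    return J

N = 8
Jmax = (np.diag([1.0] + [2.0] * (N - 2) + [1.0])
        + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
J_5 = perturb_offdiagonal(Jmax, 0.05, np.random.default_rng(1))   # 5% level
```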

Table 2. Relative difference between the features obtained with the unperturbed and the randomly perturbed synaptic matrices.

https://doi.org/10.1371/journal.pone.0231165.t002

Discussion

Experimental evidence indicates that the brain can either replay the same learned sequence to repeat reliable behaviors [12–16] or generate new sequences to create new behaviors [17–21, 76]. The present research identifies biologically plausible mechanisms that explain how a neural network can switch from repeating learned regular sequences to activating new irregular sequences. To make the problem analytically tractable, the combined effects of the parameters were analyzed on neuronal population firing rates in a simplified balanced network model, by means of slow-fast dynamics and dynamic bifurcations. We demonstrated how variations in neuronal gain, short-term synaptic depression and noise can switch the network behavior between regular and irregular sequences for a fixed learned synaptic matrix.

Let us point out that the model we have considered represents a general framework for networks with adaptation, and is thus likely to have applications in other fields, such as population dynamics, genetics, game theory, sociology or economics.

Synaptic matrix

In the present model the overlap had the same number of shared units for all the overlapping populations. This allowed us to show that variable overlap is not a necessary condition for the activation of sequences of populations. A consequence of the constant overlap is that sequences starting from a stimulus-driven end-point pattern (e.g. the first pattern A of the sequence) are directional, whereas sequences starting from a mid-point pattern can go in either of the two possible directions. The model can therefore generate bi-directional sequences, which are of interest for free recall. Starting from the first pattern A (or G), the sequence ABCDEFG is oriented in one direction (or the other), and starting from a middle pattern, e.g. D, the sequence can be oriented in either direction. The present model allows for bi-directional sequences as well as for new sequences, depending on the value of the neuronal gain γ = 1/μ.
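
An illustrative sketch of such a constant-overlap structure is given below; it is an assumption made for illustration only (the paper's matrix (4) is defined in the Methods). Here each pattern consists of two units sharing exactly one unit with its neighbour.

```python
# Sketch: a chain of patterns with constant overlap of one shared unit,
# e.g. A = {0, 1}, B = {1, 2}, ..., G = {6, 7} for N = 8 units.
import numpy as np

N = 8
patterns = {name: {i, i + 1} for i, name in enumerate("ABCDEFG")}

# Hebbian-like binary couplings: two units are connected iff they belong
# to a common pattern; the constant overlap makes every pattern-to-pattern
# link equivalent.
J = np.zeros((N, N))
for units in patterns.values():
    for i in units:
        for j in units:
            if i != j:
                J[i, j] = 1.0
```

In this layout an end-point pattern (A or G) has a single overlapping neighbour while a mid-point pattern has two, which is why sequences started from an end point are directional and sequences started from the middle can go either way.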

Our model is robust against small perturbations of the connectivity matrix. In other words, our results would hold, with small modifications, in a slightly heterogeneous network.

Regular vs. irregular sequences

Regarding regular sequences, the chain length increases with noise and for combinations of strong STD (high values of ρ) and low inhibition, or weak STD (low values of ρ) and strong inhibition. Further, for most combinations of noise, STD and inhibition, there is an optimal value of the gain that generates the longest chains. The sensitivity of a neuron to its incoming activation varies with changes in its gain [65]. Simulations and analysis show that the neuronal gain (1/μ) is a key control parameter that selects the length and type of sequence activated, regular or irregular. A large neuronal gain impairs the deactivation of the units in a pattern and hence makes the transition to the next pattern difficult, while a small gain impairs the activation of the next unit, again hindering the transition. Consequently there is an optimal window of gain values for long sequences. Experimental evidence shows that repeated presentations of a given stimulus reproduce the same sequence reliably [14, 16, 22, 77]. The present model can repeat full sequences of activation systematically for parameter values that make the network change patterns in a given order. This ‘reliable’ mode could be well suited to the faithful reproduction of learned sequences of behaviors.
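
As a toy illustration of how μ sets the gain, the sketch below assumes a logistic transfer function whose slope is controlled by μ (the model's actual transfer function is defined in the Methods); it only shows that small μ means a steep, high-gain response and large μ a shallow, low-gain one.

```python
# Sketch: gain as the inverse of mu for an assumed logistic transfer function.
import numpy as np

def transfer(u, mu, theta=0.5):
    # Slope at threshold is 1 / (4 * mu): small mu -> steep, step-like
    # response (high gain); large mu -> shallow, graded response.
    return 1.0 / (1.0 + np.exp(-(u - theta) / mu))

u = np.linspace(0.0, 1.0, 5)
print(transfer(u, mu=0.05))   # high gain: near all-or-none activation
print(transfer(u, mu=0.50))   # low gain: weak, graded activation
```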

Regarding irregular sequences, a large neuronal gain leads to the second scenario for transitions from one pattern to the next (as in Fig 8). In this scenario, the last intermediate state of the network at the end of a regular segment can destabilize, leading to a reversal of the sequence: the network then activates the patterns in the reverse order. Further, for high values of noise, Scenario 2 can lead to random activation of patterns in either the forward or backward direction. Such variable sequences are more likely to be generated under the first scenario, which makes possible a transition to another state, in the forward or backward direction, that does not necessarily overlap with the current state (forward or backward leaps). The direction of recall has been linked to the stimulation amplitude and the presence of non-context units in [78]. Our model can generate variable sequences over repetitions of the same triggering stimulus for high values of gain, in line with a memoryless system [56] that activates a new pattern in an unpredictable fashion. Behavioral studies indicate that presentation of a triggering stimulus can activate distant items that are not directly associated with it [79]. The generation of new sequences, corresponding to the activation of new possibilities [80], and the execution of new information-seeking behaviors such as saccades or locomotor exploration of unknown locations [1] rely on variable internal neural dynamics, such as in the medial frontal cortex [81]. This ‘creative’ mode of variable activation not following a given sequence could correspond to a mind-wandering mode [19, 82] or to the divergent thinking involved in creativity [83–86].

Neuromodulation of the switch between regular and irregular sequences

Neuronal gain is reported to depend on neuromodulatory factors such as dopamine [51, 87–89], involved in reward-seeking behaviors and punishment [90–92]. Dopamine is reported to modulate the magnitude of the activation between associates in memory (priming; [93, 94]), and dopamine-induced changes in neuronal gain have been reported to account for changes in activation in memory [52] and for changes in the neuronal activity that controls muscle outputs [95]. A novel feature of our network model is that neuronal gain influences the type of sequences that are generated, regular or irregular. Typical computational models of sequence generation reproduce learned sequences [15]. However, if the brain must in some cases reproduce systematic behaviors, it must also have the capacity to free itself from repetition in order to create new behaviors. The present research shows that the network can exhibit the dual behavior of activating regular or irregular sequences for a given synaptic matrix, with the transition depending on biological parameters, in particular on gain modulation. Given that changes in gain change the length of the regular sequence, and that the sequence becomes irregular once the regular segment stops, the gain controls the regularity of the sequences. The present research sheds light on how the brain can switch between a ‘reliable’ mode and a ‘creative’ mode of sequential behavior depending on external factors, such as reward, that neuromodulate neuronal gain.

Fixed versus increasing overlap size

Earlier works [40–42] proved the existence of regular sequences under the assumptions of increasing overlap and increasing synaptic efficacy. In this work we showed that neither of these conditions is needed: regular sequences can exist in the context of equal overlap. It is known that the populations of neurons coding for memories, and their overlaps, can vary (due to learning) on time scales that are long in the context of this paper, but still relatively short [96]. Sequences arising through increasing overlap can be understood as learned, as opposed to the regular sequences in this paper, which occur merely due to the semantic relation between concepts. It would therefore be interesting to extend our modelling framework to a setting where the overlap can vary on a super-slow timescale.

Supporting information

S1 Fig. Percentage of last activated patterns in a regular segment for noise η = 0.02.

Pattern colours follow the colour codes of the last activated units in Figs 1–4 (see the legend on the right). The height of each colour on a bar indicates the percentage of the corresponding pattern for a given parameter combination over 100 trials. The synaptic time constant equals τr = 300 on panels (a) and (c), and τr = 900 on panels (b) and (d). (a, b) ρ = 1.2. The chain length increases with μ (decreases with neural gain) and with τr. The global inhibition λ must be high enough for sequential activation, but the chain length decreases if λ is too high. (c, d) ρ = 2.4. The chain length increases with μ and τr, but decreases with λ.

https://doi.org/10.1371/journal.pone.0231165.s002

(EPS)

S2 Fig. Percentage of last activated patterns in a regular segment for noise η = 0.04.

Pattern colours follow the colour codes of the last activated units in Figs 1 and 2 (see the legend on the right). The height of each colour on a bar indicates the percentage of the corresponding last activated pattern for a given parameter combination over 100 trials. The synaptic time constant equals τr = 300 on panels (a) and (c), and τr = 900 on panels (b) and (d). (a, b) ρ = 1.2. The chain length increases with τr. Chains are longer for intermediate values of μ (intermediate values of neural gain), but shorter when μ is too high (low neural gain). Increasing the inhibition λ facilitates regular pattern activation for low values of μ (compare for instance λ = 0.501 and λ = 0.601 in (a)), but this effect ceases if inhibition is too strong. (c, d) ρ = 2.4. The chain length increases with τr, but decreases with λ. Increasing μ lengthens the chains more for τr = 900 than for τr = 300.

https://doi.org/10.1371/journal.pone.0231165.s003

(EPS)

S3 Fig. Activity after the initial sequence for noise η = 0.02 of the simulations given in S1 Fig.

Bars are coloured according to the activation distance Δ (see the legend on the right) and the height of each colour indicates the percentage of the corresponding distance Δ. (a, b) Activity for ρ = 1.2. (a) Percentage of Δ for τr = 300. (b) Percentage of Δ for τr = 900. New sequences are generated as μ (1/neural gain) and the inhibition λ increase, with a preference for backward activity with Δ = −1. Forward activity is possible if both λ and μ are high. (c, d) Activity for ρ = 2.4. (c) Percentage of Δ for τr = 300. Increasing (μ, λ) ensures activation in the forward direction. (d) Percentage of Δ for τr = 900. Backward activation is observed for small values of λ; increasing (μ, λ) ensures activation in the forward direction. Overall, the probability of new (forward) activity is higher for ρ = 2.4 than for ρ = 1.2.

https://doi.org/10.1371/journal.pone.0231165.s004

(EPS)

S4 Fig. Activity after the initial sequence for noise η = 0.04 of the simulations given in S2 Fig.

Bars are coloured according to the activation distance Δ (see the legend on the right) and the height of each colour indicates the percentage of the corresponding distance Δ. (a, b) Activity for ρ = 1.2. (a) Percentage of Δ for τr = 300. (b) Percentage of Δ for τr = 900. New sequences are generated as μ (1/neural gain) and the inhibition λ increase, with a preference for backward activity for small values of λ. Activation at distance Δ = −1 has the largest probability. Fast synapses activate units at distance Δ > 0 for λ = 0.651 and high values of μ, while slow synapses activate units at distance Δ < 0. (c, d) Activity for ρ = 2.4. (c) Percentage of Δ for τr = 300. Increasing (μ, λ) ensures activation in the forward direction. (d) Percentage of Δ for τr = 900. The probability of a distance Δ < 1 is considerably high for λ = 0.551 but very small for the other values of λ. Overall, the probability of generating a new activation is higher for ρ = 2.4 than for ρ = 1.2, and activation at distance Δ < 0 is much more frequent with τr = 900 than with τr = 300.

https://doi.org/10.1371/journal.pone.0231165.s005

(EPS)

S5 Fig. Average chain length, probability of a new activation and average distance Δ with randomly perturbed connectivity matrices.

Off-diagonal elements of the connectivity matrix are perturbed by 5% and 10% while keeping the resulting synaptic matrix symmetric about the diagonal; for each perturbation level, 10 synaptic matrices are generated. System parameters: noise η = 0.04, synaptic constants ρ = 1.2 with τr = {300, 900}, global inhibition λ = {0.501, 0.551, 0.601, 0.651}, and μ = [0.05, 0.50] (1/neural gain) (100 simulations for each combination). The left column shows the results for τr = 300 and the right column for τr = 900. In each panel, bold traces show the results obtained with the unperturbed connectivity matrix (4), dotted traces (…) with the 5% perturbed connectivity matrices, and dash-dotted traces (−.) with the 10% perturbed connectivity matrices (mean over the 10 symmetric matrices in each case). (a, b) Average chain length with (4) and with the perturbed connectivity matrices. (c, d) Probability of a new activation with (4) and with the perturbed connectivity matrices. (e, f) Average distance Δ with (4) and with the perturbed connectivity matrices.

https://doi.org/10.1371/journal.pone.0231165.s006

(EPS)

References

1. Pezzulo G, van der Meer MAA, Lansink CS, Pennartz CMA. Internally generated sequences in learning and executing goal-directed behavior. Trends in Cognitive Sciences. 2014;18(12):647–657. https://doi.org/10.1016/j.tics.2014.06.011 pmid:25156191
2. Burgess N, Hitch GJ. Memory for serial order: a network model of the phonological loop and its timing. Psychological Review. 1999;106(3).
3. Lavigne F, Dumercy L, Darmon N. Determinants of Multiple Semantic Priming: A Meta-Analysis and Spike Frequency Adaptive Model of a Cortical Network. Journal of Cognitive Neuroscience. 2011;23(6):1447–1474. https://doi.org/10.1162/jocn.2010.21504 pmid:20429855
4. Rohrmeier MA, Koelsch S. Predictive information processing in music cognition. A critical review. International Journal of Psychophysiology. 2012;83:164–175. https://doi.org/10.1016/j.ijpsycho.2011.12.010 pmid:22245599
5. Zatorre RJ, Chen JL, Penhune VB. When the brain plays music: auditory–motor interactions in music perception and production. Nature Reviews Neuroscience. 2007;8(7):547–558. pmid:17585307
6. Graziano M, Polosecki P, Shalom DE, Sigman M. Parsing a perceptual decision into a sequence of moments of thought. Frontiers in Integrative Neuroscience. 2011. pmid:21941470
7. Bubic A, von Cramon DY, Schubotz R. Prediction, cognition and the brain. Frontiers in Human Neuroscience. 2010;4. pmid:20631856
8. Kok P, Jehee JF, de Lange FP. Less is more: expectation sharpens representations in the primary visual cortex. Neuron. 2012;75(2):265–270. https://doi.org/10.1016/j.neuron.2012.04.034 pmid:22841311
9. Meyer T, Olson CR. Statistical learning of visual transitions in monkey inferotemporal cortex. Proceedings of the National Academy of Sciences. 2011;108(48):19401–19406. https://doi.org/10.1073/pnas.1112895108
10. Brunel N, Lavigne F. Semantic priming in a cortical network model. Journal of Cognitive Neuroscience. 2009;21(12):2300–2319. https://doi.org/10.1162/jocn.2008.21156
11. Lerner I, Bentin S, Shriki O. Spreading activation in an attractor network with latching dynamics: automatic semantic priming revisited. Cognitive Science. 2012;36:1339–1382. pmid:23094718
12. Buhusi CV, Meck WH. What makes us tick? Functional and neural mechanisms of interval timing. Nature Reviews Neuroscience. 2005;6(10):755–765. pmid:16163383
13. Conway CM, Christiansen MH. Sequential learning in non-human primates. Trends in Cognitive Sciences. 2001;5(12):539–546. https://doi.org/10.1016/S1364-6613(00)01800-3 pmid:11728912
14. Eagleman SL, Dragoi V. Image sequence reactivation in awake V4 networks. Proceedings of the National Academy of Sciences of the United States of America. 2012;109(47):19450–19455.
15. Veliz-Cuba A, Shouval HZ, Josic K, Kilpatrick ZP. Networks that learn the precise timing of event sequences. Journal of Computational Neuroscience. 2015;39:235–254. https://doi.org/10.1007/s10827-015-0574-4
16. Xu S, Jiang W, Poo M, Dan Y. Activity recall in a visual cortical ensemble. Nature Neuroscience. 2012;15:449–455. pmid:22267160
17. Abraham A, Beudt S, Ott DVM, von Cramon DY. Creative cognition and the brain: Dissociations between frontal, parietal-temporal and basal ganglia groups. Brain Research. 2012;1482:55–70. pmid:22982590
18. Buckner RL, Andrews-Hanna JR, Schacter DL. The brain’s default network. Annals of the New York Academy of Sciences. 2008;1124:1–38. pmid:18400922
19. Christoff K, Gordon AM, Smallwood J, Smith R, Schooler JW. Experience sampling during fMRI reveals default network and executive system contributions to mind wandering. Proceedings of the National Academy of Sciences. 2009;106(21):8719–8724.
20. Gonen-Yaacovi G, de Souza LC, Levy R, Urbanski M, Josse G, Volle E. Rostral and caudal prefrontal contributions to creativity: a meta-analysis of functional imaging data. Frontiers in Human Neuroscience. 2013;7:465. pmid:23966927
21. Guilford JP. Creativity. American Psychologist. 1950;5:444–454. pmid:14771441
22. Gavornik JP, Bear MF. Learned spatiotemporal sequence recognition and prediction in primary visual cortex. Nature Neuroscience. 2014;17(5):732–737. pmid:24657967
23. Jenkins I, Brooks D, Nixon P, Frackowiak R, Passingham R. Motor sequence learning: a study with positron emission tomography. The Journal of Neuroscience. 1994;14(6):3775–3790. https://doi.org/10.1523/JNEUROSCI.14-06-03775 pmid:8207487
24. Sakai K, Hikosaka O, Miyauchi S, Takino R, Sasaki Y, Pütz B. Transition of brain activation from frontal to parietal areas in visuomotor sequence learning. The Journal of Neuroscience. 1998;18(5):1827–1840. https://doi.org/10.1523/JNEUROSCI.18-05-01827 pmid:9465007
25. Hung C, Kreiman G, Poggio T, DiCarlo J. Fast read-out of object information in inferior temporal cortex. Science. 2005;310:863–866. pmid:16272124
26. Kreiman G, Hung CP, Kraskov A, Quian Quiroga R, Poggio T, DiCarlo JJ. Object selectivity of local field potentials and spikes in the macaque inferior temporal cortex. Neuron. 2006;49(3):433–445. https://doi.org/10.1016/j.neuron.2005.12.019 pmid:16446146
27. Quian Quiroga R, Kreiman G. Measuring sparseness in the brain: comment on Bowers (2009). Psychological Review. 2010;117:291–299. pmid:20063978
28. Quian Quiroga R. Neuronal codes for visual perception and memory. Neuropsychologia. 2016;83:227–241. pmid:26707718
29. Young M, Yamane S. Sparse population coding of faces in the inferotemporal cortex. Science. 1992;256:1327–1331.
30. Erickson CA, Desimone R. Responses of macaque perirhinal neurons during and after visual stimulus association learning. Journal of Neuroscience. 1999;19:10404–10416. https://doi.org/10.1523/JNEUROSCI.19-23-10404.1999 pmid:10575038
31. Miyashita Y. Neuronal correlate of visual associative long-term memory in the primate temporal cortex. Nature. 1988;335:817–820. pmid:3185711
32. Rainer G, Rao SC, Miller EK. Prospective coding for objects in primate prefrontal cortex. Journal of Neuroscience. 1999;19:5493–5505. https://doi.org/10.1523/JNEUROSCI.19-13-05493 pmid:10377358
33. Reddy L, Poncet M, Self MW, Peters JC, Douw L, van Dellen E, et al. Learning of anticipatory responses in single neurons of the human medial temporal lobe. Nature Communications. 2015;6:8556.
34. Weinberger NM. Physiological memory in primary auditory cortex: characteristics and mechanisms. Neurobiology of Learning and Memory. 1998;70(1–2):226–251. https://doi.org/10.1006/nlme.1998.3850 pmid:9753599
35. Yakovlev V, Fusi S, Berman E, Zohary E. Inter-trial neuronal activity in inferior temporal cortex: a putative vehicle to generate long-term visual associations. Nature Neuroscience. 1998;1(4):310–317. pmid:10195165
36. Brunel N. Hebbian Learning of Context in Recurrent Neural Networks. Neural Computation. 1996;8(8):1677–1710. pmid:8888613
37. Lavigne F, Denis S. Attentional and semantic anticipations in recurrent neural networks. International Journal of Computing Anticipatory Systems. 2001;14:196–214.
38. Lavigne F, Denis S. Neural network modeling of learning of contextual constraints on adaptive anticipations. International Journal of Computing Anticipatory Systems. 2002;12:253–268.
39. Mongillo G, Amit DJ, Brunel N. Retrospective and prospective persistent activity induced by Hebbian learning in a recurrent cortical network. European Journal of Neuroscience. 2003;18(7):2011–2024. https://doi.org/10.1046/j.1460-9568.2003.02908.x pmid:14622234
40. Aguilar C, Chossat P, Krupa M, Lavigne F. Latching dynamics in neural networks with synaptic depression. PLoS One. 2017;12(8):e0183710. https://doi.org/10.1371/journal.pone.0183710 pmid:28846727
41. Bick C, Rabinovich MI. Dynamical origin of the effective storage capacity in the brain’s working memory. Physical Review Letters. 2009;103:218101. pmid:20366069
42. Katkov M, Romani S, Tsodyks M. Memory retrieval from first principles. Neuron. 2017;94:1027–1032. http://dx.doi.org/10.1016/j.neuron.2017.03.048 pmid:28595046
43. Rolls ET, Tovee MJ. Sparseness of the neuronal representation of stimuli in the primate temporal visual cortex. Journal of Neurophysiology. 1995;73(2):713–726. pmid:7760130
44. Tamura H, Tanaka K. Visual response properties of cells in the ventral and dorsal parts of the macaque inferotemporal cortex. Cerebral Cortex. 2001;11:384–399. pmid:11313291
45. Tsao DY, Freiwald WA, Tootell RB, Livingstone M. A cortical region consisting entirely of face-selective cells. Science. 2006;311(5761):670–674. pmid:16456083
46. Fujimichi R, Naya Y, Koyano KW, Takeda M, Takeuchi D, Miyashita Y. Unitized representation of paired objects in area 35 of the macaque perirhinal cortex. European Journal of Neuroscience. 2010;32(4):659–667. pmid:20718858
47. Quian Quiroga R. Concept cells: the building blocks of declarative memory functions. Nature Reviews Neuroscience. 2012;13:587–597. https://doi.org/10.1038/nrn3251
48. Tsodyks MV. Associative memory with binary synapses. Modern Physics Letters B. 1990;11:713–716.
49. Clopath C, Büsing L, Vasilaki E, Gerstner W. Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nature Neuroscience. 2010;13:344–352. https://doi.org/10.1038/nn.2479 pmid:20098420
50. Lerner I, Shriki O. Internally and externally driven network transitions as a basis for automatic and strategic processes in semantic priming: theory and experimental validation. Frontiers in Psychology. 2014;5:314.
51. Rolls ET, Loh M, Deco G, Winterer G. Computational models of schizophrenia and dopamine modulation in the prefrontal cortex. Nature Reviews Neuroscience. 2008;9:696–709. https://doi.org/10.1038/nrn2462 pmid:18714326
52. Lavigne F, Darmon N. Dopaminergic Neuromodulation of Semantic Priming in a Cortical Network Model. Neuropsychologia. 2008;46:3074–3087. https://doi.org/10.1016/j.neuropsychologia.2008.06.019 pmid:18647615
53. Kreher DA, Holcomb PJ, Goff D, Kuperberg GR. Neural evidence for faster and further automatic spreading activation in schizophrenic thought disorder. Schizophrenia Bulletin. 2007;34(3):473–482. pmid:17905785
54. Moritz S, Woodward TS, Kuppers D, Lausen A, Schickel M. Increased automatic spreading of activation in thought-disordered schizophrenic patients. Schizophrenia Research. 2002;59(2–3):181–186. https://doi.org/10.1016/S0920-9964(01)00337-1
55. Spitzer M, Braun U, Maier S, Hermle L, Maher BA. Indirect semantic priming in schizophrenic patients. Schizophrenia Research. 1993;11(1):71–80. https://doi.org/10.1016/0920-9964(93)90040-P pmid:8297807
56. Ashwin P, Postlethwaite C. Designing heteroclinic and excitable networks in phase space using two populations of coupled cells. Journal of Nonlinear Science. 2016;26(2):345–364. https://doi.org/10.1007/s00332-015-9277-2
57. Salinas E, Thier P. Gain modulation: a major computational principle of the central nervous system. Neuron. 2000;27:15–21. https://doi.org/10.1016/S0896-6273(00)00004-0 pmid:10939327
58. Buzsáki G. Feed-forward inhibition in the hippocampal formation. Progress in Neurobiology. 1984;22:131–153. https://doi.org/10.1016/0301-0082(84)90023-6
59. Wilson CJ, Groves PM. Spontaneous firing patterns of identified spiny neurons in the rat neostriatum. Brain Research. 1981;220:67–80. https://doi.org/10.1016/0006-8993(81)90211-0 pmid:6168334
60. Tsodyks MV, Markram H. The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proceedings of the National Academy of Sciences. 1997;94(2):719–723. https://doi.org/10.1073/pnas.94.2.719
61. Tsodyks M, Pawelzik K, Markram H. Neural networks with dynamic synapses. Neural Computation. 1998;10:821–835. https://doi.org/10.1162/089976698300017502 pmid:9573407
62. Huber DE, O’Reilly RC. Persistence and accommodation in short-term priming and other perceptual paradigms: temporal segregation through synaptic depression. Cognitive Science. 2003;27(3):403–430. https://doi.org/10.1016/S0364-0213(03)00012-0
63. Mongillo G, Barak O, Tsodyks MV. Synaptic theory of working memory. Science. 2008;319(5869):1543–1546. pmid:18339943
64. Torres JJ, Kappen HJ. Emerging phenomena in neural networks with dynamic synapses and their computational implications. Frontiers in Computational Neuroscience. 2013;7:30. pmid:23637657
65. Salinas E, Sejnowski TJ. Gain modulation in the central nervous system: where behavior, neurophysiology, and computation meet. Neuroscientist. 2001;7(5):430–440. pmid:11597102
66. Silver A. Neuronal arithmetic. Nature Reviews Neuroscience. 2010;11:474–489. pmid:20531421
67. Aston-Jones G, Cohen JD. An integrative theory of locus coeruleus-norepinephrine function: adaptive gain and optimal performance. Annual Review of Neuroscience. 2005;28:403–450. https://doi.org/10.1146/annurev.neuro.28.061604.135709 pmid:16022602
68. Servan-Schreiber D, Printz H, Cohen JD. A network model of catecholamine effects: gain, signal-to-noise ratio, and behavior. Science. 1990;249(4971):892–895.
69. Kandel ER. The molecular biology of memory storage: a dialogue between genes and synapses. Science. 2001;294(5544):1030–1038. pmid:11691980
70. Alberini CM. Transcription factors in long-term memory and synaptic plasticity. Physiological Reviews. 2009;89(1):121–145. pmid:19126756
71. Nabavi S, Fox R, Proulx CD, Lin JY, Tsien RY, Malinow R. Engineering a memory with LTD and LTP. Nature. 2014;511:348–352. https://doi.org/10.1038/nature13294 pmid:24896183
72. Takeuchi T, Duszkiewicz AJ, Morris RG. The synaptic plasticity and memory hypothesis: encoding, storage and persistence. Philosophical Transactions of the Royal Society B: Biological Sciences. 2014;369(1633):20130288.
73. Romani S, Pinkoviezky I, Rubin A, Tsodyks M. Scaling laws of associative memory retrieval. Neural Computation. 2013;25:2523–2544. https://doi.org/10.1162/NECO_a_00499 pmid:23777521
74. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2012. http://www.R-project.org/
75. Treves A. Frontal latching networks: a possible neural basis for infinite recursion. Cognitive Neuropsychology. 2005;22(3–4):276–291. pmid:21038250
76. Fink A, Benedek M. EEG alpha power and creative ideation. Neuroscience and Biobehavioral Reviews. 2012;44:111–123. https://doi.org/10.1016/j.neubiorev.2012.12.002
77. Shuler MG, Bear MF. Reward timing in the primary visual cortex. Science. 2006;311(5767):1606–1609. pmid:16543459
78. Miller P, Wingfield A. Distinct effects of perceptual quality on auditory word recognition, memory formation and recall in a neural model of sequential memory. Frontiers in Systems Neuroscience. 2010;4:14. https://doi.org/10.3389/fnsys.2010.00014 pmid:20631822
79. Bowden EM, Jung-Beeman M. One hundred forty-four Compound Remote Associate Problems: Short insight-like problems with one-word solutions. Behavior Research Methods, Instruments, and Computers. 2003;35:634–639.
80. Hassabis D, Kumaran D, Vann SD, Maguire EA. Patients with hippocampal amnesia cannot imagine new experiences. Proceedings of the National Academy of Sciences. 2007;104(5):1726–1731.
81. Atilgan H, Kwan AC. Same lesson, varied choices by frontal cortex. Nature Neuroscience. 2018;21:1648–1650. pmid:30420733
82. Andrews-Hanna JR. The Brain’s Default Network and its Adaptive Role in Internal Mentation. Neuroscientist. 2012;18(3):251–270. pmid:21677128
83. Beaty RE, Benedek M, Silvia PJ, Schacter DL. Creative cognition and brain network dynamics. Trends in Cognitive Sciences. 2016;20(2):87–95. pmid:26553223
84. Benedek M, Könen T, Neubauer AC. Associative abilities underlying creativity. Psychology of Aesthetics, Creativity, and the Arts. 2012;6(3):273.
85. Benedek M, Neubauer AC. Revisiting Mednick’s model on creativity-related differences in associative hierarchies. Evidence for a common path to uncommon thought. The Journal of Creative Behavior. 2013;47(4):273–289. pmid:24532853
86. Guilford JP. The nature of human intelligence. New York, NY, US: McGraw-Hill; 1967.
87. Braver TS, Barch DM, Cohen JD. Cognition and control in schizophrenia: a computational model of dopamine and prefrontal function. Biological Psychiatry. 1999;46(3):312–328. https://doi.org/10.1016/S0006-3223(99)00116-X pmid:10435197
88. Cohen JD, Servan-Schreiber D. Context, cortex, and dopamine: a connectionist approach to behavior and biology in schizophrenia. Psychological Review. 1992;99(1):45–77. http://dx.doi.org/10.1037/0033-295X.99.1.45 pmid:1546118
89. Seamans JK, Durstewitz D, Christie BR, Stevens CF, Sejnowski TJ. Dopamine D1/D5 receptor modulation of excitatory synaptic inputs to layer V prefrontal cortex neurons. Proceedings of the National Academy of Sciences. 2001;98:301–306.
90. Jhou TC, Vento PJ. Bidirectional regulation of reward, punishment, and arousal by dopamine, the lateral habenula and the rostromedial tegmentum (RMTg). Current Opinion in Behavioral Sciences. 2019;26:90–96. https://doi.org/10.1016/j.cobeha.2018.11.001
91. Lak A, Stauffer WR, Schultz W. Dopamine prediction error responses integrate subjective value from different reward dimensions. Proceedings of the National Academy of Sciences. 2014;111(6):2343–2348. https://doi.org/10.1073/pnas.1321596111
92. Pessiglione M, Seymour B, Flandin G, Dolan RJ, Frith CD. Dopamine-dependent prediction errors underpin reward-seeking behaviour in humans. Nature. 2006;442:1042–1045. https://doi.org/10.1038/nature05051
93. Kischka U, Kammer TH, Maier S, Weisbrod M, Thimm M, Spitzer M. Dopaminergic modulation of semantic network activation. Neuropsychologia. 1996;34(11):1107–1113. https://doi.org/10.1016/0028-3932(96)00024-3 pmid:8904748
94. Roesch-Ely D, Weiland S, Scheffel H, Schwaninger M, Hundemer HP, Kolter T, et al. Dopaminergic modulation of semantic priming in healthy volunteers. Biological Psychiatry. 2006;60:604–611. https://doi.org/10.1016/j.biopsych.2006.01.004 pmid:16603132
95. Stroud JP, Porter MA, Hennequin G, Vogels TP. Motor primitives in space and time via targeted gain modulation in cortical networks. Nature Neuroscience. 2018;21(12):1774–1783. https://doi.org/10.1038/s41593-018-0276-0 pmid:30482949
96. Hasan MT, Hernández-González S, Dogbevia G, Treviño M, Bertocchi I, Gruart A, et al. Role of motor cortex NMDA receptors in learning-dependent synaptic plasticity of behaving mice. Nature Communications. 2013;4:2258.