
A generalized phase resetting method for phase-locked modes prediction

Abstract

We derived analytically and checked numerically a set of novel conditions for the existence and the stability of phase-locked modes in a biologically relevant master-slave neural network with a dynamic feedback loop. Since neural oscillators, even in the three-neuron network investigated here, receive multiple inputs per cycle, we generalized the concept of phase resetting to accommodate multiple inputs per cycle. We proved that the phase resetting produced by two or more stimuli per cycle can be recursively computed from the traditional, single stimulus, phase resetting. We applied the newly derived generalized phase resetting definition to predicting the relative phase and the stability of a phase-locked mode that was experimentally observed in this type of master-slave network with a dynamic feedback loop.

1 Introduction

Oscillatory neural activity is ubiquitous and covers a wide spatial and temporal scale, from single neural cells to whole brain regions and from milliseconds to days. Neural oscillations are believed to be relevant for a wide range of brain activities from sensory information processing to consciousness [1]. It is believed that the phase of low frequency theta oscillations (4-8 Hz) drives the pyramidal cells and is used for information processing in the hippocampus [2–4]. Visual stimuli binding is believed to be related to the phase resetting of the fast frequency gamma band (30-70 Hz) [5]. Positive phase correlations between the theta rhythm and the amplitude of gamma oscillations were found during visual stimuli processing and learning [1, 6, 7] and during fear-related information processing [8, 9]. Theta rhythm resetting also drives cognitive processes [10]. Theoretical studies suggested that phase resetting could explain cross-frequency phase-locking of the gamma rhythm within a theta cycle [11], which is the hallmark of successful memory retrieval [12, 13]. The phase of neural oscillations is also used to bridge a much wider frequency range, from slow theta rhythms of large neural networks, such as those in the hippocampus, up to the individual fast spiking neurons used for speech decoding [14]. It was found that speech resets background (rest) oscillatory activity in specific frequency domains corresponding to the sampling rates optimal for phonemic and syllabic sampling [14, 15]. Phase resetting is also critical in the functioning of the suprachiasmatic nucleus, which produces a stable circadian oscillation by light-induced resetting of the endogenous rhythm [16, 17]. It was also shown that a single sensory stimulus [18, 19] and periodic trains of inputs [20, 21] induce phase resetting in electroencephalograms, which manifests as event-related evoked potentials.

Most neurobiologically inspired interval timing theories assume that neural oscillators and their relative phases could be used as internal clocks for biological rhythms [22]. It was experimentally and computationally found that noisy neural oscillators could produce accurate timing that also obeys the scalar property, i.e. the temporal estimation error increases proportionally with the estimated duration [23–25]. The attention mechanism phase resets neural oscillators and can produce either a stop or a delay in the timing of conditioned stimuli in the presence of intruders such as gaps [26] or fear stimuli [27].

Recent optogenetic experiments showed that the steady gamma rhythm of the medial prefrontal cortex can be reset and entrained by light stimuli and modulated by amphetamines [28]. Delay embedding reconstruction of the phase space gave a low-dimensional attractor, suggesting a phase-coupled model of the medial prefrontal cortex that is reset by light stimuli [29].

Unidirectional coupling between neural oscillators, i.e. a master-slave system, suggests the simplest possible synchronization mechanism that uses phase resetting to drive a neural population to a desired phase-locked firing pattern. The phase resetting methodology has been successfully used for predicting one-to-one entrainment in networks where the receiving population always follows the driving population [30–32]. It was recently shown that unidirectional coupling also allows for “anticipated synchronization” [33], in which the receiving population anticipates the state of the driving population [34]. It has been analytically proven and numerically verified that time-delayed feedback can force coupled dynamical systems onto a synchronization manifold that involves the future state of the drive system, i.e. “anticipating synchronization” [33]. Such a result is counterintuitive since the future evolution of the drive system is anticipated by the response system despite the unidirectional coupling. It has been suggested that delayed coupling in dynamical systems separated by some distance can still promote synchronization despite the slow signal transmission and the unidirectional coupling. The first anticipating synchronization study of excitable systems was done by Ciszak et al [35], followed by more recent behavioral-related investigations [36, 37]. Synaptic delay and synaptic plasticity were recently extensively investigated as potential control parameters that can lead to tunable delayed and anticipating synchronization in neural networks [38–40].

We investigated analytically and numerically a three-neuron master-slave system with a dynamic inhibitory loop that was previously shown experimentally to exhibit anticipating synchronization [41, 42]. The three-neuron network investigated here was shown to produce both delayed synchronization, in which the pre-synaptic neuron fires a spike before the post-synaptic neuron, and anticipating synchronization. It was argued that delayed synchronization is a possible mechanism for spike-timing dependent plasticity [40], whereas anticipating synchronization could contribute to long term depression of synaptic couplings [40–42]. This study focuses on deriving analytic criteria for the existence and the stability of phase-locked modes in a three-neuron network that was found to generate both delayed and anticipated synchronization. For this purpose, we used the method of the phase response curve (PRC) [32, 43–50]. The novelties of this study are (1) the generalization of the PRC to multiple inputs per cycle and (2) the prediction of phase-locked modes in a neural network that is no longer limited to one-to-one firing patterns.

2 Phase response curve method

The phase response curve (PRC) method has been extensively used for predicting phase-locked modes in neural networks [51–54]. It assumes that the only effect of a stimulus is to reset the phase of the ongoing oscillation of a neuron. Traditionally, the PRC tabulates the transient change in the firing frequency of a neural oscillator in response to one external stimulus per cycle of oscillation. The term PRC has been used almost exclusively in regard to a single stimulus per cycle of neural oscillators. Recently, we suggested a generalization of the PRC that allowed us to account for the overall resetting when two or more inputs are delivered during the same cycle [55]. As a result, we expanded the PRC theory from the prediction of the traditional one-to-one phase-locked modes to arbitrary phase-locked firing patterns. Here we present the first quantitative application of such a generalized PRC approach to a realistic neural network with a dynamic feedback loop.

In the case of a single stimulus, the PRC measures the change of the free running period Pi of a neural oscillator to a new value P1 (see Fig 1A). The stimulus time ts is measured from an arbitrary phase reference φ = 0. In our numerical simulations, the phase reference was the zero crossing of the membrane potential with a positive slope. The relative change in the duration of the current cycle, i.e. the cycle that contains the perturbation, with respect to the unperturbed duration Pi determines the first order PRC in response to a single stimulus (for detailed mathematical definitions see Appendix 1). As a result of the perturbation, the new firing period becomes P1 = Pi(1 + F(1)(φ)), where F(1) represents the relative shortening/lengthening of the intrinsic firing period Pi due to the stimulus applied at phase φ = ts/Pi [47, 48, 50].
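As a minimal numerical illustration of this definition (a sketch only, not the paper's code), the Python fragment below converts a stimulus time into a phase and computes the transiently modified period P1 = Pi(1 + F(1)(φ)). The unimodal PRC shape and the constant c used here are hypothetical placeholders.

```python
import numpy as np

# Hypothetical single-stimulus PRC F1(phi): a generic type 1 (unimodal) shape
# producing only phase advances (negative resetting) for an excitatory input.
def F1(phi, c=-0.05):
    return c * (1.0 - np.cos(2.0 * np.pi * phi))

Pi = 70.0            # intrinsic (free running) period, ms
ts = 14.0            # stimulus time measured from the zero-phase reference, ms
phi = ts / Pi        # stimulus phase

# Perturbed period of the cycle that contains the stimulus: P1 = Pi*(1 + F1(phi))
P1 = Pi * (1.0 + F1(phi))
print(f"phi = {phi:.2f}, F1 = {F1(phi):+.3f}, P1 = {P1:.1f} ms (Pi = {Pi} ms)")
```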

Fig 1. Typical PRCs for different classes of excitable cells.

(A) The free running neural oscillator (continuous line) with an intrinsic period Pi is perturbed at stimulus time ts by a brief current pulse (see shaded rectangle). As a result, the membrane potential is perturbed (dashed line) and the period of oscillation is transiently modified to P1, which induces a phase shift of all subsequent spikes. The time it takes a neuron to recover from a stimulus until it reaches the arbitrary zero phase reference again is called the recovery time tr. Higher order PRCs measure the relative change in the firing period of the second and subsequent cycles (not shown). (B) Class I excitable cells can fire with arbitrarily low frequency by adjusting a bias current (solid circles), whereas class II excitable cells can only start firing at a minimum frequency (solid squares). The experimentally observed class I/class II distinction between neural oscillators translates into an (almost) one-to-one correspondence with type 1 (unimodal) PRCs (C) and, respectively, type 2 (bimodal) PRCs (D). The vertical arrow indicates a stimulus delivered at phase φ ≈ 0.2 that produces a 5% shortening of the intrinsic firing period in a type 1 (C) and a 1% resetting in a type 2 (D) neural oscillator. (E) The neural network for which we used the PRC to predict the phase-locked modes has three neurons: #1 is the pacemaker (master) of the network, as it receives no feedback, and drives the half-center formed by neurons #2 and #3; neuron #2 (slave) receives two inputs, one forward excitatory (open triangle) from the master neuron #1 and the other inhibitory (solid circle) from the interneuron #3; the interneuron #3 only receives one excitatory input (open triangle) from neuron #2 (slave).

https://doi.org/10.1371/journal.pone.0174304.g001

A neural oscillator close to a saddle-node bifurcation, which presents a continuous frequency versus bias current (f-I) curve that extends to arbitrarily low frequencies (see solid circles in Fig 1B), usually has a type 1 PRC that looks unimodal, as in Fig 1C (although for counterexamples see [49, 56]). Fig 1C shows a typical type 1 PRC in response to a brief excitatory current perturbation that produces only phase advances (period shortening), i.e. negative resettings. A type 1 PRC looks unimodal and is often associated with a class I excitable cell, i.e. a cell that can produce stable oscillatory activity with arbitrarily low frequency [57, 58]. Usually, such excitable cells produce stable oscillations via a saddle node bifurcation on an invariant circle [59]. A type 2 PRC looks bimodal (see Fig 1D) and is often associated with a class II excitable cell [57, 58]. Class II oscillations usually emerge through a Hopf bifurcation [59] (see Fig 1B). As a side note, it was recently shown that type 1 (unimodal) PRCs do not always come from a class I excitable cell [56] and, in fact, all PRCs are bimodal to varying degrees [49, 50].

Close to the bifurcation point, accurate analytical formulas called normal forms describe the PRCs (see [57] and Appendix 1 for mathematical details), which we used in this study to get some analytical insights into the general behavior of the three-neuron network with a dynamic loop shown in Fig 1E.

The key assumption in generalizing the PRC method to multiple inputs per cycle was that the resetting induced by one stimulus takes effect “almost” instantaneously, i.e. before the arrival of the next stimulus [55]. Therefore, the effects of two stimuli applied during the same cycle are independent of each other. As a result, we used the single stimulus PRCs (F(1)) shown in Fig 1C and 1D to compute the phase resetting in response to two or more stimuli (see Appendix 1 for the detailed mathematical derivation of F(2) and its generalization). Briefly, the first stimulus delivered at stimulus phase φa = tsa/Pi produces a transient change in the firing period to Pa = Pi(1 + F(1)(φa)). The second stimulus that arrives at a stimulus phase φb = tsb/Pa > φa further changes the firing period to Pb = Pa(1 + F(1)(φb)) (see Appendix 1). Combining the above effects of the two stimuli applied at phases φa and φb, the new firing period Pb becomes Pb = Pi(1 + F(2)(φa, φb)) (see Appendix 1 for a detailed mathematical derivation).
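The recursive composition described above can be written as a short routine. The sketch below (Python) assumes a hypothetical single-stimulus PRC and simply applies Pa = Pi(1 + F(1)(φa)) followed by Pb = Pa(1 + F(1)(φb)); the PRC shape, its amplitude, and the stimulus times are illustrative only.

```python
import numpy as np

# Hypothetical single-stimulus PRC (illustrative shape and amplitude only).
def F1(phi, c=-0.05):
    return c * (1.0 - np.cos(2.0 * np.pi * phi))

def two_stimulus_resetting(Pi, tsa, tsb, F1):
    """Compose two single-stimulus resettings delivered during the same cycle.

    Assumes, as in the text, that each resetting takes effect before the
    next stimulus arrives (tsa < tsb < current period)."""
    phi_a = tsa / Pi
    Pa = Pi * (1.0 + F1(phi_a))        # period after the first stimulus
    phi_b = tsb / Pa                   # second phase measured against Pa
    Pb = Pa * (1.0 + F1(phi_b))        # period after the second stimulus
    F2 = Pb / Pi - 1.0                 # generalized two-stimulus resetting
    return Pb, F2

Pb, F2 = two_stimulus_resetting(Pi=70.0, tsa=10.0, tsb=35.0, F1=F1)
print(f"Pb = {Pb:.1f} ms, F2 = {F2:+.4f}")
```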

A typical two stimuli protocol (see Fig 2A) and the corresponding phase resetting F(2) are shown in Fig 2B, where the three-dimensional surface is given by Eq (22) in Appendix 1 and a two-dimensional contour plot also shows the contours of equal phase resetting. For this plot we used the analytical normal form of the PRC (see Eq (17) in Appendix 1) with P2i = 70 ms; the coupling strengths were g12 = 0.015 (excitatory, from neuron 1 to neuron 2) and g32 = 0.002 (inhibitory, from neuron 3 to neuron 2) (see section 3 for a detailed description of the neural model and the synaptic couplings).

Fig 2. Typical two stimuli PRC.

(A) Two brief stimuli delivered at stimulus times tsa and, respectively, tsb. The first stimulus transiently modifies the intrinsic firing period Pi to a new value Pa = Pi(1 + F(1)(φa)), where φa = tsa/Pi. The second stimulus arrives at a new phase φb = tsb/Pa, measured against the already modified firing period Pa, and therefore further resets the firing period to Pb = Pa(1 + F(1)(φb)). (B) A typical two stimuli phase response surface for a class I excitable cell.

https://doi.org/10.1371/journal.pone.0174304.g002

3 The neural model

In their seminal work on the squid giant axon, Hodgkin and Huxley [60–64] experimentally identified three classes, or types, of axonal excitability: class I, where the repetitive firing frequency is controlled by the intensity of an external stimulus; class II, where the firing frequency is almost independent of the stimulus intensity; and class III, where no repetitive firing is produced regardless of stimulus intensity or duration.

Our simulations were performed using a class I, single compartment, neural oscillator described by a standard conductance-based, or Hodgkin-Huxley (HH), mathematical model [64–66]. The rate of change of the membrane potential is:

(1) dV/dt = −gCa m∞(V)(V − VCa) − gK w(V − VK) − gLeak(V − VLeak) + I0,

where V is the membrane potential, gch and Ech are the maximum conductance and, respectively, the reversal potential for ionic channel ch (only calcium, potassium and leak were considered), w is the instantaneous probability that a potassium channel is open, and I0 is a constant bias current. Each ionic current is the product of a voltage-dependent conductance and a driving force, Ich = gch(V)(V − Ech), where gch(V) is the product of the maximum conductance for that channel and a specific voltage-dependent gating variable. The Morris-Lecar (ML) mathematical model has two non-inactivating voltage-sensitive gating variables: an instantaneous, voltage-dependent, calcium activation m∞(V) and a delayed voltage-dependent potassium gate w given by a first order differential equation [67]:

(2) dw/dt = ϕ (w∞(V) − w)/τ(V),

where ϕ is a temperature-dependent parameter and the voltage-dependent relaxation time constant is τ(V) = cosh−1((V − Vw,1/2)/(2Vw,slope)). All open-state probability functions, or steady-state gating variables x∞(V), have a sigmoidal form [67]:

(3) x∞(V) = 0.5[1 + tanh((V − Vx,1/2)/Vx,slope)],

where Vx,1/2 is the half-activation voltage and Vx,slope is the slope factor for the gating variable x. The ML model is widely used in computational neuroscience because it captures relevant biological processes and, at the same time, by changing only a small subset of parameters it can behave either as a type 1 or a type 2 neural oscillator. The dimensionless parameters for a type 1 ML neuron are: Vm,1/2 = −0.01, Vm,slope = 0.15, Vw,1/2 = 0.1, Vw,slope = 0.145, VK = −0.7, VLeak = −0.5, VCa = 1.0, I0 = 0.070, and ϕ = 0.6, together with the maximum conductances gCa, gK, and gLeak (Ermentrout, 1996). The model’s equations and its parameters are in dimensionless form, with all voltages divided by the calcium reversal potential VCa0 = 120 mV and all conductances and currents normalized by the corresponding reference values (Ermentrout, 1996). For example, a dimensionless reversal potential for a leak current of VLeak = −0.5 means VLeak = −0.5 VCa0 = −0.5 × 120 mV = −60 mV.
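For readers who want to experiment with the model, the sketch below integrates a dimensionless ML oscillator with SciPy. The half-activation voltages, slopes, reversal potentials, I0, and ϕ are the values quoted above; the maximum conductances are not reproduced in the text, so the values in the code are illustrative assumptions only, not the paper's parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless Morris-Lecar parameters taken from the text where given;
# the maximum conductances (g_ca, g_k, g_leak) are ASSUMED values for illustration.
p = dict(v_m_half=-0.01, v_m_slope=0.15, v_w_half=0.1, v_w_slope=0.145,
         E_K=-0.7, E_Leak=-0.5, E_Ca=1.0, I0=0.070, phi=0.6,
         g_ca=1.0, g_k=2.0, g_leak=0.5)          # <- assumed conductances

def x_inf(V, v_half, v_slope):
    # Eq (3): sigmoidal steady-state gating variable
    return 0.5 * (1.0 + np.tanh((V - v_half) / v_slope))

def tau_w(V):
    # voltage-dependent relaxation time of the potassium gate
    return 1.0 / np.cosh((V - p['v_w_half']) / (2.0 * p['v_w_slope']))

def ml_rhs(t, y):
    V, w = y
    m = x_inf(V, p['v_m_half'], p['v_m_slope'])
    dV = (-p['g_ca'] * m * (V - p['E_Ca'])
          - p['g_k'] * w * (V - p['E_K'])
          - p['g_leak'] * (V - p['E_Leak'])
          + p['I0'])                                                   # Eq (1)
    dw = p['phi'] * (x_inf(V, p['v_w_half'], p['v_w_slope']) - w) / tau_w(V)  # Eq (2)
    return [dV, dw]

sol = solve_ivp(ml_rhs, (0.0, 500.0), [-0.3, 0.1], max_step=0.1)
print("membrane potential range:", sol.y[0].min(), "to", sol.y[0].max())
```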

The Synaptic Model. We implemented fast chemical synapses between neurons given by a synaptic current Isyn = gsyn s(t)(Vpost − Esyn), where gsyn is the maximum synaptic conductance, s(t) is the fraction of channels activated by neurotransmitters, Vpost is the membrane potential of the postsynaptic neuron, and Esyn is the reversal potential of the synaptic coupling. We used Esyn = 0 for excitatory and Esyn = −0.6 for inhibitory coupling. The synapse activation was described by a first order kinetics s′ = αT(1 − s) − βs, where α = 15, β = 1.5, and the neurotransmitter binding was described by a sigmoidal function T(Vpre) = 1/(1 + e^((−Vpre − 0.2)·120/5)), where Vpre is the membrane potential of the presynaptic neuron.
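A corresponding sketch of the synaptic kinetics (first order gating s driven by the presynaptic voltage) is shown below. The parameters are the ones quoted above; the binding function follows the expression in the text as interpreted here, which is an assumption given the garbled original formula.

```python
import numpy as np

# Fast chemical synapse from the text: I_syn = g_syn * s(t) * (V_post - E_syn),
# with first-order kinetics s' = alpha*T(V_pre)*(1 - s) - beta*s.
alpha, beta = 15.0, 1.5
E_syn_exc, E_syn_inh = 0.0, -0.6      # excitatory / inhibitory reversal (dimensionless)

def T(V_pre):
    # sigmoidal neurotransmitter binding; half-activation at V_pre = -0.2 as read
    # from the (possibly garbled) expression in the text
    return 1.0 / (1.0 + np.exp((-V_pre - 0.2) * 120.0 / 5.0))

def syn_derivative(s, V_pre):
    return alpha * T(V_pre) * (1.0 - s) - beta * s

def I_syn(g_syn, s, V_post, E_syn):
    return g_syn * s * (V_post - E_syn)

print("binding at rest (V_pre = -0.3):", T(-0.3))
print("binding during a spike (V_pre = +0.1):", T(0.1))
```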

We numerically computed the PRCs in an open loop setup, i.e. by injecting a single synaptic input from the corresponding presynaptic neuron into neurons 2 and 3 of the network configuration shown in Fig 3.

Fig 3. Typical phase-locked mode with one neuron receiving two inputs per cycle.

Neuron #1 is the driver of the entire network and its intrinsic firing period P1i was used as the reference duration for all other intrinsic periods. The neuron’s spike is represented by a thick vertical line. The coupling between the neurons is marked by vertical dashed lines that terminate either with an excitatory (empty triangle) or an inhibitory (solid circle) synapse. Neuron #2 receives two inputs during one cycle: the first is an inhibition at stimulus time t2sa from the interneuron #3, and later on it receives an excitatory input from neuron #1 at stimulus time t2sb. The neuron recovers from the last stimulus after t2r and fires again. Neuron #3 only receives one excitatory input per cycle from neuron #2.

https://doi.org/10.1371/journal.pone.0174304.g003

4 The neural network model

In order to use the PRC method (see section 2) for predicting the relative phases of neurons in a phase-locked firing pattern, we assumed a fixed firing order of the three neurons, with the goal of determining if such a pattern exists and if it is stable. Based on the neural network model proposed for delayed and anticipated synchronization by Matias et al [40–42], we identified the following definitions for the firing period of each neuron (see Fig 3): (4) where t2r is the recovery time of neuron #2 after its last input, t2sa and t2sb are the corresponding stimulus times for the first and, respectively, the second input to neuron #2, and the index of the cycle is marked with square brackets […]. The subscript index refers to the neural oscillator index according to Fig 3. From Eq (4) we eliminated t2r[n − 1] = P1i − t2sb[n] and substituted it into the other two equations, which led to: (5)

Based on the definitions of the PRCs (see Eqs (16) and (22) in Appendix 1), we further expanded the transiently modified firing periods in Eq (5) in terms of experimentally determined PRCs: (6) The above system of two recursive equations has two unknowns, t2sa and t2sb, and describes the temporal evolution of the relative phases of the neural oscillators from firing cycle [n] to [n + 1].
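Numerically, the steady state of such a two-variable recursion can be found by simply iterating it until the stimulus times no longer change from one cycle to the next. The sketch below shows this generic fixed-point iteration; the map `toy_step` is a hypothetical stand-in with the same structure as Eq (6) (two stimulus times updated cycle by cycle), not the paper's actual equations.

```python
import numpy as np

def iterate_to_steady_state(step, x0, max_cycles=10_000, tol=1e-10):
    """Iterate a two-variable recursion x[n+1] = step(x[n]) until it converges."""
    x = np.asarray(x0, dtype=float)
    for n in range(max_cycles):
        x_next = np.asarray(step(x), dtype=float)
        if np.max(np.abs(x_next - x)) < tol:
            return x_next, n
        x = x_next
    return x, max_cycles

# HYPOTHETICAL stand-in for Eq (6): a contracting update of (t2sa, t2sb), in ms.
def toy_step(x, P1i=60.0):
    t2sa, t2sb = x
    return 0.5 * t2sa + 0.2 * t2sb + 4.0, P1i - (0.6 * t2sb + 0.1 * t2sa)

steady, n_cycles = iterate_to_steady_state(toy_step, x0=(10.0, 30.0))
print(f"converged after {n_cycles} cycles: t2sa* = {steady[0]:.3f}, t2sb* = {steady[1]:.3f}")
```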

5 The existence of phase-locked modes

Let us assume that there is a steady state solution of the recursive Eq (6) that mimics the activity of the neural network shown in Fig 3, i.e. the limits t2sa[n] → t*2sa and t2sb[n] → t*2sb exist as n → ∞. By substituting the steady state, i.e. phase-locked mode, solution into Eq (6) one obtains: (7) where we used the fact that at steady state the stimulus times no longer change from one cycle to the next and that t2r[n − 1] = P1i − t2sb[n], which leads to t*2r = P1i − t*2sb.

As we notice from the second equation in Eq (7), the steady state value t*2sa could be immediately determined: it only depends on P1i, P3i, and the PRC of the third neuron, which in turn depends on the coupling strength g23. It follows that the steady state value is given by: (8) Since the coupling from neuron #2 to neuron #3 is excitatory, the corresponding PRC is negative (it only advances the next spike). As a result of Eq (8), the steady state can only exist for P1i < P3i, which means that the interneuron (neuron #3) must be slower than the pacemaker (neuron #1) of the network. Moreover, since a type 1 PRC in response to excitatory inputs has only one negative minimum, which determines the magnitude of the strongest possible resetting, the intrinsic period of the interneuron is also bounded from above: the strongest available resetting must still be able to shorten P3i all the way down to the network period P1i.

Once we determined the steady state t*2sa from Eq (8), we plugged it into the first equation of Eq (7) and solved for the remaining steady state value. Using the PRC definition (see Eq (22) from Appendix 1) we obtained: (9) The above equation can be reduced to: (10) We must emphasize that the two single stimulus PRCs of the second neuron that enter Eq (10) are different: one is the phase response curve of the second neuron in response to an input received from the third neuron, i.e. it is determined by g32, while the other is the phase response curve of the second neuron in response to an input received from the first neuron, i.e. it is determined by g12.

5.1 Explicit steady state solutions using normal form generic type 1 PRCs

In order to get insights into the general existence criteria for the steady states (phase-locked modes) derived above, we assumed that the single stimulus and the generalized PRCs are quite well approximated by the corresponding normal forms given by Eq (17) and, respectively, by Eq (22) in Appendix 1. Then the steady state solution of Eq (8) can be written analytically as: (11) By least square fitting the numerically generated PRCs for each neuron, in response to a single spike from its corresponding presynaptic neuron, with the theoretical formula of the normal form given by Eq (17), we found a quantitative relationship between the abstract coupling strength coefficient c and the physiologically measurable maximum synaptic couplings (see Appendix 1). Therefore, in order to simplify the mathematical notation, throughout the rest of the paper we only write, for example, c23 when referring to the coefficient of the theoretical normal form of the PRC, with the understanding that it is a known function of the synaptic conductance, i.e. c23 = c23(g23).

Since −1 ≤ cos(x) ≤ 1, Eq (11) imposes a constraint that determines the minimum coupling strength g23 required, for a given ratio of the two intrinsic firing periods, to attain a phase-locked mode pattern. Based on the above relationship, for excitatory coupling, i.e. Esyn = 0, the master (pacemaker) neuron #1 (see Fig 3) must be faster than the interneuron #3, i.e. P1i < P3i. At the same time, the coupling strength g23 must also be strong enough to reset the longer intrinsic period P3i to match the shorter period of the network’s pacemaker. The above relationship allowed us to estimate that, if the coupling is very strong (g23 → ∞), then the steady state from Eq (11) approaches a discrete set of solutions indexed by k = 0, ±1, ±2, …, marked by the vertically downward arrows in Fig 4. Furthermore, if the two firing periods are approximately equal (P1i ≈ P3i), then Eq (11) again gives a discrete family of solutions indexed by k = 0, 1, 2, … (see also Fig 4). Fig 4 also shows that for each intrinsic period ratio P3i/P1i there is a minimum coupling strength g23 that ensures appropriate resetting of the interneuron. For example, the minimum coupling for P3i/P1i = 1.5 (dotted red line in Fig 4) is g23 = 0.024. A stronger coupling of g23 = 0.036 is necessary for a larger ratio P3i/P1i = 2 (dashed blue line in Fig 4).
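The minimum coupling can also be obtained with a short back-of-the-envelope computation: assuming the normal form F(1)(φ) = c(1 − cos 2πφ) of Eq (17) and the requirement that the interneuron be reset from P3i down to the network period P1i, the largest available resetting magnitude is 2|c23|, and the linear c23(g23) fit from Appendix 1 converts this into a synaptic conductance. The sketch below follows this reasoning (not the exact Eq (11), which is not reproduced here) and recovers the minimum couplings quoted above, g23 ≈ 0.024 for P3i/P1i = 1.5 and g23 ≈ 0.036 for P3i/P1i = 2.

```python
# Minimum coupling needed to reset the interneuron (#3) to the network period P1i,
# assuming the normal form PRC F1(phi) = c*(1 - cos(2*pi*phi)) and the phase-locking
# condition P3i*(1 + F1(phi*)) = P1i. The c <-> g conversion is the linear fit
# quoted in Appendix 1: c23 = -6.9555*g23 - 0.0005.
def min_g23(ratio_P3i_over_P1i):
    required = 1.0 - 1.0 / ratio_P3i_over_P1i   # |F1| needed at the locked phase
    c_min = required / 2.0                      # maximum of (1 - cos) is 2
    return (c_min - 0.0005) / 6.9555            # invert |c23|(g23)

for r in (1.5, 2.0):
    print(f"P3i/P1i = {r}: minimum g23 ≈ {min_g23(r):.3f}")
# -> about 0.024 for a ratio of 1.5 and 0.036 for a ratio of 2, matching Fig 4
```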

Fig 4. Minimum coupling strength g23 required for a given phase-locked mode stimulus time t*2sa.

There are multiple possible solutions for t*2sa for the same coupling strength between neurons #2 and #3 due to the PRC periodicity. In the limit case of a very strong coupling (g23 → ∞), the phase-locked stimulus time approaches the values marked by the vertically downward arrows.

https://doi.org/10.1371/journal.pone.0174304.g004

The phase-locked modes given by Eq (6) depend on three intrinsic periods, P1i, P2i, and P3i, and three synaptic conductances, g12, g23, and g32. Since the master neuron receives no input, all durations were measured relative to P1i. The bias current for the computational model was set such that P1i = 60 ms, P2i = 70 ms, and P3i = 80 ms (see section 3 for details and the supplemental files for a computational implementation). Intuitively, the phase-locked solution t*2sa is the stable interspike interval between neurons #2 and #3 (the interneuron), while the other phase-locked solution t*2sb is the stable interspike interval between neurons #2 and #1 (the network’s driver). However, this simplification only reduces the parameter space to five dimensions.

In order to reduce the parameter space to four dimensions, we only show examples of phase-locked modes for a fixed inhibitory coupling g32 = 0.002 (arb. units). For a fixed intrinsic period of the second neuron, P2i/P1i = 70/60, the parameter space further reduces to three dimensions, which allowed us to visualize the phase-locked modes. The solution t*2sa of the second equation in Eq (6) only depends on P3i/P1i and the coupling strength g23 (see the green surface in Fig 5).

Fig 5. Phase-locked solutions t*2sa and t*2sb (vertical axes) versus the intrinsic period of the interneuron P3i and the coupling strength g23.

(A) The first stimulus time t*2sa only depends on P3i/P1i and g23 (green surface). For a fixed intrinsic period ratio P2i/P1i = 70/60, the second stimulus time t*2sb also depends on the coupling strength: g12 = 0.012 (red surface) and g12 = 0.05 (blue surface). (B) For a fixed coupling g12 = 0.015, the dependence of the second stimulus time t*2sb on the intrinsic firing periods, P2i/P1i = 60/60 (red surface) and P2i/P1i = 70/60 (blue surface), shows that the solution space is wider for shorter intrinsic periods.

https://doi.org/10.1371/journal.pone.0174304.g005

However, the phase-locked solution t*2sb of the first equation in Eq (6) also depends on the additional coupling g12. Therefore, to gain insight into how g12 affects the solution t*2sb, we used the same axes P3i/P1i and g23 as for t*2sa, but with two different constant values of the coupling, g12 = 0.012 (red surface) and g12 = 0.05 (blue surface), in Fig 5A. We notice from Fig 5A that increasing the strength of the excitatory coupling g12 leads to an increased stimulus time t*2sb and a wider parameter domain for the phase-locked solution.

Similarly, if we hold constant the synaptic coupling g12 = 0.015 (arb. units) between the master and the slave neurons, then we can visualize the phase-locked solution t*2sb for a variable intrinsic period of the second neuron, P2i/P1i = 60/60 (red surface in Fig 5B) and P2i/P1i = 70/60 (blue surface in Fig 5B). We notice that for smaller intrinsic periods P2i the range of control parameters P3i/P1i and g23 is broader. This is because for more similar firing frequencies it is easier to bring the driven neuron to the firing frequency of the driving neuron.

6 The stability of phase-locked modes

The possible phase-locked modes given by Eq (6) may not all be stable and, therefore, they may not all be experimentally observable. To determine the stability of the steady state solutions t*2sa and t*2sb, we assume small perturbations:

(12) t2sa[n] = t*2sa + δt2sa[n], t2sb[n] = t*2sb + δt2sb[n],

where the nth cycle perturbations δt2sa[n] and δt2sb[n] are assumed very small for both stimuli. By substituting Eq (12) into the existence criteria from Eq (7) and using a Taylor series expansion one obtains the linearized system (13), whose coefficients contain the slope of the second neuron’s PRC at the phase of the first stimulus, the slope of the second neuron’s PRC at the phase of the second stimulus, and the slope of the third neuron’s PRC at the phase of its stimulus. The stability Eq (13) can be rewritten in matrix form as (14), which led us to a first order recursive relationship for the perturbations: (15) where a11 = (1 − m3)(1 − b), a12 = (m3 − 1)m2b, a21 = −b, and a22 = 1 − m2b (here m2 and m3 denote the PRC slopes introduced above). The stability of the steady state is determined by the eigenvalues of the recursion matrix in Eq (15) (see Appendix 2 for the general stability conditions of a two-dimensional recursive map).
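The recursion matrix of Eq (15) can be assembled directly from these coefficients and its eigenvalues checked numerically, as in the sketch below. The numerical values of m2, m3, and b used here are hypothetical inputs; in practice they come from the fitted PRCs evaluated at the steady state phases.

```python
import numpy as np

# Build the linearized recursion matrix of Eq (15) from the coefficients
# a11 = (1 - m3)(1 - b), a12 = (m3 - 1)*m2*b, a21 = -b, a22 = 1 - m2*b
# and test stability via its eigenvalues.
def stability_matrix(m2, m3, b):
    return np.array([[(1.0 - m3) * (1.0 - b), (m3 - 1.0) * m2 * b],
                     [-b,                      1.0 - m2 * b      ]])

def is_stable(A):
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1.0))

A = stability_matrix(m2=0.3, m3=0.4, b=0.5)     # illustrative numbers only
print("eigenvalues:", np.linalg.eigvals(A), "stable:", is_stable(A))
```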

We also must keep track of the third stability condition, as the original recursive system in Eq (4) contained three variables, which were reduced to two coupled recursive equations (see Eq (5)) by eliminating the third variable, i.e. t2r[n − 1] = P1i − t2sb[n]. As a result, the previous substitution gives the steady state t*2r = P1i − t*2sb, and the corresponding infinitesimal perturbation is δt2r[n − 1] = −δt2sb[n]. Therefore, the stability of the solution t*2r is determined by the stability of δt2sb[n], which is already covered by Eq (15) without involving additional control parameters.

The general stability conditions for any first order recursion of two variables are discussed in detail in Appendix 2. Briefly, the trace Tr(A) = a11 + a22 and the determinant Det(A) = a11a22 − a12a21 of the recursion matrix in Eq (15) determine the stability of each steady state obtained by solving Eq (7).

7 Numerical validation of the existence and the stability criteria

The analytically derived criteria for the existence (see section 5) and stability (see section 6) of phase-locked modes in a master-slave network with a dynamic loop (see Fig 3) were based only on PRCs in response to a single stimulus. We checked our theoretical predictions, obtained from open loop PRCs, against numerical simulations of the actual neural network implemented according to the model presented in section 3, i.e. the closed loop (fully connected) neural network.

The analytical normal form PRC formulas (see Eq (17) in Appendix 1) were convenient analytical tools and even led us to some analytical results in the preceding sections. However, for the actual comparison between the multiple stimuli PRC-based phase-locked mode prediction (open loop) and the numerical simulation results of the fully coupled neural network (see Fig 3), we used numerically generated open loop PRCs. The reason is that, although the analytical normal form of the type 1 PRC given by Eq (17) (see dashed red line in Fig 6A) is close to the numerically (experimentally) generated open loop PRC (see dotted blue curve in Fig 6A), we wanted a more accurate prediction based on the real-world PRC as it is generated in wet lab/numerical experiments.

Fig 6. Phase-locked modes in fully coupled neural network.

(A) The numerically generated PRC in an open loop setup in response to a single triangularly shaped stimulus (solid circles) was fitted to the theoretical PRC given by Eq (17) to determine the conversion factor between the model-dependent coupling constant c in Eq (17) and the synaptic coupling gsyn. (B) A typical stable phase-locked mode in which neuron #2 (dashed line) receives two inputs during a single cycle: first from neuron #3 (dotted line) at t2sa and then from neuron #1 (continuous line) at t2sb. The values of the phase-locked mode measured from panel (B) and the corresponding PRC-based predictions are given in section 7. The network’s firing period was P = 60 ms = P1i.

https://doi.org/10.1371/journal.pone.0174304.g006

We also used the least square minimization to fit actual PRCs (see dotted curve in Fig 6A) with the theoretical formula given by Eq (17) in order to establish the conversion factor between the model-dependent coupling constant c in the theoretical formula given by Eq (17) and the synaptic constant gsyn used in our numerical simulations. The Mathematica file that contains the implementation of the neural network shown in Fig 3 based on the model equations provided in section 3 is available in supplemental files section.

The synaptic couplings used for the example shown in Fig 6B were g12 = 0.015, g32 = 0.002, and g23 = 0.0275, which led to a phase-locked mode whose stimulus times t*2sa and t*2sb were measured directly from the simulation. The PRC-based predictions differed from these measured values by about 18% and, respectively, about 10%. We found that the eigenvalues of the stability matrix were λ1 = 0.489 and λ2 = 0.779, which indicated that the predicted mode was stable.

Discussion

Since even for a small unidirectionally coupled three-neuron network the parameter space is six-dimensional, i.e. three intrinsic firing periods (P1i, P2i, and P3i), one unidirectional synaptic coupling between the master and slave neurons (g12), and two coupling constants for the feedback loop (g23 and g32), we reduced it to manageable dimensions in order to visualize the phase-locked solutions. Since the master neuron receives no feedback from the network, its intrinsic firing period P1i was considered the reference duration, which reduces the parameter space to five dimensions. We further reduced the parameter space to four dimensions using a fixed value for the inhibitory coupling of the interneuron, i.e. g32 = 0.002 (arb. units). We numerically found the phase-locked modes by considering two separate cases: (1) a fixed intrinsic period of the slave neuron #2 (P2i/P1i = 70/60) with two values of the master-slave coupling, g12 = 0.012 and g12 = 0.05, shown in Fig 5A, and (2) a fixed master-slave synaptic coupling (g12 = 0.015) with two intrinsic period ratios, P2i/P1i = 60/60 and P2i/P1i = 70/60, shown in Fig 5B. In all numerical simulations, the free parameters were the intrinsic period of the interneuron P3i and the excitatory synaptic coupling to that neuron (g23). The reason is that it was previously shown that the interneuron, through its intrinsic properties and its synaptic coupling, can lead to either delayed or anticipating synchronization in this neural network [40, 42], and our goal was to closely match previous experimental findings using the newly developed generalized PRC method. Based on Fig 5, an increase in the strength of the master-slave synaptic coupling g12 leads to a larger phase difference between the two steady states t*2sa and t*2sb. At the same time, the parameter space of the interneuron (P3i, g23) becomes wider. Another possibility for broadening the parameter space was to bring the intrinsic firing period of the slave neuron #2 closer to that of the master neuron #1, i.e. to reduce the network heterogeneity. All our numerical simulations are in agreement with previously observed firing patterns in this type of neural network [40, 42].

Conclusions

We used a phase response curve method to predict the existence and the stability of phase-locked modes in a master-slave network with a dynamic feedback loop. This study brings two novel contributions to phase-locked mode prediction in neural networks. First, we generalized the phase response curve definition to include the more realistic case in which neural oscillators receive more than one input per cycle. Second, we applied the generalized phase resetting definition to a biologically relevant neural network that has been shown to produce both delayed and anticipated synchronization.

Predicting phase-locked modes in large neural networks usually requires, as a first step, a complexity reduction to manageable subnetworks of two neurons [68, 69] or, whenever possible, a reduction of the entire network to a two-population network [70]. Our PRC generalization to multiple inputs per cycle is a significant advance in phase resetting theory that allows the investigation of large networks in which individual neurons receive multiple inputs per cycle, without assuming special network connectivity. Furthermore, our generalization of the phase response curve and its proof of concept application to predicting the existence and stability of phase-locked modes in a biologically relevant three-neuron network with a dynamic feedback loop is limited neither to weak coupling nor to one-to-one firing patterns. Indeed, the coupling strengths used were quite large, such that they reset the firing period of the interneuron #3 by 25%, from 80 ms to 60 ms.

Appendix 1

Single stimulus phase response curve method

There are two main experimental protocols for measuring the single stimulus PRC in isolated cells: (1) single stimulus and (2) recurring (periodic) stimuli. In the case of a single stimulus protocol, a free running neural oscillator with the intrinsic period Pi is perturbed at a certain instant called the stimulus time ts, which is measured from an arbitrary phase reference φ = 0, e.g. the zero crossing of the membrane potential with a positive slope. As a result of the perturbation, the length of the current cycle that contains the stimulation (see Fig 1A) may be transiently shortened or lengthened to a new duration P1. The relative change in the duration of the current cycle with respect to the unperturbed duration Pi determines the first order PRC in response to a single and nonrecurring stimulus:

(16) F(1)(φ) = (P1 − Pi)/Pi,

where the superscript (1) emphasizes that the resetting is due to a single input per cycle; this has been used as the “classical” definition of the PRC. Based on Eq (16), a negative value of the PRC means that the next spike is advanced, otherwise it is delayed. Others [58, 71] prefer to flip the sign in Eq (16) and associate a positive sign with a phase advance. Oftentimes, the effect of a single stimulus extends to subsequent cycles and is measured by higher order PRCs [47, 48, 50]. Usually, one records at least five cycles, until the neural oscillator returns to its unperturbed oscillatory activity [31, 32]. Afterwards, another single stimulus is applied at a different phase to quantify its effect on the isolated neuron (open loop experimental setup).

In the case of recurring external stimuli, the interpretation of the phase resetting and its usage in phase-locked mode prediction is complicated by (1) the fact that the measured resetting compounds multiple PRC orders in a potentially nonlinear manner and (2) the activation of slow currents and/or long term potentiation (see [72] for examples and [32] for higher order PRC applications).

Normal Forms of Single Stimulus Phase Response Curves. A neural oscillator close to a saddle-node bifurcation, which presents a continuous frequency versus bias current (f-I) curve that extends to arbitrarily low frequencies (see solid circles in Fig 1B), usually has a type 1 PRC that looks unimodal, as in Fig 1C (although for counterexamples see [49, 56]). Close to the bifurcation point, type 1 unimodal PRCs are described analytically by the following equation [57]:

(17) F(1)(φ) = cSN(1 − cos(ωts)) = cSN(1 − cos(2πφ)),

where cSN is a constant determined by the neural model and ω = 2π/Pi is the intrinsic angular frequency of the oscillator. In this study, we used the simplified analytical form given by Eq (17) to get analytical insights into the general behavior of the three-neuron network with a dynamic loop shown in Fig 3.

By least square fitting the numerically generated PRCs for each neuron, in response to a single spike from its corresponding presynaptic neuron, with the theoretical formula of the normal form PRC given by Eq (17), we found that the coupling strength coefficients c depend linearly on the maximum synaptic couplings gsyn: c12 = −6.1733g12 − 0.0003, c23 = −6.9555g23 − 0.0005, and c32 = 7.2764g32 + 0.0002.
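A sketch of this fitting step is shown below. Since the numerically generated PRC data are not reproduced here, synthetic samples of the normal form (with added noise) stand in for the measured open loop PRC, and SciPy's curve_fit recovers the coefficient c.

```python
import numpy as np
from scipy.optimize import curve_fit

# Least-squares fit of the normal form PRC, F1(phi) = c*(1 - cos(2*pi*phi)),
# to sampled PRC data. The "measured" PRC below is synthetic (c_true plus noise),
# standing in for a PRC generated from the conductance-based model.
def normal_form(phi, c):
    return c * (1.0 - np.cos(2.0 * np.pi * phi))

rng = np.random.default_rng(0)
phi = np.linspace(0.0, 1.0, 41)
c_true = -6.9555 * 0.0275 - 0.0005      # e.g. the c23 corresponding to g23 = 0.0275
measured = normal_form(phi, c_true) + 0.002 * rng.standard_normal(phi.size)

c_fit, _ = curve_fit(normal_form, phi, measured, p0=[-0.1])
print(f"c_true = {c_true:.4f}, c_fit = {c_fit[0]:.4f}")
```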

Phase resetting in response to multiple stimuli

Assuming that the resetting induced by one stimulus takes effect “almost” instantaneously, i.e. before the arrival of the second stimulus, the effects of two stimuli applied during the same cycle are independent of each other and we could use the single stimulus PRC defined by Eq (16) (shown in Fig 1C and 1D) to compute the phase resetting in response to two or more stimuli. In order to compute the phase resetting induced by the second stimulus based on Eq (16), we need to correctly compute its phase (see Fig 2A). The phase of the first stimulus that arrives at a stimulus time tsa is φa = tsa/Pi. The first stimulus produces an “almost” instantaneous phase resetting and changes the firing period to:

(18) Pa = Pi(1 + F(1)(φa)).

When the second stimulus arrives at a stimulus time tsb > tsa, the neuron already has a different firing period Pa due to the previous stimulus. As a result, the phase of the second stimulus is φb = tsb/Pa and the new firing period due to the second stimulus is:

(19) Pb = Pa(1 + F(1)(φb)),

where we used the same definition of the first order phase resetting for a single stimulus as in Eq (16). By substituting Eq (18) into Eq (19) one obtains:

(20) Pb = Pi(1 + F(1)(φa))(1 + F(1)(φb)),

which could be rewritten in a form that resembles Eq (16) as:

(21) Pb = Pi(1 + F(2)(φa, φb)),

where the superscript (2) emphasizes that the new transient period Pb is computed in response to two stimuli arriving at phases φa and, respectively, φb > φa during the same cycle. By comparing the definition from Eq (21) against the derived resetting from Eq (20), we found that:

(22) F(2)(φa, φb) = (1 + F(1)(φa))(1 + F(1)(φb)) − 1,

which has the advantage that it can predict the phase resetting in response to two stimuli by recursively using the single stimulus PRC defined in Eq (16). A typical two stimuli phase response curve F(2) is shown in Fig 2B.

Furthermore, our novel derivation of the PRC in response to two stimuli given by Eq (22) generalizes to an arbitrary number of inputs per cycle as follows:

(23) F(N)(φ1, …, φN) = ∏k=1…N (1 + F(1)(φk)) − 1, with φk = tsk/Pk−1 and Pk = Pk−1(1 + F(1)(φk)),

where P0 = Pi is the intrinsic firing period of the isolated neuron, ts(k−1) < tsk (stimuli are ordered in time), and tsk < Pk−1 (stimulus k still falls inside the transiently modified period due to the previous stimulus).
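A direct implementation of this recursion is straightforward; the sketch below walks through an arbitrary list of stimulus times, letting each stimulus act on the period left by the previous one. The PRC shape and the stimulus times are illustrative only.

```python
import numpy as np

# Recursive resetting produced by an arbitrary number of stimuli in one cycle:
# each stimulus sees the period left by the previous one. Illustrative PRC below.
def F1(phi, c=-0.05):
    return c * (1.0 - np.cos(2.0 * np.pi * phi))

def multi_stimulus_period(Pi, stimulus_times, F1):
    """Return P_N and F^(N) for stimulus times delivered within the same cycle."""
    P = Pi
    for ts in sorted(stimulus_times):
        if ts >= P:            # stimulus must fall inside the (modified) cycle
            break
        P = P * (1.0 + F1(ts / P))
    return P, P / Pi - 1.0

P_N, F_N = multi_stimulus_period(Pi=70.0, stimulus_times=[10.0, 25.0, 40.0], F1=F1)
print(f"P_N = {P_N:.2f} ms, F^(N) = {F_N:+.4f}")
```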

Appendix 2

Stability Conditions for Two-Dimensional Recursive Maps. The characteristic polynomial of any first order recursive equation of two variables, such as Eq (6), is:

(24) λ² − Tr(A)λ + Det(A) = 0,

where Tr(A) and Det(A) are the trace and, respectively, the determinant of the recursion matrix of the perturbations (δt2sa, δt2sb), such as the one in Eq (15). The first order recursions have the following solution:

(25) δt[n] = C1λ1ⁿ + C2λ2ⁿ,

where C1 and C2 are constants determined by the initial conditions and λi (with i = 1, 2) are the roots of the characteristic polynomial Eq (24). For the perturbations to die out, all characteristic roots must have magnitude less than unity, i.e. |λi| < 1 for both i = 1, 2. To ensure stability, there are two possibilities: (1) the roots of the characteristic polynomial are real and both have magnitude less than unity, or (2) the roots are complex conjugated with a magnitude less than unity.

Real characteristic roots. In this case, the following conditions must be met:

(26) (Tr(A))² − 4Det(A) ≥ 0, Det(A) > Tr(A) − 1, and Det(A) > −Tr(A) − 1.

The region where all three conditions are met is shown in Fig 7 with crossed hashing, i.e. the region below the parabolic curve and above the two straight, tangent, lines.

Fig 7. Stability regions of the two-dimensional recursive maps.

The stability condition for any recursive map requires that all roots of the characteristic polynomial have magnitude less than unity (|λ| < 1). For a two-dimensional, first order, recursive map there are only two parameters that control the stability conditions above, i.e. the trace x = Tr(A) and the determinant y = Det(A) of the characteristic matrix. The parabolic curve in the (x, y) plane separates real from imaginary roots of the characteristic polynomial. For real roots, i.e. below the parabolic curve, the stability region is limited to the areas above the two tangent lines to the parabola (see 45 degree hashed areas). For imaginary roots, i.e. above the parabolic curve, the stability region is also limited to the area below the unit value, since |λ|² = Det(A) (see hashed area with horizontal lines).

https://doi.org/10.1371/journal.pone.0174304.g007

Imaginary characteristic roots. In this case, the discriminant of the characteristic polynomial is negative, i.e. −Det(A) + (Tr(A))²/4 < 0. At the same time, the magnitude of each complex conjugated characteristic root is |λ1,2| = √Det(A), so the stability condition |λ| < 1 requires Det(A) < 1. As a result, the stability region in the case of complex characteristic roots is the area above the parabolic curve and below Det(A) = 1, shaded with horizontal lines in Fig 7.
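For completeness, the two cases can be combined into a single stability test that takes only the trace and determinant of the recursion matrix, as sketched below. The example values are the trace and determinant corresponding to the eigenvalues λ1 = 0.489 and λ2 = 0.779 reported in section 7.

```python
import numpy as np

# Stability test for a 2D linear recursion written directly in terms of the
# trace and determinant of the recursion matrix (the criteria of Appendix 2).
def stable_from_trace_det(tr, det):
    disc = tr * tr / 4.0 - det
    if disc >= 0.0:                       # real roots: both must lie inside (-1, 1)
        lam = np.array([tr / 2.0 + np.sqrt(disc), tr / 2.0 - np.sqrt(disc)])
        return bool(np.all(np.abs(lam) < 1.0))
    return det < 1.0                      # complex pair: |lambda| = sqrt(det) < 1

# Tr and Det built from lambda1 = 0.489 and lambda2 = 0.779 (section 7)
print(stable_from_trace_det(0.489 + 0.779, 0.489 * 0.779))   # -> True (stable)
```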

Supporting information

S1 File. Mathematica code.

The Mathematica code simulates the driven-driver neural network with adaptive feedback. It uses Morris-Lecar type 1 neurons and chemical couplings between neurons to produce a stable phase-locked firing pattern.

https://doi.org/10.1371/journal.pone.0174304.s001

(NB)

Acknowledgments

We are grateful to the reviewers for their constructive comments that allowed us to improve the computational implementation of the neural network and the overall quality of the manuscript. This research was supported by US National Science Foundation Career Award IOS-1054914 to SAO.

Author Contributions

  1. Conceptualization: SAO.
  2. Data curation: SAO.
  3. Formal analysis: SAO.
  4. Funding acquisition: SAO.
  5. Investigation: SAO DIA.
  6. Methodology: SAO DIA.
  7. Project administration: SAO.
  8. Resources: SAO.
  9. Software: SAO DIA.
  10. Supervision: SAO.
  11. Validation: SAO.
  12. Visualization: SAO.
  13. Writing – original draft: SAO.
  14. Writing – review & editing: SAO DIA.

References

  1. 1. Buzsaki G. Rhythms of the Brain. New York: Oxford University Press; 2011.
  2. 2. Harris KD, Henze DA, Hirase H, Leinekugel X, Dragoi G, Czurko A, et al. Spike train dynamics predicts theta-related phase precession in hippocampal pyramidal cells. Nature. 2002;417:738–741. pmid:12066184
  3. 3. Hirase H, Czurko A, Csicsvari J, Buzsaki G. Firing rate and theta-phase coding by hippocampal pyramidal neurons during space clamping. Eur J Neurosci. 1999;11:4373–4380. pmid:10594664
  4. 4. Kamondi A, Acsady L, Wang XJ, Buzsaki G. Theta oscillations in somata and dendrites of hippocampal pyramidal cells in vivo: activity-dependent phase-precession of action potentials. Hippocampus. 1998;8:244–261. pmid:9662139
  5. 5. Busch NA, Herrmann CS, Muller MM, Lenz D, Gruber T. A cross-laboratory study of event-related gamma activity in a standard object recognition paradigm. Neuroimage. 2006;33:1169–1177. pmid:17023180
  6. 6. Kendrick KM, Zhan Y, Fischer H, Nicol AU, Zhang X, J F. Learning alters theta amplitude, theta-gamma coupling and neuronal synchronization in inferotemporal cortex. BMC Neuroscience. 2011;12:471–2202.
  7. 7. Tort ABL, Komorowski R, Eichenbaum H, Kopell N. Measuring Phase-Amplitude Coupling Between Neuronal Oscillations of Different Frequencies. J Neurophysiol. 2010;104:1195–1210. pmid:20463205
  8. 8. Karalis N, Dejean C, Chaudun F, Khoder S, Rozeske RR, Wurtz H, et al. 4-Hz oscillations synchronize prefrontal-amygdala circuits during fear behavior. Nature Neuroscience. 2016;19:605–612. pmid:26878674
  9. 9. Stujenske JM, Likhtik E, Topiwala MA, Gordon JA. Fear and safety engage competing patterns of theta-gamma coupling in the basolateral amygdala. Neuron. 2014;83:919–933. pmid:25144877
  10. 10. Rizzuto DS, Madsen JR, Bromfield EB, Schulze-Bonhage A, Seelig D, Aschenbrenner-Scheibe R, et al. Reset of human neocortical oscillations during a working memory task. Proc Natl Acad Sci USA. 2003;100:7931–7936. pmid:12792019
  11. 11. Malerba P, Kopell N. Phase resetting reduces theta-gamma rhythmic interaction to a one-dimensional map. J Math Biol. 2013;66:1361–1386. pmid:22526842
  12. 12. Colgin LL, Denninger T, Fyhn M, Hafting T, Bonnevie T, Jensen O, et al. Frequency of gamma oscillations routes flow of information in the hippocampus. Nature. 2009;462:353–357. pmid:19924214
  13. 13. Colgin LL. Theta-gamma coupling in the entorhinal-hippocampal system. Current opinion in neurobiology. 2015;31:45–50. pmid:25168855
  14. 14. Giraud AL, Poeppel D. Cortical oscillations and speech processing: emerging computational principles and operations. Nat Neurosci. 2012;15:511–514. pmid:22426255
  15. 15. Luo H, Poeppel D. Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex. Neuron. 2007;54:1001–1010. pmid:17582338
  16. 16. Porterfield VM, Piontkivska H, Mintz EM. Identification of novel light-induced genes in the suprachiasmatic nucleus. BMC Neuroscience. 2007;8:98. pmid:18021443
  17. 17. Rusak B, Zucker I. Neural regulation of circadian rhythms. Physiol Rev. 1979;59:449–526. pmid:379886
  18. 18. Sauseng P, Klimesch W, Gruber WR, Hanslmayr S, Freunberger R, Doppelmayr M. Are event-related potential components generated by phase resetting of brain oscillations? A critical discussion. Neuroscience. 2007;146(4):1435–1444. pmid:17459593
  19. 19. Shah AS, Bressler SL, Knuth KH, Ding M, Mehta AD, Ulbert I, et al. Neural dynamics and the fundamental mechanisms of event-related brain potentials. Cereb Cortex. 2004;1991(14):476–483.
  20. 20. Herrmann CS, Knight RT. Mechanisms of human attention: event-related potentials and oscillations. Neuroscience and Biobehavioral Reviews. 2001;25(6):465–476. pmid:11595268
  21. 21. Lakatos P, Karmos G, Mehta AD, Ulbert I, Schroeder CE. Entrainment of Neuronal Oscillations as a Mechanism of Attentional Selection. Science. 2008;320(5872):110–113. pmid:18388295
  22. 22. Buhusi CV, Meck WH. What makes us tick? Functional and neural mechanisms of interval timing. Nature Reviews Neuroscience. 2005;6:755–765. pmid:16163383
  23. 23. Oprisan SA, Buhusi CV. Why noise is useful in functional and neural mechanisms of interval timing? BMC Neuroscience. 2013;14(1):1–12.
  24. 24. Oprisan SA, Buhusi CV. How noise contributes to time-scale invariance of interval timing. Phys Rev E. 2013;87(5):052717.
  25. 25. Buhusi CV, Oprisan SA. Time-scale invariance as an emergent property in a perceptron with realistic, noisy neurons. Behavioral Processes. 2013;95(5):60–70.
  26. 26. Oprisan SA, Dix S, Buhusi CV. Phase resetting and its implications for interval timing with intruders. Behavioral Processes. 2014;101:146–153.
  27. 27. Oprisan SA, Buhusi CV. What is all the noise about in interval timing? Philosophical Transactions of the Royal Society B: Biological Sciences. 2014;369:20120459.
  28. 28. Dilgen JE, Tompa T, Saggu S, Naselaris TD, Lavin A. Optogenetically evoked gamma oscillations are disturbed by cocaine administration. Frontiers in Cellular Neuroscience. 2013;7(213). pmid:24376397
  29. 29. Oprisan SA, Lynn PE, Tompa T, Lavin A. Low-dimensional attractor for neural activity from local field potentials in optogenetic mice. Front Comput Neurosci. 2015;8:125.
  30. 30. Oprisan SA, Canavier CC. Stability Analysis of Rings of Pulse-Coupled Oscillators: The Effect of Phase Resetting in the Second Cycle After the Pulse Is Important at Synchrony and For Long Pulses. Journal of Differential Equations and Dynamical Systems. 2002;(3-4):243–258.
  31. 31. Oprisan SA, Thirumalai V, Canavier CC. Dynamics from a time series: Can we extract the phase resetting curve from a time series? Biophysical Journal. 2003; p. 2919–2928. pmid:12719224
  32. 32. Oprisan SA, Prinz AA, Canavier CC. Phase resetting and phase locking in hybrid circuits of one model and one biological neuron. Biophysical Journal. 2004; p. 2283–2298. pmid:15454430
  33. 33. Voss HU. Anticipating chaotic synchronization. Phys Rev E. 2000;61(5):5115.
  34. 34. Calvo O, Chialvo DR, Eguiluz VM, Mirasso C, Toral R. Anticipated synchronization: A metaphorical linear view. Chaos. 2004;(1):7–13. pmid:15003039
  35. 35. Ciszak M, Mirasso CR, Toral R, Calvo O. Predict-prevent control method for perturbed excitable systems. Phys Rev E. 2009;79:046203.
  36. 36. Stepp N, Turvey MT. On strong anticipation. Cognitive Systems Research. 2010;11(2):148–164. pmid:20191086
  37. 37. Stephan KE, Zilles K, Kotter R. Coordinate-independent mapping of structural and functional data by objective relational transformation (ORT). Philos Trans R Soc Lond B Biol Sci. 2000;355:37–54. pmid:10703043
  38. 38. Sausedo-Solorio JM, Pisarchik AN. Synchronization of map-based neurons with memory and synaptic delay. Physics Letters A. 2014;378(30-31):2108–2112.
  39. 39. Simonov AY, Gordleeva SY, Pisarchik AN, Kazantsev VB. Synchronization with an arbitrary phase shift in a pair of synaptically coupled neural oscillators. JETP Letters. 2014;98(10):632–637.
  40. 40. Matias FS, Carelli PV, Mirasso CR, Copelli M. Self-Organized Near-Zero-Lag Synchronization Induced by Spike-Timing Dependent Plasticity in Cortical Populations. PLoS ONE. 2015;10:e0140504. pmid:26474165
  41. 41. Matias FS, Carelli PV, Mirasso CR, Copelli M. Anticipated synchronization in a biologically plausible model of neuronal motifs. Phys Rev E. 2011;84:021922.
  42. 42. Matias FS, Gollo LL, Carelli PV, Bressler SL, Copelli M, Mirasso CR. Modeling positive Granger causality and negative phase lag between cortical areas. NeuroImage. 2014;99:411–418. pmid:24893321
  43. 43. Oprisan SA, Canavier CC. Stability criterion for a two-neuron reciprocally coupled network based on the phase and burst resetting curves. Neurocomputing. 2005; p. 733–739.
  44. 44. Oprisan SA, Boutan C. Prediction of Entrainment and 1:1 Phase-Locked Modes in Two-Neuron Networks Based on the Phase Resetting Curve Method. International Journal of Neuroscience. 2008;(6):867–890. pmid:18465430
  45. 45. Oprisan SA. Stability of Synchronous Oscillations in a Periodic Network. International Journal of Neuroscience. 2009;(4):482–491.
  46. 46. Oprisan SA. Existence and stability criteria for phase-locked modes in ring neural networks based on the spike time resetting curve method. Journal of Theoretical Biology. 2010;262(2):232–244. pmid:19818355
  47. 47. Oprisan SA. A Geometric Approach to Phase Resetting Estimation Based on Mapping Temporal to Geometric Phase. In: Schultheiss NW, Prinz AA, Butera RJ, editors. Phase Response Curves in Neuroscience. vol. 6. New York: Springer; 2012. p. 131–162.
  48. 48. Oprisan SA. Existence and Stability Criteria for Phase-Locked Modes in Ring Networks Using Phase-Resetting Curves and Spike Time Resetting Curves. In: Schultheiss NW, Prinz AA, Butera RJ, editors. Phase Response Curves in Neuroscience. vol. 6. New York: Springer; 2012. p. 131–162.
  49. 49. Oprisan SA. All phase resetting curves are bimodal, but some are more bimodal than others. ISRN Computational Biology. 2013; p. 1–11.
  50. 50. Oprisan SA. Multistability of Coupled Neuronal Oscillators. In: Dieter J, Ranu J, editors. Encyclopedia of Computational Neuroscience. New York: Springer; 2014. p. 1–15.
  51. 51. Mirollo RM, Strogatz SH. Synchronization of pulse-coupled biological oscillators. SIAM Journal of Applied Mathematics. 1990;50:1645–1662.
  52. 52. Perkel DH, Schulman JH, Bullock TH, Moore GP, Segundo JP. Pacemaker neurons: Effects of regularly spaced synaptic input. Science. 1964;145:61–63. pmid:14162696
  53. 53. Winfree AT. Electrical instability in cardiac muscle: Phase singularities and rotors. Journal of Theoretical Biology. 1989;138:353–405. pmid:2593680
  54. 54. Winfree AT. The geometry of biological time. New York: Springer-Verlag; 2001.
  55. 55. Vollmer MK, Vanderweyen CD, Tuck DR, Oprisan SA. Predicting phase resetting due to multiple stimuli. Journal of the South Carolina Academy of Science. 2015;13:5.
  56. 56. Ermentrout GB, Glass L, Oldeman BE. The Shape of Phase-Resetting Curves in Oscillators with a Saddle Node on an Invariant Circle Bifurcation. Neural Computation. 2012;24:3111–3125. pmid:22970869
  57. 57. Brown E, Moehlis J, Holmes P. On the Phase Reduction and Response Dynamics of Neural Oscillator Populations. Neural Computation. 2004;14:673–715.
  58. 58. Ermentrout GB. Type I Membranes, Phase Resetting Curves, and Synchrony. Neural Computation. 1996;8:979–1001. pmid:8697231
  59. 59. Izhikevich E. Neural excitability, spiking and bursting. Int J Bif Chaos. 2000;10:1171–1266.
  60. 60. Hodgkin AL, Huxley AF. The local electric changes associated with repetitive action in a non-medullated axon. J Physiol. 1948;107:165–181. pmid:16991796
  61. 61. Hodgkin AL, Huxley AF. Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo. J Physiol. 1952;116:449–472. pmid:14946713
  62. 62. Hodgkin AL, Huxley AF. The components of membrane conductance in the giant axon of Loligo. J Physiol. 1952;116:473–496. pmid:14946714
  63. 63. Hodgkin AL, Huxley AF. The dual effect of membrane potential on sodium conductance in the giant axon of Loligo. J Physiol. 1952;116:497–506. pmid:14946715
  64. 64. Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol. 1952;117:500–544. pmid:12991237
  65. 65. Bean BP. The action potential in mammalian central neurons. Nat Rev Neurosci. 2007;8:451–465. pmid:17514198
  66. 66. Hille B. Ion channels of excitable membranes. 3rd ed. Sunderland, MA: Sinauer; 2001.
  67. 67. Morris C, Lecar H. Voltage Oscillations in the barnacle giant muscle fiber. Biophys J. 1981;35:193–213. pmid:7260316
  68. 68. Skinner FK, Bazzazi H, Campbell SA. Two-cell to N-cell heterogeneous, inhibitory networks: precise linking of multi-stable and coherent properties. J Comput Neurosci. 2005;18:343–352. pmid:15830170
  69. 69. Netoff TI, Acker CD, Bettencourt JC, White JA. Beyond two-cell networks: experimental measurement of neuronal responses to multiple synaptic inputs. J Comput Neurosci. 2005;18:287–295. pmid:15830165
  70. 70. Pervouchine DD, Netoff TI, Rotstein HG, White JA, Cunningham MO, Whittington MA, et al. Low dimensional maps encoding dynamics in entorhinal cortex and hippocampus. Neural Comput. 2006;18:2617–2650. pmid:16999573
  71. 71. Nadim F, Zhao S, Bose A. A PRC Description of How Inhibitory Feedback Promotes Oscillation Stability. In: Schultheiss NW, Prinz AA, Butera RJ, editors. Phase Response Curves in Neuroscience. vol. 6. New York: Springer; 2012. p. 399–417.
  72. 72. Wilson CJ, Beverlin B, Netoff T. Chaotic Desynchronization as the Therapeutic Mechanism of Deep Brain Stimulation. Front Syst Neurosci. 2011;5:50. pmid:21734868