Algorithm for the Time-Propagation of the Radial Diffusion Equation Based on a Gaussian Quadrature

  • Dirk Gillespie

    dirk_gillespie@rush.edu

    Affiliation Department of Molecular Biophysics and Physiology, Rush University Medical Center, 1750 West Harrison Street, Suite 1289, Chicago, Illinois, United States of America

Abstract

The numerical integration of the time-dependent spherically-symmetric radial diffusion equation from a point source is considered. The flux through the source can vary in time, possibly stochastically based on the concentration produced by the source itself. Fick’s one-dimensional diffusion equation is integrated over a time interval by considering a source term and a propagation term. The source term adds new particles during the time interval, while the propagation term diffuses the concentration profile of the previous time step. The integral in the propagation term is evaluated numerically using a combination of a new diffusion-specific Gaussian quadrature and interpolation on a diffusion-specific grid. This balances accuracy against the number of points needed for both integration and interpolation. The theory can also be extended to include a simple reaction-diffusion equation in the limit of high buffer concentrations. The method is unconditionally stable. In fact, it not only converges for any time step Δt; Δt can also be arbitrarily large, since it is defined solely by the timescale on which the flux source turns on and off, an advantage over other methods.

Introduction

Diffusion is one of the classic problems in physics, with Fick’s governing laws dating back more than 150 years [1, 2] and a number of books devoted to the subject [3, 4, 5]. From both the mathematical and numerical points of view, the diffusion equation (especially the spherically-symmetric version considered here) is very well studied; its mathematical properties are known and a number of numerical techniques for solving it exist (e.g., explicit-time methods like forward-time/central-space (FTCS), implicit-time methods like backward-time/central-space (BTCS) and Crank-Nicolson, and more sophisticated ones [6]). While diffusion has been studied experimentally and theoretically for a very long time, it is still of central importance in many areas today. For example, it plays vital roles in biology [7] and new technologies like nanofluidics [8].

For these applications, the efficient numerical solution of the diffusion equation is fundamental to understanding the physics underlying these new applications. One important example is calcium-induced calcium release (CICR) in cardiac muscle, where an array of Ca2+-selective ion channels open and close with a probability that depends on the local Ca2+ concentration, which comes from the diffusion of Ca2+ through neighboring channels [9]. In this sense, the channel’s open/closed state history affects its future state in a feedback mechanism. For such systems, the source flux is changing stochastically in time, depending on the on/off states of all the other flux sources. Moreover, since each channel is ∼ 40 nm across, the calculation of a 10 × 10 array of these channels must be accurate over long distances (at least 560 nm) to understand the complex interactions.

Traditional numerical algorithms like BTCS and Crank-Nicolson divide the space coordinate into small slices to calculate numerical derivatives and produce a matrix equation that must be solved at each time step. Similarly, the conditionally-stable explicit-time methods like FTCS require a matrix/vector multiplication. Because the matrix is sparse, the number of flops for this solve is proportional to the number of grid points. These approaches are very valuable and easy to implement, but when the number of grid points reaches well into the thousands for large systems other approaches might be more computationally efficient.

Here, one alternative numerical technique to compute the concentration profile for radial Fickian diffusion from a point source is described. Specifically, the spherically-symmetric equation considered here is
∂c/∂t = D (1/r²) ∂/∂r (r² ∂c/∂r) + s(r, t) (1)
with the initial condition
c(r, 0) = c0, (2)
which we take to be a constant. Here, c(r, t) is the concentration (number per unit volume) of the diffusing particles and s(r, t) is the flux per volume (number per unit volume per time) injected into the system; a point source is considered. D is the diffusion coefficient and the variables r and t are the distance from the origin and time, respectively.

The purpose of this paper is to describe a method to solve Eq (1) numerically for c(r, t). Instead of discretizing the differential equation itself, the propagation algorithm described here is based on an integral version of the solution to Eq (1). This approach splits the solution into two components, the diffusion from the source during the time interval (whose exact solution is known) and the propagation of the particles released during previous time intervals. Here, we show that the propagation integral can be very efficiently calculated using a combination of (1) a new Gaussian quadrature that is specifically formulated for the diffusion equation and (2) interpolation with grid points whose locations are chosen based on the exact solution to the diffusion equation. This, then, minimizes both the number of points needed to evaluate the integral and the number of points needed to interpolate the results between grid points.

The propagation method offers advantages over more classical techniques. First, since no numerical time derivatives are used, the time step Δt does not have to be small. In the propagation method, Δt is determined by the source’s timescale. For example, if the source flux varies smoothly, then Δt is chosen to be small to approximate that function. However, if the source turns on and off stochastically, then Δt should be the timescale that the source operates on; if, for example, it changes at most once a second, then Δt is 1 second. Second, since no numerical spatial derivatives are used with a constant Δr, very large numbers of grid points (and therefore very large tridiagonal matrices) are avoided. In the propagation method, a nonuniform grid is created that places points optimally for the diffusion problem, and interpolation between those points is shown to be accurate.

Preliminary calculations show that a 10-point Gaussian quadrature and ∼ 100 interpolation grid points can give very accurate results, even over long simulations using only a matrix/vector multiplication with a sparse matrix containing < 1400 nonzero entries. These calculations also show that it is possible to evaluate the error of the propagation technique. Moreover, the technique is fast enough to find an optimal set of simulation parameters that maximize accuracy and minimize calculation time. This error and parameter determination is part of the overhead of the simulation so that simulations of the system of interest can be done knowing that the error in the concentration is always within acceptable limits and that the run time is the shortest possible.

Theory

Propagation integral formulation

For the source flux in Eq (1) we consider a point source. When the point source has a constant flux j that does not vary in time, so that s(r, t) = j δ(r), (3) where δ(r) is the Dirac delta function, a simple closed-form formula exists in terms of the complementary error function [3]: (4) where (5)
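
As a concrete illustration, the following short Python sketch evaluates this closed-form solution, assuming Eq (4) has the textbook form c(r, t) = c0 + j/(4πDr) erfc(r/√(4Dt)); the parameter values are illustrative only.

# Sketch of the constant-source solution of Eqs (4)-(5), assuming the textbook
# form c(r, t) = c0 + j/(4*pi*D*r) * erfc(r / sqrt(4*D*t)).
import numpy as np
from scipy.special import erfc

def c_constant_source(r, t, j, D, c0=0.0):
    """Concentration at distance r (m) and time t (s) from a point source
    emitting j particles per second into free space since t = 0."""
    return c0 + j / (4.0 * np.pi * D * r) * erfc(r / np.sqrt(4.0 * D * t))

# Illustrative, ion-channel-like numbers (cf. the Example section below).
D, j = 1e-9, 1e7                                  # m^2/s, particles/s
r = np.array([10e-9, 50e-9, 250e-9])              # m
print(c_constant_source(r, t=1e-4, j=j, D=D))     # particles per m^3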

However, when the source is not constant over time, Eq (1) must be solved numerically.

One way to attack this problem is via classic finite difference or finite elements methods. Methods like FTCS, BTCS, and Crank-Nicolson produce matrix equations to be solved for the concentration at previously-defined spatial grid points. The choice of grid points is crucial since minimizing the number of points is critical for fast calculation, but accuracy may suffer if the number is too small. The goal of this paper is to provide an alternative solution method that is not based on finite differences or finite elements. This approach propagates the concentration profile in time with a sparse-matrix/vector multiplication, as compared to solving a sparse-matrix equation. Since the grid points in this paper are based on a Gaussian quadrature and an interpolation scheme that are both specifically developed for this diffusion problem, the number of grid points and their locations are optimized in the sense of producing a high-order solution with a minimum number of points.

The approach we use here involves decomposing the concentration into two parts, a source part that adds new particles from the point source during the discretized time interval Tn = (tn, tn+1) and a propagation part that diffuses the concentration profile of the last time step through the time interval Tn [3]. That is, (6) where (7) and (8) (9) where χn is 0 or 1 depending on whether the source is off or on during the time interval Tn, respectively, jn is the flux during Tn, and Δt = tn+1 − tn is assumed to be the same for all n. These classic formulas may be found, for example, in Section 8.4 of the book by Barton [3]. (Note that for a half-space problem like flux through an ion channel on one side of a membrane, the denominator includes 2π not 4π because the surface area is half that of the whole sphere.)

Here, the source flux takes the form (10) and is assumed to be constant during the discretized time intervals: (11)

Similarly, χn, the on or off state of the source, is the same during the entire time interval [tn, tn+1). Note that here we do not assume that the χn are known beforehand; if they were, the analytic solution of the diffusion equation in Eq (54) (derived later) could be used instead. Here, the on/off state of the source can vary due to random inputs that depend on the concentration profile produced by the source. For example, in CICR, the concentration profile produced by one channel affects the open state of its neighbors, which in turn produce Ca2+ profiles that affect the original channel.

It is also important to note that while we assume that j is constant over a time interval, that does not mean we are only considering fluxes that are either on or off or that j must be the same during all time intervals. Specifically, if the source flux is a smoothly varying function, the time intervals Tn should be chosen small enough so that j(t) is well-approximated by the piecewise-constant function j(t) ≈ jn ≡ j(tn) for t ∈ Tn. In the notation used here, jn is the flux during the n-th time interval and χn is a random variable that takes values 0 or 1. The χ function is convenient for the case of a randomly on or off source. For a smoothly varying source, on the other hand, χn = 1 for all n and the jn are different for different n.

Since the source term is the concentration profile due to the time-independent source flux during the time interval Tn, Eq (9) is just Eq (4) applied to this time interval. In Eqs (7) and (8), the vectors r and r′ (with lengths r and r′, respectively) are three-dimensional locations near the point source. The propagation integral can be simplified by using the relations (12) and (13) to give (14)

Accurately evaluating this integral is computationally difficult because it requires knowing the function c(r′, tn) for all r′ from 0 to ∞. This is generally difficult because one needs at least some prior knowledge about a reasonable finite upper limit and about acceptable grid spacings for the r′. One purpose of this paper is to develop an order-N integration scheme for this propagation integral using a Gaussian quadrature.

One important thing to note is that Eqs (9) and (14) are exact solutions of the diffusion equation over a time step Δt. Therefore, Δt can be arbitrarily large. Importantly, it is not constrained by having to be small to numerically approximate a time derivative, as it must be in other methods like FTCS, BTCS, and Crank-Nicolson. Δt is then defined only by the time course of the flux source, the rate at which it turns on and off. This is one way that the propagation method provides an advantage over other existing alternatives. Moreover, because Δt can be anything, the propagation method is unconditionally stable.
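
Before introducing the quadrature, it may help to see the integral written out as a brute-force numerical recipe. The sketch below assumes Eq (14) reduces to the familiar image-source form cprop(r, tn+1) = 1/(r√(4πDΔt)) ∫0^∞ r′ c(r′, tn) [exp(−(r − r′)²/(4DΔt)) − exp(−(r + r′)²/(4DΔt))] dr′; it uses a fine uniform r′ grid with the trapezoidal rule, which is exactly the expensive approach that the Gauss-diffusion quadrature developed below is designed to replace.

# Naive baseline for the propagation integral, assuming the image-source form
# of Eq (14) given above. A fine, uniform r' grid is needed here; the GDQ
# scheme replaces it with ~10-20 well-placed points.
import numpy as np
from scipy.integrate import trapezoid

def propagate_naive(r_eval, r_grid, c_grid, D, dt):
    """Diffuse the profile c_grid (defined on r_grid) over one time step dt and
    return the propagated concentration at the single location r_eval."""
    lam2 = 4.0 * D * dt
    kernel = (np.exp(-(r_eval - r_grid) ** 2 / lam2)
              - np.exp(-(r_eval + r_grid) ** 2 / lam2))
    return trapezoid(r_grid * c_grid * kernel, r_grid) / (r_eval * np.sqrt(np.pi * lam2))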

Non-dimensionalization

The first step is non-dimensionalizing r with the diffusion lengthscale by defining (15) with a similar definition for R′. One can then define (16) (and similarly for ρprop(R, tn) and ρsource(R, tn)) so that the propagator integral (14) becomes (17) with the weight function (18) (19)

For the source concentration, (20) where the flux factor Fn is defined as (21)

Then, (22)

Propagation algorithm

The integral in Eq (17) may be evaluated in a number of different ways (e.g., the Fast Gauss Transform [10]). The alternative approach taken here is to find a set of Gauss-diffusion quadrature (GDQ) points with corresponding weights that both depend on R. Then, (23)

This, however, leads to an infinite nesting of grid point evaluations: (24) (25) (26) This is, of course, not desirable because we want a finite, well-defined set of GDQ points {xα}. One can overcome this problem by defining a set of grid points between which interpolation is used to evaluate the function at the GDQ points. Specifically, for each Ri, ρ(Ri, tn+1) can be calculated from Eq (23) after the {xα(Ri)} and {wα(Ri)} have been calculated. The interpolation grid and the corresponding GDQ points and weights may be precalculated and saved.

The interpolation used here is a weighted sum of the interpolation points, so that (27)

The interpolation weights depend on the evaluation point because the Rj are nonuniformly spaced. By Eqs (17) and (23), (28) (29) (30) with the combined weights (31)

In order to express the iteration process in matrix/vector notation, define the column vector (32) and the matrix (33)

Then, (34) for the source column vector (35)
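
In code, the overhead stage of the algorithm below amounts to assembling W once. The following sketch assumes Eq (31) has the form Wij = Σα wα(Ri) Aj(xα(Ri)); the helper interp_weights, which returns the indices and weights of the grid points used to interpolate at an arbitrary location, is hypothetical and stands in for the Fornberg routine of Appendix A.

# Sketch of assembling the combined weight matrix W of Eq (31), assuming
# W[i, j] = sum_alpha w_alpha(R_i) * A_j(x_alpha(R_i)). The helper
# interp_weights is hypothetical: it returns the indices and weights of the
# N_F grid points used to interpolate at a given location (cf. Appendix A).
import numpy as np
from scipy.sparse import csr_matrix

def assemble_W(R, gdq_nodes, gdq_weights, interp_weights):
    """R: interpolation grid (length I); gdq_nodes[i], gdq_weights[i]: the N
    GDQ points and weights associated with R[i]."""
    rows, cols, vals = [], [], []
    for i in range(len(R)):
        for x_a, w_a in zip(gdq_nodes[i], gdq_weights[i]):
            idx, A = interp_weights(x_a)        # hypothetical Fornberg helper
            for j, A_j in zip(idx, A):
                rows.append(i); cols.append(j); vals.append(w_a * A_j)
    # duplicate (i, j) pairs are summed when converting to CSR
    return csr_matrix((vals, (rows, cols)), shape=(len(R), len(R)))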

One algorithm for evolving ρ(R, tn) in time is then as follows (a code sketch of the simulation step appears after the list):

  1. Overhead (before the simulation)
     (a) Choose the parameters Δt, the maximum time T of the simulation, and the number of GDQ points N.
     (b) Calculate the interpolation grid {Ri} for this T; details are discussed in Subsection Interpolation Grid and Weights.
     (c) Compute the source vector s in Eq (35).
     (d) For each Ri, compute the GDQ points {xα(Ri)} and weights {wα(Ri)}; details are discussed in Subsection GDQ Points and Appendix C.
     (e) For each i, compute the interpolation weights of Eq (27); details are discussed in Subsection Interpolation Grid and Weights and Appendix A.
     (f) Compute the combined weights matrix W using Eq (31).
  2. (optional) Perform an error-checking simulation; details are discussed in Section Error Analysis.
  3. Simulation. Starting with ρ(R, 0) = c0R for time t = 0,
     (a) evaluate the source state χn and flux jn, possibly based on the concentrations ρn from the previous time step;
     (b) compute ρn+1 from ρn using Eq (34).
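
A minimal sketch of the simulation stage (step 3) is given below. It assumes the update of Eq (34) can be written as ρn+1 = W ρn + χn Fn s, where s is the erfc-shaped source profile of Eqs (20) and (35) computed in the overhead stage; the callback source_state is hypothetical and encapsulates whatever stochastic or deterministic rule sets χn and the flux.

# Sketch of simulation step 3: repeated sparse matrix/vector products, assuming
# the update rho_{n+1} = W rho_n + chi_n * F_n * s_shape (Eqs (34)-(35)).
# W, R, and s_shape come from the overhead stage; source_state(n, rho) is a
# hypothetical user callback returning (chi_n, F_n), possibly based on the
# current profile (as in calcium-induced calcium release).
import numpy as np

def run_simulation(W, s_shape, R, c0, n_steps, source_state):
    rho = c0 * R                        # initial condition rho(R, 0) = c0 * R
    history = [rho.copy()]
    for n in range(n_steps):
        chi_n, F_n = source_state(n, rho)
        rho = W @ rho + chi_n * F_n * s_shape
        history.append(rho.copy())
    return np.array(history)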

The overhead is where all the serious programming work goes, specifically to compute the Gaussian quadrature, the interpolation grid, and the interpolation weights. That having been said, however, none of these steps are as difficult as they first appear to be. First, a Mathematica notebook that computes the Gaussian quadrature grid points and weights is available in the Supporting Information (S1 File). Second, programs like Mathematica provide function interpolation routines that can be used in the interpolation grid building strategy described in Interpolation Grid and Weights, thereby requiring little actual programming by the user. Lastly, Fornberg [11] provides an easy-to-implement algorithm for interpolation that, with one tweak, is very fast to compute and easily allows the user to change the interpolation order. The overhead calculations are generally fast, taking a few seconds and, if parallelization is used, less than 1 sec.

At each simulation step, the iteration (Eq (34)) requires a matrix/vector multiplication with the I × I matrix W. However, W is both small and sparse. W is small because I < 200 is generally sufficient for accurate results; this is because the interpolation grid is nonuniformly spaced and chosen based on knowledge of the exact solution of the diffusion problem, thereby a priori putting extra points where they are needed (Section Interpolation Grid and Weights). W is also sparse because only a small number of neighboring points are used in the Fornberg interpolation. In sample calculations, W ranged from 3% to 42% filled, depending on the number of GDQ and Fornberg points, as well as on the spacing of the interpolation grid points (specifically, the number of near and far points, as defined in Interpolation Grid and Weights). The sparsity and small size of W make the propagation scheme very fast.

Interpolation grid and weights

There are two components to the interpolation used in the propagation algorithm, namely choosing the points {Ri} and choosing an interpolation method. For the latter, we use a method by Fornberg [11] based on polynomial interpolation between previously chosen points. This was chosen not only because it is easy to implement and quick to compute, but also because changing the interpolation order NF (i.e., the order of the interpolating polynomial or, equivalently, the number of nearest-neighbor points around each Ri) is effortless for the user. The technical details are given in Appendix A.

The interpolation grid should contain as few points as possible, and therefore it is best to have some knowledge of the mathematical structure of the solution ρ(R, tn). Luckily, it is possible to derive an exact solution (see Appendix B). Specifically, (36) where (37). While one can use this exact solution, it is computationally impractical because no work from time tn can be reused for time tn+1. Therefore, computing ρ(R, t) at every time step up to step T via Eq (36) requires O(T²) operations per location R, which becomes slower than the propagation method proposed here after ∼ 100 to 1000 timesteps.

Eq (36) does, however, provide useful information for constructing the interpolation grid. Specifically, one can find an interpolation grid for the functions em(R) for various m and take their union. This yields points that are located where they are needed most.

The first step is to find the largest possible distance Rmax that could be needed in the simulation that ends at time Tmax. This can be determined by considering the worst-case scenario where the current source is on the entire time. Rmax should be chosen so that the concentration there is below some chosen threshold ɛ; that is, Rmax is the rmax that satisfies (38) or, in nondimensionalized variables, (39) where Fmax corresponds to the largest possible flux jmax encountered during the simulation. The interpolation grid then spans [0, Rmax].

The second step is to analyze the functions em(R). For each m ≥ 1, em(0) = em(∞) = 0 and em(R) has one maximum. Moreover, as m → ∞, the maximum value decays to 0 and so the em(R) → 0 for all R. This is shown in Fig 1. The figure shows that the first few em(R) are the most important numerically and that their contribution is limited to the interval [0, 10] (approximately). We therefore divide the interval [0, Rmax] into two parts, the near (R ≤ 10) and the far (R > 10) intervals. We start with the same initial grid for each m: the near interval is divided uniformly on a linear scale, while the far interval is divided uniformly on a logarithmic scale (that is, [0, 10] is divided uniformly, while for the far interval it is [log(10), log(Rmax)] that is divided uniformly). This serves two purposes. First, it focuses the points where e1(R) has the most structure (Fig 1) and needs to be resolved best, namely in the near interval. Second, it keeps the number of grid points to a minimum. Since one can have Rmax > 5000, using the logarithmic scale places relatively few points in the far interval. More points (e.g., by linearly dividing the far interval) are not necessary because the em(R) have little structure in the far interval that needs to be resolved (Fig 1). Also, since Rmax increases with Tmax, the number of grid points grows logarithmically with Tmax.

Fig 1. The functions em(R) defined in Eq (37).

The hash marks above the main figure are the interpolation grid points for small I (Simulation 1 in Table 1).

https://doi.org/10.1371/journal.pone.0132273.g001

After dividing the near and far intervals uniformly, the next step is to interpolate the functions em(R) on these intervals separately. To do that, polynomial interpolation (usually third order) is used on each subinterval of the uniformly divided interval. Specifically, each subinterval is bisected and both em and the polynomial approximation are evaluated at this midpoint. Each interval is further bisected until em and the interpolation are within a specified tolerance. This is done for many m and the union of all these interpolation grids is taken. This does not add many points because each m starts with the same uniformly-spaced intervals that are then bisected. Some m (especially the small m) will have more bisection steps because they have more structure, but this does not affect the accuracy of the m that do not have many bisections; in fact it only increases their accuracy. In practice, doing this for m up to 500 suffices, with significantly larger m adding only a few extra points. Fig 1 shows an example grid. It was generated using the Mathematica function FunctionInterpolation, which uses this interpolation scheme.
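
A simplified sketch of this grid-building strategy is given below. It starts from the uniform near grid on [0, 10] and the log-uniform far grid on [10, Rmax] and bisects each cell until an interpolation test at the cell midpoint meets a tolerance. For brevity the test compares against linear interpolation rather than the third-order polynomials used above, and test_funcs is a stand-in for the em(R) of Eq (37) (or for Mathematica's FunctionInterpolation); the default counts are illustrative.

# Simplified sketch of the near/far interpolation-grid construction described
# above. test_funcs stands in for the e_m(R) of Eq (37); the midpoint test uses
# linear interpolation for brevity (the paper uses third-order polynomials).
import numpy as np

def build_grid(test_funcs, R_max, n_near=25, n_far=90, tol=1e-6, max_depth=12):
    near = np.linspace(0.0, 10.0, n_near + 1)                          # linear near grid
    far = np.logspace(np.log10(10.0), np.log10(R_max), n_far + 1)[1:]  # log far grid
    grid = set(np.concatenate([near, far]))
    pts = sorted(grid)
    base_cells = list(zip(pts[:-1], pts[1:]))
    for f in test_funcs:                 # refine on each test function; keep the union
        stack = [(a, b, 0) for a, b in base_cells]
        while stack:
            a, b, depth = stack.pop()
            mid = 0.5 * (a + b)
            if depth < max_depth and abs(f(mid) - 0.5 * (f(a) + f(b))) > tol:
                grid.add(mid)
                stack += [(a, mid, depth + 1), (mid, b, depth + 1)]
    return np.array(sorted(grid))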

GDQ points

Gaussian quadrature integration methods are very efficient when numerically integrating a smooth function f(R′) against a weight function Ω(R′) over integration limits a and b by using a weighted sum: (40)

The efficiency over standard integration methods (e.g., the trapezoidal rule) comes from being able to choose the locations Rα as well as the weights ωα; standard integration methods prescribe the Rα, usually as uniformly-spaced points. With the appropriate choices of N Gaussian weights and points, a polynomial of degree 2N − 1 is integrated exactly [12]. In practice, even a small N gives very accurate results if f is a very smooth function like the one we have here (Eq (36) and Fig 1). For the diffusion problem, N between 10 and 20 works well (discussed in detail later).
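
As a quick, self-contained check of this exactness property (using a standard Gauss-Legendre rule rather than the GDQ rule developed here):

# An N-point Gauss-Legendre rule integrates polynomials of degree up to 2N - 1
# exactly on [-1, 1]; here a degree-9 polynomial is integrated with N = 5.
import numpy as np

N = 5
x, w = np.polynomial.legendre.leggauss(N)
poly = lambda t: t ** 9 + 3 * t ** 4 - t
exact = 6.0 / 5.0                                # the odd terms integrate to zero on [-1, 1]
print(np.isclose(np.sum(w * poly(x)), exact))    # True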

The theory of Gaussian quadratures is well-established (see, for example, Ref. [12]) and for many specific choices of Ω, a, and b there are standard techniques for computing the Gaussian weights and points for a given N. These are generally referred to as, for example, Gauss-Legendre (Ω = 1, a = −1, b = 1) and Gauss-Laguerre (Ω = e−R, a = 0, b = ∞) quadratures, and so in our case we refer to it as Gauss-diffusion quadrature, or GDQ. Computing a Gaussian quadrature for non-standard weight functions is possible [12], and Appendix C describes in detail how for the diffusion weight function of Eq (18). A Mathematica notebook that computes the GDQ weights and points is included in the Supporting Information (S1 File).

Error analysis

There are several different ways to analyze the accuracy of the propagation technique. By using a concrete example, we next show that it is possible to quantify the error in the simulations using a standardized system. Moreover, that system can also be used to find simulation parameters that achieve a desired accuracy with the shortest computation time.

Summary of error analysis

Knowing the accuracy of a simulation method is always important, but there are additional reasons for quantifying the error of the propagation method. First, by using interpolation we are introducing errors into the calculation of ρn at every time step and using those results to compute ρn+1. It is therefore a real concern that errors accumulate and lead to first-order errors after the many timesteps required for a simulation. Second, when the flux source is randomly on, ρn(R) can have local maxima and minima that must be resolved accurately. If they are not, then that error can lead to spurious results at later times. With the example in the next section we show that neither of these occurs when the simulation parameters are chosen well. For the vast majority of simulation parameter sets (i.e., N, NF, Inear, and Ifar) the propagation technique gives very accurate answers, although for some combinations of parameters the ρn can diverge. It is therefore important to run these kinds of test simulations.

There are two different checks one can do. The first is a simulation where the flux source is turned on and off randomly and the exact solution in Eq (36) is used to check the result. The second is a simulation where the flux source is always on with a constant flux and the exact solution in Eq (4) is used. Both have pros and cons. With the randomly-on source simulation, only relatively small total simulation times T can be checked because the time to calculate the exact solution grows as T², as discussed above. However, with this technique one can explicitly see whether the fine details of the ρn(R) profile are reproduced. On the other hand, the advantage of the always-on simulation test is that simulations of arbitrary length can be checked to assure against error accumulation. In the next section, it is shown that the always-on test suffices because it bounds the error; its error is always larger than in the random-on test. One can therefore always run this test to assess simulation accuracy. Moreover, the propagation technique is fast enough that one can test which combination of simulation parameters gives a desired level of accuracy in the fastest time, thereby ensuring accurate results in minimal time. This is important for applications like calcium-induced calcium release where many similar simulations must be done.

Example

To illustrate this, we consider a specific example and examine the errors produced by different parameter choices. Specifically, the point source had a flux of 10^7 particles per second, which is of the order of the flux through an ion channel (approximately 1.6 pA of current for a univalent ion). The initial concentration c0 was 0. The time step was Δt = 10−7 seconds and the diffusion coefficient was 10−9 m2/s, which gives Fn = 6.61 × 10−5 M for all time steps. A maximum of 10^6 time steps were considered for all simulations. Solving Eq (39) with ɛ = 10−17 gives Rmax = 5148.

All the errors shown here are the maximum absolute difference between the simulated result ρsim and the exact result ρexact over all R, scaled by Fn: (41)

With this scaling, the exact result is independent of the flux and diffusion coefficient (see Eq (36)). Also, ρexact(R, t)/Fn ≤ 1 for all R, and, because it is O(1), −log10(ɛsim) is approximately the number of significant figures the simulation has correct. It is important to note, however, that this may not be the error metric of choice for a given problem. In this example, for instance, having a relatively large error with ɛsim = 10−3 corresponds to a concentration error of 66.1 nM (Fn ɛsim), and if nothing in the problem is sensitive to nanomolar concentrations, then having a stricter error requirement like ɛsim = 10−5 is overkill and probably a waste of computational time and effort.
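
In code, the error metric and the quoted flux factor can be reproduced as follows. The expression Fn = jn/(4πD√(4DΔt)) is an assumption (Eq (21) is not reproduced here), but with the stated parameters it returns the quoted 6.61 × 10−5 M.

# Error metric of Eq (41) and a check of the quoted flux factor, assuming
# F_n = j_n / (4*pi*D*sqrt(4*D*dt)).
import numpy as np

def eps_sim(rho_sim, rho_exact, F_n):
    """Maximum absolute difference over all R, scaled by F_n (Eq (41))."""
    return np.max(np.abs(rho_sim - rho_exact)) / F_n

j, dt, D = 1e7, 1e-7, 1e-9                            # particles/s, s, m^2/s
N_A = 6.022e23                                        # Avogadro's number
F_n = j / (4.0 * np.pi * D * np.sqrt(4.0 * D * dt))   # particles per m^3
print(F_n / N_A * 1e-3)                               # ~6.61e-5 mol/L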

As a first test, we consider different N (the number of Gaussian integration points) and NF (the number of Fornberg interpolation points) using 173 interpolation grid points (80 near points for R ≤ 10 and 73 far points for R > 10). The results for simulations lasting 10^4 time steps are shown in Fig 2. The flux source was always on. In general, as N increases, the error decreases. However, the same is not always true for NF. Usually the error decreases for NF ≤ 8, but for larger NF the simulation ρ can diverge (e.g., values > 10^10). This is not unexpected, however, as high interpolation order does not necessarily produce high interpolation accuracy. For example, a constant function is well-approximated by linear interpolation (NF = 1), but poorly by high-order polynomials because these necessarily require oscillations between the grid points in order to be 0 at the grid points. This is, in fact, what occurs here; the interpolation oscillates far from the flux source where ρ is constant (at 0). As the simulation continues, these oscillations are either damped out for the best results (purple in Fig 2) or amplified for the diverged results (red) (data not shown). If divergence does occur, the user should be aware that small changes in N and NF can alleviate this problem (Fig 2).

Fig 2. Error for different NF (x-axis) and N (y-axis) for simulations lasting 10^4 time steps.

The flux source was always on. The maximum difference between the simulated ρ and exact solution at all R over all 10^4 time steps (maxt{ɛsim(t)}) is shown on a color-coded log10 scale. Errors larger than 1 were truncated to 1 to make the graph more readable.

https://doi.org/10.1371/journal.pone.0132273.g002

Using N = 20 and NF = 9, which produce accurate results, we next consider the number of interpolation grid points I as a second test. More specifically, we consider how the number of near points Inear (for 0 ≤ R ≤ 10, as discussed in Interpolation Grid and Weights) and the number of far points Ifar (R > 10) affect the error. Table 1 lists the six cases that were considered, and Fig 3 shows ɛsim(t) for the first five cases; the results of Simulation 6 were identical to those of Simulation 5 except that in Fig 3A it did not have the upswing in error near the 10^6-th time step that occurs in Simulation 5 (and overlaps with Simulation 2).

Table 1. Parameters used in Fig 3.

The circled numbers in Fig 3 correspond to the simulation number (Sim.) in the Table. For each simulation, the number of near (Inear) and far (Ifar) interpolation grid points are shown, as well as the sum (I). Simulation 6 is not shown in Fig 3 because it was the same as Simulation 5 except that it did not have the uptick in error at the end of the simulation. For all simulations, N = 20 and NF = 9.

https://doi.org/10.1371/journal.pone.0132273.t001

Fig 3. Error versus simulation time for five of the six simulations listed in Table 1.

The circled numbers correspond to the Simulation listed in the table. (A) The flux source is always on. (B) The flux source is randomly on for 50000 time steps. The bars on the right side are the maximum error over 50000 time steps from panel A to show that the error in the always-on simulations bounds the error of the randomly-on simulations. For all simulations, N = 20 and NF = 9.

https://doi.org/10.1371/journal.pone.0132273.g003

We first analyze the case where the flux source is on at all times (Fig 3A). To make sense of the results, consider Simulations 1, 2, and 3, which differ in the number of far points. At the beginning of the simulation, they overlap. At later times, they separate and eventually Simulations 1 and 2 begin to diverge. This indicates that increasing the number of far points increases accuracy at later times. This is consistent with the results of Simulations 4, 5, and 6. Again, they overlap at early times and separate according to how many far points there are; the more far points, the smaller the late-time error. Conversely, the more near points, the smaller the early-time error. For example, compare Simulations 1 and 4 and Simulations 2 and 5. Intuitively, these results make sense: early in the simulation ρ is changing near the source so the near interval is most important, while late in the simulation ρ is changing in the far interval.

In the case just considered, the flux source is always on. However, the propagation technique is designed for fluctuating sources. Moreover, the always-on case does not require the specially-constructed interpolation grid. This is shown in Fig 4 where ρ profiles at different times are shown for a randomly-on flux source. For comparison, the dashed line shows the always-on result at the same time step as the red solid curve. The random-on profiles can have multiple local maxima and inflection points, something not seen for the monotonically decaying always-on profile. Because of the more complicated ρ profiles, it is possible that the error for the randomly-on case is much different than for the always-on case. For example, not resolving those maxima or minima at one time can propagate an error to later times.

Fig 4. ρ(R) for a randomly-on flux source at various simulation times.

It illustrates both the complicated structure of these curves and how the interpolation grid points are denser in the regions that need them. The points are the simulation results and the curves are the exact result. For comparison, the dashed curve is the always-on exact result at the same time step as the red curve. For clarity, a very small number of interpolation grid points was used (I = 64).

https://doi.org/10.1371/journal.pone.0132273.g004

Fig 3B shows that this is not the case. In fact, for all six cases considered, the maximum error is always less for the random-on case than for the always-on case. This is very useful because the exact answer is much easier to compute for the always-on case; for the random-on case only about 5 × 10^5 time steps can be computed in a reasonable amount of time. Bounding the error is also important because it makes it always possible to pre-compute the error; if the details of how the flux source turns on or off are unimportant for measuring simulation error, then one can use the always-on case.

Moreover, one can optimize accuracy versus computation speed by combining all of these error checks. The computation speed of the propagation method is dominated by the matrix/vector multiplication in Eq (34), and because W is sparse, the number of multiplications required is the number of nonzero entries in W. Therefore, one can vary all the parameters above (N, NF, Inear, and Ifar) and plot maxt{ɛsim(t)} versus the number of nonzero entries in W. The fastest and most accurate set of parameters can then be chosen for all future simulations. Fig 5 shows this for 336 parameter choices over 10^6 time steps. The arrow points to the parameter set with the fastest computation time that still achieves high accuracy (maxt{ɛsim(t)} ≤ 10−5), in this case N = 10, NF = 8, Inear = 25, and Ifar = 90 with W having 1356 nonzero entries. Because the propagation method is fast, it is possible to scan this large set of parameters; Fig 5 took approximately 7 hours on a desktop computer with a 6-core 3.33 GHz i7-980X processor using Mathematica version 9 (Wolfram Research, Champaign, Illinois), which could have been significantly faster if done in parallel rather than in series. This time investment is well worth it for applications where a large number of simulations must be done because it guarantees both high accuracy and the fastest overall computation speed for production runs.

Fig 5. Simulation error versus the number of nonzero elements in W (to represent computation time).

Each point is for a different parameter set (N, NF, Inear, and Ifar) for a long simulation of 10^6 time steps for the example described in the main text with the flux source always on. The arrow points to the optimal balance of high accuracy and computation speed. Errors larger than 1 were truncated to 1 to make the graph more readable.

https://doi.org/10.1371/journal.pone.0132273.g005

To see how this compares to classical methods like BTCS and Crank-Nicolson, one must first consider their mathematical structure. Each discretizes both the time and space derivatives in Eq (1), almost always with uniform spacing Δt and Δr for the time and space coordinates, respectively. This creates a system of linear equations to be solved at each time step with a tridiagonal matrix [12]. Since tridiagonal systems can be solved with O(Nr) operations (Nr is the number of spatial grid points), to be comparable to the propagation method, Nr should be around 1000.

Nr is determined by both Δr and the maximum distance L to be considered. Since the diffusion equation requires both initial and boundary conditions at r = 0 and r = L, one must pick L large enough so that the concentration is (approximately) 0. For the example in this section, Rmax = 5148 corresponds to L = 102.96 μm. From Fig 4 one can see that a resolution in R of at least 0.25 is needed, which corresponds to 5 nm for the example in this section. Therefore, Nr ≡ ⌈L/Δr⌉ = 20592. Even for such a large system, a simple Crank-Nicolson implementation produced maxt{ɛsim(t)} > 10^−4.1, significantly worse than many propagation implementations with suboptimal parameters (Fig 5), while requiring 6 to 45 times as many nonzero matrix entries.
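
For reference, a minimal Crank-Nicolson sketch for the always-on point source is given below. It is not the implementation used above; it relies on the substitution u = rc, which turns the radial equation into ∂u/∂t = D ∂²u/∂r² with the exact boundary value u(0, t) = j/(4πD) for a constant source, so that c = u/r can be compared against Eq (4). The grid sizes are illustrative rather than the Nr = 20592 quoted above.

# Minimal Crank-Nicolson sketch (not the implementation used in the text) for
# the always-on point source, using u = r*c so that du/dt = D d^2u/dr^2 with
# u(0, t) = j/(4*pi*D) and u(L, t) = 0. Grid sizes are illustrative.
import numpy as np
from scipy.linalg import solve_banded
from scipy.special import erfc

D, j = 1e-9, 1e7                      # m^2/s, particles/s
L, Nr = 2e-6, 400                     # domain size (m), number of interior points
dt, n_steps = 1e-7, 1000              # time step (s), number of time steps
dr = L / (Nr + 1)
r = dr * np.arange(1, Nr + 1)         # interior grid (r = 0 and r = L are boundaries)
alpha = D * dt / (2.0 * dr ** 2)
u0 = j / (4.0 * np.pi * D)            # exact boundary value of u at r = 0

# Banded storage of the tridiagonal matrix (I - alpha * second difference)
ab = np.zeros((3, Nr))
ab[0, 1:] = -alpha                    # superdiagonal
ab[1, :] = 1.0 + 2.0 * alpha          # diagonal
ab[2, :-1] = -alpha                   # subdiagonal

u = np.zeros(Nr)                      # initial condition c0 = 0
for n in range(n_steps):
    rhs = u.copy()
    rhs[1:-1] += alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    rhs[0] += alpha * (u[1] - 2.0 * u[0] + u0) + alpha * u0    # Dirichlet at r = 0
    rhs[-1] += alpha * (u[-2] - 2.0 * u[-1])                   # u(L) = 0
    u = solve_banded((1, 1), ab, rhs)

c_cn = u / r
c_exact = j / (4.0 * np.pi * D * r) * erfc(r / np.sqrt(4.0 * D * n_steps * dt))
print(np.max(np.abs(c_cn - c_exact)) / np.max(c_exact))        # relative deviation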

Extension to reaction-diffusion

The focus so far has been on the diffusion equation with a point source (Eq (1)). However, the work done so far can also be extended to include a simple model where chemical reactions remove the diffusing particles. If we assume that these buffer molecules are present at high concentrations and do not change in time or space, then Eq (1) becomes [13] (42) where k− is the off-rate of the buffer and k+ is the on-rate. Since we assume the buffer is present at high concentration, the free buffer concentration b and the bound buffer concentration B are known constants (see, for example, [13]). Eq (6) then has two new source and sink terms whose form is the same as Eq (8). For the source term from the diffusant unbinding from the buffer, (43) (44) (45) where Eq (14) with a constant concentration was used to obtain the intermediate equation. For the sink term from the diffusant binding to buffer, (46) (47) (48) where the intermediate equation used a one-point quadrature for the time integral and the final result follows from Eq (7).

All of the results of the previous sections then carry forward directly: (49) and Eq (34) becomes (50) where (51) with (52)

Conclusion

An integration quadrature for the propagation integral of the spherically-symmetric diffusion equation from a time-dependent point source was developed. This integral (Eq (17)) was evaluated using a new Gauss-diffusion quadrature and a specialized interpolation grid was used to find the values at the Gaussian quadrature points for the next integration. This scheme then balances accuracy with speed by using a small number of integration and interpolation grid points to achieve high accuracy over many time steps.

The analysis of this propagation technique shows that it has a number of positive attributes:

  1. It works with a small number of grid points. Even with just 115 interpolation grid points, one can get very accurate results for long simulations (e.g., 10^6 time steps in Fig 5) with a propagation matrix containing < 1400 nonzero entries.
  2. The update step is a simple sparse-matrix/vector multiplication.
  3. The simulation error is quantifiable before any “real” simulations are done.
  4. Simulation error and computation time can be optimized together to minimize both error and simulation time.
  5. The time step Δt can be arbitrarily large, being defined only by the rate at which the flux source turns on and off.
  6. It is unconditionally stable.
  7. It can easily incorporate nonlinear feedback into the flux source, for example, from the concentration profile calculated at previous time steps.

The algorithm developed here is an alternative to more traditional finite difference or finite element approaches. It is different since it computes the concentration profile at the next time step using a simple sparse-matrix/vector multiplication rather than a (sparse) matrix solve needed for unconditionally-stable implicit-time methods. Differences in computation speed then depend on the number of grid points used in the traditional approaches and the specifics of the matrix solution method, making it difficult to compare the methods head-to-head. However, if uniform spatial discretization is used in classical methods like BTCS or Crank-Nicolson, they produce matrices with significantly more nonzero entries than the propagation method. The matrix/vector multiplication of the propagation method is similar to that of conditionally-stable explicit-time methods, except that these methods require small time steps to be convergent, while the time step for the propagation method is solely determined by the timescale on which the flux source turns on and off. However, the propagation method is meant to be an alternative technique that tries to minimize the number of points (and nonzero matrix entries) while retaining high accuracy.

Appendix A: Revised Fornberg algorithm

So that the paper is self-contained, a brief summary of Fornberg’s interpolation algorithm [11] is given, noting one time-saving change and focusing only on function interpolation, rather than derivatives as well; that is, here M = 0 in Fornberg’s notation. If there are I grid points to interpolate from, then for some given point x0 we seek an approximation of f(x0) as a weighted average of NF ≤ I of these interpolation points: (53) where the sum runs over the subset of NF interpolation points closest to x0. The superscript (x0) indicates that the weights Aν are different for different x0.

It is important to note that the Fornberg algorithm uses only the first NF interpolation points given to it. Therefore, for each x0 one must input the NF interpolation points nearest to x0. In particular, for each x0 a different set of interpolation points must be given; if one always inputs the full set {Ri}, then the first NF of them will always be used to compute f(x0), even if x0 is not close to any of these points. Hence the notation in Eq (53). This is a point not made clearly by Fornberg.

The Fornberg algorithm for function interpolation only (i.e., not evaluating derivatives) is:

set all ak, l = 0 for 0 ≤ k, lNF − 1 and a0,0 = 1

set c1 = 1

for n = 1 to NF − 1 (this is the time-saving change; Fornberg has I instead of NF − 1 which results in an extremely long calculation of many 0’s)

  set c2 = 1

  for ν = 1 to NF − 2

   set c3 = RnRν

   set c2 = c2c3

   set an,ν = (Rnx0)an−1,ν/c3

  next ν

  set

  set c1 = c2

next n

set
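
For concreteness, the same recursion can be written compactly in Python. The sketch below follows the standard published algorithm for pure function interpolation (M = 0); its indexing may differ in detail from the pseudocode above, and the example grid points are arbitrary.

# Fornberg-style interpolation weights for function values only (M = 0).
# Given the N_F grid points closest to x0, returns weights A such that
# f(x0) ~ sum_nu A[nu] * f(x[nu]), as in Eq (53).
import numpy as np

def fornberg_weights(x0, x):
    """x: the N_F interpolation nodes nearest to x0 (1-D array)."""
    n = len(x)
    A = np.zeros((n, n))         # A[k, nu]: weights using the first k+1 nodes
    A[0, 0] = 1.0
    c1 = 1.0
    for k in range(1, n):
        c2 = 1.0
        for nu in range(k):
            c3 = x[k] - x[nu]
            c2 *= c3
            A[k, nu] = (x[k] - x0) * A[k - 1, nu] / c3
        A[k, k] = -c1 * (x[k - 1] - x0) * A[k - 1, k - 1] / c2
        c1 = c2
    return A[n - 1]

# The weights reproduce a cubic exactly when N_F = 4 points are used.
x = np.array([0.0, 0.2, 0.5, 0.9])
f = lambda t: 1.0 + 2.0 * t - t ** 2 + 0.5 * t ** 3
print(np.isclose(fornberg_weights(0.3, x) @ f(x), f(0.3)))   # True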

Appendix B: Derivation of the exact solution

Here, a brief derivation of the exact solution to the propagation integral evolution is given, specifically that (54) where (55)

While this result is almost surely not new, it was derived independently by the author in the following way.

Define (56) and note that the weight function of Eq (19) has the property (57)

Then (58)

Taking the Fourier transform (denoted with ˜ and 𝓕R to make the transform variable R explicit), we get (59) with (60)

Therefore, with * denoting the complex conjugate, (61) (62)

Assuming the initial concentration is 0, we have (63) and (64) where (65)

Using Eq (62) repeatedly, we find that (66)

Using the fact that (67) taking the inverse Fourier transform of Eq (62) gives that (68) (69) (70) which is the same as Eq (36).

Appendix C: Gaussian quadrature

The theory of Gaussian quadratures is developed in a number of texts (e.g., that by Press et al. [12]) so only a brief outline relevant to the diffusion problem is given. One must compute the coefficients aj (j = 0, 1, …, N) and bj (j = 1, …, N) for the orthonormal polynomials (71) (72) (73) where (74) and (75) for the inner product (76)

Note that because the integral’s weight function depends on R, the coefficients depend on R as well.

To determine the coefficients aj and bj, we need the moments of the weight function: (77) (78) where Φ(α, γ;z) is the confluent hypergeometric function which is equal to the generalized hypergeometric series 1F1(α;γ;z) [14]. This function has the recurrence relationship [14] (79) that will allow us to evaluate the moments efficiently. From the structure of this recurrence relationship, it is easiest to break the moments into even n and odd n.

For even n = 2k (k ≥ 1) with α = n/2 = k, we have (80) or, when (81) (82)

This gives a procedure for increasing k by starting with (83) and (84)

Using that (85) we get (86) (87) and (88) (89) (90) (91) so that for even n ≥ 4 (92) (93) (94) (95)

Here Eq (95) is derived by substituting in Eq (92).

For odd n = 2l + 1 (l ≥ 1) with α = l + 1/2, we have (96) so that (97)

This gives a procedure for increasing l by using (98) and starting with (99) (100) and (101) (102) (103) so that (104) (105) (106) (107) (108)

Here, Eq (107) is derived by substituting in Eq (104).

These moments can then be used to build a Gaussian quadrature using standard procedures, namely finding the eigenvalues {λj} and eigenvectors {vj} of the symmetric tridiagonal matrix [12] (109)

Then, the Gaussian grid points are the eigenvalues (i.e., xα = λα) and the weights are (110) (111) where vα,0 is the first element of vα.

From a theoretical point of view this is straightforward. From a computational point of view, however, it is important to note that the inner product ratios that define the coefficients aj and bj in Eqs (74) and (75) must be computed with extremely high precision (e.g., 100 digits or more of working precision) or round-off errors accumulate quickly, even for relatively small N. Moreover, the matrix in Eq (109) is extremely ill-conditioned. With similarly high working precision, however, finding the eigenvalues and eigenvectors is fast and effective. Lastly, it is noted that computing the inner products (112) (a = 0 or 1) can be done by working only with the coefficients of the polynomial pj: the coefficients of pj(x)² are found by a list convolution, followed by a dot product with the vector of moments Mn.

This procedure gives the GDQ points {xα(R)} and weights {wα(R)}. Note that while the matrix and inner product operations must be done with many digits of accuracy, once the GDQ points and weights are calculated they can be truncated to machine precision for the diffusion calculations. A Mathematica notebook that computes the GDQ points and weights is available in the Supporting Information (S1 File).

An alternative approach to calculating the GDQ points and weights is the procedure described by Golub and Welsch [15], but ill-conditioning will still be an issue.
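
As an illustration of that moment-based route, the sketch below builds an N-point rule at high working precision with mpmath. It assumes the nondimensional propagation kernel has the image-source form Ω(R, R′) = [exp(−(R − R′)²) − exp(−(R + R′)²)]/√π (cf. Eqs (17)-(19)), integrates the moments numerically instead of using the confluent-hypergeometric recurrences above, and takes the standard Golub-Welsch weight normalization wα = M0 vα,0².

# Sketch of the Golub-Welsch moment route at high precision with mpmath, assuming
# the image-source kernel Omega(R, s) = (exp(-(R-s)^2) - exp(-(R+s)^2))/sqrt(pi).
from mpmath import mp

mp.dps = 60                            # high working precision, as stressed above

def gdq_points_weights(R, N):
    omega = lambda s: (mp.exp(-(R - s) ** 2) - mp.exp(-(R + s) ** 2)) / mp.sqrt(mp.pi)
    # moments M_0 .. M_{2N} of the kernel, integrated numerically
    m = [mp.quad(lambda s, n=n: s ** n * omega(s), [0, mp.inf]) for n in range(2 * N + 1)]
    H = mp.matrix(N + 1)               # Hankel matrix of moments
    for i in range(N + 1):
        for j in range(N + 1):
            H[i, j] = m[i + j]
    Lc = mp.cholesky(H)                # lower-triangular Cholesky factor
    # recurrence coefficients of the orthogonal polynomials (cf. Eqs (74)-(75))
    a = [Lc[k + 1, k] / Lc[k, k] - (Lc[k, k - 1] / Lc[k - 1, k - 1] if k > 0 else 0)
         for k in range(N)]
    b = [Lc[k, k] / Lc[k - 1, k - 1] for k in range(1, N)]
    J = mp.matrix(N)                   # symmetric tridiagonal Jacobi matrix, Eq (109)
    for k in range(N):
        J[k, k] = a[k]
    for k in range(N - 1):
        J[k, k + 1] = J[k + 1, k] = b[k]
    E, Q = mp.eigsy(J)                 # nodes are the eigenvalues (Eq (110))
    nodes = [E[k] for k in range(N)]
    weights = [m[0] * Q[0, k] ** 2 for k in range(N)]   # assumed normalization (Eq (111))
    return nodes, weights

x, w = gdq_points_weights(R=mp.mpf(2), N=10)
print(float(sum(w)))                   # should equal the zeroth moment M_0(R)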

Supporting Information

S1 File. Mathematica notebook containing code to compute the GDQ points and weights as described in Appendix C.

https://doi.org/10.1371/journal.pone.0132273.s001

(NB)

Acknowledgments

I would like to thank Matt Knepley and Jay Bardhan for very valuable discussions.

Author Contributions

Conceived and designed the experiments: DG. Performed the experiments: DG. Analyzed the data: DG. Wrote the paper: DG.

References

  1. Fick A. Ueber Diffusion. Ann der Physik. 1855;94: 59.
  2. Fick A. On liquid diffusion. Phil Mag J Sci. 1855;10: 30.
  3. Barton G. Elements of Green’s functions and propagation. Oxford: Clarendon Press; 1989.
  4. Crank J. The mathematics of diffusion. Oxford: Oxford University Press; 1980.
  5. Cussler EL. Diffusion: Mass transfer in fluid systems. Cambridge: Cambridge University Press; 2009.
  6. Greengard L, Strain J. The fast Gauss transform. SIAM Journal on Scientific and Statistical Computing. 1991;12: 79–94.
  7. Stern MD, Ríos E, Maltsev VA. Life and death of a cardiac calcium spark. J Gen Physiol. 2013;142: 257–274. pmid:23980195
  8. Conlisk AT. Essentials of micro- and nanofluidics: With applications to the biological and chemical sciences. Cambridge: Cambridge University Press; 2012.
  9. Bers DM. Excitation-contraction coupling and cardiac contractile force. 2nd ed. Dordrecht: Kluwer Academic Publishers; 2001.
  10. Li JR, Greengard L. On the numerical solution of the heat equation I: Fast solvers in free space. J Comp Physics. 2007;226: 1891–1901.
  11. Fornberg B. Generation of finite difference formulas on arbitrarily spaced grids. Math Comp. 1988;51: 699–706.
  12. Press WH, Teukolsky SA, Vetterling WT, Flannery BP. Numerical Recipes in C. Cambridge, UK: Cambridge University Press; 1992.
  13. Neher E. Concentration profiles of intracellular calcium in the presence of a diffusible chelator. In: Heinemann U, Klee M, Neher E, editors. Calcium Electrogenesis and Neuronal Functioning. Berlin: Springer-Verlag; 1986. pp. 80–96.
  14. Gradshteyn IS, Ryzhik IM. Table of integrals, series, and products. San Diego: Academic Press; 2000.
  15. Golub GH, Welsch JH. Calculation of Gaussian quadrature rules. Math Comp. 1969;23: 221–230.