07B. Reaction-diffusion systems¶

Mingyang Lu¶

3/7/2024¶

Reaction-diffusion equations¶

If we now use $u(X,t)$ as the state variable and add another term $f(u,t)$ to the diffusion equation, we obtain

$$\frac{\partial u(X,t)}{\partial t} = f(u,t) + D \frac{\partial^2 u(X,t)}{\partial X^2} \tag{1} $$

Equation (1) is called a reaction-diffusion equation. $f(u, t)$ represents the reaction term, the same term that appears in a chemical rate equation

$$\frac{du}{dt} = f(u, t) \tag{2}$$

When $f(u, t)$ vanishes, Equation (1) becomes the PDE for a pure diffusion process.

Fisher's equation¶

If we consider logistic growth as the reaction, Equation (1) for $u(X, t)$ becomes

$$\frac{\partial u}{\partial t} = ru(1-\frac{u}{B}) + D \frac{\partial^2 u}{\partial X^2} \tag{3} $$

Here, $u$ is the population size, and $B$ is the carrying capacity. This is Fisher's equation, first proposed by Ronald Fisher in 1937. Below is a slightly modified PDE integrator for reaction-diffusion systems. We use a Dirichlet boundary condition here (zero $u$ at the boundaries).

In [1]:
import numpy as np
import matplotlib.pyplot as plt

## PDE integration with the finite difference method for reaction-diffusion equation
def pde_fd_reaction_diffusion(derivs, ngrid, dX, dt, D, t0, t_total, p0, **kwargs):
    """
    Parameters:
    - derivs: derivative function
    - ngrid: number of grid points
    - dX: X step size
    - dt: time step size
    - D: diffusion constant
    - t0: initial time
    - t_total: total simulation time
    - p0: initial condition: P(X, t = t0)
    - **kwargs: additional keyword arguments to the derivative function
    
    Returns:
    - p: Final values of P(X, t)
    """
    t_all = np.arange(t0, t_total + dt, dt)
    nt_all = len(t_all)
    factor = D / dX**2 * dt
    p = p0.copy()  # Make a copy to avoid modifying the original array

    for i in range(nt_all - 1):
        p_plus_one = np.concatenate((p[1:], [0]))  # P(X + dX) for all Xs, 0 for boundary condition
        p_minus_one = np.concatenate(([0], p[:-1]))  # P(X - dX) for all Xs, 0 for boundary condition
        
        # Evaluate the reaction term at each grid point; the derivative
        # function takes the local value of u, not the position
        f = np.array([derivs(t_all[i], u, **kwargs) for u in p])
        
        # Update P(X, t) using finite difference method
        p = p + f * dt + factor * (p_plus_one + p_minus_one - 2 * p)

    return p
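
As a quick sanity check (our own addition, not part of the original lesson; the helper names below are ours), setting the reaction term to zero should reduce the integrator to pure diffusion, whose variance grows as $2Dt$. A minimal sketch, starting from a unit spike at the center of the domain:

def zero_reaction(t, u):
    return 0.0  # no reaction: Equation (1) reduces to pure diffusion

ngrid_chk = 101
X_chk = np.arange(ngrid_chk) * 1.0
p_init = np.zeros(ngrid_chk)
p_init[50] = 1.0  # unit mass at the center grid point

t_end = 20
p_out = pde_fd_reaction_diffusion(derivs = zero_reaction, ngrid = ngrid_chk, dX = 1.0,
                                  dt = 0.01, D = 1.0, t0 = 0, t_total = t_end, p0 = p_init)
var_num = np.sum(p_out * (X_chk - 50) ** 2) / np.sum(p_out)
print(f"numerical variance: {var_num:.1f}, theory 2*D*t: {2 * 1.0 * t_end:.1f}")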

Here, we set a constant $u$ at a few central grid points as the initial condition of the system. We can observe a bidirectional traveling wave moving at a constant velocity until the wave reaches the boundary.

In [2]:
l = 100
dX = 1
dt = 0.01
D = 1
r = 0.64
B = 75
ngrid = int(l / dX) + 1  # number of grid points
X_all = np.arange(0, ngrid) * dX  # X values for each grid point

u0 = np.zeros(ngrid)
u0[int(ngrid / 2) - 2: int(ngrid / 2) + 3] = 50

def fisher(t, u, r, B):
    return r * u * (1 - u / B)

plt.figure(figsize=(6,3))
plt.plot(X_all, u0, 'b-', label='0')
plt.xlabel('X')
plt.ylabel('u')
plt.xlim(0, 100)
plt.ylim(0, 100)

u_1 = u0.copy()
for i in range(5):
    u_1 = pde_fd_reaction_diffusion(derivs = fisher, ngrid = ngrid, dX = dX, dt = dt, 
                                    D = D, t0 = 0, t_total = 5, p0 = u_1, r = r, B = B)
    plt.plot(X_all, u_1, label=f'{i+1}')

plt.legend(title='Iterations', loc='upper right')
plt.show()
[Figure: u(X) after each iteration, showing a bidirectional traveling wave spreading from the center]

For an initial condition with constant $u$ at the very left end of the system, the traveling wave moves to the right.

In [3]:
l = 100
dX = 1
dt = 0.01
D = 1
r = 0.64
B = 75
ngrid = int(l / dX) + 1  # number of grid points
X_all = np.arange(0, ngrid) * dX  # X values for each grid point

u0 = np.zeros(ngrid)
u0[:5] = 50  # five grids with a constant u

def fisher(t, u, r, B):
    return r * u * (1 - u / B)

plt.figure(figsize=(6,3))
plt.plot(X_all, u0, 'b-', label='0')
plt.xlabel('X')
plt.ylabel('u')
plt.xlim(0, 100)
plt.ylim(0, 100)

u_1 = u0.copy()
for i in range(10):
    u_1 = pde_fd_reaction_diffusion(derivs = fisher, ngrid = ngrid, dX = dX, dt = dt, 
                                    D = D, t0 = 0, t_total = 5, p0 = u_1, r = r, B = B)
    plt.plot(X_all, u_1, label=f'{i+1}')

plt.legend(title='Iterations', fontsize=8, loc='upper right')
plt.show()
[Figure: u(X) after each iteration, showing a rightward traveling wave]

Ronald Fisher used this equation to explain the spread of advantageous genes in a population, and his study became a foundation of population genetics. In the genetics model, $u$ (after a variable scaling to remove $B$) represents the frequency of the mutant gene, $X$ is the spatial location, $r$ is the intensity of selection in favor of the mutant, and $D$ is the rate of diffusion per generation. The traveling wave exhibited by the model explains how advantageous genes spread within a population.
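
As a quick numerical check (a sketch of our own, not part of the original analysis), the front of the Fisher wave is known to travel at the minimal speed $c = 2\sqrt{rD}$, which is $1.6$ for the parameters above. The sketch below re-runs the left-edge simulation, tracks the rightmost grid point where $u$ exceeds $B/2$ (one simple choice of front marker), and fits its position against time; the estimate is approximate on this coarse grid.

u_front = u0.copy()
times, fronts = [], []
for i in range(8):
    u_front = pde_fd_reaction_diffusion(derivs = fisher, ngrid = ngrid, dX = dX, dt = dt,
                                        D = D, t0 = 0, t_total = 5, p0 = u_front, r = r, B = B)
    above = np.where(u_front > B / 2)[0]
    if len(above) > 0:
        times.append(5.0 * (i + 1))
        fronts.append(X_all[above[-1]])  # rightmost grid point above half the carrying capacity

speed = np.polyfit(times, fronts, 1)[0]  # slope of the front position vs. time
print(f"estimated front speed: {speed:.2f}, minimal wave speed 2*sqrt(r*D): {2 * np.sqrt(r * D):.2f}")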

Fokker-Planck equation¶

A stochastic differential equation (SDE) in the following form

$$\frac{dX}{dt} = f(X,t) + \sqrt{2D(X,t)}\eta(t) \tag{4}$$ where $\eta (t)$ represents a Gaussian white noise, is equivalent to the Fokker-Planck equation:

$$\frac{\partial P}{\partial t} = -\frac{\partial (fP)}{\partial X} + \frac{\partial^2 (DP)}{\partial X^2} \tag{5} $$ A Fokker-Planck equation is similar to a reaction-diffusion equation, but the variable $P$ represents the probability density as a function of $X$ and $t$. The diffusion equation is also a special case of the Fokker-Planck equation, obtained when $f=0$ and $D$ is constant.

Similar to the diffusion equation, a Fokker-Planck equation can also be integrated by the finite difference method. From Equations (6-7) in Part 05A, for any function $F(X,t)$, we have

$$ \frac{F(X + \Delta X, t) - F(X-\Delta X, t)}{2\Delta X} = \frac{\partial F(X,t)}{\partial X} + O(\Delta X^2) \tag{6}$$ This shows that $\frac{\partial F(X,t)}{\partial X}$ can be approximated by $\frac{F(X + \Delta X, t) - F(X-\Delta X, t)}{2\Delta X}$. Thus, the PDE integration can be achieved by the following formula (for simplicity, we assume a constant $D$):

\begin{equation} \begin{aligned} P(X, t+\Delta t) = P(X, t) &- \frac{f(X + \Delta X, t)P(X + \Delta X, t) - f(X-\Delta X, t)P(X-\Delta X, t)}{2\Delta X} \Delta t \\ &+ D \frac{P(X + \Delta X, t) + P(X-\Delta X, t) - 2P(X, t)}{\Delta X^2} \Delta t \end{aligned} \tag{7} \end{equation}
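
Before implementing Equation (7), a tiny check (our own addition) confirms that the central difference in Equation (6) is second-order accurate: halving $\Delta X$ should reduce the error roughly fourfold. Here we differentiate $\sin(X)$ at $X = 1$, where the exact derivative is $\cos(1)$.

for h in [0.1, 0.05, 0.025]:
    approx = (np.sin(1 + h) - np.sin(1 - h)) / (2 * h)  # central difference
    print(f"dX = {h:<6} error = {abs(approx - np.cos(1)):.2e}")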

In [4]:
## PDE integration with the finite difference method for a generic Fokker-Planck equation (constant D)
def pde_fd_fokker_planck(derivs, ngrid, X_all, dX, dt, D, t0, t_total, p0, **kwargs):
    """
    Parameters:
    - derivs: derivative function
    - ngrid: number of grid points
    - dX: X step size
    - dt: time step size
    - D: diffusion constant
    - t0: initial time
    - t_total: total simulation time
    - p0: initial condition: P(X, t = t0)
    - **kwargs: additional keyword arguments to derivs
    
    Returns:
    - p: Final values of P(X, t)
    """
    t_all = np.arange(t0, t_total + dt, dt)
    nt_all = len(t_all)
    factor = D / (dX ** 2) * dt  # scaling factor for the diffusion term
    factor_2 = dt / (2 * dX)  # scaling factor for the drift term
    p = p0.copy()

    for i in range(nt_all - 1):
        p_plus_one = np.concatenate((p[1:], [0]))  # P(X+dX) for all Xs, 0 is added to specify the boundary conditions
        p_minus_one = np.concatenate(([0], p[:-1]))  # P(X-dX) for all Xs, 0 is added to specify the boundary conditions

        f = np.array([derivs(t_all[i], X, **kwargs) for X in X_all])
        fp = f * p
        fp_plus_one = np.concatenate((fp[1:], [0]))
        fp_minus_one = np.concatenate(([0], fp[:-1]))

        p = p - factor_2 * (fp_plus_one - fp_minus_one) + factor * (p_plus_one + p_minus_one - 2 * p)

    return p
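
Since $P$ is a probability density, a useful sanity check (our own addition; the helper names are ours) is that the scheme approximately conserves the total probability $\sum_X P(X,t)\,\Delta X$ whenever $P$ stays away from the absorbing (Dirichlet) boundaries. Here a harmonic drift keeps the distribution confined:

def drift(t, X, k):
    return -k * X  # harmonic (spring) drift

ngrid_chk = 101
X_chk = (np.arange(ngrid_chk) - 50) * 1.0  # grid centered at X = 0
p_init = np.zeros(ngrid_chk)
p_init[50] = 1.0  # unit probability at the center

p_chk = pde_fd_fokker_planck(derivs = drift, ngrid = ngrid_chk, X_all = X_chk, dX = 1.0,
                             dt = 0.01, D = 1.0, t0 = 0, t_total = 40, p0 = p_init, k = 0.03)
print(f"total probability: {np.sum(p_chk) * 1.0:.4f}")  # expect a value very close to 1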

We now apply the above function to Brownian dynamics in a harmonic potential with constant noise (the Ornstein–Uhlenbeck process).

$$\frac{\partial P}{\partial t} = -\frac{\partial (-kXP)}{\partial X} + D\frac{\partial^2 P}{\partial X^2} \tag{8}$$

In [5]:
l = 100
dX = 1
dt = 0.01
D = 1
k = 0.01
ngrid = int(l / dX) + 1  # number of grid points
X_all = (np.arange(1, ngrid + 1) - (ngrid + 1) / 2) * dX  # X values for each grid point

u0 = np.zeros(ngrid)
u0[int(ngrid / 2) - 2: int(ngrid / 2) + 3] = 1
u0 /= np.sum(u0)

def spring(t, X, k):
    return -k * X

plt.plot(X_all, u0, 'b-', label='0')
plt.xlabel('X')
plt.ylabel('P')
plt.xlim(-50, 50)
plt.ylim(0, 0.1)

u_1 = u0.copy()
for i in range(5):
    u_1 = pde_fd_fokker_planck(derivs = spring, ngrid = ngrid, X_all = X_all, dX = dX, 
                               dt = dt, D = D, t0 = 0, t_total = 40, p0 = u_1, k = k)
    plt.plot(X_all, u_1, label=f'{i+1}')

plt.plot(X_all, np.sqrt(k / (2 * np.pi * D)) * np.exp(-k * X_all ** 2 / (2 * D)), 
         'o', label='Analytical Solution', markeredgecolor="k", markerfacecolor="w",
         markersize = 5)

plt.legend(title='Iterations', loc='upper right')
plt.show()
[Figure: P(X) after each iteration for k = 0.01, converging to the analytical steady-state Gaussian (open circles)]

In the above simulation, $P$ is initially localized at the center of the domain and converges to a Gaussian distribution. When using a stiffer spring (a larger $k$), $P$ converges to a narrower Gaussian. It can be shown that the steady-state solution of Equation (8) ($\frac{\partial P}{\partial t} =0$) is

$$P_{ss}(X) = \sqrt{\frac{k}{2\pi D}}e^{-\frac{kX^2}{2D}} \tag{9}$$ In the plot, the dots represent the theoretical values of $P_{ss}(X)$.
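
Equation (9) can also be cross-checked against the equivalent SDE (a sketch of our own, not part of the original notebook): integrating $dX/dt = -kX + \sqrt{2D}\,\eta(t)$ with the Euler-Maruyama method for many trajectories, the long-time histogram should match $P_{ss}(X)$.

rng = np.random.default_rng(0)
n_traj, n_steps, dt_sde = 5000, 20000, 0.01  # 200 time units, well beyond the relaxation time 1/k
X_sde = np.zeros(n_traj)  # all trajectories start at X = 0
for _ in range(n_steps):
    X_sde += -k * X_sde * dt_sde + np.sqrt(2 * D * dt_sde) * rng.standard_normal(n_traj)

plt.hist(X_sde, bins = 50, density = True, alpha = 0.5, label = 'SDE histogram')
xs = np.linspace(-50, 50, 200)
plt.plot(xs, np.sqrt(k / (2 * np.pi * D)) * np.exp(-k * xs ** 2 / (2 * D)), 'k-', label = '$P_{ss}$')
plt.xlabel('X')
plt.ylabel('P')
plt.legend()
plt.show()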

In [6]:
l = 100
dX = 1
dt = 0.01
D = 1
k = 0.03
ngrid = int(l / dX) + 1  # number of grid points
X_all = (np.arange(1, ngrid + 1) - (ngrid + 1) / 2) * dX  # X values for each grid point

u0 = np.zeros(ngrid)
u0[int(ngrid / 2) - 2: int(ngrid / 2) + 3] = 1
u0 /= np.sum(u0)

def spring(t, X, k):
    return -k * X

plt.plot(X_all, u0, 'b-', label='0')
plt.xlabel('X')
plt.ylabel('P')
plt.xlim(-50, 50)
plt.ylim(0, 0.1)

u_1 = u0.copy()
for i in range(5):
    u_1 = pde_fd_fokker_planck(derivs = spring, ngrid = ngrid, X_all = X_all, dX = dX, 
                               dt = dt, D = D, t0 = 0, t_total = 40, p0 = u_1, k = k)
    plt.plot(X_all, u_1, label=f'{i+1}')

plt.plot(X_all, np.sqrt(k / (2 * np.pi * D)) * np.exp(-k * X_all ** 2 / (2 * D)), 
         'o', label='Analytical Solution', markeredgecolor="k", markerfacecolor="w",
         markersize = 5)

plt.legend(title='Iterations', loc='upper right')
plt.show()
[Figure: P(X) after each iteration for k = 0.03, converging to a narrower steady-state Gaussian (open circles)]

$P$ also converges to the same distribution when starting from a different initial condition (a uniform distribution).

In [7]:
l = 100
dX = 1
dt = 0.01
D = 1
k = 0.03
ngrid = int(l / dX) + 1  # number of grid points
X_all = (np.arange(1, ngrid + 1) - (ngrid + 1) / 2) * dX  # X values for each grid point

u0 = np.ones(ngrid)
u0 /= np.sum(u0)

def spring(t, X, k):
    return -k * X

plt.plot(X_all, u0, 'b-', label='0')
plt.xlabel('X')
plt.ylabel('P')
plt.xlim(-50, 50)
plt.ylim(0, 0.1)

u_1 = u0.copy()
for i in range(5):
    u_1 = pde_fd_fokker_planck(derivs = spring, ngrid = ngrid, X_all = X_all, dX = dX, 
                               dt = dt, D = D, t0 = 0, t_total = 40, p0 = u_1, k = k)
    plt.plot(X_all, u_1, label=f'{i+1}')

plt.plot(X_all, np.sqrt(k / (2 * np.pi * D)) * np.exp(-k * X_all ** 2 / (2 * D)), 
         'o', label='Analytical Solution', markeredgecolor="k", markerfacecolor="w",
         markersize = 5)

plt.legend(title='Iterations', loc='upper right')
plt.show()
[Figure: P(X) after each iteration starting from a uniform distribution, converging to the same steady-state Gaussian (open circles)]

Q: How does $P(X,t)$ change when using a Neumann boundary condition? What happens when $D$ is a function of $X$ (i.e., state-dependent)?
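
As a starting point for the first question (a sketch of our own, not a full answer): a reflecting, zero-flux Neumann boundary can be approximated by mirroring the edge values instead of padding with zeros inside the update loop; the fp_plus_one and fp_minus_one arrays would need the analogous change, and the drift flux at the walls also deserves care.

p_plus_one = np.concatenate((p[1:], [p[-1]]))    # mirror the right edge: dP/dX = 0
p_minus_one = np.concatenate(([p[0]], p[:-1]))   # mirror the left edge: dP/dX = 0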