
Last Time
Fundamental Solutions of Co-dimension 1, 2, and 3

Summing Fundamental Solutions for Arbitrary ICs in Infinite Domains

Method of Images

Periodic or Finite Domains: Separation of Variables

Short Wavelengths Disappear Rapidly

Laplace Transform Method

Anisotropic Diffusion $ \rightarrow$ Isotropic Case

3.21 Spring 2002: Lecture 09

Solution Strategies

A number of different methods of finding a closed-form solution of the diffusion equation have been briefly described. Each one has a particular type of domain, boundary condition, or initial condition to which it is particularly suited.

Green's function solutions
Generally applicable to infinite domains with zero-flux conditions at infinity.

Separation of Variables
Finite or infinitely periodic domains with factorable boundary and initial conditions.

Laplace Transforms
Time-dependent boundary conditions on semi-infinite (with zero flux at $ \infty$) or finite domains.

Method of Images
Extension or extraction of a solution from a known solution on a different domain, with a judicious choice of image source and sink initial conditions to satisfy a particular set of boundary conditions.

However, many problems will not fall into these categories--and even if they do, the solution may be too difficult or even impossible to obtain. It may be said with some confidence that most problems you will face as professional materials scientists are of this kind.

The following five-step program is designed to aid you in your quest.

  1. Draw a picture
  2. Integrate by parts
  3. Attempt to solve the parts using an appropriate method
  4. Go to the Library and copy an existing solution
  5. Use brute force

Brute Force: Numerical Methods

A closed-form solution is always preferable. From a closed-form solution, one can distinguish important physical parameters from less important ones; one can analyze limiting cases and deconstruct a solution to visualize its physical implications. However, closed-form solutions are sometimes time-consuming or impossible.

Numerical methods can be useful as a last (brute force) resort. Furthermore, numerical methods can also be a useful part of finding a closed-form solution--the visualization of a numerical solution will often provide clues of useful approximations or solution strategies.

At any rate, the tools and techniques of numerical methods have become part of the requisite apparatus of a materials scientist. A few methods of numerically solving the diffusion equation are introduced below. These are not necessarily the best methods--or the fastest (which can be an entirely different thing altogether). However, they are sufficiently general as to serve as a starting point for many numerical methods.

A typical first approach is to discretize space so that the solution at any one time can be represented by values at a set of points $ c(\vec{x}_i , t)$, with values between the points obtained by interpolation.

In the finite difference approach, the values on the grid are used directly: a discrete version of each spatial derivative is obtained in terms of differences of the values at neighboring grid points.

For instance, in two dimensions, a discrete version of the Laplacian on a square grid of spacing $ h$, $ \nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2 }{\partial y^2}$, may be derived directly from the definition of a derivative:

$\displaystyle \nabla^2 c(x,y) \approx \frac{c(x+h,y) + c(x-h,y) + c(x,y+h) + c(x,y-h) - 4\, c(x,y)}{h^2}$ (09-1)
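The stencil above is straightforward to implement. The sketch below (function and grid names are my own, not from the lecture) applies the five-point Laplacian to the interior of a 2-D array and checks it against $ c = x^2 + y^2$, for which $ \nabla^2 c = 4$ exactly:

```python
import numpy as np

def laplacian_5pt(c, h):
    """Five-point discrete Laplacian of a 2-D field c on a square grid
    of spacing h. Interior points only; edges are left as zero (boundary
    handling is a separate choice and is not specified here)."""
    lap = np.zeros_like(c)
    lap[1:-1, 1:-1] = (c[2:, 1:-1] + c[:-2, 1:-1] +
                       c[1:-1, 2:] + c[1:-1, :-2] -
                       4.0 * c[1:-1, 1:-1]) / h**2
    return lap

# Check against a field with a known Laplacian: c = x^2 + y^2 gives 4,
# and the stencil is exact for quadratics.
h = 0.1
x = np.arange(0.0, 1.0 + h / 2, h)
X, Y = np.meshgrid(x, x, indexing="ij")
lap = laplacian_5pt(X**2 + Y**2, h)
```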

In the finite element approach, a fitting function such as a polynomial is obtained for the regions between the points and the coefficients of the fitting function are determined by the values at specified points (nodes).

In each case, the spatial derivative can be used to iterate the solution in time.

In the forward differencing method, the definition of the time derivative is used to infer the solution at some time $ \Delta \tau$ later, e.g. for the diffusion equation:

$\displaystyle c(x,y,t + \Delta \tau) = c(x,y,t) + D (\nabla^2_{\mbox{discrete}} c(x,y,t)) \Delta \tau$ (09-2)

However, this method becomes numerically unstable for time-steps larger than a limit of order $ h^2/D$ (for the two-dimensional five-point stencil, the requirement is $ \Delta \tau \leq h^2/(4D)$).
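As an illustration of forward differencing, the following sketch (names and parameters are assumptions, not from the lecture) iterates Eq. 09-2 on a periodic grid with a time-step below the stability limit, so a random initial field smooths out rather than blowing up:

```python
import numpy as np

def step_forward(c, D, h, dt):
    """One explicit forward-Euler step of the 2-D diffusion equation on a
    periodic square grid, using the five-point Laplacian."""
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c) / h**2
    return c + D * dt * lap

# Short wavelengths in a random initial field decay rapidly when the
# time-step stays below the stability limit.
h, D = 1.0, 1.0
rng = np.random.default_rng(0)
c = rng.standard_normal((32, 32))
spread_before = c.max() - c.min()
dt = 0.2 * h**2 / D                 # comfortably below the limit
for _ in range(200):
    c = step_forward(c, D, h, dt)
spread_after = c.max() - c.min()
```

With a time-step above the limit, the same loop would amplify the shortest wavelengths instead of damping them.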

Another method, called the implicit or Crank-Nicolson method, uses part of the previous solution and part of an implicit future solution to extend the stability condition. This increases the numerical stability at the expense of having to invert a set of linear equations to obtain the `future' solution.
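A minimal 1-D illustration of the implicit idea, assuming periodic boundaries and using a dense linear solve for clarity (a production code would use a tridiagonal solver): one Crank-Nicolson step solves $ (I - \frac{r}{2}L)\,c^{new} = (I + \frac{r}{2}L)\,c^{old}$ with $ r = D \Delta \tau / h^2$, and remains stable even for time-steps far beyond the explicit limit:

```python
import numpy as np

def crank_nicolson_step(c, D, h, dt):
    """One Crank-Nicolson step of the 1-D diffusion equation with
    periodic boundaries, averaging the old (explicit) and new
    (implicit) Laplacians."""
    n = len(c)
    r = D * dt / h**2
    L = np.zeros((n, n))            # discrete Laplacian (times h^2)
    for i in range(n):
        L[i, i] = -2.0
        L[i, (i - 1) % n] = 1.0
        L[i, (i + 1) % n] = 1.0
    I = np.eye(n)
    return np.linalg.solve(I - 0.5 * r * L, (I + 0.5 * r * L) @ c)

# A single Fourier mode, stepped with dt ten times the explicit h^2/D
# scale: it decays smoothly instead of oscillating and diverging.
h, D, n = 1.0, 1.0, 64
x = np.arange(n) * h
c = np.sin(2.0 * np.pi * x / n)
dt = 10.0 * h**2 / D
for _ in range(50):
    c = crank_nicolson_step(c, D, h, dt)
```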

Figure 9-1: Illustration of the forward difference and implicit time-iteration methods. The implicit method uses values from the inferred solution to stabilize the numerical time-step.

Another very powerful method for solving the diffusion equation is the spectral method. Spectral methods work somewhat in the same way that the Laplace transform method works--a derivative is transformed into an algebraic operation.

In particular, the Fourier spectral method is commonly used because of the existence of very fast techniques to perform the transformation operations (i.e., Fast Fourier Transforms--FFTs).

In a nutshell, the Fourier spectral method is implemented by representing the spatial part of the field as a series of trigonometric functions:

$\displaystyle c_N(x,t) = \sum_{k=0}^N a_k e^{ikx}$ (09-3)

The diffusion equation can be written as $ N$ independent ordinary differential equations (one for each coefficient of each wavelength):

$\displaystyle \ensuremath{\frac{d {a_k}}{d {t}}}= -D k^2 a_k$ (09-4)

Each of the $ N$ ODEs is stable (increasingly more stable with large $ k$ or short wavelengths). A finite difference approach can be applied to each of the coefficients:

$\displaystyle \frac{a_k(\tau + \Delta \tau) - a_k(\tau)}{\Delta \tau} = -D k^2 a_k(\tau)$ (09-5)

So, the future value of each coefficient can be predicted explicitly and independently:

$\displaystyle a_k(\tau + \Delta \tau) = (1 - D k^2 \Delta \tau)\, a_k(\tau)$ (09-6)

The real-space time-dependent solution can be obtained by application of an inverse FFT.
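The whole spectral procedure can be sketched in a few lines, here advancing each Fourier coefficient exactly by $ a_k(t) = a_k(0) e^{-D k^2 t}$ rather than by the finite-difference update of Eq. 09-6 (function and parameter names are hypothetical):

```python
import numpy as np

def spectral_diffusion(c0, D, L, t):
    """Solve c_t = D c_xx on a periodic domain of length L by FFT.

    Each Fourier coefficient obeys da_k/dt = -D k^2 a_k, an independent
    ODE, so it can be advanced exactly: a_k(t) = a_k(0) exp(-D k^2 t).
    """
    n = len(c0)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    a = np.fft.fft(c0)                             # forward transform
    a *= np.exp(-D * k**2 * t)                     # exact per-mode decay
    return np.fft.ifft(a).real                     # back to real space

# A single sine mode decays as exp(-D k^2 t), matching the exact solution.
n, L, D, t = 128, 2.0 * np.pi, 1.0, 0.5
x = np.linspace(0.0, L, n, endpoint=False)
c0 = np.sin(3.0 * x)                               # k = 3 mode
c = spectral_diffusion(c0, D, L, t)
exact = np.exp(-D * 9.0 * t) * np.sin(3.0 * x)
```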

More details may be found in the textbook and its cited references.

Diffusion as a Molecular Process

The derivation of the diffusion equation followed from consideration of a continuum distribution of conserved particles. A treatment of individual molecular or atomic motions as part of a statistical ensemble of diffusing particles will prove to be not only consistent with the continuum approach, but will provide new physical concepts that enhance understanding of diffusive processes.

Below, the fundamental diffusive process--an atomic or molecular displacement--is treated as a random jump occurring with an average frequency $ \Gamma$. It is instructive to consider how this simple model of random atomic jumps corresponds to the various diffusion systems, or diffusivities discussed above.

First of all, if a jump is truly random, then the probability of a given jump giving a net displacement $ \vec{\Delta x_\circ}$ must be exactly the same as that of a jump with displacement $ - \vec{\Delta x_\circ}$. Therefore, the expected displacement, $ \ensuremath{\langle \vec{\Delta x} \rangle}$, of a particle after one or more jumps must be identically zero. Furthermore, the expected velocity of the particle, $ \ensuremath{\langle \vec{v} \rangle} = \ensuremath{\langle \Gamma \vec{\Delta x} \rangle}$, must also be zero. Considering the Einstein drift equation $ \ensuremath{\langle \vec{v} \rangle} = M \ensuremath{\nabla}\mu$, it must be concluded that the random walk treatment corresponds to a physical situation in which $ \ensuremath{\nabla}\mu = 0$, which is precisely the case intended for the measurement of the self-diffusivity $ D^*_i$; i.e. a diffusion couple of two alloys of the same composition, but differing in isotopic concentration.

Secondly, if the jumps are assumed to take place on a crystalline lattice, then each random displacement $ \vec{r}_i$ will be selected from a small subset of all the possible displacements that would be available in the more general case of diffusion in a gas. Treating the general case first, as is done below, leaves diffusion on the lattice as a particular case.

Third, for a process to be completely random, there can be no restriction on the availability of a nearby empty site--the probability of an empty site cannot appear in the simple model of random walk. In the simple random walk process, two atoms may occupy the same space. Therefore, the simple random walk treatment should not correspond to substitutional diffusion on a crystal lattice. However, substitutional crystalline diffusion can be treated as a sequence of random hops on a lattice if the ``particle'' that hops is one of an infinitely dilute concentration of vacancies--a vacancy will always have an occupied neighboring site to hop into in the dilute limit.

In the treatment of the random walk, the average rate at which a walker (i.e. an atom or molecule) takes steps is an empirical frequency

$\displaystyle \Gamma \equiv \frac{\ensuremath{\langle \mbox{number of successful \lq hops'} \rangle}}{\mbox{unit time}}$ (09-7)

A model for $ \Gamma$ will be considered after the treatment of random walks; it will be shown to be the product of a fundamental rate at which an atom attempts a hop and the probability that an attempt succeeds.

Diffusion as a Random Walk Process

Let $ \Gamma$ be the average rate at which a random walker (i.e. an atom or molecule) takes steps; $ \Gamma$ is an average frequency. Let $ \vec{R}(\Delta \tau)$ be the position of a particular random walker:

Figure 9-2: Schematic illustration of a random walk as a sequence of uncorrelated hops.

The position of a particular particle (relative to the starting position) after a time $ \Delta \tau$ (or $ N_\tau = \Gamma \Delta \tau$ hops) will be:

$\displaystyle \vec{R}(\Delta \tau) = \sum_{i=1}^{N_\tau} \vec{r}_i = \sum_{i=1}^{\Gamma \Delta \tau} \vec{r}_i$ (09-8)

What is meant by a ``random jump'' is not specified, but could be represented by a number of different processes:

Figure 9-3: Illustration of a random walk on a lattice.

Figure 9-4: Illustration of a random walk.

In either case, the expected value $ \ensuremath{\langle \vec{R}(\Delta \tau) \rangle} = 0$; this is an indication that no particular direction is favored and that the driving force $ \ensuremath{\nabla}\mu = 0$ for a random walk.

However, it should be intuitively clear that the distance between two particles should increase with time. This distance is measured by the spread, or second moment, of the distribution $ \ensuremath{\langle \vec{R^2}(\Delta \tau) \rangle}$:

\begin{displaymath}R^2(\Delta \tau) = \vec{R}(\Delta \tau) \cdot \vec{R}(\Delta \tau) = \left( \sum_{i=1}^{N_\tau} \ensuremath{\vec{r}}_i \right) \cdot \left( \sum_{j=1}^{N_\tau} \ensuremath{\vec{r}}_j \right) = \sum_{i=1}^{N_\tau} \sum_{j=1}^{N_\tau} \ensuremath{\vec{r}}_i \cdot \ensuremath{\vec{r}}_j \end{displaymath} (09-9)

This can be rewritten as a sum along the diagonal and off-diagonal terms:

$\displaystyle R^2(\Delta \tau) = \sum_{i=1}^{N_\tau} \vec{r_i} \cdot \vec{r_i} + 2\sum_{i=1}^{N_\tau - 1} \sum_{j=i+1}^{N_\tau} \vec{r_i} \cdot \vec{r_j}$ (09-10)

Using the relationship $ \vec r_i\cdot \vec r_{j}=\vert r_i\vert\vert r_{j}\vert\cos \Delta \theta_{i,j}$, where $ \Delta \theta_{i,j}$ is the angle between the $ \ensuremath{\mbox{i}^{th}}$ and $ \ensuremath{\mbox{j}^{th}}$ jump-vectors, the expected value becomes:

$\displaystyle \ensuremath{\langle \vec{R}^2(\Delta \tau) \rangle} = N_\tau \ensuremath{\langle r^2 \rangle} + 2 \ensuremath{\langle \sum_{i=1}^{N_\tau - 1} \sum_{j=i+1}^{N_\tau} \vert\vec{r}_i\vert \vert\vec{r}_j\vert \cos \Delta \theta_{ij} \rangle}$ (09-11)

Because $ \Delta \theta_{ij} = - \Delta \theta_{ji}$ and the cosine is an even function, the restricted sum can be written as an unrestricted sum over $ i \neq j$ without the factor of two:

$\displaystyle \ensuremath{\langle \vec{R}^2(\Delta \tau) \rangle} = \Gamma \Delta \tau \ensuremath{\langle r^2 \rangle} + \ensuremath{\langle \sum_{i=1}^{N_\tau} \sum_{j \neq i}^{N_\tau} \vert\vec{r}_i\vert \vert\vec{r}_j\vert \cos \Delta \theta_{ij} \rangle}$ (09-12)

No assumption about jump correlations, jump lengths, distribution of values $ \Delta \theta_{ij}$, or the number of dimensions has been made.

If the lengths of the jumps do not correlate to the jump-angles:

$\displaystyle \ensuremath{\langle \vec{R}^2(\Delta \tau) \rangle} = \Gamma \Delta \tau \ensuremath{\langle r^2 \rangle} \left( 1 + \frac{1}{N_\tau} \ensuremath{\langle \sum_{i=1}^{N_\tau} \sum_{j \neq i}^{N_\tau} \cos \Delta \theta_{ij} \rangle} \right)$ (09-13)

where $ N_\tau = \Gamma \Delta \tau$ and $ \ensuremath{\langle r^2 \rangle}$ have been factored out of both terms.

Define the Correlation Factor, $ f$:

$\displaystyle f \equiv 1 + \frac{1}{N_\tau} \sum_{i=1}^{N_\tau} \sum_{j \neq i}^{N_\tau} \ensuremath{\langle \cos \Delta \theta_{ij} \rangle}$ (09-14)


$\displaystyle \frac{\ensuremath{\langle R^2(\Delta \tau) \rangle}}{\Delta \tau} = \Gamma \ensuremath{\langle r^2 \rangle}f$ (09-15)

This is an important equation: it relates the spread of the expected square displacements (the second moment of the probability distribution) per unit time to three factors that are dictated by the mechanism of diffusion: the rate of successful hops $ \Gamma$, the average square hop distance $ \ensuremath{\langle r^2 \rangle}$, and a factor related to the correlation between hops, $ f$. If every hop is uncorrelated and the probability of a displacement $ \vec{r}$ is exactly balanced by an equal probability of $ -\vec{r}$, then $ f=1$.
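Equation 09-15 is easy to verify by simulation. The sketch below (parameters chosen arbitrarily) takes many walkers with unit-length hops in uniformly random directions, so the hops are uncorrelated ($ f = 1$), $ \langle r^2 \rangle = 1$, and the prediction is $ \langle R^2 \rangle = N_\tau$:

```python
import numpy as np

rng = np.random.default_rng(42)
n_walkers, n_steps = 20000, 100     # arbitrary simulation parameters

# Unit-length hops in uniformly random 2-D directions: uncorrelated,
# so f = 1, <r^2> = 1, and Eq. 09-15 predicts <R^2> = N_tau = n_steps.
theta = rng.uniform(0.0, 2.0 * np.pi, size=(n_walkers, n_steps))
steps = np.stack([np.cos(theta), np.sin(theta)], axis=-1)
R = steps.sum(axis=1)               # net displacement of each walker
mean_R = R.mean(axis=0)             # expected displacement: near zero
mean_R2 = (R**2).sum(axis=1).mean() # expected square displacement: near n_steps
```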

Diffusion as a Time-Dependent Probability Distribution

In the last section, the expected position and the expected square displacement of a particle were calculated by considering the statistics of a sequence of random jumps. These statistics can be coupled with the concept of a probability distribution: the probability of finding a particle at a given position after a time $ \Delta \tau$ will be determined and related to the form of a point-source solution of the diffusion equation.

Consider a sequence of $ N_\tau$ random jumps on a one-dimensional lattice. If the random walker is at position $ n$ after $ N_\tau$ jumps (after starting at $ n=0$), then the number of jumps to the right $ n_R$ and to the left $ n_L$ must satisfy:

\begin{displaymath}\begin{split}n = n_R - n_L\\  N_\tau = n_R + n_L \end{split}\end{displaymath} (09-16)

The number of different ways that a random jumper could land at site $ n$ from the origin is given by the binomial coefficient:

$\displaystyle \Omega(n, N_\tau) = \frac{N_\tau !}{n_R ! \, n_L !} = \frac{N_\tau !}{\left(\frac{N_\tau + n}{2}\right) ! \left(\frac{N_\tau - n}{2}\right) !}$ (09-17)

Therefore the probability of getting to site $ n$ after $ N_\tau$ jumps is:

$\displaystyle p(n, N_\tau) = \frac{N_\tau !}{\left(\frac{N_\tau + n}{2}\right) ! \left(\frac{N_\tau - n}{2}\right) !}\, p_R^{n_R} p_L^{n_L}$ (09-18)

If the probability of jumping right equals the probability of jumping left, $ p_R = p_L = \frac{1}{2}$, then

$\displaystyle p(n, N_\tau) = \frac{N_\tau !}{\left(\frac{N_\tau + n}{2}\right) ! \left(\frac{N_\tau - n}{2}\right) !} \left(\frac{1}{2}\right)^{N_\tau}$ (09-19)

Using Stirling's formula,

$\displaystyle Q! \approx \sqrt{2 \pi}\, Q^{Q+\frac{1}{2}} e^{-Q}$ (09-20)

and taking the limit $ n/N_\tau \ll 1$,

$\displaystyle p(n, N_\tau) \propto e^{-\frac{n^2}{2 N_\tau}}$ (09-21)

which shows that the distribution of a point source in one dimension spreads as a Gaussian.
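The approach to a Gaussian can be checked directly by comparing the exact binomial probabilities of Eq. 09-19 against the normalized Gaussian limit (a quick numerical sketch; the helper names are my own):

```python
import math

N = 100                             # number of jumps (even, so n is even)

def p_binomial(n, N):
    """Exact probability of net displacement n after N unbiased jumps
    (Eq. 09-19); n must have the same parity as N."""
    return math.comb(N, (N + n) // 2) * 0.5**N

def p_gauss(n, N):
    """Gaussian limit of Eq. 09-21, normalized over even n: allowed
    sites are spaced 2 apart, hence the factor of 2."""
    return 2.0 / math.sqrt(2.0 * math.pi * N) * math.exp(-n**2 / (2.0 * N))

# The two distributions agree closely near the peak for N = 100.
err = max(abs(p_binomial(n, N) - p_gauss(n, N)) for n in range(-20, 21, 2))
```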

Using $ R = n \ensuremath{\langle r \rangle}$ and $ N_\tau = \Gamma \Delta \tau$ (for jumps of fixed length, $ \ensuremath{\langle r \rangle}^2 = \ensuremath{\langle r^2 \rangle}$):

$\displaystyle p(R, \Delta \tau) \propto e^{-\frac{R^2}{2 \ensuremath{\langle r^2 \rangle}\Gamma \Delta \tau}}$ (09-22)

Normalizing so that for all $ \Delta \tau$

$\displaystyle \int_{-\infty}^{\infty} p(R, \Delta \tau) dR = 1$ (09-23)

$\displaystyle p(R, \Delta \tau) = \frac{1}{\sqrt{2 \pi \Gamma \Delta \tau \ensuremath{\langle r^2 \rangle}}}\, e^{-\frac{R^2}{2 \ensuremath{\langle r^2 \rangle}\Gamma \Delta \tau}}$ (09-24)

Comparing this with the general solution constructed from the fundamental point-source solution in 1D:

$\displaystyle c(x,t) = \int_{-\infty}^{\infty} \frac{c_{ic} (x - \zeta) e^{-\frac{(x - \zeta)^2}{4 D t}}} {\sqrt{4 \pi D t}} d \zeta$ (09-25)

The probability distribution can be used to identify the diffusivity with a random process:

$\displaystyle D = \frac{\Gamma}{2} \ensuremath{\langle r^2 \rangle}$ (09-26)

for uncorrelated jumps ($ f=1$) on a one-dimensional lattice.
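This identification can be tested with a simulated 1-D lattice walk: estimating $ D$ from the mean-square displacement, $ D \approx \ensuremath{\langle R^2 \rangle}/(2 \Delta \tau)$, should recover $ \Gamma \ensuremath{\langle r^2 \rangle}/2$. A sketch with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
gamma = 1.0                         # hop frequency (hypothetical units)
dt_total = 200.0                    # elapsed time Delta_tau
n_hops = int(gamma * dt_total)      # N_tau = Gamma * Delta_tau
n_walkers = 20000

# Unit-length hops left or right with equal probability on a 1-D
# lattice: <r^2> = 1 and f = 1, so D should come out to Gamma/2.
hops = rng.choice([-1.0, 1.0], size=(n_walkers, n_hops))
R = hops.sum(axis=1)
D_est = (R**2).mean() / (2.0 * dt_total)
D_theory = gamma * 1.0 / 2.0        # D = Gamma <r^2> / 2 in one dimension
```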

Relation of the Self-Diffusivity to a Random Walk

The notion of the continuum limit for the concentration, $ c(\vec{r},t)$ of particles is consistent with an interpretation of the concentration related to a probability of finding a particle of a given type within a small distance of the point $ \vec{r}$ at a time $ t$. Treating the concentration as a probability distribution, the second moment of the distribution is related to the mean-square displacement from the average displacement:

$\displaystyle \ensuremath{\langle R^2 \rangle}(\Delta \tau) = \frac{\int_0^\infty r^2 c(\vec{r},\Delta \tau) dr} {\int_0^\infty c(\vec{r},\Delta \tau) dr}$ (09-27)

Suppose all of the particles are located at the origin at time $ t=0$; then, if diffusion takes place in three dimensions, the number of particles in a spherical shell of radius $ r$ and thickness $ dr$ is $ c(r, \Delta \tau)\, dr$, where:

$\displaystyle c(r, \Delta \tau) = \frac{c_0\, 4 \pi r^2}{(4 \pi D \Delta \tau)^{3/2}}\, e^{-\frac{r^2}{4 D \Delta \tau}}$ (09-28)


\begin{displaymath}\begin{split}\int_0^\infty x^0 e^{-a x^2}\, dx = \frac{1}{2} \sqrt{\frac{\pi}{a}} \\ \int_0^\infty x^2 e^{-a x^2}\, dx = \frac{1}{4a} \sqrt{\frac{\pi}{a}} \\ \int_0^\infty x^4 e^{-a x^2}\, dx = \frac{3}{8 a^2} \sqrt{\frac{\pi}{a}} \end{split}\end{displaymath} (09-29)

$\displaystyle \ensuremath{\langle R^2 \rangle}(\Delta \tau) = \frac{12 \sqrt{\pi}\, (D \Delta \tau)^{5/2}}{2 \sqrt{\pi}\, (D \Delta \tau)^{3/2}} = 6 D \Delta \tau$ (09-30)
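The moment integrals and the resulting ratio can be checked numerically; in the sketch below (step size and cutoff are ad hoc), note that the ratio of the fourth to the second moment integrals is $ 3/(2a)$, which with $ a = 1/(4 D \Delta \tau)$ reproduces $ 6 D \Delta \tau$:

```python
import numpy as np

# Numerical check of the Gaussian moment integrals, for an arbitrary a = 2.
a = 2.0
dx = 1.0e-4
x = np.arange(0.0, 10.0, dx)        # e^{-a x^2} is negligible beyond x = 10
w = np.exp(-a * x**2)
I0 = np.sum(w) * dx                 # int x^0 e^{-a x^2} dx
I2 = np.sum(x**2 * w) * dx          # int x^2 e^{-a x^2} dx
I4 = np.sum(x**4 * w) * dx          # int x^4 e^{-a x^2} dx
ref0 = 0.5 * np.sqrt(np.pi / a)
ref2 = np.sqrt(np.pi / a) / (4.0 * a)
ref4 = 3.0 * np.sqrt(np.pi / a) / (8.0 * a**2)
ratio = ref4 / ref2                 # = 3/(2a); with a = 1/(4 D dt) this is 6 D dt
```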

Comparing this to Equation 9-15,

$\displaystyle D = \frac{\Gamma \ensuremath{\langle r^2 \rangle} f}{6}$ (09-31)

This relates the macroscopic self-diffusivity to the jump frequency, average hop distance and correlation factor in three dimensions.

Using the point source in two dimensions, the analogous result for diffusion in two dimensions is:

$\displaystyle D = \frac{\Gamma \ensuremath{\langle r^2 \rangle} f}{4}$ (09-32)

and in general for a point source in $ N$ dimensions:

$\displaystyle D = \frac{\Gamma \ensuremath{\langle r^2 \rangle} f}{2N}$ (09-33)

W. Craig Carter 2002-02-27