
Last Time

Marker Motion and Kirkendall Effect

Stress and Diffusion

3.21 Spring 2002: Lecture 06


Electromigration is a kinetic effect that has consequences for the reliability of narrow conductors. In electromigration, a potential difference across the conductor contributes to the net flux of atoms.

Figure 6-1: Illustration of electromigration. If the dominant mobile charged species is the electron, then electrons (moving towards the anode) collide with lattice host atoms, transfer momentum, and produce a net flux of atoms towards the anode. To conserve lattice sites, a counter-flux of vacancies develops--and if marker atoms are present, they move in the direction of the vacancy flux. If there are no sources and sinks for vacancies, the vacancies condense at the cathode. If the vacancies condense as pores before reaching the cathode, the pores grow and migrate towards the cathode. If the dominant carriers are holes, the vacancies travel towards the anode instead.

The same method of associating the various fluxes with a single identifiable mechanism that was used in the analysis of stress-assisted diffusion can be used in the case of electromigration. The generalized driving force will be shown to be the gradient in electrochemical potential, $ \mu_1 + e Z_1 \phi$, where $ e$ is the magnitude of charge on the electron, and $ Z_1$ is the number of charges on the diffusing ionic species.

Considering only motion of ions and counterflow of electrons, the generalized entropy production is:

$\displaystyle T \dot{\sigma} = - J_1 \ensuremath{\nabla}\mu_1 - J_q \ensuremath{\nabla}\phi$ (06-1)

The two fluxes are related through:

$\displaystyle J_q = e Z_1 J_1 - e J_{elect}$ (06-2)

where $ J_{elect}$ is the flux of electrons. Therefore,

$\displaystyle T \dot{\sigma} = - J_1 \ensuremath{\nabla}(\mu_1 + e Z_1 \phi) + e J_{elect} \ensuremath{\nabla}\phi$ (06-3)

The term inside the parentheses is the electrochemical potential.

The generalized force-flux relationships are given by:

\begin{displaymath}\begin{split}J_1 &= -M_1 c_1 \ensuremath{\nabla}(\mu_1 + e Z_1 \phi) - L_{1e} \ensuremath{\nabla}\phi \\ J_{elect} &= -L_{e1} \ensuremath{\nabla}(\mu_1 + e Z_1 \phi) - M_e \rho_e \ensuremath{\nabla}\phi \end{split}\end{displaymath} (06-4)

The cross-term $ L_{1e}$ comes from the momentum coupling of electrons and the lattice host atoms--sometimes called the `electron wind.' This is made explicit by associating an effective ionic charge interaction, $ Z^{ew}$, with $ L_{1e}$ so that the first equation in Eq. 6-4 becomes:

$\displaystyle J_1 = -M_1 c_1 \ensuremath{\nabla}( \mu_1 + e Z^{eff} \phi)$ (06-5)

where $ Z^{eff} \equiv Z_1 - Z^{ew}$ is the effective charge on the diffusing species.

The interaction between the moving electrons and the host atoms is usually what one would intuitively expect--the atoms are dragged along with the charge carriers.
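A minimal numerical sketch of Eq. 6-5 may help fix the signs. The function below evaluates the drift flux for a uniform field with uniform composition (so $ \ensuremath{\nabla}\mu_1 = 0$); all numerical values are illustrative assumptions, not data from the lecture:

```python
# Sketch (illustrative, not from the lecture): electromigration drift flux
# for a uniform field with uniform composition, where Eq. 6-5 reduces to
# J1 = M1 * c1 * e * Zeff * E, with Zeff = Z1 - Z_ew.

E_CHARGE = 1.602e-19  # C, magnitude of the electron charge

def electromigration_flux(mobility, concentration, z_lattice, z_wind, field):
    """Atom flux J1 = -M1*c1*e*Zeff*grad(phi) = M1*c1*e*Zeff*E
    (Eq. 6-5 with grad(mu1) = 0 and Zeff = Z1 - Z_ew)."""
    z_eff = z_lattice - z_wind
    return mobility * concentration * E_CHARGE * z_eff * field

# Assumed, order-of-magnitude numbers for an Al-like interconnect:
J1 = electromigration_flux(mobility=1e8,        # illustrative mobility
                           concentration=6e28,  # atoms/m^3
                           z_lattice=3,         # nominal ionic charge
                           z_wind=7,            # electron-wind contribution
                           field=1e3)           # V/m

# Zeff = 3 - 7 = -4 < 0: the electron wind dominates, so atoms drift
# against the field direction, i.e. with the electron flow.
print(J1 < 0)  # True
```

With these assumed numbers the sign flips when the wind term is removed, which is the qualitative content of $ Z^{eff} = Z_1 - Z^{ew}$.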

Anisotropy and Kinetic Coefficients

From consideration of expressions for the entropy production such as Eq. 6-3, and the hypothesis that the entropy production is always positive, it was reasoned that a flux would be antiparallel to its driving force. However, it is not necessary that they be exactly antiparallel--only that their dot product be negative. In anisotropic materials, the driving forces and fluxes are generally not in the same direction, as illustrated in the following figure:

Figure 6-2: Illustration of a bar composed of alternating layers of a high-conductivity and a low-conductivity material. Near the center of the bar, the high-conductivity layers will be at constant potential, so the gradient in potential (the driving force) will be normal to the layers. However, the net flux in the bar is between the two ends and not necessarily normal to the layers.

A linear relationship between the forces and fluxes will now include all of the vector components:

$\displaystyle \left( \begin{array}{c} J_{Q_x} \\  J_{Q_y} \\  J_{Q_z} \end{array} \right) = - \left( \begin{array}{ccc} \kappa_{xx} & \kappa_{xy} & \kappa_{xz} \\  \kappa_{yx} & \kappa_{yy} & \kappa_{yz} \\  \kappa_{zx} & \kappa_{zy} & \kappa_{zz} \end{array} \right) \left( \begin{array}{c} \frac{\partial T}{\partial x} \\  \frac{\partial T}{\partial y} \\  \frac{\partial T}{\partial z} \end{array} \right)$ (06-6)

or in component form:

$\displaystyle J_i = -D_{ij} \frac{\partial c}{\partial x_j}$ (06-7)

for the case of mass diffusion, or just simply,

$\displaystyle \vec{J_Q} = -{\ensuremath{\underline{ \kappa}}} \ensuremath{\nabla}T$ (06-8)

From Onsager's hypotheses, $ \kappa_{ij} = \kappa_{ji}$ and $ \kappa_{ij}$ is positive definite.

These material coefficients are examples of tensors. Neumann's principle states that the symmetry of a property tensor must include the symmetry elements of the point group of the underlying material. Note that Neumann's principle specifies only the minimum symmetry the tensors must include: it does not prevent them from having more symmetry than the underlying material and, in fact, they may be isotropic.
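Neumann's principle can be checked mechanically: a rank-2 property tensor $ \kappa$ respects a symmetry operation $ R$ when $ R \kappa R^T = \kappa$. A small pure-Python sketch (the hexagonal-like conductivity tensor below is illustrative):

```python
# Sketch of Neumann's principle for a rank-2 property tensor: if R is a
# symmetry operation of the crystal's point group, then R k R^T must equal k.
# Pure-Python 3x3 helpers; tensor values are illustrative (graphite-like).

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [[a[j][i] for j in range(3)] for i in range(3)]

def invariant(k, r, tol=1e-9):
    """True if R k R^T == k, i.e. k respects the symmetry operation R."""
    kt = matmul(matmul(r, k), transpose(r))
    return all(abs(kt[i][j] - k[i][j]) < tol
               for i in range(3) for j in range(3))

# Hexagonal-like conductivity tensor: isotropic in-plane, different along z.
k_hex = [[355, 0, 0], [0, 355, 0], [0, 0, 89]]

rot90_z = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # 4-fold rotation about z
swap_xz = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]    # mirror exchanging x and z

print(invariant(k_hex, rot90_z))  # True: in-plane isotropy
print(invariant(k_hex, swap_xz))  # False: x and z are not equivalent
```

Note that the tensor is invariant under a 4-fold rotation about $ z$ even though a hexagonal crystal has only 6-fold symmetry there: the property has more symmetry than the material, as Neumann's principle allows.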

Some examples of material tensor properties include the following:

Material Response (example) | Material Property, a linear mapping (example) | Applied Field (example)
vector (rank 1): current | tensor (rank 2): electrical conductivity | vector (rank 1): $ \ensuremath{\nabla}\phi$
vector (rank 1): polarization | tensor (rank 3): piezoelectric constant | tensor (rank 2): stress
tensor (rank 2): strain | tensor (rank 4): compliance ($ S_{ijkl}$) | tensor (rank 2): stress

For example, the thermal conductivity of diamond is approximately (in W/(m K)):

$\displaystyle \underline{\kappa}_{\mbox{C-diamond}} = \left( \begin{array}{ccc} 1000 & 0 & 0\\  0 & 1000 & 0\\  0 & 0 & 1000 \end{array} \right)$ (06-9)

but for graphite:

$\displaystyle \underline{\kappa}_{\mbox{C-graphite}} = \left( \begin{array}{ccc} 355 & 0 & 0\\  0 & 355 & 0\\  0 & 0 & 89 \end{array} \right)$ (06-10)

which reflects that the covalent (sp2) bonds within the graphite planes couple phonons more effectively than the out-of-plane van der Waals bonds. Furthermore, it shows the relative effectiveness of thermal conduction through sp3 and sp2 bonding.
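The point of Figure 6-2 can be made numerically with the graphite tensor of Eq. 6-10: a tilted temperature gradient produces a flux whose dot product with the gradient is negative, as entropy production requires, but which is not antiparallel to it. A short sketch (the gradient direction is an arbitrary illustrative choice):

```python
# Sketch: with an anisotropic conductivity, the flux need not be antiparallel
# to the driving force; positive entropy production only requires a negative
# dot product. Uses the graphite conductivity tensor of Eq. 6-10.
import math

kappa = [[355, 0, 0], [0, 355, 0], [0, 0, 89]]  # W/(m K), Eq. 6-10
grad_T = [1.0, 0.0, 1.0]                        # K/m, tilted 45 deg from z

# J_i = -kappa_ij * dT/dx_j
J = [-sum(kappa[i][j] * grad_T[j] for j in range(3)) for i in range(3)]

dot = sum(J[i] * grad_T[i] for i in range(3))    # must be negative
norm = lambda v: math.sqrt(sum(x * x for x in v))
cos_angle = dot / (norm(J) * norm(grad_T))       # -1 only if antiparallel

print(dot < 0)                    # True: entropy production is positive
print(abs(cos_angle + 1) > 0.05)  # True: J is not exactly antiparallel
```

Here $ \vec{J} = (-355, 0, -89)$ while $ -\ensuremath{\nabla}T$ points along $ (-1,0,-1)$: the flux is steered towards the high-conductivity in-plane direction.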

Considering the possible anisotropy of material coefficients, the general force-flux relations will have tensors multiplying vector driving forces, e.g.

$\displaystyle \ensuremath{\vec{J}}_i = -\underline{L_{iQ}} \frac{\ensuremath{\nabla}T}{T} - \underline{L_{i1}} \ensuremath{\nabla}\mu_1 - \ldots - \underline{L_{i\phi}} \ensuremath{\nabla}\phi - \ldots - \underline{L_{iN}} \ensuremath{\nabla}\mu_N$ (06-11)


$\displaystyle \left( \begin{array}{c} J_{i_1}\\  J_{i_2}\\  J_{i_3} \end{array} \right) = - \left( \begin{array}{ccccccc} L^{iQ}_{11} & L^{iQ}_{12} & L^{iQ}_{13} & L^{i\phi}_{11} & L^{i\phi}_{12} & L^{i\phi}_{13} & \cdots\\  L^{iQ}_{21} & L^{iQ}_{22} & L^{iQ}_{23} & L^{i\phi}_{21} & L^{i\phi}_{22} & L^{i\phi}_{23} & \cdots\\  L^{iQ}_{31} & L^{iQ}_{32} & L^{iQ}_{33} & L^{i\phi}_{31} & L^{i\phi}_{32} & L^{i\phi}_{33} & \cdots \end{array} \right) \left( \begin{array}{c} \frac{1}{T}\ensuremath{\frac{\partial{T}}{\partial{x}}}\\  \frac{1}{T}\ensuremath{\frac{\partial{T}}{\partial{y}}}\\  \frac{1}{T}\ensuremath{\frac{\partial{T}}{\partial{z}}}\\  \ensuremath{\frac{\partial{\phi}}{\partial{x}}}\\  \ensuremath{\frac{\partial{\phi}}{\partial{y}}}\\  \ensuremath{\frac{\partial{\phi}}{\partial{z}}}\\  \vdots \end{array} \right)$ (06-12)

In general

$\displaystyle \left( \begin{array}{c} \ensuremath{\vec{J}}_Q\\  \ensuremath{\vec{J}}_q\\  \vdots\\  \ensuremath{\vec{J}}_N \end{array} \right) = - \left( \begin{array}{cccc} \underline{L_{QQ}} & \underline{L_{Qq}} & \cdots & \underline{L_{QN}}\\  \underline{L_{qQ}} & \underline{L_{qq}} & \cdots & \underline{L_{qN}}\\  \vdots & \vdots & \ddots & \vdots\\  \underline{L_{NQ}} & \underline{L_{Nq}} & \cdots & \underline{L_{NN}} \end{array} \right) \left( \begin{array}{c} \frac{\ensuremath{\nabla}T}{T}\\  \ensuremath{\nabla}\phi\\  \vdots\\  \ensuremath{\nabla}\mu_N \end{array} \right)$ (06-13)

The off-diagonal tensors are related through their transposes by Onsager symmetry. The diagonal matrices are symmetric and positive definite by positive entropy production.
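These two constraints can be verified directly for a small assumed coupling matrix: symmetry makes $ L_{ij} = L_{ji}$, and positive definiteness keeps the quadratic form $ T\dot{\sigma} = \vec{F} \cdot \underline{L}\, \vec{F}$ non-negative for any driving forces. A sketch with illustrative coefficients:

```python
# Sketch: Onsager symmetry and positive definiteness checked numerically for
# an assumed 2x2 coupling matrix; together they keep the entropy production
# T*sigma = F . L F non-negative for every choice of driving forces.
import random

L = [[2.0, 0.7],
     [0.7, 1.0]]   # illustrative Onsager matrix: symmetric, det = 1.51 > 0

def entropy_production(F):
    # T*sigma = sum_ij F_i L_ij F_j (quadratic form in the driving forces)
    return sum(F[i] * L[i][j] * F[j] for i in range(2) for j in range(2))

assert L[0][1] == L[1][0]                                      # symmetry
assert L[0][0] > 0 and L[0][0]*L[1][1] - L[0][1]*L[1][0] > 0   # pos. def.

random.seed(0)
trials = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(1000)]
print(all(entropy_production(F) >= 0 for F in trials))  # True
```

If the off-diagonal coupling were raised above $ \sqrt{L_{11} L_{22}}$, the determinant test would fail and some force combinations would produce negative entropy, which is why positive definiteness is required.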

Figure 6-3: Example calculation for a composite bar whose components have thermal conductivities differing by a factor of 100. The top row illustrates the same material rotated with respect to the two thermal reservoirs that maintain a constant high temperature on the left and low temperature on the right; the top and bottom edges are coated with a thermally insulating layer. The second row illustrates the steady-state temperature distribution. The bottom row is a plot of the intensity of the thermal flux.

Flux, Divergence, and Accumulation Revisited: The Diffusion Equation

Figure 6-4: Consider the rate of accumulation in a volume $ \Delta V$ bounded by the surface $ \mathcal{B}(\Delta V)$ with outward unit normal $ \hat{n}$:

$\displaystyle \ensuremath{\frac{\partial{N_i}}{\partial{t}}}= - \int_{\mathcal{B}(\Delta V)} \ensuremath{\vec{J}}_i \cdot \hat{n} dA$ (06-14)

With the divergence theorem,

$\displaystyle \ensuremath{\frac{\partial{N_i}}{\partial{t}}}= - \int_{\Delta V} \ensuremath{\nabla}\cdot \ensuremath{\vec{J}}_i dV$ (06-15)

Consider shrinking $ \Delta V$ towards a given point. Using the mean value theorem for integration, $ \int \ensuremath{\nabla}\cdot \vec{J}\, dV$ can be replaced with $ \ensuremath{\nabla}\cdot \vec{J} \Delta V$ evaluated at some point within $ \Delta V$. Dividing both sides of Eq. 6-15 by $ \Delta V$ in this limit:

$\displaystyle \ensuremath{\frac{\partial{c_i}}{\partial{t}}}= - \ensuremath{\nabla}\cdot \ensuremath{\vec{J}}_i$ (06-16)

Using the form of Fick's first law in the laboratory frame:

$\displaystyle \ensuremath{\vec{J}}= - \tilde{D} \ensuremath{\nabla}c$ (06-17)

Combining the above equations, a single equation involving the concentration and its derivatives results:

$\displaystyle \ensuremath{\frac{\partial{c}}{\partial{t}}}= \ensuremath{\nabla}\cdot \left( \tilde{D} \ensuremath{\nabla}c \right)$ (06-18)

which is the diffusion equation.

A diffusion equation expresses a local relation between the rate of change of a quantity and the divergence of its flux. Any conserved quantity obeys an equation of this form, which can be derived with the same simple steps used above.

Most analytic solutions to the diffusion equation are for the case of $ \tilde{D}$ being both uniform in space and independent of composition. As has been discussed previously, this is certainly not generally true of $ \tilde{D}$. However, it would be useful to apply the wealth of solutions for constant $ \tilde{D}$ to materials problems of interest. The solutions for constant $ \tilde{D}$ are useful in the limiting case where the concentration does not vary wildly. In this case, $ \tilde{D}(c)$ can be expanded about the average concentration:

$\displaystyle \tilde{D}(c) = \tilde{D}_0 + \frac{\tilde{D}_1}{\ensuremath{\langle c \rangle}} \Delta c + \ldots$ (06-19)

where $ \tilde{D}_0$ is the average value of $ \tilde{D}(c)$ taken over its maximum and minimum values, $ \Delta c = c - \ensuremath{\langle c \rangle}$, and

$\displaystyle \tilde{D}_1 = \left. \ensuremath{\frac{\partial{\tilde{D}}}{\partial{c}}} \right\vert_{c = \ensuremath{\langle c \rangle}} \ensuremath{\langle c \rangle}$ (06-20)

The diffusion equation becomes:

$\displaystyle \ensuremath{\frac{\partial{c}}{\partial{t}}}= \tilde{D}_0 \nabla^2 c + \frac{1}{\ensuremath{\langle c \rangle}} \ensuremath{\nabla}c \cdot \tilde{D}_1 \ensuremath{\nabla}c + \ldots$ (06-21)

The first solution in a method of successive approximations in small $ \Delta c$ and small $ \vert \ensuremath{\nabla}c \vert$ is simply,

$\displaystyle \ensuremath{\frac{\partial{c}}{\partial{t}}}= \tilde{D}_0 \nabla^2 c$ (06-22)

which is the diffusion equation for constant diffusivity.
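The conservation bookkeeping of Eqs. 6-14 to 6-16 translates directly into a numerical scheme: accumulate the negative divergence of a discrete Fick's-law flux. A minimal explicit (FTCS) sketch, with illustrative grid and diffusivity values and zero-flux ends:

```python
# Sketch (illustrative): conservative explicit finite-difference integration
# of the constant-diffusivity equation (Eq. 6-22) in 1-D. The update is the
# discrete form of accumulation = -divergence of flux (Eqs. 6-14 to 6-16).

def diffuse(c, D, dx, dt, steps):
    """FTCS update written in flux form, zero-flux (closed) ends.
    Stability requires D*dt/dx**2 <= 1/2."""
    assert D * dt / dx**2 <= 0.5, "FTCS stability limit violated"
    c = list(c)
    n = len(c)
    for _ in range(steps):
        # Fick's first law between neighboring cells; no flux through ends.
        flux = [0.0] + [-D * (c[i + 1] - c[i]) / dx
                        for i in range(n - 1)] + [0.0]
        # accumulation = -divergence of flux (discrete Eq. 6-16)
        c = [c[i] - (dt / dx) * (flux[i + 1] - flux[i]) for i in range(n)]
    return c

# initial condition: a unit spike in the middle of the bar
c0 = [0.0] * 51
c0[25] = 1.0
c1 = diffuse(c0, D=1.0, dx=1.0, dt=0.25, steps=200)

print(abs(sum(c1) - sum(c0)) < 1e-9)  # True: total amount is conserved
print(max(c1) < max(c0))              # True: the spike spreads and decays
```

Because the update subtracts a telescoping sum of fluxes, the total amount is conserved exactly (to roundoff) no matter how long the run, mirroring the integral statement of Eq. 6-14.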

The Diffusion Equation for Constant Diffusivity

The diffusion equation has an intuitively useful geometrical interpretation:

Figure 6-5: Relation between the geometry of a concentration profile and its evolution. The local rate of change in time is proportional to the local second spatial derivative.

Generally, because the diffusion equation contains two spatial derivatives and one time derivative, the specification of two spatial integration constants (the boundary conditions) and one time integration constant (the initial condition) is required when stating a problem for solution.

Typically, boundary conditions (BCs) look like:

$\displaystyle c(\vec x = \vec x_1) = c_1(t)$    or $\displaystyle \ensuremath{\vec{J}}(\vec x = \vec x_1) \cdot \hat{n} = \ensuremath{\vec{J}}_1(t)$ (06-23)

The BCs on the left are called Dirichlet and those on the right are called Neumann boundary conditions. Boundary conditions are functions of time specified at particular positions in space.

Initial conditions (ICs) are a function of space specified at a particular instant of time:

$\displaystyle c(x,y,z,t=t_0) = c_0(x,y,z)$ (06-24)

One very useful result of the diffusion equation being a linear partial differential equation is superposition. Suppose $ p(x,t)$ and $ q(x,t)$ are both solutions to the diffusion equation, each with their own initial and boundary conditions:

$\displaystyle \ensuremath{\frac{\partial{p}}{\partial{t}}}= \tilde{D} \ensuremath{\frac{\partial^2{p}}{\partial{x}^2}}$ (06-25)

with BC's and IC:

$\displaystyle p(x=a,t) = p_a(t)$    $\displaystyle p(x=b,t) = p_b(t)$    $\displaystyle p(x,t=0) = p_0(x)$ (06-26)

$\displaystyle \ensuremath{\frac{\partial{q}}{\partial{t}}}= \tilde{D} \ensuremath{\frac{\partial^2{q}}{\partial{x}^2}}$ (06-27)

with BC's and IC:

$\displaystyle q(x=a,t) = q_a(t)$    $\displaystyle q(x=b,t) = q_b(t)$    $\displaystyle q(x,t=0) = q_0(x)$ (06-28)

Then $ r(x,t) = p(x,t) + q(x,t)$ is a solution for BC's and IC:

$\displaystyle r(x=a,t) = p_a(t) + q_a(t)$    $\displaystyle r(x=b,t) = p_b(t) + q_b(t)$    $\displaystyle r(x,t=0) = p_0(x) + q_0(x)$ (06-29)
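Superposition can also be checked numerically: an explicit finite-difference update is a linear map, so solving separately for two initial conditions and for their sum gives solutions that add to machine precision. A sketch with illustrative initial and boundary values:

```python
# Sketch: numerical check of superposition for the 1-D constant-D diffusion
# equation. Each FTCS step is a linear map, so the solution for p0 + q0
# equals the sum of the solutions for p0 and q0. Values are illustrative.

def step(c, r):
    # one FTCS update; the end values are held fixed (Dirichlet BCs)
    return [c[0]] + [c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
                     for i in range(1, len(c) - 1)] + [c[-1]]

def solve(c, r, steps):
    for _ in range(steps):
        c = step(c, r)
    return c

n, r, steps = 41, 0.25, 100
p0 = [1.0 if i < n // 2 else 0.0 for i in range(n)]  # a step profile
q0 = [0.5] * n                                       # a uniform offset
s0 = [p + q for p, q in zip(p0, q0)]

p = solve(p0, r, steps)
q = solve(q0, r, steps)
s = solve(s0, r, steps)

print(all(abs(s[i] - (p[i] + q[i])) < 1e-9 for i in range(n)))  # True
```

The boundary values of the summed run are $ p_a + q_a$ and $ p_b + q_b$, exactly as Eq. 6-29 requires.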

Steady-State Solutions

Steady-state solutions generally apply at long times.

The steady-state condition is that the solution ceases to be a function of time:

$\displaystyle \ensuremath{\frac{\partial{c}}{\partial{t}}}= 0$ (06-30)

So, to take a simple example of a one-dimensional problem on a finite domain, with uniform diffusivity and Dirichlet BCs:

Boundary conditions:

$\displaystyle c(x=0,t) = C_0 {\mbox{\hspace{1in}}} c(x=L,t) = C_L$ (06-31)

$\displaystyle \ensuremath{\frac{\partial{c}}{\partial{t}}}= 0 = \tilde{D} \frac{\partial^2 c}{\partial x^2}$ (06-32)

Integrate once:

$\displaystyle a_1 = \ensuremath{\frac{\partial{c}}{\partial{x}}}$ (06-33)

Integrate again,

$\displaystyle a_1 x + a_0 = c(x)$ (06-34)

Plug in the two boundary conditions and solve for the two unknowns, $ a_1$ and $ a_0$, to find the steady-state concentration profile:

$\displaystyle c(x) = C_0 + \frac{C_L - C_0}{L} x$ (06-35)

The steady-state flux across the region $ (0,L)$,

$\displaystyle \ensuremath{\vec{J}}(x) = -\tilde{D} \frac{C_L - C_0}{L}$ (06-36)

is uniform.
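A quick numerical check of Eqs. 6-35 and 6-36, with illustrative values of $ C_0$, $ C_L$, $ L$, and $ \tilde{D}$: the flux computed from the profile comes out the same at every interior point.

```python
# Sketch: evaluate the steady-state profile of Eq. 6-35 and verify that the
# flux of Eq. 6-36 is uniform across (0, L). All values are illustrative.

D, L, C0, CL = 2.0, 1.0, 1.0, 0.2

def c(x):
    return C0 + (CL - C0) / L * x                     # Eq. 6-35

def flux(x, dx=1e-6):
    # J = -D dc/dx via a central difference of the profile
    return -D * (c(x + dx) - c(x - dx)) / (2 * dx)

xs = [0.1 * i for i in range(1, 10)]
fluxes = [flux(x) for x in xs]

print(all(abs(f - fluxes[0]) < 1e-6 for f in fluxes))  # True: uniform flux
print(abs(fluxes[0] - (-D * (CL - C0) / L)) < 1e-6)    # True: matches Eq. 6-36
```

A uniform flux is exactly what steady state requires: any variation in $ J$ along $ x$ would, by Eq. 6-16, cause the concentration to change in time.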

For another simple example of a steady-state solution, suppose the diffusivity is a function of concentration; the steady-state equation becomes:

$\displaystyle \ensuremath{\frac{\partial{c}}{\partial{t}}}= 0 = \frac{\partial}{\partial x} \tilde{D} (c) \frac{\partial c}{\partial x}$ (06-37)

Integrate once:

$\displaystyle a_1 = \tilde{D}(c) \frac{\partial c}{\partial x}$ (06-38)

Integrate again, from $ x=0$ to $ x=L$:

$\displaystyle a_1 = \frac{ \int_{C_0}^{C_L} \tilde{D} (c) dc}{L}$ (06-39)

which shows that

$\displaystyle \ensuremath{\vec{J}}(x) = - \frac{ \int_{C_0}^{C_L} \tilde{D}(c)\, dc}{L}$ (06-40)

is independent of $ x$, as it must be for a steady-state solution in one dimension.
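A worked instance of Eqs. 6-37 to 6-40 with the illustrative choice $ \tilde{D}(c) = 1 + c$, $ C_0 = 1$, $ C_L = 0$, $ L = 1$: the first integral $ \tilde{D}(c)\, dc/dx = a_1$ gives $ c + c^2/2 = \tfrac{3}{2}(1-x)$, so $ c(x) = -1 + \sqrt{4 - 3x}$, and the flux should be uniform and equal to $ -a_1 = 3/2$.

```python
# Sketch: steady state with concentration-dependent diffusivity, using the
# assumed D(c) = 1 + c on (0, 1) with c(0) = 1 and c(1) = 0. The first
# integral gives c + c^2/2 = 1.5*(1 - x), hence c(x) = -1 + sqrt(4 - 3x),
# and the flux J = -D(c) dc/dx = 1.5 should be independent of x (Eq. 6-40).
import math

def D(c):
    return 1.0 + c                           # assumed D(c)

def c(x):
    return -1.0 + math.sqrt(4.0 - 3.0 * x)   # from the first integral

def flux(x, dx=1e-6):
    dcdx = (c(x + dx) - c(x - dx)) / (2 * dx)
    return -D(c(x)) * dcdx                   # J = -D(c) dc/dx

fluxes = [flux(0.1 * i) for i in range(1, 10)]
print(all(abs(f - 1.5) < 1e-6 for f in fluxes))  # True: uniform flux
```

The profile itself is curved (steeper where $ \tilde{D}$ is small), yet the flux is uniform: the concentration gradient adjusts so that $ \tilde{D}(c)\, dc/dx$ stays constant.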

W. Craig Carter 2002-02-20