Tridiagonal Matrix Solver via Thomas Algorithm

This post is part of a series of **Finite Difference Method Articles**. Other posts in the series concentrate on Derivative Approximation, Solving the Diffusion Equation Explicitly and the Crank-Nicolson Implicit Method:

- Derivative Approximation via Finite Difference Methods
- Solving the Diffusion Equation Explicitly
- Crank-Nicolson Implicit Scheme
- *Tridiagonal Matrix Solver via Thomas Algorithm*

In the previous tutorial, the set of linear equations allowed a *tridiagonal matrix equation* to be formed. Solving this equation gives the values at the interior grid points. The linear system must be solved at every time step, so the implicit method is significantly more computationally intensive per time step than an explicit solver. It compensates for this by allowing a substantially larger time step.

The method used to solve the matrix system is due to Llewellyn Thomas and is known as the **Tridiagonal Matrix Algorithm (TDMA)**. It is essentially an application of *Gaussian elimination* that exploits the banded structure of the matrix. The original system is written as:

\[ \begin{bmatrix}
b_1 & c_1 & & & & 0 \\
a_2 & b_2 & c_2 & & & \\
& a_3 & b_3 & c_3 & & \\
& & \ddots & \ddots & \ddots & \\
& & & a_{k-1} & b_{k-1} & c_{k-1} \\
0 & & & & a_k & b_k
\end{bmatrix} \begin{bmatrix}
f_1 \\ f_2 \\ f_3 \\ \vdots \\ f_{k-1} \\ f_k
\end{bmatrix} = \begin{bmatrix}
d_1 \\ d_2 \\ d_3 \\ \vdots \\ d_{k-1} \\ d_k
\end{bmatrix} \]

The method begins by forming coefficients \(c^{*}_i\) and \(d^{*}_i\) in place of \(a_i\), \(b_i\) and \(c_i\) as follows:

\[ c^{*}_i = \left\{ \begin{array}{lr} \frac{c_1}{b_1} & ; i = 1\\ \frac{c_i}{b_i - c^{*}_{i-1} a_i} & ; i = 2,3,...,k-1 \end{array} \right. \]

\[ d^{*}_i = \left\{ \begin{array}{lr} \frac{d_1}{b_1} & ; i = 1\\ \frac{d_i-d^{*}_{i-1} a_i}{b_i - c^{*}_{i-1} a_i} & ; i = 2,3,\ldots,k \end{array} \right. \]
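This forward sweep can be sketched directly from the two recurrences above. The following Python function is illustrative only (the function name and list-based interface are my own choices, not part of the series, which may use a different language):

```python
def forward_sweep(a, b, c, d):
    """Compute the modified coefficients c* and d* of the Thomas algorithm.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    All four are sequences of length k.
    """
    k = len(d)
    c_star = [0.0] * k
    d_star = [0.0] * k
    # i = 1 case of the recurrences (index 0 here)
    c_star[0] = c[0] / b[0]
    d_star[0] = d[0] / b[0]
    for i in range(1, k):
        m = b[i] - c_star[i - 1] * a[i]   # shared denominator b_i - c*_{i-1} a_i
        if i < k - 1:
            c_star[i] = c[i] / m          # c* is only defined up to i = k-1
        d_star[i] = (d[i] - d_star[i - 1] * a[i]) / m
    return c_star, d_star
```

For the system with diagonals \(b = (2,2,2)\), \(a = (-, 1, 1)\), \(c = (1, 1, -)\) and right-hand side \(d = (3, 4, 3)\), this yields \(c^{*} = (1/2,\, 2/3)\) and \(d^{*} = (3/2,\, 5/3,\, 1)\).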

With these new coefficients, the matrix equation can be rewritten as:

\[ \begin{bmatrix}
1 & c^{*}_1 & & & & 0 \\
& 1 & c^{*}_2 & & & \\
& & 1 & c^{*}_3 & & \\
& & & \ddots & \ddots & \\
& & & & 1 & c^{*}_{k-1} \\
0 & & & & & 1
\end{bmatrix} \begin{bmatrix}
f_1 \\ f_2 \\ f_3 \\ \vdots \\ f_{k-1} \\ f_k
\end{bmatrix} = \begin{bmatrix}
d^{*}_1 \\ d^{*}_2 \\ d^{*}_3 \\ \vdots \\ d^{*}_{k-1} \\ d^{*}_k
\end{bmatrix} \]

The solution of these equations is now straightforward, working 'in reverse' via back substitution:

\[ f_k = d^{*}_k, \qquad f_i = d^{*}_i - c^{*}_i f_{i+1}, \qquad i = k-1, k-2, \ldots, 2, 1 \]
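Putting the forward sweep and the back substitution together gives the complete TDMA. Here is a minimal, self-contained Python sketch (names and the list-based interface are illustrative assumptions, not the series' own implementation):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system via the Thomas algorithm (TDMA).

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Returns the solution vector f.
    """
    k = len(d)
    c_star = [0.0] * k
    d_star = [0.0] * k
    # Forward sweep: eliminate the sub-diagonal, forming c* and d*.
    c_star[0] = c[0] / b[0]
    d_star[0] = d[0] / b[0]
    for i in range(1, k):
        m = b[i] - c_star[i - 1] * a[i]
        if i < k - 1:
            c_star[i] = c[i] / m
        d_star[i] = (d[i] - d_star[i - 1] * a[i]) / m
    # Back substitution, working 'in reverse' from the last row.
    f = [0.0] * k
    f[k - 1] = d_star[k - 1]
    for i in range(k - 2, -1, -1):
        f[i] = d_star[i] - c_star[i] * f[i + 1]
    return f
```

For the symmetric test system \(b = (2,2,2)\), \(a = (-,1,1)\), \(c = (1,1,-)\), \(d = (3,4,3)\), the solver returns \(f = (1, 1, 1)\), which can be verified by substituting back into the original equations. Note this runs in \(O(k)\) operations, versus \(O(k^3)\) for general Gaussian elimination.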

At this stage the algorithm for each grid point, at any particular time step, has been outlined. The final step towards producing a computational solution is to implement the algorithm, which will be the subject of the next tutorial.
