In this article we will present a NumPy/SciPy listing, as well as a pure Python listing, for the **LU Decomposition** method, which is used in certain quantitative finance algorithms.

One of the key methods for solving the Black-Scholes Partial Differential Equation (PDE) model of options pricing is using Finite Difference Methods (FDM) to discretise the PDE and evaluate the solution numerically. Certain implicit Finite Difference Methods eventually lead to a system of linear equations.

This system of linear equations can be formulated as a matrix equation, involving the matrix $A$ and the vectors $x$ and $b$, of which $x$ is the solution to be determined. Often these matrices are banded (their non-zero elements are confined to a subset of diagonals) and specialist algorithms (such as the Thomas Algorithm) are used to solve them.
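For tridiagonal systems, which arise frequently in implicit finite difference schemes, the Thomas Algorithm solves $Ax=b$ in $O(n)$ operations. As a point of comparison before we turn to LU Decomposition, here is a minimal pure Python sketch; the function name and argument layout are illustrative, not taken from any particular library:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system via the Thomas Algorithm.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal coefficients
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward sweep: eliminate the sub-diagonal
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    # Back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Note that the sweep never forms the full matrix, which is why banded solvers beat a general LU factorisation on these systems.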

Although suboptimal from a performance point of view, we are going to code up a method known as LU Decomposition in order to aid us in solving the following matrix equation, without the direct need to invert the matrix $A$:

\begin{eqnarray*} Ax=b \end{eqnarray*}

We will make use of *Doolittle's LUP decomposition with partial pivoting* to decompose our matrix $A$ into $PA=LU$, where $L$ is a lower triangular matrix, $U$ is an upper triangular matrix and $P$ is a permutation matrix. $P$ is needed to resolve certain singularity issues. The algorithm is provided as follows.

To calculate the upper triangular section we use the following formula for elements of $U$:

\begin{eqnarray*} u_{ij} = a_{ij} - \sum_{k=1}^{i-1} u_{kj} l_{ik} \end{eqnarray*}

The formula for elements of the lower triangular matrix $L$ is similar, except that we need to divide each term by the corresponding diagonal element of $U$. To ensure that the algorithm is numerically stable when $u_{jj}$ is close to zero, a *pivoting* matrix $P$ is used to re-order $A$ so that the largest element of each column of $A$ gets shifted to the diagonal of $A$. The formula for elements of $L$ follows:

\begin{eqnarray*} l_{ij} = \frac{1}{u_{jj}} \left( a_{ij} - \sum_{k=1}^{j-1} u_{kj} l_{ik} \right) \end{eqnarray*}

The simplest and most efficient way to create an $LU$ decomposition in Python is to make use of the NumPy/SciPy library, which has a built in method to produce $L$, $U$ and the permutation matrix $P$:

```python
import pprint

import numpy as np
import scipy.linalg  # SciPy Linear Algebra Library

A = np.array([[7, 3, -1, 2],
              [3, 8, 1, -4],
              [-1, 1, 4, -1],
              [2, -4, -1, 6]])
P, L, U = scipy.linalg.lu(A)

print("A:")
pprint.pprint(A)
print("P:")
pprint.pprint(P)
print("L:")
pprint.pprint(L)
print("U:")
pprint.pprint(U)
```

The output from the code is given below:

```
A:
array([[ 7,  3, -1,  2],
       [ 3,  8,  1, -4],
       [-1,  1,  4, -1],
       [ 2, -4, -1,  6]])
P:
array([[ 1.,  0.,  0.,  0.],
       [ 0.,  1.,  0.,  0.],
       [ 0.,  0.,  1.,  0.],
       [ 0.,  0.,  0.,  1.]])
L:
array([[ 1.        ,  0.        ,  0.        ,  0.        ],
       [ 0.42857143,  1.        ,  0.        ,  0.        ],
       [-0.14285714,  0.21276596,  1.        ,  0.        ],
       [ 0.28571429, -0.72340426,  0.08982036,  1.        ]])
U:
array([[ 7.        ,  3.        , -1.        ,  2.        ],
       [ 0.        ,  6.71428571,  1.42857143, -4.85714286],
       [ 0.        ,  0.        ,  3.55319149,  0.31914894],
       [ 0.        ,  0.        ,  0.        ,  1.88622754]])
```
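If the goal is simply to solve $Ax=b$, SciPy also provides `scipy.linalg.lu_factor` and `scipy.linalg.lu_solve`, which store the factorisation compactly and reuse it for multiple right-hand sides. A brief sketch, with a right-hand side $b$ chosen purely for illustration (it does not appear in the article):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[7, 3, -1, 2],
              [3, 8, 1, -4],
              [-1, 1, 4, -1],
              [2, -4, -1, 6]], dtype=float)
b = np.array([1.0, 2.0, 3.0, 4.0])  # illustrative right-hand side

lu, piv = lu_factor(A)       # LU factorisation with pivot indices
x = lu_solve((lu, piv), b)   # solves Ax = b using the stored factors

print(np.allclose(A @ x, b))
```

Factorising once and calling `lu_solve` repeatedly is the usual pattern when the same matrix must be solved against many right-hand sides, as happens at every time step of an implicit finite difference scheme.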

Although it is unlikely you will ever need to code up an LU Decomposition directly, I have presented a pure Python implementation, which does not rely on any external libraries, including NumPy or SciPy. This is not intended to be a fast implementation; in fact, it will be significantly slower than the SciPy variant outlined above. The goal of this listing is to help you understand how the algorithm works "under the hood":

```python
import pprint

def mult_matrix(M, N):
    """Multiply square matrices of same dimension M and N"""
    # Converts N into a list of tuples of columns
    tuple_N = list(zip(*N))

    # Nested list comprehension to calculate matrix multiplication
    return [[sum(el_m * el_n for el_m, el_n in zip(row_m, col_n))
             for col_n in tuple_N] for row_m in M]

def pivot_matrix(M):
    """Returns the pivoting matrix for M, used in Doolittle's method."""
    m = len(M)

    # Create an identity matrix, with floating point values
    id_mat = [[float(i == j) for i in range(m)] for j in range(m)]

    # Rearrange the identity matrix such that the largest element of
    # each column of M is placed on the diagonal of M
    for j in range(m):
        row = max(range(j, m), key=lambda i: abs(M[i][j]))
        if j != row:
            # Swap the rows
            id_mat[j], id_mat[row] = id_mat[row], id_mat[j]

    return id_mat

def lu_decomposition(A):
    """Performs an LU Decomposition of A (which must be square)
    into PA = LU. The function returns P, L and U."""
    n = len(A)

    # Create zero matrices for L and U
    L = [[0.0] * n for i in range(n)]
    U = [[0.0] * n for i in range(n)]

    # Create the pivot matrix P and the multiplied matrix PA
    P = pivot_matrix(A)
    PA = mult_matrix(P, A)

    # Perform the LU Decomposition
    for j in range(n):
        # All diagonal entries of L are set to unity
        L[j][j] = 1.0

        # LaTeX: u_{ij} = a_{ij} - \sum_{k=1}^{i-1} u_{kj} l_{ik}
        for i in range(j+1):
            s1 = sum(U[k][j] * L[i][k] for k in range(i))
            U[i][j] = PA[i][j] - s1

        # LaTeX: l_{ij} = \frac{1}{u_{jj}} (a_{ij} - \sum_{k=1}^{j-1} u_{kj} l_{ik})
        for i in range(j, n):
            s2 = sum(U[k][j] * L[i][k] for k in range(j))
            L[i][j] = (PA[i][j] - s2) / U[j][j]

    return (P, L, U)

A = [[7, 3, -1, 2], [3, 8, 1, -4], [-1, 1, 4, -1], [2, -4, -1, 6]]
P, L, U = lu_decomposition(A)

print("A:")
pprint.pprint(A)
print("P:")
pprint.pprint(P)
print("L:")
pprint.pprint(L)
print("U:")
pprint.pprint(U)
```

The output from the pure Python implementation is given below:

```
A:
[[7, 3, -1, 2], [3, 8, 1, -4], [-1, 1, 4, -1], [2, -4, -1, 6]]
P:
[[1.0, 0.0, 0.0, 0.0],
 [0.0, 1.0, 0.0, 0.0],
 [0.0, 0.0, 1.0, 0.0],
 [0.0, 0.0, 0.0, 1.0]]
L:
[[1.0, 0.0, 0.0, 0.0],
 [0.42857142857142855, 1.0, 0.0, 0.0],
 [-0.14285714285714285, 0.2127659574468085, 1.0, 0.0],
 [0.2857142857142857, -0.7234042553191489, 0.0898203592814371, 1.0]]
U:
[[7.0, 3.0, -1.0, 2.0],
 [0.0, 6.714285714285714, 1.4285714285714286, -4.857142857142857],
 [0.0, 0.0, 3.5531914893617023, 0.31914893617021267],
 [0.0, 0.0, 0.0, 1.88622754491018]]
```

You can see that the output above matches that produced by the SciPy implementation, albeit computed significantly more slowly.
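Once $PA=LU$ is in hand, solving $Ax=b$ reduces to a forward substitution $Ly=Pb$ followed by a back substitution $Ux=y$, which is the point of avoiding a direct inversion of $A$. The helper functions below are an illustrative sketch in the same pure Python style, not part of the listing above, shown on a small $2 \times 2$ example chosen so the factors are easy to verify by hand:

```python
def forward_substitution(L, b):
    """Solve Ly = b for y, where L is lower triangular."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    return y

def back_substitution(U, y):
    """Solve Ux = y for x, where U is upper triangular."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

def lup_solve(P, L, U, b):
    """Solve Ax = b given PA = LU: permute b, then substitute twice."""
    n = len(b)
    Pb = [sum(P[i][j] * b[j] for j in range(n)) for i in range(n)]
    return back_substitution(U, forward_substitution(L, Pb))

# Example: A = [[2, 1], [1, 2]] factors (no pivoting needed) as PA = LU with
P = [[1.0, 0.0], [0.0, 1.0]]
L = [[1.0, 0.0], [0.5, 1.0]]
U = [[2.0, 1.0], [0.0, 1.5]]
print(lup_solve(P, L, U, [4.0, 5.0]))  # -> [1.0, 2.0]
```

The same two substitutions applied with the $P$, $L$ and $U$ returned by `lu_decomposition` above solve the original system, and each additional right-hand side costs only $O(n^2)$ once the $O(n^3)$ factorisation is done.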