Eigen Library for Matrix Algebra in C++

We have previously considered *operator overloading* and how to create our own matrix object in C++. As a learning exercise, creating a matrix class can be extremely beneficial, as it often covers dynamic memory allocation (if not using `std::vector`) and operator overloading across multiple object types (matrices, vectors and scalars). However, it is far from optimal to use such a class in a production environment. This article explains why it is better to use a dedicated matrix library instead, such as Eigen.

Writing our own matrix libraries is likely to cause a few issues in a production environment:

- **Poor performance** - Unless significant time is spent optimising a custom matrix library, it is often going to be far slower than the corresponding dedicated community projects such as uBLAS, Eigen, MTL or Blitz.
- **Buggy code** - A large community surrounding an open-source project leads to a greater number of bug fixes as edge cases are discovered and corrected. A custom matrix library is likely to be restricted to a single use case, so edge cases are unlikely to be checked.
- **Few algorithms** - Once again, the benefit of a large community is that multiple efficient algorithms can be developed for all aspects of numerical linear algebra. These in turn feed optimisations back to the core library and thus increase overall efficiency.
- **Time wasting** - Are you in the business of quantitative finance or in the business of building an all-singing, all-dancing matrix library? By spending months optimising a custom library, you are neglecting the original intent of its usage in the first place - solving quant problems!

While many libraries exist (see above), I have chosen to use the Eigen library for this article. Here are some of the benefits of Eigen:

- **Up to date** - Eigen is actively developed and releases new versions frequently
- **API** - Eigen has a simple, straightforward and familiar API syntax
- **Dynamic matrices** - Supports matrices with sizes determined at runtime
- **Well tested** - Eigen has extensive "battle testing" and thus few bugs
- **Storage** - Can use either row-major or column-major storage
- **Optimised structures** - Dense and sparse matrices both available
- **Expression templates** - Lazy evaluation, which allows for complex matrix arithmetic while maintaining performance

In this article we will install Eigen, look at examples of basic linear algebra usage and briefly study some of the advanced features, which will be the subject of later articles.

Eigen is extremely easy to install, as there is no library that needs linking; the header files are simply included in your program's code. With GCC it is necessary to use the `-I` flag so that the compiler can find the Eigen header files:

```
g++ -I /your/path/to/eigen/ example_program.cpp -o example_program
```

The following program requires the `Eigen/Dense` header. It initialises a 3x3 matrix (using the dynamic-size `MatrixXd` type) and then prints it to the console:

```cpp
#include <iostream>
#include <Eigen/Dense>

int main() {
  Eigen::MatrixXd m(3,3);
  m << 1, 2, 3,
       4, 5, 6,
       7, 8, 9;

  std::cout << m << std::endl;
}
```

Here is the (rather simple!) output for the above program:

```
1 2 3
4 5 6
7 8 9
```

Notice that the overloaded `<<` operator can accept comma-separated lists of values in order to initialise the matrix. This is an extremely useful part of the API syntax. In addition, we can also pass the `MatrixXd` to `std::cout` and have the numbers output in a human-readable fashion.

Eigen is a large library and has many features. We will be exploring many of them over subsequent articles. In this section I want to describe basic matrix and vector operations, including the matrix-vector and matrix-matrix multiplication facilities provided with the library.

One of the most attractive features of the Eigen library is that it includes *expression objects* and *lazy evaluation*. This means that an arithmetic operation returns an expression object, which is a description of the computation to be performed rather than the computation itself. The benefit of this approach is that the compiler can heavily optimise the expression, avoiding intermediate temporaries and redundant loops. As an example, an expression such as:

```cpp
VectorXd p(10), q(10), r(10), s(10);
// ...
p = 5*q + 11*r - 7*s;
```

will compile into a single loop:

```cpp
for (unsigned i=0; i<10; ++i) {
  p[i] = 5*q[i] + 11*r[i] - 7*s[i];
}
```

This means that the underlying storage arrays are only looped over once. As such Eigen can optimise relatively complicated arithmetic expressions.

Eigen allows straightforward addition and subtraction of vectors and matrices. However, the operations must be mathematically well-defined: the two operands must have the same number of rows and columns, and they must possess the same scalar type. Here is an example of usage:

```cpp
#include <iostream>
#include <Eigen/Dense>

int main() {
  // Define two matrices, both 3x3
  Eigen::Matrix3d p;
  Eigen::Matrix3d q;

  // Define two three-dimensional vectors
  // The constructor provides initialisation
  Eigen::Vector3d r(1,2,3);
  Eigen::Vector3d s(4,5,6);

  // Use the << operator to fill the matrices
  p << 1, 2, 3,
       4, 5, 6,
       7, 8, 9;
  q << 10, 11, 12,
       13, 14, 15,
       16, 17, 18;

  // Output arithmetic operations for matrices
  std::cout << "p+q=\n" << p + q << std::endl;
  std::cout << "p-q=\n" << p - q << std::endl;

  // Output arithmetic operations for vectors
  std::cout << "r+s=\n" << r + s << std::endl;
  std::cout << "r-s=\n" << r - s << std::endl;
}
```

Here is the output:

```
p+q=
11 13 15
17 19 21
23 25 27
p-q=
-9 -9 -9
-9 -9 -9
-9 -9 -9
r+s=
5
7
9
r-s=
-3
-3
-3
```

Scalar multiplication and division are just as simple in Eigen:

```cpp
#include <iostream>
#include <Eigen/Dense>

int main() {
  // Define a 3x3 matrix and initialise
  Eigen::Matrix3d p;
  p << 1, 2, 3,
       4, 5, 6,
       7, 8, 9;

  // Multiply and divide by a scalar
  std::cout << "p * 3.14159 =\n" << p * 3.14159 << std::endl;
  std::cout << "p / 2.71828 =\n" << p / 2.71828 << std::endl;
}
```

Here is the output:

```
p * 3.14159 =
3.14159 6.28318 9.42477
12.5664  15.708 18.8495
21.9911 25.1327 28.2743
p / 2.71828 =
 0.36788 0.735759  1.10364
 1.47152   1.8394  2.20728
 2.57516  2.94304  3.31092
```

Eigen also has operations for the `transpose`, `conjugate` and `adjoint` of a matrix. For quantitative finance work we are really only interested in the `transpose`, as we will not often be utilising complex numbers!

It is necessary to be careful when using the transpose operation for in-place assignment (this applies to the conjugate and adjoint too), as Eigen will begin writing the result into the original matrix before the calculation is finished, which leads to unexpected behaviour. Thus it is necessary to use Eigen's `transposeInPlace` method on matrix objects when carrying out in-place transposition.

```cpp
#include <iostream>
#include <Eigen/Dense>

int main() {
  // Declare a 3x3 matrix with random entries
  Eigen::Matrix3d p = Eigen::Matrix3d::Random();

  // Output the transpose of p
  std::cout << "p^T =\n" << p.transpose() << std::endl;

  // In-place transposition
  p.transposeInPlace();

  // Output the in-place transpose of p
  std::cout << "p^T =\n" << p << std::endl;
}
```

Here is the output:

```
p^T =
 -0.999984  -0.736924   0.511211
-0.0826997  0.0655345  -0.562082
 -0.905911   0.357729   0.358593
p^T =
 -0.999984  -0.736924   0.511211
-0.0826997  0.0655345  -0.562082
 -0.905911   0.357729   0.358593
```

Eigen handles matrix/matrix and matrix/vector multiplication with a simple API. Vectors are matrices of a particular type (and defined that way in Eigen), so all operations simply overload `operator*`. Here is an example of usage for matrices, vectors and transpose operations:

```cpp
#include <iostream>
#include <Eigen/Dense>

int main() {
  // Define a 3x3 matrix and two 3-dimensional vectors
  Eigen::Matrix3d p;
  p << 1, 2, 3,
       4, 5, 6,
       7, 8, 9;
  Eigen::Vector3d r(10, 11, 12);
  Eigen::Vector3d s(13, 14, 15);

  // Matrix/matrix multiplication
  std::cout << "p*p:\n" << p*p << std::endl;

  // Matrix/vector multiplication
  std::cout << "p*r:\n" << p*r << std::endl;
  std::cout << "r^T*p:\n" << r.transpose()*p << std::endl;

  // Vector/vector multiplication (inner product)
  std::cout << "r^T*s:\n" << r.transpose()*s << std::endl;
}
```

Here is the output:

```
p*p:
 30  36  42
 66  81  96
102 126 150
p*r:
 68
167
266
r^T*p:
138 171 204
r^T*s:
464
```

Eigen also supports common vector operations, such as the inner product ("dot" product) and the vector product ("cross" product). Note that the sizes of the operand vectors are restricted by the mathematical definitions of each operator. The dot product must be applied to two vectors of equal dimension, while the cross product is only defined for three-dimensional vectors:

```cpp
#include <iostream>
#include <Eigen/Dense>

int main() {
  // Declare and initialise two 3D vectors
  Eigen::Vector3d r(10,20,30);
  Eigen::Vector3d s(40,50,60);

  // Apply the 'dot' and 'cross' products
  std::cout << "r . s =\n" << r.dot(s) << std::endl;
  std::cout << "r x s =\n" << r.cross(s) << std::endl;
}
```

Here is the output:

```
r . s =
3200
r x s =
-300
 600
-300
```

"Reduction" in this context means to take a matrix or vector as an argument and obtain a scalar as a result. There are six reduction operations which interest us:

- `sum` - Calculates the sum of all elements in a vector or matrix
- `prod` - Calculates the product of all elements in a vector or matrix
- `mean` - Calculates the mean average of all elements in a vector or matrix
- `minCoeff` - Calculates the minimum element in a vector or matrix
- `maxCoeff` - Calculates the maximum element in a vector or matrix
- `trace` - Calculates the *trace* of a matrix, i.e. the sum of the diagonal elements

Here is an example of usage:

```cpp
#include <iostream>
#include <Eigen/Dense>

int main() {
  // Declare and initialise a 3x3 matrix
  Eigen::Matrix3d p;
  p << 1, 2, 3,
       4, 5, 6,
       7, 8, 9;

  // Output the reduction operations
  std::cout << "p.sum(): " << p.sum() << std::endl;
  std::cout << "p.prod(): " << p.prod() << std::endl;
  std::cout << "p.mean(): " << p.mean() << std::endl;
  std::cout << "p.minCoeff(): " << p.minCoeff() << std::endl;
  std::cout << "p.maxCoeff(): " << p.maxCoeff() << std::endl;
  std::cout << "p.trace(): " << p.trace() << std::endl;
}
```

Here is the output:

```
p.sum(): 45
p.prod(): 362880
p.mean(): 5
p.minCoeff(): 1
p.maxCoeff(): 9
p.trace(): 15
```

Eigen contains many more features than I have listed here. In particular, it supports multiple data structures for efficient matrix storage, depending on the structural sparsity of values, via its Sparse module. Further, Eigen has support for LU, Cholesky, SVD and QR decompositions. Eigenvalues can also be calculated in an optimised manner. The Geometry module allows calculation of geometric quantities, including transforms, translations, scaling, rotations and quaternions.

Now that we have explored the basic usage of the Eigen library, as well as glimpsed its higher-level functionality, we are ready to carry out Numerical Linear Algebra (NLA) with Eigen as the basis (excuse the pun) for our code. This will help us solve problems in quantitative trading (such as using the Cholesky decomposition to generate correlated asset paths) and derivatives pricing (solving Black-Scholes via implicit Finite Difference Methods and tridiagonal LU decomposition).
