Sigma Algebras and Probability Spaces

In this article we begin the path towards learning stochastic calculus by introducing two key ideas from measure theory and probability theory: the sigma algebra and the probability space.

Our recent 2020 Content Survey highlighted the desire from many of you to study the more advanced mathematics necessary for carrying out applications in quantitative finance.

Two of the highlighted areas were Linear Algebra for Deep Learning along with Stochastic Calculus. The latter is the underlying theoretical framework utilised for pricing derivatives contracts.

Learning Stochastic Calculus

While we have provided an elementary high-level treatment of Stochastic Calculus ideas in the past we have not produced a set of articles that begin with the mathematical theory and proceed to advanced applications in derivatives pricing.

This article is the first in a series that will attempt to provide a sufficient grounding in the mathematics underpinning options pricing. It is designed to highlight the core ideas—at a reasonable level of mathematical rigour—to allow further study via a more formal approach, be it self-study textbooks, a MOOC or even a university taught course.

Our mid-term goal is to understand the concepts in Stochastic Calculus, including Brownian Motion and Ito Calculus. These will provide us with the ability to make a formal derivation of the celebrated Black-Scholes equation.

Prior to any discussion of Brownian Motion or Ito Calculus it is necessary to introduce Probability Theory. Many undergraduate courses in science and engineering teach probability. However, they often do not utilise the framework of Measure Theory to give it a rigorous footing.

The initial set of articles within this series will provide an introduction to Measure-Theoretic Probability Theory and why it is necessary to use this formalism.

This article will introduce two key mathematical concepts: the $\sigma$-algebra (or $\sigma$-field) and the probability space. Both are indispensable tools for understanding more complex stochastic processes such as Martingales and Markov Chains.

Necessary Background

Measure Theory is usually a third-year topic in most undergraduate mathematics courses, while Stochastic Calculus is often introduced in the third or fourth year.

Studying both requires a good background in Set Theory and Real Analysis. In particular, an awareness of the Riemann integral and how it is defined is a necessary prerequisite.

Hence in order to get the most out of these articles it is best to be familiar with these topics.

A number of standard undergraduate mathematics textbooks cover Set Theory and Real Analysis in detail.

For more detail on learning these topics outside of a formal taught university course please see our article series on learning mathematics without heading to university.

Motivation

One of the key issues in Stochastic Calculus for finance is modelling the path of stock prices. In the real world stock prices can change extremely rapidly, but their changes are always discrete. That is, we can 'zoom in' on time and eventually see discrete jumps between stock prices over finite periods of time as new trades are made.

However because these changes are very rapid it is appropriate, mathematically, to model stock prices as if they change continuously. That is, if we attempt to 'zoom in' on time we will also see ever more 'wiggliness' of the underlying stock path. This is due to the fractal nature of Brownian motion, a common model for the evolution of stock prices in finance.

This implies that we need to consider uncountable sets of events if we are to begin discussing the concept of the probability of a stock price increasing or decreasing in a subsequent time increment.

However, once we introduce uncountable sets and attempt to 'measure' them in some fashion (assigning a probability to an event being one example), we need to be sure that we can do so unambiguously.

Otherwise we may end up in a situation where two separate, equally legitimate lines of reasoning assign different probabilities to the same event.

Thus, in order to assign probabilities unambiguously, it is necessary to exclude from consideration those events that admit ambiguous probability values.

In essence we need to get rid of any sets that do not have a 'sensible' probability measure. This is where the concept of the $\sigma$-algebra comes in.

Sigma Algebras

We'll first define a $\sigma$-algebra and then try to provide some intuition for its purpose.

Definition: $\sigma$-algebra

Let $\Omega$ be a non-empty set. A $\sigma$-algebra $\mathcal{F}$ on $\Omega$ is a family of subsets of $\Omega$ with the following properties:

1. The empty set $\emptyset \in \mathcal{F}$
2. If a set $A \in \mathcal{F}$ then its complement $A^{C} \in \mathcal{F}$
3. If $A_1, A_2, \dots$ is a (countable) sequence of sets in $\mathcal{F}$ then their union $\bigcup_{n} A_n$ is also in $\mathcal{F}$

Let's break this definition down and try to gain some intuition into what it means.

We begin with a non-empty set $\Omega$. A $\sigma$-algebra $\mathcal{F}$ on $\Omega$ is simply a collection of subsets of this original set with some useful properties. In particular it is closed under complements and countable unions of sets.

This means that if we are able to somehow assign a 'length' or 'measure' to a set in the $\sigma$-algebra then we also know the 'length' or 'measure' of its complement. Similarly, if a collection of one or more sets lies in the $\sigma$-algebra then we can also assign a measure to the union of those sets.

If we think of 'measure' as 'probability' for a moment (this will be formalised below), and of the sets in the $\sigma$-algebra as 'events' whose probabilities we might wish to calculate, then we can see that the definition of a $\sigma$-algebra lets us assign unambiguous probabilities to those events.

In effect the $\sigma$-algebra excludes events (sets) that have an ambiguous definition of probability (measure) associated with them.

This is why they are so important in the theory of probability.
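The three defining properties can be checked mechanically for a finite sample space. Below is a minimal Python sketch; the function `is_sigma_algebra` and the example families are our own illustrations, not part of any standard library. Note that for a finite family of sets, closure under countable unions reduces to closure under pairwise unions.

```python
def is_sigma_algebra(omega, family):
    """Check the sigma-algebra axioms for a family of subsets of a
    finite set omega. For a finite family, closure under countable
    unions reduces to closure under pairwise unions."""
    fam = {frozenset(s) for s in family}
    # 1. The empty set must belong to the family
    if frozenset() not in fam:
        return False
    # 2. Closed under complements
    for a in fam:
        if frozenset(omega) - a not in fam:
            return False
    # 3. Closed under (pairwise, hence finite) unions
    for a in fam:
        for b in fam:
            if a | b not in fam:
                return False
    return True


omega = {1, 2, 3, 4, 5, 6}

# The smallest sigma-algebra on omega: just the empty set and omega itself
trivial = [set(), omega]

# NOT a sigma-algebra: the complement of {1, 2} is missing
broken = [set(), {1, 2}, omega]

print(is_sigma_algebra(omega, trivial))  # True
print(is_sigma_algebra(omega, broken))   # False
```

The `broken` family fails precisely because property 2 is violated: it contains the event $\{1, 2\}$ but not its complement $\{3, 4, 5, 6\}$, so no consistent measure could be assigned to both.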

Probability Spaces

Now that we know how to restrict sets to those which are closed under complements and countable unions we are in a position to begin discussing probability in a formal way.

The next definition helps us assign 'probabilities' to 'events':

Definition: Probability Measure

Let $\mathcal{F}$ be a $\sigma$-algebra on a set $\Omega$. A probability measure $\mathbb{P}$ is a function:

\begin{eqnarray} \mathbb{P}:\mathcal{F} \to [0, 1] \end{eqnarray}

such that

1. $\mathbb{P}(\Omega) = 1$
2. If $A_1, A_2, \dots$ are pairwise disjoint sets in $\mathcal{F}$ (that is, $A_i \cap A_j = \emptyset$ for $i \neq j$) then $\mathbb{P}\left(\bigcup_n A_n \right) = \sum_n \mathbb{P}(A_n)$

Once again we will break this definition down and try to understand what it is trying to say.

Firstly, a probability measure is simply a function that takes an element of the $\sigma$-algebra $\mathcal{F}$ and assigns it a value between 0 and 1 inclusive. Since all probabilities are defined to be between 0 and 1, this is a sensible definition of a probability measure!

It also has two stated properties. The first says that the probability measure assigned to the set $\Omega$ is equal to one. This informally states that the probability of something happening must be one. For instance, if we roll a die we must see one of the six numbers appear (assuming it doesn't land on its edge!). Equivalently the probability of nothing occurring is zero.

The second property states that if we have two (or more) pairwise disjoint events, that is, events that cannot occur simultaneously, then the probability of seeing one or another of them occur is simply the sum of the probabilities of each event occurring. That is, if we want to know the probability of seeing a 1 or a 3 come up on the roll of a die, we simply compute $1/6 + 1/6 = 1/3$.

This finally allows us to define a probability space as the triple $(\Omega, \mathcal{F}, \mathbb{P})$. The sets within the $\sigma-$algebra $\mathcal{F}$ are known as events.
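The fair-die example above can be assembled into a complete finite probability space in a few lines of Python. This is our own illustrative sketch: we take $\mathcal{F}$ to be the power set of $\Omega$ (which is always a valid $\sigma$-algebra on a finite set), and the function names `power_set` and `P` are assumptions of this example, not standard library calls.

```python
from fractions import Fraction
from itertools import chain, combinations

# Sample space for a single roll of a fair die
omega = frozenset({1, 2, 3, 4, 5, 6})


def power_set(s):
    """All subsets of s; the largest sigma-algebra on a finite set."""
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}


F = power_set(omega)


def P(event):
    """Uniform probability measure: each outcome is equally likely."""
    assert frozenset(event) in F, "event must lie in the sigma-algebra"
    return Fraction(len(event), len(omega))


print(P(omega))          # 1   -- axiom 1: the whole space has probability one
print(P({1}) + P({3}))   # 1/3
print(P({1, 3}))         # 1/3 -- additivity for disjoint events
```

The triple `(omega, F, P)` here is a concrete instance of the probability space $(\Omega, \mathcal{F}, \mathbb{P})$: every subset of outcomes is an event, and disjoint events add, exactly as the two axioms require.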

Next Steps

In future articles we will utilise probability spaces to define another important concept in probability theory, namely the Random Variable.

However before we can carry this out we will need to discuss the concept of 'measure' a little further and define the Lebesgue Measure.

Bibliographic Note

The treatment outlined here is similar to that presented in Brzezniak et al.[1] and Shreve[3]. The latter provides some useful intuition as to the need for a $\sigma$-algebra and a formal probabilistic framework.