*Updated for Python 3.10, February 2023*

Estimating the risk of loss to an algorithmic trading strategy, or portfolio of strategies, is of extreme importance for long-term capital growth. Many techniques for risk management have been developed for use in institutional settings. One technique in particular, known as **Value at Risk** or **VaR**, will be the topic of this article.

We will be applying the concept of VaR to a single strategy or a set of strategies in order to help us quantify risk in our trading portfolio. The definition of VaR is as follows:

**VaR provides an estimate, under a given degree of confidence, of the size of a loss from a portfolio over a given time period.**

In this instance "portfolio" can refer to a single strategy, a group of strategies, a trader's book, a prop desk, a hedge fund or an entire investment bank. The "given degree of confidence" will be a value of, say, 95% or 99%. The "given time period" will be chosen to reflect one that would lead to a minimal *market impact* if a portfolio were to be liquidated.

For example, a VaR equal to 500,000 USD at 95% confidence level for a time period of a day would simply state that there is a 95% probability of losing no more than 500,000 USD in the following day. Mathematically this is stated as:

\begin{eqnarray} P(L \leq -5.0 \times 10^5) = 0.05 \end{eqnarray}

Or, more generally, for a loss $L$ exceeding a value $VaR$ with a confidence level $c$ we have:

\begin{eqnarray} P(L \leq -VaR) = 1-c \end{eqnarray}

The "standard" calculation of VaR makes the following assumptions:

- **Standard Market Conditions** - VaR is not supposed to consider extreme events or "tail risk"; rather, it is supposed to provide the expectation of a loss under normal "day-to-day" operation.
- **Volatilities and Correlations** - VaR requires the volatilities of the assets under consideration, as well as their respective correlations. These two quantities are tricky to estimate and are subject to continual change.
- **Normality of Returns** - VaR, in its standard form, assumes the returns of the asset or portfolio are *normally distributed*. This leads to a more straightforward analytical calculation, but it is quite unrealistic for most assets.
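Under the normality assumption the defining probability above can be verified directly: the VaR at confidence level $c$ corresponds to the $1-c$ quantile of the return distribution. A minimal sketch using SciPy, with illustrative (assumed) values for $\mu$ and $\sigma$:

```
from scipy.stats import norm

# Illustrative daily return parameters (not estimated from real data)
mu, sigma = 0.0005, 0.02  # mean and standard deviation of daily returns
c = 0.95                  # confidence level

# The (1 - c) quantile of the normal return distribution
alpha = norm.ppf(1 - c, mu, sigma)

# By construction, returns fall below alpha with probability 1 - c
print(norm.cdf(alpha, mu, sigma))  # 0.05 (up to floating point error)
```

Since `ppf` is the inverse of `cdf`, composing the two recovers the tail probability $1-c$ exactly, which is precisely the probabilistic statement of VaR given above.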

## Advantages and Disadvantages

VaR is pervasive in the financial industry, hence you should be familiar with the benefits and drawbacks of the technique. Some of the advantages of VaR are as follows:

- VaR is very straightforward to calculate for individual assets, algo strategies, quant portfolios, hedge funds or even bank prop desks.
- The time period associated with the VaR can be modified for multiple trading strategies that have different time horizons.
- Different values of VaR can be associated with different forms of risk, say broken down by asset class or instrument type. This makes it easy to interpret where the majority of portfolio risk may be clustered, for instance.
- Individual strategies can be constrained as can entire portfolios based on their individual VaR.
- VaR is straightforward to interpret by (potentially) non-technical external investors and fund managers.
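As an example of modifying the time period, a one-day VaR is often rescaled to an $N$-day horizon with the square-root-of-time rule, which holds under the assumption of independent, identically distributed normal returns with negligible mean. A brief sketch under those assumptions:

```
import math

def scale_var(daily_var, n_days):
    """
    Scale a one-day VaR to an n-day horizon using the
    square-root-of-time rule. Assumes iid normal returns
    with approximately zero mean.
    """
    return daily_var * math.sqrt(n_days)

# E.g. a 50,000 USD daily VaR scaled to a 10-day horizon
print(scale_var(50000.0, 10))  # approximately 158,113.88 USD
```

Note that this rule is itself an approximation and degrades when returns are autocorrelated or have heavy tails.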

However, VaR is not without its disadvantages:

- VaR does not discuss the magnitude of the expected loss beyond the value of VaR, i.e. it will tell us that we are likely to see a loss *exceeding* a value, but not how much it exceeds it.
- It does not take into account extreme events, but only typical market conditions.
- Since it uses historical data (it is backward-looking) it will not take into account future market regime shifts that can change volatilities and correlations of assets.

VaR should not be used in isolation. It should always be used with a suite of risk management techniques, such as diversification, optimal portfolio allocation and prudent use of leverage.

## Methods of Calculation

So far we have not discussed the actual calculation of VaR, either in the general case or for a concrete trading example. There are three techniques that will be of interest to us. The first is the variance-covariance method (using normality assumptions), the second is a Monte Carlo method (based on an underlying, potentially non-normal, distribution) and the third is known as historical bootstrapping, which makes use of historical returns information for the assets under consideration.

In this article we will concentrate on the Variance-Covariance Method and in later articles will consider the Monte Carlo and Historical Bootstrap methods.

### Variance-Covariance Method

Consider a portfolio of $P$ dollars, with a confidence level $c$. We are considering daily returns, with asset (or strategy) historical standard deviation $\sigma$ and mean $\mu$. Then the *daily* VaR, under the variance-covariance method for a single asset (or strategy), is calculated as:

\begin{eqnarray} VaR = P - P(\alpha(1-c) + 1) \end{eqnarray}

Where $\alpha$ is the inverse of the cumulative distribution function of a normal distribution with mean $\mu$ and standard deviation $\sigma$, evaluated at $1-c$.

We can use the SciPy and pandas libraries for Python in order to calculate these values. If we set $P=10^6$ and $c=0.99$, we can use the SciPy `ppf` method to generate the values of the inverse cumulative distribution function of a normal distribution, with $\mu$ and $\sigma$ obtained from some real financial data, in this case the historical daily returns of CitiGroup (we could easily substitute the returns of an algorithmic strategy here):

```
# var.py

import numpy as np
import pandas as pd
from scipy.stats import norm


def create_dataframe(csv):
    """
    Read pricing data CSV download for C
    OHLCV data from 01/01/2010-01/01/2014 into a DataFrame.

    Parameters
    ----------
    csv : `str`
        Path to a CSV file containing pricing data.

    Returns
    -------
    ts : `pd.DataFrame`
        A DataFrame containing C OHLCV data from
        01/01/2010-01/01/2014. Index is a DatetimeIndex.
    """
    # Obtain the stock information
    ts = pd.read_csv(csv)
    ts = ts.set_index(pd.DatetimeIndex(ts['Date']))
    return ts


def create_returns_series(ts):
    """
    Create a Returns series from the OHLCV DataFrame.

    Parameters
    ----------
    ts : `pd.DataFrame`
        A DataFrame containing C OHLCV data from
        01/01/2010-01/01/2014. Index is a DatetimeIndex.

    Returns
    -------
    ts : `pd.DataFrame`
        A DataFrame with OHLCV data and a Returns series.
    """
    # Calculate the daily percentage change
    ts["rets"] = ts["Adj Close"].pct_change()
    return ts


def _var_cov_var(P, c, mu, sigma):
    """
    Variance-Covariance calculation of daily Value-at-Risk
    using confidence level c, with mean of returns mu
    and standard deviation of returns sigma, on a portfolio
    of value P.

    Parameters
    ----------
    P : `int`
        Portfolio value.
    c : `float`
        Confidence level.
    mu : `float`
        Mean of the returns series.
    sigma : `float`
        Standard deviation of the returns series.

    Returns
    -------
    `float`
        Variance-Covariance measure.
    """
    alpha = norm.ppf(1 - c, mu, sigma)
    return P - P * (alpha + 1)


def var(rets):
    """
    Parameters for the Variance-Covariance calculation.

    Parameters
    ----------
    rets : `pd.DataFrame`
        OHLCV DataFrame with a Returns series.

    Returns
    -------
    `float`
        Variance-Covariance measure.
    """
    P = 1e6   # 1,000,000 USD
    c = 0.99  # 99% confidence interval
    mu = np.mean(rets["rets"])
    sigma = np.std(rets["rets"])
    return _var_cov_var(P, c, mu, sigma)


if __name__ == "__main__":
    # CSV file of OHLCV data for C from 1/1/2010 to 1/1/2014
    csv = "PATH/TO/YOUR/CSV"
    citi = create_dataframe(csv)
    rets = create_returns_series(citi)
    value_at_risk = var(rets)
    print(f"Value-at-Risk: ${value_at_risk:0.2f}")
```

The calculated value of VaR is given by:

`Value-at-Risk: $56503.13`
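As a rough sanity check of the `_var_cov_var` function, we can plug in synthetic parameters where the answer is easy to reason about: with zero mean, the formula reduces to $-P\alpha$, i.e. the portfolio value multiplied by the magnitude of the $1-c$ return quantile. A sketch using illustrative (assumed) parameter values:

```
from scipy.stats import norm

def var_cov_var(P, c, mu, sigma):
    """Variance-covariance daily VaR, as in the script above."""
    alpha = norm.ppf(1 - c, mu, sigma)
    return P - P * (alpha + 1)

# Zero-mean returns with 1% daily volatility, 99% confidence,
# on a 1,000,000 USD portfolio (illustrative values only)
v = var_cov_var(1e6, 0.99, 0.0, 0.01)
print(round(v, 2))  # 23263.48
```

This agrees with intuition: the 1% quantile of a standard normal is roughly $-2.33$, so the VaR is approximately $10^6 \times 2.33 \times 0.01 \approx 23{,}300$ USD.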

VaR is an extremely useful and pervasive technique in all areas of financial management, but it is not without its flaws. We have yet to discuss the actual value of what could be lost in a portfolio, rather just that it may exceed a certain amount some of the time.

In follow-up articles we will not only discuss alternative calculations for VaR, but also outline the concept of **Expected Shortfall** (also known as Conditional Value at Risk), which provides an answer to how *much* is likely to be lost.