Should You Buy or Rent a GPU-Based Deep Learning Machine for Quant Trading Research?

We've recently been considering deep learning as a modelling methodology for developing new quantitative trading strategies. Such models have proven 'unreasonably effective' in the fields of computer vision, natural language processing and games of strategy, which motivates us to see whether they can be applied to quant trading as well.

We've so far looked at the basic mathematical foundations through a new series on linear algebra for deep learning, but have yet to discuss any deep learning models themselves beyond a cursory introduction to logistic regression using Theano.

Before we dive into deep learning model architecture it is necessary to set up a research environment that will be used for training models prior to their addition into a trading infrastructure. This article will motivate the need for specialised hardware. It will also provide an analysis of whether to purchase this hardware directly or whether to rent it from a cloud computing vendor.

Deep learning models are incredibly compute-intensive. Training times on unoptimised hardware can be on the order of weeks or months. Delays of this magnitude severely hamper the iterative nature of quant trading research and can prevent the deployment of potentially profitable strategies.

Due to its heavy reliance on basic linear algebra routines, deep learning training is extremely well-suited to massive parallelisation. Hence it is a perfect candidate for execution on Graphics Processing Units (GPUs). GPUs are specialised hardware architectures originally designed for real-time video game graphics, but they have since matured into a fully-fledged High Performance Computing (HPC) platform.
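
As a concrete illustration, the short sketch below times a large matrix multiplication on the CPU and then on the GPU, if one is visible. It uses TensorFlow 2.x syntax (which postdates the Theano example mentioned above); the matrix size is an illustrative choice:

```python
# Minimal sketch: compare a large matrix multiplication on CPU vs GPU.
# Uses TensorFlow 2.x syntax; matrix size N is purely illustrative.
import time

import tensorflow as tf

N = 4000  # large enough that parallelisation dominates launch overhead

def time_matmul(device):
    """Time a single NxN matmul on the given device, after a warm-up."""
    with tf.device(device):
        a = tf.random.normal((N, N))
        b = tf.random.normal((N, N))
        _ = tf.matmul(a, b).numpy()          # warm-up (CUDA init, etc.)
        start = time.time()
        _ = tf.matmul(a, b).numpy()          # .numpy() forces completion
    return time.time() - start

print("CPU: %.3fs" % time_matmul("/CPU:0"))
if tf.config.list_physical_devices("GPU"):
    print("GPU: %.3fs" % time_matmul("/GPU:0"))
```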

GPUs can reduce training times by a significant factor and are thus an almost essential tool for serious deep learning practitioners. However, the GPUs themselves, as well as the surrounding hardware necessary to run them effectively, represent a significant investment. In addition, quant traders have widely varying requirements, depending upon trading frequency, assets under management and whether the capital is being managed institutionally or through a personal retail account.

This article is largely written for retail quants who possess the time and capital to invest in deep learning R&D as a means of diversifying a portfolio. However, it should also be of use to small institutional quants in boutique funds who are considering a deep learning component to their offering.

For those on more modest budgets it cannot be stressed enough that you should not invest a great deal of money in building a high-end deep learning setup right away. Since deep learning compute power has become such a commoditised resource, it is very straightforward to "try before you buy" using cloud vendors. Once you have more experience training models, along with a solid quant trading research plan in mind, the hardware specification, and the associated financial outlay, can be tailored to your specific requirements.

Renting vs Purchasing

Now that the need for GPU hardware has been established, the next task is to determine whether to rent GPU compute resources from "the cloud" or to purchase a local GPU desktop workstation.

The answer is heavily dependent upon the type of models being trained, the space of (hyper)parameters being searched, the cost of electricity in your locale and your current algorithmic trading data setup, along with your own personal preferences and research style.

Renting GPUs via Amazon Web Services P2 Instances

Comparing a cross-section of cloud vendors is beyond the scope of an article such as this, and any pricing and performance information will likely become out of date, particularly in the rapidly moving field of deep learning. Hence I've decided to consider one of the major vendors (if not the major vendor), Amazon Web Services, through its Elastic Compute Cloud (EC2) platform.

The EC2 platform offers a wide range of instance types. We are particularly interested in the P2 instances found under the Accelerated Computing Instances header. There are currently three on offer, namely the p2.xlarge, p2.8xlarge and p2.16xlarge, which vary in the number of GPUs, virtual CPUs (vCPUs) and available RAM. At the time of writing they are priced on-demand, in the US East (Ohio) region, at $0.90/hr, $7.20/hr and $14.40/hr, respectively.

This means continual usage of the p2.xlarge machine, which contains a single Nvidia K80 GPU with 12GB of onboard video RAM, will cost approximately $650 per month. Note that, unlike a locally-owned machine, this figure includes the cost of electricity. There are other costs to factor in, such as data transfer out of EC2 and the Elastic Block Storage (EBS) volumes necessary for storing both the operating system and the deep learning datasets. All of these costs need to be considered when renting.
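
To make the arithmetic explicit, the snippet below reproduces the monthly estimate and adds an illustrative EBS charge; the volume size and per-GB rate are assumptions for the sketch, not quoted AWS prices:

```python
# Back-of-the-envelope monthly cost of a continually-running p2.xlarge.
# The EBS volume size and per-GB-month rate are illustrative assumptions.
hourly_rate = 0.90            # USD/hr, p2.xlarge rate quoted above
hours_per_month = 24 * 30     # continual usage

ebs_gb = 200                  # assumed volume for OS + datasets
ebs_rate_per_gb = 0.10        # assumed USD per GB-month

compute = hourly_rate * hours_per_month          # ~$648
storage = ebs_gb * ebs_rate_per_gb               # ~$20
print("Compute: $%.0f, EBS: $%.0f, total: ~$%.0f per month"
      % (compute, storage, compute + storage))
```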

Despite these costs it is reasonably straightforward to set up a GPU instance. In fact we will be releasing a tutorial in the near future on how to do just that. However, since these instances are Linux servers rather than desktops, they run headless, meaning that the only way to access them is through a tool such as SSH. Hence all work will need to be carried out on the command line. This is quite a shift for those who are used to a local Windows desktop environment.

Another major benefit of renting is that once the model is trained it can be exported and the GPU instance can be terminated. The model can then be executed elsewhere on much cheaper hardware (possibly locally). This is not the case for local machines where all of the cost is front-loaded. Hence, once the deep learning research has finished you may be left with a high-powered deep learning machine with nothing to do!
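
As a sketch of this export-and-terminate workflow, the snippet below trains a toy Keras model on random data (a stand-in for a real strategy model), saves it to a single file, and reloads it for inference as one would on a cheaper CPU-only machine:

```python
# Sketch of the rent-train-export workflow: train on the GPU instance,
# save the model, terminate the instance, then predict on cheap hardware.
# The tiny model and random data are placeholders for a real strategy.
import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense

# --- On the EC2 GPU instance ---
model = Sequential([
    Dense(32, activation="relu", input_shape=(10,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(np.random.rand(256, 10), np.random.randint(0, 2, 256),
          epochs=2, verbose=0)
model.save("strategy_model.h5")   # architecture + weights in one file

# --- Later, on a cheap (CPU-only) local machine ---
model = load_model("strategy_model.h5")
print(model.predict(np.random.rand(5, 10)))
```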

Buying a GPU-Enabled Local Desktop Workstation

Buying a full deep learning system is becoming more and more popular due to significant price reductions in commodity GPUs. While the reasons for choosing a particular GPU will be left to another article (see here for a great discussion), it is possible to recommend one or two cards. In particular the Nvidia GeForce GTX 1080 Ti, with 11GB of VRAM, currently retails for around $700. For those on a budget the GeForce GTX 1060 (3GB VRAM version) currently retails for around $250. The latter has significantly less video memory, which can constrain individual model size.

The main benefit of purchasing over renting is that all costs, with the exception of electricity, are paid upfront. It is now possible to build an entry-level deep learning research workstation for under $1,000. However, it is recommended to spend around $2,000 for a reasonably future-proof machine, as this allows the inclusion of a single 1080 Ti. A second 1080 Ti can be added at a future date, if budgets allow.
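
A rough break-even calculation illustrates this trade-off. The snippet below compares a $2,000 workstation against the ~$650/month p2.xlarge figure above; the power draw and electricity tariff are illustrative assumptions:

```python
# Rough break-even between buying a workstation and renting a p2.xlarge.
# Power draw and electricity tariff are illustrative assumptions.
workstation_cost = 2000.0     # USD upfront, as recommended above
cloud_monthly = 650.0         # USD/month, p2.xlarge estimate above

avg_draw_kw = 0.35            # assumed mean draw under training load
tariff = 0.12                 # assumed USD per kWh
electricity_monthly = avg_draw_kw * 24 * 30 * tariff   # ~$30/month

months = workstation_cost / (cloud_monthly - electricity_monthly)
print("Electricity: ~$%.0f/month; break-even after ~%.1f months"
      % (electricity_monthly, months))
```

Under these assumptions the workstation pays for itself in roughly three months of continual training, though intermittent usage shifts the balance back towards renting.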

The ability to customise the hardware is also beneficial. By obtaining a large workstation case it is possible to expand internal storage capacity cheaply, which is extremely useful if you run data-hungry models or want a local replica of a securities master database. A local machine may also be a requirement of the organisation you work for, if it disallows the use of cloud resources for security reasons.

The downsides of purchasing and building a deep learning workstation are the upfront cost as well as the need to construct it yourself. These days it is relatively straightforward to build a workstation from components, and it shouldn't take more than a couple of hours to get to a working installation of, say, Ubuntu.

Configuring the current popular deep learning software stacks can be problematic, though. Installing Nvidia CUDA and TensorFlow is a challenging exercise even on the latest versions of Ubuntu (at the time of writing, 16.04 LTS is a popular choice). For this reason we will be writing an entire article on this process at a later date.
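
Once the stack is eventually installed, a quick sanity check that TensorFlow can actually see the GPU looks like the following (the call shown is TensorFlow 2.x syntax; 1.x installations can use tf.test.gpu_device_name() instead):

```python
# Verify that TensorFlow can see the GPU after installing CUDA/cuDNN.
# TensorFlow 2.x syntax; on 1.x use tf.test.gpu_device_name() instead.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("GPU(s) visible to TensorFlow:", gpus)
else:
    print("No GPU found - check the Nvidia driver and CUDA installation")
```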

Next Steps

For those who are interested in "trying before they buy", the next article will show how to provision an AWS EC2 p2.xlarge instance at $0.90/hr and describe the process for installing TensorFlow and Keras.