GPU computation on Amazon EC2

Running a deep learning algorithm properly is not a trivial task. This post walks through the setup needed to run a deep learning algorithm, in particular neural-style, on Amazon GPU instances.

As mentioned in the previous blog post ‘nips 2016 neural style’, running this deep learning algorithm for image rendering is very computationally intensive. It is quite slow on a traditional CPU cluster, so naturally we would like to run the algorithm on a GPU to make the computation feasible. Anyone with a desktop or laptop has a CPU, but the problem is that not everybody has a GPU. It seems to me that we have two options:

  1. Purchase your own GPU and make a GPU cluster.
  2. Use cloud computing services e.g., Amazon EC2.

I don’t have the money to buy my own GPU cluster, so I will use Amazon’s EC2 GPU computing services. The cost of computing is roughly 0.7 euro per hour.
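
To get a feel for what that price means per rendering, a back-of-the-envelope calculation could look like the sketch below. The 0.7 euro/hour figure is the one quoted above; the 2-minute rendering time is the figure reported further down in this post, and you should substitute your own timing.

```shell
# Rough per-image cost estimate on an EC2 GPU instance.
price_per_hour=0.7     # euro/hour, figure quoted in this post
minutes_per_image=2    # rendering time reported later in this post

# awk handles the floating-point arithmetic: price * (minutes / 60).
cost=$(awk -v p="$price_per_hour" -v m="$minutes_per_image" \
  'BEGIN { printf "%.3f", p * m / 60 }')
echo "approx cost per image: ${cost} euro"
```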


Installation

  1. Install torch 7 manually

    Run the following commands in a terminal to install torch 7

    cd ~/
    curl -s | bash
    git clone ~/torch --recursive
    cd ~/torch; ./

    Remember to use sudo rights during the installation. If you use an Amazon EC2 GPU cluster with the Amazon Machine Image (ami-2557e256), you do not have to worry about torch 7, as it is already installed in that machine image.

  2. Install loadcaffe

    1. Take a look at the official loadcaffe repository on github and follow the instructions there. Otherwise, go ahead with the following steps.

    2. loadcaffe has no caffe dependency, but you need to install protobuf with the following command

      sudo apt-get install libprotobuf-dev protobuf-compiler

      If the two packages cannot be found in the repository, you need to update apt-get with the following command

      sudo apt-get update

      And you will just be fine.

    3. Then you should install loadcaffe package by running the following command

      sudo luarocks install loadcaffe

      In case luarocks cannot find the loadcaffe package, the problem can be solved (at least in my case) with the following command

      sudo luarocks --from= install loadcaffe

      Again you will just be fine.

  3. Install neural-style

    1. Now that the environment is ready, we can move on to the real work. Take a look at neural-style on github.

    2. Clone the package with the following git command

      git clone

    3. Get into the cloned directory and download the VGG model with the following command

      sh models/
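
With all three steps done, a quick sanity check that the main tools landed on your PATH can look like the sketch below. `check_cmd` is a small helper defined here for illustration only; it is not part of torch or loadcaffe.

```shell
# Minimal sanity check after the install steps above.
# check_cmd is a hypothetical helper, defined here, not shipped by any package.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
  fi
}

check_cmd th        # the torch CLI from step 1
check_cmd luarocks  # the package manager used for loadcaffe in step 2
```

On the ami-2557e256 image, `th` should already report as found even before step 1.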


Run neural-style

  1. Running neural-style is now pretty straightforward. In particular, you can try the following example command

    th neural_style.lua -style_image examples/inputs/starry_night.jpg -content_image examples/inputs/tubingen.jpg

  2. After about 700 iterations, your rendering should be ready. Copy the result from Amazon EC2 to your local machine with the following command

    scp -i SparkEC2Key.pem*png ~/Desktop/

    And yes, your Amazon EC2 instance is just like a normal server and can be accessed with ssh and scp.

  3. Now the cool thing is that the running time is only about 2 minutes on the GPU, instead of very long on a CPU.

  4. You also get intermediate pictures at 100, 200, …, up to 1000 iterations.
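
Putting the last two points together, the sketch below lists the checkpoint filenames you can expect and assembles the scp command to fetch them in one go. The `out_<iter>.png` pattern follows neural-style’s default settings (`-output_image out.png`, `-save_iter 100`), and the host and remote directory here are placeholders to replace with your own instance DNS and clone location.

```shell
# Placeholders: substitute your own key file, instance DNS, and remote path.
KEY="SparkEC2Key.pem"
HOST="ubuntu@ec2-xx-xx-xx-xx.compute.amazonaws.com"   # hypothetical instance DNS
REMOTE_DIR="~/neural-style"                            # assumed clone location

# Checkpoint files written every 100 iterations, up to 1000 (default naming):
for i in $(seq 100 100 1000); do
  echo "out_${i}.png"
done

# Assemble the copy command; printed here as a dry run rather than executed.
cmd="scp -i $KEY $HOST:$REMOTE_DIR/out_*.png ~/Desktop/"
echo "$cmd"
```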


External reading materials

There is always cool information available on the web. In particular, I found the following blog posts useful.

  1. How to install Theano on Amazon EC2 GPU. This is a simple, clear, and instructive blog post about installing a deep learning environment such as Theano and CUDA on an Amazon EC2 GPU instance.

  2. Using convnets to detect facial keypoints is a tutorial for a Kaggle competition.

  3. Jeff Barr’s introductory blog post on GPU computing with Amazon - ‘build 3D streaming applications with EC2’s G2 instances’.

Hongyu Su 31 January 2016