Running Stable Diffusion on Google Colab got too expensive, so I moved to RunPod.io. This is how.

Aalap Davjekar
9 min read · Oct 12, 2022

This is a short guide to installing and running a Deforum Stable Diffusion notebook on RunPod.io. For more specific information on working with Deforum, please check my guide to the Deforum notebook.

These are the kind of videos I was able to make with the help of Stable Diffusion.

I’m currently obsessed with Stable Diffusion…

But did you know there are dozens of different versions of SD? My personal favourite is Deforum, which can take a prompt and rapidly turn it into a batch of images. It even provides settings to zoom and rotate, leading to some very interesting results. There was no way I could run Deforum locally because my machine lacked the hardware. To render thousands and thousands of images in one go, I had to use a Google Colab notebook that hooked me up to a cloud GPU. The entire process happens online; files are stored on Google Drive.

The problem is, Colab is expensive (GPU price comparisons below) unless you are okay with really slow GPUs. Colab Pro is the “economy” tier ($10/month) but at least guarantees no interruptions, which are a pain in the free tier. Fast GPUs are only consistently available if you subscribe to Colab Pro+ ($50/month). All of this is still quite reasonable at first glance. $50/month for a high-end GPU is a fantastic deal — well, it was.

Colab recently introduced “compute units”, expendable tokens that are used up for every minute you are connected to a GPU. You get 500 compute units with Colab Pro+ and 100 with Pro, but on fast GPUs they deplete incredibly quickly (at about 13 per hour). That works out to roughly 38.5 hours before you burn through all 500 ($50 worth of) units, or about $1.30 per hour of GPU time.
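If you prefer the arithmetic spelled out, here is the same calculation as a few lines of Python (the 13 units/hour burn rate is the one I observed on a fast GPU; yours may differ):

```python
# Back-of-the-envelope Colab Pro+ cost math, using the numbers above.
units_included = 500        # compute units bundled with Colab Pro+
burn_rate = 13              # units consumed per hour on a fast GPU (observed)
subscription_price = 50     # USD per month

gpu_hours = units_included / burn_rate              # ~38.5 hours
effective_hourly = subscription_price / gpu_hours   # ~$1.30 per GPU-hour

print(f"{gpu_hours:.1f} hours of fast GPU time, ${effective_hourly:.2f}/hour")
```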

This was not something I could afford because I use these GPUs for over a hundred hours a week. I’m always rendering something — even when I’m asleep, there is a GPU somewhere out there, drawing pictures for me.

How I create the majority of my art.

Needless to say, I eventually came across cheaper alternatives — RunPod.io and Vast.ai. Both offer very similar services — rent a GPU for $$$ and make it do things — the difference from Colab is that they offer a lot more bang for your buck.

For example, I ran some quick math and an A100, the best GPU Google Colab currently offers, costs roughly between $1.2–1.4/hour. This beast can spit out even high-resolution images at about 5x the speed of the P100, available on the free tier. The P100, while slow, is cheap — costing roughly $0.2/hour. RunPod, as you will see, is much cheaper.

I started using RunPod first, just to check whether this would even work. The biggest difference is that there is no friendly interface like Google Colab’s for using Deforum; you have to edit the code in the Python notebook directly. I also had to run a modified notebook that bypasses the Google Drive settings, because on RunPod the images are saved on the container you rent, to be downloaded at your convenience.
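For context, the modification is small. The exact variable names depend on which Deforum build you start from, so treat this as an illustrative sketch rather than the literal diff, but the idea is to skip the Colab-only Drive mount and point the model and output paths at the container’s own disk (I’m using /workspace here, the persistent volume on RunPod’s PyTorch template):

```python
import os

# Sketch of the Drive-bypass settings (names are illustrative; adjust to your notebook).
mount_google_drive = False            # no Colab, so no Drive mount
models_path = "/workspace/models"     # where the .ckpt model will live on the container
output_path = "/workspace/output"     # rendered frames are written here

os.makedirs(models_path, exist_ok=True)
os.makedirs(output_path, exist_ok=True)
```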

The first GPU I tested my notebook on was an RTX 3080. It’s not the most powerful but it’s cheap — approximately $0.2/hour. You might say this is similar to the free tier on Colab, but you’ll be surprised to hear that the 3080 delivers about 3x the horsepower of the P100. While it’s not as fast as a 3090 or an A100, it’s the most reasonably priced option I’ve found so far.

A rundown of costs for the different GPUs I tested:

Colab Pro*

  • A100 40GB — 7.5 it/s — $1.3/hr — 20,800 steps/dollar
  • Tesla T4 16GB — 1.5 it/s — $0.2/hr — 32,000 steps/dollar

RunPod*

  • RTX 3090 24GB — 5 to 5.1 it/s — $0.36/hr — 50,000 steps/dollar
  • RTX 3080 Ti 12GB — 4.9 to 5 it/s — $0.26/hr — 63,692 steps/dollar
  • RTX 3080 10GB — 4.2 to 4.5 it/s — $0.22/hr — 68,000 steps/dollar
  • A4000 16GB — 2.6 to 2.7 it/s — $0.32/hr — 27,000 steps/dollar

*These are estimates based solely on Video Input mode; your mileage may vary depending on your settings.

According to these results at least, running an RTX 3080 on RunPod provides more than twice as many steps per dollar expended as a Tesla T4. It also runs at thrice the speed for almost the same price.
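If you want to sanity-check the table, or run the numbers for a GPU I haven’t listed, steps per dollar is simply iterations per second multiplied by 3,600 seconds in an hour and divided by the hourly price:

```python
def steps_per_dollar(it_per_sec: float, price_per_hour: float) -> float:
    """How many sampling steps one dollar buys at a given speed and hourly rate."""
    return it_per_sec * 3600 / price_per_hour

# Rough figures from the table above:
print(round(steps_per_dollar(4.2, 0.22)))   # RTX 3080 on RunPod -> ~68,700
print(round(steps_per_dollar(7.5, 1.30)))   # A100 on Colab      -> ~20,800
```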

Of course, you can always experiment with different GPUs, and I think that’s the best part about RunPod and Vast.ai — they give you some choice over your selection of hardware while still charging a fraction of the cost of a Google Colab subscription.

Getting started with RunPod.io

Requirements

  • A RunPod account. You can sign up with this referral link, which helps support my work.
  • A Hugging Face account.
  • A basic knowledge of Python.
The RunPod homepage.
  • Sign up.
  • Click on Secure Cloud or Community Cloud in the menu on the left.
Choose between Secure Cloud or Community Cloud.
  • Select a GPU of your choice. I’ll be using a 3080 for demonstration purposes but feel free to pick any GPU you’d like that has at least 10GB VRAM.
Select a GPU of your choice.
  • Create an instance. Once you click on Select, you will be shown a configuration screen — what program you want to run on the GPU, with what settings, how much storage space, etc.
  • The default is PyTorch, a machine learning framework used in developing AI models. This is what we are going to use to run the Deforum notebook.
Create an instance.
  • Make sure the container and volume disks are at least 20GB in size. You can keep “Encrypt Volume” unchecked.
The container and volume disks require at least 20GB of space.
  • Make sure “Start Jupyter Notebook” is checked.
Keep “Start Jupyter Notebook” checked.
  • On the next screen you can choose whether you want a Spot (Interruptible) or On-Demand (Non-Interruptible) instance. Interruptible just means you can be outbid: someone willing to pay more will get the GPU instead. Unsurprisingly, On-Demand prices are always higher than Spot prices. If you need the GPU continuously for at least an hour, On-Demand is the only way to go.
Choose a Spot (Interruptible) or On-Demand (Non-Interruptible) instance.
  • Click on “My Pods” to go to the “My Pods” page. You can also access it from the Menu.
Your instance is created. Now return to the last page.
  • Connect to your instance.
An example of a running instance.
  • It will take some time for the pod to initialise. Once it does, click on the purple “expand” button.
  • Click the “Connect” button once it lights up.
Connect to your instance.
  • Click the “Start Web Terminal” button. Once it starts, click “Connect to Web Terminal”.
Connect to a Web Terminal.
  • Once the web terminal loads, paste the following command (without the quotes): “apt-get update; apt-get install ffmpeg libsm6 libxext6 -y; pip install opencv-python; pip install ffmpeg; pip install pandas; pip install scikit-image; pip install numexpr”
  • Remember to use ctrl+shift+v to paste or right-click and select “paste”.
Install dependencies in the terminal.
  • Wait for about 10 minutes while things are downloaded and installed on the instance. You can close the Web Terminal once you see an empty prompt.
If the prompt is empty, then your last command has finished executing.
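  • (Optional) If you want to double-check that the Python packages installed cleanly, type python3 in the web terminal and try the imports yourself; they should all succeed without errors:

```python
# Optional sanity check: every import below should succeed if the install step finished.
import cv2        # provided by opencv-python
import pandas
import skimage    # provided by scikit-image
import numexpr

print(cv2.__version__, pandas.__version__)
```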
  • Click on “Connect” -> “Connect to Jupyter Lab”.
Connect to Jupyter Lab.
  • A new window will open. If nothing happens, just wait five minutes before refreshing the page. In the meantime…
  • Download the modified Deforum file that contains code you can run on RunPod.
  • Once you have it on your computer, go to JupyterLab. Click the “Upload” button on the left.
Click the “Upload” button.
  • Upload your file.
The file browser.
  • Don’t forget to switch to dark mode by going to Settings -> Theme -> JupyterLab Dark.
Switch to dark mode.
  • Double click on the notebook you wish to run. Once it opens, click on the >> symbol to restart the kernel and execute all cells in the notebook.
Restart the kernel and run all code.
  • Once it makes it down to cell 5, it will ask you for your Hugging Face login and access token. To get the token, go to the Hugging Face website and make an account. If you already have one, go to your Tokens page and create a token.
  • Click on “New token”, enter “Stable Diffusion” as the name and keep the role as “read”. Click “Generate token”. Now copy your token.
Create a token and copy it.
  • Paste it in the text box asking for the token. Press Enter.
Enter your Hugging Face username and token.
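  • If the notebook’s built-in prompt ever gives you trouble, you can also log in from a cell with the huggingface_hub library. This is the generic Hugging Face login flow, not necessarily what the Deforum notebook does under the hood, so treat it as a fallback:

```python
# Fallback: log in to Hugging Face manually from a notebook cell.
from huggingface_hub import notebook_login

notebook_login()  # paste the token you generated above when prompted
```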
  • It will take up to 10 minutes for it to download the .ckpt model.
The ckpt file will take a few minutes to download.
  • Don’t worry if you see warnings in the output; this is normal.
Warnings shown in the output. Ignore this.
  • Go to View -> Render Side-by-Side to make the output cells appear next to the code cells. This will just make your notebook look a little more organised.
Use the Render Side-by-Side toggle for a notebook that is slightly easier to navigate.
  • You will notice new folders being created in the main directory. That’s SD, the ckpt model, and miscellaneous files being downloaded and installed. Once the notebook has run all the way through, you will see your first batch of images being generated.
Newly created folders in the file browser.
  • To stop execution at any time to change the code, press “i” twice or click on the Stop button in the menu.
Press the square icon to stop code execution.
  • Note: You don’t need to restart the entire notebook from the first cell each time. You can directly execute the “Run” cell or start executing code from whichever cell you edited.
  • You will see your images being rendered next to the “Run” cell.
Images will appear next to the “Run” cell.
  • To download your images, double-click on the “output” directory. All your images are nested several folders deep. Right-click on any folder and select “Download as an archive”, and a .zip file will be saved to your computer.
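  • Alternatively, if the archive download is slow or flaky, you can zip the whole output folder from a notebook cell and download the single .zip instead. A minimal sketch, assuming your output lives under /workspace/output (adjust the paths to match your setup):

```python
import shutil

# Creates /workspace/my_frames.zip containing everything under /workspace/output.
shutil.make_archive("/workspace/my_frames", "zip", "/workspace/output")
```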
  • Use a video editor such as Kdenlive or OpenShot to turn your batch of images into animations. Both are free and open-source. Both are awesome. Unfortunately, using these tools is outside the scope of this tutorial (though there is a rough ffmpeg alternative sketched right after these steps).
  • You can use the last cell to create animations from your images. This is the easiest method, but definitely not the best. The cell won’t run automatically unless you set “skip_video_for_run_all” to “False”.
  • To manually execute this cell, click on the cell and press the “Run” button from the toolbar or press Shift + Enter.
  • Your animation will be in the same folder as your images.
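If you’d rather not use the notebook’s video cell or a separate editor, you can also stitch the frames together with ffmpeg, which was installed in the web-terminal step earlier. Here is a rough sketch you can run from a notebook cell; the frame pattern and file names below are only examples, so match them to whatever your batch actually produced:

```python
import subprocess

frames = "/workspace/output/MyBatch/MyBatch_%05d.png"   # example pattern; check your own folder
movie = "/workspace/output/MyBatch.mp4"

# 15 fps input, H.264 output, yuv420p so the result plays in most players.
subprocess.run([
    "ffmpeg", "-y",
    "-framerate", "15",
    "-i", frames,
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",
    movie,
], check=True)
```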

If something didn’t work or if you have any questions, just drop a comment and I will do my best to help you out! All the best with your animations!
