Making your first AI-generated animation with Deforum Stable Diffusion

Aalap Davjekar
6 min read · Sep 20, 2022

This video was made by rendering ~5000 images using Stable Diffusion and putting them together with Kdenlive. Music by Evokemusic.ai.

The video you see here was created frame-by-frame using Stable Diffusion and animated with Kdenlive. Many people have asked how I went about doing this, so I’ve put together this short guide to my process.

Before you jump in, note that this guide is specifically about using a notebook maintained by Deforum. If you are looking for something more general on making art with AI or if you want an extremely thorough explanation of working with Stable Diffusion, please check out this guide instead.

If you are completely new to creating images from text, then please first go and use Midjourney, OpenAI’s DALL·E, or DALL·E Mini to understand what all the fuss is about.

To keep this tutorial as accessible as possible, we will be using a Google Colab notebook, which lets us do all our work in the browser with a cloud GPU (which means you can do this even on your 15-year-old ThinkPad).

Also, there is no coding involved whatsoever.

Requirements

  • A Google Drive with at least 5GB of free space.
  • A Hugging Face account (to download the model weights).
  • 20 minutes of free time.

Let’s begin!

  • Get the notebook — this contains pre-written code which lets you easily adjust parameters and pop out images in no time.
  • Note: At the time of writing, I am using version 0.4 but if you want to check for an update, please see the “official-links” channel of the Deforum community’s Discord.
(Screenshot: the notebook as it appears when you first open it.)
  • Download the model weights from the Hugging Face website. Note: You will need to create an account for this.
On the model page, scroll down and download the “sd-v1-4.ckpt” file.
  • You can also use this link.
  • Upload the .ckpt file to your Google Drive.
  • Put it in the path listed in the models_path_gdrive parameter. The default location is “/content/drive/MyDrive/AI/models”. You will need to create the folders. Note: The file is pretty big (~4GB) so it might take a while to upload.
  • Once the file is uploaded, we are going to create a single image just to test this out. From the menu, click on Runtime -> Run all.
(Runtime -> Run all, or Ctrl+F9, runs every code block in the notebook sequentially.)
  • Note: You will need to give this notebook permission to use your Google Drive. If you’re concerned about your data, you can create a new Google account and run the notebook using that.
  • It will take about 3–5 minutes for the setup to finish. The code blocks in the notebook will execute sequentially. Once it gets to the “Run” section, it will take less than a minute to generate the image.
(The notebook has finished executing all of the code once the status indicator on the last cell stops spinning.)
  • Your image will be stored in “MyDrive/AI/StableDiffusion/StableFun/”. Unless you encountered any errors along the way, the image should be there along with a copy of the settings for that batch.
  • Note: The path is listed under output_path_gdrive. The default location is “/content/drive/MyDrive/AI/StableDiffusion”. Both Drive paths are summarized in the sketch below.
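For reference, here is a minimal sketch of how the two Drive path settings might look in the notebook’s settings cell. The parameter names models_path_gdrive and output_path_gdrive come from the notebook itself; the layout below is illustrative, not the notebook’s exact code.

    # Illustrative sketch only -- not the Deforum notebook's exact code.
    # models_path_gdrive: where the notebook looks for the .ckpt model weights.
    # output_path_gdrive: where generated images and batch settings are saved.
    models_path_gdrive = "/content/drive/MyDrive/AI/models"
    output_path_gdrive = "/content/drive/MyDrive/AI/StableDiffusion"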

Animating the images

Now comes the fun part.

  • Go back to your notebook. Scroll to the “Animation settings” section and set animation_mode to “2D” (the settings changed in these steps are summarized in a sketch after this list).
  • Set max_frames to “20” (unless you want to wait forever for this to finish).
  • Scroll down to the “Run” section and change the batch_name to “StableFun2”. This will output your new images to a new folder.
  • Important: Remember to use a new batch name for every batch. This will just keep your images well organised.
  • Scroll to the “Create video from frames” section and uncheck skip_video_for_run_all.
  • In the menu, click on Runtime -> Run all (as you did earlier).
  • This execution will take much longer because we are generating multiple images. Once it finishes, go to the “StableFun2” folder in your Google Drive and you will find an mp4 file along with all the images you generated in this batch. The mp4 file is your animation. Enjoy!
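To recap, here is an illustrative summary of the values changed for this run, written as they might appear across the notebook’s cells (the parameter names are the notebook’s; the layout is mine):

    # Illustrative summary of the settings changed for the animation run.
    animation_mode = "2D"             # "Animation settings" section
    max_frames = 20                   # keep this small unless you want to wait a long time
    batch_name = "StableFun2"         # "Run" section -- use a new name for every batch
    skip_video_for_run_all = False    # "Create video from frames" -- unchecked so the mp4 gets stitched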

If you want to add music, you will need a video editor but that is outside the scope of this tutorial.

So what do I do now?

Play around with prompts! Prompts are what shape the output, and you can tweak them to your heart’s content.

  • Under “Prompts”, change the text inside the double quotes. There are also tools that let you build prompts easily (and visually). A rough sketch of the prompt cell follows this list.
(Screenshot: the prompt cell, with the editable text inside the double quotes highlighted.)
  • If you want to find some pretty images and see what prompts were used to generate them, you can use something like Libraire.ai.
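As a rough sketch, the prompt cell in the Deforum notebook holds a list of prompts for still images and a dictionary of frame-keyed prompts for animations. The names prompts and animation_prompts are what recent versions of the notebook use; treat the exact layout below as an assumption about your copy, not its literal code.

    # Illustrative sketch; the exact layout depends on the notebook version.
    prompts = [
        "a dancer in a swamp, extremely detailed oil painting, trending on artstation",
    ]
    # For animations, prompts can be keyed to the frame at which they take effect:
    animation_prompts = {
        0: "a dancer in a swamp, extremely detailed oil painting",
        10: "a dancer in a misty forest, extremely detailed oil painting",
    }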

For now, keep in mind that:

  • The first words in your prompt will have the strongest effect on the output.
  • Try to keep your prompts as succinct as possible.
  • Too many words will ruin the output.
  • For trial and error, change one word at a time and see what difference it makes. For this, it’s better to first set animation_mode to “None” to create single images instead of animations (faster feedback). Also, use a separate folder for all your image tests.

Here are some images and the prompts that were used to create them:

“a dancer in a swamp, extremely detailed oil painting, rhads, sargent and leyendecker, savrasov levitan polenov, bruce pennington, studio ghibli, tim hildebrandt, digital art, landscape painting, octane render, beautiful composition, trending on artstation”
“angry cat, Modular Origami, insanely detailed and intricate, hypermaximalist, elegant, ornate, hyper realistic, super detailed”
“lizards eating each other:10, Adobe RGB, Sunlight, super detailed:4”

What does each parameter do?

I will cover the most important parameters here that are not self-explanatory. More parameters will be added at a later time.

“angle” — Changes the angle of the camera in degrees at the specified keyframes.

Example: “0:(0), 100:(25)” will increase the angle of the camera from 0 to 25 at a linear rate from frame 0 to frame 100.

“zoom” — Zooms the camera in or out. “1” means no zoom. “0.9” will zoom out. “1.1” will zoom in.

Example: “0:(1), 25:(0.9), 50:(1.1)” will zoom out the camera from 1 to 0.9 at a linear rate from frame 0 to frame 25 before zooming in from 0.9 to 1.1 from frame 26 to frame 50.
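Both “angle” and “zoom” (and the other motion parameters) take keyframe strings of the form “frame:(value), frame:(value), …”, with values interpolated linearly between keyframes. Here is a minimal sketch of how such a string could be parsed and interpolated; it illustrates the idea and is not the notebook’s own parser.

    import re
    import numpy as np

    def interpolate_keyframes(keyframe_string, max_frames):
        # Parse a Deforum-style string like "0:(1), 25:(0.9), 50:(1.1)"
        # and return one linearly interpolated value per frame.
        pairs = re.findall(r"(\d+)\s*:\s*\(([-\d.]+)\)", keyframe_string)
        frames = [int(f) for f, _ in pairs]
        values = [float(v) for _, v in pairs]
        return np.interp(np.arange(max_frames), frames, values)

    zoom = interpolate_keyframes("0:(1), 25:(0.9), 50:(1.1)", 51)
    print(zoom[25])   # 0.9 at the keyframe; values in between are interpolated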

“translation_x” / “translation_y” — Shifts the camera horizontally or vertically. Uses the same keyframe syntax as “angle”.

“diffusion_cadence” — How similar each successive frame is to the one preceding it. “1” will make huge differences in a series of images; “8” will make few differences.

“seed” — A number that initializes the generation and is useful for recreating similar images. Using the same seed with the same settings in two different generations gives you the same output; this determinism is one of the greatest features of Stable Diffusion. “-1” will pick a random seed.
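To illustrate that determinism outside the notebook, here is a minimal sketch using Hugging Face’s diffusers library (my own example, not the Deforum notebook’s code) in which the same seed and settings produce the same image twice:

    # Minimal sketch using the diffusers library, not the Deforum notebook's own code.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")

    def generate(seed):
        generator = torch.Generator("cuda").manual_seed(seed)
        return pipe(
            "angry cat, modular origami, insanely detailed",
            num_inference_steps=50,   # "steps"
            guidance_scale=7,         # "scale"
            generator=generator,
        ).images[0]

    # The same seed and the same settings yield identical images.
    image_a = generate(12345)
    image_b = generate(12345)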

“sampler” — Changes the sampling algorithm. Makes a huge difference to the final output and the speed at which the program runs. Here is a comparison between different samplers.

“steps” — The number of times the sampler ‘paints’ the canvas. Higher values will take longer to finish but result in a more detailed output.

“scale” — Determines the strength of the prompt. Higher values will make the final output closer to the prompt. “7” is a good default.

“n_batch” — The number of images to generate per batch. Makes no difference when creating animations.

“seed_behavior” — Controls how the seed changes from frame to frame:

  • iter: increments the seed value by 1 per frame
  • fixed: keeps the seed value the same throughout
  • random: uses a random seed for every frame. Don’t use this for animations.
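A rough sketch of what these three behaviors amount to per frame (illustrative logic, not the notebook’s implementation):

    import random

    def next_seed(seed, seed_behavior):
        # Illustrative logic only, not the notebook's implementation.
        if seed_behavior == "iter":
            return seed + 1                     # increment by 1 per frame
        if seed_behavior == "fixed":
            return seed                         # same seed throughout
        return random.randint(0, 2**32 - 1)     # "random": new seed every frame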

I’ll gradually add more details to this section. I appreciate any feedback you might have. If there is something that didn’t work or if you know a better way to do something, you can message me on my Telegram.

Looking forward to seeing your animations! :)

Aalap Davjekar

Technical writer and web developer based in Goa, India. Passionate about working at the intersection of art and technology.