Local Diffusion

Introduction

Local Diffusion is a Stable Diffusion based image generator that runs locally and natively inside After Effects (AE).

_images/ld_how_to_1.gif

Hardware Requirements

Since the AI is very performance-hungry, please test on your system. You’ll preferably have an NVIDIA GPU of the 2000 series or newer with 6-8 GB of VRAM, or an M1/M2 Mac running Ventura with 32 GB of RAM or more. Local Diffusion will run on other configurations too (including CPU only), but it might be very slow (several minutes per image).

Parameters

Model

The backbone of the AI. “Base” refers to Stable Diffusion 2.1 (base), which is trained on images of 512x512 pixels. “Advanced” uses the Stable Diffusion 2.1 model, trained on 768x768 images. “XL” makes use of the Stable Diffusion XL model. “SDXL Interactive” is the new default model, needing only 2-4 steps to create images.
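
The plugin ships its models internally, but comparable public checkpoints can be loaded with the Hugging Face diffusers library for experiments outside AE. The mapping below is an assumption based on the public Stability AI releases, not a statement about the plugin’s exact files:

    import torch
    from diffusers import AutoPipelineForText2Image

    # Assumed mapping from the plugin's model choices to public checkpoints.
    MODELS = {
        "Base":             "stabilityai/stable-diffusion-2-1-base",    # 512x512 training
        "Advanced":         "stabilityai/stable-diffusion-2-1",         # 768x768 training
        "XL":               "stabilityai/stable-diffusion-xl-base-1.0",
        "SDXL Interactive": "stabilityai/sdxl-turbo",                   # 2-4 step generation
    }

    pipe = AutoPipelineForText2Image.from_pretrained(
        MODELS["SDXL Interactive"], torch_dtype=torch.float16
    ).to("cuda")
    # Turbo-style models run with very few steps and no classifier-free guidance.
    image = pipe("a lighthouse at dusk", num_inference_steps=2, guidance_scale=0.0).images[0]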

Mode

“Text to Image” generates an image based purely on the prompt(s). “Source+Text to Image” lets you specify an additional source layer and a prompt influence. This way you can base your creations on custom footage.
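
In diffusers terms (an illustration, not the plugin’s actual internals), the two modes correspond to a text-to-image versus an image-to-image pipeline, where the strength argument plays the role of the prompt influence:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
    ).to("cuda")

    source = Image.open("source_frame.png").convert("RGB")  # hypothetical frame from the source layer
    # strength near 1.0 follows the prompt, near 0.0 stays close to the source footage
    image = pipe("oil painting of a city", image=source, strength=0.6).images[0]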

Seed

Different seeds produce different output, so play with (or animate) this value to get variations. While the same seed will always produce the same output on one particular machine, it may differ between operating systems or hardware. Be careful if you transfer your project!
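
Determinism comes from seeding the random number generator that produces the initial noise. A minimal diffusers sketch of the same idea:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
    ).to("cuda")

    # The same seed reproduces the same image on the same machine and backend;
    # different hardware or library versions may still give different results.
    generator = torch.Generator("cuda").manual_seed(1234)
    image = pipe("a foggy forest", generator=generator).images[0]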

Steps

How many iterations to run for the image creation. Generation time scales linearly with this value. You’ll get good results from 15-20 steps upwards.
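
In a diffusers sketch (illustrative only), this maps to the num_inference_steps argument:

    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
    ).to("cuda")

    # Each step is one denoising iteration: 40 steps take roughly twice
    # as long as 20, and quality gains flatten out past ~20-30 steps.
    image = pipe("a foggy forest", num_inference_steps=20).images[0]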

Aspect ratio

The approximate aspect ratio of the output. Note that changing the aspect ratio will change the output a lot, and wider aspect ratios might produce artifacts.
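
Under the hood an aspect ratio is realized by changing the output width and height. Stable Diffusion needs dimensions divisible by 8, and resolutions far from the training size tend to produce artifacts; a diffusers sketch:

    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
    ).to("cuda")

    # Roughly 16:9 for a model trained on 512x512 images. Very wide outputs
    # can show duplicated subjects, since the model never saw such framing.
    image = pipe("a mountain panorama", width=768, height=448).images[0]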

Guidance Scale

The “strength” of the prompt. Higher values will try to match the prompt more closely.
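
In diffusers this is the guidance_scale argument (again an illustration, not the plugin’s code):

    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
    ).to("cuda")

    # Low values (~3) give looser, more varied images; high values (~12)
    # follow the prompt more literally but can look over-saturated.
    image = pipe("a red vintage car", guidance_scale=7.5).images[0]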

Optimize for low VRAM

Decreases the model’s pressure on the graphics card’s VRAM at the minor cost of some speed.

Use superresolution

Runs a fast superresolution AI on the output.
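
The plugin’s superresolution model is not documented, but diffusers ships a comparable diffusion-based upscaler that illustrates the step:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionUpscalePipeline

    upscaler = StableDiffusionUpscalePipeline.from_pretrained(
        "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
    ).to("cuda")

    low_res = Image.open("generated_512.png").convert("RGB")  # hypothetical generated output
    # The upscaler is itself prompt-conditioned, so reuse the generation prompt.
    upscaled = upscaler(prompt="a red vintage car", image=low_res).images[0]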

Add text input

Adds another mask which can be used to control the image’s content. Keyframe the Mask Opacity to interpolate between objects and find interesting concepts. If you, for example, have one mask named “a zombie” and another named “a clown” and set both Mask Opacities to 50%, you’ll get an image which is a mix of a zombie and a clown. By keyframing the Mask Opacity value you can therefore animate a transition.

_images/zombie_to_clown_1_1.gif
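
Conceptually, the opacity blend resembles interpolating between the two prompts’ text embeddings before denoising. How the plugin actually mixes prompts is not documented, so the following diffusers sketch is only an assumption:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
    ).to("cuda")

    # Encode both prompts into text embeddings.
    tokens = pipe.tokenizer(
        ["a zombie", "a clown"], padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True, return_tensors="pt",
    )
    with torch.no_grad():
        embeds = pipe.text_encoder(tokens.input_ids.to("cuda"))[0]

    # t = 0.5 mirrors both Mask Opacities at 50%; animating t gives the transition.
    t = 0.5
    mixed = (1.0 - t) * embeds[0:1] + t * embeds[1:2]
    image = pipe(prompt_embeds=mixed).images[0]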

Negative Prompts

Specify negative prompts by adding a “-” before your prompt.

_images/neg_prompt.PNG
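
In diffusers the same idea is the negative_prompt argument, which steers the sampler away from the listed concepts:

    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
    ).to("cuda")

    # Everything after "-" in the plugin corresponds to a negative prompt.
    image = pipe(
        "portrait of an astronaut",
        negative_prompt="blurry, low quality, extra fingers",
    ).images[0]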

Display

Scaling

How the AI’s output is scaled on the effect’s layer.

Interpolation

Choose “Nearest” for a blocky appearance, or “Bilinear” and “Bicubic” for smooth image interpolation.
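
These are standard resampling filters; with Pillow, for example:

    from PIL import Image

    img = Image.open("output.png")  # hypothetical generated image
    blocky = img.resize((1024, 1024), Image.Resampling.NEAREST)  # hard pixel edges
    smooth = img.resize((1024, 1024), Image.Resampling.BICUBIC)  # smooth gradients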

Backend & Performance

Hardware Acceleration (on GPU version of the plugin only)

Run calculations on the GPU. This will give massive speedups compared to CPU mode.
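
In a diffusers sketch, hardware acceleration amounts to moving the pipeline onto the GPU when one is available:

    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-2-1-base")
    # Fall back to the (much slower) CPU when no CUDA device is present.
    pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")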

Lower Precision

Compute with reduced precision where possible. This can save up to half of the memory and give you some speedup.
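
With diffusers, reduced precision usually means loading the weights as float16, which halves model memory versus float32 and is faster on most GPUs:

    import torch
    from diffusers import AutoPipelineForText2Image

    # float16 weights: about half the memory of float32, with a usually
    # invisible loss of numerical precision.
    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
    ).to("cuda")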

Optimize for low VRAM

If you don’t have much graphics card memory, enabling this will try to optimize memory consumption.
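
Which optimizations the plugin applies is not stated; diffusers offers typical examples of the trade-off (some speed for a much smaller VRAM footprint):

    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
    )
    pipe.enable_attention_slicing()   # compute attention in smaller slices
    pipe.enable_model_cpu_offload()   # keep idle submodels in system RAM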

Samples (not available for all settings)

The number of AI samples to calculate. More samples improve the model’s accuracy at the cost of computation time.

Computation Tiles (not available for all settings)

Split the computation into several tiles. This can help if you run out of memory.
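
Diffusers exposes a comparable technique for the decoding stage; whether the plugin tiles the same stage is an assumption:

    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
    ).to("cuda")

    # Decode the image in tiles instead of one large tensor, so peak
    # memory stays bounded even at high resolutions.
    pipe.enable_vae_tiling()
    image = pipe("a mountain panorama", width=1024, height=1024).images[0]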