

🇪🇸🇲🇽 CLICK HERE TO VIEW THIS GUIDE IN SPANISH

 

Index <a name="index"></a>
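
* [Introduction](#intro)
* [Google Colab](#colab)
* [Local Installation (Windows + Nvidia)](#install)
* [Getting Started](#start)
  * [Models](#model)
  * [VAEs](#vae)
  * [Prompts](#prompt)
  * [Generation parameters](#gen)
* [Extensions](#extensions)
* [Loras](#lora)
* [Upscaling](#upscale)
* [Scripts](#imgscripts)
* [ControlNet](#controlnet)
* [Lora Training for beginners](#train)
* [...vtubers?](#vtubers)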

 

Introduction <a name="intro"></a>

Stable Diffusion is a very powerful AI image generation software you can run on your own home computer. It uses "models" which function like the brain of the AI, and can make almost anything, provided someone has trained it to do so. The biggest uses are anime art, photorealism, and NSFW content.

The images you create may be used for any purpose, depending on the license of the model you used. Whether they are "yours" in a legal sense varies by local laws and is often inconclusive. Neither I nor any of the people involved in Stable Diffusion or its models are responsible for anything you make, and you are expressly forbidden from creating illegal or harmful content.

This guide was finished in March 2023 and was last revised in October 2023. One month is like a year in AI time, so hopefully it is still useful by the time you read it.

 

Google Colab <a name="colab"></a>

The easiest way to use Stable Diffusion is through Google Colab. It borrows Google's computers to use AI, with variable time limitations, usually a few hours every day. You will need at least one Google account and we will be using Google Drive to store your settings and resulting images.

Revision: Google Colab now requires a subscription to run Stable Diffusion instances.

If you instead want to run it on your own computer, scroll down ▼.

  1. Open THIS PAGE.

  2. Near the top, click Copy to Drive. Wait for the new window to open and close the old one. This is now your personalized colab which will save your settings, and you should open it from your Google Drive from now on. If the original receives an update you'll have to replace yours to benefit from it.

  3. Turn on the following options under Configurations: output_to_drive, configs_in_drive, no_custom_theme. Then, turn on the following options under Models, VAEs, etc: anything_vae, wd_vae, sd_vae.

  4. If you're already familiar with Stable Diffusion, you may paste links to your desired resources in the custom_urls text box. We will add some links later in this guide. Links must be direct downloads to each file (ideally from civitai or huggingface), and must be separated by commas.

  5. Press the play button to the left, anywhere in the first section of the page labeled Start 🚀. Wait a few minutes for it to finish, while a few progress messages appear near the bottom. Then, a public link will be created, which you can open in a new tab to start using Stable Diffusion. Keep the colab tab open! (On mobile try the trick at the bottom of the colab to keep the tab open)

  6. You can now make some decent anime images thanks to the default Anything 4.5 model. But we can do more. Also, what are all of these options? Scroll down ▼ to get started.

 

Local Installation (Windows + Nvidia) <a name="install"></a>

To run Stable Diffusion on your own computer you'll need at least 16 GB of RAM and 4 GB of VRAM (preferably 8). I will only cover the case where you are running Windows 10/11 and using an NVIDIA graphics card series 16XX, 20XX or 30XX (though 10XX also work). My apologies to AMD, Linux, and Mac users, but their cases are harder to cover. If you don't meet the hardware requirements, you can just proceed with the Google Colab method above ▲.

  1. Get the latest release from this page.

  2. Run the installer, choose an easy and accessible location to install to, and wait for it to finish.

  3. Run the program. You will see a few options. First, turn on medvram and xformers. You may skip medvram if you have 12 or more GB of VRAM. (If you installed the webui manually rather than with this launcher, see the note after these steps.)

  4. Click Launch and wait for a browser window to open with the interface. It may take a while the first time.

  5. The page is now open. It's your own private website. The starting page is where you can make your images. But first, we'll go to the Settings tab. There will be sections of settings on the left.

    • In the Stable Diffusion section, scroll down and increase Clip Skip from 1 to 2. This is said to produce better images, especially for anime.
    • In the User Interface section, scroll down to Quicksettings list and change it to sd_model_checkpoint, sd_vae
    • Scroll back up, click the big orange Apply settings button, then Reload UI next to it.
  6. You are more than ready to generate some images, but you only have the basic model available. It's not great, at most it can make some paintings. Also, what are all of these options? See below ▼ to get started.
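
A note for manual installs: if you set up the webui yourself rather than with the installer above, there are no launch option checkboxes; as far as I know, the equivalent options are usually set as command-line arguments in your webui-user.bat file, for example:

```
set COMMANDLINE_ARGS=--medvram --xformers
```

Remove --medvram from that line if you have 12 GB of VRAM or more.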

 

Getting Started <a name="start"></a>

Before or after generating your first few images, you will want to take a look at the information below to improve your experience and results.
If you followed the instructions above, the top of your page should look similar to this:

(Image: the top of the WebUI page, with the checkpoint and VAE dropdowns)

Here you can select your checkpoint and VAE. We will go over what these are and how you can get some. The colab has additional settings here too, you should ignore them for now.

  1. Models <a name="model"></a>

    The model, also called checkpoint, is the brain of your AI, designed for the purpose of producing certain types of images. There are many options, most of which are on civitai. But which to choose? These are my recommendations:

    • For anime, MeinaMix and its family of models should serve most purposes very well. I also merged my own model called Limbo Mix which you may try if you'd like.
    • For general art go with DreamShaper, there are few options quite like it in terms of creativity. An honorable mention goes to Pastel Mix, which has a beautiful and unique aesthetic with the addition of anime.
    • For photorealism go with Deliberate. It can do almost anything, but especially photographs. Very intricate results.
    • The Uber Realistic Porn Merge is self-explanatory.

    If you're using the colab in this guide, copy the direct download link to the file and paste it in the text box labeled custom_urls. Multiple links are separated by commas.
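
    As an illustration, the contents of the box could look something like this (these links are placeholders, not real files; copy the actual direct-download URLs from each model's page):

```
https://civitai.com/api/download/models/12345, https://huggingface.co/SomeUser/SomeModel/resolve/main/SomeModel-pruned-fp16.safetensors
```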

    If you're running the program locally, the models normally go into the stable-diffusion-webui/models/Stable-diffusion folder.

    Please note that checkpoints in the format .safetensors are safe to use while .ckpt may contain viruses, so be careful. Additionally, when choosing models you may have a choice between fp32, fp16 and pruned. They all produce the same images within a tiny margin of error, so just go with the smallest file (pruned-fp16). If you want to use them for training or merging, go with the largest one instead.
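
    If you're ever unsure whether a .safetensors file you downloaded is fp16 or fp32, here is a small optional Python sketch that peeks at the tensor dtypes. It assumes you have the safetensors and torch packages installed, and the filename is just a placeholder:

```python
# Count the tensor dtypes inside a checkpoint to tell fp16 from fp32.
from collections import Counter
from safetensors import safe_open

path = "model.safetensors"  # placeholder: point this at your checkpoint

dtypes = Counter()
with safe_open(path, framework="pt", device="cpu") as f:
    for key in f.keys():
        dtypes[str(f.get_tensor(key).dtype)] += 1

print(dtypes)  # a pruned fp16 model will report mostly torch.float16
```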

    Tip: Whenever you place a new file manually you can either restart the UI at the bottom of the page or press the small 🔃 button next to its dropdown.

  2. VAEs <a name="vae"></a>

    Most checkpoints don't come with a VAE built in. The VAE is a small separate model, which "converts your image into human format". Without it, you'll get faded colors and ugly eyes, among other things.

    If you're using the colab in this guide, you should already have the below VAEs, as I told you to select them before running.

    Most people use one of 3 different VAEs:

    • anything vae, also known as the orangemix vae. Used to be the most popular for anime, but it's the least vibrant of all VAEs.
    • vae-ft-mse, the latest from Stable Diffusion itself. Used by photorealism models and such.
    • kl-f8-anime2, also known as the Waifu Diffusion VAE, it is older and produces more saturated results.

    The VAEs normally go into the stable-diffusion-webui/models/VAE folder.

    If you did not follow this guide up to this point, you will have to go into the Settings tab, then the Stable Diffusion section, to select your VAE.

    Tip: Whenever you place a new file manually you can either restart the UI at the bottom of the page or press the small 🔃 button next to its dropdown.
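
    For a local install, the files from the last two sections end up laid out roughly like this (the filenames are just examples):

```
stable-diffusion-webui/
└── models/
    ├── Stable-diffusion/          ← checkpoints go here
    │   └── MeinaMixV10.safetensors
    └── VAE/                       ← VAEs go here
        ├── kl-f8-anime2.safetensors
        └── vae-ft-mse-840000-ema-pruned.safetensors
```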

  3. Prompts <a name="prompt"></a>

    On the first tab, txt2img, you'll be making most of your images. This is where you'll find your prompt and negative prompt.
    Stable Diffusion is not like Midjourney or other popular image generation software, you can't just ask it what you want. You have to be specific. Very specific.
    Most people have found a prompt that works for them and they swear by it, often recommended by other people. I will show you my own personal example of a prompt and negative prompt:

    Revision: These generic prompts have become less and less useful, as modern models don't really need them to work nicely. A simple negative prompt is often all you need.

    • Anime

      • 2d, masterpiece, best quality, anime, highly detailed face, highly detailed background, perfect lighting
      • EasyNegative, worst quality, low quality, 3d, realistic, photorealistic, (loli, child, teen, baby face), zombie, animal, multiple views, text, watermark, signature, artist name, artist logo, censored
    • Photorealism

      • best quality, 4k, 8k, ultra highres, raw photo in hdr, sharp focus, intricate texture, skin imperfections, photograph of
      • EasyNegative, worst quality, low quality, normal quality, child, painting, drawing, sketch, cartoon, anime, render, 3d, blurry, deformed, disfigured, morbid, mutated, bad anatomy, bad art
    • EasyNegative: <a name="promptneg"></a>The negative prompts above use EasyNegative, which is an embedding or "magic word" that encodes many bad things to make your images better. Otherwise you'd have to use a huge negative prompt.

      • If you're using the colab in this guide you already have this installed. Otherwise, you will have to download this tiny file, put it in your stable-diffusion-webui/embeddings folder, then go to the bottom of your WebUI page and click Reload UI. It will then work when you type that word.

    A comparison with and without these negative prompts including EasyNegative can be seen further down ▼.

    (Image: the prompt and negative prompt boxes in txt2img)

    After a "base prompt" like the above, you may then start typing what you want. For example young woman in a bikini in the beach, full body shot. Feel free to add other terms you don't like to your negatives such as old, ugly, futanari, furry, etc.

    <a name="promptweight"></a>One important technique when writing prompts are emphasis and de-emphasis. When you surround something in (parentheses), it will get more emphasis or weight in your resulting image, basically telling the AI that part is more important. The normal weight for every word is 1, and each parentheses will multiply by 1.1 (you can use multiple). You can also specify the weight yourself, like this: (full body:1.4). You can also go below 1 to de-emphasize a word: [brackets] will multiply by 0.9, but you'll still need parentheses to go lower, like (this:0.5).

    Also note that hands and feet are famously difficult for AI to generate. Models have become better at them over time, but you may need to do photoshopping, inpainting, or advanced techniques with ControlNet ▼ to get it right.

  4. Generation parameters <a name="gen"></a>

    The rest of the parameters in the starting page will look something like this:

    (Image: the generation parameters in txt2img)

    • Sampling method: This is the algorithm that formulates your image, and each one produces different results. The default of Euler a is often the best. There are also very good results from DPM++ 2M Karras and DPM++ SDE Karras. See below for a comparison.
    • Sampling steps: The schedule is "calculated" beforehand, so more steps don't always mean more detail. I always go with 30; you may go from 20 to 50 and find consistently good results. See below for a comparison.
    • Width and Height: 512x512 is the default, and you should almost never go above 768 in either direction as it may distort and deform your image. To produce bigger images see Hires fix.
    • Batch Count and Batch Size: Batch size is how many images your graphics card will generate at the same time, which is limited by its VRAM. Batch count is how many times to repeat that batch size. Batches have consecutive seeds; more on seeds below, and there is a short sketch of how they line up after this list.
    • CFG Scale: How closely the AI follows your prompt; lower values produce more creative results. You should almost always stick to 7, but 4 to 10 is an acceptable range.
    • Seed: A number that guides the creation of your image. The same seed with the same prompt and parameters produces the same image every time, except for small details and under some circumstances.
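
    Here is the seed arithmetic mentioned above as a minimal Python sketch (assuming the default behaviour where every image in a run gets the next consecutive seed):

```python
# Which seeds a run will use, assuming consecutive seeding.
# Total images = batch count x batch size.

def planned_seeds(seed, batch_size, batch_count):
    all_seeds = [seed + i for i in range(batch_size * batch_count)]
    # split the flat list into one sub-list per batch
    return [all_seeds[b * batch_size:(b + 1) * batch_size] for b in range(batch_count)]

print(planned_seeds(seed=1234, batch_size=4, batch_count=2))
# [[1234, 1235, 1236, 1237], [1238, 1239, 1240, 1241]]
```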

    Hires fix: Lets you create larger images without distortion. Often used at 2x scale. When selected, more options appear:

    • Upscaler: The algorithm to upscale with. Latent and its variations produce creative and detailed results, but you may also like R-ESRGAN 4x+ and its anime version. More explanation and some comparisons further down ▼.
    • Hires steps: I recommend at least half as many as your sampling steps. Higher values aren't always better, and they take a long time, so be conservative here. (There is a small worked example after this list.)
    • Denoising strength: The most important parameter. Near 0.0, no detail will be added to the image. Near 1.0, the image will be changed completely. I recommend something between 0.2 and 0.6 depending on the image, to add enough detail as the image gets larger, without destroying any original details you like.
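
    As a quick worked example of the rules of thumb above (a portrait-sized base image, 30 sampling steps, a 2x upscale):

```python
# Hires fix arithmetic for a 2x pass, following the recommendations above.
base_width, base_height = 512, 768
upscale_by = 2
sampling_steps = 30

final_width = base_width * upscale_by     # 1024
final_height = base_height * upscale_by   # 1536
hires_steps = sampling_steps // 2         # 15, at least half the sampling steps

print(final_width, final_height, hires_steps)
```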

    Script: Lets you access useful features and extensions, such as X/Y/Z Plot ▼ which lets you compare images with varying parameters on a grid. Very powerful.

    Here is a comparison of a few popular samplers and various sampling steps:

    <details> <summary>(Click) Sampler comparison - Photography</summary>

    (Image: sampler comparison grid, photography) </details>

    <details> <summary>(Click) Sampler comparison - Anime</summary>

    (Image: sampler comparison grid, anime) </details>

    An explanation of the samplers used above: Euler is a basic sampler. DDIM is a faster version, while DPM++ 2M Karras is an improved version. Meanwhile we have Euler a or "ancestral" which produces more creative results, and DPM++ 2S a Karras which is also ancestral and thus similar. Finally DPM++ SDE Karras is the slowest and quite unique. There are many other samplers not shown here but most of them are related.

 

Extensions <a name="extensions"></a>

Stable Diffusion WebUI supports extensions to add additional functionality and quality of life. These can be added by going into the Extensions tab, then Install from URL, and pasting the links found here or elsewhere. Then, click Install and wait for it to finish. Then, go to Installed and click Apply and restart UI.

(Image: the Extensions tab)

Here are some useful extensions. If you're using the colab in this guide you already have most of these, otherwise I hugely recommend you manually add the first 2:

 

Loras <a name="lora"></a>

LoRA or Low-Rank Adaptation is a form of Extra Network and the latest technology that lets you append a sort of smaller model to any of your full models. They are similar to embeddings, one of which you might've seen earlier ▲, but Loras are larger and often more capable. Technical details omitted.

Loras can represent a character, an artstyle, poses, clothes, or even a human face (though I do not endorse this). Checkpoints are usually capable enough for general work, but when it comes to specific details with little existing examples, you'll need a Lora. They can be downloaded from civitai or elsewhere (NSFW) and are usually between 9 MB and 144 MB. Note that bigger Loras are not necessarily better. They come in .safetensors format, same as most checkpoints.

Place your Lora files in the stable-diffusion-webui/models/Lora folder, or if you're using the colab in this guide paste the direct download link into the custom_urls text box. Then, look for the 🎴 Show extra networks button below the big orange Generate button. It will open a new section either directly below or at the very bottom. Click on the Lora tab and press the Refresh button to scan for new Loras. When you click a Lora in that menu it will get added to your prompt, looking like this: <lora:filename:1>. The start is always the same. The filename will be the exact filename in your system without the .safetensors extension. Finally, the number is the weight, like we saw earlier ▲. Most Loras work between 0.5 and 1 weight, and too high values might "fry" your image, especially if using multiple Loras at the same time.
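
For example, with a hypothetical Lora file named thickline.safetensors, a prompt using it at 0.7 weight could look like this:

```
masterpiece, best quality, a young woman walking down a city street <lora:thickline:0.7>
```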

(Image: the extra networks panel with the Lora tab)

An example of a Lora is Thicker Lines Anime Style, which is perfect if you want your images to look more like traditional anime.

There are other types of Lora under the umbrella term Lycoris, but webui treats them the same now, and you don't need to know much about it as the end user.

 

Upscaling <a name="upscale"></a>

As mentioned in Generation Parameters ▲, normally you shouldn't go above 768 width or height when generating an image. Instead you should use Hires fix with your choice of upscaler and an appropriate denoising level. Hires fix is limited by your VRAM however, so you may be interested in Ultimate Upscaler ▼ to go even larger.

You can download additional upscalers and put them in your stable-diffusion-webui/models/ESRGAN folder. They will then be available in Hires fix, Ultimate Upscaler, and Extras.

The colab in this guide comes with several of them, including Remacri, which is a great all-around upscaler for all sorts of images.

Here are some comparisons. All of them were done at 0.4 denoising strength. Note that some of the differences may be completely up to random chance.

<details> <summary>(Click) Comparison 1: Anime, stylized, fantasy</summary>

(Images: original and upscaler comparison) </details>

<details> <summary>(Click) Comparison 2: Anime, detailed, soft lighting</summary>

(Images: original and upscaler comparison) </details>

<details> <summary>(Click) Comparison 3: Photography, human, nature</summary>

(Images: original and upscaler comparison) </details>

 

Scripts <a name="imgscripts"></a>

Scripts can be found at the bottom of your generation parameters in txt2img or img2img.

 

ControlNet <a name="controlnet"></a>

ControlNet is an extremely powerful recent technology for Stable Diffusion. It lets you analyze information about any previously existing image and use it to guide the generation of your AI images. We'll see what this means in a moment.

If you're using the colab in this guide, you should enable the all_control_models option. Otherwise, you should first install the ControlNet extension ▲, then go here to download some models which you'll need to place in stable-diffusion-webui/extensions/sd-webui-controlnet/models. I recommend at least Canny, Depth, Openpose and Scribble, which I will show here.

I will demonstrate how ControlNet may be used. For this I chose a popular image online as our "sample image". It's not necessary for you to follow along, but you can download the images and put them in the PNG Info tab to view their generation data.

First, you must scroll down in the txt2img page and click on ControlNet to open the menu. Then, click Enable, and pick a matching preprocessor and model. To start with, I chose Canny for both. Finally I upload my sample image. Make sure not to click over the sample image or it will start drawing. We can ignore the other settings.

(Image: the ControlNet menu and example results for each method)

You will notice that there are 2 results for each method except Scribble. The first is an intermediate step called the preprocessed image, which is then used to produce the final image. You can supply the preprocessed image yourself, in which case you should set the preprocessor to None. This is extremely powerful with external tools such as Blender and Photoshop.

In the Settings tab there is a ControlNet section where you can enable multiple controlnets at once. One particularly good use is when one of them is Openpose, to get a specific character pose in a specific environment, or with specific hand gestures or details. Observe:

<details> <summary>(Click) Openpose+Canny example</summary>

(Image: Openpose + Canny result) </details>

You can also use ControlNet in img2img, in which both the input image and the sample image will have an effect on the result. I do not have much experience with this method.

There are also alternative diff versions of each ControlNet model, which produce slightly different results. You can try them if you want, but I personally haven't.

 

Lora Training for beginners <a name="train"></a>

Training a Lora ▲ is regarded as a difficult task. However, my new guide covers everything you need to know to get started for free, thanks to Google Colab:

🎴 Read my Lora making guide here

You can also train a Lora on your own computer if you have at least 8 GB of VRAM. For that, I will list a few resources below:

 

...vtubers? <a name="vtubers"></a>

That's it, that's the end of this guide for now. I'd be grateful if you'd like to contribute to the missing topics, such as:

Thank you for reading!

I have a separate repo that aggregates vtuber Loras, especially Hololive, in case you're interested in that.

Cheers.