Stable Diffusion

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. This model card gives an overview of all available model checkpoints. For more in-detail model cards, please have a look at the model repositories listed under Model Access.

Stable Diffusion Version 1

For the first version, four model checkpoints are released. Higher versions have been trained for longer and are thus usually better in terms of image generation quality than lower versions. More specifically:

stable-diffusion-v1-1: 237,000 steps at resolution 256x256 on laion2B-en, followed by 194,000 steps at resolution 512x512 on laion-high-resolution (170M examples from LAION-5B with resolution >= 1024x1024).
stable-diffusion-v1-2: Resumed from stable-diffusion-v1-1. 515,000 steps at resolution 512x512 on laion-improved-aesthetics (a subset of laion2B-en, filtered to images with an original size >= 512x512, an estimated aesthetics score above 5.0, and an estimated watermark probability below 0.5).
stable-diffusion-v1-3: Resumed from stable-diffusion-v1-2. 195,000 steps at resolution 512x512 on laion-improved-aesthetics, with 10% dropping of the text-conditioning to improve classifier-free guidance sampling.
stable-diffusion-v1-4: Resumed from stable-diffusion-v1-2. 225,000 steps at resolution 512x512 on laion-aesthetics v2 5+, with 10% dropping of the text-conditioning to improve classifier-free guidance sampling.

Model Access

Each checkpoint can be used with both Hugging Face's 🧨 Diffusers library and the original Stable Diffusion GitHub repository. Note that you have to "click-request" access on each respective model repository.

🤗's 🧨 Diffusers library    Stable Diffusion GitHub repository
stable-diffusion-v1-1        stable-diffusion-v-1-1-original
stable-diffusion-v1-2        stable-diffusion-v-1-2-original
stable-diffusion-v1-3        stable-diffusion-v-1-3-original
stable-diffusion-v1-4        stable-diffusion-v-1-4-original
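As a minimal sketch of the Diffusers route, the snippet below loads the stable-diffusion-v1-4 checkpoint and generates an image from a text prompt. It assumes the diffusers, transformers, and torch packages are installed, that you have accepted the license on the model repository, and that you are logged in to the Hugging Face Hub (e.g. via huggingface-cli login); the prompt and output filename are illustrative.

    # Minimal text-to-image sketch with 🧨 Diffusers.
    # Assumes a CUDA GPU; drop the float16 dtype and the .to("cuda")
    # call to run (slowly) on CPU.
    import torch
    from diffusers import StableDiffusionPipeline

    # Loading the checkpoint requires having accepted the license on
    # the model repository and being logged in to the Hugging Face Hub.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")

    # Prompt and filename are illustrative.
    prompt = "a photograph of an astronaut riding a horse"
    image = pipe(prompt).images[0]
    image.save("astronaut_rides_horse.png")

The *-original checkpoints are instead distributed as single .ckpt files for use with the sampling scripts in the original Stable Diffusion GitHub repository.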

Demo

To quickly try the model, you can use the Stable Diffusion Space.

License

The CreativeML OpenRAIL M license is an Open RAIL M license, adapted from the work that BigScience and the RAIL Initiative are jointly carrying out in the area of responsible AI licensing. See also the article about the BLOOM Open RAIL license on which our license is based.

Citation

    @InProceedings{Rombach_2022_CVPR,
        author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
        title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        month     = {June},
        year      = {2022},
        pages     = {10684-10695}
    }

This model card was written by Robin Rombach and Patrick Esser and is based on the DALL-E Mini model card.