stable-diffusion stable-diffusion-diffusers stable-diffusion-xl lora diffusers

<style> .title-container { display: flex; justify-content: center; align-items: center; height: 100vh; /* Adjust this value to position the title vertically */ } .title { font-size: 3em; text-align: center; color: #333; font-family: 'Helvetica Neue', sans-serif; text-transform: uppercase; letter-spacing: 0.1em; padding: 0.5em 0; background: transparent; } .title span { background: -webkit-linear-gradient(45deg, #7ed56f, #28b485); -webkit-background-clip: text; -webkit-text-fill-color: transparent; } .custom-table { table-layout: fixed; width: 100%; border-collapse: collapse; margin-top: 2em; } .custom-table td { width: 50%; vertical-align: top; padding: 10px; box-shadow: 0px 0px 10px 0px rgba(0,0,0,0.15); } .custom-image { width: 512px; height: 512px; object-fit: cover; border-radius: 10px; transition: transform .2s; margin-bottom: 1em; } .custom-image:hover { transform: scale(1.05); } </style> <h1 class="title"><span>Character Maggie Q SDXL</span></h1>

<table class="custom-table"> <tr> <td> <a href="https://huggingface.co/frank-chieng/maggieQ/blob/main/sample_examples/maggieQ.png"> <img class="custom-image" src="https://huggingface.co/frank-chieng/maggieQ/resolve/main/sample_examples/maggieQ.png" alt="sample1"> </a> <a href="https://huggingface.co/frank-chieng/maggieQ/blob/main/sample_examples/maggieQ%20(1).png"> <img class="custom-image" src="https://huggingface.co/frank-chieng/maggieQ/resolve/main/sample_examples/maggieQ%20(1).png" alt="sample2"> </a> </td> </tr> </table> <hr>

## Overview

Character LoRA Maggie Q is a LoRA trained on the SDXL 1.0 base model, a latent text-to-image diffusion model. It was fine-tuned with a learning rate of 1e-5 for 3000 total steps at a batch size of 4 on a curated dataset of high-quality Maggie Q images. This model is derived from Stable Diffusion XL 1.0.
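The LoRA weights are distributed as a single `.safetensors` file in this repository. If you only need that file (for example, to use it outside of diffusers), one way to fetch it is with `huggingface_hub`; the repo id and weight filename below are the same ones used in the Diffusers example further down, and the local path is simply wherever `huggingface_hub` caches downloads.

```python
# Sketch: downloading the LoRA .safetensors file directly from the Hub.
# Repo id and filename match the Diffusers example below.
from huggingface_hub import hf_hub_download

lora_path = hf_hub_download(
    repo_id="frank-chieng/maggieQ",
    filename="sdxl_lora_maggie_Q.safetensors",
)
print(lora_path)  # local cache path of the downloaded weights file
```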


<hr>

## How to Use

Recommended negative prompt:

```
poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, bad anatomy, watermark, signature, cut off, low contrast, underexposed, overexposed, bad art, beginner, amateur, distorted face
```

Recommended positive prompt (quality tags):

```
masterpiece, best quality
```

<hr>

## Google Colab

Open In Colab

## 🧨 Diffusers

Make sure to upgrade `diffusers` to >= 0.18.2:

```bash
pip install diffusers --upgrade
```

In addition, make sure to install `transformers`, `safetensors`, `accelerate`, and `invisible_watermark`:

```bash
pip install invisible_watermark transformers accelerate safetensors
```
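If you are unsure which diffusers version is installed, a quick check from Python looks like the following (it assumes the `packaging` helper, which ships as a dependency of these libraries, is available):

```python
# Quick check that the installed diffusers version meets the >= 0.18.2 requirement.
import diffusers
from packaging import version

print(diffusers.__version__)
assert version.parse(diffusers.__version__) >= version.parse("0.18.2"), \
    "Please upgrade: pip install -U diffusers"
```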

Running the pipeline (if you don't swap the scheduler, it will run with the default `EulerDiscreteScheduler`; in this example we swap it to `EulerAncestralDiscreteScheduler`):

```bash
pip install -q --upgrade diffusers invisible_watermark transformers accelerate safetensors
pip install huggingface_hub
```

```python
import torch
from huggingface_hub import notebook_login
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

# Log in to the Hugging Face Hub so the weights can be downloaded.
notebook_login()

base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
lora_model = "frank-chieng/maggieQ"

# Load the SDXL base pipeline in fp16.
pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    use_safetensors=True,
)

# Swap the default scheduler for EulerAncestralDiscreteScheduler.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Load the character LoRA weights on top of the base model.
pipe.load_lora_weights(lora_model, weight_name="sdxl_lora_maggie_Q.safetensors")
pipe.to("cuda")

prompt = "professional fashion close-up portrait photography of a young beautiful maggie Q at German restaurant during Sunset, Nikon Z9"
negative_prompt = "3d render"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=7,
    target_size=(1024, 1024),
    original_size=(4096, 4096),
    num_inference_steps=28,
).images[0]
image.save("maggieQ.png")
```
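Depending on your diffusers version, you can also tune how strongly the LoRA influences the output. The snippet below is a sketch using the `cross_attention_kwargs` scale supported by recent diffusers releases; it assumes `pipe`, `prompt`, and `negative_prompt` from the example above, and the 0.8 value is only an illustrative starting point.

```python
# Sketch: scaling down the LoRA influence at inference time.
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=1024,
    height=1024,
    guidance_scale=7,
    num_inference_steps=28,
    cross_attention_kwargs={"scale": 0.8},  # 1.0 = full LoRA effect, lower = weaker
).images[0]
image.save("maggieQ_scaled.png")
```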

<hr>

## Limitations

This model inherits the limitations of Stable Diffusion XL 1.0.