Tags: stable-diffusion, stable-diffusion-diffusers, text-to-image, diffusers

Text-to-image finetuning - yeonsikc/model_out8

This pipeline was finetuned from runwayml/stable-diffusion-v1-5 on the yeonsikc/sample dataset. Below are some example images generated with the finetuned pipeline using the following prompts:

- 1girl, bangs, black_hair, blunt_bangs, japanese_clothes, kimono, long_sleeves, looking_at_viewer, obi, red_kimono, sash, short_hair, simple_background, smile, solo, upper_body, white_background
- camouflage, facial_hair, helmet, male_focus, military, military_uniform, simple_background, uniform, weapon
- 1boy, blonde_hair, blue_eyes, long_sleeves, male_focus, shirt, simple_background, solo, upper_body, white_background

[val_imgs_grid: grid of example images generated from the prompts above]

Pipeline usage

You can use the pipeline like so:

from diffusers import DiffusionPipeline
import torch

# Load the finetuned pipeline in half precision and move it to the GPU
# (float16 inference generally requires a CUDA device).
pipeline = DiffusionPipeline.from_pretrained("yeonsikc/model_out8", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")

prompt = "1girl, bangs, black_hair, blunt_bangs, japanese_clothes, kimono, long_sleeves, looking_at_viewer, obi, red_kimono, sash, short_hair, simple_background, smile, solo, upper_body, white_background"
image = pipeline(prompt).images[0]
image.save("my_image.png")
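For reproducible outputs, you can pass a seeded generator and adjust the usual sampling parameters. The sketch below reuses one of the validation prompts from above; the seed, step count, and guidance scale are illustrative values, not the settings used to produce the example images:

import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("yeonsikc/model_out8", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")

# Illustrative sampling settings; not the values used for the card's example grid.
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipeline(
    "1boy, blonde_hair, blue_eyes, long_sleeves, male_focus, shirt, simple_background, solo, upper_body, white_background",
    num_inference_steps=30,
    guidance_scale=7.5,
    generator=generator,
).images[0]
image.save("seeded_image.png")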

Training info

The key hyperparameters used during training, along with the full CLI arguments and environment details, are available on the wandb run page for this training run.
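If you only need the trained weights rather than the whole pipeline, note that the standard diffusers text-to-image fine-tuning script updates only the UNet. A minimal sketch, assuming this repository follows the usual layout with a unet subfolder and that the VAE and text encoder are unchanged from the base model:

import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

# Assumption: only the UNet was finetuned; the VAE and text encoder come from the base model.
unet = UNet2DConditionModel.from_pretrained(
    "yeonsikc/model_out8", subfolder="unet", torch_dtype=torch.float16
)
pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16
)
pipeline = pipeline.to("cuda")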