Tags: `coreml`, `stable-diffusion`, `stable-diffusion-diffusers`

# Core ML Converted Model

This model was converted to Core ML for use on Apple Silicon devices by following Apple's instructions here. Provide the model to an app such as Mochi Diffusion to generate images.

The `split_einsum` version is compatible with all compute unit options, including the Neural Engine. The `original` version is only compatible with the CPU & GPU option.

# 🧩 Paper Cut model V1

This is a fine-tuned Stable Diffusion model trained on Paper Cut images.

Use **PaperCut** in your prompts.
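For illustration, a small helper (hypothetical, not part of the model or any library) can prepend the trained token to any subject so it is never forgotten in a prompt:

```python
def papercut_prompt(subject, style_tags=()):
    """Build a prompt that leads with the trained token "PaperCut".

    Hypothetical convenience helper; the model only requires that the
    token appear somewhere in the prompt text.
    """
    return " ".join(["PaperCut", subject, *style_tags])

# papercut_prompt("R2-D2") -> "PaperCut R2-D2"
```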

Sample images:

*(sample images omitted)*

Based on the Stable Diffusion 1.5 model.

## 🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information, please have a look at the Stable Diffusion documentation.

You can also export the model to ONNX, MPS, and/or Flax/JAX.

```python
from diffusers import StableDiffusionPipeline
import torch

# Load the fine-tuned pipeline in half precision and move it to the GPU
model_id = "Fictiverse/Stable_Diffusion_PaperCut_Model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Include the trained token "PaperCut" in the prompt
prompt = "PaperCut R2-D2"
image = pipe(prompt).images[0]

image.save("./R2-D2.png")
```
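For reproducible results you can pass a seeded generator and tune the usual pipeline parameters. The wrapper below is a sketch (the function name and default values are assumptions, but `generator`, `num_inference_steps`, and `guidance_scale` are standard `StableDiffusionPipeline` call arguments):

```python
import torch

def generate(pipe, prompt, seed=0, steps=30, guidance=7.5):
    # Hypothetical wrapper: a seeded torch.Generator makes runs reproducible
    # for a fixed pipeline, prompt, and parameter set.
    gen = torch.Generator(device="cpu").manual_seed(seed)
    result = pipe(
        prompt,
        num_inference_steps=steps,   # more steps: slower, often cleaner output
        guidance_scale=guidance,     # how strongly the prompt is followed
        generator=gen,
    )
    return result.images[0]

# Usage with the pipeline created above:
# image = generate(pipe, "PaperCut R2-D2", seed=42)
```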

## ✨ Community spotlight

@PiyarSquare: PiyarSquare video