Tags: coreml, stable-diffusion, text-to-image, not-for-all-audiences

Core ML Converted SDXL Model:

<br>

DreamShaper-XL1.0-Alpha2_SDXL_8-bit:

Source(s): CivitAI<br>

This is an SDXL base model converted to Core ML and quantized to 8 bits.

Fine-tuned on SDXL 1.0.
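For reference, below is a minimal sketch of how a converted Core ML `.mlpackage` (for example, the UNet) could be weight-quantized to 8 bits with coremltools. The file names are placeholders, and this repo's actual conversion was presumably done with Apple's ml-stable-diffusion tooling rather than this exact snippet.

```python
# Hypothetical sketch: symmetric 8-bit weight quantization of an
# already-converted Core ML Stable Diffusion model with coremltools (>= 7.0).
import coremltools as ct
import coremltools.optimize.coreml as cto

# Load a previously converted mlpackage (file name is a placeholder).
mlmodel = ct.models.MLModel("Stable_Diffusion_XL_Unet.mlpackage")

# Configure linear symmetric 8-bit quantization for all supported ops.
config = cto.OptimizationConfig(
    global_config=cto.OpLinearQuantizerConfig(mode="linear_symmetric")
)

# Quantize the weights and save the smaller 8-bit package.
quantized = cto.linear_quantize_weights(mlmodel, config=config)
quantized.save("Stable_Diffusion_XL_Unet_8bit.mlpackage")
```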

Even though this is still an alpha version, I think it's already much better than the first alpha, which was based on SDXL 0.9.

Basically, I do the first generation with DreamShaperXL, then upscale 2x, and finally run an img2img step with either DreamShaperXL itself or a 1.5 model I find well suited, such as DreamShaper 7 or AbsoluteReality.
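For illustration, here is a minimal sketch of that highres-fix workflow (generate, upscale 2x, img2img) written with the Hugging Face diffusers API rather than Core ML. The model ID, prompt, step count, and strength value are placeholders, not settings from this repo.

```python
# Sketch of the highres-fix workflow: txt2img -> 2x upscale -> img2img.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

model_id = "Lykon/dreamshaper-xl-1-0"  # placeholder Hub ID for DreamShaperXL
prompt = "a dragon perched on a castle tower, detailed, dramatic lighting"

# 1) First generation with DreamShaperXL.
txt2img = StableDiffusionXLPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")
image = txt2img(prompt, num_inference_steps=30).images[0]

# 2) Upscale 2x (a plain PIL resize here; a dedicated upscaler also works).
image = image.resize((image.width * 2, image.height * 2))

# 3) img2img pass over the upscaled image to add detail.
img2img = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")
refined = img2img(prompt, image=image, strength=0.35).images[0]
refined.save("dragon_highres.png")
```

A low img2img strength (roughly 0.3 to 0.5) keeps the original composition while letting the model redraw fine detail at the higher resolution.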

What does it do better than SDXL 1.0?

- No need for the refiner; just do a highres fix (upscale + img2img)
- Better-looking people
- Less blurry edges
- 75% better dragons 🐉
- Better NSFW
