text-to-image dalle-mini

This is the dalle-mini/dalle-mini text-to-image model fine-tuned on 120k <title, image> pairs drawn from articles on the Medium blogging platform. The full dataset is available on Kaggle: Medium Articles Dataset (128k): Metadata + Images.

The goal of this model is to probe the ability of text-to-image models to operate on abstract text prompts (as Medium titles usually are), as opposed to concrete descriptions of the envisioned visual scene.

More context here.