# music

This is a ControlNet model that converts main-melody spectrograms into accompaniment spectrograms. It is trained on top of Riffusion, using music downloaded from YouTube Music.

The main melody and accompaniment are separated with Spleeter. The model assumes the main melody is vocals; main melodies other than vocals have not been tested. The training data contains vocals in Traditional Chinese, English, and Japanese.
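Since the model operates on spectrogram images in the Riffusion style, the melody audio must first be rendered as an image before it can condition the ControlNet. Below is a minimal sketch of that step using a plain STFT magnitude spectrogram mapped to an 8-bit grayscale image; note this is an illustration only — the function name and all parameters (`n_fft`, `hop`) are assumptions for the sketch, and Riffusion's actual pipeline uses its own mel-scaled parameters.

```python
import numpy as np

def spectrogram_image(wave, n_fft=512, hop=128):
    """Render a waveform as an 8-bit spectrogram image.

    Illustrative stand-in for Riffusion-style preprocessing:
    windowed STFT magnitude, log-scaled, normalized to uint8.
    """
    window = np.hanning(n_fft)
    frames = [wave[i:i + n_fft] * window
              for i in range(0, len(wave) - n_fft, hop)]
    mag = np.abs(np.fft.rfft(np.stack(frames), axis=1)).T  # (freq, time)
    db = 20 * np.log10(mag + 1e-6)                          # log scale
    db = (db - db.min()) / (db.max() - db.min())            # normalize 0..1
    return (db * 255).astype(np.uint8)

sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
wave = np.sin(2 * np.pi * 440 * t)  # 1 s of A4 as a stand-in melody
img = spectrogram_image(wave)
print(img.shape, img.dtype)  # (freq_bins, frames) uint8
```

The resulting image would then be passed as the conditioning input to the ControlNet, with the accompaniment spectrogram decoded back to audio the same way Riffusion does.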

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/6432926efc29acb96be4d1d4/CoPuEwTnxbNQYRQsAp7DJ.mpga"></audio>