
Pivaenist

Pivaenist is a random piano music generator with a VAE architecture.

Its variational autoencoder lets the user encode piano pieces and generate new ones.

Model Description

<figure> <img src="https://huggingface.co/TomRB22/pivaenist/resolve/main/.images/architecture.png" style="width:100%; display:block; margin:auto"> <figcaption align = "center"><b>Pivaenist's architecture.</b></figcaption> </figure>

Sources

Code: parts of the code in this repository adapt or implement code from the following sources (not the entire codebase, owing to differences in architecture):

  1. TensorFlow. (n.d.). Generate music with an RNN | TensorFlow Core. TensorFlow tutorial where pretty_midi is used.
  2. Han, X. (2020, September 1). VAE with TensorFlow: 6 Ways. VAE explanation and code.
  3. Li, C. (2019, April 15). Less pain, more gain: A simple method for VAE training with less of that KL-vanishing agony. Microsoft Research. Article on the cyclical KL annealing schedule applied in this model (see the sketch after this list).

Some acknowledgments may be missing. If you notice another resemblance to a site's code, please let me know and I will make sure to include it.
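For reference, the schedule described in source 3 cycles the KL weight during training to counter KL vanishing. Below is a minimal sketch of such a cyclical schedule; the cycle length, ratio and loss names are illustrative, not pivaenist's actual hyperparameters:

```python
def cyclical_kl_weight(step: int, cycle_length: int = 10000, ratio: float = 0.5) -> float:
    # Position within the current cycle, in [0, 1).
    position = (step % cycle_length) / cycle_length
    # Ramp linearly from 0 to 1 over the first `ratio` of the cycle,
    # then hold the weight at 1 for the remainder.
    return min(position / ratio, 1.0)

# Per-step usage inside a training loop (loss names are illustrative):
# loss = reconstruction_loss + cyclical_kl_weight(step) * kl_loss
```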

Using pivaenist in Colab

If you prefer to use or test the model directly, without installing it, you can open this Colab notebook (also stored in this repository) and follow its instructions. It doubles as a usage example.

Installation

To install the model, change your working directory to the desired installation location and run the following commands:

Linux / Windows (WSL)

```bash
git clone https://huggingface.co/TomRB22/pivaenist
sudo apt install -y fluidsynth
pip install -r ./pivaenist/requirements.txt
```

Mac

```bash
git clone https://huggingface.co/TomRB22/pivaenist
brew install fluidsynth
pip install -r ./pivaenist/requirements.txt
```

The first command clones the repository. The second installs FluidSynth, a real-time software synthesizer that the pretty_midi library uses to render audio. The last one installs all remaining Python dependencies.
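If everything went well, the core dependencies should import cleanly. A minimal check (assuming the requirements file pulls in tensorflow, pretty_midi and pyfluidsynth; the exact pinned packages may differ):

```python
# Quick post-install sanity check.
import tensorflow as tf
import pretty_midi
import fluidsynth  # Python bindings (pyfluidsynth) for the FluidSynth binary

print("TensorFlow:", tf.__version__)
print("Dependencies imported correctly.")
```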

Training Details

Pivaenist was trained on the MIDI files of the MAESTRO v2.0.0 dataset. Preprocessing splits each note into its pitch, duration and step; together these form one column of a 3xN matrix, called a song map, where N is the number of notes and the three rows hold, respectively, the pitches, durations and steps. The VAE's objective is to reconstruct these matrices, which makes it possible to generate random maps by sampling from the latent distribution and then convert them into MIDI files.

<figure> <img src="https://huggingface.co/TomRB22/pivaenist/resolve/main/.images/map_example.png" style="width:30%; display:block; margin:auto"> <figcaption align = "center"><b>A horizontally cropped example of a song map.</b></figcaption> </figure>
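For concreteness, here is a hedged sketch of how such a matrix could be assembled; the row order follows the description above, but the notes and values are illustrative:

```python
import numpy as np
import pandas as pd

# Three illustrative notes as (pitch, duration, step) triples.
notes = [
    (60, 0.50, 0.00),  # C4
    (64, 0.50, 0.25),  # E4
    (67, 1.00, 0.25),  # G4
]

# Transpose to a 3xN matrix: row 0 = pitches, row 1 = durations, row 2 = steps.
song_map = pd.DataFrame(
    np.array(notes).T,
    index=["pitch", "duration", "step"],
)
print(song_map.shape)  # (3, 3) -- three attributes x N notes
```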

Documentation

model.VAE

encode

```python
def encode(self, x_input: tf.Tensor) -> tuple[tf.Tensor]:
```

Make a forward pass through the encoder with a given song map, returning the latent representation and the parameters of the latent distribution.

Parameters:

x_input (tf.Tensor): Song map to encode.

Returns:

tuple[tf.Tensor]: The latent representation of the song map and the parameters of the latent distribution.
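A minimal usage sketch, assuming the VAE class lives in the model module (as this section's heading suggests) and that a trained instance is available; the input shape is illustrative only:

```python
import tensorflow as tf
from model import VAE  # module path inferred from this section's heading

model = VAE()  # a trained/loaded instance would normally be used here

# Batched song map; the (1, 3, 100) shape is illustrative, not prescriptive.
song_map = tf.random.normal((1, 3, 100))

# The returned tuple holds the latent representation and the distribution's
# parameters; their exact order is defined by the implementation.
outputs = model.encode(song_map)
```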

decode

```python
def decode(self, z_sample: tf.Tensor=None) -> tf.Tensor:
```

Decode a latent representation of a song.

Parameters:

z_sample (tf.Tensor, optional): Latent representation to decode; defaults to None.

Returns:

tf.Tensor: The decoded song map.
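Continuing with the model instance from the sketch above (LATENT_DIM stands in for the model's actual latent dimensionality):

```python
import tensorflow as tf

LATENT_DIM = 64  # placeholder; use the model's actual latent size

# Decode a specific latent vector into a song map.
z = tf.random.normal((1, LATENT_DIM))
new_map = model.decode(z)

# With z_sample=None (the default), the model presumably samples a latent
# vector itself, yielding a random song map.
random_map = model.decode()
```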

audio

midi_to_notes

```python
def midi_to_notes(midi_file: str) -> pd.DataFrame:
```

Convert a MIDI file to a "song map": a dataframe where each note is broken into its components.

Parameters:

midi_file (str): Path to the MIDI file to convert.

Returns:

pd.DataFrame: Song map dataframe with each note broken into its components.
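For example (the file path is illustrative):

```python
from audio import midi_to_notes  # module name taken from this section

notes_df = midi_to_notes("some_piece.mid")  # illustrative path
print(notes_df.head())  # inspect the note components
```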

display_audio

```python
def display_audio(pm: pretty_midi.PrettyMIDI, seconds=-1) -> display.Audio:
```

Display a song in PrettyMIDI format as a display.Audio object. This method is especially useful in a Jupyter notebook.

Parameters:

pm (pretty_midi.PrettyMIDI): Song to render as audio.
seconds (int): Length of the rendered audio; defaults to -1.

Returns:

display.Audio: Playable audio of the song.
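For example, in a notebook (the path is illustrative, and the meaning of the -1 default is assumed from context):

```python
import pretty_midi
from audio import display_audio  # module name taken from this section

pm = pretty_midi.PrettyMIDI("some_piece.mid")  # illustrative path
# seconds=-1 (the default) appears to render the whole piece; a positive
# value presumably limits the preview length.
display_audio(pm, seconds=30)
```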

notes_to_midi

```python
def notes_to_midi(song_map: pd.DataFrame, out_file: str, velocity: int=50) -> pretty_midi.PrettyMIDI:
```

Convert "song map" to midi file (reverse process with respect to midi_to_notes) and (optionally) save it, generating a PrettyMidi object in the process.

Parameters:

song_map (pd.DataFrame): Song map to convert.
out_file (str): Output path for the MIDI file.
velocity (int): Note velocity; defaults to 50.

Returns:

pretty_midi.PrettyMIDI: The generated PrettyMIDI object.
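A round-trip sketch with midi_to_notes (paths are illustrative):

```python
from audio import midi_to_notes, notes_to_midi  # module name from this section

song_map = midi_to_notes("input.mid")                    # MIDI -> song map
pm = notes_to_midi(song_map, "output.mid", velocity=64)  # song map -> MIDI
```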

generate_and_display

```python
def generate_and_display(model: VAE, 
                         out_file: str=None, 
                         z_sample: tf.Tensor=None, 
                         velocity: int=50, 
                         seconds: int=-1) -> display.Audio:
```

Generate a song, (optionally) save it and display it.

Parameters:

model (VAE): Model used to generate the song.
out_file (str, optional): Output path for the MIDI file; defaults to None.
z_sample (tf.Tensor, optional): Latent representation to decode; defaults to None.
velocity (int): Note velocity; defaults to 50.
seconds (int): Length of the rendered audio; defaults to -1.

Returns:

display.Audio: Playable audio of the generated song.
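Putting it together, a hedged end-to-end sketch (assumes a trained model instance; module names are inferred from this documentation):

```python
from audio import generate_and_display  # module name taken from this section
from model import VAE

model = VAE()  # a trained/loaded instance would normally be used here
audio = generate_and_display(model, out_file="generated.mid", seconds=30)
audio  # in a notebook cell, this renders a playable widget
```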