
Training procedure

We finetuned the Falcon-7B LLM on the Python Code Instructions dataset (iamtarun/python_code_instructions_18k_alpaca) for 10 epochs, roughly 23,000 steps, using the MonsterAPI no-code LLM finetuner.
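As a rough sanity check on the reported step count, a minimal sketch, assuming about 18,600 training examples (the "18k" dataset) and an effective batch size of 8 — both figures are assumptions, not stated in this card:

```python
import math

# Assumptions (not from the card): dataset size and effective batch size.
num_examples = 18_600  # assumed size of the ~18k example dataset
batch_size = 8         # assumed effective batch size
epochs = 10            # from the card

# Optimizer steps per epoch, then total steps over the full run.
steps_per_epoch = math.ceil(num_examples / batch_size)
total_steps = steps_per_epoch * epochs
print(total_steps)  # roughly 23,000 under these assumptions
```

Under these assumed values the arithmetic lands near the ~23,000 steps reported for the run.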

The dataset contains problem descriptions paired with Python code. It is derived from sahil2801/code_instructions_120k, with an added Alpaca-style prompt column.
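An Alpaca-style prompt wraps each instruction (and optional input) in a fixed template. A minimal sketch of how such a prompt column can be built — the exact template used by this dataset may differ, and `build_alpaca_prompt` is a hypothetical helper, not part of the dataset's code:

```python
def build_alpaca_prompt(instruction: str, inp: str = "") -> str:
    # Standard Alpaca-style preamble followed by labeled sections.
    header = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
    )
    if inp:
        return (
            header
            + f"### Instruction:\n{instruction}\n\n"
            + f"### Input:\n{inp}\n\n"
            + "### Response:\n"
        )
    return header + f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_alpaca_prompt("Write a Python function that reverses a string.")
print(prompt)
```

During finetuning the model sees this prompt followed by the reference code, so at inference time the same template cues it to emit a Python solution after "### Response:".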

The finetuning run completed in 7.3 hours and cost only $17.50 in total.

Hyperparameters & Run details:

Framework versions

Loss metrics:

Training loss (plot not reproduced here).