Fine-tuning of an LLM; performance tests have not yet been performed.

Model: `tiiuae/falcon-7b`
Dataset: `nampdn-ai/tiny-codes`

Output before training:

Instruction: "Generate a python function to find number of CPU cores"
Response: `def num_cpu_cores(): num_cores = (int)(os.cpu_count() * 2) return`

Output after training:

Instruction: "Generate a python function to find number of CPU cores"
Response: `def num_cpu_cores(): num_cores = 0 for i in range(0, os.cpu_count()):`
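Both sampled responses are truncated and neither is a correct implementation. For comparison, a minimal working version of the function the prompts ask for, using only the Python standard library:

```python
import os


def num_cpu_cores() -> int:
    """Return the number of logical CPU cores visible to the OS."""
    # os.cpu_count() can return None on platforms where the count
    # is undetermined, so fall back to 1 in that case.
    return os.cpu_count() or 1
```

This mirrors what the model appears to be attempting: the pre-training output doubles the core count for no reason, and the post-training output starts an unnecessary loop.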

## Training procedure

The following bitsandbytes quantization config was used during training:
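The exact config values were not recorded in this card. As an illustration only, a typical 4-bit QLoRA-style `BitsAndBytesConfig` for a 7B model such as Falcon looks like the following; every value here is a common default, not necessarily what was used for this run:

```python
import torch
from transformers import BitsAndBytesConfig

# Illustrative 4-bit quantization config (hypothetical values,
# not the ones actually used for this training run).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,         # quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype for matmul compute
)
```

The config object is then passed as `quantization_config=bnb_config` when calling `AutoModelForCausalLM.from_pretrained`.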

## Framework versions