Training procedure

The following bitsandbytes quantization config was used during training:
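The config body itself is missing from this card. For illustration only, here is what a typical bitsandbytes 4-bit (QLoRA-style) quantization config looks like in the auto-generated model-card format; the field names are the standard `BitsAndBytesConfig` ones, but every value below is hypothetical, not the one actually used for this model:

```python
# Hypothetical bitsandbytes quantization settings (NOT the actual values
# used for this model; those were not recorded in the card).
quant_config = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "bnb_4bit_quant_type": "nf4",         # NormalFloat4 quantization
    "bnb_4bit_use_double_quant": True,    # nested quantization of constants
    "bnb_4bit_compute_dtype": "bfloat16"  # dtype used for matmul compute
}
```

In `transformers`, fields like these map onto a `BitsAndBytesConfig` object passed as `quantization_config` when loading the base model.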

Framework versions

I'm NOT the author of this work.

Quoting anon:

Eh whatever. That storytelling-instruct was shit, so have the V2.1 regular-completion retrain I did a while back. It works great as a merge with base Llama2, or on top of instruct models.
I'm going to completely refactor my dataset for a potential V3.

Credit to "anon49"

Sorry bro, it completely flew under my radar; fixed!