Trained for 3 epochs on the totally-not-an-llm/EverythingLM-data-V2-sharegpt dataset.
Prompt template:

```
### HUMAN:
{prompt}

### RESPONSE:
<leave a newline for the model to answer>
```
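As a minimal sketch of using the template above, the following fills in `{prompt}` and leaves the trailing newline for the model to complete (the function and variable names here are illustrative, not part of this repo):

```python
# Hypothetical helper: build a prompt string in the format this model expects.
PROMPT_TEMPLATE = "### HUMAN:\n{prompt}\n\n### RESPONSE:\n"

def build_prompt(prompt: str) -> str:
    """Insert the user's instruction into the template.
    The model generates its answer after the trailing newline."""
    return PROMPT_TEMPLATE.format(prompt=prompt)

print(build_prompt("Summarize the plot of Hamlet in one sentence."))
```

The resulting string can then be passed to whatever inference stack you use (e.g. `transformers` text generation) as the raw prompt.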
Note: I changed a few of the fine-tuning parameters this time around. I have no idea if it's any good, but feel free to give it a try!