Trained for 3 epochs on the totally-not-an-llm/EverythingLM-data-V2-sharegpt dataset.

### HUMAN:
{prompt}

### RESPONSE:
<leave a newline for the model to answer>
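As a minimal sketch, the template above can be filled in with a small helper before passing the string to the model (the function name here is illustrative, not part of the release):

```python
def build_prompt(user_prompt: str) -> str:
    """Wrap a user prompt in the HUMAN/RESPONSE template shown above.

    The trailing newline after "### RESPONSE:" leaves an empty line
    for the model to start its answer on.
    """
    return f"### HUMAN:\n{user_prompt}\n\n### RESPONSE:\n"


# Example: build a prompt and inspect it
text = build_prompt("What is an axolotl?")
print(text)
```

The resulting string is what you would feed to the tokenizer/generation pipeline of your choice.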

note: Changed a few of the finetuning parameters this time around. I have no idea if it's any good, but feel free to give it a try!

<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>