Training is still underway, but this is the first epoch of a 32k-context fine-tuning run of Mistral-7b on the following datasets: