
<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>

# lora-out

This model is a fine-tuned version of NousResearch/Llama-2-7b-hf on a synthetic recipe-assistant dataset comprising 2000 samples. It achieves the following results on the evaluation set:
- Loss: 0.8666
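Since this is a LoRA adapter trained on top of the base model, it would typically be loaded with `peft` and `transformers`. Below is a minimal, illustrative sketch; the adapter id (`your-username/lora-out`) is a placeholder for this repository's actual Hub id or a local path, and the prompt is an example only.

```python
# Illustrative sketch: load the base Llama-2-7b model, apply the LoRA adapter,
# and generate a response. Adapter id and prompt are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Llama-2-7b-hf"
adapter_id = "your-username/lora-out"  # placeholder: replace with this adapter's id or path

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Suggest a quick dinner recipe using chickpeas and spinach."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```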

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9548        | 8.0   | 20   | 0.9240          |
| 0.8514        | 16.0  | 40   | 0.8523          |
| 0.7774        | 24.0  | 60   | 0.8498          |
| 0.7178        | 32.0  | 80   | 0.8597          |
| 0.7103        | 40.0  | 100  | 0.8666          |

### Framework versions