
flan-t5-small-instructiongen

Instead of generating questions from text, generate instructions for LLMs!

This model is a fine-tuned version of google/flan-t5-small on the pszemraj/fleece2instructions dataset. Results on the evaluation set are reported under Training results below.
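A minimal usage sketch with the `transformers` library. The hub id `pszemraj/flan-t5-small-instructiongen` is assumed from the model name above, and the generation parameters (`max_new_tokens`, `num_beams`) are illustrative defaults, not settings taken from this card:

```python
def build_generator(model_id="pszemraj/flan-t5-small-instructiongen"):
    """Load the model into a text2text pipeline. Hub id is an assumption."""
    from transformers import pipeline  # requires `transformers` installed
    return pipeline("text2text-generation", model=model_id)

def generate_instruction(text, generator, max_new_tokens=48):
    """Feed arbitrary text to the model and return the generated instruction."""
    result = generator(text, max_new_tokens=max_new_tokens, num_beams=4)
    return result[0]["generated_text"].strip()

# Example (commented out to avoid downloading weights on import):
# gen = build_generator()
# print(generate_instruction("1) cookies and cream 2) chocolate chip", gen))
```

Passing the pipeline in explicitly keeps the generation helper easy to test or swap for a larger model.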

Intended uses & limitations

This is just a small example model. Larger models (e.g., pszemraj/bart-base-instructiongen) are likely to perform better and generalize better.

Additionally, this model was trained on a dataset containing only instructions and outputs, with the inputs filtered out. This means that input-style text such as "1) cookies and cream 2) chocolate chip 3) mint chip 4) oreo" will not produce "Rank the following ice cream flavors: oreo, mint chip, chocolate chip, cookies and cream".

Training and evaluation data

See the linked dataset pszemraj/fleece2instructions; it is a filtered and reformatted version of tatsu-lab/alpaca, intended for training models that generate instructions for arbitrary text.
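The filtering described above can be sketched as a pure function over Alpaca-style records (which use `instruction`, `input`, and `output` fields): keep only examples with an empty `input`, and pair each `output` with its `instruction` as the target. The `text`/`target` field names here are illustrative assumptions, not the actual column names of the dataset:

```python
def to_instructiongen_pairs(alpaca_records):
    """Drop records that have a non-empty 'input', then map each remaining
    record to a (text -> target-instruction) training pair.
    Output field names 'text'/'target' are assumptions for illustration."""
    return [
        {"text": r["output"], "target": r["instruction"]}
        for r in alpaca_records
        if not r.get("input")  # keep only instruction+output examples
    ]
```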

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6161        | 1.0   | 181  | 1.3714          | 51.1003 | 34.5701 | 49.1277 | 49.2466   | 13.8357 |
| 1.539         | 2.0   | 362  | 1.3401          | 52.201  | 35.6154 | 50.2334 | 50.338    | 14.0450 |