
# distilgpt2-HC3

What happens if you train a smaller model on a dataset of ChatGPT responses?

This happens.

*Example output (image omitted).*

## Model description

This model is a fine-tuned version of distilgpt2 on the `chatgpt_answers` column of the Hello-SimpleAI/HC3 dataset.

It achieves the following results on the evaluation set:

- Loss: 1.9983
- Accuracy: 0.5441

## Intended uses & limitations

Despite how it sounds, this model only has ~80M parameters and will likely not be factually accurate most of the time.
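
To try it, the standard `transformers` text-generation pipeline works; a minimal sketch, where `"distilgpt2-HC3"` is an assumed model id (adjust it to wherever the checkpoint is actually hosted):

```python
# minimal generation sketch; "distilgpt2-HC3" is an assumed model id
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2-HC3")
out = generator(
    "What is the difference between a virus and a bacterium?",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(out[0]["generated_text"])
```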

## Training and evaluation data

Modifications made w.r.t. the original dataset:
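
For reference, the source data can be inspected directly with the `datasets` library; a minimal sketch, assuming the public Hello-SimpleAI/HC3 dataset and its `"all"` configuration:

```python
# sketch of loading the source data; the "all" config name is an assumption
from datasets import load_dataset

ds = load_dataset("Hello-SimpleAI/HC3", "all", split="train")

# each row pairs a question with human_answers and chatgpt_answers lists;
# only the ChatGPT answers feed the fine-tune described in this card
chatgpt_texts = [a for row in ds for a in row["chatgpt_answers"]]
print(len(chatgpt_texts))
print(chatgpt_texts[0][:200])
```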

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
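
As an illustration of how such a run is typically configured with the `transformers` Trainer, a hedged sketch; every numeric value below is a hypothetical placeholder, except the epoch count, which the results table implies was six:

```python
# illustrative only: hypothetical placeholder values, NOT the recorded settings
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

args = TrainingArguments(
    output_dir="distilgpt2-HC3",
    learning_rate=5e-5,               # hypothetical
    per_device_train_batch_size=8,    # hypothetical
    num_train_epochs=6,               # the results table shows six eval rows
    evaluation_strategy="epoch",      # the table logs one eval per epoch
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```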

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2485        | 0.98  | 41   | 2.1457          | 0.5158   |
| 2.0757        | 1.98  | 82   | 2.0584          | 0.5304   |
| 1.966         | 2.98  | 123  | 2.0210          | 0.5376   |
| 1.8602        | 3.98  | 164  | 2.0012          | 0.5422   |
| 1.8089        | 4.98  | 205  | 1.9977          | 0.5436   |
| 1.7698        | 5.98  | 246  | 1.9983          | 0.5441   |
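
Since training minimizes the standard causal-LM cross-entropy, the validation loss converts to perplexity as exp(loss); a quick check on the final checkpoint:

```python
# perplexity from the final validation loss in the table above
import math

final_val_loss = 1.9983
print(f"validation perplexity ≈ {math.exp(final_val_loss):.2f}")  # ≈ 7.38
```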

### Framework versions