
chunwoolee0/distilgpt2_eli5_clm

This model is a fine-tuned version of distilgpt2 on an ELI5 dataset. It achieves a final validation loss of 3.7528 on the evaluation set (see the training results below).

Model description

DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, lighter version of GPT-2.

Intended uses & limitations

This model is an exercise in fine-tuning a pretrained causal language model.
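Since the checkpoint was trained with Keras, it can be loaded through the TensorFlow classes of the transformers library for text generation. The snippet below is a minimal sketch; the prompt and generation parameters are illustrative choices, not values taken from this card.

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

# Load the fine-tuned checkpoint from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("chunwoolee0/distilgpt2_eli5_clm")
model = TFAutoModelForCausalLM.from_pretrained("chunwoolee0/distilgpt2_eli5_clm")

# Illustrative ELI5-style prompt; generation settings are example values only.
inputs = tokenizer("Somatic hypermutation allows the immune system to", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```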

Training and evaluation data

Training procedure

Training hyperparameters

The following hyperparameters were used during training:
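The hyperparameter list was not filled in above. The sketch below shows a typical TF/Keras fine-tuning setup for a run like this one, following the standard transformers causal-language-modeling workflow; the dataset and split, block size, batch size, learning rate, and epoch count are assumptions for illustration, not values recorded for this model.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, TFAutoModelForCausalLM,
                          DataCollatorForLanguageModeling, create_optimizer)

# All concrete values below are assumptions; the actual settings were not recorded.
checkpoint = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token
model = TFAutoModelForCausalLM.from_pretrained(checkpoint)

# Hypothetical ELI5 subset; the exact dataset used for this model is not documented.
raw = load_dataset("eli5_category", split="train[:5000]").flatten()

def tokenize(batch):
    # Join each answer list into one string and truncate to a fixed block length.
    return tokenizer([" ".join(texts) for texts in batch["answers.text"]],
                     truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)
splits = tokenized.train_test_split(test_size=0.2)

# For causal LM the collator copies input_ids into labels (mlm=False).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf")
train_set = model.prepare_tf_dataset(splits["train"], shuffle=True,
                                     batch_size=16, collate_fn=collator)
eval_set = model.prepare_tf_dataset(splits["test"], shuffle=False,
                                    batch_size=16, collate_fn=collator)

optimizer, _ = create_optimizer(init_lr=2e-5, num_warmup_steps=0,
                                num_train_steps=len(train_set) * 3)
model.compile(optimizer=optimizer)  # the model computes its own language-modeling loss

# Three epochs to mirror the results table (epochs 0-2).
model.fit(train_set, validation_data=eval_set, epochs=3)
```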

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.9048     | 3.7838          | 0     |
| 3.7853     | 3.7647          | 1     |
| 3.7237     | 3.7528          | 2     |
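Assuming the reported losses are mean per-token cross-entropy in nats (the usual transformers convention), the epoch-2 validation loss corresponds to a perplexity of roughly exp(3.7528) ≈ 42.6:

```python
import math

# Perplexity from the epoch-2 validation loss, assuming mean cross-entropy in nats.
print(math.exp(3.7528))  # ≈ 42.6
```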

Framework versions