
# dhanunjaya/distilgpt2-finetuned-pragmatic-1

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set (final epoch):

- Train Loss: 3.2545
- Validation Loss: 3.2669
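A minimal sketch of loading this checkpoint for text generation with the 🤗 Transformers `pipeline` API (this assumes the `transformers` library is installed; the prompt and generation settings are illustrative, not taken from the training setup):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub.
generator = pipeline(
    "text-generation",
    model="dhanunjaya/distilgpt2-finetuned-pragmatic-1",
)

# Generate a continuation; max_length and the prompt are illustrative choices.
output = generator("The meeting went well,", max_length=40, num_return_sequences=1)
print(output[0]["generated_text"])
```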

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

More information needed

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.9823     | 3.3961          | 0     |
| 3.7835     | 3.3474          | 1     |
| 3.6659     | 3.3297          | 2     |
| 3.5855     | 3.3068          | 3     |
| 3.5032     | 3.2984          | 4     |
| 3.4418     | 3.2964          | 5     |
| 3.3827     | 3.2846          | 6     |
| 3.3475     | 3.2771          | 7     |
| 3.2982     | 3.2761          | 8     |
| 3.2545     | 3.2669          | 9     |

### Framework versions