generated_from_keras_callback

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# dico97/distilgpt2-finetuned-wikitext2-datos-propios

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves a final validation loss of 2.4673 on the evaluation set (see the per-epoch results under Training results below).

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The hyperparameters used during training were not recorded in this card. More information needed.

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2488     | 2.9547          | 0     |
| 2.9889     | 2.8030          | 1     |
| 2.8373     | 2.7189          | 2     |
| 2.7251     | 2.6685          | 3     |
| 2.6403     | 2.6278          | 4     |
| 2.5661     | 2.6034          | 5     |
| 2.5023     | 2.5710          | 6     |
| 2.4410     | 2.5560          | 7     |
| 2.3893     | 2.5280          | 8     |
| 2.3409     | 2.5150          | 9     |
| 2.2976     | 2.5084          | 10    |
| 2.2565     | 2.4861          | 11    |
| 2.2148     | 2.4663          | 12    |
| 2.1813     | 2.4622          | 13    |
| 2.1457     | 2.4673          | 14    |
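The raw losses above can be hard to interpret on their own. A minimal sketch of converting them to perplexity, assuming (as is conventional for Keras/transformers language-model training) that each value is a mean per-token cross-entropy in nats:

```python
import math

# Validation loss at the final epoch (14), taken from the table above.
final_val_loss = 2.4673

# Perplexity is the exponential of the mean per-token cross-entropy.
perplexity = math.exp(final_val_loss)
print(f"Approximate validation perplexity: {perplexity:.2f}")  # roughly 11.8
```

By the same relation, the epoch-0 validation loss of 2.9547 corresponds to a perplexity of about 19.2, so training reduced validation perplexity by roughly a third. Note also that validation loss ticks up slightly from epoch 13 (2.4622) to epoch 14 (2.4673), suggesting the model was near convergence.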

### Framework versions