# cluster_to_text
This model is a fine-tuned version of t5-small on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.0608
- Bleu: 39.5087
- Gen Len: 10.2429
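The Bleu value above is a corpus-level BLEU score on the 0–100 scale. A minimal pure-Python sketch of the metric, assuming one reference per hypothesis, uniform n-gram weights up to 4-grams, and no smoothing (the actual evaluation library is not recorded in this card):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_bleu(references, hypotheses, max_n=4):
    """Corpus BLEU (0-100), one reference per hypothesis, no smoothing."""
    p_logs = []
    for n in range(1, max_n + 1):
        match, total = 0, 0
        for ref, hyp in zip(references, hypotheses):
            hyp_counts = Counter(ngrams(hyp, n))
            ref_counts = Counter(ngrams(ref, n))
            # Clipped n-gram matches: a hypothesis n-gram counts at most
            # as often as it appears in the reference.
            match += sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
            total += max(len(hyp) - n + 1, 0)
        if match == 0:
            return 0.0  # any zero precision makes unsmoothed BLEU zero
        p_logs.append(math.log(match / total))
    ref_len = sum(len(r) for r in references)
    hyp_len = sum(len(h) for h in hypotheses)
    # Brevity penalty: penalize hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100.0 * bp * math.exp(sum(p_logs) / max_n)
```

For example, a hypothesis identical to its reference scores 100.0, and one sharing no unigrams scores 0.0.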
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
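With a linear scheduler, the learning rate decays from 2e-05 toward zero over the run. A minimal sketch of that schedule, assuming zero warmup steps (the card does not record a warmup setting) and the total step count from the results table:

```python
def linear_lr(step, total_steps=28068, base_lr=2e-5):
    """Linearly decay the learning rate from base_lr to 0.

    total_steps defaults to 28068 (6 epochs x 4678 optimizer steps,
    per the training results table). No warmup is an assumption.
    """
    return base_lr * max(0.0, 1.0 - step / total_steps)
```

So the rate is 2e-05 at step 0, half that at the midpoint (step 14034), and 0 at the final step.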
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.8864        | 1.0   | 4678  | 1.5653          | 17.9224 | 10.3526 |
| 1.6271        | 2.0   | 9356  | 1.3336          | 26.9113 | 10.2905 |
| 1.4621        | 3.0   | 14034 | 1.1952          | 32.9922 | 10.2873 |
| 1.3908        | 4.0   | 18712 | 1.1183          | 36.6438 | 10.2917 |
| 1.3385        | 5.0   | 23390 | 1.0753          | 38.768  | 10.2479 |
| 1.3138        | 6.0   | 28068 | 1.0608          | 39.5087 | 10.2429 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3