# disaster-tweet-distilbert-4
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on a dataset that was not recorded by the Trainer (the model name suggests a disaster-tweet classification task). It achieves the following results on the evaluation set:
- Loss: 0.4269
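The sketch below shows how this checkpoint could be loaded for inference, assuming it is published on the Hugging Face Hub. The repo id shown is a placeholder, and the returned label names depend on how the classification head was configured; neither is recorded in this card.

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual Hub path of this checkpoint.
classifier = pipeline(
    "text-classification",
    model="disaster-tweet-distilbert-4",
)

# Example tweet; the label names depend on the fine-tuning setup.
print(classifier("Forest fire near La Ronge Sask. Canada"))
```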
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
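A minimal sketch of the corresponding `TrainingArguments`, reconstructed from the list above. The eval/logging cadence of 12 steps is inferred from the results table below; `output_dir` is an assumption, and the Adam settings listed above match the `Trainer` defaults, so they need no explicit flag.

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameters above; not the original training script.
training_args = TrainingArguments(
    output_dir="disaster-tweet-distilbert-4",  # assumption: output path not recorded
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="steps",  # inferred: the table below evaluates every 12 steps
    eval_steps=12,
    logging_steps=12,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
)
```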
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2869 | 0.12 | 12 | 2.3020 |
| 2.1069 | 0.25 | 24 | 2.2656 |
| 2.2329 | 0.38 | 36 | 2.2020 |
| 2.2317 | 0.5 | 48 | 2.1106 |
| 2.0789 | 0.62 | 60 | 1.9934 |
| 1.9849 | 0.75 | 72 | 1.8608 |
| 1.7433 | 0.88 | 84 | 1.7136 |
| 1.6526 | 1.0 | 96 | 1.5621 |
| 1.4068 | 1.12 | 108 | 1.4142 |
| 1.3952 | 1.25 | 120 | 1.2600 |
| 1.21 | 1.38 | 132 | 1.1151 |
| 1.0925 | 1.5 | 144 | 0.9866 |
| 0.8739 | 1.62 | 156 | 0.8821 |
| 0.7775 | 1.75 | 168 | 0.7948 |
| 0.781 | 1.88 | 180 | 0.7237 |
| 0.6564 | 2.0 | 192 | 0.6666 |
| 0.591 | 2.12 | 204 | 0.6285 |
| 0.5972 | 2.25 | 216 | 0.5974 |
| 0.5446 | 2.38 | 228 | 0.5725 |
| 0.5547 | 2.5 | 240 | 0.5519 |
| 0.4973 | 2.62 | 252 | 0.5338 |
| 0.5049 | 2.75 | 264 | 0.5165 |
| 0.5435 | 2.88 | 276 | 0.5021 |
| 0.4585 | 3.0 | 288 | 0.4905 |
| 0.4333 | 3.12 | 300 | 0.4855 |
| 0.439 | 3.25 | 312 | 0.4738 |
| 0.4046 | 3.38 | 324 | 0.4679 |
| 0.4484 | 3.5 | 336 | 0.4591 |
| 0.4391 | 3.62 | 348 | 0.4534 |
| 0.4044 | 3.75 | 360 | 0.4542 |
| 0.4636 | 3.88 | 372 | 0.4458 |
| 0.4162 | 4.0 | 384 | 0.4408 |
| 0.3497 | 4.12 | 396 | 0.4379 |
| 0.4339 | 4.25 | 408 | 0.4360 |
| 0.3812 | 4.38 | 420 | 0.4345 |
| 0.3944 | 4.5 | 432 | 0.4382 |
| 0.3784 | 4.62 | 444 | 0.4292 |
| 0.3745 | 4.75 | 456 | 0.4311 |
| 0.3697 | 4.88 | 468 | 0.4286 |
| 0.3598 | 5.0 | 480 | 0.4269 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
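A quick sanity check, assuming all four packages are installed, that a local environment matches the versions above:

```python
# Print installed versions to compare against the pins listed above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected 4.28.1
print("PyTorch:", torch.__version__)              # expected 1.13.0
print("Datasets:", datasets.__version__)          # expected 2.1.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.13.2
```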