# twitter-roberta-base-sentiment-sentiment-memes-30epcohs
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.3027
- Accuracy: 0.8517
- Precision: 0.8536
- Recall: 0.8517
- F1: 0.8523
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
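With a linear scheduler and no warmup listed, the learning rate decays linearly from 2e-05 to 0 over training. A minimal sketch of that schedule in plain Python (the total of 64410 steps is 2147 steps/epoch × 30 epochs, matching the results table; `warmup_steps=0` is an assumption, since the card lists no warmup):

```python
def linear_lr(step, base_lr=2e-5, total_steps=64410, warmup_steps=0):
    """Linear LR schedule: optional linear warmup, then linear decay to 0.

    total_steps = 2147 steps/epoch * 30 epochs, as in the results table.
    warmup_steps=0 is an assumption; the card does not list a warmup.
    """
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0))      # base rate at the start of training
print(linear_lr(32205))  # half the base rate at the halfway point
```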
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--:|
0.2504 | 1.0 | 2147 | 0.7129 | 0.8087 | 0.8112 | 0.8087 | 0.8036 |
0.2449 | 2.0 | 4294 | 0.7500 | 0.8229 | 0.8279 | 0.8229 | 0.8240 |
0.2652 | 3.0 | 6441 | 0.7460 | 0.8181 | 0.8185 | 0.8181 | 0.8149 |
0.2585 | 4.0 | 8588 | 0.7906 | 0.8155 | 0.8152 | 0.8155 | 0.8153 |
0.2534 | 5.0 | 10735 | 0.8178 | 0.8061 | 0.8180 | 0.8061 | 0.8080 |
0.2498 | 6.0 | 12882 | 0.8139 | 0.8166 | 0.8163 | 0.8166 | 0.8164 |
0.2825 | 7.0 | 15029 | 0.7494 | 0.8155 | 0.8210 | 0.8155 | 0.8168 |
0.2459 | 8.0 | 17176 | 0.8870 | 0.8061 | 0.8122 | 0.8061 | 0.8075 |
0.2303 | 9.0 | 19323 | 0.8699 | 0.7987 | 0.8060 | 0.7987 | 0.8003 |
0.2425 | 10.0 | 21470 | 0.8043 | 0.8244 | 0.8275 | 0.8244 | 0.8253 |
0.2143 | 11.0 | 23617 | 0.9163 | 0.8208 | 0.8251 | 0.8208 | 0.8219 |
0.2054 | 12.0 | 25764 | 0.8330 | 0.8239 | 0.8258 | 0.8239 | 0.8245 |
0.208 | 13.0 | 27911 | 1.0673 | 0.8134 | 0.8216 | 0.8134 | 0.8150 |
0.1668 | 14.0 | 30058 | 0.9071 | 0.8270 | 0.8276 | 0.8270 | 0.8273 |
0.1571 | 15.0 | 32205 | 0.9294 | 0.8339 | 0.8352 | 0.8339 | 0.8344 |
0.1857 | 16.0 | 34352 | 0.9909 | 0.8354 | 0.8350 | 0.8354 | 0.8352 |
0.1476 | 17.0 | 36499 | 0.9747 | 0.8433 | 0.8436 | 0.8433 | 0.8434 |
0.1341 | 18.0 | 38646 | 0.9372 | 0.8422 | 0.8415 | 0.8422 | 0.8415 |
0.1181 | 19.0 | 40793 | 1.0301 | 0.8433 | 0.8443 | 0.8433 | 0.8437 |
0.1192 | 20.0 | 42940 | 1.1332 | 0.8407 | 0.8415 | 0.8407 | 0.8410 |
0.0983 | 21.0 | 45087 | 1.2002 | 0.8428 | 0.8498 | 0.8428 | 0.8440 |
0.0951 | 22.0 | 47234 | 1.2141 | 0.8475 | 0.8504 | 0.8475 | 0.8483 |
0.0784 | 23.0 | 49381 | 1.1652 | 0.8407 | 0.8453 | 0.8407 | 0.8417 |
0.0623 | 24.0 | 51528 | 1.1730 | 0.8417 | 0.8443 | 0.8417 | 0.8425 |
0.054 | 25.0 | 53675 | 1.2900 | 0.8454 | 0.8496 | 0.8454 | 0.8464 |
0.0584 | 26.0 | 55822 | 1.2831 | 0.8480 | 0.8497 | 0.8480 | 0.8486 |
0.0531 | 27.0 | 57969 | 1.3043 | 0.8506 | 0.8524 | 0.8506 | 0.8512 |
0.0522 | 28.0 | 60116 | 1.2891 | 0.8527 | 0.8554 | 0.8527 | 0.8534 |
0.037 | 29.0 | 62263 | 1.3077 | 0.8538 | 0.8559 | 0.8538 | 0.8544 |
0.038 | 30.0 | 64410 | 1.3027 | 0.8517 | 0.8536 | 0.8517 | 0.8523 |
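The per-epoch precision, recall, and F1 track accuracy closely, which is consistent with metrics averaged over all classes. A minimal sketch of a `Trainer`-style `compute_metrics` callback that would produce such numbers, assuming scikit-learn with weighted averaging (the card does not state the averaging mode, so treat this as illustrative):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Trainer-style callback: logits -> argmax labels -> weighted P/R/F1.

    Weighted averaging is an assumption; the card does not specify it.
    """
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }

# Toy 3-class example (negative / neutral / positive, as in the base model).
logits = np.array([[2.0, 0.1, 0.0], [0.0, 1.5, 0.2], [0.1, 0.2, 3.0], [1.0, 0.0, 0.5]])
labels = np.array([0, 1, 2, 2])
print(compute_metrics((logits, labels)))
```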
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 1.15.2.dev0
- Tokenizers 0.10.1