
# disaster-tweet-5

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on the dataset from the Kaggle Natural Language Processing with Disaster Tweets competition. It reaches a final validation loss of 0.4107 on the evaluation set (see the training results below).

## Model description

The base model, cardiffnlp/twitter-roberta-base-sentiment, is a RoBERTa-base checkpoint pretrained on tweets and fine-tuned for sentiment analysis. This version is further fine-tuned as a binary sequence classifier that predicts whether a tweet describes a real disaster.

## Intended uses & limitations

This model was created for the Natural Language Processing with Disaster Tweets Kaggle competition.
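The model can be loaded with the 🤗 Transformers library for binary tweet classification. The sketch below assumes a Hub id of the form `<user>/disaster-tweet-5` and the usual 0 = not a disaster / 1 = disaster label mapping from the competition; both are assumptions, not confirmed by this card.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "<user>/disaster-tweet-5"  # placeholder Hub id; replace with the actual repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Forest fire near La Ronge Sask. Canada"  # sample tweet from the competition data
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(pred)  # 1 = disaster, 0 = not a disaster (assumed label mapping)
```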

## Training and evaluation data

Information about the data can be found on the Kaggle competition page for Natural Language Processing with Disaster Tweets.
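A minimal sketch of how the competition data might be prepared is shown below. The file and column names (`train.csv`, `test.csv`; `id`, `keyword`, `location`, `text`, `target`) come from the Kaggle dataset; the actual train/validation split used for this model is not documented here, so the split parameters are placeholders.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Kaggle "Natural Language Processing with Disaster Tweets" files
train_df = pd.read_csv("train.csv")  # columns: id, keyword, location, text, target
test_df = pd.read_csv("test.csv")    # columns: id, keyword, location, text

# Hold out a validation set for the evaluation loss reported below
# (split size and seed are placeholders, not the values actually used).
train_split, val_split = train_test_split(
    train_df, test_size=0.2, stratify=train_df["target"], random_state=42
)
print(len(train_split), len(val_split))
```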

## Training procedure

### Training hyperparameters

The specific hyperparameter values used during training are not recorded in this card.
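As an illustration only: from the results table, training ran for 4 epochs with evaluation every 12 steps (about 96 optimizer steps per epoch). A `TrainingArguments` configuration consistent with that schedule might look like the sketch below; the learning rate, batch size, and other values are placeholders, not the values actually used.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="disaster-tweet-5",
    num_train_epochs=4,              # matches the 4.0 epochs in the results table
    evaluation_strategy="steps",
    eval_steps=12,                   # evaluation interval seen in the results table
    logging_steps=12,
    per_device_train_batch_size=16,  # placeholder
    learning_rate=2e-5,              # placeholder
    weight_decay=0.01,               # placeholder
)
```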

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7065        | 0.12  | 12   | 0.7027          |
| 0.7021        | 0.25  | 24   | 0.6997          |
| 0.7048        | 0.38  | 36   | 0.6948          |
| 0.7025        | 0.5   | 48   | 0.6887          |
| 0.683         | 0.62  | 60   | 0.6808          |
| 0.6748        | 0.75  | 72   | 0.6715          |
| 0.6701        | 0.88  | 84   | 0.6610          |
| 0.6564        | 1.0   | 96   | 0.6487          |
| 0.6418        | 1.12  | 108  | 0.6341          |
| 0.6268        | 1.25  | 120  | 0.6165          |
| 0.6362        | 1.38  | 132  | 0.5985          |
| 0.5824        | 1.5   | 144  | 0.5776          |
| 0.5766        | 1.62  | 156  | 0.5541          |
| 0.5417        | 1.75  | 168  | 0.5281          |
| 0.5232        | 1.88  | 180  | 0.5064          |
| 0.4737        | 2.0   | 192  | 0.4909          |
| 0.4479        | 2.12  | 204  | 0.4826          |
| 0.456         | 2.25  | 216  | 0.4662          |
| 0.4718        | 2.38  | 228  | 0.4541          |
| 0.4198        | 2.5   | 240  | 0.4451          |
| 0.4333        | 2.62  | 252  | 0.4376          |
| 0.4086        | 2.75  | 264  | 0.4337          |
| 0.4419        | 2.88  | 276  | 0.4332          |
| 0.3857        | 3.0   | 288  | 0.4225          |
| 0.3878        | 3.12  | 300  | 0.4188          |
| 0.3578        | 3.25  | 312  | 0.4280          |
| 0.3562        | 3.38  | 324  | 0.4234          |
| 0.4125        | 3.5   | 336  | 0.4147          |
| 0.3882        | 3.62  | 348  | 0.4090          |
| 0.3751        | 3.75  | 360  | 0.4145          |
| 0.3892        | 3.88  | 372  | 0.4077          |
| 0.3946        | 4.0   | 384  | 0.4107          |

### Framework versions