# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 4.9295
- Accuracy: 0.4568
- Precision: 0.3403
- Recall: 0.3408
- F1: 0.3364
- Per-class results (classification report):

| Class | Precision | Recall | F1 | Support |
|:---|:---:|:---:|:---:|:---:|
| 0 | 0.0291 | 0.0088 | 0.0135 | 2611 |
| 1 | 0.0114 | 0.0378 | 0.0175 | 794 |
| 2 | 0.9802 | 0.9758 | 0.9780 | 2895 |
| Macro avg | 0.3403 | 0.3408 | 0.3364 | 6300 |
| Weighted avg | 0.4639 | 0.4568 | 0.4572 | 6300 |
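The nested per-class metrics above match the output format of scikit-learn's `classification_report` with `output_dict=True` (an assumption about how they were produced). A minimal sketch on toy labels, not the actual evaluation data:

```python
from sklearn.metrics import classification_report

# Toy labels standing in for the real evaluation set (3 classes: 0, 1, 2).
y_true = [0, 0, 1, 2, 2, 2]
y_pred = [0, 2, 1, 2, 2, 1]

# output_dict=True yields the nested structure shown in this card:
# per-class precision/recall/f1-score/support, plus 'accuracy',
# 'macro avg', and 'weighted avg' entries.
report = classification_report(y_true, y_pred, output_dict=True)
print(report["macro avg"]["f1-score"])
```

The macro average weights every class equally regardless of support, which is why it sits far below the weighted average here: the small classes 0 and 1 score near zero while class 2 dominates the weighted figures.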
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
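The linear scheduler decays the learning rate from 2e-05 toward 0 over the full run (1838 steps per epoch × 4 epochs = 7352 steps, per the results table below). A minimal sketch of that schedule, assuming zero warmup steps since none are listed:

```python
def linear_lr(step, base_lr=2e-05, total_steps=7352, warmup_steps=0):
    """Learning rate at a given optimizer step under a linear
    warmup-then-decay schedule."""
    if step < warmup_steps:
        # Linear warmup from 0 up to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr (end of warmup) down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0))      # start of training: full base learning rate
print(linear_lr(3676))   # end of epoch 2: halfway through the decay
print(linear_lr(7352))   # final step: decayed to 0
```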
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Classification Report Dict |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---|
| 0.2415 | 1.0 | 1838 | 3.9039 | 0.4595 | 0.3445 | 0.3481 | 0.3395 | {'0': {'precision': 0.048302872062663184, 'recall': 0.014170815779394868, 'f1-score': 0.021912940479715724, 'support': 2611}, '1': {'precision': 0.017884322678843226, 'recall': 0.05919395465994962, 'f1-score': 0.027469316189362943, 'support': 794}, '2': {'precision': 0.9673090158293186, 'recall': 0.9709844559585492, 'f1-score': 0.9691432511635926, 'support': 2895}, 'accuracy': 0.4595238095238095, 'macro avg': {'precision': 0.34449873685694166, 'recall': 0.3481164087992979, 'f1-score': 0.3395085026108904, 'support': 6300}, 'weighted avg': {'precision': 0.46677437333150673, 'recall': 0.4595238095238095, 'f1-score': 0.45788810107388767, 'support': 6300}} |
| 0.106 | 2.0 | 3676 | 4.4418 | 0.4548 | 0.3412 | 0.3441 | 0.3377 | {'0': {'precision': 0.01937984496124031, 'recall': 0.005744925315970892, 'f1-score': 0.008862629246676513, 'support': 2611}, '1': {'precision': 0.01713221601489758, 'recall': 0.05793450881612091, 'f1-score': 0.026444380569129056, 'support': 794}, '2': {'precision': 0.9869764167546639, 'recall': 0.968566493955095, 'f1-score': 0.9776847977684798, 'support': 2895}, 'accuracy': 0.45476190476190476, 'macro avg': {'precision': 0.34116282591026725, 'recall': 0.34408197602906226, 'f1-score': 0.33766393586142845, 'support': 6300}, 'weighted avg': {'precision': 0.46373023511339345, 'recall': 0.45476190476190476, 'f1-score': 0.4562753416943984, 'support': 6300}} |
| 0.0418 | 3.0 | 5514 | 4.5002 | 0.4568 | 0.3404 | 0.3420 | 0.3368 | {'0': {'precision': 0.02802547770700637, 'recall': 0.008425890463423975, 'f1-score': 0.012956419316843343, 'support': 2611}, '1': {'precision': 0.012898330804248861, 'recall': 0.042821158690176324, 'f1-score': 0.019825072886297375, 'support': 794}, '2': {'precision': 0.980201458839875, 'recall': 0.9747841105354059, 'f1-score': 0.9774852788361621, 'support': 2895}, 'accuracy': 0.4568253968253968, 'macro avg': {'precision': 0.3403750891170434, 'recall': 0.34201038656300203, 'f1-score': 0.33675559034643426, 'support': 6300}, 'weighted avg': {'precision': 0.46366651115761987, 'recall': 0.4568253968253968, 'f1-score': 0.4570460636410615, 'support': 6300}} |
| 0.0253 | 4.0 | 7352 | 4.9295 | 0.4568 | 0.3403 | 0.3408 | 0.3364 | {'0': {'precision': 0.02911392405063291, 'recall': 0.008808885484488702, 'f1-score': 0.013525433695971773, 'support': 2611}, '1': {'precision': 0.01141552511415525, 'recall': 0.037783375314861464, 'f1-score': 0.017533606078316773, 'support': 794}, '2': {'precision': 0.9802220680083276, 'recall': 0.9758203799654577, 'f1-score': 0.9780162714211529, 'support': 2895}, 'accuracy': 0.4568253968253968, 'macro avg': {'precision': 0.3402505057243719, 'recall': 0.34080421358826923, 'f1-score': 0.3363584370651471, 'support': 6300}, 'weighted avg': {'precision': 0.463940201511262, 'recall': 0.4568253968253968, 'f1-score': 0.4572370946620006, 'support': 6300}} |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.12.1