# rubert-base-cased_best_finetuned_emotion_experiment_augmented_anger_fear
This model is a fine-tuned version of DeepPavlov/rubert-base-cased on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.4568
- Accuracy: 0.8779
- F1: 0.8777
- Precision: 0.8780
- Recall: 0.8779
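Accuracy, F1, precision, and recall are nearly identical here, which is consistent with weighted averaging over classes, the usual convention in a `Trainer` `compute_metrics` callback. A minimal sketch of such a callback, assuming scikit-learn-style weighted averaging (the function name, toy logits, and labels are illustrative, not taken from the original training script):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Illustrative Trainer-style metrics callback: argmax over logits,
    then accuracy plus weighted-average precision/recall/F1."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }

# Toy example with 3 emotion classes (values are illustrative only):
logits = np.array([[2.0, 0.1, 0.3],
                   [0.2, 1.5, 0.1],
                   [0.1, 0.2, 0.9],
                   [1.1, 0.2, 0.3]])
labels = np.array([0, 1, 2, 1])
metrics = compute_metrics((logits, labels))
```

With weighted averaging, recall equals accuracy by construction, which matches the identical Accuracy and Recall values reported above.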
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=0.0001
- lr_scheduler_type: linear
- num_epochs: 20
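The hyperparameters above map directly onto a `transformers.TrainingArguments` configuration. A minimal sketch of that mapping (the `output_dir` is an illustrative assumption; only the values listed above come from the original run):

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the configuration above; output_dir
# is an assumption, not taken from the original training run.
training_args = TrainingArguments(
    output_dir="rubert-base-cased_finetuned_emotion",  # assumed name
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=0.0001,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```

Note that although `num_epochs` is 20, the results table stops at epoch 15 and the best validation F1 occurs earlier, suggesting the "best" checkpoint was selected rather than the final one.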
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.2647        | 1.0   | 69   | 1.0075          | 0.6013   | 0.5671 | 0.6594    | 0.6013 |
| 0.9091        | 2.0   | 138  | 0.7853          | 0.7171   | 0.7138 | 0.7169    | 0.7171 |
| 0.7305        | 3.0   | 207  | 0.6264          | 0.7829   | 0.7811 | 0.7835    | 0.7829 |
| 0.5446        | 4.0   | 276  | 0.4571          | 0.8466   | 0.8465 | 0.8470    | 0.8466 |
| 0.4039        | 5.0   | 345  | 0.4035          | 0.8612   | 0.8606 | 0.8612    | 0.8612 |
| 0.3144        | 6.0   | 414  | 0.3800          | 0.8653   | 0.8653 | 0.8665    | 0.8653 |
| 0.2711        | 7.0   | 483  | 0.3731          | 0.8674   | 0.8673 | 0.8677    | 0.8674 |
| 0.2289        | 8.0   | 552  | 0.4041          | 0.8737   | 0.8728 | 0.8746    | 0.8737 |
| 0.1944        | 9.0   | 621  | 0.4002          | 0.8789   | 0.8785 | 0.8793    | 0.8789 |
| 0.171         | 10.0  | 690  | 0.3939          | 0.8831   | 0.8827 | 0.8839    | 0.8831 |
| 0.138         | 11.0  | 759  | 0.4106          | 0.8758   | 0.8754 | 0.8761    | 0.8758 |
| 0.1141        | 12.0  | 828  | 0.4200          | 0.8810   | 0.8803 | 0.8804    | 0.8810 |
| 0.1141        | 13.0  | 897  | 0.4426          | 0.8758   | 0.8756 | 0.8763    | 0.8758 |
| 0.0961        | 14.0  | 966  | 0.4494          | 0.8758   | 0.8754 | 0.8761    | 0.8758 |
| 0.0812        | 15.0  | 1035 | 0.4568          | 0.8779   | 0.8777 | 0.8780    | 0.8779 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1