Pretrained checkpoint: roberta-large-mnli
Training hyperparameters:
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- prompt_format: sentence aspect - sentiment
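The `linear` scheduler decays the learning rate from 2e-05 down to zero over the course of training. A minimal sketch of that schedule (assuming no warmup steps, which the hyperparameters above do not mention):

```python
def linear_lr(step, total_steps, base_lr=2e-05, warmup_steps=0):
    """Linear schedule: ramp up over warmup, then decay linearly to 0,
    mirroring the behavior of a 'linear' lr_scheduler_type."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# With num_epochs=4 and train_batch_size=24, total_steps is
# 4 * ceil(n_examples / 24); e.g. a hypothetical 3000-example set
# gives 4 * 125 = 500 optimizer steps.
print(linear_lr(0, 500))    # full base_lr at the start
print(linear_lr(500, 500))  # 0.0 at the end
```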
Training results
| Epoch | Train loss | Subtask 3 F1 | Subtask 3 precision | Subtask 3 recall | Subtask 4 accuracy |
|---|---|---|---|---|---|
| 1 | 341.8209 | 0.8828 | 0.9474 | 0.8263 | 0.8429 |
| 2 | 164.6904 | 0.9055 | 0.9337 | 0.8790 | 0.8839 |
| 3 | 79.8919 | 0.9283 | 0.9418 | 0.9151 | 0.8693 |
| 4 | 34.2759 | 0.9211 | 0.9312 | 0.9112 | 0.8751 |
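The F1 column is the harmonic mean of the reported Subtask 3 precision and recall, which can be verified directly:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Epoch 1, Subtask 3: precision ~0.9474, recall ~0.8263
print(round(f1(0.9474272930648769, 0.8263414634146341), 4))  # 0.8828
```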