Pretrained checkpoint: roberta-large
Training hyperparameters:
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- prompt_format: sentence aspect - sentiment
Training results
| Epoch | Train loss | Subtask 3 F1 | Subtask 3 precision | Subtask 3 recall | Subtask 4 accuracy |
|---|---|---|---|---|---|
| 1 | 302.38164756447077 | 0.8747412008281573 | 0.9316427783902976 | 0.824390243902439 | 0.5219512195121951 |
| 2 | 152.67940049804747 | 0.8930041152263374 | 0.9445048966267682 | 0.8468292682926829 | 0.8614634146341463 |
| 3 | 99.03914468642324 | 0.9071318624935865 | 0.9567099567099567 | 0.8624390243902439 | 0.8721951219512195 |
| 4 | 60.156904806615785 | 0.905241935483871 | 0.9363920750782064 | 0.8760975609756098 | 0.8790243902439024 |
| 5 | 36.06248981086537 | 0.9195855944745931 | 0.9301397205588823 | 0.9092682926829269 | 0.8926829268292683 |
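As a sanity check on the table, the Subtask 3 F1 in each row should be the harmonic mean of the reported precision and recall. A minimal verification using the epoch-5 values:

```python
def f1_from_pr(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Epoch-5 precision and recall from the table above.
p, r = 0.9301397205588823, 0.9092682926829269
f1 = f1_from_pr(p, r)
print(f1)  # close to the reported 0.9195855944745931
```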