# fine-tuned-IndoNLI-Translated-with-xlm-roberta-base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on a translated version of the IndoNLI dataset. It achieves the following results on the evaluation set:
- Loss: 0.8557
- Accuracy: 0.6567
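
The sketch below shows one way to run inference with this checkpoint for sentence-pair (NLI) classification. The checkpoint identifier, the example premise/hypothesis pair, and the label ordering are assumptions for illustration only; they are not confirmed by this card.

```python
# Minimal inference sketch; checkpoint name and label mapping are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "fine-tuned-IndoNLI-Translated-with-xlm-roberta-base"  # hypothetical hub id or local path
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "Semua siswa hadir di kelas hari ini."      # "All students attended class today."
hypothesis = "Tidak ada siswa yang absen."            # "No student was absent."

# Encode the premise/hypothesis pair and take the argmax over the class logits.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(pred)  # index into the model's NLI label set (e.g. entailment / neutral / contradiction)
```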
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 16
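
For reference, a sketch of `TrainingArguments` matching the hyperparameters above is shown below. The output directory and evaluation strategy are assumptions; this is not the exact training script used for this model.

```python
# Sketch of Trainer settings equivalent to the listed hyperparameters (assumed, not the original script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fine-tuned-IndoNLI-Translated-with-xlm-roberta-base",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # yields an effective train batch size of 64
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    num_train_epochs=16,
    evaluation_strategy="epoch",     # assumed: per-epoch validation, matching the results table
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
```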
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9932        | 1.0   | 6136  | 0.9878          | 0.5004   |
| 0.9742        | 2.0   | 12272 | 0.9340          | 0.5507   |
| 0.9043        | 3.0   | 18408 | 0.9058          | 0.5694   |
| 0.8726        | 4.0   | 24544 | 0.8918          | 0.5840   |
| 0.8651        | 5.0   | 30680 | 0.8648          | 0.6017   |
| 0.822         | 6.0   | 36816 | 0.8379          | 0.6253   |
| 0.7868        | 7.0   | 42952 | 0.8369          | 0.6299   |
| 0.7821        | 8.0   | 49088 | 0.8219          | 0.6410   |
| 0.7309        | 9.0   | 55224 | 0.8254          | 0.6465   |
| 0.7344        | 10.0  | 61360 | 0.8136          | 0.6479   |
| 0.7173        | 11.0  | 67496 | 0.8241          | 0.6532   |
| 0.7177        | 12.0  | 73632 | 0.8120          | 0.6536   |
| 0.6646        | 13.0  | 79768 | 0.8420          | 0.6570   |
| 0.6533        | 14.0  | 85904 | 0.8449          | 0.6546   |
| 0.656         | 15.0  | 92040 | 0.8495          | 0.6554   |
| 0.6345        | 16.0  | 98176 | 0.8557          | 0.6567   |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2