# fine-tuned-IndoNLI-Translated-with-indobert-large-p2

This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on a translated IndoNLI dataset (the Trainer did not record the dataset name; this is inferred from the model name). It achieves the following results on the evaluation set:

- Loss: 1.6126
- Accuracy: 0.8090
## Model description

This is IndoBERT large (phase 2) with a sequence-classification head, fine-tuned for Indonesian natural language inference: given a premise/hypothesis pair, the model predicts their entailment relation.

## Intended uses & limitations

Intended for NLI-style sentence-pair classification on Indonesian text. Note the training results below: validation loss rises steadily after epoch 2 while accuracy plateaus around 0.81, a typical overfitting pattern, so an earlier checkpoint may generalize as well as the final one reported above.
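
For illustration, a minimal inference sketch follows. The repository id below is a placeholder (substitute wherever this checkpoint is actually hosted), and the three-way entailment/neutral/contradiction label set is an assumption based on the IndoNLI task.

```python
# Minimal inference sketch. Assumptions: the repo id is a placeholder, and
# the model predicts three IndoNLI-style labels (entailment/neutral/contradiction).
from transformers import pipeline

nli = pipeline(
    "text-classification",
    model="your-username/fine-tuned-IndoNLI-Translated-with-indobert-large-p2",
)

# NLI takes a premise/hypothesis pair; the text-classification pipeline
# accepts them as a dict with "text" and "text_pair" keys.
result = nli({
    "text": "Ibu sedang memasak di dapur.",          # "Mother is cooking in the kitchen."
    "text_pair": "Seseorang sedang menyiapkan makanan.",  # "Someone is preparing food."
})
print(result)  # e.g. [{'label': ..., 'score': ...}]
```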
## Training and evaluation data

Not recorded by the Trainer. The model name suggests a translated version of the IndoNLI natural language inference corpus.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 16
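
For reproducibility, here is a minimal sketch of how these hyperparameters map onto the Hugging Face `Trainer` API (Transformers 4.26.x). Since the card does not record the dataset, the `indonli` Hub dataset is used purely as a stand-in, and the three-label head is an assumption.

```python
# Sketch of the training configuration implied by the hyperparameters above.
# Assumptions: the "indonli" Hub dataset is a stand-in (the actual dataset is
# not recorded in this card), and num_labels=3 matches IndoNLI-style NLI.
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "indobenchmark/indobert-large-p2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)

raw = load_dataset("indonli")  # stand-in; fields: premise, hypothesis, label

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

data = raw.map(tokenize, batched=True)

def accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

args = TrainingArguments(
    output_dir="fine-tuned-IndoNLI-Translated-with-indobert-large-p2",
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    # The Adam betas/epsilon below are the Trainer defaults, listed for clarity.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    num_train_epochs=16,
    evaluation_strategy="epoch",  # yields the per-epoch rows shown below
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=data["train"],
    eval_dataset=data["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
    compute_metrics=accuracy,
)
trainer.train()
```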
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.549         | 1.0   | 6136  | 0.5307          | 0.7896   |
| 0.498         | 2.0   | 12272 | 0.4908          | 0.8072   |
| 0.3704        | 3.0   | 18408 | 0.5087          | 0.8105   |
| 0.3102        | 4.0   | 24544 | 0.5708          | 0.8111   |
| 0.2226        | 5.0   | 30680 | 0.6435          | 0.8053   |
| 0.1601        | 6.0   | 36816 | 0.7676          | 0.8034   |
| 0.1133        | 7.0   | 42952 | 0.8197          | 0.8083   |
| 0.1091        | 8.0   | 49088 | 0.9384          | 0.8059   |
| 0.066         | 9.0   | 55224 | 1.0333          | 0.8066   |
| 0.058         | 10.0  | 61360 | 1.1211          | 0.8061   |
| 0.0539        | 11.0  | 67496 | 1.2260          | 0.8080   |
| 0.0357        | 12.0  | 73632 | 1.3470          | 0.8058   |
| 0.0256        | 13.0  | 79768 | 1.4499          | 0.8079   |
| 0.0289        | 14.0  | 85904 | 1.5078          | 0.8070   |
| 0.0259        | 15.0  | 92040 | 1.5818          | 0.8078   |
| 0.0193        | 16.0  | 98176 | 1.6126          | 0.8090   |
### Framework versions

- Transformers 4.26.1
- PyTorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2