# fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-large-p2-without-ITTL-without-freeze-LR-1e-05
This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on an Indonesian TyDi QA question-answering dataset (DatasetQAS-TYDI-QA-ID). It achieves the following results on the evaluation set:
- Loss: 1.1935
- Exact Match: 57.2183
- F1: 71.7072
## Model description
More information needed
## Intended uses & limitations
More information needed
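
The Exact Match and F1 metrics above indicate an extractive question-answering model. The snippet below is a minimal inference sketch using the standard Transformers QA pipeline; the checkpoint path and the Indonesian question/context pair are placeholders, not part of the original card:

```python
from transformers import pipeline

# Placeholder: point this at the actual Hub repo id or a local checkpoint directory.
model_path = "path/to/fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-large-p2-without-ITTL-without-freeze-LR-1e-05"

qa = pipeline("question-answering", model=model_path, tokenizer=model_path)

# Illustrative Indonesian question/context pair.
result = qa(
    question="Siapa presiden pertama Indonesia?",
    context=(
        "Soekarno adalah presiden pertama Republik Indonesia. "
        "Ia menjabat dari tahun 1945 hingga 1967."
    ),
)
print(result["answer"], result["score"])
```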
## Training and evaluation data
More information needed
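
The exact preprocessing behind DatasetQAS-TYDI-QA-ID is not documented here. As a hedged sketch only, the Indonesian portion of the public TyDi QA GoldP ("secondary") task can be pulled from the Hugging Face Hub and filtered by language, which may approximate the data used:

```python
from datasets import load_dataset

# Assumption: the data corresponds to the Indonesian examples of TyDi QA's
# "secondary_task" (GoldP) config; the real DatasetQAS-TYDI-QA-ID preprocessing may differ.
tydiqa = load_dataset("tydiqa", "secondary_task")

# GoldP example ids are prefixed with the language name (e.g. "indonesian--...").
indonesian = tydiqa.filter(lambda example: example["id"].startswith("indonesian"))

print(indonesian)
print(indonesian["train"][0]["question"])
```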
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
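
These values map directly onto a `transformers.TrainingArguments` object. The sketch below is a reconstruction from the list above, not the original training script; the output directory, evaluation cadence, and the dataset variables are assumptions:

```python
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "indobenchmark/indobert-large-p2"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForQuestionAnswering.from_pretrained(base_model)

# Mirrors the hyperparameters above; effective batch size = 8 * 16 = 128.
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
training_args = TrainingArguments(
    output_dir="fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-large-p2",  # assumed name
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="steps",  # assumption: the results table evaluates twice per epoch
    eval_steps=19,                # assumption, matching the step column below
    logging_steps=19,
)

# train_dataset / eval_dataset stand in for the tokenized TyDi QA (Indonesian) splits.
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
#     tokenizer=tokenizer,
# )
# trainer.train()
```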
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1      |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.3204        | 0.5   | 19   | 3.6469          | 10.9155     | 20.4300 |
| 6.3204        | 0.99  | 38   | 2.7834          | 17.9577     | 28.8829 |
| 3.5802        | 1.5   | 57   | 2.3114          | 24.2958     | 36.4160 |
| 3.5802        | 1.99  | 76   | 2.0209          | 29.4014     | 42.5434 |
| 3.5802        | 2.5   | 95   | 1.7380          | 38.3803     | 51.5950 |
| 2.0482        | 2.99  | 114  | 1.4687          | 44.8944     | 59.1567 |
| 2.0482        | 3.5   | 133  | 1.3680          | 50.0        | 64.4849 |
| 1.3956        | 3.99  | 152  | 1.2840          | 50.5282     | 65.7446 |
| 1.3956        | 4.5   | 171  | 1.2633          | 52.6408     | 67.0356 |
| 1.3956        | 4.99  | 190  | 1.2035          | 53.5211     | 68.4126 |
| 1.0901        | 5.5   | 209  | 1.2142          | 54.5775     | 69.1038 |
| 1.0901        | 5.99  | 228  | 1.1843          | 55.6338     | 69.8223 |
| 1.0901        | 6.5   | 247  | 1.1881          | 56.6901     | 70.7746 |
| 0.9217        | 6.99  | 266  | 1.1898          | 56.1620     | 70.2471 |
| 0.9217        | 7.5   | 285  | 1.1882          | 56.5141     | 70.7193 |
| 0.8307        | 7.99  | 304  | 1.2073          | 56.8662     | 71.6134 |
| 0.8307        | 8.5   | 323  | 1.1930          | 57.0423     | 71.3981 |
| 0.8307        | 8.99  | 342  | 1.1980          | 57.0423     | 71.8225 |
| 0.7811        | 9.5   | 361  | 1.1940          | 57.2183     | 71.7072 |
| 0.7811        | 9.99  | 380  | 1.1935          | 57.2183     | 71.7072 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2