# fine-tuned-DatasetQAS-TYDI-QA-ID-with-xlm-roberta-large-without-ITTL-without-freeze-LR-1e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the Indonesian subset of the TyDi QA dataset (TYDI-QA-ID, per the model name). It achieves the following results on the evaluation set:
- Loss: 0.9538
- Exact Match: 69.0141
- F1: 82.7291
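Exact Match and F1 are the standard SQuAD-style extractive-QA metrics: the percentage of predictions that match a gold answer exactly, and token-level overlap with the gold answer, respectively. A minimal sketch of computing them with the `evaluate` library, assuming SQuAD-format predictions and references (the example strings below are illustrative only, not actual model outputs):

```python
import evaluate

# The "squad" metric reports exact_match and f1 on a 0-100 scale.
squad_metric = evaluate.load("squad")

# Illustrative placeholders, not real predictions from this model.
predictions = [{"id": "q1", "prediction_text": "Jakarta"}]
references = [
    {"id": "q1", "answers": {"text": ["Jakarta"], "answer_start": [0]}}
]

results = squad_metric.compute(predictions=predictions, references=references)
print(results)  # {'exact_match': 100.0, 'f1': 100.0}
```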
## Model description
More information needed
## Intended uses & limitations
More information needed
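Although this section is unfilled, an extractive-QA checkpoint like this one can typically be loaded with the standard `question-answering` pipeline. A minimal sketch, assuming the checkpoint is published on the Hub (the repo id below is a placeholder, not the actual id):

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual Hub id of this checkpoint.
qa = pipeline(
    "question-answering",
    model="<org>/fine-tuned-DatasetQAS-TYDI-QA-ID-with-xlm-roberta-large-without-ITTL-without-freeze-LR-1e-05",
)

# Indonesian example, since the model was tuned on TYDI-QA-ID (illustrative only).
result = qa(
    question="Di mana ibu kota Indonesia?",
    context="Jakarta adalah ibu kota Indonesia.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```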
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
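A minimal sketch of how these values map onto Transformers' `TrainingArguments`. This is a reconstruction from the list above, not the authors' actual training script; the `output_dir` is a placeholder, and the total train batch size of 128 follows from 8 (per device) × 16 (gradient accumulation steps):

```python
from transformers import TrainingArguments

# Reconstruction of the reported hyperparameters; not the original script.
# Effective train batch size: 8 (per device) * 16 (accumulation) = 128.
training_args = TrainingArguments(
    output_dir="fine-tuned-DatasetQAS-TYDI-QA-ID",  # placeholder output path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the library default.
)
```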
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1      |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.2063        | 0.5   | 19   | 3.6974          | 7.9225      | 18.1433 |
| 6.2063        | 0.99  | 38   | 2.5673          | 20.4225     | 30.5107 |
| 3.7106        | 1.5   | 57   | 1.5397          | 48.4155     | 64.0947 |
| 3.7106        | 1.99  | 76   | 1.2075          | 60.9155     | 74.9130 |
| 3.7106        | 2.5   | 95   | 1.0867          | 61.2676     | 75.6856 |
| 1.4112        | 2.99  | 114  | 0.9742          | 64.2606     | 78.6353 |
| 1.4112        | 3.5   | 133  | 0.9502          | 67.7817     | 81.5092 |
| 0.9522        | 3.99  | 152  | 0.9184          | 66.5493     | 80.7104 |
| 0.9522        | 4.5   | 171  | 0.9341          | 67.2535     | 81.5452 |
| 0.9522        | 4.99  | 190  | 0.9357          | 66.1972     | 81.2448 |
| 0.7334        | 5.5   | 209  | 0.9149          | 67.6056     | 81.7638 |
| 0.7334        | 5.99  | 228  | 0.9134          | 67.7817     | 82.2855 |
| 0.7334        | 6.5   | 247  | 0.9167          | 69.1901     | 82.3011 |
| 0.5938        | 6.99  | 266  | 0.9453          | 68.1338     | 82.0887 |
| 0.5938        | 7.5   | 285  | 0.9145          | 68.4859     | 82.8642 |
| 0.5273        | 7.99  | 304  | 0.9403          | 68.4859     | 82.5820 |
| 0.5273        | 8.5   | 323  | 0.9415          | 68.8380     | 82.4565 |
| 0.5273        | 8.99  | 342  | 0.9538          | 69.0141     | 82.7291 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2