# fine-tuned-DatasetQAS-IDK-MRC-with-indobert-base-uncased-with-ITTL-with-freeze
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.3111
- Exact Match: 56.5445
- F1: 62.0693
- Precision: 62.6451
- Recall: 66.3389
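For context, the Exact Match, F1, Precision, and Recall figures above are SQuAD-style token-overlap scores between the predicted and reference answer spans. A minimal sketch of the per-example computation (simplified: whitespace tokenization only, without the official answer normalization or multi-reference handling):

```python
from collections import Counter

def token_f1(prediction: str, reference: str):
    """Token-overlap precision/recall/F1 for one QA prediction
    (simplified sketch, not the official evaluation script)."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Count tokens appearing in both prediction and reference.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0, 0.0, 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(token_f1("ibu kota Indonesia", "ibu kota"))
```

The corpus-level numbers in the list above are averages of these per-example scores; Exact Match additionally requires the normalized prediction to equal the reference exactly.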
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 4
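Two of the values above are derived from the others, which can be checked with a little arithmetic: the effective (total) train batch size is the per-device batch size times the gradient-accumulation steps, and the number of linear-warmup steps follows from the warmup ratio applied to the total optimizer steps (584, per the final row of the results table below):

```python
# Derived quantities implied by the hyperparameters above.
train_batch_size = 16
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 64

total_steps = 584  # final optimizer step reported in the results table
warmup_ratio = 0.06
warmup_steps = int(total_steps * warmup_ratio)  # LR warms up linearly for ~35 steps

print(total_train_batch_size, warmup_steps)
```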
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1      | Precision | Recall  |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|:---------:|:-------:|
| 4.8072        | 0.49  | 73   | 2.1488          | 49.8691     | 49.8691 | 49.8691   | 49.8691 |
| 2.499         | 0.99  | 146  | 1.8266          | 49.4764     | 50.1167 | 50.0416   | 51.3416 |
| 1.9561        | 1.49  | 219  | 1.6905          | 48.8220     | 52.2799 | 52.0974   | 55.2549 |
| 1.8359        | 1.98  | 292  | 1.5588          | 51.3089     | 56.6619 | 56.4524   | 61.1890 |
| 1.5685        | 2.48  | 365  | 1.4738          | 51.7016     | 57.4150 | 57.7753   | 61.7455 |
| 1.5727        | 2.98  | 438  | 1.3563          | 56.9372     | 62.9376 | 63.1871   | 67.0306 |
| 1.3775        | 3.47  | 511  | 1.3414          | 55.6283     | 61.2198 | 61.8070   | 65.9293 |
| 1.3928        | 3.97  | 584  | 1.3111          | 56.5445     | 62.0693 | 62.6451   | 66.3389 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2