# scenario-kd-from-post-finetune-gold-silver-div-6-4000-data-smsa-model-haryoaw-sc
This model is a fine-tuned version of [haryoaw/scenario-normal-finetune-clf-data-smsa-model-xlm-roberta-base](https://huggingface.co/haryoaw/scenario-normal-finetune-clf-data-smsa-model-xlm-roberta-base) on the smsa dataset. It achieves the following results on the evaluation set (a brief inference sketch follows the results):
- Loss: 1.7382
- Accuracy: 0.8873
- F1: 0.8510
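
The card does not yet include usage details, so here is a minimal inference sketch, not an official example. It assumes the checkpoint is published under the repository id matching the card title in the `haryoaw` namespace, and that the task is sentence-level sentiment classification as in the SmSA dataset (which is Indonesian, hence the Indonesian sample input):

```python
# Minimal inference sketch; the repository id below is assumed from the card title.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="haryoaw/scenario-kd-from-post-finetune-gold-silver-div-6-4000-data-smsa-model-haryoaw-sc",
)

# SmSA is an Indonesian sentiment dataset, so the sample input is Indonesian.
print(classifier("Pelayanan restoran ini sangat memuaskan!"))
# -> [{'label': ..., 'score': ...}]  (label names depend on the checkpoint's config)
```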
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6969
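
For reference, a minimal `TrainingArguments` sketch mirroring the listed values; the `output_dir` is a placeholder, and the exact training script is not part of this card:

```python
# Sketch of TrainingArguments matching the hyperparameters above
# (transformers 4.33.x). output_dir is a placeholder, not from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./output",            # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,                   # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=6969,
)
```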
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.8 | 100 | 4.3417 | 0.7730 | 0.5293 |
| No log | 1.6 | 200 | 3.3765 | 0.8286 | 0.7376 |
| No log | 2.4 | 300 | 2.6664 | 0.8563 | 0.8095 |
| No log | 3.2 | 400 | 2.4554 | 0.8675 | 0.8253 |
| 3.5032 | 4.0 | 500 | 2.4425 | 0.8651 | 0.8247 |
| 3.5032 | 4.8 | 600 | 2.6921 | 0.8444 | 0.8084 |
| 3.5032 | 5.6 | 700 | 2.3385 | 0.8714 | 0.8217 |
| 3.5032 | 6.4 | 800 | 2.2296 | 0.8730 | 0.8346 |
| 3.5032 | 7.2 | 900 | 2.2516 | 0.8690 | 0.8286 |
| 1.2022 | 8.0 | 1000 | 2.3047 | 0.8683 | 0.8256 |
| 1.2022 | 8.8 | 1100 | 2.2434 | 0.8778 | 0.8423 |
| 1.2022 | 9.6 | 1200 | 2.1163 | 0.8770 | 0.8333 |
| 1.2022 | 10.4 | 1300 | 2.0552 | 0.8825 | 0.8416 |
| 1.2022 | 11.2 | 1400 | 2.1097 | 0.8778 | 0.8379 |
| 0.7568 | 12.0 | 1500 | 2.1757 | 0.8778 | 0.8343 |
| 0.7568 | 12.8 | 1600 | 1.9856 | 0.8857 | 0.8531 |
| 0.7568 | 13.6 | 1700 | 2.1317 | 0.8722 | 0.8333 |
| 0.7568 | 14.4 | 1800 | 2.2002 | 0.8817 | 0.8522 |
| 0.7568 | 15.2 | 1900 | 2.0033 | 0.8786 | 0.8430 |
| 0.5549 | 16.0 | 2000 | 1.8851 | 0.8865 | 0.8551 |
| 0.5549 | 16.8 | 2100 | 1.9722 | 0.8817 | 0.8426 |
| 0.5549 | 17.6 | 2200 | 1.9477 | 0.8841 | 0.8435 |
| 0.5549 | 18.4 | 2300 | 1.9899 | 0.8841 | 0.8455 |
| 0.5549 | 19.2 | 2400 | 1.8801 | 0.8849 | 0.8526 |
| 0.4718 | 20.0 | 2500 | 2.1347 | 0.8786 | 0.8423 |
| 0.4718 | 20.8 | 2600 | 2.0240 | 0.8762 | 0.8304 |
| 0.4718 | 21.6 | 2700 | 1.8134 | 0.8889 | 0.8515 |
| 0.4718 | 22.4 | 2800 | 1.8425 | 0.8810 | 0.8425 |
| 0.4718 | 23.2 | 2900 | 1.9403 | 0.8889 | 0.8560 |
| 0.4025 | 24.0 | 3000 | 1.8455 | 0.8865 | 0.8428 |
| 0.4025 | 24.8 | 3100 | 1.8592 | 0.8881 | 0.8473 |
| 0.4025 | 25.6 | 3200 | 1.9242 | 0.8849 | 0.8396 |
| 0.4025 | 26.4 | 3300 | 1.8489 | 0.8802 | 0.8423 |
| 0.4025 | 27.2 | 3400 | 1.9230 | 0.8849 | 0.8477 |
| 0.3678 | 28.0 | 3500 | 1.8492 | 0.8905 | 0.8558 |
| 0.3678 | 28.8 | 3600 | 1.7454 | 0.8929 | 0.8616 |
| 0.3678 | 29.6 | 3700 | 1.8007 | 0.8873 | 0.8414 |
| 0.3678 | 30.4 | 3800 | 1.8313 | 0.8794 | 0.8385 |
| 0.3678 | 31.2 | 3900 | 1.8054 | 0.8865 | 0.8526 |
| 0.3345 | 32.0 | 4000 | 1.9744 | 0.8730 | 0.8336 |
| 0.3345 | 32.8 | 4100 | 1.8985 | 0.8833 | 0.8502 |
| 0.3345 | 33.6 | 4200 | 1.9455 | 0.8810 | 0.8356 |
| 0.3345 | 34.4 | 4300 | 1.8458 | 0.8825 | 0.8470 |
| 0.3345 | 35.2 | 4400 | 1.8144 | 0.8825 | 0.8452 |
| 0.309 | 36.0 | 4500 | 1.8929 | 0.8738 | 0.8275 |
| 0.309 | 36.8 | 4600 | 1.8957 | 0.8802 | 0.8421 |
| 0.309 | 37.6 | 4700 | 1.7668 | 0.8810 | 0.8435 |
| 0.309 | 38.4 | 4800 | 1.8182 | 0.8849 | 0.8435 |
| 0.309 | 39.2 | 4900 | 1.8258 | 0.8770 | 0.8407 |
| 0.2891 | 40.0 | 5000 | 1.7258 | 0.8929 | 0.8553 |
| 0.2891 | 40.8 | 5100 | 1.7382 | 0.8873 | 0.8510 |
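
The card does not state how the F1 score is averaged. For multi-class sentiment tasks like SmSA, macro-averaged F1 is a common choice; the sketch below uses the `evaluate` library under that assumption, and is not a confirmed detail of this training run:

```python
# Hypothetical metric computation; macro averaging for F1 is an assumption,
# since the card does not specify it.
import evaluate

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

predictions = [0, 2, 1, 1]  # placeholder predicted label ids
references = [0, 2, 1, 0]   # placeholder gold label ids

print(accuracy_metric.compute(predictions=predictions, references=references))
print(f1_metric.compute(predictions=predictions, references=references, average="macro"))
```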
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3