# scenario-kd-from-post-finetune-gold-silver-div-3-6000-data-smsa-model-haryoaw-sc
This model is a fine-tuned version of [haryoaw/scenario-normal-finetune-clf-data-smsa-model-xlm-roberta-base](https://huggingface.co/haryoaw/scenario-normal-finetune-clf-data-smsa-model-xlm-roberta-base) on the smsa dataset. It achieves the following results on the evaluation set:
- Loss: 1.1258
- Accuracy: 0.9056
- F1: 0.8653
## Model description

More information needed. Judging by the model name, this checkpoint appears to come from a knowledge-distillation (KD) scenario in which a student is trained against a post-fine-tuned XLM-RoBERTa teacher on the SmSA dataset; this is an inference from the name, not something documented in the card.
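
If the name is accurate, training would resemble standard logit distillation. A minimal sketch of what such a loss typically looks like; the temperature, the weighting, and even the use of this exact loss are assumptions, since the card does not describe the procedure:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Standard logit distillation: soft KL term plus hard cross-entropy.

    `temperature` and `alpha` are illustrative defaults, not values from this card.
    """
    # KL divergence between temperature-softened teacher and student distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    # Cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```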
## Intended uses & limitations

More information needed
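
No intended uses are documented, but the checkpoint is a sequence classifier, so inference follows the usual `transformers` pipeline pattern. A minimal sketch; the repo id is taken from the title above, and the example sentence is illustrative:

```python
from transformers import pipeline

# Repo id assumed from the model name above.
classifier = pipeline(
    "text-classification",
    model="haryoaw/scenario-kd-from-post-finetune-gold-silver-div-3-6000-data-smsa-model-haryoaw-sc",
)

# SmSA is Indonesian sentiment analysis.
# ("Pelayanan restoran ini sangat memuaskan!" = "The service at this restaurant is very satisfying!")
print(classifier("Pelayanan restoran ini sangat memuaskan!"))
# Output shape: [{"label": "...", "score": ...}]
```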
## Training and evaluation data

More information needed. As stated above, fine-tuning and evaluation used the SmSA dataset, an Indonesian sentence-level sentiment analysis corpus from the IndoNLU benchmark.
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reconstructing them follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6969
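
A minimal sketch mapping the values above onto the `transformers` `TrainingArguments` API; the output directory is a placeholder, and the one-to-one mapping itself is an assumption:

```python
from transformers import TrainingArguments

# Values copied from the hyperparameter list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="./results",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=6969,  # as configured; the log below stops near epoch 29
)
```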
### Training results

Note that logging ends at epoch 29.26 (step 5500), far short of the configured 6969 epochs, so the run appears to have been stopped early.
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------|:------|:-----|:----------------|:---------|:---|
| No log | 0.53 | 100 | 2.3189 | 0.8635 | 0.8159 |
| No log | 1.06 | 200 | 1.8856 | 0.8929 | 0.8544 |
| No log | 1.6 | 300 | 1.7808 | 0.8944 | 0.8483 |
| No log | 2.13 | 400 | 1.8733 | 0.8929 | 0.8514 |
| 2.0271 | 2.66 | 500 | 1.6916 | 0.8897 | 0.8469 |
| 2.0271 | 3.19 | 600 | 1.7905 | 0.8960 | 0.8415 |
| 2.0271 | 3.72 | 700 | 1.8337 | 0.8929 | 0.8541 |
| 2.0271 | 4.26 | 800 | 1.4930 | 0.9048 | 0.8607 |
| 2.0271 | 4.79 | 900 | 1.4662 | 0.9008 | 0.8565 |
| 0.8327 | 5.32 | 1000 | 1.5080 | 0.9032 | 0.8598 |
| 0.8327 | 5.85 | 1100 | 1.5582 | 0.8937 | 0.8464 |
| 0.8327 | 6.38 | 1200 | 1.3192 | 0.9040 | 0.8635 |
| 0.8327 | 6.91 | 1300 | 1.4358 | 0.8968 | 0.8486 |
| 0.8327 | 7.45 | 1400 | 1.2117 | 0.9048 | 0.8693 |
| 0.6005 | 7.98 | 1500 | 1.2485 | 0.9127 | 0.8727 |
| 0.6005 | 8.51 | 1600 | 1.2886 | 0.9024 | 0.8600 |
| 0.6005 | 9.04 | 1700 | 1.4128 | 0.9032 | 0.8671 |
| 0.6005 | 9.57 | 1800 | 1.2958 | 0.9103 | 0.8718 |
| 0.6005 | 10.11 | 1900 | 1.3286 | 0.9048 | 0.8649 |
| 0.4985 | 10.64 | 2000 | 1.2462 | 0.9040 | 0.8632 |
| 0.4985 | 11.17 | 2100 | 1.3528 | 0.8937 | 0.8432 |
| 0.4985 | 11.7 | 2200 | 1.3115 | 0.9063 | 0.8618 |
| 0.4985 | 12.23 | 2300 | 1.1824 | 0.9087 | 0.8724 |
| 0.4985 | 12.77 | 2400 | 1.4163 | 0.8952 | 0.8429 |
| 0.4328 | 13.3 | 2500 | 1.2076 | 0.9079 | 0.8743 |
| 0.4328 | 13.83 | 2600 | 1.2415 | 0.8976 | 0.8477 |
| 0.4328 | 14.36 | 2700 | 1.3284 | 0.9063 | 0.8643 |
| 0.4328 | 14.89 | 2800 | 1.2130 | 0.9048 | 0.8576 |
| 0.4328 | 15.43 | 2900 | 1.2671 | 0.9103 | 0.8655 |
| 0.3966 | 15.96 | 3000 | 1.2021 | 0.9032 | 0.8532 |
| 0.3966 | 16.49 | 3100 | 1.1322 | 0.9087 | 0.8721 |
| 0.3966 | 17.02 | 3200 | 1.2196 | 0.9063 | 0.8706 |
| 0.3966 | 17.55 | 3300 | 1.2347 | 0.8992 | 0.8521 |
| 0.3966 | 18.09 | 3400 | 1.1332 | 0.9111 | 0.8732 |
| 0.3506 | 18.62 | 3500 | 1.2256 | 0.8976 | 0.8462 |
| 0.3506 | 19.15 | 3600 | 1.0997 | 0.9095 | 0.8681 |
| 0.3506 | 19.68 | 3700 | 1.1598 | 0.9079 | 0.8721 |
| 0.3506 | 20.21 | 3800 | 1.2913 | 0.9040 | 0.8560 |
| 0.3506 | 20.74 | 3900 | 1.0467 | 0.9151 | 0.8756 |
| 0.3337 | 21.28 | 4000 | 1.0574 | 0.9190 | 0.8859 |
| 0.3337 | 21.81 | 4100 | 1.1742 | 0.9071 | 0.8669 |
| 0.3337 | 22.34 | 4200 | 1.0714 | 0.9119 | 0.8754 |
| 0.3337 | 22.87 | 4300 | 1.0969 | 0.9063 | 0.8672 |
| 0.3337 | 23.4 | 4400 | 1.0878 | 0.9111 | 0.8712 |
| 0.3067 | 23.94 | 4500 | 1.1340 | 0.9063 | 0.8704 |
| 0.3067 | 24.47 | 4600 | 1.1223 | 0.9040 | 0.8610 |
| 0.3067 | 25.0 | 4700 | 1.1525 | 0.9071 | 0.8621 |
| 0.3067 | 25.53 | 4800 | 1.1375 | 0.9063 | 0.8615 |
| 0.3067 | 26.06 | 4900 | 1.1749 | 0.9048 | 0.8641 |
| 0.293 | 26.6 | 5000 | 1.2024 | 0.9063 | 0.8622 |
| 0.293 | 27.13 | 5100 | 1.1349 | 0.9040 | 0.8665 |
| 0.293 | 27.66 | 5200 | 1.1001 | 0.9071 | 0.8716 |
| 0.293 | 28.19 | 5300 | 1.1867 | 0.9024 | 0.8611 |
| 0.293 | 28.72 | 5400 | 1.0862 | 0.9040 | 0.8613 |
| 0.2804 | 29.26 | 5500 | 1.1258 | 0.9056 | 0.8653 |
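
For reference, the reported Accuracy and F1 can be computed with the `evaluate` library. A minimal sketch; the `macro` averaging mode is an assumption, as the card does not state how F1 is aggregated over SmSA's three classes:

```python
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

# Placeholder label ids standing in for real model predictions and references.
predictions = [0, 2, 1, 1]
references = [0, 2, 2, 1]

print(accuracy.compute(predictions=predictions, references=references))
# "macro" averaging is an assumption, not stated in the card.
print(f1.compute(predictions=predictions, references=references, average="macro"))
```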
### Framework versions
- Transformers 4.33.3
- PyTorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3