# distilbert-coherent-v3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.2895
- Accuracy: 0.8940
- Precision: 0.8938
- Recall: 0.8443
- F1: 0.8683
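
As a quick sanity check, the reported F1 is consistent with the harmonic mean of the reported precision and recall (F1 = 2PR / (P + R)):

```python
# Verify that the reported F1 matches the harmonic mean of the
# reported precision and recall: F1 = 2PR / (P + R).
precision = 0.8938
recall = 0.8443

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.8683, matching the reported value
```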
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
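
With a linear scheduler and no warmup, the learning rate decays from its initial value to zero over the total number of training steps. A minimal sketch of that decay; the step count of roughly 9,380 is an assumption inferred from the logged epoch/step ratio, not a value recorded in this card:

```python
def linear_lr(step, total_steps, base_lr=5e-5):
    """Linearly decay base_lr to zero over total_steps (no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Assumed total step count, inferred from the training log
# (e.g. step 9250 at epoch 9.86 implies ~938 steps/epoch over 10 epochs).
total = 9380
print(linear_lr(0, total))          # 5e-05 at the start of training
print(linear_lr(total // 2, total)) # halfway: 2.5e-05
print(linear_lr(total, total))      # 0.0 at the end
```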
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6180        | 0.27  | 250  | 0.5437          | 0.7211   | 0.6885    | 0.5427 | 0.6070 |
| 0.5165        | 0.53  | 500  | 0.4744          | 0.7732   | 0.7288    | 0.6253 | 0.6731 |
| 0.4660        | 0.80  | 750  | 0.4681          | 0.7713   | 0.7836    | 0.6295 | 0.6982 |
| 0.4403        | 1.07  | 1000 | 0.4254          | 0.8056   | 0.8031    | 0.6814 | 0.7372 |
| 0.4110        | 1.33  | 1250 | 0.4023          | 0.8271   | 0.8320    | 0.7352 | 0.7806 |
| 0.4074        | 1.60  | 1500 | 0.3547          | 0.8357   | 0.8533    | 0.6954 | 0.7663 |
| 0.3751        | 1.87  | 1750 | 0.3855          | 0.8247   | 0.8560    | 0.6719 | 0.7529 |
| 0.3598        | 2.13  | 2000 | 0.3453          | 0.8486   | 0.8308    | 0.7779 | 0.8035 |
| 0.3620        | 2.40  | 2250 | 0.3413          | 0.8467   | 0.8359    | 0.7693 | 0.8012 |
| 0.3482        | 2.67  | 2500 | 0.3501          | 0.8457   | 0.8564    | 0.7390 | 0.7933 |
| 0.3430        | 2.93  | 2750 | 0.3410          | 0.8543   | 0.7723    | 0.8886 | 0.8264 |
| 0.3180        | 3.20  | 3000 | 0.3340          | 0.8524   | 0.8269    | 0.7940 | 0.8101 |
| 0.3166        | 3.46  | 3250 | 0.3015          | 0.8782   | 0.8393    | 0.8622 | 0.8506 |
| 0.3042        | 3.73  | 3500 | 0.3140          | 0.8682   | 0.8100    | 0.8741 | 0.8408 |
| 0.3111        | 4.00  | 3750 | 0.2851          | 0.8763   | 0.8146    | 0.8712 | 0.8420 |
| 0.2991        | 4.26  | 4000 | 0.3244          | 0.8696   | 0.8480    | 0.8312 | 0.8395 |
| 0.2905        | 4.53  | 4250 | 0.3386          | 0.8644   | 0.8240    | 0.8240 | 0.8240 |
| 0.2748        | 4.80  | 4500 | 0.2819          | 0.8806   | 0.8555    | 0.8368 | 0.8461 |
| 0.2923        | 5.06  | 4750 | 0.3036          | 0.8706   | 0.8476    | 0.8221 | 0.8347 |
| 0.2746        | 5.33  | 5000 | 0.2988          | 0.8811   | 0.8994    | 0.7877 | 0.8399 |
| 0.2794        | 5.60  | 5250 | 0.2565          | 0.8859   | 0.8770    | 0.8414 | 0.8588 |
| 0.2743        | 5.86  | 5500 | 0.2969          | 0.8739   | 0.8815    | 0.7965 | 0.8368 |
| 0.2725        | 6.13  | 5750 | 0.2490          | 0.9035   | 0.8881    | 0.8644 | 0.8761 |
| 0.2508        | 6.40  | 6000 | 0.2743          | 0.8878   | 0.8867    | 0.8318 | 0.8583 |
| 0.2518        | 6.66  | 6250 | 0.2681          | 0.8940   | 0.9086    | 0.8176 | 0.8607 |
| 0.2468        | 6.93  | 6500 | 0.2888          | 0.8825   | 0.9089    | 0.7986 | 0.8502 |
| 0.2327        | 7.20  | 6750 | 0.2910          | 0.8830   | 0.9048    | 0.7849 | 0.8406 |
| 0.2475        | 7.46  | 7000 | 0.2626          | 0.8988   | 0.9003    | 0.8428 | 0.8706 |
| 0.2215        | 7.73  | 7250 | 0.2821          | 0.8964   | 0.8717    | 0.8707 | 0.8712 |
| 0.2287        | 8.00  | 7500 | 0.2541          | 0.8988   | 0.9193    | 0.8281 | 0.8714 |
| 0.2097        | 8.26  | 7750 | 0.2327          | 0.9145   | 0.8968    | 0.8915 | 0.8941 |
| 0.2286        | 8.53  | 8000 | 0.2897          | 0.8964   | 0.9217    | 0.8112 | 0.8629 |
| 0.2489        | 8.80  | 8250 | 0.2591          | 0.9031   | 0.8974    | 0.8556 | 0.8760 |
| 0.2200        | 9.06  | 8500 | 0.2443          | 0.9074   | 0.8789    | 0.8874 | 0.8831 |
| 0.2060        | 9.33  | 8750 | 0.2213          | 0.9202   | 0.9178    | 0.8683 | 0.8923 |
| 0.1933        | 9.59  | 9000 | 0.2588          | 0.9131   | 0.9005    | 0.8665 | 0.8832 |
| 0.2264        | 9.86  | 9250 | 0.2895          | 0.8940   | 0.8938    | 0.8443 | 0.8683 |
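
The precision, recall, and F1 columns above are consistent with standard binary-classification metrics over the positive class. A self-contained sketch of how such metrics are computed; the helper below is illustrative, not the Trainer's actual `compute_metrics` function:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (positive class = 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy example: 5 predictions against gold labels.
# Accuracy is 0.6; precision, recall, and F1 are each 2/3.
acc, prec, rec, f1 = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(acc, prec, rec, f1)
```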
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu113
- Datasets 2.2.2
- Tokenizers 0.10.3