# model_v1_complete_training_wt_init_48_tiny_emb_comp

This model is a fine-tuned version of an unspecified base model on the None dataset. It achieves the following results on the evaluation set:
- Loss: 3.7768
- Accuracy: 0.3787
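
The card does not say how the model is meant to be used, so the snippet below is only a minimal sketch of how the reported loss could be reproduced on held-out text. It assumes the checkpoint is a causal language model published under a repo id matching the model name; neither assumption is confirmed by this card.

```python
# Minimal sketch: load the checkpoint and score a sample of text.
# Assumptions (not stated in this card): the checkpoint is a causal language
# model, the repo id below is hypothetical, and the reported loss is the
# standard token-level cross-entropy.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "model_v1_complete_training_wt_init_48_tiny_emb_comp"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
model.eval()

inputs = tokenizer("Replace this with held-out evaluation text.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, labels=inputs["input_ids"])

print(f"cross-entropy loss: {outputs.loss.item():.4f}")
print(f"perplexity: {torch.exp(outputs.loss).item():.2f}")
```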
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 50
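
For reference, these values map onto Transformers `TrainingArguments` roughly as in the sketch below. The output directory is a placeholder, the card does not state whether the batch size of 64 is per device or global, and the multi-GPU setup is handled by the launcher (`torchrun`/`accelerate`) rather than by these arguments alone.

```python
# Sketch of TrainingArguments mirroring the hyperparameters listed above.
# Only the listed values come from this card; everything else is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="model_v1_complete_training_wt_init_48_tiny_emb_comp",  # placeholder
    learning_rate=1e-05,
    per_device_train_batch_size=64,   # card lists train_batch_size: 64
    per_device_eval_batch_size=64,    # card lists eval_batch_size: 64
    seed=10,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=10000,
    num_train_epochs=50,
)
```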
### Training results
| Training Loss | Epoch | Step    | Validation Loss | Accuracy |
|:-------------:|:-----:|:-------:|:---------------:|:--------:|
| 6.0945        | 0.33  | 30000   | 6.0802          | 0.1412   |
| 5.2818        | 0.66  | 60000   | 5.2151          | 0.2395   |
| 4.8774        | 0.98  | 90000   | 4.8105          | 0.2760   |
| 4.7096        | 1.31  | 120000  | 4.6474          | 0.2894   |
| 4.6109        | 1.64  | 150000  | 4.5460          | 0.2985   |
| 4.5415        | 1.97  | 180000  | 4.4761          | 0.3050   |
| 4.4884        | 2.29  | 210000  | 4.4231          | 0.3101   |
| 4.446         | 2.62  | 240000  | 4.3791          | 0.3144   |
| 4.4072        | 2.95  | 270000  | 4.3416          | 0.3179   |
| 4.3755        | 3.28  | 300000  | 4.3064          | 0.3218   |
| 4.3455        | 3.6   | 330000  | 4.2724          | 0.3254   |
| 4.3172        | 3.93  | 360000  | 4.2410          | 0.3291   |
| 4.2921        | 4.26  | 390000  | 4.2130          | 0.3324   |
| 4.2718        | 4.59  | 420000  | 4.1892          | 0.3348   |
| 4.2485        | 4.92  | 450000  | 4.1688          | 0.3370   |
| 4.2267        | 5.24  | 480000  | 4.1500          | 0.3394   |
| 4.2081        | 5.57  | 510000  | 4.1314          | 0.3412   |
| 4.198         | 5.9   | 540000  | 4.1117          | 0.3435   |
| 4.1666        | 6.23  | 570000  | 4.0949          | 0.3451   |
| 4.1498        | 6.55  | 600000  | 4.0786          | 0.3464   |
| 4.1104        | 6.88  | 630000  | 4.0465          | 0.3499   |
| 4.0715        | 7.21  | 660000  | 4.0078          | 0.3539   |
| 4.0298        | 7.54  | 690000  | 3.9722          | 0.3576   |
| 4.0085        | 7.87  | 720000  | 3.9520          | 0.3599   |
| 3.99          | 8.19  | 750000  | 3.9390          | 0.3615   |
| 3.9799        | 8.52  | 780000  | 3.9272          | 0.3627   |
| 3.9766        | 8.85  | 810000  | 3.9138          | 0.3641   |
| 3.9534        | 9.18  | 840000  | 3.9034          | 0.3651   |
| 3.9521        | 9.5   | 870000  | 3.8918          | 0.3662   |
| 3.9314        | 9.83  | 900000  | 3.8817          | 0.3670   |
| 3.9096        | 10.16 | 930000  | 3.8709          | 0.3683   |
| 3.904         | 10.49 | 960000  | 3.8604          | 0.3695   |
| 3.8965        | 10.81 | 990000  | 3.8509          | 0.3704   |
| 3.8788        | 11.14 | 1020000 | 3.8406          | 0.3717   |
| 3.8748        | 11.47 | 1050000 | 3.8329          | 0.3728   |
| 3.8638        | 11.8  | 1080000 | 3.8250          | 0.3733   |
| 3.8586        | 12.13 | 1110000 | 3.8203          | 0.3739   |
| 3.8495        | 12.45 | 1140000 | 3.8146          | 0.3746   |
| 3.8469        | 12.78 | 1170000 | 3.8054          | 0.3753   |
| 3.8352        | 13.11 | 1200000 | 3.8007          | 0.3761   |
| 3.8339        | 13.44 | 1230000 | 3.7949          | 0.3766   |
| 3.8215        | 13.76 | 1260000 | 3.7894          | 0.3772   |
| 3.8175        | 14.09 | 1290000 | 3.7835          | 0.3779   |
| 3.817         | 14.42 | 1320000 | 3.7768          | 0.3787   |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.1
- Tokenizers 0.13.3