# fg-bert-sustainability-2e-5-0.01-32-20_augmented_40_percent_empty
This model is a fine-tuned version of Raccourci/fairguest-bert on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.0325
- F1: 0.9326
- ROC AUC: 0.9700
- Accuracy: 0.9442
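The combination of metrics above (micro-style F1, ROC AUC, and accuracy reported together) is typical of a multi-label classification setup. The exact evaluation code is not part of this card; as an illustration only, here is a minimal pure-Python sketch of how micro-averaged F1 and subset accuracy can be computed from binarized label vectors:

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1 over binarized multi-label predictions (0/1 vectors)."""
    tp = fp = fn = 0
    for true_row, pred_row in zip(y_true, y_pred):
        for t, p in zip(true_row, pred_row):
            tp += t and p            # predicted 1, truly 1
            fp += (not t) and p      # predicted 1, truly 0
            fn += t and (not p)      # predicted 0, truly 1
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def subset_accuracy(y_true, y_pred):
    """Fraction of samples whose full label vector matches exactly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[1, 0, 1], [0, 1, 1]]
print(micro_f1(y_true, y_pred))         # 3 TP, 1 FP, 0 FN -> 6/7
print(subset_accuracy(y_true, y_pred))  # 1 of 2 exact matches -> 0.5
```

Note that subset accuracy is strict: a sample counts as correct only if every label in its vector is predicted correctly.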
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
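Two of the values above are derived rather than set directly: the total train batch size is the per-device batch size multiplied by the gradient-accumulation steps, and the linear scheduler decays the learning rate from its peak toward zero over training. A minimal sketch of both (assuming a single device and no warmup, which matches the settings listed):

```python
def total_train_batch_size(per_device_batch_size, grad_accum_steps, num_devices=1):
    """Effective batch size seen by the optimizer per update step."""
    return per_device_batch_size * grad_accum_steps * num_devices

def linear_lr(step, total_steps, base_lr=2e-5):
    """Linear decay from base_lr at step 0 to 0 at total_steps (no warmup)."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

print(total_train_batch_size(32, 4))  # 128, as reported above
print(linear_lr(0, 1000))             # 2e-05 at the start of training
print(linear_lr(500, 1000))           # 1e-05 halfway through
```

With gradient accumulation, each optimizer step aggregates gradients from 4 forward/backward passes of 32 samples, so the model is updated as if trained with batches of 128.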
### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | ROC AUC | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log        | 0.99  | 50   | 0.2300          | 0.0    | 0.5     | 0.4926   |
| No log        | 2.0   | 101  | 0.1572          | 0.2201 | 0.5641  | 0.5421   |
| No log        | 2.99  | 151  | 0.1066          | 0.6629 | 0.8015  | 0.6914   |
| No log        | 4.0   | 202  | 0.0713          | 0.8561 | 0.9146  | 0.8736   |
| No log        | 4.99  | 252  | 0.0555          | 0.9107 | 0.9496  | 0.9244   |
| No log        | 6.0   | 303  | 0.0462          | 0.9209 | 0.9595  | 0.9343   |
| No log        | 6.99  | 353  | 0.0414          | 0.9225 | 0.9655  | 0.9337   |
| No log        | 8.0   | 404  | 0.0378          | 0.9303 | 0.9682  | 0.9405   |
| No log        | 8.99  | 454  | 0.0359          | 0.9257 | 0.9625  | 0.9368   |
| 0.1137        | 10.0  | 505  | 0.0344          | 0.9282 | 0.9654  | 0.9399   |
| 0.1137        | 10.99 | 555  | 0.0348          | 0.9279 | 0.9664  | 0.9411   |
| 0.1137        | 12.0  | 606  | 0.0340          | 0.9247 | 0.9630  | 0.9380   |
| 0.1137        | 12.99 | 656  | 0.0338          | 0.9297 | 0.9682  | 0.9411   |
| 0.1137        | 14.0  | 707  | 0.0322          | 0.9310 | 0.9661  | 0.9430   |
| 0.1137        | 14.99 | 757  | 0.0322          | 0.9315 | 0.9694  | 0.9424   |
| 0.1137        | 16.0  | 808  | 0.0324          | 0.9329 | 0.9684  | 0.9449   |
| 0.1137        | 16.99 | 858  | 0.0323          | 0.9348 | 0.9702  | 0.9449   |
| 0.1137        | 18.0  | 909  | 0.0331          | 0.9304 | 0.9693  | 0.9424   |
| 0.1137        | 18.99 | 959  | 0.0327          | 0.9310 | 0.9694  | 0.9430   |
| 0.0199        | 19.8  | 1000 | 0.0325          | 0.9326 | 0.9700  | 0.9442   |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3