# fg-bert-sustainability-2e-5-0.01-32-20_augmented_60_percent_empty
This model is a fine-tuned version of [Raccourci/fairguest-bert](https://huggingface.co/Raccourci/fairguest-bert) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.0297
- F1: 0.9175
- Roc Auc: 0.9587
- Accuracy: 0.9435
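
The card does not yet include usage instructions. Below is a minimal inference sketch, assuming the checkpoint is published on the Hub under this repository name (hypothetical id) and uses a multi-label classification head, as the combination of F1 and Roc Auc metrics suggests:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id; adjust to wherever this checkpoint is actually hosted.
repo_id = "Raccourci/fg-bert-sustainability-2e-5-0.01-32-20_augmented_60_percent_empty"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "The hotel uses solar panels and offers locally sourced breakfast."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Assumption: multi-label head, so apply a per-label sigmoid and threshold
# each label independently rather than taking an argmax.
probs = torch.sigmoid(logits).squeeze(0)
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```

The sigmoid-plus-threshold decoding is an assumption based on the reported metrics; for a single-label model, take the argmax of the logits instead.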
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
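
The training script itself is not included in the card; the hyperparameters above map roughly onto the following `TrainingArguments` sketch (Transformers 4.30 API; `output_dir` and the per-epoch evaluation strategy are assumptions, everything else is taken from the list above):

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the hyperparameter list;
# the actual training script is not part of this card.
training_args = TrainingArguments(
    output_dir="fg-bert-sustainability",   # assumed; not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,         # effective train batch size: 32 * 4 = 128
    num_train_epochs=20,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",           # assumed: the results table logs one eval per epoch
)
```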
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log        | 1.0   | 63   | 0.1881          | 0.0    | 0.5     | 0.6090   |
| No log        | 1.99  | 126  | 0.1135          | 0.4280 | 0.6589  | 0.6606   |
| No log        | 2.99  | 189  | 0.0713          | 0.8123 | 0.8598  | 0.8801   |
| No log        | 4.0   | 253  | 0.0511          | 0.8831 | 0.9360  | 0.9267   |
| No log        | 5.0   | 316  | 0.0413          | 0.9019 | 0.9458  | 0.9371   |
| No log        | 5.99  | 379  | 0.0369          | 0.9024 | 0.9508  | 0.9361   |
| No log        | 6.99  | 442  | 0.0360          | 0.9071 | 0.9560  | 0.9386   |
| 0.1032        | 8.0   | 506  | 0.0333          | 0.9046 | 0.9536  | 0.9351   |
| 0.1032        | 9.0   | 569  | 0.0324          | 0.9043 | 0.9498  | 0.9371   |
| 0.1032        | 9.99  | 632  | 0.0310          | 0.9168 | 0.9603  | 0.9415   |
| 0.1032        | 10.99 | 695  | 0.0312          | 0.9127 | 0.9579  | 0.9405   |
| 0.1032        | 12.0  | 759  | 0.0294          | 0.9174 | 0.9609  | 0.9415   |
| 0.1032        | 13.0  | 822  | 0.0291          | 0.9152 | 0.9575  | 0.9415   |
| 0.1032        | 13.99 | 885  | 0.0292          | 0.9193 | 0.9599  | 0.9455   |
| 0.1032        | 14.99 | 948  | 0.0304          | 0.9116 | 0.9579  | 0.9386   |
| 0.0181        | 16.0  | 1012 | 0.0290          | 0.9151 | 0.9537  | 0.9435   |
| 0.0181        | 17.0  | 1075 | 0.0290          | 0.9201 | 0.9583  | 0.9445   |
| 0.0181        | 17.99 | 1138 | 0.0294          | 0.9202 | 0.9589  | 0.9450   |
| 0.0181        | 18.99 | 1201 | 0.0301          | 0.9167 | 0.9598  | 0.9440   |
| 0.0181        | 19.92 | 1260 | 0.0297          | 0.9175 | 0.9587  | 0.9435   |
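
The card does not state how F1, Roc Auc, and accuracy were computed. A common multi-label `compute_metrics` setup that produces these three numbers, sketched with scikit-learn (an assumption, not the training code actually used here):

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score, accuracy_score

def compute_metrics(eval_pred, threshold=0.5):
    """Assumed multi-label metric setup; mirrors the card's reported columns."""
    logits, labels = eval_pred
    # Per-label sigmoid, then threshold: standard multi-label decoding.
    probs = 1 / (1 + np.exp(-logits))
    preds = (probs >= threshold).astype(int)
    return {
        "f1": f1_score(labels, preds, average="micro"),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        "accuracy": accuracy_score(labels, preds),  # exact-match (subset) accuracy
    }
```

Under this reading, the epoch-1 row (F1 = 0.0, Roc Auc = 0.5, accuracy = 0.6090) would correspond to the model predicting no labels at all while roughly 60% of evaluation samples carry an empty label set, which would be consistent with the "60_percent_empty" augmentation in the model name.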
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3