# bert-etpc

This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):
- Loss: 0.2908
- F1: 0.6599
- ROC AUC: 0.7713
- Accuracy: 0.0517
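The combination of F1, ROC AUC, and a low exact-match accuracy is typical of a multi-label classification head; on that assumption, a minimal usage sketch might look like the following. The repository id, the example sentence pair, and the 0.5 decision threshold are placeholders, not values confirmed by this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "your-username/bert-etpc"  # placeholder: substitute the actual repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative sentence pair (not from the training data).
inputs = tokenizer(
    "The cat sat on the mat.",
    "A cat was sitting on the mat.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    logits = model(**inputs).logits

# Assuming a multi-label head: apply a sigmoid and threshold each label independently.
probs = torch.sigmoid(logits)[0]
predicted_label_ids = (probs > 0.5).nonzero(as_tuple=True)[0].tolist()
print(predicted_label_ids)
```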
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (sketched after the list as `TrainingArguments`):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
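For reference, these settings map onto `transformers.TrainingArguments` roughly as sketched below. The output directory and the per-epoch evaluation strategy are assumptions (the card does not state them); the Adam betas and epsilon listed above are the library defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-etpc",              # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=4,       # assumption: the listed batch size is per device
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",         # assumption: the results table reports one eval per epoch
)
# Adam betas (0.9, 0.999) and epsilon 1e-08 match the Transformers defaults
# (adam_beta1, adam_beta2, adam_epsilon), so no explicit override is needed.
```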
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1     | ROC AUC | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.2374        | 1.0   | 1483 | 0.2739          | 0.6289 | 0.7510  | 0.0502   |
| 0.2169        | 2.0   | 2966 | 0.2589          | 0.6433 | 0.7586  | 0.0532   |
| 0.1722        | 3.0   | 4449 | 0.2663          | 0.6584 | 0.7676  | 0.0638   |
| 0.1251        | 4.0   | 5932 | 0.2845          | 0.6594 | 0.7715  | 0.0486   |
| 0.0987        | 5.0   | 7415 | 0.2908          | 0.6599 | 0.7713  | 0.0517   |
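The F1, ROC AUC, and accuracy columns above are consistent with standard multi-label metrics. A hedged sketch of how such numbers are commonly computed with scikit-learn follows; the sigmoid, the 0.5 threshold, and micro averaging are assumptions, not taken from this card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def multi_label_metrics(logits: np.ndarray, labels: np.ndarray, threshold: float = 0.5):
    """Compute F1, ROC AUC, and exact-match accuracy for multi-label outputs.

    `logits` and `labels` are (num_examples, num_labels) arrays; `labels` is 0/1.
    """
    probs = 1 / (1 + np.exp(-logits))           # sigmoid per label
    preds = (probs >= threshold).astype(int)    # independent per-label decisions
    return {
        "f1": f1_score(labels, preds, average="micro"),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        # accuracy_score on multi-label targets is exact-match (subset) accuracy,
        # which explains why it can sit far below F1.
        "accuracy": accuracy_score(labels, preds),
    }
```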
### Framework versions
- Transformers 4.34.1
- PyTorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1