# finetuned_token_itr0_3e-05_all_16_02_2022-20_12_04

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.1620
- Precision: 0.3509
- Recall: 0.3793
- F1: 0.3646
- Accuracy: 0.9468
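
Judging by the model name and the entity-level metrics above, this appears to be a token classification model. A minimal inference sketch, assuming the repository id below is where the checkpoint is hosted (replace it with the actual `namespace/model` Hub path):

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Hypothetical repository id; substitute the actual Hub path for this checkpoint.
model_id = "finetuned_token_itr0_3e-05_all_16_02_2022-20_12_04"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# "simple" aggregation merges sub-word pieces back into whole-token predictions.
token_classifier = pipeline(
    "token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple"
)
print(token_classifier("Hugging Face is based in New York City."))
```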

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mirroring them follows the list):

- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
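
As context, a minimal `TrainingArguments` sketch reproducing these settings. Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer in Transformers 4.15, so it needs no explicit arguments; `evaluation_strategy="epoch"` is an assumption inferred from the per-epoch rows in the results table below:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; the default optimizer already
# matches the reported Adam configuration.
args = TrainingArguments(
    output_dir="finetuned_token_itr0_3e-05_all_16_02_2022-20_12_04",
    learning_rate=3e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption: the table below shows one eval per epoch
)
```

Passing `args` to a `Trainer` together with the model and the (unspecified) train/eval datasets would reproduce this training loop.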

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 38   | 0.2997          | 0.1125    | 0.2057 | 0.1454 | 0.8669   |
| No log        | 2.0   | 76   | 0.2620          | 0.1928    | 0.2849 | 0.2300 | 0.8899   |
| No log        | 3.0   | 114  | 0.2497          | 0.1923    | 0.2906 | 0.2314 | 0.8918   |
| No log        | 4.0   | 152  | 0.2474          | 0.1819    | 0.3377 | 0.2365 | 0.8905   |
| No log        | 5.0   | 190  | 0.2418          | 0.2128    | 0.3264 | 0.2576 | 0.8997   |
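
The precision, recall, and F1 columns are consistent with the seqeval entity-level metrics that token classification fine-tuning scripts typically report. A plausible sketch of such a `compute_metrics` hook; the label list is a placeholder, since the actual label set is not documented in this card:

```python
import numpy as np
from datasets import load_metric  # Datasets 1.18 API; requires the seqeval package

metric = load_metric("seqeval")
label_list = ["O", "B-ENT", "I-ENT"]  # placeholder: the real label set is unknown

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Ignore special tokens and padding, which the Trainer labels with -100.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```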

### Framework versions

- Transformers 4.15.0
- PyTorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3