# distilbert-base-uncased-multil-cls-legal
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.5448
- Accuracy: 0.9022
- F1: 0.9015
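A minimal inference sketch using the `transformers` pipeline API; the model id below simply reuses this repo's name and should be replaced with the actual Hub path or a local checkpoint directory, and the label names depend on the (unspecified) training data:

```python
from transformers import pipeline

# Hypothetical model id; point this at the actual Hub repo or local checkpoint.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-multil-cls-legal",
)

result = classifier("The lessee shall return the premises in good condition.")
print(result)  # e.g. [{'label': 'LABEL_3', 'score': 0.97}]; labels depend on the training config
```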
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
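As a rough guide to reproducing this setup, the hyperparameters above map onto `TrainingArguments` as in the sketch below (an assumed reconstruction, not the exact training script; the Adam betas and epsilon listed above are the library defaults `adam_beta1`, `adam_beta2`, and `adam_epsilon`):

```python
from transformers import TrainingArguments

# A sketch matching the reported hyperparameters (Transformers 4.31 API).
args = TrainingArguments(
    output_dir="distilbert-base-uncased-multil-cls-legal",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,                      # reported seed
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # per-epoch evaluation, matching the results table
)
```

These arguments would then be passed to a `Trainer` together with the model, the (unspecified) train/eval datasets, and a `compute_metrics` function that reports accuracy and F1.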
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|---------------|-------|------|-----------------|----------|--------|
| 2.67          | 1.0   | 396  | 1.9327          | 0.5209   | 0.4806 |
| 1.5362        | 2.0   | 792  | 1.0998          | 0.7061   | 0.6869 |
| 0.8991        | 3.0   | 1188 | 0.7546          | 0.8013   | 0.7975 |
| 0.5899        | 4.0   | 1584 | 0.6136          | 0.8403   | 0.8392 |
| 0.4082        | 5.0   | 1980 | 0.5527          | 0.8601   | 0.8589 |
| 0.2874        | 6.0   | 2376 | 0.5200          | 0.8736   | 0.8731 |
| 0.2136        | 7.0   | 2772 | 0.4991          | 0.8831   | 0.8815 |
| 0.1564        | 8.0   | 3168 | 0.4946          | 0.8853   | 0.8843 |
| 0.1123        | 9.0   | 3564 | 0.4814          | 0.8928   | 0.8920 |
| 0.0866        | 10.0  | 3960 | 0.4959          | 0.8912   | 0.8908 |
| 0.0685        | 11.0  | 4356 | 0.5060          | 0.8928   | 0.8923 |
| 0.0508        | 12.0  | 4752 | 0.5114          | 0.8997   | 0.8989 |
| 0.037         | 13.0  | 5148 | 0.5199          | 0.8978   | 0.8971 |
| 0.0316        | 14.0  | 5544 | 0.5236          | 0.9003   | 0.8993 |
| 0.0243        | 15.0  | 5940 | 0.5253          | 0.9022   | 0.9015 |
| 0.021         | 16.0  | 6336 | 0.5385          | 0.9025   | 0.9019 |
| 0.0177        | 17.0  | 6732 | 0.5396          | 0.9038   | 0.9032 |
| 0.014         | 18.0  | 7128 | 0.5449          | 0.9025   | 0.9018 |
| 0.014         | 19.0  | 7524 | 0.5467          | 0.9010   | 0.9002 |
| 0.0103        | 20.0  | 7920 | 0.5448          | 0.9022   | 0.9015 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
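To recreate a matching environment, pinning the listed versions should suffice; the `+cu118` suffix on the PyTorch version indicates the CUDA 11.8 wheel, installed from PyTorch's dedicated index:

```bash
pip install transformers==4.31.0 datasets==2.14.4 tokenizers==0.13.3
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu118
```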