# legal_bert_legal_test_sm
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set (these correspond to the checkpoint with the lowest validation loss, reached at epoch 4.98, step 132):
- Loss: 0.5535
- Accuracy: 0.7500
- Precision: 0.7892
- Recall: 0.6854
- F1: 0.7337
- D-index: 1.6150
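
For a quick sanity check of the classifier, a minimal inference sketch is shown below. The repository id `legal_bert_legal_test_sm` and the binary label set are assumptions, since the card does not document the task or its labels.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical repository id; substitute the actual path of this fine-tune.
model_id = "legal_bert_legal_test_sm"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "The lessee shall indemnify the lessor against all claims."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# Highest-scoring class index; the label names depend on the training data.
pred = logits.argmax(dim=-1).item()
print(pred, logits.softmax(dim=-1).tolist())
```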
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 20
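
For reproducibility, the values above map onto `transformers.TrainingArguments` roughly as sketched below; this is a sketch under assumptions, not the exact training script. The output directory is a placeholder, and the reported Adam betas/epsilon are the library defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="legal_bert_legal_test_sm",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=8,   # 8 x 8 = 64 effective train batch size
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=20,
    evaluation_strategy="epoch",     # assumption: the results table is per epoch
)
```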
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 0.98 | 26 | 0.6956 | 0.5259 | 0.5180 | 0.8122 | 0.6325 | 1.2181 |
| No log | 2.0 | 53 | 0.6949 | 0.5000 | 0.5015 | 0.7934 | 0.6145 | 1.1686 |
| No log | 2.98 | 79 | 0.6933 | 0.5259 | 0.5750 | 0.2160 | 0.3140 | 1.2208 |
| No log | 4.0 | 106 | 0.6035 | 0.6981 | 0.7607 | 0.5822 | 0.6596 | 1.5283 |
| No log | 4.98 | 132 | 0.5535 | 0.7500 | 0.7892 | 0.6854 | 0.7337 | 1.6150 |
| No log | 6.0 | 159 | 0.6376 | 0.7146 | 0.7614 | 0.6291 | 0.6889 | 1.5561 |
| No log | 6.98 | 185 | 0.7555 | 0.7358 | 0.7205 | 0.7746 | 0.7466 | 1.5911 |
| No log | 8.0 | 212 | 0.9223 | 0.7217 | 0.7149 | 0.7418 | 0.7281 | 1.5676 |
| No log | 8.98 | 238 | 1.0061 | 0.7311 | 0.8000 | 0.6197 | 0.6984 | 1.5839 |
| No log | 10.0 | 265 | 1.1761 | 0.7217 | 0.7844 | 0.6150 | 0.6895 | 1.5681 |
| No log | 10.98 | 291 | 1.2807 | 0.7264 | 0.8345 | 0.5681 | 0.6760 | 1.5762 |
| No log | 12.0 | 318 | 1.3035 | 0.7311 | 0.7735 | 0.6573 | 0.7107 | 1.5837 |
| No log | 12.98 | 344 | 1.4680 | 0.7406 | 0.8503 | 0.5869 | 0.6944 | 1.5997 |
| No log | 14.0 | 371 | 1.3238 | 0.7358 | 0.7327 | 0.7465 | 0.7395 | 1.5912 |
| No log | 14.98 | 397 | 1.3373 | 0.7547 | 0.8303 | 0.6432 | 0.7249 | 1.6229 |
| No log | 16.0 | 424 | 1.3234 | 0.7736 | 0.8162 | 0.7089 | 0.7588 | 1.6536 |
| No log | 16.98 | 450 | 1.3853 | 0.7736 | 0.8162 | 0.7089 | 0.7588 | 1.6536 |
| No log | 18.0 | 477 | 1.4619 | 0.7594 | 0.8323 | 0.6526 | 0.7316 | 1.6306 |
| 0.2167 | 18.98 | 503 | 1.4222 | 0.7571 | 0.8161 | 0.6667 | 0.7339 | 1.6267 |
| 0.2167 | 19.62 | 520 | 1.4074 | 0.7689 | 0.8212 | 0.6901 | 0.7500 | 1.6460 |
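
The accuracy, precision, recall, and F1 columns are standard classification scores; a minimal `compute_metrics` sketch, assuming binary labels and scikit-learn, is shown below. The D-index formula is project-specific and is not reproduced here.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) tuple supplied by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
        # D-index omitted: its definition is not documented in this card.
    }
```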
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2