# Is_there_relation

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):
- Loss: 0.1011
- Macro F1: 0.9873
- Precision: 0.9875
- Recall: 0.9873
- Kappa: 0.9708
- Accuracy: 0.9873
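
Since usage details are not documented in this card, the following is a minimal inference sketch. The repo id is a placeholder, and it assumes the checkpoint loads as a standard sequence-classification model; the expected input format (single sentence vs. sentence pair) is not specified above.

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual Hub path of this model.
classifier = pipeline("text-classification", model="<user>/Is_there_relation")

# Illustrative Arabic input ("the Arabic text to classify"); the label
# names returned depend on the model's (undocumented) label mapping.
print(classifier("النص العربي المراد تصنيفه"))
```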

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 25
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
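
A minimal sketch of how the hyperparameters above map onto `TrainingArguments`; the output directory is a placeholder, and nothing beyond the listed values is taken from the actual training setup.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="is_there_relation",   # placeholder, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=128,
    seed=25,
    gradient_accumulation_steps=2,    # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    num_train_epochs=15,
    # Adam betas/epsilon below are the Trainer defaults and match the
    # optimizer settings listed above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```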

### Training results

| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Precision | Recall | Kappa  | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 280  | 0.0828          | 0.9746   | 0.9747    | 0.9745 | 0.9413 | 0.9745   |
| 0.1162        | 2.0   | 560  | 0.1149          | 0.9684   | 0.9699    | 0.9682 | 0.9278 | 0.9682   |
| 0.1162        | 3.0   | 840  | 0.0942          | 0.9852   | 0.9855    | 0.9851 | 0.9659 | 0.9851   |
| 0.0231        | 4.0   | 1120 | 0.0749          | 0.9873   | 0.9875    | 0.9873 | 0.9708 | 0.9873   |
| 0.0231        | 5.0   | 1400 | 0.1058          | 0.9873   | 0.9875    | 0.9873 | 0.9708 | 0.9873   |
| 0.0084        | 6.0   | 1680 | 0.1145          | 0.9873   | 0.9875    | 0.9873 | 0.9708 | 0.9873   |
| 0.0084        | 7.0   | 1960 | 0.0813          | 0.9852   | 0.9853    | 0.9851 | 0.9658 | 0.9851   |
| 0.0056        | 8.0   | 2240 | 0.1235          | 0.9873   | 0.9875    | 0.9873 | 0.9708 | 0.9873   |
| 0.0022        | 9.0   | 2520 | 0.0928          | 0.9894   | 0.9895    | 0.9894 | 0.9756 | 0.9894   |
| 0.0022        | 10.0  | 2800 | 0.1079          | 0.9873   | 0.9875    | 0.9873 | 0.9708 | 0.9873   |
| 0.0019        | 11.0  | 3080 | 0.0796          | 0.9894   | 0.9895    | 0.9894 | 0.9756 | 0.9894   |
| 0.0019        | 12.0  | 3360 | 0.1084          | 0.9873   | 0.9875    | 0.9873 | 0.9708 | 0.9873   |
| 0.0004        | 13.0  | 3640 | 0.1099          | 0.9873   | 0.9875    | 0.9873 | 0.9708 | 0.9873   |
| 0.0004        | 14.0  | 3920 | 0.1233          | 0.9873   | 0.9875    | 0.9873 | 0.9708 | 0.9873   |
| 0.0005        | 15.0  | 4200 | 0.1011          | 0.9873   | 0.9875    | 0.9873 | 0.9708 | 0.9873   |
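
The evaluation code is not part of this card; a plausible scikit-learn `compute_metrics` sketch that would produce the metric set reported above (macro-averaged F1/precision/recall, Cohen's kappa, accuracy) is:

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    cohen_kappa_score,
    f1_score,
    precision_score,
    recall_score,
)

def compute_metrics(eval_pred):
    """Metric set matching the columns in the table above (assumed, not
    taken from the original training script)."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "macro_f1": f1_score(labels, preds, average="macro"),
        "precision": precision_score(labels, preds, average="macro"),
        "recall": recall_score(labels, preds, average="macro"),
        "kappa": cohen_kappa_score(labels, preds),
        "accuracy": accuracy_score(labels, preds),
    }
```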

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3