<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
# MergedSpamModel

This model is a fine-tuned version of [aubmindlab/bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2263
- Accuracy: 0.96
## Model description
More information needed
## Intended uses & limitations
More information needed
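Until the card is completed, the sketch below shows one way to run inference with this model via 🤗 Transformers. The Hub repository id (`MergedSpamModel`), the binary spam/ham label mapping, and the sample text are assumptions for illustration only, not confirmed by this card.

```python
# Minimal inference sketch. Assumptions: the repo id below is illustrative, and the
# label names depend on how the classifier head was configured during fine-tuning.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "MergedSpamModel"  # hypothetical Hub path; replace with the actual repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "مثال على رسالة مشبوهة"  # placeholder Arabic message to classify
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])  # e.g. LABEL_0 / LABEL_1 unless labels were named at training time
```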
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
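The training script itself is not included in this card; the following sketch shows how the hyperparameters above would map onto `TrainingArguments` and `Trainer` in Transformers 4.30.x. The base checkpoint is taken from the card, while the number of labels, the evaluation cadence (every 50 steps, inferred from the results table below), and the dataset variables are assumptions.

```python
# Sketch of a Trainer configuration matching the hyperparameters listed above.
# tokenized_train / tokenized_eval are placeholders: the training data is not documented in this card.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "aubmindlab/bert-base-arabert"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)  # binary labels assumed

# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer defaults,
# so no extra optimizer arguments are needed here.
args = TrainingArguments(
    output_dir="MergedSpamModel",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    evaluation_strategy="steps",  # evaluation every 50 steps, inferred from the results table below
    eval_steps=50,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_train,  # placeholder: tokenized training split (not documented)
    eval_dataset=tokenized_eval,    # placeholder: tokenized evaluation split (not documented)
    tokenizer=tokenizer,
)
trainer.train()
```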
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
0.3525 | 0.06 | 50 | 0.2102 | 0.94 |
0.1706 | 0.12 | 100 | 0.1646 | 0.94 |
0.1352 | 0.18 | 150 | 0.2294 | 0.95 |
0.083 | 0.24 | 200 | 0.1940 | 0.97 |
0.0653 | 0.3 | 250 | 0.0980 | 0.97 |
0.0681 | 0.36 | 300 | 0.3397 | 0.93 |
0.1234 | 0.43 | 350 | 0.1216 | 0.97 |
0.0579 | 0.49 | 400 | 0.2255 | 0.96 |
0.0823 | 0.55 | 450 | 0.1365 | 0.98 |
0.0903 | 0.61 | 500 | 0.1127 | 0.98 |
0.0521 | 0.67 | 550 | 0.2477 | 0.96 |
0.0724 | 0.73 | 600 | 0.1276 | 0.96 |
0.0661 | 0.79 | 650 | 0.1584 | 0.97 |
0.0379 | 0.85 | 700 | 0.1760 | 0.97 |
0.0405 | 0.91 | 750 | 0.1920 | 0.97 |
0.0483 | 0.97 | 800 | 0.1422 | 0.97 |
0.0342 | 1.03 | 850 | 0.1665 | 0.97 |
0.0188 | 1.09 | 900 | 0.2370 | 0.96 |
0.045 | 1.15 | 950 | 0.0700 | 0.98 |
0.0323 | 1.22 | 1000 | 0.1456 | 0.96 |
0.0313 | 1.28 | 1050 | 0.2402 | 0.96 |
0.0485 | 1.34 | 1100 | 0.0872 | 0.96 |
0.0447 | 1.4 | 1150 | 0.1648 | 0.95 |
0.0327 | 1.46 | 1200 | 0.1398 | 0.98 |
0.025 | 1.52 | 1250 | 0.0873 | 0.98 |
0.0278 | 1.58 | 1300 | 0.1508 | 0.98 |
0.0468 | 1.64 | 1350 | 0.1693 | 0.97 |
0.0149 | 1.7 | 1400 | 0.1865 | 0.97 |
0.0063 | 1.76 | 1450 | 0.1688 | 0.97 |
0.0149 | 1.82 | 1500 | 0.1952 | 0.97 |
0.0298 | 1.88 | 1550 | 0.1650 | 0.96 |
0.0192 | 1.94 | 1600 | 0.1696 | 0.96 |
0.0147 | 2.0 | 1650 | 0.1952 | 0.96 |
0.0092 | 2.07 | 1700 | 0.2052 | 0.96 |
0.0081 | 2.13 | 1750 | 0.2553 | 0.96 |
0.0296 | 2.19 | 1800 | 0.1026 | 0.97 |
0.0072 | 2.25 | 1850 | 0.2272 | 0.96 |
0.026 | 2.31 | 1900 | 0.0770 | 0.98 |
0.0037 | 2.37 | 1950 | 0.2135 | 0.96 |
0.0045 | 2.43 | 2000 | 0.1758 | 0.96 |
0.0088 | 2.49 | 2050 | 0.2158 | 0.96 |
0.004 | 2.55 | 2100 | 0.1966 | 0.97 |
0.0165 | 2.61 | 2150 | 0.1716 | 0.96 |
0.0043 | 2.67 | 2200 | 0.2351 | 0.96 |
0.0012 | 2.73 | 2250 | 0.2474 | 0.96 |
0.0141 | 2.79 | 2300 | 0.1028 | 0.97 |
0.0247 | 2.86 | 2350 | 0.2873 | 0.96 |
0.0148 | 2.92 | 2400 | 0.2114 | 0.96 |
0.014 | 2.98 | 2450 | 0.2191 | 0.96 |
0.0032 | 3.04 | 2500 | 0.2361 | 0.96 |
0.002 | 3.1 | 2550 | 0.2592 | 0.96 |
0.0026 | 3.16 | 2600 | 0.1533 | 0.97 |
0.0062 | 3.22 | 2650 | 0.2561 | 0.96 |
0.0043 | 3.28 | 2700 | 0.2511 | 0.96 |
0.0002 | 3.34 | 2750 | 0.2755 | 0.96 |
0.0084 | 3.4 | 2800 | 0.2566 | 0.96 |
0.0032 | 3.46 | 2850 | 0.2463 | 0.96 |
0.0064 | 3.52 | 2900 | 0.2367 | 0.96 |
0.0004 | 3.58 | 2950 | 0.2455 | 0.96 |
0.0001 | 3.65 | 3000 | 0.2618 | 0.96 |
0.0072 | 3.71 | 3050 | 0.2679 | 0.96 |
0.0102 | 3.77 | 3100 | 0.2248 | 0.96 |
0.0002 | 3.83 | 3150 | 0.2453 | 0.96 |
0.0009 | 3.89 | 3200 | 0.2717 | 0.96 |
0.0224 | 3.95 | 3250 | 0.2056 | 0.97 |
0.0031 | 4.01 | 3300 | 0.2054 | 0.97 |
0.0058 | 4.07 | 3350 | 0.3137 | 0.96 |
0.0027 | 4.13 | 3400 | 0.3201 | 0.96 |
0.0026 | 4.19 | 3450 | 0.3367 | 0.96 |
0.0016 | 4.25 | 3500 | 0.1906 | 0.98 |
0.0013 | 4.31 | 3550 | 0.1935 | 0.98 |
0.0058 | 4.37 | 3600 | 0.1962 | 0.98 |
0.0001 | 4.43 | 3650 | 0.1956 | 0.98 |
0.0016 | 4.5 | 3700 | 0.1884 | 0.98 |
0.0001 | 4.56 | 3750 | 0.1978 | 0.97 |
0.007 | 4.62 | 3800 | 0.2155 | 0.97 |
0.0034 | 4.68 | 3850 | 0.2019 | 0.97 |
0.0009 | 4.74 | 3900 | 0.2200 | 0.97 |
0.0001 | 4.8 | 3950 | 0.2174 | 0.97 |
0.0 | 4.86 | 4000 | 0.2220 | 0.97 |
0.0001 | 4.92 | 4050 | 0.2251 | 0.97 |
0.0015 | 4.98 | 4100 | 0.2263 | 0.96 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
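A small sketch (assuming the four libraries above are installed locally) to check that an environment matches these versions before loading the model:

```python
# Compare installed library versions against the versions this model was trained with.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.30.2",
    "torch": "2.0.1+cu118",
    "datasets": "2.13.1",
    "tokenizers": "0.13.3",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    have = installed[name]
    note = "OK" if have == want else f"differs (card lists {want})"
    print(f"{name} {have}: {note}")
```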