# bert-multilingual-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) for multilingual named-entity recognition (token classification). The training dataset is not specified in this card. It achieves the following results on the evaluation set:
- Loss: 0.0678
- Precision: 0.7057
- Recall: 0.7305
- F1: 0.7179
- Accuracy: 0.9734
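
The precision, recall, and F1 values above are the entity-level scores reported by the Trainer; for token-classification fine-tuning they are most commonly computed with the seqeval library over BIO tag sequences. A minimal sketch of that computation (the tag set shown is an illustrative assumption, not the model's actual label list):

```python
# Hedged sketch: how entity-level precision/recall/F1 are typically computed
# for token-classification models with seqeval. The tags below are examples
# only; this card does not list the actual label set.
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# y_true / y_pred are lists of per-sentence BIO tag sequences (illustrative only).
y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O"]]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
print("accuracy: ", accuracy_score(y_true, y_pred))  # token-level accuracy
```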
## Model description
More information needed
## Intended uses & limitations
More information needed
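
Since this is a token-classification (NER) checkpoint, the usual way to run it is through the `transformers` pipeline API. A hedged usage sketch follows; the model id is assumed to match this card's name and should be replaced with the actual Hub path:

```python
# Hedged usage sketch: loading this checkpoint for NER inference with the
# transformers pipeline API. "bert-multilingual-ner" is assumed to be the
# repository id; adjust it to the real Hub path if different.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="bert-multilingual-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Angela Merkel besuchte Paris im Juli."))
```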
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
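
A hedged sketch of `TrainingArguments` mirroring the hyperparameters listed above; the actual training script is not included in this card, and `output_dir` is an assumption for illustration:

```python
# Hedged sketch: TrainingArguments reproducing the listed hyperparameters.
# Model, tokenizer, dataset, and Trainer setup are omitted; output_dir is
# an assumed placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-multilingual-ner",  # assumption, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```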
### Training results
| Training Loss | Epoch | Step   | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0946        | 0.06  | 10000  | 0.0974          | 0.5971    | 0.6368 | 0.6163 | 0.9612   |
| 0.0935        | 0.12  | 20000  | 0.0892          | 0.6258    | 0.6633 | 0.6440 | 0.9652   |
| 0.0829        | 0.17  | 30000  | 0.0913          | 0.6393    | 0.6557 | 0.6474 | 0.9653   |
| 0.086         | 0.23  | 40000  | 0.0829          | 0.6868    | 0.6131 | 0.6478 | 0.9679   |
| 0.0836        | 0.29  | 50000  | 0.0806          | 0.6450    | 0.7019 | 0.6722 | 0.9673   |
| 0.0813        | 0.35  | 60000  | 0.0784          | 0.6718    | 0.6823 | 0.6770 | 0.9694   |
| 0.0756        | 0.41  | 70000  | 0.0771          | 0.6559    | 0.7217 | 0.6872 | 0.9696   |
| 0.0753        | 0.47  | 80000  | 0.0733          | 0.6944    | 0.6660 | 0.6799 | 0.9706   |
| 0.0716        | 0.52  | 90000  | 0.0781          | 0.6698    | 0.7151 | 0.6917 | 0.9704   |
| 0.0785        | 0.58  | 100000 | 0.0723          | 0.6936    | 0.6960 | 0.6948 | 0.9714   |
| 0.0707        | 0.64  | 110000 | 0.0729          | 0.6943    | 0.7098 | 0.7020 | 0.9718   |
| 0.0699        | 0.7   | 120000 | 0.0714          | 0.6928    | 0.7200 | 0.7061 | 0.9717   |
| 0.0729        | 0.76  | 130000 | 0.0715          | 0.6887    | 0.7305 | 0.7090 | 0.9722   |
| 0.0696        | 0.82  | 140000 | 0.0703          | 0.6826    | 0.7479 | 0.7137 | 0.9722   |
| 0.0654        | 0.87  | 150000 | 0.0697          | 0.6851    | 0.7500 | 0.7161 | 0.9727   |
| 0.0636        | 0.93  | 160000 | 0.0684          | 0.6968    | 0.7410 | 0.7183 | 0.9731   |
| 0.0641        | 0.99  | 170000 | 0.0679          | 0.7047    | 0.7315 | 0.7179 | 0.9734   |
### Framework versions
- Transformers 4.30.2
- PyTorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3