# bert-italian-finetuned-ner
This model is a fine-tuned version of dbmdz/bert-base-italian-cased on the wiki_neural dataset. It achieves the following results on the evaluation set:
- Loss: 0.0361
- Precision: 0.9438
- Recall: 0.9542
- F1: 0.9490
- Accuracy: 0.9918
## Model description

A token classification (NER) experiment for the Italian language.
### Example

```python
from transformers import pipeline

# "simple" aggregation merges sub-word tokens into whole entity spans
ner_pipeline = pipeline(
    "ner",
    model="nickprock/bert-italian-finetuned-ner",
    aggregation_strategy="simple",
)

text = "La sede storica della Olivetti è ad Ivrea"
output = ner_pipeline(text)
print(output)
```
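With `aggregation_strategy="simple"`, each entry in `output` is a dictionary with `entity_group`, `score`, `word`, `start`, and `end` keys; for the sentence above the model should tag `Olivetti` as an organization and `Ivrea` as a location (exact labels and scores depend on the run).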
## Intended uses & limitations

The model can be used for token classification, in particular NER. It is fine-tuned on the Italian language.
## Training and evaluation data

The dataset used is wiki_neural (Italian split), consistent with the description above.
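As a minimal loading sketch (the Hub ID `Babelscape/wikineural` and the per-language split name `train_it` are assumptions; adjust them to the actual wiki_neural release you use):

```python
from datasets import load_dataset

# Hub ID and split names are assumptions; check the wiki_neural release you use
dataset = load_dataset("Babelscape/wikineural")
print(dataset["train_it"][0])  # one Italian sentence as tokens + ner_tags
```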
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
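As a rough sketch of how these hyperparameters map onto the `Trainer` API (tokenization and the data collator are omitted; `tokenized_train` and `tokenized_eval` are placeholder names, and `num_labels=9` is an assumption based on the standard O + B-/I- PER, ORG, LOC, MISC tagset):

```python
from transformers import AutoModelForTokenClassification, Trainer, TrainingArguments

# Token-classification head on the base checkpoint; num_labels=9 assumes the
# usual O + B-/I- PER, ORG, LOC, MISC label set
model = AutoModelForTokenClassification.from_pretrained(
    "dbmdz/bert-base-italian-cased", num_labels=9
)

# Mirrors the hyperparameters listed above; the Adam betas/epsilon and the
# linear scheduler are already the Transformers defaults
args = TrainingArguments(
    output_dir="bert-italian-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_train,  # placeholder: tokenized wiki_neural train split
    eval_dataset=tokenized_eval,    # placeholder: tokenized wiki_neural validation split
)
trainer.train()
```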
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0297 | 1.0 | 11050 | 0.0323 | 0.9324 | 0.9420 | 0.9372 | 0.9908 |
| 0.0173 | 2.0 | 22100 | 0.0324 | 0.9445 | 0.9514 | 0.9479 | 0.9915 |
| 0.0057 | 3.0 | 33150 | 0.0361 | 0.9438 | 0.9542 | 0.9490 | 0.9918 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2