
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-ner-per-v2

This model is a fine-tuned version of BERT on three datasets: conll-endava (mixed, second version), NERPERDemo, and wikiann.

It achieves the following results on the conll-endava (mixed, second version) evaluation set:

It achieves the following results on the NERPERDemo evaluation set:

It achieves the following results on the wikiann evaluation set:

## Model description

The model is a fine-tuned version of BERT intended to solve the named entity recognition (NER) task. It is trained to recognize four classes of entities:

## Intended uses & limitations

It can be used as a general-purpose model for recognizing the four entity classes mentioned above, but it may carry some phrase-specific bias introduced by two of the datasets (conll-endava and NERPERDemo). The model is part of a larger project and is fine-tuned to meet that project's specific requirements, but feel free to test it in your own environment, as it has been fine-tuned on general data as well.
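As a usage note, a NER model's token-level predictions are typically BIO-tagged and need to be grouped into entity spans before use. A minimal post-processing sketch (the label names `PER`, `ORG`, `LOC` and the example sentence are illustrative assumptions, since the card does not list its four classes):

```python
# Illustrative sketch: group BIO-tagged tokens into entity spans.
# The tag names used here are assumptions, not taken from the card.

def group_bio_tags(tokens, tags):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    spans = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always opens a new span, closing any open one.
            if current_type is not None:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            # An I- tag of the same type extends the open span.
            current_tokens.append(token)
        else:
            # "O" (or an inconsistent I- tag) closes the open span.
            if current_type is not None:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type is not None:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

tokens = ["John", "Smith", "works", "at", "Endava", "in", "London"]
tags = ["B-PER", "I-PER", "O", "O", "B-ORG", "O", "B-LOC"]
print(group_bio_tags(tokens, tags))
# → [('PER', 'John Smith'), ('ORG', 'Endava'), ('LOC', 'London')]
```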

## Training and evaluation data

Training and evaluation data are from the three mentioned datasets.

## Training procedure

Training is inspired by the Hugging Face tutorial.
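The card does not detail the procedure, but a key step in the Hugging Face token classification tutorial is aligning word-level NER labels with subword tokens. A self-contained sketch of that step, assuming the usual B-/I- label-id scheme; the `word_ids` sequence is mocked here instead of coming from a real tokenizer:

```python
# Sketch of the label-alignment step from the Hugging Face token
# classification tutorial (an assumption, since the card gives no details).
# Subword tokenizers split words into pieces; word-level labels must be
# expanded so every piece gets a label, with special tokens set to -100
# so they are ignored by the loss.

def align_labels_with_tokens(labels, word_ids):
    new_labels = []
    previous_word = None
    for word_id in word_ids:
        if word_id is None:             # special token ([CLS], [SEP], padding)
            new_labels.append(-100)
        elif word_id != previous_word:  # first piece of a new word
            new_labels.append(labels[word_id])
        else:                           # continuation piece of the same word
            label = labels[word_id]
            # In the usual scheme B- labels have odd ids; continuation
            # pieces get the matching I- label (id + 1).
            new_labels.append(label + 1 if label % 2 == 1 else label)
        previous_word = word_id
    return new_labels

# Word-level labels for "John Smith" (1 = B-PER, 2 = I-PER), with "Smith"
# split into two subword pieces by the (mocked) tokenizer.
labels = [1, 2]
word_ids = [None, 0, 1, 1, None]        # [CLS] John Sm ##ith [SEP]
print(align_labels_with_tokens(labels, word_ids))
# → [-100, 1, 2, 2, -100]
```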

### Training hyperparameters

The following hyperparameters were used during training:

### Training results

On conll-endava (mixed, second version):

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2091     | 0.0391          | 0     |
| 0.0336     | 0.0322          | 1     |
| 0.0190     | 0.0310          | 2     |

On NERPERDemo:

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0202     | 0.0005          | 0     |
| 0.0009     | 0.0002          | 1     |
| 0.0005     | 0.0002          | 2     |

On wikiann:

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2975     | 0.2869          | 0     |
| 0.1755     | 0.2934          | 1     |
| 0.1217     | 0.3073          | 2     |
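Note that the validation loss on wikiann increases after the first epoch while it keeps decreasing on conll-endava, which matters when picking a checkpoint. A minimal sketch that selects the best epoch per dataset from the numbers above (the selection logic is illustrative, not part of the card's training code):

```python
# Validation losses copied from the tables above; the epoch-selection
# logic itself is only an illustrative sketch, not the card's code.
val_losses = {
    "conll-endava": [0.0391, 0.0322, 0.0310],
    "NERPERDemo":   [0.0005, 0.0002, 0.0002],
    "wikiann":      [0.2869, 0.2934, 0.3073],
}

# Pick the epoch with the lowest validation loss per dataset
# (ties resolve to the earliest epoch).
best = {name: min(range(len(l)), key=l.__getitem__)
        for name, l in val_losses.items()}
print(best)
# → {'conll-endava': 2, 'NERPERDemo': 1, 'wikiann': 0}
```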

### Framework versions