---
tags:
- generated_from_trainer
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pretoxtm-ner

This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset. Per-epoch results on the evaluation set are reported in the training results table below.

## Model description

More information needed

## Intended uses & limitations

More information needed
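A minimal usage sketch with the standard `transformers` token-classification pipeline; the local path `./pretoxtm-ner` and the example sentence are illustrative assumptions, so substitute the published Hub id of this checkpoint:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Path or Hub id of the fine-tuned checkpoint ("./pretoxtm-ner" is an assumed local path).
model_id = "./pretoxtm-ner"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Group sub-word predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

# Illustrative preclinical-toxicology sentence.
print(ner("Increased alanine aminotransferase was observed in high-dose males at 100 mg/kg/day."))
```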

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
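For orientation only, below is a minimal sketch of a `transformers` `Trainer` token-classification fine-tune of `dmis-lab/biobert-v1.1`; the label scheme, toy data, and all hyperparameter values are illustrative placeholders, not the configuration actually used for this model.

```python
from datasets import Dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Illustrative label scheme; the actual labels of this model are not listed in this card.
labels = ["O", "B-FINDING", "I-FINDING"]

base = "dmis-lab/biobert-v1.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=len(labels))

# Tiny in-memory dataset so the sketch runs end to end; the real training data is not described here.
raw = Dataset.from_dict({
    "tokens": [["Increased", "ALT", "was", "observed"]],
    "ner_tags": [[1, 2, 0, 0]],
})

def encode(example):
    enc = tokenizer(example["tokens"], is_split_into_words=True, truncation=True)
    # Naive alignment: copy each word's tag to all of its sub-words, -100 for special tokens.
    enc["labels"] = [-100 if w is None else example["ner_tags"][w] for w in enc.word_ids()]
    return enc

encoded = raw.map(encode, remove_columns=raw.column_names)

args = TrainingArguments(
    output_dir="pretoxtm-ner",
    num_train_epochs=3,               # the results table below reports 3 epochs
    learning_rate=2e-5,               # placeholder value
    per_device_train_batch_size=16,   # placeholder value
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded,
    eval_dataset=encoded,             # placeholder: reuses the toy data for evaluation
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
trainer.evaluate()
```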

### Training results

Entity columns report entity-level precision / recall / F1 on the evaluation set, with support in parentheses.

| Training Loss | Epoch | Step | Validation Loss | Study Test | Manifestation | Finding | Specimen | Dose | Dose Qualification | Sex | Group |
|:-------------:|:-----:|:----:|:---------------:|:----------:|:-------------:|:-------:|:--------:|:----:|:------------------:|:---:|:-----:|
| No log | 1.0 | 257 | 0.2315 | 0.7804 / 0.8846 / 0.8292 (719) | 0.8481 / 0.8997 / 0.8732 (329) | 0.8013 / 0.7110 / 0.7534 (1429) | 0.7451 / 0.8469 / 0.7928 (725) | 0.9078 / 0.9333 / 0.9204 (570) | 0.7031 / 0.7895 / 0.7438 (57) | 0.9242 / 0.9653 / 0.9443 (202) | 0.6250 / 0.8929 / 0.7353 (112) |
| 0.0608 | 2.0 | 514 | 0.2285 | 0.8037 / 0.9110 / 0.8540 (719) | 0.8468 / 0.8906 / 0.8681 (329) | 0.7780 / 0.8020 / 0.7898 (1429) | 0.8053 / 0.8441 / 0.8242 (725) | 0.9048 / 0.9333 / 0.9188 (570) | 0.7368 / 0.7368 / 0.7368 (57) | 0.9289 / 0.9703 / 0.9492 (202) | 0.6783 / 0.8661 / 0.7608 (112) |
| 0.0608 | 3.0 | 771 | 0.2356 | 0.8107 / 0.9054 / 0.8555 (719) | 0.8429 / 0.8967 / 0.8689 (329) | 0.7924 / 0.7908 / 0.7916 (1429) | 0.7935 / 0.8428 / 0.8174 (725) | 0.8894 / 0.9316 / 0.9100 (570) | 0.7500 / 0.7368 / 0.7434 (57) | 0.9282 / 0.9604 / 0.9440 (202) | 0.6992 / 0.8304 / 0.7592 (112) |
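The per-entity scores above are in the format produced by seqeval-style entity-level evaluation (precision, recall, F1, and support per entity type). A minimal sketch of computing such scores with the `evaluate` library's `seqeval` metric; the tag names and sequences below are illustrative:

```python
import evaluate

# Entity-level scoring of BIO-tagged sequences (requires the seqeval package).
seqeval = evaluate.load("seqeval")

# Toy gold/predicted tag sequences with an illustrative label scheme.
references = [["O", "B-FINDING", "I-FINDING", "O", "B-DOSE"]]
predictions = [["O", "B-FINDING", "O", "O", "B-DOSE"]]

results = seqeval.compute(predictions=predictions, references=references)

# Per-entity dicts of the form {'precision': ..., 'recall': ..., 'f1': ..., 'number': ...},
# plus overall_precision / overall_recall / overall_f1 / overall_accuracy.
# Note that the partially overlapping FINDING span counts as a miss at the entity level.
print(results["FINDING"])
print(results["DOSE"])
print(results["overall_f1"])
```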

### Framework versions