# layoutlm-funsd
This model is a fine-tuned version of microsoft/layoutlm-base-uncased on the FUNSD dataset. It achieves the following results on the evaluation set:

- Loss: 0.6993
- Answer: precision 0.7155, recall 0.8208, F1 0.7645 (support: 809)
- Header: precision 0.2782, recall 0.3109, F1 0.2937 (support: 119)
- Question: precision 0.7833, recall 0.8282, F1 0.8051 (support: 1065)
- Overall Precision: 0.7238
- Overall Recall: 0.7943
- Overall F1: 0.7574
- Overall Accuracy: 0.8095
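The overall figures are micro-averages over the three entity types. Each entity's true-positive and predicted-span counts can be recovered from its precision, recall, and support, and pooling those counts reproduces the overall numbers. A minimal sketch using the final evaluation results reported for this model:

```python
# Per-entity evaluation results (precision, recall, support) for this model.
entities = {
    "Answer":   {"precision": 0.7155172413793104, "recall": 0.8207663782447466, "number": 809},
    "Header":   {"precision": 0.2781954887218045, "recall": 0.31092436974789917, "number": 119},
    "Question": {"precision": 0.783303730017762,  "recall": 0.828169014084507,  "number": 1065},
}

tp = pred = gold = 0.0
for m in entities.values():
    true_pos = m["recall"] * m["number"]   # correctly predicted spans of this type
    tp += true_pos
    pred += true_pos / m["precision"]      # all spans predicted as this type
    gold += m["number"]                    # all gold spans of this type

overall_precision = tp / pred
overall_recall = tp / gold
overall_f1 = 2 * tp / (pred + gold)

print(round(overall_precision, 4), round(overall_recall, 4), round(overall_f1, 4))
# → 0.7238 0.7943 0.7574
```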
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
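With `lr_scheduler_type: linear` and no warmup reported, the learning rate decays linearly from 3e-05 down to zero over the total number of optimizer steps (150 here, matching the 10 steps per epoch over 15 epochs in the results table below). A minimal sketch of that schedule, mirroring what `transformers.get_linear_schedule_with_warmup` computes under these settings (zero warmup steps is an assumption, since the card does not report any):

```python
def linear_lr(step, base_lr=3e-5, warmup_steps=0, total_steps=150):
    """Learning rate at a given optimizer step: linear warmup, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * (step / max(1, warmup_steps))
    remaining = max(0.0, total_steps - step)
    return base_lr * (remaining / max(1, total_steps - warmup_steps))

print(linear_lr(0))    # start of training → 3e-05
print(linear_lr(75))   # halfway → 1.5e-05
print(linear_lr(150))  # end of training → 0.0
```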
### Training results
Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
---|---|---|---|---|---|---|---|---|---|---|
1.7894 | 1.0 | 10 | 1.6149 | {'precision': 0.029508196721311476, 'recall': 0.03337453646477132, 'f1': 0.031322505800464036, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.18211920529801323, 'recall': 0.15492957746478872, 'f1': 0.167427701674277, 'number': 1065} | 0.1053 | 0.0963 | 0.1006 | 0.3666 |
1.4628 | 2.0 | 20 | 1.2718 | {'precision': 0.21764705882352942, 'recall': 0.22867737948084055, 'f1': 0.2230259192284509, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4429190751445087, 'recall': 0.5755868544600939, 'f1': 0.5006124948958759, 'number': 1065} | 0.3572 | 0.4004 | 0.3776 | 0.5813 |
1.1079 | 3.0 | 30 | 0.9869 | {'precision': 0.42190889370932755, 'recall': 0.48084054388133496, 'f1': 0.4494511842865395, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.6041666666666666, 'recall': 0.6807511737089202, 'f1': 0.640176600441501, 'number': 1065} | 0.5215 | 0.5590 | 0.5396 | 0.6898 |
0.8376 | 4.0 | 40 | 0.8064 | {'precision': 0.6006036217303823, 'recall': 0.7379480840543882, 'f1': 0.6622296173044925, 'number': 809} | {'precision': 0.04918032786885246, 'recall': 0.025210084033613446, 'f1': 0.03333333333333334, 'number': 119} | {'precision': 0.6531302876480541, 'recall': 0.7248826291079812, 'f1': 0.6871384067645749, 'number': 1065} | 0.6133 | 0.6884 | 0.6487 | 0.7512 |
0.6793 | 5.0 | 50 | 0.7442 | {'precision': 0.6339468302658486, 'recall': 0.7663782447466008, 'f1': 0.693900391717963, 'number': 809} | {'precision': 0.15306122448979592, 'recall': 0.12605042016806722, 'f1': 0.1382488479262673, 'number': 119} | {'precision': 0.7100802854594113, 'recall': 0.7474178403755869, 'f1': 0.7282708142726441, 'number': 1065} | 0.6513 | 0.7180 | 0.6831 | 0.7720 |
0.5643 | 6.0 | 60 | 0.6937 | {'precision': 0.6551373346897253, 'recall': 0.796044499381953, 'f1': 0.7187499999999999, 'number': 809} | {'precision': 0.24175824175824176, 'recall': 0.18487394957983194, 'f1': 0.20952380952380953, 'number': 119} | {'precision': 0.71, 'recall': 0.8, 'f1': 0.752317880794702, 'number': 1065} | 0.6675 | 0.7617 | 0.7115 | 0.7895 |
0.4869 | 7.0 | 70 | 0.6780 | {'precision': 0.676130389064143, 'recall': 0.7948084054388134, 'f1': 0.7306818181818182, 'number': 809} | {'precision': 0.2072072072072072, 'recall': 0.19327731092436976, 'f1': 0.2, 'number': 119} | {'precision': 0.7147568013190437, 'recall': 0.8140845070422535, 'f1': 0.7611940298507464, 'number': 1065} | 0.6738 | 0.7692 | 0.7184 | 0.7962 |
0.439 | 8.0 | 80 | 0.6706 | {'precision': 0.696068012752391, 'recall': 0.8096415327564895, 'f1': 0.7485714285714284, 'number': 809} | {'precision': 0.2184873949579832, 'recall': 0.2184873949579832, 'f1': 0.2184873949579832, 'number': 119} | {'precision': 0.7454858125537404, 'recall': 0.8140845070422535, 'f1': 0.7782764811490125, 'number': 1065} | 0.6964 | 0.7767 | 0.7343 | 0.8022 |
0.3922 | 9.0 | 90 | 0.6689 | {'precision': 0.707742639040349, 'recall': 0.8022249690976514, 'f1': 0.7520278099652375, 'number': 809} | {'precision': 0.21774193548387097, 'recall': 0.226890756302521, 'f1': 0.2222222222222222, 'number': 119} | {'precision': 0.7601380500431406, 'recall': 0.8272300469483568, 'f1': 0.7922661870503598, 'number': 1065} | 0.7077 | 0.7812 | 0.7427 | 0.8038 |
0.3518 | 10.0 | 100 | 0.6692 | {'precision': 0.7065677966101694, 'recall': 0.8244746600741656, 'f1': 0.7609811751283514, 'number': 809} | {'precision': 0.23529411764705882, 'recall': 0.23529411764705882, 'f1': 0.23529411764705882, 'number': 119} | {'precision': 0.7663469921534438, 'recall': 0.8253521126760563, 'f1': 0.7947558770343581, 'number': 1065} | 0.7122 | 0.7898 | 0.7490 | 0.8092 |
0.3165 | 11.0 | 110 | 0.6863 | {'precision': 0.714902807775378, 'recall': 0.8182941903584673, 'f1': 0.7631123919308358, 'number': 809} | {'precision': 0.2631578947368421, 'recall': 0.29411764705882354, 'f1': 0.27777777777777773, 'number': 119} | {'precision': 0.7724867724867724, 'recall': 0.8225352112676056, 'f1': 0.7967257844474761, 'number': 1065} | 0.7173 | 0.7893 | 0.7516 | 0.8083 |
0.3043 | 12.0 | 120 | 0.6898 | {'precision': 0.7173678532901834, 'recall': 0.8220024721878862, 'f1': 0.7661290322580644, 'number': 809} | {'precision': 0.27692307692307694, 'recall': 0.3025210084033613, 'f1': 0.2891566265060241, 'number': 119} | {'precision': 0.7812223206377326, 'recall': 0.828169014084507, 'f1': 0.8040109389243391, 'number': 1065} | 0.7242 | 0.7943 | 0.7576 | 0.8084 |
0.2853 | 13.0 | 130 | 0.6935 | {'precision': 0.7167755991285403, 'recall': 0.8133498145859085, 'f1': 0.7620150550086855, 'number': 809} | {'precision': 0.30303030303030304, 'recall': 0.33613445378151263, 'f1': 0.3187250996015936, 'number': 119} | {'precision': 0.7855251544571933, 'recall': 0.8356807511737089, 'f1': 0.8098271155595996, 'number': 1065} | 0.7274 | 0.7968 | 0.7605 | 0.8109 |
0.2724 | 14.0 | 140 | 0.6985 | {'precision': 0.7212581344902386, 'recall': 0.8220024721878862, 'f1': 0.7683419988445985, 'number': 809} | {'precision': 0.2900763358778626, 'recall': 0.31932773109243695, 'f1': 0.304, 'number': 119} | {'precision': 0.786096256684492, 'recall': 0.828169014084507, 'f1': 0.8065843621399177, 'number': 1065} | 0.7287 | 0.7953 | 0.7606 | 0.8091 |
0.2741 | 15.0 | 150 | 0.6993 | {'precision': 0.7155172413793104, 'recall': 0.8207663782447466, 'f1': 0.7645365572826713, 'number': 809} | {'precision': 0.2781954887218045, 'recall': 0.31092436974789917, 'f1': 0.2936507936507936, 'number': 119} | {'precision': 0.783303730017762, 'recall': 0.828169014084507, 'f1': 0.8051118210862619, 'number': 1065} | 0.7238 | 0.7943 | 0.7574 | 0.8095 |
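Each per-entity F1 in the table is the harmonic mean of that row's precision and recall. A quick spot check against the final-epoch values:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision/recall pairs from the final (epoch 15) row of the table.
print(round(f1(0.7155172413793104, 0.8207663782447466), 4))   # Answer   → 0.7645
print(round(f1(0.2781954887218045, 0.31092436974789917), 4))  # Header   → 0.2937
print(round(f1(0.783303730017762, 0.828169014084507), 4))     # Question → 0.8051
```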
### Framework versions
- Transformers 4.30.2
- PyTorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
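To reproduce this environment, the versions above can be pinned at install time. A sketch assuming the standard PyPI package names; the `+cu118` PyTorch build comes from the CUDA 11.8 wheel index rather than PyPI:

```shell
# Pin the library versions listed above.
pip install transformers==4.30.2 datasets==2.13.1 tokenizers==0.13.3
# PyTorch 2.0.1 built against CUDA 11.8.
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu118
```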