# perioli_vgm_v5.4

This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset. It achieves the following results on the evaluation set:
- Loss: 0.0135
- Precision: 0.9585
- Recall: 0.9665
- F1: 0.9625
- Accuracy: 0.9980
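The precision, recall, and F1 above are entity-level scores of the kind typically reported for token-classification fine-tuning (an assumption; the card does not name the metric library). A minimal pure-Python sketch of entity-level scoring over BIO tag sequences:

```python
def bio_entities(tags):
    """Collect (label, start, end) spans from a BIO tag sequence.

    Sketch only: stray I- tags without a preceding B- are ignored here,
    which may differ from the scorer the original run used.
    """
    entities, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" flushes the last span
        starts_new = tag.startswith("B-")
        if starts_new or tag == "O" or (tag.startswith("I-") and tag[2:] != label):
            if label is not None:
                entities.append((label, start, i))
            start, label = (i, tag[2:]) if starts_new else (None, None)
    return entities

def entity_f1(true_tags, pred_tags):
    """Entity-level precision, recall, F1 (exact span and label match)."""
    gold, pred = set(bio_entities(true_tags)), set(bio_entities(pred_tags))
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

A prediction only counts as correct when both the span boundaries and the label match, which is why entity-level F1 sits well below token accuracy in the table below.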
## Model description
More information needed
## Intended uses & limitations
More information needed
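The card does not include a usage snippet. A hedged sketch of inference with a LayoutLMv3 token-classification checkpoint like this one (the checkpoint path is a placeholder, and `apply_ocr=True` assumes the processor's Tesseract-backed OCR is acceptable for your documents):

```python
# Hedged sketch, not the author's published usage: the checkpoint path
# below is a placeholder, and the label set comes from the model config.
import torch
from PIL import Image
from transformers import AutoModelForTokenClassification, AutoProcessor

def extract_entities(image_path: str, checkpoint: str = "path/to/perioli_vgm_v5.4"):
    """Predict a label for every token of a scanned receipt image."""
    # apply_ocr=True makes the processor run OCR to obtain words and boxes.
    processor = AutoProcessor.from_pretrained(checkpoint, apply_ocr=True)
    model = AutoModelForTokenClassification.from_pretrained(checkpoint)

    image = Image.open(image_path).convert("RGB")
    encoding = processor(image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**encoding).logits  # shape (1, seq_len, num_labels)
    predictions = logits.argmax(-1).squeeze(0).tolist()
    return [model.config.id2label[p] for p in predictions]
```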
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
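The `linear` scheduler decays the learning rate from its initial value to zero over the run. As a minimal sketch (assuming no warmup steps, since none are listed):

```python
# Sketch of the linear LR schedule implied by the hyperparameters above,
# assuming zero warmup steps (the card lists none).
TRAINING_STEPS = 2500
BASE_LR = 1e-5

def linear_lr(step: int, total_steps: int = TRAINING_STEPS, base_lr: float = BASE_LR) -> float:
    """Learning rate at a given step: base_lr at step 0, 0.0 at total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)
```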
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.52 | 100 | 0.0894 | 0.5174 | 0.4351 | 0.4727 | 0.9782 |
| No log | 1.04 | 200 | 0.0414 | 0.6754 | 0.6444 | 0.6595 | 0.9883 |
| No log | 1.55 | 300 | 0.0413 | 0.6481 | 0.7782 | 0.7072 | 0.9874 |
| No log | 2.07 | 400 | 0.0187 | 0.7757 | 0.8536 | 0.8127 | 0.9940 |
| 0.0743 | 2.59 | 500 | 0.0121 | 0.8427 | 0.8745 | 0.8583 | 0.9963 |
| 0.0743 | 3.11 | 600 | 0.0178 | 0.8192 | 0.8912 | 0.8537 | 0.9951 |
| 0.0743 | 3.63 | 700 | 0.0166 | 0.8353 | 0.8912 | 0.8623 | 0.9954 |
| 0.0743 | 4.15 | 800 | 0.0119 | 0.8051 | 0.9163 | 0.8571 | 0.9954 |
| 0.0743 | 4.66 | 900 | 0.0122 | 0.9224 | 0.9456 | 0.9339 | 0.9972 |
| 0.0095 | 5.18 | 1000 | 0.0149 | 0.9313 | 0.9079 | 0.9195 | 0.9971 |
| 0.0095 | 5.7 | 1100 | 0.0146 | 0.9578 | 0.9498 | 0.9538 | 0.9979 |
| 0.0095 | 6.22 | 1200 | 0.0164 | 0.9309 | 0.9582 | 0.9443 | 0.9969 |
| 0.0095 | 6.74 | 1300 | 0.0183 | 0.8814 | 0.9331 | 0.9065 | 0.9964 |
| 0.0095 | 7.25 | 1400 | 0.0132 | 0.9540 | 0.9540 | 0.9540 | 0.9979 |
| 0.0025 | 7.77 | 1500 | 0.0142 | 0.9456 | 0.9456 | 0.9456 | 0.9977 |
| 0.0025 | 8.29 | 1600 | 0.0136 | 0.9617 | 0.9456 | 0.9536 | 0.9979 |
| 0.0025 | 8.81 | 1700 | 0.0143 | 0.9494 | 0.9414 | 0.9454 | 0.9977 |
| 0.0025 | 9.33 | 1800 | 0.0144 | 0.9388 | 0.9623 | 0.9504 | 0.9977 |
| 0.0025 | 9.84 | 1900 | 0.0121 | 0.9540 | 0.9540 | 0.9540 | 0.9979 |
| 0.0012 | 10.36 | 2000 | 0.0127 | 0.9707 | 0.9707 | 0.9707 | 0.9982 |
| 0.0012 | 10.88 | 2100 | 0.0140 | 0.9465 | 0.9623 | 0.9544 | 0.9979 |
| 0.0012 | 11.4 | 2200 | 0.0129 | 0.9506 | 0.9665 | 0.9585 | 0.9982 |
| 0.0012 | 11.92 | 2300 | 0.0126 | 0.9506 | 0.9665 | 0.9585 | 0.9982 |
| 0.0012 | 12.44 | 2400 | 0.0140 | 0.9585 | 0.9665 | 0.9625 | 0.9980 |
| 0.0007 | 12.95 | 2500 | 0.0135 | 0.9585 | 0.9665 | 0.9625 | 0.9980 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.2.2
- Tokenizers 0.13.3