# swin-large-patch4-window7-224-fv-finetuned-memes

This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224](https://huggingface.co/microsoft/swin-large-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set (an inference sketch follows the metrics):
- Loss: 0.6502
- Accuracy: 0.8601
- Precision: 0.8582
- Recall: 0.8601
- F1: 0.8583
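
A minimal inference sketch using the `transformers` image-classification pipeline is shown below. The repository id `your-username/swin-large-patch4-window7-224-fv-finetuned-memes` and the example image path are placeholders, not values from this card; substitute the actual Hub id or a local checkpoint directory.

```python
from transformers import pipeline

# Placeholder repository id; replace with the real Hub id or a local checkpoint path.
classifier = pipeline(
    "image-classification",
    model="your-username/swin-large-patch4-window7-224-fv-finetuned-memes",
)

# Classify a single meme image (a local file path or a PIL.Image both work).
predictions = classifier("meme.jpg")
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.4f}")
```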
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
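
These values map roughly onto `transformers.TrainingArguments` as sketched below; `output_dir` and the evaluation/save strategy are illustrative assumptions, not settings read from the original training script. The Adam betas and epsilon listed above are the library defaults, so they are not passed explicitly.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above expressed as TrainingArguments.
# output_dir and the evaluation/save strategy are assumptions.
training_args = TrainingArguments(
    output_dir="swin-large-patch4-window7-224-fv-finetuned-memes",
    learning_rate=0.00012,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=4,   # 64 * 4 = 256 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    evaluation_strategy="epoch",     # assumption: the results below are logged once per epoch
    save_strategy="epoch",
)
```
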
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.2077 | 0.99  | 20  | 0.9499 | 0.6461 | 0.6764 | 0.6461 | 0.5863 |
| 0.5687 | 1.99  | 40  | 0.5365 | 0.7975 | 0.8018 | 0.7975 | 0.7924 |
| 0.3607 | 2.99  | 60  | 0.4007 | 0.8423 | 0.8419 | 0.8423 | 0.8398 |
| 0.203  | 3.99  | 80  | 0.3751 | 0.8509 | 0.8502 | 0.8509 | 0.8503 |
| 0.1728 | 4.99  | 100 | 0.4168 | 0.8509 | 0.8519 | 0.8509 | 0.8506 |
| 0.0963 | 5.99  | 120 | 0.4351 | 0.8586 | 0.8573 | 0.8586 | 0.8555 |
| 0.0956 | 6.99  | 140 | 0.4415 | 0.8547 | 0.8542 | 0.8547 | 0.8541 |
| 0.079  | 7.99  | 160 | 0.5312 | 0.8501 | 0.8475 | 0.8501 | 0.8459 |
| 0.0635 | 8.99  | 180 | 0.5376 | 0.8601 | 0.8578 | 0.8601 | 0.8577 |
| 0.0593 | 9.99  | 200 | 0.5060 | 0.8609 | 0.8615 | 0.8609 | 0.8604 |
| 0.0656 | 10.99 | 220 | 0.4997 | 0.8617 | 0.8573 | 0.8617 | 0.8587 |
| 0.0561 | 11.99 | 240 | 0.5430 | 0.8586 | 0.8604 | 0.8586 | 0.8589 |
| 0.0523 | 12.99 | 260 | 0.5354 | 0.8624 | 0.8643 | 0.8624 | 0.8626 |
| 0.0489 | 13.99 | 280 | 0.5539 | 0.8609 | 0.8572 | 0.8609 | 0.8577 |
| 0.0487 | 14.99 | 300 | 0.5785 | 0.8609 | 0.8591 | 0.8609 | 0.8591 |
| 0.0485 | 15.99 | 320 | 0.6186 | 0.8601 | 0.8578 | 0.8601 | 0.8573 |
| 0.0518 | 16.99 | 340 | 0.6342 | 0.8624 | 0.8612 | 0.8624 | 0.8606 |
| 0.0432 | 17.99 | 360 | 0.6302 | 0.8586 | 0.8598 | 0.8586 | 0.8580 |
| 0.0469 | 18.99 | 380 | 0.6323 | 0.8617 | 0.8606 | 0.8617 | 0.8604 |
| 0.0426 | 19.99 | 400 | 0.6502 | 0.8601 | 0.8582 | 0.8601 | 0.8583 |
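
The precision, recall, and F1 columns above are consistent with weighted averaging (note that recall matches accuracy in every row). A possible `compute_metrics` implementation along those lines is sketched below; the weighted-averaging choice and the scikit-learn dependency are assumptions, not confirmed by the original training script.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Assumed metric function: accuracy plus weighted-average precision/recall/F1."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, predictions, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, predictions),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```
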
### Framework versions

- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1.dev0
- Tokenizers 0.13.1