# glpn-nyu-finetuned-diode-221121-063504
This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset. It achieves the following results on the evaluation set:
- Loss: 0.3533
- Mae: 0.2668
- Rmse: 0.3716
- Abs Rel: 0.3427
- Log Mae: 0.1167
- Log Rmse: 0.1703
- Delta1: 0.5522
- Delta2: 0.8362
- Delta3: 0.9382
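The metrics above are the standard monocular depth-estimation measures: absolute and log-scale errors (MAE, RMSE, Abs Rel, Log MAE, Log RMSE) and threshold accuracies (Delta1–Delta3, the fraction of pixels whose depth ratio falls within 1.25, 1.25², and 1.25³). As a minimal sketch of how such metrics are computed (not the exact evaluation script used here; in particular, whether the log metrics use log10 or the natural log varies between implementations — log10 is assumed below):

```python
import numpy as np

def depth_metrics(pred, target):
    """Compute common monocular depth-estimation metrics (illustrative sketch).

    `pred` and `target` are positive depth arrays of the same shape.
    """
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)

    mae = np.mean(np.abs(pred - target))
    rmse = np.sqrt(np.mean((pred - target) ** 2))
    abs_rel = np.mean(np.abs(pred - target) / target)
    # log10 assumed; some codebases use the natural log instead
    log_mae = np.mean(np.abs(np.log10(pred) - np.log10(target)))
    log_rmse = np.sqrt(np.mean((np.log10(pred) - np.log10(target)) ** 2))

    # deltaN = fraction of pixels with max(pred/target, target/pred) < 1.25**N
    ratio = np.maximum(pred / target, target / pred)
    return {
        "mae": mae,
        "rmse": rmse,
        "abs_rel": abs_rel,
        "log_mae": log_mae,
        "log_rmse": log_rmse,
        "delta1": np.mean(ratio < 1.25),
        "delta2": np.mean(ratio < 1.25 ** 2),
        "delta3": np.mean(ratio < 1.25 ** 3),
    }
```

A perfect prediction yields zero errors and delta accuracies of 1.0.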
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
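With a linear scheduler and a warmup ratio of 0.1, the learning rate ramps linearly from 0 to 1e-05 over the first 10% of optimization steps, then decays linearly back to 0. A minimal sketch of this schedule, assuming the 1080 total steps implied by the results table (15 epochs × 72 steps per epoch):

```python
def lr_at_step(step, base_lr=1e-5, total_steps=1080, warmup_ratio=0.1):
    """Linear warmup followed by linear decay, mirroring the `linear`
    scheduler in transformers (illustrative sketch, not the exact code)."""
    warmup_steps = int(total_steps * warmup_ratio)  # 108 steps here
    if step < warmup_steps:
        # ramp from 0 up to base_lr
        return base_lr * step / warmup_steps
    # decay from base_lr down to 0 over the remaining steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

The peak learning rate is reached at step 108 and the rate hits zero exactly at the final step.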
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae    | Rmse   | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:|
| 1.3991        | 1.0   | 72   | 1.2199          | 3.6023 | 3.6519 | 5.2780  | 0.7010  | 0.7461   | 0.0    | 0.0007 | 0.0616 |
| 1.1099        | 2.0   | 144  | 0.7471          | 1.2562 | 1.5028 | 1.6644  | 0.3550  | 0.4165   | 0.0965 | 0.2342 | 0.4292 |
| 0.5036        | 3.0   | 216  | 0.4876          | 0.5019 | 0.6198 | 0.7615  | 0.2000  | 0.2599   | 0.2878 | 0.5643 | 0.7803 |
| 0.4157        | 4.0   | 288  | 0.3789          | 0.3008 | 0.4211 | 0.3793  | 0.1291  | 0.1840   | 0.4961 | 0.7961 | 0.9261 |
| 0.4043        | 5.0   | 360  | 0.3795          | 0.3025 | 0.4117 | 0.4028  | 0.1303  | 0.1850   | 0.4889 | 0.7892 | 0.9278 |
| 0.3638        | 6.0   | 432  | 0.3790          | 0.3022 | 0.4019 | 0.4175  | 0.1313  | 0.1862   | 0.4851 | 0.7889 | 0.9262 |
| 0.3532        | 7.0   | 504  | 0.3605          | 0.2756 | 0.3864 | 0.3447  | 0.1201  | 0.1732   | 0.5397 | 0.8202 | 0.9330 |
| 0.3087        | 8.0   | 576  | 0.3599          | 0.2781 | 0.3896 | 0.3365  | 0.1206  | 0.1722   | 0.5312 | 0.8183 | 0.9332 |
| 0.3232        | 9.0   | 648  | 0.3613          | 0.2772 | 0.3879 | 0.3444  | 0.1204  | 0.1733   | 0.5341 | 0.8237 | 0.9334 |
| 0.3072        | 10.0  | 720  | 0.3570          | 0.2752 | 0.3794 | 0.3582  | 0.1195  | 0.1731   | 0.5341 | 0.8270 | 0.9374 |
| 0.2673        | 11.0  | 792  | 0.3633          | 0.2747 | 0.3838 | 0.3390  | 0.1207  | 0.1728   | 0.5330 | 0.8221 | 0.9333 |
| 0.3222        | 12.0  | 864  | 0.3548          | 0.2713 | 0.3783 | 0.3441  | 0.1180  | 0.1711   | 0.5448 | 0.8315 | 0.9367 |
| 0.3072        | 13.0  | 936  | 0.3532          | 0.2668 | 0.3700 | 0.3441  | 0.1168  | 0.1701   | 0.5502 | 0.8353 | 0.9387 |
| 0.3214        | 14.0  | 1008 | 0.3553          | 0.2674 | 0.3747 | 0.3322  | 0.1177  | 0.1701   | 0.5472 | 0.8324 | 0.9355 |
| 0.3406        | 15.0  | 1080 | 0.3533          | 0.2668 | 0.3716 | 0.3427  | 0.1167  | 0.1703   | 0.5522 | 0.8362 | 0.9382 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu116
- Tokenizers 0.13.2