# conditional-detr-resnet-50_til-2023-cv-9
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on an unknown dataset. It achieves the following results on the evaluation set (a sketch of how these metrics are computed, and what -1.0 indicates, follows the list):
- Loss: 0.2502
- Loss CE: 0.0010
- Loss Bbox: 0.0160
- Loss GIoU: 0.0842
- Cardinality Error: 2.1237
- mAP: 0.8063
- mAP@50: 0.9901
- mAP@75: 0.9609
- mAP (small): 0.8063
- mAP (medium): -1.0
- mAP (large): -1.0
- mAR@1: 0.4097
- mAR@10: 0.8555
- mAR@100: 0.8555
- mAR (small): 0.8555
- mAR (medium): -1.0
- mAR (large): -1.0
- mAP (per class): -1.0
- mAR@100 (per class): -1.0
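The detection metrics follow the COCO convention (mAP averaged over IoU thresholds 0.50:0.95, plus fixed-threshold and size-bucketed variants); a value of -1.0 simply means there was nothing to evaluate in that bucket, for example no medium or large boxes in the evaluation set and no per-class breakdown requested. The snippet below is an illustrative sketch of how such values are typically produced with `torchmetrics`' `MeanAveragePrecision`; the use of torchmetrics and the toy boxes are assumptions, not the original evaluation code.

```python
# Illustrative computation of COCO-style mAP/mAR with torchmetrics
# (assumed here; the original evaluation code was not provided).
# The boxes below are toy values in (x_min, y_min, x_max, y_max) format.
import torch
from torchmetrics.detection import MeanAveragePrecision

metric = MeanAveragePrecision(box_format="xyxy", iou_type="bbox")

preds = [
    {
        "boxes": torch.tensor([[10.0, 10.0, 50.0, 50.0]]),
        "scores": torch.tensor([0.95]),
        "labels": torch.tensor([0]),
    }
]
targets = [
    {
        "boxes": torch.tensor([[12.0, 11.0, 49.0, 52.0]]),
        "labels": torch.tensor([0]),
    }
]

metric.update(preds, targets)
results = metric.compute()

# Keys include map, map_50, map_75, map_small, mar_1, mar_10, mar_100, ...
# A value of -1.0 means that bucket had nothing to evaluate.
print({k: v.item() for k, v in results.items() if v.numel() == 1})
```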
## Model description
More information needed
## Intended uses & limitations
More information needed
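No usage guidance was provided with this card. As a starting point, the sketch below shows standard `transformers` inference for a Conditional DETR object-detection checkpoint; the checkpoint path, image file, and score threshold are placeholders and assumptions, not values from the original authors.

```python
# Illustrative inference sketch for a fine-tuned Conditional DETR checkpoint.
# "path/to/conditional-detr-resnet-50_til-2023-cv-9" and "example.jpg" are
# placeholders; point them at the actual checkpoint and an input image.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

checkpoint = "path/to/conditional-detr-resnet-50_til-2023-cv-9"  # placeholder
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForObjectDetection.from_pretrained(checkpoint)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits and normalized boxes into thresholded detections
# in absolute (x_min, y_min, x_max, y_max) pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes  # threshold is arbitrary
)[0]

for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score.item():.2f} at {box.tolist()}")
```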
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
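For reference, these settings correspond roughly to the `transformers.TrainingArguments` configuration below; the output directory is a placeholder and anything not listed above is left at its default, since the actual training script was not provided.

```python
# Rough reconstruction of the hyperparameters listed above as
# transformers.TrainingArguments; output_dir is a placeholder and
# everything not listed in this card is left at its default value.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="conditional-detr-resnet-50_til-2023-cv-9",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # "Native AMP" mixed-precision training
)
```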
### Training results
Training Loss | Epoch | Step | Validation Loss | Loss CE | Loss Bbox | Loss GIoU | Cardinality Error | mAP | mAP@50 | mAP@75 | mAP (small) | mAP (medium) | mAP (large) | mAR@1 | mAR@10 | mAR@100 | mAR (small) | mAR (medium) | mAR (large) | mAP (per class) | mAR@100 (per class) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.4695 | 1.0 | 708 | 0.4327 | 0.0120 | 0.0256 | 0.1404 | 2.1237 | 0.7356 | 0.9796 | 0.9229 | 0.7356 | -1.0 | -1.0 | 0.3810 | 0.7910 | 0.7910 | 0.7910 | -1.0 | -1.0 | -1.0 | -1.0 |
0.2915 | 2.0 | 1416 | 0.3432 | 0.0056 | 0.0217 | 0.1118 | 2.1237 | 0.7640 | 0.9892 | 0.9391 | 0.7640 | -1.0 | -1.0 | 0.3900 | 0.8128 | 0.8128 | 0.8128 | -1.0 | -1.0 | -1.0 | -1.0 |
0.2713 | 3.0 | 2124 | 0.3150 | 0.0063 | 0.0194 | 0.1026 | 2.1237 | 0.7819 | 0.9894 | 0.9494 | 0.7819 | -1.0 | -1.0 | 0.3977 | 0.8274 | 0.8274 | 0.8274 | -1.0 | -1.0 | -1.0 | -1.0 |
0.2583 | 4.0 | 2832 | 0.2754 | 0.0026 | 0.0174 | 0.0915 | 2.1237 | 0.7931 | 0.9898 | 0.9515 | 0.7931 | -1.0 | -1.0 | 0.4026 | 0.8387 | 0.8387 | 0.8387 | -1.0 | -1.0 | -1.0 | -1.0 |
0.2264 | 5.0 | 3540 | 0.2768 | 0.0019 | 0.0178 | 0.0921 | 2.1237 | 0.8011 | 0.9899 | 0.9623 | 0.8011 | -1.0 | -1.0 | 0.4057 | 0.8452 | 0.8452 | 0.8452 | -1.0 | -1.0 | -1.0 | -1.0 |
0.2841 | 6.0 | 4248 | 0.3362 | 0.0049 | 0.0207 | 0.1115 | 2.1237 | 0.7973 | 0.9900 | 0.9614 | 0.7973 | -1.0 | -1.0 | 0.4043 | 0.8434 | 0.8434 | 0.8434 | -1.0 | -1.0 | -1.0 | -1.0 |
0.2929 | 7.0 | 4956 | 0.3310 | 0.0078 | 0.0203 | 0.1071 | 2.1237 | 0.7986 | 0.9899 | 0.9616 | 0.7986 | -1.0 | -1.0 | 0.4053 | 0.8445 | 0.8445 | 0.8445 | -1.0 | -1.0 | -1.0 | -1.0 |
0.2405 | 8.0 | 5664 | 0.2681 | 0.0017 | 0.0168 | 0.0904 | 2.1237 | 0.8018 | 0.9900 | 0.9619 | 0.8018 | -1.0 | -1.0 | 0.4067 | 0.8481 | 0.8481 | 0.8481 | -1.0 | -1.0 | -1.0 | -1.0 |
0.1851 | 9.0 | 6372 | 0.2680 | 0.0019 | 0.0168 | 0.0901 | 2.1237 | 0.8050 | 0.9900 | 0.9622 | 0.8050 | -1.0 | -1.0 | 0.4081 | 0.8511 | 0.8511 | 0.8511 | -1.0 | -1.0 | -1.0 | -1.0 |
0.1842 | 10.0 | 7080 | 0.2553 | 0.0013 | 0.0163 | 0.0856 | 2.1237 | 0.8074 | 0.9900 | 0.9627 | 0.8074 | -1.0 | -1.0 | 0.4095 | 0.8544 | 0.8544 | 0.8544 | -1.0 | -1.0 | -1.0 | -1.0 |
0.3201 | 11.0 | 7788 | 0.3556 | 0.0034 | 0.0226 | 0.1179 | 2.1237 | 0.8040 | 0.9900 | 0.9617 | 0.8040 | -1.0 | -1.0 | 0.4080 | 0.8511 | 0.8511 | 0.8511 | -1.0 | -1.0 | -1.0 | -1.0 |
0.266 | 12.0 | 8496 | 0.3296 | 0.0021 | 0.0191 | 0.1151 | 2.1237 | 0.7996 | 0.9900 | 0.9600 | 0.7996 | -1.0 | -1.0 | 0.4069 | 0.8489 | 0.8489 | 0.8489 | -1.0 | -1.0 | -1.0 | -1.0 |
0.2086 | 13.0 | 9204 | 0.2753 | 0.0016 | 0.0178 | 0.0916 | 2.1237 | 0.8007 | 0.9900 | 0.9603 | 0.8007 | -1.0 | -1.0 | 0.4076 | 0.8506 | 0.8506 | 0.8506 | -1.0 | -1.0 | -1.0 | -1.0 |
0.1853 | 14.0 | 9912 | 0.2452 | 0.0009 | 0.0156 | 0.0827 | 2.1237 | 0.8037 | 0.9900 | 0.9606 | 0.8037 | -1.0 | -1.0 | 0.4088 | 0.8533 | 0.8533 | 0.8533 | -1.0 | -1.0 | -1.0 | -1.0 |
0.1588 | 15.0 | 10620 | 0.2502 | 0.0010 | 0.0160 | 0.0842 | 2.1237 | 0.8063 | 0.9901 | 0.9609 | 0.8063 | -1.0 | -1.0 | 0.4097 | 0.8555 | 0.8555 | 0.8555 | -1.0 | -1.0 | -1.0 | -1.0 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
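Since exact versions can matter when reproducing the reported metrics, the snippet below checks a runtime environment against the versions listed above; it is a convenience added here, not part of the original card.

```python
# Convenience check that the runtime matches the framework versions above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.29.2",
    "torch": "2.0.0",
    "datasets": "2.1.0",
    "tokenizers": "0.13.3",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, version in expected.items():
    status = "OK" if installed[name].startswith(version) else "differs"
    print(f"{name}: expected {version}, installed {installed[name]} ({status})")
```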