
mit-b0-Image_segmentation_Dominoes_v2

This model is a fine-tuned version of nvidia/mit-b0 (the SegFormer MiT-b0 backbone), adapted with LoRA for a dominoes image-segmentation task.

It achieves the following results on the evaluation set at the final epoch (25); the full per-epoch metrics are in the Training results table below:

- Validation Loss: 0.1615
- Mean IoU: 0.9144
- Mean Accuracy: 0.9465
- Overall Accuracy: 0.9763

Model description

For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Segmentation/Dominoes/Fine-Tuning%20-%20Dominoes%20-%20Image%20Segmentation%20with%20LoRA.ipynb
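To give a sense of how the checkpoint can be used, here is a minimal inference sketch. It assumes the model is published under the repo id DunnBC22/mit-b0-Image_segmentation_Dominoes_v2 and that the LoRA weights were merged into the base SegFormer model before upload; if the repository instead contains a PEFT adapter, it would need to be loaded with `peft` on top of nvidia/mit-b0.

```python
# Minimal inference sketch. Assumptions: the checkpoint lives at the repo id
# below and the LoRA weights were merged into the SegFormer model before upload.
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo_id = "DunnBC22/mit-b0-Image_segmentation_Dominoes_v2"  # assumed repo id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)
model.eval()

image = Image.open("dominoes_photo.jpg").convert("RGB")  # any RGB dominoes photo
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample the logits to the original resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]  # (H, W) tensor of segment ids (0 or 1)
```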

Intended uses & limitations

This model is intended to demonstrate my ability to solve a complex computer-vision problem, in this case image segmentation.

Training and evaluation data

Dataset Source: https://huggingface.co/datasets/adelavega/dominoes_raw
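For orientation, the snippet below shows one way to pull that dataset with the `datasets` library; the split name and column layout are assumptions rather than something documented in this card, so inspect the features before reusing any preprocessing.

```python
# Minimal sketch of loading the source dataset; split/column names are assumptions.
from datasets import load_dataset

ds = load_dataset("adelavega/dominoes_raw", split="train")
print(ds)        # inspect the features to find the image and mask/label columns
print(ds[0])     # look at a single example before wiring up preprocessing
```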

Training procedure

Training hyperparameters

Training used the Hugging Face Trainer with a LoRA adapter configuration; the exact hyperparameter values are documented in the linked notebook above.
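As a rough orientation only, the sketch below shows how a LoRA-wrapped SegFormer checkpoint is typically handed to the Trainer. Every numeric value is a placeholder and the `target_modules` / `modules_to_save` choices are assumptions; the settings actually used for this model are in the linked notebook.

```python
# Orientation-only sketch: all numbers are placeholders, not the values used here.
from transformers import SegformerForSemanticSegmentation, TrainingArguments, Trainer
from peft import LoraConfig, get_peft_model

base = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0", num_labels=2  # two classes: segment 0 and segment 1
)

lora_config = LoraConfig(
    r=16,                                # placeholder rank
    lora_alpha=32,                       # placeholder scaling
    lora_dropout=0.1,                    # placeholder dropout
    target_modules=["query", "value"],   # assumed attention projections
    modules_to_save=["decode_head"],     # keep the segmentation head trainable
)
model = get_peft_model(base, lora_config)

args = TrainingArguments(
    output_dir="mit-b0-Image_segmentation_Dominoes_v2",
    learning_rate=5e-4,                  # placeholder
    per_device_train_batch_size=8,       # placeholder
    num_train_epochs=25,                 # matches the 25 epochs reported below
    evaluation_strategy="epoch",
    save_strategy="epoch",
    remove_unused_columns=False,         # keep image/mask columns for preprocessing
)

# train_ds / eval_ds would be the preprocessed dominoes splits:
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```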

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean IoU | Mean Accuracy | Overall Accuracy | Per-Category IoU (Segment 0) | Per-Category IoU (Segment 1) | Per-Category Accuracy (Segment 0) | Per-Category Accuracy (Segment 1) |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.0461 | 1.0 | 86 | 0.1233 | 0.9150 | 0.9527 | 0.9762 | 0.9721967854031923 | 0.8578619172251059 | 0.9869082633464498 | 0.9184139264010376 |
| 0.0708 | 2.0 | 172 | 0.1366 | 0.9172 | 0.9490 | 0.9771 | 0.9732821853093164 | 0.8611008788165083 | 0.9898473600751747 | 0.9082362492748777 |
| 0.048 | 3.0 | 258 | 0.1260 | 0.9199 | 0.9534 | 0.9777 | 0.9740118174014271 | 0.8658241844233872 | 0.9888392553004053 | 0.9179240730467295 |
| 0.0535 | 4.0 | 344 | 0.1184 | 0.9200 | 0.9520 | 0.9778 | 0.974142444792198 | 0.8658711064023369 | 0.9896291184589182 | 0.9142864290038782 |
| 0.0185 | 5.0 | 430 | 0.1296 | 0.9182 | 0.9477 | 0.9775 | 0.9737715695013129 | 0.8627108292167807 | 0.9910418746696423 | 0.904378218719681 |
| 0.036 | 6.0 | 516 | 0.1410 | 0.9213 | 0.9538 | 0.9782 | 0.9745002408443008 | 0.8680673581922554 | 0.9892677512186527 | 0.9182967669045321 |
| 0.0376 | 7.0 | 602 | 0.1451 | 0.9206 | 0.9550 | 0.9779 | 0.9741455743906073 | 0.8669703237367214 | 0.9883004639689904 | 0.9216576612178001 |
| 0.0186 | 8.0 | 688 | 0.1380 | 0.9175 | 0.9496 | 0.9772 | 0.9733616852468584 | 0.8616466350192237 | 0.9897043519116697 | 0.9094762400541087 |
| 0.0162 | 9.0 | 774 | 0.1459 | 0.9218 | 0.9539 | 0.9783 | 0.9746840649852051 | 0.8688930149000804 | 0.989455276913138 | 0.9182917005479264 |
| 0.0169 | 10.0 | 860 | 0.1467 | 0.9191 | 0.9502 | 0.9776 | 0.9739086600912814 | 0.8642187978193332 | 0.9901195747929759 | 0.9102564589713776 |
| 0.0102 | 11.0 | 946 | 0.1549 | 0.9191 | 0.9524 | 0.9775 | 0.9737696499931041 | 0.8644247331609153 | 0.9889789745698009 | 0.915789237032027 |
| 0.0204 | 12.0 | 1032 | 0.1502 | 0.9215 | 0.9527 | 0.9783 | 0.974639596078376 | 0.8682964916021273 | 0.989902977623774 | 0.9155653673995151 |
| 0.0268 | 13.0 | 1118 | 0.1413 | 0.9194 | 0.9505 | 0.9777 | 0.9740020531855834 | 0.8647199376136 | 0.99011699066189 | 0.9107963425971664 |
| 0.0166 | 14.0 | 1204 | 0.1584 | 0.9173 | 0.9518 | 0.9770 | 0.9731154475737929 | 0.8614276032542578 | 0.9884142831972749 | 0.9152366875147241 |
| 0.0159 | 15.0 | 1290 | 0.1563 | 0.9170 | 0.9492 | 0.9770 | 0.9731832402253996 | 0.8607442858381036 | 0.9896456803899689 | 0.9087960816798012 |
| 0.0211 | 16.0 | 1376 | 0.1435 | 0.9150 | 0.9481 | 0.9764 | 0.9725201360275898 | 0.8574847000491036 | 0.989323310037 | 0.9068449010920532 |
| 0.0128 | 17.0 | 1462 | 0.1421 | 0.9212 | 0.9519 | 0.9782 | 0.9745789801464504 | 0.8677394402794754 | 0.9901920479238856 | 0.9136255861141298 |
| 0.0167 | 18.0 | 1548 | 0.1558 | 0.9217 | 0.9532 | 0.9783 | 0.9746811993626879 | 0.8686470009484697 | 0.9897428202266988 | 0.9166850322093621 |
| 0.0201 | 19.0 | 1634 | 0.1623 | 0.9156 | 0.9484 | 0.9766 | 0.9727184720007118 | 0.8584339325695252 | 0.9894484642039114 | 0.9072695251050635 |
| 0.0133 | 20.0 | 1720 | 0.1573 | 0.9189 | 0.9505 | 0.9776 | 0.9738320500157303 | 0.8640203613069115 | 0.9898665061373113 | 0.9112263496140702 |
| 0.012 | 21.0 | 1806 | 0.1631 | 0.9165 | 0.9472 | 0.9769 | 0.9731344243001482 | 0.8597866189796295 | 0.9904592118400188 | 0.9040137576913626 |
| 0.0148 | 22.0 | 1892 | 0.1629 | 0.9181 | 0.9507 | 0.9773 | 0.9735162429121835 | 0.8627239955489192 | 0.9894034768309156 | 0.9120129014770962 |
| 0.0137 | 23.0 | 1978 | 0.1701 | 0.9136 | 0.9484 | 0.9760 | 0.9719681843338751 | 0.8552607882028388 | 0.9885083690609032 | 0.908250815050119 |
| 0.0142 | 24.0 | 2064 | 0.1646 | 0.9146 | 0.9488 | 0.9763 | 0.9723134197764093 | 0.8568918401744342 | 0.9887405884771245 | 0.9089100747034281 |
| 0.0156 | 25.0 | 2150 | 0.1615 | 0.9144 | 0.9465 | 0.9763 | 0.9723929259786395 | 0.856345354289624 | 0.9898487696012216 | 0.9032139066422469 |
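The metric names above (mean IoU, mean accuracy, overall accuracy, and per-category IoU/accuracy for segments 0 and 1) correspond to the fields returned by the `mean_iou` metric in the Hugging Face `evaluate` library. The sketch below shows how such numbers are computed, with stand-in masks in place of real model output.

```python
# Sketch of computing the table's metrics with evaluate's mean_iou;
# the masks below are stand-ins, not real predictions.
import numpy as np
import evaluate

mean_iou = evaluate.load("mean_iou")

# Build toy (H, W) label maps that contain both segment classes.
pred = np.zeros((64, 64), dtype=np.int64)
pred[:, 32:] = 1   # stand-in predicted mask
ref = np.zeros((64, 64), dtype=np.int64)
ref[:, 30:] = 1    # stand-in ground-truth mask

results = mean_iou.compute(
    predictions=[pred],
    references=[ref],
    num_labels=2,        # segment 0 and segment 1
    ignore_index=255,    # assumed ignore value, common in segmentation pipelines
    reduce_labels=False,
)
print(results["mean_iou"], results["mean_accuracy"], results["overall_accuracy"])
print(results["per_category_iou"], results["per_category_accuracy"])
```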

Framework versions