# peft-finetuned-starcoderbase
This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.2537
## Model description
More information needed
## Intended uses & limitations
More information needed
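No usage guidance is provided with the card. As a minimal loading sketch, assuming this repository contains a PEFT adapter (for example LoRA) trained on top of `bigcode/starcoderbase-1b` and loaded with the `transformers` and `peft` libraries; the adapter path below is a placeholder for wherever the adapter weights are stored.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigcode/starcoderbase-1b"
adapter_path = "path/to/peft-finetuned-starcoderbase"  # placeholder: wherever the adapter weights live

# Load the frozen base model, then attach the fine-tuned PEFT adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_path)

# Generate a short code completion.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```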
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; a reproduction sketch follows the list:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
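As a rough sketch, the settings above map onto `transformers.TrainingArguments` as follows; `output_dir`, the logging cadence, and any mixed-precision flags are assumptions not stated in the card, and the 20-step evaluation interval is inferred from the results table below.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="peft-finetuned-starcoderbase",  # assumption: output location not stated in the card
    learning_rate=1e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=16,  # 4 x 16 = 64 total train batch size
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=500,
    evaluation_strategy="steps",
    eval_steps=20,                   # inferred from the 20-step cadence in the results table
    logging_steps=20,                # assumption
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the TrainingArguments default optimizer,
# so it does not need to be set explicitly.
```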
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.0771        | 0.04  | 20   | 3.7800          |
| 5.063         | 0.08  | 40   | 3.6625          |
| 4.7828        | 0.12  | 60   | 3.4743          |
| 4.623         | 0.16  | 80   | 3.2127          |
| 4.2529        | 0.2   | 100  | 2.8753          |
| 3.7035        | 0.24  | 120  | 2.5061          |
| 3.0316        | 0.28  | 140  | 2.1584          |
| 3.2956        | 0.32  | 160  | 1.8338          |
| 3.0677        | 0.36  | 180  | 1.5939          |
| 2.8945        | 0.4   | 200  | 1.4555          |
| 2.7656        | 0.44  | 220  | 1.3756          |
| 2.5756        | 0.48  | 240  | 1.3353          |
| 2.7785        | 0.52  | 260  | 1.3103          |
| 2.7042        | 0.56  | 280  | 1.2947          |
| 2.3891        | 0.6   | 300  | 1.2853          |
| 2.4432        | 0.64  | 320  | 1.2745          |
| 2.4347        | 0.68  | 340  | 1.2691          |
| 2.3808        | 0.72  | 360  | 1.2655          |
| 2.4511        | 0.76  | 380  | 1.2537          |
| 2.438         | 0.8   | 400  | 1.2534          |
| 2.3947        | 0.84  | 420  | 1.2535          |
| 2.4199        | 0.88  | 440  | 1.2533          |
| 2.3194        | 0.92  | 460  | 1.2536          |
| 2.2654        | 0.96  | 480  | 1.2536          |
| 2.5226        | 1.0   | 500  | 1.2537          |
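Validation loss levels off around step 380 and stays essentially flat through step 500. Assuming the reported value is the usual mean token-level cross-entropy, it maps to perplexity via `exp(loss)`:

```python
import math

final_eval_loss = 1.2537  # final validation loss from the table above
print(f"validation perplexity ≈ {math.exp(final_eval_loss):.2f}")  # ≈ 3.50
```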
### Framework versions
- Transformers 4.31.0
- PyTorch 2.0.1+cu117
- Datasets 2.14.0
- Tokenizers 0.13.3