
# distilbert-base-uncased-lora-text-classification

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased), adapted with LoRA for text classification. The training dataset is not documented in this card. Per-epoch results on the evaluation set are reported in the training results table below; the lowest validation loss (0.3609, accuracy 0.9081) is reached at epoch 4.
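The snippet below is a minimal usage sketch, not code taken from this repository: it assumes the adapter was pushed to the Hub under a repository id like `your-username/distilbert-base-uncased-lora-text-classification` (the account prefix is a placeholder), that the PEFT library is installed, and that the adapter was saved together with its sequence-classification head; the number and names of the labels are not documented here.

```python
# Minimal inference sketch (assumptions: placeholder repo id, PEFT installed,
# adapter saved with its sequence-classification head).
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForSequenceClassification

repo_id = "your-username/distilbert-base-uncased-lora-text-classification"  # placeholder

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoPeftModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

inputs = tokenizer("This is an example sentence.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(int(logits.argmax(dim=-1)))  # predicted class id; label names are not documented
```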

## Model description

This is a LoRA (Low-Rank Adaptation) adapter fine-tuned from `distilbert-base-uncased` for text classification. The task, label set, and intended domain are not documented; more information is needed.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The hyperparameter values used during training were not recorded in this card.
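Since the actual values are missing, the block below is only an illustrative sketch of how a LoRA fine-tune of this kind is typically configured with 🤗 Transformers and PEFT. Every value in it (learning rate, batch sizes, LoRA rank, alpha, dropout, target modules, label count) is a placeholder assumption, except `num_train_epochs=10`, which follows from the results table.

```python
# Illustrative configuration sketch; these are NOT the recorded hyperparameters of this run.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments

base_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,  # assumption; the label set is not documented
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                 # placeholder rank
    lora_alpha=16,                       # placeholder scaling factor
    lora_dropout=0.1,                    # placeholder dropout
    target_modules=["q_lin", "v_lin"],   # DistilBERT attention projections (assumed choice)
)
model = get_peft_model(base_model, lora_config)

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-lora-text-classification",
    num_train_epochs=10,                 # matches the 10 epochs in the results table
    learning_rate=1e-3,                  # placeholder
    per_device_train_batch_size=16,      # placeholder
    per_device_eval_batch_size=16,       # placeholder
    eval_strategy="epoch",               # "evaluation_strategy" on older transformers versions
    save_strategy="epoch",
)
# A Trainer would then be built from this model, training_args, tokenized datasets,
# and a compute_metrics callback (see the sketch after the training results table).
```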

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy           |
|:-------------:|:-----:|:-----:|:---------------:|:------------------:|
| 0.4284        | 1.0   | 1469  | 0.6122          | 0.8515997277059224 |
| 0.318         | 2.0   | 2938  | 0.4710          | 0.8686181075561606 |
| 0.3091        | 3.0   | 4407  | 0.3991          | 0.8910823689584751 |
| 0.267         | 4.0   | 5876  | 0.3609          | 0.9081007488087134 |
| 0.2111        | 5.0   | 7345  | 0.5392          | 0.8876786929884275 |
| 0.2231        | 6.0   | 8814  | 0.6888          | 0.8978897208985704 |
| 0.1116        | 7.0   | 10283 | 0.6468          | 0.8965282505105514 |
| 0.1111        | 8.0   | 11752 | 0.8718          | 0.8849557522123894 |
| 0.076         | 9.0   | 13221 | 0.8075          | 0.8965282505105514 |
| 0.0672        | 10.0  | 14690 | 0.8488          | 0.8924438393464942 |
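The per-epoch accuracy values were logged by the trainer as `{'accuracy': ...}` dictionaries, which is the shape returned by a `compute_metrics` callback built on the `evaluate` library. Below is a minimal sketch of such a callback, assuming that library; it is not necessarily the exact code used for this run.

```python
# Sketch of a Trainer compute_metrics callback that returns {"accuracy": ...}
# once per evaluation epoch, matching the Accuracy column above.
import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy_metric.compute(predictions=predictions, references=labels)
```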

### Framework versions