# V2_20230929-4-xlm-roberta-base-new
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.4980
- Loss: 2.6341
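
Since the card does not state the task or dataset, the following is only a hedged loading sketch: the repo id is assumed from this card's title, and `AutoModelForMaskedLM` is an assumption about the task head, not something the card confirms.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumed repo id (taken from this card's title) and an assumed
# masked-LM head; the card does not document the actual task.
model_id = "V2_20230929-4-xlm-roberta-base-new"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
```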
 
## Model description

More information needed
## Intended uses & limitations

More information needed
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
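
These settings map directly onto `TrainingArguments`. The sketch below is a hedged reconstruction, not the author's actual training script: the `output_dir` is a placeholder, and the dataset and `Trainer` wiring are not documented in this card.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments
# defaults (adam_beta1, adam_beta2, adam_epsilon), so no override is needed.
training_args = TrainingArguments(
    output_dir="V2_20230929-4-xlm-roberta-base-new",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
)
```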
 
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 4.4422        | 1.38  | 200  | 0.2888   | 4.2369          |
| 3.9018        | 2.76  | 400  | 0.3333   | 3.9767          |
| 3.5709        | 4.14  | 600  | 0.3669   | 3.5533          |
| 3.3829        | 5.52  | 800  | 0.3891   | 3.3396          |
| 3.2242        | 6.9   | 1000 | 0.4244   | 3.0648          |
| 3.0837        | 8.28  | 1200 | 0.4515   | 3.2200          |
| 2.9448        | 9.66  | 1400 | 0.4637   | 2.8563          |
| 2.8529        | 11.03 | 1600 | 0.4664   | 2.9343          |
| 2.8343        | 12.41 | 1800 | 0.4498   | 3.1041          |
| 2.813         | 13.79 | 2000 | 0.4980   | 2.6341          |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3