# XLMRobertaTrainedOnSWEz

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results at the end of training (epoch 3):
- Train Loss: 0.5177
- Train End Logits Accuracy: 0.8335
- Train Start Logits Accuracy: 0.8275
- Validation Loss: 1.0855
- Validation End Logits Accuracy: 0.7143
- Validation Start Logits Accuracy: 0.7089
- Epoch: 3
 
## Model description

More information needed
## Intended uses & limitations

More information needed. The end/start-logits metrics indicate an extractive question-answering head, so the checkpoint should load with the standard Transformers QA pipeline, as sketched below.
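A minimal usage sketch, assuming the checkpoint is published on the Hub under the id shown (the repo id and the example inputs are placeholders, not confirmed by this card):

```python
from transformers import pipeline

# Repo id is a placeholder; substitute the actual Hub path of this checkpoint.
qa = pipeline(
    "question-answering",
    model="XLMRobertaTrainedOnSWEz",
    framework="tf",  # the checkpoint was trained with TensorFlow/Keras
)

# XLM-RoBERTa is multilingual, so question and context need not be English.
result = qa(
    question="Who wrote the report?",
    context="The report was written by Anna Lindqvist in 2021.",
)
print(result["answer"], result["score"])
```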
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the Keras sketch after this list):
- optimizer: Adam (beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False) with a PolynomialDecay learning-rate schedule (initial_learning_rate=2e-05, decay_steps=29248, end_learning_rate=0.0, power=1.0, cycle=False)
- training_precision: float32
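The serialized optimizer config above maps directly onto the Keras API. A minimal reconstruction (illustrative variable names, not the authors' training script):

```python
import tensorflow as tf

# Learning rate decays linearly (power=1.0) from 2e-5 to 0 over 29,248 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=29248,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```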
 
### Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | 
|---|---|---|---|---|---|---|
| 1.3112 | 0.6049 | 0.5996 | 0.9920 | 0.6931 | 0.6899 | 0 | 
| 0.8736 | 0.7267 | 0.7230 | 0.9677 | 0.7119 | 0.7100 | 1 | 
| 0.6621 | 0.7879 | 0.7839 | 1.0244 | 0.7074 | 0.7058 | 2 | 
| 0.5177 | 0.8335 | 0.8275 | 1.0855 | 0.7143 | 0.7089 | 3 | 
### Framework versions

- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1