# sayakpaul/masked-lm-tpu

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results after the final training epoch:

- Train Loss: 9.9067
- Train Accuracy: 0.0116
- Validation Loss: 9.8225
- Validation Accuracy: 0.0198
- Epoch: 8

## Model description

More information needed

## Intended uses & limitations

More information needed
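
Pending details from the author, here is a minimal inference sketch. It assumes the checkpoint on the Hub retains the standard RoBERTa masked-LM head; given the low accuracies reported above, predictions are unlikely to be meaningful yet.

```python
from transformers import pipeline

# Hypothetical usage sketch: assumes the repo id below hosts TF weights
# with a fill-mask-compatible head.
fill_mask = pipeline("fill-mask", model="sayakpaul/masked-lm-tpu", framework="tf")

# RoBERTa tokenizers use "<mask>" as the mask token.
print(fill_mask(f"The goal of life is {fill_mask.tokenizer.mask_token}."))
```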

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: AdamWeightDecay
  - learning_rate: WarmUp (warmup_steps: 1175, power: 1.0) wrapping PolynomialDecay (initial_learning_rate: 0.0001, decay_steps: 22325, end_learning_rate: 0.0, power: 1.0, cycle: False)
  - decay: 0.0
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-08
  - amsgrad: False
  - weight_decay_rate: 0.001
- training_precision: float32
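
This serialized config matches what `transformers.create_optimizer` produces. A hedged reconstruction, assuming the recorded decay_steps (22325) plus warmup_steps (1175) give 23500 total training steps:

```python
from transformers import create_optimizer

# Sketch of the optimizer above: AdamWeightDecay with a WarmUp-wrapped
# linear (power=1.0) PolynomialDecay schedule. The 23,500 total steps are
# inferred from the config, not stated in the card.
optimizer, lr_schedule = create_optimizer(
    init_lr=1e-4,
    num_train_steps=23_500,   # 1175 warmup + 22325 decay steps
    num_warmup_steps=1_175,
    weight_decay_rate=0.001,
)
```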

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 10.2116    | 0.0000         | 10.1957         | 0.0000              | 0     |
| 10.2017    | 0.0000         | 10.1798         | 0.0000              | 1     |
| 10.1890    | 0.0000         | 10.1604         | 0.0000              | 2     |
| 10.1733    | 0.0000         | 10.1145         | 0.0000              | 3     |
| 10.1336    | 0.0000         | 10.0666         | 0.0000              | 4     |
| 10.0906    | 0.0001         | 10.0200         | 0.0005              | 5     |
| 10.0360    | 0.0006         | 9.9646          | 0.0049              | 6     |
| 9.9830     | 0.0038         | 9.8938          | 0.0155              | 7     |
| 9.9067     | 0.0116         | 9.8225          | 0.0198              | 8     |
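
For context, a hypothetical sketch of the kind of Keras training loop that produces a per-epoch history like the one above; the two-sentence corpus and all preprocessing choices here are placeholders, since the actual dataset is unknown:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = TFAutoModelForMaskedLM.from_pretrained("roberta-base")

# Placeholder corpus standing in for the unknown training data.
texts = ["Masked language modeling on Cloud TPUs.", "RoBERTa predicts masked tokens."]
encodings = tokenizer(texts, truncation=True, padding="max_length", max_length=32)
examples = [{k: v[i] for k, v in encodings.items()} for i in range(len(texts))]

# Randomly mask 15% of tokens and build the MLM labels.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf")
batch = collator(examples)
dataset = tf.data.Dataset.from_tensor_slices(dict(batch)).batch(2)

# Compiling without an explicit loss makes the model use its internal MLM loss;
# the card's AdamWeightDecay schedule (see the sketch above) would replace "adam".
model.compile(optimizer="adam")
model.fit(dataset, epochs=1)  # the card trained for 9 epochs (0 through 8) with a validation set
```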

## Framework versions

- Transformers 4.29.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3