# roberta-base-research-papers
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the elsevier-oa-cc-by dataset. It achieves the following results on the evaluation set:

- Loss: 1.2956
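The card reports only a loss, which is consistent with RoBERTa's masked-language-modeling objective. Under that assumption, a minimal fill-mask sketch follows; the repository id is a placeholder and should be replaced with the model's actual Hub path:

```python
from transformers import pipeline

# The repository id below is a placeholder -- replace it with this model's
# actual Hugging Face Hub path.
fill_mask = pipeline("fill-mask", model="roberta-base-research-papers")

# RoBERTa uses <mask> as its mask token.
for prediction in fill_mask("The experiment measured the <mask> of each sample."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```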
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 7e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 30
- mixed_precision_training: Native AMP
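A sketch of the equivalent `TrainingArguments` under the standard `Trainer` API, assuming a masked-language-modeling setup; dataset preparation and the data collator are omitted, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-research-papers",  # placeholder
    learning_rate=7e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=128,  # 8 x 128 = 1024 effective train batch size
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: the table below reports one eval per epoch
)
```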
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5522 | 0.99 | 31 | 1.4074 |
| 1.5314 | 1.99 | 62 | 1.3907 |
| 1.5157 | 2.99 | 93 | 1.3799 |
| 1.504 | 3.99 | 124 | 1.3777 |
| 1.489 | 4.99 | 155 | 1.3654 |
| 1.4778 | 5.99 | 186 | 1.3556 |
| 1.4674 | 6.99 | 217 | 1.3506 |
| 1.4552 | 7.99 | 248 | 1.3414 |
| 1.4474 | 8.99 | 279 | 1.3346 |
| 1.4396 | 9.99 | 310 | 1.3321 |
| 1.4284 | 10.99 | 341 | 1.3314 |
| 1.4191 | 11.99 | 372 | 1.3222 |
| 1.4146 | 12.99 | 403 | 1.3165 |
| 1.4067 | 13.99 | 434 | 1.3227 |
| 1.403 | 14.99 | 465 | 1.3175 |
| 1.399 | 15.99 | 496 | 1.3154 |
| 1.3901 | 16.99 | 527 | 1.3187 |
| 1.3891 | 17.99 | 558 | 1.3045 |
| 1.3838 | 18.99 | 589 | 1.2992 |
| 1.3804 | 19.99 | 620 | 1.2966 |
| 1.3792 | 20.99 | 651 | 1.3040 |
| 1.3735 | 21.99 | 682 | 1.2964 |
| 1.3685 | 22.99 | 713 | 1.2993 |
| 1.3697 | 23.99 | 744 | 1.2930 |
| 1.3636 | 24.99 | 775 | 1.2943 |
| 1.3653 | 25.99 | 806 | 1.2857 |
| 1.3623 | 26.99 | 837 | 1.2931 |
| 1.3584 | 27.99 | 868 | 1.2911 |
| 1.3577 | 28.99 | 899 | 1.2917 |
| 1.3573 | 29.99 | 930 | 1.2963 |
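Validation loss plateaus after roughly epoch 20, with the best value in the table (1.2857) at epoch ~26. For orientation, a quick back-of-the-envelope on the numbers above, assuming the reported loss is mean token-level cross-entropy in nats:

```python
import math

# 31 optimizer steps per epoch (from the table) at the effective batch size
# implies roughly this many training sequences seen per epoch:
steps_per_epoch = 31
effective_batch_size = 1024  # 8 per device x 128 gradient-accumulation steps
print(steps_per_epoch * effective_batch_size)  # 31744

# If the loss is mean cross-entropy in nats, perplexity follows directly:
print(math.exp(1.2857))  # ~3.62 for the best validation loss (epoch ~26)
```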
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2