# xlm-roberta-large-caresC
This model is a fine-tuned version of xlm-roberta-large on the CARES Chapters dataset, used as a benchmark in the paper TODO. The model achieves an F1 of 0.847.

Please refer to the original publication for more information: TODO LINK
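Below is a minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under a repository id such as `your-org/xlm-roberta-large-caresC` (placeholder) and carries a sequence-classification head over the CARES chapter labels; the threshold and the multi-label treatment are assumptions, not part of this card.

```python
# Hedged usage sketch: repository id and label handling are placeholders/assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "your-org/xlm-roberta-large-caresC"  # placeholder repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Radiografía de tórax sin hallazgos patológicos."  # example clinical note
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# If the head is multi-label (one report can map to several chapters), apply a
# sigmoid and a threshold; for single-label classification use softmax/argmax instead.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```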
## Parameters used
| Parameter | Value |
|---|---|
| batch size | 32 |
| learning rate | 3e-05 |
| classifier dropout | 0 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
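As a rough illustration, the sketch below shows how the hyperparameters in the table above could map onto a Hugging Face `Trainer` configuration; the output directory, evaluation strategy, and metric name are assumptions and not part of the original card.

```python
# Minimal training-configuration sketch mirroring the table above (assumptions noted).
from transformers import TrainingArguments, EarlyStoppingCallback

training_args = TrainingArguments(
    output_dir="xlm-roberta-large-caresC",  # assumed output directory
    per_device_train_batch_size=32,         # batch size
    learning_rate=3e-5,                     # learning rate
    warmup_ratio=0.0,                       # warmup ratio
    warmup_steps=0,                         # warmup steps
    weight_decay=0.0,                       # weight decay
    num_train_epochs=10,                    # epochs
    evaluation_strategy="epoch",            # assumed: evaluate each epoch for early stopping
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",             # assumed metric name
)

# AdamW is the Trainer's default optimizer; a classifier dropout of 0 would be set
# on the model config, e.g. config.classifier_dropout = 0.0.
early_stopping = EarlyStoppingCallback(early_stopping_patience=3)
```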
## BibTeX entry and citation info
TODO