
CorefBERT base model

Pretrained model on English text using the Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduced in the paper Coreferential Reasoning Learning for Language Representation (arXiv:2004.06870) and first released in the accompanying repository (https://github.com/thunlp/CorefBERT).

Disclaimer: The team releasing CorefBERT did not write a model card for this model, so this model card has been written by me.

Model description

CorefBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked Language Modeling (MLM): taking a sentence, the model randomly masks a fraction of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This lets the model learn a bidirectional representation of the sentence.
- Mention Reference Prediction (MRP): the model masks one occurrence of a repeated mention (a noun that appears more than once in the passage) and has to recover it, encouraging the model to predict the masked word by copying its other occurrences from the context. This pushes the model to capture coreferential relations between mentions.
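To make the two objectives concrete, here is a toy sketch (not the authors' code) of how inputs and labels can be derived from raw text for each objective. Tokenization is simplified to whitespace splitting for readability, and the masking rate is illustrative.

```python
# Illustrative sketch of the two self-supervised objectives.
import random

MASK = "[MASK]"

def mlm_example(tokens, mask_prob=0.15, seed=0):
    """Masked Language Modeling: hide a random subset of tokens;
    the labels are the original tokens at the masked positions."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            inputs.append(MASK)
            labels.append(tok)   # the model must recover this token
        else:
            inputs.append(tok)
            labels.append(None)  # position not scored
    return inputs, labels

def mrp_example(tokens):
    """Mention Reference Prediction: mask one occurrence of a repeated
    word; the model must recover it, ideally by copying the other
    occurrence from the surrounding context."""
    seen = {}
    for i, tok in enumerate(tokens):
        if tok in seen:              # repeated mention found
            inputs = list(tokens)
            inputs[i] = MASK         # mask the later occurrence
            return inputs, (i, tok, seen[tok])
        seen[tok] = i
    return list(tokens), None        # no repeated mention in this text

text = "Claire bought a book because Claire loves reading".split()
print(mlm_example(text))
print(mrp_example(text))
```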

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks, especially those that involve coreference resolution. If you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the CorefBERT model as inputs.
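As a minimal sketch of that feature-extraction workflow with the transformers library: the checkpoint identifier below ("path/to/corefbert-base") is a hypothetical placeholder, not a confirmed Hub name, so substitute the actual location of the released weights.

```python
# Extract per-token contextual features for use in a downstream classifier.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/corefbert-base")  # hypothetical id
model = AutoModel.from_pretrained("path/to/corefbert-base")          # hypothetical id

sentence = "The trophy didn't fit in the suitcase because it was too big."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per token; these can feed a downstream classifier.
features = outputs.last_hidden_state  # shape: (1, seq_len, hidden_size)
print(features.shape)
```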

BibTeX entry and citation info

@misc{ye2020coreferential,
      title={Coreferential Reasoning Learning for Language Representation}, 
      author={Deming Ye and Yankai Lin and Jiaju Du and Zhenghao Liu and Peng Li and Maosong Sun and Zhiyuan Liu},
      year={2020},
      eprint={2004.06870},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}