roberta-cord19-1M7k

This model is based on RoBERTa and was pre-trained on 1.7 million sentences.

The training corpus consists of papers from Semantic Scholar's CORD-19 historical releases: roughly 13k papers and ~60M tokens. Training used the full-text "body_text" field of each paper (details below).
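
For reference, each paper in a CORD-19 release is shipped as a JSON document whose "body_text" field is a list of paragraph objects. A minimal sketch of collecting those paragraphs into training sentences is shown below; the directory path and the naive sentence splitter are illustrative assumptions, not the exact preprocessing used for this model.

import json
from pathlib import Path

def iter_body_text(parse_dir):
    # Each CORD-19 full-text parse is a JSON file whose "body_text"
    # field is a list of paragraph objects with a "text" key.
    for path in Path(parse_dir).glob("*.json"):
        with open(path) as f:
            paper = json.load(f)
        for paragraph in paper.get("body_text", []):
            yield paragraph["text"]

# Naive sentence split for illustration only; the splitter actually
# used to produce the 1.7M training sentences is not documented here.
sentences = []
for text in iter_body_text("document_parses/pdf_json"):
    sentences.extend(s.strip() for s in text.split(". ") if s.strip())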

Usage

from transformers import pipeline
from transformers import RobertaTokenizerFast, RobertaForMaskedLM

tokenizer = RobertaTokenizerFast.from_pretrained("amoux/roberta-cord19-1M7k")
model = RobertaForMaskedLM.from_pretrained("amoux/roberta-cord19-1M7k")

fillmask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

text = "Lung infiltrates cause significant morbidity and mortality in immunocompromised patients."
masked_text = text.replace("patients", tokenizer.mask_token)
predictions = fillmask(masked_text, top_k=3)
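# Output (top-3 predictions for the masked token):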
[{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised patients.</s>',
  'score': 0.6273621320724487,
  'token': 660,
  'token_str': 'Ġpatients'},
 {'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised individuals.</s>',
  'score': 0.19800445437431335,
  'token': 1868,
  'token_str': 'Ġindividuals'},
 {'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised animals.</s>',
  'score': 0.022069649770855904,
  'token': 1471,
  'token_str': 'Ġanimals'}]
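
The pipeline call is a convenience wrapper. If you want the raw scores, the same top-k lookup can be done directly with PyTorch; the sketch below assumes a recent transformers version where the model output exposes .logits, and is a generic masked-LM example rather than code from this repository.

import torch

inputs = tokenizer(masked_text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the position of the <mask> token and rank vocabulary candidates.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
probs = logits[0, mask_index].softmax(dim=-1)
top = probs.topk(3)
for score, token_id in zip(top.values[0], top.indices[0]):
    print(tokenizer.decode([int(token_id)]), float(score))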

Dataset

Parameters

Evaluation

Citation

Allen Institute CORD-19 Historical Releases

@article{Wang2020CORD19TC,
	title={CORD-19: The COVID-19 Open Research Dataset},
	author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier},
	journal={ArXiv},
	volume={abs/2004.10706},
	year={2020}
}