InformBERT
Introduction
InformBERT is pretrained with a variable masking strategy, in which informative tokens are masked more frequently than other tokens. InformBERT outperforms models pretrained with random masking on the factual recall benchmark LAMA and the extractive question answering benchmark SQuAD.
More details: https://arxiv.org/abs/2210.11771
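To illustrate the general idea of variable masking, the minimal sketch below samples mask positions in proportion to per-token informativeness scores instead of uniformly at random. It is not the paper's exact InforMask procedure, and the scores shown are hypothetical placeholders (the paper derives informativeness from PMI statistics).
import random

def variable_mask(tokens, scores, mask_ratio=0.15, mask_token="[MASK]"):
    """Mask roughly mask_ratio of the tokens, choosing positions in
    proportion to their informativeness scores rather than uniformly."""
    num_to_mask = max(1, round(mask_ratio * len(tokens)))
    positions = random.choices(range(len(tokens)), weights=scores, k=num_to_mask)
    masked = list(tokens)
    for pos in set(positions):
        masked[pos] = mask_token
    return masked

tokens = ["speedweek", "is", "an", "american", "television", "program", "on", "speed"]
# Hypothetical informativeness scores: content words weighted higher than function words
scores = [0.9, 0.1, 0.1, 0.6, 0.7, 0.5, 0.1, 0.9]
print(variable_mask(tokens, scores))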
How to load
from transformers import BertTokenizer, AutoModel, pipeline

# Load the tokenizer and the pretrained InformBERT encoder
tokenizer = BertTokenizer.from_pretrained("nsadeq/InformBERT")
model = AutoModel.from_pretrained("nsadeq/InformBERT")

# Fill-mask pipeline for masked-token prediction
unmasker = pipeline("fill-mask", model="nsadeq/InformBERT", tokenizer=tokenizer)
print(unmasker("SpeedWeek is an American television program on [MASK]."))
Citation
@misc{https://doi.org/10.48550/arxiv.2210.11771,
doi = {10.48550/ARXIV.2210.11771},
url = {https://arxiv.org/abs/2210.11771},
author = {Sadeq, Nafis and Xu, Canwen and McAuley, Julian},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
title = {InforMask: Unsupervised Informative Masking for Language Model Pretraining},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}