Domain adaptation is the process of fine-tuning pre-trained language models (PLMs) on domain-specific data so that their representations and predictions are better suited to the target domain. Here, we further train the BERT-base-uncased model on an unlabelled COVID-19 fake news dataset (Constraint@AAAI2021) using the masked language modeling (MLM) objective, in which 15% of the input tokens are masked and the model is trained to predict the masked tokens.
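As a rough illustration of this step, the sketch below shows how such MLM-based domain adaptation could be run with the Hugging Face transformers and datasets libraries (an assumption; the original tooling is not specified). The file name covid_fake_news.txt and the training hyperparameters are hypothetical placeholders, not values from the source.

```python
# Minimal sketch of MLM domain adaptation of bert-base-uncased on an
# unlabelled text corpus (hypothetical file name and hyperparameters).
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Load the unlabelled corpus (one example per line) and tokenize it.
raw = load_dataset("text", data_files={"train": "covid_fake_news.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# The collator implements the MLM objective: 15% of input tokens are
# selected for masking, and the model is trained to reconstruct them.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

args = TrainingArguments(
    output_dir="bert-covid-mlm",
    num_train_epochs=3,              # illustrative values only
    per_device_train_batch_size=16,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
trainer.save_model("bert-covid-mlm")  # domain-adapted checkpoint
```

The saved checkpoint would then serve as the starting point for the downstream fake news classification fine-tuning, in place of the vanilla bert-base-uncased weights.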