Protein language model

ProtBert-BFD fine-tuned on a Rosetta 20AA dataset

This model is fine-tuned to predict Rosetta fold energy on a dataset of 100k sequences, each 20 amino acids (20AA) long.

Current model in this repo: prot_bert_bfd-finetuned-032722_1752
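
Below is a minimal inference sketch, not the repo's official usage: it assumes the checkpoint exposes a standard single-output regression head loadable with `BertForSequenceClassification`, and that inputs follow the ProtBert convention of uppercase, space-separated amino acids.

```python
# Minimal inference sketch. Assumptions: the checkpoint has a single-output
# regression head compatible with BertForSequenceClassification, and inputs
# use the ProtBert convention of uppercase amino acids separated by spaces.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

checkpoint = "prot_bert_bfd-finetuned-032722_1752"  # local path to this repo's checkpoint

tokenizer = BertTokenizer.from_pretrained(checkpoint, do_lower_case=False)
model = BertForSequenceClassification.from_pretrained(checkpoint, num_labels=1)
model.eval()

# One 20-amino-acid (20AA) sequence, space-separated as ProtBert expects.
sequence = "M K T A Y I A K Q R Q I S F V K S H F S"

inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    predicted_energy = model(**inputs).logits.squeeze(-1)

print(f"Predicted Rosetta fold energy: {predicted_energy.item():.3f}")
```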

Performance

prot_bert_bfd from ProtTrans

The starting pretrained model is prot_bert_bfd from ProtTrans, trained on 2.1 billion protein sequences from the BFD database with a masked language modeling (MLM) objective. It was introduced in the ProtTrans paper and first released in the ProtTrans repository.
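
For reference, here is a short sketch of loading the upstream pretrained model for masked language modeling, assuming the base weights are the Rostlab/prot_bert_bfd checkpoint published by ProtTrans on the Hugging Face Hub; the masked sequence is illustrative only.

```python
# Loading the upstream ProtBert-BFD checkpoint for masked language modeling.
# Assumption: the pretrained weights correspond to the Rostlab/prot_bert_bfd
# checkpoint published by ProtTrans on the Hugging Face Hub.
from transformers import BertTokenizer, BertForMaskedLM, pipeline

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert_bfd", do_lower_case=False)
model = BertForMaskedLM.from_pretrained("Rostlab/prot_bert_bfd")

unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Example protein sequence with one masked residue (illustrative only).
print(unmasker("D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T"))
```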

Created by Ladislav Rampasek