---
tags:
- generated_from_trainer
---


# Mara_DistilBert_Pretrained

DistilBERT, a compact distilled variant of BERT, was used to pretrain a Marathi language model from scratch on one million sentences. The model retains BERT's transformer architecture in a smaller, faster form.
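
As a rough illustration of this setup, here is a minimal sketch of pretraining a DistilBERT masked-language model from scratch with Hugging Face Transformers. The tokenizer path, corpus file, and all hyperparameter values below are assumptions for illustration, not the settings used for this model.

```python
from transformers import (
    DistilBertConfig,
    DistilBertForMaskedLM,
    DistilBertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Assumed: a WordPiece tokenizer already trained on the Marathi corpus.
tokenizer = DistilBertTokenizerFast.from_pretrained("./marathi-tokenizer")

# A fresh (randomly initialized) DistilBERT, sized to the tokenizer's vocabulary.
config = DistilBertConfig(vocab_size=tokenizer.vocab_size)
model = DistilBertForMaskedLM(config)

# Assumed: one Marathi sentence per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "marathi_sentences.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking: 15% of tokens are randomly masked at collation time.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./mara-distilbert", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```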

## Examples

माझं प्रिय मित्र [MASK] आहे ("My dear friend is [MASK]")
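
A minimal usage sketch for this example with the `transformers` fill-mask pipeline; the model identifier below is a placeholder for this repository's actual path.

```python
from transformers import pipeline

# Placeholder model id; replace with this repository's actual path.
fill_mask = pipeline("fill-mask", model="Mara_DistilBert_Pretrained")

# "माझं प्रिय मित्र [MASK] आहे" -- "My dear friend is [MASK]"
for prediction in fill_mask("माझं प्रिय मित्र [MASK] आहे"):
    print(prediction["token_str"], prediction["score"])
```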

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training hyperparameters

The following hyperparameters were used during training:

## Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.9421        | 0.84  | 1000 | 7.4249          |

## Framework versions