---
tags:
- generated_from_trainer
---


# DeBERTa-finetuned-SNLI2

This model is a fine-tuned version of [gyeoldere/test_trainer](https://huggingface.co/gyeoldere/test_trainer) on the snli dataset.

The test_trainer model is itself a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the snli dataset.

This model achieves the following results on the evaluation set:

## Model description

This model is fine-tuned to perform two tasks simultaneously: natural language inference (NLI) and masked language modeling (MLM).

The output vectors of DeBERTa are passed through two separate fully connected (FC) heads to produce the predictions. The head structures follow those introduced in the BERT paper, as implemented in Hugging Face Transformers (DebertaForTokenClassification and DebertaForMaskedLM). [https://huggingface.co/docs/transformers/index]
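
For reference, the two heads can be loaded with the standard Transformers API. This is a minimal sketch, assuming the repository id `gyeoldere/DeBERTa-finetuned-SNLI2` (taken from the card title):

```python
# Minimal loading sketch; the repo id below is assumed from the card title.
from transformers import AutoTokenizer, DebertaForMaskedLM, DebertaForTokenClassification

repo_id = "gyeoldere/DeBERTa-finetuned-SNLI2"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
mlm_model = DebertaForMaskedLM.from_pretrained(repo_id)             # MLM head
nli_model = DebertaForTokenClassification.from_pretrained(repo_id)  # NLI head
```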

A binary cross-entropy loss is computed for each head, and the two losses are added to obtain the final loss: `final_loss = MLM_loss + NLI_loss`.
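
The combined objective can be sketched as follows. This is an illustrative reconstruction, not the author's code: the class, head, and variable names are assumptions, as is the use of the first token as the sentence representation. It shows a shared DeBERTa encoder feeding two linear heads, with a binary cross-entropy loss over one-hot targets for each head, summed as described above:

```python
# Illustrative multi-task sketch (names are hypothetical, not the author's code):
# a shared DeBERTa encoder with an MLM head and an NLI head, trained on the
# sum of the two binary cross-entropy losses.
import torch.nn as nn
import torch.nn.functional as F
from transformers import DebertaModel

class DebertaMlmNli(nn.Module):
    def __init__(self, model_name="microsoft/deberta-base", num_nli_labels=3):
        super().__init__()
        self.deberta = DebertaModel.from_pretrained(model_name)
        hidden = self.deberta.config.hidden_size
        vocab = self.deberta.config.vocab_size
        self.mlm_head = nn.Linear(hidden, vocab)           # token-level head (MLM)
        self.nli_head = nn.Linear(hidden, num_nli_labels)  # sentence-level head (NLI)

    def forward(self, input_ids, attention_mask, mlm_labels, nli_labels):
        # Shared encoder output, shape (batch, seq_len, hidden)
        h = self.deberta(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        mlm_logits = self.mlm_head(h)        # (batch, seq_len, vocab)
        nli_logits = self.nli_head(h[:, 0])  # first token as sentence representation (assumed)

        # Binary cross-entropy over one-hot NLI targets
        nli_targets = F.one_hot(nli_labels, nli_logits.size(-1)).float()
        nli_loss = F.binary_cross_entropy_with_logits(nli_logits, nli_targets)

        # MLM loss only over masked positions (mlm_labels == -100 elsewhere)
        mask = mlm_labels != -100
        masked_logits = mlm_logits[mask]  # (n_masked, vocab)
        masked_targets = F.one_hot(mlm_labels[mask], masked_logits.size(-1)).float()
        mlm_loss = F.binary_cross_entropy_with_logits(masked_logits, masked_targets)

        return mlm_loss + nli_loss  # final_loss = MLM_loss + NLI_loss
```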

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results

### Framework versions