

# transformers-question-answer

This model is a fine-tuned version of distilbert-base-cased on an unknown dataset. It achieves a validation loss of 1.1651 on the evaluation set after a single epoch of training (see Training results below).

## Model description

This is a sample transformer fine-tuned for a question-answering use case. I started from the pre-trained distilbert-base-cased checkpoint (a distilled BERT model) and fine-tuned it using the Hugging Face Transformers library.
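As a usage illustration, a question-answering model fine-tuned this way can be loaded with the Transformers `pipeline` API. This is a minimal sketch: the model identifier below is a placeholder, not this model's actual Hub repository name.

```python
from transformers import pipeline

# "your-username/transformers-question-answer" is a placeholder Hub id;
# replace it with the actual repository name of this model.
qa = pipeline("question-answering", model="your-username/transformers-question-answer")

result = qa(
    question="Which library was used for fine-tuning?",
    context="The model was fine-tuned with the Hugging Face Transformers library.",
)
print(result["answer"], result["score"])
```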

## Training hyperparameters

The following hyperparameters were used during training:
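The exact values were not recorded in this card. The sketch below only illustrates, under stated assumptions, what a Keras fine-tuning setup for this model could look like: the optimizer, learning rate, sequence length, batch size, and toy data are placeholders rather than the settings actually used; `epochs=1` reflects the single epoch shown in the results table.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased")

# Toy single-example dataset standing in for a real SQuAD-style training set.
# Labels (start/end token positions) are passed inside the input dict so the
# model can compute its built-in QA loss.
inputs = dict(
    tokenizer(
        "Who maintains the library?",
        "The Transformers library is maintained by Hugging Face.",
        truncation=True, padding="max_length", max_length=128, return_tensors="tf",
    )
)
inputs["start_positions"] = tf.constant([0])
inputs["end_positions"] = tf.constant([0])
train_dataset = tf.data.Dataset.from_tensor_slices(inputs).batch(1)

# No loss is passed to compile(), so the model's internal QA loss is used.
# The learning rate here is an assumption, not the value used for this model.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5))
model.fit(train_dataset, epochs=1)  # the results table shows a single epoch
```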

## Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.4951     | 1.1651          | 0     |

## Framework versions