
distilbert-base-future

This model is a fine-tuned version of distilbert-base-uncased on the future-statements dataset. It achieves the following results on the evaluation set:

- Validation Loss: 0.1272
- Validation Sparse Categorical Accuracy: 0.9625

Table of Contents

Model description

Intended uses & limitations
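As an illustration of the intended use (classifying whether a statement refers to the future), here is a minimal inference sketch using the transformers `pipeline` API. The Hub model id `distilbert-base-future` is an assumption taken from this card's title; substitute the actual repository path.

```python
from typing import List


def classify_statements(texts: List[str], model_id: str = "distilbert-base-future"):
    """Classify statements with the fine-tuned checkpoint.

    The model id above is assumed from this card's title, not confirmed.
    """
    # Imported lazily so the sketch can be read without transformers installed.
    from transformers import pipeline

    clf = pipeline("text-classification", model=model_id)
    # Each result is a dict with "label" and "score" keys.
    return clf(texts)


# Example (requires network access to download the checkpoint):
# classify_statements(["The company will release the product next year."])
```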

Training and evaluation data

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

Training results

| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.3816     | 0.8594                            | 0.1547          | 0.9475                                 | 0     |
| 0.1142     | 0.9613                            | 0.1272          | 0.9625                                 | 1     |
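The sparse categorical accuracy reported above is the fraction of examples whose argmax predicted class matches the integer label. A minimal sketch in plain Python (the function and variable names are illustrative, not from this card):

```python
def sparse_categorical_accuracy(y_true, y_pred):
    """y_true: integer class labels; y_pred: per-example lists of class scores."""
    correct = 0
    for label, scores in zip(y_true, y_pred):
        # Predicted class is the index of the highest score.
        predicted = max(range(len(scores)), key=lambda i: scores[i])
        if predicted == label:
            correct += 1
    return correct / len(y_true)


# Example: 3 of 4 predictions match their labels, so accuracy is 0.75.
labels = [0, 1, 1, 0]
scores = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.7, 0.3]]
print(sparse_categorical_accuracy(labels, scores))  # 0.75
```

Unlike plain categorical accuracy, the "sparse" variant takes integer labels directly rather than one-hot vectors, which is why the labels above are `0`/`1` rather than `[1, 0]`/`[0, 1]`.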

Framework versions