
distilbert-truncated

This model is a fine-tuned version of distilbert-base-uncased on the 20 Newsgroups dataset. Its accuracy and loss on the evaluation set are reported under Training results below.

Training and evaluation data

The dataset was split into training and test sets: the model was trained on 90% of the data, with the remaining 10% held out for testing.
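The exact splitting code is not included in this card; the following is a minimal sketch of a 90/10 split, assuming scikit-learn's fetch_20newsgroups loader and train_test_split (the seed and stratification are illustrative choices, not recorded settings):

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.model_selection import train_test_split

# Load the full 20 Newsgroups corpus (texts plus integer labels for 20 classes).
newsgroups = fetch_20newsgroups(subset="all")

# Hold out 10% of the data for testing, train on the remaining 90%.
train_texts, test_texts, train_labels, test_labels = train_test_split(
    newsgroups.data,
    newsgroups.target,
    test_size=0.1,
    random_state=42,             # illustrative seed, not a recorded setting
    stratify=newsgroups.target,  # keep the class balance in both splits
)
```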

Training procedure

DistilBERT has a maximum input length of 512 tokens, so with this in mind the following was performed:

  1. I used the distilbert-base-uncased pretrained model to initialize an AutoTokenizer.
  2. With a maximum length of 256, each entry in the training, testing, and validation data was truncated if it exceeded that limit and padded if it fell short of it (see the sketch after this list).
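A minimal sketch of steps 1 and 2, assuming the transformers AutoTokenizer API; train_texts comes from the split sketch above:

```python
from transformers import AutoTokenizer

# Step 1: initialize the tokenizer from the distilbert-base-uncased checkpoint.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Step 2: truncate entries longer than 256 tokens and pad shorter ones to 256.
train_encodings = tokenizer(
    train_texts,
    max_length=256,
    truncation=True,
    padding="max_length",
)
```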

Training hyperparameters

The following hyperparameters were used during training:

  - EPOCHS = 3
  - batches_per_epoch = 636
  - total_train_steps = 1908 (EPOCHS × batches_per_epoch)

Training results

  - Accuracy: 0.8337758779525757
  - Loss: 0.568471074104309
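Only the epoch and step counts above were recorded; the learning rate, warmup, and batch size in the sketch below are illustrative assumptions. It shows a typical transformers + Keras training setup consistent with those counts, building on the tokenization sketch above:

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification, create_optimizer

EPOCHS = 3
batches_per_epoch = 636
total_train_steps = EPOCHS * batches_per_epoch  # 3 * 636 = 1908

# Learning rate, warmup, and batch size below are hypothetical; they were not
# recorded in this card.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps
)

model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=20  # 20 Newsgroups has 20 classes
)
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# tokenizer, train_encodings, train_labels, test_texts, and test_labels come
# from the sketches above.
test_encodings = tokenizer(test_texts, max_length=256, truncation=True, padding="max_length")
tf_train = tf.data.Dataset.from_tensor_slices((dict(train_encodings), train_labels)).shuffle(10_000).batch(16)
tf_test = tf.data.Dataset.from_tensor_slices((dict(test_encodings), test_labels)).batch(16)

model.fit(tf_train, validation_data=tf_test, epochs=EPOCHS)
model.evaluate(tf_test)  # accuracy and loss on the held-out 10% split
```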

Framework versions