distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set (final epoch; see the table under Training results): Loss 0.1448, Accuracy 0.9375, F1 0.9379.

The notebook used to fine-tune this model may be found HERE.

Model description

DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained with three objectives:

- Distillation loss: the model was trained to return the same probabilities as the BERT base model.
- Masked language modeling (MLM): the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words.
- Cosine embedding loss: the model was also trained to generate hidden states as close as possible to those of the BERT base model.

This way, the model learns the same inner representation of the English language as its teacher model, while being faster at inference and on downstream tasks.
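To make the distillation objective concrete, here is a minimal sketch of the soft-target loss used in knowledge distillation, written in PyTorch. The function name and the temperature value are illustrative assumptions, not details taken from this card:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target loss: train the student to match the teacher's
    output distribution, softened by a temperature (value assumed)."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between teacher and student distributions, scaled
    # by T^2 as in the standard distillation formulation (Hinton et al.).
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature**2
```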

Intended uses & limitations

Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. It was developed for the paper "CARER: Contextualized Affect Representations for Emotion Recognition" (Saravia et al.); its labels are noisy, annotated via distant supervision as in the paper "Twitter sentiment classification using distant supervision" (Go et al.).
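As a quick illustration, the dataset can be loaded with the datasets library. The dataset id "emotion" is an assumption here (the dataset is also published on the Hub as dair-ai/emotion):

```python
from datasets import load_dataset

# Load the emotion dataset (train/validation/test splits).
emotions = load_dataset("emotion")

# Inspect the six class names attached to the integer labels.
print(emotions["train"].features["label"].names)
```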

The DistilBERT model was fine-tuned on this dataset, allowing sentences to be classified into one of these six basic emotions.
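A minimal usage sketch with the transformers pipeline API, assuming the model is loaded by the id in this card's title (substitute your own checkpoint path if it differs):

```python
from transformers import pipeline

# Model id assumed from this card's title.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't believe this actually worked, what a day!"))
# e.g. [{'label': 'joy', 'score': 0.98}]  (score is illustrative)
```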

Training procedure

Training hyperparameters

The following hyperparameters were used during training:
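As a rough sketch of the setup, fine-tuning with the Hugging Face Trainer might look like the following. All values below are illustrative assumptions rather than the recorded settings; only the batch size of 64 is inferred from the 250 steps per epoch shown in the results table (16,000 training examples / 64 = 250):

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Illustrative assumptions, not the recorded hyperparameters.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    num_train_epochs=2,
    learning_rate=2e-5,               # assumed
    per_device_train_batch_size=64,   # inferred from the step count
    per_device_eval_batch_size=64,    # assumed
    evaluation_strategy="epoch",
    weight_decay=0.01,                # assumed
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=..., eval_dataset=...,
#                   tokenizer=tokenizer, compute_metrics=...)
# trainer.train()
```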

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5337        | 1.0   | 250  | 0.1992          | 0.927    | 0.9262 |
| 0.1405        | 2.0   | 500  | 0.1448          | 0.9375   | 0.9379 |
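The Accuracy and F1 columns can be produced by a metric function passed to the Trainer. A minimal sketch, assuming scikit-learn and weighted F1 averaging (the card does not state the averaging mode):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Metric function in the form the Hugging Face Trainer expects."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        # Weighted averaging is an assumption, not stated in this card.
        "f1": f1_score(labels, preds, average="weighted"),
    }
```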

Framework versions