
# message-contribution

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a custom dataset curated by the model engineer. It achieves the evaluation-set results reported in the Training results section below.

## Model description

A binary classifier that labels text inputs (messages) by their contribution as either "high" or "low".

## Intended uses & limitations

Intended for detecting and/or weighting the contribution of natural-language messages.
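
For illustration, a minimal inference sketch using the `transformers` pipeline API. The model identifier `message-contribution` is assumed here from the card's title; substitute the actual Hub repository or local checkpoint path.

```python
from transformers import pipeline

# Load the fine-tuned classifier; "message-contribution" is an assumed
# placeholder for this model's Hub id or local checkpoint directory.
classifier = pipeline("text-classification", model="message-contribution")

# Each result contains a label ("high" or "low") and a confidence score.
print(classifier("Thanks for the detailed review, I've pushed the requested fixes."))
```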

## Training procedure

```python
from transformers import AutoModelForSequenceClassification

# Label maps between class indices and human-readable labels
id2label = {0: "low", 1: "high"}
label2id = {"low": 0, "high": 1}

# Load distilbert-base-uncased with a two-class sequence-classification head
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,
    id2label=id2label,
    label2id=label2id,
)
```
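
The preprocessing and metric code is not included in this card. The sketch below shows one plausible setup, assuming the custom dataset exposes `text` and `label` columns (column names are assumptions) and that the reported accuracy is plain classification accuracy over the evaluation set.

```python
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Tokenize message text, truncating to DistilBERT's maximum length.
    return tokenizer(batch["text"], truncation=True)

def compute_metrics(eval_pred):
    # Classification accuracy, matching the "Accuracy" column reported below.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": float((predictions == labels).mean())}
```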

### Training hyperparameters

The following hyperparameters were used during training:

### Training results

| Epoch | Step | Validation Loss | Accuracy |
|:-----:|:----:|:---------------:|:--------:|
| 0.01  | 10   | 0.4780          | 0.96     |
| 0.02  | 20   | 0.1759          | 0.965    |
| 0.03  | 30   | 0.0477          | 0.995    |
| 0.04  | 40   | 0.1199          | 0.95     |
| 0.05  | 50   | 0.0413          | 0.99     |
| 0.06  | 60   | 0.0068          | 1.0      |
| 0.07  | 70   | 0.0056          | 1.0      |
| 0.08  | 80   | 0.0220          | 0.995    |
| 0.09  | 90   | 0.0081          | 1.0      |
| 0.1   | 100  | 0.0074          | 0.995    |
| 0.11  | 110  | 0.0035          | 1.0      |
| 0.12  | 120  | 0.0030          | 1.0      |
| 0.13  | 130  | 0.0022          | 1.0      |
| 0.14  | 140  | 0.0024          | 1.0      |
| 0.15  | 150  | 0.0021          | 1.0      |
| 0.16  | 160  | 0.0016          | 1.0      |
| 0.17  | 170  | 0.0016          | 1.0      |
| 0.18  | 180  | 0.0016          | 1.0      |
| 0.19  | 190  | 0.0015          | 1.0      |
| 0.2   | 200  | 0.0015          | 1.0      |

### Framework versions