
# distilgpt2-finetune-acl22

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the ACL-anthology-corpus dataset. It achieves a validation loss of 3.4835 on the evaluation set (see Training results below).

## Model description

We fine-tune the distilgpt2 language model on the full text of the ACL-anthology-corpus.
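
A minimal sketch of a causal-LM fine-tuning setup with the 🤗 Trainer, assuming the corpus has been exported to plain-text files. The file names, block size, and output directory are placeholders; the actual hyperparameters are not recorded in this card (only the 3 epochs are taken from the results table).

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers have no pad token by default
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Placeholder files holding the ACL anthology full text, one document per line.
raw = load_dataset("text", data_files={"train": "acl_train.txt", "validation": "acl_val.txt"})

block_size = 512  # assumed context length; the value actually used is not recorded here

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=block_size)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="distilgpt2-finetune-acl22", num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    # mlm=False gives standard next-token (causal LM) labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```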

## Training hyperparameters

The following hyperparameters were used during training:

## Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.6676        | 1.0   | 9852  | 3.5623          |
| 3.5959        | 2.0   | 19704 | 3.4995          |
| 3.5719        | 3.0   | 29556 | 3.4835          |
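
Assuming the reported validation loss is the Trainer's standard per-token cross-entropy, its exponential gives the model's perplexity on the validation split. A quick conversion using the final-epoch value from the table above:

```python
import math

# Final-epoch validation loss from the table above (mean cross-entropy per token).
val_loss = 3.4835

# Perplexity is the exponential of the cross-entropy loss.
perplexity = math.exp(val_loss)
print(f"Validation perplexity: {perplexity:.2f}")  # ~32.6
```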

## Framework versions

## What can it do?

- Write paper introductions and abstracts (see the generation sketch below).
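
For example, the fine-tuned checkpoint can be loaded with the 🤗 Transformers text-generation pipeline. A minimal sketch; the model path below is a placeholder for this model's Hub repo id or a local checkpoint directory, and the prompt is only illustrative.

```python
from transformers import pipeline

# Placeholder path: replace with the Hub repo id of this model or a local checkpoint directory.
generator = pipeline("text-generation", model="./distilgpt2-finetune-acl22")

prompt = "In this paper, we propose a novel approach to"
outputs = generator(
    prompt,
    max_new_tokens=100,  # length of the generated continuation
    do_sample=True,      # sample instead of greedy decoding
    top_p=0.95,
    temperature=0.8,
)
print(outputs[0]["generated_text"])
```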