# suarkadipa/GPT-2-finetuned-papers

This model is a fine-tuned version of distilgpt2 on the CShorten/ML-ArXiv-Papers dataset, based on https://python.plainenglish.io/i-fine-tuned-gpt-2-on-100k-scientific-papers-heres-the-result-903f0784fe65. Its performance on the evaluation set is reported under Training results below.
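
The linked post fine-tunes GPT-2 on scientific-paper text with Keras; a minimal sketch of that kind of setup is shown below. The dataset column name (`abstract`), sequence length, batch size, learning rate, and epoch count here are illustrative assumptions, not the recorded training configuration of this model.

```python
import tensorflow as tf
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    TFAutoModelForCausalLM,
)

dataset = load_dataset("CShorten/ML-ArXiv-Papers", split="train")

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

def tokenize(batch):
    # Assumption: train on the paper abstracts, truncated to 128 tokens
    return tokenizer(batch["abstract"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

model = TFAutoModelForCausalLM.from_pretrained("distilgpt2")
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False, return_tensors="tf")
train_set = model.prepare_tf_dataset(
    tokenized, batch_size=16, shuffle=True, collate_fn=collator
)

# Transformers TF models compute the language-modeling loss internally
# when compile() is given no loss
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5))
model.fit(train_set, epochs=1)
```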

## Model description

More information needed

## Intended uses & limitations

### How to run in Google Colab

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM, pipeline

# Load the tokenizer and the TensorFlow weights from the Hub
tokenizer_fromhub = AutoTokenizer.from_pretrained("suarkadipa/GPT-2-finetuned-papers")
model_fromhub = TFAutoModelForCausalLM.from_pretrained("suarkadipa/GPT-2-finetuned-papers")

text_generator = pipeline(
    "text-generation",
    model=model_fromhub,
    tokenizer=tokenizer_fromhub,
    framework="tf",
    max_new_tokens=512,  # distilgpt2's context window is 1024 tokens; keep prompt + output below that
)

# change with your text
test_sentence = "the role of recommender systems"
res = text_generator(test_sentence)[0]["generated_text"].replace("\n", " ")
res
```
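
Generation behavior can be tuned by passing sampling parameters through the pipeline call; the values below are illustrative, not part of the original card:

```python
# Sampling often yields more varied continuations than the default decoding
res = text_generator(test_sentence, do_sample=True, top_k=50, top_p=0.95)[0]["generated_text"]
```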

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

More information needed

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.4225     | 2.2164          | 0     |
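
Assuming the reported losses are mean per-token cross-entropy in nats (the usual convention for causal language modeling), the validation loss of 2.2164 corresponds to a perplexity of exp(2.2164) ≈ 9.2.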

### Framework versions