
argpt2-goodreads

This model is a fine-tuned version of gpt2-medium on the Goodreads LABR dataset of Arabic book reviews. It achieves the following results on the evaluation set:

Model description

Generates sentences, as either positive or negative examples, based on the Arabic-language Goodreads corpus.

Intended uses & limitations

The model is fine-tuned on Arabic only, with the aim of generating review-like sentences. To do the same for other languages, you need to fine-tune it yourself. Any harmful content generated by GPT-2 should not be used anywhere.

Training and evaluation data

Training and validation were done on the Goodreads LABR dataset, split 80% for training and 20% for testing.
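An 80/20 split like the one described above can be sketched as follows. This is illustrative only: the function name, seed, and placeholder review strings are assumptions, since the card does not specify how the split was produced.

```python
import random

def train_test_split(reviews, train_frac=0.8, seed=42):
    """Shuffle the reviews and split them by the given fraction."""
    rng = random.Random(seed)
    shuffled = reviews[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Placeholder review strings standing in for the LABR corpus
reviews = [f"review_{i}" for i in range(100)]
train, test = train_test_split(reviews)
print(len(train), len(test))  # 80 20
```

With 100 reviews this yields 80 training and 20 test examples; the seed makes the split reproducible across runs.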

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("mofawzy/argpt2-goodreads")
model = AutoModelForCausalLM.from_pretrained("mofawzy/argpt2-goodreads")

# Generate a sample review; the prompt and sampling settings are illustrative
inputs = tokenizer("الكتاب", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Training hyperparameters

The following hyperparameters were used during training:

Training results

Evaluation results

train metrics

eval metrics

Framework versions