# gpt2-large-integ2

This model is a fine-tuned version of gpt2-large on a customized dataset.
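A minimal usage sketch is shown below. It assumes the checkpoint is available locally or under a Hub id matching the model name; adjust the path if your copy lives elsewhere.

```python
# Minimal usage sketch (assumption: the checkpoint is reachable at "gpt2-large-integ2";
# replace with the actual local path or Hub repo id if different).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-large-integ2")
model = AutoModelForCausalLM.from_pretrained("gpt2-large-integ2")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```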

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- total_train_batch_size: 6
- total_eval_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0

A minimal `TrainingArguments` sketch mirroring these values follows the framework versions below.

### Training results

### Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
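For reference, the reported hyperparameters map onto Transformers' `TrainingArguments` roughly as sketched below. This is a hedged reconstruction, not the exact training script: the output directory is a placeholder, and the 6-GPU setup (which yields the total batch sizes of 6 and 48) would come from the distributed launcher (`torchrun`/`accelerate`), not from these arguments.

```python
# Sketch of TrainingArguments matching the hyperparameters above (Transformers 4.32.1).
# Assumptions: output_dir is a placeholder; total batch sizes (6 train / 48 eval)
# come from the per-device sizes times the 6 GPUs supplied by the launcher.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-large-integ2",  # placeholder output directory
    learning_rate=4e-05,
    per_device_train_batch_size=1,   # 1 per device x 6 GPUs = total train batch size 6
    per_device_eval_batch_size=8,    # 8 per device x 6 GPUs = total eval batch size 48
    seed=42,
    num_train_epochs=2.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```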