This model is an instruct-tuned llama-2-ko-7b model, trained on only 10% of the [Kullm, OIG, KoAlpaca] instruction datasets (len10_k100_mppl_n0.1.json → 121 training steps).
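As a minimal sketch, the sampled subset named above can be loaded with 🤗 Datasets; the JSON field layout inside the file is an assumption, since the preprocessing script is not part of this card:

```python
from datasets import load_dataset

# Load the 10% instruction subset (file name taken from above;
# the internal JSON schema is an assumption).
dataset = load_dataset(
    "json",
    data_files="len10_k100_mppl_n0.1.json",
    split="train",
)
print(dataset)
```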
## Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they map onto `TrainingArguments`):
- learning_rate: 5e-5
- train_batch_size: 1
- seed: 42
- distributed_type: multi-GPU (A30 24 GB) with CPU offloading
- num_devices: 2
- gradient_accumulation_steps: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
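A minimal sketch of how these hyperparameters map onto 🤗 `TrainingArguments`. The training script itself is not included in this card, so the DeepSpeed config below (ZeRO stage and offload target) is an assumption; only "CPU Offloading" across 2 GPUs is stated above:

```python
from transformers import TrainingArguments

# Hypothetical DeepSpeed config enabling optimizer CPU offloading.
# ZeRO stage 2 is an assumption; the card only says "CPU Offloading".
ds_config = {
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu"},
    },
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

training_args = TrainingArguments(
    output_dir="./outputs",            # placeholder path
    learning_rate=5e-5,                # learning_rate
    per_device_train_batch_size=1,     # train_batch_size
    gradient_accumulation_steps=32,    # gradient_accumulation_steps
    num_train_epochs=2.0,              # num_epochs
    lr_scheduler_type="linear",        # lr_scheduler_type
    seed=42,                           # seed
    adam_beta1=0.9,                    # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # epsilon=1e-08
    deepspeed=ds_config,               # launched across 2 GPUs (A30 24 GB)
)
```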
## Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.11.0
- DeepSpeed 0.9.5
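To check that a local environment matches these versions, a quick sketch:

```python
import datasets
import deepspeed
import torch
import transformers

# Versions reported in this card: 4.30.2 / 2.0.1+cu117 / 2.11.0 / 0.9.5
for name, module in [
    ("Transformers", transformers),
    ("Pytorch", torch),
    ("Datasets", datasets),
    ("DeepSpeed", deepspeed),
]:
    print(name, module.__version__)
```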