This repo contains a low-rank adapter (LoRA) for LLaMA-7B, fine-tuned for the Tamil language.
This version of the weights was trained with the following hyperparameters:
- Epochs: 2
- Batch size: 128
- Cutoff length: 512
- Learning rate: 3e-4
- LoRA r: 16
- LoRA target modules: q_proj, k_proj, v_proj, o_proj
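
For reference, the hyperparameters above translate roughly into the following PEFT/Transformers configuration. This is a minimal sketch, not the exact training script: values not listed in this card (e.g. `lora_alpha`, `lora_dropout`, the batch-size split between per-device size and gradient accumulation, and the output path) are illustrative assumptions.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings from the list above; alpha and dropout are assumptions
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,        # assumption: not stated in this card
    lora_dropout=0.05,    # assumption: not stated in this card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Training settings from the list above; batch size 128 is assumed to be
# the effective batch (per-device size x accumulation steps)
training_args = TrainingArguments(
    output_dir="./llama-7b-lora-tamil",  # hypothetical output path
    num_train_epochs=2,
    per_device_train_batch_size=8,       # assumption
    gradient_accumulation_steps=16,      # 8 x 16 = 128 effective batch
    learning_rate=3e-4,
)

# Cutoff length 512 corresponds to truncating tokenized inputs,
# e.g. tokenizer(text, truncation=True, max_length=512)
```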
The adapter weights will be uploaded shortly.
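
Once the weights are available, loading the adapter on top of the base model should look roughly like the sketch below. Both repo IDs are placeholders: `huggyllama/llama-7b` stands in for whichever LLaMA-7B checkpoint was used as the base, and the adapter repo ID is hypothetical.

```python
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Load the base LLaMA-7B model and tokenizer (placeholder repo ID)
base_model = LlamaForCausalLM.from_pretrained("huggyllama/llama-7b")
tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b")

# Apply the Tamil LoRA adapter on top of the base model
# (adapter repo ID is a placeholder until the upload is complete)
model = PeftModel.from_pretrained(base_model, "your-username/llama-7b-lora-tamil")
```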