
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

Ruckus-PyAssi-13b

This model is a fine-tuned version of meta-llama/Llama-2-13b-hf, trained on 10,000 examples from the flytech/llama-python-codes-30k dataset.

Model description

The model was trained in 4-bit precision using SFT (Supervised Fine-Tuning) and LoRA (Low-Rank Adaptation); further fine-tuning is possible.
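
As a rough illustration of this kind of 4-bit SFT + LoRA setup, a minimal sketch follows. The LoRA rank, target modules, sequence length, and dataset text column are assumptions (they are not listed in this card), and the snippet targets the older trl SFTTrainer API where `dataset_text_field` and `max_seq_length` are constructor arguments.

```python
# Sketch only: hyperparameters, target modules, and the dataset text column
# are assumptions, not the values used to train Ruckus-PyAssi-13b.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-13b-hf"

# 4-bit quantization so the 13B base model fits on a single 48 GB GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)

# First 10,000 examples of the instruction dataset
dataset = load_dataset("flytech/llama-python-codes-30k", split="train[:10000]")

# LoRA adapter configuration (rank, alpha, and targets are illustrative)
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    dataset_text_field="text",  # assumes the dataset exposes a preformatted "text" column
    max_seq_length=512,
)
trainer.train()
```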

Intended uses & limitations

Code generation, as with all Ruckus models.

Training procedure

The model was trained for 13 hours on a single A6000 GPU with 48 GB of VRAM.

Training hyperparameters

The following hyperparameters were used during training:

Inference

[INST]Ruckus, open google[/INST]
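
A minimal way to query the model with the prompt format above might look like the sketch below. The repository id and generation parameters are assumptions, not values stated in this card.

```python
# Sketch: loads the fine-tuned model in 4-bit and generates a reply to the
# [INST] ... [/INST] prompt shown above. The repo id and generation settings
# are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "flytech/Ruckus-PyAssi-13b"  # assumed repo id for this checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

prompt = "[INST]Ruckus, open google[/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```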

Framework versions