
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# dolly-v2-3-openassistant-guanaco

This model is a fine-tuned version of databricks/dolly-v2-3b on the timdettmers/openassistant-guanaco dataset.

## Model description

This is a PEFT model, hence this repository contains only the LoRA adapter weights and configuration files. The base databricks/dolly-v2-3b model must be loaded separately, with the adapter applied on top.
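For inference, loading typically looks like the following sketch. The adapter path below is a placeholder, not the actual repository id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the full base model first.
base_model = AutoModelForCausalLM.from_pretrained(
    "databricks/dolly-v2-3b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b")

# Apply the LoRA adapter on top of the base model.
# NOTE: "path/to/this-adapter" is a placeholder for this repo's id or a local path.
model = PeftModel.from_pretrained(base_model, "path/to/this-adapter")
```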

This fine-tuned model was created with the following bitsandbytes config (note that the nf4 quantization type, bfloat16 compute dtype, and double quantization all apply to 4-bit loading):

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
```

The peft_config is as follows:

```python
from peft import LoraConfig

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "query_key_value",
        "dense",
        "dense_h_to_4h",
        "dense_4h_to_h",
    ],
)
```
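During fine-tuning, this LoRA config would typically be combined with the quantized base model via `get_peft_model`. A sketch, assuming the bitsandbytes config described above has been constructed as `bnb_config`:

```python
from transformers import AutoModelForCausalLM
from peft import get_peft_model

# Load the base model with the bitsandbytes quantization config from above.
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dolly-v2-3b",
    quantization_config=bnb_config,
    device_map="auto",
)

# Wrap the base model so that only the LoRA adapter layers are trainable.
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # reports trainable vs. total parameters
```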

## Intended uses & limitations

This model is intended for fair use only.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

More information needed

### Training results

### Framework versions