4-bit quantized files for georgesung/open_llama_7b_qlora_uncensored

Quantized using GPTQ-for-LLaMa.

Command used to quantize:

```
python llama.py /my/model/directory c4 --wbits 4 --true-sequential --act-order --save_safetensors /my/output/file.safetensors
```
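GPTQ itself minimizes layer output error using second-order statistics, but the storage idea behind `--wbits 4` can be illustrated with a much simpler round-to-nearest sketch (plain Python, illustrative only; this is not the GPTQ algorithm and not the on-disk safetensors layout):

```python
def quantize_4bit(weights):
    """Round-to-nearest 4-bit quantization of a list of floats.

    Returns integer codes in [0, 15] plus the (scale, zero) pair
    needed to dequantize. Illustrates the storage format only:
    16 levels per group of weights, not the GPTQ error-minimizing
    procedure used by llama.py above.
    """
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0  # 4 bits -> 16 levels; avoid div-by-zero
    codes = [round((w - lo) / scale) for w in weights]
    return codes, scale, lo

def dequantize_4bit(codes, scale, zero):
    """Map 4-bit codes back to approximate float weights."""
    return [c * scale + zero for c in codes]

w = [0.02, -0.13, 0.40, -0.51, 0.07]
codes, scale, zero = quantize_4bit(w)
assert all(0 <= c <= 15 for c in codes)
restored = dequantize_4bit(codes, scale, zero)
# Round-to-nearest bounds the per-weight error by half a quantization step.
assert max(abs(a - b) for a, b in zip(w, restored)) <= scale / 2
```

The real files here additionally use `--act-order` (quantizing columns in order of decreasing activation magnitude) and `--true-sequential` (quantizing sub-layers sequentially), which improve accuracy at the same 4-bit budget.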