4-bit (groupsize 32) quantized files for Devden/Lawyer-Vicuna-200

Quantized using GPTQ-for-LLaMa.

Command used to quantize:

```shell
python llama.py /my/model/directory c4 --wbits 4 --true-sequential --act-order --groupsize 32 --save_safetensors /my/output/file.safetensors
```
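To illustrate what `--wbits 4 --groupsize 32` means for the stored weights, here is a minimal round-to-nearest sketch in NumPy. This is not the GPTQ algorithm itself (GPTQ additionally adjusts not-yet-quantized weights to compensate for rounding error, and `--act-order` / `--true-sequential` change the quantization order); it only shows the storage format: each contiguous group of 32 weights shares one scale and offset, and each weight is reduced to a 4-bit integer (0–15). All names below are illustrative, not part of GPTQ-for-LLaMa.

```python
import numpy as np

def quantize_groupwise(w, wbits=4, groupsize=32):
    """Illustrative round-to-nearest group quantization (not GPTQ).

    One scale and minimum per group of `groupsize` weights; each weight
    is stored as a `wbits`-bit unsigned integer.
    """
    qmax = 2 ** wbits - 1                    # 15 for 4-bit
    groups = w.reshape(-1, groupsize)        # one row per group
    wmin = groups.min(axis=1, keepdims=True)
    scale = (groups.max(axis=1, keepdims=True) - wmin) / qmax
    scale[scale == 0] = 1.0                  # constant group: avoid div-by-zero
    q = np.clip(np.round((groups - wmin) / scale), 0, qmax).astype(np.uint8)
    return q, scale, wmin

def dequantize(q, scale, wmin):
    """Reconstruct approximate float weights from quantized storage."""
    return q * scale + wmin

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale, wmin = quantize_groupwise(w)
# Per-element reconstruction error is bounded by half a quantization step.
err = np.abs(dequantize(q, scale, wmin) - w.reshape(-1, 32))
print(q.max(), err.max() <= scale.max() / 2 + 1e-6)
```

A smaller groupsize (32 here, versus the more common 128) means more scale/offset pairs are stored, so the file is slightly larger but the per-group quantization error is lower.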