Vicuna-13B v1.1, GPTQ 4-bit, group size 128

Quantized from eachadea/vicuna-13b-1.1 with GPTQ-for-LLaMa's llama.py, using the C4 calibration set:

```
CUDA_VISIBLE_DEVICES=0,1 python llama.py ./vicuna-13b-v1 c4 --wbits 4 --true-sequential --groupsize 128 --save-safetensors vic-v1-13b-4b-128g.safetensors
```
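
Below is a minimal inference sketch assuming the quantized weights are loaded with the AutoGPTQ library (not what produced this model; the quantization itself was done with the command above). The directory path and prompt text are illustrative placeholders; `model_basename` matches the safetensors filename above, and `desc_act=False` reflects the absence of `--act-order` in the quantization command.

```python
# Hedged sketch: load the 4-bit, group-size-128 safetensors file with AutoGPTQ.
# Assumptions: model_dir is a placeholder directory containing the model config,
# tokenizer files, and vic-v1-13b-4b-128g.safetensors.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_dir = "./vicuna-13b-v1-gptq"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    model_basename="vic-v1-13b-4b-128g",  # safetensors filename without extension
    use_safetensors=True,
    device="cuda:0",
    # Quantization settings mirror the command above: 4-bit, group size 128, no act-order.
    quantize_config=BaseQuantizeConfig(bits=4, group_size=128, desc_act=False),
)

# Vicuna v1.1 conversation format.
prompt = "USER: Explain GPTQ quantization in one sentence.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```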