This model is a GPTQ 4-bit quantized version of meta-llama/Llama-2-7b-hf.
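
A minimal loading sketch with 🤗 Transformers, which can consume GPTQ checkpoints when the `auto-gptq` (or `gptqmodel`) and `optimum` packages are installed. `REPO_ID` below is a hypothetical placeholder, not this repository's confirmed id:

```python
# Placeholder: replace with this repository's actual id on the Hugging Face Hub.
REPO_ID = "your-username/Llama-2-7b-hf-GPTQ"

def load_quantized_model(repo_id: str = REPO_ID):
    """Load the GPTQ-quantized checkpoint and its tokenizer.

    Transformers reads the quantization config stored in the repo and
    handles the 4-bit GPTQ weights automatically.
    """
    # Imported inside the function so the sketch can be read/inspected
    # without the heavy transformers dependency installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    return model, tokenizer

if __name__ == "__main__":
    model, tokenizer = load_quantized_model()
    inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the base model is gated, downloading requires accepting the Llama 2 license and authenticating with a Hugging Face token (e.g. via `huggingface-cli login`).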