This is a 1:1-ratio merge of the following base model and LoRA:

- LLongMA-2-13b-16k
- airoboros-l2-gpt-1.4.1-13b-PEFT

GPTQ quantization is available in a separate repo.
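The card does not state how the merge was performed, but as a rough sketch, a LoRA merge like this can be reproduced with the PEFT library: load the base model, apply the adapter, and fold it into the weights. `merge_and_unload()` applies the adapter at its trained scaling, which corresponds to the 1:1 ratio described here. The repo IDs below are illustrative placeholders, not confirmed paths.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "LLongMA-2-13b-16k"                  # placeholder repo ID for the base model
ADAPTER = "airoboros-l2-gpt-1.4.1-13b-PEFT" # placeholder repo ID for the LoRA

# Load the 16k-context base model in fp16 to keep memory use reasonable.
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(BASE)

# Apply the LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(base, ADAPTER)

# Fold the adapter into the base weights at its trained (1:1) scaling,
# leaving a plain Transformers model that can be saved or quantized.
merged = model.merge_and_unload()

merged.save_pretrained("llongma-2-13b-16k-airoboros-merged")
tokenizer.save_pretrained("llongma-2-13b-16k-airoboros-merged")
```

The merged output is a standard Transformers checkpoint, which is what a downstream GPTQ quantization (as in the separate repo mentioned above) would take as input.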