
Llama-2-7B-Chat-GGUF

For use with llama.cpp. Original model: meta-llama/Llama-2-7b-chat-hf

GGUF is the new model file format used by llama.cpp.
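
For context, files like the ones in this repo are typically produced by running llama.cpp's conversion script on the original Hugging Face checkpoint and then quantizing the result. The sketch below drives those tools from Python; the script and binary names (`convert_hf_to_gguf.py`, `llama-quantize`), the paths, and the output filenames are assumptions that vary between llama.cpp releases, so check your own checkout.

```python
# Hedged sketch: convert the original HF checkpoint to GGUF, then quantize it.
# Tool names and paths are assumptions and differ between llama.cpp versions.
import subprocess

HF_MODEL_DIR = "meta-llama/Llama-2-7b-chat-hf"  # local clone of the original model
F16_GGUF = "llama-2-7b-chat.f16.gguf"           # assumed intermediate filename

# 1. Convert the PyTorch/safetensors checkpoint to an unquantized GGUF file.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", HF_MODEL_DIR, "--outfile", F16_GGUF],
    check=True,
)

# 2. Produce each quantized variant listed in this repo.
for quant in ["q4_0", "q4_1", "q5_0", "q5_1", "q8_0"]:
    subprocess.run(
        ["./llama-quantize", F16_GGUF, f"llama-2-7b-chat.{quant}.gguf", quant],
        check=True,
    )
```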

This repo contains quantized GGUF versions (q4_0, q4_1, q5_0, q5_1, q8_0) of the Llama-2-7B-Chat model.
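
A minimal sketch of downloading one of these quantized files and running it, here via the llama-cpp-python bindings rather than the llama.cpp CLI. The `repo_id` and the exact `.gguf` filename are assumptions; substitute the values from this repo's file list.

```python
# Hedged sketch: fetch one quantized variant and run a chat prompt with it.
# repo_id and filename are placeholders; use the actual values from this repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="<your-namespace>/Llama-2-7B-Chat-GGUF",  # assumption
    filename="llama-2-7b-chat.q4_0.gguf",             # assumption
)

llm = Llama(model_path=model_path, n_ctx=2048)

# Llama-2-Chat expects the [INST] ... [/INST] prompt format.
prompt = "[INST] Explain what the GGUF file format is in one sentence. [/INST]"
output = llm(prompt, max_tokens=128, temperature=0.7)
print(output["choices"][0]["text"])
```

Smaller quantizations such as q4_0 use less memory at some cost in output quality, while q8_0 stays closest to the original weights.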