Tags: llama2, quantization, nlp, transformers, language-model, bitsandbytes, fine-tuned, causal-lm

Overview

This model is a fine-tuned version of the "TinyPixel/Llama-2-7B-bf16-sharded" base model, trained on the "timdettmers/openassistant-guanaco" dataset. It targets causal language modeling and was fine-tuned with the PEFT framework, using bitsandbytes quantization to load the base model in reduced precision.
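As a usage sketch only (the adapter repository name below is a placeholder, and the 4-bit settings and prompt format are assumptions based on common QLoRA-style setups for this dataset, not values taken from this card), the fine-tuned adapter can be loaded on top of the quantized base model like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "TinyPixel/Llama-2-7B-bf16-sharded"
adapter_id = "your-username/llama2-guanaco-adapter"  # hypothetical adapter repo name

# Load the base model in 4-bit so it fits on a single consumer GPU (assumed settings).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the fine-tuned PEFT (LoRA) adapter on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Prompt format commonly used with the openassistant-guanaco dataset.
prompt = "### Human: What is quantization?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Keeping the base weights in 4-bit while the adapter weights stay in higher precision is what makes this kind of fine-tuned model practical to run on a single GPU.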

Training Procedure

The base model was loaded with a bitsandbytes quantization configuration during training.
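The exact parameter values are not reproduced here. As an illustration only, a typical 4-bit (QLoRA-style) training setup with PEFT might look like the following; every value shown is an assumption rather than the recorded configuration:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Illustrative 4-bit quantization config (assumed values, not the recorded ones).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_use_double_quant=False,       # whether to quantize the quantization constants
    bnb_4bit_compute_dtype=torch.float16,  # dtype used for matmuls in forward/backward
)

model = AutoModelForCausalLM.from_pretrained(
    "TinyPixel/Llama-2-7B-bf16-sharded",
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized model for k-bit training and attach LoRA adapters
# (rank and target modules below are assumptions).
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```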

Framework Versions

The model was trained using PEFT version 0.6.0.dev0.