
Llama 2 7B Chat - QLoRA fine-tuned on a Traditional Chinese corpus

<!-- description start -->

Description

This repo contains QLoRA model files for Meta's Llama 2 7B-Chat, fine-tuned on a Traditional Chinese corpus.

<!-- description end --> <!-- about-qlora start -->

About QLoRA

QLoRA, or Quantized Low-Rank Adaptation, takes the concept of LoRA and adds a twist. Imagine you have decided to digitize a recipe book to save space: you compress the book into a smaller file (the 4-bit quantized pre-trained model), but you still need to make your annotations (the low-rank adapters). QLoRA lets you do exactly that, backpropagating gradients through the frozen, quantized model into the adapters.
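To make this concrete, here is a minimal sketch of a QLoRA setup using transformers, bitsandbytes, and peft: the base model is loaded frozen in 4-bit NF4, and small trainable low-rank adapters are attached on top. The base model id and the LoRA hyperparameters (r, alpha, target modules) below are illustrative assumptions, not the settings used to train this repo.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-chat-hf"  # assumed base model id

# 4-bit NF4 quantization of the frozen base model (the "compressed book")
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters (the "annotations") that actually receive gradients
peft_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    bias="none", task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters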

<!-- prompt-template start -->

Prompt template: Llama-2-Chat

[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.  Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]

<!-- prompt-template end -->
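If you build this prompt programmatically, a small helper like the one below keeps the tags and line breaks consistent; the function name is hypothetical and not part of this repo.

DEFAULT_SYSTEM_PROMPT = (
    "You are a helpful, respectful and honest assistant. "
    "Always answer as helpfully as possible, while being safe."
)

def build_llama2_prompt(user_message: str, system_prompt: str = DEFAULT_SYSTEM_PROMPT) -> str:
    # Mirrors the Llama-2-Chat template shown above
    return f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n{user_message}[/INST]"

print(build_llama2_prompt("如果我去日本旅遊,我應該購買新幹線鐵路通行證嗎?"))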

How to download this model?

Note for manual downloaders: You almost never want to clone the entire repo! Multiple files are provided, and most users only want to pick and download the ones they need.

The following clients/libraries will automatically download models for you, providing a list of available files to choose from: the Hugging Face Hub (via huggingface-cli or the huggingface_hub Python library).

In text-generation-webui

Under Download Model, you can enter the model repo: DavidLanz/Llama-2-7b-chat-traditional-chinese-qlora and below it, a specific filename to download.

Then click Download.

On the command line, including multiple files at once

I recommend using the huggingface-hub Python library:

pip3 install 'huggingface-hub>=0.17.1'

Then you can download any individual model file to the current directory, at high speed, with a command like this:

huggingface-cli download DavidLanz/Llama-2-7b-chat-traditional-chinese-qlora --local-dir . --local-dir-use-symlinks False
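The same download can also be done from Python with the huggingface_hub library; the snippet below mirrors the CLI command above (the local directory is just an example).

from huggingface_hub import snapshot_download

# Download the whole repo into the current directory
snapshot_download(
    repo_id="DavidLanz/Llama-2-7b-chat-traditional-chinese-qlora",
    local_dir=".",
    local_dir_use_symlinks=False,
)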

<details> <summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

huggingface-cli download DavidLanz/Llama-2-7b-chat-traditional-chinese-qlora --local-dir . --local-dir-use-symlinks False --include='*.json'

For more documentation on downloading with huggingface-cli, please see: HF -> Hub Python Library -> Download files -> Download from the CLI.

To accelerate downloads on fast connections (1Gbit/s or higher), install hf_transfer:

pip3 install hf_transfer

And set environment variable HF_HUB_ENABLE_HF_TRANSFER to 1:

HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download DavidLanz/Llama-2-7b-chat-traditional-chinese-qlora --local-dir . --local-dir-use-symlinks False

Windows CLI users: Use set HF_HUB_ENABLE_HF_TRANSFER=1 before running the download command. </details> <!-- README_QLoRA.md-how-to-download end -->


How to run in text-generation-webui

Further instructions can be found in the text-generation-webui documentation.

How to run from Python code

You can load this model from Python using the Hugging Face transformers library together with bitsandbytes for 4-bit quantization (and peft if you want to work with the adapters directly).

How to load this model from Python using transformers

First install the required packages

pip install -q transformers accelerate peft bitsandbytes trl -U

Simple example code to load one of these QLoRA models

import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    pipeline,
    logging,
)

# Load the entire model on GPU 0
device_map = {"": 0}

# Activate 4-bit precision base model loading
use_4bit = True

# Compute dtype for 4-bit base models
bnb_4bit_compute_dtype = "float16"

# Quantization type (fp4 or nf4)
bnb_4bit_quant_type = "nf4"

# Activate nested quantization for 4-bit base models (double quantization)
use_nested_quant = False

compute_dtype = getattr(torch, bnb_4bit_compute_dtype)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=use_4bit,
    bnb_4bit_quant_type=bnb_4bit_quant_type,
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=use_nested_quant,
)

# Load base model
model_path = "DavidLanz/Llama-2-7b-chat-traditional-chinese-qlora"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=bnb_config,
    device_map=device_map
)
model.config.use_cache = False
model.config.pretraining_tp = 1

# Load LLaMA tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# Ignore warnings
logging.set_verbosity(logging.CRITICAL)

# Run the text generation pipeline with the fine-tuned model
# Prompt (Traditional Chinese): "If I travel to Japan, should I buy a Shinkansen rail pass?"
prompt = "如果我去日本旅遊,我應該購買新幹線鐵路通行證嗎?"
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=250)
result = pipe(f"<s>[INST] {prompt} [/INST]")
print(result[0]['generated_text'])
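If the repo is published as LoRA adapter weights rather than fully merged weights, you would instead load Meta's base model and attach the adapter with peft. The sketch below works under that assumption and reuses bnb_config and device_map from the example above; the base-model id and the adapter layout are assumptions, not something this card confirms.

from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed base model
adapter_id = "DavidLanz/Llama-2-7b-chat-traditional-chinese-qlora"

base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,  # defined in the example above
    device_map=device_map,
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the QLoRA adapters
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)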


<!-- original-model-card start -->

Original model card: Meta's Llama 2 7B-chat

Llama 2

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.

Model Details

Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the website and accept our License before requesting access here.

Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.

Model Developers Meta

Variations Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.

Input Models input text only.

Output Models generate text only.

Model Architecture Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.

Llama 2 family of models. Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.

Model Dates Llama 2 was trained between January 2023 and July 2023.

Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

License A custom commercial license is available at: https://ai.meta.com/resources/models-and-libraries/llama-downloads/

Research Paper "Llama 2: Open Foundation and Fine-Tuned Chat Models"

Intended Use

Intended Use Cases Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

To get the expected features and performance for the chat versions, specific formatting needs to be followed, including the INST and <<SYS>> tags, BOS and EOS tokens, and the whitespace and line breaks in between (we recommend calling strip() on inputs to avoid double spaces). See the reference chat_completion code on GitHub for details.
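As a rough sketch of that formatting for a multi-turn dialog (the helper below is illustrative only; the reference chat_completion code is authoritative):

def format_dialog(messages, system_prompt):
    # messages: alternating user/assistant strings, starting and ending with a user turn
    # The system prompt is folded into the first user turn, wrapped in <<SYS>> tags
    first = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n{messages[0].strip()}"
    turns = [first] + [m.strip() for m in messages[1:]]
    text = ""
    for i in range(0, len(turns) - 1, 2):
        user, answer = turns[i], turns[i + 1]
        text += f"<s>[INST] {user} [/INST] {answer} </s>"
    text += f"<s>[INST] {turns[-1]} [/INST]"
    return text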

Out-of-scope Uses Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.

Hardware and Software

Training Factors We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.