
Llama-2-Ko-70b-GPTQ

<!-- description start -->

Description

This repo contains GPTQ model files for Llama-2-ko-70b.

<!-- description end -->

<!-- README_GPTQ.md-provided-files start -->

Provided files and GPTQ parameters

Multiple quantisation parameters are provided (more coming soon), so you can choose the one best suited to your hardware and requirements. Each separate quant is in a different branch. All GPTQ quants were made with AutoGPTQ.

<details> <summary>Explanation of GPTQ parameters</summary>

- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as desc_act. True results in better quantisation accuracy.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.1 results in slightly better accuracy.
- GPTQ Dataset: The calibration dataset used during quantisation.
- Seq Len: The sequence length used for quantisation.
- Size: The size of the quantised model files.
- ExLlama: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.

</details>

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| main | 4 | None | Yes | 0.1 | wikitext | 4096 | 35.8 GB | Yes | 4-bit, with Act Order. Group size -1 (no grouping) to reduce VRAM usage. |
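
A specific branch can be loaded via the revision argument in Transformers. The sketch below is a rough outline only: the repo id is a placeholder for this repository's actual Hugging Face id, and GPTQ loading assumes the optimum and auto-gptq packages are installed.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute this repository's actual Hugging Face id.
model_id = "someuser/Llama-2-Ko-70b-GPTQ"

# revision selects the quantisation branch from the table above.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision="main",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)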

<!-- README_GPTQ.md-provided-files end -->

<!-- original model card start -->

Original model card: Llama 2 ko 70b

🚧 Note: this repo is under construction 🚧

Llama-2-Ko 🦙🇰🇷

Llama-2-Ko serves as an advanced iteration of Llama 2, benefiting from an expanded vocabulary and the inclusion of a Korean corpus in its further pretraining. Just like its predecessor, Llama-2-Ko operates within the broad range of generative text models that stretch from 7 billion to 70 billion parameters. This repository focuses on the 70B pretrained version, which is tailored to fit the Hugging Face Transformers format. For access to the other models, feel free to consult the index provided below.

Model Details

Model Developers Junbum Lee (Beomi)

Variations Llama-2-Ko will come in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.

Input Models input text only.

Output Models generate text only.

Usage

Use with 8bit inference

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# 8-bit loading requires the bitsandbytes package.
model_8bit = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-70b",
    load_in_8bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-70b")
pipe = pipeline("text-generation", model=model_8bit, tokenizer=tokenizer)

def gen(x):
    # NOTE: this model is NOT fine-tuned on an instruction dataset,
    # so this prompt format is not optimal.
    generated = pipe(
        f"### Title: {x}\n\n### Contents:",
        max_new_tokens=300,
        top_p=0.95,
        do_sample=True,
    )[0]["generated_text"]
    print(len(generated))
    print(generated)
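
For example, calling the helper with an arbitrary title (the string below is just an illustrative input):

gen("오늘의 날씨")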

Use with bf16 inference

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load weights in bfloat16 (see the Apple Silicon note below).
model = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-70b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-70b")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

def gen(x):
    # NOTE: this model is NOT fine-tuned on an instruction dataset,
    # so this prompt format is not optimal.
    generated = pipe(
        f"### Title: {x}\n\n### Contents:",
        max_new_tokens=300,
        top_p=0.95,
        do_sample=True,
    )[0]["generated_text"]
    print(len(generated))
    print(generated)

Model Architecture

Llama-2-Ko is an auto-regressive language model that uses an optimized transformer architecture based on Llama-2.

| Model | Training Data | Params | Content Length | GQA | Tokens | LR |
| --- | --- | --- | --- | --- | --- | --- |
| Llama-2-Ko 70B | A new mix of Korean online data | 70B | 4k | ✅ | >20B* | 1e-5 |

*Plan to train up to 300B tokens.

Vocab Expansion

| Model Name | Vocabulary Size | Description |
| --- | --- | --- |
| Original Llama-2 | 32000 | Sentencepiece BPE |
| Expanded Llama-2-Ko | 46592 | Sentencepiece BPE. Added Korean vocab and merges |

*Note: Llama-2-Ko 70B uses a vocabulary size of 46592, not the 46336 used by the 7B model; an updated 7B model will be released soon.
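
The expanded vocabulary can be checked directly. A minimal sketch, assuming the tokenizer loads with defaults:

from transformers import AutoTokenizer

# Expanded Korean tokenizer; vocab_size should match the table above.
ko_tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-70b")
print(ko_tokenizer.vocab_size)  # 46592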

Tokenizing "안녕하세요, 오늘은 날씨가 좋네요. ㅎㅎ"

| Model | Tokens |
| --- | --- |
| Llama-2 | ['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '<0xEB>', '<0x82>', '<0xA0>', '씨', '가', '▁', '<0xEC>', '<0xA2>', '<0x8B>', '<0xEB>', '<0x84>', '<0xA4>', '요', '.', '▁', '<0xE3>', '<0x85>', '<0x8E>', '<0xE3>', '<0x85>', '<0x8E>'] |
| Llama-2-Ko 70B | ['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요', '.', '▁', 'ㅎ', 'ㅎ'] |

Tokenizing "Llama 2: Open Foundation and Fine-Tuned Chat Models"

| Model | Tokens |
| --- | --- |
| Llama-2 | ['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els'] |
| Llama-2-Ko 70B | ['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els'] |
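
The comparison above can be reproduced with tokenize(); a minimal sketch:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-70b")

# Korean text: far fewer tokens than the original byte-level fallback.
print(tokenizer.tokenize("안녕하세요, 오늘은 날씨가 좋네요. ㅎㅎ"))

# English text: identical to original Llama-2 tokenization,
# since only Korean vocab and merges were added.
print(tokenizer.tokenize("Llama 2: Open Foundation and Fine-Tuned Chat Models"))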

Model Benchmark

LM Eval Harness - Korean (polyglot branch)

TBD

Note for oobabooga/text-generation-webui

Remove the ValueError restriction on the except clause in the load_tokenizer function (line 109 or nearby) in modules/models.py, so the fallback tokenizer path is reached:

diff --git a/modules/models.py b/modules/models.py
index 232d5fa..de5b7a0 100644
--- a/modules/models.py
+++ b/modules/models.py
@@ -106,7 +106,7 @@ def load_tokenizer(model_name, model):
                 trust_remote_code=shared.args.trust_remote_code,
                 use_fast=False
             )
-        except ValueError:
+        except:
             tokenizer = AutoTokenizer.from_pretrained(
                 path_to_model,
                 trust_remote_code=shared.args.trust_remote_code,

Since Llama-2-Ko uses the FastTokenizer provided by the HF tokenizers package, not sentencepiece, you must pass the use_fast=True option when initializing the tokenizer. Apple Silicon does not support BF16 computing; use CPU instead. (BF16 is supported on NVIDIA GPUs.)
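
A minimal sketch of the required tokenizer initialization:

from transformers import AutoTokenizer

# use_fast=True is required: the repo ships a Hugging Face tokenizers
# FastTokenizer rather than a sentencepiece model file.
tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-70b", use_fast=True)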

LICENSE

Citation

@misc {l._junbum_2023,
	author       = { {L. Junbum} },
	title        = { llama-2-ko-70b },
	year         = 2023,
	url          = { https://huggingface.co/beomi/llama-2-ko-70b },
	doi          = { 10.57967/hf/1130 },
	publisher    = { Hugging Face }
}

<!-- original model card end -->