<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end -->

YuLan Chat 2 13B - GPTQ

<!-- description start -->

Description

This repo contains GPTQ model files for RUC-GSAI-YuLan's YuLan Chat 2 13B.

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

<!-- description end --> <!-- repositories-available start -->

Repositories available

<!-- prompt-template start -->

Prompt template: YulanChat

```
The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, and polite answers to the user's questions.
[|Human|]:{prompt}
[|AI|]:
```

<!-- prompt-template end --> <!-- licensing start -->

Licensing

The creator of the source model has listed its license as MIT, and this quantisation has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing, but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: RUC-GSAI-YuLan's YuLan Chat 2 13B. <!-- licensing end --> <!-- README_GPTQ.md-provided-files start -->

Provided files and GPTQ parameters

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

All recent GPTQ files are made with AutoGPTQ, as are all files in non-main branches. Files in the main branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.

<details>
  <summary>Explanation of GPTQ parameters</summary>

- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as desc_act. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus group size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.1 results in slightly better accuracy than the default of 0.01.
- GPTQ Dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model.
- Seq Len: The length of the dataset sequences used for quantisation. Ideally this matches the model's sequence length.
- ExLlama: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.

</details>

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| main | 4 | 128 | No | 0.1 | wikitext | 4096 | 7.65 GB | Yes | 4-bit, without Act Order and group size 128g. |
| gptq-4bit-32g-actorder_True | 4 | 32 | Yes | 0.1 | wikitext | 4096 | 8.40 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| gptq-4bit-64g-actorder_True | 4 | 64 | Yes | 0.1 | wikitext | 4096 | 7.90 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| gptq-4bit-128g-actorder_True | 4 | 128 | Yes | 0.1 | wikitext | 4096 | 7.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| gptq-8bit--1g-actorder_True | 8 | None | Yes | 0.1 | wikitext | 4096 | 13.76 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| gptq-8bit-128g-actorder_True | 8 | 128 | Yes | 0.1 | wikitext | 4096 | 14.05 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
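
Each branch also ships a quantize_config.json recording these parameters, so once you have downloaded a branch you can check what you got. A minimal sketch (the local path is just an example of wherever you downloaded the branch to):

```python
import json

# Example path: the directory a branch was downloaded into.
with open("YuLan-Chat-2-13B-GPTQ/quantize_config.json") as f:
    cfg = json.load(f)

# AutoGPTQ configs typically include keys such as bits, group_size,
# damp_percent and desc_act, matching the columns of the table above.
print(cfg)
```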

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->

How to download from branches

Each quant is in its own branch of the repository. To fetch a branch other than main with Git, pass its name to --branch, for example:

```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/YuLan-Chat-2-13B-GPTQ
```
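
Alternatively, the Hugging Face Hub Python library can download a specific branch without Git. A minimal sketch (the destination directory is just an example):

```python
from huggingface_hub import snapshot_download

# Fetch all files from one quant branch into a local directory.
snapshot_download(
    repo_id="TheBloke/YuLan-Chat-2-13B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # branch name from the table above
    local_dir="YuLan-Chat-2-13B-GPTQ",       # example destination
)
```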

How to easily download and use this model in text-generation-webui

Please make sure you're using the latest version of text-generation-webui.

It is strongly recommended to use the text-generation-webui one-click installers unless you're sure you know how to make a manual install.

1. Click the Model tab.
2. Under Download custom model or LoRA, enter TheBloke/YuLan-Chat-2-13B-GPTQ.
3. Click Download.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to Model.
6. In the Model dropdown, choose the model you just downloaded: YuLan-Chat-2-13B-GPTQ.
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click Save settings for this model followed by Reload the Model in the top right.
9. Once you're ready, click the Text Generation tab and enter a prompt to get started!

<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-python start -->

How to use this GPTQ model from Python code

Install the necessary packages

Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install "transformers>=4.32.0" "optimum>=1.12.0"
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
```

If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
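
A quick way to confirm that the installed versions meet the requirements above, using only the standard library:

```python
from importlib.metadata import version

# Print the installed version of each required package.
for pkg in ("transformers", "optimum", "auto-gptq"):
    print(pkg, version(pkg))
```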

For CodeLlama models only: you must use Transformers 4.33.0 or later.

If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:

```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```

You can then use the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/YuLan-Chat-2-13B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
prompt_template=f'''The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, and polite answers to the user's questions.
[|Human|]:{prompt}
[|AI|]:

'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```

<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->

Compatibility

The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with Occ4m's GPTQ-for-LLaMa fork.

ExLlama is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.

Hugging Face's Text Generation Inference (TGI) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end -->
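
As a sketch of how you might query this model once it is being served by TGI, using the huggingface_hub client library (the endpoint URL is a placeholder you would replace with your own server's address):

```python
from huggingface_hub import InferenceClient

# Placeholder: point this at your own running TGI server.
client = InferenceClient("http://127.0.0.1:8080")

prompt_template = """The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, and polite answers to the user's questions.
[|Human|]:Tell me about AI
[|AI|]:"""

# Generation parameters mirror the Python example above.
response = client.text_generation(
    prompt_template,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.1,
)
print(response)
```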

<!-- footer start --> <!-- 200823 -->

Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

Thanks, and how to contribute

Thanks to the chirper.ai team!

Thanks to Clay from gpus.llm-utils.org!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Special thanks to: Aemon Algiz.

Patreon special mentions: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

Original model card: RUC-GSAI-YuLan's YuLan Chat 2 13B

<div align=center> <h1>YuLan-Chat: An Open-Source Bilingual Chatbot</h1> </div>

YuLan-Chat models are chat-based large language models developed by researchers at GSAI, Renmin University of China (YuLan, meaning Yulan Magnolia, is the campus flower of Renmin University of China). The newest version was developed by continually pretraining and instruction-tuning LLaMA-2 with high-quality English and Chinese data. The model has the following technical characteristics:

- Its language ability has been improved through continued pre-training on high-quality bilingual English-Chinese data;
- To better support Chinese and longer inputs and outputs, the vocabulary and context length of the original LLaMA-2 have been extended, and the model currently supports an 8K context;
- To make the model follow user instructions more faithfully, a high-quality bilingual instruction dataset was constructed and used for multi-stage instruction tuning.

Model Zoo

Due to license limitations, for models based on LLaMA we only provide the weight difference from the original checkpoints (a sketch of how such a difference is applied follows below); models based on LLaMA-2 can be used directly. Please check the Usage section for more details.
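
For the LLaMA-based checkpoints, recovering usable weights therefore means adding the published difference back onto the original LLaMA weights. A minimal sketch of that idea, not the project's actual tooling; it assumes both checkpoints are plain PyTorch state dicts with matching parameter names, and all file paths are placeholders:

```python
import torch

# Assumption: both files are state dicts with identical keys.
base = torch.load("llama-13b/consolidated.pth")   # original LLaMA weights (example path)
delta = torch.load("yulan-delta/delta.pth")       # published weight difference (example path)

# Recover the full model by adding the delta to the base weights.
merged = {name: base[name] + delta[name] for name in delta}
torch.save(merged, "yulan-chat-13b/recovered.pth")
```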

Limitations: Despite our efforts to reduce potential security issues during the model's usage and encourage the generation of text that aligns with ethical and legal requirements, the language model is based on probabilistic generation, which means it may still produce unexpected outputs. For instance, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We do not assume any responsibility for any consequences resulting from the dissemination of harmful information.


| Model | Backbone | Extended Vocab | Extended Length | Continue PT | SFT | Released Date |
| ----- | -------- | -------------- | --------------- | ----------- | --- | ------------- |
| YuLan-Chat-2-13B | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | ✅ | ✅ | 2023.8.2 |
| YuLan-LLaMA-2-13B | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | ✅ | ❌ | 2023.8.2 |
| YuLan-Chat-1-65B-v2 | LLaMA-65B | ✅ 51,190 | ❌ 2,048 | ✅ | ✅ | 2023.8.2 |
| YuLan-Chat-1-13B-v1 | LLaMA-13B | ❌ 32,000 | ❌ 2,048 | ❌ | ✅ | 2023.6.8 |
| YuLan-Chat-1-65B-v1 | LLaMA-65B | ❌ 32,000 | ❌ 2,048 | ❌ | ✅ | 2023.6.8 |

Evaluation

We evaluate our YuLan-Chat model on several Chinese and English benchmarks. The evaluation results are shown as follows.


MMLU

MMLU (Massive Multitask Language Understanding) is a benchmark designed to measure knowledge acquired during pretraining by evaluating models exclusively in zero-shot and few-shot settings.


| Model | STEM | Social Science | Humanities | Others | Avg. |
| ----- | ---- | -------------- | ---------- | ------ | ---- |
| YuLan-Chat-1-13B-v1 | 39.6 | 57.8 | 42.6 | 57.6 | 49.4 |
| YuLan-Chat-1-65B-v1 | 49.2 | 71.7 | 57.7 | 66.7 | 61.3 |
| YuLan-Chat-1-65B-v2 | 46.3 | 67.9 | 56.9 | 63.9 | 58.7 |
| LLaMA-2-13B | 44.6 | 64.2 | 53.9 | 62.2 | 56.2 |
| FlagAlpha/Llama2-Chinese-13b-Chat | 44.4 | 63.2 | 51.6 | 60.6 | 55.0 |
| Linly-AI/Chinese-LLaMA-2-13B-hf | 43.6 | 62.7 | 49.8 | 61.6 | 54.4 |
| YuLan-LLaMA-2-13B | 42.9 | 61.5 | 50.4 | 58.6 | 53.4 |
| YuLan-Chat-2-13B | 45.3 | 66.7 | 53.8 | 62.8 | 57.2 |

C-Eval

C-Eval is a comprehensive Chinese evaluation suite for foundation models.


| Model | STEM | Social Science | Humanities | Others | Avg. | Avg. (Hard) |
| ----- | ---- | -------------- | ---------- | ------ | ---- | ----------- |
| YuLan-Chat-1-13B-v1 | 30.2 | 37.4 | 31.9 | 30.7 | 32.0 | 25.7 |
| YuLan-Chat-1-65B-v1 | 37.7 | 46.1 | 36.8 | 38.0 | 39.2 | 31.1 |
| YuLan-Chat-1-65B-v2 | 39.9 | 55.9 | 47.7 | 43.7 | 45.4 | 31.4 |
| LLaMA-2-13B | 36.9 | 43.2 | 37.6 | 36.6 | 38.2 | 32.0 |
| FlagAlpha/Llama2-Chinese-13b-Chat | 36.8 | 44.5 | 36.3 | 36.5 | 38.1 | 30.9 |
| Linly-AI/Chinese-LLaMA-2-13B-hf | 33.7 | 44.8 | 36.6 | 36.5 | 37.0 | 27.7 |
| YuLan-LLaMA-2-13B | 35.3 | 46.4 | 41.9 | 37.6 | 39.3 | 28.6 |
| YuLan-Chat-2-13B | 38.9 | 49.7 | 45.0 | 40.8 | 42.6 | 32.2 |

AGI-Eval-Gaokao

AGI-Eval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. We use the sub-branch Chinese-Gaokao for evaluation.


| Model | Avg. | Chinese | English | Geography | History | Biology | Chemistry | Physics | Math-QA | Math-Cloze |
| ----- | ---- | ------- | ------- | --------- | ------- | ------- | --------- | ------- | ------- | ---------- |
| YuLan-Chat-1-13B-v1 | 24.3 | 22.4 | 60.1 | 27.6 | 25.5 | 21.9 | 30.0 | 8.0 | 21.1 | 1.7 |
| YuLan-Chat-1-65B-v1 | 29.3 | 25.2 | 79.1 | 37.2 | 36.6 | 28.6 | 24.2 | 11.0 | 21.9 | 0.0 |
| YuLan-Chat-1-65B-v2 | 37.9 | 31.4 | 80.4 | 50.8 | 56.6 | 33.3 | 29.0 | 32.0 | 24.4 | 0.8 |
| LLaMA-2-13B | 32.7 | 27.2 | 72.2 | 36.2 | 43.0 | 26.2 | 32.4 | 30.0 | 26.2 | 0.9 |
| FlagAlpha/Llama2-Chinese-13b-Chat | 31.6 | 26.4 | 70.6 | 35.2 | 38.7 | 28.1 | 28.0 | 29.5 | 25.6 | 2.5 |
| Linly-AI/Chinese-LLaMA-2-13B-hf | 31.1 | 22.8 | 74.8 | 42.2 | 37.9 | 24.3 | 28.0 | 23.0 | 26.5 | 0.0 |
| YuLan-LLaMA-2-13B | 34.2 | 25.2 | 70.3 | 43.2 | 48.5 | 30.0 | 29.5 | 31.0 | 28.5 | 1.7 |
| YuLan-Chat-2-13B | 39.5 | 37.0 | 85.3 | 46.7 | 51.9 | 43.8 | 38.2 | 29.0 | 23.1 | 0.9 |

Usage

Import from Huggingface Transformers

As our model is trained based on LLaMA, it can be loaded in the same way as the original LLaMA.


```python
>>> from transformers import LlamaTokenizer, LlamaForCausalLM
>>> tokenizer = LlamaTokenizer.from_pretrained("yulan-team/YuLan-Chat-2-13b")
>>> model = LlamaForCausalLM.from_pretrained("yulan-team/YuLan-Chat-2-13b").cuda()
>>> model = model.eval()
>>> input_text = "hello"
>>> prompt = "The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, and polite answers to the user's questions.\n[|Human|]:{}\n[|AI|]:".format(input_text)
>>> inputs = tokenizer(prompt, return_tensors='pt', padding="longest", max_length=8192, truncation=True, return_attention_mask=True, add_special_tokens=True)
>>> kwargs = {'temperature': 0.8, 'top_p': 0.95, "top_k": 50, "repetition_penalty": 1.1, "no_repeat_ngram_size": 64, "max_length": 8192, "pad_token_id": tokenizer.bos_token_id, "eos_token_id": tokenizer.eos_token_id}
>>> outputs = model.generate(inputs['input_ids'].to(model.device), attention_mask=inputs['attention_mask'].to(model.device), do_sample=True, **kwargs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0][len(prompt):])
Hello! How can I assist you today?
```

License

YuLan-Chat uses the MIT License. All data and code in this project may only be used for academic purposes.


Contributors

| Pre-training | Fine-tuning |
| ------------ | ----------- |
| Yutao Zhu (Lead), Kelong Mao, Wentong Chen, Yiding Sun, Yihan Wu, Qian Cao, Lei Zhang, Feng Wang, Qiangqiang Ren | Kun Zhou (Lead), Yushuo Chen, Zhipeng Chen, Lei Wang, Yupeng Hou, Xincheng Pang, Junyi Li, Yuhan Chen, Shufang Xie |

Reference

Please kindly cite our work if it helps you.


```bibtex
@misc{YuLan-Chat,
  author = {YuLan-Team},
  title = {YuLan-Chat: An Open-Source Bilingual Chatbot},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/RUC-GSAI/YuLan-Chat}},
}
```