Chinese-LLaMA-2-7B-16K

This is the full Chinese-LLaMA-2-7B-16K model (context size 16K), which can be loaded directly for inference and full-parameter training.
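
For orientation, here is a minimal inference sketch using 🤗 transformers. The model path is a placeholder (substitute the local directory or Hub ID where this model actually lives), and the dtype/device settings are assumptions for a single-GPU setup rather than project-mandated values.

```python
# Minimal inference sketch with Hugging Face transformers.
# "path/to/chinese-llama-2-7b-16k" is a placeholder, not a confirmed Hub ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/chinese-llama-2-7b-16k"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # assumption: fp16 on a single GPU
    device_map="auto",          # requires the accelerate package
)

# This is a foundation model (not instruction-tuned), so plain
# text-completion prompts work best.
prompt = "人工智能是"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```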


Description of Chinese-LLaMA-Alpaca-2

This project is based on Llama-2, released by Meta, and is the second generation of the Chinese LLaMA & Alpaca LLM project. We open-source the Chinese LLaMA-2 (foundation model) and Alpaca-2 (instruction-following model). These models extend the original Llama-2 with an expanded and optimized Chinese vocabulary, and were incrementally pre-trained on large-scale Chinese data, further improving fundamental semantic understanding of Chinese and yielding a significant performance improvement over the first-generation models. The relevant models support a 4K context, which can be expanded up to 18K+ using the NTK method.
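
To illustrate the NTK-based context extension mentioned above: transformers (v4.31+) exposes dynamic NTK-aware RoPE scaling for LLaMA-family models through the `rope_scaling` argument. The sketch below shows that general mechanism; the model path and scaling factor are illustrative assumptions, and the project's own inference scripts may configure this differently.

```python
# Sketch: dynamic NTK-aware RoPE scaling via transformers' rope_scaling
# option (available for LLaMA-family models since transformers v4.31).
# The model path and factor are illustrative, not project defaults.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "path/to/chinese-llama-2-7b",  # placeholder: a 4K-context base model
    rope_scaling={"type": "dynamic", "factor": 4.0},  # stretch usable context beyond 4K
)
```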

For the main contents of this project and further details, please refer to https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/.