Taiwan-LLM-7B-v2.0.1-chat - GGUF

Description

This repo contains GGUF format model files for Yen-Ting Lin's Taiwan LLM based on LLaMa2-7b v2.0.1-chat.

Any use of the Taiwan LLM repository requires explicit acknowledgment of and attribution to the original author.

About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including, for the first time, full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
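As a quick illustration of that metadata (not part of the original card), the sketch below uses the `gguf` Python package that ships with llama.cpp to list the key/value fields stored in a downloaded file. The file name is an assumption; substitute whichever quantized GGUF file you actually use.

```python
# Minimal sketch: inspect GGUF metadata with the gguf-py package from llama.cpp.
# The file name below is an assumption, not an official artifact name.
from gguf import GGUFReader

reader = GGUFReader("taiwan-llm-7b-v2.0.1-chat.Q4_K_M.gguf")

# GGUF stores model information as key/value metadata fields: architecture,
# context length, tokenizer vocabulary, special-token IDs, and so on.
for key in reader.fields:
    print(key)

# Tensor descriptors (name, shape, quantization type) are also part of the file.
print(len(reader.tensors), "tensors")
```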

As of August 25th, clients and libraries known to support GGUF include llama.cpp itself, llama-cpp-python, text-generation-webui, KoboldCpp, LM Studio, and ctransformers.
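As a hedged usage sketch (not from the original card), the model can be run with llama-cpp-python, one of the libraries mentioned above. The file name, context size, messages, and sampling settings are illustrative assumptions; whether the default chat handler matches the model's training prompt format should be verified against the upstream repository.

```python
# Sketch assuming llama-cpp-python is installed and the GGUF file is local.
from llama_cpp import Llama

llm = Llama(
    model_path="taiwan-llm-7b-v2.0.1-chat.Q4_K_M.gguf",  # assumed local file name
    n_ctx=4096,  # LLaMA-2-based models support a 4096-token context window
)

# create_chat_completion formats the conversation with a chat handler, so the
# prompt template does not have to be written out by hand here.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "你是一個有幫助的助理。"},    # "You are a helpful assistant."
        {"role": "user", "content": "請簡單介紹台灣的夜市文化。"},  # "Briefly introduce Taiwan's night markets."
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```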


Original model card


Taiwan LLM based on LLaMa2-7b

Continued pretraining on 20 billion tokens of Traditional Chinese text and instruction fine-tuning on millions of conversations.

This version does NOT include Common Crawl data.

🌟 Check out the new Taiwan-LLM Demo Chat-UI 🌟