llm-rs ggml

GGML-converted versions of OpenLM Research's OpenLLaMA models

OpenLLaMA: An Open Reproduction of LLaMA

In this repo, we present a permissively licensed open source reproduction of Meta AI's LLaMA large language model. We are releasing 7B and 3B models trained on 1T tokens, as well as a preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of the pre-trained OpenLLaMA models, along with evaluation results and a comparison against the original LLaMA models. Please see the OpenLLaMA project homepage for more details.

Weights Release, License and Usage

We release the weights in two formats: an EasyLM format to be used with our EasyLM framework, and a PyTorch format to be used with the Hugging Face transformers library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
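For the PyTorch weights, here is a minimal sketch of loading the upstream checkpoint with the Hugging Face transformers library. The repo id openlm-research/open_llama_7b comes from the table below; the prompt and generation settings are illustrative.

import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

#Load the tokenizer and model from the upstream OpenLLaMA repository
model_path = "openlm-research/open_llama_7b"
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)

#Generate a short continuation of the prompt
prompt = "The meaning of life is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))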

Converted Models:

| Name | Based on | Type | Container | GGML Version |
| --- | --- | --- | --- | --- |
| open_llama_3b-f16.bin | openlm-research/open_llama_3b | F16 | GGML | V3 |
| open_llama_3b-q4_0-ggjt.bin | openlm-research/open_llama_3b | Q4_0 | GGJT | V3 |
| open_llama_3b-q5_1-ggjt.bin | openlm-research/open_llama_3b | Q5_1 | GGJT | V3 |
| open_llama_7b-f16.bin | openlm-research/open_llama_7b | F16 | GGML | V3 |
| open_llama_7b-q4_0-ggjt.bin | openlm-research/open_llama_7b | Q4_0 | GGJT | V3 |
| open_llama_7b-q5_1-ggjt.bin | openlm-research/open_llama_7b | Q5_1 | GGJT | V3 |
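The converted files can also be pulled from the Hub individually, which is handy for the local.ai and Rust workflows below that expect a file on disk. A minimal sketch using huggingface_hub; the repo id rustformers/open-llama-ggml matches the one used in the Python example below, and the chosen filename is illustrative.

from huggingface_hub import hf_hub_download

#Download one of the converted GGML files to the local cache and return its path
model_path = hf_hub_download(
    repo_id="rustformers/open-llama-ggml",
    filename="open_llama_3b-q4_0-ggjt.bin",
)
print(model_path)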

Usage

Python via llm-rs:

Installation

Via pip: pip install llm-rs

Run inference

from llm_rs import AutoModel

#Load the model; pass any file name from the table above as `model_file`
model = AutoModel.from_pretrained("rustformers/open-llama-ggml", model_file="open_llama_7b-q4_0-ggjt.bin")

#Generate
print(model.generate("The meaning of life is"))
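If a converted file is already on disk (for example one fetched with the huggingface_hub sketch above), llm-rs can also load it from a local path. This sketch assumes the architecture-specific `Llama` class shown in the llm-rs examples accepts a file path; the path itself is illustrative.

from llm_rs import Llama

#Load a locally stored GGJT file (path is illustrative)
model = Llama("./open_llama_3b-q4_0-ggjt.bin")

#Generate
print(model.generate("The meaning of life is"))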

Using the local.ai GUI

Installation

Download the installer at www.localai.app.

Running Inference

Download your preferred model and place it in the "models" directory. Subsequently, you can start a chat session with your model directly from the interface.

Rust via Rustformers/llm:

Installation

git clone --recurse-submodules https://github.com/rustformers/llm.git
cd llm
cargo build --release

Run inference

cargo run --release -- llama infer -m path/to/model.bin  -p "Tell me how cool the Rust programming language is:"