Buy me a coffee if you like this project ;) <a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
### Description
GGML format model files for this project.
### Inference
```python
from ctransformers import AutoModelForCausalLM

# Placeholders: set these to the local directory (or Hugging Face repo id)
# and the name of the GGML weights file you downloaded.
output_dir = "path/to/model"
ggml_file = "model.ggmlv3.q4_0.bin"

# Load the GGML weights with ctransformers, offloading 32 layers to the GPU.
llm = AutoModelForCausalLM.from_pretrained(
    output_dir, model_file=ggml_file, gpu_layers=32, model_type="llama"
)

manual_input: str = "Tell me about your last dream, please."
print(llm(manual_input, max_new_tokens=256, temperature=0.9, top_p=0.7))
```
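
For interactive use, ctransformers can also stream tokens as they are generated. A minimal sketch, reusing the `llm` object and `manual_input` prompt loaded above:

```python
# Stream tokens as they are produced instead of waiting for the full completion.
for token in llm(manual_input, max_new_tokens=256, temperature=0.9, stream=True):
    print(token, end="", flush=True)
```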
## Original model card
### Model Details
This is an unofficial implementation of AlpaGasus-13B, a chat assistant trained by fine-tuning LLaMA on a Claude-filtered Alpaca dataset of around 5K instruction triplets.
- Developed by: gpt4life
- Model type: An auto-regressive language model based on the transformer architecture.
- License: Non-commercial license
- Finetuned from model: LLaMA-13B.
Please see the original LLaMA license before using this model.
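
Because the model is instruction-tuned on Alpaca-style data, prompts generally follow the standard Alpaca instruction template. The template below is an assumption based on the Alpaca dataset format, not something stated in this card:

```python
# Assumed Alpaca-style prompt template (no-input variant); not confirmed by this card.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Tell me about your last dream, please.")
```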
### Model Sources
- Repository: https://github.com/gpt4life/alpagasus
- Paper: https://arxiv.org/pdf/2307.08701.pdf
### Training Details
AlpaGasus-13B is fine-tuned from LLaMA-13B with supervised instruction fine-tuning on the filtered Alpaca dataset.
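
The filtering step described in the paper keeps only Alpaca triplets that a rater LLM scores above a quality threshold. A minimal sketch of that selection step, assuming each example already carries a rater-assigned `score` field (the file names, field name, and cutoff are illustrative, not taken from the repository):

```python
import json

# Assumed input: Alpaca-format records with a rater-assigned "score" field.
with open("alpaca_scored.json") as f:
    examples = json.load(f)

THRESHOLD = 4.5  # illustrative cutoff; see the paper for the exact setting
filtered = [ex for ex in examples if ex["score"] >= THRESHOLD]

with open("alpaca_filtered.json", "w") as f:
    json.dump(filtered, f, indent=2)

print(f"Kept {len(filtered)} of {len(examples)} examples")
```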