# StableLM-Tuned-Alpha 7b: sharded checkpoint
<a href="https://colab.research.google.com/gist/pszemraj/4bd75aa3744f2a02a5c0ee499932b7eb/sharded-stablelm-testing-notebook.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>
This is a sharded checkpoint (with ~4GB shards) of the [stabilityai/stablelm-tuned-alpha-7b](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b) model. Refer to the original model card for all details.

- Sharding enables low-RAM loading, e.g. on Colab :)
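For reference, here is a minimal sketch of how a checkpoint like this one is typically produced with `save_pretrained` and `max_shard_size` (the output path is hypothetical, and the source model is assumed to be the original stabilityai repo; this is illustrative, not the exact command used here):

```python
import torch
from transformers import AutoModelForCausalLM

# illustrative only: load the (assumed) source model in fp16 to halve RAM usage,
# then write it back out in ~4GB shards ("./stablelm-sharded" is a hypothetical path)
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16
)
model.save_pretrained("./stablelm-sharded", max_shard_size="4GB")
```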
## Basic Usage
Install `transformers`, `accelerate`, and `bitsandbytes`:
```bash
pip install -U -q transformers bitsandbytes accelerate
```
Load the model in 8-bit precision, then run inference:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ethzanalytics/stablelm-tuned-alpha-7b-sharded"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# load_in_8bit quantizes weights via bitsandbytes;
# device_map="auto" spreads the shards across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name, load_in_8bit=True, device_map="auto"
)
```
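A minimal generation sketch follows. The `<|SYSTEM|>`/`<|USER|>`/`<|ASSISTANT|>` tags follow the prompt format described on the original StableLM-Tuned-Alpha model card; the prompt text and sampling settings here are just illustrative:

```python
# chat-style prompt in the tuned-alpha format (abbreviated system prompt)
prompt = (
    "<|SYSTEM|># StableLM Tuned (Alpha version)\n"
    "- StableLM is a helpful and harmless open-source AI language model.\n"
    "<|USER|>Write a haiku about large language models.<|ASSISTANT|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

See the original model card for the full system prompt and its recommended stopping criteria on the special chat tokens.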