# stablelm-tuned-alpha-7b-sharded-8bit
This is a sharded checkpoint (with ~4GB shards) of the `stabilityai/stablelm-tuned-alpha-7b` model in 8-bit precision using `bitsandbytes`.
Refer to the original model card for all details about the model itself. For more info on loading 8-bit models, refer to the example repo and/or the `transformers` 4.28.0 release notes.
- total model size is only ~7 GB!
- this enables low-RAM loading, e.g. on Colab :)
## Basic Usage
<a href="https://colab.research.google.com/gist/pszemraj/4bd75aa3744f2a02a5c0ee499932b7eb/sharded-stablelm-testing-notebook.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>
You can use this model as a drop-in replacement in the notebook for the standard sharded models.
### Python
Install/upgrade `transformers`, `accelerate`, and `bitsandbytes`. For this to work, you must have `transformers>=4.28.0` and `bitsandbytes>0.37.2`:
```bash
pip install -U -q transformers bitsandbytes accelerate
```
Load the model. As it is serialized in 8-bit, you don't need to do anything special:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ethzanalytics/stablelm-tuned-alpha-7b-sharded-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
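Once loaded, the tuned StableLM models expect their chat-style special tokens in the prompt. Below is a minimal sketch of building such a prompt; the `<|SYSTEM|>`/`<|USER|>`/`<|ASSISTANT|>` token names follow the upstream StableLM-Tuned-Alpha model card, while the `build_prompt` helper and the system-prompt wording are illustrative assumptions:

```python
# Sketch: build a chat prompt using the special turn tokens that the
# StableLM-Tuned-Alpha series was fine-tuned with. The helper name and
# system-prompt text here are illustrative, not part of this repo.
SYSTEM_PROMPT = (
    "<|SYSTEM|># StableLM Tuned (Alpha version)\n"
    "- StableLM is a helpful and harmless open-source AI language model.\n"
)

def build_prompt(user_message: str) -> str:
    """Wrap a user message in the <|USER|>/<|ASSISTANT|> turn markers."""
    return f"{SYSTEM_PROMPT}<|USER|>{user_message}<|ASSISTANT|>"

prompt = build_prompt("What is a sharded checkpoint?")

# Then generate as usual, e.g.:
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# tokens = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
# print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

The model continues the text after the final `<|ASSISTANT|>` marker, so the decoded output contains the reply.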