BLOOMZ, a version for Petals

This model is a version of bigscience/bloomz post-processed to be run at home using the Petals swarm.

Please check out the Petals repository (https://github.com/bigscience-workshop/petals) for installation instructions and further usage details.

We provide minimal code examples below.

Using the model

from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

MODEL_NAME = "bigscience/bloomz-petals"

# The tokenizer runs locally; it is the standard BLOOM tokenizer from transformers
tokenizer = BloomTokenizerFast.from_pretrained(MODEL_NAME)
model = DistributedBloomForCausalLM.from_pretrained(MODEL_NAME)
# Embeddings & prompts are on your device, BLOOM blocks are distributed across the Internet

inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))  # A cat sat on a mat...

Serving the model blocks

python -m petals.cli.run_server bigscience/bloomz-petals
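If you only want to contribute part of your GPU to the swarm, the server accepts additional options; for example, `--num_blocks` limits how many transformer blocks your machine hosts. This is a sketch, not a definitive invocation — the available flags may differ between Petals versions, so run the command with `--help` to confirm.

```shell
# Serve only 8 transformer blocks instead of as many as fit
# (flag availability may vary across Petals versions; check --help)
python -m petals.cli.run_server bigscience/bloomz-petals --num_blocks 8
```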