## Dataset

This model was trained on the Japanese subset of the mC4 dataset.
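For reference, that data can be streamed with the `datasets` library. This is only a sketch: the `mc4` dataset name and `ja` config below refer to the standard Hugging Face mC4 release, which may not match the exact preprocessing used for training.

```python
from datasets import load_dataset

# Stream the Japanese split of mC4 without downloading it in full
# (assumed to correspond to the training data; preprocessing unknown)
mc4_ja = load_dataset("mc4", "ja", split="train", streaming=True)

print(next(iter(mc4_ja))["text"][:200])
```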

## Training

This model was trained for 3,000 steps on top of the MPT-7B checkpoint `mosaicml/mpt-7b`.
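The full training configuration isn't published here; the following is only a minimal sketch of what continued pretraining from `mosaicml/mpt-7b` might look like with the `transformers` `Trainer`. The base checkpoint and the 3,000-step budget come from this card; everything else (batch size, learning rate, sequence length, the mC4 streaming pipeline) is an assumption.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "mosaicml/mpt-7b"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # MPT's tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)

# Assumed data pipeline: stream Japanese mC4 and tokenize on the fly
raw = load_dataset("mc4", "ja", split="train", streaming=True)
train_ds = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True,
    remove_columns=["text", "timestamp", "url"],
)

args = TrainingArguments(
    output_dir="japanese-mpt-7b",
    max_steps=3000,                  # step budget stated on this card
    per_device_train_batch_size=8,   # placeholder, not published
    learning_rate=1e-5,              # placeholder, not published
    bf16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```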

## How to load

Before running this model, please install the following pip package (the MPT model's remote code depends on it):

```bash
pip install einops
```

To load the model, run the following code:

```python
from transformers import AutoModelForCausalLM

model_name = "lightblue/japanese-mpt-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype='auto',
    trust_remote_code=True
)
```
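Note that the snippet above loads the weights onto the CPU. If your GPU has enough memory, you can instead let the weights be placed automatically across available devices; `device_map="auto"` is standard `transformers` behaviour, not something specific to this model:

```python
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype='auto',
    device_map="auto",      # requires `pip install accelerate`
    trust_remote_code=True
)
```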

To run this model, you may need to load it in a lower precision so that it fits on your GPU. We found that on a T4 GPU (16GB of VRAM), the model needs to be loaded in 8-bit precision. To load the model in 8-bit or 4-bit, install the following pip packages:

```bash
pip install bitsandbytes accelerate
```

**Caution:** you will also need enough system RAM to load the model. We estimate that loading this model requires ~30GB.

<details> <summary><b>Code to load the model in 8-bit</b></summary>

```python
from transformers import AutoModelForCausalLM

model_name = "lightblue/japanese-mpt-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype='auto',
    load_in_8bit=True,
    trust_remote_code=True
)
```

</details><details> <summary><b>Code to load the model in 4-bit</b></summary>

```python
from transformers import AutoModelForCausalLM

model_name = "lightblue/japanese-mpt-7b"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype='auto',
    load_in_4bit=True,
    trust_remote_code=True
)
```

</details>
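Whichever precision you load in, you can sanity-check the resulting memory usage; `get_memory_footprint()` is a standard method on `transformers` models:

```python
# Rough expectation: ~7GB in 8-bit (7B parameters x 1 byte each),
# ~14GB in float16/bfloat16 (2 bytes per parameter)
print(f"Model size: {model.get_memory_footprint() / 1e9:.1f} GB")
```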

<br/>

## How to use

```python
from transformers import AutoTokenizer, pipeline

# `model` and `model_name` are defined in the loading code above
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# A short Japanese dialogue prompt:
# A: Hello / B: Hello / A: What sport do you like? / B: Soccer
# A: What food do you like? / B:
prompt = """A: こんにちは
B: こんにちは
A: 好きなスポーツは何ですか?
B: サッカーです
A: 好きな食べ物は何ですか?
B:"""

pipe(prompt, temperature=0, do_sample=False, return_full_text=False, max_new_tokens=32)
# [{"generated_text": " カレーです\nA: 好きな色は何ですか?\nB: 赤です"}]
# ("Curry" / A: What colour do you like? / B: Red)
```
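The call above uses greedy decoding (`do_sample=False`, so the `temperature` argument has no effect), which makes the output deterministic. For more varied replies you can sample instead; the values below are illustrative, not tuned for this model:

```python
pipe(
    prompt,
    do_sample=True,
    temperature=0.7,   # illustrative, not tuned
    top_p=0.9,
    return_full_text=False,
    max_new_tokens=32,
)
```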