
StarCoder-1b-textbook

StarCoder-1b-textbook is a fine-tuned version of starcoderbase-1b on the code_exercices dataset.

It achieves 27.0 pass@1 on the HumanEval coding benchmark while having only 1B parameters. That is an improvement of almost 12 points over the StarCoderBase-1B baseline, almost doubling the score.

The results on the HumanEval benchmark are on par with much larger open-source models such as StarCoderBase (30.4), StarCoder (33.6), and CodeGen-16B-Mono (29.3), while the model is roughly 15 times smaller.

It still underperforms compared to models like Code Llama (53.0), GPT-4 (82.0), or WizardCoder (73.2), but those models are more than 30 times bigger.
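For context, pass@1 is the fraction of HumanEval problems for which a sampled completion passes the unit tests. The standard unbiased pass@k estimator (from the original HumanEval evaluation setup, not part of this model card) can be sketched as:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k),
    where n completions are sampled per problem and c of them
    pass the unit tests."""
    if n - c < k:
        # Every possible k-subset contains at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k=1 this reduces to the passing fraction c/n:
print(pass_at_k(n=10, c=3, k=1))  # → 0.3
```

Averaging this estimate over all 164 HumanEval problems gives the benchmark score reported above.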

Usage

You can download and use the model like so:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "jinaai/starcoder-1b-textbook", device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained("jinaai/starcoder-1b-textbook")

prompt = '''
def unique(l: list):
    """Return sorted unique elements in a list
    >>> unique([5, 3, 5, 2, 3, 3, 9, 0, 123])
    [0, 2, 3, 5, 9, 123]
    """
'''

inputs = tokenizer(prompt.rstrip(), return_tensors="pt").to("cuda")

generation_output = model.generate(
    **inputs,
    max_new_tokens=128,
    eos_token_id=tokenizer.eos_token_id,
    return_dict_in_generate=True,
)

s = generation_output.sequences[0]
output = tokenizer.decode(s, skip_special_tokens=True)

print(output)

Finetuning details

We did full-parameter fine-tuning on a single NVIDIA A40 for 12 hours, using a batch size of 128 and a micro-batch size of 8.
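A batch size of 128 with a micro-batch size of 8 implies accumulating gradients over 16 micro-batches before each optimizer step. A minimal sketch of that relationship (a hypothetical helper, not taken from the training code):

```python
def accumulation_steps(batch_size: int, micro_batch_size: int) -> int:
    """Number of micro-batches whose gradients are accumulated
    before each optimizer step."""
    assert batch_size % micro_batch_size == 0, (
        "effective batch size must be a multiple of the micro-batch size"
    )
    return batch_size // micro_batch_size

print(accumulation_steps(128, 8))  # → 16
```

This lets the effective batch size of 128 fit on a single GPU that can only hold 8 examples per forward/backward pass.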

To reproduce the training, follow the training instructions in our open-source codebase.

Disclaimer

Credits

This model was trained and released by Jina AI.