# Model Card for SantaFixer

<!-- Provide a quick summary of what the model is/does. -->

This is an LLM for code, focused on generating bug fixes using infilling.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

## How to Get Started with the Model

Use the code below to get started with the model.

```python
# pip install -q transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "lambdasec/santafixer"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, trust_remote_code=True
).to(device)

# Fill-in-the-middle (FIM) prompt: the model generates the code that
# belongs between <fim-prefix> and <fim-suffix>.
input_text = (
    "<fim-prefix>def print_hello_world():\n"
    "<fim-suffix>\n print('Hello world!')"
    "<fim-middle>"
)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
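
The prompt format above can be wrapped in small helpers. This is a minimal sketch, not part of the model's official API: `build_fim_prompt` and `extract_fix` are hypothetical names introduced here for illustration.

```python
# Hypothetical helpers for SantaCoder-style FIM prompting: the code around
# the gap is wrapped in special tokens, and the model's completion after
# <fim-middle> is the proposed fix.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Wrap the code surrounding the gap in FIM special tokens."""
    return f"<fim-prefix>{prefix}<fim-suffix>{suffix}<fim-middle>"

def extract_fix(decoded: str) -> str:
    """Return only the text the model generated after <fim-middle>."""
    return decoded.split("<fim-middle>", 1)[-1]
```

For example, `extract_fix(tokenizer.decode(outputs[0]))` would strip the echoed prompt and keep only the infilled code.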

## Training Details

### Training Data

The model was fine-tuned on the CVE single line fixes dataset.
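
One plausible way (an assumption here, not a description of the actual dataset pipeline) to turn a single-line fix into an infilling training sample is to use the lines around the patched line as prefix/suffix and the fixed line as the target:

```python
# Hypothetical sketch: convert a patched file plus the index of the fixed
# line into a FIM training pair. The model learns to emit the fixed line
# given the surrounding code.

def make_fim_sample(fixed_file: str, fixed_line_no: int) -> dict:
    lines = fixed_file.splitlines(keepends=True)
    prefix = "".join(lines[:fixed_line_no])       # code before the fix
    suffix = "".join(lines[fixed_line_no + 1:])   # code after the fix
    target = lines[fixed_line_no]                 # the fixed line itself
    return {
        "input": f"<fim-prefix>{prefix}<fim-suffix>{suffix}<fim-middle>",
        "target": target,
    }
```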

### Training Procedure

The model was trained with supervised fine-tuning (SFT).

#### Training Hyperparameters

## Evaluation

The model was evaluated on the GitHub top 1000 projects vulnerabilities dataset.
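
The exact benchmark protocol is not specified here; a simple metric one could compute over such a dataset is the exact-match rate between generated fixes and the known patched lines (a sketch under that assumption):

```python
# Hedged sketch of an exact-match evaluation: each predicted fix is
# compared, after whitespace stripping, against the ground-truth fix.

def exact_match_rate(predictions, references):
    """Fraction of predictions that exactly match their reference fix."""
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references) if references else 0.0
```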