# Model Card for SantaFixer
<!-- Provide a quick summary of what the model is/does. -->
This is an LLM for code, focused on generating bug fixes using infilling.
## Model Details

### Model Description
<!-- Provide a longer summary of what this model is. -->
- Developed by: codelion
- Model type: GPT-2
- Finetuned from model: bigcode/santacoder
## How to Get Started with the Model

Use the code below to get started with the model.
```python
# pip install -q transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "lambdasec/santafixer"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, trust_remote_code=True
).to(device)

input_text = (
    "<fim-prefix>def print_hello_world():\n"
    "<fim-suffix>\n    print('Hello world!')\n"
    "<fim-middle>"
)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
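The infilling prompt interleaves the `<fim-prefix>`, `<fim-suffix>`, and `<fim-middle>` sentinel tokens with the code surrounding the gap to fill. A small helper (the function name is illustrative, not part of the model's API) can keep that layout in one place:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble an infilling prompt: the model generates the text
    that belongs between `prefix` and `suffix`."""
    return f"<fim-prefix>{prefix}<fim-suffix>{suffix}<fim-middle>"

prompt = build_fim_prompt(
    "def print_hello_world():\n",
    "\n    print('Hello world!')",
)
```

The returned string can be passed to `tokenizer.encode` exactly like `input_text` above.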
## Training Details
- GPU: Tesla P100
- Time: ~5 hrs
### Training Data

The model was fine-tuned on the CVE single-line fixes dataset.
### Training Procedure

Supervised Fine-Tuning (SFT)

### Training Hyperparameters
- optim: adafactor
- gradient_accumulation_steps: 4
- gradient_checkpointing: true
- fp16: false
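For readers reproducing the setup, the names above correspond to `transformers.TrainingArguments` keywords; a minimal sketch of the relevant arguments (assuming the standard Hugging Face `Trainer` API, with all other settings left at their defaults):

```python
# Hyperparameters from this card, expressed as TrainingArguments keywords,
# e.g. TrainingArguments(output_dir="out", **training_kwargs)
training_kwargs = dict(
    optim="adafactor",              # memory-efficient optimizer
    gradient_accumulation_steps=4,  # simulate a larger batch size
    gradient_checkpointing=True,    # trade compute for lower GPU memory
    fp16=False,                     # full-precision training
)
```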
## Evaluation

The model was evaluated on the GitHub top 1000 projects vulnerabilities dataset.