CodeLlama-7b-Instruct-SQL
- Model creator: Meta (codellama)
- Original model: CodeLlama-7b-Instruct-hf
Description
This repo contains LoRA-finetuned model files for CodeLlama-7b-Instruct-hf, adapted to generate SQL queries from a table schema and a natural-language question.
<!-- prompt-template start -->
Prompt template: CodeLlama
<s>[INST] <<SYS>> {system_msg} <</SYS>> \n {prompt} [/INST]
<!-- prompt-template end -->
<!-- prompt-template start -->
Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by [INST] and [/INST] tokens, and the system message should be surrounded by <<SYS>> and <</SYS>> tokens. The very first instruction should begin with a begin-of-sentence (BOS) token id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence (EOS) token id.
For example:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "machinists/CodeLlama-7b-Instruct-SQL"

# Load the tokenizer and build a text-generation pipeline for the finetuned model.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

# Build the prompt: the table schema goes into the system message,
# the natural-language question becomes the user instruction.
table_schema = "CREATE TABLE head (age INTEGER)"
question = "How many heads of the departments are older than 56 ?"
system_msg = f"<<SYS>> Generate a correct SQL query from the following database schema. \n {table_schema} <</SYS>>"
prompt = f"<s>[INST] {system_msg} \n \n {question} [/INST]"

# Generate the SQL query.
sequences = pipeline(
    prompt,
    max_length=1000,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
<!-- prompt-template end -->
Finetuning
LoRA (Low-Rank Adaptation) is a finetuning technique that freezes the base model weights and trains small low-rank update matrices, reducing computational and memory requirements. Read more: LoRA Hugging Face article. A minimal training sketch is included after the list below.
- Epochs: 5
- Dataset: b-mc2/sql-create-context
- No. of records: 78.6k
- Model loading: bf16
- Finetuning technique: LoRA
- Max sequence length: 1024
- Mixed precision training: tf32
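For reference, the sketch below shows how a run like this could be set up with peft and trl (SFTTrainer, as of trl 0.7.x; newer trl versions move some of these arguments into SFTConfig). Only the epochs, bf16 loading, LoRA, max sequence length, tf32, and the b-mc2/sql-create-context dataset come from this card; the LoRA rank/alpha/dropout, learning rate, batch size, and output path are illustrative assumptions, and the exact training script is not published here.
```python
# Illustrative sketch, not the published training script.
# Values marked "assumed" are not documented in this card.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

base_model = "codellama/CodeLlama-7b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.bfloat16,  # bf16 model loading, as listed above
    device_map="auto",
)

# b-mc2/sql-create-context: ~78.6k records with "question", "context" (schema), "answer" (SQL).
dataset = load_dataset("b-mc2/sql-create-context", split="train")

def to_prompt(example):
    # Wrap each record in the CodeLlama instruction template described above.
    # BOS is added by the tokenizer; EOS is appended so the model learns to stop.
    system_msg = (
        "<<SYS>> Generate a correct SQL query from the following database schema. \n "
        f"{example['context']} <</SYS>>"
    )
    return {"text": f"[INST] {system_msg} \n \n {example['question']} [/INST] {example['answer']} </s>"}

dataset = dataset.map(to_prompt)

peft_config = LoraConfig(  # rank/alpha/dropout are assumed values
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="codellama-7b-instruct-sql-lora",  # assumed
    num_train_epochs=5,             # from the card
    per_device_train_batch_size=4,  # assumed
    learning_rate=2e-4,             # assumed
    bf16=True,
    tf32=True,                      # tf32 mixed-precision matmuls, as listed above
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=1024,            # from the card
    tokenizer=tokenizer,
)
trainer.train()
```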
Hardware and Software
- Training Hardware: 1 x NVIDIA A100 80GB GPU
The Machinists Team
Manish Kumar, Aakash Sarin
README References
https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf
https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-AWQ