Llama-2-70b-hf finetuned on databricks-dolly-15k

Note: This repo contains the base weights already merged with the LoRA adapter. See the qblocks/llama2_70B_dolly15k repo for the LoRA adapters only.
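If you prefer to apply the adapters to the base model yourself rather than download the merged weights, a minimal sketch using the transformers and peft libraries might look like the following. The dtype and device_map settings are illustrative assumptions; a 70B model requires substantial GPU memory or quantization, and the base model is gated on Hugging Face.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the gated base model (assumed settings: fp16, auto device placement).
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf")

# Attach the LoRA weights from the adapters-only repo mentioned above.
model = PeftModel.from_pretrained(base, "qblocks/llama2_70B_dolly15k")

# Optionally fold the adapters into the base weights, which is what
# this repo already ships.
model = model.merge_and_unload()
```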

Finetuning Overview:

Model Used: meta-llama/Llama-2-70b-hf
Dataset: databricks-dolly-15k

Dataset Insights:

The databricks-dolly-15k dataset is a compilation of over 15,000 instruction-following records written by Databricks employees. It was built to:

- offer authentic, human-generated prompts and responses across behavioral categories such as brainstorming, classification, closed and open QA, generation, information extraction, and summarization;
- avoid sourcing content from the web, with the exception of Wikipedia for particular instruction categories;
- avoid the use of generative AI when writing instructions or responses.

Contributors could also rephrase and answer questions posed by their peers, which encouraged accuracy and clarity. Some subsets include Wikipedia-sourced reference passages, which may carry bracketed citation numbers such as [42].
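For reference, the dataset can be loaded and inspected with the Hugging Face datasets library; the dataset id and field names below follow the public dataset card:

```python
from datasets import load_dataset

# Each record has instruction, context (may be empty), response,
# and category fields.
ds = load_dataset("databricks/databricks-dolly-15k", split="train")
print(ds)                       # ~15k rows
example = ds[0]
print(example["instruction"])
print(example["context"])       # empty string for context-free tasks
print(example["response"])
print(example["category"])      # e.g. open_qa, brainstorming, ...
```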

Finetuning Details:

The model was finetuned with MonsterAPI's user-friendly LLM finetuner, using LoRA adapters that were then merged into the base weights (see the note above).

Hyperparameters & Additional Details:


Prompt Structure:

### INSTRUCTION:
[instruction]

[context]

### RESPONSE:
[response]
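A hedged inference sketch using this template follows. The model id is a placeholder for this repo's Hugging Face id, and the build_prompt helper and generation settings are illustrative assumptions; the context block is omitted when empty.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical id — substitute this repo's actual Hugging Face id.
MODEL_ID = "qblocks/llama2_70B_dolly15k_merged"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def build_prompt(instruction: str, context: str = "") -> str:
    """Assemble the prompt in the finetuning format shown above."""
    body = f"### INSTRUCTION:\n{instruction}\n\n"
    if context:
        body += f"{context}\n\n"
    return body + "### RESPONSE:\n"

prompt = build_prompt("Summarize the plot of Hamlet in two sentences.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```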

Loss Metrics:

[Figure: training loss (blue) and validation loss (orange) curves from the finetuning run]


License: Apache 2.0