
Training procedure

We finetuned mistralai/Mistral-7B-v0.1 on the databricks/databricks-dolly-15k dataset for 1 epoch using the MonsterAPI no-code LLM finetuner.

Finetuning with the MonsterAPI no-code LLM Finetuner in 4 easy steps (an equivalent local-training sketch follows the list):

  1. Select an LLM: Mistral 7B v0.1
  2. Select a task and dataset: Instruction Finetuning on the databricks-dolly-15k dataset
  3. Specify hyperparameters: We used the default values suggested by the finetuner
  4. Review and submit the job: That's it!
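
MonsterAPI runs the training loop itself, so no code is needed. For readers who want to reproduce a comparable run locally, here is a minimal sketch of instruction finetuning with LoRA using the Hugging Face transformers, peft, and datasets libraries. The prompt template and LoRA/training hyperparameters below are illustrative assumptions, not MonsterAPI's internal configuration:

```python
# Illustrative sketch only: a LoRA instruction-finetuning setup roughly
# comparable to the MonsterAPI job. The prompt template and hyperparameters
# are assumptions, not MonsterAPI's actual configuration.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default

def format_record(example):
    # Fold the dolly fields (instruction, context, response) into one prompt.
    context = f"\n{example['context']}" if example["context"] else ""
    text = (
        f"### Instruction:\n{example['instruction']}{context}\n\n"
        f"### Response:\n{example['response']}"
    )
    return tokenizer(text, truncation=True, max_length=1024)

dataset = load_dataset("databricks/databricks-dolly-15k", split="train")
tokenized = dataset.map(format_record, remove_columns=dataset.column_names)

model = AutoModelForCausalLM.from_pretrained(model_id)
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),  # assumed values
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="mistral-dolly-lora",
        num_train_epochs=1,            # matches the 1-epoch run above
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,            # assumed; a common LoRA default
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```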

Hyperparameters & Run details:

We used the default hyperparameter values suggested by the MonsterAPI finetuner (see step 3 above).

About Model:

The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. According to the Mistral team's evaluations, Mistral-7B-v0.1 outperforms Llama 2 13B on the majority of benchmarks.
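
For reference, the base model can be loaded with the Hugging Face transformers library; a minimal sketch (half precision and device_map="auto" are assumptions about your hardware, and device_map requires the accelerate package):

```python
# Minimal sketch: load the base Mistral-7B-v0.1 model for text generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # place weights on available devices
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```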

About Dataset:

databricks-dolly-15k is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language models to exhibit the magical interactivity of ChatGPT.
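
The dataset is available on the Hugging Face Hub, and each record carries instruction, context, response, and category fields; a minimal sketch of inspecting it with the datasets library:

```python
# Minimal sketch: load databricks-dolly-15k and inspect one record.
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
print(len(dolly))            # more than 15,000 records
print(dolly[0]["instruction"])
print(dolly[0]["response"])
print(dolly[0]["category"])  # e.g. open_qa, closed_qa, brainstorming, ...
```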

Framework versions