## Model Overview
Model license: Llama-2<br> This model is based on NousResearch/Llama-2-7b-chat-hf and was QLoRA-finetuned on the Photolens/oasst1-langchain-openorca-formatted dataset.<br>
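To try the model locally, here is a minimal loading sketch; the repository id, 4-bit quantization settings, and dtype below are assumptions for illustration, not values stated on this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Photolens/llama-2-7b-langchain-chat"  # assumed hub id; replace with the actual repository

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load weights in 4-bit, mirroring the QLoRA setup
    bnb_4bit_quant_type="nf4",             # assumed quantization type
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```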
### Subjective performance
Subjectively, performance is nearly on par with ChatGPT for langchain applications, both in response format and in clarity when using tools.
## Prompt Template: Llama-2
User: Prompter Message <end_of_turn><br>
Assistant: Assistant Message <end_of_turn><br>
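For illustration, a small helper (hypothetical, not part of this model's code) that assembles a conversation into the template above; the newline between turns and the trailing `Assistant:` prefix are assumptions:

```python
def format_prompt(turns):
    """Assemble (role, message) pairs into the User/Assistant template above."""
    lines = []
    for role, message in turns:
        prefix = "User" if role == "user" else "Assistant"
        lines.append(f"{prefix}: {message} <end_of_turn>")
    # Leave a trailing "Assistant:" so the model continues as the assistant.
    lines.append("Assistant:")
    return "\n".join(lines)

print(format_prompt([("user", "What tools can you call?")]))
# User: What tools can you call? <end_of_turn>
# Assistant:
```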
## Intended Use
The dataset used to finetune the base model is optimized for langchain applications,<br> so this model is intended to be used as a langchain LLM.
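A minimal sketch of wrapping the model for langchain through a transformers `pipeline`; the generation settings are placeholders, `model` and `tokenizer` are the objects loaded above, and the import path may differ across langchain versions:

```python
from transformers import pipeline
from langchain.llms import HuggingFacePipeline  # langchain_community.llms in newer releases

# Wrap the loaded model/tokenizer in a text-generation pipeline for langchain.
generate = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,      # placeholder generation settings
    return_full_text=False,
)

llm = HuggingFacePipeline(pipeline=generate)
print(llm("User: What is LangChain? <end_of_turn>\nAssistant:"))
```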
## Training Details
This model took 2:56:54 to train with QLoRA on a single A100 40 GB GPU, using the hyperparameters listed below (a configuration sketch follows the list).<br>
- epochs: 1
- train batch size: 12
- eval batch size: 12
- gradient accumulation steps: 1
- maximum gradient norm: 0.3
- learning rate: 2e-4
- weight decay: 0.001
- optimizer: paged_adamw_32bit
- learning rate schedule: cosine
- warmup ratio (linear): 0.03
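The training code is not included on this card; the sketch below only maps the hyperparameters above onto `transformers.TrainingArguments` (LoRA-specific settings such as rank and target modules are not listed here and are omitted):

```python
from transformers import TrainingArguments

# Hyperparameters as listed above; anything not listed is left at its default.
training_args = TrainingArguments(
    output_dir="out",                  # placeholder
    num_train_epochs=1,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    gradient_accumulation_steps=1,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
)
```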
## Models in this series
| Model | Train time (h:mm:ss) | Size (in params) | Base Model |
|---|---|---|---|
| llama-2-7b-langchain-chat | 1:14:16 | 7 billion | NousResearch/Llama-2-7b-chat-hf |
| llama-2-13b-langchain-chat | 2:50:27 | 13 billion | TheBloke/Llama-2-13B-Chat-fp16 |
| Photolens/OpenOrcaxOpenChat-2-13b-langchain-chat | 2:56:54 | 13 billion | Open-Orca/OpenOrcaxOpenChat-Preview2-13B |