FP16 model merge of airoboros 70b 1.4.1 (https://huggingface.co/jondurbin/airoboros-l2-70b-gpt4-1.4.1) and limarpv3-llama2-70b-qlora (https://huggingface.co/Doctor-Shotgun/limarpv3-llama2-70b-qlora).
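The exact merge procedure is not documented here. As a minimal sketch, an FP16 LoRA merge of this kind is commonly performed with the peft library along these lines (the output path is hypothetical):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the FP16 base model (a 70B model needs substantial memory).
base = AutoModelForCausalLM.from_pretrained(
    "jondurbin/airoboros-l2-70b-gpt4-1.4.1",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the LoRA adapter, then bake its weights into the base model.
model = PeftModel.from_pretrained(base, "Doctor-Shotgun/limarpv3-llama2-70b-qlora")
model = model.merge_and_unload()

# "airoboros-limarpv3-70b" is a hypothetical output directory name.
model.save_pretrained("airoboros-limarpv3-70b", safe_serialization=True)
```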
Original LoRA card:
limarpv3-llama2-70b-qlora
This model is an unofficial Llama 2 70B training on the LimaRP v3 dataset by lemonilia. It does not include the pretraining stage using stories.
It achieves the following results on the evaluation set:
- Loss: 1.8232
Model description
For more details about LimaRP, see the model page for the previously released v2 version for Llama-2. Most details written there apply for this version as well. Generally speaking, LimaRP is a longform-oriented, novel-style roleplaying chat model intended to replicate the experience of 1-on-1 roleplay on Internet forums. Short-form, IRC/Discord-style RP (aka "Markdown format") is not supported yet. The model does not include instruction tuning, only manually picked and slightly edited RP conversations with persona and scenario data.
Prompt format is the extended Alpaca format:
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
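As a concrete illustration, here is a minimal Python sketch of how a prompt in this format could be assembled programmatically; the function and parameter names are hypothetical and not part of the original card:

```python
def build_prompt(character, user, bot_persona, user_persona, scenario,
                 history, new_message):
    """Assemble an extended-Alpaca prompt.

    history: list of (user_utterance, character_utterance) pairs;
    new_message: the latest user utterance to respond to.
    """
    prompt = (
        "### Instruction:\n"
        f"{character}'s Persona: {bot_persona}\n"
        f"{user}'s Persona: {user_persona}\n"
        f"Scenario: {scenario}\n"
        f"Play the role of {character}. You must engage in a roleplaying "
        f"chat with {user} below this line. Do not write dialogues and "
        f"narration for {user}.\n"
    )
    for user_msg, char_msg in history:
        prompt += f"\n### Input:\n{user}: {user_msg}\n"
        prompt += f"\n### Response:\n{character}: {char_msg}\n"
    # End at the response header so the model continues as the character.
    prompt += f"\n### Input:\n{user}: {new_message}\n"
    prompt += f"\n### Response:\n{character}:"
    return prompt
```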
Inspired by the preset previously named "Roleplay" in SillyTavern, this version of LimaRP makes it possible to append a length modifier to the response instruction sequence, like this:
### Input:
User: {utterance}
### Response: (length = medium)
Character: {utterance}
This has an immediately noticeable effect on bot responses. The lengths used during training are:
micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited.
The recommended starting length is medium. Keep in mind that the AI can ramble or impersonate
the user with very long messages.
The length control effect is reproducible, but the messages will not necessarily follow the requested lengths precisely; rather, they tend to fall within certain ranges on average, as observed in tests made with one reply at the beginning of the conversation.
Response length control also appears to work well deep into the conversation. If the modifier is omitted, the model will choose the most appropriate response length (although it might not necessarily be what the user desires).
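To make the mechanics concrete, here is a hypothetical extension of the build_prompt() sketch above that appends the length modifier to the response header; the function name is illustrative only:

```python
# Hypothetical helper: append a length modifier to the response header.
# Valid labels are the ones listed in the card (micro ... unlimited).
def response_header(character, length=None):
    if length is None:
        # No modifier: the model picks what it deems an appropriate length.
        return f"### Response:\n{character}:"
    return f"### Response: (length = {length})\n{character}:"

print(response_header("Character", "medium"))
# ### Response: (length = medium)
# Character:
```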
Intended uses & limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, in addition to those exhibited by the base model.
Training and evaluation data
For more details about LimaRP, see the model page for the previously released v2 version for Llama-2.
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a rough mapping to Hugging Face TrainingArguments is sketched after this list):
- learning_rate: 0.00015
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
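The card does not specify the training harness used, so treat the following as an assumption-laden illustration of how these settings would map onto Hugging Face TrainingArguments:

```python
from transformers import TrainingArguments

# Sketch only: reproduces the listed hyperparameters; the real QLoRA run's
# full configuration (adapter settings, dataset handling) is not given here.
args = TrainingArguments(
    output_dir="limarpv3-llama2-70b-qlora",
    learning_rate=1.5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,   # 2 x 4 = total train batch size of 8
    optim="adamw_torch",             # Adam with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=2,
)
```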
Training results
Training Loss | Epoch | Step | Validation Loss |
---|---|---|---|
1.8482 | 0.09 | 20 | 1.8569 |
1.6823 | 0.18 | 40 | 1.8400 |
1.779 | 0.27 | 60 | 1.8329 |
1.7776 | 0.36 | 80 | 1.8287 |
1.7773 | 0.45 | 100 | 1.8280 |
1.7328 | 0.53 | 120 | 1.8273 |
1.7349 | 0.62 | 140 | 1.8243 |
1.7789 | 0.71 | 160 | 1.8228 |
1.8113 | 0.8 | 180 | 1.8215 |
1.7 | 0.89 | 200 | 1.8203 |
1.7279 | 0.98 | 220 | 1.8201 |
1.7605 | 1.07 | 240 | 1.8225 |
1.7492 | 1.16 | 260 | 1.8245 |
1.7823 | 1.25 | 280 | 1.8235 |
1.6247 | 1.34 | 300 | 1.8247 |
1.6858 | 1.43 | 320 | 1.8246 |
1.6561 | 1.51 | 340 | 1.8240 |
1.7093 | 1.6 | 360 | 1.8240 |
1.6844 | 1.69 | 380 | 1.8235 |
1.6608 | 1.78 | 400 | 1.8233 |
1.7686 | 1.87 | 420 | 1.8233 |
1.7189 | 1.96 | 440 | 1.8232 |
Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
Original model card
Overview
Llama 2 70b fine-tune using https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1
See the previous llama 65b model card for info: https://hf.co/jondurbin/airoboros-65b-gpt4-1.4
Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
License and usage restrictions
Base model has a custom Meta license:
- See the meta-license/LICENSE.txt file attached for the original license provided by Meta.
- See also meta-license/USE_POLICY.md and meta-license/Responsible-Use-Guide.pdf, also provided by Meta.
The fine-tuning data was generated by OpenAI API calls to gpt-4, via airoboros.
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that competes with OpenAI:
- what does compete actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissively licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct, released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact that you must comply with the original Meta license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.