not-for-all-audiences nsfw

Exl2 quantization of Undi95/PsyMedRP-v1-13B.
Calibration dataset: wikitext.
Quantized by IHaBiS.

Command used:

```
python convert.py -i models/Undi95_PsyMedRP-v1-13B -o Undi95_PsyMedRP-v1-13B-temp2 -cf Undi95_PsyMedRP-v1-13B-6bpw-h8-exl2 -c 0000.parquet -l 4096 -b 6 -hb 8 -ss 4096 -m Undi95_PsyMedRP-v1-13B-temp/measurement.json
```

Below is the original model card:

PsyMedRP-v1-13B-p1:
[jondurbin/airoboros-l2-13b-3.0](0.85) x [ehartford/Samantha-1.11-13b](0.15)

PsyMedRP-v1-13B-p2:
[Xwin-LM/Xwin-LM-13B-V0.1](0.85) x [chaoyi-wu/MedLLaMA_13B](0.15)

PsyMedRP-v1-13B-p3:
[PsyMedRP-v1-13B-p1](0.55) x [PsyMedRP-v1-13B-p2](0.45)

PsyMedRP-v1-13B-p4:
[The-Face-Of-Goonery/Huginn-13b-FP16 merge with Gryphe gradient with PsyMedRP-v1-13B-p3]

PsyMedRP-v1-13B:
Apply Undi95/LimaRP-v3-120-Days at 0.3 weight to PsyMedRP-v1-13B-p4
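The weighted merges above (e.g. `[model A](0.85) x [model B](0.15)`) conventionally denote a linear combination of the two models' weight tensors. A minimal sketch of that idea, using plain Python lists of floats in place of real tensors (the `linear_merge` helper and toy values are illustrative, not the actual tooling used to build this model):

```python
def linear_merge(state_a, state_b, weight_a, weight_b):
    """Linearly combine two state dicts: out = weight_a * a + weight_b * b.

    Plain lists of floats stand in for weight tensors here; a real merge
    would operate on torch tensors of matching shapes."""
    merged = {}
    for name in state_a:
        a, b = state_a[name], state_b[name]
        merged[name] = [weight_a * x + weight_b * y for x, y in zip(a, b)]
    return merged

# Toy example mimicking p1 = airoboros(0.85) x Samantha(0.15)
airoboros = {"layer0.weight": [1.0, 2.0]}
samantha = {"layer0.weight": [3.0, 4.0]}
p1 = linear_merge(airoboros, samantha, 0.85, 0.15)
# p1["layer0.weight"] == [1.3, 2.3]
```

The same operation, applied tensor-by-tensor with the weights listed above, yields p1 through p3; p4 and the final LimaRP step use gradient/weighted merges that vary the ratio across layers rather than a single global weight.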

In testing. 20B will follow!

If you want to support me, you can here.