Pythia-6.9b supervised fine-tuned on the Anthropic-hh-rlhf dataset for 1 epoch (sft-model), then trained with DPO (paper) on the same dataset for 1 epoch.
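For reference, DPO optimizes a preference loss over the log-probability margins of the policy versus the frozen SFT reference model. The sketch below shows the per-example objective only; the function name, the β value, and the dummy log-probabilities are illustrative assumptions, not taken from this repo's training configuration.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * margin difference).

    Each argument is the summed token log-probability of the chosen or
    rejected completion under the policy or the frozen reference model.
    beta=0.1 is a common choice, not necessarily the one used here.
    """
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_logratio - rejected_logratio)
    # -log(sigmoid(logits)); small when the policy widens the
    # chosen-vs-rejected margin relative to the reference.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

When policy and reference agree exactly, the loss is log 2; it falls below that as the policy increases its preference for the chosen completion relative to the reference.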
Benchmark evaluations included in this repo were done using lm-evaluation-harness.
See Pythia-6.9b (paper) for original model details.