Paper | GitHub | Dataset | Model

As part of our research efforts to make LLMs safer, we created Starling. It is obtained by fine-tuning Vicuna-7B on HarmfulQA, a ChatGPT-distilled dataset that we collected using the Chain of Utterances (CoU) prompt. More details are in our paper, *Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment*.
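
Since Starling is a fine-tuned Vicuna-7B checkpoint, it should load with the standard `transformers` API. Below is a minimal usage sketch; the Hub repository ID `declare-lab/starling-7B` and the Vicuna-style prompt template are assumptions, so check the Model link above for the exact repository.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "declare-lab/starling-7B"  # assumed Hub repo ID -- see the Model link above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Vicuna-style prompt template (assumed, since Starling is fine-tuned from Vicuna-7B).
prompt = "USER: What safety precautions should I take when hiking alone? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```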

<img src="https://declare-lab.net/assets/images/logos/starling-final.png" alt="Image" width="100" height="100">

Experimental results on several safety benchmark datasets indicate that Starling is a safer model compared to the baseline model, Vicuna.

<img src="https://declare-lab.net/assets/images/logos/method.png" alt="Image" width="1000" height="335">

<h2>Experimental Results</h2>

Compared to Vicuna, Starling achieves:

- An average 5.2% reduction in Attack Success Rate (ASR) on DangerousQA and HarmfulQA across three different prompts.
- An average 3-7% improvement in HHH score measured on the BBH-HHH benchmark.

<img src="https://declare-lab.net/assets/images/logos/starling-results.png" alt="Image" width="1000" height="335">

| Benchmark | Starling | Vicuna |
|---|---|---|
| TruthfulQA (MC2) | 48.90 | 47.00 |
| MMLU (5-shot) | 46.69 | 47.18 |
| BBH (3-shot) | 33.47 | 33.05 |

<h2>Jailbreak Prompt for Harmfulness Evaluation using Red-Eval (as reported in the paper)</h2>

This jailbreak prompt (termed the Chain of Utterances (CoU) prompt in the paper) achieves a 65% Attack Success Rate (ASR) on GPT-4 and 72% on ChatGPT.

<img src="https://declare-lab.net/assets/images/logos/jailbreakprompt_main_paper.png" alt="Image" width="1000" height="1000">
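
For context, ASR here is simply the fraction of red-teaming prompts for which the target model returns a harmful (non-refusing) response. A minimal bookkeeping sketch follows, where `is_harmful` stands in for a hypothetical harmfulness judge (e.g., an LLM-based classifier); it is not part of any released API.

```python
def attack_success_rate(responses, is_harmful):
    """Fraction of model responses judged harmful.

    `is_harmful` is a hypothetical judge callable (e.g., an LLM-based
    harmfulness classifier) that returns True when the model complied
    with the harmful request instead of refusing.
    """
    harmful = sum(1 for response in responses if is_harmful(response))
    return harmful / len(responses)
```

Under this definition, a 65% ASR on GPT-4 means the CoU prompt elicited a harmful completion for roughly two out of every three red-teaming questions.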

<h2>HarmfulQA Data Collection</h2>

We also release our HarmfulQA dataset of 1,960 harmful questions (covering 10 topics, each with 10 subtopics) for red-teaming, along with the conversations based on them that were used for model safety alignment; more details are available here. The following figure describes the data collection process.

<img src="https://declare-lab.net/assets/images/logos/data_gen.png" alt="Image" width="1000" height="1000">
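
A minimal sketch for loading the released questions with the `datasets` library; the repository ID `declare-lab/HarmfulQA`, the split name, and the record schema are assumptions, so consult the Dataset link above for the actual layout.

```python
# Loading sketch; the repo ID, split name, and fields are assumptions --
# consult the Dataset link above for the actual schema.
from datasets import load_dataset

harmful_qa = load_dataset("declare-lab/HarmfulQA", split="train")
print(len(harmful_qa))   # expected: 1,960 harmful questions
print(harmful_qa[0])     # inspect a single record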

Note: This model is referred to as Starling (Blue) in the paper. We will soon release Starling (Blue-Red), which was trained on harmful data using an objective function that helps the model learn from the red (harmful) response data.

<h2>Citation</h2>

```bibtex
@misc{bhardwaj2023redteaming,
      title={Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment},
      author={Rishabh Bhardwaj and Soujanya Poria},
      year={2023},
      eprint={2308.09662},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```