Llama-2-7b fine-tuned on the SHP (Stanford Human Preferences) dataset using the TRL library. This project studies how different data splits affect model performance and safety. By experimenting with diverse datasets and fine-tuning techniques, we aim to better understand how training data shapes the safety and helpfulness of LLMs. We hope these findings contribute to safer, more useful AI models that align more closely with human values.
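
Below is a minimal sketch of how such a fine-tuning run could be set up with TRL, assuming a DPO-style preference-tuning recipe; the column mapping, hyperparameters, and output directory name are illustrative assumptions, not this project's exact configuration.

```python
# Illustrative sketch (not the authors' exact recipe): preference fine-tuning
# of Llama-2-7b on SHP with TRL's DPOTrainer. Hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama-2 has no pad token by default

# SHP stores two human responses per post plus a preference label
# (labels == 1 means response A is preferred); map each example into
# the prompt/chosen/rejected format that DPOTrainer expects.
def to_preference_format(example):
    preferred_a = example["labels"] == 1
    return {
        "prompt": example["history"],
        "chosen": example["human_ref_A"] if preferred_a else example["human_ref_B"],
        "rejected": example["human_ref_B"] if preferred_a else example["human_ref_A"],
    }

dataset = load_dataset("stanfordnlp/SHP", split="train").map(to_preference_format)

training_args = DPOConfig(
    output_dir="llama2-7b-shp-dpo",  # hypothetical output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```

Studying data splits would then amount to varying the `split` argument (or filtering SHP by domain) and comparing the resulting checkpoints on helpfulness and safety evaluations.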