I used Google Colab to quantize and test the Llama 2 7B model. This should help anyone who wants to run Llama 2 7B on a low-end computer. A GPU is still recommended...
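
For reference, here is a minimal sketch of one common way to load Llama 2 7B in 4-bit inside a Colab notebook, using `transformers` with `bitsandbytes`. This is an assumption about the general approach, not the exact steps from my notebook; the `model_id` and config values are just illustrative defaults.

```python
# Minimal 4-bit quantized load of Llama 2 7B (sketch; requires transformers,
# accelerate, and bitsandbytes installed, plus access to the gated model repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # assumed model repo; swap in your own copy if needed

# NF4 quantization with fp16 compute keeps VRAM usage low enough for a free Colab GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # places layers on the GPU (or CPU if none is available, but that will be slow)
)

# Quick smoke test: generate a short completion to confirm the model loaded.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With 4-bit weights the 7B model fits in roughly 4-5 GB of VRAM, which is why a GPU (even a modest one) is still recommended over CPU-only inference.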