llama2-22b-blocktriangular

A second model merge by chargoddard. A GGML conversion of the previous merge can be found here.<br> I have no idea what I'm doing, so if something doesn't work as it should, or doesn't work at all, that's likely on me, not the models themselves.<br><br> The description from the original repo is copied below.

<i> Similar to llama2-22b, but with BLOCK_DIAGONAL=false in the merge and twice the fine-tuning tokens.

Again, not intended for direct use - meant as a base for further tuning and merging.</i>
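Since the merge is intended as a base for further tuning rather than direct use, here is a minimal sketch of loading it with the Hugging Face transformers API. The repo id below points at the original merge by chargoddard and is an assumption; substitute this repo's id if loading from here.

```python
# Minimal sketch: load the merged model as a base for further fine-tuning.
# Assumes the standard transformers API; `device_map="auto"` additionally
# requires the `accelerate` package.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "chargoddard/llama2-22b-blocktriangular"  # assumption: original merge repo

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # shard across available devices
)
```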