What?

It's the model from https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g, converted for use in the latest llama.cpp release.

Why?

Update: They made yet another breaking change of the same nature here, so I repeated the same procedure and reuploaded the result.

Starting with this PR, the llama.cpp team made a breaking change, so all GGML versions of models created prior to it are no longer supported. I re-did the conversion from the original non-GGML model using the latest conversion scripts and posted the result here.
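For reference, the re-conversion roughly follows the standard llama.cpp workflow. This is a sketch only: the exact script name and flags vary between llama.cpp revisions, and the model path is a placeholder.

```shell
# Build the current llama.cpp release (script names change between revisions;
# check the repo's README for the one matching your checkout).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Convert the original (non-GGML) weights into the current GGML format.
# "/path/to/original-model" is a placeholder for the downloaded HF model dir.
python3 convert.py /path/to/original-model --outfile ggml-model-f16.bin

# Optionally quantize to 4-bit for smaller files and faster CPU inference.
./quantize ggml-model-f16.bin ggml-model-q4_0.bin q4_0
```

Re-running these steps against the original weights is what produces a file compatible with the post-breaking-change format.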

How?