The models have been hacked together because their base weights share a similar architecture. For now, though, the Pythia inference code only produces gibberish, and the MPT-based inference code throws errors that stop it from running at all.

I'm currently trying to adapt the "MPT-7b Storywriter 65k" inference code to work with this new model merge (a rough sketch of that loading path is below). If anyone tries their hand at it, I'd appreciate tips.
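
For reference, this is a minimal sketch of the kind of MPT-style loading path I mean, using plain `transformers` with `trust_remote_code=True` as MPT checkpoints require. The repo name is a placeholder, not the actual merge, and as noted above this currently errors out rather than generating text.

```python
# Minimal sketch of an MPT-style inference path (placeholder repo name, not functional yet).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-username/merged-model"  # placeholder, swap in the real checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # MPT checkpoints ship custom modeling code
)
model.eval()

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```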

This model is not functional as is.