---
license: other
---

Quantized: https://huggingface.co/Sao10K/Euryale-L2-70B

With: https://github.com/turboderp/exllamav2
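For reference, loading the quantized weights for inference looks roughly like the sketch below. It follows the Python API shown in the exllamav2 repository's own examples; the model directory path is a placeholder.

```python
# Minimal ExLlamaV2 inference sketch (API per the repository's examples).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/Euryale-L2-70B-exl2"  # placeholder: wherever you downloaded this repo
config.prepare()
config.max_seq_len = 4096  # the context length this quant was tested at

model = ExLlamaV2(config)
model.load()

tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Once upon a time,", settings, 200))
```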

Fits into 24 GB of VRAM with 4096 context. Unfortunately, it seems to have been dumbed down a bit too much by the compression. The files also include measurement.json, which can be used to speed up the quantization process for other BPW sizes.
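For example, to produce a variant at a different BPW without redoing the measurement pass, convert.py from the exllamav2 repository can be pointed at the bundled measurement.json via its -m flag. A sketch, assuming the convert.py interface at the time of writing; all paths and the 3.0 bpw target are placeholders:

```python
import subprocess

# Re-quantize to a different BPW, reusing the bundled measurement.json to
# skip the slow measurement pass. Run from an exllamav2 checkout.
subprocess.run([
    "python", "convert.py",
    "-i", "/models/Euryale-L2-70B",          # placeholder: original fp16 weights
    "-o", "/tmp/exl2-work",                  # placeholder: scratch directory for the job
    "-cf", "/models/Euryale-L2-70B-3.0bpw",  # placeholder: where the finished quant goes
    "-b", "3.0",                             # target bits per weight
    "-m", "measurement.json",                # reuse the measurement from this repo
], check=True)
```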

The measurement was done with default parameters, using https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-103-raw-v1/test as the calibration dataset.
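To reproduce the measurement itself, convert.py can be run with the test-split parquet file as the calibration data and -om to stop after the measurement pass. Again a sketch under the same assumptions about the convert.py interface; paths are placeholders:

```python
import subprocess

# Measurement pass only: calibrate on the wikitext-103-raw-v1 test parquet
# and write measurement.json without producing a quantized model.
# Run from an exllamav2 checkout.
subprocess.run([
    "python", "convert.py",
    "-i", "/models/Euryale-L2-70B",      # placeholder: original fp16 weights
    "-o", "/tmp/exl2-work",              # placeholder: scratch directory for the job
    "-c", "/data/wikitext-test.parquet", # placeholder: downloaded wikitext test split
    "-om", "measurement.json",           # save the measurement and stop
], check=True)
```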

