This is a rough first attempt at quantizing to 4-bit with group size 128, calibrated on the Alpaca dataset (formatted as Orca-style prompts).

python quantize_alpaca.py --pretrained_model_dir orca_mini_3b/ --bits 4 --group_size 128 --quantized_model_dir orca_mini_3b_gptq/ --save_and_reload

Download the cleaned Alpaca dataset first: https://github.com/gururise/AlpacaDataCleaned
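For reference, a minimal sketch of turning one cleaned-Alpaca record into an Orca-style prompt string for calibration. The exact template (the "### System:" / "### User:" section markers and the system message text) is an assumption here; match it to whatever quantize_alpaca.py actually uses.

```python
# Hypothetical helper: map an Alpaca record {"instruction", "input", "output"}
# to an Orca-style prompt. Template wording is an assumption, not the script's
# canonical format.

def to_orca_prompt(example: dict) -> str:
    system = ("You are an AI assistant that follows instruction "
              "extremely well. Help as much as you can.")
    parts = [f"### System:\n{system}", f"### User:\n{example['instruction']}"]
    # Alpaca records may carry an optional "input" field with extra context.
    if example.get("input"):
        parts.append(f"### Input:\n{example['input']}")
    parts.append(f"### Response:\n{example['output']}")
    return "\n\n".join(parts)

sample = {
    "instruction": "Name three primary colors.",
    "input": "",
    "output": "Red, yellow, and blue.",
}
print(to_orca_prompt(sample))
```

Records produced this way can be tokenized and passed to the quantizer as its calibration examples.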