This is the Chinese-Llama-2-7b f16 GGML model for use with llama.cpp. You can run it with:

./main -m Chinese-Llama-2-7b-f16-ggml.bin -p 'hello world'
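For more control over generation, the llama.cpp main binary also accepts common runtime and sampling options, for example -n (number of tokens to predict), -t (CPU threads), -c (context size), and --temp (sampling temperature). The values below are illustrative; check ./main --help for the exact flags supported by your build:

./main -m Chinese-Llama-2-7b-f16-ggml.bin -p 'hello world' -n 256 -t 8 -c 2048 --temp 0.7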

For the original model, see: https://huggingface.co/LinkSoul/Chinese-Llama-2-7b