KoLLaVA: Korean Large Language and Vision Assistant (feat. LLaVA)

This model is a large multimodal model (LMM) that combines an LLM (LLaMA-2-7b-ko) with the visual encoder of CLIP (ViT-14), trained on a Korean visual-instruction dataset using QLoRA.

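The exact training setup is in the repository linked below; purely as an illustration of the QLoRA portion, the sketch here shows how a 4-bit quantized base LLM can be wrapped with LoRA adapters using Hugging Face transformers, peft, and bitsandbytes. The model ID and LoRA hyperparameters are placeholders, not the values actually used for KoLLaVA.

```python
# Minimal QLoRA-style setup sketch (assumes transformers, peft, bitsandbytes installed).
# Model ID and hyperparameters are illustrative placeholders, not KoLLaVA's settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA: load the frozen base model with 4-bit NF4 quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model_id = "beomi/llama-2-ko-7b"  # placeholder Korean LLaMA-2 base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Train small LoRA adapters on top of the quantized, frozen base weights
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter parameters are trainable
```
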
Detailed code is available in the KoLLaVA GitHub repository.

Model License: cc-by-nc-4.0