GORANI 10k

<br>

The project is currently in progress. Please refrain from using the weights and datasets.

KORANI is derived from GORANI, a llama2-based project that experiments with distributing appropriate datasets to transfer or distill knowledge from English datasets. The official name, Grid Of Ranvier Node In llama2 (GORANI), comes from the biological term node of Ranvier, and the project aims to find the optimal dataset for transferring knowledge into various languages and specific domains. Because of strict licensing issues with the English datasets, GORANI is intended for research purposes only. Building on the experimental results of GORANI, I am therefore refining and training a commercially usable Korean dataset on top of llama2; this project is named KORANI (Korean GORANI).

Status: Fixed weights for experimentation.

| Schedule | Task | Description | Status |
|----------|------|-------------|--------|
| 23-10-07 | EXP1 | Completed training of the 5k 13b weight (REV 01) | Done |
| 23-10-07 | EXP1 | Completed training of the 10k 13b weight (REV 02) | Done |
| 23-10-07 | EXP1 | Submitted model weights | Done |
| 23-10-09 | Q.C | | Done |
| 23-10-10 | EXP2 | Training of the 5k 13b weight | Done |
| 23-10-12 | Q.C | | Done |
| 23-10-26 | EXP2 | Training of the 10k 13b weight | Done |
| 23-10-26 | Q.A | | In progress |
| 23-10-26 | | Submitted to the Open LLM Leaderboard | Done |
| 23-10- | | Release of the official model weight | |

GORANI 10k

Template

I use llama2-13b with LFM, without a default system message. When a dataset specifies a system message, that content is used instead.

### System:
{System}

### User:
{New_User_Input}

### Input:
{New_User_Input}

### Response:
{New_Assistant_Answer}
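For concreteness, here is a minimal sketch of how a prompt could be assembled in this format. The `build_prompt` helper is hypothetical, and treating the `### Input` block as optional context is my assumption; the `### Response:` header is left open for the model to complete.

```python
def build_prompt(user_input: str, system: str = "", context: str = "") -> str:
    """Assemble a prompt following the template above.

    Hypothetical helper: `system` is omitted unless the dataset specifies one
    (the project default), and `context` fills the optional `### Input` block.
    """
    parts = []
    if system:
        parts.append(f"### System:\n{system}\n")
    parts.append(f"### User:\n{user_input}\n")
    if context:
        parts.append(f"### Input:\n{context}\n")
    parts.append("### Response:\n")
    return "\n".join(parts)

print(build_prompt("Translate 'hello' into Korean."))
```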

Caution

The model weights and dataset have not yet been properly curated, and their use is strictly prohibited under any license. The developers assume no responsibility, implicit or explicit, in relation to them.

Updates

<details> <summary>How to load adapter model weights.</summary>

Load the adapter model:

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, LlamaTokenizer, BitsAndBytesConfig

base_model_name = "meta-llama/Llama-2-13b-hf"
adapter_model_name = "danielpark/gorani-10k-llama2-13b-instruct"
device_map = {"": 0}  # Using a single GPU

# 4-bit quantization settings consumed by BitsAndBytesConfig in the loader below
# (assumed QLoRA-style defaults; adjust to your setup).
use_4bit = True
bnb_4bit_quant_type = "nf4"
compute_dtype = torch.bfloat16
use_nested_quant = False

revision = None  # Pin a specific branch or commit of the adapter repo if needed

def load_pretrained_model_from_adapter(base_model_name: str, adapter_model_name: str, device_map: dict) -> tuple:
    """
    Load a pretrained model with an adapter from Hugging Face Transformers.
    Args:
        base_model_name (str): The base model name or path.
        adapter_model_name (str): The name or path of the adapter base model.
        device_map (dict): A dictionary specifying the device for model components.
    Returns:
        tuple: A tuple containing the pretrained model, tokenizer, and stop token IDs.
    Raises:
        Exception: If there is an issue loading the adapter model.
    Example:
        base_model_name = "meta-llama/Llama-2-13b-hf"
        adapter_model_name = "danielpark/gorani-10k-llama2-13b-instruct"
        device_map = {"": 0}  # Using single GPU
        loaded_model, tokenizer, stop_token_ids = load_pretrained_model_from_adapter(
            base_model_name, adapter_model_name, device_map
        )
    """
    quantization_config = BitsAndBytesConfig(
        load_in_4bit=use_4bit,
        bnb_4bit_quant_type=bnb_4bit_quant_type,
        bnb_4bit_compute_dtype=compute_dtype,
        bnb_4bit_use_double_quant=use_nested_quant,
    )
    try:
        # Load the quantized base model; the PEFT adapter is attached on top of it.
        base_model = AutoModelForCausalLM.from_pretrained(
            base_model_name,
            quantization_config=quantization_config,
            torch_dtype=torch.bfloat16,
            device_map=device_map,
        )
    except Exception as e:
        print(f"Failed to load the base model:\n{e}")
        raise
    pretrained_model = PeftModel.from_pretrained(base_model, adapter_model_name, revision=revision)
    tok = LlamaTokenizer.from_pretrained(base_model_name)
    tok.bos_token_id = 1  # Ensure the BOS token id matches the llama2 vocabulary
    stop_token_ids = [0]  # Token ids at which generation should stop
    print(f"{adapter_model_name} model is successfully loaded.")
    return pretrained_model, tok, stop_token_ids

loaded_model, tokenizer, stop_token_ids = load_pretrained_model_from_adapter(base_model_name, adapter_model_name, device_map)

</details>
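Once loaded, generation works as with any transformers causal LM. The snippet below is a minimal sketch: it reuses `stop_token_ids` from the loader and the hypothetical `build_prompt` helper from the template section, and the sampling settings are illustrative defaults rather than project-recommended values.

```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnTokens(StoppingCriteria):
    """Stop generation as soon as the last emitted token is a stop token."""
    def __init__(self, stop_token_ids):
        super().__init__()
        self.stop_token_ids = stop_token_ids

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        return input_ids[0, -1].item() in self.stop_token_ids

prompt = build_prompt("Translate 'hello' into Korean.")
inputs = tokenizer(prompt, return_tensors="pt").to(loaded_model.device)

with torch.no_grad():
    output_ids = loaded_model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        stopping_criteria=StoppingCriteriaList([StopOnTokens(stop_token_ids)]),
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```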