Pythia 12B SFT
<!-- Provide a quick summary of what the model is/does. -->
Pythia 12B (deduped) fine-tuned with supervised fine-tuning (SFT) by Open Assistant contributors on the 2023-02-10 Open Assistant conversation dump together with a mix of question-answering, summarization, translation, and dialogue datasets (see Training Details below).
Model Details
Model Description
- Developed by: Open Assistant
- Model type: Pythia
- Language(s) (NLP): English
- License: Apache-2.0
Model Sources [optional]
<!-- Provide the basic links for the model. -->
- Repository: Open Assistant (https://github.com/LAION-AI/Open-Assistant)
Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> The model can be used directly for dialogue-style text generation; see the code example under "How to Get Started with the Model" below.
Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
How to Get Started with the Model
Use the code below to get started with the model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "theblackcat102/pythia-12b-deduped-sft"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# fp16 weights alone take roughly 24 GB of GPU memory
model = AutoModelForCausalLM.from_pretrained(model_name).half().eval().cuda()

# Prompts wrap the user turn in the <human>/<bot> markers used during SFT
input_text = "<human>What's the earth population?<bot>"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=100,   # example sampling settings; adjust as needed
    do_sample=True,
    top_k=40,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # as set in dialogue_collator.py
)
print(tokenizer.decode(outputs[0]))
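The prompt above wraps a single user turn in <human> and <bot> markers. For multi-turn use, a prompt can presumably be built by alternating those markers over the dialogue history; the helper below is a minimal sketch inferred from the single-turn format, not taken from the Open Assistant collator code.

# Hypothetical helper: alternates the <human>/<bot> markers over a dialogue history.
# The multi-turn layout is an assumption inferred from the single-turn example above.
def build_prompt(turns):
    prompt = ""
    for human_msg, bot_msg in turns:
        prompt += f"<human>{human_msg}<bot>"
        if bot_msg is not None:
            prompt += bot_msg
    return prompt

prompt = build_prompt([
    ("What's the earth population?", "Roughly 8 billion people."),
    ("Which continent has the most people?", None),  # leave the final turn open for the model
])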
Training Details
Training Data
Training data includes the 2023-02-10 Open Assistant unfiltered conversation tree dump, along with the datasets listed under Training Hyperparameters below.
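The dump stores each conversation as a message tree. A common way to turn such trees into SFT examples is to serialize every root-to-leaf path with the same <human>/<bot> markers used at inference time; the sketch below only illustrates that idea, uses hypothetical field names, and is not the project's actual preprocessing code.

# Hedged sketch: flatten a conversation tree into root-to-leaf dialogue strings.
# Field names ("role", "text", "replies") are hypothetical, not the dump's real schema.
def flatten_tree(node, prefix=""):
    marker = "<human>" if node["role"] == "prompter" else "<bot>"
    text = prefix + marker + node["text"]
    if not node["replies"]:          # leaf: one complete dialogue path
        return [text]
    examples = []
    for child in node["replies"]:    # branch: one example per downstream path
        examples.extend(flatten_tree(child, text))
    return examples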
Training Procedure
deepspeed trainer_sft.py --configs defaults pythia-80 --deepspeed
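The --deepspeed flag points the trainer at a DeepSpeed configuration; this run used ZeRO stage 2 (see Training Hyperparameters below). The exact DeepSpeed file is not reproduced in this card, so the dict below is only an illustrative sketch whose values mirror the pythia-80 hyperparameters.

# Illustrative only: a ZeRO stage 2 DeepSpeed config expressed as a Python dict.
# Values mirror the hyperparameters below (fp16, micro-batch 6, grad accumulation 20,
# grad clipping 2.0); the config actually used for this run may differ.
ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
    "train_micro_batch_size_per_gpu": 6,
    "gradient_accumulation_steps": 20,
    "gradient_clipping": 2.0,
}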
Preprocessing [optional]
[More Information Needed]
Training Hyperparameters
Training used DeepSpeed ZeRO stage 2. The full configuration is as follows:
defaults:
  learning_rate: 1e-5
  gradient_checkpointing: false
  gradient_accumulation_steps: 32
  per_device_train_batch_size: 2
  per_device_eval_batch_size: 2
  weight_decay: 0.00
  warmup_steps: 600
  eval_steps: 250
  save_steps: 250
  max_length: 512
  num_train_epochs: 2
  logging_steps: 10
  max_grad_norm: 2.0
  save_total_limit: 4
  fp16: true
  eval_accumulation_steps:
  freeze_layer:
  datasets:
    - gsm8k_hard
    - webgpt
    - squad_v2
    - adversarial_qa
    - private_tuning
    - oa_translated
    - prosocial_dialogue
    - math_qa
    - wikihow
    - joke
    - gsm8k
    - ted_trans_en-hi
    - ted_trans_de-ja
    - ted_trans_nl-en
    - ted_trans_en-ja
    - ted_trans_en-es
    - ted_trans_en-ms
    - xsum:
        fraction: 0.5
    - cnn_dailymail:
        fraction: 0.5
    - multi_news:
        fraction: 0.5
    - tldr_news:
        fraction: 0.5
    - scitldr:
        fraction: 0.5
    - samsum:
        fraction: 0.5
    - debate_sum:
        fraction: 0.5
    - billsum:
        fraction: 0.5
    - wmt2019_zh-en:
        fraction: 0.9
    - wmt2019_ru-en:
        fraction: 0.9
    - wmt2019_de-en:
        fraction: 0.9
    - wmt2019_fr-de:
        fraction: 0.9
    - essay_instruction
    - reddit_eli5
    - reddit_askh
    - reddit_asks
  loss_fn: CrossEntropyLoss
  log_dir: "base"
  quantization: false
  seq2seqmodel: false
  poly_eps: 1.0
  fuse_gelu: true
  log_wandb: true
  samples_mixing: true # uses a collator that mixes samples in the batch into a single sample, possibly containing multiple tasks
  verbose: false
pythia-80:
  learning_rate: 5e-6
  model_name: EleutherAI/pythia-12b-deduped
  weight_decay: 0.01
  max_length: 520
  warmup_steps: 1000
  gradient_checkpointing: false
  gradient_accumulation_steps: 20
  per_device_train_batch_size: 6
  per_device_eval_batch_size: 6
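The --configs defaults pythia-80 arguments in the launch command select the two sections above; presumably they are merged in order, with keys in pythia-80 overriding the same keys in defaults (so the effective learning_rate is 5e-6 and max_length is 520). Below is a minimal sketch of that merge, assuming the sections live in a single YAML file at a hypothetical path; it is not the project's actual config loader.

# Minimal sketch, not the project's actual loader. "configs/config.yaml" is a
# hypothetical path; the real repository layout may differ.
import yaml

def resolve_configs(path, names):
    with open(path) as f:
        sections = yaml.safe_load(f)
    merged = {}
    for name in names:          # later sections override earlier ones
        merged.update(sections[name])
    return merged

config = resolve_configs("configs/config.yaml", ["defaults", "pythia-80"])
# e.g. config["learning_rate"] == 5e-6, config["model_name"] == "EleutherAI/pythia-12b-deduped"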
Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Testing Data, Factors & Metrics
Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
Results
[More Information Needed]
Summary
Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
Technical Specifications [optional]
Model Architecture and Objective
Pythia 12B deduped (EleutherAI/pythia-12b-deduped), a decoder-only causal language model, fine-tuned with a standard causal language-modeling (cross-entropy) objective.
Compute Infrastructure
Stability AWS Slurm Cluster
Hardware
8 x A100 80G
Software
[More Information Needed]
Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
Acknowledgements
- LAION & EleutherAI
- Stability.ai : this project wouldn't be possible without their compute resources
- Teams and contributors at Open Assistant : who volunteer their time outside of their day jobs for this project
- Hugging Face : for the model storage and Spaces hosting
Model Card Authors [optional]
[More Information Needed]
Model Card Contact
[More Information Needed]