
Model Card for Llama-2-7b-alpaca-cleaned


This model checkpoint is Llama-2-7b fine-tuned on the alpaca-cleaned dataset with the original Alpaca fine-tuning hyperparameters.

Model Details

Model Description

This model checkpoint is Llama-2-7b fine-tuned on the alpaca-cleaned dataset with the original Alpaca fine-tuning hyperparameters.
The original Alpaca model was fine-tuned from LLaMA on the alpaca dataset by researchers at Stanford University.

Model Sources

Repository: https://github.com/tatsu-lab/stanford_alpaca (Alpaca training code and procedure)
Base model: Llama-2-7b

Uses


Direct Use


The model is intended for research purposes only, in English, in compliance with the stanford_alpaca project.
It has been fine-tuned on the alpaca-cleaned dataset for assistant-like chat and general natural language generation tasks.
Use of this model must also comply with the restrictions of Llama-2-7b.

Out-of-Scope Use


Out-of-scope uses of this model must likewise comply with the restrictions of the stanford_alpaca project and Llama-2-7b.

Bias, Risks, and Limitations


[More Information Needed]

How to Get Started with the Model

Use the code below to get started with the model.

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NEU-HAI/Llama-2-7b-alpaca-cleaned")
model = AutoModelForCausalLM.from_pretrained("NEU-HAI/Llama-2-7b-alpaca-cleaned")
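
Alpaca-style models expect the Alpaca prompt template at inference time. The snippet below is a minimal generation sketch assuming that template (the template text comes from the stanford_alpaca project; the instruction and decoding settings are illustrative, not taken from this card):

# Build an Alpaca-style prompt (template from the stanford_alpaca project)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
).format(instruction="Explain what instruction tuning is in one sentence.")

inputs = tokenizer(prompt, return_tensors="pt")
# Illustrative decoding settings; tune for your use case
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))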

Training Details

Training Data


We use the alpaca-cleaned dataset, a cleaned version of the original alpaca dataset created by researchers at Stanford University.
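
To inspect the training data, the dataset can be loaded from the Hugging Face Hub. A minimal sketch, assuming the commonly used yahma/alpaca-cleaned Hub ID (this card does not pin a specific copy):

from datasets import load_dataset

# Hub ID assumed; this card does not pin a specific copy of alpaca-cleaned
dataset = load_dataset("yahma/alpaca-cleaned", split="train")
print(dataset[0])  # fields: instruction, input, output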

Training Procedure

We follow the same training procedure and mostly the same hyperparameters as those used to fine-tune the original Alpaca model from LLaMA. The procedure is documented in the stanford_alpaca project.

Training Hyperparameters

--bf16 True \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 8 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True
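
These flags are passed on the command line to the stanford_alpaca training script. For reference, they map one-to-one onto transformers.TrainingArguments; a minimal sketch (output_dir is a placeholder, not taken from this card):

from transformers import TrainingArguments

# Mirror of the flags above; output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="./llama-2-7b-alpaca-cleaned",
    bf16=True,
    num_train_epochs=3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    evaluation_strategy="no",
    save_strategy="steps",
    save_steps=2000,
    save_total_limit=1,
    learning_rate=2e-5,
    weight_decay=0.0,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
    logging_steps=1,
    fsdp="full_shard auto_wrap",
    fsdp_transformer_layer_cls_to_wrap="LlamaDecoderLayer",
    tf32=True,
)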

Evaluation


Testing Data, Factors & Metrics

Testing Data


N/A

Factors


N/A

Metrics


N/A

Results

N/A

Summary

N/A


Citation


Please cite the stanford_alpaca project:

@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}

Model Card Authors

Northeastern Human-centered AI Lab

Model Card Contact