Training procedure

A bitsandbytes quantization config was used during training.
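The exact settings are not reproduced in this card. As an illustration only, a typical 4-bit bitsandbytes configuration looks like the sketch below; every value is an assumption, not the actual training setting.

# Illustrative values only; the real quantization settings for this model are not listed in this card.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # assumed: 4-bit (QLoRA-style) loading
    bnb_4bit_quant_type="nf4",              # assumed quantization type
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
    bnb_4bit_use_double_quant=True,         # assumed nested (double) quantization
)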

Project Title

Short description of your project or the model you've fine-tuned.

Table of Contents

- Overview
- Training Procedure
- Quantization Configuration
- Framework Versions
- Usage

Overview

Provide a brief introduction to your project. Explain what your fine-tuned model does and its potential applications. Mention any notable achievements or improvements over the base model.

Training Procedure

Describe the training process for your fine-tuned model. Include details such as the training data and preprocessing steps, the fine-tuning method (for example, LoRA adapters via PEFT), the main hyperparameters (learning rate, batch size, number of epochs), and the hardware used.
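As a rough illustration only, the sketch below shows how a LoRA fine-tune on a 4-bit quantized base is typically set up with PEFT; the base model name, adapter settings, and hyperparameters are placeholders, not the values actually used for this model.

# Illustrative setup only; every name and value below is a placeholder, not this model's real configuration.
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig

base = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-base",                                      # placeholder base model
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # see the config sketch above
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)  # freeze base weights and cast norms for stable k-bit training

lora_config = LoraConfig(
    r=16,                      # placeholder adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="SEQ_2_SEQ_LM",  # assumes a seq2seq base; use CAUSAL_LM for a decoder-only model
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
# The adapter is then trained with transformers.Trainer (or a custom loop)
# on the summarization data described in this section.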

Quantization Configuration

Explain the quantization configuration used during training. Include details such as the bit width (4-bit or 8-bit), the quantization type, the compute dtype, and whether double quantization was used; an illustrative sketch appears under Training procedure near the top of this card.

Framework Versions

List the versions of the frameworks or libraries you used for this project. Include specific versions, e.g., PEFT 0.5.0.
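One quick way to capture these values, assuming the stack here is PEFT, transformers, bitsandbytes, and PyTorch, is:

# Prints the installed versions of the (assumed) core libraries so they can be listed above.
import bitsandbytes
import peft
import torch
import transformers

for lib in (peft, transformers, bitsandbytes, torch):
    print(lib.__name__, lib.__version__)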

Usage

Provide instructions on how to use your fine-tuned model, including code snippets or examples showing how to generate summaries with it. Mention any dependencies that need to be installed.

# Example usage command
python generate_summary.py --model your-model-name --input input.txt --output output.txt
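
The generate_summary.py script itself is not included in this card, so the sketch below only shows what such a script might look like. It assumes a PEFT adapter on a seq2seq base (swap AutoModelForSeq2SeqLM for AutoModelForCausalLM if the base is decoder-only), and the --base-model flag is an added assumption rather than part of the command above.

# Hypothetical sketch of generate_summary.py; the real script is not part of this card.
# Assumes transformers, peft, and torch are installed and the adapter targets a seq2seq base.
import argparse

import torch
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer


def main():
    parser = argparse.ArgumentParser(description="Summarize a text file with the fine-tuned model.")
    parser.add_argument("--model", required=True, help="Hub ID or local path of the fine-tuned adapter")
    parser.add_argument("--base-model", default=None,
                        help="Base model the adapter was trained on (assumed extra flag)")
    parser.add_argument("--input", required=True, help="Text file to summarize")
    parser.add_argument("--output", required=True, help="Where to write the generated summary")
    args = parser.parse_args()

    base_name = args.base_model or args.model
    tokenizer = AutoTokenizer.from_pretrained(base_name)
    base = AutoModelForSeq2SeqLM.from_pretrained(base_name)
    model = PeftModel.from_pretrained(base, args.model)  # attach the fine-tuned adapter
    model.eval()

    with open(args.input, encoding="utf-8") as f:
        text = f.read()

    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=256)

    summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    with open(args.output, "w", encoding="utf-8") as f:
        f.write(summary)


if __name__ == "__main__":
    main()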