Tags: bert, oBERT, sparsity, pruning, compression

# oBERT-12-downstream-pruned-unstructured-90-qqp

This model was obtained with the method described in *The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models*.

It corresponds to the model presented in Table 1 of the paper, in the row 30 Epochs - oBERT - QQP 90%.

- Pruning method: oBERT downstream unstructured
- Paper: https://arxiv.org/abs/2203.07259
- Dataset: QQP
- Sparsity: 90%
- Number of layers: 12
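
The pruned weights can be loaded directly with `transformers`. The sketch below loads the checkpoint and measures the unstructured sparsity of the encoder's linear layers; the hub id `neuralmagic/oBERT-12-downstream-pruned-unstructured-90-qqp` is an assumption, so point it at wherever the weights are actually hosted.

```python
# Minimal sketch: load the checkpoint and measure encoder sparsity.
# The hub id below is an assumption; adjust it to the actual checkpoint location.
import torch
from transformers import AutoModelForSequenceClassification

model_id = "neuralmagic/oBERT-12-downstream-pruned-unstructured-90-qqp"  # assumed hub id
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Unstructured pruning zeroes individual weights in the encoder's Linear layers,
# so the fraction of exact zeros should be close to the reported 90%.
zeros, total = 0, 0
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear) and "encoder" in name:
        weight = module.weight.detach()
        zeros += (weight == 0).sum().item()
        total += weight.numel()

print(f"Encoder Linear sparsity: {zeros / total:.2%}")
```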

The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with (*)):

| oBERT 90%     | Accuracy | F1    |
| ------------- | -------- | ----- |
| seed=42       | 91.30    | 88.24 |
| seed=3407 (*) | 91.39    | 88.36 |
| seed=54321    | 91.36    | 88.29 |
| mean          | 91.35    | 88.30 |
| stdev         | 0.045    | 0.060 |
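
To sanity-check these numbers, the dev-set metrics can be recomputed on the GLUE QQP validation split. The sketch below assumes the same (hypothetical) hub id as in the loading example above and uses plain batched inference with `datasets` and `scikit-learn`; it is an illustration, not the exact evaluation script used for the paper.

```python
# Minimal sketch: recompute accuracy and F1 on the QQP validation split.
# Assumes the same (hypothetical) hub id as in the loading example above.
import torch
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "neuralmagic/oBERT-12-downstream-pruned-unstructured-90-qqp"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

dev = load_dataset("glue", "qqp", split="validation")

preds, labels = [], []
with torch.no_grad():
    for i in range(0, len(dev), 32):
        batch = dev[i : i + 32]  # slicing a Dataset returns a dict of lists
        enc = tokenizer(
            batch["question1"], batch["question2"],
            truncation=True, padding=True, max_length=128, return_tensors="pt",
        )
        logits = model(**enc).logits
        preds.extend(logits.argmax(dim=-1).tolist())
        labels.extend(batch["label"])

print(f"acc: {100 * accuracy_score(labels, preds):.2f}")
print(f"F1 : {100 * f1_score(labels, preds):.2f}")
```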

Code: https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT

If you find the model useful, please consider citing our work.

## Citation info

```bibtex
@article{kurtic2022optimal,
  title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
  author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
  journal={arXiv preprint arXiv:2203.07259},
  year={2022}
}
```