
# bge-base-en-v1.5-sparse

This is the sparsified ONNX variant of the bge-base-en-v1.5 embeddings model, created with the DeepSparse Optimum integration for the ONNX export/inference pipeline and Neural Magic's Sparsify for one-shot quantization (INT8) and unstructured pruning (50%).

Current list of sparse and quantized bge ONNX models:

| Links | Sparsification Method |
| --- | --- |
| zeroshot/bge-large-en-v1.5-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/bge-large-en-v1.5-quant | Quantization (INT8) |
| zeroshot/bge-base-en-v1.5-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/bge-base-en-v1.5-quant | Quantization (INT8) |
| zeroshot/bge-small-en-v1.5-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/bge-small-en-v1.5-quant | Quantization (INT8) |
Install the DeepSparse Sentence Transformers integration:

```bash
pip install -U deepsparse-nightly[sentence_transformers]
```
```python
from deepsparse.sentence_transformers import DeepSparseSentenceTransformer

model = DeepSparseSentenceTransformer('zeroshot/bge-base-en-v1.5-sparse', export=False)

# The sentences we'd like to encode
sentences = ['This framework generates embeddings for each input sentence',
    'Sentences are passed as a list of strings.',
    'The quick brown fox jumps over the lazy dog.']

# Sentences are encoded by calling model.encode()
embeddings = model.encode(sentences)

# Print the shape of each embedding
for sentence, embedding in zip(sentences, embeddings):
    print("Sentence:", sentence)
    print("Embedding:", embedding.shape)
    print("")
```
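A common next step with these embeddings is semantic similarity via cosine similarity. Below is a minimal NumPy sketch of that computation; the random 768-dimensional vectors are stand-ins for `model.encode()` output (bge-base models produce 768-dimensional embeddings), so you can adapt it without loading the model:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product of the vectors divided by
    # the product of their Euclidean norms; result lies in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Dummy 768-dim vectors standing in for model.encode() output.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=768)
emb_b = rng.normal(size=768)

score = cosine_similarity(emb_a, emb_b)
print("similarity:", score)
```

In practice you would replace `emb_a` and `emb_b` with rows of the `embeddings` array returned by `model.encode(sentences)`.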

For further details on the DeepSparse & Sentence Transformers integration, refer to the DeepSparse README.

For general questions on these models and sparsification methods, reach out to the engineering team on our community Slack.

;)