
biencoder-mMiniLMv2-L12-mmarcoFR

This is a sentence-transformers model: it maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on the French portion of the mMARCO dataset.

Usage


Sentence-Transformers

Using this model is straightforward once you have sentence-transformers installed:

pip install -U sentence-transformers

Then you can use the model like this:

from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR')
embeddings = model.encode(sentences)
print(embeddings)
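For example, the embeddings can be ranked by cosine similarity for semantic search. Below is a minimal sketch; the French query and passages are made up for illustration:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR')

# Hypothetical French query and candidate passages, for illustration only
query = "Quelle est la capitale de la France ?"
passages = [
    "Paris est la capitale et la plus grande ville de France.",
    "Le Mont Blanc est le plus haut sommet des Alpes.",
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

# Rank passages by cosine similarity to the query
scores = util.cos_sim(query_emb, passage_embs)[0]
for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {passage}")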

🤗 Transformers

Without sentence-transformers, you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

from transformers import AutoTokenizer, AutoModel
import torch


#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR')
model = AutoModel.from_pretrained('antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
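The pooled embeddings can then be compared with cosine similarity. A minimal continuation of the snippet above (illustrative only):

import torch.nn.functional as F

# Cosine similarity between the two example sentence embeddings
similarity = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print(f"Cosine similarity: {similarity.item():.4f}")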

Evaluation


We evaluated our model on the smaller development set of mMARCO-fr, which consists of 6,980 queries for a corpus of 8.8M candidate passages. Below, we compare the model's performance with other biencoder models fine-tuned on the same dataset. We report the mean reciprocal rank (MRR), normalized discounted cumulative gain (NDCG), mean average precision (MAP), and recall at various cut-offs (R@k).

| # | model | Vocab. | #Param. | Size | MRR@10 | NDCG@10 | MAP@10 | R@10 | R@100(↑) | R@500 |
|---|-------|--------|---------|------|--------|---------|--------|------|----------|-------|
| 1 | biencoder-camembert-base-mmarcoFR | 🇫🇷 | 110M | 443MB | 28.53 | 33.72 | 27.93 | 51.46 | 77.82 | 89.13 |
| 2 | biencoder-mpnet-base-all-v2-mmarcoFR | 🇬🇧 | 109M | 438MB | 28.04 | 33.28 | 27.50 | 51.07 | 77.68 | 88.67 |
| 3 | biencoder-distilcamembert-mmarcoFR | 🇫🇷 | 68M | 272MB | 26.80 | 31.87 | 26.23 | 49.20 | 76.44 | 87.87 |
| 4 | biencoder-MiniLM-L6-all-v2-mmarcoFR | 🇬🇧 | 23M | 91MB | 25.49 | 30.39 | 24.99 | 47.10 | 73.48 | 86.09 |
| 5 | biencoder-mMiniLMv2-L12-mmarcoFR | 🇫🇷,99+ | 117M | 471MB | 24.74 | 29.41 | 24.23 | 45.40 | 71.52 | 84.42 |
| 6 | biencoder-camemberta-base-mmarcoFR | 🇫🇷 | 112M | 447MB | 24.78 | 29.24 | 24.23 | 44.58 | 69.59 | 82.18 |
| 7 | biencoder-electra-base-french-mmarcoFR | 🇫🇷 | 110M | 440MB | 23.38 | 27.97 | 22.91 | 43.50 | 68.96 | 81.61 |
| 8 | biencoder-mMiniLMv2-L6-mmarcoFR | 🇫🇷,99+ | 107M | 428MB | 22.29 | 26.57 | 21.80 | 41.25 | 66.78 | 79.83 |
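For reference, the ranking metrics in the table above can be computed as follows. This is an illustrative sketch, not the exact evaluation script:

def mrr_at_k(ranked_lists, relevant_sets, k=10):
    # ranked_lists: one ranked list of passage ids per query
    # relevant_sets: one set of relevant passage ids per query
    total = 0.0
    for ranked, relevant in zip(ranked_lists, relevant_sets):
        for rank, pid in enumerate(ranked[:k], start=1):
            if pid in relevant:
                total += 1.0 / rank  # reciprocal rank of the first relevant hit
                break
    return total / len(ranked_lists)

def recall_at_k(ranked_lists, relevant_sets, k=10):
    total = 0.0
    for ranked, relevant in zip(ranked_lists, relevant_sets):
        total += len(set(ranked[:k]) & relevant) / len(relevant)
    return total / len(ranked_lists)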

Training


Background

We used the nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large model and fine-tuned it on a dataset of 500K French sentence pairs. We used a contrastive learning objective: given a sentence from a pair, the model should predict which of a set of randomly sampled other sentences was actually paired with it in our dataset. Formally, we compute the cosine similarity for every possible sentence pair in the batch, then apply the cross-entropy loss with a temperature of 0.05 by comparing with the true pairs.
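Concretely, this objective is a cross-entropy over the in-batch cosine-similarity matrix, where each sentence's true partner is the positive and the rest of the batch acts as negatives. A minimal PyTorch sketch, assuming q_emb and p_emb hold the paired embeddings of one batch:

import torch
import torch.nn.functional as F

def contrastive_loss(q_emb, p_emb, temperature=0.05):
    # q_emb, p_emb: (batch_size, dim) embeddings of paired sentences
    q = F.normalize(q_emb, dim=-1)
    p = F.normalize(p_emb, dim=-1)
    # Cosine similarity between every sentence and every candidate in the batch
    scores = q @ p.T / temperature            # (batch_size, batch_size)
    # The true pair for sentence i is candidate i; all others are negatives
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(scores, labels)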

Hyperparameters

We trained the model on a single Tesla V100 GPU with 32GB of memory for 20 epochs (i.e., 65.7k steps) using a batch size of 152. We used the AdamW optimizer with an initial learning rate of 2e-05, weight decay of 0.01, learning rate warmup over the first 500 steps, and linear decay of the learning rate afterwards. The sequence length was limited to 128 tokens.
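As an illustration, these hyperparameters roughly map onto the following sentence-transformers training setup. This is a hedged sketch, not the exact training script: the data loading is omitted, and the use of MultipleNegativesRankingLoss (whose scale=20 corresponds to the temperature of 0.05) is an assumption consistent with the objective described above:

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large')
model.max_seq_length = 128  # sequence length limited to 128 tokens

# 500K French sentence pairs; this single example is illustrative only
train_examples = [
    InputExample(texts=["quelle est la capitale de la france", "Paris est la capitale de la France."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=152)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20)  # scale = 1/0.05

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=20,
    warmup_steps=500,                  # linear warmup, then linear decay ('WarmupLinear')
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
)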

Data

We used the French version of the mMARCO dataset to fine-tune our model. mMARCO is a multilingual, machine-translated version of the MS MARCO dataset, a large-scale IR dataset comprising a corpus of 8.8M candidate passages along with real user queries, including the smaller development set of 6,980 queries used for evaluation above.
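The French data can be loaded from the Hugging Face Hub. A hedged sketch: the exact configuration names and record format of the unicamp-dl/mmarco dataset are assumptions here, so check the dataset card before use:

from datasets import load_dataset

# Assumption: the 'french' configuration yields training examples with
# query and passage fields; verify against the unicamp-dl/mmarco card.
dataset = load_dataset('unicamp-dl/mmarco', 'french', split='train')
print(dataset[0])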

Citation

@online{louis2023,
   author    = {Antoine Louis},
   title     = {biencoder-mMiniLMv2-L12-mmarcoFR: A Biencoder Model Trained on French mMARCO},
   publisher = {Hugging Face},
   month     = {may},
   year      = {2023},
   url       = {https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR},
}