
This is an Erzya (myv, Cyrillic script) sentence encoder from the paper *The first neural machine translation system for the Erzya language*.

It is based on sentence-transformers/LaBSE (see that model's license), but with an updated vocabulary and checkpoint.

The model can be used as a sentence encoder or a masked language modelling predictor for Erzya, or fine-tuned for any downstream NLU task.

Sentence embeddings can be produced with the code below:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("slone/LaBSE-en-ru-myv-v1")
model = AutoModel.from_pretrained("slone/LaBSE-en-ru-myv-v1")

# English, Russian, and Erzya versions of the same greeting
sentences = ["Hello World", "Привет Мир", "Шумбратадо Мастор"]
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)

# Use the pooled [CLS] output and L2-normalize it, as in the original LaBSE
embeddings = model_output.pooler_output
embeddings = torch.nn.functional.normalize(embeddings)
print(embeddings.shape)  # torch.Size([3, 768])
```
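Because the embeddings are L2-normalized, pairwise cosine similarity between sentences reduces to a plain matrix product. A minimal sketch, using a random dummy tensor in place of real model output:

```python
import torch

# Dummy stand-in for the normalized embeddings produced above (3 sentences, 768 dims)
embeddings = torch.nn.functional.normalize(torch.randn(3, 768), dim=1)

# For unit-length rows, the dot product equals cosine similarity
similarity = embeddings @ embeddings.T
print(similarity.shape)  # torch.Size([3, 3]); diagonal entries are 1.0
```

With real model output, off-diagonal entries give the cross-lingual similarity between sentence pairs.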
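For masked language modelling, the same checkpoint can be loaded through the fill-mask pipeline. This is a sketch under the assumption that the checkpoint ships pretrained MLM head weights (the model card above says it supports masked-LM prediction); the example sentence reuses the Erzya greeting from the snippet above:

```python
from transformers import pipeline

# Assumes slone/LaBSE-en-ru-myv-v1 includes a pretrained MLM head
fill_mask = pipeline("fill-mask", model="slone/LaBSE-en-ru-myv-v1")

# Predict the masked word in an Erzya sentence; [MASK] is the BERT-style mask token
predictions = fill_mask("Шумбратадо [MASK]")
for p in predictions:
    print(p["token_str"], p["score"])
```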