<body> <span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span> <br> <span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: DistilUSE</span> <br> <span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span> <br> <span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span> </body>


<h3>Model description</h3>

This is a <b>Universal Sentence Encoder</b> <b>[1]</b> model for the <b>Italian</b> language, obtained using <b>mDistilUSE</b> (distiluse-base-multilingual-cased-v1) as a starting point and specializing it for Italian by modifying the embedding layer (as in <b>[2]</b>, computing document-level token frequencies over the <b>Wikipedia</b> dataset).

The resulting model has 67M parameters, a vocabulary of 30,785 tokens, and a size of ~270 MB.
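The embedding-layer reduction of <b>[2]</b> amounts to counting, for each token of the multilingual vocabulary, how many documents it appears in, keeping only the most frequent tokens, and slicing the embedding matrix accordingly. Below is a minimal sketch of the idea; the toy corpus stands in for Italian Wikipedia, the tokenizer rebuild and checkpoint export are omitted, and this is not the exact procedure used to produce this model.

```python
from collections import Counter

import torch
from transformers import AutoTokenizer, AutoModel

base = "sentence-transformers/distiluse-base-multilingual-cased-v1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModel.from_pretrained(base)

# Toy stand-in for the Italian Wikipedia corpus (hypothetical)
documents = [
    "Alessandro Manzoni è stato uno scrittore italiano.",
    "Roma è la capitale della Repubblica Italiana.",
]

# 1. Document-level frequencies: in how many documents does each token occur?
doc_freq = Counter()
for doc in documents:
    doc_freq.update(set(tokenizer.tokenize(doc)))

# 2. Keep the special tokens plus the most frequent subword tokens
kept = set(tokenizer.all_special_tokens)
kept.update(token for token, _ in doc_freq.most_common(30_000))
kept_ids = sorted(tokenizer.convert_tokens_to_ids(t) for t in kept)

# 3. Slice the embedding matrix down to the kept rows
old_emb = model.get_input_embeddings().weight.detach()
model.set_input_embeddings(torch.nn.Embedding.from_pretrained(old_emb[kept_ids]))
model.config.vocab_size = len(kept_ids)
```

In the full procedure the tokenizer vocabulary is also rebuilt so that token ids match the new embedding rows.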

It can be used to encode Italian texts and compute similarities between them.

<h3>Quick usage</h3>

```python
from transformers import AutoTokenizer, AutoModel
import numpy as np

tokenizer = AutoTokenizer.from_pretrained("osiria/distiluse-base-italian")
model = AutoModel.from_pretrained("osiria/distiluse-base-italian")

text1 = "Alessandro Manzoni è stato uno scrittore italiano"  # "Alessandro Manzoni was an Italian writer"
text2 = "Giacomo Leopardi è stato un poeta italiano"  # "Giacomo Leopardi was an Italian poet"

# Encode each text and use the [CLS] token embedding as the sentence vector
vec1 = model(tokenizer.encode(text1, return_tensors="pt")).last_hidden_state[0, 0, :].cpu().detach().numpy()
vec2 = model(tokenizer.encode(text2, return_tensors="pt")).last_hidden_state[0, 0, :].cpu().detach().numpy()

# Cosine similarity: dot product of the vectors divided by the product of their norms
cosine_similarity = np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2))
print("COSINE SIMILARITY:", cosine_similarity)

# COSINE SIMILARITY: 0.734292
```
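
To encode several texts at once, the same model can be called on a padded batch. A short sketch continuing from the snippet above (the sentences are illustrative):

```python
import torch

sentences = [
    "Roma è la capitale d'Italia",
    "Il Colosseo si trova a Roma",
    "La pizza è un piatto italiano",
]

batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    output = model(**batch)

# [CLS] embeddings, one row per sentence, L2-normalized so that
# dot products are cosine similarities
embeddings = torch.nn.functional.normalize(output.last_hidden_state[:, 0, :], dim=1)
print(embeddings @ embeddings.T)  # pairwise cosine similarity matrix
```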

<h3>References</h3>

[1] <i>Multilingual Universal Sentence Encoder for Semantic Retrieval</i>: https://arxiv.org/abs/1907.04307

[2] <i>Load What You Need: Smaller Versions of Multilingual BERT</i>: https://arxiv.org/abs/2010.05609

<h3>License</h3>

The model is released under the <b>Apache-2.0</b> license.