<body> <span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: DistilBERT</span> <br> <span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span> </body>


<h3>Model description</h3>

This is a <b>DistilBERT</b> <b>[1]</b> model for the <b>Italian</b> language, obtained by using the multilingual <b>DistilBERT</b> (distilbert-base-multilingual-cased) as a starting point and focusing it on Italian by modifying the embedding layer (as in <b>[2]</b>, computing document-level token frequencies over the <b>Wikipedia</b> dataset).

The resulting model has 67M parameters, a vocabulary of 30,785 tokens, and a size of ~270 MB.
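The vocabulary-focusing step can be pictured with a short sketch. This is an illustration only, not the authors' actual code: the two-document placeholder corpus, the 30,000-token cutoff, and all variable names are assumptions; only the document-level frequency idea follows <b>[2]</b>.

```python
from collections import Counter

import torch
from transformers import DistilBertModel, DistilBertTokenizerFast

# Start from the multilingual checkpoint
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-multilingual-cased")
model = DistilBertModel.from_pretrained("distilbert-base-multilingual-cased")

# Placeholder corpus: a real run would iterate over Italian Wikipedia articles
documents = ["Roma è la capitale d'Italia.", "La pizza è nata a Napoli."]

# Document-level frequencies: each token id counts at most once per document
doc_freq = Counter()
for doc in documents:
    doc_freq.update(set(tokenizer(doc)["input_ids"]))

# Keep the most frequent token ids (hypothetical cutoff); a real pipeline would
# also force-keep the special tokens and rebuild the tokenizer so that its ids
# match the new embedding rows
keep_ids = sorted(tid for tid, _ in doc_freq.most_common(30000))

# Slice the multilingual embedding matrix down to the kept rows
old_emb = model.get_input_embeddings().weight.detach()
new_emb = torch.nn.Embedding(len(keep_ids), old_emb.size(1))
new_emb.weight.data.copy_(old_emb[keep_ids])
model.set_input_embeddings(new_emb)
model.config.vocab_size = len(keep_ids)
```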

<h3>Quick usage</h3>

```python
from transformers import DistilBertTokenizerFast, DistilBertModel

# Load the Italian tokenizer and model weights from the Hugging Face Hub
tokenizer = DistilBertTokenizerFast.from_pretrained("osiria/distilbert-base-italian-cased")
model = DistilBertModel.from_pretrained("osiria/distilbert-base-italian-cased")
```
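Continuing from the snippet above, a minimal example of extracting contextual embeddings (the sentence is purely illustrative):

```python
import torch

# Encode an Italian sentence and run a forward pass
inputs = tokenizer("La pioggia cade sulla città.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token-level contextual embeddings: (batch_size, sequence_length, 768)
print(outputs.last_hidden_state.shape)
```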

<h3>References</h3>

[1] Sanh et al., "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter". https://arxiv.org/abs/1910.01108

[2] Abdaoui et al., "Load What You Need: Smaller Versions of Multilingual BERT". https://arxiv.org/abs/2010.05609

<h3>License</h3>

The model is released under the <b>Apache-2.0</b> license.