This is a PyTorch pretrained model obtained by converting the checkpoint found in the official distilbert-base-uncased release.

DistilBERT is one of the faster pretrained BERT variants and can be used for multiple tasks. This model was fine-tuned on Stack Overflow data to predict a question's programming-language tag.
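To illustrate the tag-prediction step, here is a minimal sketch of how a sequence classifier's output logits map to a language tag. The label set and the mock logits below are hypothetical stand-ins; the actual model's `id2label` mapping and scores will differ.

```python
import math

# Hypothetical label mapping; the real model's id2label config may differ.
ID2LABEL = {0: "python", 1: "java", 2: "javascript", 3: "c#"}

def predict_tag(logits):
    """Convert one question's classifier logits into a language tag.

    Applies a softmax to get probabilities, then returns the tag
    with the highest probability.
    """
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[best]

# Mock logits standing in for the model's output on one question:
print(predict_tag([2.1, 0.3, -0.5, 0.8]))  # -> python
```

In practice these logits would come from running the tokenized question text through the fine-tuned classification head.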