DistilBERT with word2vec token embeddings

This model uses a word2vec token embedding matrix with 256k entries. The word2vec embeddings were trained for 3 epochs on 100GB of text from C4, MSMARCO, News, Wikipedia, and S2ORC.
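A minimal sketch of training such embeddings with Gensim. The toy corpus and hyperparameters below are assumptions for illustration; the only constraint implied above is `vector_size` matching DistilBERT's hidden size (768) and `epochs=3`:

```python
from gensim.models import Word2Vec

# Hypothetical pre-tokenized corpus; the actual training data
# (C4, MSMARCO, News, Wikipedia, S2ORC) is not bundled here.
sentences = [
    ["distilbert", "with", "word2vec", "token", "embeddings"],
    ["the", "embedding", "matrix", "has", "256k", "entries"],
]

# vector_size must match DistilBERT's hidden size (768) so the
# vectors can be plugged into the model's embedding matrix.
model = Word2Vec(
    sentences,
    vector_size=768,
    window=5,        # assumed context window
    min_count=1,     # assumed; a real 256k vocab would use a higher cutoff
    epochs=3,        # matches the 3 epochs mentioned above
    workers=4,
)
model.wv.save("word2vec_256k.wordvectors")  # hypothetical file name
```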

The model was then trained on this data with masked language modeling (MLM) for 1.37M steps (batch size 64). The token embeddings were NOT updated during training.
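A minimal sketch of freezing the token embeddings before MLM training, using the Hugging Face Transformers API. The training loop itself (1.37M steps, batch size 64) is omitted; only the freezing step is shown:

```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained(
    "vocab-transformers/distilbert-word2vec_256k-MLM_1M"
)

# Freeze only the word (token) embedding matrix; position embeddings
# and the rest of the network remain trainable.
embeddings = model.get_input_embeddings()
embeddings.weight.requires_grad = False

# Sanity check: the frozen embeddings no longer count as trainable.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```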

The initial word2vec weights in Gensim format are available at: https://huggingface.co/vocab-transformers/distilbert-word2vec_256k-MLM_1M/tree/main/word2vec
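A sketch of fetching and loading those vectors. The file name passed to `hf_hub_download` is a placeholder, not the actual name; check the linked directory for the real file, and use `KeyedVectors.load_word2vec_format` instead if the file is in word2vec text/binary format rather than Gensim's native format:

```python
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="vocab-transformers/distilbert-word2vec_256k-MLM_1M",
    filename="word2vec/word2vec_256k.wordvectors",  # hypothetical file name
)
wv = KeyedVectors.load(path)
print(wv.most_similar("computer", topn=5))
```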