Cloud4bert

This model is a specialised version of the BERT base model. The code for the training process will be uploaded here. This model is uncased: it makes no distinction between english and English.

Model description

Cloud4bert is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained with three objectives:

- Distillation loss: the model was trained to return the same probability distribution over tokens as the BERT base teacher.
- Masked language modeling (MLM): given a sentence in which some tokens are randomly masked, the model has to predict the masked tokens.
- Cosine embedding loss: the model was also trained to generate hidden states as close as possible to those of the BERT base teacher.

This way, the model learns the same inner representation of the English language as its teacher model, while being faster at inference and on downstream tasks.
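
As a reference, here is a minimal PyTorch sketch of what these three objectives could look like. This is an illustration only, assuming standard DistilBERT-style distillation; the actual training code has not been uploaded yet, and every name below (function, arguments) is hypothetical:

import torch
import torch.nn.functional as F

def pretraining_loss(student_logits, teacher_logits, mlm_labels,
                     student_hidden, teacher_hidden, temperature=2.0):
    # 1. Distillation loss: match the teacher's softened output distribution
    #    (KL divergence between temperature-scaled softmaxes).
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    loss_distill = F.kl_div(log_soft_student, soft_teacher,
                            reduction='batchmean') * temperature ** 2

    # 2. Masked language modeling loss: predict the masked tokens
    #    (non-masked positions carry the ignore label -100).
    loss_mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        mlm_labels.view(-1), ignore_index=-100)

    # 3. Cosine embedding loss: pull the student's hidden states toward
    #    the teacher's (a target of +1 means "make these vectors similar").
    flat_student = student_hidden.view(-1, student_hidden.size(-1))
    flat_teacher = teacher_hidden.view(-1, teacher_hidden.size(-1))
    target = torch.ones(flat_student.size(0), device=flat_student.device)
    loss_cos = F.cosine_embedding_loss(flat_student, flat_teacher, target)

    return loss_distill + loss_mlm + loss_cos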

Intended uses & limitations

Will be added soon.

How to use

You can use this model directly with a pipeline for text classification (sentiment analysis):

>>> from transformers import pipeline
>>> sentiment_analyzer = pipeline('text-classification', model='ultraleow/cloud4bert')
>>> sentiment_analyzer("Sorry, I don't understand - are you saying you don't have the `paypal` section defined? You need to, otherwise, it's an 'unknown element' in the web.config.")
[{'label': 'LABEL_0', 'score': 0.6916515231132507}]
#LABEL_0 = negative
#LABEL_1 = neutral
#LABEL_2 = positive
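
The pipeline returns generic LABEL_* names; a small, purely illustrative helper can map them to the sentiment names listed above:

>>> label_names = {'LABEL_0': 'negative', 'LABEL_1': 'neutral', 'LABEL_2': 'positive'}
>>> result = sentiment_analyzer("Thanks, that fixed my web.config issue!")[0]
>>> print(label_names[result['label']], round(result['score'], 3))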

Here is how to use this model to classify a given text in PyTorch:

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and the matching uncased BERT tokenizer.
model = AutoModelForSequenceClassification.from_pretrained("ultraleow/cloud4bert")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # PyTorch tensors
output = model(**encoded_input)  # output.logits holds the raw class scores

and in TensorFlow:

from transformers import TFAutoModelForSequenceClassification, AutoTokenizer

# Use the TF-prefixed auto class to get a TensorFlow model.
model = TFAutoModelForSequenceClassification.from_pretrained("ultraleow/cloud4bert")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')  # TensorFlow tensors
output = model(encoded_input)  # output.logits holds the raw class scores
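
In both frameworks the model returns raw logits rather than probabilities; applying a softmax over the last dimension yields one probability per sentiment class. A minimal sketch for the PyTorch output (tf.nn.softmax plays the same role for the TensorFlow one):

import torch
probs = torch.softmax(output.logits, dim=-1)  # shape (batch_size, 3): negative / neutral / positive
print(probs)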

Training data

Will be added soon.

Training procedure

Will be added soon.

Preprocessing

Will be added soon.

Pretraining

Will be added soon.

Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

GLUE test results:

Task    Recall (weighted)    Precision (weighted)    F1 (weighted)    Accuracy
        94.03%               94.06%                  94.02%           94.03%