## Model Overview
This is the model presented in the paper "Detecting Text Formality: A Study of Text Classification Approaches".
The base model is DeBERTa (large), fine-tuned on GYAFC, an English corpus for formality classification. In our experiments, this model showed the best results among Transformer-based models for the task. More details, code, and data can be found in the paper's repository.
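For reference, a fine-tuning run of this kind could be set up roughly as sketched below. This is not the authors' training script: GYAFC requires separate access, so the two training pairs and the hyperparameters here are placeholders for illustration only.

```python
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

# Placeholder data in (text, label) format; 0 = formal, 1 = informal is an
# assumed label convention. The real GYAFC corpus must be obtained separately.
train_pairs = [
    ("I would be delighted to attend the meeting.", 0),
    ("gonna be there, dont worry", 1),
]

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-large")

class FormalityDataset(Dataset):
    """Wraps (text, label) pairs as tokenized examples for the Trainer."""
    def __init__(self, pairs):
        self.pairs = pairs
    def __len__(self):
        return len(self.pairs)
    def __getitem__(self, idx):
        text, label = self.pairs[idx]
        enc = tokenizer(text, truncation=True, max_length=128)
        enc["labels"] = label
        return enc

model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-large", num_labels=2
)

# Illustrative hyperparameters, not the values used in the paper.
args = TrainingArguments(output_dir="formality-ranker",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)
trainer = Trainer(model=model,
                  args=args,
                  train_dataset=FormalityDataset(train_pairs),
                  data_collator=DataCollatorWithPadding(tokenizer))
trainer.train()
```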
## Evaluation Results
Here, we report several metrics for the best model from each category that participated in the comparison, to give a sense of the range of values. The task is English monolingual formality classification.
| model | acc | f1-formal | f1-informal |
|---|---|---|---|
| bag-of-words | 79.1 | 81.8 | 75.6 |
| CharBiLSTM | 87.0 | 89.0 | 84.0 |
| DistilBERT-cased | 80.1 | 83.0 | 75.6 |
| DeBERTa-large | 87.8 | 89.0 | 86.1 |
## How to use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = 'deberta-large-formality-ranker'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
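Once loaded, the classifier can be applied directly. Below is a minimal inference sketch: the example sentences are ours, and since the label order is not stated here, the label names are read from `model.config.id2label` rather than assumed.

```python
import torch

# Example sentences (ours, for illustration only).
sentences = [
    "I would be delighted to attend the ceremony.",
    "hey, u coming to the party or what?",
]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to class probabilities; label names come from the model config.
probs = torch.softmax(logits, dim=-1)
for sentence, p in zip(sentences, probs):
    scores = {model.config.id2label[i]: round(p[i].item(), 3)
              for i in range(p.numel())}
    print(sentence, scores)
```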
## Citation
TBD
## Licensing Information
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.