PopBERT
PopBERT is a model for German-language populism detection in political speeches in the German Bundestag, based on the deepset/gbert-large model: https://huggingface.co/deepset/gbert-large
It is a multilabel model trained on a manually curated dataset of sentences from the 18th and 19th legislative periods. In addition to capturing the foundational dimensions of populism, namely "anti-elitism" and "people-centrism," the model was also fine-tuned to identify the underlying ideological orientation as either "left-wing" or "right-wing."
Prediction
The model outputs a tensor of length 4 containing one predicted probability per dimension. The table below maps each index to its dimension.
| Index | Dimension |
|---|---|
| 0 | Anti-Elitism |
| 1 | People-Centrism |
| 2 | Left-Wing Host-Ideology |
| 3 | Right-Wing Host-Ideology |
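Because the model is multilabel, each of the four probabilities can be thresholded independently. Below is a minimal sketch of mapping the output to named labels; the `to_labels` helper and the 0.5 cutoff are illustrative assumptions, not part of the released model:

```python
import torch

DIMENSIONS = [
    "Anti-Elitism",
    "People-Centrism",
    "Left-Wing Host-Ideology",
    "Right-Wing Host-Ideology",
]

def to_labels(probabilities: torch.Tensor, threshold: float = 0.5) -> dict:
    # map each of the four probabilities to its dimension name;
    # the 0.5 threshold is an illustrative default, not an official cutoff
    return {dim: float(p) >= threshold for dim, p in zip(DIMENSIONS, probabilities)}
```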
Usage Example
```python
import torch
from transformers import AutoModel, AutoTokenizer

# optional commit_hash to ensure a consistent version of the model
commit_hash = "2354335caedc36df44da926291786f0159a502f0"

# load tokenizer
tokenizer = AutoTokenizer.from_pretrained("luerhard/PopBERT", revision=commit_hash)

# load model
# trust_remote_code is necessary to use the custom architecture of this model (module.py)
model = AutoModel.from_pretrained("luerhard/PopBERT", trust_remote_code=True, revision=commit_hash)

# define text to be predicted
# ("This is class struggle from above, class struggle in the interest of the
# wealthy and propertied against the majority of taxpayers on this earth.")
text = (
    "Das ist Klassenkampf von oben, das ist Klassenkampf im Interesse von "
    "Vermögenden und Besitzenden gegen die Mehrheit der Steuerzahlerinnen und "
    "Steuerzahler auf dieser Erde."
)

# encode text with tokenizer
encodings = tokenizer(text, padding=True, return_tensors="pt")

# predict; the custom forward pass returns two values, the second being the probabilities
with torch.inference_mode():
    _, prediction_tensor = model(**encodings)

# convert prediction from torch tensor to numpy array
prediction = prediction_tensor.numpy()
print(prediction)
# [[0.84803474 0.9991047 0.9919584 0.19843338]]
```
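For this example sentence, the model assigns high probabilities to anti-elitism, people-centrism, and left-wing host ideology, and a low probability to right-wing host ideology. Passing the output through the illustrative `to_labels` helper from above:

```python
print(to_labels(prediction_tensor.squeeze()))
# {'Anti-Elitism': True, 'People-Centrism': True,
#  'Left-Wing Host-Ideology': True, 'Right-Wing Host-Ideology': False}
```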
Performance
This table presents the classification report for a 5-fold cross-validation of our model; the hyperparameters were identical across all 5 runs, and the final published model was then trained on all data with the same hyperparameters. On average, the model performs best on anti-elitism and worst on detecting right-wing host ideology. The relatively small standard deviations suggest that the split into training and test data has little impact on performance, so the final model can be expected to perform comparably to the figures below. Values are the mean (standard deviation) across the 5 folds.
| Dimension | Precision | Recall | F1 |
|---|---|---|---|
| Anti-Elitism | 0.812 (0.013) | 0.885 (0.006) | 0.847 (0.007) |
| People-Centrism | 0.670 (0.011) | 0.725 (0.040) | 0.696 (0.019) |
| Left-Wing Ideology | 0.664 (0.023) | 0.771 (0.024) | 0.713 (0.010) |
| Right-Wing Ideology | 0.654 (0.029) | 0.698 (0.050) | 0.674 (0.031) |
| micro avg | 0.732 (0.009) | 0.805 (0.006) | 0.767 (0.007) |
| macro avg | 0.700 (0.011) | 0.770 (0.010) | 0.733 (0.010) |
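Per-dimension scores with micro and macro averages of this kind can be computed with scikit-learn's multilabel classification report. A minimal sketch; the label matrices here are random placeholders purely for illustration, while the real report is computed on the held-out fold of the annotated corpus:

```python
import numpy as np
from sklearn.metrics import classification_report

target_names = [
    "Anti-Elitism",
    "People-Centrism",
    "Left-Wing Ideology",
    "Right-Wing Ideology",
]

# stand-in (n_sentences, 4) binary label matrices, purely for illustration
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(200, 4))
y_pred = rng.integers(0, 2, size=(200, 4))

# per-label precision/recall/F1 plus micro and macro averages
print(classification_report(y_true, y_pred, target_names=target_names, zero_division=0))
```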