Contrastive user encoder (single post)

This model is a DistilBertModel obtained by fine-tuning distilbert-base-uncased with an author-based triplet loss.

Details

Training and evaluation details are provided in our EMNLP Findings paper:

Training

We fine-tuned DistilBERT on triplets consisting of:

- an anchor: a single Reddit post written by a given user;
- a positive example: another post written by the same user;
- a negative example: a post written by a different user.

To compute the loss, we use the [CLS] encodings of the anchor, the positive example and the negative example from the last layer of the DistilBERT encoder. We minimize the triplet loss \(\max(||f(a) - f(p)|| - ||f(a) - f(n)|| + \alpha, 0)\)

where:

- \(f(\cdot)\) is the last-layer [CLS] encoding produced by the DistilBERT encoder;
- \(a\), \(p\) and \(n\) are the anchor, positive and negative posts, respectively;
- \(\alpha\) is the margin hyperparameter.
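
The sketch below illustrates how this objective can be computed from [CLS] encodings with PyTorch and Hugging Face transformers. It is a minimal, illustrative reconstruction, not the original training script: the tokenizer settings, batching, margin value and the example posts are all assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative sketch of the author-based triplet objective (not the original training code).
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")

def cls_embedding(texts):
    """Return the last-layer [CLS] encoding f(x) for a batch of posts."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, dim)
    return hidden[:, 0]                          # [CLS] token sits at position 0

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(||f(a) - f(p)|| - ||f(a) - f(n)|| + margin, 0), averaged over the batch."""
    d_pos = torch.norm(cls_embedding(anchor) - cls_embedding(positive), dim=-1)
    d_neg = torch.norm(cls_embedding(anchor) - cls_embedding(negative), dim=-1)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

# Hypothetical triplet: two posts by the same author, one by a different author.
loss = triplet_loss(
    ["I finally fixed my bike's rear derailleur."],    # anchor
    ["Any tips for adjusting brake cable tension?"],   # positive: same author
    ["Week two of my sourdough starter, wish me luck."] # negative: different author
)
loss.backward()
```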

Evaluation and usage

The model yields performance advantages on downstream user-based classification tasks.
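
As a usage sketch, a single post can be encoded into a user representation and passed to any downstream classifier. The model identifier below is a placeholder for this repository, and the linear probe is only an illustrative assumption, not part of this release.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Placeholder identifier: substitute the actual repository name of this model.
model_id = "<this-model-repo>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

post = "I spend way too much time tweaking my mechanical keyboard."
inputs = tokenizer(post, truncation=True, return_tensors="pt")

with torch.no_grad():
    # The last-layer [CLS] encoding serves as the user representation of the post.
    user_embedding = encoder(**inputs).last_hidden_state[:, 0]

# The embedding can then feed any user-based classifier, e.g. a simple linear probe.
probe = torch.nn.Linear(user_embedding.shape[-1], 2)
logits = probe(user_embedding)
```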

We encourage usage and benchmarking on tasks involving:

Limitations

Being exclusively trained on Reddit data, our models probably overfit to linguistic markers and traits which are relevant to characterizing the Reddit user population, but less salient in the general population. Domain-specific fine-tuning may be required before deployment.

Furthermore, our self-supervised approach enforces little or no control over biases, which the models may actively exploit as heuristics in contrastive and downstream tasks.