
Classification of Cognitive Distortions Using BERT

<span style="color:red">This article is under development. Please use the model as a starting point for retraining on your own data, not as a ready-to-use solution.</span>

Problem Description

Cognitive distortions are habitual patterns of biased or irrational thinking that can lead to negative emotions, behaviors, and beliefs. They are often automatic and unconscious, and can affect a person's perception of reality and their ability to make sound judgments.

Some common types of cognitive distortions include:

  1. Personalization: Blaming oneself for things that are outside of one's control.

Example: "The project failed because of me, even though half the team missed their deadlines."

  2. Emotional Reasoning: Believing that feelings are facts, and letting emotions drive one's behavior.

Example: "I feel like a failure, so I must be one."

  3. Overgeneralizing: Drawing broad conclusions based on a single incident or piece of evidence.

Example: "I failed one interview, so I will never get a job."

  4. Labeling: Attaching negative or extreme labels to oneself or others based on specific behaviors or traits.

Example: "I forgot her birthday, so I am a terrible friend."

  5. Should Statements: Rigid, inflexible thinking based on unrealistic or unattainable expectations of oneself or others.

Example: "I should always be productive and never need rest."

  6. Catastrophizing: Assuming the worst possible outcome in a situation and blowing it out of proportion.

Example: "If this presentation goes badly, my career is over."

  7. Reward Fallacy: The belief that one should be rewarded or recognized for every positive action or achievement.

Example: "After everything I have done for them, they owe me."

Model Description

The model is based on one of the smaller BERT variants, pretrained on English text using a masked language modeling objective. BERT was introduced in the paper "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" (Devlin et al., 2018) and first released in the google-research/bert repository.
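
The distortion taxonomy listed above corresponds to the classifier's label set, which is stored in the checkpoint's configuration. A minimal sketch for inspecting it before running any predictions (the exact label names and ids in the config are an assumption here and should be checked against the downloaded checkpoint):

from transformers import AutoConfig

# Inspect the classification head's label mapping; print it to see what the
# checkpoint actually contains rather than relying on the names assumed below.
config = AutoConfig.from_pretrained("amedvedev/bert-tiny-cognitive-bias")
print(config.num_labels)  # expected: 8 (seven distortions plus "No Distortion")
print(config.id2label)    # e.g. {0: "No Distortion", 1: "Personalization", ...}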

Data Description

[In progress]

Usage

Example of single-label classification:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned tokenizer and classification model from the Hub
tokenizer = AutoTokenizer.from_pretrained("amedvedev/bert-tiny-cognitive-bias")
model = AutoModelForSequenceClassification.from_pretrained("amedvedev/bert-tiny-cognitive-bias")

# Tokenize a single sentence and run it through the classifier
inputs = tokenizer("He must never disappoint anyone.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit to its label name
predicted_class_id = logits.argmax().item()
model.config.id2label[predicted_class_id]
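
Since the metrics below also report Top-3 and Top-5 accuracy, it can be useful to look at the full probability distribution rather than only the top class. A minimal continuation of the snippet above (it reuses logits and model from that snippet; the label names are whatever the checkpoint's id2label mapping contains):

# Rank all classes by probability and show the three most likely distortions
probs = torch.softmax(logits, dim=-1)[0]
top = torch.topk(probs, k=3)
for p, idx in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{model.config.id2label[idx]}: {p:.3f}")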

Metrics

Per-label precision, recall, and F1:

| Label | Precision | Recall | F1 |
|---|---|---|---|
| No Distortion | 0.84 | 0.74 | 0.79 |
| Personalization | 0.86 | 0.89 | 0.87 |
| Emotional Reasoning | 0.88 | 0.96 | 0.92 |
| Overgeneralizing | 0.80 | 0.88 | 0.84 |
| Labeling | 0.84 | 0.80 | 0.82 |
| Should Statements | 0.88 | 0.95 | 0.91 |
| Catastrophizing | 0.88 | 0.86 | 0.87 |
| Reward Fallacy | 0.87 | 0.95 | 0.91 |

Overall model performance:

| Accuracy | Top-3 Accuracy | Top-5 Accuracy | Precision | Recall | F1 |
|---|---|---|---|---|---|
| 0.86 ± 0.04 | 0.99 ± 0.01 | 0.99 ± 0.01 | 0.86 ± 0.04 | 0.85 ± 0.04 | 0.85 ± 0.04 |
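
How these figures were computed is not yet documented (see Data Description), but a per-label table of the same shape can be produced with scikit-learn. A hypothetical sketch with placeholder predictions, not the actual evaluation pipeline used here:

from sklearn.metrics import classification_report

# Placeholder gold labels and predictions; replace with a real held-out set
label_names = ["No Distortion", "Personalization", "Emotional Reasoning",
               "Overgeneralizing", "Labeling", "Should Statements",
               "Catastrophizing", "Reward Fallacy"]
y_true = [0, 1, 2, 3, 4, 5, 6, 7]  # gold class ids (dummy values)
y_pred = [0, 1, 2, 3, 4, 5, 6, 7]  # predicted class ids (dummy values)
print(classification_report(y_true, y_pred, target_names=label_names))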

References

[In progress]