Trained on 6,040 posts (39% threats, 61% non-threats), varying from authentic Swedish text to translated English texts (both synthetic and authentic).

Feel free to contact us with feedback; we will try to improve the model iteratively over time. Reach out via: nova.threat.analyzer@gmail.com

Trained using:
Average performance based on all three tests

Average Accuracy: 0.87

| Label | Precision | Recall | F1 Score |
| --- | --- | --- | --- |
| 0 (Non-Threat) | 0.96 | 0.86 | 0.90 |
| 1 (Threat) | 0.67 | 0.89 | 0.76 |
| Macro | 0.81 | 0.89 | 0.83 |
| Weighted | 0.90 | 0.87 | 0.87 |
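The Macro and Weighted rows appear to follow the standard scikit-learn averaging conventions: the macro average is an unweighted mean over the two classes, while the weighted average weights each class by its support (number of true samples) in the evaluation data. A minimal sketch of that arithmetic, using the per-class precision values from the table; the support counts below are illustrative placeholders, not the actual test-set class counts:

```python
# Macro vs. weighted averaging of per-class metrics, as in the tables above.

def macro_avg(per_class):
    """Unweighted mean over classes."""
    return sum(per_class.values()) / len(per_class)

def weighted_avg(per_class, support):
    """Mean over classes, weighted by each class's number of true samples."""
    total = sum(support.values())
    return sum(score * support[label] / total
               for label, score in per_class.items())

# Per-class precision from the averaged results table.
precision = {"non-threat": 0.96, "threat": 0.67}

# Illustrative supports only -- the real evaluation class counts are not listed here.
support = {"non-threat": 610, "threat": 390}

print(f"macro precision:    {macro_avg(precision):.2f}")
print(f"weighted precision: {weighted_avg(precision, support):.2f}")
```

Note that the weighted average depends on the class balance of each evaluation set, which is why it can sit well above the macro average when the majority class (non-threats) scores higher.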

Performance for each test

| Test | Label | Precision | Recall | F1 Score |
| --- | --- | --- | --- | --- |
| Test Set | 0 (Non-Threat) | 0.91 | 0.93 | 0.92 |
| Test Set | 1 (Threat) | 0.88 | 0.85 | 0.86 |
| Test Set | Macro Avg | 0.89 | 0.89 | 0.89 |
| Test Set | Weighted Avg | 0.90 | 0.90 | 0.90 |
| In-the-Wild 1 | 0 (Non-Threat) | 0.99 | 0.77 | 0.86 |
| In-the-Wild 1 | 1 (Threat) | 0.60 | 0.97 | 0.74 |
| In-the-Wild 1 | Macro Avg | 0.79 | 0.87 | 0.80 |
| In-the-Wild 1 | Weighted Avg | 0.88 | 0.82 | 0.83 |
| In-the-Wild 2 | 0 (Non-Threat) | 0.99 | 0.88 | 0.93 |
| In-the-Wild 2 | 1 (Threat) | 0.53 | 0.94 | 0.68 |
| In-the-Wild 2 | Macro Avg | 0.76 | 0.91 | 0.80 |
| In-the-Wild 2 | Weighted Avg | 0.93 | 0.89 | 0.90 |
| Test | Accuracy |
| --- | --- |
| Test Set | 0.90 |
| In-the-Wild 1 | 0.82 |
| In-the-Wild 2 | 0.89 |