ba-claim/distilbert
Model Details
Fine-tuned DistilBERT Model for Claim Relevance Identification
Based on this model: https://huggingface.co/distilbert-base-uncased
Model Description
This model is a DistilBERT checkpoint fine-tuned to identify relevant claims in the context of combating fake news. It was trained as part of a bachelor thesis project aimed at automating the fact-checking process by automatically flagging claims of interest.
The project participated in the CheckThat! 2023 competition (Task 1B), organized by the Conference and Labs of the Evaluation Forum (CLEF). The CheckThat! lab provided training data for predicting the check-worthiness of claims. This data was analyzed, and several transformer models, including BERT and ELECTRA, were compared to identify the most effective architecture.
Overall, this fine-tuned DistilBERT model helps automate the identification of relevant claims, reducing the need for manual fact-checking and contributing to efforts against the widespread dissemination of fake news.
Examples
ID | Sentence | Check-worthy
---|---|---
37440 | There's no way they would give it up. | No
37463 | They're able to charge women more for the same exact procedure a man gets. | Yes
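A minimal inference sketch using the `transformers` pipeline is shown below. The repository id comes from the top of this card; the exact label strings returned at inference time depend on the model's `id2label` config and are an assumption here (they are presented as Yes/No check-worthiness in the examples above):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub.
classifier = pipeline("text-classification", model="ba-claim/distilbert")

claims = [
    "There's no way they would give it up.",
    "They're able to charge women more for the same exact procedure a man gets.",
]

# Each result is a dict with "label" and "score"; the label names are
# assumed to correspond to the Yes/No check-worthiness decision.
for claim, result in zip(claims, classifier(claims)):
    print(f"{result['label']} ({result['score']:.3f}): {claim}")
```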
Training Details
Hyperparameter | Value
---|---
Learning Rate | 2.251e-05
Weight Decay | 5.0479e-03
Batch Size | 128
Number of Epochs | 5
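The hyperparameters above can be expressed as a Hugging Face `TrainingArguments` configuration. This is only a sketch under the assumption that the standard `Trainer` fine-tuning setup was used; the `output_dir` name is a placeholder, not taken from the thesis:

```python
from transformers import TrainingArguments

# Hyperparameters from the table above; everything else is a placeholder
# assumption, not the thesis's exact training configuration.
training_args = TrainingArguments(
    output_dir="distilbert-claim-relevance",  # placeholder name
    learning_rate=2.251e-05,
    weight_decay=5.0479e-03,
    per_device_train_batch_size=128,
    num_train_epochs=5,
)
```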