# CITDA
Fine-tuned `bert-base-uncased` on the emotions dataset
Demo Notebook: https://colab.research.google.com/drive/10ZCFvlf2UV3FjU4ymf4OoipQvqHbIItG?usp=sharing
## Packages
- Install `torch`
- Also, `pip install transformers datasets scikit-learn wandb seaborn python-dotenv`
## Train
- Rename `.env.example` to `.env` and set an API key from wandb (see the sketch after this list).
- You can adjust model parameters in the `explainableai.py` file.
- The model (`pytorch_model.bin`) is based on `bert-base-uncased` and already trained on the `emotions` dataset. To reproduce the training, run `finetune-emotions.py`. You can change the base model or the dataset by editing that file's code.
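As an illustration only, here is a minimal sketch of loading the key from `.env` with `python-dotenv` and logging in to wandb. The variable name `WANDB_API_KEY` is an assumption; check `.env.example` for the actual name used by this repo.

```python
# Minimal sketch, not part of the repo: load the wandb key from .env and log in.
# WANDB_API_KEY is an assumed variable name; check .env.example for the real one.
import os

import wandb
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
wandb.login(key=os.environ["WANDB_API_KEY"])
```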
## Example
Run `example.py`
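If you just want to query the fine-tuned checkpoint directly, a minimal inference sketch (not the actual `example.py`) looks like this; it assumes `pytorch_model.bin` sits in a directory together with its config and tokenizer files:

```python
# Minimal inference sketch; the real logic lives in example.py.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_dir = "."  # assumed: folder with pytorch_model.bin, config.json, tokenizer files
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)

inputs = tokenizer("I can't stop smiling today!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```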
## Train
The model is already fine-tuned from `bert-base-uncased` on the emotions dataset. However, you can change the parameters and re-fine-tune it by running `finetune-emotions.py`.
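For reference, a re-fine-tuning run with the Hugging Face `Trainer` API might look roughly like the sketch below. It is not a copy of `finetune-emotions.py`; the dataset id `emotion`, the label count of 6, and the training hyperparameters are assumptions to adjust against that script.

```python
# Rough sketch of fine-tuning bert-base-uncased on the emotions dataset.
# Assumptions: Hub dataset id "emotion", 6 labels, arbitrary hyperparameters.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("emotion")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=6
)

args = TrainingArguments(
    output_dir="out",
    num_train_epochs=2,
    per_device_train_batch_size=16,
    report_to="wandb",  # uses the wandb key configured above
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```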