Titlewave: bert-base-uncased

Model description

Titlewave is a Chrome extension that helps you choose better titles for your Stack Overflow questions. See the GitHub repository for more information. This is one of two NLP models used in the Titlewave project; its purpose is to classify whether a question will be answered, based only on its title. The companion model suggests a new title based on the body of the question.

Intended use

Try out different titles for your Stack Overflow post and see which one gives you the best chance of receiving an answer. You can use the model through the API on this page (hosted by Hugging Face), or install the Chrome extension, which integrates the tool directly into the Stack Overflow website, by following the instructions in the GitHub repository.

You can also run the model locally in Python (this automatically downloads the model to your machine):

>>> from transformers import pipeline
>>> classifier = pipeline('sentiment-analysis', model='tennessejoyce/titlewave-bert-base-uncased')
>>> classifier('[Gmail API] How can I extract plain text from an email sent to me?')

[{'label': 'Answered', 'score': 0.8053370714187622}]

The 'score' in the output is the predicted probability of receiving an answer with this title: 80.5%.
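Since the point of the tool is to compare alternative titles, it can be convenient to score several candidates in one call and rank them. A minimal sketch, assuming the pipeline's list-input behavior and that 'score' is the answer probability as described above (the `rank_titles` helper is illustrative, not part of the project):

```python
from typing import Callable, List, Tuple

def rank_titles(classifier: Callable, titles: List[str]) -> List[Tuple[str, float]]:
    """Score each candidate title and sort by predicted answer probability.

    `classifier` is expected to behave like a transformers pipeline: given a
    list of strings, it returns a list of {'label': ..., 'score': ...} dicts.
    """
    results = classifier(titles)
    scored = [(title, result['score']) for title, result in zip(titles, results)]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# With the real model, the classifier is built as shown above:
# from transformers import pipeline
# classifier = pipeline('sentiment-analysis', model='tennessejoyce/titlewave-bert-base-uncased')
# rank_titles(classifier, ['Help with email', 'How can I extract plain text from a Gmail message?'])
```

The highest-ranked title is then the one the model predicts is most likely to attract an answer.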

Training data

The weights were initialized from the BERT base model, which was trained on BookCorpus and English Wikipedia. The model was then fine-tuned on a dataset of past Stack Overflow posts, which is publicly available here. Specifically, I used three years of posts (2017-2019), filtered out posts that were closed (e.g., duplicates, off-topic), and selected 5% of the remaining posts at random for the training set, with the same amount for the validation and test sets (278,155 posts each).
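The filtering and three-way 5% sampling described above can be sketched as follows. This is a schematic reconstruction, not the project's actual preprocessing script (which lives in the GitHub repository); the `closed` flag and dictionary layout are assumptions:

```python
import random

def split_posts(posts, frac=0.05, seed=0):
    """Drop closed posts, then draw three disjoint random subsets
    (train/validation/test), each containing `frac` of the open posts."""
    open_posts = [post for post in posts if not post['closed']]
    rng = random.Random(seed)          # fixed seed for a reproducible split
    rng.shuffle(open_posts)
    n = int(len(open_posts) * frac)    # size of each subset
    train = open_posts[:n]
    val = open_posts[n:2 * n]
    test = open_posts[2 * n:3 * n]
    return train, val, test
```

Shuffling once and slicing guarantees the three sets are disjoint, so no post appears in both training and evaluation.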

Training procedure

The model was fine-tuned for two epochs with a batch size of 32 (17,384 steps total) using 16-bit mixed precision. After some hyperparameter tuning, I found that a two-phase training procedure, one epoch per phase, yields the best performance (ROC-AUC score) on the validation set.

Otherwise, all parameters were set to the defaults listed here, including the AdamW optimizer and a linearly decreasing learning-rate schedule (both of which were reset between the two epochs). See the GitHub repository for the scripts used to train the model.
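As a sanity check, the step count quoted above follows directly from the dataset size, batch size, and epoch count, assuming the incomplete final batch of each epoch is dropped:

```python
# Reconstructing the training step count from the figures in this card.
train_size = 278_155   # training examples (the 5% sample described above)
batch_size = 32
epochs = 2

steps_per_epoch = train_size // batch_size   # full batches per epoch
total_steps = epochs * steps_per_epoch       # matches the 17,384 steps quoted above
```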

Evaluation

See this notebook for the performance of the title classification model on the test set.