roberta roberta-base token-classification NER named-entities BIO movies

roberta-base + Movies NER Task

Objective: This is roberta-base fine-tuned for NER (named-entity recognition) on the MIT Movie dataset.

from transformers import pipeline
model_name = "thatdramebaazguy/roberta-base-MITmovie"
ner = pipeline(task="ner", model=model_name, tokenizer=model_name, revision="v1.0")
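
For example, calling the pipeline on a sentence returns a list of token-level entity predictions. The sample sentence below is illustrative, not taken from the dataset:

results = ner("show me movies directed by Christopher Nolan released in 2010")
for entity in results:
    # each prediction is a dict with the tagged word, its BIO label, and a confidence score
    print(entity["word"], entity["entity"], entity["score"])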

Overview

Language model: roberta-base
Language: English
Downstream-task: NER
Training data: MIT Movie
Eval data: MIT Movie
Infrastructure: 2x Tesla V100
Code: See example

Hyperparameters

Num examples = 6253
Num epochs = 5
Instantaneous batch size per device = 64
Total train batch size (w. parallel, distributed & accumulation) = 128
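
As a rough sketch, these hyperparameters map onto Hugging Face TrainingArguments as shown below. The output directory and any values not listed above are assumptions for illustration, not taken from the original run:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-MITmovie",  # assumed output path
    num_train_epochs=5,                  # Num epochs = 5
    per_device_train_batch_size=64,      # Instantaneous batch size per device = 64
    # With 2x Tesla V100 and no gradient accumulation, 64 per device x 2 GPUs
    # gives the total train batch size of 128 listed above.
)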

Performance

Evaluated on the MIT Movie test set.

GitHub Repo: