Multilingual XLM-RoBERTa large for QA on various languages

Overview

Language model: xlm-roberta-large
Language: Multilingual
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD dev set - German MLQA - German XQuAD
Training run: MLFlow link
Infrastructure: 4x Tesla V100

Hyperparameters

batch_size = 32
n_epochs = 3
base_LM_model = "xlm-roberta-large"
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
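
The original run was done with deepset's own training code (see the MLFlow link above). Purely as an illustration, the hyperparameters listed here map roughly onto Hugging Face TrainingArguments as sketched below; the output directory is a placeholder, and the per-device batch size assumes the 4-GPU setup listed under Infrastructure.

from transformers import TrainingArguments

# Illustrative mapping only, not the original training setup.
# The effective batch size of 32 is split across 4 GPUs (8 per device);
# "xlm-roberta-large-squad2-out" is a placeholder output directory.
training_args = TrainingArguments(
    output_dir="xlm-roberta-large-squad2-out",
    num_train_epochs=3,
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
)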

Performance

Evaluated on the SQuAD 2.0 English dev set with the official eval script.

  "exact": 79.45759285774446,
  "f1": 83.79259828925511,
  "total": 11873,
  "HasAns_exact": 71.96356275303644,
  "HasAns_f1": 80.6460053117963,
  "HasAns_total": 5928,
  "NoAns_exact": 86.93019343986543,
  "NoAns_f1": 86.93019343986543,
  "NoAns_total": 5945

Evaluated on German MLQA: test-context-de-question-de.json

"exact": 49.34691166703564,
"f1": 66.15582561674236,
"total": 4517,

Evaluated on German XQuAD: xquad.de.json

"exact": 61.51260504201681,
"f1": 78.80206098332569,
"total": 1190,

Usage

In Haystack

For doing QA at scale (i.e., over many documents instead of a single paragraph), you can also load the model in Haystack:

from haystack.nodes import FARMReader, TransformersReader  # Haystack v1.x import path

reader = FARMReader(model_name_or_path="deepset/xlm-roberta-large-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/xlm-roberta-large-squad2", tokenizer="deepset/xlm-roberta-large-squad2")
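
To run the reader over a whole document store rather than a single paragraph, a minimal extractive QA pipeline with the Haystack v1.x API could look like the sketch below; the in-memory store, the BM25 retriever, and the single example document are placeholders for your own setup, and exact class names can differ between Haystack versions.

from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever
from haystack.pipelines import ExtractiveQAPipeline

# Placeholder corpus; in practice you would index your own documents.
document_store = InMemoryDocumentStore(use_bm25=True)
document_store.write_documents([{"content": "Berlin is the capital of Germany."}])

retriever = BM25Retriever(document_store=document_store)
pipe = ExtractiveQAPipeline(reader=reader, retriever=retriever)
result = pipe.run(query="What is the capital of Germany?",
                  params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 3}})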

In Transformers

from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/xlm-roberta-large-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
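
If you prefer to skip the pipeline and run the loaded model and tokenizer directly, a minimal sketch is shown below; note that it only takes the arg-max span and does not handle the no-answer case that SQuAD 2.0 models can predict.

import torch

inputs = tokenizer(QA_input["question"], QA_input["context"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end token positions and decode the span.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
answer = tokenizer.decode(inputs["input_ids"][0][start:end])
print(answer)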

Authors

Branden Chan: branden.chan@deepset.ai
Timo Möller: timo.moeller@deepset.ai
Malte Pietsch: malte.pietsch@deepset.ai
Tanay Soni: tanay.soni@deepset.ai

About us

<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div>

deepset is the company behind the open-source NLP framework Haystack, which is designed to help you build production-ready NLP systems that use question answering, summarization, ranking, and more.

Some of our other work:

Get in touch and join the Haystack community

<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.

We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>

Twitter | LinkedIn | Discord | GitHub Discussions | Website

By the way: we're hiring!