xlm-roberta-base-finetune-qa

Fine-tuning xlm-roberta-base on the training sets of iapp_wiki_qa_squad, thaiqa_squad, and nsc_qa. Training examples whose cosine similarity with any validation or test example exceeds 0.8 are removed, and the contexts of the latter two datasets are trimmed to roughly 300 newmm words. Benchmarks are shared on wandb using the validation and test sets of iapp_wiki_qa_squad. Trained with thai2transformers.
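The deduplication step above can be sketched as follows. This is a minimal illustration only: it uses bag-of-words count vectors and a whitespace split as a stand-in for the newmm word tokenizer, since the exact vectorization used in the original pipeline is not specified here. The function names are hypothetical.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def filter_train_examples(train_texts, heldout_texts, threshold=0.8):
    """Drop training texts whose similarity to any validation/test
    text exceeds the threshold (0.8, as in the description above).
    Whitespace split stands in for newmm tokenization here."""
    heldout_vecs = [Counter(t.split()) for t in heldout_texts]
    kept = []
    for text in train_texts:
        vec = Counter(text.split())
        if all(cosine_similarity(vec, h) <= threshold for h in heldout_vecs):
            kept.append(text)
    return kept

train = ["the cat sat on the mat", "dogs bark loudly at night"]
heldout = ["the cat sat on the mat today"]
print(filter_train_examples(train, heldout))  # → ['dogs bark loudly at night']
```

The first training example shares almost all tokens with the held-out text (similarity ≈ 0.94 > 0.8), so it is dropped; the second has no overlap and is kept.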

Train with:

export WANDB_PROJECT=wangchanberta-qa
export MODEL_NAME=xlm-roberta-base
python train_question_answering_lm_finetuning.py \
  --model_name $MODEL_NAME \
  --dataset_name chimera_qa \
  --output_dir $MODEL_NAME-finetune-chimera_qa-model \
  --log_dir $MODEL_NAME-finetune-chimera_qa-log \
  --pad_on_right \
  --fp16
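The wandb benchmarks mentioned above are extractive-QA scores. A minimal sketch of the standard SQuAD-style metrics (exact match and token-level F1) is shown below; this is the generic formulation, not necessarily the exact evaluation script used for this model.

```python
from collections import Counter

def exact_match(prediction, gold):
    """1.0 if the case-normalized answer strings match exactly, else 0.0."""
    return float(prediction.strip().lower() == gold.strip().lower())

def token_f1(prediction, gold):
    """Token-overlap F1 between a predicted and a gold answer span."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Bangkok", "bangkok"))                      # → 1.0
print(round(token_f1("the capital city", "capital city"), 2)) # → 0.8
```

F1 gives partial credit when the predicted span overlaps the gold answer but is not identical, which is why both metrics are usually reported together.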