# Model Card of lmqg/t5-large-squad-qg
This model is a fine-tuned version of t5-large for the question generation task on the lmqg/qg_squad dataset (dataset_name: default) via lmqg.
## Overview
- Language model: t5-large
- Language: en
- Training data: lmqg/qg_squad (default)
- Online Demo: https://autoqg.net/
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: https://arxiv.org/abs/2210.03992
## Usage
- With `lmqg`:

```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="lmqg/t5-large-squad-qg")

# model prediction
questions = model.generate_q(
    list_context="William Turner was an English painter who specialised in watercolour landscapes",
    list_answer="William Turner",
)
```

- With `transformers`:

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/t5-large-squad-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
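The `transformers` pipeline expects the paragraph with the target answer wrapped in `<hl>` tokens and prefixed with `generate question: `, as in the example above. A minimal sketch of building that input from a raw context and answer span (the `highlight_answer` helper is illustrative, not part of either library):

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/t5-large-squad-qg")

def highlight_answer(context: str, answer: str) -> str:
    # illustrative helper: wrap the first occurrence of the answer in <hl> tokens
    # and prepend the task prefix used by this model
    highlighted = context.replace(answer, f"<hl> {answer} <hl>", 1)
    return f"generate question: {highlighted}"

context = "William Turner was an English painter who specialised in watercolour landscapes"
print(pipe(highlight_answer(context, "William Turner")))
```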
## Evaluation
- Metric (Question Generation): raw metric file
| Metric | Score | Type | Dataset |
|---|---|---|---|
| BERTScore | 91 | default | lmqg/qg_squad |
| Bleu_1 | 59.54 | default | lmqg/qg_squad |
| Bleu_2 | 43.79 | default | lmqg/qg_squad |
| Bleu_3 | 34.14 | default | lmqg/qg_squad |
| Bleu_4 | 27.21 | default | lmqg/qg_squad |
| METEOR | 27.7 | default | lmqg/qg_squad |
| MoverScore | 65.29 | default | lmqg/qg_squad |
| ROUGE_L | 54.13 | default | lmqg/qg_squad |
- Metric (Question & Answer Generation, Reference Answer): Each question is generated from the gold answer. raw metric file
| Metric | Score | Type | Dataset |
|---|---|---|---|
| QAAlignedF1Score (BERTScore) | 95.57 | default | lmqg/qg_squad |
| QAAlignedF1Score (MoverScore) | 71.1 | default | lmqg/qg_squad |
| QAAlignedPrecision (BERTScore) | 95.62 | default | lmqg/qg_squad |
| QAAlignedPrecision (MoverScore) | 71.41 | default | lmqg/qg_squad |
| QAAlignedRecall (BERTScore) | 95.51 | default | lmqg/qg_squad |
| QAAlignedRecall (MoverScore) | 70.8 | default | lmqg/qg_squad |
- Metric (Question & Answer Generation, Pipeline Approach): Each question is generated from an answer extracted by `lmqg/t5-large-squad-ae` (see the sketch after the table below). raw metric file
| Metric | Score | Type | Dataset |
|---|---|---|---|
| QAAlignedF1Score (BERTScore) | 92.97 | default | lmqg/qg_squad |
| QAAlignedF1Score (MoverScore) | 64.72 | default | lmqg/qg_squad |
| QAAlignedPrecision (BERTScore) | 92.83 | default | lmqg/qg_squad |
| QAAlignedPrecision (MoverScore) | 64.87 | default | lmqg/qg_squad |
| QAAlignedRecall (BERTScore) | 93.14 | default | lmqg/qg_squad |
| QAAlignedRecall (MoverScore) | 64.66 | default | lmqg/qg_squad |
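For the pipeline approach above, `lmqg` can chain this question generator with the answer extraction model. A minimal sketch, assuming the `model_ae` argument and `generate_qa` method behave as described in the lmqg repository:

```python
from lmqg import TransformersQG

# end-to-end question & answer generation: answers come from lmqg/t5-large-squad-ae,
# questions from this model (model_ae / generate_qa usage assumed from the lmqg docs)
model = TransformersQG(model="lmqg/t5-large-squad-qg", model_ae="lmqg/t5-large-squad-ae")
qa_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
print(qa_pairs)  # expected: a list of (question, answer) pairs
```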
- Metrics (Question Generation, Out-of-Domain)
| Dataset | Type | BERTScore | Bleu_4 | METEOR | MoverScore | ROUGE_L | Link |
|---|---|---|---|---|---|---|---|
| lmqg/qg_squadshifts | amazon | 91.15 | 6.9 | 23.01 | 61.22 | 25.34 | link |
| lmqg/qg_squadshifts | new_wiki | 93.17 | 11.18 | 27.92 | 66.31 | 30.06 | link |
| lmqg/qg_squadshifts | nyt | 92.42 | 8.05 | 25.67 | 64.37 | 25.19 | link |
| lmqg/qg_squadshifts | reddit | 90.95 | 5.95 | 21.85 | 60.64 | 21.99 | link |
| lmqg/qg_subjqa | books | 87.94 | 0.0 | 11.97 | 55.48 | 9.87 | link |
| lmqg/qg_subjqa | electronics | 87.86 | 0.84 | 16.16 | 56.05 | 14.13 | link |
| lmqg/qg_subjqa | grocery | 87.5 | 0.76 | 15.4 | 56.76 | 10.5 | link |
| lmqg/qg_subjqa | movies | 87.34 | 0.0 | 13.03 | 55.36 | 12.27 | link |
| lmqg/qg_subjqa | restaurants | 88.25 | 0.0 | 12.45 | 55.91 | 11.93 | link |
| lmqg/qg_subjqa | tripadvisor | 89.29 | 0.78 | 16.3 | 56.81 | 14.59 | link |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-large
- max_length: 512
- max_length_output: 32
- epoch: 6
- batch: 16
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found in the fine-tuning config file.
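As a rough illustration of how `max_length` and `max_length_output` map onto inference with plain `transformers` (the generation call below is an assumption for illustration, not the training script itself):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("lmqg/t5-large-squad-qg")
model = AutoModelForSeq2SeqLM.from_pretrained("lmqg/t5-large-squad-qg")

# paragraph_answer input: paragraph with the answer highlighted, plus the qg task prefix
text = ("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as "
        "blues singer Etta James in the 2008 musical biopic, Cadillac Records.")

inputs = tokenizer(text, max_length=512, truncation=True, return_tensors="pt")  # max_length
outputs = model.generate(**inputs, max_length=32)                               # max_length_output
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```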
## Citation

```bibtex
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
      Alva-Manchego, Fernando  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```