# Model Card of vocabtrimmer/mt5-small-trimmed-ko-60000-koquad-qg
This model is a fine-tuned version of ckpts/mt5-small-trimmed-ko-60000 for the question generation task on lmqg/qg_koquad (dataset name: default) via lmqg.
## Overview
- Language model: ckpts/mt5-small-trimmed-ko-60000
- Language: ko
- Training data: lmqg/qg_koquad (default)
- Online Demo: https://autoqg.net/
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: https://arxiv.org/abs/2210.03992
 
## Usage
- With `lmqg`

```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="ko", model="vocabtrimmer/mt5-small-trimmed-ko-60000-koquad-qg")

# model prediction
questions = model.generate_q(list_context="1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.", list_answer="남부군")
```
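As the `list_context`/`list_answer` parameter names suggest, `generate_q` also accepts parallel lists for batch prediction. A minimal sketch, assuming list inputs are supported by your lmqg version:

```python
from lmqg import TransformersQG

model = TransformersQG(language="ko", model="vocabtrimmer/mt5-small-trimmed-ko-60000-koquad-qg")

# one question is generated per (context, answer) pair
context = "1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다."
questions = model.generate_q(
    list_context=[context, context],
    list_answer=["남부군", "여명의 눈동자"],
)
print(questions)  # list of generated questions, aligned with the inputs
```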
- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-ko-60000-koquad-qg")
output = pipe("1990년 영화 《 <hl> 남부군 <hl> 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.")
```
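The `<hl>` tokens mark the answer span the generated question should target. A small helper can build this input automatically; `highlight_answer` is a hypothetical convenience function, not part of this repository:

```python
def highlight_answer(paragraph: str, answer: str) -> str:
    # hypothetical helper: wrap the first occurrence of `answer` in <hl> tokens
    idx = paragraph.find(answer)
    if idx == -1:
        raise ValueError("answer span not found in paragraph")
    return paragraph[:idx] + "<hl> " + answer + " <hl>" + paragraph[idx + len(answer):]

text = highlight_answer("1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였다.", "남부군")
output = pipe(text)
```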
## Evaluation
- Metric (Question Generation): raw metric file
 
| Metric | Score | Type | Dataset |
|---|---|---|---|
| BERTScore | 83.43 | default | lmqg/qg_koquad | 
| Bleu_1 | 26.36 | default | lmqg/qg_koquad | 
| Bleu_2 | 19.38 | default | lmqg/qg_koquad | 
| Bleu_3 | 14.59 | default | lmqg/qg_koquad | 
| Bleu_4 | 11.1 | default | lmqg/qg_koquad | 
| METEOR | 28.4 | default | lmqg/qg_koquad | 
| MoverScore | 82.96 | default | lmqg/qg_koquad | 
| ROUGE_L | 26.7 | default | lmqg/qg_koquad | 
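The scores above come from lmqg's evaluation pipeline (see the raw metric file). As a rough cross-check on your own outputs, similar metrics can be computed with the `evaluate` library; this sketch will not reproduce the exact numbers above, since lmqg's tokenization and reference handling differ:

```python
import evaluate

predictions = ["남부군은 어떤 영화인가?"]                     # model outputs (illustrative)
references = [["1990년 영화 《남부군》의 제목은 무엇인가?"]]  # gold questions, one list per prediction (illustrative)

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")
print(bleu.compute(predictions=predictions, references=references, max_order=4))
print(rouge.compute(predictions=predictions, references=references))
```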
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_koquad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: ckpts/mt5-small-trimmed-ko-60000
- max_length: 512
- max_length_output: 32
- epoch: 12
- batch: 16
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
 
The full configuration can be found in the fine-tuning config file.
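For orientation only, the settings above map roughly onto transformers' `Seq2SeqTrainingArguments` as sketched below; lmqg drives training through its own trainer, so this mapping is an assumption, not the actual training script:

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-trimmed-ko-60000-koquad-qg",  # hypothetical output path
    num_train_epochs=12,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,  # effective batch size: 16 * 4 = 64
    learning_rate=1e-3,
    label_smoothing_factor=0.15,
    fp16=False,
    seed=1,
)
```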
## Citation
```bibtex
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```