

# ke_t5_base_bongsoo_ko_en_epoch2

This model is a fine-tuned version of [chunwoolee0/ke_t5_base_bongsoo_ko_en](https://huggingface.co/chunwoolee0/ke_t5_base_bongsoo_ko_en) on the bongsoo/news_news_talk_en_ko dataset.

## Model description

KE-T5 is a pretrained T5 text-to-text transfer transformer trained on a Korean and English corpus, developed by KETI (Korea Electronics Technology Institute). The vocabulary used by KE-T5 consists of 64,000 sub-word tokens and was created using Google's SentencePiece. The SentencePiece model was trained to cover 99.95% of a 30 GB corpus with an approximate 7:3 mix of Korean and English.
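
As a quick sanity check, the shared vocabulary can be inspected through the tokenizer. A minimal sketch (the reported size may differ slightly from 64,000 because of extra special tokens):

```python
from transformers import AutoTokenizer

# Inspect the shared Korean/English SentencePiece vocabulary described above.
tokenizer = AutoTokenizer.from_pretrained("chunwoolee0/ke_t5_base_bongsoo_ko_en_epoch2")
print(tokenizer.vocab_size)  # roughly 64,000 sub-word tokens, per the card
print(tokenizer.tokenize("나는 점심식사 후에 산책을 한다."))  # Korean sub-word pieces
print(tokenizer.tokenize("I take a walk after lunch."))  # English pieces from the same vocabulary
```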

## Intended uses & limitations

Translation from Korean to English (epoch = 2):

```python
>>> from transformers import pipeline
>>> translator = pipeline('translation', model='chunwoolee0/ke_t5_base_bongsoo_ko_en_epoch2')

>>> translator("나는 습관적으로 점심식사 후에 산책을 한다.")
[{'translation_text': 'I habitally walk after lunch.'}]

>>> translator("이 강좌는 허깅페이스가 만든 거야.")
[{'translation_text': 'This class was created by Huggface.'}]

>>> translator("오늘은 늦게 일어났다.")
[{'translation_text': 'This day I woke up earlier.'}]
```
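
Note that the last example is mistranslated: 오늘은 늦게 일어났다 means "I woke up late today," not "earlier." Outputs like this, along with misspellings such as "habitally" and "Huggface" above, illustrate the model's remaining limitations.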

## Training and evaluation data

bongsoo/news_news_talk_en_ko

| Split      | Rows    |
|:-----------|--------:|
| train      | 360,000 |
| test       | 20,000  |
| validation | 20,000  |
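
A minimal sketch for loading the dataset and confirming the split sizes above (assuming the dataset id resolves on the Hugging Face Hub):

```python
from datasets import load_dataset

ds = load_dataset("bongsoo/news_news_talk_en_ko")  # dataset id as given in this card
for split in ("train", "test", "validation"):
    print(split, ds[split].num_rows)  # expected: 360000, 20000, 20000
```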

## Training procedure

chunwoolee0/ke_t5_base_bongsoo_ko_en is used as the pretrained model checkpoint. max_token_length is set to 64 for stable training. The learning rate is reduced from 0.0005 for epoch 1 to 0.00002 here.
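
A minimal sketch of this setup with the Seq2SeqTrainer API, assuming the dataset exposes "ko" and "en" text columns and the effective batch size of 64 implied by 360,000 examples over 5,625 steps; other settings are illustrative defaults, not confirmed by the card:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "chunwoolee0/ke_t5_base_bongsoo_ko_en"  # epoch-1 checkpoint, per the card
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

raw = load_dataset("bongsoo/news_news_talk_en_ko")  # dataset id as given in this card

max_token_length = 64  # per the card, for stable training

def preprocess(batch):
    # Column names "ko" and "en" are assumptions; adjust to the dataset's schema.
    model_inputs = tokenizer(batch["ko"], max_length=max_token_length, truncation=True)
    labels = tokenizer(text_target=batch["en"], max_length=max_token_length, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="ke_t5_base_bongsoo_ko_en_epoch2",
    learning_rate=2e-5,              # reduced from the 5e-4 used for epoch 1
    num_train_epochs=1,              # one further epoch on top of the checkpoint
    per_device_train_batch_size=64,  # inferred: 360,000 examples / 5,625 steps per epoch
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
print(trainer.evaluate())  # validation loss after the additional epoch
```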

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64 (inferred from 360,000 training examples over 5,625 steps per epoch)
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log        | 1.0   | 5625 | 1.6646          | 12.5566 |

```
TrainOutput(global_step=5625, training_loss=1.8157017361111112, metrics={'train_runtime': 11137.6996, 'train_samples_per_second': 32.323, 'train_steps_per_second': 0.505, 'total_flos': 2.056934156746752e+16, 'train_loss': 1.8157017361111112, 'epoch': 1.0})
```
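
The card does not say which BLEU implementation produced the 12.5566 score. A minimal sketch of one way to score generated translations, assuming sacrebleu via the evaluate library:

```python
import evaluate
from transformers import pipeline

metric = evaluate.load("sacrebleu")
translator = pipeline("translation", model="chunwoolee0/ke_t5_base_bongsoo_ko_en_epoch2")

# Hypothetical one-sentence example; a real evaluation would loop over the validation split.
prediction = translator("나는 습관적으로 점심식사 후에 산책을 한다.")[0]["translation_text"]
references = [["I habitually take a walk after lunch."]]
print(metric.compute(predictions=[prediction], references=references)["score"])
```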

### Framework versions