Pretrained model: https://huggingface.co/Salesforce/codet5-small
Fine-tuning dataset: https://huggingface.co/datasets/code_x_glue_ct_code_to_text (Python split only)
Official fine-tuned inference checkpoint (for comparison; note this is the base size, not small): https://storage.googleapis.com/sfr-codet5-data-research/finetuned_models/summarize_python_codet5_base.bin
For fine-tuning process metrics, see this W&B report: https://wandb.ai/stmnk/CodeT5/reports/Code-T5-code_x_glue_code2text--VmlldzoxMjM4MTUy
<!-- <iframe src="https://wandb.ai/stmnk/CodeT5/reports/Code-T5-code_x_glue_code2text--VmlldzoxMjM4MTUy" style="border:none;height:1024px;width:100%"> -->
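A minimal inference sketch, assuming the Hugging Face `transformers` library: it loads the pretrained `Salesforce/codet5-small` checkpoint linked above and generates a docstring-style summary for a toy Python snippet. Note that the pretrained checkpoint has not yet been fine-tuned on the CodeXGLUE code-to-text Python split, so its summaries will be rough until fine-tuning is applied; the toy `add` function is just an illustrative input.

```python
# Sketch: summarize a Python snippet with the pretrained CodeT5-small model.
# This uses the pretrained (not fine-tuned) checkpoint, so output quality
# will be limited compared to the official fine-tuned base checkpoint above.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-small")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-small")

code = "def add(a, b):\n    return a + b"  # toy example input
inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=32, num_beams=4)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```

For the actual fine-tuning run whose metrics appear in the W&B report, the same model and tokenizer would be trained on the `code_x_glue_ct_code_to_text` dataset (Python configuration) with the code as input and the docstring as target.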