
This is a distilled version of 'airesearch/wangchanberta-base-att-spm-uncased': a 62M-parameter model trained on the Assorted Thai Texts corpus (4.8 GB) used for WangchanBERTa pre-training.

Please use the tokenizer from 'airesearch/wangchanberta-base-att-spm-uncased'.
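
A minimal usage sketch with the standard Hugging Face `transformers` API, pairing the original model's tokenizer with this distilled checkpoint (`this-repo/distilled-wangchanberta` is a placeholder; substitute this repository's actual model id):

```python
from transformers import AutoTokenizer, AutoModel

# Tokenizer comes from the original WangchanBERTa model, as noted above.
tokenizer = AutoTokenizer.from_pretrained(
    "airesearch/wangchanberta-base-att-spm-uncased"
)

# Model weights come from this repository (placeholder id below).
model = AutoModel.from_pretrained("this-repo/distilled-wangchanberta")

# Encode a Thai sentence and run a forward pass.
inputs = tokenizer("ภาษาไทยง่ายนิดเดียว", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```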