Additional pretrained BERT base Japanese finance

This is a BERT model pretrained on texts in the Japanese language.

The code for the pretraining is available at retarfi/language-pretraining.

Model architecture

The model architecture is the same as BERT base in the original BERT paper: 12 layers, 768 dimensions of hidden states, and 12 attention heads.
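As a quick sanity check, these dimensions can be read off the Transformers configuration object. A minimal sketch, assuming a hypothetical repository id for this model (replace it with the actual one):

import transformers

# Load the model configuration (hypothetical model id, for illustration only)
config = transformers.BertConfig.from_pretrained('izumi-lab/bert-base-japanese-fin-additional')
print(config.num_hidden_layers)    # 12 layers
print(config.hidden_size)          # 768-dimensional hidden states
print(config.num_attention_heads)  # 12 attention heads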

Training Data

The model is additionally pretrained on a financial corpus, starting from Tohoku University's BERT base Japanese model (cl-tohoku/bert-base-japanese).

The financial corpus consists of two corpora and contains approximately 27M sentences in total.

Tokenization

You can use the tokenizer from Tohoku University's BERT base Japanese model (cl-tohoku/bert-base-japanese):

import transformers

tokenizer = transformers.BertJapaneseTokenizer.from_pretrained('cl-tohoku/bert-base-japanese')
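For illustration, a minimal usage sketch follows; the example sentence is arbitrary, and the MeCab-based tokenizer typically also requires the fugashi and ipadic packages to be installed:

import transformers

tokenizer = transformers.BertJapaneseTokenizer.from_pretrained('cl-tohoku/bert-base-japanese')

# Split an arbitrary Japanese sentence into subword tokens
tokens = tokenizer.tokenize('決算短信を発表した。')
print(tokens)

# Convert the same sentence into model-ready input IDs ([CLS]/[SEP] added automatically)
print(tokenizer.encode('決算短信を発表した。'))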

Training

The model is trained with the same configuration as BERT base in the original BERT paper: 512 tokens per instance, 256 instances per batch, and 1M training steps.
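For reference, the reported settings can be collected in a plain dictionary. This is only an illustrative summary of the numbers above, not the configuration format used by retarfi/language-pretraining:

# Reported pretraining hyperparameters (illustrative summary only)
pretraining_settings = {
    'max_seq_length': 512,        # tokens per instance
    'batch_size': 256,            # instances per batch
    'training_steps': 1_000_000,  # 1M training steps
}
print(pretraining_settings)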

Citation

@article{Suzuki-etal-2023-ipm,
  title = {Constructing and analyzing domain-specific language model for financial text mining},
  author = {Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
  journal = {Information Processing \& Management},
  volume = {60},
  number = {2},
  pages = {103194},
  year = {2023},
  doi = {10.1016/j.ipm.2022.103194}
}

Licenses

The pretrained models are distributed under the terms of the Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0) license.

Acknowledgments

This work was supported by JSPS KAKENHI Grant Number JP21K12010 and JST-Mirai Program Grant Number JPMJMI20B1.