Greek Media BERT (uncased)

This model is a domain-adapted version of nlpaueb/bert-base-greek-uncased-v1, further pretrained on Greek media-centric data.
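Since usage details are still pending, the following is only a minimal sketch of loading the model for masked-token prediction with the transformers library. The model id "greek-media-bert-base-uncased" is a placeholder assumption (substitute the actual Hub repository name of this model), and the example sentence is purely illustrative.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Placeholder model id; replace with the actual Hub repository name.
model_name = "greek-media-bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Predict the masked token in a Greek media-style sentence.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask("Η κυβέρνηση ανακοίνωσε νέα [MASK] για την οικονομία."):
    print(prediction["token_str"], round(prediction["score"], 4))
```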

Model description

Details will be updated soon.

Intended uses & limitations

Details will be updated soon.

Training and evaluation data

Details will be updated soon.

Training procedure

Training hyperparameters

Details will be updated soon.
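Until the exact procedure is published, the sketch below shows only the standard continued-pretraining recipe (masked language modeling on top of nlpaueb/bert-base-greek-uncased-v1 with the Hugging Face Trainer), which this model card implies but does not confirm. The corpus file name, masking probability, batch size, epoch count, and learning rate are all illustrative assumptions, not the values used for this model.

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "nlpaueb/bert-base-greek-uncased-v1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForMaskedLM.from_pretrained(base_model)

# Hypothetical corpus file standing in for the (undisclosed) Greek media data.
dataset = load_dataset("text", data_files={"train": "greek_media_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Standard 15% token masking; the ratio actually used is not documented.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="greek-media-bert",
    per_device_train_batch_size=16,  # illustrative values, not the reported ones
    num_train_epochs=3,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```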

Training results

Details will be updated soon.

Framework versions

Details will be updated soon.

Citation

The model was officially released with the article "PIMA: Parameter-Shared Intelligent Media Analytics Framework for Low Resource Languages" by Dimitrios Zaikis, Nikolaos Stylianou and Ioannis Vlahavas, in the Special Issue "New Techniques of Machine Learning and Deep Learning in Text Classification", Applied Sciences, 2023 (https://www.mdpi.com/2076-3417/13/5/3265).

If you use the model, please cite the following:

@Article{app13053265,
  AUTHOR = {Zaikis, Dimitrios and Stylianou, Nikolaos and Vlahavas, Ioannis},
  TITLE = {PIMA: Parameter-Shared Intelligent Media Analytics Framework for Low Resource Languages},
  JOURNAL = {Applied Sciences},
  VOLUME = {13},
  YEAR = {2023},
  NUMBER = {5},
  ARTICLE-NUMBER = {3265},
  URL = {https://www.mdpi.com/2076-3417/13/5/3265},
  ISSN = {2076-3417},
  DOI = {10.3390/app13053265}
}