Tags: kenlm, perplexity, n-gram, kneser-ney, bigscience

Fork of edugp/kenlm

KenLM models

This repo contains several KenLM models trained on different tokenized datasets and languages.
KenLM models are probabilistic n-gram language models. One common use case for these models is fast perplexity estimation for filtering or sampling large datasets. For example, one could use a KenLM model trained on French Wikipedia to run inference on a large dataset and filter out samples that are very unlikely to appear on Wikipedia (high perplexity), or very simple, non-informative sentences that may appear repeatedly (low perplexity).
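
As a minimal sketch of such a filtering pass, using the KenlmModel helper shown in the example below (the thresholds here are purely illustrative, not recommended values):

from model import KenlmModel

# Load a model trained on French Wikipedia (same loader as in the example below)
model = KenlmModel.from_pretrained("wikipedia", "fr")

# Illustrative thresholds: keep only text whose perplexity falls in a
# "Wikipedia-like" band, dropping very unusual text (high perplexity) as well
# as trivial, repetitive text (low perplexity)
MIN_PPL, MAX_PPL = 30.0, 1_000.0

def keep(document: str) -> bool:
    perplexity = model.get_perplexity(document)
    return MIN_PPL <= perplexity <= MAX_PPL

documents = ["Paris est la capitale de la France.", "ok ok ok ok ok ok ok ok"]
filtered = [doc for doc in documents if keep(doc)]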

The models have been trained using some of the preprocessing steps from cc_net, in particular replacing numbers with zeros and normalizing punctuation. It is therefore important to keep the default values of the parameters lower_case, remove_accents, normalize_numbers and punctuation when using the pre-trained models, so that the same pre-processing is replicated at inference time.
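
For illustration only (this is not the repo's exact code, which applies the normalization internally when the default parameter values are kept), the number and punctuation steps look roughly like this:

import re

# Rough illustration of two cc_net-style normalization steps
UNICODE_PUNCT = {"“": '"', "”": '"', "’": "'", "«": '"', "»": '"', "–": "-"}

def replace_numbers(text: str) -> str:
    # Every digit becomes a zero, e.g. "born in 1987" -> "born in 0000"
    return re.sub(r"\d", "0", text)

def normalize_punctuation(text: str) -> str:
    # Map common Unicode punctuation to ASCII equivalents (partial table)
    return "".join(UNICODE_PUNCT.get(char, char) for char in text)

print(normalize_punctuation(replace_numbers("« Né en 1987 »")))
# -> " Né en 0000 "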

Dependencies
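
The original section does not list them explicitly; presumably the KenLM Python bindings and SentencePiece (used for the tokenized model variants) are needed. A typical installation, assuming no further requirements, would be:

pip install https://github.com/kpu/kenlm/archive/master.zip
pip install sentencepiece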

Example:

from model import KenlmModel


# Load model trained on English Wikipedia
model = KenlmModel.from_pretrained("wikipedia", "en")

# Get perplexity
model.get_perplexity("I am very perplexed")
# 341.3 (low perplexity, since the sentence style is formal and has no grammar mistakes)

model.get_perplexity("im hella trippin")
# 46793.5 (high perplexity, since the sentence is colloquial and contains grammar mistakes)

In the example above we see that, since Wikipedia is a collection of encyclopedic articles, a KenLM model trained on it will naturally give lower perplexity scores to sentences with formal language and no grammar mistakes than to colloquial sentences with grammar mistakes.