
# Fast Inference with CTranslate2

Speed up inference by 2x-8x using int8 inference in C++.

This is a quantized version of [Helsinki-NLP/opus-mt-fr-en](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en).

```bash
pip install "hf-hub-ctranslate2>=1.0.0" "ctranslate2>=3.13.0"
```

Converted using:

```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-fr-en --output_dir /home/michael/tmp-ct2fast-opus-mt-fr-en --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
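
The same conversion can also be run from Python via ctranslate2's `TransformersConverter`; a minimal sketch (the output directory and the list of copied files are illustrative choices, not the exact ones used above):

```python
# Sketch: programmatic equivalent of the CLI conversion above.
# "ct2fast-opus-mt-fr-en" is an arbitrary example output directory.
import ctranslate2

converter = ctranslate2.converters.TransformersConverter(
    "Helsinki-NLP/opus-mt-fr-en",
    # Copy the tokenizer files alongside the converted weights.
    copy_files=["tokenizer_config.json", "source.spm", "target.spm", "vocab.json"],
)
converter.convert("ct2fast-opus-mt-fr-en", quantization="float16", force=True)
```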

The checkpoint is compatible with [ctranslate2](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2):

```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer

model_name = "michaelfeil/ct2fast-opus-mt-fr-en"
# Use TranslatorCT2fromHfHub for encoder-decoder models (like this one);
# GeneratorCT2fromHfHub is for decoder-only models.
model = TranslatorCT2fromHfHub(
    # load in int8 on CUDA
    model_name_or_path=model_name,
    device="cuda",
    compute_type="int8_float16",
    tokenizer=AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-fr-en"),
)
outputs = model.generate(
    text=["How do you call a fast Flan-ingo?", "User: How are you doing?"],
)
print(outputs)
```
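
The wrapper above is a thin layer over ctranslate2 itself, so the converted checkpoint can also be used with the plain ctranslate2 API. A sketch, assuming the checkpoint is fetched from the Hub first (the sample sentence is illustrative):

```python
# Sketch: using ctranslate2 directly, without the hf-hub-ctranslate2 wrapper.
import ctranslate2
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer

# Download the converted checkpoint (cached locally after the first call).
model_path = snapshot_download("michaelfeil/ct2fast-opus-mt-fr-en")
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-fr-en")

# device="cuda" requires a CUDA-capable GPU; use device="cpu" otherwise.
translator = ctranslate2.Translator(
    model_path, device="cuda", compute_type="int8_float16"
)

source = "Comment appelle-t-on un flamant rose rapide ?"
# CTranslate2 consumes token strings, not token ids.
source_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(source))
results = translator.translate_batch([source_tokens])
target_tokens = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target_tokens)))
```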

# Licence and other remarks

This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.

# Original description

## opus-mt-fr-en

## Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| newsdiscussdev2015-enfr.fr.en | 33.1 | 0.580 |
| newsdiscusstest2015-enfr.fr.en | 38.7 | 0.614 |
| newssyscomb2009.fr.en | 30.3 | 0.569 |
| news-test2008.fr.en | 26.2 | 0.542 |
| newstest2009.fr.en | 30.2 | 0.570 |
| newstest2010.fr.en | 32.2 | 0.590 |
| newstest2011.fr.en | 33.0 | 0.597 |
| newstest2012.fr.en | 32.8 | 0.591 |
| newstest2013.fr.en | 33.9 | 0.591 |
| newstest2014-fren.fr.en | 37.8 | 0.633 |
| Tatoeba.fr.en | 57.5 | 0.720 |
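
Scores of this kind can be reproduced with the `sacrebleu` package; a minimal sketch with placeholder data (not the actual test sets):

```python
# Sketch: computing BLEU and chrF with sacrebleu (pip install sacrebleu).
# hypotheses/references below are illustrative placeholders.
import sacrebleu

hypotheses = ["The cat sits on the mat."]
# One reference stream, aligned with the hypotheses.
references = [["The cat is sitting on the mat."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
# sacrebleu 2.x reports chrF on a 0-100 scale; the table above uses 0-1.
print(f"BLEU: {bleu.score:.1f}  chr-F: {chrf.score / 100:.3f}")
```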