Examples

As you can see from the examples below, the output is far from ideal and falls well short of what a simple GPT/Llama 2 prompt produces without fine-tuning. Quality issues:

Example 1

Input

'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Ich mussen sehr rückwärts fahren.#### AI:Correct version of the sentence:

Output

Ich muss sehr zurückfahren. Repairs:

Example 2

Input

'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Er gehen dort, aber ich muss ander geneh.#### AI:Correct version of the sentence:

Output

Er geht dort, aber ich muss anders gehen. Repairs:

Example 3

Input

'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Habe ich das wirklich gemachen?#### AI:Correct version of the sentence:

Output

Habe ich das wirklich gemacht? Repairs:
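For reference, here is a minimal sketch of how the prompt format from these examples could be sent to the model with the transformers library. The model id and generation settings are placeholders, not values from this card, and the prompt string (including its original typos) is copied verbatim from the inputs above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model id -- substitute the actual checkpoint from this repository.
model_id = "your-username/your-grammar-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prompt template used in the examples above (kept verbatim, typos included).
sentence = "Ich mussen sehr rückwärts fahren."
prompt = (
    "### User: Check the gramma of the following sentence and list all error "
    f"and relevant corrections. Sentence:{sentence}#### AI:Correct version of the sentence:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)

# Print only the newly generated continuation, not the prompt itself.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```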

Training procedure

Trained on: 1x RTX A6000
Resources used: 30 GB RAM, 47 GB GPU memory, 130 GB disk
8 epochs, approx. 25 minutes, loss: 0.36
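As a rough sketch, the epoch count above maps onto a standard transformers Trainer configuration like the one below; only num_train_epochs comes from this card, and every other value is an assumption used for illustration.

```python
from transformers import TrainingArguments

# Hypothetical training arguments -- only num_train_epochs=8 is taken from this card.
training_args = TrainingArguments(
    output_dir="./grammar-checker",   # placeholder output path
    num_train_epochs=8,               # 8 epochs, as listed above
    per_device_train_batch_size=4,    # assumed; not stated in the card
    gradient_accumulation_steps=4,    # assumed; not stated in the card
    learning_rate=2e-4,               # assumed; not stated in the card
    logging_steps=10,
    save_strategy="epoch",
)
```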

The following bitsandbytes quantization config was used during training:
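The exact config values are not reproduced in this section. As a hedged illustration, a typical 4-bit bitsandbytes setup for QLoRA-style training looks like the following; all values are assumptions, not the actual configuration used for this model.

```python
import torch
from transformers import BitsAndBytesConfig

# Example 4-bit quantization config -- illustrative values only,
# not the config actually used during training of this model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```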

Framework versions