Examples

As the examples below show, the output is far from ideal, and falls well short of what a plain GPT/LLaMA 2 prompt achieves without fine-tuning.

Quality issues:

Interesting observation: the LLM responds perfectly to example 1, which appears verbatim in the training data.

Example 1

Input

### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Mir es geht gut, danke#### AI:Correct version of the sentence:

Output

"Mir geht es gut, danke."

Repairs:

Example 2

Input

### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Er gehen dort, aber ich muss ander geneh.#### AI:Correct version of the sentence:

Output

Er macht dort dorte, aber ich muss einmal dorte.

Repairs:
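Both examples use the same prompt template. A minimal sketch of assembling it (the helper name `build_prompt` is hypothetical; the template text, including its original typos, is copied verbatim from the examples above):

```python
def build_prompt(sentence: str) -> str:
    """Assemble the grammar-check prompt used in the examples.

    The wording (including "gramma" and "all error") is kept exactly
    as it appears in the training prompts, since the model was
    fine-tuned on this literal template.
    """
    return (
        "### User: Check the gramma of the following sentence and list "
        f"all error and relevant corrections. Sentence:{sentence}"
        "#### AI:Correct version of the sentence:"
    )

prompt = build_prompt("Mir es geht gut, danke")
```

The string returned by `build_prompt` is what the model sees as input; generation then continues after the trailing `Correct version of the sentence:` marker.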

Training procedure

The following bitsandbytes quantization config was used during training:

Framework versions