Training procedure

We decided to release ARIA 7B, a model trained with Mistral 7B Instruct as the base model. We addressed the language challenge with a dataset focused on the French language.

The fine-tuning was performed on NVIDIA GPUs.

The following bitsandbytes quantization config was used during training:
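The exact configuration values are not reproduced in this card. As an illustration only, a typical bitsandbytes 4-bit quantization setup for fine-tuning a 7B model (all values below are assumptions, not the ones used for ARIA 7B) looks like:

```python
# Illustrative sketch, not the actual ARIA 7B config:
# a common 4-bit NF4 setup used for QLoRA-style fine-tuning.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,         # quantize the quantization constants
    bnb_4bit_compute_dtype=torch.float16,   # dtype for matmuls at runtime
)
```

Such a config would then be passed as `quantization_config` to `AutoModelForCausalLM.from_pretrained` when loading the base model.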

Framework versions