Training procedure

This model contains LoRA (rank=64) adapter weights for use with Llama-2-7b-chat-hf. It was trained on a reformatted variant of the CaRB dataset for information-rich Open Information Extraction (Open IE).
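As a minimal sketch of how such adapter weights are typically loaded with the PEFT library (not the authors' exact code; the adapter repo id below is a placeholder):

```python
# Sketch: attach rank-64 LoRA adapter weights to the Llama-2 base model.
# "path/to/this-adapter" is a placeholder for this repository's id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"  # base model the adapter targets
adapter_id = "path/to/this-adapter"        # placeholder adapter id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_id)  # merges in the LoRA weights at inference time
model.eval()
```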

It produces open relation triples in the following enriched format:

<subj> ,, (<auxi> ###) <predicate> ,, (<prep1> ###) <obj1>, (<prep2> ###) <obj2>, ...
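One plausible reading of this template, assuming `,,` separates the three top-level fields, `###` separates an optional marker (auxiliary or preposition) from its phrase, and `,` separates multiple objects, is sketched by the hypothetical parser below; the example string and all field names are illustrative, not taken from the model's actual output:

```python
# Hypothetical parser for the enriched triple format above, under the
# assumptions stated in the lead-in. Naively splits objects on ",", so
# commas inside phrases would break it.

def parse_triple(line: str) -> dict:
    subj, pred, objs = [part.strip() for part in line.split(",,")]

    def split_marker(field: str):
        # "marker ### phrase" -> (marker, phrase); no "###" -> (None, phrase)
        if "###" in field:
            marker, phrase = field.split("###", 1)
            return marker.strip(), phrase.strip()
        return None, field.strip()

    auxi, predicate = split_marker(pred)
    objects = [split_marker(o) for o in objs.split(",")]
    return {"subject": subj, "auxiliary": auxi,
            "predicate": predicate, "objects": objects}

# Illustrative input string (not real model output):
print(parse_triple("the meeting ,, was ### held ,, in ### Paris , on ### Monday"))
# {'subject': 'the meeting', 'auxiliary': 'was', 'predicate': 'held',
#  'objects': [('in', 'Paris'), ('on', 'Monday')]}
```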

For more details on how this model was trained, please see our GitHub repository.

Framework versions