Training procedure
This model contains LoRA (rank=64) adapter weights to be used with LLaMA2-7b-chat-hf. It was trained on a reformatted variant of the CaRB dataset for information-rich Open Information Extraction (Open IE).
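As a rough sketch (not an official snippet) of how LoRA adapter weights like these can be attached to the base model with PEFT; the repository id `your-org/this-adapter` below is a placeholder for this adapter's actual id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"   # base model the LoRA weights were trained against
adapter_id = "your-org/this-adapter"        # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the rank-64 LoRA adapter on top of the frozen base weights
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```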
The model produces open relation triples in the following enriched format:
```
<subj> ,, (<auxi> ###) <predicate> ,, (<prep1> ###) <obj1>, (<prep2> ###) <obj2>, ...
```
- `<subj>`: the subject of the triple
- `<auxi>`: the auxiliary of the triple (e.g. modal verbs, negations, temporal markers)
- `<predicate>`: the predicate of the triple
- objects:
  - `<prepX>`: the optional preposition corresponding to this object
  - `<objX>`: the object (this could include the direct objects, dative objects, as well as what was traditionally categorized as obliques)
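As a rough illustration of how this format can be consumed downstream, here is a minimal parsing sketch. It assumes the delimiters shown above (`,,` between the subject, predicate, and object slots; `###` after an optional auxiliary or preposition) appear literally in the generated text; the helper name `parse_triple` is ours, not part of the released code:

```python
def parse_triple(triple: str) -> dict:
    """Split one enriched Open IE triple into its labeled parts."""

    def split_marker(span: str):
        # "<marker> ### <head>" -> (marker, head); the marker is optional
        if "###" in span:
            marker, head = span.split("###", 1)
            return marker.strip(), head.strip()
        return None, span.strip()

    # ",," separates the subject, predicate, and object slots
    parts = [p.strip() for p in triple.split(",,")]
    subj = parts[0]
    aux, predicate = split_marker(parts[1]) if len(parts) > 1 else (None, "")
    obj_part = parts[2] if len(parts) > 2 else ""

    # Objects are listed after the predicate, separated by single commas;
    # this naive split will break if an object itself contains a comma.
    objects = []
    for obj in filter(None, (o.strip() for o in obj_part.split(","))):
        prep, head = split_marker(obj)
        objects.append({"prep": prep, "obj": head})

    return {"subj": subj, "aux": aux, "predicate": predicate, "objects": objects}


# Example (hypothetical model output):
# parse_triple("John ,, can ### buy ,, a book, for ### Mary")
# -> {'subj': 'John', 'aux': 'can', 'predicate': 'buy',
#     'objects': [{'prep': None, 'obj': 'a book'}, {'prep': 'for', 'obj': 'Mary'}]}
```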
For more details on how this model was trained, please refer to our GitHub repository.
Framework versions
- PEFT 0.5.0