Instruction-tuned GPT-NeoXT-20B model, trained on the Stanford Alpaca-2 Instruction Tuning dataset (52k examples; outputs from ChatGPT) using Colossal-AI
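As a rough illustration of the training data, Alpaca-style instruction-tuning records are typically rendered into a single training prompt. The field names (`instruction`, `input`, `output`) and the prompt template below follow the standard Alpaca schema and are an assumption about this dataset's exact layout:

```python
# Hypothetical sketch of Alpaca-style prompt formatting.
# The "instruction"/"input"/"output" field names and the template text
# follow the standard Alpaca convention; the actual files in
# iamplus/Instruction_Tuning may differ.

def format_alpaca_prompt(record):
    """Render one instruction-tuning record as a single training prompt."""
    if record.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{record['instruction']}\n\n"
            f"### Input:\n{record['input']}\n\n"
            f"### Response:\n{record['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{record['instruction']}\n\n"
        f"### Response:\n{record['output']}"
    )

example = {"instruction": "Name the capital of France.", "input": "", "output": "Paris."}
print(format_alpaca_prompt(example))
```

Records with an empty `input` field use the shorter template, which is the usual Alpaca convention for instruction-only examples.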

Base Model: togethercomputer/GPT-NeoXT-Chat-Base-20B (not fine-tuned on feedback data)

Training Details:

Dataset Details:

Dataset: iamplus/Instruction_Tuning

Files: