tiiuae/falcon-7b · Tags: code, instruct, instruct-code, logical-reasoning, Platypus2

We finetuned tiiuae/falcon-7b on the Open-Platypus dataset (garage-bAInd/Open-Platypus) for 3 epochs using MonsterAPI's no-code LLM finetuner.
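Instruction-tuned models are sensitive to prompt formatting. The card does not state the exact template used for this finetune, but Platypus-style datasets are commonly formatted with the Alpaca instruction template; the sketch below assumes that template and is illustrative only.

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Build an Alpaca-style instruction prompt (assumed template,
    not confirmed by the model card)."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# Example: a logical-reasoning style query
prompt = build_prompt("If all bloops are razzies and all razzies are lazzies, "
                      "are all bloops definitely lazzies?")
```

The resulting string would then be passed to the model's tokenizer and generation call as usual.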

About the Open-Platypus Dataset

Open-Platypus focuses on improving LLM logical reasoning skills and was used to train the Platypus2 models. It comprises several sub-datasets, including PRM800K, ScienceQA, SciBench, ReClor, and TheoremQA, among others. These were filtered using keyword search and Sentence Transformers to remove questions with a similarity above 80%. The dataset includes contributions under various licenses, such as MIT, Creative Commons, and Apache 2.0.
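The similarity filtering step above can be sketched as a greedy deduplication over sentence embeddings: keep a question only if its cosine similarity to every already-kept question is at or below the threshold. In practice the embeddings come from a Sentence Transformers model; the toy vectors below stand in for those embeddings, and the function names are illustrative, not from the actual pipeline.

```python
import numpy as np

def dedup_by_similarity(embeddings: np.ndarray, texts: list, threshold: float = 0.80) -> list:
    """Greedily drop texts whose embedding is too similar (cosine > threshold)
    to any previously kept text. Illustrative sketch, not the actual pipeline."""
    kept_idx = []
    for i in range(len(texts)):
        e = embeddings[i] / np.linalg.norm(embeddings[i])
        # Compare against every text we have already decided to keep
        if all(
            float(e @ (embeddings[j] / np.linalg.norm(embeddings[j]))) <= threshold
            for j in kept_idx
        ):
            kept_idx.append(i)
    return [texts[i] for i in kept_idx]

# Toy example: the second vector is nearly parallel to the first,
# so it is treated as a near-duplicate and dropped.
embs = np.array([[1.0, 0.0], [0.99, 0.14], [0.0, 1.0]])
unique = dedup_by_similarity(embs, ["q1", "q1-paraphrase", "q2"])
```

With real data, the toy vectors would be replaced by `SentenceTransformer(...).encode(texts)`.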

The finetuning run completed in ~3 hours and cost us only $14!

Hyperparameters & Run details:


license: apache-2.0