A further fine-tuned version of Locutusque/gpt2-large-conversational, trained on the MedText and pubmed_qa datasets.
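For reference, a minimal inference sketch using the Hugging Face transformers library. The repository id below is a placeholder for this model's actual id, and the `<|USER|>`/`<|ASSISTANT|>` prompt format is an assumption carried over from the base model's conversational convention; adjust both as needed.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Placeholder repo id -- replace with this model's actual Hugging Face id.
model_id = "your-username/gpt2-large-conversational-medical"
tokenizer = GPT2Tokenizer.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)

# Assumed conversational prompt format from the base model.
prompt = "<|USER|> What are the common symptoms of influenza? <|ASSISTANT|> "
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a response; pad_token_id is set because GPT-2 has no pad token.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```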
Evaluation
This model was evaluated with GPT-3.5: it was asked medical questions and achieved an average accuracy of 80%.