Model description
DialoGPT-medium fine-tuned on the EmpatheticDialogues dataset.
Training data
The base DialoGPT model is pretrained on a large corpus of conversational text; this checkpoint is additionally fine-tuned on Facebook's EmpatheticDialogues dataset, a corpus of 25k conversations grounded in emotional situations, created to facilitate training and evaluating empathetic dialogue systems. The dataset is available at https://github.com/facebookresearch/EmpatheticDialogues.
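The dataset is also mirrored on the Hugging Face Hub, so it can be inspected directly with the datasets library. The snippet below is a minimal sketch; the hub id "facebook/empathetic_dialogues" is an assumption based on the public mirror and may need adjusting.
>>> from datasets import load_dataset
>>> # Hub id is assumed ("facebook/empathetic_dialogues"); adjust if the mirror moves
>>> ds = load_dataset("facebook/empathetic_dialogues", split="train")
>>> print(ds[0])  # one utterance with its conversation id, emotion label, and situation context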
How to use
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> # Load the tokenizer and fine-tuned model from the Hugging Face Hub
>>> tokenizer = AutoTokenizer.from_pretrained("AliiaR/DialoGPT-medium-empathetic-dialogues")
>>> model = AutoModelForCausalLM.from_pretrained("AliiaR/DialoGPT-medium-empathetic-dialogues")
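Like other DialoGPT checkpoints, the model expects each user message to be followed by the end-of-sequence token, and the reply is the newly generated continuation. The sketch below shows single-turn generation; the sampling parameters are illustrative assumptions, not values tuned by the model author.
>>> # Encode a user message, appending the end-of-sequence token
>>> input_ids = tokenizer.encode("I feel so lonely today." + tokenizer.eos_token, return_tensors="pt")
>>> # Generate a reply; top_k/top_p values here are illustrative, not tuned
>>> output_ids = model.generate(
...     input_ids,
...     max_length=200,
...     pad_token_id=tokenizer.eos_token_id,
...     do_sample=True,
...     top_k=50,
...     top_p=0.95,
... )
>>> # Decode only the newly generated tokens (the model's reply)
>>> print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
For multi-turn chat, the usual DialoGPT pattern applies: concatenate the running conversation history (each turn terminated by the end-of-sequence token) and pass it as the prompt for the next generation step.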