Base model: roberta-large

Fine-tuned for persuadee donation detection on the Persuasion For Good dataset (Wang et al., 2019).

Given a complete dialogue from Persuasion For Good, the task is to predict a binary label: 1 if the persuadee agreed to donate, 0 otherwise.

Only persuadee utterances are given to the model for this task; persuader utterances are discarded. Each training example is the concatenation of all persuadee utterances in a single dialogue, separated by the </s> token (see the sketch after the examples below).

For example:

Input: <s>How are you?</s>Can you tell me more about the charity?</s>...</s>Sure, I'll donate a dollar.</s>...</s>

Label: 1

Input: <s>How are you?</s>Can you tell me more about the charity?</s>...</s>I am not interested.</s>...</s>

Label: 0
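
To make the input format concrete, here is a minimal sketch of how such inputs could be built with the Hugging Face tokenizer. The function name `build_persuadee_input`, the speaker labels, and the example dialogue are illustrative assumptions, not the original preprocessing code.

```python
# Illustrative sketch (not the original preprocessing code):
# concatenate a dialogue's persuadee utterances with the </s>
# separator and tokenize for roberta-large.
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-large")

def build_persuadee_input(dialogue):
    """dialogue: list of (speaker, utterance) pairs in turn order."""
    # Keep persuadee turns only; persuader turns are discarded.
    persuadee_utts = [utt for speaker, utt in dialogue if speaker == "persuadee"]
    # Joining on the sep token ("</s>") and letting the tokenizer add the
    # leading <s> and trailing </s> reproduces the format shown above.
    return tokenizer.sep_token.join(persuadee_utts)

dialogue = [
    ("persuader", "Hi! Have you heard of Save the Children?"),  # discarded
    ("persuadee", "How are you?"),
    ("persuadee", "Can you tell me more about the charity?"),
    ("persuadee", "Sure, I'll donate a dollar."),
]
text = build_persuadee_input(dialogue)
encoding = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
```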

The following dialogues were excluded:

Data Info:

Training Info:

Testing Info:
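
Testing details are not listed above. As a usage illustration only, the following sketch shows what inference with the fine-tuned checkpoint might look like, assuming a standard two-label sequence-classification head; the checkpoint path is a placeholder, not a real model id.

```python
# Illustrative inference sketch. Assumptions: a standard
# RobertaForSequenceClassification head with two labels; the
# checkpoint path below is a placeholder.
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizerFast

model_path = "path/to/fine-tuned-checkpoint"  # placeholder
tokenizer = RobertaTokenizerFast.from_pretrained(model_path)
model = RobertaForSequenceClassification.from_pretrained(model_path)
model.eval()

# Persuadee utterances joined with </s>, as described above.
text = "How are you?</s>Can you tell me more about the charity?</s>Sure, I'll donate a dollar."
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
prediction = logits.argmax(dim=-1).item()  # 1 = donation predicted, 0 = no donation
```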