
# Model Card for portuguese-nli-3-labels

<!-- Provide a quick summary of what the model is/does. -->

This is an XLM-RoBERTa-base model fine-tuned on 5K (premise, hypothesis) sentence pairs from the ASSIN (Avaliação de Similaridade Semântica e Inferência Textual) corpus. The original reference papers are Unsupervised Cross-Lingual Representation Learning at Scale and ASSIN: Avaliação de Similaridade Semântica e Inferência Textual, respectively. This model is suitable for Portuguese (from Brazil or Portugal).

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

### Model Sources

<!-- Provide the basic links for the model. -->

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

This fine-tuned version of XLM-RoBERTa-base performs Natural Language Inference (NLI), which is a text classification task. It classifies sentence pairs of the form (premise, hypothesis) into one of the following classes: ENTAILMENT, PARAPHRASE or NONE. Salvatore's definition [1] of ENTAILMENT is assumed to be the same as the one found in the ASSIN labels on which this model was trained.

PARAPHRASE and NONE are not defined in [1]. Therefore, it is assumed that in this model's training set, given a pair of sentences (premise, hypothesis), hypothesis is a PARAPHRASE of premise if premise is an ENTAILMENT of hypothesis and vice versa. If (premise, hypothesis) has neither an ENTAILMENT nor a PARAPHRASE relationship, the pair is classified as NONE.

<!-- <div id="assin_function">

Definition 1. Given a pair of sentences $(premise, hypothesis)$, let $\hat{f}^{(\text{xlmr-base})}$ be the fine-tuned model's inference function:

$$ \hat{f}^{(\text{xlmr-base})}(premise, hypothesis) = \begin{cases} ENTAILMENT, & \text{if $premise$ entails $hypothesis$}\\ PARAPHRASE, & \text{if $premise$ entails $hypothesis$ and $hypothesis$ entails $premise$}\\ NONE, & \text{otherwise} \end{cases} $$ </div>

The $(premise, hypothesis)$ entailment definition used is the same as the one found in Salvatore's paper [1].-->

<!-- ## Bias, Risks, and Limitations

This section is meant to convey both technical and sociotechnical limitations. -->

### Demo

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model_path = "giotvr/portuguese-nli-3-labels"
premise = "As mudanças climáticas são uma ameaça séria para a biodiversidade do planeta."
hypothesis = "A biodiversidade do planeta é seriamente ameaçada pelas mudanças climáticas."

# Load the tokenizer and the fine-tuned model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_path, use_auth_token=True)
model = AutoModelForSequenceClassification.from_pretrained(model_path, use_auth_token=True)

# Encode the (premise, hypothesis) pair as a single input.
input_pair = tokenizer(premise, hypothesis, return_tensors="pt", padding=True, truncation=True)

# Run inference and turn the logits into class probabilities, sorted from most
# to least likely class.
with torch.no_grad():
    logits = model(**input_pair).logits
probs = torch.nn.functional.softmax(logits, dim=-1)
probs, sorted_indices = torch.sort(probs, descending=True)
for i, score in enumerate(probs[0]):
    print(f"Class {sorted_indices[0][i]}: {score.item():.4f}")
```

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

This model should be used for scientific research purposes only; it has not been tested in production environments.

<!-- ## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed] -->

## Fine-Tuning Details

### Fine-Tuning Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->


This is a fine-tuned version of XLM-RoBERTa-base trained on the ASSIN (Avaliação de Similaridade Semântica e Inferência Textual) dataset. ASSIN is a corpus of Portuguese premise/hypothesis sentence pairs annotated for detecting an entailment, paraphrase or neutral relationship between the members of each pair. The corpus has three subsets: ptbr (Brazilian Portuguese), ptpt (European Portuguese) and full (the union of the two). The full subset has 10k sentence pairs, equally distributed between the ptbr and ptpt subsets.
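For illustration, the subsets can be loaded from the Hugging Face Hub roughly as in the sketch below; the dataset id `assin` and the configuration names `ptbr`, `ptpt` and `full` are assumptions based on the description above and should be checked against the Hub.

```python
# Illustrative sketch: loading the three ASSIN configurations from the
# Hugging Face Hub. The dataset id "assin" and the configuration names are
# assumptions based on this card's description of the corpus.
from datasets import load_dataset

for config in ("ptbr", "ptpt", "full"):
    dataset = load_dataset("assin", config)
    print(config, {split: len(dataset[split]) for split in dataset})
```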

### Fine-Tuning Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

The model's fine-tuning procedure can be summarized in three main consecutive steps:

<ol type="i">
  <li>Data Processing: ASSIN's train and validation splits were loaded from the Hugging Face Hub and then preprocessed (see the sketch below);</li>
  <li>Hyperparameter Tuning: XLM-RoBERTa-base's hyperparameters were chosen with the help of the Weights & Biases API, which was used to track the results and upload the fine-tuned models;</li>
  <li>Final Model Loading and Testing: the model's performance was evaluated on different datasets and with different metrics, which will be described in more detail in the upcoming paper.</li>
</ol>
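A minimal sketch of the data-processing step is shown below. It assumes the corpus is available on the Hub as `assin` and that its class-label column is called `entailment_judgement`, as described in this card; the actual column name should be verified against the dataset's schema.

```python
# Minimal data-processing sketch (assumptions: Hub id "assin" and class-label
# column "entailment_judgement"; verify both against the dataset's schema).
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
dataset = load_dataset("assin", "full")

# transformers' Trainer and its default data collator expect the class-label
# column to be called "label", so the original column is renamed.
dataset = dataset.rename_column("entailment_judgement", "label")

# Tokenize each (premise, hypothesis) pair so it can be fed to the model.
dataset = dataset.map(
    lambda batch: tokenizer(batch["premise"], batch["hypothesis"], truncation=True),
    batched=True,
)
```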

<!-- ##### Column Renaming The Hugging Face's transformers module's DataCollator used by its Trainer requires that the class label column of the collated dataset to be called label. ASSIN's class label column for each hypothesis/premise pair is called entailment_judgement. Therefore, as the first step of the data preprocessing pipeline the column entailment_judgement was renamed to label so that the Hugging Face's transformers module's Trainer could be used. -->

#### Hyperparameter Tuning

<!-- The model's training hyperparameters were chosen according to the following definition:

<div id="hyperparameter_tuning">

Definition 2. Let $Hyperparams = \{h : h \text{ is a hyperparameter of } \hat{f}^{(\text{xlmr-base})}\}$ and let $\hat{f}^{(\text{xlmr-base})}$ be the model's inference function defined in Definition 1:

$$ Hyperparams = \arg\max_{hyp}\left(\text{eval\_acc}\left(\hat{f}^{(\text{xlmr-base})}_{hyp}, \text{assin\_validation}\right)\right) $$ </div> -->

The following hyperparameters were tested in order to maximize the evaluation accuracy.

The hyperparameter tuning experiments were run and tracked using the Weights & Biases API and can be found at this link.
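As a rough illustration of how such experiments could be defined and tracked, the sketch below sets up a Weights & Biases sweep; the search space, the project name and the `train_one_run` stub are hypothetical and do not correspond to the actual configuration used for this model.

```python
# Hypothetical Weights & Biases sweep sketch; the hyperparameter grid and the
# project name are illustrative assumptions, not the values used for this model.
import wandb

sweep_config = {
    "method": "grid",
    "metric": {"name": "eval_accuracy", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"values": [1e-5, 2e-5, 3e-5]},
        "per_device_train_batch_size": {"values": [16, 32]},
        "num_train_epochs": {"values": [3, 4]},
    },
}

def train_one_run():
    # Placeholder training function: in a real sweep it would fine-tune
    # XLM-RoBERTa-base with the values in wandb.config and log the validation
    # accuracy, e.g. run.log({"eval_accuracy": ...}).
    run = wandb.init()
    run.finish()

sweep_id = wandb.sweep(sweep_config, project="portuguese-nli-xlmr-base")
wandb.agent(sweep_id, function=train_one_run)
```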

#### Training Hyperparameters

The hyperparameter tuning performed yielded the following values:

## Evaluation

### ASSIN

Evaluating this model on ASSIN's test split is straightforward: the model was fine-tuned on ASSIN's training set and therefore predicts the same set of labels found in its test split.

### ASSIN2

<!-- Given a pair of sentences $(premise, hypothesis)$, $\hat{f}^{(xlmr_base)}(premise, hypothesis)$ can be equal to $PARAPHRASE, ENTAILMENT$ or $NONE$ as defined in Definition 1. -->

ASSIN2's test split's class-label column has only two possible values: ENTAILMENT and NONE. Therefore, some mapping must be performed before this model can be evaluated on ASSIN2's test split. More information on how this mapping is performed will be available in the upcoming paper (see a possible sketch below).
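Until then, a plausible sketch, based only on this card's description of PARAPHRASE as mutual entailment, would collapse PARAPHRASE predictions into ENTAILMENT; this is an illustrative assumption, not necessarily the mapping used in the paper.

```python
# Illustrative assumption only: collapse the three predicted labels into
# ASSIN2's two labels by treating PARAPHRASE as ENTAILMENT (a paraphrase
# entails its counterpart, per this card's description of PARAPHRASE).
three_to_two = {
    "ENTAILMENT": "ENTAILMENT",
    "PARAPHRASE": "ENTAILMENT",
    "NONE": "NONE",
}

print(three_to_two["PARAPHRASE"])  # -> ENTAILMENT
```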

### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. --> The model's performance metrics for each test dataset are presented separately. Accuracy, f1 score, precision and recall were the metrics used in every evaluation performed. These metrics are reported below; more information on them will be available in our ongoing research paper.
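For reference, the sketch below shows how these metrics could be computed with scikit-learn; the toy labels and the `weighted` averaging strategy are assumptions, since the card does not state which averaging was used.

```python
# Metric-computation sketch; y_true/y_pred are toy placeholders and
# average="weighted" is an assumption (the averaging strategy actually used
# is not stated in this card).
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = ["ENTAILMENT", "PARAPHRASE", "NONE", "NONE"]
y_pred = ["ENTAILMENT", "ENTAILMENT", "NONE", "PARAPHRASE"]

print("accuracy :", accuracy_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred, average="weighted"))
print("precision:", precision_score(y_true, y_pred, average="weighted"))
print("recall   :", recall_score(y_true, y_pred, average="weighted"))
```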

### Results

| test set | accuracy | f1 score | precision | recall |
|----------|----------|----------|-----------|--------|
| assin    | 0.89     | 0.89     | 0.89      | 0.89   |
| assin2   | 0.70     | 0.69     | 0.73      | 0.70   |

## Model Examination

<!-- Relevant interpretability work for the model goes here --> Some interpretability work is being done in order to understand the model's behavior; the details will be available in the previously mentioned paper.

<!--## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

<!-- ## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section.

BibTeX:

    @article{tcc_paper,
    author    = {Giovani Tavares and Felipe Ribas Serras and Renata Wassermann and Marcelo Finger},
    title     = {Modelos Transformer para Inferência de Linguagem Natural em Português},
    pages     = {x--y},
    year      = {2023}
    }
``` -->

## References

[1][Salvatore, F. S. (2020). Analyzing Natural Language Inference from a Rigorous Point of View (pp. 1-2).](https://www.teses.usp.br/teses/disponiveis/45/45134/tde-05012021-151600/publico/tese_de_doutorado_felipe_salvatore.pdf)

<!--[2][Andrade, G. T. (2023)  Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa  (train_assin_xlmr_base_results PAGES GO HERE)](https://linux.ime.usp.br/~giovani/)

[3][Andrade, G. T. (2023)  Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa (train_assin_xlmr_base_conclusions PAGES GO HERE)](https://linux.ime.usp.br/~giovani/) -->