# Model Card for Statement_Equivalence
<!-- Provide a quick summary of what the model is/does. -->
This model compares two statements and predicts whether they are semantically equivalent. It is the first BERT model I have fine-tuned, so there may be bugs. The labels should read equivalent/not-equivalent, but despite mapping the id2label variables they currently still display as LABEL_0/LABEL_1 in the inference widget. I may come back and fix this at a later date.
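Until then, the labels can be remapped locally when loading the model. A minimal workaround sketch, assuming the labels follow the GLUE MRPC convention (0 = not-equivalent, 1 = equivalent), which is an assumption rather than something recorded on this card:

```python
from transformers import AutoModelForSequenceClassification

# Assumption: labels follow the GLUE MRPC convention (0 = not-equivalent, 1 = equivalent)
model = AutoModelForSequenceClassification.from_pretrained(
    "MattStammers/Statement_Equivalence",
    id2label={0: "not-equivalent", 1: "equivalent"},
    label2id={"not-equivalent": 0, "equivalent": 1},
)
```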
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- Developed by: Matt Stammers
- Shared by [optional]: Matt Stammers
- Model type: BERT-Base-Uncased
- Language(s) (NLP): en
- License: MIT
- Finetuned from model [optional]: bert-base-uncased (fine-tuned on the GLUE MRPC task)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- Repository: https://huggingface.co/MattStammers/Statement_Equivalence
- Paper [optional]: N/A
- Demo [optional]: N/A
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Try the model directly in the hosted inference widget on the model page.
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
The model is intended for standalone use; it has not been tested as a component of a larger ecosystem or app.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The model will not work well on very complex sentences, and it cannot be used to compare more than three statements.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Biases inherent in the GLUE dataset also apply here.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Treat outputs with caution; do not be surprised if unusual results are obtained.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="MattStammers/Statement_Equivalence")

# Sentence pairs are passed as a dict with "text" and "text_pair" keys
print(pipe({"text": "I like you.", "text_pair": "I love you."}))

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("MattStammers/Statement_Equivalence")
model = AutoModelForSequenceClassification.from_pretrained("MattStammers/Statement_Equivalence")
```
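For inference without the pipeline, the two statements are tokenized together as a single pair. A minimal sketch (the example sentences are illustrative only):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("MattStammers/Statement_Equivalence")
model = AutoModelForSequenceClassification.from_pretrained("MattStammers/Statement_Equivalence")
model.eval()

# A sentence pair is encoded as one sequence, with token_type_ids separating the statements
inputs = tokenizer("I like you.", "I love you.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Labels currently display as LABEL_0/LABEL_1 (see the note at the top of this card)
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred], torch.softmax(logits, dim=-1).tolist())
```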
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
See the GLUE dataset (MRPC task): https://huggingface.co/datasets/glue
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
Sentence pairs are tokenized jointly so their similarity can be analysed; see the sketch below.
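A minimal tokenization sketch, assuming the GLUE MRPC column names (`sentence1`, `sentence2`); the exact preprocessing settings used are not recorded on this card:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("glue", "mrpc")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess(batch):
    # Encode each sentence pair jointly so BERT sees both statements in one sequence
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True)

tokenized = dataset.map(preprocess, batched=True)
```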
#### Training Hyperparameters
- Training regime: User defined (see the representative sketch below) <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
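The exact hyperparameter values were user defined and are not recorded here. A representative fine-tuning sketch, reusing `tokenized` from the preprocessing sketch above; the learning rate, batch size, and epoch count are assumptions (typical GLUE values), not the values actually used:

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Hyperparameters below are assumptions (common GLUE values), not the ones actually used
args = TrainingArguments(
    output_dir="statement_equivalence",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],  # tokenized GLUE MRPC, from the sketch above
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```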
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
Not Relevant
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
MRPC: https://huggingface.co/datasets/SetFit/mrpc
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
N/A
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
None formally reported. The sketch below shows how accuracy and F1 (the standard GLUE MRPC metrics) could be computed.
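A hedged evaluation sketch on the MRPC validation split (this illustrates how one could evaluate the model, not the procedure actually used):

```python
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("MattStammers/Statement_Equivalence")
model = AutoModelForSequenceClassification.from_pretrained("MattStammers/Statement_Equivalence")
model.eval()

mrpc = load_dataset("glue", "mrpc", split="validation")
metric = evaluate.load("glue", "mrpc")  # reports accuracy and F1 for MRPC

preds = []
for row in mrpc:
    inputs = tokenizer(row["sentence1"], row["sentence2"], return_tensors="pt", truncation=True)
    with torch.no_grad():
        preds.append(model(**inputs).logits.argmax(dim=-1).item())

print(metric.compute(predictions=preds, references=mrpc["label"]))
```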
### Results
See the evaluation results reported on the model page.
#### Summary
See the results above.
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
Model outputs should be interpreted with user discretion.
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- Hardware Type: NVIDIA T600 GPU
- Hours used: 0.1
- Cloud Provider: N/A
- Compute Region: N/A
- Carbon Emitted: <1 g CO2eq
## Technical Specifications [optional]
### Model Architecture and Objective
BERT (bert-base-uncased) fine-tuned for sentence-pair sequence classification.
### Compute Infrastructure
Requires less than 4 GB of GPU memory to run quickly.
#### Hardware
NVIDIA T600
#### Software
Python and PyTorch, with the Hugging Face transformers library.
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
BibTeX:
N/A
APA:
N/A
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
N/A
## More Information [optional]
More information can be made available on request.
## Model Card Authors [optional]
Matt Stammers
## Model Card Contact
Matt Stammers