
Model Card for distilroberta-base Fine-Tuned on RACE

This model was fine-tuned on RACE for multiple-choice question answering, framed as text classification over the candidate answers. The base model was distilroberta-base (https://huggingface.co/distilroberta-base).

The model was trained using the code from https://github.com/zphang/lrqa. Please refer to that repository and cite its authors.
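
Concretely, casting multiple choice as text classification means pairing the passage and question with each candidate option and scoring every pair; the predicted answer is the highest-scoring option. The snippet below is only an illustration of that conversion; the field names follow the RACE dataset on the Hugging Face Hub and are an assumption, not something documented in this card.

```python
# Sketch of how one RACE item becomes a set of scored text pairs.
# Field names (article, question, options, answer) follow the RACE
# dataset on the Hugging Face Hub; treat them as an assumption.
example = {
    "article": "The sun rises in the east and sets in the west.",
    "question": "Where does the sun set?",
    "options": ["In the north", "In the south", "In the east", "In the west"],
    "answer": "D",  # RACE labels the gold option "A"-"D"
}

# One (passage + question, option) pair per candidate answer; a fine-tuned
# model assigns each pair a score and the top-scoring option is predicted.
pairs = [
    (f"{example['article']} {example['question']}", option)
    for option in example["options"]
]
```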

Model Details

Model Description

A distilroberta-base model fine-tuned on the RACE reading-comprehension dataset for multiple-choice question answering, cast as text classification over the candidate answers. Training used the code from the lrqa repository linked below.

Model Sources

Base model: https://huggingface.co/distilroberta-base
Training code: https://github.com/zphang/lrqa

Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

How to Get Started with the Model

Use the code below to get started with the model.

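This card does not give the checkpoint's repository id, so the snippet below is a minimal sketch: the model id is a hypothetical placeholder, and it assumes the checkpoint is compatible with transformers' AutoModelForMultipleChoice. If the lrqa training code saved a different head, adjust the loading class accordingly.

```python
# Minimal sketch; "your-username/distilroberta-base-race" is a hypothetical
# placeholder for this model's actual Hub id.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "your-username/distilroberta-base-race"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

article = "The sun rises in the east and sets in the west."
question = "Where does the sun set?"
options = ["In the north", "In the south", "In the east", "In the west"]

# Pair the passage + question with every candidate option.
enc = tokenizer(
    [f"{article} {question}"] * len(options),
    options,
    padding=True,
    truncation=True,
    return_tensors="pt",
)
# The multiple-choice head expects inputs of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)

print("Predicted option:", options[logits.argmax(dim=-1).item()])
```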

Training Details

Training Data

The model was fine-tuned on RACE (ReAding Comprehension from Examinations), a reading-comprehension dataset built from English exams for Chinese middle- and high-school students; each question has four candidate answers.
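
A minimal sketch for inspecting the data, assuming the publicly available RACE dataset on the Hugging Face Hub; which configuration, splits, and preprocessing were used for this fine-tune are not documented in this card.

```python
from datasets import load_dataset

# "all" merges the middle- and high-school subsets; "middle" and "high"
# are the other configurations. The configuration used to train this
# model is not documented in the card.
race = load_dataset("race", "all")

sample = race["train"][0]
print(sample["article"][:200])  # reading passage
print(sample["question"])       # question stem
print(sample["options"])        # four candidate answers
print(sample["answer"])         # gold label, "A"-"D"
```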

Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

Environmental Impact

Experiments were conducted on private infrastructure with a carbon efficiency of 0.178 kgCO₂eq/kWh. A cumulative 4 hours of computation was performed on an A100 PCIe 40/80GB GPU (TDP of 250 W). Total emissions are estimated at 0.18 kgCO₂eq, of which 0% was directly offset. Estimates were produced with the Machine Learning Impact calculator (https://mlco2.github.io/impact#compute) presented in Lacoste et al. (2019), "Quantifying the Carbon Emissions of Machine Learning".
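
This matches the stated figures: 4 h × 250 W = 1 kWh of energy, and 1 kWh × 0.178 kgCO₂eq/kWh ≈ 0.18 kgCO₂eq.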