Question Answering with DistilBERT

This repository contains code to train a Question Answering model using the DistilBERT architecture on the SQuAD dataset (Stanford Question Answering Dataset). The model is trained to answer questions based on a given context paragraph. The training process uses PyTorch, the Hugging Face transformers library, and the datasets library.

Prerequisites

Before running the code, make sure you have the following installed:

NVIDIA GPU (optional but recommended for faster training)
NVIDIA CUDA Toolkit (if using a GPU)
Python 3.x
Jupyter Notebook or another Python environment

Installation

You can set up your environment by running the following commands. The leading `!` runs a shell command from inside a Jupyter notebook; omit it when running in a regular terminal:

!nvidia-smi  # Check GPU availability
!pip install -q transformers datasets torch tqdm
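Once the packages are installed, you can confirm from Python whether PyTorch can see the GPU and pick a device accordingly. This is a minimal sketch assuming `torch` installed successfully; training falls back to the (much slower) CPU when no CUDA device is available:

```python
import torch

# Select the GPU if CUDA is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")
```

Passing this `device` to `model.to(device)` and moving each batch with `.to(device)` is the usual pattern in a PyTorch training loop.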

Usage

Training

To train the model, follow these steps:
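The individual steps are not spelled out in this section, but a core piece of SQuAD preprocessing is converting the dataset's character-level answer spans into the token-level start/end positions the model is trained to predict. Below is a minimal pure-Python sketch of that mapping. The `offsets` format mirrors what a fast Hugging Face tokenizer returns with `return_offsets_mapping=True`; the helper function itself is hypothetical, not part of any library:

```python
def char_span_to_token_span(offsets, answer_start, answer_end):
    """Map a character-level answer span to token-level (start, end) indices.

    offsets: list of (char_start, char_end) pairs, one per token, in the
    format produced by tokenizer(..., return_offsets_mapping=True).
    (Hypothetical helper for illustration only.)
    """
    token_start = token_end = None
    for i, (start, end) in enumerate(offsets):
        # First token whose span contains the answer's first character.
        if token_start is None and start <= answer_start < end:
            token_start = i
        # Token whose span contains the answer's last character.
        if start < answer_end <= end:
            token_end = i
    return token_start, token_end

# Example: context "The cat sat", tokenized as ["The", "cat", "sat"].
offsets = [(0, 3), (4, 7), (8, 11)]
# The answer "cat" covers characters 4..7 of the context.
print(char_span_to_token_span(offsets, 4, 7))  # -> (1, 1)
```

During actual training, these token positions become the `start_positions` and `end_positions` labels fed to the QA model, and answers falling outside the tokenized window are typically mapped to a sentinel position.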

Notes

Credits

License

This code is provided under the MIT License. Feel free to modify and use it as needed.