vision image-classification

Vision Transformer (base-sized model)

Vision Transformer (ViT) model trained on the Chaoyang dataset at resolution 384x384. A fixed 10% of the training set was held out as the validation set, and the checkpoint with the best validation loss was evaluated on the official test set.

Augmentation pipeline

To address class imbalance in our training set, we performed oversampling with repetition: we duplicated the images of the minority classes until all classes had an even distribution. This produced a larger training set, but ensured that the model was exposed to an equal number of samples from each class during training; a minimal sketch of this duplication step is shown after the augmentation description below. We verified that this approach did not lead to overfitting or other issues by using a validation set with the original class distribution. We built the augmentation pipeline for our experiments with the Albumentations library.
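
The exact transforms and parameters used for training are not reproduced in this card; the snippet below is only a minimal sketch of a geometric Albumentations pipeline of the kind described, and every transform and value in it is an assumption rather than the actual configuration.

import albumentations as A

# Illustrative sketch only: the actual transforms/parameters used for training
# are not listed in this card. This shows a typical geometric Albumentations
# pipeline that varies orientation/position while preserving visual content.
train_transform = A.Compose([
    A.HorizontalFlip(p=0.5),    # mirror left/right
    A.VerticalFlip(p=0.5),      # mirror top/bottom
    A.RandomRotate90(p=0.5),    # rotate by a random multiple of 90 degrees
    A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.05, rotate_limit=15, p=0.5),
])

# Albumentations operates on NumPy arrays of shape (H, W, C):
# augmented_image = train_transform(image=image_array)["image"]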

The transformations in our pipeline were chosen to augment the dataset with a variety of geometric transformations while preserving important visual features.
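
As a concrete illustration of the oversampling-with-repetition step described above, the sketch below duplicates minority-class samples until every class reaches the size of the largest class; the (image_path, label) representation is an assumption made for illustration and is not the original training code.

import random
from collections import defaultdict

# Illustrative sketch (not the original training code): repeat minority-class
# samples until every class is as large as the largest one.
def oversample_with_repetition(samples):
    """samples: list of (image_path, label) pairs."""
    by_class = defaultdict(list)
    for path, label in samples:
        by_class[label].append((path, label))
    target = max(len(items) for items in by_class.values())
    balanced = []
    for items in by_class.values():
        repeats, remainder = divmod(target, len(items))
        balanced.extend(items * repeats + random.sample(items, remainder))
    random.shuffle(balanced)
    return balanced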

Results

Our fine-tuned model sets a new state of the art on this dataset, outperforming the previous best results reported on Papers with Code for the dataset's reference paper. The results are summarized in the following table using macro-averaged metrics.

Model                       Accuracy   F1-Score   Precision   Recall
Baseline                    0.83       0.77       0.78        0.75
ViT-384-finetuned           0.86 ↑3%   0.81 ↑4%   0.82 ↑4%    0.80 ↑5%
ViT-384-from-scratch        0.78       0.74       0.74        0.74
ViT-224-distilled-resnet50  0.74       0.00       0.00        0.00
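
For reference, macro-averaged metrics give each of the 4 classes equal weight regardless of how many test samples it contains. A minimal scikit-learn sketch of such an evaluation (an assumption for illustration; the original evaluation script is not shown here) is:

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Illustrative sketch of macro-averaged evaluation (not the original script):
# every class contributes equally to precision/recall/F1.
y_true = [0, 1, 2, 3, 0, 1]   # placeholder ground-truth labels
y_pred = [0, 1, 2, 2, 0, 1]   # placeholder predictions
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
accuracy = accuracy_score(y_true, y_pred)
print(f"accuracy={accuracy:.2f} f1={f1:.2f} precision={precision:.2f} recall={recall:.2f}")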

How to use

Here is how to use this model to classify an image from the Chaoyang dataset into one of the four classes:

from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

# Example image; replace the URL with a Chaoyang histopathology image for a
# meaningful prediction (this COCO image is only a placeholder).
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Load the feature extractor and the fine-tuned classification model
feature_extractor = ViTFeatureExtractor.from_pretrained('Snarci/ViT-base-patch16-384-Chaoyang-finetuned')
model = ViTForImageClassification.from_pretrained('Snarci/ViT-base-patch16-384-Chaoyang-finetuned')

# Preprocess the image and run a forward pass
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# model predicts one of the 4 Chaoyang classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])

Currently, both the feature extractor and model support PyTorch. TensorFlow and JAX/Flax support are coming soon, and the API of ViTFeatureExtractor might change.

Training data

The ViT model was fine-tuned on the Chaoyang dataset at resolution 384x384, using a fixed 10% of the training set as the validation set.

Training procedure

Preprocessing

The exact details of image preprocessing during training/validation can be found here.

Images are resized/rescaled to the same resolution (384x384) during training and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
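
As an illustration, this preprocessing corresponds roughly to the following torchvision transforms (a sketch under the assumption of manual preprocessing; in practice the ViTFeatureExtractor in the usage example applies the same resize and normalization automatically):

from torchvision import transforms

# Equivalent preprocessing sketch: resize to 384x384, scale to [0, 1],
# then normalize each RGB channel with mean 0.5 and std 0.5.
preprocess = transforms.Compose([
    transforms.Resize((384, 384)),
    transforms.ToTensor(),  # rescales pixel values to [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
# pixel_values = preprocess(pil_image).unsqueeze(0)  # shape: (1, 3, 384, 384)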

License

This model is provided for non-commercial use only and may not be used in any research or publication without prior written consent from the author.