Model card for boldgpt_small_patch10.kmq

[Figure: example training predictions]

A Vision Transformer (ViT) trained on BOLD activation maps from NSD-Flat. Each map is split into patches, and each patch is quantized to a discrete token by k-means (KMeansTokenizer). The model was trained to autoregressively predict the next patch token under a shuffled patch order, using a cross-entropy loss.
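
As a rough sketch of this objective (not the repository's training loop; the tokenizer and model call signatures below are assumptions for illustration only):

import torch
import torch.nn.functional as F

def shuffled_next_token_loss(model, tokenizer, activity):
    # Quantize the patches of each activation map to discrete
    # k-means token ids: (B, N)
    tokens = tokenizer(activity)
    N = tokens.shape[1]

    # Visit the patches in a random order
    order = torch.randperm(N, device=tokens.device)
    shuffled = tokens[:, order]

    # Next-token logits at every position: (B, N, K); the real model also
    # conditions on the patch positions so it knows which patch comes next
    logits = model(shuffled, order)

    # Shifted cross-entropy: position i predicts the token at position i + 1
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        shuffled[:, 1:].reshape(-1),
    )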

Dependencies

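The usage example below assumes a working install of boldgpt along with PyTorch and the Hugging Face datasets library; exact packages and pinned versions are an assumption best taken from the repository itself.
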
Usage

import torch

from boldgpt.data import ActivityTransform
from boldgpt.models import create_model
from datasets import load_dataset

# Load the pretrained model
model = create_model("boldgpt_small_patch10.kmq", pretrained=True)
model.eval()

# NSD-Flat BOLD activation maps
dataset = load_dataset("clane9/NSD-Flat", split="train")
dataset.set_format("torch")

# Prepare a batch of one sample for the model
transform = ActivityTransform()
batch = dataset[:1]
batch["activity"] = transform(batch["activity"])

# output: (B, N + 1, K) predicted next-token logits
with torch.inference_mode():
    output, state = model(batch)

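To read off the model's most likely next token at each position (a minimal sketch; decoding token ids back to patch values through the KMeansTokenizer is left out, since the exact helper for that is an assumption):

# Greedy prediction: (B, N + 1) token ids over the K k-means codes
pred_tokens = output.argmax(dim=-1)
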
Reproducing