SLIM-Brain: A Data- and Training-Efficient Foundation Model for fMRI Data Analysis
Paper: arXiv:2512.21881 (published)
This repository contains the official implementation of SLIM-Brain, a two-stage, selective-compute pipeline for voxel-level fMRI representation learning. A lightweight global branch ranks temporal windows by informativeness; a high-capacity 4D Hiera-JEPA encoder processes only the selected windows, concentrating compute on brain voxels and drastically reducing memory.
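The selective-compute idea above can be sketched in a few lines. This is an illustrative toy only: the window length, the number of kept windows, and the variance-based score are placeholders standing in for the paper's lightweight global branch, not the repository's actual API.

```python
import numpy as np

def select_windows(fmri, window_len=16, k=4, score_fn=None):
    """Rank non-overlapping temporal windows by a cheap score, keep top-k.

    fmri: array of shape (T, X, Y, Z). score_fn defaults to temporal
    variance, a stand-in for the lightweight global branch's ranking.
    """
    T = fmri.shape[0]
    starts = range(0, T - window_len + 1, window_len)
    windows = [fmri[s:s + window_len] for s in starts]
    if score_fn is None:
        score_fn = lambda w: float(w.var())  # cheap informativeness proxy
    scores = np.array([score_fn(w) for w in windows])
    top = np.argsort(scores)[::-1][:k]  # indices of the k highest-scoring windows
    return [windows[i] for i in sorted(top)]  # keep temporal order
```

The heavy encoder then sees only `k * window_len` frames instead of all `T`, which is where the memory savings come from.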
Setting up the environment requires Python 3.13 and CUDA-compatible PyTorch for GPU acceleration:
# Create and activate the conda environment
conda create -n hiera-jepa python=3.13.5
conda activate hiera-jepa
# Install dependencies
pip install -r requirements.txt
The codebase is organized into modular components for easy navigation and extension:
hiera-jepa/
├── configs/          # YAML configuration files for training and model parameters
├── checkpoints/      # Saved model weights and training checkpoints
├── hiera/            # Hierarchical Vision Transformer backbone implementation
├── scripts/          # Bash....
├── finetune.py       # Downstream task training and feature extraction script
└── requirements.txt  # Python package dependencies
data_root/
├── ABIDE_train/
├── ABIDE_val/
├── HCP_val/
└── HCP_train/
    ├── 0010001/                           # Subject ID
    └── 0010002/
        ├── 0010002_run-1_0000-0199_1.npz  # Data chunk 1
        └── 0010002_run-1_0000-0199_2.npz  # Data chunk 2
task:
  csv: "/path/to/data_csv"
data:
  data_root: /path/to/data_root
  datasets: ["HCP"]
  mode: "directory"
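With `mode: "directory"`, the loader presumably discovers subjects by walking `data_root/<DATASET>_<split>/`, mirroring the layout shown earlier. A sketch of that discovery step (the function name and exact logic are assumptions, not the repository's code):

```python
from pathlib import Path

def list_subjects(data_root, datasets=("HCP",), split="train"):
    """Enumerate subject directories under data_root/<DATASET>_<split>/.

    Returns {dataset: [subject_id, ...]} for each dataset whose split
    directory exists; illustrative only.
    """
    subjects = {}
    for ds in datasets:
        split_dir = Path(data_root) / f"{ds}_{split}"
        if split_dir.is_dir():
            subjects[ds] = sorted(p.name for p in split_dir.iterdir() if p.is_dir())
    return subjects
```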
# Run downstream training
sh scripts/finetune.sh
Our pre-trained model weights can be found in the checkpoints directory: ./checkpoints/best_model.pth