---
license: mit
library_name: pytorch
tags:
  - medical
  - segmentation
  - stroke
  - neurology
  - mri
pipeline_tag: image-segmentation
---

# qSynth

A SynthSeg-style segmentation model trained on qMRI-constrained synthetic data derived from OASIS3 tissue maps and ATLAS binary lesion masks.

## Model Details

- **Name:** qSynth
- **Classes:** 0 (Background), 1 (Gray Matter), 2 (White Matter), 3 (Gray/White Matter Partial Volume), 4 (Cerebro-Spinal Fluid), 5 (Stroke)
- **Patch Size:** 192³
- **Voxel Spacing:** 1 mm isotropic (see the preprocessing sketch below)
- **Input Channels:** 1
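
The model expects a single-channel volume resampled to 1 mm isotropic resolution and sized to the 192³ patch. The helper below is a minimal sketch rather than part of the released code: `pad_or_crop_to_patch` is a hypothetical name, and it assumes the image has already been loaded and resampled to 1 mm with your preferred neuroimaging I/O library.

```python
import torch
import torch.nn.functional as F

def pad_or_crop_to_patch(volume: torch.Tensor, patch: int = 192) -> torch.Tensor:
    """Center-pad and/or center-crop a (1, 1, H, W, D) volume to (1, 1, patch, patch, patch)."""
    _, _, h, w, d = volume.shape
    # F.pad takes pairs for the last dimension first: (D_lo, D_hi, W_lo, W_hi, H_lo, H_hi).
    pads = []
    for size in (d, w, h):
        extra = max(patch - size, 0)
        pads.extend([extra // 2, extra - extra // 2])
    volume = F.pad(volume, pads)
    # Center-crop any dimension that is still larger than the patch size.
    _, _, h, w, d = volume.shape
    h0, w0, d0 = (h - patch) // 2, (w - patch) // 2, (d - patch) // 2
    return volume[:, :, h0:h0 + patch, w0:w0 + patch, d0:d0 + patch]

# Example: an arbitrarily sized 1 mm volume becomes a 192³ patch.
volume = torch.randn(1, 1, 160, 224, 170)
print(pad_or_crop_to_patch(volume).shape)  # torch.Size([1, 1, 192, 192, 192])
```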

## Usage

### Loading from Hugging Face Hub

```python
import torch
from synthstroke_model import SynthStrokeModel

# Load the model from Hugging Face Hub
model = SynthStrokeModel.from_pretrained("liamchalcroft/synthstroke-qsynth")

# Prepare your input (example shape: batch_size=1, channels=1, H, W, D)
input_tensor = torch.randn(1, 1, 192, 192, 192)

# Get predictions (with optional TTA for improved accuracy)
predictions = model.predict_segmentation(input_tensor, use_tta=True)

# Get tissue probability maps
background = predictions[:, 0]  # Background
gray_matter = predictions[:, 1]  # Gray Matter
white_matter = predictions[:, 2]  # White Matter
partial_volume = predictions[:, 3]  # Gray/White Matter PV
csf = predictions[:, 4]  # Cerebro-Spinal Fluid
stroke = predictions[:, 5]  # Stroke lesion

# Alternative: Get logits without TTA
logits = model.predict_segmentation(input_tensor, apply_softmax=False)
```
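
To turn the probability maps into a discrete segmentation, take the channel-wise argmax; the class indices match the list under Model Details. The snippet below is a generic post-processing sketch that continues from the example above, not an API of the released model.

```python
# Discrete label map: each voxel is assigned its most probable class (0-5).
label_map = predictions.argmax(dim=1)  # shape: (1, 192, 192, 192)

# Binary stroke-lesion mask (class 5), e.g. for a quick lesion-volume estimate.
stroke_mask = label_map == 5
lesion_voxels = int(stroke_mask.sum())
print(f"Lesion volume: {lesion_voxels} voxels (~{lesion_voxels / 1000:.1f} mL at 1 mm isotropic)")
```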

## Citation

If you use this model, please cite the arXiv preprint:

```bibtex
@misc{chalcroft2025domainagnosticstrokelesionsegmentation,
      title={Domain-Agnostic Stroke Lesion Segmentation Using Physics-Constrained Synthetic Data},
      author={Liam Chalcroft and Jenny Crinion and Cathy J. Price and John Ashburner},
      year={2025},
      eprint={2412.03318},
      archivePrefix={arXiv},
      primaryClass={eess.IV},
      url={https://arxiv.org/abs/2412.03318},
}
```

## License

MIT License - see the LICENSE file for details.