Model Card: Athena-0 (Histopathology Foundation Model)

Athena-0 is a ViT-G/14 foundation model trained on only ~115 million patches drawn from a diverse set of ~282,500 H&E-stained whole-slide images, emphasizing slide diversity over raw patch volume and achieving near state-of-the-art performance on both tile- and slide-level downstream tasks.

Model Details

  • Model type: ViT-G/14
  • Params: ~1.1B
  • Input: RGB patches, 224×224
  • Output: 1536-dim features (CLS)

Training Data

  • Slides: ~282,500 H&E WSIs
  • Patches: ~115 million
  • Diversity: Multi-country (25), multi-institution, 8 scanner models, broad organ coverage
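Taken together, these figures imply only a few hundred patches per slide, consistent with the stated emphasis on slide diversity over dense per-slide sampling. A quick back-of-the-envelope check:

```python
# Average patches per slide implied by the training-data figures above.
patches = 115_000_000
slides = 282_500
print(round(patches / slides))  # ~407 patches per slide
```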

Paper

Install

This model requires the official DINOv2 repository.
Clone it and add it to your PYTHONPATH:

git clone https://github.com/facebookresearch/dinov2.git
export PYTHONPATH="$PYTHONPATH:$(pwd)/dinov2"
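After cloning, a quick way to confirm the repository is visible to Python (a convenience check, not part of the official instructions):

```python
import importlib.util

# Check whether the dinov2 package resolves on the current PYTHONPATH
# (assumes the clone and export above were run in this shell).
found = importlib.util.find_spec("dinov2") is not None
print("dinov2 importable:", found)
```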

Usage

from huggingface_hub import snapshot_download
import torch
from torchvision import transforms
import sys

local_dir = snapshot_download(
    "PAICON-GmbH/Athena-0",
    revision="main",
    allow_patterns=["athena0.py","model_39999.safetensors","config.json"]
)
sys.path.insert(0, local_dir)

from athena0 import Athena0
model, transform = Athena0.from_pretrained(
    weights_path=f"{local_dir}/model_39999.safetensors",
    device="cuda",
)
model.eval()

# Dummy RGB tile for illustration; replace with a real 224x224 H&E patch
inp = torch.rand(3, 224, 224)
inp = transforms.ToPILImage()(inp)
inp = transform(inp).unsqueeze(0).to("cuda")

with torch.inference_mode():
    features = model(inp)

assert features.shape == (1, 1536)  # 1536-dim CLS embedding
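Downstream, the 1536-dim CLS features are typically compared or pooled across tiles. A minimal sketch of pairwise cosine similarity between tile embeddings, using random stand-ins in place of real Athena-0 outputs:

```python
import torch
import torch.nn.functional as F

# Random stand-ins for Athena-0 tile embeddings (N x 1536).
feats = torch.randn(4, 1536)
feats = F.normalize(feats, dim=1)  # unit-normalize each embedding
sim = feats @ feats.T              # pairwise cosine similarities
print(sim.shape)  # torch.Size([4, 4])
```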