LocAtViT: Locality-Attending Vision Transformer

Pretrain vision transformers so that their patch representations transfer better to dense prediction (e.g., segmentation), without changing the pretraining objective.

Usage

import timm

# Load the pretrained LocAtViT-Base backbone from the Hugging Face Hub
model = timm.create_model("hf_hub:sinahmr/locatvit_base", pretrained=True)

Citation

@inproceedings{hajimiri2026locatvit,
  author    = {Hajimiri, Sina and Beizaee, Farzad and Shakeri, Fereshteh and Desrosiers, Christian and Ben Ayed, Ismail and Dolz, Jose},
  title     = {Locality-Attending Vision Transformer},
  booktitle = {International Conference on Learning Representations},
  year      = {2026}
}
