FAINT
Fast, Appearance-Invariant Navigation Transformer (FAINT) is a learned policy for vision-based topological navigation.
Please see the Project Page for more information.
The FAINT-Real model uses Theia-Tiny-CDDSV as its backbone and was trained for 30 epochs on the ~1.2M samples from the datasets used in the GNM/ViNT papers.
This repo contains two versions of the trained model weights:

- model_pytorch.pt: Weights-only state dict of the PyTorch model.
- model_torchscript.pt: A standalone TorchScript model for deployment.

See the main GitHub repo for details, input preprocessing, etc.
Loading the TorchScript model requires only PyTorch:
import torch

ckpt_path = 'FAINT-Real/model_torchscript.pt'
# map_location='cpu' makes loading work on machines without a GPU
model = torch.jit.load(ckpt_path, map_location='cpu')
model.eval()
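As a quick smoke test after loading, one can run a forward pass with dummy inputs. The tensor shapes and call signature below are purely illustrative assumptions; the actual input format and preprocessing are documented in the main GitHub repo.

with torch.inference_mode():
    # Hypothetical dummy inputs: a current observation and a goal image,
    # assumed here to be single 224x224 RGB frames. The true interface
    # (number of context frames, resolution, normalization) is defined
    # in the main GitHub repo.
    obs = torch.rand(1, 3, 224, 224)
    goal = torch.rand(1, 3, 224, 224)
    actions = model(obs, goal)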
Loading the PyTorch weights requires the FAINT library to be installed:
import torch
from faint.common.models.faint import FAINT

# Path to the weights-only state dict from this repo
ckpt_path = 'FAINT-Real/model_pytorch.pt'
# map_location='cpu' makes loading work on machines without a GPU
state_dict = torch.load(ckpt_path, map_location='cpu')

# The weights in this repo correspond to FAINT initialized with the default arguments
model = FAINT()
model.load_state_dict(state_dict)
model.eval()
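As a sanity check, the parameter count of the loaded model can be printed; load_state_dict raises an error if any keys are missing or unexpected, so reaching this point already means the weights matched the architecture.

n_params = sum(p.numel() for p in model.parameters())
print(f'Loaded FAINT with {n_params / 1e6:.1f}M parameters')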
If you use FAINT in your research, please use the following BibTeX entry:
@article{suomela2025synthetic,
title={Synthetic vs. Real Training Data for Visual Navigation},
author={Suomela, Lauri and Kuruppu Arachchige, Sasanka and Torres, German F. and Edelman, Harry and Kämäräinen, Joni-Kristian},
journal={arXiv:2509.11791},
year={2025}
}