
CS-Wild-Places: Large-Scale 3D Dataset for Lidar Place Recognition Across Ground and Aerial Views

This dataset is published as a contribution of our CVPR 2025 paper: HOTFormerLoc: Hierarchical Octree Transformer for Versatile Lidar Place Recognition Across Ground and Aerial Views, by Ethan Griffiths, Maryam Haghighat, Simon Denman, Clinton Fookes, and Milad Ramezani.
[Website] [arXiv] [CS-Wild-Places Dataset] [CS-Wild-Places README]

CS-Wild-Places is a 3D cross-source dataset featuring point cloud data from aerial and ground lidar scans captured in four dense forests. Point clouds in CS-Wild-Places contain representational gaps and distinctive attributes such as varying point densities and noise patterns, making it a challenging benchmark for cross-view localisation in the wild.

CS-Wild-Places dataset. (Top row) Bird's-eye view of aerial global maps from all four forests. (Bottom row) Sample ground and aerial submaps from each forest.

In our paper, we demonstrated that HOTFormerLoc achieves a top-1 average recall improvement of 5.5% – 11.5% on the CS-Wild-Places benchmark. Furthermore, it consistently outperforms SOTA 3D place recognition methods, with an average performance gain of 4.9% on well-established urban and forest datasets. Please check out our paper for more information.

Hero Figure

Note: If you use our dataset please cite the following paper.

Citation

If you find this work useful, please consider citing:

@InProceedings{HOTFormerLoc,
    author    = {Griffiths, Ethan and Haghighat, Maryam and Denman, Simon and Fookes, Clinton and Ramezani, Milad},
    title     = {{HOTFormerLoc}: {Hierarchical Octree Transformer} for {Versatile Lidar Place Recognition Across Ground} and {Aerial Views}},
    booktitle = {2025 {IEEE}/{CVF Conference} on {Computer Vision} and {Pattern Recognition} ({CVPR})},
    year      = {2025},
    month     = {June},
}

CS-Wild-Places

We train on our novel CS-Wild-Places dataset, introduced in further detail in our paper. CS-Wild-Places is built upon the ground traversals introduced by Wild-Places, so the Wild-Places dataset must be downloaded alongside our data, following the instructions in the above section (generating train/test pickles for Wild-Places is not required for CS-Wild-Places, so this step can be skipped). Note that the full Wild-Places dataset must be downloaded, as our post-processing utilises the full-resolution submaps.

Download our dataset from CSIRO's data access portal, and place or symlink the data in data/CS-Wild-Places (this should point to the top-level directory, with the data/ and metadata/ subdirectories). Note that our experiments only require the post-processed submaps (folder postproc_voxel_0.80m_rmground_normalised), so you can ignore the raw submaps if space is an issue. Check the README for further information and installation instructions for CS-Wild-Places.
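As an illustration of the expected layout, the commands below sketch the symlink step in a temporary sandbox (the download location and sandbox paths are placeholders; substitute wherever you actually extracted the data):

```shell
# Sandbox demo of the symlink step; in practice DATASET_SRC is the
# top-level directory you downloaded from CSIRO's data access portal.
SANDBOX="$(mktemp -d)"
DATASET_SRC="$SANDBOX/CS-Wild-Places-download"
mkdir -p "$DATASET_SRC/data" "$DATASET_SRC/metadata"

# Link the top-level dataset directory into data/CS-Wild-Places.
cd "$SANDBOX"
mkdir -p data
ln -s "$DATASET_SRC" data/CS-Wild-Places

# The data/ and metadata/ subdirectories should now be visible through the link.
ls data/CS-Wild-Places
```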

Assuming you have followed the above instructions to set up Wild-Places, you can use the below command to post-process the Wild-Places ground submaps into the format required for CS-Wild-Places (set num_workers to a sensible number for your system). Note that this may take several hours depending on your CPU:

cd datasets/CSWildPlaces
python postprocess_wildplaces_ground.py \
    --root '../../data/wild_places/data/' \
    --cswildplaces_save_dir '../../data/CS-Wild-Places/data/CS-Wild-Places/postproc_voxel_0.80m_rmground_normalised' \
    --remove_ground \
    --downsample \
    --downsample_type 'voxel' \
    --voxel_size 0.8 \
    --normalise \
    --num_workers XX \
    --verbose

Note that this script will generate the submaps used for the results reported in the paper, i.e. voxel downsampled, ground points removed, and normalised. We also provide a set of unnormalised submaps for convenience, and the corresponding Wild-Places ground submaps can be generated by omitting the --normalise option, and by setting --cswildplaces_save_dir to '../../data/CS-Wild-Places/data/CS-Wild-Places/postproc_voxel_0.80m_rmground'.
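Concretely, the unnormalised variant described above corresponds to a command like the following (same paths and placeholder num_workers as before, with --normalise omitted and the save directory changed):

```shell
cd datasets/CSWildPlaces
python postprocess_wildplaces_ground.py \
    --root '../../data/wild_places/data/' \
    --cswildplaces_save_dir '../../data/CS-Wild-Places/data/CS-Wild-Places/postproc_voxel_0.80m_rmground' \
    --remove_ground \
    --downsample \
    --downsample_type 'voxel' \
    --voxel_size 0.8 \
    --num_workers XX \
    --verbose
```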

Before network training or evaluation, run the command below to generate pickles with positive and negative point clouds for each anchor point cloud:

cd datasets/CSWildPlaces
python generate_train_test_tuples.py \
    --root '../../data/CS-Wild-Places/data/CS-Wild-Places/postproc_voxel_0.80m_rmground_normalised/' \
    --eval_thresh '30' \
    --pos_thresh '15' \
    --neg_thresh '60' \
    --buffer_thresh '30' \
    --v2_only

Note that training and evaluation pickles are saved to the directory specified in --root by default.
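To illustrate the idea behind these tuples, the sketch below serialises a hypothetical anchor/positive/negative mapping with pickle. The keys, file names, and layout here are assumptions for illustration only; inspect the files written to --root for the actual structure produced by generate_train_test_tuples.py:

```python
import os
import pickle
import tempfile

# Hypothetical tuple layout: each anchor submap maps to positives
# (within pos_thresh metres) and negatives (beyond neg_thresh metres).
train_tuples = {
    0: {"query": "submap_0000.pcd", "positives": [1, 2], "negatives": [57, 91]},
    1: {"query": "submap_0001.pcd", "positives": [0], "negatives": [63]},
}

# Serialise and reload the tuples, as a training pipeline would.
path = os.path.join(tempfile.gettempdir(), "train_tuples_demo.pickle")
with open(path, "wb") as f:
    pickle.dump(train_tuples, f)

with open(path, "rb") as f:
    loaded = pickle.load(f)

print(len(loaded), "anchors loaded")  # → 2 anchors loaded
```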

Acknowledgements

We would like to thank the authors of Wild-Places for their fantastic dataset, which serves as the base upon which CS-Wild-Places is built.
