LangMap: A Hierarchical Benchmark for Open-Vocabulary Goal Navigation
Abstract
HieraNav is a multi-granularity, open-vocabulary goal navigation task, paired with the LangMap benchmark, in which agents follow natural language instructions across different semantic levels in 3D environments.
The relationships between objects and language are fundamental to meaningful communication between humans and AI, and to practically useful embodied intelligence. We introduce HieraNav, a multi-granularity, open-vocabulary goal navigation task where agents interpret natural language instructions to reach targets at four semantic levels: scene, room, region, and instance. To this end, we present Language as a Map (LangMap), a large-scale benchmark built on real-world 3D indoor scans with comprehensive human-verified annotations and tasks spanning these levels. LangMap provides region labels, discriminative region descriptions, discriminative instance descriptions covering 414 object categories, and over 18K navigation tasks. Each target features both concise and detailed descriptions, enabling evaluation across different instruction styles. LangMap achieves superior annotation quality, outperforming GOAT-Bench by 23.8% in discriminative accuracy using four times fewer words. Comprehensive evaluations of zero-shot and supervised models on LangMap reveal that richer context and memory improve success, while long-tailed, small, context-dependent, and distant goals, as well as multi-goal completion, remain challenging. HieraNav and LangMap establish a rigorous testbed for advancing language-driven embodied navigation. Project: https://bo-miao.github.io/LangMap
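The abstract describes each LangMap episode as pairing a goal at one of four semantic levels (scene, room, region, instance) with both a concise and a detailed description. The sketch below is purely illustrative of that structure: the class names, fields, and 1-meter success radius are assumptions rather than the released LangMap schema, and the distance-based success check follows the convention common to goal-navigation benchmarks rather than LangMap's documented metric.

```python
# Illustrative sketch only; the real LangMap release defines its own schema and API.
# All names below are hypothetical, mirroring the abstract's four semantic levels
# and the concise/detailed description pair attached to each target.
from dataclasses import dataclass
from enum import Enum
from typing import Tuple


class GoalLevel(Enum):
    SCENE = "scene"
    ROOM = "room"
    REGION = "region"
    INSTANCE = "instance"


@dataclass
class NavigationTask:
    """One open-vocabulary goal-navigation episode at a single semantic level."""
    scan_id: str                 # real-world 3D indoor scan the episode lives in
    level: GoalLevel             # which of the four semantic levels the goal targets
    concise_description: str     # short instruction style
    detailed_description: str    # longer, discriminative instruction style
    goal_position: Tuple[float, float, float]  # goal location in scan coordinates
    success_radius: float = 1.0  # assumed 1 m threshold; not specified in the abstract


def is_success(agent_position: Tuple[float, float, float], task: NavigationTask) -> bool:
    """Distance-to-goal success check, the usual convention for goal navigation."""
    dist = sum((a - g) ** 2 for a, g in zip(agent_position, task.goal_position)) ** 0.5
    return dist <= task.success_radius


if __name__ == "__main__":
    task = NavigationTask(
        scan_id="scene_0001",
        level=GoalLevel.INSTANCE,
        concise_description="the potted plant by the window",
        detailed_description="the tall potted plant on the wooden stand next to the living-room window",
        goal_position=(3.2, 0.0, -1.5),
    )
    print(is_success((3.0, 0.0, -1.2), task))  # True: within 1 m of the goal
```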
Community
The paper introduces HieraNav, a hierarchical, open-vocabulary goal navigation task spanning scene, room, region, and instance levels, and presents LangMap, the first large-scale benchmark of its kind, to advance language-driven goal navigation research.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- VL-LN Bench: Towards Long-horizon Goal-oriented Navigation with Active Dialogs (2025)
- Spatial-VLN: Zero-Shot Vision-and-Language Navigation With Explicit Spatial Perception and Exploration (2026)
- ImagineNav++: Prompting Vision-Language Models as Embodied Navigator through Scene Imagination (2025)
- MomaGraph: State-Aware Unified Scene Graphs with Vision-Language Model for Embodied Task Planning (2025)
- D3D-VLP: Dynamic 3D Vision-Language-Planning Model for Embodied Grounding and Navigation (2025)
- VLN-MME: Diagnosing MLLMs as Language-guided Visual Navigation agents (2025)
- Point What You Mean: Visually Grounded Instruction Policy (2025)