arxiv:2512.06558

Embodied Referring Expression Comprehension in Human-Robot Interaction

Published on Dec 6
· Submitted by Aman Chadha on Dec 9
AI-generated summary

A large-scale dataset and multimodal model improve embodied interaction comprehension in robots by addressing perspective bias and enhancing multimodal signal integration.

Abstract

As robots enter human workspaces, there is a crucial need for them to comprehend embodied human instructions, enabling intuitive and fluent human-robot interaction (HRI). However, accurate comprehension is challenging due to a lack of large-scale datasets that capture natural embodied interactions in diverse HRI settings. Existing datasets suffer from perspective bias, single-view collection, inadequate coverage of nonverbal gestures, and a predominant focus on indoor environments. To address these issues, we present the Refer360 dataset, a large-scale dataset of embodied verbal and nonverbal interactions collected across diverse viewpoints in both indoor and outdoor settings. Additionally, we introduce MuRes, a multimodal guided residual module designed to improve embodied referring expression comprehension. MuRes acts as an information bottleneck, extracting salient modality-specific signals and reinforcing them into pre-trained representations to form complementary features for downstream tasks. We conduct extensive experiments on four HRI datasets, including the Refer360 dataset, and demonstrate that current multimodal models fail to capture embodied interactions comprehensively; however, augmenting them with MuRes consistently improves performance. These findings establish Refer360 as a valuable benchmark and exhibit the potential of guided residual learning to advance embodied referring expression comprehension in robots operating within human environments.
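To make the guided-residual idea from the abstract concrete, below is a minimal PyTorch sketch under stated assumptions: frozen features of one modality are compressed through a bottleneck, cross-attended against the other modality as guidance, and added back as a residual. All names (GuidedResidualBlock, bottleneck_dim, guide_proj) are illustrative and do not correspond to the authors' released code.

```python
# Minimal sketch of a guided residual bottleneck in the spirit of MuRes.
# Illustrative only: class, argument, and dimension names are assumptions,
# not the paper's implementation.
import torch
import torch.nn as nn


class GuidedResidualBlock(nn.Module):
    """Compress one modality, let the other modality guide it via
    cross-attention, then add the refined signal back onto the frozen features."""

    def __init__(self, dim: int, bottleneck_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)          # information bottleneck
        self.guide_proj = nn.Linear(dim, bottleneck_dim)    # project the guiding modality
        self.cross_attn = nn.MultiheadAttention(
            bottleneck_dim, num_heads, batch_first=True)    # guidance via cross-attention
        self.up = nn.Linear(bottleneck_dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        # x:     (B, N, dim) frozen features of one modality (e.g., vision)
        # guide: (B, M, dim) features of the guiding modality (e.g., language)
        z = self.down(x)
        g = self.guide_proj(guide)
        z, _ = self.cross_attn(query=z, key=g, value=g)     # keep only salient, guided signal
        return self.norm(x + self.up(z))                    # residual reinforcement


# Usage with CLIP-sized token features (dim=512); shapes are arbitrary examples.
vision = torch.randn(2, 50, 512)    # e.g., patch tokens from a frozen visual encoder
language = torch.randn(2, 20, 512)  # e.g., text tokens from a frozen text encoder
fused = GuidedResidualBlock(dim=512)(vision, guide=language)  # -> (2, 50, 512)
```

The low-dimensional bottleneck is what keeps the residual from diluting the pre-trained representations: only a compact, guidance-filtered signal is added back onto the frozen features.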

Community

Paper author · Paper submitter

The paper introduces Refer360, a comprehensive multimodal dataset for embodied referring expression comprehension in human-robot interaction (HRI), and proposes MuRes, a lightweight guided residual module that selectively reinforces modality-specific features to improve multimodal grounding performance in real-world scenarios.

โžก๏ธ ๐Š๐ž๐ฒ ๐‡๐ข๐ ๐ก๐ฅ๐ข๐ ๐ก๐ญ๐ฌ ๐จ๐Ÿ ๐ญ๐ก๐ž ๐‘๐ž๐Ÿ๐ž๐ซ๐Ÿ‘๐Ÿ”๐ŸŽ ๐๐ž๐ง๐œ๐ก๐ฆ๐š๐ซ๐ค + ๐Œ๐ฎ๐‘๐ž๐ฌ ๐Œ๐จ๐๐ฎ๐ฅ๐ž:
๐Ÿง  ๐‘น๐’†๐’‡๐’†๐’“๐Ÿ‘๐Ÿ”๐ŸŽ: ๐‘ญ๐’Š๐’“๐’”๐’• ๐‘ฌ๐’Ž๐’ƒ๐’๐’…๐’Š๐’†๐’… ๐‘น๐‘ฌ ๐‘ซ๐’‚๐’•๐’‚๐’”๐’†๐’• ๐’˜๐’Š๐’•๐’‰ ๐‘ด๐’–๐’๐’•๐’Š-๐‘ฝ๐’Š๐’†๐’˜, ๐‘ด๐’–๐’๐’•๐’Š-๐‘บ๐’†๐’๐’”๐’๐’“ ๐‘ด๐’๐’…๐’‚๐’๐’Š๐’•๐’Š๐’†๐’”: Introduces a dataset with synchronized egocentric and exocentric views, RGB, depth, infrared, 3D skeleton, eye gaze, and audio, across indoor and outdoor environments. With 13,990 annotated interactions (3.2M frames), it overcomes biases in existing datasets (e.g., single view, indoor-only, no gesture/gaze integration).
๐Ÿ” ๐‘ด๐’–๐‘น๐’†๐’”: ๐‘ฎ๐’–๐’Š๐’…๐’†๐’… ๐‘น๐’†๐’”๐’Š๐’…๐’–๐’‚๐’ ๐‘ฉ๐’๐’•๐’•๐’๐’†๐’๐’†๐’„๐’Œ ๐’‡๐’๐’“ ๐‘ด๐’–๐’๐’•๐’Š๐’Ž๐’๐’…๐’‚๐’ ๐‘ญ๐’–๐’”๐’Š๐’๐’: Proposes a novel residual architecture that uses cross-attention to guide modality-specific signals (visual/language) through an information bottleneck, preventing feature dilution during fusion and outperforming both vanilla residuals and attention-only fusion across 4 datasets.
๐Ÿ“ˆ ๐‘บ๐’Š๐’ˆ๐’๐’Š๐’‡๐’Š๐’„๐’‚๐’๐’• ๐‘ฎ๐’‚๐’Š๐’๐’” ๐’‚๐’„๐’“๐’๐’”๐’” ๐‘ฏ๐‘น๐‘ฐ ๐’‚๐’๐’… ๐‘ฝ๐‘ธ๐‘จ ๐‘ป๐’‚๐’”๐’Œ๐’”: On Refer360, integrating MuRes into CLIP improved IOU-25 by +3.4%, and on CAESAR-PRO by +4.99%. For broader VQA tasks like ScienceQA and A-OKVQA, MuRes boosted model accuracy by up to +30%, highlighting its generalization ability across task domains.

