arXiv:2511.07332

Grounding Computer Use Agents on Human Demonstrations

Published on Nov 10 · Submitted by Aarash Feizi on Nov 12
#1 Paper of the day
Abstract

AI-generated summary

GroundCUA, a large-scale desktop grounding dataset, enables the development of GroundNext models that achieve state-of-the-art performance in mapping instructions to UI elements with less training data.

Building reliable computer-use agents requires grounding: accurately connecting natural language instructions to the correct on-screen elements. While large datasets exist for web and mobile interactions, high-quality resources for desktop environments are limited. To address this gap, we introduce GroundCUA, a large-scale desktop grounding dataset built from expert human demonstrations. It covers 87 applications across 12 categories and includes 56K screenshots, with every on-screen element carefully annotated for a total of over 3.56M human-verified annotations. From these demonstrations, we generate diverse instructions that capture a wide range of real-world tasks, providing high-quality data for model training. Using GroundCUA, we develop the GroundNext family of models that map instructions to their target UI elements. At both 3B and 7B scales, GroundNext achieves state-of-the-art results across five benchmarks using supervised fine-tuning, while requiring less than one-tenth the training data of prior work. Reinforcement learning post-training further improves performance, and when evaluated in an agentic setting on the OSWorld benchmark using o3 as the planner, GroundNext attains comparable or superior results to models trained with substantially more data. These results demonstrate the critical role of high-quality, expert-driven datasets in advancing general-purpose computer-use agents.
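To make the grounding task concrete: given a screenshot and an instruction, the model predicts a click point for the target element, and benchmarks of this kind typically count a prediction as correct when it lands inside the element's ground-truth bounding box. Below is a minimal sketch of that scoring rule; the `predict` callable and the sample fields are hypothetical stand-ins, not the paper's actual API or data schema.

```python
from typing import Callable, Dict, List, Tuple

Point = Tuple[float, float]
BBox = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)


def point_in_bbox(point: Point, bbox: BBox) -> bool:
    """True if the predicted click point lies inside the ground-truth element box."""
    x, y = point
    x0, y0, x1, y1 = bbox
    return x0 <= x <= x1 and y0 <= y <= y1


def grounding_accuracy(
    samples: List[Dict],                   # each: {"screenshot", "instruction", "bbox"}
    predict: Callable[[str, str], Point],  # hypothetical model: (screenshot, instruction) -> (x, y)
) -> float:
    """Fraction of instructions whose predicted click lands in the target element."""
    hits = sum(
        point_in_bbox(predict(s["screenshot"], s["instruction"]), s["bbox"])
        for s in samples
    )
    return hits / len(samples)
```

Under this metric, a grounding model needs no pixel-perfect localization, only a point anywhere within the correct element, which is why instruction quality and element coverage in the training data matter so much.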

Community

Paper author and submitter

GroundCUA introduces the first large-scale, human-demonstrated desktop grounding dataset (56K screenshots, 3.56M UI elements, 700K instruction pairs) for training computer-use agents. Built on it, the GroundNext models (3B & 7B) achieve state-of-the-art grounding performance across desktop, web, and mobile benchmarks, all from real human interaction data. Together, they establish a new foundation for multimodal grounding and practical AI-agent research.
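For readers unfamiliar with grounding data, a single human-verified example plausibly pairs one screenshot with its annotated elements and an instruction targeting one of them. A rough sketch follows; every field name and value here is illustrative only, not the dataset's actual schema (see the dataset card for that).

```python
# Hypothetical GroundCUA-style record; all fields and values are illustrative.
example = {
    "screenshot": "screenshots/gimp/session_0042/step_07.png",
    "application": "GIMP",                 # one of the 87 covered applications
    "category": "Image Editing",           # one of the 12 application categories
    "elements": [                          # every on-screen element is annotated
        {"name": "Blur tool", "bbox": [112, 348, 144, 380]},
        {"name": "Layers panel", "bbox": [1460, 220, 1900, 760]},
    ],
    "instruction": "Select the blur tool in the toolbox.",
    "target_bbox": [112, 348, 144, 380],   # element the instruction refers to
}
```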


Models citing this paper 1

Datasets citing this paper 1


Collections including this paper 2