Abstract
A quantitative framework for agent system scaling using empirical coordination metrics identifies optimal multi-agent strategies based on task properties.
Agents, language model (LM)-based systems capable of reasoning, planning, and acting, are becoming the dominant paradigm for real-world AI applications. Despite this widespread adoption, the principles that determine their performance remain underexplored, leaving practitioners to rely on heuristics rather than principled design choices. We address this gap by deriving quantitative scaling principles for agent systems, evaluated across four diverse benchmarks: Finance-Agent, BrowseComp-Plus, PlanCraft, and Workbench. Using five canonical architectures (Single, Independent, Centralized, Decentralized, Hybrid) instantiated across three LLM families, we perform a controlled evaluation spanning 180 configurations with standardized tools and token budgets. From empirical coordination metrics, including efficiency, overhead, error amplification, and redundancy, we derive a predictive model that achieves cross-validated R^2 = 0.513. We identify three dominant effects: (1) a tool-coordination trade-off: under fixed computational budgets, tool-heavy tasks suffer disproportionately from multi-agent overhead; (2) capability saturation: coordination yields diminishing or negative returns (beta = -0.408, p < 0.001) once single-agent baselines exceed ~45%; and (3) topology-dependent error amplification: independent agents amplify errors 17.2x through unchecked propagation, while centralized coordination contains this to 4.4x. Centralized coordination improves performance by 80.9% on parallelizable tasks such as financial reasoning, while decentralized coordination excels on dynamic web navigation (+9.2% vs. +0.2%); yet on sequential reasoning tasks, all multi-agent variants degrade performance by 39-70%. The framework predicts the optimal coordination strategy for 87% of held-out configurations, providing a predictive principle of agentic scaling grounded in measurable task properties.
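The abstract's three effects can be read as a simple topology-selection rule over measurable task properties. Below is a minimal, illustrative sketch of that logic; the ~45% saturation threshold comes from the abstract, while the tool-intensity cutoff, the function name `choose_topology`, and its inputs are assumptions for illustration, not the paper's fitted model.

```python
def choose_topology(single_agent_score: float,
                    tool_intensity: float,
                    parallelizable: bool,
                    dynamic_environment: bool) -> str:
    """Pick a coordination strategy from measurable task properties.

    single_agent_score: baseline accuracy of a single agent, in [0, 1].
    tool_intensity: fraction of steps that are tool calls, in [0, 1].
    parallelizable: True if subtasks can run concurrently.
    dynamic_environment: True for tasks like live web navigation.
    """
    # Capability saturation: past ~45% single-agent accuracy the abstract
    # reports diminishing or negative returns from coordination.
    if single_agent_score > 0.45:
        return "single"
    # Tool-coordination trade-off: tool-heavy tasks pay a disproportionate
    # multi-agent overhead under a fixed budget (0.6 cutoff is assumed).
    if tool_intensity > 0.6:
        return "single"
    # Per the reported results: centralized wins on parallelizable tasks,
    # decentralized on dynamic environments such as web navigation.
    if parallelizable:
        return "centralized"
    if dynamic_environment:
        return "decentralized"
    # Sequential reasoning with no clear parallelism: multi-agent variants
    # degraded performance by 39-70%, so default to a single agent.
    return "single"


print(choose_topology(0.30, 0.2, parallelizable=True, dynamic_environment=False))
# -> centralized
print(choose_topology(0.60, 0.2, parallelizable=True, dynamic_environment=False))
# -> single
```

A learned version would replace these hand-set branches with the paper's regression over coordination metrics, but the branch order mirrors the dominance of the three effects as stated.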
Community
So single agents perform better on centralized tasks (e.g., sequential planning) and parallel agents on independent tasks, with the main limitations being the "coordination tax" for parallel agents and "capability saturation" for single agents.
This seems like a perfect opportunity for a dynamic mixed strategy and adaptive topology?
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Stop Wasting Your Tokens: Towards Efficient Runtime Multi-Agent Systems (2025)
- Maestro: Learning to Collaborate via Conditional Listwise Policy Optimization for Multi-Agent LLMs (2025)
- Beyond Accuracy: A Multi-Dimensional Framework for Evaluating Enterprise Agentic AI Systems (2025)
- Verification-Aware Planning for Multi-Agent Systems (2025)
- A2FM: An Adaptive Agent Foundation Model for Tool-Aware Hybrid Reasoning (2025)
- ParaCook: On Time-Efficient Planning for Multi-Agent Systems (2025)
- Budget-Aware Tool-Use Enables Effective Agent Scaling (2025)
