arxiv:2512.14391

RePo: Language Models with Context Re-Positioning

Published on Dec 16 · Submitted by Huayang on Dec 17

AI-generated summary

RePo, a novel context re-positioning mechanism in LLMs, reduces extraneous cognitive load by differentiably assigning token positions, enhancing performance on noisy and long contexts without compromising short-context tasks.

Abstract

In-context learning is fundamental to modern Large Language Models (LLMs); however, prevailing architectures impose a rigid and fixed contextual structure by assigning linear or constant positional indices. Drawing on Cognitive Load Theory (CLT), we argue that this uninformative structure increases extraneous cognitive load, consuming finite working memory capacity that should be devoted to deep reasoning and attention allocation. To address this, we propose RePo, a novel mechanism that reduces extraneous load via context re-positioning. Unlike standard approaches, RePo utilizes a differentiable module, f_φ, to assign token positions that capture contextual dependencies, rather than relying on a pre-defined integer range. Through continual pre-training on the OLMo-2 1B backbone, we demonstrate that RePo significantly enhances performance on tasks involving noisy contexts, structured data, and longer context lengths, while maintaining competitive performance on general short-context tasks. Detailed analysis reveals that RePo successfully allocates higher attention to distant but relevant information, assigns positions in a dense and non-linear space, and captures the intrinsic structure of the input context. Our code is available at https://github.com/SakanaAI/repo.
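
To make the idea of a differentiable position-assignment module concrete, here is a minimal sketch. The architecture below (a hypothetical per-token MLP standing in for f_φ, producing positive step sizes that are accumulated into monotone real-valued positions) is an assumption for illustration only, not the paper's released implementation; names such as `RePoSketch` and `f_phi` are placeholders.

```python
# Minimal sketch of a differentiable position-assignment module in the spirit of f_phi.
# Assumption: f_phi is a small per-token MLP whose positive outputs are cumulatively
# summed into real-valued positions. The actual RePo design may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RePoSketch(nn.Module):
    def __init__(self, d_model: int, hidden: int = 64):
        super().__init__()
        # Hypothetical f_phi: maps each token's hidden state to a scalar step size.
        self.f_phi = nn.Sequential(
            nn.Linear(d_model, hidden),
            nn.GELU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, d_model) token representations.
        steps = F.softplus(self.f_phi(h)).squeeze(-1)  # positive, differentiable steps
        positions = torch.cumsum(steps, dim=-1)        # monotone real-valued positions
        return positions                               # (batch, seq_len), passed to the PE function

if __name__ == "__main__":
    h = torch.randn(2, 16, 128)
    pos = RePoSketch(d_model=128)(h)
    print(pos.shape)  # torch.Size([2, 16]) -- non-integer positions replacing 0..15
```

Because the positions are produced by a differentiable module rather than read off a fixed integer grid, gradients from the downstream language-modeling loss can reshape how the context is laid out in position space.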

Community

Paper author and submitter

TL;DR: We want to give LLMs the architectural ability to reorganize their input context, much as humans do. Our solution is a lightweight RePo module that dynamically assigns positions before the position-encoding function is applied.
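
Since rotary position encodings are defined for real-valued positions, re-assigned positions can in principle be fed to the existing encoding function without changing the attention math. The snippet below illustrates this point only; the head dimension, base, and `rope_at_positions` helper are arbitrary choices, not the paper's code.

```python
# Illustrative only: applying rotary position encoding at real-valued positions,
# such as those produced by a RePo-style module. Not the released implementation.
import torch

def rope_at_positions(x: torch.Tensor, pos: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # x: (batch, seq_len, head_dim) query/key slice; pos: (batch, seq_len) real-valued positions.
    half = x.shape[-1] // 2
    inv_freq = base ** (-torch.arange(half, dtype=x.dtype) / half)  # (half,)
    angles = pos.unsqueeze(-1) * inv_freq                           # (batch, seq_len, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = torch.randn(2, 16, 64)
pos = torch.cumsum(torch.rand(2, 16), dim=-1)  # stand-in for learned, non-integer positions
q_rot = rope_at_positions(q, pos)
print(q_rot.shape)  # torch.Size([2, 16, 64])
```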
