Papers
arXiv:2510.01879

REPAIR: Robust Editing via Progressive Adaptive Intervention and Reintegration

Published on Oct 2
· Submitted by Ming Wang on Oct 6
Abstract

REPAIR is a lifelong editing framework for large language models that enhances editing accuracy and reduces knowledge forgetting through progressive adaptive intervention and reintegration.

AI-generated summary

Post-training for large language models (LLMs) is constrained by the high cost of acquiring new knowledge or correcting errors and by the unintended side effects that frequently arise from retraining. To address these issues, we introduce REPAIR (Robust Editing via Progressive Adaptive Intervention and Reintegration), a lifelong editing framework designed to support precise and low-cost model updates while preserving non-target knowledge. REPAIR mitigates the instability and conflicts of large-scale sequential edits through a closed-loop feedback mechanism coupled with dynamic memory management. Furthermore, by incorporating frequent knowledge fusion and enforcing strong locality guards, REPAIR effectively addresses the shortcomings of traditional distribution-agnostic approaches that often overlook unintended ripple effects. Our experiments demonstrate that REPAIR boosts editing accuracy by 10%-30% across multiple model families and significantly reduces knowledge forgetting. This work introduces a robust framework for developing reliable, scalable, and continually evolving LLMs.

Community

Paper author and submitter

LLM Post Training: Why Large Language Models Keep Forgetting What They Just Learned

Ever wonder why editing an AI model to fix one mistake often breaks something else entirely? It's like trying to perform brain surgery with a sledgehammer.

šŸ‘‰ The Problem

When we update large language models after training, we face a brutal tradeoff. Fix a factual error about Newton's birthplace, and suddenly the model might forget basic physics. Scale this to thousands of corrections, and models often collapse entirely.

Current editing methods work fine for single changes but fail catastrophically when applied sequentially. They're either too aggressive (causing widespread damage) or too conservative (failing to make lasting changes).

šŸ‘‰ The Solution

Researchers at ContiAI introduce REPAIR, a framework that treats model editing as a feedback loop rather than a one-shot operation.

Three key innovations:

• Closed-loop monitoring: The system continuously checks if edits actually worked and automatically fixes failures
• Smart batching: Groups similar edits together and uses knowledge distillation to ensure consistency
• Surgical merging: Combines updates using loss-aware weighting, prioritizing reliable changes over risky ones
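The closed-loop idea above can be sketched in a few lines: apply an edit, verify that it actually took effect, and re-queue failures instead of assuming success. This is a minimal illustration, not the paper's implementation; the names (`Edit`, `apply_edit`, `check_edit`) and the dictionary stand-in for a model are assumptions for clarity.

```python
# Hypothetical sketch of closed-loop editing: apply, verify, retry.
# A dict stands in for the model; real systems would update parameters
# and re-query the edited LLM in check_edit.
from dataclasses import dataclass


@dataclass
class Edit:
    prompt: str
    target: str
    attempts: int = 0


def apply_edit(model: dict, edit: Edit) -> None:
    # Stand-in for a real parameter update.
    model[edit.prompt] = edit.target


def check_edit(model: dict, edit: Edit) -> bool:
    # Stand-in for querying the edited model and comparing to the target.
    return model.get(edit.prompt) == edit.target


def closed_loop_edit(model: dict, edits: list, max_retries: int = 3) -> list:
    """Apply edits sequentially, re-queueing any that fail verification.

    Returns the edits that still failed after max_retries attempts.
    """
    failed = []
    queue = list(edits)
    while queue:
        edit = queue.pop(0)
        apply_edit(model, edit)
        if not check_edit(model, edit):
            edit.attempts += 1
            if edit.attempts < max_retries:
                queue.append(edit)   # retry later
            else:
                failed.append(edit)  # give up and report
    return failed


model = {}
failures = closed_loop_edit(model, [Edit("Newton's birthplace", "Woolsthorpe")])
assert failures == []  # the edit was applied and verified
```

The point of the loop is that an edit is never silently assumed to have succeeded: every change is checked, and only verified edits leave the queue.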

šŸ‘‰ The Results

Testing across multiple model families (LLaMA-3, Qwen-2.5, DeepSeek) shows REPAIR delivers:

• 10-30% better editing accuracy
• Dramatically reduced knowledge forgetting
• Stable performance even with 1000+ sequential edits

The breakthrough lies in treating editing as an iterative process with built-in error correction, rather than hoping single modifications will stick.
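The "surgical merging" step can also be illustrated with a small sketch: candidate parameter updates are combined into one, with lower-loss (more reliable) updates receiving more weight. The inverse-loss weighting scheme here is an assumption for illustration, not the exact formula from the paper.

```python
# Illustrative loss-aware merge: weight each candidate update inversely
# by its loss, normalize the weights, and take the weighted sum.
def loss_aware_merge(updates, losses):
    """updates: list of equal-length delta vectors; losses: one loss per update."""
    weights = [1.0 / (l + 1e-8) for l in losses]  # reliable edits get more weight
    total = sum(weights)
    weights = [w / total for w in weights]        # normalize to sum to 1
    merged = [0.0] * len(updates[0])
    for w, delta in zip(weights, updates):
        merged = [m + w * d for m, d in zip(merged, delta)]
    return merged


merged = loss_aware_merge([[1.0, 0.0], [0.0, 1.0]], [0.1, 0.3])
assert merged[0] > merged[1]  # the lower-loss first update dominates
```

Prioritizing low-loss updates this way means a single unreliable edit cannot drag the merged parameters far from a state that the reliable edits agree on.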

This addresses a critical challenge for deploying AI systems that need regular updates without expensive retraining. When models can learn and unlearn reliably, they become far more practical for real applications.

What editing challenges have you encountered with language models?


