arxiv:2512.16625

DeContext as Defense: Safe Image Editing in Diffusion Transformers

Published on Dec 18 · Submitted by Xingyi Yang on Dec 19

Abstract

DeContext defends against unauthorized in-context image editing by weakening cross-attention pathways in multimodal attention layers, preserving visual quality while blocking unwanted modifications.

AI-generated summary

In-context diffusion models allow users to modify images with remarkable ease and realism. However, the same power raises serious privacy concerns: personal images can easily be manipulated for identity impersonation, misinformation, or other malicious uses, all without the owner's consent. While prior work has explored input perturbations to protect against misuse in personalized text-to-image generation, the robustness of modern, large-scale in-context DiT-based models remains largely unexamined. In this paper, we propose DeContext, a new method to safeguard input images from unauthorized in-context editing. Our key insight is that contextual information from the source image propagates to the output primarily through multimodal attention layers. By injecting small, targeted perturbations that weaken these cross-attention pathways, DeContext breaks this flow, effectively decoupling the link between input and output. This simple defense is both efficient and robust. We further show that early denoising steps and specific transformer blocks dominate context propagation, which allows us to concentrate perturbations where they matter most. Experiments on Flux Kontext and Step1X-Edit show that DeContext consistently blocks unwanted image edits while preserving visual quality. These results highlight the effectiveness of attention-based perturbations as a powerful defense against image manipulation.
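The defense described above is, at its core, an adversarial perturbation optimized to suppress the attention mass that the edited output pays to the source image. Below is a minimal PGD-style sketch of that idea, assuming a hypothetical `model.cross_attention_maps(x, timestep=t)` interface that exposes per-block attention weights from output (latent) queries to source-image keys; the timestep and block selections, step sizes, and L∞ budget are illustrative assumptions, not the paper's released implementation.

```python
import torch

def decontext_perturb(model, image, epsilon=8/255, alpha=2/255, num_steps=50,
                      early_timesteps=(999, 800, 600),  # illustrative: early steps dominate context flow
                      target_blocks=(0, 1, 2)):         # illustrative: blocks where context propagates most
    """PGD-style sketch of an attention-suppressing perturbation.

    Assumes a hypothetical `model.cross_attention_maps(x, timestep=t)` that
    returns, per transformer block, the attention weights from output (latent)
    queries to source-image keys in the multimodal attention layers.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(num_steps):
        loss = torch.zeros((), device=image.device)
        for t in early_timesteps:
            attn = model.cross_attention_maps(image + delta, timestep=t)
            for b in target_blocks:
                # Minimize the attention mass the output pays to the source
                # image, weakening the context pathway the edit relies on.
                loss = loss + attn[b].mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # signed gradient descent on attention mass
            delta.clamp_(-epsilon, epsilon)     # keep the perturbation imperceptible (L-inf ball)
            delta.grad = None
    return (image + delta).clamp(0, 1).detach()
```

Restricting the loss to a few early timesteps and context-dominant blocks follows the paper's observation that these dominate context propagation, which keeps the optimization cheap while targeting the pathways that matter most.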

Community

Xingyi Yang (paper author and submitter):

✨ Image editing is awesome, but it can leak user information!
🛡️ Introducing DeContext: the first method to safeguard input images from unauthorized DiT-based in-context editing.

📄 Paper: https://arxiv.org/abs/2512.16625
💻 Code: https://github.com/LinghuiiShen/DeContext
🌐 Project Page: https://linghuiishen.github.io/decontext_project_page/

arXiv lens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/decontext-as-defense-safe-image-editing-in-diffusion-transformers-3370-a86427c1

  • Key Findings
  • Executive Summary
  • Detailed Breakdown
  • Practical Applications

