---
license: apache-2.0
pipeline_tag: text-to-image
library_name: transformers
---

# 🛡️ DAA: Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models
This repository contains artifacts and code related to the paper: [**Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models**](https://huggingface.co/papers/2504.20518).

Code: https://github.com/Robin-WZQ/DAA

This study introduces **Dynamic Attention Analysis (DAA)**, a novel perspective for backdoor detection in text-to-image diffusion models, showing that the **dynamic features of cross-attention maps** serve as a much better indicator for backdoor detection. When the evolution of the cross-attention maps is tracked across the denoising process, backdoor samples exhibit feature-evolution patterns distinct from those of benign samples, particularly at the `<EOS>` token.
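
As a rough illustration of the idea (a minimal sketch, not the paper's actual feature extractor or detector; the repository above contains the real implementation), the snippet below treats the per-step `<EOS>` cross-attention maps as a time series and scores how strongly they evolve. The tensor `attn_maps`, the function names, and the z-score decision rule are all illustrative assumptions.

```python
# Hypothetical sketch of the DAA intuition, NOT the paper's implementation:
# score how much the <EOS> cross-attention map changes across denoising steps,
# then flag prompts whose dynamics deviate from benign statistics.
import torch


def daa_score(attn_maps: torch.Tensor) -> float:
    """attn_maps: (T, H, W) cross-attention maps for <EOS>, one per denoising step."""
    # Normalize each step's map into a distribution over spatial positions.
    flat = attn_maps.flatten(start_dim=1)                        # (T, H*W)
    probs = flat / flat.sum(dim=1, keepdim=True).clamp_min(1e-12)
    # Dynamic feature: average L1 change between consecutive steps.
    deltas = (probs[1:] - probs[:-1]).abs().sum(dim=1)           # (T-1,)
    return deltas.mean().item()


def is_suspicious(attn_maps: torch.Tensor, mu: float, sigma: float, k: float = 3.0) -> bool:
    # Hypothetical decision rule: flag samples whose dynamic score lies far
    # from benign statistics (mu, sigma calibrated on held-out benign prompts).
    return abs(daa_score(attn_maps) - mu) > k * sigma
```

In practice, the per-step attention maps would be captured with attention hooks or custom attention processors during sampling; the actual dynamic features and detector used in the paper are provided in the GitHub repository above.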
## 📄 Citation
If you find this project useful in your research, please consider citing:
```bibtex
@article{wang2025dynamicattentionanalysisbackdoor,
  title={Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models},
  author={Zhongqi Wang and Jie Zhang and Shiguang Shan and Xilin Chen},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year={2025}
}
```