---
license: apache-2.0
pipeline_tag: image-to-image
library_name: diffusers
---
# PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling
This repository presents **PaCo-RL**, a comprehensive framework for consistent image generation, as described in the paper [PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling](https://huggingface.co/papers/2512.04784).
Project Page: [https://x-gengroup.github.io/HomePage_PaCo-RL/](https://x-gengroup.github.io/HomePage_PaCo-RL/)
Code Repository: [https://github.com/X-GenGroup/PaCo-RL](https://github.com/X-GenGroup/PaCo-RL)
<div align="center">
<a href='https://arxiv.org/abs/2512.04784'><img src='https://img.shields.io/badge/ArXiv-red?logo=arxiv'></a>
<a href='https://x-gengroup.github.io/HomePage_PaCo-RL/'><img src='https://img.shields.io/badge/ProjectPage-purple?logo=github'></a>
<a href="https://github.com/X-GenGroup/PaCo-RL"><img src="https://img.shields.io/badge/Code-9E95B7?logo=github"></a>
<a href='https://huggingface.co/collections/X-GenGroup/paco-rl'><img src='https://img.shields.io/badge/Data%20&%20Model-green?logo=huggingface'></a>
</div>
## Overview
**PaCo-RL** is a comprehensive framework for consistent image generation through reinforcement learning, addressing challenges in preserving identities, styles, and logical coherence across multiple images for storytelling and character design applications.
### Key Components
- **PaCo-Reward**: A pairwise consistency evaluator with task-aware instruction and CoT reasoning.
- **PaCo-GRPO**: Efficient RL optimization with resolution-decoupled training and log-tamed multi-reward aggregation (a toy sketch of the aggregation idea follows).
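As a rough illustration of the log-tamed idea (the exact formulation is given in the paper): compressing each reward through a logarithm before the weighted sum keeps any single reward signal from dominating the policy update. The weights and the `log1p` choice below are illustrative assumptions, not the released implementation.

```python
import math

def log_tamed_aggregate(rewards, weights):
    """Toy sketch of log-tamed multi-reward aggregation: each reward is
    compressed with log1p so no single signal dominates the weighted sum.
    The exact formulation used by PaCo-GRPO is given in the paper."""
    return sum(w * math.log1p(max(r, 0.0)) for w, r in zip(weights, rewards))

# e.g. a consistency reward and an aesthetic reward
print(log_tamed_aggregate(rewards=[0.8, 0.3], weights=[0.7, 0.3]))
```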
## Quick Start
### Installation
```bash
git clone https://github.com/X-GenGroup/PaCo-RL.git
cd PaCo-RL
```
### Train Reward Model
```bash
cd PaCo-Reward
conda create -n paco-reward python=3.12 -y
conda activate paco-reward
cd LLaMA-Factory && pip install -e ".[torch,metrics]" --no-build-isolation
cd .. && bash train/paco_reward.sh
```
See the [PaCo-Reward Documentation](PaCo-Reward/README.md) for a detailed guide.
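For a quick sanity check of a trained reward model: PaCo-Reward-7B is built on Qwen2.5-VL (see the acknowledgements), so it can be queried with the standard Qwen2.5-VL `transformers` API plus `qwen-vl-utils`. A minimal sketch; the image paths and prompt wording here are placeholders, and the exact task-aware instruction format is defined in the PaCo-Reward repository:

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "X-GenGroup/PaCo-Reward-7B", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("X-GenGroup/PaCo-Reward-7B")

# Placeholder images and instruction; see the PaCo-Reward docs for the
# task-aware prompt templates used during training.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "candidate_a.png"},
        {"type": "image", "image": "candidate_b.png"},
        {"type": "text", "text": "Which image is more consistent with the "
                                 "reference character? Reason step by step."},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images, videos = process_vision_info(messages)
inputs = processor(text=[text], images=images, videos=videos,
                   padding=True, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(processor.batch_decode(out[:, inputs.input_ids.shape[1]:],
                             skip_special_tokens=True)[0])
```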
### Run RL Training
```bash
cd PaCo-GRPO
conda create -n paco-grpo python=3.12 -y
conda activate paco-grpo
pip install -e .
# Set up the vLLM reward server
conda create -n vllm python=3.12 -y
conda activate vllm && pip install vllm
export CUDA_VISIBLE_DEVICES=0
export VLLM_MODEL_PATHS='X-GenGroup/PaCo-Reward-7B'
export VLLM_MODEL_NAMES='Paco-Reward-7B'
bash vllm_server/launch.sh
# Start training
export CUDA_VISIBLE_DEVICES=1,2,3,4,5,6,7
conda activate paco-grpo
bash scripts/single_node/train_flux.sh t2is
```
See the [PaCo-GRPO Documentation](PaCo-GRPO/README.md) for a detailed guide.
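Once the reward server is running, it can be queried like any vLLM OpenAI-compatible endpoint. A hedged sketch, assuming `vllm_server/launch.sh` serves on the default `localhost:8000` (check the script for the actual port); the image files and prompt are placeholders, and the model name must match `VLLM_MODEL_NAMES` above:

```python
import base64
from openai import OpenAI

def data_url(path: str) -> str:
    """Inline a local image as a base64 data URL for the chat API."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

# Port, file names, and prompt are illustrative assumptions.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Paco-Reward-7B",
    messages=[{"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": data_url("gen_a.png")}},
        {"type": "image_url", "image_url": {"url": data_url("gen_b.png")}},
        {"type": "text", "text": "Rate the consistency between these two images."},
    ]}],
)
print(resp.choices[0].message.content)
```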
## Repository Structure
```
PaCo-RL/
├── PaCo-GRPO/           # RL training framework
│   ├── config/          # RL configurations
│   ├── scripts/         # Training scripts
│   └── README.md
├── PaCo-Reward/         # Reward model training
│   ├── LLaMA-Factory/   # Training framework
│   ├── config/          # Training configurations
│   └── README.md
└── README.md
```
## Model Zoo
| Model | Type | HuggingFace |
|-------|------|-------------|
| **PaCo-Reward-7B** | Reward Model | [🤗 Link](https://huggingface.co/X-GenGroup/PaCo-Reward-7B) |
| **PaCo-Reward-7B-Lora** | Reward Model (LoRA) | [🤗 Link](https://huggingface.co/X-GenGroup/PaCo-Reward-7B-Lora) |
| **PaCo-FLUX.1-dev** | T2I Model (LoRA) | [🤗 Link](https://huggingface.co/X-GenGroup/PaCo-FLUX.1-dev-Lora) |
| **PaCo-FLUX.1-Kontext-dev** | Image Editing Model (LoRA) | [🤗 Link](https://huggingface.co/X-GenGroup/PaCo-FLUX.1-Kontext-Lora) |
| **PaCo-QwenImage-Edit** | Image Editing Model (LoRA) | [🤗 Link](https://huggingface.co/X-GenGroup/PaCo-Qwen-Image-Edit-Lora) |
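The generation LoRAs plug into `diffusers` in the usual way. For example, a minimal sketch for the FLUX.1-dev checkpoint; the prompt and sampler settings are illustrative, and the gated base model requires accepting its license on the Hub:

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model and attach the PaCo-RL LoRA weights.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("X-GenGroup/PaCo-FLUX.1-dev-Lora")

# Illustrative prompt and sampler settings.
image = pipe(
    "A watercolor fox character walking through an autumn forest",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("paco_flux_sample.png")
```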
## Acknowledgement
Our work is built upon [Flow-GRPO](https://github.com/yifan123/flow_grpo), [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), [vLLM](https://github.com/vllm-project/vllm), and [Qwen2.5-VL](https://github.com/QwenLM/Qwen3-VL). We sincerely thank the authors for their valuable contributions to the community.
## Citation
```bibtex
@misc{ping2025pacorladvancingreinforcementlearning,
title={PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling},
author={Bowen Ping and Chengyou Jia and Minnan Luo and Changliang Xia and Xin Shen and Zhuohang Dang and Hangwei Qian},
year={2025},
eprint={2512.04784},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2512.04784},
}
```
<div align="center">
<sub>Star us on GitHub if you find PaCo-RL helpful!</sub>
</div> |