---
license: apache-2.0
pipeline_tag: image-to-image
library_name: diffusers
---
# PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling
This repository presents **PaCo-RL**, a comprehensive framework for consistent image generation, as described in the paper [PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling](https://huggingface.co/papers/2512.04784).
Project Page: [https://x-gengroup.github.io/HomePage_PaCo-RL/](https://x-gengroup.github.io/HomePage_PaCo-RL/)
Code Repository: [https://github.com/X-GenGroup/PaCo-RL](https://github.com/X-GenGroup/PaCo-RL)
<div align="center">
<a href='https://arxiv.org/abs/2512.04784'><img src='https://img.shields.io/badge/ArXiv-red?logo=arxiv'></a> &nbsp;
<a href='https://x-gengroup.github.io/HomePage_PaCo-RL/'><img src='https://img.shields.io/badge/ProjectPage-purple?logo=github'></a> &nbsp;
<a href="https://github.com/X-GenGroup/PaCo-RL"><img src="https://img.shields.io/badge/Code-9E95B7?logo=github"></a> &nbsp;
<a href='https://huggingface.co/collections/X-GenGroup/paco-rl'><img src='https://img.shields.io/badge/Data%20%26%20Model-green?logo=huggingface'></a> &nbsp;
</div>
## 🌟 Overview
**PaCo-RL** is a comprehensive framework for consistent image generation through reinforcement learning. It addresses the challenge of preserving identity, style, and logical coherence across multiple images in applications such as storytelling and character design.
### Key Components
- **PaCo-Reward**: A pairwise consistency evaluator with task-aware instructions and CoT reasoning.
- **PaCo-GRPO**: Efficient RL optimization with resolution-decoupled training and log-tamed multi-reward aggregation (a hypothetical sketch of the aggregation idea follows below).
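The exact aggregation formula is defined in the paper; the snippet below is only a hypothetical sketch of the log-taming idea, where each raw reward is compressed with `log1p` before a weighted average so that no single reward signal can dominate the combined objective. The function name and weighting scheme are illustrative, not the repository's API.
```python
import math

def log_tamed_aggregate(rewards, weights=None):
    """Hypothetical sketch of log-tamed multi-reward aggregation:
    compress each raw reward with log1p before a weighted average
    so a single large reward cannot dominate the others."""
    weights = weights or [1.0] * len(rewards)
    tamed = [w * math.log1p(max(r, 0.0)) for r, w in zip(rewards, weights)]
    return sum(tamed) / sum(weights)

# e.g., consistency, aesthetic, and text-alignment rewards
print(log_tamed_aggregate([0.9, 0.4, 0.7]))
```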
## πŸš€ Quick Start
### Installation
```bash
git clone https://github.com/X-GenGroup/PaCo-RL.git
cd PaCo-RL
```
### Train Reward Model
```bash
cd PaCo-Reward
conda create -n paco-reward python=3.12 -y
conda activate paco-reward
cd LLaMA-Factory && pip install -e ".[torch,metrics]" --no-build-isolation
cd .. && bash train/paco_reward.sh
```
See πŸ“– [PaCo-Reward Documentation](PaCo-Reward/README.md) for detailed guide.
### Run RL Training
```bash
cd PaCo-GRPO
conda create -n paco-grpo python=3.12 -y
conda activate paco-grpo
pip install -e .
# Setup vLLM reward server
conda create -n vllm python=3.12 -y
conda activate vllm && pip install vllm
export CUDA_VISIBLE_DEVICES=0
export VLLM_MODEL_PATHS='X-GenGroup/PaCo-Reward-7B'
export VLLM_MODEL_NAMES='Paco-Reward-7B'
bash vllm_server/launch.sh
# Start training
export CUDA_VISIBLE_DEVICES=1,2,3,4,5,6,7
conda activate paco-grpo
bash scripts/single_node/train_flux.sh t2is
```
See πŸ“– [PaCo-GRPO Documentation](PaCo-GRPO/README.md) for detailed guide.
## πŸ“ Repository Structure
```
PaCo-RL/
β”œβ”€β”€ PaCo-GRPO/ # RL training framework
β”‚ β”œβ”€β”€ config/ # RL configurations
β”‚ β”œβ”€β”€ scripts/ # Training scripts
β”‚ └── README.md
β”œβ”€β”€ PaCo-Reward/ # Reward model training
β”‚ β”œβ”€β”€ LLaMA-Factory/ # Training framework
β”‚ β”œβ”€β”€ config/ # Training configurations
β”‚ └── README.md
└── README.md
```
## 🎁 Model Zoo
| Model | Type | HuggingFace |
|-------|------|-------------|
| **PaCo-Reward-7B** | Reward Model | [πŸ€— Link](https://huggingface.co/X-GenGroup/PaCo-Reward-7B) |
| **PaCo-Reward-7B-Lora** | Reward Model (LoRA) | [πŸ€— Link](https://huggingface.co/X-GenGroup/PaCo-Reward-7B-Lora) |
| **PaCo-FLUX.1-dev** | T2I Model (LoRA) | [πŸ€— Link](https://huggingface.co/X-GenGroup/PaCo-FLUX.1-dev-Lora) |
| **PaCo-FLUX.1-Kontext-dev** | Image Editing Model (LoRA) | [πŸ€— Link](https://huggingface.co/X-GenGroup/PaCo-FLUX.1-Kontext-Lora) |
| **PaCo-QwenImage-Edit** | Image Editing Model (LoRA) | [πŸ€— Link](https://huggingface.co/X-GenGroup/PaCo-Qwen-Image-Edit-Lora) |
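For quick inference with the T2I LoRA, a standard diffusers workflow should apply. This is a minimal sketch, assuming the LoRA checkpoint is in a diffusers-loadable format and that you have access to the FLUX.1-dev base weights; the prompt and sampler settings are illustrative.
```python
# Minimal inference sketch; assumes a diffusers-loadable LoRA and
# access to the FLUX.1-dev base model.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("X-GenGroup/PaCo-FLUX.1-dev-Lora")

image = pipe(
    "A storybook character walking through a snowy village",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("paco_flux_sample.png")
```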
## πŸ€— Acknowledgement
Our work is built upon [Flow-GRPO](https://github.com/yifan123/flow_grpo), [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), [vLLM](https://github.com/vllm-project/vllm), and [Qwen2.5-VL](https://github.com/QwenLM/Qwen3-VL). We sincerely thank the authors for their valuable contributions to the community.
## ⭐ Citation
```bibtex
@misc{ping2025pacorladvancingreinforcementlearning,
title={PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling},
author={Bowen Ping and Chengyou Jia and Minnan Luo and Changliang Xia and Xin Shen and Zhuohang Dang and Hangwei Qian},
year={2025},
eprint={2512.04784},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2512.04784},
}
```
<div align="center">
<sub>⭐ Star us on GitHub if you find PaCo-RL helpful!</sub>
</div>