---
license: apache-2.0
pipeline_tag: image-to-image
library_name: diffusers
---
# PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling
This repository presents **PaCo-RL**, a comprehensive framework for consistent image generation, as described in the paper [PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling](https://arxiv.org/abs/2512.04784).

- **Project Page**: https://x-gengroup.github.io/HomePage_PaCo-RL/
- **Code Repository**: https://github.com/X-GenGroup/PaCo-RL
## Overview
PaCo-RL tackles consistent image generation through reinforcement learning, addressing the challenge of preserving identities, styles, and logical coherence across multiple images in storytelling and character-design applications.
### Key Components
- **PaCo-Reward**: a pairwise consistency evaluator with task-aware instructions and chain-of-thought (CoT) reasoning.
- **PaCo-GRPO**: efficient RL optimization with resolution-decoupled training and log-tamed multi-reward aggregation.
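The exact aggregation rule used by PaCo-GRPO is defined in the paper; to illustrate the general idea behind "log-tamed" aggregation, the hypothetical sketch below compresses each reward channel with `log1p` before a weighted sum, so that one unusually large reward cannot dominate the combined signal. This is an assumption-laden illustration, not the repository's implementation:

```python
import math

def log_tamed_aggregate(rewards, weights=None):
    """Combine several non-negative reward channels into one scalar.

    Each channel is passed through log1p before a weighted sum, which
    dampens outlier channels relative to a plain (linear) sum.
    """
    if weights is None:
        weights = [1.0] * len(rewards)
    assert len(weights) == len(rewards), "one weight per reward channel"
    return sum(w * math.log1p(max(r, 0.0)) for w, r in zip(weights, rewards))

# A single large outlier contributes far less than it would in a linear sum:
balanced = log_tamed_aggregate([0.8, 0.7, 0.9])
skewed = log_tamed_aggregate([10.0, 0.1, 0.1])
```

Here `skewed` stays well below 10 even though one raw channel equals 10, which is the taming effect a log transform provides when aggregating heterogeneous rewards.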
## Quick Start

### Installation

```bash
git clone https://github.com/X-GenGroup/PaCo-RL.git
cd PaCo-RL
```
### Train the Reward Model

```bash
cd PaCo-Reward
conda create -n paco-reward python=3.12 -y
conda activate paco-reward
cd LLaMA-Factory && pip install -e ".[torch,metrics]" --no-build-isolation
cd .. && bash train/paco_reward.sh
```
See the [PaCo-Reward README](PaCo-Reward/README.md) for a detailed guide.
### Run RL Training

```bash
cd PaCo-GRPO
conda create -n paco-grpo python=3.12 -y
conda activate paco-grpo
pip install -e .

# Set up the vLLM reward server
conda create -n vllm python=3.12 -y
conda activate vllm && pip install vllm
export CUDA_VISIBLE_DEVICES=0
export VLLM_MODEL_PATHS='X-GenGroup/PaCo-Reward-7B'
export VLLM_MODEL_NAMES='Paco-Reward-7B'
bash vllm_server/launch.sh

# Start training
export CUDA_VISIBLE_DEVICES=1,2,3,4,5,6,7
conda activate paco-grpo
bash scripts/single_node/train_flux.sh t2is
```
See the [PaCo-GRPO README](PaCo-GRPO/README.md) for a detailed guide.
## Repository Structure

```
PaCo-RL/
├── PaCo-GRPO/          # RL training framework
│   ├── config/         # RL configurations
│   ├── scripts/        # Training scripts
│   └── README.md
├── PaCo-Reward/        # Reward model training
│   ├── LLaMA-Factory/  # Training framework
│   ├── config/         # Training configurations
│   └── README.md
└── README.md
```
## Model Zoo

| Model | Type | HuggingFace |
|---|---|---|
| PaCo-Reward-7B | Reward Model | 🤗 Link |
| PaCo-Reward-7B-Lora | Reward Model (LoRA) | 🤗 Link |
| PaCo-FLUX.1-dev | T2I Model (LoRA) | 🤗 Link |
| PaCo-FLUX.1-Kontext-dev | Image Editing Model (LoRA) | 🤗 Link |
| PaCo-QwenImage-Edit | Image Editing Model (LoRA) | 🤗 Link |
## Acknowledgements
Our work is built upon Flow-GRPO, LLaMA-Factory, vLLM, and Qwen2.5-VL. We sincerely thank the authors for their valuable contributions to the community.
## Citation

```bibtex
@misc{ping2025pacorladvancingreinforcementlearning,
      title={PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling},
      author={Bowen Ping and Chengyou Jia and Minnan Luo and Changliang Xia and Xin Shen and Zhuohang Dang and Hangwei Qian},
      year={2025},
      eprint={2512.04784},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.04784},
}
```