---
license: apache-2.0
pipeline_tag: image-to-image
library_name: diffusers
---

# PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling

This repository presents **PaCo-RL**, a framework for consistent image generation via reinforcement learning, as described in the paper [PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling](https://arxiv.org/abs/2512.04784).

- **Project Page:** https://x-gengroup.github.io/HomePage_PaCo-RL/
- **Code Repository:** https://github.com/X-GenGroup/PaCo-RL

## 🌟 Overview

PaCo-RL is a comprehensive framework for consistent image generation through reinforcement learning. It addresses the challenge of preserving identities, styles, and logical coherence across multiple images, as required by storytelling and character-design applications.

### Key Components

- **PaCo-Reward**: a pairwise consistency evaluator with task-aware instructions and CoT reasoning.
- **PaCo-GRPO**: efficient RL optimization with resolution-decoupled training and log-tamed multi-reward aggregation (see the sketch below).
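
The exact aggregation rule is defined in the paper; the toy sketch below only illustrates the general idea, under the assumption that each per-criterion reward is normalized to (0, 1]. Taking logs before summing compresses scale differences so that no single reward dominates the aggregate training signal.

```python
# Illustrative sketch of log-tamed multi-reward aggregation.
# NOT the paper's exact formula; see the PaCo-RL paper for the real rule.
import math

def log_tamed_aggregate(rewards, weights=None):
    """Combine per-criterion rewards in (0, 1] into a single scalar."""
    weights = weights or [1.0] * len(rewards)
    eps = 1e-6  # guard against log(0)
    return sum(w * math.log(r + eps) for w, r in zip(weights, rewards))

# Example: consistency, aesthetics, and text-alignment rewards.
print(log_tamed_aggregate([0.9, 0.7, 0.8]))
```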

## 🚀 Quick Start

### Installation

```bash
git clone https://github.com/X-GenGroup/PaCo-RL.git
cd PaCo-RL
```

### Train Reward Model

```bash
cd PaCo-Reward
conda create -n paco-reward python=3.12 -y
conda activate paco-reward
cd LLaMA-Factory && pip install -e ".[torch,metrics]" --no-build-isolation
cd .. && bash train/paco_reward.sh
```

See 📖 `PaCo-Reward/README.md` in the code repository for a detailed guide.
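
Once trained, the reward model can be queried for pairwise consistency judgments. The sketch below is a minimal example assuming PaCo-Reward-7B keeps the standard Qwen2.5-VL chat interface (it is built on Qwen2.5-VL); the prompt text and file names are placeholders, not the framework's actual task-aware instruction templates.

```python
# Hypothetical pairwise-scoring sketch, assuming the standard Qwen2.5-VL
# chat interface. The real task-aware prompts ship with PaCo-Reward's configs.
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "X-GenGroup/PaCo-Reward-7B"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image"},  # placeholder for image A
        {"type": "image"},  # placeholder for image B
        {"type": "text", "text": (
            "Are these two images consistent in character identity and "
            "style? Reason step by step, then give a verdict."
        )},  # illustrative prompt only
    ],
}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images = [Image.open("image_a.png"), Image.open("image_b.png")]
inputs = processor(text=[text], images=images, return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(
    out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0])
```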

### Run RL Training

```bash
cd PaCo-GRPO
conda create -n paco-grpo python=3.12 -y
conda activate paco-grpo
pip install -e .

# Set up the vLLM reward server
conda create -n vllm python=3.12 -y
conda activate vllm && pip install vllm
export CUDA_VISIBLE_DEVICES=0
export VLLM_MODEL_PATHS='X-GenGroup/PaCo-Reward-7B'
export VLLM_MODEL_NAMES='Paco-Reward-7B'
bash vllm_server/launch.sh

# Start training
export CUDA_VISIBLE_DEVICES=1,2,3,4,5,6,7
conda activate paco-grpo
bash scripts/single_node/train_flux.sh t2is
```

See 📖 `PaCo-GRPO/README.md` in the code repository for a detailed guide.
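
Before launching training, it can help to smoke-test the reward server. The sketch below assumes `vllm_server/launch.sh` exposes vLLM's OpenAI-compatible API on `localhost:8000`; check the script for the actual host, port, and request schema used during training.

```python
# Hypothetical smoke test for the reward server; endpoint and port are
# assumptions, not guaranteed by vllm_server/launch.sh.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "Paco-Reward-7B",  # must match VLLM_MODEL_NAMES above
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 8,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```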

πŸ“ Repository Structure

```
PaCo-RL/
├── PaCo-GRPO/              # RL training framework
│   ├── config/             # RL configurations
│   ├── scripts/            # Training scripts
│   └── README.md
├── PaCo-Reward/            # Reward model training
│   ├── LLaMA-Factory/      # Training framework
│   ├── config/             # Training configurations
│   └── README.md
└── README.md
```

## 🎁 Model Zoo

| Model | Type | HuggingFace |
|---|---|---|
| PaCo-Reward-7B | Reward Model | 🤗 Link |
| PaCo-Reward-7B-Lora | Reward Model (LoRA) | 🤗 Link |
| PaCo-FLUX.1-dev | T2I Model (LoRA) | 🤗 Link |
| PaCo-FLUX.1-Kontext-dev | Image Editing Model (LoRA) | 🤗 Link |
| PaCo-QwenImage-Edit | Image Editing Model (LoRA) | 🤗 Link |
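
For the FLUX-based checkpoints, a typical diffusers workflow is to load the base model and apply the PaCo LoRA on top. The sketch below assumes the repo id `X-GenGroup/PaCo-FLUX.1-dev` (inferred from the table's naming; verify the exact ids and weight filenames via the 🤗 links above).

```python
# Minimal diffusers inference sketch; the LoRA repo id is an assumption.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("X-GenGroup/PaCo-FLUX.1-dev")  # assumed repo id

image = pipe(
    "A four-panel character sheet of the same young astronaut",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("paco_flux_sample.png")
```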

## 🤗 Acknowledgement

Our work is built upon Flow-GRPO, LLaMA-Factory, vLLM, and Qwen2.5-VL. We sincerely thank the authors for their valuable contributions to the community.

## ⭐ Citation

```bibtex
@misc{ping2025pacorladvancingreinforcementlearning,
  title={PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling},
  author={Bowen Ping and Chengyou Jia and Minnan Luo and Changliang Xia and Xin Shen and Zhuohang Dang and Hangwei Qian},
  year={2025},
  eprint={2512.04784},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.04784},
}
```
⭐ Star us on GitHub if you find PaCo-RL helpful!