Update README.md
Browse files
README.md
CHANGED

````diff
@@ -1,16 +1,11 @@
 ---
 license: apache-2.0
-pipeline_tag:
+pipeline_tag: text-to-image
 library_name: diffusers
 ---
 
 # PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling
 
-This repository presents **PaCo-RL**, a comprehensive framework for consistent image generation, as described in the paper [PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling](https://huggingface.co/papers/2512.04784).
-
-Project Page: [https://x-gengroup.github.io/HomePage_PaCo-RL/](https://x-gengroup.github.io/HomePage_PaCo-RL/)
-Code Repository: [https://github.com/X-GenGroup/PaCo-RL](https://github.com/X-GenGroup/PaCo-RL)
-
 <div align="center">
 <a href='https://arxiv.org/abs/2512.04784'><img src='https://img.shields.io/badge/ArXiv-red?logo=arxiv'></a>
 <a href='https://x-gengroup.github.io/HomePage_PaCo-RL/'><img src='https://img.shields.io/badge/ProjectPage-purple?logo=github'></a>
@@ -18,6 +13,8 @@
 <a href='https://huggingface.co/collections/X-GenGroup/paco-rl'><img src='https://img.shields.io/badge/Data & Model-green?logo=huggingface'></a>
 </div>
 
+This is the model presented in the paper [PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling](https://huggingface.co/papers/2512.04784).
+
 ## Overview
 
 **PaCo-RL** is a comprehensive framework for consistent image generation through reinforcement learning, addressing the challenges of preserving identities, styles, and logical coherence across multiple images for storytelling and character design applications.
@@ -27,60 +24,45 @@
 - **PaCo-Reward**: A pairwise consistency evaluator with task-aware instruction and CoT reasoning.
 - **PaCo-GRPO**: Efficient RL optimization with resolution-decoupled training and log-tamed multi-reward aggregation.
 
-## Quick Start
-
-```bash
-git clone https://github.com/X-GenGroup/PaCo-RL.git
-cd PaCo-RL
-```
-
-```bash
-conda create -n paco-grpo python=3.12 -y
-conda activate paco-grpo
-pip install -e .
-
-# Setup vLLM reward server
-conda create -n vllm python=3.12 -y
-conda activate vllm && pip install vllm
-export CUDA_VISIBLE_DEVICES=0
-export VLLM_MODEL_PATHS='X-GenGroup/PaCo-Reward-7B'
-export VLLM_MODEL_NAMES='Paco-Reward-7B'
-bash vllm_server/launch.sh
-
-# Start training
-export CUDA_VISIBLE_DEVICES=1,2,3,4,5,6,7
-conda activate paco-grpo
-bash scripts/single_node/train_flux.sh t2is
-```
+## Example Usage
+
+```python
+import torch
+from diffusers import QwenImageEditPipeline
+from diffusers.utils import load_image
+from peft import PeftModel
+
+pipeline = QwenImageEditPipeline.from_pretrained(
+    "Qwen/Qwen-Image-Edit",
+    torch_dtype=torch.bfloat16,
+    device_map="balanced"
+)
+pipeline.transformer = PeftModel.from_pretrained(
+    pipeline.transformer,
+    'X-GenGroup/PaCo-Qwen-Image-Edit-Lora'
+)
+pipeline.set_progress_bar_config(disable=None)
+
+input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
+
+prompt = "Add a blue hat to the cat."
+inputs = {
+    "image": input_image,
+    "prompt": prompt,
+    "generator": torch.manual_seed(0),
+    "true_cfg_scale": 4.0,
+    "negative_prompt": " ",
+    "num_inference_steps": 50,
+}
+
+with torch.inference_mode():
+    output = pipeline(**inputs)
+    output_image = output.images[0]
+```
 
 ## Model Zoo
@@ -93,9 +75,6 @@
 | **PaCo-FLUX.1-Kontext-dev** | Image Editing Model (LoRA) | [🤗 Link](https://huggingface.co/X-GenGroup/PaCo-FLUX.1-Kontext-Lora) |
 | **PaCo-QwenImage-Edit** | Image Editing Model (LoRA) | [🤗 Link](https://huggingface.co/X-GenGroup/PaCo-Qwen-Image-Edit-Lora) |
 
-## Acknowledgement
-
-Our work is built upon [Flow-GRPO](https://github.com/yifan123/flow_grpo), [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), [vLLM](https://github.com/vllm-project/vllm), and [Qwen2.5-VL](https://github.com/QwenLM/Qwen3-VL). We sincerely thank the authors for their valuable contributions to the community.
-
 ## Citation
 ```bibtex
````
---
license: apache-2.0
pipeline_tag: text-to-image
library_name: diffusers
---

# PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling

<div align="center">
<a href='https://arxiv.org/abs/2512.04784'><img src='https://img.shields.io/badge/ArXiv-red?logo=arxiv'></a>
<a href='https://x-gengroup.github.io/HomePage_PaCo-RL/'><img src='https://img.shields.io/badge/ProjectPage-purple?logo=github'></a>
<a href='https://huggingface.co/collections/X-GenGroup/paco-rl'><img src='https://img.shields.io/badge/Data & Model-green?logo=huggingface'></a>
</div>

This is the model presented in the paper [PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling](https://huggingface.co/papers/2512.04784).

## Overview

**PaCo-RL** is a comprehensive framework for consistent image generation through reinforcement learning, addressing the challenges of preserving identities, styles, and logical coherence across multiple images for storytelling and character design applications.

- **PaCo-Reward**: A pairwise consistency evaluator with task-aware instruction and CoT reasoning.
- **PaCo-GRPO**: Efficient RL optimization with resolution-decoupled training and log-tamed multi-reward aggregation.
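To make the PaCo-GRPO bullet concrete, here is a toy sketch of how log-tamed multi-reward aggregation could combine with group-relative (GRPO-style) advantages. The `log1p` compression and the normalization details are an illustrative reading of the terms above, not the paper's exact formulas:

```python
import math

def log_tamed_aggregate(rewards):
    # Compress each non-negative reward with log1p so that no single large
    # reward dominates the aggregate (one plausible reading of "log-tamed").
    return sum(math.log1p(r) for r in rewards)

def grpo_advantages(group_rewards):
    # GRPO-style group-relative advantages: normalize each rollout's reward
    # by the mean and standard deviation of its sampling group.
    n = len(group_rewards)
    mean = sum(group_rewards) / n
    std = max((sum((r - mean) ** 2 for r in group_rewards) / n) ** 0.5, 1e-8)
    return [(r - mean) / std for r in group_rewards]

# Two rollouts, each scored by two reward signals (e.g. consistency + text).
aggregated = [log_tamed_aggregate(rs) for rs in [[1.0, 1.0], [2.0, 3.0]]]
advantages = grpo_advantages(aggregated)  # ≈ [-1.0, +1.0]
```

In actual PaCo-GRPO training these advantages would weight the policy-gradient update for each rollout; the sketch only illustrates the reward bookkeeping.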

## Example Usage

```python
import torch
from diffusers import QwenImageEditPipeline
from diffusers.utils import load_image
from peft import PeftModel

# Load the base Qwen-Image-Edit pipeline
pipeline = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit",
    torch_dtype=torch.bfloat16,
    device_map="balanced"
)

# Attach the PaCo-RL LoRA weights to the transformer
pipeline.transformer = PeftModel.from_pretrained(
    pipeline.transformer,
    'X-GenGroup/PaCo-Qwen-Image-Edit-Lora'
)
pipeline.set_progress_bar_config(disable=None)

input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

prompt = "Add a blue hat to the cat."
inputs = {
    "image": input_image,
    "prompt": prompt,
    "generator": torch.manual_seed(0),
    "true_cfg_scale": 4.0,
    "negative_prompt": " ",
    "num_inference_steps": 50,
}

with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
```
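Edited images like `output_image` above are what **PaCo-Reward** is designed to score pairwise. The project's setup scripts serve `X-GenGroup/PaCo-Reward-7B` through vLLM under the name `Paco-Reward-7B`; assuming an OpenAI-compatible chat endpoint, a pairwise scoring request could be assembled roughly as below. The instruction text and payload layout are illustrative placeholders, not the official task-aware template:

```python
import base64

def to_data_url(image_bytes):
    # Base64-encode raw PNG bytes as an OpenAI-style image_url part.
    return "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")

def pairwise_reward_request(img_a, img_b, task="character consistency"):
    # Hypothetical chat-completions payload asking the reward model to compare
    # two images; the wording below stands in for PaCo-Reward's real
    # task-aware instruction, which lives in the PaCo-RL repository.
    instruction = (
        f"Assess the two images for {task}: do they depict the same subject "
        "with consistent identity and style? Reason step by step, then give "
        "a consistency score."
    )
    return {
        "model": "Paco-Reward-7B",  # name exported to the vLLM server
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": instruction},
                {"type": "image_url", "image_url": {"url": to_data_url(img_a)}},
                {"type": "image_url", "image_url": {"url": to_data_url(img_b)}},
            ],
        }],
    }
```

The response would then be parsed for the model's reasoning and score; consult the PaCo-RL repository for the actual instruction template and parsing logic.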

## Model Zoo

| Model | Type | Link |
|---|---|---|
| **PaCo-FLUX.1-Kontext-dev** | Image Editing Model (LoRA) | [🤗 Link](https://huggingface.co/X-GenGroup/PaCo-FLUX.1-Kontext-Lora) |
| **PaCo-QwenImage-Edit** | Image Editing Model (LoRA) | [🤗 Link](https://huggingface.co/X-GenGroup/PaCo-Qwen-Image-Edit-Lora) |

## Citation

```bibtex