Jayce-Ping committed on
Commit 8ad10b5 · verified · 1 Parent(s): bb63436

Update README.md

Files changed (1):
  1. README.md +37 -58
README.md CHANGED
@@ -1,16 +1,11 @@
  ---
  license: apache-2.0
- pipeline_tag: image-to-image
  library_name: diffusers
  ---

  # PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling

- This repository presents **PaCo-RL**, a comprehensive framework for consistent image generation, as described in the paper [PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling](https://huggingface.co/papers/2512.04784).
-
- Project Page: [https://x-gengroup.github.io/HomePage_PaCo-RL/](https://x-gengroup.github.io/HomePage_PaCo-RL/)
- Code Repository: [https://github.com/X-GenGroup/PaCo-RL](https://github.com/X-GenGroup/PaCo-RL)
-
  <div align="center">
  <a href='https://arxiv.org/abs/2512.04784'><img src='https://img.shields.io/badge/ArXiv-red?logo=arxiv'></a> &nbsp;
  <a href='https://x-gengroup.github.io/HomePage_PaCo-RL/'><img src='https://img.shields.io/badge/ProjectPage-purple?logo=github'></a> &nbsp;
@@ -18,6 +13,8 @@ Code Repository: [https://github.com/X-GenGroup/PaCo-RL](https://github.com/X-Ge
  <a href='https://huggingface.co/collections/X-GenGroup/paco-rl'><img src='https://img.shields.io/badge/Data & Model-green?logo=huggingface'></a> &nbsp;
  </div>

  ## 🌟 Overview

  **PaCo-RL** is a comprehensive framework for consistent image generation through reinforcement learning, addressing challenges in preserving identities, styles, and logical coherence across multiple images for storytelling and character design applications.
@@ -27,60 +24,45 @@ Code Repository: [https://github.com/X-GenGroup/PaCo-RL](https://github.com/X-Ge
  - **PaCo-Reward**: A pairwise consistency evaluator with task-aware instruction and CoT reasoning.
  - **PaCo-GRPO**: Efficient RL optimization with resolution-decoupled training and log-tamed multi-reward aggregation.

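The "log-tamed multi-reward aggregation" named in the PaCo-GRPO bullet can be pictured with a small sketch. This is an illustration under assumptions only: the function name, the `log1p` transform, and the uniform default weights are all hypothetical, not the paper's formula.

```python
import math

def log_tamed_aggregate(rewards, weights=None):
    # Hypothetical sketch of "log-tamed" aggregation: a log1p transform
    # compresses large reward values so that no single reward dominates
    # the combined training signal. The exact formula used by PaCo-GRPO
    # is not given in this README.
    if weights is None:
        weights = [1.0] * len(rewards)
    return sum(w * math.log1p(max(r, 0.0)) for w, r in zip(weights, rewards))

print(log_tamed_aggregate([0.0, 0.0]))        # 0.0
print(log_tamed_aggregate([1.0, 1.0]) < 2.0)  # True: large values are tamed
```

The motivation for a concave transform like this is that raw rewards on different scales would otherwise let one reward model dominate the others.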
- ## 🚀 Quick Start

- ### Installation
- ```bash
- git clone https://github.com/X-GenGroup/PaCo-RL.git
- cd PaCo-RL
- ```

- ### Train Reward Model
- ```bash
- cd PaCo-Reward
- conda create -n paco-reward python=3.12 -y
- conda activate paco-reward
- cd LLaMA-Factory && pip install -e ".[torch,metrics]" --no-build-isolation
- cd .. && bash train/paco_reward.sh
- ```

- See 📖 [PaCo-Reward Documentation](PaCo-Reward/README.md) for a detailed guide.
-
- ### Run RL Training
- ```bash
- cd PaCo-GRPO
- conda create -n paco-grpo python=3.12 -y
- conda activate paco-grpo
- pip install -e .
-
- # Set up the vLLM reward server
- conda create -n vllm python=3.12 -y
- conda activate vllm && pip install vllm
- export CUDA_VISIBLE_DEVICES=0
- export VLLM_MODEL_PATHS='X-GenGroup/PaCo-Reward-7B'
- export VLLM_MODEL_NAMES='Paco-Reward-7B'
- bash vllm_server/launch.sh
-
- # Start training
- export CUDA_VISIBLE_DEVICES=1,2,3,4,5,6,7
- conda activate paco-grpo
- bash scripts/single_node/train_flux.sh t2is
- ```
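The reward server above is configured through the `VLLM_MODEL_PATHS` and `VLLM_MODEL_NAMES` environment variables. As a minimal sketch of how position-aligned, comma-separated values in such variables could be read, assuming that convention holds (the helper name and the comma-separated format are assumptions, not taken from `vllm_server/launch.sh`):

```python
import os

def read_reward_models(env=os.environ):
    # Assumed convention: comma-separated, position-aligned lists of
    # model paths and their serving names. Illustrative only; the real
    # launch script may parse these differently.
    paths = [p.strip() for p in env.get("VLLM_MODEL_PATHS", "").split(",") if p.strip()]
    names = [n.strip() for n in env.get("VLLM_MODEL_NAMES", "").split(",") if n.strip()]
    if len(paths) != len(names):
        raise ValueError("VLLM_MODEL_PATHS and VLLM_MODEL_NAMES must align")
    return dict(zip(names, paths))

env = {"VLLM_MODEL_PATHS": "X-GenGroup/PaCo-Reward-7B",
       "VLLM_MODEL_NAMES": "Paco-Reward-7B"}
print(read_reward_models(env))  # {'Paco-Reward-7B': 'X-GenGroup/PaCo-Reward-7B'}
```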

- See 📖 [PaCo-GRPO Documentation](PaCo-GRPO/README.md) for a detailed guide.

- ## 📁 Repository Structure
- ```
- PaCo-RL/
- ├── PaCo-GRPO/          # RL training framework
- │   ├── config/         # RL configurations
- │   ├── scripts/        # Training scripts
- │   └── README.md
- ├── PaCo-Reward/        # Reward model training
- │   ├── LLaMA-Factory/  # Training framework
- │   ├── config/         # Training configurations
- │   └── README.md
- └── README.md
  ```

  ## 🎁 Model Zoo
@@ -93,9 +75,6 @@ PaCo-RL/
  | **PaCo-FLUX.1-Kontext-dev** | Image Editing Model (LoRA) | [🤗 Link](https://huggingface.co/X-GenGroup/PaCo-FLUX.1-Kontext-Lora) |
  | **PaCo-QwenImage-Edit** | Image Editing Model (LoRA) | [🤗 Link](https://huggingface.co/X-GenGroup/PaCo-Qwen-Image-Edit-Lora) |

- ## 🤗 Acknowledgement
-
- Our work is built upon [Flow-GRPO](https://github.com/yifan123/flow_grpo), [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), [vLLM](https://github.com/vllm-project/vllm), and [Qwen2.5-VL](https://github.com/QwenLM/Qwen3-VL). We sincerely thank the authors for their valuable contributions to the community.

  ## ⭐ Citation
  ```bibtex
 
  ---
  license: apache-2.0
+ pipeline_tag: text-to-image
  library_name: diffusers
  ---

  # PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling

  <div align="center">
  <a href='https://arxiv.org/abs/2512.04784'><img src='https://img.shields.io/badge/ArXiv-red?logo=arxiv'></a> &nbsp;
  <a href='https://x-gengroup.github.io/HomePage_PaCo-RL/'><img src='https://img.shields.io/badge/ProjectPage-purple?logo=github'></a> &nbsp;
  <a href='https://huggingface.co/collections/X-GenGroup/paco-rl'><img src='https://img.shields.io/badge/Data & Model-green?logo=huggingface'></a> &nbsp;
  </div>

+ This model was presented in the paper [PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling](https://huggingface.co/papers/2512.04784).
+
  ## 🌟 Overview

  **PaCo-RL** is a comprehensive framework for consistent image generation through reinforcement learning, addressing challenges in preserving identities, styles, and logical coherence across multiple images for storytelling and character design applications.
 
  - **PaCo-Reward**: A pairwise consistency evaluator with task-aware instruction and CoT reasoning.
  - **PaCo-GRPO**: Efficient RL optimization with resolution-decoupled training and log-tamed multi-reward aggregation.

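A pairwise evaluator like PaCo-Reward scores a pair of candidates against each other rather than in isolation. As a loose illustration of that idea only (PaCo-Reward itself is a vision-language model with task-aware instructions and CoT reasoning, not a closed-form function; the name and formula here are hypothetical), a Bradley-Terry-style comparison of two scalar consistency scores looks like:

```python
import math

def pairwise_win_probability(score_a: float, score_b: float) -> float:
    # Bradley-Terry-style probability that candidate A is judged more
    # consistent than candidate B, given scalar consistency scores.
    # Illustrative only; not PaCo-Reward's actual evaluator.
    return 1.0 / (1.0 + math.exp(score_b - score_a))

print(pairwise_win_probability(2.0, 2.0))        # 0.5 for a tie
print(pairwise_win_probability(3.0, 1.0) > 0.5)  # True: A is preferred
```

Comparing candidates in pairs gives a relative signal, which is often easier to judge reliably than an absolute consistency score.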
+ ## Example Usage

+ ```python
+ import torch
+ from diffusers import QwenImageEditPipeline
+ from diffusers.utils import load_image
+ from peft import PeftModel
+
+ # Load the base Qwen-Image-Edit pipeline.
+ pipeline = QwenImageEditPipeline.from_pretrained(
+     "Qwen/Qwen-Image-Edit",
+     torch_dtype=torch.bfloat16,
+     device_map="balanced"
+ )
+
+ # Attach the PaCo LoRA adapter to the pipeline's transformer.
+ pipeline.transformer = PeftModel.from_pretrained(
+     pipeline.transformer,
+     'X-GenGroup/PaCo-Qwen-Image-Edit-Lora'
+ )
+
+ pipeline.set_progress_bar_config(disable=None)
+
+ input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
+
+ prompt = "Add a blue hat to the cat."
+ inputs = {
+     "image": input_image,
+     "prompt": prompt,
+     "generator": torch.manual_seed(0),
+     "true_cfg_scale": 4.0,
+     "negative_prompt": " ",
+     "num_inference_steps": 50,
+ }
+
+ with torch.inference_mode():
+     output = pipeline(**inputs)
+ output_image = output.images[0]
+ output_image.save("output.png")
  ```

  ## 🎁 Model Zoo

  | **PaCo-FLUX.1-Kontext-dev** | Image Editing Model (LoRA) | [🤗 Link](https://huggingface.co/X-GenGroup/PaCo-FLUX.1-Kontext-Lora) |
  | **PaCo-QwenImage-Edit** | Image Editing Model (LoRA) | [🤗 Link](https://huggingface.co/X-GenGroup/PaCo-Qwen-Image-Edit-Lora) |

  ## ⭐ Citation
  ```bibtex