Improve model card: Add pipeline tag, library, project page, abstract, visuals, and usage example

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +61 -3
README.md CHANGED
@@ -1,15 +1,70 @@
  ---
  license: mit
  ---
  # MMaDA-Parallel-A

  We introduce Parallel Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation (MMaDA-Parallel), a parallel multimodal diffusion framework that enables continuous, bidirectional interaction between text and images throughout the entire denoising trajectory.

  This variant is based on Amused-VQ, trained from Lumina-DiMOO, with better quality and robustness.

- [Paper](https://arxiv.org/abs/2511.09611) | [Code](https://github.com/tyfeld/MMaDA-Parallel)

- # Citation
  ```
  @article{tian2025mmadaparallel,
  title={MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation},
@@ -17,4 +72,7 @@ This variant is based on Amused-VQ, trained from Lumina-DiMOO, with better quali
  journal={arXiv preprint arXiv:2511.09611},
  year={2025}
  }
- ```

---
license: mit
pipeline_tag: any-to-any
library_name: transformers
---

# MMaDA-Parallel-A

We introduce Parallel Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation (MMaDA-Parallel), a parallel multimodal diffusion framework that enables continuous, bidirectional interaction between text and images throughout the entire denoising trajectory.

This variant is built on the Amused-VQ tokenizer and trained from Lumina-DiMOO, offering better quality and robustness.

[Paper](https://arxiv.org/abs/2511.09611) | [Code](https://github.com/tyfeld/MMaDA-Parallel) | [Project Page](https://tyfeld.github.io/mmadaparellel.github.io/)

## Abstract
While thinking-aware generation aims to improve performance on complex tasks, we identify a critical failure mode where existing sequential, autoregressive approaches can paradoxically degrade performance due to error propagation. To systematically analyze this issue, we propose ParaBench, a new benchmark designed to evaluate both text and image output modalities. Our analysis using ParaBench reveals that this performance degradation is strongly correlated with poor alignment between the generated reasoning and the final image. To resolve this, we propose a parallel multimodal diffusion framework, MMaDA-Parallel, that enables continuous, bidirectional interaction between text and images throughout the entire denoising trajectory. MMaDA-Parallel is trained with supervised finetuning and then further optimized by Parallel Reinforcement Learning (ParaRL), a novel strategy that applies semantic rewards along the trajectory to enforce cross-modal consistency. Experiments validate that our model significantly improves cross-modal alignment and semantic consistency, achieving a 6.9% improvement in Output Alignment on ParaBench compared to the state-of-the-art model, Bagel, establishing a more robust paradigm for thinking-aware image synthesis.

## Architecture
<div align="center" style="display: flex; justify-content: center; flex-wrap: wrap;">
<img src="https://github.com/tyfeld/MMaDA-Parallel/raw/main/assets/method.png" style="width: 90%" />
<p align="center">Architecture of MMaDA-Parallel. During training, image and text responses are masked and predicted in parallel with a uniform mask predictor. During sampling, the model performs parallel decoding to generate both image and text responses jointly, enabling continuous cross-modal interaction.</p>
</div>
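
The caption above describes a single joint denoising loop rather than a text-then-image pipeline. The snippet below is only a schematic sketch of that idea, with toy sizes and made-up names (`predictor`, `MASK_ID`, `STEPS`); it is not the repository's implementation. At every step, one mask predictor scores all still-masked positions in both modalities, and the most confident fraction of each modality is revealed, so text and image condition each other throughout sampling.

```python
import torch

# Toy stand-ins; the real model is a single transformer ("uniform mask
# predictor") that scores masked text and image tokens from one joint sequence.
MASK_ID = -1
TEXT_LEN, TEXT_VOCAB = 256, 32000
IMG_LEN, IMG_VOCAB = 1024, 8192
STEPS = 16

def predictor(text_tokens, image_tokens):
    # Placeholder that returns random logits so the loop runs end to end.
    return torch.randn(TEXT_LEN, TEXT_VOCAB), torch.randn(IMG_LEN, IMG_VOCAB)

text = torch.full((TEXT_LEN,), MASK_ID)    # fully masked text response
image = torch.full((IMG_LEN,), MASK_ID)    # fully masked image tokens

for step in range(STEPS):
    text_logits, image_logits = predictor(text, image)
    for tokens, logits in ((text, text_logits), (image, image_logits)):
        masked = tokens == MASK_ID
        if not masked.any():
            continue
        conf, pred = logits.softmax(-1).max(-1)
        conf = conf.masked_fill(~masked, -1.0)   # only compete among masked slots
        # Reveal the most confident fraction of the remaining masked positions,
        # so both modalities are decoded in parallel and can condition each
        # other at the next step.
        num_masked = int(masked.sum())
        k = max(1, num_masked * (step + 1) // STEPS)
        idx = conf.topk(k).indices
        tokens[idx] = pred[idx]
```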

## Results
<div align="center" style="display: flex; justify-content: center; flex-wrap: wrap;">
<img src="https://github.com/tyfeld/MMaDA-Parallel/raw/main/assets/lumina_01.png" alt="Qualitative Comparison" style="width: 90%" />
<p align="center">Qualitative comparison.</p>
</div>

<div align="center" style="display: flex; justify-content: center; flex-wrap: wrap;">
<img src="https://github.com/tyfeld/MMaDA-Parallel/raw/main/assets/mainresults.png" alt="Main Results" style="width: 90%" />
<p align="center">Quantitative results on ParaBench.</p>
</div>

## Quick Start

### 1. Environment Setup
First, set up an environment with PyTorch 2.3.1 or later, then install the dependencies:
```bash
pip install -r requirements.txt
```

We provide two variants of MMaDA-Parallel: MMaDA-Parallel-A and MMaDA-Parallel-M, which use different image tokenizers (Amused-VQ and Magvitv2, respectively). MMaDA-Parallel-A is trained with Amused-VQ and initialized from Lumina-DiMOO, while MMaDA-Parallel-M is trained with Magvitv2 and initialized from MMaDA-8B.
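
The weights for this variant are hosted on the Hugging Face Hub as `tyfeld/MMaDA-Parallel-A`, which the command below passes as both `--checkpoint` and `--vae_ckpt`. If you want the weights cached locally ahead of time (for example on an offline node), standard `huggingface_hub` usage should be enough; this is generic Hub tooling, not a repository-specific API:

```python
from huggingface_hub import snapshot_download

# Optional: prefetch the MMaDA-Parallel-A weights; the inference command below
# also takes the Hub repo id directly, so this step is not required.
local_dir = snapshot_download(repo_id="tyfeld/MMaDA-Parallel-A")
print("Checkpoint downloaded to:", local_dir)
```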

### 2. Running Parallel Generation with MMaDA-Parallel-A
```bash
cd MMaDA-Parallel-A
python inference.py \
    --checkpoint tyfeld/MMaDA-Parallel-A \
    --vae_ckpt tyfeld/MMaDA-Parallel-A \
    --prompt "Replace the laptops with futuristic transparent tablets displaying holographic screens, and change the drink to a cup of glowing blue energy drink." \
    --image_path examples/image.png \
    --height 512 \
    --width 512 \
    --timesteps 64 \
    --text_steps 128 \
    --text_gen_length 256 \
    --text_block_length 32 \
    --cfg_scale 0 \
    --cfg_img 4.0 \
    --temperature 1.0 \
    --text_temperature 0 \
    --seed 42 \
    --output_dir output/results_interleave
```
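
The same invocation can also be driven from Python, which is convenient for batching several edit prompts. The sketch below simply rebuilds the command above with `subprocess` (run it from the `MMaDA-Parallel-A` directory; the prompt, image path, and sampling flags are the example values from this card):

```python
import subprocess

# Reproduce the CLI invocation above programmatically.
cmd = [
    "python", "inference.py",
    "--checkpoint", "tyfeld/MMaDA-Parallel-A",
    "--vae_ckpt", "tyfeld/MMaDA-Parallel-A",
    "--prompt", ("Replace the laptops with futuristic transparent tablets "
                 "displaying holographic screens, and change the drink to a "
                 "cup of glowing blue energy drink."),
    "--image_path", "examples/image.png",
    "--height", "512", "--width", "512",
    "--timesteps", "64", "--text_steps", "128",
    "--text_gen_length", "256", "--text_block_length", "32",
    "--cfg_scale", "0", "--cfg_img", "4.0",
    "--temperature", "1.0", "--text_temperature", "0",
    "--seed", "42",
    "--output_dir", "output/results_interleave",
]
subprocess.run(cmd, check=True)
```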

## Citation
```bibtex
@article{tian2025mmadaparallel,
title={MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation},

journal={arXiv preprint arXiv:2511.09611},
year={2025}
}
```

## Acknowledgments
This work is heavily based on [MMaDA](https://github.com/Gen-Verse/MMaDA) and [Lumina-DiMOO](https://github.com/Alpha-VLLM/Lumina-DiMOO). Thanks to all the authors for their great work.