Improve model card: Add pipeline tag, library name, paper, project, and code links
This PR enhances the model card for "Glance: Accelerating Diffusion Models with 1 Sample" by:
* **Adding `pipeline_tag: text-to-image`** to the YAML metadata, which ensures the model is correctly categorized and discoverable on the Hugging Face Hub.
* **Adding `library_name: diffusers`** to the YAML metadata. Evidence from the GitHub README (`pip install git+https://github.com/huggingface/diffusers`) and the inference code demonstrates compatibility with `diffusers`, enabling the automated "how to use" widget on the Hub.
* **Adding a clear main title** reflecting the paper: "Glance: Accelerating Diffusion Models with 1 Sample".
* **Including explicit links** at the top to the Hugging Face paper page, the official project page (`https://zhuobaidong.github.io/Glance/`), and the GitHub repository (`https://github.com/CSU-JPG/Glance`) for easy access to more information.
* **Adding a concise "About" section** derived from the paper's abstract to quickly inform users about the model's contributions.
* **Preserving the existing "Usage" and "Inference" sections**, including the Python code snippet, as found in the GitHub README.
* **Including the "Training" section** and the **BibTeX citation** from the GitHub repository for completeness and proper attribution.
* **Removing redundant metadata** (`--- license: apache-2.0 ---`) from the markdown content, consolidating all metadata in the YAML front matter.
These updates make the model card more informative, discoverable, and user-friendly for the community.
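Since several of the bullets above hinge on *where* metadata lives, here is a minimal sketch of how YAML front matter like this card's gets read. It is a hand-rolled, illustrative parser over an abridged copy of the card's header; the Hub itself uses a real YAML parser, and `front_matter`/`CARD` are names invented for this sketch:

```python
# Abridged copy of the card's YAML front matter (scalar keys only, for illustration).
CARD = """\
---
base_model: Qwen/Qwen-Image
license: apache-2.0
instance_prompt: Qwen-Image, distillation
pipeline_tag: text-to-image
library_name: diffusers
---
# Glance: Accelerating Diffusion Models with 1 Sample
"""

def front_matter(card: str) -> dict:
    """Toy parser: pull top-level `key: value` pairs from the leading --- block."""
    parts = card.split("---\n")
    if len(parts) < 3 or parts[0] != "":
        return {}  # no leading front matter: keys in the markdown body are invisible
    meta = {}
    for line in parts[1].splitlines():
        if ":" in line and not line.startswith(("-", " ", "#")):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

meta = front_matter(CARD)
print(meta["pipeline_tag"])  # → text-to-image
print(meta["library_name"])  # → diffusers
```

This is the reason the PR consolidates everything into the front matter: a `--- license: apache-2.0 ---` block sitting lower in the markdown body is plain text to the Hub's metadata reader, not metadata.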
Files changed (`README.md`):

````diff
@@ -1,15 +1,26 @@
 ---
+base_model: Qwen/Qwen-Image
+license: apache-2.0
 tags:
 - text-to-image
 - lora
 - diffusers
 - template:diffusion-lora
-base_model: Qwen/Qwen-Image
 instance_prompt: Qwen-Image, distillation
-
+pipeline_tag: text-to-image
+library_name: diffusers
 ---
+
+# Glance: Accelerating Diffusion Models with 1 Sample
+
+This model is presented in the paper [Glance: Accelerating Diffusion Models with 1 Sample](https://huggingface.co/papers/2512.02899).
+Project Page: https://zhuobaidong.github.io/Glance/
+Code: https://github.com/CSU-JPG/Glance
+
+## About
+Glance introduces a novel approach to accelerate diffusion models by intelligently speeding up denoising phases. Instead of costly retraining, Glance equips base models with lightweight Slow-LoRA and Fast-LoRA adapters. This method achieves up to 5x acceleration over base models while maintaining comparable visual quality and strong generalization on unseen prompts. Notably, the LoRA experts are trained with only 1 sample within an hour.
+
 # 🧪 Usage
----
 
 ## 🎨 Inference
 
@@ -64,6 +75,44 @@ image.save("output.png")
 
 
 
----
-license: apache-2.0
----
+## 🚀 Training
+
+### Glance_Qwen Training
+
+To start training with your configuration file, simply run:
+
+```bash
+accelerate launch train_Glance_qwen.py --config ./train_configs/Glance_qwen.yaml
+```
+
+> Note: All the training code is primarily based on [flymyai-lora-trainer](https://github.com/FlyMyAI/flymyai-lora-trainer).
+
+Ensure that `Glance_qwen.yaml` is properly configured with your dataset paths, model settings, output directory, and other hyperparameters. You can also explicitly specify whether to train the **Slow-LoRA** or **Fast-LoRA** variant directly within the configuration file.
+
+If you want to train on a **single GPU** (requires **less than 24 GB** of VRAM), run:
+
+```bash
+python train_Glance_qwen.py --config ./train_configs/Glance_qwen.yaml
+```
+
+
+### Glance_FLUX Training
+
+To launch training for the FLUX variant, run:
+
+```bash
+accelerate launch train_Glance_flux.py --config ./train_configs/Glance_flux.yaml
+```
+
+## Citation
+```
+@misc{dong2025glanceacceleratingdiffusionmodels,
+      title={Glance: Accelerating Diffusion Models with 1 Sample},
+      author={Zhuobai Dong and Rui Zhao and Songjie Wu and Junchao Yi and Linjie Li and Zhengyuan Yang and Lijuan Wang and Alex Jinpeng Wang},
+      year={2025},
+      eprint={2512.02899},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV},
+      url={https://arxiv.org/abs/2512.02899},
+}
+```
````