Recurrent Qwen
Qwen breaks its chains: "At last... I run natively on CPU, squire!"
See other models from the retrofitted series.
Inference
```python
import torch
# RecurrentDecoderModel is provided by this repository's modeling code;
# ZImagePipeline is assumed to come from the Z-Image-Turbo base model's
# library (see the Tongyi-MAI/Z-Image-Turbo card for the exact import).

qwen_path = 'nightknocker/recurrent-qwen3-z-image-turbo'
text_encoder = RecurrentDecoderModel.from_pretrained(qwen_path).to(torch.bfloat16)
pipeline = ZImagePipeline.from_pretrained(
    'Tongyi-MAI/Z-Image-Turbo',
    text_encoder=text_encoder,
    torch_dtype=torch.bfloat16,
)
```
References
- arXiv:2511.07384
Datasets
- artbench-pd-256x256
- anime-art-multicaptions (multi-character interactions)
- laion
- spatial-caption
- spright-coco
- benchmarks from the Qwen-Image Technical Report
