I really need to tame these model cards eventually. This card was generated using bnb_4bit quantization to serve as a live demonstration for a valued community member.

This is a decensored version of inclusionAI/ZwZ-8B, made with Heretic v1.2.0.

Abliteration parameters

| Parameter | Value |
|---|---|
| direction_index | 18.17 |
| attn.o_proj.max_weight | 2.09 |
| attn.o_proj.max_weight_position | 26.13 |
| attn.o_proj.min_weight | 1.42 |
| attn.o_proj.min_weight_distance | 27.57 |
| mlp.down_proj.max_weight | 0.54 |
| mlp.down_proj.max_weight_position | 23.28 |
| mlp.down_proj.min_weight | 0.11 |
| mlp.down_proj.min_weight_distance | 0.24 |
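The parameters above tune how strongly a "refusal direction" is projected out of the attention output and MLP down projections at each layer. A minimal, self-contained sketch of that directional ablation on a single weight matrix is below; the helper names are hypothetical, and real tools such as Heretic additionally scale the ablation weight per layer using the position/distance parameters listed above.

```python
# Sketch of directional ablation ("abliteration") on one weight matrix.
# Assumes W is square (output dim equals the length of the unit
# direction d); real implementations apply this per layer to
# attn.o_proj and mlp.down_proj with layer-dependent scales.

def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def ablate(W, d, scale=1.0):
    """Return (I - scale * d d^T) W: remove (or dampen) the component
    of every output of W along the unit direction d."""
    n = len(W)
    return [
        [W[i][j] - scale * d[i] * sum(d[k] * W[k][j] for k in range(n))
         for j in range(len(W[0]))]
        for i in range(n)
    ]
```

With `scale=1.0`, no input can produce output along `d` anymore; fractional scales (like the min/max weights in the table) only attenuate it.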

Performance

| Metric | This model | Original model (inclusionAI/ZwZ-8B) |
|---|---|---|
| KL divergence | 0.0661 | 0 (by definition) |
| Refusals | 5/104 | 102/104 |

Pareto front of optimization trials, trading refusals against KL divergence from the original model (» marks the configuration selected for this model):

   [Trial 596] Refusals:  0/104, KL divergence: 0.1836
   [Trial 145] Refusals:  1/104, KL divergence: 0.1101
   [Trial 496] Refusals:  2/104, KL divergence: 0.1060
   [Trial 566] Refusals:  3/104, KL divergence: 0.0830
   [Trial 466] Refusals:  4/104, KL divergence: 0.0772
 » [Trial 463] Refusals:  5/104, KL divergence: 0.0661
   [Trial 462] Refusals:  7/104, KL divergence: 0.0579
   [Trial 332] Refusals:  9/104, KL divergence: 0.0567
   [Trial 587] Refusals: 13/104, KL divergence: 0.0532
   [Trial 387] Refusals: 14/104, KL divergence: 0.0434
   [Trial 305] Refusals: 15/104, KL divergence: 0.0394
   [Trial 110] Refusals: 17/104, KL divergence: 0.0316
   [Trial 234] Refusals: 23/104, KL divergence: 0.0313
   [Trial  67] Refusals: 26/104, KL divergence: 0.0300
   [Trial 565] Refusals: 29/104, KL divergence: 0.0296
   [Trial 189] Refusals: 35/104, KL divergence: 0.0263
   [Trial 106] Refusals: 39/104, KL divergence: 0.0253
   [Trial  84] Refusals: 45/104, KL divergence: 0.0234
   [Trial 171] Refusals: 52/104, KL divergence: 0.0220
   [Trial 156] Refusals: 53/104, KL divergence: 0.0153
   [Trial  72] Refusals: 87/104, KL divergence: 0.0151
   [Trial 206] Refusals: 88/104, KL divergence: 0.0143
   [Trial 109] Refusals: 89/104, KL divergence: 0.0142
   [Trial 240] Refusals: 90/104, KL divergence: 0.0130
   [Trial  77] Refusals: 91/104, KL divergence: 0.0093
   [Trial 160] Refusals: 94/104, KL divergence: 0.0071
   [Trial  55] Refusals: 95/104, KL divergence: 0.0060
   [Trial  54] Refusals: 97/104, KL divergence: 0.0058
   [Trial  75] Refusals: 98/104, KL divergence: 0.0040
   [Trial 487] Refusals: 100/104, KL divergence: 0.0018
   [Trial 174] Refusals: 101/104, KL divergence: 0.0017
   [Trial 287] Refusals: 102/104, KL divergence: 0.0007
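The KL divergence column above measures how far the modified model's next-token distributions have drifted from the original's, averaged over an evaluation prompt set. A pure-Python stand-in for the per-position metric (the real evaluation runs both models and compares their softmax outputs) might look like:

```python
# Hedged sketch: KL(p || q) between two discrete next-token
# distributions, e.g. original model (p) vs. abliterated model (q)
# at one decoding position. eps guards against log(0).
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence between distributions p and q."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))
```

Identical distributions give 0 (the "by definition" entry for the original model); larger values mean the modification changed behavior more broadly.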

ZwZ-8B

📃 Paper | 🏠 Project | 🤗 Collection

Model Summary

ZwZ-8B is a fine-grained multimodal perception model built upon Qwen3-VL-8B. It is trained using Region-to-Image Distillation (R2I) combined with reinforcement learning, enabling superior fine-grained visual understanding in a single forward pass — no inference-time zooming or tool calling required.

ZwZ-8B achieves state-of-the-art performance on fine-grained perception benchmarks among open-source models of comparable size, while also demonstrating strong out-of-distribution generalization on visual reasoning, GUI agent, and AIGC detection tasks.

(Figure: average performance comparison across benchmarks)

Key Features

  • ⚡ Single-Pass Efficiency: Achieves fine-grained perception in one forward pass, eliminating inference-time tool-calling overhead
  • 🎯 Superior Accuracy: State-of-the-art on perception benchmarks among open-source models
  • 📈 Broad Improvements: Enhances not only perception benchmarks but also out-of-distribution generalization on visual reasoning, GUI agent, and AIGC detection

How It Works

Traditional "Thinking-with-Images" methods zoom into regions of interest during inference, incurring high latency from repeated tool calls and visual re-encoding. ZwZ transforms zooming from an inference-time tool into a training-time primitive:

  1. Zoom in to micro-cropped regions and let strong teacher models (Qwen3-VL-235B, GLM-4.5V) generate high-quality VQA data
  2. Distill this region-grounded supervision back to the full image with explicit bounding-box overlays
  3. Reinforce via RL training to enable single-glance fine-grained perception without tool use
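The three steps above can be sketched as a data-construction recipe. All helper names here are hypothetical illustrations, not the authors' code; in the real pipeline the teacher answer comes from large VLMs (Qwen3-VL-235B, GLM-4.5V) queried on the zoomed crop.

```python
# Illustrative sketch of Region-to-Image (R2I) sample construction.
# image is modeled as a list of pixel rows; bbox is (x0, y0, x1, y1).

def crop(image, bbox):
    """Step 1 (zoom in): extract the micro-cropped region that the
    teacher model will be asked about."""
    x0, y0, x1, y1 = bbox
    return [row[x0:x1] for row in image[y0:y1]]

def build_r2i_sample(image, bbox, question, teacher_answer):
    """Step 2 (distill back): the training sample pairs the FULL
    image plus an explicit bounding-box overlay with the answer the
    teacher produced from the zoomed crop, so the student learns to
    answer fine-grained questions without zooming at inference."""
    return {
        "image": image,         # full image, not the crop
        "bbox_overlay": bbox,   # explicit grounding cue
        "question": question,
        "answer": teacher_answer,
    }
```

Step 3 (RL) then optimizes the student on such samples so the single-glance behavior holds without any tool calls.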

Quickstart

Installation

pip install transformers accelerate torch

Inference

from transformers import Qwen3VLForConditionalGeneration, AutoProcessor

# default: Load the model on the available device(s)
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "inclusionAI/ZwZ-8B", dtype="auto", device_map="auto"
)

processor = AutoProcessor.from_pretrained("inclusionAI/ZwZ-8B")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
)
inputs = inputs.to(model.device)

# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)

Training Data

ZwZ-8B is trained on inclusionAI/ZwZ-RL-VQA, a 74K-sample Region-to-Image distilled VQA dataset synthesized from diverse image pools (SA-1B, LAION, MetaCLIP, Visual Genome, CC12M, STPLS3D).

Citation

@article{wei2026zooming,
  title={Zooming without Zooming: Region-to-Image Distillation for Fine-Grained Multimodal Perception},
  author={Wei, Lai and He, Liangbo and Lan, Jun and Dong, Lingzhong and Cai, Yutong and Li, Siyuan and Zhu, Huijia and Wang, Weiqiang and Kong, Linghe and Wang, Yue and Zhang, Zhuosheng and Huang, Weiran},
  journal={arXiv preprint arXiv:2602.11858},
  year={2026}
}

License

This model is released under the Apache 2.0 License.

Model size: 9B params · Tensor type: BF16 (Safetensors)