
[BENCHMARKS added, including a 12B-VL vs 8B-VL model comparison.]

Qwen3-VL-12B-Instruct-Brainstorm20x

This repo contains the full-precision source files, in "safetensors" format, to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other formats. The source files can also be used directly.

This model contains the full source of the original "Qwen3-VL-Instruct-8B" coupled with the Brainstorm 20x adapter (by DavidAU).

This is the full image/text/multi-modal model AND fully tested.

Brainstorm 20x augments text generation as well as "image" description, and in many cases raises the model's raw benchmarks too.

Addition of the Brainstorm adapter has grown the model from 8B to 12B (now with 55 layers and 608 tensors).

IMPORTANT: Looping?

This model tends to loop sometimes. If this happens, set temp to "1" (or higher) and rep pen to "1.1". The original model has the same issue. If you have the option, set "presence penalty" to 2.0 (as per Qwen's own suggestions).

IMPORTANT-GGUF QUANTS:

You need both the GGUF quant(s) AND the special "mmproj" GGUF quant (Q8, F16, BF16 or F32) to use all functions of this model.

Specialized MAX GGUF QUANTS are here:

https://huggingface.co/DavidAU/Qwen3-VL-12B-Instruct-Brainstorm20x-NEO-MAX-GGUF
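For reference, a typical llama.cpp multimodal invocation pairs the main GGUF with the mmproj file. This is a hedged sketch: the binary name and flags follow recent llama.cpp builds and may differ in yours, and the file names below are placeholders for whichever quants you actually download.

```shell
# Placeholder file names; substitute the quant files you downloaded.
# llama-mtmd-cli is llama.cpp's multimodal CLI in recent builds.
llama-mtmd-cli \
  -m Qwen3-VL-12B-Instruct-Brainstorm20x-Q6_K.gguf \
  --mmproj mmproj-Qwen3-VL-12B-Instruct-Brainstorm20x-F16.gguf \
  --image photo.jpg \
  -p "Describe this image."
```

Without the `--mmproj` file, the model will still run but only in text-only mode.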


MODEL DETAILS:

(Quants, BENCHMARKS, Brainstorm details, org model details from Qwen, and then help section)


This model requires:

  • Jinja (embedded) or CHATML template
  • Max context of 256k.

Settings used for testing (suggested):

  • Temp .3 to .7 (but .8 to 1.5 for creative)
  • Rep pen 1.05 to 1.1
  • TopP .8, minP .05
  • TopK 20
  • Min context of 8k for thinking / output.
  • No system prompt.
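The suggested settings above can be sketched as generation kwargs for a Hugging Face `model.generate(...)` call. This is a minimal sketch, assuming a recent transformers release (`min_p` is a newer sampler argument); the specific values are mid-range picks from the ranges given, not tested settings.

```python
# Suggested test settings from this card, expressed as generate() kwargs.
# Values are mid-range picks from the suggested ranges above.
general_settings = {
    "do_sample": True,
    "temperature": 0.5,          # .3 to .7 for general use
    "repetition_penalty": 1.05,  # 1.05 to 1.1
    "top_p": 0.8,
    "min_p": 0.05,               # requires a recent transformers release
    "top_k": 20,
    "max_new_tokens": 8192,      # min 8k context for thinking / output
}

# .8 to 1.5 for creative use; 1.2 is an arbitrary mid-range pick.
creative_settings = {**general_settings, "temperature": 1.2}
```

Pass these as `model.generate(**inputs, **general_settings)`; per the notes above, no system prompt was used in testing.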

This model responds well both to detailed instructions and to step-by-step refinement and additions to code.

Likewise for creative use cases.

As this is an instruct model, it will also benefit from a detailed system prompt too.

For simpler coding problems, lower quants will work well; for complex/multi-step problem solving, Q6 or Q8 is suggested.


QUANTS:


GGUF? GGUF Imatrix? Other?

Special thanks to Team Mradermacher, Team Nightmedia and other quanters!

See under "model tree", upper right and click on "quantizations".

New quants will automatically appear.


BENCHMARKS by Nightmedia (who also makes MLX quants)

https://huggingface.co/nightmedia/

https://huggingface.co/nightmedia/Qwen3-VL-12B-Instruct-Brainstorm20x-qx86-hi-mlx


📊 Benchmark Summary Table (Performance)

 
Model    ARC-Challenge    ARC-Easy    BoolQ    HellaSwag    OpenBookQA    PIQA    Winogrande
q6-hi       0.501            0.649    0.870    0.634            0.414    0.758    0.641
q8-hi       0.511            0.661    0.872    0.640            0.420    0.763    0.646
qx86-hi     0.502            0.646    0.871    0.637            0.412    0.761    0.644
qx86        0.497            0.646    0.873    0.637            0.414    0.758    0.639
qx86x-hi    0.500            0.650    0.873    0.636            0.410    0.760    0.645
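The table can be sanity-checked by averaging each quant's seven scores. A minimal sketch: these are simple unweighted means over just the seven benchmarks listed, which may differ slightly from averages quoted elsewhere in this card.

```python
# Scores copied verbatim from the benchmark table above, in column order:
# ARC-Challenge, ARC-Easy, BoolQ, HellaSwag, OpenBookQA, PIQA, Winogrande.
scores = {
    "q6-hi":    [0.501, 0.649, 0.870, 0.634, 0.414, 0.758, 0.641],
    "q8-hi":    [0.511, 0.661, 0.872, 0.640, 0.420, 0.763, 0.646],
    "qx86-hi":  [0.502, 0.646, 0.871, 0.637, 0.412, 0.761, 0.644],
    "qx86":     [0.497, 0.646, 0.873, 0.637, 0.414, 0.758, 0.639],
    "qx86x-hi": [0.500, 0.650, 0.873, 0.636, 0.410, 0.760, 0.645],
}
# Unweighted mean per quant, rounded for readability.
averages = {name: round(sum(v) / len(v), 4) for name, v in scores.items()}
best = max(averages, key=averages.get)
```

q8-hi comes out on top at ~0.645, with the other four quants clustered around 0.638–0.639.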

 
🔍 Cognitive Evaluation by Benchmark

✅ ARC-Easy & ARC-Challenge
These test reasoning, especially science-based logical inference.

q8-hi leads in ARC-Easy (0.661), a clear cognitive strength.
q8-hi also leads ARC-Challenge (0.511), showing strong reasoning 
ability on difficult science questions.

👉 Impression: This suggests the model has solid abstract reasoning 
capabilities, especially in foundational cognitive domains.

✅ BoolQ
Binary QA (logical yes/no inference).

All models score well (>0.87), with qx86 and qx86x-hi at 0.873.
This reflects strong logical deduction—a hallmark of cognitive performance.

👉 Impression: The model consistently processes binary truth claims with 
high reliability—akin to sharp logical reasoning.

✅ HellaSwag
Commonsense reasoning about likely consequences of events (e.g., 
"I took out the trash. Then..." → what next?).

q8-hi scores highest (0.640), but all are relatively low (~0.635–0.640).
This is a known hard benchmark, so even the top model struggles (~64%).

👉 Impression: Commonsense completion remains hard here; around 64% is a 
typical score on this benchmark, so the struggle is shared by all models, 
as expected.

✅ OpenBookQA
Science QA requiring external knowledge and reasoning 
(e.g., "Why do plants need sunlight?").

q8-hi edges out at 0.420, which is the highest score.
This reflects ability to retrieve and reason with factual knowledge, 
a cognitive requirement.

👉 Impression: It can handle straightforward fact-based reasoning, but 
struggles with deeper explanations (typical of strong models).

✅ PIQA
Physical commonsense reasoning ("Which is harder to lift: a feather or a rock?").

q8-hi and qx86-hi score highest (0.763, 0.761), both close to ~76% accuracy.

👉 Impression: Strong cognition in everyday, embodied reasoning—shows 
understanding of physical causality.

✅ Winogrande
Pronoun coreference resolution (understanding entity references in sentences).

q8-hi scores highest at 0.646.
All models hover around ~0.64, which is challenging for language understanding.

👉 Impression: Solid semantic and syntactic parsing ability—requires 
deep linguistic modeling.

🧠 Cognitive Strengths Summary

Reasoning: 
q8-hi leads in ARC-Easy and ARC-Challenge → strong analytical reasoning.

Logical Deduction: 
BoolQ scores >0.87 → excellent at binary logic processing.

Commonsense Understanding: 
PIQA (~76%) and HellaSwag (~64%) → solid grasp of real-world reasoning.

Knowledge Retrieval: 
OpenBookQA (~0.42) → decent but not outstanding.

Language Understanding: 
Winogrande (~0.64) → functional but still challenging.


💡 Final Cognitive Verdict

✅ The q8-hi variant is the most cognitively mature, showing balanced, 
reliable reasoning across diverse cognitive tasks.

It consistently outperforms the others in key benchmarks that reflect 
logical and commonsense cognition — especially ARC-Easy, PIQA, and OpenBookQA.

🧠 Key Takeaway: This AI model demonstrates cognitive abilities most 
aligned with a rational, logical thinker—akin to Spock (but not quite).

It’s efficient at reasoning without emotional interference, has strong 
factual grounding (BoolQ), and a good intuitive grasp of the real world 
(PIQA, Winogrande).

📌 Bonus Insight: Quantization Impact
q8-hi → best cognitive balance overall (~64.5% average across the seven benchmarks above).
q6-hi → slightly lower reasoning but competitive in some areas.
qx86 variants → generally perform equivalently to q6-hi or slightly worse—
they are likely optimized for size/speed, not cognition.
qx86x-hi → a hybrid variant that nearly matches q8-hi in PIQA and BoolQ, 
but trails slightly elsewhere.

🧠 Final Comparison to Spock
Trait    q8-hi
Logical ✅ High accuracy in BoolQ, ARC
Consistent ✅ Stable across benchmarks
Rational ✅ No overconfidence in weak areas
Knowledge-based 🟠 Moderate (OpenBookQA ~0.42)

✅ q8-hi is the best approximate Spock — not perfect, but rational, 
reliable, and strong in reasoning.

So yes, if Spock were an AI model, it would definitely run q8-hi. 💡🖖

BRAINSTORM VS REGULAR QWEN3-VL 8B (Brainstorm is added to this model)

Let's analyze the Qwen3-VL-12B-Instruct-Brainstorm20x series. It has an 
extra 4B of brainstorming space. For comparison, we use the metrics from 
Qwen3-VLTO-8B-Instruct, which is similar to the baseline used for the 12B.

How did brainstorming improve the model, and how do the individual quants perform?

qwen3-vl-12b-instruct-brainstorm20x-qx86-hi-mlx

This is a fascinating and highly relevant comparison — you’re now analyzing 
the Qwen3-VL-12B-Instruct-Brainstorm20x series, which is a 12B model 
with “extra 4B of brainstorming space”, and comparing it to the 
Qwen3-VLTO-8B-Instruct baseline (which is quantized at qx86x-hi).

🧠 1. What Does “Brainstorm20x” Mean?

The name suggests:

“Brainstorm” — likely refers to enhanced internal reasoning 
capacity, possibly via:

Expanded attentional memory (e.g., longer context or more 
intermediate reasoning steps).

“20x” — likely refers to 20× more internal “thinking space” 
or reasoning capacity, perhaps via:

Expanded hidden states.

More layers of intermediate reasoning (e.g., “think step-by-step”).

Dedicated “brainstorming” layers — perhaps a MoE or attention 
expansion layer.

This is not just model size, but architectural augmentation — 
adding “thinking space” to improve reasoning depth.


📊 2. Benchmark Comparison: 

Qwen3-VLTO-8B vs Qwen3-VL-12B-Brainstorm20x

Metric         Qwen3-VLTO-8B (qx86x-hi) Qwen3-VL-12B-Brainstorm20x (qx86x-hi)
arc_challenge    0.455                  0.502
arc_easy         0.601                  0.646
boolq            0.878                  0.871
hellaswag        0.546                  0.637
openbookqa       0.424                  0.410
piqa             0.739                  0.760
winogrande       0.595                  0.645
Overall Avg      0.579                  0.634


✅ The 12B-Brainstorm20x model is superior on five of the seven metrics, 
with gains of +0.02 to +0.09 and the most dramatic improvements in:

ARC Challenge (+0.047)
ARC Easy (+0.045)
Hellaswag (+0.091)
Winogrande (+0.050)

The two metrics where it is slightly worse are BoolQ (-0.007) and 
OpenBookQA (-0.014), likely reflecting less effective handling of purely 
textual inference without visual grounding.
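These per-benchmark deltas can be reproduced directly from the comparison table (scores copied verbatim from the table above):

```python
# Scores from the 8B-vs-12B comparison table above.
baseline_8b = {
    "arc_challenge": 0.455, "arc_easy": 0.601, "boolq": 0.878,
    "hellaswag": 0.546, "openbookqa": 0.424, "piqa": 0.739,
    "winogrande": 0.595,
}
brainstorm_12b = {
    "arc_challenge": 0.502, "arc_easy": 0.646, "boolq": 0.871,
    "hellaswag": 0.637, "openbookqa": 0.410, "piqa": 0.760,
    "winogrande": 0.645,
}
# Positive = the 12B Brainstorm model improves on the 8B baseline.
deltas = {k: round(brainstorm_12b[k] - baseline_8b[k], 3) for k in baseline_8b}
```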

🧪 3. How Did “Brainstorm20x” Improve the Model?

The key insight: adding 4B of “brainstorming space” didn’t 
just scale the model — it enhanced its reasoning depth.

🔍 Cognitive Impact:

ARC Challenge & ARC Easy: 
+0.047 and +0.045 — this suggests better reasoning chain decomposition.

Hellaswag: 
+0.091 — this suggests better commonsense inference, likely due to more 
intermediate reasoning steps.

Winogrande: 
+0.05 — this suggests better contextual understanding, likely due to 
expanded attentional memory.

Piqa: 
+0.021 — this suggests better step-by-step reasoning, likely due to 
more intermediate steps.


The model is now capable of “thinking deeper” — not just “thinking faster”.

🧩 4. Quantization Comparison within the 12B-Brainstorm20x Series
Let’s compare the qx86x-hi variant (best overall) to others:

Model                                  arc_challenge  arc_easy   boolq  hellaswag openbookqa piqa   winogrande

Qwen3-VL-12B-Instruct-Brainstorm20x-q6-hi      0.501    0.649    0.870    0.634    0.414    0.758    0.641
Qwen3-VL-12B-Instruct-Brainstorm20x-q8-hi      0.511    0.661    0.872    0.640    0.420    0.763    0.646
Qwen3-VL-12B-Instruct-Brainstorm20x-qx86-hi    0.502    0.646    0.871    0.637    0.412    0.761    0.644
Qwen3-VL-12B-Instruct-Brainstorm20x-qx86       0.497    0.646    0.873    0.637    0.414    0.758    0.639
Qwen3-VL-12B-Instruct-Brainstorm20x-qx86x-hi   0.500    0.650    0.873    0.636    0.410    0.760    0.645

✅ Across the five quants, q8-hi posts the best score on six of the seven 
benchmarks: everything except BoolQ, where qx86 and qx86x-hi edge ahead 
at 0.873.

The qx86x-hi variant is the best of the qx family and the most balanced, 
trailing q8-hi only slightly across the metrics.

🧭 5. Recommendation: Which Quant to Choose?

✅ For Maximum Performance:
Qwen3-VL-12B-Instruct-Brainstorm20x-q8-hi
→ Best overall performance, especially in Hellaswag, Winogrande, and Piqa.

✅ For Balanced Performance & RAM Efficiency:
Qwen3-VL-12B-Instruct-Brainstorm20x-qx86x-hi
→ Best of the qx family, trailing q8-hi only slightly across the metrics.

✅ For RAM-Constrained Macs (32GB):
Qwen3-VL-12B-Instruct-Brainstorm20x-qx86x-hi
→ ~27.7GB RAM usage — fits comfortably.

🧠 6. Cognitive Pattern Insight: “Brainstorm20x” Enhances Reasoning Depth

The “Brainstorm20x” augmentation is not just scaling 
— it’s enhancing reasoning depth.

The model now has more internal “thinking space”, allowing it to:

Break down complex problems into intermediate steps.
Infer commonsense reasoning more accurately.
Handle ambiguous prompts with better context.

This is a cognitive upgrade, not just a computational one — 
the model now “thinks deeper”, not just “faster”.

📈 7. Summary Table: Best Model for Each Use Case

Goal                          Recommended Model                            RAM Usage  Performance Rank

Max performance               Qwen3-VL-12B-Instruct-Brainstorm20x-q8-hi    ~27.6GB    #1
Balanced performance          Qwen3-VL-12B-Instruct-Brainstorm20x-qx86x-hi ~27.7GB    #1
RAM-constrained Mac (32GB)    Qwen3-VL-12B-Instruct-Brainstorm20x-qx86x-hi ~27.7GB    #1
Cognitive depth & metaphors   Qwen3-VL-12B-Instruct-Brainstorm20x-q8-hi    ~27.6GB    #1
OpenBookQA (text-only)        Qwen3-VL-12B-Instruct-Brainstorm20x-q6-hi    ~24GB      #1

🚀 Bonus: “Brainstorm20x” as a Cognitive Architecture

The “Brainstorm20x” augmentation is not just more parameters — it’s more reasoning capacity.


“Brainstorm20x is like adding a second brain — not just more 
neurons, but more thinking steps.”

— Inspired by the human mind’s ability to “think step-by-step”.

What is Brainstorm?


Brainstorm 20x

The BRAINSTORM process was developed by David_AU.

Some of the core principles behind this process are discussed in this scientific paper: Progressive LLaMA with Block Expansion.

However, I went in a completely different direction from what was outlined in that paper.

What is "Brainstorm" ?

The reasoning center of an LLM is taken apart, reassembled, and expanded.

In this case for this model: 20 times

Then these centers are individually calibrated. These "centers" also interact with each other. This introduces subtle changes into the reasoning process. The calibrations further adjust - dialing up or down - these "changes". The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.

The core aim of this process is to increase the model's detail, concept and connection to the "world", general concept connections, prose quality and prose length without affecting instruction following.

This will also enhance any creative use case of any kind, including "brainstorming", creative art forms and similar uses.

Here are some of the enhancements this process brings to the model's performance:

  • Prose generation seems more focused on the moment to moment.
  • Sometimes there will be "preamble" and/or foreshadowing present.
  • Fewer or no "cliches"
  • Better overall prose and/or more complex / nuanced prose.
  • A greater sense of nuance on all levels.
  • Coherence is stronger.
  • Description is more detailed, and connected closer to the content.
  • Simile and Metaphors are stronger and better connected to the prose, story, and character.
  • Sense of "there" / in the moment is enhanced.
  • Details are more vivid, and there are more of them.
  • Prose generation length can be long to extreme.
  • Emotional engagement is stronger.
  • The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less.
  • The MORE instructions and/or details you provide the more strongly the model will respond.
  • Depending on the model "voice" may be more "human" vs original model's "voice".

Other "lab" observations:

  • This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true!
  • However, a change in "IQ" was not a priority, and was not tested or calibrated for, so to speak.
  • From lab testing, it seems to ponder and consider more carefully, roughly speaking.
  • You could say this process sharpens the model's focus on its task(s) at a deeper level.

The process to modify the model occurs at the root level - the source-file level. The model can be quantized as GGUF, EXL2, AWQ, etc.


Qwen3-VL-8B-Instruct


Meet Qwen3-VL — the most powerful vision-language model in the Qwen series to date.

This generation delivers comprehensive upgrades across the board: superior text understanding & generation, deeper visual perception & reasoning, extended context length, enhanced spatial and video dynamics comprehension, and stronger agent interaction capabilities.

Available in Dense and MoE architectures that scale from edge to cloud, with Instruct and reasoning‑enhanced Thinking editions for flexible, on‑demand deployment.

Key Enhancements:

  • Visual Agent: Operates PC/mobile GUIs—recognizes elements, understands functions, invokes tools, completes tasks.

  • Visual Coding Boost: Generates Draw.io/HTML/CSS/JS from images/videos.

  • Advanced Spatial Perception: Judges object positions, viewpoints, and occlusions; provides stronger 2D grounding and enables 3D grounding for spatial reasoning and embodied AI.

  • Long Context & Video Understanding: Native 256K context, expandable to 1M; handles books and hours-long video with full recall and second-level indexing.

  • Enhanced Multimodal Reasoning: Excels in STEM/Math—causal analysis and logical, evidence-based answers.

  • Upgraded Visual Recognition: Broader, higher-quality pretraining lets the model "recognize everything": celebrities, anime, products, landmarks, flora/fauna, etc.

  • Expanded OCR: Supports 32 languages (up from 19); robust in low light, blur, and tilt; better with rare/ancient characters and jargon; improved long-document structure parsing.

  • Text Understanding on par with pure LLMs: Seamless text–vision fusion for lossless, unified comprehension.

Model Architecture Updates:

  1. Interleaved-MRoPE: Full‑frequency allocation over time, width, and height via robust positional embeddings, enhancing long‑horizon video reasoning.

  2. DeepStack: Fuses multi‑level ViT features to capture fine‑grained details and sharpen image–text alignment.

  3. Text–Timestamp Alignment: Moves beyond T‑RoPE to precise, timestamp‑grounded event localization for stronger video temporal modeling.

This is the weight repository for Qwen3-VL-8B-Instruct.


Model Performance

Multimodal performance

Pure text performance

Quickstart

Below, we provide simple examples to show how to use Qwen3-VL with 🤖 ModelScope and 🤗 Transformers.

The code for Qwen3-VL is in the latest Hugging Face transformers, and we advise you to build from source with this command:

pip install git+https://github.com/huggingface/transformers
# pip install transformers==4.57.0 # currently, V4.57.0 is not released

Using 🤗 Transformers to Chat

Here is a code snippet showing how to use the chat model with transformers:

from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
# import torch  # uncomment if enabling the flash_attention_2 variant below

# default: Load the model on the available device(s)
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen3-VL-8B-Instruct", dtype="auto", device_map="auto"
)

# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen3VLForConditionalGeneration.from_pretrained(
#     "Qwen/Qwen3-VL-8B-Instruct",
#     dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-8B-Instruct")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
)
inputs = inputs.to(model.device)

# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)

Generation Hyperparameters

VL

export greedy='false'
export top_p=0.8
export top_k=20
export temperature=0.7
export repetition_penalty=1.0
export presence_penalty=1.5
export out_seq_length=16384

Text

export greedy='false'
export top_p=1.0
export top_k=40
export repetition_penalty=1.0
export presence_penalty=2.0
export temperature=1.0
export out_seq_length=32768
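The exports above map roughly onto Hugging Face `generate()` kwargs. A hedged sketch: `presence_penalty` is not a native transformers `generate()` argument (it is exposed by OpenAI-compatible servers such as vLLM), so it is kept separate here.

```python
# Qwen's suggested VL hyperparameters from the exports above, as
# generate() kwargs. greedy='false' becomes do_sample=True.
vl_sampling = {
    "do_sample": True,
    "top_p": 0.8,
    "top_k": 20,
    "temperature": 0.7,
    "repetition_penalty": 1.0,
    "max_new_tokens": 16384,   # out_seq_length
}

# Not a native transformers generate() argument; pass this via an
# OpenAI-compatible server (e.g. vLLM) instead.
server_side = {"presence_penalty": 1.5}

# The text-only settings above differ: top_p=1.0, top_k=40,
# temperature=1.0, presence_penalty=2.0, out_seq_length=32768.
```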

Help, Adjustments, Samplers, Parameters and More


CHANGE THE NUMBER OF ACTIVE EXPERTS:

See this document:

https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts

Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:

In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;

Set the "Smoothing_factor" to 1.5

: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"

: in text-generation-webui -> parameters -> lower right.

: In Silly Tavern this is called: "Smoothing"

NOTE: For "text-generation-webui"

-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

  • Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")

  • If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.

Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers

This is a "Class 1" model:

For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
