https://huggingface.co/blog/mike-ravkine/new-old-llamas
upstage/Solar-Open-100B is a very interesting, permissively licensed (Apache-with-attribution), trained from scratch (19T tokens), 12B active MoE - but that's not even the cool part.
The cool part is that their fork of vLLM comes with the addition of a reasoning_effort parameter and a corresponding reasoning/tool-calling controller FSM to consume it!
https://github.com/UpstageAI/vllm/blob/c9a05e077cd82df8cab4f729396c178c29c81aa8/vllm/model_executor/models/solar_open_logits_processor.py
Looks like only "medium" and "high" are actually implemented, but still absolutely love to see this sorta thing.
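For illustration, here's a minimal sketch of how one might pass the parameter against the fork's OpenAI-compatible server. The request-level reasoning_effort field is my assumption about the plumbing - the linked logits processor is the source of truth:

```python
# Hypothetical sketch: hitting the UpstageAI vLLM fork's OpenAI-compatible
# endpoint with a reasoning_effort hint. The exact field name/placement is an
# assumption - confirm against solar_open_logits_processor.py in the fork.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

resp = client.chat.completions.create(
    model="upstage/Solar-Open-100B",  # whatever name the server was launched with
    messages=[{"role": "user", "content": "How many r's are in strawberry?"}],
    extra_body={"reasoning_effort": "high"},  # only "medium" and "high" appear implemented
)
print(resp.choices[0].message.content)
```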
To make this model a little more accessible, I have created a FP8-Dynamic quant at mike-ravkine/Solar-Open-100B-FP8-Dynamic which makes it fit nicely into 2xPro-6000 or 4xA6000 GPUs.
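If you want to try the quant, here's a minimal offline-inference sketch - assuming a vLLM build that knows the Solar-Open architecture (e.g. the Upstage fork) and two large-VRAM GPUs:

```python
# Minimal sketch: loading the FP8-Dynamic quant with vLLM offline inference.
# Assumes a vLLM build that supports the Solar-Open architecture (e.g. the
# UpstageAI fork); bump tensor_parallel_size to 4 for a 4x A6000 setup.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mike-ravkine/Solar-Open-100B-FP8-Dynamic",
    tensor_parallel_size=2,  # 2x Pro 6000
)
params = SamplingParams(temperature=0.6, max_tokens=1024)
out = llm.generate(["Count the r's in strawberry."], params)
print(out[0].outputs[0].text)
```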
My ReasonScape evaluations are currently running - this one will take a couple of days - but early results are quite strong: it's showing the competency expected from a 100B reasoning model (it can count the r's in strawberry, it can do basic arithmetic, etc.) and I haven't seen a truncation yet.
Got any pics of this rig? Would love to see how it's managing thermals.
2101 tokens/sec. FORTY concurrent clients. That's 609 t/s out, 1492 t/s in. The model outputs fire faster than I can type, but feeds on data like a black hole on cheat day.
But wait, there's more! Threw it into Claude Code torture testing with 60+ tools, 8 agents (7 sub-agents because apparently one wasn't enough chaos). It didn't even flinch. Extremely fast, scary good at coding. The kind of performance that makes you wonder if the model's been secretly reading Stack Overflow in its spare time lol
3 months ago, these numbers lived in my "maybe in 2030" dreams. Today it's running on my desk AND heats my home office during the winter!
I've been busy working on some new ranking/position methodologies and excited to start sharing some results.
Plot legends:
- X = truncation rate (low = good)
- ? = confusion rate (low = good)
- blue bars = average completion tokens (low = good)
- black diamonds = CI-banded performance (high = good)
- cluster squares = models inside this group are equivalent
openai/gpt-oss-120b remains the king in all dimensions of interest: truncation rates, completion lengths and performance. If I had but one complaint, it's that reasoning effort does not seem to actually work - more on this soon.
Second is a 3-way tie in performance between the Qwen3-235B-2507 we all know and love and an unexpected entrant - ByteDance-Seed/Seed-OSS-36B-Instruct.
This is a very capable model and its reasoning effort control actually works, but you should absolutely not leave it on the default "unlimited" - enable a sensible limit (4k works well for an 8k context length).
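For reference, a hedged sketch of how one might set that limit when serving through a vLLM OpenAI-compatible endpoint. Both the chat_template_kwargs route and the thinking_budget name are assumptions on my part - verify against the Seed-OSS model card before relying on them:

```python
# Hypothetical sketch: capping Seed-OSS-36B reasoning instead of leaving the
# default "unlimited" budget. The chat_template_kwargs route and the
# "thinking_budget" kwarg name are assumptions - check the Seed-OSS model card.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

resp = client.chat.completions.create(
    model="ByteDance-Seed/Seed-OSS-36B-Instruct",
    messages=[{"role": "user", "content": "Sort these numbers: 5, 3, 9, 1"}],
    extra_body={"chat_template_kwargs": {"thinking_budget": 4096}},  # ~4k for 8k context
)
print(resp.choices[0].message.content)
```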
Third place is another 3-way tie, this one between Seed-OSS-36B (it straddles the CI boundary between 2nd and 3rd place), Qwen/Qwen3-Next-80B-A3B-Instruct (demonstrating that full attention may be overrated after all and gated is the way to go) and the newly released zai-org/GLM-4.7, which offers excellent across-the-board performance with some of the shortest reasoning traces I've seen so far.
Here's an example of a model that behaves perfectly well up to 8k, smoothly increasing its entropy before going into a struggle zone, collapsing, seeing a region of recovery and finally falling down hard at the 16k wall.
Is your model implementation behaving badly like this?
Would you know if it was?
goal: understand how GGUF compression works - what exactly is being lost?
approach: quantize/dequantize some images and look at error maps
spent 80% of the time chasing down what turned out to be a data distribution assumption: real LLM weights are symmetric with a mean of 0, so our test image MUST retain these properties or the results turn into a kind of nonsense soup where Q5_1 beats Q8
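Concretely, the fix is just coercing the test image into the distribution the quantizers assume before the round trips - a small numpy sketch of that normalization (illustrative only, not the exact code from my repo):

```python
# Sketch: map a test image (e.g. uint8 pixels in 0..255) onto a roughly
# symmetric, zero-mean float array, matching the distribution real LLM
# weights have and that the GGUF block quantizers implicitly assume.
import numpy as np

def to_llm_like(img: np.ndarray) -> np.ndarray:
    x = img.astype(np.float32)
    x -= x.mean()                   # center: the mean must sit near 0
    x /= (np.abs(x).max() + 1e-8)   # scale into roughly [-1, 1], symmetric
    return x
```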
with that issue solved, we have some fun results! from left to right:
- test pattern image (mean value is around 0.01)
- q8 error (almost nothing - some light banding in the gradients)
- q5km error (starting to see the 'blocks' around the circles)
- q4_0 error (this is why q4_1 is 'preferred')
- q3k error. q3k is a really interesting set of trade-offs: it does not have a block-offset so it really leans into the 0-mean assumption HARD, if you violate it locally the results are BAD
- q2k error: q2k has a block-offset so for certain patterns the errors are actually less than q3k (a rather counter-intuitive result - see the toy sketch below)
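To make the block-offset point concrete, here's a toy sketch of scale-only vs scale+min 4-bit block quantization. It's a heavy simplification of the real ggml q4_0/q4_1 kernels (the actual bit packing and scale selection differ), but it shows why data that violates the 0-mean assumption hurts the offset-free format most:

```python
# Toy sketch of 4-bit block quantization, simplified from the real ggml kernels:
#   "q4_0-style": one scale per block, values assumed symmetric around 0
#   "q4_1-style": scale plus a per-block minimum (the "block offset")
import numpy as np

BLOCK = 32  # ggml uses 32-element blocks for these formats

def q4_0_roundtrip(x: np.ndarray) -> np.ndarray:
    out = np.empty_like(x)
    for i in range(0, len(x), BLOCK):
        b = x[i:i + BLOCK]
        scale = np.abs(b).max() / 7.0 + 1e-12    # signed 4-bit range
        q = np.clip(np.round(b / scale), -8, 7)
        out[i:i + BLOCK] = q * scale
    return out

def q4_1_roundtrip(x: np.ndarray) -> np.ndarray:
    out = np.empty_like(x)
    for i in range(0, len(x), BLOCK):
        b = x[i:i + BLOCK]
        lo, hi = b.min(), b.max()
        scale = (hi - lo) / 15.0 + 1e-12          # unsigned 4-bit range + offset
        q = np.clip(np.round((b - lo) / scale), 0, 15)
        out[i:i + BLOCK] = q * scale + lo
    return out

# Data with a local DC offset: q4_0 burns its range on the offset, q4_1 doesn't.
x = np.random.randn(1024).astype(np.float32) * 0.1 + 0.5
for name, fn in [("q4_0-style", q4_0_roundtrip), ("q4_1-style", q4_1_roundtrip)]:
    print(name, "mean abs error:", np.abs(fn(x) - x).mean())
```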
looking at mxfp4, i-quants and the other stuff that's possible inside gguf remains future work.. aiming to clean up this repo and push it this week, feel free to ping me if you want to play sooner.
564 tokens/sec on short 100-token sprints
96 tokens/sec on 8K-token marathons
TL;DR You don't just run AI on AMD. You negotiate with it.
The hardware absolutely delivers. Spoiler alert: there is exactly ONE configuration where vLLM + ROCm + Triton + PyTorch + Drivers + Ubuntu Kernel all work at the same time. Finding it required the patience of a saint.
Consumer AMD for AI inference is the ultimate "budget warrior" play, insane performance-per-euro, but you need hardcore technical skills that would make a senior sysadmin nod in quiet respect.
We shrank the 1T model to 245GB (-62%) & retained ~85% of accuracy on Aider Polyglot. Run on >247GB RAM for fast inference.
We also collaborated with the Moonshot AI Kimi team on a system prompt fix!
Guide + fix details: https://docs.unsloth.ai/models/kimi-k2-thinking-how-to-run-locally
The way the coolers on these cards are designed is VERY unusual - this photo should come in the box! If you're having trouble in a closed case, see if you have space to add an intake on the bottom beside the PCIe edge - if all air is coming from the front like in this pic, the rear blower steals it all!
I am running 3x 140mm 89CFM fans as intakes on the front, and the main trick is directly underneath the in-blowers of those two 3090 FEs: a dedicated 140mm 140CFM fan blowing upwards to feed the intake blowers at the front/PCIe edge of these coolers. The cross-blower air passes through both cards and then has space to vent its heat on the right.
At 280W load the 'outside' card is bored, sitting at 50C with its blower fans at 50%, while the 'inside' card maintains 60-65C with its blowers closer to 70-80%.
The main thing to be careful about with the FE dual-blower coolers is: do not try to pump air into the 'front' (PCIe side) of them! I see this configuration on many mining rigs and it's fine for air-cooled cards, but the FEs actually have a blower venting out this PCIe-slot side, so you HAVE to feed them from either the rear or underneath.
When I had the 4-slot bridge installed across them, the rear-feed alone was sufficient.
what if we keep it simple: gzip the resulting text and take the length of the compressed stream... "compressed bytes of information per output token" becomes the KPI
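a minimal sketch of the metric (the whitespace tokenization is a stand-in - swap in the model's real tokenizer for actual analysis):

```python
# sketch: "compressed bytes of information per output token" as a KPI.
# whitespace split is a stand-in for the model's real tokenizer.
import gzip

def info_density(completion: str) -> float:
    """compressed bytes per output token - low means repetitive filler."""
    compressed_bytes = len(gzip.compress(completion.encode("utf-8")))
    n_tokens = max(len(completion.split()), 1)
    return compressed_bytes / n_tokens

print(info_density("The answer is 42 because 6 * 7 = 42."))
print(info_density("wait, let me reconsider. " * 50))  # a repeat loop scores very low
```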
if we split across correct answers vs incorrect answers vs truncated answers and group by difficulty, a whole new world of analysis becomes not just possible but visually intuitive and almost trivial:
1) what is the model's overall reasoning efficiency? this is the slope of the scatterplot curve segments (there may be more than one...)
2) is the model able to apply more test-time compute towards more difficult variations of the task? the two on the left are not, the two on the right are.
3) when applying more test-time compute, is that compute useful? this is the curvature of the scatterplot trends - the two in the middle are 'losing their mojo': as answers get longer, the information content falls
4) is the model applying multiple approaches to the task? (right) do those approaches change with difficulty?
5) are truncations because we don't have enough context budget (left) or because the model has lost its mind and gone into a repeat loop (middle two)? and does this happen across the board (middle left) or only when the problem is more difficult (middle right)?
would love to hear feedback from you guys on this kind of analysis - is anyone doing similar work?
this approach generates 12 plots per model (one for each task) so quite a bit of data and i've been hesitant to publish it so far, consider this post a toe tip.
Working on rewiring for a 5th this weekend so need the quad in 4U to make this happen. Definitely pushing air cooling to its limit but NVLink requires aggressive spacing between the cards.
Fortunately up here in Canada our power is quite cheap, so stacking compute this way works. While annoying in terms of physical constraints, the NVLink bridges really help squeeze these cards - on smaller models at big batch the difference at -tp 2 can be as much as 50%. It's not bandwidth, it's latency!
onekq-ai/WebApp1K-models-leaderboard
https://www.cnn.com/2025/10/31/tech/south-korea-nvidia-apec-chicken-intl-hnk
But is this really a single variable?
If we pick a specific example (let's say google/gemma-3-4b-it vs google/gemma-3-12b-it vs google/gemma-3-27b-it) we find that this single variable is actually a plethora of differences:
- The hidden_size, intermediate_size and num_hidden_layers are all different: not only are there more layers as the model grows larger, the layers themselves grow larger.
- num_attention_heads and num_key_value_heads are also all different: the larger models split the attention into more groups (a quick config-dump sketch at the end of this post shows these knobs side by side).
What is the final outcome of all these differences? The answer turns out to be task-specific but has 4 archetypes worth noting:
- Cars, Objects, Shapes: Everyone passes - smaller can do the job.
- Arithmetic, Brackets: Everyone fails - the model architecture is ill-suited for this task.
- Boolean, Sort: There's a threshold of "big enough" you have to cross; any smaller and it fails.
- Objects, Shuffle: Bigger models demonstrate robust scaling in performance, especially as complexity grows.
The practical lesson here is that since size is not really a single variable, *bigger is not necessarily better*! There's a surprising number of domains where you can 'downgrade' to a smaller model and gain improvements in latency and throughput without sacrificing your downstream task performance.
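For reference, a quick sketch to dump those config fields side by side - note the gemma-3 repos are gated (you need to have accepted the license on the Hub), and the multimodal checkpoints nest the text settings under text_config:

```python
# Sketch: compare the config knobs that "model size" actually turns across
# the three gemma-3 sizes. Requires access to the gated repos.
from transformers import AutoConfig

FIELDS = ["hidden_size", "intermediate_size", "num_hidden_layers",
          "num_attention_heads", "num_key_value_heads"]

for repo in ["google/gemma-3-4b-it", "google/gemma-3-12b-it", "google/gemma-3-27b-it"]:
    cfg = AutoConfig.from_pretrained(repo)
    cfg = getattr(cfg, "text_config", cfg)  # unwrap the multimodal wrapper if present
    print(repo, {f: getattr(cfg, f, None) for f in FIELDS})
```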
Lost in the Middle: How Language Models Use Long Contexts (2307.03172)
I spotted the same problem in coding tasks, and documented it in my book (https://www.amazon.com/dp/9999331130).
Why did this problem become hot again? Because many of us thought the problem had been solved by long-context models, which is not true.
Here we were misled by benchmarks. Most long-context benchmarks are built around the QA scenario, i.e. "finding the needle in the haystack". But in agentic scenarios, the model needs to find EVERYTHING in the haystack, and just can't afford enough attention for that challenge.

