14 comments

  • Maxious 10 hours ago
    ICYMI, unsloth has had some major breakthroughs today with the Qwen3.5 local models: https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks

    With the Qwen3.5 35B A3B at Q4 I've got 200k context running at 62.98 tokens per second on a local RTX 5080 16GB.
    • danielhanchen 8 hours ago
      Oh, I didn't expect this to be on HN haha - but yes, for our new benchmarks for Qwen3.5 we devised a slightly different approach to quantization, which we plan to roll out to all new models from now on!
      • nnx 6 hours ago
        Can you describe what this slightly different approach is and why it should work on all models?
      • hedora 1 hour ago
        Nice! Your stuff ran LLMs extremely well on < $500 boxes (24-32GB RAM) with iGPUs before this update.

        I'm eager to try it out, especially if 16GB is viable now.
    • Kayou 10 hours ago
      Wait, the Q4 quantization, which is more than 20GB, fits in your 16GB GPU? I didn't know that was possible; I was always restricting myself to models smaller than the VRAM I had.
      • Maxious 9 hours ago
        Yep. These Mixture of Experts models are well suited to paging in only the relevant data for a certain task: https://huggingface.co/blog/moe

        There are also some experiments on removing or merging experts post-training to shrink models even more: https://bknyaz.github.io/blog/2026/moe/
        • vlovich123 2 hours ago
          MoE is not suited to paging because it's essentially a random expert per token. It only improves throughput because you reduce the memory bandwidth requirements for generating a token, since 1/n of the weights are accessed per token (but a different 1/n on each loop).

          Now, shrinking them, sure, but I've seen nothing that indicates you can just page weights in and out without cratering your performance like you would with a non-MoE model.
          • FuckButtons 1 hour ago
            Not entirely true: it's random access within the relevant subset of experts, and since concepts are clustered you actually have a much higher probability of repeatedly accessing the same subset of experts.
        • bee_rider 3 hours ago
          That blog post was super interesting. It is neat that he can select experts and control the routing in the model—not having played with the models in detail, I tended to assume the "mixing" in mixture of experts was more like a blender, haha. The models are still quite lumpy, I guess!
      • segmondy 9 hours ago
        llama.cpp is designed for partial offloading: the most important parts of the model are loaded into the GPU and the rest stays in system RAM. I run 500B+ models such as DeepSeek/Kimi K2.5/GLM-5 without having that much GPU VRAM.
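        As a rough sketch of what that looks like (flag names from recent llama.cpp builds; the model file and context size are just placeholders):

            # put all layers on the GPU, but keep the MoE expert tensors in system RAM
            llama-server -m some-big-moe-Q4_K_XL.gguf -ngl 99 --cpu-moe -c 16384

        If you have spare VRAM, --n-cpu-moe N keeps only the expert tensors of the first N layers on the CPU instead of all of them.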
      • Koffiepoeder 8 hours ago
        The A3B part in the name stands for `Active 3B`, so for inference a core 3B is used in conjunction with another subpart of the model, chosen based on the task (MoE, mixture of experts). If you use these models mostly for related/similar tasks, that means you can make do with a lot less than the 35B params in active RAM. These models are therefore also sometimes called sparse models.
      • nurettin 8 hours ago
        This is why they say "A3B": only 3B parameters are active at a time, limiting VRAM usage.
    • roxolotl 7 hours ago
      What method are you using to do that? I've been playing with llama.cpp a lot lately and trying to figure out the cleanest options for getting a solid context window on 32GB VRAM and 64GB system RAM.
      • jychang 7 hours ago
        32GB VRAM is more than enough for Qwen 3.5 35B.

        You can just load the Q4_K_XL model like normal and put all tensors on the GPU, without any -ot or --cpu-moe flags.

        If you need a massive context for some reason where model + KV cache won't fit in 32GB, then use -ot to move the FFN MoE experts for 1-2 layers into RAM. You'll get a speed hit (due to loading params from slower RAM instead of fast VRAM) but it'll work.
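        Something like this, as an illustration (the layer indices, filename and context size are made up; the regex targets the ffn_*_exps tensors of the first two layers):

            llama-server -m Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf -ngl 99 -c 200000 \
              -ot "blk\.(0|1)\.ffn_.*_exps\.=CPU"

        Add more layer indices to the regex if the KV cache still doesn't fit.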
        • roxolotl 6 hours ago
          Nice, ok, I'll play with that. I'm mostly just learning what's possible. Qwen 3.5 35B has been great without any customizations, but it's interesting to learn what the options are.
    • mirekrusin 8 hours ago
      2x RTX 4090, Q8, 256k context, 110 t/s
      • instagib 2 hours ago
        1x 4090, Qwen3.5-35B-A3B-UD-MXFP4_MOE, 64k context, 122 t/s. llama.cpp.
    • cpburns2009 6 hours ago
      Does llama.cpp support Qwen3.5 yet? When I tried it before, it failed saying "qwen35moe" is an unsupported architecture.
      • hnfong 5 hours ago
        Yes, but make sure you grab the latest llama.cpp release.

        New model archs usually involve code changes.
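        If you build from source instead of using the prebuilt releases, updating is roughly the standard CMake flow (add your usual backend flag, e.g. -DGGML_CUDA=ON; check the build docs for your setup):

            git pull
            cmake -B build -DGGML_CUDA=ON
            cmake --build build --config Release -j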
        • cpburns2009 4 hours ago
          Awesome! It looks like the llama.cpp-hip AUR package was updated today to b8179, and it works.
      • reactordev 6 hours ago
        You would need the Dynamic 2.0 GGUF as discussed in the article.

        But mmmmmm, Q8_K_XL looks mighty nice.
    • RS-232 7 hours ago
      That's intriguing. I have the same card, maybe I should give it a go. Curious about your CPU/RAM/storage capacity as well.

      Any resources for configuring the local setup?

      My entire home media stack is a single compose file in a WSL distro, so it would be cool if a local LLM worked the same way.
    • jychang 10 hours ago
      Not really breakthroughs, more like bugfixes for their broken first batch.
      • danielhanchen 7 hours ago
        No, this is false - unsure if you saw our new blog post, https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks, which shows SOTA on nearly all bits, and we shared all our research as well.
        • zargon 1 hour ago
          Explain what about that statement is false. Your original Q4_K_XL quant was broken. People noticing that it was a total outlier among other quants is what prompted this "research". Your own data proves that your new release fixes the bugs of your original, in order to match AesSedai's PPL. Fixing bugs is great. Searching for the best quant mix is helpful. I use your quants and appreciate your work. But whitewashing this situation dilutes trust and good will.
        • jychang 7 hours ago
          Yeah, I saw that yesterday. The blog post does not explain why/how the Qwen 3.5 quants uploaded on 2/27 are different from the files uploaded on 2/24.

          Old 2/24 Q4_K_XL commit (pre-bugfix files): https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF/commit/7a8e0b23fcaf1a052ad02eb73f1c0627177e8325

          Questions for a postmortem that the blog post left unanswered:

          - Why the change? Is it just to improve PPL/KLD? Sure, we can assume PPL and KLD are not perfect benchmarks. If yes, then why change the quantization anyway? Or was the old 2/24 quant actually much worse performing in the real world? I presume the Q4_K_XL quant using mxfp4 was the issue? If the 2/24 files having a lower PPL is an actual issue due to low-quality tensors, then why not just say that?

          - What were the main tensors that had their quantizations changed from 2/24 to 2/27? Did you now quantize attention tensors differently? Or perhaps ssm?

          - What was it changed from? Was it changed from mxfp4 or q4_k to q8, or something else?

          A quick sentence in the blog post saying "ok, we've confirmed that using mxfp4 (or q3 or whatever) in the attention/ssm/biases/norms/etc is a bad idea, we had that in our old models on 2/24 and our new models today are better" would make it clear. As it's written, it's trying to both say "PPL/KLD don't actually reflect real world quality" and "we changed our quant to increase PPL/KLD" at the same time, which seems contradictory.
  • Archit3ch 7 hours ago
    What's the verdict for real-world use on Q3 120B (fits in 64GB) vs Q4 of a smaller model?
    • FuckButtons 58 minutes ago
      Bigger model wins, as long as the quantization was done properly.
  • santa_boy 3 hours ago
    Great timing. I downloaded the models today in LM Studio, and they seem to work remarkably well.

    Any HN model recommendations to run on my 24GB M5, and any best practices while running them?
  • jychang 9 hours ago
    What's up with this post? It's a link to something which has existed for a long time, and there are a bunch of dead comments below. Some weird SEO campaign thing?
    • tosh 9 hours ago
      Unsloth have just released benchmarks on how their dynamic quants perform for Qwen 3.5: https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks
      • jychang 9 hours ago
        I'm aware of that, but that's not what this post links to. The post is linking to their UD 2.0 quants from a few months back.

        Also, the benchmarks exist because they messed up the first version of their Qwen 3.5 XL quants by quanting some tensors to mxfp4 that should have been kept at higher quality, and this is their bugfix. The post literally starts out with "We updated Qwen3.5-35B Unsloth Dynamic quants being SOTA on nearly all bits" without explaining WHY they needed to update from the original version.
        • danielhanchen 7 hours ago
          Didn't expect this to be on HN haha - but sometimes older posts do come up on HN.

          No, your conclusion is false - only the old Q4_K_XL had slightly higher perplexity; all other quants are fine. We uploaded 9TB of research artifacts to https://huggingface.co/unsloth/Qwen3.5-35B-A3B-Experiments-GGUF for the community.

          If you read our blog, it says KLD and PPL are actually sometimes counterintuitive - for example, on MiniMax some of our quants do worse on PPL and KLD vs AesSedai's, but AesSedai's does worse on LiveCodeBench by a lot; see https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks#id-3-perplexity-and-kld-can-be-misleading

          This is because (see https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks#id-1-some-tensors-are-very-sensitive-to-quantization) although bitwidths are in general monotonic, i.e. q2_k < q3_k < q4_k < q5_k etc., we find KLD and PPL are actually not monotonic, i.e. q3_k can actually have BETTER PPL than q4_k.

          So the main point is bad luck on quantization - sometimes lower bits might get lower PPL and KLD, but actually this is a ruse and wrong, since on actual real-world tasks it's worse.
          • jychang 7 hours ago
            The Q4_K_XL is easily the most popular quant for the model, though.

            So then why was Q4_K_XL having issues? Is it just a PPL issue that doesn't reflect in real world usage? If yes, why not just say that? "The Q4_K_XL had lower PPL, but don't worry, PPL can be wrong, and other benchmarks show it's fine." If it was a real quality issue, then what caused it?

            The blog post says "Retiring MXFP4 from all GGUF quants: Q2_K_XL, Q3_K_XL and Q4_K_XL, except for pure MXFP4_MOE" but doesn't say why. The easy assumption that most people would make is "oh, you quanted attention or ssm or something to mxfp4 and that turned out to be bad, so you retired mxfp4", but if you say that it's not that, then what's the actual issue?
      • lostmsu 9 hours ago
        Looking at their benchmarks, there doesn't appear to be a meaningful difference between their quants and bartowski's quants.
        • danielhanchen 8 hours ago
          No, our new Qwen3.5 ones show the opposite; see https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks
          • lostmsu 2 hours ago
            Am I misreading the table?

                Unsloth   Q4_K_M       PPL: 6.6053  KLD 99.9%: 0.5478  KLD mean: 0.0192
                bartowski Qwen_Q4_K_M  PPL: 6.6097  KLD 99.9%: 0.5771  KLD mean: 0.0182

            Barely noticeable drop in PPL; noticeable drop in the KLD 99.9% column (good, ~5%); but a worse KLD mean (bad, ~5%).
    • danielhanchen 8 hours ago
      Didn't expect this to be on HN as well haha - probably related to Qwen3.5.
  • qskousen 8 hours ago
    This is pretty interesting. Based on the blog post, it seems like they are using a technique similar to what I have been using to generate "layer sensitivity" data in my (still pretty beta) ggufy project, which is more aimed at diffusion (image) models: https://github.com/qskousen/ggufy
  • deepsquirrelnet 5 hours ago
    I love the work unsloth is doing. I only wish the GGUF format had better vLLM support. It's sometimes hard to find trustworthy quants that work well with vLLM.
  • electroglyph 9 hours ago
    Cheers Daniel and Mike and team, keep up the good work!
  • tenpa0000 9 hours ago
    I run Llama 3.2 3B locally for latency-sensitive classification (sub-50ms, so no room for bigger models). At that scale Q2_K vs Q4_K_M isn't just smaller — Q2 starts flipping yes/no answers that Q4 gets right. Not often, but enough to notice in production.

    So the KL divergence numbers here are more useful to me than the MMLU tables, honestly. I've had MMLU hold steady while the output distribution drifted enough to break things downstream.

    Does the calibration dataset make much difference at 3B, though? There's so little redundancy that I'd expect it to hit a floor pretty fast regardless of how good the calibration data is.
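    If you want to measure that drift yourself on your own prompts, llama.cpp's perplexity tool can compute per-token KL divergence against a reference model - roughly like this (flag names from recent builds, file names are placeholders; double-check --help):

        # 1) dump reference logits from the full-precision (or highest-quality) model
        llama-perplexity -m model-f16.gguf -f my_prompts.txt --kl-divergence-base logits.bin

        # 2) compare the quant against those saved logits
        llama-perplexity -m model-q2_k.gguf --kl-divergence-base logits.bin --kl-divergence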
    • am17an 8 hours ago
      What do you use for sub-50ms inference?
    • zozbot234 9 hours ago
      For a simple classification task you generally want to prioritize regularization over more sophisticated behavior, so fewer parameters with larger quantization makes sense. For more generic chat-like purposes, Q2 of a larger model may often be preferable to Q4 of a smaller one.
  • Havoc 10 hours ago
    Advances in this space are always welcome.

    I see the change in KLD values is pretty modest vs the prior version. Does anyone know how that translates to the real world? Is it more of a linear-type situation, or exponential, etc.?
    • danielhanchen 7 hours ago
      Yes, the new blog post https://unsloth.ai/docs/models/qwen3.5/gguf-benchmarks has some benchmarks from community members on our quants vs others on LiveCodeBench, for example!
  • dyl000 9 hours ago
    So Q6 is practically perfect, and Q3 is meaningfully decent. Very impressive!