3 comments

  • bertili 14 minutes ago
    Does this translate into a similar reduction in compute?

    What's the catch?
  • xiphias2 2 hours ago
    The most interesting part of this idea for me is that it wasn't tried / implemented before, as it makes sense.

    I haven't read the paper, but of course DTree tricks work here as well.
  • FranckDernoncou 9 hours ago
    Paper: https://arxiv.org/abs/2605.12825 ; Code+models: https://github.com/chiennv2000/orthrus ; Disclosure: co-author.

    Idea: Inject a trainable diffusion attention module into each layer of a frozen AR Transformer. Both heads share one KV cache. The diffusion head projects K=32 tokens in parallel; the AR head verifies them in a second pass and accepts the longest matching prefix. The output distribution is provably identical to the base model's. (Simplified sketch of the draft/verify step at the end of this comment.)

    Results:

    - Up to 7.8x TPF, ~6x wall-clock speedup on MATH-500.
    - 16% of params trained, <1B tokens, 24h on 8xH200.
    - vs. diffusion LMs (Dream, Fast-dLLM-v2, SDAR, Mercury, Gemini Diffusion): they modify the base weights and lose accuracy (Fast-dLLM-v2: -11 pts on MATH-500). Orthrus freezes the backbone; accuracy matches Qwen3-8B exactly.
    - vs. speculative decoding (EAGLE-3, DFlash): no external drafter, no separate cache, zero TTFT penalty (no drafter to init/sync). KV overhead is O(1) (~4.5 MiB flat). Acceptance length on MATH-500: 11.7 vs. 7.9 (DFlash) vs. 3.5 (EAGLE-3).
    - Single-step denoising beats multi-step (6.35 vs. 3.53 TPF). KL distillation beats CE on acceptance rate.

    Limitations: strictly bounded by the frozen base model (inherits its biases, hallucinations, knowledge gaps); Qwen3-only evaluation; greedy + rejection sampling only.
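    For intuition, a rough sketch of one decode step under greedy verification. This is not the repo code; the helper names (diffusion_draft, ar_verify) and shapes are made up for exposition.

      # Illustrative sketch of one Orthrus-style decode step (greedy verification).
      # Not the actual repo code: diffusion_draft / ar_verify are hypothetical helpers.
      import torch

      K = 32  # tokens drafted in parallel by the diffusion head

      @torch.no_grad()
      def decode_step(model, prefix_ids, kv_cache):
          # 1) The diffusion head drafts K candidate tokens in one parallel pass,
          #    reusing the same KV cache as the AR head.
          draft_ids = model.diffusion_draft(prefix_ids, kv_cache, num_tokens=K)  # (K,)

          # 2) The frozen AR head scores the draft in a single forward pass and
          #    returns its own next-token prediction at each drafted position.
          ar_logits = model.ar_verify(prefix_ids, draft_ids, kv_cache)           # (K, vocab)
          ar_ids = ar_logits.argmax(dim=-1)                                      # greedy

          # 3) Accept the longest prefix of the draft that the AR head agrees with.
          n_accept = 0
          while n_accept < K and draft_ids[n_accept] == ar_ids[n_accept]:
              n_accept += 1

          # 4) On a rejection, fall back to the AR head's own token at that position,
          #    so the emitted sequence matches plain autoregressive decoding exactly.
          if n_accept < K:
              return torch.cat([draft_ids[:n_accept], ar_ids[n_accept:n_accept + 1]])
          return draft_ids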
    • dot_treo 0 minutes ago
      Do you plan on releasing the training code?
    • ilaksh 3 hours ago
      Amazing. Is it possible to do this with Qwen 3.6 27B? Will it work with quants (I assume so)?
      • sleepyeldrazi 49 minutes ago
        From a quick and shallow read of the paper, it looks very feasible (with a little tinkering) to adapt this to Qwen 3.6 27B. The process looks somewhat similar to training a LoRA, or in a way like distilling your own model so that a mini model learns to imitate it, and then you glue them together. I might bite the bullet and rent a GPU to do it for 3.6 27B, as this would solve a lot of my problems.
        • sleepyeldrazi 28 minutes ago
          Scratch that, I don't have that kind of money, and 3.5's architecture is a little more divergent from 3's, so it will be a bit less trivial. It does look possible, just not on a student's paycheck.
          • Boranbruh 13 minutes ago
            There are websites that let you rent GPUs for cheap, such as QuickPod. Have you checked those P2P GPU rentals out?
    • littlestymaar 21 minutes ago
      So it's D-Flash, but at each transformer layer and sharing the KV cache of the original model? Very smart!