6 comments

  • Ari_Rahikkala 4 hours ago
    Neat. Very similar to tree-based speculation, as they point out, and they also point out how to combine the two.

    Speculative decoding: Sample a linear output (the next n tokens) from the draft model and submit it to a verifier model. At some index the verifier might reject a token and say that no, actually, the next token should be this other token instead (the "bonus token" in this paper), and that's your output. Or, if it accepts the whole draft, you still get a bonus token as the next token past the draft. Then you draft again from that prefix onward.

    Tree-based speculation: Sample a tree of outputs from the draft model, submit the whole tree to the verifier, and pick the longest accepted prefix (and its bonus token).

    Speculative speculative decoding: Sample a linear output from the draft model, then in parallel both verify it with the verifier model and produce a tree of drafts branching out from different rejection points and different choices of bonus tokens at those points. When the verifier finishes, you might have a new draft ready to submit right away.

    Combined: Sample a tree from the draft model, submit the whole tree to the verifier, and in parallel also plan out drafts for different rejection points with different bonus tokens anywhere in the tree.
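    The draft/verify/bonus-token loop described above can be sketched with toy models. This is a minimal illustration only, not the paper's method: `draft_model` and `verifier_next` are hypothetical stand-ins, and a real system would score all draft positions in one batched verifier pass rather than looping.

    ```python
    import random

    random.seed(0)

    VOCAB = list(range(10))

    def draft_model(prefix, n):
        # Hypothetical stand-in for a small, fast draft model.
        return [random.choice(VOCAB) for _ in range(n)]

    def verifier_next(prefix):
        # Hypothetical stand-in for the large verifier model's greedy next token.
        return sum(prefix) % len(VOCAB)

    def speculative_step(prefix, n=4):
        """One round of draft-then-verify; returns the tokens actually emitted."""
        draft = draft_model(prefix, n)
        accepted = []
        for tok in draft:
            expected = verifier_next(prefix + accepted)
            if tok == expected:
                accepted.append(tok)       # draft token matches the verifier
            else:
                accepted.append(expected)  # rejection: the "bonus token" replaces it
                return accepted
        # Whole draft accepted: we still get one bonus token past the draft.
        accepted.append(verifier_next(prefix + accepted))
        return accepted

    out = speculative_step([1, 2, 3])
    ```

    Note the invariant that makes the scheme worthwhile: every round emits at least one token (the bonus token), so decoding always makes progress even when the draft is rejected immediately.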
  • boltzmann-brain 3 hours ago
    > Our implementation is up to 2x faster than optimized speculative decoding baselines and up to 5x faster than autoregressive decoding with open source inference engines

    What about per-FLOP?
  • libraryofbabel 3 hours ago
    This is interesting stuff. I wonder if these sorts of tricks are already in use at the big labs.

    Incidentally, I would recommend implementing speculative decoding yourself if you *really* want to understand LLM inference internals (that, and KV caching, of course). I tried it over the Christmas holidays and it was a wonderful learning experience. (And hard work, especially because I forced myself to do it by hand without coding agent assistance.)
  • saagarjha 5 hours ago
    Yo dawg I heard you liked speculation so we speculated your speculating
    • LoganDark 4 hours ago
      Wait till they speculate the speculation's speculation. Yo dawg I heard that yo dawg I heard
      • kozzion 8 minutes ago
        Is it gonna be speculation all the way down?
  • monster_truck 2 hours ago
    We're almost to Duddits Decoding (SSDD)