3 comments

  • vorticalbox1 hour ago
    This reminds me of https://dnhkng.github.io/posts/rys/

    David looks into the LLM, finds the "thinking" layers, makes duplicates of them, and puts the copies back to back.

    This increases the LLM's scores with basically no overhead.

    Very interesting read.