7 comments

  • nl 1 hour ago
    Trinity Large Preview managed 17/25 on my agentic SQL benchmark (https://sql-benchmark.nicklothian.com/?#all-data), which is a fairly mediocre score for a large model (Qwen 27B managed 23/25).

    This non-Preview release scored 16/25. Probably the same model as the preview, or at least not particularly improved if you want agentic performance.

    Good to see more options for large open models, though!

    It's hard to point definitively to a reason it underperforms, but generally the models that perform well at agentic tasks were either trained on very large numbers of tokens (Qwen, frontier models) or heavily post-trained for reasoning (see e.g. Nemotron-Cascade-2-30B-A3B at 21/25 vs. the base model Nemotron-3-Nano-30B-A3B-Base at 12/25).
  • edg5000 1 hour ago
    Their Hugging Face page repeats a million times that the thinking output should be included in the conversation history for multi-turn use. That makes me wonder: is this generally needed for LLMs? Because it implies they only really function well in typical multi-turn flows. I'm experimenting with a completely different approach: there is still the main message stream in the context, but the agent can use structured means to exchange messages and interact with terminals and the file system in a stateful manner. The state is rendered into the context on every cycle, with the message history just being a "panel". I'm still in the middle of trying this out, so I can't say yet whether it will work, but I hope the models are flexible enough for this.
    • kristianp 18 minutes ago
      I've heard someone mention feeding thinking back when talking about gpt-oss-120; at the time that was the only evidence I could see that this is a thing.
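    The pattern discussed above can be sketched roughly as follows. This is a hypothetical illustration, not Arcee's actual chat template: it assumes the API returns a separate reasoning trace per turn and that the model expects it re-serialized inside `<think>...</think>` tags in the assistant message (the tag format and the `append_turn` helper are both assumptions for illustration).

    ```python
    def append_turn(history, user_msg, reasoning, answer, keep_thinking=True):
        """Append one user/assistant exchange to the message history.

        If keep_thinking is True, the model's reasoning trace is embedded
        in the assistant message so the next request can see it, which is
        what some reasoning models reportedly require for multi-turn use.
        """
        history = list(history)  # copy; don't mutate the caller's list
        history.append({"role": "user", "content": user_msg})
        if keep_thinking:
            content = f"<think>{reasoning}</think>{answer}"
        else:
            content = answer  # other models want the trace stripped
        history.append({"role": "assistant", "content": content})
        return history

    history = []
    history = append_turn(history, "What is 2+2?", "Simple arithmetic.", "4")
    history = append_turn(history, "Double it.", "4 * 2 = 8.", "8")
    ```

    With `keep_thinking=False` you get the more common behavior (e.g. Anthropic- and OpenAI-style APIs drop prior-turn reasoning by default), which is why a model card insisting on the opposite stands out.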
  • wmf 2 hours ago
    Maybe a better link: https://www.arcee.ai/blog/trinity-large-thinking
  • gslepak 2 hours ago
    This is one of the first high-performing, fully open-weight American models, to my knowledge. Congrats! (insert American flag here)
    • nl 1 hour ago
      The OG Llama 3? GPT-OSS? Nvidia Nemotron 3?
      • gslepak 26 minutes ago
        I think Facebook gets the credit there.
  • kristianp 2 hours ago
    The weights are on Hugging Face, surprisingly: https://huggingface.co/arcee-ai/Trinity-Large-Thinking
  • jauntywundrkind 2 hours ago
    That's crazy affordable. Promising! Maybe I'll give today's submission, StepFun 3.5 Flash, a run too, who knows. https://news.ycombinator.com/item?id=47602879 https://app.uniclaw.ai/arena?tab=costEffectiveness&via=hn