15 comments

  • mynti 1 day ago
    They trained it in 33 days for ~$20M (which apparently includes not only the infrastructure but also salaries over a 6-month period). And the model comes close to Qwen and DeepSeek. Pretty impressive.
    • zamadatix 17 hours ago
      The price/scaling of training another same-class model always seems to be dropping through the floor, but training models which score much better seems to be hitting a brick wall.

      E.g. gemini-3-pro tops the lmarena text chart today at 1488 vs 1346 for gpt-4o-2024-05-13. That's a win rate of 70% (where 50% is equal chance of winning) over 1.5 years. Meanwhile, even the open weights stuff OpenAI gave away last summer scores between the two.

      The exception seems to be net new benchmarks/benchmark versions. These start out low and then either quickly get saturated or hit a similar wall after a while.
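      (For reference, a minimal sketch of where that 70% figure comes from, applying the standard Elo expected-score formula to the two Arena ratings quoted above:)

      ```
      # Elo expected score: P(A beats B) = 1 / (1 + 10 ** ((R_B - R_A) / 400))
      def win_rate(r_a: float, r_b: float) -> float:
          return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

      print(win_rate(1488, 1346))  # ~0.69, i.e. roughly a 70% win rate
      ```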
      • gwern 15 hours ago
        > E.g. gemini-3-pro tops the lmarena text chart today at 1488 vs 1346 for gpt-4o-2024-05-13. That's a win rate of 70% (where 50% is equal chance of winning) over 1.5 years. Meanwhile, even the open weights stuff OpenAI gave away last summer scores between the two.

        Why do you care about LM Arena? It has so many problems, and the fact that no one would suggest using GPT-4o for doing math or coding right now, or much of anything, should tell you that a 'win rate of 70%' does not mean whatever it looks like it means. (Does GPT-4o solve roughly as many Erdos questions as gemini-3-pro...? Can it write roughly as good poetry?)
        • DoctorOetker 1 hour ago
          It's very sad there is so much gaming of metrics with LLMs.

          If we wish to avoid everyone creating benchmarks for themselves, then instead of predetermined benchmarks (public ones allow gaming, while publicly scored private ones require blind trust) we could use gradient descent on sentences to find disagreements between models, and then present them to human domain experts.

          At least it could be public without the possibility of leaking (since the model creators don't yet know all possible disagreements between LLMs, nor which ones will be selected for review by human experts).
        • zamadatix 15 hours ago
          It'd certainly be odd if people were recommending old LLMs which score worse, even if marginally. That said, 4o is really a lot more usable than you're making it out to be.

          The particular benchmark in the example is fungible, but you have to pick something to make a representative example. No matter which you pick, someone always has a reason "oh, it's not THAT benchmark you should look at". The benchmarks from the charts in the post exhibit the same pattern as described above.

          If someone were making new LLMs which were consistently solving Erdos problems at rapidly increasing rates, then they'd be showing how they do that rather than showing how they score the same or slightly better on benchmarks. Instead the progress is more like: years after we were first surprised LLMs could write poetry, we've massaged an answer out of one Erdos problem once. Maybe by the end of the year it'll be a few. The progress has definitely become very linear and relatively flat compared to roughly the initial 4o release. I'm just hoping that's a temporary thing rather than a sign it'll get even flatter.
          • nl 11 hours ago
            Progress has *not* become linear. We've just hit the limits of what we can measure and explain easily.

            One year ago coding agents could barely do decent auto-complete.

            Now they can write whole applications.

            That's much more difficult to show than an Elo score based on how much people like emojis and bold text in their chat responses.

            Don't forget Llama 4 led LMArena and turned out to be very weak.
            • dajonker 7 hours ago
              You are understating past performance as much as you are overstating current performance.

              One year ago I already ran qwen2.5-coder 7B locally for pretty decent autocomplete. And I still use it today, as I haven't found anything better, having tried plenty of alternatives.

              Today I let LLM agents write probably 60-80% of the code, but I frequently have to steer and correct them, and that final 20% still takes 80% of the time.
            • anon373839 3 hours ago
              Many of these gains can be attributed to better tooling and harnesses around the models. Yes, the models also had to be retrained to work with the new tooling, but that doesn't mean there was a step change in their general "intelligence" or capabilities. And sure enough, I'm seeing the same old flaws as always: frontier models fabricating info not present in the context, being blind to what is present, getting into loops, failing to follow simple instructions…
          • refulgentis 14 hours ago
            Frankly, this reads as a lot of words that amount to an excuse for using *only* LMArena, and the rationale is quite clear: it's for an unrelated argument that isn't going to ring true to people, especially an audience of programmers who just spent the last year watching AI go from being able to make coherent file edits to multi-hour work.

            LMArena is, de facto, a sycophancy and Markdown usage detector.

            Two others you can trust, off the top of my head, are LiveBench.ai and Artificial Analysis. Or even Humanity's Last Exam results. (Though, frankly, I'm a bit suspicious of them. Can't put my finger on why. It was just a rather rapid hill climb for a private benchmark over the last year.)

            FWIW GPT 5.2 unofficial marketing includes the Erdos thing you say isn't happening.
            • zamadatix 13 hours ago
              I've always found LiveBench a bit confusing to compare over time, as the dataset isn't meant to be compared over time. It also currently claims GPT-5 Mini High from last summer is within ~15% of Claude 4.5 Opus Thinking High Effort on the average, but I'll wait with bated breath for the millions of amazing apps which couldn't be coded before to start showing up (or, more likely, to be told in 6 months how these 2 benchmarks weren't the ones that should matter either). Artificial Analysis at least has the same gap at 20% from the top, so maybe that's the one we all agree to use for now, since it implies faster growth.

              > FWIW GPT 5.2 unofficial marketing includes the Erdos thing you say isn't happening.

              Certainly not, unless you're about to tell me I can pop into ChatGPT and pop out Erdos proofs regularly, since #728 was massaged out with multiple prompts and external tooling a few weeks ago - which is what I was writing about. It was great, it was exciting, but it's exactly the slow growth I'm talking about.

              I like using LLMs, I use them regularly, and I'm hoping they continue to get better for a long time... but this is in no way the GPT 3 -> 3.5 -> 4 era of mind-boggling growth of frontier models anymore. At best, people are finding out how to attach various tooling to the models to eke more out as the models themselves very slowly improve.
              • nl 11 hours ago
                > I'll wait with bated breath for the millions of amazing apps which couldn't be coded before to start showing up

                App Store releases were roughly linear until July '25 and are up 60% since then:

                https://www.coatue.com/c/takes/chart-of-the-day-2026-01-22
                • refulgentis 10 hours ago
                  One of the best surgically executed nukes on HN in my 16 years here.
              • refulgentis 10 hours ago
                See peer reply re: yes, your self-chosen benchmark has been reached.

                Generally, I've learned to warn myself off of a take when I start writing emotionally charged stuff like [1]. Without any prompting (who mentioned apps? and why would you without checking?), also, when reading minds, and assigning weak arguments, now and in my imagination of the future. [2]

                At the very least, [2] is a signal to let the *keyboard* have a rest, and ideally my mind.

                Bailey: > "If [there were] new LLMs...consistently solving Erdos problems at rapidly increasing rates then they'd be showing...that"

                Motte: > "I can['t] pop into ChatGPT and pop out Erdos proofs regularly"

                No less than *Terence Tao*, a month ago, pointing out your bailey was newly happening with the latest generation: https://mathstodon.xyz/@tao/115788262274999408. Not sure how you only saw one Erdos problem.

                [1] "I'll wait with bated breath for the millions of amazing apps which couldn't be coded before to start showing up"

                [2] "...or, more likely, be told in 6 months how these 2 benchmarks weren't the ones that should matter either"
      • lumost 1 hour ago
        It's becoming clear that training a frontier model is a capex/infra problem. This problem involves data acquisition, compute, and salaries for the researchers familiar with the little nuances of training at this scale.

        For the same class of model, you can train on more or less the same commodity datasets. Over time these datasets become more efficient to train on as errata are removed and the data is cleaner. The cost of dataset acquisition can be amortized and sometimes drops to 0 as the dataset is open sourced.

        Frontier models mean acquiring fresh datasets at unknown costs.
      • esskay 57 minutes ago
        Training costs might be coming down, but the cost of hardware that can run these models is still obscenely high and rising. We're still nowhere near a point where it's realistically feasible to run a home LLM that doesn't feel like it's suffering from severe brain damage.
      • Zababa 6 hours ago
        > E.g. gemini-3-pro tops the lmarena text chart today at 1488 vs 1346 for gpt-4o-2024-05-13. That's a win rate of 70% (where 50% is equal chance of winning) over 1.5 years. Meanwhile, even the open weights stuff OpenAI gave away last summer scores between the two.

        I think in that specific case that says more about LMArena than about the newer models. Remember that GPT-4o was so specifically loved by people that when GPT-5 replaced it there was lots of backlash against OpenAI.

        One of the popular benchmarks right now is METR, which shows some real improvement with newer models, like Opus 4.5. Another way of getting data is anecdotes: lots of people are really impressed with Opus 4.5 and Codex 5.2 (but those are hard to disentangle from people getting better with the tools, the scaffolding (Claude Code, Codex) getting better, and lots of other stuff). SWE-bench is still not saturated (less than 75% I think).
      • YetAnotherNick 8 hours ago
        > The exception seems to be net new benchmarks/benchmark versions.

        How is this an exception? If a genius and a kindergarten student take a test on adding two single-digit numbers, how is that result relevant, even though adding single-digit numbers is in the class of possible tests?

        We can only look at non-saturated tests.
    • tgrowazay 8 hours ago
      > 2048 Nvidia B300 GPU

      At an average price of $6/hour, that is $12,288/hour for the whole cluster.

      Times 33 days times 24 hours, it comes out to $9.7MM, assuming no discounts.

      That leaves $10.3MM/6 months for salaries, which is 103 employees at $200k/year or 51 employees at $400k/year.
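      (A minimal sketch of the same arithmetic, assuming the $6/GPU-hour rate and the ~$20M total quoted upthread:)

      ```
      gpus = 2048
      rate_per_gpu_hour = 6.0               # assumed $/GPU-hour
      hours = 33 * 24                       # 33-day training run

      compute_cost = gpus * rate_per_gpu_hour * hours
      print(compute_cost)                   # ~9.73M USD

      budget = 20e6                         # ~$20M total claimed
      salary_budget = budget - compute_cost # ~10.3M USD over 6 months
      print(salary_budget / (200e3 / 2))    # ~103 people at $200k/year for half a year
      print(salary_budget / (400e3 / 2))    # ~51 people at $400k/year for half a year
      ```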
      • YetAnotherNick 8 hours ago
        It would likely be something like $4.5/hour for a cluster this big.

        [1]: https://verda.com/products#B300
    • jychang 14 hours ago
      They didn't do something stupid like Llama 4 "one active expert", but 4 of 256 is very sparse. It's not going to get close to DeepSeek or GLM level performance unless they trained on the benchmarks.

      I don't think that was a good move. No other models do this.
    • Der_Einzige 10 hours ago
      I'll straight up accuse them of muddying the waters on purpose. To get to the point of executing a successful training run like that, you have to count every failed experiment and every experiment that gets you to the final training run. They spent well over $100 million to train this model by that definition, and all definitions which don't include the failed runs up to the successful one at the end are at best disingenuous and at worst outright lies designed to trick investors into dumping Nvidia.

      No, DeepSeek did not spend only $5.5 million on DeepSeek V3. No, Gemini was not "entirely trained on TPUs". They did hundreds of experiments on GPUs to get to the final training run done entirely on TPUs. GCP literally has millions of GPUs and you bet your ass that the Gemini team has access to them and uses them daily. DeepSeek's total cost to make DeepSeek V3 is also in the $100-400 million range when you count all of what's needed to get to the final training run.

      Edit: (Can't post because this site's "posting too fast" thing is really stupid/bad)

      The only way I can get reliable information out of folks like you is to loudly proclaim something wrong on the internet. I'm just going to do that even more aggressively from now on to goad people like you into setting the record straight.

      Even if they only used TPUs, they sure as shit spent orders of magnitude more than they claim due to "count the failed runs too".
      • querez 10 hours ago
        > No, Gemini was not "entirely trained on TPUs". They did hundreds of experiments on GPUs to get to the final training run done entirely on TPUs. GCP literally has millions of GPUs and you bet your ass that the Gemini team has access to them and uses them daily.

        You are wrong. Gemini was definitely trained entirely on TPUs. Of course, your point that "you need to count failed experiments, too" is correct. But you seem to have misconceptions about how DeepMind operates and what infra it possesses. DeepMind (like most Google-internal stuff) runs on Borg, an internal cloud system, which is completely separate (and different) from GCP. DeepMind does not have access to any meaningful GCP resources. And Borg barely has any GPUs. At the time I left DeepMind, the amount of TPU compute available was probably 1000x to 10000x larger than the amount of GPU compute. You would never even think of seriously using GPUs for neural net training; it's too limited (in terms of available compute) and expensive (in terms of internal resource allocation units), and frankly less well supported by internal tooling than TPUs. Even for small, short experiments, you would always use TPUs.
        • hansvm 9 hours ago
          Outside the blessed teams, at least, we used GPUs when we were allowed, else CPUs. TPUs were basically banned in YT since they were reserved for higher-priority purposes. Gemini was almost certainly trained on them, but I guarantee an ungodly amount of compute has gone into training neural nets with CPUs and GPUs.
        • YetAnotherNick 7 hours ago
          Using TPUs has the same opportunity cost as GPUs. Just because they built something doesn't mean it's cheaper. If it is, they could rent it out cheaper to save the billions of dollars they are paying Nvidia.

          A big segment of the market just uses GPUs/TPUs to train LLMs, so they don't exactly need flexibility if some tool is well supported.
          • querez 2 hours ago
            I assume TPU TCO is significantly cheaper than GPU TCO. At the same time, I also assume that market demand for GPUs is higher than for TPUs (external tooling is just more suited to GPUs -- e.g. I'm not sure what the PyTorch-on-TPU story is these days, but I'd be astounded if it's on par with their GPU support). So moving all your internal teams to TPUs means that all the GPUs can be allocated to GCP.
            • YetAnotherNick 1 hour ago
              That just doesn't make sense. If you make significantly more money renting TPUs, why not rent them out cheaper to shift the customers (and save the billions you are giving to Nvidia)? TPUs right now aren't significantly cheaper for external customers.

              Again, I am talking about LLM training/inference, which if I were to guess is more than half of the current workload, and for which the switching cost is close to 0.
      • Zababa 6 hours ago
        > To get to the point of executing a successful training run like that, you have to count every failed experiment and experiment that gets you to the final training run.

        I get the sentiment, but then, do you count all the other experiments that were done by that company before specifically trying to train this model? All the experiments done by those people at other companies? Since they rely on that experience to train models.

        You could say "count everything that has been done since the last model release", but then for the same amount of effort/GPUs, if you release 3 models does that divide each model's cost by 3?

        Genuinely curious how you think about this. I think saying "the model cost is the final training run" is fine, as it seems standard ever since DeepSeek V3, but I'd be interested if you have alternatives. Possibly "actually don't even talk about model cost, as it will always be misleading and you can never really spend the same amount of money to get the same model"?
    • iberator 9 hours ago
      Why even do such a thing if there is free Google, ChatGPT, and a dozen more models? A waste of money towards the ultimate goal: global loss of jobs and destroying the earth.
  • tcdent 14 hours ago
    It's super exciting to see another American lab get in the ring. Even if they're not at SOTA on the first release, the fact that they're trying is incredible for open source AI.
  • trilogic 1 hour ago
    Testing it now in HugstonOne. Running smoothly at 5.8 T/s: loaded Trinity-Large-Preview-UD-Q4_K_XL-00001-of-00005.gguf.

    The T/s speed is acceptable, and the GPU temperature is stable at 60 degrees Celsius. Good accuracy and precision on math problems. So far so good. Results: https://www.reddit.com/r/Hugston/comments/1qq9d5i/testing_trinity_large_an_open_400b_sparse_moe/
  • linolevan 1 day ago
    I'm particularly excited to see a "true base" model to do research off of (https://huggingface.co/arcee-ai/Trinity-Large-TrueBase).
    • hahahahhaah 10 hours ago
      I'd love to "chat" with that model to see how it behaves.
      • Grimblewald 6 hours ago
        I highly recommend it. As a tip, you can quite easily get into a chat-like state by simply using in-context learning. Have a few turns of conversation pre-written and generate from that. It'll continue the conversation (for both parties), so you just stop it from generating when it starts generating on your behalf.

        That said, it's useful for so much more beyond that. Outline the premise of a book, then "what follows is that book\n #Chapter 1:" and watch it rip. Base models are my preferred way of using LLMs by a long margin.
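        (A minimal sketch of that trick, assuming a llama.cpp-style OpenAI-compatible /v1/completions endpoint serving the base model; the URL, turns, and stop string are illustrative:)

        ```
        import requests

        # Hypothetical local server; adjust the URL to your own setup.
        ENDPOINT = "http://localhost:8080/v1/completions"

        # A few pre-written turns of conversation; the base model just continues the pattern.
        prompt = (
            "User: What is a mixture-of-experts model?\n"
            "Assistant: A transformer where each token is routed to a few expert MLPs.\n"
            "User: Why does that make inference cheaper?\n"
            "Assistant:"
        )

        resp = requests.post(ENDPOINT, json={
            "prompt": prompt,
            "max_tokens": 200,
            # Stop before the model starts writing the next "User:" turn on our behalf.
            "stop": ["\nUser:"],
        })
        print(resp.json()["choices"][0]["text"])
        ```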
  • Alifatisk 16 hours ago
    What did they do to make the loss drop so much in phase 3?

    Also, why are they comparing with Llama 4 Maverick? Wasn't it a flop?
    • observationist 15 hours ago
      > During development of the RSDB, we noted significant enough performance gains from it that we decided to integrate it during phase 3 of the Trinity Large training run instead of waiting for a later training run. While the data distributions between phase 2 and phase 3 make direct comparison difficult, the overall effect was notable: BatchHet reduced by a factor of 4.23x, and step-to-step variance reduced by a factor of 2.4x (see Figure 1), a significant improvement when compared to the default packing strategy. We note that training runs without the RSDB exhibit much higher values in the higher-order moments of the running loss distribution, which we believe to correlate with network instability during training.

      Page 9 of the technical report has more details, but it looks like they found some data prep methods as well as some other optimizations that overall worked out really well. I don't think it was any one particular thing.

      As far as Llama 4 goes, it was only referenced as a similarly sized model; they called it one of their model "peers". I don't think they intended any sort of quality comparison. Llama 4 was notable for sparsity, and despite its poor performance and reception, some of the things they achieved technically were solid, useful research.
    • QuadmasterXLII 15 hours ago
      You can't directly compare losses because they changed the data distribution for each phase (I think; it's 100% guaranteed they change the data distribution after the 10-trillion-token mark, since that's when they start adding in instruction-following data, but I don't know for sure whether the other phase changes also include data distribution changes).
  • mwcampbell 17 hours ago
    Given that it's a 400B-parameter model, but it's a sparse MoE model with 13B active parameters per token, would it run well on an NVIDIA DGX Spark with 128 GB of unified RAM, or do you practically need to hold the full model in RAM even with sparse MoE?
    • timschmidt 16 hours ago
      Even with MoE, holding the model in RAM while individual experts are evaluated in VRAM is a bit of a compromise. Experts can be swapped in and out of VRAM for each token, so RAM <-> VRAM bandwidth becomes important. With a model larger than RAM, that bandwidth bottleneck gets pushed to the SSD interface. At least it's read-only, and not read-write, but even the fastest of SSDs will be significantly slower than RAM.

      That said, there are folks out there doing it. https://github.com/lyogavin/airllm is one example.
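      (A rough back-of-the-envelope sketch of why that transfer bandwidth matters, assuming the 13B-active figure quoted upthread, 4-bit weights, and a worst case where every active weight has to be re-streamed each token; real routers reuse already-resident experts, so this is a floor, not a prediction:)

      ```
      active_params = 13e9      # active parameters per token (figure quoted upthread)
      bytes_per_param = 0.5     # ~4-bit quantization

      bytes_per_token = active_params * bytes_per_param  # ~6.5 GB if nothing is cached

      for name, gb_per_s in [("PCIe 4.0 x16", 32), ("PCIe 5.0 x16", 64), ("NVMe SSD", 7)]:
          tokens_per_s = gb_per_s * 1e9 / bytes_per_token
          print(f"{name}: ~{tokens_per_s:.1f} tokens/s if every expert is re-streamed")
      ```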
      • radarsat 14 hours ago
        > Experts can be swapped in and out of VRAM for each token.

        I've often wondered how much this happens in practice. What does the per-token distribution of expert selection actually look like during inference? For example, does it act like a uniform random variable, or does it stick with the same 2 or 3 experts for 10 tokens in a row? I haven't been able to find much info on this.

        Obviously it depends on what model you are talking about, so some kind of survey would be interesting. I'm sure this must be something that the big inference labs are knowledgeable about.

        Although, I guess if you are batching things, then even if a subset of experts is selected for a single query, over the batch the selection may appear completely random, which would destroy any efficiency gains. Perhaps it's possible to intelligently batch queries that are "similar" somehow? It's quite an interesting research problem when you think about it.

        Come to think of it, how does it work for the "prompt ingestion" stage, where it likely runs all experts in parallel to generate the KV cache? I guess that would destroy any efficiency gains due to MoE too, so the prompt ingestion and AR generation stages will have quite different execution profiles.
        • yorwba 2 hours ago
          The model is explicitly trained to produce as uniform a distribution as possible, because it's designed for batched inference with a batch size much larger than the expert count, so that all experts are constantly activated and latency is determined by the highest-loaded expert; you want to distribute the load evenly to maximize utilization.

          Prompt ingestion is still fairly similar to that setting, so you can first compute the expert routing for all tokens, load the first set of expert weights and process only those tokens that selected the first expert, then load the second expert and so on.

          But if you want to optimize for single-stream token generation, you need a completely different model design. E.g. PowerInfer's SmallThinker moved expert routing to a previous layer, so that the expert weights can be prefetched asynchronously while another layer is still executing: https://arxiv.org/abs/2507.20984
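          (A minimal sketch of that grouped prefill idea in plain NumPy, using top-1 routing for simplicity; the shapes are made up and `route`/`expert_weights` stand in for the real router and expert MLPs:)

          ```
          import numpy as np

          n_tokens, d_model, n_experts = 4096, 64, 256

          # Hypothetical stand-ins for the real router output and expert MLP weights.
          hidden = np.random.randn(n_tokens, d_model).astype(np.float32)
          route = np.random.randint(0, n_experts, size=n_tokens)   # top-1 expert per token
          expert_weights = np.random.randn(n_experts, d_model, d_model).astype(np.float32)

          out = np.empty_like(hidden)
          for e in range(n_experts):
              idx = np.where(route == e)[0]
              if idx.size == 0:
                  continue
              # In a real offloaded setup, this is where expert e's weights would be
              # loaded into VRAM once and applied to all of the tokens routed to it.
              out[idx] = hidden[idx] @ expert_weights[e]
          ```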
      • nick49488171 10 hours ago
        With a non-sequential generative approach, perhaps the RAM cache misses could be grouped together and swapped on a when-available/when-needed prioritized basis.
    • antirez 16 hours ago
      It can run with mmap(), but it is slower. 4-bit quantized, there is a decent ratio between the model size and the RAM; with a fast SSD one could try it and see how it works. However, when a model is 4-bit quantized there is often the doubt that it is no better than an 8-bit quantized model of 200B parameters; it depends on the model, on the use case, ... Unfortunately the road to local inference of SOTA models is being blocked by RAM prices and the GPU demand of the big companies, leaving us with little. Probably today the best bet is to buy Mac Studio systems and then run distributed inference (MLX supports this, for instance), or a 512 GB Mac Studio M4 that costs something like $13k.
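      (A rough sketch of the sizing being discussed here, assuming ~400B total parameters and ignoring KV cache and runtime overhead:)

      ```
      total_params = 400e9   # ~400B total parameters

      for bits in (16, 8, 4):
          gb = total_params * bits / 8 / 1e9
          print(f"{bits}-bit weights: ~{gb:.0f} GB")
      # 16-bit: ~800 GB, 8-bit: ~400 GB, 4-bit: ~200 GB
      # So a 4-bit quant fits comfortably in a 512 GB Mac Studio,
      # while 128 GB of unified RAM would have to stream most weights from SSD.
      ```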
      • vardump 9 hours ago
        I think the 512 GB Mac Studio was the M3 Ultra.

        Anyway, isn't a new Mac Studio due in a few months? It should be significantly faster as well.

        I just hope RAM prices don't ruin this...
      • notpublic 15 hours ago
        Speaking of RAM prices, you can still get a Framework Desktop with the Max+ 395 and 128 GB of RAM for ~$2,459 USD. They have not increased the price for it yet.

        https://frame.work/products/desktop-diy-amd-aimax300/configuration/new
        • Scipio_Afri 15 hours ago
          Pretty sure those used to be $1999... but not entirely sure.
          • notpublic 15 hours ago
            Yep. You'd be right. Looks like they increased it earlier this month. Bummer!
    • jychang 13 hours ago
      No.

      128 GB of VRAM gets you enough space for 256B-sized models. But 400B is too big for the DGX Spark, unless you connect 2 of them together and use tensor parallelism.
  • greggh 17 hours ago
    The only thing I question is the use of Maverick in their comparison charts. That's like comparing a pile of rocks to an LLM.
    • jychang 13 hours ago
      It's because they're doing 4-of-256 sparsity, which was a bad decision caused by financial limitations.

      Training cost (FLOPs) = 6 * active params * total tokens. By keeping the active expert param count low, they reduce total training costs.

      I don't think this was a good move. They should have just trained way past Chinchilla like the other major labs, and kept sparsity above 2%. Even Kimi K2 is above 2%. GLM is at 5%, which makes it very expensive (and high performing) for its small size.

      Arcee went the other way. They trained a massive 400B model (bigger than GLM-4.5/4.6/4.7, bigger than Qwen3 235B A22B), but only have 17B active params, which is smaller than Qwen and GLM. It's also only trained on 17T tokens, vs 20-30T+ tokens for the other models. It's just undertrained and undersized (in terms of active parameters), and they got much worse performance than those models:

      https://45777467.fs1.hubspotusercontent-na1.net/hubfs/45777467/MMLU-Pro%20%20AIME%202025%20%20GPQA-Diamond.png

      It's not a bad showing considering the limitations they were working with, but yeah, they definitely need double the active experts (8 out of 256 instead of 4 out of 256) to be competitive. That would roughly double the compute cost for them, though.

      Their market strategy right now is to have fewer active params so it's cheaper for inference, and more total params so it's smarter for the number of active params, while not being too big to fit into an H200 cluster. I... guess this is a valid niche strategy? The target audience is basically "people who don't need all the intelligence of GLM/Qwen/DeepSeek, but want to serve more customers on the H200 cluster they already have sitting around". It's a valid niche, but a pretty small one.
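      (A minimal sketch plugging numbers into that rule of thumb; note the active-parameter count is quoted as 13B elsewhere in this thread and 17B in this comment, so both are shown:)

      ```
      def train_flops(active_params: float, tokens: float) -> float:
          # Rule of thumb from the comment above: ~6 FLOPs per active parameter per token.
          return 6 * active_params * tokens

      tokens = 17e12                      # ~17T training tokens
      for active in (13e9, 17e9):
          print(f"{active / 1e9:.0f}B active: ~{train_flops(active, tokens):.2e} FLOPs")
      # ~1.33e+24 and ~1.73e+24 FLOPs respectively
      ```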
    • eldenring 15 hours ago
      There aren't too many base models out there to compare against.
  • frogperson 17 hours ago
    What exactly does "open" mean in this case? Is it weights and data, or just weights?
    • someotherperson 17 hours ago
      It's always open weights.
      • jetpackjoe 16 hours ago
        It's never open data.
        • jacquesm 16 hours ago
          Well, it is, it's your data to begin with after all, but admitting that would create some problems.
          • linolevan 16 hours ago
            This model is sort of interesting since it seems to be using a lot of synthetic training data – but your point stands
            • cyanydeez 15 hours ago
              So it's a rip-off of a rip-off; is that what's interesting?
              • freakynit 3 hours ago
                Reminds me of this recent news: https://www.medianama.com/2026/01/223-nvidia-high-speed-access-pirated-books-annas-archive/
        • tucnak 8 hours ago
          Unless you're Ai2.
  • kristianp 12 hours ago
    There's a free preview on OpenRouter: https://openrouter.ai/arcee-ai/trinity-large-preview:free
  • fuddle 15 hours ago
    > We optimize for performance per parameter and release weights under Apache-2.0

    How do they plan to monetize?
    • lambda 10 hours ago
      I'm guessing by selling fine-tuning, consulting on hosting, and other services? They also seem to be offering their own inference service with their model; obviously, as an open-weight model it will be commoditized, but I'm sure there are some people who'd prefer to buy from the originating lab. But yeah, when you're offering open-weight models, your customers are going to be people who want to self-host, fine-tune, etc., so they might be offering services for that.
  • syntaxing 15 hours ago
    So refreshing to see open source models like this come from the US. I would love a 100B-ish sized one that can compete against gpt-oss-120b and GLM 4.5 Air.
  • khimaros 13 hours ago
    Unsloth quants are up: https://huggingface.co/unsloth/Trinity-Large-Preview-GGUF
  • observationist 21 hours ago
    This is a wonderful release.
  • LoganDark 13 hours ago
    According to the article, nearly 50% of the dataset is synthetic (8T out of 17T tokens). I don't know what constitutes "a breadth of state-of-the-art rephrasing approaches", but I lack some confidence in models trained on LLM output, so I hope it wasn't that.
    • NitpickLawyer 7 hours ago
      > but I lack some confidence in models trained on LLM output, so I hope it wasn't that.

      That's misguided. Models have been trained on synthetic data for ~2+ years already. The "model collapse" myth is based on a very poor paper that got waaaay more attention than it deserved (because negativity sells, I guess). In practice every lab out there is doing this, because it works.
      • LoganDark 47 minutes ago
        When ChatGPT first released and jailbreaks were pretty easy, I was able to easily get some extremely good/detailed output from it, with very few errors or weirdness. Now even when I can get jailbreaks to work with their newer models, it's just not the same, and no open-source model or even commercial model has seemed to come close to the quality of that very first release. They're all just weird, dumb, random, or incoherent. I keep trying even the very large open-source or open-weights models, and new versions of OpenAI's models and Claude and Gemini and so on, but it all just sucks. It all feels like slop!

        I'm convinced it's because that first ChatGPT release was probably trained on data almost entirely untainted by other LLMs, and it may no longer ever be possible to obtain such a dataset again. Every model feels so artificial and synthetic. I do not know for sure why this is, but I bet it has something to do with people thinking it's fine to programmatically generate almost half the dataset?! I feel like OpenAI's moat could have been the quality and authenticity of their dataset, since they scraped practically most of the internet before LLMs became widespread, but even they've probably lost it by now.

        I haven't really internalized anything about "model collapse", other than that if you train an LLM on outputs from other LLMs, you will be training it to emulate an imprecise version *of an imprecise version* of writing, which will be measurably and perceptibly worse than merely one layer of imprecise imitation of actual writing.
  • 0xdeadbeefbabe 15 hours ago
    Is anyone excited to do ablative testing on it?
    • manbitesdog 15 hours ago
      With such high throughput because of the sparsity, I'm particularly interested in distilling it into other architectures. I'd like to try a recurrent transformer when I have the time.