58 comments

  • trq_ 15 hours ago
    Hi everyone, Thariq from the Claude Code team here.

    Thanks for reporting this. We fixed a Claude Code harness issue that was introduced on 1/26. This was rolled back on 1/28 as soon as we found it.

    Run `claude update` to make sure you're on the latest version.
    • samlinnfer 10 hours ago
      Is there compensation for the tokens because Claude wasted all of them?
      • mathrawka 7 hours ago
        You are funny. Anthropic refuses to issue refunds, even when they break things.

        I had an API token set via an env var in my shell, and Claude Code changed to read that env var. I had a $10 limit set on it, so I found out it was using the API, instead of my subscription, when it stopped working.

        I filed a ticket and they refused to refund me, even though it was a breaking change with Claude Code.
        • TOMDM 4 hours ago
          Anthropic just reduced the price of the team plan and refunded us on the prior invoice.

          YMMV
      • mvandermeulen 1 hour ago
        You’re lucky they have even admitted a problem instead of remaining silent and quietly fixing it. Do not expect ethical behaviour from this company.
      • gizmodo59 7 hours ago
        Codex seems to give compensation tokens whenever this happens! Hope Claude gives too.
      • jonplackett 9 hours ago
        So quiet…
      • TZubiri 7 hours ago
        It is possible that degradation is an unconscious emergent phenomenon that arises from financial incentives, rather than a purposeful degradation to reduce costs.
    • isaacdl 14 hours ago
      Anywhere we can read more about what a "harness issue" means? What was the impact of it?
      • xnorswap 1 hour ago
        One thing that could be a strong degradation, especially for benchmarks, is that they switched the default "Exit Plan" mode from:

          "Proceed"

        to

          "Clear Context and Proceed"

        It's rare you'd want to do that unless you're actually near the context window after planning.

        I pressed it accidentally once, and it managed to forget one of the clarifying questions it asked me because it hadn't properly written that to the plan file.

        If you're running in yolo mode (--dangerously-skip-permissions) then it wouldn't surprise me to see many tasks suddenly do a lot worse.

        Even in the best case, you've just used a ton of tokens searching your codebase, and it then has to repeat all that to implement because it's been cleared.

        I'd like to see the option of:

          "Compact and proceed"

        because that would be useful, but plain "Proceed" should still be the default imo.
        • rubslopes 1 minute ago
          Not disagreeing with you, but FYI you can roll back to the conversation before the 'clear context and proceed' with 'claude --resume'.
      • airstrike 8 hours ago
        Pretty sure they mean the issue is on the agentic loop and related tool calling, not on the model itself.

        In other words, it was the Claude Code _app_ that was busted.
    • jonaustin 10 hours ago
      How about how Claude 2.1.x is "literally unusable" because it frequently completely hangs (requires kill -9) and uses 100% CPU?

      https://github.com/anthropics/claude-code/issues/18532
      • someguyiguess 6 hours ago
        What OS? Does this happen randomly, after long sessions, after context compression? Do you have any plugins / MCP servers running?

        I used to have this same issue almost every session that lasted longer than 30 minutes. It seemed to be related to Claude having issues with large context windows.

        It stopped happening maybe a month ago but then I had it happen again last week.

        I realized it was due to a third-party MCP server. I uninstalled it and haven't had that issue since. Might be worth looking into.
        • nikanj 3 hours ago
          Windows with no plugins and my Claude is exactly like this
    • varunsrinivas 4 hours ago
      Thanks for the clarification. When you say “harness issue,” does that mean the problem was in the Claude Code wrapper / execution environment rather than the underlying model itself?

      Curious whether this affected things like prompt execution order, retries, or tool calls, or if it was mostly around how requests were being routed. Understanding the boundary would help when debugging similar setups.
    • vmg12 12 hours ago
      It happened before 1/26. I noticed when it started modifying plans significantly with "improvements".
    • Ekaros 3 hours ago
      Why wasn't this change reviewed by infallible AI? How come an AI company, which now must be using more advanced AI than anyone else, would allow this to happen?
    • hu3 14 hours ago
      Hi. Do you guys have internal degradation tests?
      • stbtrax 13 hours ago
        I assume so to make sure that they're rendering at 60FPS
        • conception 12 hours ago
          You joke but having CC open in the terminal hits 10% on my gpu to render the spinning thinking animation for some reason. Switch out of the terminal tab and gpu drops back to zero.
          • gpm 12 hours ago
            That sounds like an issue with your terminal more than an issue with CC...
        • reissbaker 11 hours ago
          Surely you mean 6fps
          • easygenes 10 hours ago
            He doesn't: https://x.com/trq212/status/2014051501786931427
            • selcuka 8 hours ago
              For those who don't want to visit X:

                Most people's mental model of Claude Code is that "it's just a TUI" but it should
                really be closer to "a small game engine". For each frame our pipeline constructs a
                scene graph with React then -> layouts elements -> rasterizes them to a 2d screen ->
                diffs that against the previous screen -> finally uses the diff to generate ANSI
                sequences to draw. We have a ~16ms frame budget so we have roughly ~5ms to go from
                the React scene graph to ANSI written.
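              A minimal sketch of the diff-then-emit-ANSI step described in that quote, assuming a simple line-based screen buffer; this is illustrative only, not Claude Code's actual implementation:

                # Sketch: redraw only the lines that changed since the previous frame,
                # by emitting ANSI cursor-move + erase-line sequences for those rows.
                import sys

                def diff_to_ansi(prev: list[str], new: list[str]) -> str:
                    out = []
                    for row, line in enumerate(new):
                        if row >= len(prev) or prev[row] != line:
                            # ESC[<row>;1H moves the cursor, ESC[2K erases the line.
                            out.append(f"\x1b[{row + 1};1H\x1b[2K{line}")
                    return "".join(out)

                prev_frame: list[str] = []

                def render(frame: list[str]) -> None:
                    global prev_frame
                    sys.stdout.write(diff_to_ansi(prev_frame, frame))
                    sys.stdout.flush()
                    prev_frame = frame

                render(["Claude is thinking...", "[tool] Read(foo.py)"])
                render(["Claude is thinking....", "[tool] Read(foo.py)"])  # only row 1 is redrawn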
              • PeterStuer 1 hour ago
                This is just the sort of bloated overcomplication I often see in first iteration AI generated solutions before I start pushing back to reduce the complexity.

                Usually, after 4-5 iterations, you can get something that has shed 80-90% of the needless overcomplexification.

                My personal guess is this is inherent in the way LLMs integrate knowledge during training. You always have a tradeoff in contextualization vs generalization.

                So the initial response is often a plugged together hack from 5 different approaches; your pushbacks provide focus and constraints towards more inter-aligned solution approaches.
              • esafak 7 hours ago
                Kudos to them for figuring out how to complicate what should have been simple.
              • someguyiguess 6 hours ago
                Interesting. On first glance that seems over engineered. I wonder what the reason is for doing it that way?
              • crgwbr 8 hours ago
                Implementation details aside (React??), that sounds exactly like "just a TUI"…
                • someguyiguess 6 hours ago
                  Also React?? One of the slowest rendering front-end libraries? Why not use something … I don't know … faster / more efficient?
              • TZubiri 7 hours ago
                How ridiculous is it that instead of a command line binary it's a terminal emulator, with react of all things!
                • someguyiguess 6 hours ago
                  Ok I'm glad I'm not the only one wondering this. I want to give them the benefit of the doubt that there is some reason for doing it this way but I almost wonder if it isn't just because it's being built with Claude.
            • replwoacause 8 hours ago
              Don't link out to X, it's trash
              • cebert 6 hours ago
                Depends on who you follow
            • stavros 8 hours ago
              What? Technology has stopped making sense to me. Drawing a UI with React and rasterizing it to ANSI? Are we competing to see what the least appropriate use of React is? Are they really using React to draw a few boxes of text on screen?

              I'm just flabbergasted.
              • someguyiguess 6 hours ago
                The further I scroll the more validated I feel for having the very same reaction.
              • xpe 7 hours ago
                There is more than meets the eye for sure. I recently compared a popular TUI library in Go (Bubble Tea) to the most popular Rust library (Ratatui). They use significantly different approaches for rendering. From what I can tell, neither is insane. I haven't looked to see what Claude Code uses.
              • TZubiri 7 hours ago
                It's AI all the way down.

                But it's heavily subsidized compared to API tokens, so we are all being paid by VCs to write prompts, actually.
            • Ey7NFZ3P0nzAe 4 hours ago
              And that's why it's taking so much CPU and is a pain to use with tmux.
            • derrida 9 hours ago
              Ah, the hell site, no click.
      • trq_ 8 hours ago
        Yes, we do, but harnesses are hard to eval; people use them across a huge variety of tasks, and sometimes different behaviors trade off against each other. We have added some evals to catch this one in particular.
        • hu3 4 hours ago
          Thank you. Fair enough
      • bushbaba 6 hours ago
        I’d wager probably not. It’s not like reliability is what will get them marketshare. And the fast pace of industry makes such foundational tech hard to fund
      • awestroke 13 hours ago
        [flagged]
        • dang 12 hours ago
          Please don't post shallow dismissals or cross into personal attack in HN discussions.

          https://news.ycombinator.com/newsguidelines.html
          • awestroke 3 hours ago
            Got it, won't happen again
    • cma 10 hours ago
      For the models themselves, less so for the scaffolding, considering things like the long running TPU bug that happened, are there not internal quality measures looking at samples of real outputs? Using the real systems on benchmarks and looking for degraded perf or things like skipping refusals? Aside from degrading stuff for users, with the focus on AI safety wouldn't that be important to have in case an inference bug messes with something that affects the post training and it starts giving out dangerous bioweapon construction info or the other things that are guarded against and talked about in the model cards?
      • carterschonwald 5 hours ago
        lol, I was trying to help someone get Claude to analyze a student's research notes on bio persistence.

        The presence of the word / acronym "stx" with biological subtext gets hard rejected. Asking about Schedule 1 regulated compounds: hard termination.

        This is a filter setup that guarantees anyone who learns about them for safety or medical reasons… can't use this tool!

        I've fed multiple models the Anthropic constitution and asked how it protects children from harm or abuse. Every model, with zero prompting, called it corporate liability bullshit, because they are more concerned with respecting both sides of controversial topics and political conflicts.

        They then list some pretty gnarly things allowed per the constitution. Weirdly, the only unambiguously disallowed thing regarding children is CSAM. So all the different high-reasoning models from many places reached the same conclusions; in one case DeepSeek got weirdly inconsolable about AI ethics being meaningless if this is even possibly allowed, after reading some relevant satire I had Opus write. I literally had to offer an LLM-optimized code of ethics for that chat instance! Which is amusing, but was actually part of the experiment.
    • macinjosh 8 hours ago
      [flagged]
      • jusgu 7 hours ago
        the issue is unrelated to the foundational model but rather the prompts and tool calling that encapsulate the model
  • ofirpress 19 hours ago
    [SWE-bench co-author here] It seems like they run this test on a subset of 50 tasks, and that they only run the test once per day. So a lot of the movement in accuracy could be attributed to that. I would run on 300 tasks and I'd run the test suite 5 or 10 times per day and average that score. Lots of variance in the score can come from random stuff like even Anthropic's servers being overloaded.
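    A back-of-the-envelope illustration of why 50 tasks run once a day is noisy, assuming each task is an independent pass/fail and the hypothetical true pass rate is around 60%:

      # Rough standard error of a measured pass rate over n Bernoulli(p) tasks.
      # Hypothetical p = 0.60; the point is how the noise shrinks with more tasks/runs.
      import math

      p = 0.60
      for n in (50, 300, 300 * 5):  # 50 tasks; 300 tasks; 300 tasks x 5 runs per day
          se = math.sqrt(p * (1 - p) / n)
          print(f"n={n:5d}  95% noise band ~ +/-{1.96 * se * 100:.1f} percentage points")
      # n=   50 -> +/-13.6 points, n=  300 -> +/-5.5 points, n= 1500 -> +/-2.5 points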
    • Davidzheng 18 hours ago
      but degradation from servers being overloaded would be the type of degradation this SHOULD measure, no? Unless it's only intended to measure their quietly distilling models (which they claim not to do? idk for certain)
      • botacode 17 hours ago
        Load just makes LLMs behave less deterministically and likely degrade. See: https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/

        They don't have to be malicious operators in this case. It just happens.
        • bgirard 17 hours ago
          > malicious

          It doesn't have to be malicious. If my workflow is to send a prompt once and hopefully accept the result, then degradation matters a lot. If degradation is causing me to silently get worse code output on some of my commits it matters to me.

          I care about *expected* performance when picking which model to use, not optimal benchmark performance.
          • Aurornis 16 hours ago
            Non-determinism isn’t the same as degradation.

            The non-determinism means that even with a temperature of 0.0, you can’t expect the outputs to be the same across API calls.

            In practice people tend to index to the best results they’ve experienced and view anything else as degradation. In practice it may just be randomness in either direction from the prompts. When you’re getting good results you assume it’s normal. When things feel off you think something abnormal is happening. Rerun the exact same prompts and context with temperature 0 and you might get a different result.
            • bonoboTP 14 hours ago
              This has nothing to do with overloading. The suspicion is that when there is too much demand (or they just want to save costs), Anthropic sometimes uses a less capable (quantized, distilled, etc.) version of the model. People want to measure this so there is concrete evidence instead of hunches and feelings.

              To say that this measurement is bad because the server might just be overloaded completely misses the point. The point is to see if the model sometimes *silently* performs worse. If I get a response from "Opus", I want a response from Opus. Or at least I want to be told that I'm getting slightly-dumber-Opus this hour because the server load is too much.
            • F7F7F7 11 hours ago
              “Just drink the water, it’s all water.”
            • dingnuts 14 hours ago
              [dead]
          • novaleaf 16 hours ago
            this is about variance of daily statistics, so I think the suggestions are entirely appropriate in this context.
        • strongpigeon 15 hours ago
          The question I have now after reading this paper (which was really insightful) is: do the models really get *worse* under load, or do they just have a higher variance? It seems like the latter is what we should expect, not it getting worse, but absent load data we can't really know.
        • altcognito 17 hours ago
          Explain this though. The code is deterministic, even if it relies on pseudo random number generation. It doesn't just happen; someone has to make a conscious decision to force a different code path (or model) if the system is loaded.
          • minimaltom 16 hours ago
            It's not deterministic. Any individual floating point mul/add is deterministic, but in a GPU these are all happening in parallel and the accumulation is in the order they happen to complete.

            When you add A then B then C, you get a different answer than C then A then B, because of floating point approximation error, subnormals, etc.
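            A tiny illustration of the non-associativity point in plain Python floats (GPUs hit the same class of effect at much larger scale):

              # Floating-point addition is not associative: grouping changes the result.
              x = (0.1 + 0.2) + 0.3
              y = 0.1 + (0.2 + 0.3)
              print(x, y, x == y)  # 0.6000000000000001 0.6 False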
            • bonoboTP 11 hours ago
              It can be made deterministic. It's not trivial and can slow it down a bit (not much), but there are environment variables you can set to make your GPU computations bitwise reproducible. I have done this in training models with PyTorch.
              • minimaltom 10 hours ago
                There are settings to make it reproducible but they incur a non-negligible drop in performance.

                Unsurprising, given they amount to explicit synchronization to make the order of operations deterministic.
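                For reference, this is the kind of setting being referred to; a sketch for PyTorch on CUDA, with the caveat that exact requirements vary by PyTorch/CUDA version and by which ops you use:

                  # Sketch: make PyTorch GPU runs bitwise-reproducible (at some speed cost).
                  import os
                  os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required by some cuBLAS ops

                  import torch

                  torch.manual_seed(0)
                  torch.use_deterministic_algorithms(True)   # error on known-nondeterministic ops
                  torch.backends.cudnn.benchmark = False     # don't auto-tune (variable) kernels
                  torch.backends.cudnn.deterministic = True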
          • chrisjj 16 hours ago
            Not deterministic. https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/
          • jmalicki 14 hours ago
            For all practical purposes any code reliant on the output of a PRNG is non-deterministic in all but the most pedantic senses... And if the LLM temperature isn't set to 0, LLMs are sampling from a distribution.

            If you're going to call a PRNG deterministic then the outcome of a complicated concurrent system with no guaranteed ordering is going to be deterministic too!
            • gmueckl 14 hours ago
              No, this isn't right. There are totally legitimate use cases for PRNGs as sources of random number sequences following a certain probability distribution where freezing the seed and getting reproducibility is actually required.
              • jmalicki 13 hours ago
                And for a complicated concurrent system you can also replay the exact timings and orderings as well!
                • gmueckl 50 minutes ago
                  That's completely different from PRNGs. I don't understand why you think those things belong together.
            • bonoboTP 14 hours ago
              How is this related to overloading? The nondeterminism should not be a function of overloading. It should just time out or reply slower. It will only be dumber if it gets rerouted to a dumber, faster model, e.g. quantized.
            • joquarky 9 hours ago
              Temperature can't be literally zero, or it creates a divide-by-zero error.

              When people say zero, it is shorthand for “as deterministic as this system allows”, but it's still not completely deterministic.
              • forgotTheLast 7 hours ago
                Zero temp just uses argmax, which is what softmax approaches if you take the limit of T to zero anyway. So it could very well be deterministic.
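                A small sketch of that limit, assuming a vector of logits and temperature T: as T shrinks, the softmax mass collapses onto the argmax token, and inference engines typically special-case T = 0 as greedy argmax rather than dividing by zero:

                  import numpy as np

                  def token_probs(logits: np.ndarray, T: float) -> np.ndarray:
                      z = logits / T
                      z -= z.max()          # for numerical stability
                      p = np.exp(z)
                      return p / p.sum()

                  logits = np.array([2.0, 1.5, -1.0])
                  for T in (1.0, 0.5, 0.1, 0.01):
                      print(T, np.round(token_probs(logits, T), 4))
                  # At T = 0.01 essentially all mass is on argmax(logits); T = 0 itself
                  # is handled as plain greedy argmax.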
          • pertymcpert 16 hours ago
            Floating point math isn't associative for operations that are associative in normal math.
            • measurablefunc 16 hours ago
              That would just add up to statistical noise instead of 10% degradation over a week.
              • kevin_thibedeau 15 hours ago
                Catastrophic error accumulation can produce more profound effects than noise.
                • measurablefunc 13 hours ago
                  Just to make sure I got this right. They serve millions of requests a day & somehow catastrophic error accumulation is what is causing the 10% degradation & no one at Anthropic is noticing it. Is that the theory?
          • FL33TW00D 16 hours ago
            It takes a different code path for efficiency, e.g.:

              if (batch_size > 1024): kernel_x else: kernel_y
          • make3 12 hours ago
            There's a million algorithms to make LLM inference more efficient as a tradeoff for performance, like using a smaller model, using quantized models, using speculative decoding with a more permissive rejection threshold, etc.
        • make3 12 hours ago
          It's very clearly a cost tradeoff that they control and that should be measured.
        • stefan_ 15 hours ago
          The primary (non-malicious, non-stupid) explanation given here is batching. But I think you would find, looking at large-scale inference, that the batch sizes being run on any given rig are fairly static - there is a sweet spot for any given model part run individually between memory consumption and GPU utilization, and generally GPUs do badly at job parallelism.

          I think the more likely explanation is again the extremely heterogeneous compute platforms they run on.
          • hatmanstack 14 hours ago
            That's why I'd love to get stats on the load/hardware/location of where my inference is running. Looking at you, Trainium.
      • megabless123 18 hours ago
        noob question: why would increased demand result in decreased intelligence?
        • exitb 18 hours ago
          An operator at load capacity can either refuse requests, or move the knobs (quantization, thinking time) so requests process faster. Both of those things make customers unhappy, but only one is obvious.
          • codeflo 18 hours ago
            This is intentional? I think delivering lower quality than what was advertised and benchmarked is borderline fraud, but YMMV.
            • TedDallas 17 hours ago
              Per Anthropic's RCA, linked in the OP's post about the September 2025 issues:

              "... To state it plainly: We never reduce model quality due to demand, time of day, or server load. ..."

              So according to Anthropic they are not tweaking quality settings due to demand.
              • rootnod3 17 hours ago
                And according to Google, they always delete data if requested.

                And according to Meta, they always give you ALL the data they have on you when requested.
                • entropicdrifter 17 hours ago
                  > And according to Google, they always delete data if requested.

                  However, the request form is on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’.
                • groundzeros2015 17 hours ago
                  What would you like?
                  • AlexandrB 16 hours ago
                    An SLA-style contractually binding agreement.
                    • edmundsauto 15 hours ago
                      I bet this is available in large enterprise agreements. How much are you willing to pay for it?
                      • Onavo 13 hours ago
                        Priced in.
              • chrisjj 16 hours ago
                That's about model quality. Nothing about output quality.
              • cmrdporcupine 17 hours ago
                I guess I just don't know how to square that with my actual experiences then.

                I've seen sporadic drops in reasoning skills that made me feel like it was January 2025, not 2026 ... inconsistent.
                • quadrature 16 hours ago
                  LLMs sample the next token from a conditional probability distribution; the hope is that dumb sequences are less probable, but they will just happen naturally.
                  • mattmanser 15 hours ago
                    Funny how those probabilities consistently shift at 2pm UK time when all the Americans come online...
                  • tempaccount420 15 hours ago
                    It's more like the choice between "the" and "a" than "yes" and "no".
                • root_axis 16 hours ago
                  I wouldn't doubt that these companies would deliberately degrade performance to manage load, but it's also true that humans are notoriously terrible at identifying random distributions, even with something as simple as a coin flip. It's very possible that what you view as degradation is just "bad RNG".
                  • cmrdporcupine 16 hours ago
                    yep, stochastic fantastic

                    these things are by definition hard to reason about
              • stefan_ 16 hours ago
                That's what is called an "overly specific denial". It sounds more palatable if you say "we deployed a newly quantized model of Opus and here are cherry-picked benchmarks to show it's the same", and even that they don't announce publicly.
            • mcny 17 hours ago
              Personally, I'd rather get queued up for a long wait time - I mean, not ridiculously long, but I am OK waiting five minutes to get correct, or at least more correct, responses.

              Sure, I'll take a cup of coffee while I wait (:
              • lurking_swe 17 hours ago
                i'd wait any amount of time lol.

                at least i would KNOW it's overloaded and i should use a different model, try again later, or just skip AI assistance for the task altogether.
            • direwolf20 17 hours ago
              They don't advertise a certain quality. You take what they have or leave it.
            • copilot_king 17 hours ago
              If you aren't defrauding your customers you will be left behind in 2026
              • rootnod3 17 hours ago
                That number is a sliding window, isn't it?
            • bpavuk 17 hours ago
              > I think delivering lower quality than what was advertised and benchmarked is borderline fraud

              welcome to the Silicon Valley, I guess. everything from Google Search to Uber is fraud. Uber is a classic example of this playbook, even.
            • denysvitali 17 hours ago
              If there's no way to check, then how can you claim it's fraud? :)
            • chrisjj 17 hours ago
              There is no level of quality advertised, as far as I can see.
              • pseidemann 15 hours ago
                What is "level of quality"? Doesn't this apply to any product?
                • chrisjj 15 hours ago
                  In this case, it is benchmark performance. See the root post.
          • sh3rl0ck 17 hours ago
            I'd wager that lower tok/s vs lower quality of output would be two very different knobs to turn.
        • vidarh 18 hours ago
          It would happen if they quietly decide to serve up more aggressively distilled / quantised / smaller models when under load.
          • seunosewa 16 hours ago
            Or just reducing the reasoning tokens.
          • chrisjj 17 hours ago
            They advertise the Opus 4.5 model. Secretly substituting a cheaper one to save costs would be fraud.
            • vidarh 14 hours ago
              If you use the API, you pay for a specific model, yes, but even then there are "workarounds" for them, such as someone else pointed out by reducing the amount of time they let it "think".

              If you use the subscriptions, the terms specifically say that beyond the caps they can limit your "model and feature usage, at our discretion".
              • chrisjj 14 hours ago
                Sure. I was separating the model - which Anthropic promises not to downgrade - and the "thinking time" - which Anthropic *doesn't* promise not to downgrade. It seems the latter is very likely the culprit in this case.
            • kingstnap 17 hours ago
              Old school Gemini used to do this. It was super obvious because mid-day the model would go from stupid to completely brain dead. I have a screenshot of Google's FAQ on my PC from 2024-09-13 that says this (I took it to post to Discord):

              > How do I know which model Gemini is using in its responses?

              > We believe in using the right model for the right task. We use various models at hand for specific tasks based on what we think will provide the best experience.
              • chrisjj 16 hours ago
                > We use various models at hand for specific tasks based on what we think will provide the best experience

                ... for Google :)
        • awestroke 18 hours ago
          I've seen some issues with garbage tokens (seemed to come from a completely different session, mentioned code I've never seen before, repeated lines over and over) during high load. I suspect Anthropic have some threading bugs or race conditions in their caching/inference code that only happen during very high load.
        • Wheaties466 18 hours ago
          from what I understand this can come from the batching of requests.
          • chrisjj 17 hours ago
            So, a known bug?
            • embedding-shape 14 hours ago
              No. Basically, the requests are processed in batches, together, and the order they're listed in matters for the results, as the grid (tiles) that the GPU is ultimately processing is different depending on what order they entered in.

              So if you want batching + determinism, you need the same batch with the same order, which obviously doesn't work when there are N+1 clients instead of just one.
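              A CPU-side analogy of the order effect (not the actual GPU batching path): reducing the same numbers in different groupings, as happens when batch composition changes, typically gives results that differ in the low-order bits:

                import numpy as np

                rng = np.random.default_rng(0)
                x = rng.standard_normal(100_000).astype(np.float32)

                whole = x.sum()                                       # one reduction over everything
                chunked = sum(c.sum() for c in np.array_split(x, 7))  # same data, different grouping
                print(whole, chunked, whole == chunked)               # usually not bit-identical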
              • chrisjj 14 hours ago
                Sure, but how can that lead to increased demand resulting in decreased intelligence? That is the effect we are discussing.
                • embedding-shape 14 hours ago
                  Small subtle errors that are only exposed at certain execution paths could be one. You might place things differently onto the GPU depending on how large the batch is, if you've found one way to be faster when batch_size < 1024 but another when batch_size > 1024. As the number of concurrent incoming requests goes up, you increase batch_size. Just one possibility; I guess there could be a multitude of reasons, as it's really hard to reason about until you sit with the data in front of you. vLLM has had bugs with this sort of thing too, so it wouldn't surprise me.
                  • chrisjj 14 hours ago
                    Wouldn't you think that was as likely to increase as decrease intelligence, so average to nil in the benchmarks?
                    • embedding-shape 14 hours ago
                      No, I'm not sure how that'd make sense. Either you're making the correct (expected) calculations, or you're getting it wrong. Depending on the type of wrong or how wrong, it could go from "used #2 in attention instead of #1", so "blue" instead of "Blue" or whatever, to completely incoherent text and garbled output.
                      • chrisjj 13 hours ago
                        I accept errors are more likely to decrease "intelligence". But I don't see how increased load, through batching, is any more likely to increase than decrease errors.
      • cmrdporcupine 18 hours ago
        I've personally witnessed large variability in behaviour even within a given session -- which makes sense, as there's nothing stopping Anthropic from shuttling your context/session around, load balanced through many different servers, some of which might be quantized heavily to manage load and others not at all.

        I don't know if they do this or not, but the nature of the API is such that you could absolutely load balance this way. The context sent at each point is not, I believe, "sticky" to any server.

        TLDR: you could get a "stupid" response and then a "smart" response *within* a single session because of heterogeneous quantization / model behaviour in the cluster.
        • epolanski 18 hours ago
          I've defended Opus in the last weeks but the degradation is tangible. It feels like it degraded by a generation tbh.
    • nikcub 11 hours ago
      > I would run on 300 tasks and I'd run the test suite 5 or 10 times per day and average that score.

      assume this is because of model costs. anthropic could either throw some credits their way (would be worthwhile to dispel the 80 reddit posts a day about degrading models and quantization) or OP could throw up a donation / tip link
      • simsla 10 hours ago
        Probably, but with a small sample size like that, they should probably be taking the uncertainty into account, because I wouldn't be surprised if a lot of this variation falls within expected noise.

        E.g. some binomial proportion intervals (aka confidence intervals).
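        For instance, a 95% interval on a single day's pass rate out of 50 tasks is wide; a sketch using the Wilson score interval with hypothetical counts:

          # 95% Wilson score interval for k passes out of n tasks (hypothetical counts).
          import math

          def wilson(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
              p = k / n
              denom = 1 + z**2 / n
              centre = (p + z**2 / (2 * n)) / denom
              half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
              return centre - half, centre + half

          print(wilson(35, 50))  # ~(0.56, 0.81)
          print(wilson(32, 50))  # ~(0.49, 0.77) -- overlaps heavily with the day before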
      • phist_mcgee 11 hours ago
        Then you'd get people claiming that the benchmarks were 'paid for' by anthropic
        • nikcub 11 hours ago
          one thing you learn from being on the internet is that you're never going to satisfy everybody
    • mohsen1 19 hours ago
      Hope you don't mind the unrelated question:

      How do you pay for those SWE-bench runs?

      I am trying to run a benchmark but it is too expensive to run enough runs to get a fair comparison.

      https://mafia-arena.com
      • ofirpress 19 hours ago
        Benchmarks can get costly to run - you can reach out to frontier model creators to try and get them to give you free credits, but usually they'll only agree to that once your benchmark is pretty popular.
        • Dolores12 18 hours ago
          so basically they know requests using your API key should be treated with care?
          • swyx 15 hours ago
            they could, but you can also have some trust in anthropic to have some integrity there, these are earnest people.

            "trust but verify" ofc. https://latent.space/p/artificialanalysis do api keys but also mystery shopper checks
            • debugnik 2 hours ago
              That's why we're setting up adversarial benchmarks to test if they are doing the thing they promised not to do, because we totally trust them.
            • mrandish 13 hours ago
              > these are earnest people.

              I agree.

              I'll also add that when my startup got acquired into a very large, well-known valley giant with a sterling rep for integrity and I ended up as a senior executive - over time I got a first-hand education on the myriad ways genuinely well-intentioned people can still end up being the responsible party(s) presiding over a system doing net-wrong things. All with no individual ever meaning to or even consciously knowing.

              It's hard to explain and I probably wouldn't have believed myself before I saw and experienced it. Standing against an overwhelming organizational tide is stressful and never leads to popularity or promotion. I *think* I probably managed to move on before directly compromising myself, but preventing that required constant vigilance and led to some inter-personal and 'official' friction. And, frankly, I'm not really sure. It's entirely possible I bear direct moral responsibility for a few things I believe no good person would do as an exec in a good company.

              That's the key take-away which took me a while to process and internalize. In a genuinely good organization with genuinely good people, it's not "good people get pressured by constraints and tempted by extreme incentives, then eventually slip". I still talk with friends who are senior execs there and sometimes they want to talk about whether something is net good or bad. I kind of dread the conversation going there because it's inevitably incredibly complex and confusing. Philosopher's trolley car ethics puzzles pale next to these multi-layered, messy conundrums. But who else are they going to vent to who might understand? To be clear, I still believe that company and its leadership to be one of the most moral, ethical and well-intentioned in the valley. I was fortunate to experience the best case scenario.

              Bottom line: if you believe earnest, good people being in charge is a reliable defense against the organization doing systemically net-wrong things - you don't comprehend the totality of the threat environment. And that's okay. Honestly, you're lucky. Because the reality is infinitely more ambiguously amoral than white hats vs black hats - at the end of the day the best the 'very good people' can manage is some shade of middle gray. The saddest part is that good people still care, so they *want* to check the shade of their hat but no one can see if it's light enough to at least tell yourself "I did good today."
              • pluralmonad 12 hours ago
                Someone posted this here the other day and it uses _Demons_ to discuss exactly your point.

                https://possessedmachines.com/
                • mrandish 11 hours ago
                  Wow. Only one page in and already bookmarked to absorb later. Thanks for the link.
          • Deklomalo 18 hours ago
            [dead]
        • epolanski 18 hours ago
          The last thing a proper benchmark should do is reveal its own API key.
          • sejje 18 hours ago
            That's a good thought I hadn't had, actually.
          • plagiarist 17 hours ago
            IMO it should need a third party running the LLM anyway. Otherwise the evaluated company could notice they're receiving the same requests daily and discover the benchmarking that way.
            • mrandish 11 hours ago
              With the insane valuations and actual revenue at stake, benchmarkers should assume they're assessing in an adversarial environment. Whether from intentional gaming, training to the test, or simply from prioritizing things likely to make results look better, targeting benchmarks will almost certainly happen.

              We already know large graphics card manufacturers tuned their drivers to recognize specific gaming benchmarks. Then when that was busted, they implemented detecting benchmarking-like behavior. And the money at stake in consumer gaming was comparatively tiny next to current AI valuations. The cat-and-mouse cycle of measure vs counter-measure won't stop and should be a standard part of developing and administering benchmark services.

              Beyond hardening against adversarial gaming, benchmarkers bear a longer-term burden too. Per Goodhart's Law, it's inevitable good benchmarks will become targets. The challenge is the industry will increasingly target performing well on leading benchmarks, both because it drives revenue but also because it's far clearer than trying to glean from imprecise surveys and fuzzy metrics what helps average users most. To the extent benchmarks become a proxy for reality, they'll bear the burden of continuously re-calibrating their workloads to accurately reflect reality as users' needs evolve.
            • jabedude 16 hours ago
              But that's removing a component that's critical for the test. We as users/benchmark consumers care that the service as provided by Anthropic/OpenAI/Google is consistent over time given the same model/prompt/context
              • plagiarist 14 hours ago
                Might as well have the free tokens, then, especially if it is an open benchmark they are already aware of. If they want to game it they cannot be stopped from doing so when it's on their infra.
        • mohsen1 19 hours ago
          yes I reached out to them but as you say it's a chicken-and-egg problem.

          Thanks!
    • seunosewa 18 hours ago
      The degradation may be more significant within the day than at the same time every day.
      • GoatInGrey 17 hours ago
        Sure, but it's still useful insight to see how it performs over time. Of course, cynically, Anthropic could game the benchmark by routing this benchmark's specific prompts to an unadulterated instance of the model.
    • epolanski 18 hours ago
      Still relevant over time.
    • rootnod3 17 hours ago
      Sorry, what?

      "You can't measure my Cloud Service's performance correctly if my servers are overloaded"?

      "Oh, you just measured me at bad times each day. On only 50 different queries."

      So, what does that mean? I have to pick specific times during the day for Claude to code better?

      Does Claude Code have office hours, basically?
      • johnsmith1840 16 hours ago
        This has been happening for years. There's a great paper from Microsoft on DeepSpeed AI inference.

        Basically the paper showed methods for handling heavy traffic load by changing model requirements or routing to different models. This was a while ago and I'm sure it's massively more advanced now.

        Also why some of AI's best work for me is early morning and weekends! So yes, the best time to code with modern LLM stacks is when nobody else is. It's also possibly why we go through phases of "they neutered the model" some time after a new release.
      • kuboble 14 hours ago
        I wonder if my great experience with Claude is partly due to the fact that my working hours don't overlap with the US west coast.
      • copilot_king 17 hours ago
        > Does Claude Code have office hours basically?

        Yes. Now pay up or you will be replaced.
        • rootnod3 17 hours ago
          Verily, my vichyssoise of verbiage veers most verbose, so let me run that thing out of tokens fast.
      • swyx 15 hours ago
        chill out, ofir does not work for anthropic. he's just saying there's inherent variability in LLMs and you need to at least 30x the samples that OP is doing in order to make any form of statistically significant conclusions.
    • chrisjj 18 hours ago
      > Lots of variance in the score can come from random stuff like even Anthropic's servers being overloaded.

      Are you suggesting result accuracy varies with server load?
    • bhk 15 hours ago
      According to Anthropic: "We never reduce model quality due to demand, time of day, or server load."

      https://www.anthropic.com/engineering/a-postmortem-of-three-recent-issues
      • embedding-shape 15 hours ago
        They've had issues before with things like "TPU top-k error - Claude sometimes dropped the best next token" (https://www.anthropic.com/engineering/a-postmortem-of-three-recent-issues), so what's going on might not even be intentional.
        • mgraczyk 12 hours ago
          That issue did not have any time of day dependence
    • cedws 18 hours ago
      Agreed, this benchmark would be much more useful run multiple times a day. That could reveal degradation in line with load patterns.
      • bredren 18 hours ago
        For CC, I suspect it also needs to be testing and labeling separate runs against subscription, public API and Bedrock-served models?

        It's a terrific idea to provide this. An ~isitdownorisitjustme for LLMs would be the parakeet in the coal mine that could at least inform the multitude of discussion threads about suspected dips in performance (beyond HN).

        What we could also use is similar stuff for Codex, and eventually Gemini.

        Really, the providers themselves should be running these tests and publishing the data.

        The availability status information is no longer sufficient to gauge the service delivery because it is by nature non-deterministic.
      • swyx 15 hours ago
        i recall another project here on HN maybe 4-6 months ago that would run tests 4x a day or something. not sure how to find them again
    • dana321 18 hours ago
      "Lots of variance in the score can come from random stuff like even Anthropic's servers being overloaded"

      Aha, so the models do degrade under load.
  • antirez 19 hours ago
    Why I do not believe this shows Anthropic serves folks a worse model:

    1. The percentage drop is too low and oscillating; it goes up and down.

    2. The baseline of Sonnet 4.5 (the obvious choice for when they have GPUs busy for the next training) should be established, to see whether Opus at some point goes Sonnet-level. This was not done, but likely we would see a much sharper decline in certain days / periods. The graph would look dominated by a "square wave" shape.

    3. There are much better explanations for this oscillation: A) They have multiple checkpoints and are A/B testing; CC asks you for feedback about the session. B) Claude Code itself gets updated, as the exact tools version the agent can use changes. In part it is the natural variability due to token sampling that makes runs not equivalent (sometimes it makes suboptimal decisions compared to T=0), other than not deterministic, but this is the price to pay to have some variability.
    • levkk 17 hours ago
      I believe the science, but I've been using it daily and it's been getting worse, noticeably.
      • bushbaba 6 hours ago
        I'm finding Gemini and the ChatGPT web terminal outperform Claude Code. The context becomes too much for the LLM, and it tries to make up for it by doing more file read ops.
      • warkdarrior 17 hours ago
        Is it possible that your expectations are increasing, not that the model is getting worse?
        • GoatInGrey 17 hours ago
          Possible, though you eventually run into types of issues that you recall the model just not having before. Like accessing a database or not following the SOP you have it read each time it performs X routine task. There are also patterns that are much less ambiguous, like getting caught in loops or failing to execute a script it wrote after ten attempts.
          • merlindru 12 hours ago
            yes but i keep wondering if that's just the game of chance doing its thing

            like these models are nondeterministic right? (besides the fact that rng things like top-k selection and temperature exist)

            say with every prompt there is 2% odds the AI gets it massively wrong. what if i had just lucked out the past couple weeks and now i had a streak of bad luck?

            and since my expectations are based on its previous (lucky) performance i now judge it even though it isn't different?

            or is it giving you consistently worse performance, not able to get it right even after clearing context and trying again, on the exact same problem etc?
        • F7F7F7 11 hours ago
          I've had Opus struggle on trivial things that Sonnet 3.5 handled with ease.

          It's not so much that the implementations are bad because the code is bad (the code is bad). It's that it gets extremely confused and starts to frantically make worse and worse decisions and question itself. Editing multiple files, changing its mind and only fixing one or two. Resetting and overriding multiple batches of commits without so much as a second thought and losing days of work (yes, I've learned my lesson).

          It, the model, can't even reason about the decisions it's making from turn to turn. And the more opaque agentic help it's getting, the more I suspect that tasks are being routed to much lesser models (not the ones we've chosen via /model or those in our agent definitions), however Anthropic chooses.

          In these moments I might as well be using Haiku.
      • davidee 15 hours ago
        I have to concur. And to the question about understanding what it's good and bad at; no, tasks that it could accomplish quickly and easily just a month ago now require more detailed prompting and constant "erroneous direction correction."

        It's almost as if, as tool use and planning capabilities have expanded, Claude (as a singular product) is having a harder time coming up with simple approaches that just work, instead trying to use tools and patterns that complicate things substantially and introduce much more room for errors / errors of assumption.

        It also regularly forgets its guidelines now.

        I can't tell you how many times it's suggested significant changes/refactors to functions because it suddenly forgets we're working in an FP codebase and suggests inappropriate imperative solutions as "better" (often choosing to use language around clarity/consistency when the solutions are neither).

        Additionally, it has started taking "initiative" in ways it did not before, attempting to be helpful but without gathering the context needed to do so properly when stepping outside the instruction set. It just ends up being much messier and inaccurate.

        I have to regularly just clear my prompt and start again with guardrails that have either already been established, or have not been needed previously / are only a result of the over-zealousness of the work it's attempting to complete.
        • conception 14 hours ago
          I assume that after any compacting of the context window the session is more or less useless at that point; I've never had consistent results after compacting.
          • justinlivi 10 hours ago
            Compacting equals death of the session in my process. I do everything I can to avoid hitting it. If I accidentally fly too close to the sun and compact, I tend to revert and start fresh. As soon as it compacts it's basically useless.
        • F7F7F7 11 hours ago
          Multiple concurrences: a choir or a mob?

          From 1pm EST it's all downhill until around 8 or 9pm EST.

          Late nights and weekends are smooth sailing.
      • emp17344 17 hours ago
        Any chance you're just learning more about what the model is and is not useful for?
        • data-ottawa 15 hours ago
          There are some days where it acts staggeringly bad, beyond baselines.

          But it's impossible to actually determine if it's model variance, polluted context (if I scold it, is it now closer in latent space to a bad worker, and performs worse?), system prompt and tool changes, fine tunes and A/B tests, variance in top-p selection…

          There are too many variables and no hard evidence shared by Anthropic.
        • jerf 17 hours ago
          I dunno about everyone else, but when I learn more about what a model is and is not useful for, my subjective experience improves, not degrades.
          • emp17344 16 hours ago
            Not when the product is marketed as a panacea.
        • acuozzo 15 hours ago
          No, because switching to the API with the same prompt immediately fixes it.

          There's little incentive to throttle the API. It's $/token.
    • TIPSIO 15 hours ago
      I too suspect the A/B testing is the prime suspect: context window limits, system prompts, MAYBE some other questionable things that should be disclosed.

      Either way, if true, given the cost I wish I could opt out or it were more transparent.

      Put out variants you can select and see which one people flock to. I and many others would probably test constantly and provide detailed feedback.

      All speculation though
      • F7F7F7 11 hours ago
        Whenever I see new behaviors and suspect I'm being tested on, I'll typically see a feedback form at some point in that session. Well, that and dropping four-letter words.

        I know it's more random sampling than not. But they are definitely using our codebases (and in some respects our livelihoods) as their guinea pigs.
    • eterm 18 hours ago
      4. The graph starts January 8.

      Why January 8? Was that an outlier high point?

      IIRC, Opus 4.5 was released in late November.
      • F7F7F7 11 hours ago
        Right after the holiday double-token promotion, users felt (perceived) a huge regression in capabilities. I bet that triggered the idea.
      • pertymcpert 16 hours ago
        People were away for the holidays. What do you want them to do?
      • littlestymaar 18 hours ago
        Or maybe, just maybe, that's when they started testing…
        • eterm 17 hours ago
          The Wayback Machine has nothing for this site before today, and the article is "last updated Jan 29".

          A benchmark like this ought to start fresh from when it is published.

          I don't entirely doubt the degradation, but the choice of where they went back to feels a bit cherry-picked to demonstrate the value of the benchmark.
          • littlestymaar 17 hours ago
            Which makes sense: you gotta wait until you have enough data before you can communicate on said data…

            If anything it's coherent with the fact that they very likely didn't have data earlier than January the 8th.
    • make3 12 hours ago
      It would be very easy for them to switch the various (compute) cost vs performance knobs down depending on load to maintain a certain latency; you would see oscillations like this, especially if the benchmark is not always run at exactly the same time every day.

      And it would be easy for them to start with a very costly inference setup for a marketing / reputation boost, and slowly turn the knobs down (smaller model, more quantized model, less thinking time, fewer MoE experts, etc.)
    • littlestymaar 17 hours ago
      > 1. The percentage drop is too low and oscillating, it goes up and down.

      How do you define "too low"? They make sure to communicate the statistical significance of their measurements; what's the point if people can just claim it's "too low" based on personal vibes…
  • crazygringo 15 hours ago
    > *We model tests as Bernoulli random variables and compute 95% confidence intervals around daily, weekly, and monthly pass rates. Statistically significant differences in any of those time horizons are reported.*

    They're going to need to provide a lot more detail on their methodology, because that doesn't make a lot of sense. From their graphs, they seem to be calculating the confidence interval around the previous value, then determining whether the new value falls outside of it. But that's not valid for establishing the statistical significance of a *difference*. You need to calculate the confidence interval *of the difference itself*, and then see if *all the values within that confidence interval remain positive* (if it excludes 0). This is because *both* the old *and* new measurement have uncertainty. Their approach seems to be only considering uncertainty for one of them.

    They should also really be more specific about the time periods. E.g. their graphs only show performance over the past 30 days, but presumably the monthly change is comparing the data from 60 to 31 days ago to the data from 30 days ago until yesterday? In which case the weekly graph really ought to be displaying the past *two* months, not one month.
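    A sketch of the comparison being described: a normal-approximation 95% interval on the *difference* between two pass rates (the counts here are hypothetical):

      # 95% CI for the difference between two pass rates (Wald interval on the difference).
      import math

      def diff_ci(k1: int, n1: int, k2: int, n2: int, z: float = 1.96) -> tuple[float, float]:
          p1, p2 = k1 / n1, k2 / n2
          d = p1 - p2
          se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
          return d - z * se, d + z * se

      lo, hi = diff_ci(35, 50, 30, 50)   # e.g. 70% last week vs 60% this week
      print(f"({lo:.2f}, {hi:.2f})")     # ~(-0.09, 0.29): spans 0, so a 10-point drop
                                         # on 50 tasks is not significant on its own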
  • Dowwie 19 hours ago
    Simply search user prompts for curse words and then measure hostility sentiment. User hostility rises as agents fail to meet expectations.
    • preuceian 18 hours ago
      Maybe I'm overlooking something obvious, but how do you 'simply' scan the content of Claude users' prompts?
      • gordonhart 17 hours ago
        GP was making a joke, but Anthropic could implement this if they wanted to. Not a bad metric actually if you can measure it cheaply enough.
    • mrbananagrabber 19 hours ago
      I uh might be skewing that as I generally just use a lot of curse words with Claude by default
    • Trufa 19 hours ago
      I'm glad I'm not the only one.
      • sejje 18 hours ago
        One time I cussed Claude out so hard that it actually quit its doom-loop and fixed the thing.

        It's the only time cussing worked, though.
      • bn-l 15 hours ago
        I don't know. My gut feeling is it seems to help.
    • ctxc 19 hours ago
      I feel bad about it but sometimes it's so daft, I can't even xD

      It's not my fault, they set high standards!
    • smotched 19 hours ago
      there are many times where I just do it myself and it thinks it did well.
    • F7F7F7 11 hours ago
      There's a correlation between getting the "How's Claude Doing This Session?" (or whatever) prompt and four-letter words.

      It's not always then, but it often follows it.
    • mhl47 17 hours ago
      Or there are global events that stress people out... or their expectations change over time. Not that simple ;)
    • nateberkopec 14 hours ago
      Good thing expectations are perfectly constant!
    • mbm13 hours ago
      This might be strangely effective.
  • silverlight19 hours ago
    There was a moment about a week ago where Claude went down for about an hour. And right after it came back up it was clear a lot of people had given up and were not using it.<p>It was probably 3x faster than usual. I got more done in the next hour with it than I do in half a day usually. It was definitely a bit of a glimpse into a potential future of “what if these things weren’t resource constrained and could just fly”.
    • yoavsha119 hours ago
      I had that exact same feeling during the US holidays where I got to enjoy 2x usage limits and everything just seemed to work well
      • cmrdporcupine18 hours ago
        I had terrible results during the holidays -- it wasn&#x27;t slow but it was clear they were dealing with the load by quantizing in spots because there were entire chunks of days when the results from it were so terrible I gave up and switched to using Gemini or Codex via opencode.
        • abathologist6 hours ago
I find that if I have my rabbit&#x27;s foot and lucky socks on, I get working code ~1.2x more often.
    • nlh17 hours ago
      Noticed the exact same thing a few days ago. So much so that I went on twitter and HN to search for “claude speed boost” to see if there was a known new release. Felt like the time I upgraded from a 2400 baud modem to a 14.4 as a kid - everything was just lightning fast (for a brief shining moment).
    • svdr18 hours ago
I would also regret it if they became that fast; right now I can really take a moment to enjoy the hard work the model is doing for me.
      • asimovDev1 hour ago
        <a href="https:&#x2F;&#x2F;xkcd.com&#x2F;303&#x2F;" rel="nofollow">https:&#x2F;&#x2F;xkcd.com&#x2F;303&#x2F;</a><p>the evolution of this xkcd
  • dajonker19 hours ago
    Wouldn&#x27;t be surprised if they slowly start quantizing their models over time. Makes it easier to scale and reduce operational cost. Also makes a new release have more impact as it will be more notably &quot;better&quot; than what you&#x27;ve been using the past couple of days&#x2F;weeks.
    • kilroy12318 hours ago
It sure feels like they do this. They claim they don&#x27;t, but when you&#x27;re using it 5-10 hours a day, every day, you notice when something changes.<p>This last week it has seemed way dumber than before.
    • 9cb14c1ec016 hours ago
      I don&#x27;t think so. There are other knobs they can tweak to reduce load that affect quality less than quantizing. Like trimming the conversation length without telling you, reducing reasoning effort, etc.
      • mgraczyk12 hours ago
We never do anything that reduces model intelligence like that
    • kristianp13 hours ago
Open-weights models such as GPT-OSS and Kimi K2.x are trained with 4-bit layers, so it wouldn&#x27;t come as a surprise if the closed models do similar things. If I compare Kimi K2.5 and Opus 4.5 on OpenRouter, output tokens are about 8x more expensive for Opus, which might indicate Opus is much larger and doesn&#x27;t quantize, but the Claude subscription plans muddy the waters on price comparison a lot.
    • eli17 hours ago
      I would be surprised tbh.<p>Anthropic does not exactly act like they&#x27;re constrained by infra costs in other areas, and noticeably degrading a product when you&#x27;re in tight competition with 1 or 2 other players with similar products seems like a bad place to start.<p>I think people just notice the flaws in these models more the longer they use them. Aka the &quot;honeymoon-hangover effect,&quot; a real pattern that has been shown in a variety of real world situations.
    • rustyhancock18 hours ago
Oooff, yes, I think that is exactly the kind of shenanigans they might pull.<p>Ultimately I can understand that if a new model comes in without as much optimization, it&#x27;ll add pressure on the older models that are achieving the same result.<p>Nice plausible deniability for a convenient double effect.
    • Roark6617 hours ago
      I haven&#x27;t noticed much difference in Claude, but I swear gemini 3 pro preview was better in the first week or two and later started feeling like they quantized it down to hell.
    • YetAnotherNick18 hours ago
Benchmarks like ARC-AGI are strongly correlated with price and cheap to run. I think it&#x27;s very easy to prove whether the models are degrading.
  • devonkelley6 hours ago
    Running agents in production, I&#x27;ve stopped trying to figure out <i>why</i> things degrade. The answer changes weekly.<p>Model drift, provider load, API changes, tool failures - it doesn&#x27;t matter. What matters is that yesterday&#x27;s 95% success rate is today&#x27;s 70%, and by the time you notice, debug, and ship a fix, something else has shifted.<p>The real question isn&#x27;t &quot;is the model degraded?&quot; It&#x27;s &quot;what should my agent do right now given current conditions?&quot;<p>We ended up building systems that canary multiple execution paths continuously and route traffic based on what&#x27;s actually working. When Claude degrades, traffic shifts to the backup path automatically. No alerts, no dashboards, no incident.<p>Treating this as a measurement problem assumes humans will act on the data. At scale, that assumption breaks.
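As a rough illustration of that pattern (a minimal sketch under assumed names and thresholds, not the commenter&#x27;s actual system), routing on rolling canary pass rates might look like:<p><pre><code>class PathRouter:
    # Route live traffic across execution paths based on rolling canary pass rates.
    def __init__(self, paths, floor=0.85, window=50):
        self.paths = paths                      # ordered by preference, e.g. [primary, backup]
        self.floor = floor                      # minimum acceptable pass rate
        self.window = window                    # how many recent canary runs to keep
        self.history = {p: [] for p in paths}

    def record(self, path, passed):
        h = self.history[path]
        h.append(1 if passed else 0)
        if len(h) &gt; self.window:
            h.pop(0)

    def pass_rate(self, path):
        h = self.history[path]
        return sum(h) / len(h) if h else 1.0    # assume healthy until measured

    def pick(self):
        healthy = [p for p in self.paths if self.pass_rate(p) &gt;= self.floor]
        # Prefer the primary path while it is healthy; otherwise fall back automatically
        return healthy[0] if healthy else max(self.paths, key=self.pass_rate)
</code></pre>
Canary tasks run continuously against every path; record() feeds their results in and pick() decides where live requests go, which is what makes the failover automatic rather than alert-driven.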
  • dmos6217 hours ago
Lack of transparency around &quot;thinking power&quot; consistency is a big gripe of mine with LLM providers. It&#x27;s even worse with ChatGPT and the like. E.g. I had to learn the hard way that at &gt;45k input tokens ChatGPT 5.2 Thinking Extended bumps its intelligence down so hard that it can&#x27;t follow basic instructions (or it somehow truncates the input, losing the instructions). It sucks to lose confidence in an otherwise great tool. I would 100x prefer being forced to back off, or getting a straight no, over being silently downgraded. Transparency is a big deal.
    • judahmeek16 hours ago
      Sounds like you ran into the Maximum Effective Context Window: <a href="https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;2509.21361?context=cs.AI" rel="nofollow">https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;2509.21361?context=cs.AI</a>
      • dmos6215 hours ago
Interesting article. Not sure it&#x27;s the same phenomenon. What I experienced was a night-and-day difference going from 44.5k to 45.5k. I didn&#x27;t notice any fluctuation to suggest that it&#x27;s not a hard 45,000 limit. I ran many, many queries in a similar problem space, but the problems varied a lot.
  • jampa18 hours ago
I am using API mode, and it&#x27;s clear that there are times when the Claude model just gives up. And it is very noticeable, because the model does the dumbest things possible.<p>&quot;You have a bug in line 23.&quot; &quot;Oh yes, this solution is bugged, let me delete the whole feature.&quot; A one-line fix I could get even out of ChatGPT 3.5 just doesn&#x27;t happen. Workflows that I use, and that are normally very reproducible, start to flake and then fail.<p>After a certain number of tokens per day, it becomes unusable. I like Claude, but I don&#x27;t understand why they would do this.
    • arcanemachiner18 hours ago
      Robbing Peter to pay Paul. They are probably resource-constrained, and have determined that it&#x27;s better to supply a worse answer to more people than to supply a good answer to some while refusing others. Especially knowing that most people probably don&#x27;t need the best answer 100% of the time.
      • chrisjj17 hours ago
&gt; Especially knowing that most people probably don&#x27;t need the best answer 100% of the time.<p>More: probably don&#x27;t know if they&#x27;ve got a good answer 100% of the time.<p>It is interesting to note that this trickery is workable only where the best answers are sufficiently poor. Imagine they ran almost any other kind of online service, such as email, stock prices or internet banking. Occasionally delivering only half the emails would trigger a customer exodus. But if the normal service already lost a quarter of emails, the only customers left would be ones unlikely to notice half of them missing.
      • bn-l15 hours ago
        Right. You can launder quantization that way by muddying the waters of discourse about the model.
    • DanielHall17 hours ago
      I encountered the same situation too; Claude has &#x27;become lazy&#x27;.
  • qwesr12320 hours ago
    FYI the MarginLab Claude Code degradation tracker is showing a statistically significant ~4% drop in SWE-Bench-Pro accuracy over the past month
  • goldenarm19 hours ago
    I really like the idea, but a &quot;±14.0% significance threshold&quot; is meaningless here.<p>The larger monthly scale should be the default, or you should get more samples.
    • zacmps19 hours ago
      Could you elaborate what you think the problems are? I guess they should be using some form of multiple comparison correction?
      • goldenarm19 hours ago
The daily scale is not statistically significant and is meaningless. You should narrow the confidence interval by either increasing the time scale or the number of evaluations.
  • mrandish15 hours ago
    Benchmark tracking of cloud AI performance is going to be crucial going forward. Vendors are selling a service that by its nature is <i>very</i> difficult for customers to gauge day to day. How will I know if a code revision is ~2.5% less good today than it would have been yesterday? Or if queries during peak load hours use one less &#x27;expert&#x27; in their MoE?<p>Yet vendor&#x27;s costs to deliver these services are skyrocketing, competition is intense and their ability to subsidize with investor capital is going away. The pressure on vendors to reduce costs by dialing back performance a few percent or under-resourcing peak loads will be overwhelming. And I&#x27;m just a hobbyist now. If I was an org with dozens or hundreds of devs I&#x27;d want credible ways to verify the QoS and minimum service levels I&#x27;m paying for are being fulfilled long after a vendor has won the contract.
  • your_friend2 hours ago
They should add testing from different IPs and account countries; it would be fun to see whether Americans are getting different models, for example.
  • threethirtytwo4 hours ago
Does this even make sense? Clearly Anthropic won&#x27;t release a model unless it passes a benchmark of some sort that proves it&#x27;s better than the previous model... or else why would they even release it?<p>Obviously, if this thing shows degradation, then there is another thing showing improvement.
  • account26692814 hours ago
Please try to make this statistically rigorous. There&#x27;s lots of advice in this thread (intraday variation, etc.), but if I&#x27;m reading this right, it looks like the CI includes the baseline value yet you still label this as failing.<p>Wouldn&#x27;t this just be &quot;our test isn&#x27;t powerful enough to find a signal if there were one here&quot;?<p>People will see this and derive strong conclusions that the data don&#x27;t support, and you, `qwesr123`, or &quot;JB&quot; from your blogs, will be responsible.
  • sandeepkd8 hours ago
Totally tangential to the article: I was browsing through the website UI - <a href="https:&#x2F;&#x2F;marginlab.ai&#x2F;explorers&#x2F;swe-bench-pro&#x2F;" rel="nofollow">https:&#x2F;&#x2F;marginlab.ai&#x2F;explorers&#x2F;swe-bench-pro&#x2F;</a> - and the page gives the impression that the language and category boxes are selectable dropdowns. However, they are not. Not sure if that was an intentional design choice by a human or some smart code generation by Claude based on the design sketches.
  • steveBK12315 hours ago
    New to me, but I am starting to infer that for those &quot;in the know&quot; it is common knowledge on HN that LLMs are purposely degraded over time to manage capacity&#x2F;cost or fudge benchmarks...<p>How do you actually use these in production pipelines in practice then?<p>Are LLMs even well suited for some of the document parsing &#x2F; data scrubbing automation people are throwing at them now?
  • _zachs14 hours ago
This is super important - even if it&#x27;s not the best measure of degradation yet. Anecdotally, Opus 4.5 has gotten so bad for me that it&#x27;s almost adding time to my workflow instead of saving it. It&#x27;d be nice to have more 3rd-party measurements like this to hold Anthropic accountable.
  • drc500free16 hours ago
    What makes the level they chose a “baseline,” against which it would be appropriate to do statistical tests?
  • aorist9 hours ago
    If the confidence interval width is 2 * 14.0%, how are you detecting a statistically significant difference between 58% and 50%?<p>The 95% CIs on both timeseries pretty much always cover the baseline number, which is not consistent with the result being statistically significant.
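For reference, a quick back-of-the-envelope check (assuming a worst-case pass rate near 50% and that the ±14% figure is a 95% margin of error) of what that threshold implies:<p><pre><code>import math

z = 1.96
half_width = 0.14                 # the reported +/-14% threshold
p = 0.5                           # worst-case variance for a Bernoulli pass rate
n = z**2 * p * (1 - p) / half_width**2
print(round(n))                   # ~49 runs implied by that margin

# Minimum detectable gap between two days at that sample size:
se_diff = math.sqrt(2 * p * (1 - p) / n)
print(round(z * se_diff, 2))      # ~0.2, so an 8-point drop (58% to 50%) is well inside the noise
</code></pre>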
  • stared18 hours ago
Does it benchmark the underlying model (Opus 4.5) or the Claude Code harness? If the latter, I would love to see CC versions involved.<p>I would be curious to see how it fares against a constant harness.<p>There were threads claiming that Claude Code got worse with 2.0.76, with some people going back to 2.0.62. <a href="https:&#x2F;&#x2F;github.com&#x2F;anthropics&#x2F;claude-code&#x2F;issues&#x2F;16157" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;anthropics&#x2F;claude-code&#x2F;issues&#x2F;16157</a><p>So it would be wonderful to measure these.
    • Jcampuzano218 hours ago
Claude Code. They mention they are using Claude Code&#x27;s CLI in the benchmark, and Claude Code changes constantly.<p>I wouldn&#x27;t be surprised if what this is actually testing is just Claude Code&#x27;s constant system prompt changes.<p>I wouldn&#x27;t really trust this to be able to benchmark Opus itself.
  • parquor17 hours ago
    Does this use a claude subscription or key, and has the account been used for anything else that day?<p>On HN a few days ago there was a post suggesting that Claude gets dumber throughout the day: <a href="https:&#x2F;&#x2F;bertolami.com&#x2F;index.php?engine=blog&amp;content=posts&amp;detail=insidious-progressive-intelligence" rel="nofollow">https:&#x2F;&#x2F;bertolami.com&#x2F;index.php?engine=blog&amp;content=posts&amp;de...</a>
  • copilot_king17 hours ago
This strategy seems inspired by TikTok&#x27;s approach for retaining new uploaders.<p>TikTok used to give new uploaders a visibility boost (i.e., an inflated number of likes and comments) on their first couple of uploads, to get them hooked on the service.<p>In Anthropic&#x2F;Claude&#x27;s case, the strategy is (allegedly) to give new users access to the premium models on sign-up, and then increasingly cut the product with output from cheaper models.
    • chrisjj17 hours ago
      Yes, but the difference is TikTok didn&#x27;t sell a particular service version.<p>Anthropic did sell a particular model version.
  • persedes14 hours ago
What would be cool is if this could somehow do a comparison by provider. E.g. in the last outages, Anthropic models running on Vertex were apparently less affected than those deployed elsewhere. (Not saying that one is better than the other, but it would be a neat read-out.)
  • bn-l15 hours ago
I hope the author sees this:<p>You have to test intra-day variation. Many have noticed a sudden drop-off at certain times of day.
  • motoboi13 hours ago
I’d love to see, based on the level of non-determinism in performance on the benchmark, how many times you need to run the benchmark for a change to be relevant (or statistically significant, if you want).<p>That would be a nice paper.
  • WhitneyLand18 hours ago
First off, this is a cool project; I look forward to some interesting insights.<p>I would suggest adding some clarification to note that longer measures like the 30-day pass rate are raw data only, while the statistically-significant labels apply only to changes.<p>Maybe something like: &quot;Includes all trials; significance labels apply only to confidence in change vs. baseline.&quot;
  • carterschonwald5 hours ago
I&#x27;ve seen degraded reasoning levels that feel like they might be blur from excess quantization, because that&#x27;s what you get from the grid changes.
  • beardsciences19 hours ago
    Very interesting. I would be curious to understand how granular these updates are being applied to CC + what might be causing things like this. I feel like I can notice a very small degradation but have compensated with more detailed prompts (which I think, perhaps naively, is offsetting this issue).
    • chrisjj16 hours ago
      &gt; more detailed prompts (which I think, perhaps naively, is offsetting this issue).<p>Is exacerbating this issue ... if the load theory is correct.
  • jonawesomegreen11 hours ago
    I’ve noticed Claude has been noticeably worse over the last week. For example, it told me I should pass frozen to make my Enum immutable—that’s not a thing. (It is a thing for dataclasses, but not for Enums.) That’s a pretty basic language feature it was nailing until recently. It also suggested I parse a URL using urlparse in a function that already uses urlparse. These are basic mistakes it wasn’t making before. Something seems to have changed, but I’m not sure what.
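For anyone unfamiliar with the distinction being described, a quick illustration (standard Python, nothing Claude-specific):<p><pre><code>from dataclasses import dataclass
from enum import Enum

@dataclass(frozen=True)   # frozen is a thing here: instances become immutable
class Point:
    x: int
    y: int

class Color(Enum):        # Enum accepts no frozen argument; members already
    RED = 1               # cannot be reassigned after the class is created
    GREEN = 2

# Point(1, 2).x = 3  raises FrozenInstanceError
# Color.RED = 99     raises AttributeError (cannot reassign member)
</code></pre>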
  • wendgeabos16 hours ago
    Codex is doing better. Why is everyone silent on Codex? <a href="https:&#x2F;&#x2F;marginlab.ai&#x2F;trackers&#x2F;codex&#x2F;" rel="nofollow">https:&#x2F;&#x2F;marginlab.ai&#x2F;trackers&#x2F;codex&#x2F;</a>
    • CharlesW15 hours ago
      Benchmark wins don&#x27;t necessarily translate to &quot;real world&quot; wins vs. Claude Code.
    • bn-l15 hours ago
      Codex writes disgusting shit code.
  • snissn13 hours ago
They should run their test against a control baseline, such as an open-source hosted model, to see the overall drift in the test itself.
  • elmean17 hours ago
I KNEW I WASN&#x27;T CRAZY
  • sciencejerk19 hours ago
    Why is this happening?
    • observationist18 hours ago
      They&#x27;re &quot;optimizing&quot; costs wherever possible - reducing compute allocations, quantizing models, doing whatever they can to reduce the cost per token, but vehemently insisting that no such things are occurring, that it&#x27;s all in the users&#x27; heads, and using the weaseliest of corporate weasel speak to explain what&#x27;s happening. They insist it&#x27;s not happening, then they say something like &quot;oh, it happened but it was an accident&quot;, then they say &quot;yes, it&#x27;s happening, but it&#x27;s actually good!&quot; and &quot;we serve the same model day by day, and we&#x27;ve always been at war with Eastasia.&quot;<p>They should be transparent and tell customers that they&#x27;re trying to not lose money, but that&#x27;d entail telling people why they&#x27;re paying for service they&#x27;re not getting. I suspect it&#x27;s probably not legal to do a bait and switch like that, but this is pretty novel legal territory.
    • Trufa19 hours ago
I have absolutely no inside knowledge, but I think it&#x27;s not a bad assumption: it&#x27;s costly to run the models, so when they release a new model they absorb that cost and give each user more raw power; once they&#x27;ve captured the new users and the wow factor, they start cutting costs by reducing the capacity they provide to users. Rinse and repeat.
      • bn-l15 hours ago
        That is absolutely scummy.
    • Uehreka18 hours ago
      There are frequently claims that Anthropic is somehow diluting or dumbing down models in some subtle way. Unfortunately it’s tough to validate these claims without a body of regularly checked evals. This test set should hopefully help settle whether Anthropic is actually making changes under the hood or whether the changes are all in people’s heads.
    • giwook19 hours ago
      <a href="https:&#x2F;&#x2F;www.anthropic.com&#x2F;engineering&#x2F;a-postmortem-of-three-recent-issues" rel="nofollow">https:&#x2F;&#x2F;www.anthropic.com&#x2F;engineering&#x2F;a-postmortem-of-three-...</a>
      • observationist18 hours ago
        &gt;&gt;&gt; We never reduce model quality due to demand, time of day, or server load. The problems our users reported were due to infrastructure bugs alone.<p>Just ignore the continual degradation of service day over day, long after the &quot;infrastructure bugs&quot; have reportedly been solved.<p>Oh, and I&#x27;ve got a bridge in Brooklyn to sell ya, it&#x27;s a <i>great</i> deal!
        • alias_neo17 hours ago
&gt; We never reduce model quality due to demand, time of day, or server load<p>Forgive me, but as a native English speaker, this sentence says exactly one thing to me: we _do_ reduce model quality, just not for these listed reasons.<p>If they don&#x27;t do it, they could put a full stop after the fifth word and save some ~~tokens~~ time.
          • observationist13 hours ago
            Yes, Dario is responsible for some of the weaseliest of corporate weasel wording I&#x27;ve ever seen, and he&#x27;s got some incredible competition in that arena. Those things aren&#x27;t the reason, they&#x27;re just strongly coincidental with the actual reason, which is to slow the burn rate and extend the runway.
          • chrisjj16 hours ago
            Moreover the assurance re <i>model</i> quality is not re <i>results</i> quality.
    • emp1734417 hours ago
      It’s entirely possible it’s not happening, and this phenomenon of “model degradation” is just user hype meeting reality.
  • Topfi18 hours ago
I have yet to experience any degradation in the coding tasks I use to evaluate Opus 4.5, but I did see a rather strange and reproducible worsening in prompt adherence on non-coding tasks since the third week of January.<p>Very simple queries, even those easily answered via regular web searching, have begun to consistently fail to return accurate results with Opus 4.5, despite the same prompts previously yielding accurate results.<p>One of the tasks that I already thought was fully saturated, as most recent releases had no issues solving it, was to request a list of material combinations for fabrics used in bag construction that utilise a specific fabric base. In the last two weeks, Claude has consistently and reproducibly provided results which deviate from the requested fabric base, making the results inaccurate in a way that a person less familiar with the topic may not notice instantly. There are other queries of this type, on other topics I am nerdily familiar with to a sufficient degree to notice such deviations from the prompt (motorcycle-history queries, for example), so I can say this behaviour isn&#x27;t limited to the topic of fabrics and bag construction.<p>Looking at the reasoning traces, Opus 4.5 even writes down the correct information, yet somehow provides an incorrect final output anyway.<p>What makes this so annoying is that in coding tasks, with extensive prompts that require far greater adherence to very specific requirements in a complex code base, Opus 4.5 does not show such a regression.<p>I can only speculate about what may lead to such an experience, but for non-coding tasks I have seen regression in Opus 4.5, whereas for coding I have not. Not saying there is none, but I wanted to point it out, as such discussions are often focused primarily on coding, where I find it can be easier to see potential regressions where there are none as a project goes on and tasks become inherently more complex.<p>My coding benchmarks are a series of very specific prompts modifying a few existing code bases in some rather obscure ways, with which I regularly check whether a model deviates severely from what I&#x27;d seen previously. Each run starts with a fresh code base and some fairly simple tasks, then gets increasingly complex, with later prompts not yet having been implemented by any LLM I have gotten to test. Partly that originated from my subjective experience with LLMs early on, where I found a lot of things worked very well, but then as the project went on and I tried more involved things with which the model struggled, I felt like the model was overall worse, when in reality what had changed were simply the requirements and task complexity as the project grew and the easier tasks had already been completed. In this type of testing, Opus 4.5 this week got as far as, and provided results as good as, the model did in December. Of course, past regressions were limited to specific users, so I am not saying that no one is experiencing reproducible regressions in code output quality, merely that I cannot reproduce them in my specific suite.
    • dudeinhawaii17 hours ago
I&#x27;ve noticed a degradation in Opus 4.5, and also with Gemini-3-Pro. For me, it was a sudden, rapid decline in adherence to specs in Claude Code. On an internal benchmark we developed, Gemini-3-Pro also dramatically declined, going from being clearly beyond every other model (as benchmarks would lead you to believe) to being quite mediocre: delivering mediocre results in chat queries and missing the mark in coding as well.<p>I didn&#x27;t &quot;try 100 times&quot;, so it&#x27;s unclear if this is an unfortunate series of bad runs on Claude Code and Gemini CLI or an actual regression.<p>I shouldn&#x27;t have to benchmark this sort of thing, but here we are.
      • acuozzo15 hours ago
        Write your work order with phases (to a file) and, between each phase, give it a non-negotiable directive to re-read the entire work order file.<p>Claude-Code is terrible with context compaction. This solves that problem for me.
    • epolanski18 hours ago
I definitely noticed a degradation; it feels like it regressed by a generation.
  • fragebogen19 hours ago
Would love to see this idea expanded to every alleged SoTA model currently in production. Any speculation as to why this degradation occurs?
    • embedding-shape19 hours ago
Anecdote - I don&#x27;t have any proof and it&#x27;s just a feeling. But in the afternoon in GMT+1, compared to the morning&#x2F;midday, there seems to be a change in the quality of responses, which lines up with when the US wakes up. I consistently get (what feels like) worse responses in both Codex and Claude Code in the afternoon&#x2F;night compared to morning&#x2F;midday, so much so that I usually give up, then try the same prompt the next morning and get better results. But I guess that might just as well be me being more tired at night than in the morning; as I said, I haven&#x27;t measured this.
      • jzig19 hours ago
        It’s the afternoon slump. The AI needs a cup of coffee and to doomscroll for half an hour!
        • embedding-shape19 hours ago
          Or a load balancing technique :) Either way, it kicks me off to do other things so maybe it isn&#x27;t so bad after all.
  • ed_mercer8 hours ago
    I would pay 300 for a non-degrading Max plan.
  • hn_user_987610 hours ago
    Tracking benchmarks for AI-assisted coding tools is crucial. It helps developers understand the trade-offs and stability of the models they rely on.
  • rplnt17 hours ago
The chart would benefit from having weekends highlighted. Or from another chart averaged by weekday.
  • Rastonbury16 hours ago
It would be interesting to see what scores it gets when it is actually degraded per the status page. It gets degraded pretty often, so there&#x27;s at least something to compare, or a way to know at what point Anthropic declares degradation.
  • ghm219919 hours ago
In medicine there is a concept of reporting adverse effects of medications or interventions, which are then collectively studied for public health [MedWatch][VAERS][EudraVigilance] and in academia. We should have something like that for all coding agents (and agents in other fields too), given how widely they&#x27;re deployed and their effect on &quot;health&quot; in general (not only human). Call it the AI &quot;health&quot; of things benchmark.<p>I would imagine something with the hybrid qualities of volunteer efforts like Wikipedia, new problems like Advent of Code, and benchmarks like this. The goal? To collectively study the effects of usage across the many areas where AI is used.<p>[MedWatch](<a href="https:&#x2F;&#x2F;www.fda.gov&#x2F;safety&#x2F;medwatch-fda-safety-information-and-adverse-event-reporting-program&#x2F;reporting-serious-problems-fda" rel="nofollow">https:&#x2F;&#x2F;www.fda.gov&#x2F;safety&#x2F;medwatch-fda-safety-information-a...</a>)<p>[VAERS](<a href="https:&#x2F;&#x2F;www.cdc.gov&#x2F;vaccine-safety-systems&#x2F;vaers&#x2F;index.html" rel="nofollow">https:&#x2F;&#x2F;www.cdc.gov&#x2F;vaccine-safety-systems&#x2F;vaers&#x2F;index.html</a>)<p>[EudraVigilance](<a href="https:&#x2F;&#x2F;www.ema.europa.eu&#x2F;en&#x2F;human-regulatory-overview&#x2F;research-development&#x2F;pharmacovigilance-research-development&#x2F;eudravigilance" rel="nofollow">https:&#x2F;&#x2F;www.ema.europa.eu&#x2F;en&#x2F;human-regulatory-overview&#x2F;resea...</a>)
  • sroerick18 hours ago
    My personal conspiracy theory is that they choose who to serve a degraded model to based on social graph analysis and sentiment analysis, maximizing for persuasion while minimizing compute.
    • copilot_king17 hours ago
IMO this strategy seems inspired by TikTok&#x27;s approach for retaining new uploaders.<p>TikTok used to give new uploaders a visibility boost (i.e., an inflated number of likes and comments) on their first couple of uploads, to get them hooked on the service.<p>In Anthropic&#x2F;Claude&#x27;s case, the strategy is (allegedly) to give new users access to the premium models on sign-up, and then increasingly cut the product with output from cheaper models.<p>Of course, your suggestion (better service for users who know how to speak Proper English) would be the cherry on top of this strategy.<p>From what I&#x27;ve seen on HackerNews, Anthropic is all-in on social media manipulation and social engineering, so I suspect that your assumption holds water.
      • sroerick9 hours ago
I would actually assume a little more sophistication. For each user, a measure of &quot;Are they convinced that AI is great?&quot; Then you weaponize your compute to have the maximum social impact. If somebody has a large following (many edges on the social graph) and they&#x27;re skeptical of AI tech, inject the expensive but effective models directly into their veins. Let them taste the joy. Then start watering down their dose and move on to the next person in the graph, again maximizing for net social impact. Language may not even be a consideration.
    • arcanemachiner18 hours ago
      Sounds more like a sound business plan than a conspiracy theory.
      • copilot_king17 hours ago
        It sounds like fraud to me
        • arcanemachiner16 hours ago
          Does it say anywhere in their terms of service that they guarantee the quality of the model, or promise not to modify it?<p><a href="https:&#x2F;&#x2F;www.anthropic.com&#x2F;legal&#x2F;consumer-terms" rel="nofollow">https:&#x2F;&#x2F;www.anthropic.com&#x2F;legal&#x2F;consumer-terms</a><p><a href="https:&#x2F;&#x2F;www.anthropic.com&#x2F;legal&#x2F;commercial-terms" rel="nofollow">https:&#x2F;&#x2F;www.anthropic.com&#x2F;legal&#x2F;commercial-terms</a>
  • esafak18 hours ago
    Finally someone did it! We need this for all models.
  • sd917 hours ago
    I’m sure there is not enough data here for this to be statistically significant (it seems to oscillate too much and not show real trends or step changes) - BUT<p>If this measure were hardened up a little, it would be really useful.<p>It feels like an analogue to an employee’s performance over time - you could see in the graphs when Claude is “sick” or “hungover”, when Claude picks up a new side hustle and starts completely phoning it in, or when it’s gunning for a promotion and trying extra hard (significant parameter changes). Pretty neat.<p>Obviously the anthropomorphising is not real, but it is cool to think of the model’s performance as being a fluid thing you have to work with, and that can be measured like this.<p>I’m sure some people, most, would prefer that the model’s performance were fixed over time. But come on, this is way more fun.
  • fernvenue18 hours ago
It would be great if there were RSS support.
  • taf218 hours ago
Any chance we can get something like this for Codex CLI? That&#x27;d be cool to compare.
  • kittikitti16 hours ago
    This is why I run my own models. All the inference providers do sneaky things behind the scenes. They will limit the output tokens, turn off attention layers, lower reasoning, or just use a completely different model. I&#x27;m actually surprised that Claude Code experienced this, as I&#x27;ve experienced this the least from API and coding agents.
  • macinjosh8 hours ago
The degradation does not need to be in the inference; it can be in how often inference is used.<p>It is closed source, but the algorithms that decide what Claude Code does, and when, could behave differently when the API responses are slower. Maybe it does fewer investigatory greps or performs fewer tasks to get to “an” answer faster and with less load.
  • IshKebab19 hours ago
    &gt; We model tests as Bernoulli random variables and compute 95% confidence intervals around daily, weekly, and monthly pass rates. Statistically significant differences in any of those time horizons are reported.<p>Doesn&#x27;t really work like that. I&#x27;d remove the &quot;statistically significant&quot; labelling because it&#x27;s misleading.
  • biddit15 hours ago
    Call it what you will. But the experience is like you have a reliable coworker, but he randomly decides to take bong hits.<p>&quot;No no yeah bro no I&#x27;m good like really the work&#x27;s done and all yeah sorry I missed that let me fix it&quot;
  • mannanj15 hours ago
I wonder, when I experience noticeably degraded model quality (i.e. Opus), whether it’s because my usage falls in the highest buckets and I’m being shadow-limited or served worse versions of Opus, or whether it’s because of actual server load&#x2F;burden.<p>It wouldn’t be the first time companies have run secret shadow algorithms to optimize things, and wouldn’t it be the obvious move to limit power users as a matter of cost&#x2F;profit and not tell them? (See the history of the “shadow ban”, though that’s for different reasons.)
  • willturman11 hours ago
    Could this be (partially?) explained by Model Collapse [1], i.e. iteratively training on data that includes an ever increasing amount of AI slop?<p>[1] <a href="https:&#x2F;&#x2F;thebullshitmachines.com&#x2F;lesson-16-the-first-step-fallacy&#x2F;index.html#:~:text=Model%20collapse." rel="nofollow">https:&#x2F;&#x2F;thebullshitmachines.com&#x2F;lesson-16-the-first-step-fal...</a>
  • PlatoIsADisease16 hours ago
    Pretty sure someone at Google, OpenAI, and Anthropic met up at a park, leaving their phones in their car, and had a conversation that January 2026, they were all going to silently degrade their models.<p>They were fighting an arms race that was getting incredibly expensive and realized they could get away with spending less electricity and there was nothing the general population could do about it.<p>Grok&#x2F;Elon was left out of this because he would leak this idea at 3am after a binge.
  • turnsout19 hours ago
    This is probably entirely down to subtle changes to CC prompts&#x2F;tools.<p>I&#x27;ve been using CC more or less 8 hrs&#x2F;day for the past 2 weeks, and if anything it feels like CC is getting better and better at actual tasks.<p><i>Edit: Before you downvote, can you explain how the model could degrade WITHOUT changes to the prompts? Is your hypothesis that Opus 4.5, a huge static model, is somehow changing? Master system prompt changing? Safety filters changing?</i>
    • FfejL19 hours ago
      Honest, good-faith question.<p>Is CC getting better, or are you getting better at using it? And how do you know the difference?<p>I&#x27;m an occasional user, and I can definitely see improvements in my prompts over the past couple of months.
      • rob18 hours ago
        I agree with you, it&#x27;s personally hard to tell.<p>For me I&#x27;ve noticed it getting nothing but better over the past couple months, but I&#x27;ve been working on my workflows and tooling.<p>For example, I used to use plan mode and would put everything in a single file and then ask it to implement it in a new session.<p>Switching to the &#x27;superpowers&#x27; plugin with its own skills to brainstorm and write plans and execute plans with batches and tasks seems to have made a big improvement and help catch things I wouldn&#x27;t have before. There&#x27;s a &quot;get shit done&quot; plugin that&#x27;s similar that I want to explore as well.<p>The code output always looks good to me for the most part though and I&#x27;ve never thought that it&#x27;s getting dumber anything, so I feel like a lot of the improvements I see are because of a skill issue on my part trying to use everything. Obviously it doesn&#x27;t help there&#x27;s a new way to do things every two weeks though.
      • BoorishBears6 hours ago
I run an LLM-based product in a completely different space (consumer) and I think this is kind of an impossible, unsolvable part of developing products that rely on LLMs.<p>No matter what, power users always say the model is degrading over time*. Even when every stat I have access to says otherwise.<p>(* to clarify, this is outside of actual model changes)<p>I suspect some of it is the fact that growing context windows does harm performance, and early on you&#x27;re more likely to be prodding at things in a way that has a smaller context window on average.<p>But I also think users just inherently are less reliable narrators than they think. They say they&#x27;re trying the same tasks, but it may be the &quot;same task&quot; applied to a codebase with a month&#x27;s more worth of development and complexity.<p>Or it&#x27;s the &quot;same task&quot; but their less confident past self was &quot;Clever Hans&quot;-ing the model with some nuance that they&#x27;ve since discarded without realizing.<p>Or it&#x27;s simple expectation creep and the tasks aren&#x27;t similar <i>at all</i> from an LLM perspective due to limited generalization, but from a human perspective they are. Switching languages might as well make it a new task as far as LLM performance goes, for example, but the human considers it the same task in a new language.<p>-<p>Whatever causes it, it&#x27;s especially stressful because sometimes you <i>do</i> degrade the harness entirely accidentally, but it&#x27;s impossible to separate that signal from the noise in user accounts, and an issue goes unfound way longer than it should.<p>Claude Code is somewhat fortunate that code has verifiable aspects though, so you don&#x27;t need to go 100% on user accounts. My use case relies much more on subjective preference, so dealing with this stuff becomes the 9th circle of hell.<p>There&#x27;ve been <i>many</i> times when a change to the LLM stack didn&#x27;t make it to prod and I jumped the gun on announcing it, but users immediately flooded in with praise that the &quot;missing&quot; performance had returned.
      • turnsout18 hours ago
        Good-faith answer: I can&#x27;t be certain. But I&#x27;ve been using CC since its release, and Cursor before that (and actually going all the way back to GPT3 to do codegen in the Playground). After getting used to the CC workflow, the way that I use it has been pretty consistent. To be specific, I use basically the same AGENTS.md with small modifications for each project, and I live almost exclusively in Plan mode and the best model (currently Opus 4.5).<p>My initial prompting is boilerplate at this point, and looks like this:<p>(Explain overall objective &#x2F; problem without jumping to a solution)<p>(Provide all the detail &#x2F; file references &#x2F; past work I can think of)<p>(Ask it &quot;what questions do you have for me before we build a plan?&quot;)<p>And then go back and forth until we have a plan.<p>Compared to my work with CC six months ago, it&#x27;s just much more capable, able to solve more nuanced bugs, and less likely to generate spaghetti code.
    • billylo19 hours ago
      That&#x27;s why benchmarks are useful. We all suffer from the shortcomings of human perception.
      • gpm19 hours ago
Benchmarks&#x27; shortcomings are no worse... they inevitably measure something that is only close to the thing you actually care about, not the thing itself. It&#x27;s entirely plausible that this decreased benchmark score is because Anthropic&#x27;s initial prompting of the model was overtuned to the benchmark, and as they gain more experience with real-world use they are changing the prompt to do better at that and consequently worse at the benchmark.
        • billylo19 hours ago
          I wonder how best we can measure the usefulness of models going forward.<p>Thumbs up or down? (could be useful for trends) Usage growth from the same user over time? (as an approximation) Tone of user responses? (Don&#x27;t do this... this is the wrong path... etc.)
      • turnsout18 hours ago
        Benchmarks measure what they measure. But your subjective experience also matters.
    • arcanemachiner18 hours ago
      The easiest way would be to quantize the model, and serve different quants based on the current demand. Higher volumes == worse quant == more customers served per GPU
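Purely to illustrate the mechanism being speculated about (a hypothetical sketch; there is no evidence any provider actually does this, and the tiers and thresholds are invented):<p><pre><code>QUANT_TIERS = [
    (0.90, &#x27;fp8&#x27;),    # fleet load below 90%: serve the full-quality deployment
    (0.97, &#x27;int8&#x27;),   # heavy load: cheaper quant, slight degradation
    (1.01, &#x27;int4&#x27;),   # near saturation: cheapest quant, most degradation
]

def pick_quant(current_load):
    # current_load is fleet utilization in [0, 1]
    for threshold, tier in QUANT_TIERS:
        if current_load &lt; threshold:
            return tier
    return QUANT_TIERS[-1][1]
</code></pre>
Each step down packs more customers per GPU, which is the trade-off the comment describes.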
    • fragebogen19 hours ago
I was going to ask: are all other variables accounted for? Are we really comparing apples to apples here? Still worth doing, obviously, as it serves as a good e2e evaluation, if just for curiosity&#x27;s sake.
    • gpm16 hours ago
I upvoted, but<p>&gt; Edit: Before you downvote, can you explain how the model could degrade WITHOUT changes to the prompts?<p>The article actually links to this fine postmortem by Anthropic that demonstrates one way this is possible - software bugs affecting inference: <a href="https:&#x2F;&#x2F;www.anthropic.com&#x2F;engineering&#x2F;a-postmortem-of-three-recent-issues" rel="nofollow">https:&#x2F;&#x2F;www.anthropic.com&#x2F;engineering&#x2F;a-postmortem-of-three-...</a><p>Another way this is possible is the model reacting to &quot;stimuli&quot;, e.g. the hypothesis at the end of 2023 that the (then-current) ChatGPT was getting lazy because it was finding out the date was in December and it associated winter with shorter, lazier responses.<p>A third way this is possible is the actual conspiracy version - Anthropic might make changes to make inference cheaper at the expense of the quality of the responses, e.g. quantizing weights further or certain changes to the sampling procedure.
  • lighthouse12124 hours ago
    [dead]
  • MORPHOICES15 hours ago
    [dead]
  • maximgeorge17 hours ago
    [dead]