So Opus 4.7 is measurably worse at long-context retrieval compared to Opus 4.6. Opus 4.6 scores 91.9% and Opus 4.7 scores 59.2%. At least they're transparent about the model degradation. They traded long-context retrieval for better software engineering and math scores.
To be honest, I think it's just a more honest score of what Opus 4.6 actually was. Once contexts get sufficiently large, Opus develops pretty bad short-term memory loss.
Agreed, I appreciate the transparency (and Anthropic isn't normally very transparent). It's also great to know because I will change how I approach long contexts knowing it struggles more with them.
Could this be because they've found the 1M context uneconomical (i.e. it costs too much to serve, or burns through users' quotas too quickly, causing complaints), and so they're no longer targeting it as a goal?
At what point along the 1M window does context become "long" enough that this degradation occurs?
The benchmark GP mentioned is measuring at 128k-256k context (there's another at 524k-1024k, where 4.6 scored 78.3% and 4.7 scored 32.2%).<p>The longer the context the worse the performance; there isn't really a qualitative step change in capability (if there is imo it happens at like 8k-16k tokens, much sooner than is relevant for multi-turn coding tasks - see e.g. this old benchmark <a href="https://github.com/adobe-research/NoLiMa" rel="nofollow">https://github.com/adobe-research/NoLiMa</a> ).
A year ago it felt like SoTA model developers were not improving so much as moving the dirt around. Maybe we’re in another such rut.
> Chemical and biological weapons threat model 2 (CB-2): Novel chemical/biological weapons production capabilities. A model has CB-2 capabilities if it has the ability to significantly help threat actors (for example, moderately resourced expert-backed teams) create/obtain and deploy chemical and/or biological weapons with potential for catastrophic damages far beyond those of past catastrophes such as COVID-19.<p>That's an interesting choice of benchmark for measuring the risk of "Chemical and biological weapons"
This reads more like an advertisement for Mythos at first glance
I never understand these critiques. If something is useful and you’re selling it, does that mean any technical document describing its usefulness becomes marketing?<p>I guess maybe, but then do those documents lose value as technical documents? Not necessarily at all, so I don’t see the point. How are you supposed to describe a useful technical thing to users?
This is supposedly the Opus 4.7 model card. It's okay for it to be marketing for Opus 4.7 and describe what it can do, and even okay for it to talk about what it does better than the last generation. GP was saying it sounds like marketing for Mythos (a different and unreleased model). I don't want the Opus 4.7 model card to be advertising for something else.<p>For context, the word "Mythos" appears 331 times in a 221-page document. "Opus 4.6" appears 240 times, so references to a model that nobody has really used outnumber references to the last-generation model.
That's why I don't like these "model cards" being presented as if they are some sort of technical document -- they're marketing materials.
> The technical error that caused accidental chain-of-thought supervision in some prior models (including Mythos Preview) was also present during the training of Claude Opus 4.7, affecting 7.8% of episodes.<p>>_>
Have they effectively communicated what a 20x or 10x Claude subscription actually means? And with Claude 4.7 increasing usage by 1.35x, does that mean a 20x plan is now really a 13x plan (no token increase on the subscription) or a 27x plan (more tokens granted to compensate for the higher compute cost) relative to Claude Opus 4.6?
They have communicated it as 5x is 5 x Pro, and 20x is 20 x Pro (I haven’t looked lately so not sure if that’s changed).<p>They have also repeatedly communicated that the base unit (Pro allotment) is subject to change and does change often.<p>As far as I can tell, that implies there is no guarantee that those subscriptions get some specific number of tokens per unit of time. It’s not a claim they make.
Definitely 13x, at least for now
Feels like buying toilet paper.
Dumb question but why are chemical weapons always addressed as a risk with LLMs? Is the idea that they contain how to make chemical weapons or that they would guide someone on how?<p>Would there not already be websites that contain that information? How is an LLM different, I guess, from some sort of anarchist cookbook thing.
They contain broad overviews (throw some disease-causing bacteria in a sort of rainbow arrangement of increasingly more effective antibiotics, and you'll usually get something that's at least very deadly even if it doesn't have pandemic potential), but executing in a real lab takes a ton of trial and error to figure out the details. The issue is that the details ~all exist somewhere in the training dataset already, discovered and documented over the course of unrelated, benign biology research. Ability to quickly and accurately search over that corpus translates to large speedups in the physical development process.
Both. There's the risk of them instructing a user on how to produce a known formulation (the Anarchist Cookbook solution, as you say), which is irritating but not that problematic.<p>The bigger issue is that they are potentially capable of producing novel formulations capable of producing harm, and guiding someone through this process. That is, consider a world in which someone with malicious desires has access to a model as capable at chemistry / biology as Mythos is at offensive cybersecurity abilities.<p>This is obviously limited by the fact that the models don't operate in the physical world, but there's plenty of written material out there.
It’s marketing. Fear is one of the most effective marketing tools. That, and it serves the purpose of attracting government attention.
LLMs can tell you exactly how to acquire the materials and manufacture the materials. They might even come up with novel formulations that rely on substances that are easier to get. There might be information about this stuff online but LLMs are much better than random idiots at adapting that information to their actual situation.<p>On top of LLMs reducing the cost/difficulty, the other reason biological and chemical weapons are such a worry is their asymmetric character — they are much much easier and cheaper to produce and deploy than they are to defend against.
In the same way that all coding docs are available publicly
WAG but I wonder if a hijacked LLM could also assist with figuring out how to obtain required materials, not just provide the recipe.
PDF, because it isn't marked.
Model Welfare?
Are they serious about this? Or is it just more hype?
I really don't trust anything this company says anymore.
"We have a model that is too dangerous to release" is like me saying that I have a billion dollars in gold that nobody is allowed to see but I expect to be able to borrow against it.
<p><pre><code>$ pbpaste | wc -w
62508
$ pbpaste | grep -oi mythos | wc -l
331
$ pbpaste | grep -oi opus | wc -l
809</code></pre>
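For anyone not on macOS (pbpaste is macOS-only), a minimal Python sketch of the same case-insensitive occurrence count; the sample string here is just a toy stand-in, not the actual card text:<p><pre><code>import re

def term_counts(text, terms):
    # Case-insensitive occurrence counts, mirroring `grep -oi TERM | wc -l`
    lower = text.lower()
    return {t: len(re.findall(re.escape(t.lower()), lower)) for t in terms}

# Toy stand-in; paste the real document in to reproduce the counts above.
sample = "Mythos beats Opus 4.6; Mythos, mythos everywhere."
print(term_counts(sample, ["mythos", "opus"]))  # {'mythos': 3, 'opus': 1}</code></pre>
Like grep -o, this counts every match including substrings, so e.g. "Opus 4.6" and "Opus 4.7" both contribute to the "opus" total.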
This card is a 272-page report. So now we are redefining names :)
I'm actually surprised at how it performed compared to 4.6 and also compared to Mythos. Will be fun to use.
Ironically, the website is down
Haiku not getting an update is becoming telling. I suspect we are reaching a point where the low end models are cannibalizing high end and that isn't going to stop. How will these companies make money in a few years when even the smallest models are amazing?
Isn't it pretty common for the smaller models to release a little while after the bigger ones, for all the big model providers?
It seems to be a rule that older models are more expensive than newer ones. The low-end models have a higher cost per token and worse output. I wonder if the move is to just have one model and quantize it if you hit compute constraints
> It seems to be a rule that older models are more expensive than newer ones.<p>It isn't. Gemini has gotten more expensive with each release. Anthropic has stayed pretty similar over time, no? When is the last time OpenAI dropped API prices? OpenAI started very high because they were the first, so there was a ton of low hanging fruit and there was much room to drop.
The Gemma models are at this point. A 31B model that can fit on a consumer card is as good as Sonnet 4.5. I haven't put it through as much on the coding front or tool calling as I have the Claude or GPT models, but for text processing it is on par with the frontier models.
232 pages is bullshit. <i>Longer</i> than the Mythos system card? What are you hiding?
How much do you want to bet this is Mythos, and Anthropic released it as Opus to avoid embarrassment after all the hype they whipped up…