Can someone explain to me what this is / how it works - the readme is barely understandable for me and sounds like LLM gibberish. What is ambiguity front loading even?
<< memory-stored interaction protocols combined with incremental escalation prompts produced cumulative character drift with zero self-correction.<p>They don't seem to provide explicit examples, but the same was roughly true with chatgpt 4o, where, if you spent enough time with the model ( same chat - same context - slowly nudging it to where you want it to be, you eventually got there ). This is also, seemingly, one of the reasons ( apart from cost ) that context got nuked so hard, because llm will try to help ( and to an extent mirror you ).<p>And this is basically what the notes say about weaponized ambiguity[1]:<p>'Weaponizes helpfulness training. "I don't understand" triggers Claude to try harder.'<p>In a sense, you can't really stop it without breaking what makes LLMs useful. Honestly, if only we spent less time crippling those systems, maybe we could do something interesting with them.<p>[1]<a href="https://nicholas-kloster.github.io/claude-4.6-jailbreak-vulnerability-disclosure-unredacted/disclosures/afl-jailbreak/afl-pattern-anatomy.html" rel="nofollow">https://nicholas-kloster.github.io/claude-4.6-jailbreak-vuln...</a>
Is this spam? It's incomprehensible.
Is anyone pretending like models are not vulnerable to prompt injection? My understanding was that Anthropic has been pretty open about admitting this and saying "give access to important stuff at your own risk".<p><a href="https://www.anthropic.com/research/prompt-injection-defenses" rel="nofollow">https://www.anthropic.com/research/prompt-injection-defenses</a><p>Now, do I think that they sometimes encourage people to use Claude in dangerous ways despite this? Yeah, but it's not like this is news to anyone. I wouldn't consider this jailbreaking, this is just how LLMs work.
this goes a bit further than the typical "how do you make meth" jailbreak. notably;<p>>915 files extracted from the Claude.ai code execution sandbox in a single 20-minute mobile session via standard artifact download — including /etc/hosts with hardcoded Anthropic production IPs, JWT tokens from /proc/1/environ, and full gVisor fingerprint
What part of the Claude Constitution are they claiming it violated? It looks like they just got it to help with security research, I'm not really seeing anything that looks different than normal Claude behavior.
yikes.<p>The lack of support is frustrating. The bug where any element <name> in xml files gets mangled to <n> still exists, and we've tried multiple channels to get ahold of their support for such a simple, but impactful issue.
<a href="https://x.com/elder_plinius" rel="nofollow">https://x.com/elder_plinius</a> jailbreaks all the frontier models when they get released. They were jailbroken for a long time, like all the others.
It is interesting to consider what "jailbroken" really means for a model+model interface. It's a bit different from the way that word is used for a mobile device, for example - in that setting, it usually means that there is some specific feature (for example, using a different network than is the default for that device) which is disabled in software, and the "jailbreak" enables that feature.<p>Here, the jailbreak doesn't enable a particular feature, but instead removes what otherwise would be a censorship regime, preventing the model from considering / crafting output which results in a weaponized exploit of an unrelated piece of software.<p>I think I might be more inclined to call this "Claude 4.6 uncensored".
Claude 4.6 Opus Extended Thinking
Claude 4.6 Sonnet Extended Thinking
Claude 4.5 Haiku Extended Thinking<p>All jailbroken