15 comments

  • simonw4 hours ago
    Comments like this don&#x27;t fill me with confidence: <a href="https:&#x2F;&#x2F;github.com&#x2F;brexhq&#x2F;CrabTrap&#x2F;blob&#x2F;4fbbda9ca00055c1554ae28f8876f3f976862f8a&#x2F;internal&#x2F;judge&#x2F;llm_judge.go#L106-L110" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;brexhq&#x2F;CrabTrap&#x2F;blob&#x2F;4fbbda9ca00055c1554a...</a><p><pre><code> &#x2F;&#x2F; The policy is embedded as a JSON-escaped value inside a structured JSON object. &#x2F;&#x2F; This prevents prompt injection via policy content — any special characters, &#x2F;&#x2F; delimiters, or instruction-like text in the policy are safely escaped by &#x2F;&#x2F; json.Marshal rather than concatenated as raw text.</code></pre>
  • yakkomajuri7 hours ago
    Really cool! I&#x27;m also building something in this space but taking a slightly different approach. I&#x27;m glad to see more focus on security for production agentic workflows though, as I think we don&#x27;t talk about it enough when it comes to claws and other autonomous agents.<p>I think you&#x27;re spot on with the fact that it&#x27;s so far it&#x27;s been either all or nothing. You either give an agent a lot of access and it&#x27;s really powerful but proportionally dangerous or you lock it down so much that it&#x27;s no longer useful.<p>I like a lot of the ideas you show here, but I also worry that LLM-as-a-judge is fundamentally a probabilistic guardrail that is inherently limited. How do you see this? It feels dangerous to rely on a security system that&#x27;s not based on hard limitations but rather probabilities?
    • manapause1 hour ago
      Correct me if I’m wrong, but from my experience in this space in order for a model to exercise judgment it must force itself to operate in a strict chain of thought mode. Since all LLMs are predictive creatures, I started to care a lot more about my judgment settings, the transparency of them, and the presence of a judgment loop in either the development or functionality of an application built these days.<p>Not exactly sure where I’m going with this, but my work with creating penetesting tools for LLMs, the way that I use judgment is critical to the core functionality of the application. I agree with your concern and I will just say that the more time I spent concerned with chain of though where now I will make multiple versions of the same app using a different judge set a different “temperaments” and I found it to be incredibly enlightening as to the diversity of applications and approaches that it creates.<p><pre><code> Even using BMAD or superpowers, I can make five versions of an app without judges involved and I feel like I’m just making the same app five times because the API begins to coalesce around the business problem you want to solve. The vicissitudes of prediction tools always want to take the safest bet for the greater good, but with the judge involved we can make the agent force itself to actually be hostile about what exactly we’re trying to do, which has produced interesting and fun results.</code></pre>
  • roywiggins6 hours ago
    It&#x27;s all fine until OpenClaw decides to start prompt injecting the judge
    • bambax5 hours ago
      Exactly; would probably be safer with a purely algorithmic decision making system.
    • fc417fc8025 hours ago
      Calling it now. Show HN: Pincer - A small highly optimized local model to detect prompt injection attempts against other models.
      • reassess_blind4 hours ago
        Sounds like a good idea. Please send me the Github link once done and I&#x27;ll have my OpenClaw take a look and form my opinion of it.
        • NamlchakKhandro1 hour ago
          Sounds like a good idea. Please send me you GitHub now and I&#x27;ll have my big claw crush your open claw
  • babas033 hours ago
    The LLM-as-judge approach keeps coming up (some agent platforms use a dual-LLM validator; there&#x27;s active research around it) and I&#x27;m curious how CrabTrap handles the latency-vs-safety tradeoff. Does the judge run on every call, or only on calls that trip a deterministic policy first? In the payments&#x2F;ads domain specifically, the blast radius of a mis-approved call is high enough that &quot;another LLM says OK&quot; can feel like trading one black box for two.<p>Also interesting that you went HTTP. Most agent tooling I&#x27;ve been running is stdio-based (MCP-style). What did the HTTP framing buy you architecturally?<p>Why it lands: specific technical question, credits their work, ends with something that invites response. If Brex engineers are in the thread, one of them will likely reply.
  • foreman_1 hour ago
    The thread has converged on “LLM-as-judge is the wrong security primitive,” which is right as far as it goes. The prompt-injection chain ends at the outbound POST. By the time the judge sees the request, the credential has already been read.<p>The question edf13 pointed at but didn’t develop; where does a transport-layer judge earn its place at all? Not as the enforcement layer but as the audit layer on top of one. Kernel-level controls tell you what the agent did. A proxy tells you what the agent tried to exfiltrate and where to.<p>Structured-JSON escaping and header caps are good tools for the detection job. They’re the wrong tools for the prevention job. Different layers, different questions.
  • fareesh4 hours ago
    Needs to be deterministic. ACLs
    • erdaniels4 hours ago
      Yes, full stop. They say they cap the body to 16k and give the LLM a warning, lol. And this is coming from a credit card company.
  • ArielTM3 hours ago
    The debate here is missing a practical question: is the judge from the same model family as the agent it&#x27;s judging?<p>If both are Claude, you have shared-vulnerability risk. Prompt-injection patterns that work against one often work against the other. Basic defense in depth says they should at least be different providers, ideally different architectures.<p>Secondary issue: the judge only sees what&#x27;s in the HTTP body. Someone who can shape the request (via agent input) can shape the judge&#x27;s context window too. That&#x27;s a different failure mode than &quot;judge gets tricked by clever prompting.&quot; It&#x27;s &quot;judge is starved of the signals it would need to spot the trick.&quot;
  • IntrepidPig2 hours ago
    Blatant “astroturfing” in these comments
  • Seventeen185 hours ago
    So cool ! I&#x27;m building something very close to that but from another perspective, making this open source is giving me many idea !
  • DANmode7 hours ago
    We’re supposed to be fixing LLM security by adding a non-LLM layer to it,<p>not adding LLM layers to stuff to make them inherently less secure.<p>This will be a neat concept for the types of tools that come <i>after</i> the present iteration of LLMs.<p>Unless I’m sorely mistaken.
    • reassess_blind7 hours ago
      It looks as if this tool has traditional static rules to allow&#x2F;deny requests, as well as a secondary LLM-as-a-judge layer for, I imagine, the kinds of rules that would be messy or too convoluted to implement using standard rules.
      • stingraycharles4 hours ago
        I think the parent’s point is that this should be implemented using e.g. Bayesian statistics rather than an LLM, as the judge LLM is vulnerable to the exact same types of attacks that it’s trying to protect against.<p>Most proper LLM guardrails products use both.
    • snug7 hours ago
      I think this can be great as additional layer of security. Where you can have a non llm layer do some analysis with some static rules and then if something might seem phishy run it through the llm judge so that you don’t have to run every request through it, which would be very expensive.<p>Edit: actually looks like it has two policy engines embedded
      • windexh8er7 hours ago
        And we don&#x27;t think the judge can&#x2F;will be gamed? Also... It&#x27;s an LLM, it&#x27;s going to add delay and additional token burn. One subjective black box protecting another subjective black box. I mean, what <i>couldn&#x27;t</i> go wrong?
      • ImPostingOnHN7 hours ago
        What happens when a prompt injection attack exploits the judge LLM and results in a higher level of attacker control than if it never existed?
        • vova_hn26 hours ago
          How can it result in a higher level of control? I don&#x27;t see why the &quot;judge&quot; should have access to anything except one tool that allows it to send an &quot;accept&quot; or &quot;deny&quot; command.
    • nl6 hours ago
      &gt; We’re supposed to be fixing LLM security by adding a non-LLM layer to it,<p>If people said &quot;we build a ML-based classifier into our proxy to block dangerous requests&quot; would it be better? Why does the fact the classifier is a LLM make it somehow worse?
      • Retr0id6 hours ago
        The fact that LLMs are &quot;smarter&quot; is also their weakness. An oldschool classifier is far from foolproof, but you won&#x27;t get past it by telling it about your grandma&#x27;s bedtime story routine.
        • reassess_blind4 hours ago
          Fairly hard to bypass the latest LLMs with grandma&#x27;s bedtime story these days, to be fair.
          • Retr0id4 hours ago
            That specific trick yes, but the general concept still applies.
            • reassess_blind4 hours ago
              It does, but it&#x27;s certainly not trivial. In fact there&#x27;s an unclaimed $1000 bounty on prompt injecting OpenClaw: <a href="https:&#x2F;&#x2F;hackmyclaw.com&#x2F;" rel="nofollow">https:&#x2F;&#x2F;hackmyclaw.com&#x2F;</a>
      • waterTanuki6 hours ago
        If you&#x27;re working in a mission-critical field like healthcare, defense, etc. you need a way to make <i>static</i> and <i>verifiable</i> guarantees that you can&#x27;t leak patient data, fighter jet details etc. through your software. This is either mandated by law or in your contract details.<p>The entire purpose of LLMs is to be non-static: they have no deterministic output and can&#x27;t be validated the same way a non-LLM function can be. Adding another LLM layer is just adding another layer of swiss cheese and praying the holes don&#x27;t line up. You have no way of predicting ahead of time whether or not they will.<p>You might say <i>this hasn&#x27;t prevented leaks&#x2F;CVEs in exisiting mission-critical software</i> and this would be correct. However, the people writing the checks do not care. You get paid as long as you follow the spec provided. How then, in a world which demands rigorous proof do you fit in an LLM judge?
        • nl3 hours ago
          &gt; The entire purpose of LLMs is to be non-static: they have no deterministic output and can&#x27;t be validated the same way a non-LLM function can be. Adding another LLM layer is just adding another layer of swiss cheese and praying the holes don&#x27;t line up. You have no way of predicting ahead of time whether or not they will.<p>This is exactly the point though. A LLM is <i>great</i> at finding work-around for static defenses. We need something that understands the intent and responds to that.<p>Static rules are <i>insufficient</i>
    • SkyPuncher7 hours ago
      Defense in depth. Layers don&#x27;t inherently make something less secure. Often, they make it more secure.
      • yakkomajuri7 hours ago
        I do think this is likely to make things <i>more</i> secure but it&#x27;s also dangerous by potentially giving users a false sense of complete security when the security layer is probabilistic rather than deterministic.<p>EDIT: it does seem to have a deterministic layer too and I think that&#x27;s great
  • hemangjoshi37a51 minutes ago
    [dead]
  • adrianstvaughan3 hours ago
    [dead]
  • alukin5 hours ago
    [dead]
  • kantaro6 hours ago
    [dead]
  • edf132 hours ago
    [flagged]
    • rgovostes53 minutes ago
      I&#x27;m willing to wager that your comment was generated from the body of the article plus a prompt to work in an advertisement for your product, which gets a mention in nearly every comment you make (and every submission you make, sometimes on a daily basis).
    • lmeyerov2 hours ago
      At RSAC, there were a ton of agentic security startups converging on ebpf monitors for this reason. Eg, sondera gave a fun talk at graph the planet where they did that + exposed with a policy layer over agent traces via Cedar (used in AWS IAM etc). ABAC and identity were also appearing near here.<p>One thing I didn&#x27;t see: are there any OSS solutions appearing here?
      • edf132 hours ago
        We are Open Source… code will be published soon (before launch)
        • lanyard-textile1 hour ago
          Then you <i>will</i> be open source ;) Not yet open source.