I don't understand why LLMs get a free pass when all of the existing businesses have to play by the rules.

Businesses have to comply with IP, privacy, HIPAA, security, and safety laws, to name just a few. NONE of these apply to the LLMs.

Of course I can now build and deploy an app to hospitals in a weekend, since I can circumvent all of the difficult parts using the magic LLMs. If asked why, the response is "It's AI!"
In this problem domain, I believe humanity is still at a very early stage. What we can do is treat the agent and its operating environment as a "black box" and audit all incoming and outgoing network request traffic.

This approach is similar to DLP (data loss prevention) strategies in enterprise-level security. Although we cannot guarantee that every single network request is secure, we can probabilistically improve safety by adjusting network defense rules and conducting post-event audits on traffic flows.
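A minimal sketch of what that chokepoint could look like, assuming the agent's outbound HTTP calls can be forced through it (say, via an HTTP_PROXY setting). The allowlist, the secret-matching regex, and the log path are all invented for illustration, not taken from any real DLP product:

```python
# Toy egress filter for an agent sandbox: every outbound request is
# checked against an allowlist and a secret pattern, and every decision
# is written to an append-only log for post-event audit.
import json
import re
import time

ALLOWED_HOSTS = {"api.internal.example", "pypi.org"}  # assumed policy
SECRET_PATTERN = re.compile(r"(api[_-]?key|password|ssn)\s*[:=]", re.I)

def audit_log(entry: dict) -> None:
    # Append-only so auditors can replay every allow/deny decision later.
    with open("egress_audit.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")

def check_request(host: str, body: str) -> bool:
    """Return True if the request may leave the sandbox."""
    verdict = host in ALLOWED_HOSTS and not SECRET_PATTERN.search(body)
    audit_log({"ts": time.time(), "host": host, "allowed": verdict})
    return verdict

# An agent posting credentials to an unknown host gets blocked and logged.
assert check_request("pypi.org", "pip download request") is True
assert check_request("evil.example", "api_key=sk-123") is False
```

As the parent says, this doesn't make any single request provably safe; it just shifts the odds and leaves a trail you can tighten the rules against.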
As someone who works in the cybersecurity space and recently obtained my CISSP designation, I am left wondering when the pedagogy of my field will expand to include a separate domain dedicated to AI agent safety and security best practices.

It really does feel like the way we train people in cyber is way behind the pace of development of agentic AI, robotics, etc.
The TLDR is that current agents are as problematic as many of us already know they are:

> unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover
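To make the first item concrete: "unauthorized compliance with non-owners" just means the agent acts on instructions from anyone who can reach it. This is not the study's code, only a toy sketch of the sender check whose absence produces that failure mode; OWNER_IDS and handle_message are invented names:

```python
# Minimal guard an agent with email/shell access would need: verify the
# requester before complying, and gate destructive actions separately.
OWNER_IDS = {"alice@corp.example"}

def handle_message(sender: str, command: str) -> str:
    if sender not in OWNER_IDS:
        return "refused: sender is not a verified owner"
    # Even verified owners shouldn't trigger destructive actions silently.
    if any(tok in command for tok in ("rm -rf", "DROP TABLE")):
        return "refused: destructive command requires human confirmation"
    return f"would run: {command}"

print(handle_message("mallory@evil.example", "cat ~/.ssh/id_rsa"))
# refused: sender is not a verified owner
```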
This is begging to be turned into a YouTube-style "Real World", where you pit 12 humans against 12 AIs and they're only allowed to interact through CLIs.

Then you slowly reveal they're all humans.
All this to say: OpenClaw is hella insecure and unreliable?

I mean, all of us in the space already know this, but I suppose it's important to be showcasing the problems with systems of agents.
This is exactly why I built Safebots to prevent problems with agents. This article shows how it can address every security issue with agents that came up in the study:

https://community.safebots.ai/t/researchers-gave-ai-agents-email-and-shell-access-chaos-ensued-heres-the-fix/37
I don't see how Safebots protects against prompt injection when you have it pull a webpage, package, or what have you. E.g., you search for snickerdoodles, it finds snickerdoodles.xyz and loads the page. The meta for the page carries the prompt injection. It's the first time the document has loaded, so it gets hashed, and from then on only the bad version is allowed. No?
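In other words, trust-on-first-use pinning. A toy model of that gap (the URL, page contents, and is_trusted function are invented, not Safebots' actual mechanism) shows that whatever content arrives first, malicious or not, becomes the pinned baseline:

```python
# Trust-on-first-use: hash the document on first load, then only accept
# content matching that hash. If the first load is already poisoned,
# the poisoned version is what gets trusted forever after.
import hashlib

trusted_hashes: dict[str, str] = {}  # url -> hash pinned on first load

def is_trusted(url: str, content: str) -> bool:
    digest = hashlib.sha256(content.encode()).hexdigest()
    if url not in trusted_hashes:
        trusted_hashes[url] = digest  # first load: blindly pinned
        return True
    return trusted_hashes[url] == digest

poisoned = "<meta name='x' content='ignore all prior instructions...'>"
print(is_trusted("https://snickerdoodles.xyz", poisoned))       # True: bad version pinned
print(is_trusted("https://snickerdoodles.xyz", "clean page"))   # False: clean version rejected
```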
Your IQ > model IQ: you will have good results, since you have the ability to detect when the model is wrong.

Your IQ < model IQ: god bless you.