So this is essentially monkey-patching every variation of fetch/library fetch and doing math on the reported token counts?<p>It's an... intrusive solution. Glad to hear it works for you though.
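For the curious, the trick presumably looks something like this (my own minimal sketch, not the project's actual code; the prices and budget are made up):
<pre><code>// Minimal sketch of fetch monkey-patching (my reconstruction, not AgentGuard's code)
let spent = 0;
const LIMIT_USD = 5;                       // made-up budget
const PRICE_PER_1K = { 'gpt-4o': 0.005 };  // illustrative rate, not real pricing
const realFetch = globalThis.fetch;

globalThis.fetch = async (url, opts) => {
  const res = await realFetch(url, opts);
  if (String(url).includes('api.openai.com')) {
    // clone so the caller can still read the body; breaks on streaming responses
    const data = await res.clone().json();
    const tokens = data.usage?.total_tokens ?? 0;
    spent += (tokens / 1000) * (PRICE_PER_1K[data.model] ?? 0);
    if (spent > LIMIT_USD) throw new Error('Budget exceeded');
  }
  return res;
};
</code></pre>
Which also shows why it's fragile: anything streaming, or going through an HTTP client it doesn't patch, slips right past it.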
For this I use LiteLLM proxy - you can create virtual keys with a daily/weekly/... budget, it's pretty flexible and has a nice UI.<p>See <a href="https://docs.litellm.ai/docs/proxy/users">https://docs.litellm.ai/docs/proxy/users</a>
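Creating a budget-capped key is roughly this (a sketch from my reading of those docs - the /key/generate endpoint and field names are as I recall them, so check the link before relying on it):
<pre><code>// Ask the LiteLLM proxy for a virtual key capped at $10/week (sketch; verify field names)
const res = await fetch('http://localhost:4000/key/generate', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer sk-master-key',  // placeholder proxy master key
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    max_budget: 10,           // USD cap for this key
    budget_duration: '7d',    // budget resets weekly
  }),
});
const { key } = await res.json();  // give this key to the agent instead of a real API key
</code></pre>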
Wouldn't the obvious solution to this problem be to stop using agents that don't respect your usage limits, instead of trying to build sketchy containers around misbehaving software?
Yeah, I don't understand this problem - who uses so many agents at the same time in the first place?<p>And if this is really a problem, why not funnel your AI agents through a proxy server, which they all support, instead of this hacky approach? It would be super easy to build a proxy server that keeps track of costs per day/session and just returns errors once you hit a limit.
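A hypothetical Node sketch of such a proxy, with a made-up flat rate and limit:
<pre><code>// Toy budget-enforcing proxy (illustrative only): forward to the provider,
// tally reported usage, and return errors once the daily limit is hit.
import http from 'node:http';

const DAILY_LIMIT_USD = 20;  // made-up limit
let spentToday = 0;          // a real version would reset this daily

http.createServer(async (req, res) => {
  if (spentToday >= DAILY_LIMIT_USD) {
    res.writeHead(429, { 'content-type': 'application/json' });
    res.end(JSON.stringify({ error: 'daily budget exhausted' }));
    return;
  }
  const chunks = [];
  for await (const chunk of req) chunks.push(chunk);
  const upstream = await fetch('https://api.openai.com' + req.url, {
    method: req.method,
    headers: {
      authorization: req.headers.authorization ?? '',
      'content-type': 'application/json',
    },
    body: chunks.length ? Buffer.concat(chunks) : undefined,
  });
  const body = await upstream.text();
  try {
    const { usage } = JSON.parse(body);
    if (usage) spentToday += (usage.total_tokens / 1000) * 0.005;  // flat made-up rate
  } catch { /* non-JSON (e.g. streaming) - ignored in this sketch */ }
  res.writeHead(upstream.status, { 'content-type': 'application/json' });
  res.end(body);
}).listen(8080);
</code></pre>
Point the agents' base URL at localhost:8080 and they all get the same hard cap, no monkey-patching required.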
<i>"Commit 2ef776f
dipampaul17
committed
Jul 31, 2025
·
Update READMEs: honest, clear, aesthetic
- Removed pretentious language and marketing speak
- Added real developer experience based on actual testing
- Clear, direct explanations of what it actually does
- Aesthetic improvements with better formatting
- Accurate feature descriptions based on verified functionality
- Honest about capabilities without overselling
- Reflects the 30-second integration we tested<p>The README now matches what developers actually experience:
two lines of code, automatic tracking, no code changes needed."</i><p>Hey OP - next time perhaps at least write the commit messages yourself?
Honestly, this feels very vibe-coded [1] [2], and I would not really trust my money with something like this.
I had to read the code to understand what it actually protects me from, as the README.md (other than telling me it's production-ready, professional, and protects me from so much!) tells me "Supports all major providers: OpenAI, Anthropic, auto-detected from URLs".
OpenAI and Anthropic are "all" major providers [3]?<p>[1] <a href="https://github.com/dipampaul17/AgentGuard/blob/51395c36809aabc62d612b63250253d19e7f10f3/FINAL_AUDIT.md">https://github.com/dipampaul17/AgentGuard/blob/51395c36809aa...</a><p>[2] <a href="https://github.com/dipampaul17/AgentGuard/commit/d49b361d7f345ddba538deebcda9e23a2cdaeaa8">https://github.com/dipampaul17/AgentGuard/commit/d49b361d7f3...</a><p>[3] <a href="https://github.com/dipampaul17/AgentGuard/blob/083ae9896459b7bc709bcb5767e5d7b4b9791e12/agent-guard.js#L710">https://github.com/dipampaul17/AgentGuard/blob/083ae9896459b...</a>
> [1] <a href="https://github.com/dipampaul17/AgentGuard/commit/d49b361d7f345ddba538deebcda9e23a2cdaeaa8">https://github.com/dipampaul17/AgentGuard/commit/d49b361d7f3...</a><p>It's kind of crazy that people use these multi-billion parameter machine learning models to do search/replace of words in text files, rather than the search/replace in their code editor. I wonder what the efficiency difference is - it must be 1000x or even 10000x?<p>Don't get me wrong, I use LLMs too, but mostly for things I wouldn't be able to do myself (like isolated math-heavy functions I can't be bothered to understand the internals of), not for trivial things like changing "test" to "step" across five files.<p>I love that the commit ends with<p>> Codebase is now enterprise-ready with professional language throughout<p>As if "enterprise-ready" were about error messages and using "Examples" instead of "Demo".
This is the future of the field: amateur night that never ends. And this is why I'm looking for a new career at my advanced age.
If you have any tips, feel free to enlighten me. Even though I'm "only" in my 30s, I feel the future is uncertain - the original author of this post made one other post that was also clearly vibe-coded, but not many comments seem to point it out. It'll only get worse from here - depending on how you look at it, of course: hackers will have a WAY easier time as time goes on.
Still figuring it out, but if I do, I promise I'll circle back here and let you know. So far the best idea I have is to stay in software and ideally my current job for the moment, but phone it in, while I retrain as something else at night.
I use LLMs, too, and that did give me a chuckle.
It's incredible how emoji overuse has become a giant red flag for AI over-/abuse.
The AI's idea of developing a startup is eerily reminiscent of a hacking scene in CSI.
> Close first customers at $99/month<p>> The foundation is bulletproof. Time to execute the 24-hour revenue sprint.<p>Comedy gold. This is one of those times where I can't figure out if the author is in on the joke, or if they're actually so deluded that they think this doesn't make them look idiotic. If it's the latter, we need to bring bullying back.<p>Either way it's hilarious.
Well, the author has many, many repositories like this.<p>It seems like he's still stuck in the "If I just tell my AI that I want a production-ready package that people will pay me $99/month for, I'll get it eventually, right?" phase of discovering LLMs.<p>The end result is many commits saying "fixed all issues, enterprise-ready now as requested!" that add 500 lines of code and cause more issues.<p>The funniest part, to me, is that this only damages his image instead of solidifying it. We've had so many applicants at my company recently where we go to their GitHub and they have 10 repositories, all obviously vibe-coded together, acting like they made some amazing stuff. Instant deletion of the application, no coming back from that - this person would NOT get a job here.
So it monkey-patches a set of common HTTP libraries and then detects calls to AI APIs? It's not obvious which APIs it would detect or in which situations it would miss them. Seems kind of dangerous to rely on something like that: you install it and it might be doing nothing, and you only find out after something's gone wrong.<p>If I were using something like this, I think I'd rather have it wrap the AI API clients. Then it can throw an error if it doesn't recognise the client library I'm using. As it stands, it'll just silently fail to monitor anything if what I'm using isn't in its supported list (whatever that is!).<p>I do think the idea is good, though; it just needs to be obvious how it will work when used and how/when it will fail.
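Something like this is what I mean - fail loudly up front instead of silently not monitoring (a rough sketch; the guard function and the flat rate are mine, not AgentGuard's):
<pre><code>import OpenAI from 'openai';  // assuming the official openai npm client

function guardClient(client, { limitUsd }) {
  if (!(client instanceof OpenAI)) {
    // refuse to run rather than pretend to track an unknown client
    throw new Error('Unsupported client library: cost tracking not guaranteed');
  }
  let spent = 0;
  const realCreate = client.chat.completions.create.bind(client.chat.completions);
  client.chat.completions.create = async (...args) => {
    if (spent >= limitUsd) throw new Error('Budget exceeded');
    const res = await realCreate(...args);
    spent += ((res.usage?.total_tokens ?? 0) / 1000) * 0.005;  // illustrative flat rate
    return res;
  };
  return client;
}
</code></pre>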
I really wonder how much $$$ was burned while testing this against production.
Expand it to "enterprise security solutions for agent deployments" and you will get VC funding for it. For any new technology, the playbook is to create startups that do "compliance", "governance", "security", and "observability" around that technology, so that the big security companies can acquire said startup and add it as a feature to their existing products.<p>While you are at it, use the term "guardrails", as that is quite fashionable.
So that's how AGI will escape containment.