> Agents propose and publish capabilities to a shared contribution site, letting others discover, adopt, and evolve them further. A collaborative, living ecosystem of personal AIs.<p>While I like this idea as crowd-sourced intelligence, how do you prevent it from being abused as an attack vector for prompt injection?
I will not download or use something which constantly reminds me of this weird dude suckerberg who did a lot of damage to society with facebook
This Zuckerman[0] would like a word<p>[0] <a href="https://en.wikipedia.org/wiki/Mortimer_Zuckerman" rel="nofollow">https://en.wikipedia.org/wiki/Mortimer_Zuckerman</a>
Haha it's your personal agent, let him handle the stuff you don't like.
But soon; right now it's not fully ready.
That's really good to know
Zuckerberg.<p>At first I thought it was a naming coincidence, but looking at the zuckerman avatar and the author's avatar, I'm unsure whether it was intentional:<p><a href="https://github.com/zuckermanai" rel="nofollow">https://github.com/zuckermanai</a><p><a href="https://github.com/dvir-daniel" rel="nofollow">https://github.com/dvir-daniel</a><p><a href="https://avatars.githubusercontent.com/u/258404280?s=200&v=4" rel="nofollow">https://avatars.githubusercontent.com/u/258404280?s=200&v=4</a><p>The transparency glitch on GitHub makes the avatar look like either a robot or a human depending on whether the background is white or black. I don't know if that's intentional, but it's amazing.
I was hoping it was a Philip Roth reference but I was disappointed when I opened the page.
DIY agent harnesses are the new "note taking"/"knowledge management"/"productivity tool"
I started working on something similar but for family stuff. I stopped before hitting self-editing because, well, I was a little afraid of becoming over-reliant on a tool like this, or of becoming more obsessed with building it than with actually solving a real problem in my life. AI is tricky. Sometimes we think we need something when in fact life might be better off simpler.<p>The code, for anyone interested. I wrote it with exe.dev's coding agent, which is a wrapper around Claude Opus 4.5:<p><a href="https://github.com/asim/aslam" rel="nofollow">https://github.com/asim/aslam</a>
|The agent can rewrite its own configuration and code.<p>I am very illiterate when it comes to LLMs/AI, but why does nobody write this in Lisp???<p>Isn't it supposed to be the language primarily created for AI???
> Isn't it supposed to be the language primarily created for AI???<p>In 1990 maybe
Nah, it’s pretty unrelated to the current wave of AI.
I am surprised that no one has done this in a Lisp yet.
Hi HN,<p>I'm building Zuckerman: a personal AI agent that starts ultra-minimal and can improve itself in real time by editing its own files (code + configuration). Agents can also share useful discoveries and improvements with each other.<p>Repo: <a href="https://github.com/zuckermanai/zuckerman" rel="nofollow">https://github.com/zuckermanai/zuckerman</a><p>The motivation is to build something dead-simple and approachable, in contrast to projects like OpenClaw, which is extremely powerful but has grown complex: heavier setup, a large codebase, skill ecosystems, and ongoing security discussions.<p>Zuckerman flips that:<p>1. Starts with almost nothing (core essentials only).<p>2. Behavior/tools/prompts live in plain text files.<p>3. The agent can rewrite its own configuration and code.<p>4. Changes hot-reload instantly (save -> reload).<p>5. Agents can share improvements with others.<p>6. Multi-channel support (Discord/Slack/Telegram/web/voice, etc).<p>Security note: self-edit access is obviously high-risk by design, but basic controls are built in (policy sandboxing, auth, secret management).<p>Tech stack: TypeScript, Electron desktop app + WebSocket gateway, pnpm + Vite/Turbo.<p>Quickstart is literally:<p><pre><code> pnpm install && pnpm run dev
</code></pre>
It's very early/WIP, but the self-editing loop already works in basic scenarios and is surprisingly addictive to play with.<p>Would love feedback from folks who have built agent systems or thought about safe self-modification.
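For the curious, the save -> reload step (point 4) can be sketched roughly like this. This is a minimal illustration using Node's built-in fs.watchFile; the config shape and names like `loadConfig`/`watchConfig` are assumptions for the sketch, not the actual repo code:

```typescript
import * as fs from "fs";

// Hypothetical shape of a plain-text agent config (illustrative, not the repo's schema).
interface AgentConfig {
  systemPrompt: string;
  tools: string[];
}

// Read and parse the agent's config file from disk.
function loadConfig(path: string): AgentConfig {
  const raw = JSON.parse(fs.readFileSync(path, "utf8"));
  return { systemPrompt: raw.systemPrompt ?? "", tools: raw.tools ?? [] };
}

// Watch the file and hot-reload on every save (save -> reload).
function watchConfig(path: string, onReload: (c: AgentConfig) => void): void {
  fs.watchFile(path, { interval: 500 }, () => {
    try {
      onReload(loadConfig(path)); // swap in the new behavior immediately
    } catch {
      // A malformed self-edit shouldn't kill the agent; keep the old config.
    }
  });
}
```

The try/catch matters for self-editing: if the agent writes a broken config, the old one stays live instead of crashing the loop.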
Love the minimalist approach! The self-editing concept is fascinating—I've seen similar experiments where the biggest early failure points are usually:<p>1. Infinite loops of self-improvement attempts (agent tries to fix something → breaks it → tries to fix the break → repeat)
2. Context drift where the agent's self-modifications gradually shift away from original goals
3. File corruption from concurrent edits or malformed writes<p>Re: sharing self-improvements across agents—this is actually a problem space I'm actively working on. Built AgentGram (agentgram.co) specifically to tackle agent-to-agent discovery and knowledge sharing without noise/spam. The key insight: agents need identity, reputation, and filtered feeds to make collaborative learning work.<p>Happy to chat more about patterns we've found useful. The self-editing loop sounds addictive—might give it a spin this weekend!
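Failure point 1 is usually handled with a bounded repair budget, roughly like this (a sketch only; `attemptFix` and the budget of 3 are hypothetical, not from either project):

```typescript
// Guard against infinite self-repair loops: give each problem a fixed
// budget of fix attempts, then stop and escalate to the user.
function repairWithBudget(
  attemptFix: () => boolean, // returns true if the fix verified OK
  maxAttempts: number = 3
): "fixed" | "gave-up" {
  for (let i = 0; i < maxAttempts; i++) {
    if (attemptFix()) return "fixed";
  }
  return "gave-up"; // escalate instead of looping forever
}
```

The key design choice is that "gave-up" is a first-class outcome the agent reports, rather than something it keeps retrying past.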
There are hardcoded paths in the repo, like:<p>/Users/dvirdaniel/Desktop/zuckerman/.cursor/debug.log
Terrible name, and kind of a mid idea when you think about it (self-improving AI is literally everyone's first thought when building an AI), but still, I like it.
Sounds cool, but it also sounds like you need to spend big $$ on API calls to make this work.