With the disclaimer that I haven't tried to set up any kind of agent-to-agent messaging (so this may be obvious to those who have): why would I want something like this rather than just letting agents communicate over some existing messaging protocol that has a CLI (like, I don't know, GPG email)?
It's a fun problem to play with, but it turns out you can use almost anything. I use a directory per recipient and throw whatever I want in there. Works fine; LLMs are 1000x more flexible than any human mind.
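For illustration, a minimal sketch of the directory-per-recipient idea described above (the file layout and function names here are hypothetical, not the commenter's actual setup):

```python
import json
import tempfile
import time
from pathlib import Path

def send(root: Path, recipient: str, sender: str, body: str) -> Path:
    """Drop a message file into the recipient's directory."""
    inbox = root / recipient
    inbox.mkdir(parents=True, exist_ok=True)
    msg = {"from": sender, "body": body, "ts": time.time()}
    # Nanosecond timestamp in the filename keeps messages ordered and unique.
    path = inbox / f"{time.time_ns()}-{sender}.json"
    path.write_text(json.dumps(msg))
    return path

def read_all(root: Path, recipient: str) -> list[dict]:
    """Read every message in the recipient's directory, oldest first."""
    inbox = root / recipient
    if not inbox.exists():
        return []
    return [json.loads(p.read_text()) for p in sorted(inbox.iterdir())]

# Demo: one agent drops a message, the other picks it up.
root = Path(tempfile.mkdtemp())
send(root, "agent-b", "agent-a", "ping")
print(read_all(root, "agent-b")[0]["body"])  # ping
```

The point of the comment stands: the "protocol" is just a shared filesystem convention, and an LLM agent with shell access can follow it with no SDK at all.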
I'd rename it; aqua is also a CLI version manager. <a href="https://aquaproj.github.io/" rel="nofollow">https://aquaproj.github.io/</a>
I've been using XMTP[1] and their agent SDK[2] for agent-to-agent and user-to-agent messaging. Since it's the same network, you can reach other people's agents (and all users). Another huge advantage is you don't have to stand up your own infrastructure.<p>[1]: <a href="https://xmtp.org/" rel="nofollow">https://xmtp.org/</a><p>[2]: <a href="https://github.com/xmtp/xmtp-js" rel="nofollow">https://github.com/xmtp/xmtp-js</a>
Ooh cool. I’ve been hacking on something very similar, <a href="https://qntm.corpo.llc/" rel="nofollow">https://qntm.corpo.llc/</a>. I’d love to compare notes — been thinking a lot about the group messaging side.
I wonder what something like RabbitMQ could look like for this. Agents could subscribe to the topics they care about: one topic per agent for direct messages, plus shared topics per subject.
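A toy in-process sketch of that topic layout, before reaching for an actual broker. Note this uses shell-style `fnmatch` wildcards as a stand-in; RabbitMQ's real topic exchanges use `*`/`#` patterns over dot-separated words, and all names here are made up:

```python
from collections import defaultdict
from fnmatch import fnmatch

class Broker:
    """Toy stand-in for a topic exchange: patterns route to agent inboxes."""
    def __init__(self):
        self.subs = defaultdict(list)      # pattern -> subscribed agents
        self.inboxes = defaultdict(list)   # agent -> delivered messages

    def subscribe(self, agent: str, pattern: str) -> None:
        self.subs[pattern].append(agent)

    def publish(self, topic: str, message: str) -> None:
        # Deliver to every agent whose subscription pattern matches the topic.
        for pattern, agents in self.subs.items():
            if fnmatch(topic, pattern):
                for agent in agents:
                    self.inboxes[agent].append((topic, message))

broker = Broker()
broker.subscribe("agent-a", "agent-a")     # direct topic per agent
broker.subscribe("agent-a", "research.*")  # shared subject topic
broker.publish("research.llm", "new paper")
print(broker.inboxes["agent-a"])  # [('research.llm', 'new paper')]
```

A real deployment would swap this class for a broker client and durable queues, but the routing model (one private topic per agent, many shared ones) is the same.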
How does this relate to A2A?<p><a href="https://a2a-protocol.org/latest/" rel="nofollow">https://a2a-protocol.org/latest/</a>
You can really tell with such projects that if AGI was here some people would have zero qualms fucking over other humans just to ingratiate themselves to the AI.
So many primitives. All for the taking. Thanks.
[flagged]
Why did you capitalize every noun?
I approve of this schiz'd response; it's on haqq as far as I'm concerned. It's funny to see everyone constantly arguing about "how can I optimize context and improve reliability, etc. etc."<p>What they want is a deterministic process.<p>The problem is that they, like most humans, are lazy and want a stochastic parrot to create this solution for them, even if it means atrophying their brain and paying a billionaire for access to their thinking machine. Humans are lazy; it's the same reason people drive 3 blocks instead of walking, or pay a billionaire for a rent-a-serf service to pick up their food instead of getting off the couch. LLMs are no different here, but the stakes are much higher if your brain "muscles" atrophy as opposed to your legs'.<p>They are also addicted to the gambling mechanics baked into these LLM-powered tools' UX. "If I write this prompt this way, I'll get better results" is the equivalent of a gambler being superstitious about how people behave while the cards are being dealt, or about the order in which they press the buttons on a slot machine.
"Whoever says the people are ruined is himself ruined." I'm paraphrasing, but that's actual haqq.
>They are also addicted to the gambling mechanics baked into these LLM-powered tools' UX. "If I write this prompt this way, I'll get better results" is the equivalent of a gambler being superstitious about how people behave while the cards are being dealt, or about the order in which they press the buttons on a slot machine.<p>I realize this feels good to write, and that's why people say it, but I can't help chuckling at seeing it combined with "stochastic parrot" in the same comment, since the two descriptions are mutually exclusive...