2 comments

  • csto12 3 hours ago
    Is there a new agent orchestrator posted every day? Is this the new JS framework?
    • reconnecting 1 minute ago
      The timeline is always the same.

      Day one: Develop a new agent orchestrator with 70K LOC from Claude.

      Day three: Post it on Show HN.

      Day four: Get 50–150 stars on GitHub.

      Day seven: Never open this repo again.
    • guessmyname 15 minutes ago
      Yes. Everyone and their grandma wants to build the ultimate AI panacea, so of course you'll see a myriad of AI-powered products and services on a daily basis until the tech industry as a whole is done with the topic.
    • unohee 24 minutes ago
      Kind of. My point is that agent orchestrators become actually useful when the framework is specific about what's safe to delegate to machines — things that reduce friction in CI/CD operations, not agents that shoot iMessages, click around in browsers, or delete files without approval.
    • himata4113 1 hour ago
      Everyone has different needs. I've made one for oh-my-pi that has file-backed tasks which accept natural language to create jobs (parallelizing them whenever relevant).

      Haven't felt the need to show the world tho.
      • avoutic 15 minutes ago
        This! I have one with Linear, Nanobot, Claude Code, all automated in a way that works for me.

        Welcome to the age of selfware! Where everybody makes what they need! :)
        • verdverm 13 minutes ago
          I'll chime in that I use CUE, ADK-Go, Dagger, and Gemini-flash to build a Copilot alternative that is much better.

          The best part of building your own is all the things you will learn along the way.
    • verdverm 10 minutes ago
      life with tools like openclaw means life with ns;nt abundance

      hopefully it dies down as people realize there's more to it than the code
  • mihneadevries 2 hours ago
    the reviewer/worker pipeline is honestly the part I'm most curious about. like how do you handle disagreements between agents, does the reviewer just block and the worker retries, or is there a loop with a hard cutoff?

    the failure mode I'd worry about most is cascading context drift, where each agent in the chain slightly misunderstands the task and by the time you get to the test agent it's validating the wrong thing entirely. fwiw I think the LanceDB memory is the right call for this kind of setup, keeping shared context grounded is probably what prevents most of those drift issues.
    • unohee 27 minutes ago
      The worker-reviewer pipeline typically runs 1–2 self-revision iterations. In my experience, agents handle most tasks fine, but they tend to miss quality gates — docstrings, minor business logic edge cases, that kind of thing. The reviewer catches what slips through on the code quality side. This is all based on observed behavior from daily Claude Code CLI usage, where I've added hooks specifically to catch systematic failure patterns. OpenSwarm is essentially a productized version of that scaffolding from my actual workflow — packaged into a more reusable architecture.

      On context drift — good call, and yeah, that's exactly why the shared memory layer matters. LanceDB keeps the grounding consistent across the chain so each agent isn't just working off its own drifting interpretation.

      As for disagreements: right now the reviewer blocks and the worker retries with feedback, with a hard cutoff to prevent infinite loops. It's simple but it works — the revision depth rarely needs to go beyond 2 rounds. And when it does fail, that's actually the useful signal: especially when you're triaging larger projects, the points where agents break down are exactly where a human engineer needs to step in. At this point, what OpenSwarm really needs is broader testing from other users to validate these patterns outside my own workflow.
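      A minimal sketch of that block-and-retry loop with a hard cutoff. All names here (`run_pipeline`, `Review`, the toy agents) are hypothetical illustrations, not OpenSwarm's actual API; the toy worker/reviewer stand in for LLM calls:

```python
from dataclasses import dataclass

@dataclass
class Review:
    approved: bool
    feedback: str = ""

def run_pipeline(worker, reviewer, task, shared_context, max_revisions=2):
    """Worker drafts, reviewer approves or blocks with feedback.
    Retries carry the reviewer's critique; after max_revisions the
    result is escalated to a human instead of looping forever."""
    feedback = ""
    for attempt in range(max_revisions + 1):
        draft = worker(task, feedback, shared_context)
        review = reviewer(draft, shared_context)
        if review.approved:
            return draft, "approved", attempt
        feedback = review.feedback  # ground the retry in the critique
    return draft, "escalate-to-human", max_revisions

# Toy stand-ins for LLM agents: the worker misses a quality gate
# (a docstring) until the reviewer's feedback points it out.
def toy_worker(task, feedback, ctx):
    if "docstring" in feedback:
        return f'def {task}():\n    """Does {task}."""\n    return 42'
    return f"def {task}(): return 42"

def toy_reviewer(draft, ctx):
    if '"""' not in draft:
        return Review(False, "missing docstring")
    return Review(True)

result, status, rounds = run_pipeline(toy_worker, toy_reviewer, "answer", {})
print(status, rounds)  # approved 1
```

      The hard cutoff is the key design point: a rejection at `max_revisions` is surfaced as an escalation signal rather than retried indefinitely.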