15 comments

  • jensneuse 1 minute ago
    One thing I find interesting is how GraphQL has evolved from an API technology for API consumers with "different needs" into an API technology for agents. What helped organizations scale GraphQL across multiple teams is Federation, a way to split one supergraph into multiple subgraphs. What works well for scaling teams turns out to work equally well for agents. The core value you get from Federation is a "coordination" layer that is deterministic. What's interesting is that you can scale agentic software development pretty well when you have a deterministic layer everyone involved can agree on. I wrote more about this on our blog if anyone is interested: https://wundergraph.com/blog/graphql-api-layer-for-ai-agents
  • iainmerrick 24 minutes ago
    Like almost all of these articles, there's really nothing AI- or LLM-specific here at all. Modularization, microservices, monorepos, etc. have all been used in the past to help scale up software development for huge teams and complex systems.

    The only new thing is that small teams using these new tools will run into problems that previously only affected much larger teams. The cadence is faster, sometimes a lot faster, but the architectural problems and solutions are the same.

    It seems to me that existing good practices continue to work well. I haven't seen any radically new approaches to software design and development that *only* work with LLMs and wouldn't work without them. Are there any?

    I've seen a few suggestions of using LLMs directly *as* the app logic, rather than using LLMs to write the code, but that doesn't seem scalable, at least not at current LLM prices, so I'd say it's unproven at best. And it's not really a new idea either; it's always been a classic startup trick to do some stuff manually until you have both the time and the necessity to automate it.
  • nikeee 2 hours ago
    What matters for LLMs is what matters for humans, which usually means DX. Most microservice setups are extremely hard to debug across service boundaries, so I think in the future we'll see more architectural decisions that make sense for LLMs to work with. Which will probably mean modular monoliths or something like that.
    • onlyrealcuzzo 46 minutes ago
      Aren't libraries just "services" without some transport layer / gateway?

      You should only ever have a separate "service" if there's a concrete reason to. You should never have a "service" just to make things simpler (it inherently does not).

      Libraries, on the other hand, are much more subjective.
      • staticassertion 3 minutes ago
        > Aren't libraries just "services" without some transport layer / gateway?

        Libraries can share memory, mutable state, etc. Services cannot.

        > (it inherently does not)

        That's debatable.
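        A minimal Python sketch of the distinction above (all names are illustrative, not from the article): an in-process library call can hand the caller a live, mutable object, while a service-style boundary only ever sees serialized copies.

```python
import json

class Cart:
    def __init__(self):
        self.items = []

# In-process "library" call: caller and callee share the same object,
# so the mutation is visible to the caller directly.
def add_item_library(cart: Cart, item: str) -> None:
    cart.items.append(item)

# Across a service-style boundary everything is a value: the "response"
# is a fresh serialized copy, and the caller's original is untouched
# unless the caller explicitly re-assigns it.
def add_item_service(cart_json: str, item: str) -> str:
    cart = json.loads(cart_json)
    cart["items"].append(item)
    return json.dumps(cart)

cart = Cart()
add_item_library(cart, "apple")
assert cart.items == ["apple"]              # shared mutable state

before = json.dumps({"items": []})
after = add_item_service(before, "apple")
assert json.loads(before) == {"items": []}  # original unchanged
assert json.loads(after) == {"items": ["apple"]}
```

        The serialization step is exactly what a transport layer adds (and what rules out sharing memory), independent of HTTP itself.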
    • zoho_seni 1 hour ago
      Definitely, our approach is AI dev-ex first.
  • int_19h 2 hours ago
    That's an argument for components with well-defined contracts on their interfaces, but making them microservices just complicates debugging for the model.

    It's also unclear whether tight coupling is actually a problem when you can refactor this fast.
    • jillesvangurp 26 minutes ago
      Whether you call it modularization, good design, SOLID principles, microservices, etc., it all boils down to the same thing. I usually dumb it down to two easy-to-understand metrics: cohesion and coupling. Something with high cohesion and low coupling tends to be small and easy to reason about.

      Things that are small can be replaced, fixed, or changed easily, with relatively low risk. Even if you have a monolith, you probably want to impose some structure on it. Wherever you get tight coupling and low cohesion in a system, it can become a problem spot.

      Easy reasoning here translates directly into low token cost when reasoning. That's why it's beneficial to keep things that way with LLMs too. Bad design always had a cost, but with LLMs you can put a dollar figure on it.

      My attitude toward microservices is that they're a lot of heavy-handed isolation where cheaper mechanisms could achieve much of the same effect. You can put things in a separate git repository and force all communication over the network. Or you can put code in a different package, guard that package's cohesion and coupling a bit, and call it through well-defined interfaces. Same net result from a design point of view, but one is cheaper to call and a whole lot less hassle and overhead. IMHO people do microservices mostly for the wrong reasons: organizational convenience rather than actual benefits in terms of minimizing resource usage and optimizing for it.
    • dist-epoch 2 hours ago
      You're taking the article's argument too literally. They meant microservices also in the sense of microlibraries, etc., not strictly an HTTP service.
      • iainmerrick 1 hour ago
        No, I think you're not reading it literally enough. "Microservices" generally does mean separate HTTP (or at least RPC) servers. Near the beginning, the article says:

        *A microservice has a very well-defined surface area. Everything that flows into the service (requests) and out (responses, webhooks)*
        • mexicocitinluez 42 minutes ago
          I think a better word would have been "modularization" rather than "microservices", as I also strongly associate "microservices" with HTTP-based calls.
        • benfortuna 1 hour ago
          Why arbitrarily invent new meanings (for microservices) and new words (microlibraries) when there are already many adequate ways to describe modular, componentized architectures?

          A totally valid and important point, but it has been diluted by talking about microservices rather than the importance of modular architectures for agent-based coding.
          • mexicocitinluez 41 minutes ago
            > describe modular,

            Agreed. Modular is what they were probably after.
  • veselin 1 hour ago
    I think this is a promise, probably also for spec-driven development. You write the spec, and the whole thing can be reimplemented in Rust tomorrow. Make small modules or libraries.

    One colleague describes monolith vs. microservices as "the grass is greener on the other side".

    In the end, having microservices means the release process becomes much harder. Every feature spans at least 3 services, with possible incompatibilities between some of their versions. Precisely the work you cannot easily automate with LLMs.
  • tatrions 3 hours ago
    The bounded-surface-area insight is right, but the actual forcing function is context window size. A small codebase fits in context, so the LLM can reason end-to-end. You get the same containment with well-defined modules in a monolith if your tooling picks the right files to feed into the prompt.

    Interesting corollary: as context windows keep growing (8k to 1M+ in two years), this architectural pressure should actually reverse. When a model can hold your whole monolith in working memory, you get all the blast-radius containment without the operational overhead of separate services, billing accounts, and deployment pipelines.
    • stingraycharles 2 hours ago
      This makes no sense, as you're able to have similar interfaces and contracts using regular code.

      Microservices mostly solve an organizational problem — teams being able to work completely independently, do releases independently, etc. — but as soon as you're actually going to do that, you're introducing a lot of complexity (while gaining organizational scalability).

      This has nothing to do with context sizes.
    • lyricalstring 2 hours ago
      Agree on the context-window framing. If an LLM needs well-defined boundaries to work well, just write clean module interfaces. You don't need a network boundary for that.

      The part about "less scrutiny on PR review" and committing straight to main is telling too. That's not really about microservices; that's just wanting to ship faster with less oversight. Works until it doesn't.
      • Kim_Bruning 1 hour ago
        > The part about "less scrutiny on PR review" and committing straight to main is telling too. That's not really about microservices, that's just wanting to ship faster with less oversight. Works until it doesn't.

        And I think that's the reason the author proposes microservices. It doesn't need to be microservices, but something where your codebase is split up so that when-not-if it does blow up, you only roll back the one component and try again.

        Modularization is hardly a new idea, but it might need a slight spin to allow agents to work by themselves a bit more. The speed advantages are too tantalizing not to.
        • Kim_Bruning 1 hour ago
          Expanding: think of it this way. A typical sprint in current best practice is 1-2 weeks, so having to scrap a module and start over loses you a lot of time and money. A typical "AI sprint" is << 20 minutes, so several passes of failing a module and rewriting the spec is still only a few hours.

          A typical rant goes: "You claim only the output is what counts; but what about the human warmth?" Well, this is IT. If you can thoroughly prove that the inputs and outputs are identical to spec, *you have done the thing*.

          Harder than it sounds: CDNs and suss libraries no one told you about, abysmal security, half-baked features? Uh... yeah, that happens. But if the blast radius is small, it's fixable and survivable. Hopefully.

          Famous last words.
    • dist-epoch 2 hours ago
      Large context windows cost more money, so the pressure is still there to keep it tight.
  • Theaetetus 1 hour ago
    I don't think LLMs push us to use microservices as much as Borgers says they do. They don't avoid the problems microservices have always faced, and encapsulation is mostly independent of whether a boundary is a service-to-service boundary:

    https://www.natemeyvis.com/agentic-coding-and-microservices/
    • scotty79 1 hour ago
      A service-to-service boundary is the easiest kind to maintain with the way we are using LLMs to code right now.
  • siruwastaken 3 hours ago
    This seems like the idea of modularizing code and using specific function signatures for data exchange as an API is being re-invented by people using AI. Aren't we already mostly doing things this way, albeit via submodules in a monolith, due to the cognitive strain it puts on humans to understand the whole thing at any given time?
  • victorbjorklund 1 hour ago
    I think not. But I do think it makes sense to break down your app into libraries, etc.
  • _pdp_ 3 hours ago
    This makes no sense. You can easily make a monolith and build all parts of it in isolation: modules, plugins, packages.

    In fact, my argument is that there will be more monolith applications due to AI coding assistants, not fewer.
  • c1sc0 3 hours ago
    Why microservices, when small composable CLI tools seem a better fit for LLMs?
    • mrbungie 2 hours ago
      His argument is not about LLM tools but rather about which architecture is better suited for coding with LLMs.
  • Kim_Bruning 2 hours ago
    A typical rant (composed from memory) goes something like this:

    > "These AI types are all delusional. My job is secure. Sure, your model can one-shot a small greenfield program in 5 minutes with zero debugging. But make it a little larger and it starts to forget features, introduces more bugs than you can fix, and forget about letting it loose on large legacy codebases."

    What if that's not a diagnosis? What if we see it as an opportunity? O:-)

    I'm not saying it needs to be microservices, but say you can constrain the blast radius of an AI going oops (compaction is a famous oops-surface, for instance); and say you can split the work up into self-contained blocks where you can test your i/o and side effects thoroughly...

    ... well, that's going to be interesting, isn't it?

    Programming has always been supposed to be about that: structured programming, functions (preferably side-effect-free for this argument), classes & objects, and other forms of modularization including (OK, sure) microservices. I'm not sold on exactly the latter because it feels a bit too heavy for me. But... something like it?
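    A hedged sketch of such a self-contained, testable block (Python, invented example): a pure function whose spec is pinned by i/o contract tests, so a failed regeneration is cheap to detect and cheap to roll back.

```python
def normalize_tags(tags: list[str]) -> list[str]:
    """Spec: lowercase, strip whitespace, drop empties, dedupe, sort."""
    cleaned = {t.strip().lower() for t in tags if t.strip()}
    return sorted(cleaned)

# Contract tests: if an agent rewrites normalize_tags from scratch, these
# pin the observable i/o. Pass means the spec is met; fail means roll back
# the one block and regenerate, with no blast radius beyond it.
assert normalize_tags([" Rust", "rust", "", "Go "]) == ["go", "rust"]
assert normalize_tags([]) == []
assert normalize_tags(["A"]) == normalize_tags(["a", " A "])
```

    The function is side-effect-free on purpose: with no shared state, the contract tests cover everything the rest of the system can observe.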