When people talk about AI increasing developer productivity, they usually focus on the coding part.

In my experience, the bigger change happens after the code is written.

When you move from writing code to supervising agents, your output increases, but your cognitive load increases too.

Instead of writing every line yourself, you're now monitoring systems:

Did the agent go off-script?
Did it retry 50 times while I was asleep?
What did that run actually cost?

The strange part is that the mental burden doesn't disappear just because the agent is autonomous.

In some ways it gets worse, because failures become harder to notice early and harder to contain once they start.

It starts to feel less like programming and more like running operations for a team of extremely fast, extremely literal junior developers.

Curious if others are seeing the same shift.
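By "monitoring systems" I mean things as basic as hard caps on retries and spend. Here's a minimal sketch of the kind of guardrail I have in mind, in Python; `run_agent_step` and its result type are hypothetical stand-ins for whatever your agent framework actually exposes:

    import random
    import time
    from dataclasses import dataclass

    MAX_RETRIES = 5          # hard cap so a stuck agent can't loop all night
    COST_BUDGET_USD = 2.00   # abort the run once estimated spend crosses this

    @dataclass
    class StepResult:
        ok: bool
        cost_usd: float

    def run_agent_step(task: str) -> StepResult:
        # Hypothetical stand-in for a real agent call;
        # replace with your framework's actual API.
        return StepResult(ok=random.random() > 0.5, cost_usd=0.10)

    def run_with_guardrails(task: str) -> StepResult:
        spent = 0.0
        for attempt in range(1, MAX_RETRIES + 1):
            result = run_agent_step(task)
            spent += result.cost_usd
            if spent > COST_BUDGET_USD:
                raise RuntimeError(f"budget exceeded after {attempt} attempts (${spent:.2f})")
            if result.ok:
                return result
            time.sleep(2 ** attempt)   # back off instead of hammering the API
        raise RuntimeError(f"gave up after {MAX_RETRIES} attempts")

Nothing clever, but a cap like this is the difference between waking up to one failed task and waking up to a 50-retry bill.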
This is a real thing. I spent all of January doing greenfield development using Claude (I had finished the requirements first), and all I can say is thank goodness I had the Max 5x plan and not the 20x, because I got breaks once the tokens were used up until the next cycle. I was forced to get up and do something else. That something else was biking, rowing, walking. My productivity had never been higher, but at what cost? My health? No thanks. So I'm glad I'm using the time until the token reset for my health. I time it perfectly: I walk, row, or bike for an hour, and as I arrive back the tokens are reset. I get about 3 hours of nonstop use per token batch on the 5x plan. I've been thinking about going 20x but am scared...
Hypothesis: limiting usage/tokens could have a positive effect on project quality, since it forces the developer to think more carefully about the problems they're working on. When you're forced to stop and slow down, you become more deliberate with token usage. But with unlimited tokens you can just keep generating infinite lines of code without thinking as hard about the problem.

I've seen people on social media bragging about how they're able to produce a mountain of code, as if this were praiseworthy.
I don't get this, tbh. I use Claude too, and my issue is the opposite: too many small breaks. Every time I hit enter, my brain wants to check out because the agent just spins while it creates thousands of tokens and churns on the subject. Even if it's only two minutes, that's two minutes where my mind has nothing to work on.

Hard to stay in flow and engaged.

Feels weirdly similar to being interrupted over Slack.
I have a similar problem, but I have to switch contexts, and that makes the work a lot more intense.
You're correct that flow isn't achieved, because this isn't programming; it's more like system design, architecture, QA, and Product Owner work. It's using the swarm as your own dev team.
I have never been in a flow state with an agent running. I use agents, but that isn’t flow.
And flow state is a luxury in 2026; with AI swarms it's most likely to be found sparingly, if at all. Good luck, all!
Yes, agreed. I'm running 3-5 Claude instances in parallel, with the requirements as the input. My prompt is something very specific, like "work on section 5.1". Then I'm monitoring the work across all instances.
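Concretely, my setup is roughly the following; the `claude -p` print-mode invocation is how I drive the CLI headlessly, but treat the exact flags and section names as illustrative:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    SECTIONS = ["5.1", "5.2", "5.3"]  # one requirements section per instance

    def run_instance(section: str) -> str:
        # Each instance gets a narrow, specific prompt scoped to one section.
        prompt = f"Work on section {section} of the requirements only."
        proc = subprocess.run(
            ["claude", "-p", prompt],   # headless print mode; flags are illustrative
            capture_output=True, text=True,
        )
        return f"[{section}] exit={proc.returncode}"

    # Fan out, then watch the results come back across all instances.
    with ThreadPoolExecutor(max_workers=3) as pool:
        for status in pool.map(run_instance, SECTIONS):
            print(status)

The narrow prompts matter more than the parallelism: the more specific the section, the less the instances step on each other.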
Are you a single-agent user?

At least in my case, flow is gone. It's all context switching now.
Does a person review all the AI-generated code?
I use it every day and I'm taking off weekends for the first time in a decade. It's done wonders for my mental health. I think teams should pay more attention to the value of pumping the brakes vs. incessant redlining. We may actually be able to have a healthy relationship with AI then.
> Software engineering was supposed to be artificial intelligence's easiest win.

At what point in time? Did anyone foresee coding being one of the best and soonest applications of this stuff?
No one saw it coming.
They're probably talking about some point after the capabilities of LLMs started to become clear.

It's why Codex, Claude Code, Gemini CLI, etc. were developed at all: it was clear that if you wanted a concrete application of LLMs with clear productivity benefits, coding was low-hanging fruit, so all the AI vendors jumped on it and started hyping it.
Sure, but jumping from "it's amazing these things work for code at all" to "software engineering is solved" is something only grifters or those drunk on the Kool-Aid did.

I do agree it was thought that these LLM agents would be extremely useful, and that is why they were developed, and I happen to believe they in fact are extremely useful (without disagreeing that much of the stuff in the article definitely does happen).

I just resent the framing that it was supposed to be X but actually failed, when not only is there only minor evidence that it failed, but it was only for a brief period that it was supposed to be X at all.
Personally, I make a lot more out-of-hours commits than I used to, because I'll batch up low-priority tasks throughout the day and let the computer chug on them at night while I'm elsewhere. Commits come in at all hours, but I'm not actually looking at them until the next morning.
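The batching itself is nothing fancy; it's roughly this shape, where the nightly agent invocation is a placeholder for whatever tool you actually run:

    import json
    import time
    from pathlib import Path

    QUEUE = Path("night_queue.jsonl")   # low-priority tasks appended during the day

    def enqueue(task: str) -> None:
        with QUEUE.open("a") as f:
            f.write(json.dumps({"task": task, "queued_at": time.time()}) + "\n")

    def drain() -> None:
        # Kick this off from cron at night; review the commits in the morning.
        if not QUEUE.exists():
            return
        for line in QUEUE.read_text().splitlines():
            task = json.loads(line)["task"]
            print(f"running: {task}")    # placeholder for the actual agent invocation
        QUEUE.write_text("")             # clear the queue after the nightly run

The review-next-morning step is the important part; the queue just keeps the low-priority stuff from interrupting the day.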
Selection bias? The early adopters motivated to adopt tools to deliver more were typically also working more to start with, and may already have been struggling with their rate of output.
Three unthought-out thoughts:

1. LLMs allow devs to be more productive, so more free time is seen as an opportunity for more work. People overshoot and just work more.

2. Generalized tooling makes devs seem more replaceable, putting downward pressure on job security (i.e., work harder or we'll get someone who will, oh, and for less money).

3. LLMs allow for more "multitasking" (debatable) via many running background tasks, so there are more opportunities to "just finish one more thing."
No silver bullet. We've known this since at least Brooks's 1986 essay of that name. The fact that the authors of the code might not be human doesn't change this.
Thoroughly reviewing, and especially testing, is faster in the long run than skipping manual review and tests.
I can't deny that this might be a trend in practice, but at companies with reasonably self-aware practices, it isn't, or doesn't need to be.

There's this weird thing that happens with new tools where people seem to surrender their autonomy to them, e.g. "welp, I just get pings from [Slack|my phone|etc.] all the time, nothing I can do but be interrupted constantly." More recently, it's "this failed because Claude chose..." No, Claude didn't choose; the person who submitted the PR chose to accept it.

It's possible to use tools responsibly and effectively. It's also possible to encourage and mentor employees to do that. The idea that a dev has to be effectively on call because they're pushing AI slop is just wrong on so many levels.
> It's possible to use tools responsibly and effectively. It's also possible to encourage and mentor employees to do that.

It's not in the company's interest to stop employees from overworking. Having people overwork for the same pay under pressure is the desired outcome, actually.
> More recently, it's "this failed because Claude chose..." No, Claude didn't choose; the person who submitted the PR chose to accept it.

I can relate to this. Unfortunately, these tools are becoming a very convenient way to offload any kind of responsibility when something goes wrong.
[dead]
Was this comment written by an LLM? It seems that way to me; e.g., the "paradox" is not a paradox at all, just an obvious statement.
A developer's job has always been reviewing and understanding code.

Code is literally always the last resort. Unless you're building solutions for other customers, most companies should attempt to minimise the amount of code they have. Because, and I repeat, it's a developer's job to understand and review code. More code means more understanding needed, more reviews needed, more problems created.
I don't know about you, but if I started doing all that instead of writing code as a priority, I'd be fired.

My job is to generate more money, not indulge in code.
Nah, summarizing code is now an LLM job as well. There's no place for engineers in the new tech world order.