<i>But you can’t just not review things!</i><p>Actually you can. If you shift the reviews far to the left, and call them code design sessions instead, and you raise problems on dailies, and you pair programme through the gnarly bits, then 90% of what people think a review should find goes away. The expectation that a review will discover bugs, architecture problems, and design problems doesn't exist if you've already agreed with the team what you're going to build. The remaining 10% - things like variable naming, whitespace, and patterns - can be checked with a linter instead of a person. If you can get the team to that level, you can stop doing code reviews.<p>You also need to build a team that you can trust to write the code you agreed you'd write, but if your reviews are there to check someone has done their job well enough, then you have bigger problems.
This falls for the famous "hours of planning can save minutes of coding". Architecture can't (all) be planned out on a whiteboard; it's the response to difficulties you only realize as you try to implement.<p>If you can agree what to build and how to build it, and it then turns out that the plan actually works - then you are better than me. That hasn't happened in my 20 years of software development. Most of what's planned falls down within the first few hours of implementation.<p>Iterative architecture meetings will be necessary. But that falls into the pit of the weekly meeting.
I've worked waterfall (defense) and while I hated it at the time I'd rather go back to it. Today we move much faster but often build the wrong thing or rewrite and refactor things multiple times. In waterfall we moved glacially, but what we built stuck. Also, with so much up-front planning, the code practically writes itself. I'm not convinced there are any real velocity gains in agile once you factor in all the fiddling, rewrites, and refactoring.<p>> Most of what's planned falls down within the first few hours of implementation.<p>Not my experience at all. We know what computers are capable of.
> > Most of what's planned falls down within the first few hours of implementation.<p>> Not my experience at all. We know what computers are capable of.<p>You must not work in a field where uncertainty is baked in, like Data Science. We call them “hypotheses”. As an example, my team recently had a week-long workshop where we committed to bodies of work on timelines and 3 out of our 4 workstreams blew up just a few days after the workshop because our initial hypotheses were false (i.e. “best case scenario X is true and we can simply implement Y; whoops, X is false, onto the next idea”)
Wait, are you perhaps saying that... "it depends"? ;-)<p>Every single reply in this thread is someone sharing their subjective anecdotal experience.<p>There are so many factors involved in how work pans out beyond planning. Even a single one of us could probably tell 10 different stories about 10 different projects that all went differently.
> I've worked waterfall and while I hated it at the time I'd rather go back to it. Today we move much faster but build the wrong thing or rewrite and refactor things multiple times.<p>My experience as well. Waterfall is like: let's think about where we want this product to go, and the steps to get there. Agile is like an ADHD-addled zigzag journey to a destination, cutting corners because we are rewriting a component for the third time, to get to a much worse product slightly faster. Now we can do that part 10x faster, cool.<p>The thing is, at every other level of the company, people are actually planning in terms of quarters/years, so the underlying product being given only enough thought for the next 2 weeks at a time is a mismatch.
There's an abstraction level above which waterfall makes more sense, and below which [some replacement for agile but without the rituals] makes more sense.
I think the questions to ask are: whether the user-facing deliverables take longer than a sprint, whether the tasks have linear dependencies, whether there are coordination concerns, etc.
It’s possible to manage the quarterly expectations by saying “we can improve metric X by 10% in a quarter”. It’s often possible to find an improvement that you’re very confident of making very quickly. Depending on how backwards the company is you may need to hide the fact that the 10% improvement required a one line change after a month of experimentation, or they’ll fight you on the experimentation time and expect that one line to take 5 minutes, after which you should write lots more code that adds no value.<p>Agile isn’t a good match for a business that can only think in terms of effort and not learning+value. That doesn’t make agile the problem.
My experience in an agile firm was that they hired a lot of experienced people and then treated them like juniors. Actively allergic to thinking ahead.<p>To get around the problem that deliverables took more than a few days, actual tasks would be salami-sliced into 3-point tickets that simply delivered the starting state the next ticket needed. None of these tickets, when completed, was an actual user-observable deliverable or something you could put on a management-facing status report.<p>Each task was so tightly time-boxed that seniors would actively be upbraided in agile ceremonies for doing obvious next steps. Eight sequential tickets like: Download the data. Analyze the data. Load a sample of the data. Load all the data. OK, now put in data quality tests on the data. OK, now schedule the daily load of the data. OK, now talk to users about the type of views/aggregations/API they want on the data. OK, now do a v0 of that API.<p>It's sort of interesting, because we have fully transitioned from the agile infantilization of seniors to expecting them to replace a team of juniors with LLMs.
> Today we move much faster but often build the wrong thing or rewrite and refactor things multiple times. In waterfall we move glacially but what we would build sticks.<p>That's an interesting observation. That's one of the biggest criticisms of waterfall: by the time you finish building something the requirements have changed already, so you have to rewrite it.
Comparing the same work done under agile and waterfall, I can accept your experience of what sounds like an org with unusually effective long-term planning.<p>However, the value of agile is in the learning you do along the way that helps you see that the value is only in 10% of the work. So you're not comparing 100% across two methodologies; you're comparing 100% effort vs 10% effort (or maybe 20%, because nobody is perfect).<p>Most of the time when I see unhappiness at the agile result, it's because the assessment is done on how well the plan was delivered, as opposed to how much value was created.
“Everyone has a plan until they get punched in the mouth" - Mike Tyson
I've seen engineers I respect abandon this way of working as a team for the productivity promise of conjuring PRs with a coding agent. It blows away years of trust so quickly when you realize they stopped reviewing their own output.
Perhaps due to a FOMO outbreak[1], upper management everywhere has demanded AI-powered productivity gains, and based on LoC/PR metrics it <i>looks</i> like they are getting them.<p>1. The longer I work in this industry, the clearer it becomes that CxOs aren't great at projecting/planning, and default to copy-cat, herd behaviors when uncertain.
Software engineers are pushed to their limits (and beyond). Unrealistic expectations are set by Twitter posts like <i>"I shipped an Uber clone in 2 hours with Claude"</i>, forcing every developer to crank out PRs, while managers are on the lookout for any kind of perceived inefficiency in tools like GetDX and Span.<p>If devs are expected to ship 10x faster (or else!), then they will find a way to ship 10x faster.
I always found it weird how most management would do almost <i>anything</i> other than ask their dev team "hey, is there any way to make you guys more productive?"<p>I've had metrics rammed down my throat, I've had AI rammed down my throat, Scrum rammed down my throat, and I've had various other diktats rammed down my throat.<p>95% of which slowed us down.<p>The only time I've been asked is when there is a deadline and it's pretty clear we aren't going to hit it, and even then they're interested in quick wins like "can we bring lunch to you for a few weeks?", not systemic changes.<p>The fastest and most productive times have been when management just set high-level goals and stopped prodding.<p>I'm convinced that the companies which seek developer autonomy will leave the ones which seek to maximize token usage in the dust in the next tech race.
Would love to be a fly on the wall for a couple of months to see what corporate CxOs actually do.<p>Surely I could do a mediocre job as a CxO by parroting whatever is hot on LinkedIn. Probably wouldn't be a massively successful one, but good enough to survive 2 years and have millions in the bank for that, or get fired and get a golden parachute.<p>(half) joking - most likely I'm massively trivializing the role.
Funny enough, the author of this blog post wrote another one on exactly that topic, entitled "What do executives do, anyway?"[1]. If you read it, you'll find it's written from quite an interesting perspective, not quite "fly on the wall," but perhaps as close as you're going to get in a realistic scenario.<p>[1]: <a href="https://apenwarr.ca/log/20190926" rel="nofollow">https://apenwarr.ca/log/20190926</a>
"Surely I could do a mediocre job as a CxO by parroting whatever is hot on Linkedin"<p>Having worked for a pretty decent CIO of a global business I'd say his main job was to travel about speak to other senior leaders and work out what business problems they had and try and work out, at a very high level, how technology would fit into that addressing those problems.<p>Just parroting latest technology trends would, I suspect, get you sacked within a few weeks.
A charitable explanation for what CxOs do is that they figure out their strategic goals and then focus really hard on ways to herd cats en masse to achieve the goals in an efficient manner. Some people end up doing a great job, some do so accidentally, other just end up doing a job. Sometimes parroting some linkadink drivel is enough to keep the ship on course - usually because the winds are blowing in the right direction or the people at the oars are working well enough on their own.
This is the part that doesn't get talked about enough. Code review was never just about catching bugs; it was how teams built shared understanding of the codebase. When someone skips reviewing their own AI-generated PR, they're not just shipping unreviewed code, they're opting out of knowing what's in their own system. The trust problem isn't really about the AI output quality; it's about whether the person submitting it can answer questions about it six months from now.
Putting too much trust in an agent is definitely a problem, but I have to admit I've written about a dozen little apps in the past year without bothering to look at the code and they've all worked really well. They're all just toys and utilities I've needed and I've not put them into a production system, but I would if I had to.<p>Agents are getting <i>really</i> good, and if you're used to planning and designing up front you can get a ton of value from them. The main problem with them that I see today is people having that level of trust without giving the agent the context necessary to do a good job. Accepting a zero-shotted service to do something important into your production codebase is still a step too far, but it's an increasingly small step.
>> Putting too much trust in an agent is definitely a problem, but I have to admit I've written about a dozen little apps in the past year without bothering to look at the code and they've all worked really well. They're all just toys and utilities I've needed and I've not put them into a production system, but I would if I had to.<p>I have been doing this too, and I've forgotten half of them. For me the point is that this usage scenario is really good, but there's no real added value to it. The moment Claude Code raises its prices 2x this won't be viable anymore, and at the same time, to scale this to enterprise software production levels you'd probably need to spend as much on agents as on hiring two SWEs, given that you need at least one of them to coordinate the agents.
Deepseek v3.2 tokens are $0.26/0.38 on OpenRouter. That model - released 4 months ago - isn't really good enough by today's standards, but it's significantly stronger than Opus 4.1, which was only released last August! In 12 months I think it's reasonable to expect there will be a model costing less than that which is significantly stronger than anything available now.<p>And no, it isn't ONLY because VC capital is being burned to subsidize cost. That is impossible for the dozen smaller providers offering service at that cost on OpenRouter, who have to compete with each other for every request and also have to pay compute bills.<p>Qwen3.5-9B is stronger than GPT-4o and it runs on my laptop. That isn't just benchmarks either. Models are getting smaller, cheaper, and better at the same time, and this is going to continue.
I think Claude could raise its prices 100x and people would still use it. It'd just shift to being an enterprise-only option, and companies would actually start to measure the value instead of being "Whee, AI is awesome! We're definitely going really fast now!"
I’m so disappointed to see the slip in quality by colleagues I think are better than that. People who used to post great PRs are now posting stuff with random unrelated changes, little structs and helpers all over the place that we already have in common modules etc :’(
That's partly the point of the article, except the article acknowledges that this is organizationally hard:<p>> You get things like the famous Toyota Production System where they eliminated the QA phase entirely.<p>> [This] approach to manufacturing didn’t have any magic bullets. Alas, you can’t just follow his ten-step process and immediately get higher quality engineering. The secret is, you have to get your engineers to engineer higher quality into the whole system, from top to bottom, repeatedly. Continuously.<p>> The basis of [this system] is trust. Trust among individuals that your boss Really Truly Actually wants to know about every defect, and wants you to stop the line when you find one. Trust among managers that executives were serious about quality. Trust among executives that individuals, given a system that can work and has the right incentives, will produce quality work and spot their own defects, and push the stop button when they need to push it.<p>> I think we’re going to be stuck with these systems pipeline problems for a long time. Review pipelines — layers of QA — don’t work. Instead, they make you slower while hiding root causes. Hiding causes makes them harder to fix.
Does anybody have ideas on how to avoid childish resistance? Any time something like this pops up, people discuss it into oblivion and teams stay in their old habits.
Bean counters do not like pair programming.<p>If <i>we</i> hired two programmers, the goal was to produce twice the LOC per week. Now we are doing far less than our weekly target. Does not meet expectation.
>shift the reviews far to the left, and call them code design sessions instead, and you raise problems on dailys, and you pair programme through the gnarly bits<p>hell in one sentence
This is also the premise of pair programming/extreme programming: if code review is useful, we should do it <i>all the time</i>.
> You also need to build a team that you can trust to write the code you agreed you'd write<p>I tell every hire new and old “Hey do your thing, we trust you. Btw we have your phone number. Thanks”<p>Works like a charm. People even go out of their way to write tests for things that are hard to verify manually. And they verify manually what’s hard to write tests for.<p>The other side of this is building safety nets. Takes ~10min to revert a bad deploy.
> I tell every hire new and old “Hey do your thing, we trust you. Btw we have your phone number. Thanks”<p>That's cool. Expect to pay me for the availability outside work hours. And extra when I'm actually called
> The other side of this is building safety nets. Takes ~10min to revert a bad deploy.<p>Does it? Reverting a bad deploy is not only about running the previous version.<p>Did you mess up data? Did you take actions on third-party services that need to be reverted? Did it have legal repercussions?
How does the phone number help?
Never received a phone call at 5am on a Sunday because a bug is causing a valued customer to lose $10k/minute, and by the way, the SVP is also on the line? Lucky bastard
That's the polite version of "we know where you live". Telling someone you have their phone number is a way of saying "we'll call you and expect immediacy if you break something."<p>Wanna be treated like an adult? Cool. You'll also be held accountable like an adult.
Presumably they will be contacted if there's a problem. So the hire has an interest in not creating problems.
Unless you're covering 100% of edge/corner cases during planning (including roughly how they're handled), there is still value in code reviews.<p>You conveniently brushed this under the rug of pair programming, but of the handful of companies I've worked at, only one tried it, and just as an experiment, which in the end failed because no one really wanted to work that way.<p>I think this "don't review" attitude is dangerous and only acceptable for hobby projects.
Reviews are vital for 80% of the programmers I work with but I happily trust the other 20% to manage risk, know when merging is safe without review, and know how to identify and fix problems quickly. With or without pairing. The flip side is that if the programmer and the reviewer are both in the 80% then the review doesn’t decrease the risk (it may even increase it).
This seems to be at the core of the problem with trying to leave things to autonomous agents: the response to Amazon's agents deleting prod was to implement review stages.<p><a href="https://blog.barrack.ai/amazon-ai-agents-deleting-production/" rel="nofollow">https://blog.barrack.ai/amazon-ai-agents-deleting-production...</a>
I'm in a company that does <i>no</i> reviews and I'm a medior. The tools we make aren't interesting at all, so it's probably the best position I could ask for. I occasionally have time to explore some improvements, tools, and side projects (don't tell my boss about that last one).
Then you spend all your budget on code design sessions and have nothing to show to the customer.
yes!<p>and it also works for me when working with AI. that produces much better results, too, when I first do a design session really discussing what to build, then a planning session laying out the steps to build it ("reviewability", a world wonder), and then the instruction to stop when things get gnarly and work with the hooman.<p>does anyone here have a good system prompt for that self-observance "I might be stuck, I'm kinda sorta looping. let's talk with hooman!"?
I never review PRs, I always rubber-stamp them, unless they come from a certified idiot:<p>1. I don't care, because the company at large fails to value quality engineering.<p>2. 90% of PR comments are arguments about variable names.<p>3. The other 10% are mistakes that have a very limited blast radius.<p>It's just that, unless my coworker is a complete moron, most likely whatever they came up with is at least in an acceptable state, in which case there's no point delaying the project.<p>Regarding knowledge sharing, it's complete fiction. Unless you actually make changes to some code, there's zero chance you'll understand how it works.
I'm very surprised by these comments...<p>I regularly review code that is way more complicated than it should be.<p>The last few days I was going back and forth on reviews of a function that originally had a cyclomatic complexity of 23. Eventually I got it down to 8, but I had to call the author into a pair programming session and show him how the complexity could be reduced.
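Not the actual code from that review, obviously, but a minimal Python sketch of the usual trick for cutting complexity like that: replace nested branching with a lookup table (all names here are hypothetical).<p><pre><code>from dataclasses import dataclass

@dataclass
class Order:
    express: bool
    weight: float

# Before: one nested branch per case; cyclomatic complexity
# grows with every new condition.
def cost_nested(order: Order) -> int:
    if order.express:
        if order.weight > 20:
            return 50
        return 30
    if order.weight > 20:
        return 25
    return 10

# After: the same decisions as data; new cases become new rows
# in the table instead of new branches in the code.
COSTS = {
    (True, True): 50, (True, False): 30,
    (False, True): 25, (False, False): 10,
}

def cost_table(order: Order) -> int:
    return COSTS[(order.express, order.weight > 20)]

assert cost_nested(Order(True, 25.0)) == cost_table(Order(True, 25.0)) == 50
</code></pre>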
Someone submitting work like that should either be junior enough that there is potential for training them, so your time investment is worth it, or be managed out.<p>Or it didn't really matter that the function was complex, if the structure surrounding it was robust and testable; just let it be a refactor or bug ticket later.
Do people really argue about variable names? Most review comments I see are fairly trivial, but almost always not very subjective (leftover debug log, please add a comment here, etc). Maybe it helps that many of our seniors are from a team where we had no auto-formatter or style guide at all for quite a while. I think everyone should experience that a random mix of `){` and `) {` does not really impact you in any way beyond the mild irking of a crooked painting or something. There's a difference between aesthetically bothersome and actually harmful. Not to say that you shouldn't run a formatter, but just for some perspective.
>Do people really argue about variable names?<p>Of course they do. A program's code is mostly a graph of names; they can be cornerstones of its clarity, or sources of confusion and bugs.<p>The first thing I do when debugging is ensuring proper names, sometimes that's enough to make the bug obvious.
Yes. 80% of comments to my PRs are "change _ to -" or something like that.
People always make mistakes, like forgetting to include a change. The point of PRs, for me, is to try to weed out costly mistakes. Automated tests should hopefully catch most of them, though.
The point of PRs is not to avoid mistakes (though sometimes this can happen). Automated tests are the tool to weed out those kinds of mistakes. The point of PRs is to spread knowledge. I try to read every PR, even if it's already approved, so I'm aware of what changes there are in code I'm going to own. They are the RSS feed of the codebase.
> 2. 90% of PR comments are arguments about variable names.<p>This sort of comment is meaningless noise that people add to PRs to pad their management-facing code review stats. If this is going on in your shop, your senior engineers have failed to set a suitable engineering culture.<p>If you <i>are</i> one of the seniors, schedule a one-on-one with your manager, and tell them in no uncertain terms that code review stats are off-limits for performance reviews, because it's causing perverse incentives that fuck up the workflow.
The most senior guy has the worst reviews because they take multiple rounds; each round finds new problems. The manager thinks this contributes to code quality. I was denied a promotion because I failed to convince half of the company to drop everything and do my manager's pet project, which had literally zero business value.
I used to do this! I can’t anymore, not with the advent of AI coding agents.<p>My trust in my colleagues is gone, I have no reason to believe they wrote the code they asked me to put my approval on, and so I certainly don’t want to be on a postmortem being asked why I approved the change.<p>Perhaps if I worked in a different industry I would feel like you do, but payments is a scary place to cause downtime.
These systems make it more efficient to remove the actively toxic members from your team. Belligerence can be passive-aggressively "handled" by additional layers, but at considerable time and emotional-labor cost to people who could be getting more work done without having to coddle untalented assholes.
Okay but Claude is a fucking moron.
The issue is that every review adds a lot of delay. Won't a lot of alignment and pair programming be just as expensive in time?
Yes. This is the way. Declarative design contracts are the answer to A.I. coders. A team declares what they want, agents code it together with human supervision. Then code review is just answering the question "is the code conformant with the design contract?"<p>But. The design contract needs review, which takes time.
I wonder what delayed continuous release would be like. Trust folks to merge semi-responsibly, but have a two-week delay before actually shipping, to give yourself some time to find and fix issues.<p>Perhaps kind of a pain to inject fixes in, having to rebase the outstanding work. But I kind of like this idea of the org having responsibility to do what review it wants, without making every person have to corral all the cats to get all the check marks. Make it the org's challenge instead.
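A minimal sketch of how the soak delay could work, assuming a pipeline that can deploy an arbitrary commit: ship the newest commit on main that is already two weeks old, rather than HEAD.<p><pre><code>import subprocess

# Newest commit on main that is at least two weeks old; anything
# newer keeps soaking until it crosses the two-week line.
sha = subprocess.run(
    ["git", "rev-list", "-1", "--before=2 weeks ago", "main"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(sha)  # hand this to the deploy step instead of HEAD
</code></pre>Urgent fixes would still need a bypass lane, which is exactly where the rebase pain mentioned above comes in.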
Valve is one of the only companies that appears to understand this, as well as that individual productivity is almost always limited by communication bandwidth, and that communication burden grows combinatorially as nodes in the tree/mesh grow linearly [or with some derated exponent, since it doesn't need to be fully connected].
> The job of a code reviewer isn't to review code. It's to figure out how to obsolete their code review comment, that whole class of comment, in all future cases, until you don't need their reviews at all anymore<p>Amen brother
I wonder where the author worked that PRs get addressed in 5 hours. IME it's measured in units of days, not hours.<p>I agree with him anyway: if every dev felt comfortable hitting a stop button to fix a bug, then reviewing might not be needed.<p>The reality is that any individual dev will get dinged for not meeting a release objective.
I worked in a company where reviews took days. The CTO complained a lot about the speed, but we had decent code quality.<p>Now I work at a company where reviews take minutes. We have 5 lines of technical debt per 3 lines of code written. We spend months to work on complicated bugs that have made it to production.
My last FAANG team had a soft 4-hour review SLA, but if it was a complicated change then that might just mean someone acknowledging it and committing to reviewing it by a certain date/time. IIRC, if someone requested a review and you hadn't gotten to it by around the 3-hour mark you'd get an automated chat message "so-and-so has been waiting a while for your review".<p>Everyone was very highly paid, managers measured <i>everything</i> (including code review turnaround), and they frequently fired bottom performers. So, tradeoffs.
At the bottom of the page it says he is CEO of Tailscale.
I've yet to see a project where reviews are handled seriously. Both business and developers couldn't care less.
I have, and in each sprint we always had tickets for reviewing the implementation, which could take anywhere from an hour to 2 days.<p>The code quality was much better than in my current workplace where the reviews are done in minutes, although the software was also orders of magnitude more complex.
I worked somewhere that actually had a great way to deal with this. It only works in small teams though.<p>We had a "support rota", i.e. one day a week you'd be essentially excused from doing product delivery.<p>Instead, you were the dev to deal with bug triage, any code reviews, questions about the product, etc.<p>Any spare time was spent looking for bugs in the backlog to further investigate / squash.<p>Then when you were done with your support day you were back to sprint work.<p>This meant there was no ambiguity about who to ask for code review, and it limited / eliminated siloing of skills, since everyone had to be able to review anyone else's work.<p>That obviously doesn't scale to large teams, but it worked wonders for a small team.
Bonus points: reviews are not taken seriously in the legitimate sense, but a facade of seriousness consisting of picky complaints is put forth to reinforce hierarchy and gatekeeping
I’ve worked on teams like you describe and it’s been terrible. My current team’s SDLC is more along the 5-hour line - if someone hasn’t reviewed your code by the end of today, you bring it up in standup and have someone commit to doing it.
One thing that often gets dismissed is the value/effort ratio of reviews.<p>A review must be useful, and the time spent on reviewing, re-editing, and re-reviewing must improve the quality enough to warrant the time spent on it. Even long and strict reviews are worth it if they actually produce near-bugless code.<p>In reality, that's rarely the case. Too often, reviewing goes down the rabbit hole of various minutiae, and the time spent reaching a mutual compromise between what the programmer wants to ship and what the reviewer can agree to pass is not worth the effort. The time would be better spent on something else if the process doesn't yield substantial quality. Iterating a review over and over and over to hone it into one interpretation of perfection will only bump the change into the next 10x bracket in the wall-clock timeline mentioned in this article.<p>In the adage of "first make it work, then make it correct, and then make it fast", a review only needs to require that the change reaches the first step or, in other words, to prevent breaking something or the development going into an obviously wrong direction straight from the start. If the change works, maybe with caveats but still works, then all is generally fine enough that the change can be improved in follow-up commits. For this, the review doesn't need to go into thorough detail: a few comments to point the change in the right direction is often enough. That kind of review is a very efficient use of time.<p>Overall, in most cases a review should be a very short part of the development process. Most of the time should be spent programming, not in review churn. A review serves as a quick checkpoint that things are still going the right way, but it shouldn't dictate the exact path that should be used in order to get there.
Nice piece, and rings true. I also think startups and smaller organizations will be able to capture better value out of AI because they simply don't have all those approval layers.
I think this makes an assumption early on which is that things are serialized, when usually they are not.<p>If I complete a bugfix every 30 minutes, and submit them all for review, then I really don't care whether the review completes 5 hours later. By that time I have fixed 10 more bugs!<p>Sure, getting review feedback 5 hours later will force me to context switch back to 10 bugs ago and try to remember what that was about, and that might mean spending a few more minutes than necessary. But that time was going to be spent _anyway_ on that bug, even if the review had happened instantly.<p>The key to keeping speed up in slow async communication is just working on N things at the same time.
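The arithmetic behind this, as a toy sketch using the article's numbers (30-minute fix, 5-hour review wait); it ignores the context-switch cost and assumes reviewer capacity actually keeps up:<p><pre><code>CODE_MIN, REVIEW_WAIT_MIN, BUGS = 30, 300, 10

# Serialized: wait out each review before starting the next fix.
serialized = BUGS * (CODE_MIN + REVIEW_WAIT_MIN)

# Pipelined: keep coding while reviews are pending; the waits
# overlap, so only the last one adds wall-clock time.
pipelined = BUGS * CODE_MIN + REVIEW_WAIT_MIN

print(serialized / 60, pipelined / 60)  # 55.0 vs 10.0 hours
</code></pre>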
Excellent article. Based on personal experience, if you build cutting-edge stuff then you need great engineers and reviewers.<p>But for anything else, you just need an individual (not a team) who's okay (not great) at multiple things (architecting, coding, communicating, keeping costs down, testing their stuff). Let them build and operate something from start to finish without reviewing. Judge it by how well their product works.
Neither before coding agents nor after has any PR taken me 5 hours to review. Is the delay here coordination/communication issues, the "Mythical Mammoth" stuff? I could buy that.
The article is referring to the total time including delays. It isn’t saying that PR review literally takes 5 hours of work. It’s saying you have to wait about half a day for someone else to review it.
Which is a thing that depends very much on team culture. In my team it is perhaps 15 min for smaller fixes to get signoff. There is a virtuous feedback loop here - smaller PRs get faster reviews, but also mean more frequent PRs, which create more frequent occasions to check whether there is something new to review.
If I'm deep in coding flow the last thing I'm going to do is immediately jump on to someone else's PR. Half a day to a day sounds about right from when the PR is submitted to actually getting the green light
Does your team just context switch all the time? That sounds like a terrible place to work.
The PR won’t take 5 hours of work, but it could easily sit that long waiting for another engineer to willing to context switch from their own heads-down work.
Exactly. Even if I hammer the erstwhile reviewer with Teams/Slack messages to get it moved to the top of the queue and finished before the 5 hours are up, then all the other reviews get pushed down. It averages out, and the review market corrects.
Exactly. Can you get a lawyer on the phone now, or do you wait ~5 hours? How about a doctor appt. Or a vet appt. Or a mechanic visit.<p>Needing full human attention on a complex task from a pro who can only look at your thing has a wait time. It is worse when there are only 2 or 3 such people in the world you can ask!
The article specified wall-clock time. One-day turnaround is pretty typical if it's not urgent enough to demand immediate review; lots of people review incoming PRs as a morning activity.
I've had PRs that take me five hours to review. If your one PR is an entire feature that touches the database, the UI, and an API, and I have to do the QA on every part of it because as soon as I give the thumbs up it goes out the door to clients? Then it's gonna take a while, and I'm probably going to find a few critical issues, and then the loop starts again.
Some devs interrupt what they are doing when they see a PR in a Slack notification; most don't.<p>Most devs set aside some time, at most twice a day, for PRs. That's 5 hours at least.<p>Some PRs come in at the end of the day and will only get looked at the next day. That's more than 5 hours.<p>IME it's rare to see a PR get reviewed in under 5 hours.
I use a PR notifier chrome extension, so I have a badge on the toolbar whenever a PR is waiting on me. I get to them in typically <2 minutes during work hours because I tab over to chrome whenever AI is thinking. Sometimes I even get to browse HN if not enough PRs are coming and not too many parallel work sessions.
But there's more than one person that can review a PR.<p>If you work in a team of 5 people, and each one only reviews things twice a day, that's still less than 5 hours any way you slice it.
> "Mythical Mammoth"<p>Most excellent.
One pattern I've seen is that a team with a decently complex codebase will have 2-3 senior people who have all of the necessary context and expertise to review PRs in that codebase. They also assign projects to other team members. All other team members submit PRs to them for review. Their review queue builds up easily and average review time tanks.<p>Not saying this is a good situation, but it's quite easy to run into it.
That’s because most teams are doing engineering wrong.<p>The handover to a peer for review is a falsehood. PRs were designed for open source projects to gatekeep public contributors.<p>Teams should be doing trunk-based development, group/mob programming, and one-piece flow.<p>Speed is only one measure, and AI is pushing this further to an extreme with the volume of change and more code.<p>The quality aspect is missing here.<p>Speed without quality is a fallacy and it will haunt us.<p>Don’t focus on speed alone, and the need to always be busy and picking up the next item - focus on quality and throughput, keeping work in progress to a minimum (ideally 1). Deliver meaningful, reasoned change as a team, together.
This is a profound point but is review really the problem or is it the handoff that crosses boundaries (me to others, our team to other team, our org to outside our org)?
Communication overhead is the #1 schedule killer, in my experience.<p>Whenever we have to talk/write about our work, it slows things down. Code reviews, design reviews, status updates, etc. all impact progress.<p>In many cases, they are vital, and can’t be eliminated, but they can be streamlined. People get really hung up on tools and development dogma, but I've found that there’s no substitute for having experienced, trained, invested, technically-competent people involved. The more they already know, the less we have to communicate.<p>That’s a big reason that I have for preferring small meetings. I think limiting participants to direct technical members, is <i>really important.</i> I also don’t like regularly-scheduled meetings (like standups). Every meeting should be <i>ad hoc</i>, in my opinion.<p>Of course, I spent a majority of my career, at a Japanese company, where meetings are a currency, so fewer meetings is sort of my Shangri-La.<p>I’m currently working on a rewrite of an app that I originally worked on, for nearly four years. It’s been out for two years, and has been fairly successful. During that time, we have done a lot of incremental improvements. It’s time for a 2.0 rewrite.<p>I’ve been working on it for a couple of months, with LLM assistance, and the speed has been astounding. I’m probably halfway through it, already. But I have also been working primarily alone, on the backend and model. The design and requirements are stable and well-established. I know pretty much exactly what needs to be done. Much of my time is spent testing LLM output, and prompting rework. I’m the “review slowdown,” but the results would be disastrous, if I didn’t do it.<p>It’s a very modular design, with loosely-coupled, well-tested and documented components, allowing me to concentrate on the “sharp end.” I’ve worked this way for decades, and it’s a proven technique.<p>Once I start working on the GUI, I guarantee that the brakes will start smoking. All because of the need for non-technical stakeholder team involvement. They <i>have</i> to be involved, and their involvement will make a huge difference (like a Graphic UX Designer), but it will still slow things down. I have developed ways to streamline the process, though, like using TestFlight, way earlier than most teams.
That's exactly why I think vibecoding uniquely benefits solo and small team founders. For anything bigger, work is not the bottleneck, it's someone's lack of imagination.<p><a href="https://capocasa.dev/the-golden-age-of-those-who-can-pull-it-off" rel="nofollow">https://capocasa.dev/the-golden-age-of-those-who-can-pull-it...</a>
Well, this all makes sense for application code, but not necessarily for infrastructure changes. Imagine a failed Terraform merge that deletes the production database or opens inbound traffic to 0.0.0.0/0, and you can't undo it for 10 minutes. In my opinion, you need to pay attention to the narrow scope specific to a given project.
Try to imagine a deployment/CI system where that isn't possible. That's what the post is asking.<p>* Maybe you don't have privileges to delete the database<p>* Maybe your CI environments are actually high fidelity, and will fail when there is no DB<p>* Maybe destructive actions require further review<p>* Maybe your service isn't exposed to the public internet, and exposing to 0.0.0.0/0 isn't a problem.<p>* Maybe we engineer our systems to have trivial instant undo, and deleting a DB triggers an undo<p>Our tooling is kind of crappy. There's a lot we can do.
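As a hedged sketch of the "destructive actions require further review" idea: Terraform can emit a plan as JSON (`terraform show -json tfplan`), and a small CI gate can refuse to auto-apply anything that deletes a resource.<p><pre><code>import json, sys

# Reads the JSON plan (terraform show -json tfplan > plan.json)
# and blocks auto-apply if anything would be destroyed.
plan = json.load(open(sys.argv[1]))

destructive = [
    rc["address"]
    for rc in plan.get("resource_changes", [])
    if "delete" in rc["change"]["actions"]
]

if destructive:
    print("plan destroys resources, escalating to human review:")
    print("\n".join(destructive))
    sys.exit(1)  # non-zero exit fails the pipeline stage
</code></pre>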
I broadly agree with this, it really is all about trust. Just, as a company scales it’s hard to make sure that everybody in the team remains trustworthy – it isn’t just about personality and culture, it’s also about people actually having the skill, motivation, and track record of doing good work efficiently. Maybe AI‘s greatest value will be to allow teams to stay small, which reduces the difficulty of maintaining trust.
Reviewing things is fast and smooth if things are small. If you have all the involved parties stay in the loop, review happens in real time. Review is only problematic if you split the do and review steps. The same applies to AI coding: you can choose to pair-program with it, and then it's actually helpful, or you can have it generate 10k lines of code you have no way of reviewing. You just need people to understand that switching context kills productivity. If more things are happening at the same time and your memory is limited, the time spent on load/save makes it slower than just doing one thing at a time and staying in the loop.
Honestly, if I'm just following what a single LLM is doing, I'm arguably slower than doing it myself, so I'd say that approach isn't very useful for me.<p>I prefer to review the plan (this is more to flush out my assumptions about where something fits in the codebase and verify I communicated my intent correctly).<p>I'll loosely monitor the process if it's a longer one - then I review the artifacts. This way I can be doing 2/3 things in parallel, using other agents or doing meetings/prod investigation/making coffee/etc.
I find this to be true of expensive approvals as well.<p>If I can approve something without review, it’s instant. If it requires only my immediate manager, it takes a day. Second level takes <i>at least</i> ten days. Third level trivially takes at least a quarter (at least two if approaching the end of the fiscal year). And the largest proposals I’ve pushed through at large companies, going up through the CEO, take over a year.
Managers are expected to say that we should be productive, yet they're responsible for the framework that slows everyone down, and it's quite clear they're perfectly fine with this framework. I'm not saying that's good or bad, because it's complicated.
A few years ago there was a thread about "How complex systems fail" here on HN[1], and one aspect of it (rule 9) is about how individuals have to balance safety and productivity, and how they are judged differently depending on the context (judged after the fact on the safety aspect, but before any accident on the productivity aspect).<p>The linked page is short and quite enlightening, but here is the relevant passage:<p><pre><code>> Rule 9: Human operators have dual roles: as producers & as defenders against failure.
>
> The system practitioners operate the system in order to produce its desired product and also work to forestall accidents. This dynamic quality of system operation, the balancing of demands for production against the possibility of incipient failure is unavoidable. Outsiders rarely acknowledge the duality of this role. In non-accident filled times, the production role is emphasized. After accidents, the defense against failure role is emphasized. At either time, the outsider’s view misapprehends the operator’s constant, simultaneous engagement with both roles.
</code></pre>[1] <a href="https://news.ycombinator.com/item?id=32895812">https://news.ycombinator.com/item?id=32895812</a>
In my experience, a culture where teammates prioritise review times (both by checking for updates in GH a few times a day, and by splitting changes aggressively into smaller patches) is reflected in much faster overall progress. It's definitely a culture thing; there's nothing technically or organisationally difficult about implementing it, it just requires people to consider team velocity more important than personal velocity.
Let's say a teammate is writing code to do geometric projection of streets and roads onto live video. Another teammate is writing code to do automated drone pursuit of cars. Let's say I'm over here writing auth code, making sure I'm modeling all the branches which might occur in some order.<p>To what degree do we expect intellectual peerage from someone just glancing into this problem because of a PR? To be a proper intellectual peer of someone studying the problem, it's quite reasonable to expect you'd basically have to double your efforts.
If the team is that small and working on things that are that disparate, then it is also very vulnerable to one of those people leaving, at which point there's a whole part of the project that nobody on the team has a good understanding of.<p>Having somebody else devote enough time to being up to speed enough to do code review on an area is also an investment in resilience so the team isn't suddenly in huge difficulty if the lone expert in that area leaves. It's still a problem, but at least you have one other person who's been looking at the code and talking about it with the now-departed expert, instead of nobody.
This is an unusually low overlap per topic; it probably needs a different structure than traditional PRs to have the best chance of benefiting from more eyes... higher-scope planning, or something like longer but intermittent pair programming.<p>Generally, if the reviewer is not familiar with the content, asynchronous line-by-line reviews are of limited value.
The 10x estimate tracks — I've seen it too. The underlying mechanism is queuing theory: each approval step is a single-server queue with high variance inter-arrival times, so average wait explodes non-linearly. AI makes the coding step ~10x faster but doesn't touch the approval queue. The orgs winning right now are the ones treating async review latency as a first-class engineering metric, same way they treat p99 latency for services.
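To make the non-linearity concrete, here is the textbook M/M/1 result (a simplifying assumption; real review queues aren't memoryless, but the shape holds):<p><pre><code># M/M/1 mean wait in queue: Wq = rho / (mu * (1 - rho)),
# where mu is reviews completed per hour and rho is utilization.
mu = 1 / 5  # a reviewer who finishes one 5-hour review at a time
for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {rho:.0%}: average wait {rho / (mu * (1 - rho)):.0f} h")
</code></pre>Making coding 10x faster raises the arrival rate while mu stays fixed, which pushes utilization toward 1, exactly where the wait goes vertical.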
> Code a simple bug fix: 30 minutes<p>> Get it code reviewed by the peer next to you: 300 minutes → 5 hours → half a day<p>If it takes 5 hours for a peer to review a simple bugfix, your operation is dysfunctional.
It's rare that devs are on standby, waiting for a PR to review. Usually they are working on their own PR, are in meetings, or have focus time.<p>We've talked a lot about the costs of context switches, so it's reasonable to finish your work before switching to the review.
People are busy, and small bugfixes are usually not that critical. If you make everyone drop everything to review everything, <i>that</i> is much more dysfunctional.
nobody will immediately jump on your code review
Sure, but five hours is a lot of time, and a small fix takes little to review.<p>So, 1 hour? Sure. Two hours? Ok.
But five hours means you only look at your teammates' code once a day.<p>It's ok for a process where you work on something for a week and then come back for reviews, but then it's silly to complain about overhead.
This reads like a scattered mind with a few good gems, a few assumptions that are incorrect but baked into the author’s world view, and loose coherence tying it all together. I see a lot of myself in it.<p>I’ll cover one of them: layers of management or bureaucracy do not reduce risk. They create inaction, which gives the appearance of reducing risk, until some startup comes and gobbles up your lunch. Upper management knows it’s all bullshit, and the game-theoretic play is to say no to things, because you’re not held accountable if you say no; so they say no and milk the money printer until the company stagnates and dies. Then they repeat at another company (usually with a new title and promotion).
Waiting for a few days of design review is a pain that is easy to avoid: all we need is to be ready to spend a few months building a potentially useless system.
Meanwhile there are people who, as we speak, say that AI will do review and all we need to do is to provide quality gates...
I don’t agree that AI can’t fix this. It is too easy to dismiss.<p>With AI, my reviewing task is to check the high-level design choices and forget about reviewing the low-level details. It’s much simpler.
A lot of this goes away when the person who builds also decides what to build.
> Now you either get to spend 27 minutes reviewing the code yourself in a back-and-forth loop with the AI (this is actually kinda fun); or you save 27 minutes and submit unverified code to the code reviewer, who will still take 5 hours like before, but who will now be mad that you’re making them read the slop that you were too lazy to read yourself.<p>That's me. I'm the mad reviewer. Each time I ranted against AI on this site, it was after reviewing sloppy code.<p>Yes, Claude Opus is better on average than my juniors/new hires. But it will make the same mistake twice. I _need_ you to fucking review your own generated code and catch the obvious issues before you submit it to me. Please.
The descent into madness cycle isn't new to AI. Better IDEs, better frameworks, better languages — each one sped up step 1 and the bottleneck moved further downstream. AI just makes the contrast so extreme that the review bottleneck becomes impossible to ignore.<p>Maybe that's the real contribution of AI coding: it finally makes the actual problem visible.
In my experience, good mature organisations have clear review processes to ensure quality, improve collaboration and reduce errors and risk. This is regardless of field. It does slow you down - not 10x - but the benefits outweigh the downsides in the long run.<p>The worst places I’ve worked have a pattern where someone senior drives a major change without any oversight, review or understanding causing multiple ongoing issues. This problem then gets dumped onto more junior colleagues, at which point it becomes harder and more time consuming to fix (“technical debt”). The senior role then boasts about their successful agile delivery to their superiors who don’t have visibility of the issues, much to the eye-rolls of all the people dealing with the constant problems.
I totally agree with his ideas, but somehow he seems to just be stating the obvious: startups move faster than big orgs, you can solve a problem by dividing it into smaller problems - if possible - and AI experimentation is cheap.
What makes me slower at the moment is the AI slop my team lead posts into reviews. I have to spend time arguing why a comment isn't valid.
As they say: an hour of planning saves ten hours of doing.<p>You don't need so much code or maintenance work if you get better requirements upfront. I'd much rather implement things at the last minute knowing what I'm doing than cave in to the usual incompetent middle manager demands of "starting now to show progress". There's your actual problem.
If an hour of planning always saved ten hours of work, software schedules would be a whiteboard exercise.<p>Instead everyone wants perfect foresight, but systems are full of surprises you only find by building and the cost of pushing uncertainty into docs is that the docs rot because nobody updates them. Most "progress theater" starts as CYA for management but hardens into process once the org is too scared to change anything after the owners move on.
> As they say: an hour of planning saves ten hours of doing.<p>In software it's the opposite, in my experience.<p>> You don't need so much code or maintenance work if you get better requirements upfront.<p>Sure, and if you could wave a magic wand and get rid of all your bugs that would cut down on maintenance work too. But in the real world, with the requirements we get, what do we do?
> In software it's the opposite, in my experience.<p>That's been my experience as well: ten hours of doing will definitely save you an hour of planning.<p>If you aren't getting requirements from elsewhere, at least document the set of requirements you <i>think</i> you're working towards, and post them for review. You sometimes get new useful requirements <i>very fast</i> if you post "wrong" ones.
I think what they meant is you “can save 10 hours of planning with one hour of doing”.<p>And I think this has become even more so in the age of AI, because there are even more unknown unknowns, which are harder to discover while planning but easy while “doing”, and that “doing” itself is so much more streamlined.<p>In my experience no amount of planning will de-risk a software engineering effort; what works is making sure that coming back, refactoring, or switching tech is less expensive, which allows you to rapidly change the approach when you inevitably discover some roadblock.<p>You can read all the docs during planning phases, but you will stumble on some undocumented behaviour / bug / limitation every single time, and then you are back to the drawing board. The faster you can turn that around, the faster you can adjust and go forward.<p>I really like the famous quote from Churchill - “Plans are useless, planning is essential”
> I really like the famous quote from Churchill- “Plans are useless, planning is essential”<p>I really like Churchill’s second famous quote: “What the fuck is software, lol”.
> I think what they meant is you “can save 10 hours of planning with one hour of doing”<p>I know what they meant, and I also meant the thing I said instead. I have seen many, many people forge ahead on work that could have been saved by a bit more planning. Not <i>overplanning</i>, but doing a <i>reasonable</i> amount of planning.<p>Figuring out <i>where</i> the line is between planning and "just start trying some experiments" is a matter of experience.
Planning includes the prototype you build with AI.
>> Now you either get to spend 27 minutes reviewing the code yourself in a back-and-forth loop with the AI (this is actually kinda fun); or you save 27 minutes and submit unverified code to the code reviewer, who will still take 5 hours like before, but who will now be mad that you’re making them read the slop that you were too lazy to read yourself. Little of value was gained.<p>This seems to check out, and it's the reason why I can't reconcile the industry's claims about worker replacement with reality. I still wonder when a reckoning will come, though. Seems long overdue in the current environment.
> I still wonder when a reckoning will come, though. Seems long overdue in the current environment<p>Never. Not until 1-10 person teams start disrupting enterprises (legacy banks, payments systems, consultancies).<p>“Why”, you ask? Because it’s a house of cards. If engineers become redundant, then we don’t need teams. If we don’t need teams, then we don’t need team leads/PMs/POs and others; if we don’t need middle management, then we don’t need VPs and others. All of those layers will eventually catch up to what’s going on and kill any productivity gains via bureaucracy.
I don't agree with this take in the article. One person with Claude Code can replace a team of devs. It resolves many issues, such as the tension between devs wanting to focus and devs wanting their peers to put aside their task to review their pull requests. Claude generates the code and the human reviews it. There's no delay in the back-and-forth unlike in a team of humans. There's no ego and there's no context switching fatigue. Given that code reviewing is a bottleneck, it's feasible that one person can do it by themselves. And Claude can certainly generate working code at least 10x faster than any dev.
You’re talking from an idealistic requirements - input - programming - output standpoint. That’s not how the world operates. Egos are “important”; politics, bureaucracy, all of those are essential parts of organizations. LLMs don’t change that, and without changing that there’s no chance at all. Previously coding was maybe 0.1 of the bottleneck; now it’s 0.07.
Great point! We’ve been shifting quality left in human teams for decades now, and there’s still territory there that few (no?) teams have explored. AI is probably where that unexplored terrain becomes worthwhile to explore.
Building software is now so easy that every comment on HN has turned into a growth-hacking sales pitch.
> I know what you're thinking. Come on, 10x? That’s a lot. It’s unfathomable. Surely we’re exaggerating.<p>See this rarely known trick! You can be up to 9x more efficient if you code something else while you wait for review.<p>> AI<p><i>projectile vomits</i><p>Fuck engineering, let's work on methods to make the artificial retard more efficient!
From the article:<p>1. Whoa, I produced this prototype so fast! I have super powers!<p>2. This prototype is getting buggy. I’ll tell the AI to fix the bugs.<p>3. Hmm, every change now causes as many new bugs as it fixes.<p>4. Aha! But if I have an AI agent also review the code, it can find its own bugs!<p>5. Wait, why am I personally passing data back and forth between agents<p>6. I need an agent framework<p>7. I can have my agent write an agent framework!<p>8. Return to step 1<p>The author seems to imply this is recursive when it isn't. When you have an effective agent framework, you can ship more high-quality code quickly.
This is one of the reasons I'm so interested in sandboxing. A great way to reduce the need for review is to have ways of running code that limit the blast radius if the code is bad. Running code in a sandbox can mean that the worst that can happen is a bad output as opposed to a memory leak, security hole or worse.
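A minimal sketch of one cheap layer, assuming POSIX (the script name is hypothetical): cap CPU and memory on the child process and kill it on a timeout. Real isolation of network and filesystem needs containers or seccomp on top of this.<p><pre><code>import resource, subprocess, sys

def cap_resources():
    # Runs in the child just before exec: 5 CPU-seconds, 256 MiB.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))

result = subprocess.run(
    [sys.executable, "untrusted.py"],  # hypothetical generated script
    preexec_fn=cap_resources,
    capture_output=True,
    timeout=10,  # wall-clock kill switch
)
print(result.returncode, result.stdout[:200])
</code></pre>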
Isn’t “bad output” already the worst case? Pre-LLMs, correct output was table stakes.<p>You expect your calculator to always give correct answers, your bank to always transfer your money correctly, and so on.
I've seen plenty of decision makers act on bad output from human employees in the past. The company usually survives.
> Isn’t “bad output” already worst case?<p>Worst case in a modern agentic scenario is more like "drained your bank account to buy bitcoin and then deleted your harddrive along with the private key"<p>> Pre-LLMs correct output was table stakes<p>We're only just getting to the point where we have languages and tooling that can reliably prevent segfaults. Correctness isn't even on the table, outside of a few (mostly academic) contexts
And what if the bad output leads to a decision maker making a bad decision that takes down your company or kills your relative?
I think the problem is the shape of review processes. People higher up in the corporate food chain are needed to give approval on things. These people also have to manage enormous teams with their own complexities. Getting on their schedule is difficult, and giving you a decision isn't their top priority, slowing down time to market for everything.<p>So we will need to extract the decision-making responsibility from people management and let the decision maker be exclusively focused on reviewing inputs, approving or rejecting. Under an SLA.<p>My hypothesis is that the future of work in tech will be a series of these input/output queue reviewers. It's going to be really boring, I think. Probably like how it's boring being a factory robot monitor.
If you save 3 hours building something with agentic engineering and that PR sits in review for the same 30 hours or whatever it would have spent sitting in review if you handwrote it, you’re still saving 3 hours building that thing.<p>So in that extra time, you can now stack more PRs that still have a 30 hour review time and have more overall throughput (good lord, we better get used to doing more code review)<p>This doesn’t work if you spend 3 minutes prompting and 27 minutes cleaning up code that would have taken 30 minutes to write anyway, as the article details, but that’s a different failure case imo
> So in that extra time, you can now stack more PRs that still have a 30 hour review time and have more overall throughput<p>Hang on, you think that a queue that drains at a rate of $X/hour can be filled at a rate of 10x$X/hour?<p>No, it cannot: it doesn't matter how fast you fill a queue if the queue has a constant drain rate; sooner or later you are going to hit the bounds of the queue, or the items taken off the queue are too stale to matter.<p>In this case, filling a queue at a rate of 20 items per hour (one every 3 minutes) while it drains at a rate of 1 item every 5 hours means that after a single 8-hour day there are roughly 160 PRs queued, and the newest of them will wait on the order of 800 hours for its review.<p>IOW, after a single day the time-to-review is measured in weeks, and every further day of this adds another ~160 PRs to the backlog.
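Spelling that out as a sketch (assuming an 8-hour day and the article's 5-hour review):<p><pre><code>SUBMIT_PER_HOUR = 20       # one PR every 3 minutes
REVIEWS_PER_HOUR = 1 / 5   # one review finished every 5 hours
HOURS_PER_DAY = 8

backlog = HOURS_PER_DAY * (SUBMIT_PER_HOUR - REVIEWS_PER_HOUR)
wait = backlog / REVIEWS_PER_HOUR  # hours until the newest PR is reviewed
print(f"{backlog:.0f} PRs queued, newest waits ~{wait:.0f} h"
      f" (~{wait / HOURS_PER_DAY:.0f} working days)")
</code></pre>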
This is the fundamental issue in my current situation with AI code generation.<p>There are some strategies that help: a lot of the AI directives need to go towards making the code actually easy to review. A lot of it sits around clarity and granularity (code should be committed <i>primarily</i> in reviewable chunks - units of work that make sense for review) rather than whatever you would have done previously, when code <i>production</i> was the bottleneck. Similarly, AI use needs to be weighted not just more towards tests, but towards tests that concretely and clearly answer questions that come up in review (what happens on this boundary condition? or if that variable is null? etc). Finally, changes need to be stratified along lines of risk rather than code modularity or other dimensions. That is, if a change is evidently risk-free (in the sense of "even if this IS broken, it doesn't matter"), it should be able to be rapidly approved and merged. Only things where it actually matters if they're wrong should be blocked.<p>I have a feeling there are whole areas of software engineering where best practices are just operating on inertia and need to be reformulated now that the underlying cost dynamics have fundamentally shifted.
>Finally, changes need to be stratified along lines of risk rather than code modularity or other dimensions.<p>Why don't those other dimensions, and especially the code modularity, already reflect the lines of business risk?<p>Lemme guess, you cargo culted some "best practices" to offload <i>risk awareness</i>, so now your code is organized in "too big to fail" style and matches your vendor's risk profile instead of yours.
> Why don't those other dimensions, and especially the code modularity, already reflect the lines of business risk?<p>I guess the answer (if you're really asking seriously) is that previously when code production cost so far outweighed everything else, it made sense to structure everything to optimise efficiency in that dimension.<p>So if a change was implemented, the developer would deliver it as a functional unit that might cut across several lines of risk (low risk changes like updating some CSS sitting along side higher risk like a database migration, all bundled together). Because this was what made it fastest for the developer to implement the code.<p>Now if AI is doing it, screw how easy or fast it is to make the change. Deliver it in review chunks.<p>Was the original method cargo culted? I think most of what we do is cargo culted regardless. Virtually the entire software industry is built that way. So probably.
You are considering a good-faith environment where GP cares about throughput of the queue.<p>I think GP is thinking in terms of being incentivized by their environment to demonstrate an image of high <i>personal</i> throughput.<p>In a dysfunctional organization one is forced to overpromise and underdeliver, which the AI facilitates.
If your team's bottleneck is code review by senior engineers, adding more low quality PRs to the review backlog will not improve your productivity. It'll just overwhelm and annoy everyone who's gotta read that stuff.<p>Generally if your job is acting as an expensive frontend for senior engineers to interact with claude code, well, speaking as a senior engineer I'd rather just use claude code directly.
Except that when you have 10 PRs out, it takes longer for people to get to them, so you end up backlogged.