Few things give me more dread than reviewing the mediocre code written by an overconfident LLM, but arguing in a PR with an overconfident LLM that its review comments are wrong is up there.
I can't agree more. I'm torn on LLM code reviews. On the one hand, review is a place where they make a lot of sense: they can quickly catch silly human errors like misspelled variables and whatnot.

On the other hand, the amount of flip-flopping they go through is unreal. I've witnessed numerous instances where either Cursor's Bugbot or Claude has found a bug and recommended a reasonable fix. The fix gets implemented, and then the LLM argues the case against it and requests that the code be reverted. Out of curiosity to see what happens, I've reverted the code, only to be given the exact same recommendation as in the first pass.

I can foresee this becoming a circus for less experienced devs, so I turned off the auto code reviews and put them in request-only mode with a GH action (roughly the shape sketched below). That way I retain some semblance of sanity and keep the PR comment history from becoming cluttered with overly verbose comments from an agent.
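For anyone who wants the same setup, a minimal sketch of what request-only mode can look like: the workflow only fires when someone comments a trigger phrase on a PR. The phrase "/llm-review" and the scripts/llm-review.sh wrapper are illustrative placeholders, not any vendor's actual integration.

    # Hypothetical on-demand review workflow. Nothing runs on push or
    # PR open; a reviewer has to ask for it explicitly in a comment.
    name: on-demand-llm-review
    on:
      issue_comment:
        types: [created]
    jobs:
      review:
        # issue_comment fires for both issues and PRs; the
        # github.event.issue.pull_request field filters to PRs only.
        if: ${{ github.event.issue.pull_request && contains(github.event.comment.body, '/llm-review') }}
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Run LLM review
            # llm-review.sh is a hypothetical wrapper around whatever
            # review tool you use; it gets the PR number so it can
            # fetch the diff and post its comments itself.
            run: ./scripts/llm-review.sh "${{ github.event.issue.number }}"

This keeps the bot out of every PR by default while leaving it one comment away when you actually want a second opinion.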
I have no problem accepting the odd comment that actually highlights a flaw and dismissing the rest, because I can use discretion: I understand what it has pointed out and whether it's legit.

The dread is explaining this to someone less experienced, because it's not helpful to just tell them to use their gut. So I end up highlighting the comments that are legit and pointing out the ones that aren't, to show how I'm approaching them.

It turns out this is a waste of time and nobody learns anything from it (because they're using an LLM to write the code anyway), so it's better to just disable the integration and maybe run a review pass locally if you care. All of this has made my responsibility as a mentor much more difficult.
The battle I'm fighting at the moment is that our glorious engineering team, who are the lowest-bidding external outsourcer, make the LLM spew look pretty good. The reality, of course, is that both are terrible, but no one wants to hear that, only that the LLM is better than the humans, because that's the narrative they need to maintain.

The relative quality is better, but the absolute quality is not, and absolute quality is the only thing I care about.
Do you have actual experience with Bugbot? It's live in our org and is actually pretty good: almost none of its comments are frivolous or wrong, and it finds genuine bugs that most reviewers miss. That's unlike Graphite and Copilot, so no one is glazing AI for AI's sake.

Bugbot is now a valuable part of our software development process. If you have genuine examples showing that we're deluding ourselves or just haven't hit the roadblock yet, I'd love to see them.