Sorry, so the tool is now even circumventing human review? Is that the goal?<p>So the agent can now merge shit by itself?<p>Just let the damn thing push to prod by itself at this point.
I don’t think “ready to merge” necessarily means the agent actually merges. Just that it’s gone as far as it can automatically. It’s up to you whether to review or merge at that point, depending on the project and the stakes.<p>If there are CI failures or obvious issues that another AI can identify, why not have the agent keep going until those are resolved? This tool just makes that process more token-efficient. Seems pretty useful to me.
That's EXACTLY right. "Ready to merge" is an important gate, but it is very stupid to just merge everything without further checks/testing by a human!
This tool seems agent-oriented, built for agents to merge rather than merely check readiness. In that regard, the page doesn't mention human reviewers at all, only AI reviewers. Honestly, I wouldn't be surprised if the author, someone seemingly running fully agentic workflows, didn't even consider human reviewers. If it's AI start-to-end*, then yes, it could quite possibly push directly to master without much difference.<p>Call me pessimistic, but considering [1][2][3] (and other similar articles/discussions), I believe this tool will be most useful to AI PR spammers the moment it is modified to also parse non-AI PR comments.<p>*Random question: is it start-to-end or end-to-end?<p>edit: P.S. Agree that it's a useful tool, given its design goals, though.<p>[1]: <a href="https://old.reddit.com/r/opensource/comments/1q3f89b/" rel="nofollow">https://old.reddit.com/r/opensource/comments/1q3f89b/</a>
[2]: <a href="https://devansh.bearblog.dev/ai-slop/" rel="nofollow">https://devansh.bearblog.dev/ai-slop/</a>
[3]: <a href="https://etn.se/index.php/nyheter/72808-curl-removes-bug-bounties.html" rel="nofollow">https://etn.se/index.php/nyheter/72808-curl-removes-bug-boun...</a> (currently trending first page)
Someone’s gonna think about wiring all this up to Linear or Jira, and there’ll be a whole new set of vulnerabilities created from malicious bug reports.
At scale, I don't see a net negative in AI merging "shit by itself" if the developer (or the agent) ensures sufficient e2e, integration and unit test coverage prior to every merge, and if in return I get my team to crank out features at 10x speed.<p>The reality is that probably 99.9999% of code bases on this earth (though this might drop soon, who knows) pre-date LLMs, and organizing them in a way that lets coding agents produce consistent results from sprint to sprint will require big plumbing work from all dev teams. That will include refactoring, documentation improvements, building consensus on architectures and, of course, reshaping the testing landscape. So SWEs will have a lot of dirty work to do before we reach the aforementioned "scale".<p>However, a lot of platforms are being built from the ground up today, in a post-CC (Claude Code) era. And they should be ready to hit that scale today.
Yup! Software engineers aren't going to be out of work anytime soon, but I'm acting more like a CTO or VPE with a team of agents now, rather than just a single dev with a smart intern.
I hate this paradigm because it pits me against my tools as if we're adversaries. The tools are prone to rewrite or even delete the tests, so we have to write other tools to sandbox agents from each other and check each other's work, and I just don't see a way to get deterministically good results over just building shit myself. It comes down to needing high trust in my tools to feel confident in what we're shipping.
The key is that, at the end of the day, productivity is king, which is a polite term for cutting head count and/or delivering at a ridiculously higher velocity.<p>You can always get deterministically good results at your own pace. But most likely you won't achieve that at the speed and scale of a coding agent running in 4-5 worktrees, 24/7, without food or toilet breaks, especially if the latter mostly helps achieve the product/business goals at an "OK" quality (in which case you will perhaps be measured by how well you can steer these agents to elevate that quality above "OK" without sacrificing scale too much).
Man, if you are so frustrated by AI, just stop reading articles about it if you won't even take the time to read them properly.<p>And yes, there are plenty of use cases where AI code doesn't hurt anyone even if it gets merged automatically...<p>See it as an interesting new field of R&D...
In some workflows it's helpful for the full loop to be automated so that the agent can test if what's done works.<p>And you can do a more exhaustive test later, after the agents are done running amok to merge various things.
It sounds like the goal is to get the code to human review without it being obviously broken in CI, but the agent has no idea that that's the case.
Yeah, it is about making sure that EVERY actionable PR comment gets addressed - whether by fixing, resolving, creating a new issue, commenting that it is a won't-fix, or blocking for human review - and then giving you a clear deterministic check you can run to reliably enforce your policy.
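To make that concrete, here's a minimal sketch of what such a deterministic check could look like. The state names and data shape are hypothetical, not taken from the actual tool: the idea is just that a PR passes only when every review thread has landed in one of the allowed terminal states.

```python
# Sketch of a deterministic "every comment addressed" policy gate.
# State names below are illustrative, not the tool's actual vocabulary.
TERMINAL_STATES = {"fixed", "resolved", "issue_filed", "wont_fix", "needs_human"}

def pr_ready_to_merge(threads):
    """Return (ok, offenders): ok is True only if every review thread
    has reached an allowed terminal state; offenders lists the rest."""
    offenders = [t["id"] for t in threads if t["state"] not in TERMINAL_STATES]
    return (len(offenders) == 0, offenders)
```

Because the check is a pure set-membership test, the same policy can be enforced in CI and locally with identical results, e.g. `pr_ready_to_merge([{"id": 1, "state": "fixed"}, {"id": 2, "state": "open"}])` reports thread 2 as unaddressed.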
No.<p>The linked page explains how this fits into a development workflow.<p>e.g.<p>> A reviewer wrote “consider using X”… is that blocking or just a thought?<p>> AMBIGUOUS - Needs human judgment (suggestions, questions)
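In the spirit of that quoted triage, here's a crude sketch of the classification step. A real tool would presumably use an LLM rather than keyword matching, and these hint lists are purely illustrative:

```python
# Sketch: keyword-based triage of review comments into blocking /
# ambiguous / non-blocking buckets. Hint lists are illustrative only.
BLOCKING_HINTS = ("must", "blocker", "breaks", "security")
AMBIGUOUS_HINTS = ("consider", "maybe", "thought", "?")

def triage(comment: str) -> str:
    text = comment.lower()
    if any(h in text for h in BLOCKING_HINTS):
        return "BLOCKING"       # agent must resolve before merge
    if any(h in text for h in AMBIGUOUS_HINTS):
        return "AMBIGUOUS"      # needs human judgment
    return "NON_BLOCKING"       # informational, safe to acknowledge
```

So "consider using X" lands in AMBIGUOUS and gets routed to a human, which matches the page's example above.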
Right! It doesn't assume that all comments are actionable or need to be worked on. However, if you allow anyone to comment on your PRs, it could be a malicious vector. So don't let just anyone review PRs on projects that you care about!!!
I’m not saying this is, but if I were a malicious state actor, that’s exactly the kind of thing I’d like to see in widespread use.
No, it just prepares the PR - it doesn't automatically merge. That would be very dangerous, imho!