24 comments

  • Negitivefrags 2 hours ago
    At my company I just tell people “You have to stand behind your work”
    And in practice that means that I won’t take “The AI did it” as an excuse. You have to stand behind the work you did even if you used AI to help.
    I neither tell people to use AI, nor tell them not to use it, and in practice people have not been using AI much for whatever that is worth.
    • themacguffinman 1 hour ago
      The difference I see between a company dealing with this as opposed to an open source community dealing with this is that the company can fire employees as a reactive punishment. Drive-by open source contributions cost very little to lob over and can come from a wide variety of people you don't have much leverage over, so maintainers end up making these specific policies to *prevent* them from having to react to the thousandth person who used "The AI did it" as an excuse.
      • osigurdson 24 minutes ago
        When you shout "use AI or else!" from a megaphone, don't expect everyone to interpret it perfectly. Especially when you didn't actually understand what you were saying in the first place.
    • EE84M3i 1 hour ago
      > I neither tell people to use AI, nor tell them not to use it, and in practice people have not been using AI much for whatever that is worth.
      I find this bit confusing. Do you provide enterprise contracts for AI tools? Or do you let employees use their personal accounts with company data? It seems all companies have to be managing this somehow at this point.
    • darth_avocado 2 hours ago
      > At my company I just tell people “You have to stand behind your work”
      Since when has that not been the bare minimum? Even before AI existed, and even if you did not work in programming at all, you sort of have to do that as a bare minimum. Even if you use a toaster and your company guidelines suggest you toast every sandwich for 20 seconds, if following every step as per training results in a lump of charcoal for bread, you can’t serve it up to the customer. At the end of the day, you make the sandwich, you’re responsible for making it correctly.
      Using AI as a scapegoat for sloppy and lazy work needs to be unacceptable.
      • Negitivefrags 2 hours ago
        Of course it’s the minimum standard, and it’s obvious if you view AI as a tool that a human uses.
        But some people view it as a separate entity that writes code for you. And if you view AI like that, then “The AI did it” becomes an excuse that they use.
        • atoav 2 hours ago
          "Yes, but *you* submitted it to us."
          If you're illiterate and can't read, maybe don't submit the text someone has written for you if you can't even parse the letters.
          • fourthark 1 hour ago
            The policy in TFA is a nicer way of saying that.
      • fwipsy 2 hours ago
        Bad example. If the toaster carbonized bread in 20 seconds it's defective, likely unsafe, possibly violates physics, certainly above the pay grade of a sandwich-pusher.
        Taking responsibility for outcomes is a powerful paradigm but I refuse to be held responsible for things that are genuinely beyond my power to change.
        This is tangential to the AI discussion though.
        • darth_avocado 2 hours ago
          > If the toaster carbonized bread in 20 seconds it's defective, likely unsafe, possibly violates physics, certainly above the pay grade of a sandwich-pusher.
          If the toaster is defective, not using it, figuring out how to use it if it's still usable, or getting it replaced by reporting it as defective are all well within the pay grade of a sandwich pusher, as well as part of their responsibilities.
          And you're still responsible for the sandwich. You can't throw up your arms and say "the toaster did it". And that's where it's not tangential to the AI discussion.
          The toaster malfunctioning is beyond your control, but whether you serve up the burnt sandwich is absolutely within your control, and that is what you will be, and should be, held responsible for.
        • dullcrisp 1 hour ago
          No it’s not. If you burn a sandwich, you make a new sandwich. Sandwiches don’t abide by the laws of physics. If you call a physicist and tell them you burnt your sandwich, they won’t care.
        • atoav 1 hour ago
          I think it depends on the pay. You pay below the *living* wage? Better live with your sla.. ah employees.. serving charcoal. You pay them well above the living wage? Now we start to get into *they should care* territory.
    • anonzzzies 2 hours ago
      But isn't "AI did it" an immediate you're-out thing? If you cannot explain why something you committed to git is made the way it is, we can just replace you with AI, right?
      • EagnaIonat 1 hour ago
        > we can just replace you with AI right?
        Accountability and IP protection are probably the only things saving someone in that situation.
      • tjr 1 hour ago
        Why stop there? We can replace git with AI too!
        • ronsor 1 hour ago
          If you generate the code each time you need it, all version control becomes obsolete.
          • verbify 1 hour ago
            They'll version control the prompts because the requirements change.
            • ronsor 1 hour ago
              Not if we AI-generate the requirements!
    • jeroenhd 1 hour ago
      Some people who just want to polish their resume will feed any questions/feedback back into the AI that generated their slop. That goes back and forth a few times until the reviewing side learns that the code authors have no idea what they're doing. An LLM can easily pretend to "stand behind its work" if you tell it to.
      A company can just fire someone who doesn't know what they're doing, or at least take some kind of measure against their efforts. On a public project, these people can be a death by a thousand cuts.
      The best example of this is the automated "CVE" reports you find on bug bounty websites these days.
    • i2talics 1 hour ago
      What good does it really do me if they "stand behind their work"? Does that save me any time drudging through the code? No, it just gives me a script for reprimanding. I don't want to reprimand. I want to review code that was given to me in good faith.
      At work I once had to review some code that, in the same file, declared a "FooBar" struct and a "BarFoo" struct, both with identical field names/types, and complete with boilerplate to convert between them (see the sketch below). This split served no purpose whatsoever; it was probably just the result of telling an agent to iterate until the code compiled, then shipping it off without actually reading what it had done. Yelling at them that they should "stand behind their work" doesn't give me back the time I lost trying to figure out why on earth the code was written this way. It just makes me into an asshole.
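      A sketch of the shape of it (illustrative names and fields, not the actual code):
          // Two structs with identical fields, plus boilerplate to convert
          // between them; the split serves no purpose.
          struct FooBar { let id: Int; let name: String }
          struct BarFoo { let id: Int; let name: String }
          func toBarFoo(_ v: FooBar) -> BarFoo { BarFoo(id: v.id, name: v.name) }
          func toFooBar(_ v: BarFoo) -> FooBar { FooBar(id: v.id, name: v.name) }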
      • sb8244 1 hour ago
        It adds accountability, which is unfortunately something that ends up lacking in practice.
        If you write bad code that creates a bug, I expect you to own it when possible. If you can't and the root cause is bad code, then we probably need to have a chat about that.
        Of course the goal isn't to be a jerk. Lots of normal bugs make it through in reality. But if the root cause is true negligence, then there's a problem there.
        AI makes negligence much easier to achieve.
      • nineteen999 1 hour ago
        If you asked Claude to review the code it would probably have pointed out the duplication pretty quickly. And I think this is the thing: if we are going to manage programmers who are using LLMs to write code, and have to do reviews for their code, reviewers aren't going to be able to do it for much longer without resorting to LLM assistance themselves to get the job done.
        It's not going to be enough to say "I don't use LLMs".
      • nradov 1 hour ago
        Yelling at incompetent or lazy co-workers isn't your responsibility, it's your manager's. Escalate the issue and let them be the asshole. And if they don't handle it, well it's time to look for a new job.
        • skeeter2020 1 hour ago
          >> Yelling at incompetent or lazy co-workers isn't your responsibility, it's your manager's
          First: Somebody hired these people, so are they really "lazy and incompetent"?
          Second: There is no one whose "job" is to yell at incompetent or lazy workers.
      • bitwize 2 hours ago
        The smartest and most sensible response.
        I'm dreading the day the hammer falls and AI-use metrics are implemented for all developers at my job.
      • locusofself 2 hours ago
        It's already happened at some very big tech companies.
        • skeeter2020 1 hour ago
          One of the reasons I left a senior management position at my previous 500-person shop was that this was being done, but not even accurately. Copilot usage via the IDE wasn't being tracked; just the various other usage paths.
          It doesn't take long for shitty small companies to copy the shitty policies and procedures of successful big companies. It seems even intelligent executives can't get correlation and causation right.
  • jonas21 2 hours ago
    The title should be changed to "*LLVM AI tool policy: human in the loop*".
    At the moment it's "*We don't need more contributors who aren't programmers to contribute code*," which is from a reply and isn't representative of the original post.
    The HN guidelines say: please use the original title, unless it is misleading or linkbait; don't editorialize.
    • SunlitCat 1 hour ago
      Additionally, it comes across as pretty hostile toward new contributors, which isn’t the intent of the article at all.
    • atoav 1 hour ago
      I'll say it as it is: if you can't *read* code, but have a friend *write* it for you, you are in fact the wrong person to submit it, since you are the most exhausting person to deal with.
      A human in the loop that doesn't understand what is going on but still pushes isn't only useless, but actively harmful.
    • octoberfranklin 1 hour ago
      I'd like to know why the title hasn't been fixed.
  • scuff3d 2 hours ago
    It's depressing this has to be spelled out. You'd think people would be smart enough not to harass maintainers with shit they don't understand.
    • ActionHank 2 hours ago
      People who are smart enough to think that far ahead are also smart enough not to fall into the “AI can do all jobs perfectly all the time and just needs my divine guidance” trap.
    • georgeburdell 2 hours ago
      It’s happening a lot with me at work. I am a programmer working largely with a hardware team, and now they’re contributing large changes and I’m supposed to just roll with the punches. Management loves it.
    • dvrp 2 hours ago
      Not if they're not programmers!
    • 0xpgm 1 hour ago
      I think it's part of the "AI replacing software developers" hype.
      Many beginners or aspiring developers swallow this whole and simply point an AI to a problem and submit the generated slop to maintainers.
    • bakugo 1 hour ago
      People who rely on a computer algorithm to literally think for them are not going to be very smart.
  • itissid 1 hour ago
    One (narrow) way to make reviewing a large contribution written with significant LLM aid easier is to jump on a call with the reviewer, explain what the change is, and answer their questions on why it is necessary and what it brings to the table. This first pass is useful for a few reasons:
    1. It shifts the cognitive load from the reviewer to the author, because now the author has to do an elevator pitch, and this can work sort of like a "rubber duck" where one would likely have to think about these questions up front.
    2. In my experience this is much faster than a lonesome review with no live input from the author on the many choices they made.
    Do this as a first pass and have the reviewer give a go/no-go with optional comments on design/code quality etc.
    • utopiah 0 minutes ago
      > jump on a call with the reviewer
      Have you ever done that with new contributors to open source projects? Typically things tend to be asynchronous, but maybe it's a practice I've just not encountered in such a context.
  • whatever1 3 hours ago
    The number of code writers increased exponentially overnight. The number of reviewers is constant (slightly reduced due to layoffs).
    • rvz 3 hours ago
      And so did the slop.
  • jdlyga 2 hours ago
    As a developer, you're not only responsible for contributing code, but for *verifying that it works*. I've seen this practice put in place on other teams, not just with LLMs, but with devs who contribute bugfixes without understanding the problem.
    • willtemperley 1 hour ago
      You also have a second responsibility with LLM-generated code you publish: it must not violate anyone else's copyright.
  • willtemperley 1 hour ago
    Their copyright clause reflects my own quandary about LLM usage:
    I am responsible for ensuring copyright has not been violated with LLM-generated code I publish. However, proving the negative, i.e. that the code is not copyrighted, is almost impossible.
    I have experienced this: Claude came up with an almost perfect solution to a tricky problem, ten lines to do what I've seen done in multiple KLOC, and I later found the almost identical solution in copyrighted material.
    • mierz00 1 hour ago
      I’m curious: have you ever been burnt, or seen anyone burnt, by copyright infringement in code?
      Sometimes it’s super obvious, because a game company steals code from a previous employer, but I have never seen this play out in enterprise software.
      • willtemperley 44 minutes ago
        Personally I have not experienced it, but I have heard of people scanning for LGPL library usage in iOS apps, then essentially extorting the developers for their source code.
        I can't find the specific article now, but I am extremely careful to avoid anything GPL or LGPL.
        It's unlikely to be a problem until an app is highly successful, but once that happens people will grasp at whatever straws they can, e.g. https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America%2C_Inc.
    • foxglacier 1 hour ago
      Copyright is for creative work, so if it really is the best way to do it, you should be safe even if the AI copied the idea from somebody else. You can't use copyright to restrict access to useful technology the way a patent can.
      • willtemperley 36 minutes ago
        That's very useful feedback. I suspect if the solution is irreducible it's OK, which in this case is close to true (this is for bit-unpacking integers):
            // bytes are added to the buffer until a value of bitwidth is available
            for _ in 0..<bitWidth {
                let byte = try UInt8(parsing: &input)
                buffer |= UInt64(byte) << bitsInBuffer
                bitsInBuffer += 8
                // Values of bitwidth are right-shifted off the buffer.
                while bitsInBuffer >= bitWidth && outPos < numValues {
                    let value = Int(buffer & mask)
                    out[outPos + outOffset] = value
                    outPos += 1
                    buffer >>= bitWidth
                    bitsInBuffer -= bitWidth
                }
            }
    • octoberfranklin 1 hour ago
      > However, proving the negative, i.e. that the code is not copyrighted, is almost impossible.
      Nonsense, it's trivially easy *if you wrote the code yourself.* You hold the copyright.
      If you had some LLM puke out the code for you, well, then you have no idea. So you can't contribute that puke.
      • willtemperley 1 hour ago
        Did you read my comment? I was specifically talking about publishing LLM-generated code:
        > I am responsible for ensuring copyright has not been violated with LLM-generated code I publish
        Your comment:
        > If you had some LLM puke out the code for you, well, then you have no idea. So you can't contribute that puke.
        That's not true. You can if it's not violating copyright. That "puke", as you put it, comes in many flavours.
  • bryanhogan 1 hour ago
    I feel like the title should definitely be changed.
    Requiring contributors to be "able to answer questions about their work during review" is definitely reasonable.
    The current title of "We don't need more contributors who aren't programmers to contribute code" is an entirely different discussion.
    • atoav 1 hour ago
      Is it, though?
      If you don't speak Hungarian and you have a Hungarian friend write a text for you that you submit to a collection of worthy Hungarian poetry, do you really think *you* are the correct person to answer questions about the text that was written?
      You know what is *supposed* to be in it, but that's it. You can't judge the quality of the text and how it fits the rest at all. You can only trust your friend. And it is okay if you do, just don't pull others into it.
      IMO it is extremely rude to even try to pull this off, and if you do, shame on you for wasting people's time.
  • porksoda 1 hour ago
    It's everywhere. I worked with a micro-manager CTO who farmed code review out to Claude, which of course, when instructed to find issues with my code, did so.
    With little icons of rocket ships and such.
  • yxhuvud 1 hour ago
    The new policy looks very reasonable and fair. Unfortunately, I'd be surprised if the bad apples read the policy before spamming their "help".
  • looneysquash 2 hours ago
    Looks like a good policy to me.
    One thing I didn't like was the copy/paste response for violations.
    It makes sense to have one. It's just that the text they propose uses what I'd call insider terms, and also terms that sort of put down the contributor.
    And while that might be appropriate at the next level of escalation, the first-level stock text should be easier for the outside contributor to understand, and should better explain the next steps for the contributor to take.
  • SunlitCat 1 hour ago
    Oh wow. That something like this is necessary is kind of sad. At first (while reading the title), I thought they just didn’t want AI-generated contributions at all (which would be understandable as well). But all they are actually asking for is that one understands (and labels) the contributions they submit, regardless of whether those are AI-generated, their own work, or maybe even written by a cat (okay, that last one was added by me ;).
    Reading through the (first few) comments and seeing people defending the use of pure AI tools is really disheartening. I mean, they’re not asking for much, just that one reviews and understands what the AI produced for them.
  • hsuduebc2 2 hours ago
    > Contributors should never find themselves in the position of saying “I don’t know, an LLM did it”
    I would never have thought that someone could actually write this.
    • clayhacks 2 hours ago
      I’ve seen a bunch of my colleagues say this when I ask about the code they’ve submitted for review. Incredibly frustrating, but likely to become more common
    • jfreds 2 hours ago
      I get this at work, frequently.
      “Oh, Cursor wrote that.”
      If it made it into your pull request, YOU wrote it, and it’ll be part of your performance review. Cursor doesn’t have a performance review. Simple as.
      • lokar 2 hours ago
        I could see this coming when I quit. I would not have been able to resist insisting that people doing that be fired.
      • hsuduebc2 2 hours ago
        Yeah, this is just lazy. If you don't know what it does and how, then you shouldn't submit it at all.
    • imron 2 hours ago
      See this thread from a while back: https://news.ycombinator.com/item?id=46039274
  • EdwardDiego 2 hours ago
    Good policy.
  • vjay15 2 hours ago
    It is insane that this is happening in one of the most essential pieces of software. This is a much-needed step to stem the rise of slop contributions. It's more work for the maintainer to review all this mess.
  • mmsc 2 hours ago
    This AI usage is like a turbo-charger for the Dunning–Kruger effect, and we will see these policies crop up more and more, as technical people become more and more harassed and burnt out by AI slop.
    I also recently wrote a similar policy[0] for my fork of a codebase. I had to write this because the original developer took the AI pill and started committing totally broken code that was full of bugs, and doubled down when asked about it [1].
    On an analysis level, I recently commented[2] that "Non-coders using AI to program are effectively non-technical people, equipped with the over-confidence of technical people. Proper training would turn those people into coders that are technical people. Traditional training techniques and material cannot work, as they are targeted and created with technical people in mind."
    But what's more, we're also seeing programmers use AI to create slop. They're effectively technical people equipped with their initial over-confidence, highly inflated by a sense of effortless capability. Before AI, developers were once (sometimes) forced to pause, investigate, and understand, and now it's just easier and more natural to simply assume they grasp far more than they actually do, because @grok told them this is true.
    [0]: https://gixy.io/contributing/#ai-llm-tooling-usage-policy
    [1]: https://joshua.hu/gixy-ng-new-version-gixy-updated-checks#quality-degradation
    [2]: https://joshua.hu/ai-slop-story-nginx-leaking-dns-chatgpt#final-thoughts
  • 29athrowaway 2 hours ago
    Then the vibe coder will ask an LLM to answer questions about the contribution.
    • ronsor 2 hours ago
      I don't know how it is for you, but I find it rather easy to tell when someone doesn't actually understand what they're talking about.
    • lifthrasiir 2 hours ago
      At least that's much better than not being able to answer them. If LLMs are truly intelligent enough to justify their contributions (including copyrights!), why should we treat them differently from human contributions?
      • wmf 1 hour ago
        That would also include humans not taking credit for AI's work.
  • zeroonetwothree 2 hours ago
    I only wish my workplace had the same policy. I’m so tired of reviewing slop where the submitter has no idea what it’s even for.
    • ivraatiems 2 hours ago
      For what it's worth, this is essentially the policy my current and most recent previous workplace followed. (My employers before that were pre-LLMs.)
      If you are the one with your name on the PR, it's your code and you have to understand it. If you don't understand it at least well enough to speak intelligently about it, you aren't ready to submit it for review. If you ask Copilot, Cursor, or whatever to generate a PR for you, it still must be reviewed and approved by you and another engineer who acts as your reviewer.
      I haven't heard a lot of pushback on this; it feels like common sense to me. It's effectively the same rules we'd use if somebody who wasn't an engineer wanted to submit code; they'd need to go through an engineer to do it.
      LLM usage has increased our throughput and the quality of our code thus far, but without these rules (and people following the spirit of them, being bought in to their importance), I really don't think it would.
      I encourage you to raise this policy with your management, if you think you can get them to listen, and demonstrate how it might help. I would be very frustrated if my colleagues were submitting AI-generated code without thinking it through.
  • rvz 2 hours ago
    Open source projects like LLVM need to do this: LLVM is so widely used in the software supply chain that it needs protection from contributors who do not understand the code they are writing or cannot defend their changes.
    There needs to be a label designating open source projects that are so important and so widely adopted throughout the industry that not just anyone can throw patches at them without understanding what the patch does and why it is needed.
  • mberning 2 hours ago
    I am so exhausted by reviewing the AI slop from other “developers”. For a while I was trying to be a good sport and point out where it was just wrong or doing things that were unnecessary or inefficient. I’m at the point of telling people to not bother using an AI. I don’t have the time or energy to deal with it. It’s like a missile defense system where each interception costs a million dollars while the incoming projectile costs your adversary $10. It’s not sustainable.
  • colesantiago 2 hours ago
    "Vibe coding" (i.e. the kind of statistically 'plausible' code that sometimes works, where the user doesn't look at the code but just tries it to see if it works to their liking, with no tests) was the worst thing to happen to programming and computer science that I have seen: good for prototypes, but not for production software, and especially not for important projects like LLVM.
    It is good to gatekeep this slop out of LLVM before it gets out of control.
    • cookiengineer 1 hour ago
      I think it's great that this AI slop fatigue happened so quickly.
      Now we can still identify them easily, and I am maintaining bookmarks of non-slop codebases so that I know which software to avoid.
      I encourage everyone to do the same, because the slopcode fallout will be inevitable and will likely be the most harmful thing that ever happened to open source as a philosophy.
      We need to change our methodology for developing verifiable and spec-driven software, because TDD isn't good enough to catch this. We need something that is able to verify logical conclusions and implications (aka branches) and not only methods and their types/signatures.
      • RodgerTheGreat 1 hour ago
        Not just codebases, but developers too. Keep your eyes open: if someone visibly embraces slop, you know they're a clown. Don't let clowns touch *your* repos, and do your best to cut out dependencies on anything maintained by clowns.
      • InvertedRhodium 1 hour ago
        Personally, I’ve got fatigue at the phrase “AI slop”. It’s used as a catch-all to dismiss content due to its source, regardless of the quality or suitability when taken in context.
        Just like everything else these days, the responses skew towards both extremes of the spectrum, and people hand-waving away the advancements are just as annoying as those who are zealots on the other end.
  • jfreds 2 hours ago
    > automated review tools that publish comments without human review are not allowed
    This seems like a curious choice. At my company we have both Gemini and Cursor (I’m not sure which model is under the hood on that) review agents available. Both frequently raise legitimate points. I’m sure they’re abusable, I just haven’t seen it.
    • justatdotin 2 hours ago
      One reason I can imagine for this choice is that human review distributes new knowledge among human maintainers. Automated review might discourage that valuable interaction.
      The comment is an artefact of an inherently valuable process, not the sole objective. So I'd prefer code be reviewed by a human who maybe initiates a discussion with an agent. I'd hope this minor workflow detail encourages the human to take a more active lead, rather than risk mere ceremonial approval.
    • SunlitCat 1 hour ago
      The policy isn’t about whether those tools raise good points. It’s about not letting agents act autonomously in project spaces. Human-reviewed, opt-in use is explicitly allowed.
    • bandrami 2 hours ago
      An LLM is a plausibility engine. That can't be the final step of any workflow.