"Just show me the prompt."<p>If you don't have time, just write the damn issue as you normally would. I don't quite understand why one would waste so much resources and compute to expand some lazily conceived half-sentence into 10 paragraphs, as if it scores them some points.<p>If you don't have time to write an issue yourself or carefully proofread whatever LLM makes up for you, whom are you trying to fool by making it look pretty? At least if it is visibly lazy anyone knows to treat it with appropriate grain of salt.<p>Even if you are one of those who likes to code by having to correct LLMs all the time, surely you understand if your LLM can make candy out of poo when you post an issue then it can do the exact same thing when it processes the issue and makes a PR. Likely next month it will do a better job at parsing your quick writing, and having it immediately "upscaled" would only hinder future performance.
The github.com/*/*/issues namespace is home to the worst bugtracker behavior in the world. A bugtracker should be restricted to bug reports and (well-informed) proposals and discussion about the bug/bugfix. The bug report should contain, at minimum and at maximum:

1. Clear steps to reproduce (ideally, using the prepared testcase as input, if applicable)

2. A description of the behavior observed from the program

3. A description of the expected behavior

4. Optionally, your justification for why the program should be changed to behave the way described in #3 and not #2

Everything else belongs on a message board, mailing list, or social media.

But this is all totally foreign to, like, 80% of GitHub's userbase (including the majority of the project managers aka maintainers who are in charge of the tone and tenor of the place).
What would make sense to me is to use an AI to turn implicit context that is only there in the moment into explicit context that is stored in the ticket.

E.g. maybe you have your application open in a browser and are currently viewing a page with a very prominent red button. You hit that /issue command with "button should be yellow not red".

That half-sentence makes sense *if you also have that open browser window as context*, but would be completely cryptic without it.

An AI could use both the input and the browser window to generate a description like "The background color of the #submit_unsafe button widget in frontend/settings/advanced.tsx should be changed from red to yellow." or something.

Sort of like a semantic equivalent to realpath, if you want.

I do see utility in that.
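A rough sketch of what that could look like, assuming a hypothetical callLLM helper and a captured snapshot of the page the user has open (none of these names come from an actual tool, they're just for illustration):

```typescript
// Hypothetical sketch: expand a terse /issue note using the page the user
// is currently looking at as implicit context.

interface PageContext {
  url: string;                            // page the user has open
  selectedElement: string;                // e.g. "#submit_unsafe"
  sourceFile: string;                     // e.g. "frontend/settings/advanced.tsx"
  computedStyles: Record<string, string>; // current styles of the element
}

// Placeholder for whatever LLM client is actually in use.
declare function callLLM(prompt: string): Promise<string>;

async function expandIssue(note: string, ctx: PageContext): Promise<string> {
  // Ask the model to resolve vague references ("that button", "this page")
  // against the captured context instead of padding the note with filler.
  const prompt = [
    "Rewrite this terse issue note as an explicit, self-contained bug report.",
    "Resolve vague references against the provided page context.",
    `Note: ${note}`,
    `Context: ${JSON.stringify(ctx)}`,
  ].join("\n");
  return callLLM(prompt);
}
```

So "button should be yellow not red" plus the open page becomes something like the explicit #submit_unsafe description above, rather than ten paragraphs of invented detail.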
The context window before a prompt is often large and contains all sorts of information, though; it wouldn't be just a prompt in isolation.
I was going by this example:

> /issue you know that paint bucket in google docs i want that for tldraw so that I can copy styles from one shape and paste it to another, if those styles exist in the other shape. i want to like slurp up the styles

What kind of context may be there?

Also, the entire repository and issue tracker are context. Over time it only gets more complete.
I don't get it either. The LLM-generated issue from the above prompt is just the same information written more verbosely.
The entire chat log up until the user asks it to generate a summary. And maybe "memories" and custom system prompts for good measure. A lot of potentially private information, in other words. "Just the prompt" only works in a very particular case where you ask it for something out of the blue.
> A few years ago I submitted a full TypeScript rewrite of a text editor because I thought it would be fun. I hope the maintainers didn't read it. Sorry.

Love the transparency. To be fair, rewrites are almost impossible to review. Anything over a 5k-line diff takes at least multiple review cycles. I don't know how some maintainers do it while also working on the codebase themselves.
> Once or twice, I would begin fixing and cleaning up these PRs, often asking my own Claude to make fixes that benefited from my wider knowledge: use this helper, use our existing UI components, etc. All the while thinking that it would have been easier to vibe code this myself.

I had an odd experience a few weeks ago, when I spent a few minutes trying to find a small program I had written. It suddenly struck me that I could have asked for a new one in less time than it took to find it.
> AI changed all of that. My low-effort issues were becoming low-effort pull requests, with AI doing both sides of the work. My poor Claude had produced a nonsense issue causing the contributor's poor Claude to produce a nonsense solution.
> The thing is, my shitty AI issue was providing value.

Seems like the shitty AI issue did more harm than good?
This is the bit that struck me as odd. The author is creating issue slop but blames the contributor for treating it as genuine. The author wants to continue creating slop issues and decides that blocking all external contributions is the solution, rather than spending less time creating slop.

Their slop issues do not actually have value, because the fixes based on the slop are equally sloppy.

The author could create these slop issues somewhere external contributors can't see them, instead of shitting on the contributors for not reading their mind.

Really bizarre lack of self-awareness. How do the internal contributors deal with the slop? I wonder what they say about this person in private.
> As a high-powered tech CEO, I'm

*cough* LinkedIn cringe *cough*
I suppose this is banal/obvious to many, but I found this very interesting given the practical context.

> I write code with AI tools. I expect my team to use AI tools too. If you *know the codebase and know what you're doing*, writing great code has never been easier than with these tools.

This statement describes a transitional state. Contributors became qualified in this way before AI. New contributors using AI from day one will not be qualified in the same way.

> The question is more fundamental. In a world of AI coding assistants, is code from external contributors actually valuable at all?

...

> When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.

The negative *net* value of external contributions is enough to make the decision: end external contributions.

For the purpose of thinking up a new model, unpacking that net is the interesting part. I don't mean sorting between high- and low-effort contributions. I mean making productive use of low-effort one-shots.

AI tools have moved the old bottlenecks, and we are trying to find where the new ones are going to settle down.
Guy uses his project's GitHub issues as a personal TODO list, realizes his one-line GitHub issues look unprofessional, uses AI to hallucinate them into fake but realistic-looking issues, and then complains when he gets AI slop PRs.

An alternative idea: use a TODO list and stop using GitHub Issues as your personal dumping ground, whether you use AI to pad them or not. If the issue requires discussion or more detail and would warrant a proper issue, then make a proper issue.
> If writing the code is the easy part, why would I want someone else to write it?

Arguably, because LLM tokens are expensive, so LLM-generated code could be considered a donation? But then so is the labor involved, so it's kinda moot. I don't believe people pay software developers to write code for them to contribute to open source projects either (if that makes any sense).
Interesting point. To me, it seems more like those donations where you're offered some money in exchange for taking an action which you know is going to take more time/cost way more than the donation amount. Tho to be completely fair, it's similar with large non-LLM pull requests as well.
> Once we had the context we needed and the alignment on what we would do, the final implementation would have been almost ceremonial. Who wants to push the button?

> ...

> But if you ask me, the bigger threat to GitHub's model comes from the rapid devaluation of someone else's code. When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.

> If that's the case, which I'm starting to think it is, then it's better to limit community contribution to the places it still matters: reporting, discussion, perspective, and care. Don't worry about the code, I can push the button myself.
You should never sign a CLA unless you're getting paid to.
> If writing the code is the easy part, why would I want someone else to write it?

Exactly my takeaway from current AI developments as well. I am also confused by corporate or management types who seem to think they are immune to AI developments. If AI ever does get to the point where it can write flawless code, what exactly makes them think they will do any better at composing these tools than the developers who've been working with this technology for years? Their job security is hedged precisely IN THE FACT that we are limited by time and need managed teams of humans to create larger projects. If this limitation falls, I feel like their jobs would be the first on the chopping block, long before mine as a developer. Competition from tech-savvy individuals would be massive overnight. Very weird horse to bet on, unless you are part of a frontier AI company that actually controls the resources.
Ultimately, this would lead to a situation where only the customer-facing (if there are any) or "business-facing" (i.e. C-suite) roles remain. I'm not sure I like that.
I don't think this will be an issue, given history. COBOL was developed so that someone higher up could use more human language to write software. (BASIC too? I don't know, I wasn't around for either.)

More modern-day, low/no-code platforms are advertised as such... and yet they don't replace software developers. (In fact, some of the projects my employer does involve migrating away from low/no-code platforms in favor of code, because performance and other nonfunctionals are hidden away. We had a major outage as a result when traffic increased.)
Do you think any of them cares about the long term? Regardless of AI, your head is always on the chopping block. You always grab that promo in front of you, even if it means you'll be axed in two years by your own decisions.
I mean I understand that you want your business to not fall behind right now, sure. But I don't understand people in management who are audibly _excited_ about the prospect of these developments even behind closed doors. I guess some of them imagine they are the next Steve Jobs only held back by their dev teams, but most of them are in for a rude awakening lol. And I guess a lot are just grifting. The amount of psychotic B2B SaaS rambling on Twitter is already unbearable as is.
> Authors would solve a problem in a way that ignored existing patterns

If you're not writing your code, why do you expect people to read it and follow your lead on whatever convention you prefer?

I get people who hand-write their code being fussy about this, but you start the article off devaluing coding entirely, then pivot to how the way your codebase is written has value that needs to be followed.

It's either low value or it isn't, but you can't approach it as worthless and then complain when others view your code as worthless and not worth reading too.
We need a Chrome extension like SponsorBlock, which publicly tags slop contributors. Maintainers could just reject PRs from those users.