I wonder how.
Everything I let Claude Code mostly write, whether Go, F#, C, or Python, I eventually reach a point where I systematically rip it apart and start writing it over.
In my student days, we talked about “spikes”: software or components that functionally addressed some need but were often badly written and badly architected.
That’s what I think most resembles Claude Code output.
And I do ask the LLM to write todo lists, break tasks into phases, and maintain both larger docs on individual features and a highly condensed overview doc.
I have also written Claude Code-like tools myself, run local LLMs, and so on.
That is to say, I may still be “doing it wrong”, but I’m not entirely clueless.
The only place where Claude Code has nearly done the whole thing and largely left me with workable code was some React front-end work I did (and no, it wasn’t great either, just fair enough).
As someone who knows how to code and who employs a number of coders, I am not sure that choosing to do it yourself means the underlying code is unworkable.
In two decades I have never met an engineer who joined a project and didn’t at some point suggest starting over.
The world runs on buggy, hack-filled, good-enough code. The idea that LLMs are failing when that’s what they produce is, in my opinion, wrong.
Have you tried with Opus 4.5? It's a step change, IMO.
There are different degrees of "AI wrote all my code". A very crappy way of doing it is to keep one-shotting it and expecting it to "fall on the right solution": very much an infinite-monkeys, infinite-typewriters scenario.
The other way is to spend a fair bit of time building out a design and ask it to implement that while verifying what it is producing then and there, instead of reviewing reams of slop later on. The AI still produced 100% of the code; it's just not as glamorous or as marketing-friendly a sound bite. After all, which product manager wants to understand refactoring, TDD, SOLID, or other design principles?
Because companies/users don’t pay for “great code”. They pay for results.
Does it work? How fast can we get it? How much does it cost to use it?
> Because companies/users don’t pay for “great code”
Unless you work in an industry with standards, like medical or automotive. Setting ISO compliance aside, you could also work for a company that values long-term maintainability, uptime, etc. I'm glad I do. Not everyone is stuck writing disposable web apps.
Or space, or defense, or some corners of finance and insurance.
> Not everyone is stuck writing disposable web apps.
Exactly. What I've noticed is that a lot of the conversations on HN are web devs talking to engineers, where one side understands both sides and the other one doesn't.
"Does it work?" covers what you said.
Sounds like the best way to sell an OK product.
Yes, but to achieve those, one often needs great code.
I’m one of those people.
Used Claude Code until September, then Codex exclusively.
All my code has been AI generated, nothing by hand.
I review the code, and if I don’t like something, I let it know how it should be changed.
There used to be a lot of back and forth in August, but these days GPT 5.2 Codex one-shots everything so far. It once worked for 40 hours for me to get a big thing in place, and I’m happy with the code.
For bigger things, start with a plan and go back and forth on different pieces, have it write the plan to an md file as you talk it through, and feed it anything you can: user stories, test cases, design, whiteboards, backs of napkins. In the end it just writes the code for you.
Works great; can’t fathom going back to writing everything by hand.
Glad to hear it. For me, the process does not converge. Once the code gets big enough (and that happens fast; Claude hates using existing code and writes duplicate logic at every opportunity it gets), it starts dealing more damage every turn. At some point, no forward progress happens, because Claude keeps dismantling and breaking existing working code.
Okay, but has this process actually improved anything, or just substituted one process for another? Do you have fewer defects, quicker ticket turnaround, or some other metric by which you’re judging success?
Oh yeah, I’ve been a lot more productive, closing tickets faster.
These tools are somewhat slow, so you need to work on several things at once; multitasking is vital.
When I get defects from the QA team, I spawn several agents with several worktrees, one per ticket, then I review the code, test it out, and leave my notes.
Closing the loop is also vital: if agents can see their work, logs, and test results, it helps them be more autonomous.
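For readers wondering what the worktree-per-ticket setup looks like mechanically, here is a minimal sketch, not the commenter's actual setup. The ticket IDs, branch names, and prompt are invented, and it assumes the Claude Code CLI offers a non-interactive print mode (`claude -p`); substitute whatever agent and flags you actually use.

```python
# Hypothetical sketch: one git worktree plus one headless agent run per QA ticket.
# Ticket IDs, branch naming, and the prompt are placeholders; "claude -p"
# (non-interactive print mode) is assumed to be available on PATH.
import subprocess
from pathlib import Path

TICKETS = ["QA-101", "QA-102", "QA-103"]  # hypothetical defect tickets
REPO = Path(".").resolve()

procs = []
for ticket in TICKETS:
    worktree = REPO.parent / f"wt-{ticket}"
    # Isolated worktree and branch per ticket so parallel agents don't collide.
    subprocess.run(
        ["git", "worktree", "add", "-b", f"fix/{ticket}", str(worktree)],
        cwd=REPO, check=True,
    )
    # Launch an agent in each worktree; the diffs still get human review later.
    procs.append(subprocess.Popen(
        ["claude", "-p", f"Fix defect {ticket}. Run the tests and summarize the results."],
        cwd=worktree,
    ))

for p in procs:
    p.wait()  # then: review, test, leave notes, and remove the worktrees
```

The point of the isolation is exactly the "closing the loop" idea above: each agent can run the tests inside its own worktree and see its own logs without stepping on the others.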
How long did it take for you to get used to this workflow?
I started in July-August, and it’s a learning curve. You start with “I’ve got all the power” and gradually learn the limits of the AI models, so you start closing the loop and getting them what they need to make sure they’ll output what you consider acceptable.
With each new drop, they become more and more powerful, so it’s easier and easier to jump in.
Go look at Peter Steinberger’s stuff at https://steipete.me/posts/just-talk-to-it; it’s a great way to get going.
Follow him and https://jeffreyemanuel.com/ on X to keep up to date on the latest and most advanced techniques for working with AI. I learned a lot from them.
> I landed 259 PRs -- 497 commits, 40k lines added, 38k lines removed
I wonder how many of those 40k lines added / 38k lines removed were just replacing the complete code of a previous PR created by Claude Code.
I'm happy that it's working for them (whatever that means), but shouldn't we see an exponential improvement in Claude Code in this case?
Claude Code user¹ says Claude Code wrote continuously incorrect code for the last hour.
I asked it to write Python code to retrieve a list of Kanboard boards using the official API. I gave it a link to the API docs. First, it wrote a wrong JSON-RPC call. Then it invented a Python API call that does not exist. In a new attempt, I mentioned that there is an official Python package it could use (which is prominently described in the API docs). Claude proceeded to search the web and then used the wrong API call. Only after prompting it again did it use the correct API call, but still with an inelegant approach.
I still find some value in using Claude Code, but I'm much happier writing code myself, and I would rather teach kids and colleagues how to do things correctly than a machine.
¹) me
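For reference, roughly what the straightforward version looks like with the official `kanboard` package (pip install kanboard). This is an illustration, not the commenter's actual code: the instance URL and token are placeholders, and the exact shape of the returned board data may differ from what the comments assume.

```python
# Minimal sketch using the official Kanboard Python client; URL and token are
# placeholders. The client maps snake_case method calls to Kanboard's camelCase
# JSON-RPC procedures (get_all_projects -> getAllProjects).
import kanboard

client = kanboard.Client(
    url="https://kanboard.example.com/jsonrpc.php",  # placeholder instance
    username="jsonrpc",                              # API authentication user
    password="YOUR_API_TOKEN",                       # placeholder API token
)

# Each Kanboard project has one board: list the projects, then fetch each board.
for project in client.get_all_projects():
    board = client.get_board(project_id=project["id"])  # JSON-RPC getBoard
    print(project["name"], "-", len(board), "swimlane(s)")
```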
"If the AI builds the house, the human must become the Architect who understands why the house exists."<p>In Japanese traditional carpentry (Miya-daiku), the master doesn't just cut wood. He reads the "heart of the tree" and decides the orientation based on the environment.<p>The author just proved that "cutting wood" (coding) is now automated. This is not the end of engineers, but the beginning of the "Age of Architects."<p>We must stop competing on syntax speed and start competing on Vision and Context.
I’m nearly the same, though I do find I’m still writing code, just not the code that’s ending up in the commit. I’ll write pseudocode, example code, and rough function signatures, then Claude writes the rest.
Man with vested financial interest in thing praises thing.
It shows; I have to forcefully kill it over 10 times per day.
View the full thread without a Twitter/X account: https://xcancel.com/bcherny/status/2004897269674639461
The guy who wrote the TypeScript/Bun CLI and probably maintains it?
It would be helpful if people also included what kind of code they are writing (language, domain, module, purpose, etc.).
The hallucinations are still there, sometimes worse than at other times, but manageable. For me it's mostly when I have to do database-management-style work; that is off the beaten path, and the hallucinations are crazy.
I'm sure it's unrelated (right, guys? right?), but they had to revert a big update to CC this month.
https://x.com/trq212/status/2001848726395269619
They didn’t have to; they decided it would be more stable to revert it for the holidays, so that they wouldn’t be in the office fixing issues on Christmas.
You can read more about it at https://steipete.me/posts/2025/signature-flicker
What %age of his reversions this month are done by Claude? ;)
Not sure why you are getting downvoted, but this IS the key worry: that people lose contact with the code and really don’t understand what is going on, increasing “errors” in production (for some definition of error), which results in much more production firefighting, which in turn reduces the amount of time left to write code.
Losing contact with the code is definitely on my mind too. Just as writing can be a method of thinking, so can programming. I fear that only by suffering through the implementation will you realise the flaws of your solution. If it is done by an LLM, you are robbed of that opportunity and produce a worse solution.
Still, I use LLM-assisted coding fairly frequently, but this is a nagging feeling I have.
> Not sure why you are getting downvoted
A: The comment is bad for business.
Honestly, I've been becoming too lazy. I know exactly what I want, and AI is at a point where it can turn that into code. It's good enough that I've started to design code around the AI, so it's easier for the AI to understand (less DRY, fewer abstractions, closer to C).
And it's probably a bad thing? Not sure yet.
I just let myself use AI on non-critical software: personal projects and projects without deadlines or high quality standards.
If it uses anything I don't know, some tech I haven't grasped yet, I make a markdown summary of the conversation and make sure to include an overview of the technical solutions. I then shove that into my note software for later and, at a convenient time, use it in study mode to make sure I understand the implications of whatever the AI chose. I'm mostly a backend developer, and this has been a great HTML+CSS primer for me.
It is not bad. It is mastery.
You are treating the AI not as a tool, but as a "Material" (like wood or stone).
A master carpenter works *with* the grain of the wood, not against it. You are adapting your architectural style to the grain of the AI model to get the best result.
That is exactly what an Architect should do. Don't force the old rules (DRY) on a new material.
At first I thought CC wrote all of its own code, but it’s about the engineer’s contributions to CC, which is quite different.
I mean, that’s possible, but the more interesting datapoint would be “and then how much did you have to delete and/or redo because it was slop”
Cool, the person who financially benefits from hyping AI is hyping AI.
What's with the ad here though?
The tweet from Dec 24 was interesting; why is Boris only *now* deciding to engage?
I refuse to believe real AI conversations of any value are happening on X.
> Hi I'm Boris and I work on Claude Code. I am going to start being more active here on X, since there are a lot of AI and coding related convos happening here.
https://xcancel.com/bcherny/status/2003916001851686951
does that count as self-hosting?
IMHO it's very misleading to claim that some LLM wrote all the code, when it's just a compression of thousands of people's code that led to this very LLM even having something to output.