I encourage everyone to RTFA and not just respond to the headline. This really is a glimpse into where the future is going.<p>I've been saying "the last job to be automated will be QA" and it feels more true every day. It's one thing to be a product engineer in this era. It's another to be working at the level the author is, where code needs to be verifiable. However, once people stop vibing apps and start vibing kernels, it really does fundamentally change the game.<p>I also have another saying: "any sufficiently advanced agent is indistinguishable from a DSL." I hadn't considered Lean in this equation, but I put these two ideas together and I feel like we're approaching some world where Lean eats the entire agentic framework stack and the entire operating system disappears.<p>If you're thinking about building something today that will still be relevant in 10 years, this is insightful.
I am as enthusiastic about formal methods as the next guy, but I very much doubt any LLM-based technique will make it economical to write a substantial fraction of application software in Lean. The LLM can play a powerful heuristic role in searching for proof-bearing code in areas where there is good training data. Unfortunately those areas are few and far between.<p>Moreover, humans will still need to read even rigorously proved code if only to suss out performance issues. And training people to read Lean will continue to be costly.<p>Though, as the OP says, this is a very exciting time for developing provably correct systems programming.
This assumes everything about current capabilities stays static, and it wasn't long ago that LLMs couldn't do math. Many were predicting the genAI hype had peaked this time last year.<p>If you want it to be a question of economics, I think the answer is in whether this approach is more economical than the alternative, which is having people run this substrate. There's a lot of enthusiasm here and you can't deny there has been progress.<p>I wouldn't be so quick to doubt. It costs nothing to be optimistic.
LLMs are writing non-trivial math proofs in Lean, and software proofs tend to be individually easier than proofs in math, just more tedious because there's so much more of them in any non-trivial development.<p>Some performance issues (asymptotics) can be addressed via proof, others are routinely verified by benchmarking.
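For a sense of what "individually easier but tedious" looks like, here is a small Lean 4 sketch of the kind of mechanical structural-induction proof that dominates software verification (a toy example I'm supplying for illustration; core Lean's `List.length_append` already covers this property):

```lean
-- A typical "easy but mechanical" software proof:
-- appending two lists gives the sum of their lengths.
theorem app_length (xs ys : List Nat) :
    (xs ++ ys).length = xs.length + ys.length := by
  induction xs with
  | nil => simp
  | cons x xs ih => simp [ih, Nat.succ_add]
```

Nothing here is mathematically deep; a real development just contains thousands of such obligations, which is exactly the kind of search LLMs can grind through.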
> "any sufficiently advanced agent is indistinguishable from a DSL."<p>I don't quite follow but I'd love to hear more about that.
If you give an agent a task, the typical agentic pattern is that it calls tools in some non-deterministic loop, feeding the tool output back into the LLM, until it deems the task complete. The LLM internalizes an algorithm.<p>Another way of doing it is the agent just writes an algorithm to perform the task and runs it. In this world, tools are just APIs, and the agent has to think through its entire process end to end before it even begins and account for all cases.<p>Only the latter is Turing complete, but the former approaches the latter as it improves.
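The two patterns described above can be sketched in Python. Everything here is a hypothetical stand-in (`llm_decide`, `run_tool`, `llm_write_program` are not a real agent API), just to make the structural difference concrete:

```python
def agentic_loop(task, llm_decide, run_tool, max_steps=10):
    """Pattern 1: non-deterministic tool loop. The model picks the next
    tool call each step; the 'algorithm' lives inside the LLM."""
    history = [task]
    for _ in range(max_steps):
        action = llm_decide(history)      # model chooses the next step
        if action == "done":
            break
        history.append(run_tool(action))  # feed tool output back in
    return history

def plan_then_execute(task, llm_write_program, run_program):
    """Pattern 2: the agent emits a complete program up front and runs it.
    All cases must be handled before execution begins."""
    program = llm_write_program(task)     # one shot: task -> algorithm
    return run_program(program)
```

In the second pattern the agent's output is effectively a program in a DSL; the first pattern converges on it as the model gets better at internalizing whole procedures.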
My read was roughly that agents require constraining scaffolding (CLAUDE.md) and careful phrasing (prompt engineering) which together is vaguely like working in a DSL?
If the LLM is able to code it, there is enough training data that you might be better off in a different language that removes the boilerplate.
<a href="https://en.wikipedia.org/wiki/Clarke's_three_laws" rel="nofollow">https://en.wikipedia.org/wiki/Clarke's_three_laws</a>
There are still no successful, useful vibe-coded apps. Kernels are pretty far away, I think.
I think this might miss the point. We put off upgrading to a new RMM at work because I was able to hack together some dashboards in a couple days. It's not novel and does exactly what we need it to do, no more. We don't need to pay thousands of dollars a month for the bloated Solarwinds stack. We aren't saving lives, we're saving PDFs, so any arguments about five 9s and maintainability are irrelevant. LLMs are going to give us on-demand, one-off software. I think the SaaS market is terrified right now because for decades they've gouged customers for continual bloat and lock-in that now we can escape from. In a single day I was able to build an RMM that fits our needs exactly. We don't need to hire anyone to maintain it because it's simple, like most business applications should be, but SV needs to keep complicating their offerings with bloat to justify crazy monthly costs that should have been a one-time purchase from the start. SV shot itself in the face with AI.
This is a very strange statement. People don't always announce when they use AI for writing their software since it's a controversial topic. And it's a sliding scale. I'm pretty sure a large fraction of new software has some AI involved in its development.
To be fair, Claude Code is vibe-coded. It's a terrible piece of software from an engineering (and often usability) standpoint, and the problems run deeper than just the choice of JavaScript. But it is good enough for people to get what they want out of it.
But also, based on what I have heard of their headcount, they are not necessarily saving any money by vibecoding it - it seems like their productivity per programmer is still well within the historical range.<p>That isn’t necessarily a hit against them - they make an LLM coding tool and they should absolutely be dogfooding it as hard as they can. They need to be the ones to figure out how to achieve this sought-after productivity boost. But so far it seems to me like AI coding is more similar to past trends in industry practice (OOP, Scrum, TDD, whatever) than it is different in the only way that’s ever been particularly noteworthy to me: it massively changes where people spend their time, without necessarily living up to the hype about how much gets done in that time.
The article says that AWS's Cedar authorization policy engine is written in Lean, but it's actually written in Dafny. Writing Dafny is a lot closer to writing "normal" code rather than the proofs you see in Lean. As a non-mathematician I gave up pretty early in the Lean tutorial, while in a recent prototype I learned enough Dafny to be semi-confident in reviewing Claude's Dafny code in about half a day.<p>The Dafny code formed a security kernel at the core of a service, enforcing invariants like that an audit log must always be written to prior to a mutating operation being performed. Of course I still had bugs, usually from specification problems (poor spec / design) or Claude not taking the proof far enough (proving only for one of a number of related types, which could also have been a specification problem on my part).<p>In the end I realized I'm writing a bunch of I/O bound glue code and plain ol' test driven development was fine enough for my threat model. I can review Python code more quickly and accurately than Dafny (or the Go code it eventually had to link to), so I'm back to optimizing for humans again...
I think the issue goes even deeper than verification. Verification is technically possible. You could, in theory, build a C compiler or a browser and use existing tests to confirm it works.<p>The harder problem is discovery: how do you build something entirely new, something that has no existing test suite to validate against?<p>Verification works because someone has already defined what "correct" looks like. There is possibly a spec, or a reference implementation, or a set of expected behaviours. The system just has to match them.<p>But truly novel creation has no ground truth to compare against and no predefined finish line. You are not just solving a problem. You are figuring out what the problem even is.
It seems like sound testing methodology to identify important theorems related to the code, prove them, and then verify the proof.<p>Verification gets sold as "bulletproof" but I'm skeptical for a couple reasons:<p>- How do you establish the relationship between the code and the theorem? A Lean theorem can be applied to zlib implemented in Lean, but what if you want to check zlib implemented in a normal programming language like C, JS, Zig, or whatever?<p>- How do you know the key properties mean what you think they mean? E.g. the theorem says "ZlibDecode.decompressSingle (ZlibEncode.compress data level) = .ok data", but it feels like it would be very easy to accidentally prove ∃ x s.t. decompress(compress(x)) == x while thinking you proved ∀ x, decompress(compress(x)) == x.<p>I've tried Lean and Coq and... I don't really like them. The proofs use specialized programming languages, and they seem deliberately designed to require a context explorer to have any hope of understanding the proof at all. OTOH a normal unit test is written in a general-purpose programming language (usually the same one as the program being tested); I'm much more comfortable checking that a Claude-written unit test does what I think it's doing than a Claude-written Lean proof of correctness.
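The quantifier pitfall in the parent comment can be made concrete with a toy Lean 4 sketch (the `compress`/`decompress` definitions here are trivial stand-ins I invented, not a real zlib binding). The ∃ version is trivially provable and tells you almost nothing; the ∀ version is the property you actually want:

```lean
-- Toy "codec" to show why the quantifier matters.
-- These definitions are illustrative, not a real zlib binding.
def compress (xs : List Nat) : List Nat := xs
def decompress (xs : List Nat) : List Nat := xs

-- The property you want: the round trip holds for *every* input.
theorem roundtrip_forall : ∀ xs : List Nat, decompress (compress xs) = xs :=
  fun _ => rfl

-- The much weaker claim: the round trip holds for *some* input.
-- This is trivially provable with the empty list as witness.
theorem roundtrip_exists : ∃ xs : List Nat, decompress (compress xs) = xs :=
  ⟨[], rfl⟩
```

Both compile, both look like "decompress-compress round-trip theorems" at a glance, which is exactly why reviewing the theorem statement matters as much as checking the proof.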
100% of my innovation for the past month has been getting the coding agent to iterate with an OODA loop (I make it validate after the act step), trying to figure out how to get it to not stop iterating.<p>For example, I have discovered there is a big difference between prompting 'there is a look-ahead bias' and 'there is a [T+1] look-ahead bias', where the latter will cause it to not stop until it finds the [T+1] look-ahead bias. It will start to write scripts that `.shift(1)` all values and do statistical analysis on the result set trying to find the look-ahead bias.<p>Now, I know there isn't a look-ahead bias, but the point is I was able to get it to iterate automatically, trying different approaches to solve the problem.<p>The software is going to verify itself eventually, sooner rather than later.
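The kind of check described above can be sketched with pandas `shift` (the function name, data, and detection heuristic here are all invented for illustration; a real bias audit would be statistical rather than an exact-match test):

```python
import pandas as pd

def uses_next_bar(signal: pd.Series, price: pd.Series) -> bool:
    """Return True if the signal exactly matches *tomorrow's* price
    change -- the classic T+1 look-ahead leak."""
    future_return = price.shift(-1) - price                 # info from T+1
    aligned = pd.concat([signal, future_return], axis=1).dropna()
    return bool((aligned.iloc[:, 0] == aligned.iloc[:, 1]).all())

price = pd.Series([10.0, 11.0, 13.0, 12.0])
leaky_signal = price.shift(-1) - price                      # built from the future
assert uses_next_bar(leaky_signal, price)                   # leak detected
```

An agent told "find the [T+1] look-ahead bias" will keep generating and running probes like this until one trips, which is what makes the concrete phrasing so much stickier than the vague one.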
The users verify and fix it on the fly with the Claude VM JIT <a href="https://jperla.com/blog/claude-is-a-jit" rel="nofollow">https://jperla.com/blog/claude-is-a-jit</a>
TFA seems to be big on mathematical proof of correctness, but how do you ever know you're proving the right thing?
You do. Even the latest models still frequently write really weird code. The problem is some developers now just submit code for review that they didn't bother to read. You can tell. Code review is more important than ever imho.
I agree with you. But I have to say, it is an uphill battle and all the incentives are against you.<p>1. AI is meant to make us go faster, reviews are slow, the AI is smart, let it go.<p>2. There are plenty of AI maximizers who think we should only be writing design docs and letting the AI go to town on them.<p>Maybe this might be a great time to start a company. Maximize the benefits of AI while you can, without someone who has never written a line of code telling you that your job is going to disappear in 12 months.<p>All the incentives are against someone who wants to use AI in a reasonable way, right now.
I actually agree with good time to start a company. Lot of available software engineers that can actually understand code, AI at a level that can actually speed up development, and so many startups focusing on AI wrapper slop that you can actually make a useful product and separate yourself from the herd.<p>Or you can be a grifter and make some AI wrapper yourself and cash out with some VC investment. So good time for a new company either way.
But it's so BORING. AI gets to do the fun part (writing code) and I'm stuck with the lame bits.<p>It's like watching someone else solve a puzzle, or watching someone else play a game vs playing it yourself (at least that's half as interesting as playing it through)
LLMs are still not good at structurally hard problems, and it is doubtful they ever will be absent some drastic extension (including continuous learning). In the meantime the trick is creating a framework where you can walk them through the exact stages you would use to do it, only it goes way faster. The problem is many people stop at the first iteration that looks like it works and then move on, but you have to keep pushing in the same way you do with humans.<p>Bluntly though, if what you were doing was CRUD boilerplate then yeah, it is going to just be a review fest now, but that kind of work always was just begging to be automated out one way or another.
I am really enjoying making requirements docs in an iterative process. I have a continuous improvement loop where I use the implementation to test out the docs. If I find a problem with the docs, I throw away the implementation, improve the docs, then re-implement. The kind of docs I'm getting are of <i>amazing</i> quality.
Your workplace has chosen to deprive you of the enjoyment that you got from the work. You have a few options: (1) ask for a raise proportional to the percentage of enjoyment that you lost, (2) find a workplace that does not do this, or (3) phone it in (they see you and your craft as something to be milked for cash, so maybe stop letting yourself get milked, and milk them right back, by doing _exactly_ what is asked of you and _not_ more -- let these strategic geniuses strategize using their own brains).
For me the most fun part is getting something that works. Design the goal, but don't micromanage and get lost in the details. I love AI for that, but it is hard to really own code this way. (At least I manually approve every change, or most of them, but still, verifying is hard.)
AI has really sharpened the line between the Master Builders of the world and the Lord Businesses along this question: What, exactly, is the "fun part" of programming? Is it simply having something that works? Or is it the process of going from not having it to having it through your own efforts and the sum total of decisions you made along the way?
I can solve a problem in 10% of the time. Dealing with an issue TODAY, instead of having to put it in the backlog.
It is remarkably effective to have Claude Code do the code review and assign a quality score, call it a grade, to the contribution derived from your own expectations of quality.<p>Then don’t even bother looking at C work or below.
> You do<p>I really want to say: "You are absolutely right."<p>But here is a problem I am facing personally (numbers are hypothetical).<p>I get 10-15 review requests a day from 4 teammates who are generating code by prompting, and I am doing the same, so you can guess we might have ~20 PRs/day to review. Each PR roughly updates 5-6 files and 10-15 lines in each.<p>So you can estimate that I am looking at around 50-60 files, but I can't keep the context of the whole file because the change I am looking at is somewhere in the middle: 3 lines here, 5 lines there, and another 4 lines at the end.<p>How am I supposed to review all of these?
Ideally, you’re working with teammates you trust. On the best teams I’ve worked on, reviews were a formality. Most of the time a quick scan and a LGTM. We worked together prior to the review as needed on areas we knew would need input from others.<p>AI changes none of this. If you’re putting up PRs and getting comments, you need to slow down. Slow is smooth, and smooth is fast.<p>I’ll caveat this with: that’s only if your employer cares about quality. If you’re fine passing that on to your users, you might as well just stop reviewing altogether.
> Ideally, you’re working with teammates you trust.<p>I do trust them, but the code is not theirs, the prompt is. What if I trust them, but because of how much they use LLMs their brains have started becoming lazy and they have started missing edge cases? Who should review the code, me or them?<p>At the beginning, I relied on my trust and did quick scans, but eventually noticed they became uninterested in the craft and started submitting LLM output as-is. I still trust them as good-faith actors, but not their brains anymore (and my own as well).<p>Also, the assumption is based on an ideal team, where everyone behaves in good faith. But this is not the case in corporations and big tech, especially when incentives are aligned with the "output/impact" you are making. A lot of the time, promoted people won't see the impact of their past bad judgements, so why craft perfect code?
Yeah, I agree with you. I’d say they’re not high performers anymore. Best answer I’ve got is find a place where quality matters. If you’re at a body shop it’s not gonna be fun.<p>I do think some of this is just a hype wave and businesses will learn quality and trust matter. But maybe not - if wealth keeps becoming more concentrated at the top, it’s slop for the plebs.
If reviewing has become the bottleneck, the obvious - albeit slightly boring - solution is to slow down spitting out new code, and spend relatively more time reviewing.<p>Just going ahead and piling up PRs or skipping the review process is of course not recommended.
I don't quite follow - are you describing an issue with the way your team has structured PRs? IMO, a PR should contain just enough code to clearly and completely solve "a thing" without solving too much at once. But what this means in practice depends on the team, product, velocity, etc. It sounds like your PRs might be broken up into too small of chunks if you can't understand why the code is being added.
I am saying the PRs I get are around 60-70 lines of change, which is small enough to be considered a single unit (add to this unit tests which must pass with the new change, so we are talking about a 30-line change + 30 lines of unit tests).<p>But when looking at the PR changes, you don't always see the whole picture, because the review subjects (code lines) are scattered across files and methods, and GitHub also shows methods and files partially, making it even more difficult to quickly spot the context around those updated lines.<p>It's a difficult problem, because even if GitHub shows the whole body of the updated method or file, you still don't see the grand picture.<p>For example: A (calls) -> B -> C -> D<p>And you made changes in D. How do you know the side effect on B? What if it broke A?
If the code is well architected, the contract between C and D should make it clear whether changes in D affect C or not. And if C is not affected, then B and A won't be either.
> If the code is well architected<p>Big constraint. Code changes; the initial architecture could have been amazing, but constantly changing business requirements make things messy.<p>Please don't use "in an ideal world" examples :) Because they are singular in the vast space of non-ideal solutions.
> Its difficult problem, because even if GitHub shows whole body of the updated method or a file, you still don't see grand picture.<p>> For example: A (calls) -> B -> C -> D<p>> And you made changes in D, how do you know the side effect on B, what if it broke A?<p>That's poor encapsulation. If the changes in D respect its contract, and C respects D's contract, your changes in D shouldn't affect C, much less B or A.
> That's poor encapsulation<p>That's the reality of most software built in the last 20 years.<p>> If the changes in D respect its contract, and C respects D's contract, your changes in D shouldn't affect C, much less B or A.<p>Any change in D eventually must affect B or A; it's inevitable, otherwise D shouldn't exist in the call stack.<p>Here is how the case I mentioned can happen. Imagine in each layer you have 3 variations: 1 happy path and 2 edge cases. Starting from the lowest layer, the combinations multiply:<p>D: 3, C: 3*D = 9, B: 3*C = 27, A: 3*B = 81<p>Obviously, you won't be writing 81 unit tests for A and 27 for B; you will mock implementations and write enough unit tests to make the coverage good. Because of that mocking, when you update D and add a new case but do not surface the relevant mocking to the upper layers, you end up in a situation where D impacts A, but it's not visible in the unit tests.<p>While reading the changes in D, I can't reconstruct every possible parent caller chain in my brain to ask the engineer to write the relevant unit tests.<p>So the case I mentioned happens; otherwise, in the real world there would be no bugs.
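The mocking failure mode described above can be shown in a minimal Python sketch (the functions `a`, `b`, `d` and the "negative input" edge case are invented for illustration): A's unit test mocks out B, so a new case added to D never surfaces in it, even though the real call chain breaks.

```python
from unittest.mock import patch

def d(x):
    if x < 0:
        raise ValueError("negative input")  # new edge case added to D
    return x * 2

def b(x):
    return d(x) + 1

def a(x):
    return b(x) + 1

# A's unit test mocks out B, so D's new ValueError is invisible here:
with patch(__name__ + ".b", return_value=3):
    assert a(0) == 4        # passes -- the mock hides D entirely

# But the real call chain breaks on the new case:
try:
    a(-1)
    assert False, "expected ValueError from D"
except ValueError:
    pass
```

The mocked test keeps passing after D's change; only an integration test through the real chain (or updating the mocks in every upper layer) would catch it, which is exactly the combinatorial burden the comment describes.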
Leaky abstractions are a thing. You can't just encapsulate your way out of everything.
Tests. All changes must have tests. If they're generating the code, they can generate the tests too.
Who reviews the tests? Again me? That's exactly why I am saying review is a bottleneck, especially with current tooling, which doesn't show second-order impacts of changes, and it's not easy to reason about when a method gets called by 10 other methods with 4 levels of parent hierarchy.
> The problem is some developers now just submit code for review that they didn't bother to read.<p>Can you blame them? All the AI companies are saying “this does a better job than you ever could”, every discussion topic on AI includes at least one (totally organic, I’m sure) comment along the lines of “I’ve been developing software for over twenty years and these tools are going to replace me in six months. I’m learning how to be a plumber before I’m permanently unemployed.” So when Claude spits out something that seems to work with a short smoke test, how can you blame developers for thinking “damn the hype is real. LGTM”?
This is correct. And at this point (and I think you agree?) we have to take that critical thinking skill and stop letting it just happen to us.<p>It might seem hopeless. But on the other hand the innate human BS detector is quite good. Imagine the state of us if we could be programmed by putting billions of dollars into our brains and not have any kind of subconscious filter that tells us, hey this doesn’t seem right. We’ve already tried that for a century. And it turns out that the cure is not billions of dollars of counter-propaganda consisting of the truth (that would be hopeless as the Truth doesn’t have that kind of money).<p>We don’t have to be discouraged by whoever replies to you and says things like, oh my goodness the new Siri AI replaced my parenting skills just in the last two weeks, the progress is astounding (Siri, the kids are home and should be in bed by 21:00). Or by the hypothetical people in my replies insisting, no no people are stupid as bricks; all my neighbors buy the propaganda of [wrong side of the political aisle]. Etc. etc. ad nauseam.
I'm a 99% organic person (I suppose I have tooth fillings) and the new models write code better than I do.<p>I've been using LLMs for 14+ months now and they've exceeded my expectations.
Not only do they exceed expectations, but any time they fall down, you can improve your instructions to them. It's easy to get into a virtuous cycle.
So are you learning a trade? Or do you somehow think you’ll be one of the developers “good enough” to remain employed?
I have a physical goods side hustle already and I'm brainstorming ideas about a trade I can do that will benefit from my programming experience.<p>I'm thinking HVAC or painting lines in parking lots. HVAC because I can program smart systems, and parking lot lines because I can use Google Maps and algos to propose more efficient parking lot designs to existing business owners.<p>There is that paradox where if something becomes cheaper there is more demand, so we'll see what happens.<p>Finally, I'm a mediocre dev that can only handle 2-3 agents at a time, so I probably won't be good enough.
> Can you blame them?<p>Yes I absolutely can and do blame them
I just finished writing a post about exactly this. Software development, as the act of manually producing code, is dying. A new discipline is being born. It is much closer to proper engineering.<p>Like an engineer overseeing the construction of a bridge, the job is not to lay bricks. It is to ensure the structure does not collapse.<p>The marginal cost of code is collapsing. That single fact changes everything.<p><a href="https://nonstructured.com/zen-of-ai-coding/" rel="nofollow">https://nonstructured.com/zen-of-ai-coding/</a>
Our CEO, an expert in marketing, has discovered Claude Code and now has the most open PRs of all the developers, and is pushing for us to « quickly review ». He does not understand why reviews are so slow because it's « the easiest part ». We live in a new world.
> I just finished writing a post about exactly this. Software development, as the act of manually producing code, is dying.<p>It was never that. Take any textbook on software engineering and the focus was never the code, but on systems design and correctness. I'm looking at the table of contents of one (Software Engineering by David C. Kung) and these are a few sample chapters:<p><pre><code> ...
4. Software Requirement Elicitation
5. Domain Modelling
6. Architectural Design
...
8. Actor-System Interaction Modeling
9. Object Interaction Modeling
...
15. Modeling and Design of Rule-Based Systems
...
19. Software Quality Assurance
...
24. Software Security
</code></pre>
What you're talking about was coding, which has never been the bottleneck other than for beginners in some programming languages.
Accountability then
This is the biggest problem going forward. I wrote about the problem many times on my blog, in talks, and as premises in my sci-fi novels.<p>Sitting in your cubicle with your perfect set of test suites, code verification rules, SOPs, and code reviews, you won't want to hear this, but other companies will be gunning for your market, writing almost identical software to yours in the future from a series of prompts that generate the code they want fast, cheap, functionally identical, and quite possibly untested.<p>As AI gets more proficient and is given more autonomy (OpenClaw++), it will also generate directly executable binaries, completely replacing the compiler, making the output unreadable to a normal human, and it may even do this without prompts.<p>The scenario is terrifying to professional software developers, but other people will do this regardless of what you think, and run it in production, and I expect we are months or just a few years away from this.<p>Source code of the future will be the complete series of prompts used to generate the software, another AI to verify it, and an extensive test suite.
If you need to interact with some things on platform.openai.com, you know it is not months away; it is there already now. I had to go through forms and flows there: so buggy and untested, simply broken. They really eat their own dog food. Interacting with support resulted in literally weeks of ping-pong between me and AI-smoothed replies via email to fix their bugs. Terrible.
How do you get an extensive test suite?
Verify? Seems like no one is even <i>reviewing</i> this stuff.
This is really great and important progress, but Lean is still an island floating in space. Too hard to get actual work done building any real world system.
One thing that seems under-discussed in this space is the shift from verifying programs to verifying generation processes.<p>If a piece of code is produced by an agent loop (prompt -> tool calls -> edits -> tests), the real artifact isn’t just the final code but the trace/pipeline that produced it.<p>In that sense verification might look closer to: checking constraints on the generator (tests/specs/contracts), verifying the toolchain used by the agent, and replaying generation under controlled inputs.<p>That feels closer to build reproducibility or supply-chain verification than traditional program proofs.
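One minimal sketch of what "verifying the generator" could look like, in the supply-chain spirit the comment describes: record each agent step and hash the canonicalized trace, so a reviewer can replay generation and confirm it is byte-for-byte reproducible. The step schema here is invented for illustration, not any real agent framework's format:

```python
import hashlib
import json

def hash_trace(steps):
    """Canonical SHA-256 over an ordered list of agent steps
    (prompts, tool calls, edits), like a lockfile for generation."""
    canonical = json.dumps(steps, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

trace = [
    {"step": "prompt", "content": "add bounds check to parser"},
    {"step": "tool", "name": "run_tests", "output": "3 passed"},
    {"step": "edit", "file": "parser.c", "diff": "+ if (n > MAX) ..."},
]

original = hash_trace(trace)
assert hash_trace(list(trace)) == original   # a faithful replay matches
assert hash_trace(trace[:-1]) != original    # a dropped step is detectable
```

Attesting to the trace hash alongside the final code is closer to reproducible builds than to program proof: you verify the process that produced the artifact, not just the artifact.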
The biggest issue right now is most AI tools aren't hooked up appropriately to an environment they can test in (Chrome typically). Replit works extremely well because it has an integrated browser and testing strategy. AI works very well when it has the ability to check its own work.
At the end of the day you need humans who understand the business critical (or safety critical) systems that underpin the enterprise.<p>Someone needs to be held accountable when things go wrong. Someone needs to be able to explain to the CEO why this or that is impossible.<p>If you want to have AI generate all the code for your business critical software, fine, but you better make sure you understand it well. Sometimes the fastest path to deep understanding is just coding things out yourself - so be it.<p>This is why the truly critical software doesn’t get developed much faster when AI tools are introduced. The bottleneck isn’t how fast the code can be created, it’s how fast humans can construct their understanding before they put their careers on the line by deploying it.<p>Ofc… this doesn’t apply to prototypes, hackathons, POCs, etc. for those “low stakes” projects, vibe code away, if you wish.
I think this gets to the heart of it. We’re gonna see a new class of devs & software emerge that only use AI and don’t read the code. The devs that understand code will still exist too, but there is certainly an appetite for going faster at the cost of quality.<p>I personally find the “move fast and break things” ethos morally abhorrent, but that doesn’t make it go away.
You can use AI to make a reviewer's job much easier. Add documents, divide your MR into reviewable chunks, et cetera.<p>If reviewing is the expensive part now, optimize for reviewability.
> Where this leads is clear. Layer by layer, the critical software stack will be reconstructed with mathematical proofs built in. The question is not whether this happens, but when.
(answering the title) The lusers
State-sponsored hackers' AI will verify it.
I believe the old ways, which agile destroyed, will come back because the implementation isn’t the hardest part now. Agile recognized correctly that implementation was the hard part to predict and that specification through requirements docs, UML, waterfall, etc. were out of date by the time the code was cooked.<p>I don’t think we’ll get those exact things back but we will see more specification and design than we do today.
Agile was a response to the coordination problems in certain types of firms. Waterfall persisted in organizations that have and require a more traditional bureaucratic structure. Waterfall makes sense if you are building a space probe or an unemployment insurance system, agile makes sense if you are trying to find product market fit for a smartphone app.
LLM generated code combined with formal verification just feels like we're entering the most ridiculous timeline. We know formal verification doesn't work at scale, hence we don't use it. We might get fully vibe coded systems but we sure as hell won't be able to verify them.<p>The collapse of civilisation is real.
Because of the scale of generated code, often it is the AI verifying the AI's work.
I of course cannot say what the future holds, but current frontier models are - in my experience - nowhere near good enough for such autonomy.<p>Even with other agents reviewing the code, good test coverage, etc., both smaller - and every now and then larger - mistakes make their way through, and the existence of such mistakes in the codebase tends to accelerate even more of them.<p>It for sure depends on many factors, but I have seen enough to feel confident that we are not there yet.
So who's verifying the AI doing the verifying or is it yet another AI layer doing that? If something goes wrong who's liable, the AI?
You have two paths: code tests, and AI review, which is just a vibe test of the LGTM kind; you should be using both in tandem. Code testing is cheap to run, and you can build more complex systems if you apply it well. But ultimately it is the user or usage that needs to direct testing, or you pay the price for formal verification. Most of the time it is usage; time passing reveals failure modes, hindsight is 20/20.
The "Nearly half of AI-generated code fails basic security tests" link provided in this piece is not credible in my opinion. It's a <i>very</i> thinly backed vendor report from a company selling security scanning software.
No one does currently, and its going to take a few very painful and high profile failures of vital systems for this industry to RELEARN its lesson about the price of speed.<p>In fact it will probably need to happen a few times PER org for the dust to settle. It will take several years.
Sure but industry cares about value (= benefit - price), not just price. Price could be astronomical, but that doesn’t matter if benefit is larger.
I feel like people used to talk about nines of uptime more. As in more than one. These days we've lost that: <a href="https://bsky.app/profile/jkachmar.com/post/3mg4u3e6nak2p" rel="nofollow">https://bsky.app/profile/jkachmar.com/post/3mg4u3e6nak2p</a><p>I recall a time, maybe around 2013-2017, when people were talking about 4 or 5 nines. But sometime around then the goalposts shifted, and instead of trying to make things as reliable as possible, it started becoming more about seeing how unreliable they can get before anyone notices or cares. It turns out people will suffer through a lot if there's some marginal benefit - remember what personal computers were like in the 1990s before memory protection? Vibe coding is just another chapter in that user-hostile epic. Convenient reliability like this author describes (if it can be achieved) might actually make things better? But my money isn't on that.
I'm in the process of building v2.0 of my app using opus 4.6 and largely agree with this.<p>It's pretty awesome but still does a lot of basic idiotic stuff. I was implementing a feature that required a global keyboard shortcut and asked opus to define it, taking into account not to clash with common shortcuts. It built a field where only one modifier key was required. After I pointed out that this was not safe, since users could just define CTRL+C as the shortcut, and that we needed more safeguards and at least two modifier keys, I got the usual "you're absolutely right" and it proceeded to require two modifier keys. But then it also created a huge blacklist of common shortcuts like copy, cut, paste, print, select all, etc. - basically a bunch of single-modifier shortcuts. Once I mentioned that since we're already forcing two modifier keys that list is useless, it said I was right again and fixed it.<p>The counterpoint to this idiocy is that it's very good overall at a lot of what is (in my mind) much more complicated stuff. It's a .NET app, and stuff like creating models, viewmodels, usercontrols, and setting up the entire hosting DI with pretty much all best practices for .NET it does pretty awesomely.<p>tl;dr is that training wheels are still mandatory imho
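The safeguard described above is simple enough to sketch in a few lines. This is a hypothetical illustration (names and structure are mine, not from the commenter's .NET app): requiring two or more modifiers already rules out every single-modifier system shortcut, making the blacklist redundant.

```python
# Hypothetical sketch of the "at least two modifiers" safeguard.
# A hotkey is a set of key names, e.g. {"ctrl", "alt", "k"}.
MODIFIERS = {"ctrl", "alt", "shift", "win"}

def is_valid_hotkey(keys: set[str]) -> bool:
    """Accept a global hotkey only if it has exactly one regular key
    and at least two modifier keys."""
    mods = keys & MODIFIERS
    base = keys - MODIFIERS
    # Two modifiers guarantee no clash with single-modifier
    # shortcuts like Ctrl+C or Ctrl+V, so no blacklist is needed.
    return len(base) == 1 and len(mods) >= 2
```

With this rule, {"ctrl", "c"} is rejected and {"ctrl", "alt", "c"} is accepted, which is exactly why the generated blacklist was dead code.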
Also AI.
No one really. Code is for humans to read and for machines to compile and execute. LLMs are enabling people to just write the code and not have anyone read it. It's solving a problem that didn't really exist (we already had code generators before LLMs).<p>It's such an intoxicating copyright-abuse slot machine that a buddy who is building an OCaml+htmx tree editor told me "I always get stuck and end up going to the LLM to generate code. Usually when I get to the HTML part." I asked if he had tried a debugger before that; he said "that's a good idea".
This is something I've been wondering about...<p>If boilerplate was such a big issue, we should have worked on improving code generation. In fact, many tools and frameworks already did this:<p>- Rails has fantastic code generation for CRUD use cases<p>- IntelliJ IDEs have long been able to do many types of refactors and class generation that covered some of the boilerplate<p>I haven't reached a conclusion on this train of thought yet, though.
The same ones who verify it when I write it: my customers in production! /s (well, maybe /s)
No one wants to believe this, but there will be a point soon when an AI code review meets your compliance requirements to go to production. Is that 2026? No. But it will come.
In his latest essay, Leonardo de Moura makes a compelling case that if AI is going to write a significant portion of the world’s software, then verification must scale alongside generation. Testing and code review were never sufficient guarantees, even for human-written systems; with AI accelerating output, they become fundamentally inadequate. Leo argues that the only sustainable path forward is machine-checked formal verification — shifting effort from debugging to precise specification, and from informal reasoning to mathematical proof checked by a small, auditable kernel. This is precisely the vision behind Lean: a platform where programs and proofs coexist, enabling AI not just to generate code, but to generate code with correctness guarantees. Rather than slowing development, Lean-style verification enables trustworthy automation at scale.
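The "programs and proofs coexist" idea can be made concrete with a toy Lean 4 example (the names here are illustrative, not taken from the essay): the function and a theorem about it live side by side, and the same small kernel checks both.

```lean
-- A program and a machine-checked guarantee about it, in one file.
-- `myMax` is a toy stand-in for generated code.
def myMax (a b : Nat) : Nat :=
  if a ≤ b then b else a

-- The proof obligation: the result is never smaller than `a`.
theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax
  split
  · assumption           -- case a ≤ b: the goal is a ≤ b
  · exact Nat.le_refl a  -- case ¬(a ≤ b): the goal is a ≤ a
```

If an AI regenerates `myMax` incorrectly, the theorem simply fails to check; the trusted base is the kernel, not the generator.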
That was a prolix and meandering essay. Next time I’d rather just look at the prompts and hand edits that went into writing it rather than the final artifact; much like reviewing the documentation, spec, and proof over the generated code as extolled by the article.