The real issue, in my view, is not AI itself.<p>The problem is a management pattern:
removing people and organizational slack because they don’t generate immediate profit,
and then expecting the knowledge to still be there when it’s needed.<p>Short-term cost cutting leads to fewer junior hires,
and removes the slack that experienced engineers need in order to teach.
As a result, tacit knowledge stops being transferred.<p>What remains is documentation and automation.<p>But documentation is not the same as field experience.
Automation is not the same as judgment.
Without people who have actually worked with the system,
you end up with a loss of tacit knowledge—and eventually, declining productivity.<p>AI is following the same pattern.<p>What AI is being sold as right now is not really productivity.
In many domains, productivity is already sufficient.
What’s being sold is workforce reduction.<p>The West has seen this before, especially in the case of General Electric.<p>GE pursued aggressive short-term financial optimization,
cutting costs, focusing on quarterly results, and maximizing shareholder returns.
In the process, it hollowed out its own long-term capabilities.
It effectively traded its future for short-term gains.<p>The same mindset is visible today.<p>The core problem is that decision-makers—often far removed from actual engineering work—
believe that tacit knowledge can be replaced with documentation, tools, and processes. It cannot.<p>Tacit knowledge comes from direct experience with real systems over time.
If you remove the people and the learning pipeline,
that knowledge does not stay in the organization. It disappears.
> removing people and organizational slack<p>You are spot on w.r.t. every assertion you've made. When bean-counters took over the ecosystem they optimised immediate profitability over everything else. Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time. There's no room for experimentation, repair, or anything else.<p>I've commented about the lack of slack several times here on HN, because when I notice a broken system nowadays, 90% of the time it's due to a lack of slack in the system to absorb short-term shocks.
The problem is, in the minds of these people, 'firing at 100% all the time' generally means doing busywork and/or thinking of ways to cheat/manipulate their customers and the market for maximum gain while delivering minimum value. I would have loved to be 100% engaged working on solving real problems in honest ways at some of my past jobs, but alas, MBA/marketing leadership, which has taken over much of tech, has very little interest in actually building good things and solving real problems in honest ways.
I think the bean counters get a bad rap for this a bit unfairly. The past century has seen more progress in knowledge and technology than the rest of human history combined. The world and business environment are changing too rapidly to make longtermist thinking practical.<p>Few care if you have a lifetime warranty and excellent service or replacement parts if the majority will upgrade in a few years! Mature technologies increasingly become cheaply available as services, e.g. laundry, food, transportation. That further reduces demand on production, as many can get by with the bare minimum and don't need the highest quality, longest lasting appliances. Software is even more ephemeral and specialized.<p>Developing education and training pipelines is wasted money if the skills you need are constantly changing! There is plenty of "slack" in the workforce so this works just fine in most cases - somebody will learn what they need to get paid. There are very few fields where qualified worker shortages are a real problem.<p>R&D can be outsourced or bought and subsidized by the government in universities, so why do everything yourself? Open source software has even further muddied the waters. Applications have only a limited lifetime before being replicated and becoming free products (this has only been intensified by the introduction of AI), so companies develop services instead.<p>Technology and knowledge deepening and rapidly becoming more specialized makes the monolithic corporation much less practical, so companies also need to specialize in order to effectively compete. Going too far in the name of efficiency can destroy core competencies, but moving away from the old model was necessary and rational.
> R&D can be outsourced or bought and subsidized by the government in universities, so why do everything yourself?<p>Because some problems that companies in very specialized industries work on are so special that hardly anyone outside the industry will even have heard of them.<p>Additionally, many problems companies have where research would make sense are not the kind of problems that are a good fit for universities.
Those fields still develop in-house expertise and world-leading products. General Electric was cited above, but their turbine engine division is producing the most fuel-efficient, reliable, and lowest TCO aircraft engines there have ever been. The materials science and engineering expertise needed to do this isn't something you can find in a freshly-graduated university student.<p>Products like jet engines, though, are still those where quality matters. They are so costly that there's room in the finances to deliver it. Unlike household appliances, where consumers make decisions mostly on the basis of price and being $5 cheaper than the competition is what will get you the sale even if it means using plastic instead of cast or forged metal parts.
> <i>Unlike household appliances, where consumers make decisions mostly on the basis of price and being $5 cheaper than the competition is what will get you the sale even if it means using plastic instead of cast or forged metal parts.</i><p>A part of this is that consumers usually don't have very good information about products like that. I would almost always pay twice as much for an appliance that's going to last three times as long, but I usually can't find a review that's based on a teardown and rebuild or testing to destruction.<p>Aircraft engines are subjected to both.
> Unlike household appliances, where consumers make decisions mostly on the basis of price and being $5 cheaper than the competition is what will get you the sale even if it means using plastic instead of cast or forged metal parts.<p>Some of this seems reverse causal to me. There were many consumers interested in options other than a race to the bottom. I certainly remember 90s Consumer Reports-era consciousness of consumers trying to find the best products <i>as</i> they all seemed to race to the bottom.<p>The irony seems to be that now that GE has sold GE Appliances, the brand has been returning to higher quality and cutting fewer corners, no longer beholden to activist US shareholders who wanted slightly higher dividends each quarter. It feels like only a matter of time before Haier finishes the next steps in the Lenovo playbook, stops paying GE to license their brand, and stops giving credit to a US company that stopped caring about consumers and consumer product quality decades ago.
Not quite; for wide-bodies at least RR pips GE for fuel-efficiency, but there’s not much in it for the latest generation of power plants.
"The world and business environment are changing too rapidly to make longtermist thinking practical." Tell that to the Chinese...
Universities don't do product-oriented research. They do more general research. And also, they should not do product-oriented research; that is the companies' role.<p>And universities' research capabilities are being destroyed right now, too.
I believe private equity ownership represents this in an aggressive form. The 2-and-20-percent takes that PE usually mandates as part of their purchase agreement mean that they are highly, highly incentivized to maximize short-term "wins" over long-term survival.<p>I think Chesterton and Taleb also had pretty reasonable things to say about understanding a system before you make changes, and about fragile/anti-fragile systems.
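For readers unfamiliar with "2 and 20": that usually means a 2% annual management fee on committed capital plus 20% of any profits (carried interest). A toy sketch of that fee math, with entirely made-up numbers, shows why short-term "wins" pay out regardless of what happens to the company later:<p><pre><code># Toy "2 and 20" fee math -- all numbers are illustrative, not real data.
committed = 100_000_000      # fund size, in dollars
years = 5                    # holding period
exit_value = 160_000_000     # proceeds when the company is sold

mgmt_fees = 0.02 * committed * years   # 2% per year on committed capital
profit = exit_value - committed
carry = 0.20 * max(profit, 0)          # 20% of gains ("carried interest")

# Fees accrue on the exit price, not on the company's long-term survival.
print(mgmt_fees + carry)               # 22,000,000
</code></pre>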
> Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time<p>Not just that, you have to always be doing less for more gains. Real work is bad work. Shrinkflation good. I don't know what it is if not a pure scammer mindset.
I’ll note that at the end of the last century I worked at IBM Research, which had a budget of 6 billion dollars. Management was trying very hard to get a better return on that investment. Even today IBM, though often ridiculed in the tech space (sometimes they do deserve it), spends a lot on R&D.
Lucent at the same time went through the same issue: how to monetise Bell Labs.<p>Bell Labs' greatest work came out when AT&T was a monopoly. Once they were broken up (1984), they started feeling the pain.<p>When the Lucent spinoff took place, the new entities had no monopoly money to fund unconstrained research, while management's behaviour never changed.<p>I don't know how BL fared under Alcatel and now Nokia, but I haven't heard of anything interesting for years.
Did anything come out from those billions?
> Did anything come out from those billions?<p>Per wikipedia:<p><pre><code> IBM employees have garnered six Nobel Prizes, seven Turing Awards,
20 inductees into the U.S. National Inventors Hall of Fame, 19 National Medals of Technology,
five National Medals of Science and three Kavli Prizes. As of 2018,
the company had generated more patents than any other business in each of 25 consecutive years.</code></pre>
> the company had generated more patents than any other business in each of 25 consecutive years.<p>A couple things about those patents, from a former IBMer who earned quite a few in his time there.<p>First, not all patents are created equal. Most of those IBM patents are software-related, and for pretty trivial stuff.<p>Second, most of those patents are generated by the rank-and-file employees, not research scientists. The IBM patent process is a well-oiled machine, but they ain't exactly patenting transistor-level breakthroughs thousands of times a year.
Why do you <i>need</i> to generate transistor-level breakthroughs multiple times a year? Those breakthroughs are hard to generate, but they're important and industry-spanning. The problem is we've mostly stopped generating them.
I wasn't saying anything about that, I was just pointing out that yes, IBM produces a ton of patents, but they're mostly trivial junk that regular employees generate en masse in order to earn accomplishments and make up for the insultingly low bonuses.
> they're mostly trivial junk that regular employees generate en masse in order to earn accomplishments and make up for the insultingly low bonuses<p>We did that at Meta and Amazon too (for polycarbonate puzzle pieces, with no monetary award at all!). Every now and then something meaningful came out of it.
I also worked (briefly, as an intern) at IBM and IBM’s management also sometimes undermined the R&D that happened at the company.<p>I started at the tail of one research group’s mass exodus. It was like a bomb had gone off; the people left behind were trying to pick up the pieces. In essence, this group developed a sophisticated new technique, which the company urged them to commercialize. Pivoting to commercialization was a big effort, and not naturally within the expertise of this group, but they did it, largely at the expense of their own research productivity—for several years. They even hired programmers (ie, not people who are primarily computer scientists) and got it done. But just before launch, IBM pulled the plug.<p>This infuriated the researchers in the group. Keep in mind that career advancement in research is largely predicated on producing new research. In effect, IBM asked people to take a time out and then punished them for agreeing to do it. The whole group was extremely demoralized. Google was the largest beneficiary of this misstep.<p>I also had a similar, frustrating experience working for Microsoft, so it’s not just IBM, but the same dynamics were at work: bean counters asking researchers to commercialize something and then axing a project as it becomes deliverable.<p>If AI replaces any role in the company of the future, please let it be the managerial class.
The thing is, Nobel Prizes and other awards don't pay the bills.<p>Patents do, but in most cases it's trivial patents or patents for a "mutually assured destruction" portfolio (aka, you keep them in hand should someone ever decide to sue you).<p>That's a fundamental problem with how the Western sphere prioritizes and funds R&D. Either it has direct and massive ROI promises (that's how most pharma R&D works), some sort of government backing (that's how we got mRNA - pharma corps weren't interested, or how we got the Internet, lasers, radar and microwaves) or some uber wealthy billionaire (that's how we got Tesla and SpaceX, although government aids certainly helped).<p>All while we are cutting back government R&D funding in the pursuit of "austerity", China just floods the system with money. And <i>they are winning the war</i>.
mRNA is not a good example. If anything, it's a demonstration of why the Western capitalist model is superior to anything else. Most of the mRNA research was funded by venture capital as a high-risk high-reward investment.<p>In the world of government-sponsored research, mRNA likely would have been passed over in favor of funding research with more assured results.
Every year they grant prizes. If hardly anyone is doing core R&D because of cost cutting, there is a higher chance that those doing even the smallest amount of R&D get the prizes.<p>A Nobel in 2026 doesn't carry the same weight as a Nobel in 1955.
Toshiba, IBM and Siemens had a DRAM joint development program from 1993 to 1998. Several generations of DRAM were developed there. Also, while IBM exited the DRAM business, the knowledge survived in Rambus to an extent.
> When bean-counters took over the ecosystem [...] in their mind, every part of the system needs to be firing at 100% all the time.<p>This is only fair, because they themselves are firing at 100% all the time IYKWIM ;)
I haven't read this book but I see it often mentioned in contexts like this. It was written in 2001 and I think its synopsis still stands.<p>Slack by Tom DeMarco (2001)<p><a href="https://www.goodreads.com/book/show/123715.Slack" rel="nofollow">https://www.goodreads.com/book/show/123715.Slack</a>
Engineers seem to think business people don’t know what they are doing, but if your post were true, then companies would add slack to outperform their competitors.<p>The broken system likely doesn’t have enough business impact to justify the investment to maintain it.
Adding slack works over years.<p>Cutting slack gets you quarterly bonuses.<p>When you plan on working 3-5 years at a single company, you don't care if it crashes and burns a month after you leave, just before you move on to burn down the next one.<p>Conversely, we see the same dynamic with engineers: they build stuff to prop up their CVs and don't care whether the company can still support the crap they did after they leave.
> companies would add slack to outperform their competitors.<p>I think if they did this they'd get buried by the market. Your slack is someone else's opportunity to undercut you. It's a systemic problem, it's in every individual's self interest to work towards instability.
This would be true if everyone was optimizing for the same thing.<p>It's not terribly difficult to imagine someone optimizing for, say, a bonus at the end of the year.
They also took out all the <i>quality</i>, though in pure business terms one can argue that's a kind of "slack" by itself.<p>The beancounters have cut all the corners on physical products that they could find. Now even design and manufacturing is outsourced to the lowest bidder, a bunch of monkeys paid peanuts to do a job they're woefully unqualified for.<p>And the end result is just a market for lemons. Nobody trusts products to be good anymore, so they just buy the cheapest garbage.<p>Which, inevitably, <i>is the stuff sold directly by Chinese manufacturers</i>. And so the beancounters are hoisted by their own petard.<p>We've seen it happen to small electronics and general goods.<p>We're seeing it happen right now to cars. Manufacturers clinging on to combustion engines and cutting corners. Why spend twice the money on a western brand when their quality is rapidly declining to meet BYD models at half the price?<p>---<p>And we're seeing it happen to software. It was already kind of happening before AI; so much of software was enshittifying rapidly. But AI is just taking a sledgehammer to quality. (Setting aside whether this is an AI problem or a "beancounters push everyone into vibecoding" problem.)<p>E.g. Desktop Linux has always been kind of a joke. It hasn't gotten <i>better</i>, the problems are all still there. Windows is just going down in flames. People are jumping ship now.<p>SaaS is quickly going that way as well. If it's all garbage, why pay for it? Either stop using it or just slop something together yourself.<p>---<p>And in the background of this something ominous: Companies can't just pivot back to higher quality after they've destroyed all their inhouse knowledge. So much manufacturing knowledge is just gone; starting a new manufacturing firm in the west is a staffing nightmare. Same story with cars: China has the EV knowledge. And software's going the same way. These beancounters are all chomping at the bit to fire all their devs and replace them with teenagers in the developing world spitting out prompts. They can't move back upmarket after that's done.<p>Even when the knowledge still lives, when the people with the skills required have simply moved to other industries and jobs, who's going to come back? Why leave your established job for the former field, when all it takes is the management or executive in charge being replaced by another dipshit beancounter for everyone to be laid off again?
> <i>E.g. Desktop Linux has always been kind of a joke. It hasn't gotten better, the problems are all still there.</i><p>Desktop Linux <i>has</i> gotten better, though much of the improvement happened decades ago. I believe the first person to prematurely declare "the year of Linux on the desktop" was Dirk Hohndel in 1999: <a href="https://www.linux.com/news/23-years-terrible-linux-predictions/" rel="nofollow">https://www.linux.com/news/23-years-terrible-linux-predictio...</a><p>And speaking as someone who was running desktop Linux in 1999, I remember just how bad it was. Xfce, XFree86 config files, and endless messing around with everything. The most impressive Linux video game of 2000 was <i>Tux Racer</i>.<p>But over the next 10 years, Gnome and KDE matured, X learned how to auto-detect most hardware, and more and more installs started working out of the box.<p>By the mid-2010s, I could go to Dell's Ubuntu Linux page and buy a Linux laptop that Just Worked, and that came with next-day on-site support. I went through a couple of those machines, and they were nearly hassle-free over their entire operational life. (I think one needed an afternoon of work after an Ubuntu LTS upgrade.)<p>The big recent improvement has been largely thanks to Valve, and especially the Steam Deck. Valve has been pushing Proton, and they're encouraging Steam Deck support. So the big change in recent years is that more and more new game releases Just Work on Linux.<p>Is it perfect? No. Desktop Linux is still kind of shit. For example, Chrome sometimes loses the ability to use hardware acceleration for WebGPU-style features. But I also have a Mac sitting on my desk, and that Mac also has plenty of weird interactions with Chrome, ones where audio or video just stops working. The Mac is slightly less shit, but not magically so.
> Desktop Linux has gotten better<p>This is on me for being a bit too snarky.<p>So yes, Desktop Linux has "gotten better". What it hasn't done is <i>solve any of the systemic problems</i>.<p>The Open Source development quirks that created the shitshow of 1999 are still here. Gnome is <i>better</i> but still suffers massively from mainstream features being declared stupid by the maintainers. (A power button that turns off the machine? Heretical.)<p>Valve's recent successes are pretty illustrative here. They used their money to directly hijack the projects their products rely on.<p>As for the comparison, Windows is not without this "slow" improvement either. 95 and 98 are lightyears behind contemporary Windows in so many ways. Until quite recently it still made about as much sense to use Linux as it did back then; not much.<p>Take your Linux laptop example. Sure, Linux finally kind of worked on some specific models that were tested for it. Meanwhile, Windows had moved from "it'll work with some mucking about with drivers" to "it works universally, on practically all hardware". Really, by the mid-2010s Windows had finally become quite tolerant of you changing the hardware.<p>Hence my original point: Desktop Linux hasn't really caught up with Windows in any meaningful sense. Windows is just nose-diving into the ground in the last few years.
> The Open Source development quirks that created the shitshow of 1999 are still here. Gnome is better but still suffers massively from mainstream features being declared stupid by the maintainers. (A power button that turns off the machine? Heretical.)<p>Gnome has been chopping off its own limbs because it reduces weight. All in the name of simplicity. I think they are not the best example of Open Source development.<p>KDE on the other hand had a hard fall once, basically recovered, and invested long term in Plasma, and that has paid off handsomely. Today, it is one desktop that I can say is closest to typical/standard desktop paradigms out of the box while retaining a high degree of flexibility for those who choose to customise it. I have been using KDE on Fedora for a while now and it has been basically solid.
For some reference: back in the Ubuntu 6 days, around 2006, I switched. It took me 2 weeks to get X.Org to run with my Nvidia card at the time. 2 weeks of messing with config files. I only persisted because I was so sick of Windows.
> <i>And in the background of this something ominous: Companies can't just pivot back to higher quality after they've destroyed all their inhouse knowledge. (...) They can't move back upmarket after that's done.</i><p>The knowledge isn't the problem. It can be quickly regained, and the progress of science and technology often offers new paths to even better quality, which limits the need to recover details of the old process.<p>The actual problem is, there is no market to go up to anymore. Once everyone is used to garbage being the only thing on offer, and has adjusted to cope with it, you cannot compete on quality anymore. Customers won't be able to tell whether you're honest, or just trying to charge suckers for the same garbage with a nicer finish, like every other brand that promises quality. It would take years of effort and low sales to convince the customers to start believing you're the real deal, which (as beancounters will happily tell you) you cannot afford. And even if you could, how are you going to convince people you're not going to start cutting corners again a few years down the line? In fact, how do you convince yourself? If it happened once, if it keeps happening everywhere across the whole economy, it's bound to happen to your business too.
Wrong on the first point, right on the second. Institutional knowledge can't be easily regained. To build up the knowledge to, say, make a transistor, you need a bunch of people experimenting with a bunch of things. Published scientific papers and patents will get you part of the way there, but the final stretch is still up to you, including things like which equipment to buy, purity of supplies (and where to get them!), how long the chip needs to be bombarded by each kind of particle, how much air the cleanroom needs to move. All the tiny details. You have to discover them by trial and error. <i>Actual chip manufacturing companies</i> have found themselves unable to get good yield until they copied the floor plan of another working fabrication plant, and they still have no idea why that mattered, but that's an extreme case. Maybe nobody expected minuscule air contamination from one process step to affect another nearby process step, and in the original plan they were farther apart.<p>Yes, if you want to wire a neighborhood for internet you can skip DSL and go straight to fiber. That's not the problem. The problem is that nobody in your company knows how deep to put the fiber to minimize problems, how much redundancy is needed, how strong the mechanical armor around the fiber needs to be, how many fibers per cable to meet future capacity needs without excessive costs, which landlords are friendly to you, nobody has the right connections to city hall to get digging permits approved expediently, and so on.
A 10-year warranty on appliances instead of 1 would show the manufacturer was serious about quality!
> It can be quickly regained<p>I'm not sure what you mean with this?<p>Sure, hypothetically e.g. any western car manufacturer could poach a bunch of BYD employees. <i>But it's not really practical for most businesses</i>.<p>> The actual problem is, there is no market to go up to anymore.<p>This is the "Market for Lemons" problem, yes.<p>It's less of a problem than you might think. Convincing the entire wider world that you're legitimate is a problem. One made infinitely worse by store marketplaces like Amazon preferring to push "aqekj;bgrsabhghwjbgawrjwsraG" brand garbage.<p>So you just don't. The trick is to start small. The smallest you can sustain. (This doesn't work for cars, or anything that's sufficiently complex. You won't be taking on Salesforce.)<p>But so long as you can find a market niche where there's demand for quality, you can carve out a living, and from there, scale up.<p>The problem with <i>that</i> is twofold: Venture Capital has supplanted other forms of investment and "small business generating single digit millions in revenue" is utterly unappealing to VCs, even though the investment required is downsized accordingly.<p>And problem #2: The cost of starting a business is too high right now. Real estate and cost of living just make it unaffordable to even try. + Healthcare if you're in the US.
So, is that a market failure, or is that the market functioning as intended?
I think you’re not blaming political leadership enough. NAFTA, and other programs were always going to lead to the state of affairs we have now. This was a choice. Blaming greed is like blaming gravity.
> <i>Desktop Linux has always been kind of a joke. It hasn't gotten better, the problems are all still there.</i><p>Desktop Linux mostly works these days. It does everything most regular people would want of it, with zero fuss. Including playing games. In some respects, it's easier to use than Mac or Windows.<p>When it has trouble with some things, one must remember neither Mac nor Windows is perfect, and they can be extremely frustrating at times.<p>Time to update those prejudices!
> E.g. Desktop Linux has always been kind of a joke<p>And yet I run it every day, and it's by FAR the most enjoyable platform and tooling to use (for me).
> optimised immediate profitability over everything else<p>Which is the usual complaint: that businesses are focused on short-term results, sacrificing long-term results.<p>If that were generally true, the stock market would be going down steeply, not up, as stock prices are based on expectations of future profits.
Are stock market profit expectations mostly long term? Stock markets have been wrong before.<p>Besides that, the U.S. stock market went up over several decades while manufacturing capabilities were transferred overseas. That has had, and will continue to have, domestic ramifications that might not be captured by investor profits.
> as stock prices are based on expectations of future profits.<p>I thought stock prices were based on what I thought I could sell them for next week.
> <i>Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.</i><p>> <i>You are spot on w.r.t. every assertion you've made.</i><p>Huh? What happened to the concept of "debate" on HN? It's just a bunch of people agreeing with each other. Yet the data doesn't support OP's thesis at all.<p>Here's a chart of the rise in productivity per hour worked in the United States since 1947. It's a steady linear increase every single year: <a href="https://fred.stlouisfed.org/series/OPHNFB" rel="nofollow">https://fred.stlouisfed.org/series/OPHNFB</a><p>Yours is the type of story big company workers tell themselves to feel important while refusing to learn anything new and never taking any risks. But the truth is 99.999% of companies are not doing anything that unique or complex. Most companies are not ASML.<p>If I had a nickel for every time I've heard someone justify their do-nothing position within a giant bureaucracy while saying the phrase "institutional knowledge" I'd be rich. This is just a sign of a poorly run giant company full of engineers building esoteric and overly complex in-house solutions to already-solved problems as job security.<p>The truth is all of this "institutional knowledge" is worthless in the face of disruption, and it has a half-life that's getting shorter every day.<p>Everybody talks shit about global just-in-time supply chains and specialization... but just because we had a fake toilet paper shortage for a few months during a 100-year global pandemic doesn't mean running things like it's 1947 for the last 70 years would have been better. You enjoy a much higher quality of life today due to these "evil" JIT supply chains, which it turns out are far more durable than people want to claim.
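If you want to poke at that FRED series yourself, here's a minimal sketch, assuming the pandas-datareader package (the series ID comes from the link above; everything else is illustrative):<p><pre><code># Pull FRED series OPHNFB (nonfarm business output per hour, quarterly)
# and look at year-over-year growth. Assumes `pip install pandas-datareader`.
import pandas_datareader.data as web

prod = web.DataReader("OPHNFB", "fred", start="1947-01-01")["OPHNFB"]

yoy = prod.pct_change(4) * 100    # quarterly series, so 4 periods = one year
print(yoy.describe())             # distribution of annual growth rates
print((yoy.dropna() > 0).mean())  # share of quarters with positive YoY growth
</code></pre>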
US aggregate productivity metrics fail to address this nuance. There is a fundamental difference in abstraction layers between a macro-system becoming more efficient and an individual enterprise experiencing operational failure. As a software engineer, I find distinguishing between these layers critical. Your argument is akin to claiming that because the Google Play Store sees a higher volume of app releases (increased productivity), the intrinsic quality of individual apps has naturally improved.<p>In this analogy, the individual app represents a company, and the Play Store represents the broader US market. Silicon Valley’s highly liquid labor market allows talent to flow freely, which opens up and elevates the baseline of the overall market. However, that is entirely distinct from the fact that individual companies are suffering severe drops in internal quality and productivity.<p>Furthermore, in software architecture, 'productivity' and 'quality' are rarely directly proportional. With AI coding tools, we can ship an app several times faster. Historically, it took me three months to write 60,000 lines of code; recently, I am generating that same volume in just two weeks. My productivity has undeniably spiked, but can I confidently claim the code quality is better than when I manually scrutinized every single line?<p>The real issue is not whether the broader economy has grown more productive since 1947. The core issue is whether a specific organization bleeds capability when the exact people who understand its real-world constraints, failure modes, and operational history walk out the door.<p>Both realities can co-exist: national productivity can trend upwards, while individual companies simultaneously suffer operational regressions due to botched migrations, failed refactors, or the loss of tacit knowledge.<p>I agree that 'institutional knowledge' is sometimes weaponized to defend unnecessary complexity. However, the opposite fallacy is treating all localized, domain-specific knowledge as worthless. While some of it is merely job-security folklore, the rest is literally the only surviving documentation of why the system functions in the first place.
> In many domains, productivity is already sufficient. What’s being sold is workforce reduction.<p>This is a blind spot for many. People working on entrepreneurial projects need to build a lot. They start with nothing. They need (for example) features. There's a lot to do.<p>Most firms are not that. Visa, Salesforce, LinkedIn or whatnot. They have a product. They have features. They have been at it for a while. They also have resources. They are very often in a position of finding nails for a "write more software" hammer.<p>It's unintuitive because they all have big wishlists and to-do lists and A/B testing systems for pouring software into, but...<p>If there were known <i>"make more software, make more money"</i> opportunities available, they would have already done them.<p>Actual growth and new demand needs to come from arenas <i>outside</i> of this. E.g. companies that suck at software (either making or acquiring it) might be able to get the job done.<p>The Problem, bringing this back to the article, is fungibility. A lot of this "human capital" stuff cannot be easily repackaged. It's a "living" thing. Talent and skills pipelines can be cut off, and vanish.<p>A danger in AI coding (and other fields) is that it leverages preexisting human capital and doesn't generate any for later.
> doesn't generate any for later.<p>"any" is quite an assumption.
> If there were known "make more software, make more money" opportunities available, they would have already done them.<p>Sometimes they're available, but not palatable, when the opportunity could threaten their <i>existing</i> investments or patterns. That might mean "self-cannibalism", or changing the ecology so that the main product niche is threatened.<p>Then those opportunities are ignored, or actively worked-against via lobbying, embrace-extend-extinguish, etc.
Ok... but this just generalizes into the "known things" type.<p>Whether the reason is strategic (like your example), internal politics, or insufficient knowledge... the point is that there is a local equilibrium, and most mature firms are at this equilibrium.<p>More resources via AI, at first order, go after the diminishing-returns part of the curve... which is a cliff, especially for highly resourced firms topping the S&P 500.<p>A lot of AI optimists' mental models of the economy do not account for this stuff at all.<p>"Save time/money" outcomes are not similar at all to "make more stuff" outcomes. Firing employees does free up labour... but reutilizing this labour is non-trivial... as this article demonstrates quite well.
> The core problem is that decision-makers—often far removed from actual engineering work—believe that tacit knowledge can be replaced with documentation, tools, and processes. It cannot.<p>I am not so certain:<p>For example, I think that a lot of my knowledge about the system that I work on could be documented, and based on this documentation someone new could take over the system.<p>The problem, rather, is: the volume of documentation that I would have to write would be <i>insane</i>; I'd consider tens of thousands of dense DIN A4 pages to be realistic - and this is a rather small system.<p>So, a new person who could take over this system would have to cram and understand basically all the details of this documentation insanely well.<p>This insane effort (write the documentation; new workers on the project then have to cram and understand every detail of this incredibly bulky documentation) is something that no employer wants to spend money on: <i>this</i> is in my experience the real reason why it isn't done.
The deeper I wade through Microsoft’s Azure documentation the more I feel the reality of this. There’s so much of it that it basically is unreadable in real terms, most employees will never get the time allocated, and when you do try to exhaustively read up on a specific area you find that the documentation is incomplete and wrong in subtle but important ways. I’m sure Microsoft spends a lot of resources on that documentation, but it seems somewhat of a hopeless mission.
There are certain things that are too obvious to some person at a given time. Hence they would not consider them worth documenting. Some of those things are important bits and pieces of the theory[1] of the program.<p>[1] <a href="https://pages.cs.wisc.edu/~remzi/Naur.pdf" rel="nofollow">https://pages.cs.wisc.edu/~remzi/Naur.pdf</a>
I think it's an important property of a system to be document<i>able</i>, not just document<i>ed</i>. What I mean, essentially, is that the system was designed with sound principles, and said principles were written down and followed.<p>I have seen this work only once in my life, and it was so nice to see, but yeah, most code is just a ball of twine, and even if there was a guiding principle beneath, it has long been abandoned and overruled, and the only way to understand the system is to take it all in at once.
I think it’s reasonably easy to design a system that’s documentable and documented. It’s very, very hard to maintain and iterate on a system while maintaining those properties.<p>Hacky things will make their way in because it takes a month to do the documentable thing and a week to ship the hacky thing.<p>It takes a lot of skilled people from varying disciplines to figure out what things are going to survive long enough and be important enough to spend the resources doing the right thing instead of the hacks.<p>It bites both ways. I’ve seen core business products crippled by years of digital duct tape, but I’ve also seen internal tooling that never really becomes useful because they insist on doing the “correct” thing and it’s constantly a year behind what we need it to do.
This is such a weird counter-argument, that only serves to prove OP’s point.<p>“It’s not that it’s not documentable. It’s just that it would take tens of thousands of pages and no one would be able to write that or read that to effectively take over the project.”<p>Okay, so surely this is what OP had in mind when they said documentation doesn’t work… Is it no longer safe to assume reasonable expectations when making an argument? Why the need to “well actually” them with this response?
<< [belief that] knowledge can be replaced with documentation, tools, and processes. [It] cannot.
<< volume of documentation that I would have to write would be insane<p>I am not sure those are mutually exclusive. We all know of situations where a person knows of tiny and typically undocumented system quirks. We even have a corporate name for it: institutional knowledge. The issue is that executives think it can ALL somehow be documented, when even a cursory real-life project will quickly teach one how insane the average gap between documented and undocumented tends to be. Add to that near-constant changes to APIs, versions, systems, and people, and I can't help but wonder at executives who really do think this way.
But you've just perfectly described the tacit knowledge problem.<p>Yes, you can spend all your time writing docs, or just mentor a junior and let them grok the system through osmosis.<p>Also, your docs won't ever have 100% coverage unless you write an absolute tome. Tacit knowledge is the stuff that's so obvious you wouldn't even think of writing it down in the first place.
It’s way easier (for this type of scenarios) and far more effective to learn by doing than to learn by reading (even tens of thousands of pages of) documentation, that is the crust of it.
> It’s way easier (for this type of scenarios) and far more effective to learn by doing than to learn by reading<p>I don't think so: the problem is that there exist lots of parts in the system that are quite complicated but which one very rarely has to touch - except in the rare (but real) case that something deep in such a part goes wrong or a requirement for this part pops up.<p>If you "learned by doing" instead of reading, you are suddenly confronted with a very subtle and complicated subsystem.<p>In other words: there mostly exist two kinds of tasks:<p>- easy, regular adjustments<p>- deep changes that require a really good understanding of the system
I tend to document some tricky non-obvious pieces of knowledge directly above the relevant code. "We have to do X below instead of obvious-first-idea-Y because Z".<p>Any time a refactoring comes up which moves code around, AI (or my coworkers) remove those comments without thinking twice, and I need to tell them "hey this is still valid".
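As a purely hypothetical illustration (the function, the client wrapper, and the failure mode below are all made up), such a comment might look like:<p><pre><code># We paginate in batches of 100 below instead of the obvious single bulk
# request (the obvious-first-idea-Y) because the upstream API silently
# truncates responses past 100 items (the non-obvious Z). Hypothetical example.
def fetch_all_users(client):  # `client` is a stand-in API wrapper
    users, offset = [], 0
    while True:
        batch = client.list_users(limit=100, offset=offset)
        if not batch:
            return users
        users.extend(batch)
        offset += len(batch)
</code></pre>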
It's kind of a learning JIT. It's no use to go through and memorize something you don't need in the short term. It's hard to memorize well and by the time you need to draw on the knowledge it's already hazy.
This is why you can think of such documentation more as a reference manual and not just plain documentation.<p>In any case, AI is great for traversing a codebase and producing at least a draft of such documentation.
crust (edge/border) -> crux (heart/essence)
I feel like it’s something more fundamental and broad than that. We slowly remove excuses to talk to other people.<p>The thought crossed my mind the other day — if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.<p>It’s not just in coding, it’s everything. With ChatGPT always available in your pocket, what social interactions is it replacing?<p>The thing that gets me is, we are meant to fundamentally be social creatures, yet we have come to streamline away socialisation any chance we get.<p>I’m guilty of this too — I much prefer Doordash to having to call up the restaurant like in the old days, for example.
We see this in our open-source community. We've had a community channel for over two decades, where community members help newcomers and each other solve problems and answer questions.<p>Increasingly we have people join who tell us they've been struggling with a problem "for days". Per routine, we ask for their configuration, and it turns out they've been asking ChatGPT, Claude or some other LLM for assistance and their configuration is a total mess.<p>Something about this feels really broken, when a channel full of domain experts are willing to lend a hand (within reason) for free. But instead, people increasingly turn to the machines which are well-known to hallucinate. They just don't think it will hallucinate for them.<p>In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.
> In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.<p>The AI companies have taken all the wrong lessons from social media and learned how to make their products addictive and sticky.<p>I’m a certified hater, but even I’ve fallen into the exact trap you’re describing. Late last year I was in the process of buying a house that had a few known issues with a 30 day close. I had a couple sleepless nights because I had asked ChatGPT or Claude about some peculiar situation and the bots would tell me that I was completely screwed and give me advice to get out of the contract or draft a letter to the seller begging for some concession or more time. Then the next day I’d get a call from the mortgage guy or the attorney or the insurance broker and turns out, the people who actually knew what they were doing fixed my problem in 5 minutes.
This _is_ all true but what's also true is that there's an historical pattern (in many communities) of "n00bs" not being or (at least) _feeling_ welcome. So, I can't say I blame people for spinning in circles with LLMs instead of starting with forums or mailing lists where they may be shamed or have their questions closed immediately as "duplicate" or "off-topic" (e.g. SO).<p>I think if we want newcomers to lead with human interactions, the onus is on us community leaders/elders/whatever to be a little warmer, more understanding and forgiving. (Of course, some communities and venues are already very good about all of this and I'm generalizing to make the larger point.)
I have switched to OpenWRT during the LLM era. I wanted to set up some special network configs, and ChatGPT happily spit out the necessary configs.<p>From what little I understood from OpenWRT everything looked fine, but nothing worked. I still to this day have no idea what I (or ChatGPT) did wrong.<p>I just reset the router, actually took the time to do everything by the docs, and then it worked.<p>Debugging someone's broken code that never worked is a nightmare I wouldn't wish on anyone.
Personally, this type of behavior played a large part in why I left 2 OSS communities.<p>A lot of the passersby nowadays feel like trolls. They come in copy-pasting ChatGPT responses and spamming that they need help, instead of chit-chatting and asking questions. We fix their problems; they don't trust us or understand at all. Or worse, we tell them their situation is unreasonably bad and they should start over, and they scream at us about how some unimaginably bad code passes tests and compiles just fine and how we are dumb.<p>They tell us we don't need to exist anymore, in one way or another. They try to show off terrible code, we try to offer real suggestions to improve it, and they don't care. Then they leave the community once their vibe/agentic coding moves past that part of their code base. A complete waste of time: they learned nothing, contributed nothing, no fun was had, no ah-hahs, just grimy interactions.
People are losing their ability to reason without prompting an LLM first.<p>It's affecting their ability to collaborate. They retain the confidence of years of experience, but their brain isn't going through the appropriate process anymore to check their assumptions.<p>I've seen a similar thing happen to engineers who move into management, but this is now happening at such a large scale.
> if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker<p>Importantly, you're removing a signal: if I'm not asked things anymore, I don't know which aspects of our domain are causing the most confusion/misunderstandings and would therefore benefit most from simplified boundaries.
There is a lot of wisdom in this.<p>At the end of the day chatgpt won't be there to hold our hands in the hospital, have a laugh over failing to pick up a date, get invited to a bbq, groan over the state of the code in utils.c, or recommend us for our next job/promotion. They say software is social for a different reason than most of these examples.<p>It's good to be efficient, whatever that means, but there are no metrics on the gains that get made by talking to people. In a lot of ways those gains are what life is about.
You could have done this with Google search or Wikipedia or reading through books though
I think you are right, but it also makes sense. Human communication is inherently inefficient.
Points of view, miscommunication, interpretation... It's the obvious point to automate.
Not defending it, just my thoughts
i see what you did there :)
This shows the Western government system is broken.<p>In an ideal world (where we don't live):<p>* Corporation - optimizes for mid-to-short-term profits (remove slack, run everything thin)<p>* Government - optimizes for long-term profits (introduce regulations to keep the slack time, keep and attract the talent so the state gets better)<p>* Individual - optimizes for their lifetime (career, family) and tries to leverage market conditions to learn skills and get more opportunities from the existing pool<p>In the west, government is optimizing for "loads and loads of moooney", because of lobby groups and the MBAs controlling the corporations, which push these ideas through lobbies
> In the west, government is optimizing for "loads and loads of moooney"<p>More appropriately, government is optimizing for 4 year electoral terms. No one cares about longer timescales necessary to tackle hard problems.<p>This is where autocracies like China, or monarchies for example, win over democracies.
Counter-examples are France and Japan. Democracies, electoral terms. High-speed rail that the world looks up to, investment in infrastructure everywhere. In France you have Grand Paris, a programme to transform the suburbs into denser housing and commercial space, a calculation and planning that INCLUDES public transport.<p>And the green initiatives in France. These, transit, Grand Paris, and much more are initiatives that take many years to realize.<p>Now let's move over to New Jersey and New York City. The most densely populated state (NJ) has some of the worst transit despite being in the NYC greater metropolitan area. An old tunnel between the two needs to be replaced, but politicians with four year mental horizons canned it until recently (ARC project). Infrastructure is a fight between Federal, two states and a city politically and partially from a funding perspective.<p>We could go on, but I just wanted to point out that the United States is a poor example of good governance. And that we don't need to live in a totalitarian nightmare just because we acknowledge the US fails to produce innovation and investment for the public good.<p>And let's not talk about debt, as if it is a unique problem to France or anything new.
>This is where autocracies like China, or monarchies for example, win over democracies.<p>Autocracies like China are able to plan longer term. But because they don't regularly change their leadership like a democracy does, the leaders become old, tired, sclerotic and surrounded by 'yes men'. Hence "Democracy is the worst form of government, except for all the others."
I think that has something to do with the prerequisites of democracy.<p>I believe one important factor for a democracy to work properly is to have a large number of citizens who 1) can stand up and push back when they feel something is wrong, and 2) are sufficiently knowledgeable. We don’t have that anymore. Of course I’m also to be blamed for that.
Democracy requires informed thoughtful voters to function.<p>Public education was supposed to deliver that. This is a dream that has failed in the US.<p>Possibly the most lacking tools are Critical Thinking (not directly taught as a subject AFAIK) and some class with a focus on how government(s) work. The latter was an elective I took in high school (not a core requirement, it should be).<p>At least when I was in college it helped to have critical thinking skills, but was not a basics (100 level) course. Political studies might be a different degree, but again not a core course. I find that ironic since everyone has to interact with government regulations and vote.
Western democracy is very interesting.<p>Corporations promote people to Principal or Distinguished Engineer only when they prove their worth by running long-running, large-scale projects.<p>But when it comes to governing the whole country: lobbying, marketing, and boom, you are president for the next 4 years, which is anyway not enough to deliver anything big and see the impact. (Except destruction; destruction is easy to cause.)
I wonder what longer cycles with easier recall methods might yield.
I dunno if cycle length is the key here; the Soviets and the Chinese went with five-year plans, and done properly, it seems like that's a long enough amount of time to accomplish very important things.<p>WW2 took slightly less than 6 years, when we count it from the invasion of Poland to the fall of Nazi Germany.<p>The moon landings took a little less than 7 years, so I don't think we are terribly off with the timeframe.<p>Considering the world's been getting faster (just think about how different the US was before Trump took power a bit more than a year ago), I think 4 years is fine.
It's also where autocracies fail spectacularly and lead to decades of misery for their citizens.
<i>> This is where autocracies like China, or monarchies for example, win over democracies.</i><p>This is the wrong characterization, and in fact it's where monarchies lost out to democracies. Without an organized system of replacement in response to poor performance, autocracies with a poor leader are stuck with that poor leader for life. Ask North Korea how that's going. The upside is that if you have a brilliant leader, then you also get the benefit of that brilliant leader for life. The variance in an autocracy is absolutely huge, and that's their weakness in the long term. Democracies take the edge off, and are intentionally designed to have both less upside and less downside, trading performance for stability. Xi Jinping looks good comparatively because we have gormless losers like Trump and Biden to compare him to, but he makes plenty of his own mistakes as well (the whole Taiwan situation is an unforced error driven by his own ego, similar to Putin with Ukraine), and we've seen historically what China looks like when it's stuck with a shit leader for decades (Great Leap Forward, anyone?).
I think of the four-year cycle as one year to whine about the previous (if different) government you took over from, two years of governing, and the last year as ”get ready for election”. So in the most optimal scenario you get three ”peaceful” years. Very few things can be done well in three years at ”ruling a country” scale.
I always think that’s the failure of citizens, not just the officials. Eventually history is going to blame us for not taking action, not pushing back, and pretty much sleeping soundly while things fall apart around us.
> The problem is a management pattern .... Short-term cost cutting<p>Absolutely agree with this. Most MBAs are taught to optimize and reduce slack.<p>It works fine with machinery and materials, but not with humans.<p>When machinery is optimized and run thin and one machine breaks, you can get the exact same one in a couple of days (you usually prepare for it earlier); but with humans, they train their brains, and the next person is different from the first person.<p>Humans also break in different ways:<p>* They stop caring - you wouldn't notice it immediately; they will close tickets, but give bare-minimum thought<p>* The communal brain will not be trained when there is not enough room for experiments and learning - which eventually reduces innovation<p>This is exactly the reason it is difficult for US companies to compete with Chinese companies in manufacturing: their communal brain has already been trained and produced very good talent.<p>Next is knowledge: the more you outsource, the more of it you lose
Perhaps US companies should invest more in their employees then? Advancement, promotions, raises beyond 1-3% COLAs, career paths, etc. would go a long way toward keeping employees interested in seeing their employers succeed instead of jumping ship every couple of years. That would require some effort from the C-suite, however, and since they jump ship every few years as well, I don't see that changing anytime soon.
Unfortunately the Wall Street accountants who run our companies don't mind if you jump ship after your 2% 'reward' raise. Because when someone new comes in and costs 10% more, plus recruiting costs, that latter person has 'proven' their worth in the market, similar to when a house goes up in value due to scarcity.<p>If you were to explain the costs of knowledge lost, of training, of taking a risk on a new unknown person, of relationships, there's no answer, because it doesn't show up in any operating expense worksheet.<p>What you're supposed to do is find another job, explain that you love this job so much but the other offer is really good, and ask whether they can come up close to it so you'll stay. Repeat this every few years or find a new job and move to it.
"Invest in employees" is a very broad statement.<p>Before investing in employees, I think we should revisit management practices and strategies, which start in MBA programs and universities.<p>Instead of teaching how to increase shareholder value in the short term, they should also teach how to increase value to society in the long term (and focus on it heavily), not just say "if you win, society wins" kinds of generic fluff.<p>Without changing management strategies, everything becomes short-term after a while.
That 'real issue' is the lack of formal, effective communications training across the board in the United States, and probably all of Western culture.<p>The problem is wider than management: it is understanding the extended ramifications of action, understanding the larger systems one is a member of, and then identifying with and protecting them, because you and all your peers understand their extended, foundational need.<p>That kind of critical analysis, and the tacit knowledge of secondary considerations, is developed through effective communications training, which is an entire perspective, a way of seeing the world. This can be gained by reading a wide diversity of literature of Nobel quality; the reason being that such literature consists of first-person accounts of institutions crushing individuals, and of individuals finding the power within themselves to defeat the institutions. That personal transformation is practically a Nobel trope, but it teaches the reader how to have such insight and perseverance. Read a half dozen or more such novels, and you are materially a different person. A better, more deeply considering person with a longer perspective horizon. We need this civilization-wide.
Agree. There is so much focus on "let's do the same thing we are doing now with fewer people". It is very boring and uninspired. How about "let's do something that we couldn't do before", instead?
Why would anyone have sight longer than a quarter? I mean, how does long-term thinking help the execs get their compensation <i>this</i> quarter? Sheesh... worst case scenario is that the work done now will benefit someone else when they've already left.<p>Also, when companies grow big enough, "business" becomes the main business of the company. By that I mean everything unrelated to the actual original domain, such as playing in the financial markets, doing stock buybacks, lobbying, cheating, etc. When your CEO is an MBA and your real market is Wall Street, any actual product R&D and support is a real annoying cost that just cuts into the profits, and thus into the exec compensation.
> Why would anyone have sight longer than a quarter? I mean how does long-term thinking help the execs get their compensation this quarter?<p>Vesting schedules, conditional grants, contractual equity ownership requirements
>Vesting schedules, conditional grants, contractual equity ownership requirements<p>In those filthy low-margin industries that HN loves to see regulated across the oceans, out of sight and out of mind, capital investments have service lives measured in decades.
<i>> ...any actual product RD and support is a real annoying cost that just cuts into the profits...</i><p>Worse, it might not generate a return. If you have enough profits, you just buy anyone who successfully produced something innovative. Let them take the risks. As Cisco used to say, "Silicon Valley is our R&D lab."<p>It is a very difficult mindset to argue against.
Would be interesting to have a law saying that all positions that are supposed to make long-term decisions must be paid X% of their salary in stock (non-redeemable for Y years?).
This all sounds true to me, but I think there is more. It is not just decisions by management; it is also the wider economic context. Low interest rates and, for the US, having the world reserve currency as your own both seem to make many of these changes attractive or even inevitable. Low interest rates lead to 'innovation', which I put in scare quotes because, besides real innovation, it can also mean something that passes as innovation but in the end just turns out to be a bubble of stuff that was not valuable enough. The 'innovation' then crowds out investments in more boring sectors like manufacturing. This is also not good for the population in general, because fewer jobs are left for people who are not suited to working in highly 'innovative' sectors.
> The core problem is that decision-makers—often far removed from actual engineering work—believe that tacit knowledge can be replaced with documentation, tools, and processes. It cannot.<p>You need some experienced people around, but companies that rely on institutional knowledge to get everything done have always been doomed to fail.<p>Even before AI, turnover was a real thing. People churn jobs a lot in tech even when the pay is good. They get bored and jump companies, leave to join their friends' startup, or move to another city.<p>Every company I've worked for that operated on a belief that institutional knowledge was king and that documentation and processes couldn't replace it eventually had to face the music when key employees left. Ironically, this problem was at its worst at a company that compensated very well, because those key employees would often realize they had enough money to retire early or take some risky startup job instead of sticking around to be the institutional knowledge base.
There’s even a management tutorial game which demonstrates the dangers of removing too much slack from systems.<p>It’s called The Beer Game[1].<p>One of the funny things about it is even people that have played and discussed it before _still_ make the same fundamental mistakes next time.<p>Short-termism is the death of companies.<p><a href="https://en.wikipedia.org/wiki/Beer_distribution_game" rel="nofollow">https://en.wikipedia.org/wiki/Beer_distribution_game</a>
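For the curious, the core dynamic fits in a few lines. Here's a minimal sketch in TypeScript with assumed parameters (four stages, a target inventory of 12, a two-week shipping delay, one small step in demand) rather than the game's exact rules:<p><pre><code>
// Four identical stages (retailer .. factory), each starting in steady state.
type Stage = { inventory: number; backlog: number; pipeline: number[] };
const stages: Stage[] = Array.from({ length: 4 }, () => ({
  inventory: 12, backlog: 0, pipeline: [4, 4], // two weeks of goods in transit
}));

const history: number[][] = [];
for (let week = 0; week < 30; week++) {
  let incomingOrder = week < 4 ? 4 : 8; // customer demand steps up once
  const placed: number[] = [];
  for (const s of stages) {
    s.inventory += s.pipeline.shift()!; // shipment arrives after the delay
    const shipped = Math.min(s.inventory, incomingOrder + s.backlog);
    s.inventory -= shipped;
    s.backlog += incomingOrder - shipped;
    // Naive policy: replace what was just ordered and close the inventory
    // gap, ignoring stock already on order -- the classic mistake.
    const order = Math.max(0, incomingOrder + (12 - s.inventory));
    placed.push(order);
    incomingOrder = order; // my order becomes the next stage's demand
  }
  // Simplification: upstream always ships in full after the fixed delay.
  stages.forEach((s, i) => s.pipeline.push(placed[i]));
  history.push(placed);
}
console.log(history.map((w) => w.join("\t")).join("\n"));
</code></pre><p>Run it and the orders placed upstream swing far above the new demand of 8, then crash toward zero: the bullwhip effect, caused entirely by each stage ignoring the slack already in its pipeline.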
It's much more than a management problem; experienced software engineers are actively opting into apathy and atrophy of their craft.<p>I see many peers getting worse in their abilities. It's especially disheartening to see people I admired for their problem solving devolve into someone who delegates more and more of their reasoning to LLMs. It really negatively affects working with them. If you have a concern or criticism of "their" approach to a problem, they either dismiss it offhand as invalid, or they go discuss it with their LLM of choice, making themselves a bottleneck to collaboration.<p>As the article suggests, I suspect we're in for a real dark age of software as companies struggle to know who to keep, if they can even trust that those who have vital knowledge and skill today will retain it going forward.
What you describe had already happened when the programming task became using search engines, passing data between libraries, and delegating coding to offshore workers.
It's interesting, because I find that when I'm less busy/stressed at work is when I spend more time motivated, doing better work, and fixing issues that would otherwise get left behind.
"McKinsey comes to town".<p>Basically same shaped taylorism-derived industrial management has imposed itself as the "default dogma" in private and public administration.
The way the system is supposed to work is that companies that make bad decisions fail, and provide room for companies that do not make bad decisions to appear or grow bigger. Which works as long as you have an environment with fair competition where people are free to start and grow companies without running into entrenched interests or undue hardship.
This behavior is strongly incentivized by the fact that recruitment, onboarding, and training costs don't show up in the quarter, or maybe even the fiscal year, where the layoffs are made. You can also hide a bit of age and wage discrimination in layoffs and intentionally dumb down your organization to goose the quarter a bit more.<p>Quarterly financial reporting is an obvious target for a rethink. Managers get instantaneous readings from dashboards, but they also like the room for shenanigans that quarterly reporting to shareholders enables. It's going to be hard to get management to give up information asymmetry.
On the other hand, we have government operations that spend staggering amounts of money, and accomplish nothing at all. This one even has lost $8 million and nobody knows what happened to the money:<p><a href="https://www.seattletimes.com/seattle-news/politics/does-nothing-work-unfortunately-seattle-has-another-case-study/" rel="nofollow">https://www.seattletimes.com/seattle-news/politics/does-noth...</a>
The real issue isn't the management pattern. The real issue is outsourcing. Offshore the manufacturing and coding and you won't have the facilities and personnel to do that type of labor anymore. Management has a hand in that, but the people and the officials they elect have an even bigger impact (regulation vs. invisible hand and all that).
I came to comment EXACTLY about this issue. Management lives in a world where they have absolutely no expertise in what they are supposed to manage. So they try to objectify their decisions with generic KPIs based on efficiency or cost or whatever, and miss MANY additional decision axes focused on WHAT they are supposed to build. That is a MASSIVE issue, in my opinion.
AI is almost a distraction from the older pattern: management discovers a way to make the spreadsheet look better this quarter, and the hidden cost only shows up years later.
> But documentation is not the same as field experience.<p>Even if it were, creating good documentation or assessing its quality requires experience in <i>using</i> good and bad documentation. And how would juniors build up that experience if they are using AI for everything?
Seems to me that - optimistically - this would shift the job of a software engineer into a more formal engineering role, and that the actual implementation is done by AI. In the same way in other areas, engineering and implementation differ and implementation can be (and is) automated.<p>No idea how this should take form, though, and if it’s even realistic. But it seems like due to AI, formal specs and all kinds of “old school” techniques are having a renaissance while we figure out how to distribute load between people and AI.
That sounds right, but it can be superbly wrong because that presupposes that you can debug what the AI gets very confidently wrong.<p>There are three legs to the stool: specification, implementation, and verification. Implementation and verification both take low-level knowledge and sophisticated knowledge of how things break.
Indeed, even if it were possible for someone to create any program most of the time just by directing a team of AI agents, when something does not work one needs the ability to zoom in through the abstraction levels and understand exactly the program being executed, so knowing only how to generate prompts becomes insufficient.<p>This is the same with compilers. Most of the time a programmer needs to know only the high-level language used for writing the program. Nevertheless, when there is a subtle bug, or the desired performance just cannot be reached, a programmer who also understands the machine language of the processor has a great advantage, being able to solve the bug or the performance problem which, without such knowledge, would be solved in much more time or never.
I don't think compilers are a good example. The economics of software development won out a long time ago. For example, in gamedev, with its well-known soft real-time requirements, people (mostly) stopped doing that machine-code dance many hardware generations ago. Same as happened with memory optimizations: people measure memory in GB now, not in KB =)<p>I am sure programmers cherish every case where they can do micro-optimization, but in retrospect the high-level cuts are what made the system fit the perf or memory budget.
Gamedev is a good example, actually. True, handwritten assembly has gone out of style. But knowing how caches work, and how to lay out data to improve performance, is still important. And stuff like vector intrinsics also gets used.
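The data-layout point holds even outside C++. A purely illustrative particle sketch in TypeScript (names and numbers are mine, not from any engine) of array-of-structs vs struct-of-arrays:<p><pre><code>
// Array-of-structs: each particle is a separate heap object; a pass over
// positions hops between allocations scattered across the heap.
type Particle = { x: number; y: number; vx: number; vy: number };
const aos: Particle[] = Array.from({ length: 100_000 }, () => ({
  x: Math.random(), y: Math.random(), vx: 0.1, vy: 0.1,
}));
function stepAoS(dt: number): void {
  for (const p of aos) { p.x += p.vx * dt; p.y += p.vy * dt; }
}

// Struct-of-arrays: one flat typed array per field; a pass over x reads
// contiguous memory, which is what cache lines and the prefetcher want.
const n = 100_000;
const px = new Float32Array(n), py = new Float32Array(n);
const vx = new Float32Array(n).fill(0.1), vy = new Float32Array(n).fill(0.1);
function stepSoA(dt: number): void {
  for (let i = 0; i < n; i++) { px[i] += vx[i] * dt; py[i] += vy[i] * dt; }
}
</code></pre><p>Both functions do the same math; only the memory layout differs, and that alone is where much of the cache (and auto-vectorization) win comes from.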
1) Luckily, compiler bugs surface very rarely nowadays, as the average programmer does not have the capability to solve such issues.<p>2) Unfortunately, LLMs, by their very nature (not having a model of what they do), are prone to introducing subtle bugs; i.e. it is like programming in a high-level language whose compiler likes to wing it.
Personally my experience has been that once I manage to describe a problem in good enough detail that a junior engineer would be able to solve it, it's good enough for an LLM as well.<p>Which creates incentives I'm not wholly comfortable with, but the fact is that I'm more productive now alone, than I used to be in a team.
My experience is that if I manage to describe a problem in enough detail for a junior or LLM to be able to solve it, it would have been faster to do it myself.<p>Prior to LLMs the idea was to involve juniors in the engineering process to give them an opportunity to learn rather than necessarily to improve the team's immediate productivity. Some companies famously (and consciously) refused to hire juniors to avoid the performance hit even prior to genAI (eg Netflix).<p>Involving LLMs in our engineering processes has very suspect implications for both productivity and quality of our output, since unlike juniors the LLMs don't even learn.
> this would shift the job of a software engineer into a more formal engineering role<p>If only you knew how the civil engineering sausage was made.<p>The amount of yolo'ing stuff based on vibes goes up when testing is expensive/impractical. They just paper over it all with disclaimers of the sort that would get laughed at for being non-starters in the software industry.
I think the problem is even more general than that, and has existed since before LLMs. All of the decision makers are incentivized to chase short-term gains and ignore everything else. Many tech companies already had huge gaps in knowledge around their own codebases, simply because such knowledge and expertise is basically treated as a liability/expense rather than an asset.<p>I'm actually very optimistic about LLMs/AI for basically the opposite reason tech leadership/MBAs are: I think they will allow us to overcome the organizational/business/marketing hurdles that tech companies rely on short-sighted MBA-style 'leadership' for in the first place. And not because I believe in OpenAI and Anthropic - I think the future is self-hosted or community-hosted open models, and open collaboration among willing peers, building open software to solve real problems in honest ways, rather than hierarchical top-down corporate hellholes pumping out pre-enshittified crapware full of ads, tracking, and dark patterns.
And the next level of this is that even companies that realize this mostly go ahead acting like this anyway, because they think someone else can train the juniors. Some other company will surely appear to do that - just not in my backyard!
Over time the lack of good judgement will lead to a decline in their products' quality, which will be difficult to recover from.
There's economic / capitalist pressure to reduce cost / increase revenue and optimize for short-term profits; that's on the corporate side, anyway.<p>But applying the military hardware stuff to software is IMO a bit of a leap; I get the similarities, but where demand for software hasn't slowed down at all, demand for military hardware and ammunition just wasn't there.<p>The alternative would have been to keep all the factories alive, maintained, staff employed (or training staff ready to onboard rapidly hired staff when capacity had to go up), supplies stockpiled (and rotated), etc. And who would be willing to pay for that?<p>In times of peace the voter wouldn't want the government to spend billions on the military if it wasn't necessary... except for the US, which still spends billions a year on the military even in peacetime. But not on its production facilities, it seems.
In the case of the military I'd say the real reason is political. After the fall of the Berlin wall, Europe collectively agreed (knowingly or not) that war is now a thing of the past and the goal should be the complete dismantling of militaries worldwide, starting with Europe. Lead by example, etc.
It's subtler than that. Europe was just constantly reminded by its big brother not to duplicate NATO structures, which are dependent on the US.
This.<p>Plus, of course, each European country has to support their own defense industry, so each one of them needs to have their own howitzer/tank/whatever and they can't agree on common approach that would actually allow for the economy of scale.
They agreed that war was a thing of the past, but still continued to push for NATO to admit new members anyway, ironically causing Russia (and China, and everyone who is NOT in NATO) to suspect that war was NOT a thing of the past, and therefore never quite abandoning their militaries completely. Unpopular opinion: the West should either NEVER have abandoned its military production (so as to maintain NATO's actual preparedness for war, given that's the only reason for its existence) OR it should have dismantled NATO and announced to the world that it strongly believes war is a thing of the past, and that other countries are advised to follow suit. But we actually chose the easy, halfway path: keep NATO, keep our militaries "looking strong" (which signals that our rivals should do the same, obviously), but not actually be ready for any sort of major war and, as the article points out, even lose the actual capacity to become ready for war within any realistic timeframe. The worst possible outcome :(.
That could be fitting the theory to the outcome, though; the unpopular opinion may still be wrong. Russia was quite different in 1999, or better yet in 1992, to the point of considering joining NATO, and China was nowhere near the threat it is today. Different reasons entirely - not keeping NATO, say - could have caused today's standoff. So, basically, the situation seems more complex.
USA had no part in that push?
NATO expansion was pretty controversial in the US<p><a href="https://time.com/archive/6731121/how-clinton-decided-on-nato-expansion/" rel="nofollow">https://time.com/archive/6731121/how-clinton-decided-on-nato...</a>
Perhaps, but the US had been pushing NATO to invest more in war for years, suggesting they didn't believe war was a thing of the past.
The real problem is that the west is fractured and sees itself only as individuals, corporations and nations.<p>China sees itself as a civilization. Russia does too, ish (look up Dugin's Eurasian Empire).<p>The West has a hard time believing anyone wants to destroy it, so it doesn't take the threat seriously. Meanwhile other civilizations are working both to destroy the west and to ensure their own place in the future.<p>So we were OK outsourcing our production and knowledge for a quick buck, and it's coming back to haunt us.<p>Even now, we still don't see ourselves as a civilization, we actively work to undermine ourselves and help enemies who openly want to destroy us, and we are barely doing anything to defend ourselves. We seem to have also given up on the idea of a "democratic world", which was in vogue when I was growing up (the Bush 2 years).<p>As for the thesis of this article, the positive is that code and knowledge, because preserving them is basically free, are still there. AI hasn't been good enough to displace them. And our technological advantage is still pretty wide, and our military-industrial complex is, for better or worse, coming back.
Most workforce reductions are using AI as a cover-up for greedy short-term bonuses.<p>Any exec using AI to pay fewer people lacks imagination.
China is run by engineers; America is run by bankers.<p>The consequences are significant.
Modern managers: You have to be in the office for synergy and the serendipitous exchanges with coworkers that lead to innovation.<p>Also modern managers: You were in the bathroom for six minutes. I'm docking your pay.
> But documentation is not the same as field experience. Automation is not the same as judgment. Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.<p>This tracks with my experience throughout my career, in all sorts of companies: from established body-shop consulting, to minor early-stage startups, to FAANG, and everything in between.<p>Essentially everywhere I worked, you would benefit from switching jobs. Companies would at times go to quite some effort to hire you, but wouldn't try anything to keep you around.<p>This always sounded bonkers to me, but as I directly benefited, with a rapidly increasing salary when I job-hopped, my response was a vague shrug. <i>"Those who care don't know and those who know don't care"</i>.<p>The thing is, in every place, you are typically at your least useful when you've just joined. It takes months, sometimes years, to learn the intricacies of the business: the knowledge that informs your skills so you can make better decisions, better designs, better implementations, better initiatives.<p>This is, of course, just one facet of a larger trend of how things are typically mismanaged. The article brushes on it when it talks about how governments in the US and Europe had to scramble to get 50-year-old manufacturing going anywhere.<p>This is why I laugh whenever I hear someone say "governments should be administered like a business". Bitch, businesses are typically mismanaged due to terrible incentive loops, institutional blindness and corporate rot. That anything seemingly works is more a result of inertia and conformity than a sign that things are well managed.
I'm not sure if the tweet was a joke, but some companies are apparently hiring junior developers back because it's cheaper than AI.
Most of what you describe here is overfitting:<p><a href="https://sohl-dickstein.github.io/2022/11/06/strong-Goodhart.html" rel="nofollow">https://sohl-dickstein.github.io/2022/11/06/strong-Goodhart....</a>
> The problem is a management pattern: removing people and organizational slack because they don't generate immediate profit, and then expecting the knowledge to still be there when it's needed.<p>It's always seemed to me that the problem is corporate profit and personal profit above all. 'Management' is a subset of this, and so is pretty much everything else, including the current drive for AI.<p>It's the Western, perhaps American, approach to business, emphasized by MBAs and the media: lowering costs, driving share price, dividends and corporate profit.<p>This race over the last few decades has hollowed out most Western companies.<p>Listen to any entrepreneur podcast, or read any website, and it's all about 'how quickly can I get to exit', i.e. personal profit.<p>Capitalism is the worst form of economic system, apart from all the rest.
I have worked for companies in different countries.<p>I think the striking thing is how US companies tend to have no idea how to be wealthy. Record profits, so the CEOs use all of their tricks to get richer quicker? They are already rich! Don't fix what isn't broken. Not every company needs to expand into 10 new markets, or have 5% layoffs, or double its revenue. Some of this is investor pressure, but often it's not. Some guy who made it to the top is bored, doesn't feel like he is obviously doing enough, so he keeps making decisions to justify his position.<p>This isn't to incite flames, but the European companies I worked for knew how to be wealthy! The market took a downturn from COVID; they ate the cost to keep their people. Some flashy new vertical is trending? They decided it's not for them; they have a brand and customers to focus on while everyone else works out the kinks. The company decides: why go public at all? We are successful and don't need anyone else's influence over us.<p>People say "you cannot project beyond one quarter". This is true in terms of catastrophe or gambler's success. But it's not true otherwise: if you act in Q1 like there will be a Q2, or even a five-years-from-now, or heaven forbid a second or third generation, you make different moves. You value different things.
See also <a href="https://georgzoeller.com/blog/posts/the-tech-job-apocalypse-is-just-outsourcing-with-an-ai-coat-of-paint/" rel="nofollow">https://georgzoeller.com/blog/posts/the-tech-job-apocalypse-...</a>
> The core problem is that decision-makers—often far removed from actual engineering work—believe that tacit knowledge can be replaced with documentation, tools, and processes. It cannot.<p>My promotion packet at work always included how great of a documenter I am.
> The real issue, in my view, is not AI itself<p>In shootings, technically the guns are not the issue, since they don't fire on their own... they do enable the ability to shoot, though.
> The problem is a management pattern: removing people and organizational slack because they don't generate immediate profit, and then expecting the knowledge to still be there when it's needed.<p>I think that's still a symptom. The real problem is ideology: the monomaniacal focus on profit-making business, which infects our political leaders, down to capitalists and business leaders, down to the indoctrinated rank-and-file. Towards the end of the cold war, the last constraints on it were abolished, and the victory over the Soviet Union made it unquestioned.<p>The Chinese don't have that ideological problem. Their government appears not to give a shit about how much profit individual businesses make; they care about building out supply chains and capabilities. They will bury the West, so long as the West remains in the thrall of libertarian business ideology.
The US is stuck in this weird irony where they recognize that Soviet-style central planning is a disaster but can't recognize that it's what megacorps do when they're insulated from competition. Internal politics, perverse incentives and a system that can sustain massive inefficiencies right up until the point that it doesn't.<p>In general productive economic activity generates a surplus and that surplus allows for slack. Human beings intuitively understand this. Hobbies are frequently de facto training for things that aren't currently happening but might later. Family-owned and operated businesses are much less likely to try to outsource their core competency for the sake of quarterly profits.<p>But regulatory capture and market consolidation causes the surplus to go to the corporate bureaucracies capturing the regulators instead of human beings with self-determination and goals other than number go up, and then the system optimizes for capturing the government rather than satisfying the people. "When you legislate buying and selling the first things to be bought and sold are the legislators." You throw away the competitive market and subject yourselves to the unaccountable bureaucracy, and then try to pretend it's not the same thing because this time the central planners are wearing business suits.
I wonder if it would work if top US companies implemented a system like the NFL draft, where companies competing for top engineers out of college pick in inverse order of their recent financial performance.<p>While it sounds counterintuitive, it maintains a good distribution of talent across the industry.<p>But that system would only work if healthy competition were the goal, not moneymaking.
> megacorps do when they're insulated from competition. Internal politics, perverse incentives and a system that can sustain massive inefficiencies right up until the point that it doesn't.<p>You just described Lucent.
Yes - ultimately it's the same system. Far from being daring and innovatory, it's backward-looking, unimaginative, and bureaucratic.<p>Vision for the future is limited to grandiose fantasies straight out of 1950s pulps and the "heroic" creation of narcissistic corporations that are cynically extractive and treat employees and customers with equal contempt.<p>The differences which used to provide a convincing cover story - no single Great Leader, a functional consumer economy, votes that appear to make a difference - are being dismantled now.<p>What's left are the same mechanisms of total monitoring (updated with modern tech) and reality-denying totalitarian oppression, run for the exclusive benefit of a tiny oligarchy which self-selects the very worst people in the system.
Yes, many Americans and other Westerners believe that the so-called "socialist" economies, like those of the Soviet Union and of Eastern Europe, were non-capitalist.<p>This is only an illusion, created by the fact that the communists were careful to rename all the important things, to fool weaker minds into believing the renamed things are something other than what they really are.<p>In reality, the "socialist" economies were more capitalist than the capitalist economies of the USA and Western Europe. They behaved exactly like the final stage of capitalism, where monopolies control every market and there is no longer any competition.<p>Unfortunately, after a huge sequence of mergers and acquisitions that started in the late nineties of the last century, the economies of the USA and of the EU states resemble more and more every year the former socialist economies, instead of resembling the US and W. European economies of a few decades ago.
Everyone wants to tag the evil with their opposition's name. The evil is concentration of power. But no one wants to call it <i>that</i> because then they can't pretend that it's something different when they're doing it themselves.<p>Witness the people who keep proposing to solve market consolidation with higher taxes. Higher taxes go to the government, and therefore the interests that have captured the government. Are we going to solve it by taking money from Warren Buffet and giving it to Larry Ellison? Do we benefit from increased funding for Palantir? No, you have to break up the consolidated markets through some combination of antitrust enforcement and peeling back the regulatory capture that prevents new competitors from entering the market.
> Higher taxes go to the government, and therefore the interests that have captured the government.<p>There is at least a chance for it to be redistributed, unlike private wealth.
Let's have a quick look at the federal budget. The big ticket items are social security, medicare, net interest and military/VA. Together those are more than half the budget.<p>Social security is the biggest of them. Older people have more wealth than younger people on net and social security is structured to make higher payments to people who made more money when they were younger, which is significantly correlated with having more wealth right now. So it's a massive transfer payment system that transfers money <i>from</i> the poor <i>to</i> the rich. Meanwhile it uses its own special tax which is significantly more regressive than the ordinary income tax and doesn't tax corporate income at all. Notice in particular that we could instead be solving "grandma doesn't starve" with a UBI that makes uniform payments to everyone and not disproportionate payments to the rich, and comes from a tax which is also paid by corporations.<p>Net interest is a naked transfer to people with enough capital to invest in government bonds.<p>Most of the military and VA budgets go to government contractors who work hard to sustain an uncompetitive bidding process with thick margins.<p>Medicare uses the same bad tax as social security and those dollars go to the healthcare industry which has <i>thoroughly</i> captured the government. The AMA lobbies to limit the number of medical residency slots and sustain a doctor shortage and healthcare corporations have established a thicket of laws to limit competition, impair price transparency and promote over-consumption.<p>That's where the majority of the government budget goes, and the remaining minority of the money is also going in significant part to government contractors and regulatory capture industries. The government takes tax money <i>from</i> the middle class and gives it <i>to</i> the rich and huge corporations.<p>We don't need any more "redistribution" like that. If you think you can get the government to stop doing that and instead give the money the poor and middle class then first prove you can do it with the <i>existing</i> money before even thinking about collecting more. You have a nutrient deficiency because you're infested with tapeworms, not because you don't have enough food.
I'd argue we need both massive antitrust, and higher taxes on the wealthy to prevent them from amassing the power to prevent the antitrust.
The things amassing power to prevent antitrust are corporations, not individuals. It does nothing to make Bill Gates sell shares in Microsoft to pay taxes when the corporation stays the same size. If anything it makes it worse because then more corporations are controlled by Wall St rather than founders and they're significantly more inclined to turn the screws to juice short-term profits.
And change the laws regarding legalized corruption (Citizens United, ...). And fight for real freedom of speech.<p>This is a very complex problem that needs to be tackled from all sides simultaneously; the entrenched interests are already well set up to defend themselves.
Citizens United was a pretty pro-speech decision and is unfairly maligned, and "money is speech" predates it by quite a few years. The real problem is when huge corporations control the flow of information.<p>Which is a bigger problem, that corporations can pay for political ads, or that one corporation has 90% search market share? That there are political ads on Facebook or Twitter, or that those corporations control what's in the feed of hundreds of millions of people because use of their algorithm is tied to the network effect instead of having a federated system like RSS or email?
Plus a systematic way of keeping the Gini coefficient of wealth small in a sustainable way. I'm a fan of establishing sovereign wealth funds whose dividends are paid out equally per capita for this purpose.
A sovereign wealth fund has the government deciding what to invest in, which is both a magnet for corruption and a good way to get below-market returns through mismanagement. It also requires an extremely oppressive build-up period where the government is collecting money in taxes to seed the fund instead of providing services to the population, which is why the countries that have one are basically all countries that net export huge amounts of oil, and China which exports everything else.<p>Meanwhile you don't need the government to use tax dollars to buy stocks in specific companies. If you want a UBI then use VAT. Then it comes from every company instead of having government bureaucrats choose which ones, and gets paid out immediately instead of needing a generation of build-up.
wow!! straight to the dome.<p>thought about this too - but not as expressively as you put it.<p>e.g. in China, for early-stage ventures, there's cutthroat competition. Then, as Thiel would put it, with heavy competition profits trend toward zero. By then the tech is perfected or close to it, and the state uses its funds to back a monopoly. That's how you get a BYD.
And to complete the reversal: what is now referred to as the "golden age of capitalism", i.e. the post-WW2 USA, was actually very socialist. Strong social movements, unions, and social spending created a wealthy working/middle class with a bunch of spending power.<p>An unequal society produces an unequal economy (and vice versa), which is the economy of any developing country: few rich, a minuscule middle class, and lots of poor people in slums and poverty.
West: We need profits and then we’ll try to build something useful.<p>China: We need to build this useful thing and then later let’s try to make profits, too.
What do you think the war in the Gulf is about? The US cannot compete with China, so they are destroying the global system that enabled them. There is no plan for peace with Iran, only perpetual war and the destruction of the Middle East, starvation in East Asia, and poverty and nationalist wars in Europe, potentially with Russia taking over vast swathes of Eastern Europe again. Suddenly Russia is the one in charge of the China-Russia relationship. It's such a stupid plan for the US that you might think it was designed by Putin himself.
You started well, but then the train got derailed...<p>Russia has no need for Eastern Europe (they have enough land and resources; why saddle yourself with a hostile population?), as long as the said Eastern Europe is not threatening them with NATO bases/missiles (the US has repeatedly shown that they do not hesitate to use their muscle if they think they can get away with it, so Russia's paranoia is not entirely unfounded).<p>Even if Russia somehow took over Eastern Europe (most likely way: they learn from the US how to do a soft 'regime change'), they would have no chance against China (China is just so much bigger and better organized; the population's mentality also matters a lot). China and Russia are rather complementary; there is no reason for confrontation between them.<p>But you are correct, what the US is doing is really totally stupid ... although it seems designed by Netanyahu, not Putin.
> Russia has no need for Eastern Europe<p>They sure do like to sell to them.<p>> Easter Europe is not threatening them with NATO bases/missiles<p>This never made much sense.
Attacking a NATO buffer pre-emptively, bringing your forces out and closer to existing NATO weapons, basically puts you in the same situation with fewer resources. The issue is not about weapons "threatening": ICBMs can reach anywhere, and smaller munitions can come from local seaboards (subs). This idea that NATO is somehow threatening by proximity is not credible. The answer to it would not be to rush headlong into a conflict that brings those forces to bear and brings your border to theirs anyway.<p>It looks more like the Ukraine conflict has been about securing resources, testing capabilities, and demographics (tied to capabilities). Russia wanted more resources to sell to partners and wanted to test the (declining) capability of its own forces.
You are applying western thinking (acquiring captive markets; NATO as a force for good, surely not threatening) to Russia. Big fail; they think differently.<p>It is obviously clear that Ukraine is not about securing resources: given the costs of war (Russia knew the sanctions would be coming, just did not think their funds would be frozen), the cost-benefit is simply not there. Given the obvious economic drawbacks of attacking Ukraine, the only explanation that makes sense is the national-security one. You go to war to 'test capabilities' only if it is a minor thing without serious consequences, which the Ukraine war definitively does not fit.
If China cannot get oil from the Middle East, what happens to China and to China-Russia relations? I didn't say there would be hostilities, just that Russia would potentially become the more dominant partner.<p>If NATO expansion is the reason for the war in Ukraine (not imperialism), then why has the war not stopped now that we know Ukraine will never join NATO?
1) Russia will happily supply China with oil and other resources, and China will pay with industrial goods and all the other stuff they produce. China is working really hard on getting rid of its dependence on foreign energy sources; any leverage Russia might gain if it became the sole supplier of oil/gas to China is very temporary, and Russia knows it. Furthermore, unlike the USA, it has no delusion of ever dominating China - China already has them by the balls.<p>2) Mostly face-saving, but also: Ukraine will remain openly hostile, NATO or not, planning to have hostile (EU) forces on its territory as a 'security guarantor'. Russians still believe Ukraine will collapse (the men will eventually run out / the economy will collapse / the EU will not send its children to die on the eastern front) and that they will then be able to have a friendly (or at least truly neutral) government there. Russia's paranoia about the west is really strong, well founded and well documented.
You seem to be extremely fond of Russian propaganda.
That's the easy way out, isn't it? Why argue the merits of anything you don't like when you can just call it Russian propaganda?<p>Or, perchance, do you want to provide a concrete argument for why my statements are incorrect? (No, 'it fits the Russian narrative' is not an argument about correctness; it is an argument about the narrative.)
<i>> Russia's paranoia about the west is really strong, well founded and well documented.</i><p>It's an act, and everyone in Russia knows that it's an act. Acting this way gets the dumber kind of Western politicians to carefully tiptoe around Russia; that is the value this act provides.
There are many western authoritative sources documenting that.<p>Have a look at William Burns's 'Nyet means nyet' cable.
Or Merkel's memoirs.
Or George Kennan's statements in the '90s on the wisdom of expanding NATO.<p>But, ultimately, one believes what he/she wants to believe...<p>Do you think it is better not to carefully tiptoe around Russia? Do you consider full-on sanctions, total refusal (except Trump) to diplomatically engage them, and open intelligence, military and financial support of Ukraine 'carefully tiptoeing'? What do you propose instead? Open WW3? I am really curious.
You listed joke sources. Merkel, in particular, has been utterly discredited for her naivety toward Russia. Her sucking on Russian gas left Germany lagging in the transition to renewables and EVs, and the German economy is now paying a double price by also having to bear part of the economic burden of the war.<p>As to Russia, virtually no one in Russian academic foreign policy circles, nor in the influential semi-formal circles of imperialists and neo-nazis, nor anywhere in between, is paranoid about the US, NATO and the West in general. What is there to be paranoid about? They see the West in general as utterly impotent, making big words but not backing them up with a stick. This week one year ago, Trump wrote "Vladimir, STOP!" in response to a massive air attack on Kyiv. Putin didn't, and what followed? A bunch of nothing.<p>The answer to your question about tiptoeing is abundantly clear to anyone familiar enough with Russian culture to know what <i>zek</i> and <i>kagebeshnik</i> mean and how to deal with them. Politely asking them to stop has never worked. The idea that you have to talk with people in the language they understand is hardly a novel one.
Sigh, joke sources. Burns and Kennan too, right? Anybody who actually understood Russians is a joke. Study a bit, and not only neocon think-tank sources, but the people who actually understood Russians (there are practically none left in recent administrations).<p>Russians are paranoid, among other things, about nuclear decapitation strikes. For the same reasons, they have repeatedly and explicitly opposed the missile sites in Poland and Romania.<p>I am really curious: what do you think the west should have done? Bomb Russians directly? I mean, what else is left?
This is all basic economics.<p>Companies can grow organically, or through strategy and adding new verticals, up to a point. Eventually they're too large for that: they own the whole market, they can't get regulatory approval for acquisitions, and so on. At this point they only really grow at the same rate the industry or the economy does.<p>At this point (or often long before), the only way to increase profits is to raise prices and/or reduce costs. Profits tend to decline over time, so there is constant pressure to reduce costs to satisfy the insatiable need for increasing profits.<p>This is the real product AI is selling: cutting wages. Partly that means displacing workers (which, thus far, hasn't been all that successful). Where it is successful is in having the threat of layoffs hanging over your workers, getting them to do extra unpaid work for the same wages and making sure they can't ask for raises.<p>That's what's paying for all this AI investment.<p>So I agree with you: the real problem isn't AI. It's capitalism.
The problem, in other words, is quarterly earnings in specific and shareholder capitalism in general.
> What AI is being sold as right now is not really productivity. In many domains, productivity is already sufficient. What's being sold is workforce reduction.<p>And workforce reduction is a noble goal. In fact, I think it's one of the most important things humanity should focus on. We should strive for a workforce of zero. Humans currently waste an enormous amount of their lives working instead of pursuing more worthwhile things.<p>I despise the rhetoric around this: we didn't "lose jobs" to AI, we saved ourselves a lot of work. What it <i>does</i> do is highlight a problem in our current society: the link between labour and access to resources (e.g. money).<p>I don't think that AI is the ultimate answer to the problem of work, but it can contribute to it.
The time to solve that resource problem is before AI concentrates power, not after. It’s LESS likely to happen when a tiny elite increases their already huge amount of power.
Jobless people normally can't feed themselves in a modern world.<p>And uh, healthcare. Among other things.
You sound convincing, but it also reads very AI generated. A lot of people will stop reading half way.
You're absolutely right. And the root cause is simple: the stock market / shareholders. The incentive is for quarterly returns, not the long term. That's why CEOs optimize for that - it's the job they are assigned by shareholders and the board. For a shareholder, what matters is the stock going up. Heck, you can make money even if it goes down, but you can't if it stands still.
No. It's pure greed dominating the world. My employer is owned by a bigger private company, and the shitshow is the same as in a big megacorporation. There are hordes of colleagues ready to stab one another for €100 more salary a month. Disgusting.<p>The company manufactures special computers. The initial owner/founder ordered CPU modules and memory cards always looking at the price break. His question was always „how many do we buy to get the best price?". So he sometimes ordered 200-300 parts more than immediately needed; then the follow-up order came and he emptied the storage. The new manager now always orders the EXACT number of memory cards as computers ordered. Price is secondary; the most important thing is to work without a warehouse and get things delivered just in time. Which hasn't worked at all for a while now. The high prices from buying small quantities are eating up the profit, so people are getting fired to save costs. It is pure greed dominating the western world. Everything is done to make the accounting look nice at any cost and collect the whole bonus, despite ruining the company long term. I see this pattern very often lately.
I still code daily without any coding assistance, mostly because I believe this is the way not to forget how things are done, even trivial things.<p>My main point against using AI is that I do not want to depend on basically anything when I'm in front of the screen (obviously not counting documentation, books, SO and the like).<p>I see people close to me who are 100% dependent on AI for literally everything, even the most trivial daily tasks, and I find that truly scary, because it means that brain effort drops dramatically to a minimum level. Having your mental effort stolen is not a minor thing.<p>Giving that away, at least for me, means becoming a dependent zombie. Knowledge comes basically from manual trial and error, almost daily.<p>Technology being technology, if anything it has shown us that we can be pushed and manipulated in every single conceivable way. And in my opinion, depending on AI is the ultimate way for companies to penetrate and manipulate a very delicate ability of a human being: to think and wonder about things.
Recently, after a month of heavily AI-assisted programming, I spent a few days programming the good old-fashioned way.<p>I spent most of my 7-hour session confused and frustrated, straining painfully against the problem, though the task was successfully completed.<p>But I was startled by the difficulty. I began to worry that I had given myself some kind of brainrot from disuse. Then I remembered: my goodness, it always felt that way if I was doing something new. That's just what it feels like, grappling with a problem you haven't seen before.<p>It was always as hard as that; I was just no longer used to the feeling. You get used to the difficulty, and then it feels normal.<p>Or indeed: you get used to its absence, and then it suddenly feels overwhelming and "wrong"!<p>I think maintaining the capacity to tolerate difficulty and discomfort is a "muscle" well worth preserving.
I've had the "problem" of forgetting syntax before any AI, with IDE autocomplete. It was only ever a problem when switching jobs and being expected to write syntactically correct code on platforms without syntax checks or autocomplete. So I did some exercises on such platforms in preparation for interviews.<p>In the real world, reliance on syntax autocomplete and checks was never an issue. The important thing has always been understanding the core concepts of the language and the runtime, e.g. how the event loop works with Node.js and how to write asynchronous and event driven programs.
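The classic ordering puzzle is a decent test of that kind of understanding. A quick sketch (plain Node.js, nothing project-specific):<p><pre><code>
setTimeout(() => console.log("timeout"), 0);          // timers phase (macrotask)
setImmediate(() => console.log("immediate"));         // check phase (macrotask)
Promise.resolve().then(() => console.log("promise")); // microtask queue
process.nextTick(() => console.log("nextTick"));      // drained before microtasks
console.log("sync");                                  // runs first, on the stack
// -> sync, nextTick, promise, then timeout/immediate
//    (the relative order of those last two can vary from the main module)
</code></pre><p>Autocomplete can fill in any of those calls for you; it can't tell you why "sync" and "nextTick" always beat the 0ms timer.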
I'm the opposite: I don't think I've read a single line of the code I've shipped in over 6 months.<p>I'd say it's far more tiring working that way, though; you're breaking the satisfaction loop, so you never really get the dopamine you used to get coding by hand. When you had a problem, figuring it out was like solving a puzzle, and you felt satisfaction at the end of it. With AI, it feels like most of my day is spent being a QA rather than a puzzle solver, and it's exhausting; even when it solves difficult problems for me, the LLM slot machine is far less satisfying than if I'd figured it out myself.
Agree with you for my day job (which is coding corporate web app), for sure. I'm still letting A.I. drive more nowadays, but it does feel less fulfilling than it used to.<p>But for my personal projects, I work on games, and by offloading a lot of the coding work to A.I., my puzzle solving is no longer 'how to fix this stupid library spitting stupid errors at me' or 'how to get this shader working' or 'why is this upgrade breaking all the things' and more 'what does this game need in order to be fun and good?', which I find a lot more fulfilling.<p>It's also why I switched my focus to board game design for the longest time. I didn't have to fight my tools or learn some new api or library frequently. And if I wanted to try a new mechanic, I didn't need to spend 20 minutes or 2 hours or 2 days implementing it, I could write something on an index card in five seconds and shift mid-game most of the time.<p>A.I. just brought video games closer to that experience, which actually has made them more fun to work on again, because board games has the immense (financial/logistical if self-publishing or social/networking if attempting to get published through a publisher) challenge of getting physical games published to worry about.
I find this interesting, because as someone who does primarily devops, my satisfaction has increased with AI. For me the code isn't the puzzle but an annoying inconvenience in the way of completing the entire system, so QA is a big part of solving the puzzle.
DevOps is a huge part of my job as a systems engineer and I too have found increased satisfaction with AI.<p>I think the reason (for me, at least) is that my markers of success were always perched precariously atop a mountain of systems that I had varying levels of understanding of anyway. Seeing a pipeline "doing the thing" is satisfying regardless of how I sorted it out.
Why do I agree with both of you?
>I'm the opposite I don't think I've read a single line of code I've shipped in over 6 months.<p>This feels unfair to the people dealing with your (LLM’s) code. You don’t vet it at all? Or am I reading this wrong?
What does "fair" have to do with anything? This is exactly the issue the author is writing about. Take the easy way, reap the profits, then someone suffer the obviously predictable consequences at some point in the unforeseeable future... likely not you! "Fair" is not relevant.<p>The original author points to the consolidation of military suppliers as a major issue, but the truth is that the economies of the western world have been massively dependent on this sort of consolidation and outsourcing for a large portion of the "growth" that they have achieved for a generation.<p>It would be convenient to think that the real question is "how do we climb back out of this hole?" but I feel the more pressing question is actually, "when and why will we start trying?"<p>The profit motive simply does not drive society in this direction.<p>The crises are catastrophic and perhaps even existential, but they are not profitable. You have to be a really lucky market timer to bet on crisis and win.<p>Avoiding crisis over the longer term is simply not investable.<p>"Fair" is not a relevant or useful conception in this context.
> What does "fair" have to do with anything?<p>Not wasting other people’s time when they expect your work to at least pass a cursory check. It’s selfish and disrespectful. It reflects poorly on you. I don’t know about all that other stuff you wrote but it’s not really what I’m talking about so I’ll clarify.<p>I don’t know what your high school/college was like, but we used to trade papers for editing. It was universally considered bad practice to send rough/first drafts. It’s disrespectful and wastes the time of people who are being generous with it for you. You’re offloading your work in a selfish way.<p>Simply put: If I want an LLM’s raw results, I’ll prompt it myself. Why are you involved if I don’t want <i>your</i> work? <i>Your</i> expertise? Want to use an LLM then go for it but don’t just wipe its muddy boots on my work. At least look at the results.<p>Unfortunately, this is becoming even more common with LLM’s. I have no problem confronting people about it because 100% of the time they don’t want it done to them. It’s not even an argument, it’s catching them being selfish and they know it.
Are the people paying your paycheck being fair to you? Are the executives of your company paid orders of magnitude more than you are? Fairness starts from there. Your job is to be as unexploited as possible. I hope my coworkers also have this goal.
What does my relationship with the c-suite/my work have to do with a colleague dumping their unedited chatgpt crap on to me? I legitimately do not understand what point you’re trying to make. There seems to be a lot of assumptions here and I’m not sure what they are.<p>Sending your unedited LLM outputs to me is not sticking it to the execs. If you really want to play that game, you go ahead and ship that or hand it to someone who deals with the final output. That’s your prerogative and you can face the consequences. I am not here to clean up your AI slop. That’s not my job. At that point <i>you</i> are the problem, not the c-suite.<p>All I hear from AI evangelists is “it’s a tool! It’s not the problem! It’s people using it wrong!” Ok, then the people using it are the problem if something is wrong. So if you act this way, which is clearly not a productive use of the tool, you are the problem.<p>Edit: let me just ask you a somewhat multi-faceted question. If you ask me for a summary of something and I simply hand you what ChatGPT gave me, would you say “thanks” and be satisfied? Is that what you wanted me to do? Is there a reason you asked me to do it instead of prompting ChatGPT yourself?<p>What if I did this <i>every time</i> I had to write anything? Every email. Every summary. Every report. Just prompt, copy, paste, send to you.
And this is the AI ethos: anti-conscientiousness.<p>Is it correct? Is it any good? Should I subject another person to this? Is it profoundly rude to not even read their email and just have a robot respond automatically?<p>The slopmonger does not engage with these questions at all, because they never cared.
What makes you think the people dealing with the LLMs' code won't also be using LLMs to "deal with it"?<p>We're all now basically junior coders who have no idea what is in the codebase. Without LLMs, we won't be able to "deal" with any of it.<p>And I don't like it one bit.
Because you can’t assume everyone else is as indifferent about wasting people’s time as you are. Some of us don’t want to actively make our colleagues/customers miserable. That decision forces <i>me</i> to decide if I will be a part of the problem even if I generally do good work I can stand behind. You’re forcing me into a decision making process purely out of your desire to not do the bare minimum when working. That’s not right.<p>I also may be staring at consequences you are not. It’s passing the buck with no regard for who is left to eat shit at the end.<p>What if we are working on, say, accessibility tasks? If I see your work won’t actually help those in society <i>who seriously need these features</i>, what am I supposed to do? My kneejerk is 1) fix it (more work for me, selfish on your part), 2) kick it back to your lazy hands, or 3) send it up the chain where someone else has to ask these questions or - worse - it gets shipped and people who need this stuff are screwed. This is basic ethics.
Another tragedy of the commons tbh
I generally don't have as much time (or patience / fucks) anymore in my day. So, I use AI three days a week. On the other two days, I don't use assistants to code, just ask them to review my work after it's done.<p>Helps me keep sane tbh. And keeps the edge sharp.
At work we are literally forced to use AI and it’s part of our performance review. Even though I really like coding by hand, I now have to use AI so I can keep my job. I will try this out though: two days per week using AI and the rest handcoding, enough to stave off the inevitable layoff perhaps.
Surely it can’t be hard to token-max at work the same fucking way people have gamed Jira metrics for years and years.<p>If I’m ever in that position (everything I work on is air-gapped, so it’ll never happen) I would make it a priority to figure out how to game that bullshit metric so I could get on with solving actual problems.<p>I imagine a lot of people do this. Metric becomes a target, etc.
I have <i>always</i> had a problem, worse than most I think, where if I’m away from a language for a bit I lose my ability to write it quickly and competently, real quick.<p>It doesn’t matter if I was quite competent in it… the mechanical bits fade fast.<p>Doing llm assisted work is going to be like pouring bleach on my brain. I can feel it. The more I use it the worse it will be for me.<p>I can still formulate what I need, and problem solve just fine, but all the nuts and bolts evaporate.
The danger is not using a tool. We all use tools... The danger is skipping the part where your own brain builds a model of the problem
To add to this issue, a lot of people then offload their mental load and work to the people downstream of their LLM results.<p>Someone on HN put it well the other day: everyone wants to deliver AI results, no one wants to receive them.
Another perspective: AI reduces brain effort in some domains which actually frees up brain juice that can be applied elsewhere.
For me AI mostly reduces time effort. AI types code faster than I do, looks up stuff on the internet faster than I do, debugs faster than I do, but doing those never required much "brain effort" from me.<p>What does require "brain effort" from me is making educated decisions. Mostly during planning, to figure out which pros/cons of each possible approach are actually relevant for our situation - AI does this poorly, makes lots of wrong assumptions if you don't steer it correctly, and noticing these + correcting AI on them requires "brain effort" too. Then the part of code review where you think about what can go wrong. AI still sucks at figuring out edge cases. It doesn't "know" the entire codebase like I do; its context only has "the parts of the codebase deemed relevant".<p>Before AI I could jump from 30 minutes of hard thinking into an hour of coding during which my brain essentially rested, before returning to hard thinking again. Nowadays those hour-long coding sessions turn into 5-10 minutes of watching AI do something.<p>So for me using AI doesn't "free up brain juice"; it instead makes me use my "brain effort" more, and in a workplace environment gives me less time to rest and makes me more tired, because nowadays bosses expect us to work faster + colleagues working faster means more review requests.
Show us effects.<p>What amazing breakthroughs were achieved thanks to brain juice freed by AI usage? What great works of art were created?
I'll bite. I've been writing music for decades but I can't sing. With AI I can write lyrics and generate AI vocals, then separate the stems and extract the vocals, throwing away the rest.
Add the vocals to my DAW and create the rest the way I want.
Saying it's a great work of art is subjective, but for me, I can make music I couldn't before now.
I've got one. I'm working on a cryptographic identity system in Rust. One of the stricter iterations of it demanded creating a public version and a private version of each type. The best way to accomplish this is a procedural macro. I don't know if you've written proc macros by hand in Rust. I have, years ago, and it was somewhat torturous. I didn't want to relearn it all over again and spend what would have taken weeks (this is a side project) to gain a skill I will easily forget in a month or so. So I had an LLM code it for me. This is a really great use for it: it's not building any strong logic or doing any IO, it's simply writing code that generates other code, and is entirely verifiable and testable. It built it for me so I could spend those weeks working on higher-level logic and p2p syncing protocol stuff that <i>actually matters</i> for the project.<p>I want to make it clear that I'm an LLM Luddite. I mostly find the things distasteful and obnoxious. But there are definitely use-cases where they can do what's essentially bitch work and save a lot of time that would otherwise be a waste. It's a tool that can be used for specific things. I don't use them for everything.
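For anyone curious, here is a minimal sketch of what such a derive macro can look like using syn and quote. The `PublicVersion` name and the simple field-mirroring behavior are my illustration, not the actual project's macro, which presumably also strips or transforms the private material:

```rust
// Sketch of a proc-macro crate (Cargo.toml needs `proc-macro = true`
// under [lib], plus `syn` and `quote` as dependencies).
// It mirrors a named-field struct into a "Public"-prefixed twin.
use proc_macro::TokenStream;
use quote::{format_ident, quote};
use syn::{parse_macro_input, Data, DeriveInput, Fields};

#[proc_macro_derive(PublicVersion)]
pub fn public_version(input: TokenStream) -> TokenStream {
    let input = parse_macro_input!(input as DeriveInput);
    let name = &input.ident;
    let public_name = format_ident!("Public{}", name);

    // Only structs with named fields are handled in this sketch.
    let fields = match &input.data {
        Data::Struct(s) => match &s.fields {
            Fields::Named(named) => &named.named,
            _ => panic!("PublicVersion: only named fields are supported"),
        },
        _ => panic!("PublicVersion: only structs are supported"),
    };

    // Mirror each field, making it public in the generated twin.
    let mirrored = fields.iter().map(|f| {
        let ident = &f.ident;
        let ty = &f.ty;
        quote! { pub #ident: #ty }
    });

    quote! {
        pub struct #public_name {
            #(#mirrored),*
        }
    }
    .into()
}
```

Applied to a hypothetical `struct Identity { verifying_key: [u8; 32] }`, it would emit a `PublicIdentity` with the same field. And because the macro's output is just code, it can be inspected with a tool like cargo expand and covered by ordinary tests, which is what makes this kind of mechanical codegen a relatively safe thing to delegate.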
Exactly. What service got better and/or cheaper?
Most software, other than Windows and macOS, seems to have gotten better more quickly lately.<p>Hard to quantify, though, as most companies, other than the AI companies themselves, don't usually advertise their AI usage.
So this is the classic tension between the "coding for the love of code" vs the "coding to solve problems" mindset. This cultural concept has been around since before AI was on the scene, heck well before software existed (craftsman vs builder).
Ya this has been my sentiment. If i need to one-off a quick script that does some processing on data, it’s nice to offload that so i can focus on pieces of my code that are more important and interesting to me. The context switching cost is still there tho…
>I closely see people that are 100% dependent on AI for literally everything, even the most trivial daily tasks, and I find that truly scary because it means that brain effort drops dramatically to a minimum level. Having your mental effort stolen is not a minor thing.<p>I find myself thinking more and my thinking is of higher quality. Now I have 30 years of fucked-up-projects experience, so I know all the rakes I could step on.
I think it's been well-documented that people <i>feel</i> more productive with GenAI, even when actual productivity declines.
You probably overestimate yourself.
I relate to the idea of having a different level of thinking now with AI. How would you evaluate that someone is overestimating themselves?<p>Every little thing that used to be too much effort before, I can now easily get the info and the data for with a prompt. The data analysis of something which otherwise might have taken hours to figure out -- I can just have AI write scripts for everything, which allows me to see more data about everything that previously was out of reach. Now you will probably ask, of course, "how do I know the data is accurate?" -- I can still cross-reference things, and it is still far faster, because even if I had spent hours before trying to access that data, there wouldn't have been similar guarantees that it was accurate.<p>I am thinking so much more about the things now that I couldn't possibly have had time to think about before, because they were so far out of reach, or even unimaginable to do in my lifetime. Now I'm thinking about automating everything, having perfect visualizations, data about everything, being able to study/learn everything quickly, etc.
It sounds like you're optimizing for a system of self-deception. If you never check how the data is collated, but rather whether the collation appears consistent, you will eventually be left only with data that has the appearance of consistency, regardless of how correct it is.
I hear this a lot, but I'm also curious: how can you really forget coding?<p>It doesn't seem like a thing I could suddenly forget.<p>Without AI I would feel frustrated that I'm now much slower, but ultimately it's just describing logic. So I'm a bit skeptical of the claim.<p>My brain effort is also on other things now, such as how to orchestrate guardrails, how to build pipelines to enable multiple agents to work on the same thing at the same time, how to understand their weaknesses and strengths, how to automate all of that. So there's definitely a lot of mental effort going into those things.
If you are not practicing an activity consistently, you'll forget some of the finer grained aspects. When I'm coding, I subconsciously create a continuous logic map. Having someone or something just generate (and generate so quickly) destroys that and makes it easier for bugs to slip through.
I mean, if e.g. AI stopped existing all of a sudden, it doesn't mean you would have forgotten how to code and suddenly couldn't anymore, right?<p>You could forget maybe how a certain lib or framework worked, or things like that, or you wouldn't have been up to date with all the new ones, but ultimately code can be represented as just functions with input and output, and that's all there is to it.<p>As in, how could I possibly forget what loops, conditionals or functions are?<p>I haven't written code myself for 1+ year (because AI does it), but I feel like I have forgotten absolutely nothing; in fact I feel like I have learned more about coding, because I see what patterns AI uses vs what I did or people did, and I am able to witness different patterns either work out or not work out much faster in front of my eyes.
A writer will never forget what adjectives, verbs, and nouns are. But if they use LLMs to write for them for years they will be worse at writing on their own.
Well, what I'm trying to say here is that coding is conveying logic; the way you'd evaluate it is how fit it is for its purpose, and if it's long-term code, how well it will scale into the future.<p>Now writing is something totally different. In some cases writing ability is not about writing, it's about your thoughts and understanding of life and human nature.<p>You could become a better writer without writing anything, just by observing.<p>If you are using an LLM to write, what is the purpose of that? Are you writing news articles, or are you writing a story reflecting your observations of human nature with novel insights? In the latter case you couldn't utilize AI in the first place, as you'd have to convey what you are trying to say in your own words; AI would just "average" your prompt or meaning, which takes away from the initial point.<p>With code, being predictable is desired; good writing is supposed to be unexpectedly insightful. It's completely different.
Are we talking about observational ability, creativity, accuracy of communication or grammar here?<p>There are many more ways to evaluate a writer's skill, in terms of what they are doing, than there are for coding. Coding can be creative, but in most cases you are not evaluating coding as writing, unless it's possibly technical writing, which is still different compared to coding.
Coding is a thinking activity. What you’ll be missing is the nimbleness in doing that activity, not the knowledge.<p>So you may remember all your high school math, but not doing it every day means you are slower than some of the students. Your knowledge of programming will be there, but you will be slower because you no longer have the reflex that comes with doing things over and over.
I feel like I have to disagree here. I don't practice e.g. multiplication or doing math in my head every day, or for years really, but I feel like I'm just as fast at it as I ever was. In fact, whenever I have tried things like Lumosity or brain-benching games that I used to do when I was younger, I'm actually faster than when I was younger, despite not having practiced at all. I feel like all the real-world side practice has helped me improve these abilities indirectly; it has all added to my brain's ability to notice novel patterns, see things from different perspectives, and apply new intuitive strategies that I might have missed because I was tunnel-visioning when I was younger.<p>There are also plenty of things that I have got for life just by having practiced them as a child. E.g. I think everyone gets bicycling, but there's also the handstand, walking on hands, etc., which I learned as a kid over a few years, and I can still do them even if I only do them once a year. In my view code is exactly the same, and maybe in a way even more straightforward; it's easier than obscure math, since you don't have to memorize any formulas to solve it. Albeit I think a lot of math is great precisely because you don't have to memorize formulas in the first place; you just have to internalize or figure out the logic or the idea behind it, and then you have it. I think repetition in math is specifically the wrong way to go about it; it's about understanding, not repetition.
Multiplication is elementary school math which doesn't require any thinking, and the learned approach is simple. You can't really compare the simple stuff that's taught to kids, like basic multiplication or riding a bike, with stuff that requires domain-specific knowledge and experience.<p>Think more of stuff like "find the angle between the lines defined by (x-4y-1=0) and (x-y-2=0)", "write the number 2026 in base 7", "solve the equation sin^2(x) - sin(x) = 0".<p>I plucked these from our country's high school final exam from this year. Back when I was in high school, I did mine in 60 minutes without an error when the time limit is 150 minutes, and I intuitively, immediately knew how to approach each task from the moment I saw it. Also, all needed formulas are supplied; you don't need to remember any of them.<p>I plucked these because for these I don't have the immediate "know-how" now. I still understand the topics and could solve them with enough time, but it would require some thinking, and thus I would be slower at solving them than when I was in high school, even though I'm pretty sure I could still ace it within the 150-minute time limit.<p>But reality goes beyond high school... College-level math, like derivations/integrations, sums, algebraic proofs, is even harder, and solving some of them could take me hours when I could do them in minutes when I was in college.<p>With code it's the same. I could solve simple Python/Pascal/C++ high-school-level tasks as fast as or faster than when I was in high school, even if I didn't write any code for a couple of years. But we also had an assembly class in college, and I would struggle at assembly if I had to code it now, 10+ years later, even though I didn't struggle with it back then.
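(To make the fading-fluency point concrete with the easiest of those items: writing 2026 in base 7 is nothing but repeated division by 7, yet the speed comes from the reflex, not from a formula. The worked check below is my arithmetic, not the exam's:)

$$
2026 = 5 \cdot 7^{3} + 6 \cdot 7^{2} + 2 \cdot 7 + 3 = 1715 + 294 + 14 + 3,
\qquad \text{so } 2026_{10} = 5623_{7}.
$$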
I used to be an expert at PHP, but I haven’t written any in over a decade now. I can still read it, but it would take me a little while to get back to where I was (hopefully I’ll never need to). The same thing could easily happen due to AI.
If your internet died, you would likely be worse at programming than you were in 2020. I think that's what people are getting at.
I always compare AI programming to Google. If that's the case, then without internet, without Google, without Stack Overflow, my abilities would be worse than they were in 2000.
If my internet had died in 2020 I would also have been useless, because I probably couldn't install/download all the libs/frameworks, etc.<p>But if I didn't need those things, and there was a simple pseudolang syntax which acted exactly the same in all versions and didn't have any breaking changes, I would argue I'd be much better at it now.<p>Internet, search, etc. are needed to understand how to set up libs/frameworks/APIs, but logic in itself isn't something that I could possibly forget. AI helps get those setups done quicker without me having to search, but arguably it's all useless information that will get out of date, that I really don't even need to know. I don't need to know off the top of my head what the perfect modern tsconfig setup should look like, or what the best monorepo framework is and how to set it up so it would scalably support all the different coding languages for different purposes.
“Money was never the constraint. Knowledge was.”<p>The irony is how difficult it is to read this obviously AI-generated article due to its unnatural prose and choppy flow full of LLM-isms. The ability to write is also a skill that atrophies.<p>Even when AI is understandably used due to language fluency, I’d prefer to read an AI translation over a generated article.<p>If you don’t care enough to write it, why should I care enough to read it?
I am really amazed at how we are really okay with LLMs writing code end to end (without a human in the loop) / the dark-factory concept, but when it comes to articles, HN is suddenly against LLMs writing words. I do not see the difference between writing code and writing prose. Both have keywords, grammars, syntax, meaningful combinations (functions or chaining in code / collocations in words). If we think that AI-generated words are not meaningful or easy to follow, the same must apply to AI-generated code, which may be harder to read or understand since it is not written by a human. Let's stop being hypocrites.<p>Note: My comment is not specific to this comment. I just wanted to express myself somewhere, and this is where I think it may be suitable.
Who is the 'we' here? When did I become ok with LLMs writing code end to end or against LLMs being used to assist writers? I wasn't aware I held either of these positions.
That's because the purpose of code is to be used, not to be read.<p>The only purpose of the written word is to be read.
I've always tried to write code for future maintainers first. That is often me.
That's the difference to me. Code is used as instructions to computers. Written human language is used to communicate thoughts, ideas, feelings to other humans.<p>I disagree with the premise that "we" are all OK with AI slop computer code, however. Even if it's just for consumption by machines, for at least some developers it is a creative outlet.
The purpose of writing is to get your thoughts across in words. A prompt specific enough to get out an article with zero chance of it adding things you don't mean has to contain as much information as the article itself would. Just write the article.
Since I cannot edit my comment, I'm replying to it. I did not mean to insult HN moderators. I am actually very happy that they are protecting HN by removing and flagging AI content. I only wanted to draw attention to the fact that in some areas AI use is promoted, but in others it is frowned upon, and I do not get it.<p>What I mean by "we" is that there is a general perception that using AI is okay and mandatory. This idea is becoming more and more prevalent in management positions and it disturbs me deeply.<p>I got some replies since I commented, but I am still of the same mind. I have not seen a strong refutation of my idea. Why are some people (I didn't want to use the word "we" again) okay with AI use in code but not in prose? I know that they are not exactly the same, but they have some similarities. If we are unhappy with sloppy prose, why are we happy with sloppy, potentially buggy or hard-to-maintain code?
> I do not see the difference between writing code and writing prose.<p>That’s the problem.
This is a funny point. People don't want to read LLM code either, so who knows where that puts us.
I would say that the simple reason is that writing is often artistic and coding very rarely is.<p>I don’t listen to AI music or watch AI videos, I don’t want to read AI articles
I've been opposed to all of it the whole time. But yes, let's stop being hypocritical.
What hypocrisy is there in distinguishing between the qualitative value of prose vs code? They serve entirely different purposes; your failure to recognize that is no one else's fault.
We are not okay with slop code. There was healthy and widespread dissent in 2024 and beginning of 2025. Ycombinator cracked down on the dissent first by installing another moderator and then by downranking and banning anti-AI people.<p>What you read here are bots and those invested in AI and an occasional retired person who uses AI as a crutch.
Slop is slop.
LLMs are trained on real-life grammar written by humans. Sometimes the characteristic traits you see from LLMs were themselves first written by human hands.
It didn't feel at all AI written to me. It's much better than the AI written junk that HN laps up without noticing.
It is full of these short sentences that AI writing loves, sort of to feel "punchy". Normally you would copy-edit that stuff, join them up, have the writing have some rhythm. I agree with GP, the article is hard to read because it seems to have a lot of <a href="https://tropes.fyi/" rel="nofollow">https://tropes.fyi/</a>
Is it really so obvious? It didn’t seem AI-written to me.<p>Every day I seem to encounter (and skip over in disgust) a dozen or so AI-generated articles at the top of web searches, but this wasn’t anything at all like those.
Not the factory floor. The receiving end.<p>It wasn’t one bottleneck. It was all of them.<p>Not the nuclear material. The pattern.<p>Money was never the constraint. Knowledge was.<p>...
#1 rule of slop: anything that can be written, can be AI-generated now<p>#2 rule of slop: even posts critical of pervasive AI usage and how it's ruining the world can be AI-generated
Yes, and textile factories involved people "forgetting" how to weave. Whitehead:<p>"Civilization advances by extending the number of important operations which we can perform without thinking about them"<p>It remains to be seen whether this implies some kind of constraint on human progress. I doubt it.
> They can’t tell you what the AI got wrong.<p>AI code generators are trolls. They confidently produce plausible content which is partly wrong. Then humans try to find their errors.<p>This is not fun. It has no flow.
I beg to differ, insofar as my own experience has been the exact opposite. I enjoy fixing other people's mistakes. And I especially enjoy outsmarting the LLMs. I find that I can obsessively breathe down the neck of an LLM for far longer than I could ever stay in the traditional flow state.
I think I might enjoy it for a little bit and then become very depressed at the idea that it will never end, a future of fixing things that should never have been broken in the first place and which won't stay fixed.
> I find that I can obsessively breathe down the neck of an LLM for far longer than I could ever stay in the traditional flow state.<p>I can do that too. Most programmers can.<p>That's because <i>it requires less skill!</i> Critiquing something is always easier than doing it.<p>I can literally keep an LLM fixing things forever by just saying things like "This is not scalable", or "this is not maintainable", or "this is not flexible" or "this is not robust", ... etc ad nauseam.<p>That doesn't take skill at the level needed to actually write the software. For the market which is hoping to switch to mostly LLM coding, the prize they are eyeing is skill devaluation and not <i>just</i>, as many think, productivity gains.<p>They have no reason to double output, but they'd sure love to first halve the people employed, and then halve the salaries of those people (supply/demand + a glut of programmers in the market), and then halve salaries again because almost no skill is necessary...
<i>That's because it requires less skill! Critiquing something is always easier than doing it.</i><p>No, it was always the other way around. Mediocre programmers always wanted to rewrite everything because reading and understanding an existing codebase was always harder than writing some greenfield thing with a “modern language” or “modern libraries” or “modern idioms.” So they’d go and do that and end up with 100x the bugs.
> Mediocre programmers always wanted to rewrite everything<p>You are comparing writing something with rewriting something. You don't know what the difference is?
How is that “no” and “the other way around”? The desire to rewrite comes from the ease with which one can critique existing code for being “too hard” to understand.
You can't generalize that statement.<p>There is a very valid reason why the creator of Erlang back in the day said something along the lines of "you need to iteratively remake your software, improving it each time".<p>As your knowledge about a topic grows, your initial, mistaken implementation may become more and more obviously wrong, and that may even mean a full rewrite.<p>But yes, a person who instantly says "rewrite" before they have understood the software is likely very inexperienced and has only worked on greenfield projects with few contributors (likely only themselves) before.
Perhaps you have the psychological makeup to thrive in this new environment. Glad it is working for you.
It should have the same flow as reviewing PRs from humans.
Who really truly enjoys that and doesn't see it as a chore?<p>I find the real way to review other people's code is to program with it and then I start seeing where the problems are much more clearly. I would do a review and spot nothing important then start working on my own follow-on change and immediately run into issues.
I usually don't mind, but tend to split reviews into two types. Either I understand the context and can quickly do an in depth review, or I have to take some time to actually learn about the code by reviewing the surrounding systems, experimenting with it, etc. But in both cases I would at least run the code and verify correctness.<p>I think it becomes a chore when there are too many trivial mistakes, and you feel like your time would have been better spent writing it yourself. As models and agent frameworks improve I see this happening less and less.
> Who really truly enjoys that and doesn't see it as a chore?<p>This is a whole different discussion, but I just see it as part of the job that I'm getting paid for, I don't need to enjoy it to do it.<p>Functional testing is a must now that writing tests is also automated away by LLMs as you can get a better understanding if it does what it says on the box, but there will still be a lot of hidden gotchas if you're not even looking at the code.<p>Plenty of LLM-written code runs excellent until it doesn't, though we see this with human written code too, so it's more about investing more time in the hopes of spotting problems before they become problems.
> Functional testing is a must now that writing tests is also automated away by LLMs as you can get a better understanding if it does what it says on the box, but there will still be a lot of hidden gotchas if you're not even looking at the code.<p>Well, there you go. Letting AI write the tests is a mistake IMO. When I'm working with other people I write tests too and when I see their tests I know what they're missing out because I know the system and the existing tests. Sometimes I see the problem in their tests when I'm working on some of my own. If you absent yourself from that process then ....
Which is a really, really bad idea.<p>Most people don't spend nearly enough time going through a code review. They certainly don't think as hard as needed to question the implementation or come up with all the edge cases. It's active vs passive thinking.<p>I, for one, have found numerous issues in other people's code that makes me wonder, "would they have ever made such a mistake if they hand coded this?"<p>btw, a side effect is that nobody really understands the codebase. People just leave it to AI to explain what code does. Which is of course helpful for onboarding but concerning for complex issues or long term maintenance.
The problem is the LLMs completely change the equation. Before LLMs, beyond very junior (needs serious coaching) levels, reviewing was typically faster than writing the code that was reviewed. With LLMs, writing code is orders of magnitude faster than reviewing it. We already see open source projects getting buried in LLM slop, and you have to find the real human, or at least carefully curated, contributions among the slop.<p>I would not be surprised if many open source projects outright stop taking PRs. I have had the same feeling several times - if I'm communicating with an LLM through the GitHub PR interface, I'd rather just directly talk to an LLM myself.<p>But ending PRs is going to be painful for acquiring new contributors and training more junior people. Hopefully the tooling will evolve. E.g. I'd love to have a system where someone has to open an issue with a plan first, and by approving it you could give them a 'ticket' to open a single PR for that issue. Though I would be surprised if GitHub and others created features that are essentially there to rein in Copilot etc.
Anything AI-generated is a troll. There's no logic, just pattern repetition. I don't get how supposedly smart engineers fall for it.
We humans cannot scan 100’000 articles looking for the golden nugget; AI data mining can do it and present it in seconds. Obviously we need to verify the data.<p>A couple of decades ago, we didn't trust compilers; we wrote assembly manually. Today it's the same barrier: some developers will explode with productivity while others will be left behind.
Because a lot of engineering is pattern repetition, which is not very fun for engineers either, and LLMs can do it much faster?
[flagged]
I highly question the ability of companies to gauge the level of experience of any dev.<p>The distinction between junior, mid, senior, lead is a facade. It is a soft gradient that spans multiple areas, but is tainted and skewed by the technology du jour.<p>Technically you don't have to be an employed developer to become a senior developer. It boils down to your personal willingness to learn and invest time building.<p>What companies seek these days are people who have experience with (dysfunctional) organizational structure and with working around the shortcomings of the organization's communication and funding patterns, nothing more.<p>Does that really make you senior, or just politically versed?<p>The pattern shows up the most whenever failing software pokes holes in perception.
There are two kinds of developers.<p>There's the kind that, when given a problem, will jump in, learn what they need to learn to solve the parts they don't fully understand yet, deliver meaningful iterative results, talk to people as needed, keep you posted on their progress, loop in other team members and offer/request help to/from them, take initiative on the obvious missing parts that would benefit the project as a whole, etc.<p>And then there's the rest.<p>Within the first few years of someone's career, you can quickly tell which kind they are. It's almost impossible to turn someone from the latter group into the former.<p>Yes, everything else is a façade. You can be a "senior" developer with 30 years of experience and still be in the latter group. And you can be fresh out of college and be in the former.<p>Now some people are extremely good at other skills (politics, interpersonal communication, bullshit, whatever you want to call it) and will be able to seem to be in the first group to the people who matter (managers, execs, etc) while actually being in the second group. But then we're not talking about actual software-making skills anymore.<p>You can also totally be in the first group and be underpaid, never promoted, etc. There's little correlation with actual career success.
> There's the kind that, when given a problem, will jump in, learn what they need to learn to solve the parts they don't fully understand yet, deliver meaningful iterative results, talk to people as needed, keep you posted on their progress, loop in other team members and offer/request help to/from them, take initiative on the obvious missing parts that would benefit the project as a whole, etc.<p>I would rather tweak that a bit and say we need a kind that has two things: 1) aptitude - not genius or 10x something, just plainly being able to think clearly and having problem-solving skills; 2) care - i.e., not just dumping whatever hack works in the short term and declaring victory. It doesn't imply that you obsess over perfection and ignore deadlines etc. But basic care about the solution being sensible, good code quality, and not causing a new set of problems due to shortcuts. Both are things we routinely expect from programmers, and I see less and less of them. #2 is rarer than #1.
>It's almost impossible to turn someone from the latter group into the former.<p>Only if you're constrained by the same short-term thinking as US businesses. The way to do that is more of an apprenticeship model, where someone observes/works closely with someone from the first group over years.<p>Even then, the businesses don't want to pay for that, and why should the workers give that away for free? They want people to churn out code because they've chosen to hire micromanagers that need constant updates and babysitting through communication.
My experience is that that plainly does not work. I work with developers of both types, and the junior ones who are part of the first group are limited in their ability by experience, but they have an inquisitive mind and don't give up quickly when they encounter something they don't understand.<p>Much more experienced developers of the second type just throw their hands up and give up (or now: turn to AI). I've worked closely with them to try and reform them. Maybe I'm doing it all wrong, but it has never succeeded.<p>With the ones from the first group it can work that way: you can show them how you approach problems and they will ask questions and pick up patterns and you'll see them improve.<p>> Even then, the businesses don't want to pay for that, and why should the workers give that away for free?<p>Businesses would need a high likelihood that they can reap the rewards of upskilling employees. Why invest a lot of money and high-talent attention into someone who might quit? At the same time, I'll happily pay three times as much for a truly skilled senior developer. I think the employee's incentives are much more aligned: it will increase their market value, it's an investment into their wealth, not the business'.
>My experience is that that plainly does not work.<p>The apprenticeship model isn't in practice at any scale in software, I don't see how you could believe that. Practically every career start is self-taught or university to junior positions which is not the high-attention, one-on-one focus you'd get.<p>>Why invest a lot of money and high-talent attention into someone who might quit?<p>What happens if you don't and they stick around? You might say 'well, I'd just fire them' but then you are going to have a culture of people always having one foot out the door. And a high amount of position switching in the industry has led us to what we have today where people don't really stay and build for the long-term, and shoddy code bases also drive people to quit.<p>An apprenticeship model also helps if you can do 3-5 year agreements for training where you see the most benefit from the person in the last 1-2 years.<p>As good as it has been for my career, switching often probably needs to slow down (while raises go up) and apprenticeships go into effect for better quality training.<p>All this assuming there isn't another major leap in AI competency though.
I'd color this a little. I think there's also an engineering mindset some people have, and some don't. And over 10 years in, I'm still not sure if it can be trained or not. Some people are just really good at seeing technical solutions in terms of engineering: Where does the data live, where does it go. How does it get there, how does it change. How does it break, how will we know, how will we fix it, how will we cope with its shortcomings. To some people, all of those questions are a relatively quick and intuitive part of scoping and design. For others it's like a constant cliff they run into midway through their projects, or worse (and far more common), a set of bugs that become "tech debt" (for someone else to inherit) as they slap the "Mission Accomplished" banner on yet another project.<p>I've seen people that are very proactive and generally fall into your former group, but also don't quite seem to think like an engineer. I really want it to be trainable - I am trying - but IDK if it is or not.
> There's the kind that, when given a problem, will jump in, learn what they need to learn to solve the parts they don't fully understand yet, deliver meaningful iterative results, talk to people as needed, keep you posted on their progress, loop in other team members and offer/request help to/from them, take initiative on the obvious missing parts that would benefit the project as a whole, etc<p>You're framing this person as a good developer, and sure, probably some people who behave like this are good. <i>MANY</i> people who are like this leave mountains of problems in their wake. It takes a very special person to be able to build good quality with this kind of approach.<p>You're basically talking about someone getting the right answer on the fly at full speed.<p>It's much more likely they get a subtly wrong answer, which is then dropped on that second group to manage and maintain going forward, while the fast-moving person is hurried along to go drop a subtly wrong solution on another project with another team. This has happened to me many times in my career.
Thank you for this.
> Technically you don't have to be an employed developer to become a senior developer.<p>Outside of a sufficiently large organization, "seniority" of a developer doesn't make any practical sense. So, technically you can assign yourself any label, but that would be a weird thing to do.<p>A freelancer is measured by portfolio, a computer scientist in academia by publications, an OSS contributor by the volume and impact of contributions. In each case, it's proportional to the effort spent on learning and building.<p>Anyway, regardless of employment status, the measure of your professionalism is not defined only by something you can learn from books. Experience matters a lot: it's nearly impossible to succeed in stakeholder management or presentation of your solutions by reading anything. You need practice and feedback. Senior engineers aren't those who excel at writing code: fresh CS graduates are supposed to know algorithms better. Senior engineers can contribute across the full SDLC themselves and support others. That is much easier to achieve in a professional environment than by working on amateur projects.
Sure, we live in a society. Seniority is about your ability to make an impact, which generally requires social and organizational skills. It can be bemoaned as much as you want but that’s how the world works.
> What companies seek these days are people having the experience with (dysfunctional) organizational structure and working around the shortcomings of the organizations communication and funding patterns, nothing more.<p>This is depressing and seems right. And yet this is something I desperately want to be ignorant of. I don’t want to peel apart my brain for anyone. Working within these kinds of problems is pure pain.
> Technically you don't have to be an employed developer to become a senior developer.<p>That's incredibly unlikely. Do you need to be an employed surgeon to become a senior (or whatever they call it) surgeon?<p>I very much doubt you can be senior without having actually spent years doing it professionally. The experience is everything; no book will give you the sort of understanding you need. That's unfortunately human nature: we are not capable of learning and internalizing things simply from reading or watching others do them, we absolutely need to do them ourselves to truly learn. Didactic books always have exercises for this reason.<p>You can learn facts and techniques from books, obviously. But just because you've read a book about Michelin restaurants doesn't mean you can now be a Michelin chef.
> That's unfortunately human nature, we are not capable to learn and internalize things simply from reading or watching others do it, we absolutely need to do it ourselves to truly learn.<p>That is, and has always been, true. Currently, however, the narrative that is sold (and unfortunately accepted by so many of the senior developers who post here) is that the experience of telling someone else to do something is just as valuable.
I've never worked in a corporate environment beyond client projects.<p>Picked up a book on XHTML (no, that isn't a typo) and CSS in 2007, just kept trying to build stuff I wanted to build and backfilling knowledge as I went. Not only is it possible, it's preferred. ~20 years in and I've learned how to build my own full-stack JS framework, deployment infra, a CSS framework, and an embedded database to boot.<p>Not one drop of this would have been possible had I taken the traditional corporate track.
Maybe they mean you can be not employed and build products yourself? Technically true, but that's like running your own surgeries or something, you're still doing surgery.
Analogies to other professions give your argument an air of legitimacy, with none behind it.<p>There are plenty of people in this world who are expert programmers without following any traditional path.<p>“Oh yeah, like who”, you say.<p>Con Kolivas, anaesthetist: worked on kernel schedulers, including the Rotating Staircase Deadline (RSDL) scheduler, which was a precursor to the Completely Fair Scheduler in Linux, as well as the Brain Fuck Scheduler and the ck patchset.
There's a certain irony in that the article itself is quite clearly assisted by AI. Not a criticism per se as I don't have a problem with AI assistance, but food for thought given the material being commented on.
The tropes that AI introduces into articles are very noticeable, quite annoying, and very unnatural -- they unfortunately don't write well. It seems people use them to "polish" up their writing, but in reality it would have read better if they hadn't.<p>My current pet peeve is using a period instead of a comma, as in:<p>> My people lived the other side of this equation. Not the factory floor. The receiving end.<p>Ostensibly this is supposed to add gravitas, but it's very often done in places where that gravitas isn't needed, and it comes off as if I'm reading the script for an action movie trailer.
> The tropes that AI introduces into articles are very noticeable, quite annoying, and very unnatural -- they unfortunately don't write well.<p>Quite paradoxical: when it's a person's native language we can spot it a mile away, yet there's no shortage of engineers who claim how good the code output is.<p>Whatever the reason for the default tone of AI in English, it's still there when generating code. It makes me think that the senior engineers who claim that it produces awesome output just don't understand the specific programming language the way someone who thinks in it almost natively does.
It really feels sometimes like they were trained on too much short-form fiction or something. Really stunts their sentence and paragraph texture.
Unnecessary emphasis can get... quite comical... indeed.
People have also started copying the AI tropes, especially your period/comma example.
The uncanny valley is an attractor basin.
Made me stop reading a few paragraphs in. I don't have a "problem" in the ethical sense either, but as the sibling comment notes, the way LLMs write is rather grating. To make matters worse, a) people seem to use them to add pointless volume / "filler" to their texts, so now I have to wade through pages and pages of this stuff, and b) I have no easy way to distinguish between an article at least based on novel human insights vs entirely LLM-generated from a "write me something about X topic" prompt. I don't think it's a stretch to say that the latter just isn't worth reading given the state of the art.
The filler stuff is really a huge waste of time and effort. I tried to Google whether Ranch Corn Nuts are vegan, and every result in the top 10 was the same AI-generated slop with 10 paragraphs that had nothing to do with what I was trying to find.<p>All the top results had the same AI feel to them. The same format and structure.<p>The best part? None of them said yes or no. None of them answered the question. They just listed common dairy and non-vegan ingredients to look out for. So, all that AI and nobody put in the ingredients list. Lol
I don't have a problem with AI assistance either, but this undermines the point the article is making. For me it is like a priest preaching that gay sex is wrong and then being caught in bed with a male prostitute (snorting cocaine optional). Leaves a bad taste in the mouth.
Out of curiosity, what are you basing this on?<p>The text has few of the obvious AI tells. The only thing that, to me, looks characteristic of LLM-generated text is the short and terse sentence structure, but this has been a "prestigious" way to write in English since Hemingway.
Sort of a taste receptor I’m sure many have developed now.<p>The most obvious patterns here are: antithesis constructions, word choices and distribution, attempts at profundity in every paragraph that instead are runs of text that don't say anything, and even the perfect use of compound hyphenation. I can appreciate that there is definitely an attempt at personalization and guidance to make it less LLM-y and not just a default prompt, but it’s still kind of obvious. You could use a detector tool too, of course.
What are the obvious tells? List them, because I think our sense of the tells may not overlap.<p>This article is clearly LLM-generated, even the title. A key indicator is that it <i>almost</i> makes sense: we forgot how to manufacture because that got sent to a different nation. The coding thing isn’t getting sent anywhere, so <i>humanity</i> is forgetting how to code. The distinction undermines a lot of the emotional baggage about offshoring that the article wants you to bring along.
There are quite a lot of them: <a href="https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing" rel="nofollow">https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing</a><p>But it's really just the usual ones that are truly obvious. "Not X but Y," em-dashes, "underscores the significance of X," and so forth.<p>A terse sentence structure can be a tell, but, IMO, it's a weak one.
The blog post reads nothing like Hemingway. Here's a classic example: <a href="https://anthology.lib.virginia.edu/work/Hemingway/hemingway-hills" rel="nofollow">https://anthology.lib.virginia.edu/work/Hemingway/hemingway-...</a><p>Hemingway writes simple sentences with a kind of detachment to make the emotional flow of his stories as transparent as possible.<p>LLM slop reads more like slide bullet points extrapolated to prose-length text
Blog posts aren't typically written like Hemingway.<p>Find some pre-2020 ones that are, and you'd have a point.
<a href="https://awnist.com/slop-cop" rel="nofollow">https://awnist.com/slop-cop</a> (via <a href="https://news.ycombinator.com/item?id=47806845">https://news.ycombinator.com/item?id=47806845</a>) points out Staccato Burst, Dramatic Fragment, Colon Elaboration, and Short-Hook Paragraph. To me, those define the tone of this article.
Interesting tool.<p>I'm not trying to defend the blog post, but I gave Slop Cop 775 words of an essay by Schopenhauer (translated into English) and got "15 patterns detected."<p>I fear we're approaching the point where AI-written text grows indistinguishable from human-written text, unless the AI-user is exceptionally lazy and uses an obsolete model...
I saw academic rigor fall off a cliff in exchange for 'better job alignment' between the end of the '80s, when I had my first class after finishing high school, called 'Formal Verification in Software', and the beginning of the 2000s, when I left after giving the first 'Programming in Java' class to new students. All the 'teaching how to think' was replaced with 'how to get a well-paying job'.
There is no incentive to work hard for most organizations. They will take credit for all your work and maybe give you a $1000 bonus at the end of the year, if they haven't laid you off by then.
>I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.<p>>In defense, the substitute was the peace dividend. In software, it’s AI.<p>Before it was AI, the cheaper alternative was remote contract dev teams in Eastern Europe, right?
Not sure why that was ever the plan, as there are clearly not enough people.<p>Also over here, east of 15°E we were fired all the same.<p>I believe the plan is to quite simply "do less overall unless it's about AI", but everyone was waiting for others to start layoffs first.<p>I spent six months working part time and the decision makers made it clear that this is preferable for them long term. Beats getting fired, but I couldn't sustain this lifestyle - I'm frugal but not <i>that</i> frugal.
Happy to help and eventually take over.
Pretty sure cheap foreign labor is more prevalent now than ever at every major tech company.<p>They really, really do not want to spend money. Especially not on Americans and their health insurance.<p>It's really strange how we're just letting them get away with this. They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.
America could just reduce their cost of living, optimize their healthcare, make domestic business more attractive etc instead of trying to ban everything to duct tape over deeper problems
> It's really strange how we're just letting them get away with this.<p>Choosing to pay less is what almost all people do, and it is consistent with almost all of human history.<p>> They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.<p>When push comes to shove, i.e. paying lower prices to consume more goods and services or paying higher prices to ensure your countrymen can buy more goods and services, almost everyone will choose to pay lower prices. See the political unpopularity of tariffs high enough to stop imports.<p>“American” is a nebulous term, and Americans had been choosing lower prices for many decades before the current crop of employees at the global big tech companies chose lower prices. It is no different than when someone picks up lower-priced workers waiting outside Home Depot, who are there because they do not have legal work authorization in the US.
India for the most part.
It had to be H1B Indians and outsourcing to India. As a European, I have seen some "Eastern European devs" around, sure. But they were not present at every company I worked with. Indians were. Quality-wise, it was always the same story, but I'm not going to elaborate. Everyone who is ready to accept it, knows what I would be saying anyway.
No, you probably need to elaborate on that. Because in my experience, the quality from people in India varies just as much as the quality from any other country, including the USA.<p>What does make a difference is the company they work for. Large hourly "body shops" gives you coders whose quality tends to be lower, regardless if we are talking about an Indian firm or an American firm. Direct hires of independent individuals tend to be higher. But there is always individual variation.<p>You see people from India more, sure. There are more of them. Over a billion of them, to be precise. Anyone who dismisses a billion people as "always the same" is not being clever, they are being racist. And you know that, otherwise you wouldn't have pre-empted this response with "everyone who is ready to accept it."<p>Say that there are communication gaps to overcome. Say there are cultural differences. Say that those cultural differences change the assumed business expectations and the mechanisms by which people express their thoughts and opinions. Those things are all true. My recommendation to anyone who has an urge to dismiss an entire population is to instead get to know them: Step up and learn how your teammates think and work. It will make for a better team, better communication, and better results.
Okay, since you insist.<p>I'm not racist. I don't care about race. I do care about culture a lot. By culture I mean a set of "default behaviors" and values that people from said culture are more likely to exhibit. That's where my issues with Indians began and continue. Of course you are right that generalizing over 1+ billion people is a futile exercise. Intellectually, I agree. And yet, in my personal experience, certain behaviors and attitudes just keep coming up with a frequency that doesn't match any other group of people I have been interacting with. I live a rather international life. I interact with people from many, many cultures. I currently live in a culture that is completely alien to my own, and I love it. It's not a problem of a closed mind or some kind of supremacy thinking. I am free from that.<p>Specifically about Indians - I find that a great many of them prefer memorizing over thinking. In the IT consulting days of my career, I noticed that they seemed to have 4-5 solutions that they would apply to all problems. Whether the solution would fit the problem or solve it was secondary. If it did, great. If it didn't, well, that was someone else's problem. Half of my job was fixing stuff that an Indian "fixed" before me. The appearance of having fixed something was much more important than the actual fixing. It was all about appearances with them. While people in general seek recognition, I have never met another group of people who are so eager to lie and cover things up to gain some perceived short-term bump in status. It's not isolated to the work environment. You see, I suspected myself of perhaps being racist in the end, so I would challenge myself to befriend Indians if I met any - just to see. Maybe I was being judgmental and wrong? The last time I tried it, the Indian man I met kept kissing my ass so much I had to cut him off. Why did he do that? Based on what he was saying, he saw me as someone from an "upper caste" (he projected his ideals of a successful businessman onto me) and desperately wanted me to know how much I had done for him (I hadn't done anything other than having a few conversations about life and business in general). It took me a while to understand that all this excessive praise and ass kissing was an attempt to elevate himself by proximity to something great. Needless to say, I am nowhere near as great as he portrayed me to be. Later I also found that half the stuff he shared with me was made up to impress me.<p>Another feature of their culture is extreme pride. They will never stop talking about India, Indian culture, Indian food, etc. They expect you to praise it, to be in awe. If you aren't, they will pressure you to change your mind. Since working with them was a universally appalling experience, I wasn't impressed, so that came up a lot. You see this pride and attention seeking everywhere online. A normal person will say "Hello", "Good morning". An Indian will say "Good morning FROM INDIA". It must be mentioned, because it must be noticed and praised. It's just tiring. There is a reason why so many are waiting for country-based filters on Twitter. You wouldn't have to guess which countries are most upset about this.<p>I am certain that there are reasons and explanations for all of this and that there are many exceptions. As you have mentioned, there are so many of them, they can't all be like that. And fair enough. I just find all of this so tiring that I don't want to deal with them at all.
If 1 out of 100 is a smart and pleasant person, they are still surrounded by 99 that I don't want to deal with. It might be sad, but it is what it is.
[dead]
I’m working on a side project and AI is writing all the code. The code it produces is not good, and this comes from someone who has experience producing bad code. One thing I’m worried about is places like GitHub being full of AI code, which leads to AI being trained on AI code. It seems like this will lead to a downward spiral.
These LLMs are great for now, but they have to go by their training materials. And if people stop creating new ways to code, new languages, new coding patterns, etc, then the code LLMs produce will be stuck in 2026 forever.
Every day, Peter Naur’s paper “Programming as Theory Building” gets more relevant.<p>Link: <a href="https://gwern.net/doc/cs/algorithm/1985-naur.pdf" rel="nofollow">https://gwern.net/doc/cs/algorithm/1985-naur.pdf</a>
People are not perfect. I went to Ukraine just days before the invasion. Travel and hotels in Kiev had become extremely cheap. If you asked the Ukrainians about the possible invasion, "Not going to happen," everybody said. "Russia always talks aggressively, but never does anything."<p>They did not properly prepare and as a result lost 20% of their territory in days.<p>Days after that I was back in Austria and could not stop thinking about some of the people I spoke with being dead.<p>Since then I have also been in Dubai and Saudi Arabia as an entrepreneur and engineer. "What are you going to do when drones are used against your infrastructure?" If you followed the Russian war and the first Iranian strike, it was obvious that drones were going to be used against them. "Not going to happen," again.<p>They have lost tens of billions for lack of proper preparation. They could have been protected by spending just hundreds of millions of dollars over the years.<p>It is about humans, not AI.
> They did not properly prepare and as a result lost 20% of their territory in days.<p>Ukraine has been preparing since 2014. Without preparation there would be a Russian talking head in Kyiv right now.
According to [0] the military was basically doing under-the-radar preparations in the last few weeks before the attack, because the official narrative was that nothing's gonna happen.<p>> A small group of officers at HUR, Ukraine’s military intelligence agency, did begin quiet contingency planning in January, prompted by the US warnings and the agency’s own information, one HUR general recalled. Under the guise of a month-long training exercise, they rented several safe houses around Kyiv and took out large supplies of cash. After a month, in mid-February, the war had not yet started, so the “training” was prolonged for another month.<p>> The army commander-in-chief, Valerii Zaluzhnyi, was frustrated that Zelenskyy did not want to introduce martial law, which would have allowed him to reposition troops and prepare battle plans. “You’re about to fight Mike Tyson and the only fight you’ve had before is a pillow fight with your little brother. It’s a one-in-a-million chance and you need to be prepared,” he said.<p>> Without official sanction, Zaluzhnyi did what little planning he could. In mid-January, he and his wife moved from their ground-floor apartment into his official quarters inside the general staff compound, for security reasons and so he could work longer hours. In February, another general recalled, table-top exercises were held among the army’s top commanders to plan for various invasion scenarios. These included an attack on Kyiv and even one situation that was worse than what eventually transpired, in which the Russians seized a corridor along Ukraine’s western border to stop supplies coming in from allies. But without sanction from the top, these plans remained on paper only; any big movement of troops would be illegal and hard to disguise.<p>[0] <a href="https://www.theguardian.com/world/ng-interactive/2026/feb/20/a-war-foretold-cia-mi6-putin-ukraine-plans-russia" rel="nofollow">https://www.theguardian.com/world/ng-interactive/2026/feb/20...</a>
I'd say that Ukraine were very prepared for the invasion, though? They managed to survive for the first 2 weeks, leading to a long-term war. The Donbas war had already been going on for 8 years, and I don't think Ukrainians were under some illusion that those weren't Russians.
On the flip side, all around the world you have "leaders" talking about imaginary conflicts with foreign countries that we must spend billions (they have a friend who really should get the contract) to prepare for and if the other side (tm) gets in your whole family will be killed instantly.
In hindsight, it's easy to be smart. You picked two examples where somebody said "never gonna happen" and then it happened. How about the countless examples where somebody said the same and then the thing actually didn't happen?<p>Take the millions playing the lottery. To each of them, I can confidently say "you won't win, not gonna happen". For almost all of them I'll be right. There will be one who wins, where I was wrong, and they will say "see, told you so". That doesn't mean my prediction was wrong. It means you have a reporting bias.
> They did not properly prepare and as a result lost 20% of their territory in days.<p>They did, though. While nobody actually believed Putin would be dumb enough, the Ukrainian army was still, just in case, extremely busy preparing defences, organising stockpiles, and working out defensive tactics.
> While nobody actually believed Putin would be dumb enough<p>I'm not sure why you'd say nobody thought they would invade. To me it was clear in December the year before, when the Russian navy began sailing the long way around Europe, getting in the way of Irish fishermen, and it was confirmed days before the invasion when they had massed medical personnel and blood supplies on the front lines.
When the US warned, days before, of the imminent invasion, the broad reaction was still one of "the boy who cried wolf"
And I didn't understand why anyone thought the warning was wrong. Who sends their navy around Europe, collects 100k troops on the border, and ships blood reserves to the front for a training exercise?
It was clear when they captured Crimea.
> Since then I have also been in Dubai and Saudi Arabia as an entrepreneur and engineer.<p>Why would we listen to anything related to right or wrong from you, then, if you don't care?
At least here in America, we've been offshoring coding since at least the late 90s. Often, that code isn't so great (this is not an indictment of foreign workers in general but these off-shore operations are not always on the up-and-up).<p>And just like offshoring dev work, we may see the rebound effect when there's all kinds of poorly written LLM outputs in production and companies are running around trying to re-hire high quality devs to fix all these fires that they themselves started.
I remember the same complaints about junior engineers copy-pasting snippets of code from StackOverflow without understanding. Without the curiosity to understand, and without code review and mentorship from senior engineers, they never grew to the senior level. But that was only some of them; others used StackOverflow to learn, did not use the snippets without understanding them first and properly adapting them to their context, got good coaching in their teams, and have reached senior level from there. I see the same dynamic with LLMs, just with more opportunities for both: juniors can learn more by following up, and seniors can create tooling to enforce better architecture, test coverage, and fault resiliency.
I think you're missing the point. Nobody removed people thanks to their SO copy-paste skills. If anything, more folks were hired to troubleshoot and sort out any copy pasta blunders (since you actually need working software, at the end of the day).<p>With LLMs this is no longer true - the thing can vibe a great deal before anyone notices that they have 100,000 lines of code doing what a focused, human-reviewed and tested 10,000 lines can do. And as this goes on, it becomes increasingly difficult for anyone to actually dig into and fix things in the 100,000 without the help of LLMs (thus adding even more slop to the pile).
Excellent post. Two stand-out points are deskilling through the abolition of apprenticeship (or equivalent progression through the ranks and responsibilities), and loss of institutional knowledge, especially tacit knowledge stored in individual people. These are people problems more than they are technology problems. Without continuity of process and practice, stuff gets lost. Sometimes change really is progress (software safety and security practices, for example, have progressed over the past 50 years), but other times change is just churn, or choices driven by misaligned incentives which will bite later, as the article describes.
The author calls it out at the end but still spends a long time creating a false equivalency. Offshoring to humans in another country is not the same as having machines do work domestically. Really, the only thing that matters is:<p>"Maybe AI gets good enough, and the bet pays off. Maybe it doesn’t."<p>Of course, we are all wondering if AI will be good enough in 5 to 10 years such that you don't have to look at the code (at all). If so, then very few programmers will be needed, it seems. If not, it's possible that roughly the same number will be needed.<p>It seems oddly binary to me, since as soon as you need to understand <i>anything</i> about the code, you have to effectively onboard yourself to a foreign codebase and develop the needed context.
The thrust of the argument seems the wrong way around. It's roughly:<p>- Knowledge of how to make Fogbank etc. was lost when the people retired and/or died. AI will make things worse, especially for code.<p>In reality, if they'd used AI, the knowledge in it would still be there, as it doesn't retire or die or need paying a salary. I guess you have to keep a copy of the model file.<p>The article seems AI-written, with punchy sentences and mixed-up logic.
First of all, this is clearly AI-assisted writing (being charitable here).<p>And the premise makes no sense anyway. The only risk in forgetting how to make shells is that other countries keep making shells more efficiently. Non-western countries are not going to reject AI coding, nor are they going to make software more efficiently by hand.
Programmers in non-western countries may not be able to afford $100 per month for vibe coding.<p>They may keep taking the longer and harder route of a mixture of AI and hand coding.
The same applies to the Global South. It’s shocking to read tales of people spending hundreds of dollars monthly on coding agents; that’s wholly impossible for the vast majority of devs in South America, where even 20 dollars is hard to justify for most households. By economic factors alone, I bet there are a lot more people learning the hard skills in places where they can’t afford to be dependent on the tools.
They'll find a way. If it's not the Chipotle bot, the enormous volume of low-effort AI implementations will provide a free token layer.
>Non-western countries are not going to reject AI-coding<p>If they are smart, they will. And I think they are smart.
Just a small datapoint, but:<p>> Salesforce said it won’t hire more software engineers in 2025.<p>Some headline somewhere reported this, but Salesforce hired plenty of engineers (in the US at least) in 2025. One of them is a junior engineer on one of my scrum teams.
This post uses the war machine to demonstrate AI=bad.<p>I'm far removed from the conflict in Ukraine, but from the reporting it seems like they are making extremely good use of well understood, inexpensive technologies like drones with mundane munitions.<p>I'm sure Stinger missiles have their place on the battlefield there, but a $120K Stinger doesn't seem like a very good countermeasure against a few-thousand-dollar drone.<p>So, counterpoint: we also need to understand how to embrace the changing face of software.
Just like how "The West" offloaded most of its manufacturing to China, and people don't have to sew their own shoes.<p>I see this as a sign of increased productivity. The important software will still have a human-centered development team, but we don't need another dev team on, say, Tinder for dogs.
Reminds me of this post:<p><a href="https://berthub.eu/articles/posts/how-tech-loses-out/" rel="nofollow">https://berthub.eu/articles/posts/how-tech-loses-out/</a>
You could say COBOL has had this "problem" for 40 years also. That's why we need to constantly be inventing new ways of making things. The old ways are always forgotten over time.<p>If you REALLY need something long-forgotten, then you have to lazy-load it back into being at significant cost. That's the price of constant progress.
The point of the article is that sometimes the "old ways" really means "not particularly profitable or necessary in the short term" but the bill comes due in a crisis. The reason US/EU manufacturing was "the old ways" is that people could make easier money with financial engineering, an insight that extended all the way to Raytheon.<p>COBOL is a bad example, but higher-level languages vs. assembly is not. If you write a lot of C you really don't need to know assembly.... until you stumble across a weird gcc bug and have no clue where to look. If you write a lot of C# you don't really need to know anything about C... until your app is unusably slow because you were fuzzy on the whole stack / heap concept. Likewise with high-level SSGs and design frameworks when you don't know HTML/CSS fundamentals.<p>As the author says maybe AI is different. But with manufacturing we were absolutely confusing "comfortable development" with "progress." In Ukraine the bill came due, and the EU was not actually able to manufacture weapons on schedule. So people really should have read to the end of "building a C compiler with a team of Claudes":<p><pre><code> The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
</code></pre>
At least with Opus 4.6, a human cannot give up "the old ways" and embrace agentic development. The bill comes due. <a href="https://www.anthropic.com/engineering/building-c-compiler" rel="nofollow">https://www.anthropic.com/engineering/building-c-compiler</a>
But these are hard IT problems that a human programmer really struggles with as well. What % of software written is like that? Very, very low. Most software is dull and requires business vagueness to be translated into deterministic logic and interfaces; LLMs are pretty great at that as it is. If humans use their old ways to fix the complex problems and LLMs do the rest, we still only need a handful of those humans. For now.
"For now" is sort of the entire point of the article :)<p>Even in the Before Times, it was much cognitively cheaper to write code than it is to read someone else's code closely, or manage lots of independent code across a team, or to make a serious change to existing code. It's so much easier to just let everyone slap some slop on the pile and check off their user stories. I think it will take years to figure out exactly what the impact of LLMS on software is. But my hunch is that it'll do a lot of damage for incremental benefit.<p>With the sole exception of "LLMs are good at identifying C footguns," I have yet to see AI solve any real problems I've personally identified with the long-term development and maintenance of software. I only see them making things far worse in exchange for convenience. And I am not even slightly reassured by how often I've seen a GitHub project advertise thousands of test cases, then I read a sample of those test cases and 98% of them are either redundant or useless. Or the studies which suggest software engineers consistently overestimate the productivity benefits of AI, and psychologically are increasingly unable to handle manual programming. Or the chardet maintainer seemingly vibe-benchmarking his vibe-coded 7.0 rewrite when it was in reality a lot slower than the 6.0, and he's still digging through regression bugs. It feels like dozens of alarms are going off.<p><a href="https://en.wikipedia.org/wiki/The_Mythical_Man-Month" rel="nofollow">https://en.wikipedia.org/wiki/The_Mythical_Man-Month</a>
These are good points, and I am not overestimating; we are simply seeing the productivity boost in our company and the rise in profitability. We practice TDD, but only at the integration level, so we have tests upfront for the API and frontend, and the AI writes code until it works. SOTA models are simply good enough not to take<p>function add(a,b) // adds two numbers<p>test: add(1,2) == 3<p>and implement it as<p>function add(a,b) { return 3 }<p>So when you have enough tests (and we do), it will deliver quality. Having AI write the tests is mostly useless. But me writing the code is not necessarily better, and certainly not faster, for most cases our clients bring us.
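A minimal sketch of what that integration-level, tests-upfront loop can look like, assuming a Python/pytest stack (the calculator module and its add function are illustrative stand-ins, not our actual client code):<p><pre><code># test_add.py -- tests written by humans up front; the agent
# iterates on calculator.py until every case passes.
import pytest

from calculator import add  # hypothetical module the AI implements


@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),              # one case alone could be gamed by "return 3"
    (0, 0, 0),              # ...but several cases spread over the
    (-5, 5, 0),             # input space force an actual addition
    (10**9, 1, 10**9 + 1),
])
def test_add(a, b, expected):
    assert add(a, b) == expected
</code></pre>One test can be satisfied by a hardcoded constant; a handful of independent cases cannot, which is why having enough tests is what makes the loop deliver quality.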
Needn’t worry, such incompetencies are rooted out by the 8th or 9th round of interviews.
This is a great article; make sure to take a moment to read it and digest it.
I write all my own code and then run it by the LLM for analysis and suggestions. I'm probably not a 10x developer, but this has 10x'd my own progress. I make fewer missteps and build a deeper understanding of the problem space. I used to brace myself for a lengthy debugging session whenever testing a new section of code and now it tends to run as expected, more often than not. It may take a few years but I expect people who are using LLMs in the backseat will pull out ahead as the leaders of the future of software.
Funnily enough, this article is written with AI.
> The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.<p>Well then train them, instead of selecting 0.18% of applicants and calling it a day.<p>It's not some innate, immutable property - people can be taught even in adulthood.<p>Also it's not like they'll work for a year and switch jobs - not in the current market.
When you've run out of ideas, just portray "the west" as some monolith in decline-porn fan fiction clickbait.
I'm so tired of this; it's such a lazy take. "The West" is a giant, incongruous collection of wildly disparate nations with wildly different circumstances, policies, histories, and cultures.<p>It feels a lot like someone has a cursory understanding of American politics and thinks the US is somehow representative. It's not; it is an outlier by every statistical measure. If you want to understand the world, you need to start by forgetting everything you know about the US.
The article makes no sense, and starts with a very wrong perspective on things.<p>This kind of forgetting <i>is normal</i>. It's how things work when time and resources are finite. The only problem here is the belief that you can keep the capacity to do something without actively exercising it, and thus the expectation that you can "just" resume doing things after a long break without paying a cold-start cost.<p>But you can't, and there's no reason to be surprised. I bet the Pentagon and the EU weren't. They didn't need those Stingers and shells for decades and didn't expect to need them soon - but they knew they <i>could</i> get them if they really needed them, and that it was going to be costly.<p>I don't get why people think this is unusual or surprising, or somehow outrageous and proof of something about society or the "mindsets of elites" - other than of positive aspects like adaptability and resilience.<p>This is true at all scales. Your body and brain optimize aggressively, too. An individual saying "I need to warm up", or "I need to hit the gym a few times and then I'll be able to", or "yes, I can, but I haven't done it for years so I need an hour with a book/documentation..." - all of that is exactly the same as the EU going "yes, we can make artillery shells... though we haven't in a while, so we need some time and some millions of EUR to get our supply chain sorted out first".
> This kind of forgetting is normal<p>Just as shifts in power and the rise and fall of nations are normal.
Yes. Again, this will eventually happen to every one, <i>some way</i>. Of course nations always want to prevent this; it's part of the job of the government. But there's always a long tail of very low probability, very destructive threats. You can't possibly safeguard against them all. In fact, trying to do so is a sure way to trigger the fall of your nation (or at least your government), by draining your economy dry through paranoia.<p>The rational thing is to address a threat proportionally to its expected damage and probability of occurrence. When war is unlikely, you scale down your defense production; when it becomes more likely, you ramp it up - paying the cold-start cost is still much cheaper than paying for ongoing readiness. If scaling down your defense makes it more likely that you'll be attacked - well, it's the job of your intelligence and defense departments to track that. Nobody said it's a static system - it's a highly dynamic one; that's what makes geopolitics hard.
For that matter, a lot of human civilization has been about identifying things that were normal and making them rare. "Normal" infant mortality of 40%, famines, floods, history being lost, etc.<p>Anyway, when it comes to "this is normal" I think we should take care to distinguish between interpretations of:<p>1. "This specific case should not have taken certain people by surprise."<p>2. "This is a manifestation of a broader phenomenon."<p>3. "This is natural and therefore cannot or should not be solved." [Naturalistic fallacy.]
In the specific case discussed in the article and comments, I'm advocating for another interpretation:<p>4a. "If a process is unlikely to be needed any time soon, shutting it down and then paying cold-start costs if and when it's needed again, is better than keeping it going and wasting resources better used elsewhere", and<p>4b. "There's an infinitely long tail of low-probability problems, and you can't possibly afford to maintain advance readiness for any of them".<p>Also on the overall sentiment:<p>4c. "Paying a cold-start cost isn't a penalty or sign of bad planning. It's just a cost."
My thought as well. Imagine the cost if we kept active every production line of every obscure thing we haven't needed in decades. It's unreasonable to think that we should still be able to make these easily. It would hamper development of new things.
"institutional knowledge that exists nowhere in the codebase. Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or developing "<p>This article passes blame to AI for developers not learning because they are not being actively hired. You do not need to be hired to learn something. You need to learn something in order to be hired.
Most people don’t know shit before they get a job. All the most gifted developers I’ve hired or worked with were idiots on day one. Less dumb than everyone else, but I don’t care how many hackathons or coding competitions or open source projects you’ve contributed to; there’s so much education in dealing with real problems and real coworkers in a real work environment.
I agree. Most human developers have an exponential learning curve, but I also acknowledge that companies are mostly driven by shareholder pressure to maximize earning potential and are not too keen on funding that curve.<p>They just want quick results for their third-quarter report, and AI does that without much investment (for now, at least).
AI isn't making us forget, and we aren't in the process of forgetting. We forgot, past tense, in 2015.<p>The rise of coding bootcamps destroyed the historic knowledge and expertise of professional software developers. Waves and waves of people joined the tech workforce without putting in the years of experience required to learn how programming, and professional software development, should work. The result was a lot of really bad code, and a lot of reeeeeally bad product decisions.<p>Since 2018 I haven't met anyone who has read an entire technical manual for a framework, library, or tool that they use every day. By 2020 I was meeting engineering managers who said they wouldn't let engineers use a technology if they couldn't find StackOverflow snippets for it. I still meet "Senior" engineers who don't understand the most basic professional methods, like how Scrum, Agile, or Kanban actually work, and why you shouldn't just make things up as you go. Hell, the entire industry developed a collective psychosis preventing it from understanding the word "DevOps", because everyone switched entirely to learning by reading false blog posts written by clueless amateurs and upvoted in an echo chamber. If you never learn properly, and repeat misconceptions, you won't do good work.<p>We <i>need</i> a professional software development license, the same way the trades have licensed plumbers and electricians and framers. We need people to apprentice under a master engineer, so they are guided by people who know what to do and what not to do. And we need formal tests to ensure businesses don't hire clueless people who passed a two-week course to write critical software. Of course nobody <i>wants</i> to do this, and that's why it's so necessary.
Very frustrating to have an ‘article’ so heavily AI-written take up this much space and attention.<p>What’s really happening is that we are all forgetting how to think
> I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.<p>><p>> In defense, the substitute was the peace dividend. In software, it’s AI.
Well, I think it's a good sign that society forgot how to make weapons.
The sad thing is that they will need to learn it all over again.
The article speaks well, but the situation for coding is more severe.<p>Demand for shells goes away once they are no longer needed. Demand for code does not: customer need is always there.<p>Before forgetting how to code, the West will first get rounded up by its own Monsanto, voluntarily.
The worrying scenario is not that AI writes code. It's that companies stop creating the people who can tell whether the code is any good.
America still makes a ton of stuff. California is still one of the leading manufacturing locations in the world. California isn't #1 in any industry (outside of aerospace) but is in the top 10 for pretty much everything else (in the U.S.). It's been the world's #4-6 economy for the past two decades.
The defense analogy makes absolutely no sense. All the examples are of production shutdowns or reductions. Knowledge was lost because people retired and not replaced at all. None of it was lost to automation.<p>Automation is the exact opposite of tying knowledge to people. It's extracting knowledge from people and transferring it to a machine that can continue to produce the goods.<p>Yes, AI can lead to problems and some of these problems will be related to gaps in knowledge that was thought to be obsolete when it really wasn't. But that's a totally different problem on a totally different scale from what happened with defense production after the end of the cold war.<p>Nobody is shutting down or reducing software production. On the contrary, we're going to be making a lot more of it.
Exactly. The US hasn't forgotten how to manufacture, in fact a ton of manufacturing happens in the US. What's happened is that it's been automated. And automation is one of the better ways to extract knowledge from a person who will one day switch jobs, retire or pass away.
This is why a comprehensive computer science degree is necessary. Seeing and working only with the trees leads to destroying some forests, eventually.
Yes. Just like globalization created companies like TSMC, AI will do the same. Software engineers who don't rely on LLM code generators will have a moat because they can do it cheaply and sustainably.<p>Another reason is that LLMs train on the existing code we already know, so don't expect new programming languages or frameworks. This means that the software engineering skills that exist today will be relevant for a long time.
I am not so convinced by your last point, the one about new languages and frameworks. I think the cutoff date is closing in on the present. If models cannot easily become bigger, vendors will likely advertise using "up-to-date-ness". Maybe they will be merely a few days behind. Or bigger models will make use of smaller but more up-to-date models.<p>I think engineering skills will remain relevant due to taste and proper judgement. A model trained on everything and the kitchen sink probably does not have the fitting bias for the specific problems in my project. Accepting too much AI-generated code without steering the ship will result in some drift of taste and ultimately produce a mediocre project, like one done by people without good domain knowledge and without good taste. It might even be a business in the short term, but it lacks the long-term excellence that sets projects with good judgement apart from the common rabble.
> I think the cutoff date is closing in on our current now. If models cannot easily become bigger, they will likely advertise using "up-to-date-ness". Maybe they will be merely a few days behind. Or bigger models will make use of smaller but more up-to-date models<p>But they will still rely on assembly, C, Rust, Linux, HTML, TCP/IP... Doesn't matter how up to date they are, they rely on existing code they have been trained on, they can't just create new languages without the training data.
"the west" ?<p>You mean the world?<p>Deepseek was being glazed here, Im sure chinese programmers use it like CC
The west did not "forget", it merely decided to participate in the "commoditization of everything" which has the exact outcomes we expect and easily observe today: skilled labor becomes unskilled labor in service to the machinery of capital.
The Fogbank example is the most chilling part. It's not just that they lost the people — they lost the ability to know what they didn't know. Nobody could even write down what was missing because the knowledge was never formalized in the first place.<p>The junior hiring collapse compounds this. Senior engineers develop judgment partly by watching juniors make mistakes and correcting them. Remove that loop and you don't just lose future seniors — you quietly degrade the current ones.<p>The 0.18% recruiting conversion rate mentioned here tracks with what I see in compliance and security engineering too. "Can you tell when the AI is confidently wrong?" is now the most important interview question, and almost nobody can answer it well.
The junior hiring collapse is all so bizarre. I graduated recently and my career prospects are jarringly limited.<p>I thought I'd go back for a Masters/PhD but then Trump mercurially defunded lots of STEM grad programs. Ngl, I found myself stuck. Zero job openings, zero PhD program openings. It's all so frustrating.
I don’t see it as so dire.<p>Software developers have been learning what they needed to know to do the job the whole time. That’s pretty much the job description.<p>What you need to know has changed a lot recently. Like always.<p>> The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.<p>That’s certainly not true. I’d take a hard look at my hiring process if it was performing this inefficiently.
Speak for yourself. I now dare to code much harder problems, and learning is bliss. No more having to sit down and dig needle-in-haystack through horrible documentation or random Stack Overflow posts.<p>LLMs are a magnificent tool if you use them correctly. They enable deep work like nothing before.<p>The problem is the education system focused on passivity (obedience), memorization, and standardized testing. And worst of all, aiming for the lowest common denominator. So most people are mentally lazy and go for the easy win, almost cheating. You get cheating in school and in interviews, and vibe coders.<p>But it's not the only way to use LLMs.<p>Similarly, in Wikipedia you can spend hours reading banal pop-slop content or instead spend that time reading amazing articles about history, literature, arts, and science.
Perhaps the approach to, and leverage from, using AI is different for someone who's been active on HN for two decades, and junior devs who've been brought up on iPhones in the flawed school system you're describing?<p>As TFA says, the problem is that accumulating knowledge takes time and effort, and the AI hype and expectations on LLM-assisted coding helps with rationalizing ever more short-sighted decisions that squander or hinder that process.
> Speak for yourself.<p>Even if you are the absolute unicorn who gets paid to "code much harder problems" and "learning", the rest of the industry exists to deliver actual products and services.<p>So unless you nurture some type of <a href="https://xkcd.com/208/" rel="nofollow">https://xkcd.com/208/</a> fantasy, this is not just about <i>you</i>. The industry as a whole needs to find a way to work with LLMs without automating programming away entirely, and the industry as a whole needs to find a way to ensure that newcomers are able to be productive even if code-generation tools are taken away from them.
> in Wikipedia you can spend hours reading banal pop-slop content or instead spend that time reading amazing articles about history, literature, arts, and science.<p>I'm not saying you're personally doing anything wrong, but there's a parallel here, when smart and curious people read <i>articles about</i> history and literature and art and science, rather than engaging directly with the real thing.<p>Or then the next level down, where <i>creating</i> amazing work in all of those domains depends on enough "slack" in the system for people to pursue deep work that will not be immediately profitable.<p>Do you see where I'm going with that? We (and I'm very much including myself: here I am on HN, instead of reading something more substantial) skim the (Wikipedia) surface, instead of diving truly deep. AIs (right now) are the ultimate surface-skimmers, and our fascination with and growing reliance on them reflects something in our current surface-skimming cultural mindset.
The west also "forgot" how to calculate by hand, because it invented calculators and computers.<p>The west is not merely forgetting how to code. It is creating systems that can code. It isn't standing still. It is progressing to a higher level of production.
This raises a good point. The analogy in the article implies that eventually there will again be a need to know how to write code at a large scale and nobody will know how to do it. I don’t think the analogy holds if you think of AI as a sort of orchestration and abstraction layer which at the end of the day, all software development tools are.<p>But I do think there’s another thing going on quietly in corporate America currently that <i>will</i> have major ramifications for companies that have prioritized using AI and that is a loss of technical excellence in general.<p>I can’t put my finger on it but sometime around 2023 or so there was a noticeable falloff in technical competence at companies I work with because the higher ups went all in on a generative AI future. No longer were they investing in training new hires and having rigorous certification standards. Instead people were encouraged to use AI tools to answer questions and would regularly pass off the output to more knowledgeable workers for refinement. These people clearly had no idea whether what they were sending out was accurate or not but it looked and felt like real work.<p>I think there will be a consolidation across the tech industry and AI will not be a differentiator and only those who are actually competent will succeed but right now AI is allowing a lot of incompetence to go undetected throughout a lot of organizations.
I wonder if the real problem is short-term thinking in culture and incentivised by markets. By optimising next quarter's profits over investing in long-term growth and capability, things like this happen.
I don’t know. Partly true.
I came to web development when the low-level things were already solved: frameworks, ORMs, OSs, databases.
I don’t know SQL or C++ well. But I can create a system, a value, based on the abstractions. Everyone told me: "Ruslan, you don’t know SQL, what a shame!" Well, I do not have problems because of it, and never did.<p>Probably we are going to be fine with the AI abstraction too. People will use it, get stuck on problems, dig deeper, learn, improve, same as we did with frameworks and their source code.
I don't agree with "the west forgot how to make things". It moved supply chains for cheap consumer goods to Asia, but in the B2B space a lot of things are manufactured in Europe: companies like Bosch, Volkswagen, ASML, Alstom and Airbus are cranking out extremely complicated machines that last many years in demanding environments. It's just a different level of value-add vs. low-cost electronics (for instance).
I remember Covid and the supply chain crisis that unfolded in Europe and the West. Most of the companies you’ve mentioned weren’t cranking out anything during that time, as all of them realised that "low cost electronics" are not always readily available, <i>and that we forgot how to make them</i> or no longer have the capacity to produce them in significant numbers ourselves. A lot of basic electronic components were not available during that time, and we still haven’t fully grasped the complexities of our supply chains and where they begin.<p>I also remember that EEs for a while stopped using the term "jellybean parts". Turns out that most jellybeans are produced in Asia.
Basically, we forgot that human neural nets must also be trained.
Space programmes have this issue too - everything had to be relearnt and un-obsoleted for the Artemis moonshots.
I feel sad that everything I read now has to pass through my "is this LLM slop?" filter, and if it gets caught, the content loses focus and the worthless puzzle of truth takes over.<p>So there was this:
"I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end. While Raytheon was struggling to restart production from forty-year-old blueprints, the US was shipping thousands of Stingers to Ukraine. RTX CEO Greg Hayes: ten months of war burned through thirteen years’ worth of Stinger production. I’ve seen this pattern before. It’s happening in my industry right now."<p>The filter flashed the warning on the telltale signs and I stopped reading. Now I've got the puzzle I don't want to do. Did someone trying to argue against "AI assisted" coding use an LLM to author that argument?<p>But this is HN, I can also just move on to the next story.
If the system treats you as a number, you should become a mercenary.<p>I love these articles that all the coders read but none of the management.<p>If possible, be a mercenary and put a high number on your expertise, so we can solve this management blind spot faster.<p>If you can't, let your life/work's passion be "not starving to death", and try to change it on the politics side.
Chinese models run around $2 to $8 per million tokens; Claude is 10x that cost. When will the bean counters move to Chinese models? The USA bans those models for national security reasons; Anthropic, OpenAI, Meta, and X all move to China, where the models will be cheaper.
I disagree with the premise - interesting, but I interpret the same fact pattern differently.<p>The history of technology is the replacement of manual processes with automated ones.<p>Consider a very basic process: checkout at a restaurant.<p>Writing the price of each item on a sheet of paper, manually adding them and writing the total was replaced with typing in the prices, and eventually with just pushing the button for the item. Paper still exists for jotting down your order, but within seconds of leaving the table it’s transitioned to the computer.<p>This has enabled lots of desirable advances - speed, accuracy, new payment rails, and increasingly, elimination of the server at checkout - you tap a credit card on a tabletop device.<p>Did we “forget” how to do checkout? No. We purposely changed it.<p>But if the internet connection goes down, or the backend server powering the cash register app goes down, there is an atrophied and not regularly exercised skill set (maybe not even trained, IDK) that has to be implemented on the fly, and it’s slow and frustrating for everyone.<p>Businesses don’t exercise (or perhaps even train) this process because it’s just not needed enough to warrant the cost.<p>Military procurement of weapons systems is hardly the place to point to as a technological tradition. There are lots of cases where no one pays the money to keep a production process in place; the reasons are all related to shortsighted “cost savings” or failing to anticipate changing needs.<p>With coding today, we are seeing the same kind of shift in priorities as in my restaurant example. Having humans write code in the 2020 (pre-GPT) tradition was extremely inefficient in terms of time-from-idea-to-implementation.<p>We’ve found a new way to do the mundane part of that task (the mechanics of translating spec to implementation).<p>We are figuring out how to do that while preserving quality (and a lot of it is learning how to specify appropriately).<p>Will we “forget” how to “build” code?<p>No, but the skills to generate source code by hand will atrophy just as the skills to draw blueprints by hand atrophied with the advent of CAD.<p>Will we find examples where someone prematurely optimized away knowledge of a skill or process, incorrectly thinking it was no longer needed? Of course.<p>But the productivity gains we get will be so great on average that no one will go back to doing things the old way.<p>There will be old-timers and hobbyists who will preserve some of that knowledge; for most it will just be a curiosity.
Everyone is taught at a young age how to do basic addition and multiplication. That's all check out requires. People are not taught at a young age how Rust lifetimes work or how to write human maintainable code.<p>I agree, as with everything in 2026, the reality lands somewhere in the middle of the discourse online. But pretending this is in practice anything like the check out example is wrong.
Though I do believe you are making them in good faith, I find those comparisons do not hold.<p>CAD still requires you know what to do, and without CAD you can still draw blueprints by hand because you know what the result should be. Checkout is basic arithmetic you can do on a paper or even your personal phone. In both cases it is clear what the process is and what the output should be, and it doesn’t replace knowledge and training and certification.<p>With coding, none of that is true. By and large, there is a trend of people who don’t know what they’re doing shitting out software, or people who should know better not verifying the very flawed output they get. That is already having negative consequences in people’s lives.
The point you seem to be missing is that focusing only on optimization makes us all fragile to system shocks.<p>> Businesses don’t exercise (or perhaps even train) this process because it’s just not needed enough to warrant the cost.<p>Until a crisis hits. Covid and supply chain failures. The Iran war and the Strait of Hormuz. A prolonged war in Europe with no production pipeline available. Banks collapsing after unsustainable overleveraging in supposedly "safe" mortgages.<p>For every optimization and cost-saving measure that is deployed, there should be a backup plan in place. MBA types and "technologists" keep missing this. What is the backup plan for the case where most economic activity is built on software produced by businesses that overleveraged on LLMs for code generation?
> Optimized for minimum cost with zero margin for surge. On paper, efficient. In practice, one bad day away from collapse.<p>I'm going to steal that one and add it to Stross': "Efficiency is the reciprocal of resilience."
Yes, that is one key point that resonated with me. The author did a great job of putting these recurring concepts into their own words.<p>The other point that really resonated was something I read before, along the lines of: we think that once humanity learns something, that knowledge stays and we build on it. But it's not true; knowledge is lost all the time. We need to actively work to keep knowledge alive.<p>That's why libraries and the Internet Archive are so important. Wikipedia, too.
There was a time when companies had terrible development practices and could forget how to build, test, and deploy software, but is anyone seeing that now? We have much better development practices nowadays.<p>It doesn’t seem much like the defense industry's problem.
This still happens. Lots of my career has been figuring out what code is actually running in prod, and determining if it even works.
IMHO, it's a people thing. People developed better practices, talked about it at conferences, maybe left the company. As a result the knowledge spread. On the other hand, if the places where a skilled individual can work and hone their skills disappear, the knowledge becomes scarce, it cannot spread anymore, and it will vanish. If you only program with AI and 5 people do the work of 100, then you end up in such a scenario.
Do you think this is a tooling problem or more about incentives and how engineers are trained now?
Just like we forgot how to shoe horses, drive elevators, and mill corn!
How do you become a senior engineer if no one hires you as a junior anymore?
> The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.<p>I see a talent pipeline collapse in the next 5 years. "Software engineering is over, coding is a solved problem," as chanted by semi-literate media and the AI grifters' marketing departments, will further scare away the allocation of human capital to software engineering, with software engineering eventually commanding a 3x rise in salaries due to the resource shortage.
But - albeit briefly - a lot of value for the shareholders has been created
I have forgotten all about Apache Wicket - will this cause the downfall of western civilization?
Is this written by a real person though?
We're forgetting how to write too, apparently, and with that, forgetting how to think for ourselves.
If it's any consolation, this isn't unique to "the West," AI programming has completely taken over in the PRC as well.
The west hasn’t known how to write code for the 20 years I have been doing it, at least at major .com brands.<p>It’s an 85/15 rule. These big companies hire hundreds, possibly thousands, of developers, but most of them cannot code. Some of them struggle to write emails. About 15% of those people provide 85% of the value.<p>Here is where it all went wrong. The goal of software, the only goal, is automation. That means eliminating human labor. The goal of these big companies is hiring, which is mostly the opposite of eliminating labor. That conflict results in people who cannot do the jobs they are hired to perform, and whose goal is to retain employment in preference to automating anything.<p>Worse still, you can’t talk about it, since 85% of the people doing that work find this very subject completely hostile.<p><i>It is difficult to get a man to understand something, when his salary depends on his not understanding it.</i> Upton Sinclair.
> Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate.<p>It's minor, but this is just wrong. If you're going to hire 4 candidates, there could be 2,253 perfectly qualified candidates even if only 0.18% get hired. The conversion rate is meaningless; it mostly tells us how many jobs were on offer. There is no way that the skills this fellow wanted were so rare and difficult that only 1 in 500 candidates could possibly handle the job. Humans even at the 1-in-20 mark are pretty competent if you're willing to train them, and legitimate geniuses crop up at around 1 in 200.
He writes 2,253 candidates and 2,069 were disqualified. 184 were qualified, so 1 in 12 was considered competent.
One thing you can't really rule out is American ingenuity once it decides to do something.<p>What America did to make shale oil viable, so quickly, is one example.
Thankfully, with code and coding agents, the tacit/tribal knowledge always lives in the codebase itself, unlike with atoms-based manufacturing processes.
This is some convoluted BS built on the premise that wars need to make sense, economically or otherwise. No, wars do not need to make sense. If a person, a dictator or a president, unilaterally starts a war that forfeits the lives of both the dictator's (possibly fabricated) enemies and its own people, that person is knowingly committing murder. Logically, such a person should be handled with at least as much prejudice as a lone wolf that opens fire on a crowd. So we need to fix our legal systems to be better at preventing wars, not our economic systems to be better at fighting them.
I just can't get over the incredible irony of so many of these "AI is bad for you, mmkay" articles being LLM-generated.<p>If the author sincerely believes the thesis that AI makes you vulnerable / dumb, they are either incredibly hypocritical or, more likely, just cynical and trying to get traffic to their website. And you're not getting back the time you spent reading this and arguing with it.
We've forgotten how to do most basic things. Roads are paved terribly, food quality is equally gross, our colleges are diploma mills, homes are built like crap... Everything has steadily been going in the wrong direction my entire life. It feels like we're almost in a dark age where basic skills from a generation ago are being forgotten.
We’ve been automating every single industry that we touched for decades without a second thought, offering tepid responses like “it’s capitalism” or “business is business” when called out on it.<p>But now that the time has come for us to automate and change, we’re all up in arms and using ridiculous arguments like this post to fight it.<p>The hypocrisy is mind-blowing.
Is this article even worth clicking on? The headline makes it sound like yet another pearl-clutching article extrapolating some trend to the extreme in divergence with reality.<p>AI has been an effective coding tool for, what, 2 years at most? We've collectively forgotten all of our skills in those 2 years? Really?
A leadership and management problem, exacerbated by the Chicago school.<p>Good, knowledgeable employees are not fungible. The in-house culture behind the engineering takes an entire generation to build.<p>The winner-take-all MBA class of the 1980s to the 2000s, and the congressional leadership developed during that era, are squarely at fault, and their policies need to be replaced.
Did we forget how to make things?
I mean we stopped making some things, but US manufacturing output is higher than ever<p><a href="https://en.wikipedia.org/wiki/Manufacturing_in_the_United_States" rel="nofollow">https://en.wikipedia.org/wiki/Manufacturing_in_the_United_St...</a>
I'm not forgetting how to code. They're not gonna get me.<p>Did I forget everyone's phone numbers when cell phones came out? Yes.<p>But this is different. Coding is my passion. I was doing it before I got paid to do it and I'll be doing it after they no longer pay people to do it anymore.
I think comments on such posts have a bimodal distribution. On one end there are people who see the utility of AI models for programming and are generally eager to see more capable models and ways of using them. On the other there are people who see AI destroying programming and have no idea how AI could change to be a force for good.<p>I have an idea about what might be the difference between the groups. I think for the latter group the code is an important part of the goal. They see software as ends rather than means. Not entirely, of course.<p>And the first group considers the artifacts that the software produces to be the goal. So as long as AI-written software is capable of producing a valuable artifact, they are willing and eager to go with it. And AI does that.<p>If the result of my code is the finetuning of a neural network, I don't really care how it happened. I can benchmark it afterwards and know whether the code that AI wrote for this purpose was good or not. I can inspect the code, investigate it, pinpoint ideas I don't like, suggest ideas to try that I believe could give better results. I can restart, or try doing the same thing a few times in parallel with different harnesses and models. All in service of the result, which is not code.<p>If you have a program that needs to do something and are willing to try AI to write it, think foremost about how you can rephrase the problem so that the output of the AI-written program becomes an artifact that can be independently verified - how to turn desired behavior into an artifact to evaluate.
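One hedged illustration of that artifact-first mindset (my sketch, not the workflow described above): if the AI writes, say, a sort routine, you can judge the artifact by differential testing against a trusted oracle on random inputs, without reading a line of the generated code.<p><pre><code># verify_artifact.py -- judge AI-written code by its outputs, not its source.
import random

def reference_sort(xs):
    # trusted oracle: the built-in sort
    return sorted(xs)

def ai_sort(xs):
    # stand-in for the AI-generated implementation under test
    out = list(xs)
    out.sort()
    return out

def verify(trials=1000):
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        assert ai_sort(xs) == reference_sort(xs), f"mismatch on {xs}"
    print("all", trials, "trials passed")

if __name__ == "__main__":
    verify()
</code></pre>The finetuning example above has the same shape: benchmark the trained network, not the training script.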
Frankly, I find the attitude towards AI coding here on HN to be both disappointing and a bit disgusting. Not long ago, places like this where software developers gathered were full of texts about how important it was to be able to reason about your code, how tech debt crept into your projects, how skillful you had to be to write good software, and various smart algorithmic tricks to squeeze more performance out of your hardware.<p>Now? Code quality is suddenly outdated and uninteresting. Everything is about agentic coding, harnesses, paying hundreds of dollars to Anthropic to let their LLM do the coding for you, or perhaps using a 128 GB Mac to run a local model. Do you know your code base? Doesn't matter; if there are any bugs in the future, Claude will fix them! Tokenmaxxing is the new paradigm; who cares about the end result as long as it runs for now and passes all the (AI-written) tests!<p>But don't suggest these people shouldn't get $100k+ salaries; after all, they're still "software engineers" in their minds - they're running the agent orchestration harness in the terminal, and not everyone anywhere in the world could do that! They're special and deserve to be well compensated for their hard vibe-coding work!<p>This industry is rotting from the inside.
This is why I advocate for making everything as simple as possible. The more complex the tech, the more likely it will be lost through the passing of time.<p>It's kind of insane how much knowledge a human being needs to have to build certain technologies and it's taken for granted.<p>AI might make the knowledge easier to acquire but it's still a lot of knowledge that people have to internalize.
An odd anecdote: I completed high school in 2017, and my home country required us to use mathematical tables, not calculators, to find logs and sines for our version of SAT math.<p>I got my highest-paying numerical programming contract (in the US) just because I knew, from that high school math table experience, how to use LUTs to calculate a lot of useful stuff, e.g. quarter squares.<p>Modernization is great and all. However, it's disappointing that lots of new programmers are oblivious to the fundamentals.
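For readers who never met the trick: a quarter-square table turns multiplication into two lookups and a subtraction, because a*b = floor((a+b)^2/4) - floor((a-b)^2/4) holds exactly for integers. A small sketch (my illustration, not the actual contract code):<p><pre><code># quarter_squares.py -- multiply via a LUT of floor(n*n/4), the same
# trick printed mathematical tables (and old 8-bit code) relied on.
TABLE_MAX = 512
QS = [n * n // 4 for n in range(TABLE_MAX)]  # precomputed once

def mul(a, b):
    # valid for non-negative integers with a + b < TABLE_MAX
    if a < b:
        a, b = b, a          # ensure a - b is non-negative
    return QS[a + b] - QS[a - b]

# sanity check against ordinary multiplication
assert all(mul(a, b) == a * b for a in range(200) for b in range(200))
</code></pre>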
Hope we don't forget humanity one day!
"A METR randomized controlled trial found that experienced developers using AI coding tools actually took 19% longer on real-world open source tasks. Before starting, they predicted AI would make them 24% faster. The gap between prediction and reality was 43 percentage points."<p>This is weird, but does seem a common result.<p>-> AI generates a ton of code fast, but then the human takes a long time to review. Every time the prompt changes. The AI takes a few minutes to generate code that the human will take hour to review.<p>The reviewing is taking longer than if human just did the code. So why is it so difficult to go back to coding instead of prompting.?
You are not using as much of your brain (solving problems, thinking, etc.), so you end up in a cycle of becoming stupider and stupider the more you use AI. Eventually your prompts get worse and worse, and then you start asking "why is it not working as well as it was two weeks ago?"
It's a great story, and a nicely written piece.<p>But civilisations have always forgotten things and then had to re-engineer them. We only recently recreated Roman-equivalent concrete; knowledge required to create the Saturn V rockets had to be re-engineered; we can't recreate medieval stained glass exactly, or Viking Ulfberht swords; we would struggle to create Betamax tape today.<p>Many of the examples I found (as expected) relate to military or commercially sensitive technology that did not get written down (for obvious reasons).<p>It also reminded me of reading Thomas Thwaites' "The Toaster Project: Or a Heroic Attempt to Build a Simple Electric Appliance from Scratch", where to make a smelter from scratch he relied on a 450-year-old book ("De re metallica" by Georgius Agricola), as well as a friendly metallurgist.<p>We already lost the widespread ability to write assembler in an artisanal way. Now we have AI, we will also be lazy about how we write individual bits of artisanal code. So what? Yes, it will cost more (in time and money) when we need to re-engineer, but how much would it cost to keep alive all the knowledge and skills we might possibly need in the future?<p>We had better make sure we write down and preserve the recorded data though :)
Why do millions need to code? When we industrialized, millions lost the knowledge of how to sew a garment. Why is it so bad to have a few specialized workers producing self-building solutions?<p>I mean, beyond the obvious Hacker News bias.<p>If you like it, nobody will take it away from you as a hobby. But the artisanal aspect of coding as a production mechanism is dying, and it was about time.
While the Fogbank story is a funny anecdote, I don't see it as a fitting example of atrophied skills. It's like writing a clean implementation of some software that just doesn't match the legacy version - until you realize that the legacy version had an unnoticed bug that made it behave the way it did.
Isn't that the point of technological development? People forgot, for example, how to weave on a handloom, or how to produce and maintain all the parts for watermills. And wooden sailing ships - top mastery of handling and engineering, developed over millennia, gone.<p>As the saying goes, the future is already here - it's just distributed non-uniformly. So somebody is still, and will for some time be, sailing, manufacturing things, and writing code.
We have both forgotten how to make things and decided we can make more profit letting someone else make everything for every market. We have become a generation fixated on maximizing profit. There is logic there, though, as the cost of accessing the ability to make things is prohibitively expensive. As someone who makes open hardware with a nod to the environment and reusability, you cannot justify, or often even find, options more locally sourced than China.<p>Coding is different, though. Coding doesn't have a cost barrier; it has an ability barrier. I think we will lose a lot of people who were never passionate about programming and perhaps get back to a happy equilibrium. AI is only production-ready if you have someone who understands software development. AI will improve speed to market if you have the right team, but it doesn't remove the need for someone to learn to code. You will of course end up with startups using AI exclusively, but they will be the ones who end up with major security breaches or simply cannot scale as the AI heads in the wrong direction. Tbh, that's probably a positive, as it weeds out the startups that are focused on buzzwords for funding and not on product.
No matter what happens to the viability of software development as a career, I will always care about the craft, as I have for the past twenty years and change. The imperative to adopt LLMs in situations where they benefit neither me nor my work is what is driving me away. I have to agree with latexr; the people who seem to benefit most from the current moment are those who see software as a means to an end, without much concern for quality, longevity, or customer experience.<p>Why is speed-to-market such an important metric? I do not understand the need to mimic the largest players in the industry, nor do I see any particularly profound long-term benefit to first-mover advantage.
> I think we will lose a lot of people who were never passionate about programming<p>Anecdotally, what I'm seeing right now is the opposite. People who don't care about programming are joining, while those who do care are getting tired of the bullshit and leaving. The good programmers are the ones leaving; the hacks are extremely happy to use LLMs.<p>When shit hits the fan, there won't be many people left to clean it up.
Idk, if I look at major open source software, it looks to me like the West still has a huge number of brilliant contributors.
The same “forgetting pattern” could be said of assembly, hardware, combustion cars, radio - heck, even making fire.<p>There will always be specialists who can really debug stuff. Mechanics, etc. Time moves on, and we need to move with it.<p>I’m amazed at this “end-of-world” crap. People use AI to write this shit, which makes it even crazier.
Talking of military stuff: it's not really a problem. No one can keep unneeded capacity in existence; it's not even possible if no one consumes the product. Make sufficient buffer stocks to buy time to re-learn the process when needed, and that's it. There is no realistic way it could work differently. The alternative would be: maintain the entire Cold War era production capacity and keep it idle, or working at 5% load, just to be able to ramp up when needed? That would mean keeping almost all of the Cold War budgets flowing. It wasn't going to happen - and of course, in Russia it also didn't happen, and couldn't.<p>At the end of the day, Russia burnt through its entire Soviet stocks in roughly 2-2.5 years, while the US spent a very small proportion of its own, and Europe maybe about half. Now consumption on both sides is similar, and the expense of feeding that machine on the Western side is almost invisibly small. Nothing bad happened.
Just look at what MS is trying to get rid of:<p><a href="https://news.ycombinator.com/item?id=47881805">https://news.ycombinator.com/item?id=47881805</a>
The West started to forget how to code long before AI. First it was the work visas to bring programmers in, then it was outsourcing. At this point, I'm not even sure AI is doing more harm than good in this department, as it might bring some jobs back to the "West" if it turns out to be cheaper than outsourcing.<p>Outsourcing shed the more trivial jobs while trying to keep the key positions at home, but increasingly it started to lose the key positions too. It's possible AI will make the key positions harder to justify outsourcing... but who knows... maybe not.
> The defense industry thought peace would last forever, too.<p>Not really, since they are always pushing for more wars.
I can't not write the tired comment about how ridiculous it is to criticize AI and then use AI to write your article. It's tired, but so is this writing style.<p>As for the actual problem, I fear it can't be solved by warning people; the pain will need to be felt. The system we live in, basically free-market capitalism, cannot do anything except local optimization. Maybe it's for the best, I don't know. The alternative of top-down planning wouldn't have this problem, but it would have other problems. I work for a mid-size, somewhat luxury brand, and the major goal right now is cost cutting and AI for efficiency everywhere, instead of using it to create better products or better ways to reach our customers. When I think about who will buy our luxury products if all jobs are optimized out of existence, I don't have an answer. But again, I think the pain will need to be felt before we change course.
> After spending an additional $69 million and years of reverse engineering, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original had contained an unintentional impurity that was critical to its function.<p>Same thing that happened to the unfortunate Dr. Jekyll!
This will go the way of COBOL, with a few people who still have the expert-level understanding needed to refactor old code without causing outages or service disruptions.<p>We'll see, but right now I see developers hooked on their agents 24/7, and in the future we will face a de-skilling problem in which clean code, best practices, security, and avoiding NIH syndrome all get flushed down the toilet.
Exactly - as they say, everyone has to learn to code.
I wish this article wasn’t AI slop. It wasn’t X, it was Y.
When you offshore or automate away the hands-on knowledge, you don't just lose the workers; you lose the entire institutional memory, and no amount of money can buy that back overnight.
> The West Forgot How to Make Things. Now It's Forgetting How to Code<p>Can we stop repeating this nonsense headline, please? We did not stop manufacturing things.<p>Manufacturing is a huge industry in the West. <a href="https://en.wikipedia.org/wiki/Manufacturing_in_the_United_States" rel="nofollow">https://en.wikipedia.org/wiki/Manufacturing_in_the_United_St...</a><p>The US manufacturing sector is the biggest it has ever been. Exports are at all-time record highs. The only thing that declined about manufacturing is the jobs: we build far more than we ever did, but with far fewer people.<p>What we did do is decide that basic items aren't worth it. Our capacity is limited, our labor pool is limited, expenses are high; it doesn't make sense to make trinkets when we can make complex, high-precision parts and devices.<p>But no, we did not forget how to make things. We chose to use our capacity in a smarter way.
That's why I am looking forward to being a 70-year-old demanding tons of money for doing the things I came to love and was cut off from by AI.<p>What a bright future!<p>But the rest is a big no from my side.<p>"In hindsight" - South Park, please take over.<p>What if we had kept producing unused weapons over the last 20 years? "Waste of money", "old tech", "useless" - a dilemma.<p>Also, the generalization "the West" is awfully misleading.<p>Let's say everyone is suffering from military dementia in the same way. Who do you think has an easier time recovering, the USA or Europe? Europe relied, and still relies or freeloads, on the USA, especially in military affairs.<p>As you wrote: some veterans teach young guns how to build and handle cruise missiles as if it were an exciting time with the Boy Scouts.<p>Germany? "Never again! Demilitarize Germany." Decades of hatred towards the USA were pretty much summed up in the slur "Ami go home!", a phrase used to protest US military bases in Germany - and then, when most of them finally left, it was all just fun and games (losers).<p>So the USA has some sort of infrastructure and intellectual property to recover, and it never stopped treasuring it as part of the country's history: Veterans Day, the Unknown Soldier, Arlington - Hegseth did a great job stopping the decline here.<p>Meanwhile in Europe: you couldn't have a holdout in secrecy. Some Enquete commission would investigate, addresses would be leaked, and people doxed.<p>Have a look at the representatives of the German Army: overweight nice guys. Sorry to say, but I think there is something wrong with this picture.<p>Europe has nothing to restart. It never did in the first place. Many tend to forget that the US provided massive supplies to all allies during WW2. Russia would have been wiped out if it weren't for US logistics and money. After the war there was a joke told by survivors of the Eastern Front: the first Sherman got shot on the Eastern Front, not in the West.<p>Europe was always on life support. France's military forces outnumbered Germany's at the start of WW2, but they were tired, and instead of fighting they built a wall, so to speak. The Netherlands and Denmark were taken without any resistance.<p>And it's the same for programming. How many European companies dominate globally like FAANG? Exactly. None. Thirty years of the Internet, and it's getting lonely at the top for the US.<p>"The West"? Nope.<p>During the '80s, while Chuck E. Cheese was all the rage, in Germany you got massively socially ostracised for showing an interest in computers. Playing electronic handhelds put you on notice with teachers, who demanded correction by the school administration - true stories.<p>Another one: what do all the FAANG-like companies have in common? Their founders and top managers have a background in CS. What do European managers have in common? They haven't heard of CS so far.<p>Europe is a mess. The US may be facing a cold start, but it gets its shit done.<p>Germany killed off its industrial sector. Its energy producers as well. Germany is doing what Morgenthau had in mind but what was taken off the table: no more wars and weapons, just farmers and horses.<p>The USA is safe in every regard. It's not that something has been lost. That happens - or why else do we know so little about Rome?<p>You have to distinguish recovering from losing. If you were once at the top, at least you know how to get there, while others in most cases never will.<p>These are different abilities: conserving knowledge and rebuilding it.
The USA needs to reactivate, while Europe needs to build from the ground up without any starting point - without money, energy, moral support, nothing.<p>The USA is already the winner here. And this pattern keeps repeating. In 250 years, the USA has seen kingdoms rise and fall; it is the only constant there is.<p>Treasure it. You are in a safe spot despite all the dire circumstances. A blessing in disguise.
>Denis Stetskov<p>?<p>Putin's propagandist, or just a useful idiot?
They "forgot" how to make them? The greatest superpower in the world? Isn't it obvious that "Stingers" were a giant hoax and we never went to Afghanistan? Maybe the US needs to go "back" there just to prove we can.. /s
> I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end.<p>With all due respect, many European taxpayers help pay for Ukraine. I am not disagreeing with the premise that the West is killing itself via systematic recessions - Trump invading Iran leading to inflation, as an example - and a lot of things are going on that show a ton of incompetence in both the USA and the EU. But at the same time, I get question marks in my eyes when this criticism comes from a country that receives money from others. That money could instead go to making EU countries more competitive, for instance. I am not saying this should necessarily be the case, mind you; I fully understand the nature of Putin's imperialism. But we really need to consider all the factors when it comes to strategic mistakes in production - and that includes constantly taking on debt. There are always a few who benefit in war, just as they benefit from taxpayer subsidies (inside and outside as well).
As anecdotal evidence: I code way more now with agents, because I have an entity with a vast amount of knowledge about pretty much everything, and I have the creativity to use it well.
A rather bad premise in the article.
1.) Germany, Italy, and Eastern Europe are heavily industrial regions. The author forgets that defence is not the only industry.
2.) The author doesn't cite any source showing that Chinese developers don't use AI.
I don’t know, but the evidence shows that software engineering is not that deep of an art.<p>People come and go at rates that would not be sustainable in any manufacturing business.
Yes, businesses tend to believe that.<p>No, every time people switch, knowledge gets lost and code quality degrades.<p>In part I blame accounting rules: justifying investment is easier than justifying maintenance.
Interesting take. We are not going to talk about Office, Windows, Adobe, or Autodesk products here. Nor the Linux kernel.<p>Even classified-ads or e-commerce platforms such as Gumroad and Shopify are complex enough that a single person cannot master them end to end. The domain is huge and takes a lot of time to master.
Click/rage bait?<p>The opening paragraph is ridiculous. The FIM-92 Stinger is obsolete. It was replaced by the FGM-148 Javelin. DACH (Germany, Austria, Switzerland) didn't forget how to make things. They are still world class in manufacturing. (Northern Italy is also economically part of that manufacturing mega-hub.)<p>There are plenty of NLAWs (much cheaper than the Javelin, and only <i>slightly</i> less capable) in EU/NATO stocks to satisfy Ukraine's needs against Russian heavily armed main battle tanks. For everything else, you can use one or two suicide drones to kill anything with a motor.<p>And now to give credit where credit is due:<p>Looking at his (assumed) LinkedIn profile: <a href="https://www.linkedin.com/in/denjkestetskov/" rel="nofollow">https://www.linkedin.com/in/denjkestetskov/</a><p>It looks like he was educated in Ukraine, so he is likely a Ukrainian national. If I were Ukrainian, then I <i>too</i> would be publishing rage bait like this in an attempt to pressure allies into providing more funding, weapons, and gear.<p>As a final suggestion, the writer could visually spice up his blog post with one of my all-time favourite military photos from Wiki: <a href="https://commons.wikimedia.org/wiki/File%3AFIM-92_Stinger_USMC.JPG" rel="nofollow">https://commons.wikimedia.org/wiki/File%3AFIM-92_Stinger_USM...</a>
So you published this comment with an anti-Ukrainian spin, and just <i>2 minutes</i> after posting, your comment is already at the top of comment rankings? I hope HN mods follow inauthentic upvote / comment behaviour on this site; this looks fishy.
The Stinger is an anti-air weapon; the Javelin is an anti-tank weapon.
Stingers use that gas-cooled spinny thingy. They're not FLIR-based like the 9X is.