This agent stuff is really making me lose respect for our industry<p>All the years of discussing programming/security best practices<p>Then cut to 2026 and suddenly it's like we just collectively decided software quality doesn't matter, determinism is going out the window, and it's becoming standard practice to have bots on our local PCs constantly running unknown shell commands
We didn't collectively decide; we've had this forced down our throats, applying a novel tool to every imaginable situation because the execs got antsy about being left behind.<p>A truly absurd amount of capital was deployed, which triggered a cascade of reactions by the people in charge of capital at other places. They are extremely anxious that everything will change under their feet, and that if they don't start using as much of it as humanly possible right about now, they die.<p>That's it.<p>The tools have definitely found some use, there's more to learn about how else they can be used, and maybe over time smart people will settle on ways to wrangle them well. The messaging from the execs, though, is not that; it is "you'll be measured on how much you use this, we don't know for what or how, it's for you to figure out, but don't dare to not use it".<p>I do understand their anxiety: their job is to not let their companies die, and to make as much money as they can in the process; a seemingly major shift in the foundations of their orgs will cause fear.<p>But we have not collectively decided that it was safe, and good, to run rampant with these tools without caring for all that was learnt since software was invented...
This wasn't really forced on us.<p>The whole industry is like a fashion show and has been for a long time. This is just exceptionally stupid compared to the moderately stupid things before. I see it more as everyone wearing pink feathered chicken suits because it's in fashion. If you don't wear a pink feathered chicken suit then you're a luddite scumbag who doesn't deserve the respect of your peers.<p>However, some of us still have enough self-respect not to be seen dead in a pink feathered chicken suit. I mean, I'm still pissed off at half the other stuff we do in the industry. I haven't even really looked at the chicken suits yet.
If you work in a tech company with >5k employees, it's extremely likely you've been forced to wear the pink feathered chicken suit, and told not to complain about the pink feathered chicken suit because it is the inevitable future and no one will ever again wear anything that doesn't look like it. Also, we are watching every straggler not in a pink feathered chicken suit; put yours on or leave the building.
Force is seeping in. Managements are expecting that LLM-driven productivity enhancers will be deployed and give broad-based boosts. More of them expect it each week. Supposedly cheaper than people. Those that aren't expecting it yet might be soon.
When your performance review includes your facility and productivity with LLM tools, you are being forced.
my assessment of the situation: "we've spent so much money on AI's promise of 5x, 10x returns that now we have to earn it back by foisting the burden on <i>developers</i> to make up the gains by working harder, at least enough to recoup what the execs poured into the boondoggle".<p>"Hey developers, we spent $x million on Claude, which promised 7x returns, so YOU better make it 7x more efficient so we don't look bad".
The "whole industry." What, like 5 companies?<p>This is a "monopolized sector." They absolutely forced it on you. In most cases, sure, not directly, but their influence is the only driving force. Absent this no one would have jumped on this flimsy bandwagon.
We had it forced down our throats by CEOs and CTOs who thought that it would improve our productivity. Nobody forced it down <i>their</i> throats, though. Instead, they were seduced. They went willingly.
In one gig I was on, a consultant showed up and started saying that the platform was not good because it didn't have any machine learning (this was before the AI buzzwords). So the executives asked me when I could fix the platform to have machine learning in it. They didn't have an answer when I simply asked "machine learning to do what?", and my explanation of what machine learning is or can be used for fell on deaf ears. So yeah, definitely agree on seduced and then went willingly and blindly.
no. openclaw wasn't forced by CEOs. it was forced by the same people who thought there was money to be made in crypto, then ICOs, then NFTs. a bunch of scammers that bring negative value to the world
And they make money. A scammer is the President of the United States.<p>At a certain point, why blame people for trying to keep up? Why are scammers so successful? It seems to me we have a systemic failure at a societal level. Until we are honest about that it will only get worse. Until then, maybe some rogue LLM botching some critical system will be the wake-up call we need.<p>I am not sure what to make of critiques that seem to rest on notions of a small population of scammers preying upon the doe-eyed public. I think the situation is a bit closer to Carlin: garbage in, garbage out. A critique that holds up quite excellently in this AI age.
> At a certain point why blame people for trying to keep up?<p>No.
Western society is a shell of its former glory. It did not last long, but there was an age where man was capable of greatness. The early internet was kinda the last stretch of this short run; then money corrupted it. The underlying issue stems from abandoning cultural education as a Western value. Instead, we've opted to dispense raw ideology devoid of any thinking mechanism, the very mechanism we now seek so dearly to integrate into LLMs so that they can be more like us. This sloppening manifested in our lives through every medium.
We witnessed it when animation shifted to 3D, giving us slop and poorly designed characters and stories. We witnessed it when video games all adopted the same game engines, the same look and feel, and the same lack of narrative stakes, slopping ideology down players’ throats: no nuance, no wit, just mind-numbing dogma that punishes anyone who dares to criticize. Perhaps most damaging was Netflix's infiltration of our households, which has accelerated our collective intellectual atrophy through relentless ideologically charged content masquerading as entertainment. Meanwhile, our children's minds are being shaped not by family or tradition but by the algorithms of TikTok and Snapchat. The past decade and a half hasn't just prepared LLMs to replicate human abilities; it has systematically stripped away human complexity, reshaping us into predictable patterns, not to raise LLMs to our level, but to reduce us to theirs, until the distinction no longer matters.
Our industry has never been serious about security. We all download and run unvetted code via package managers every day. At least now the insanity is out in the open. We won't change until Skynet fires off the nukes.
I keep getting so depressed thinking about the inevitable. Quite simply, humans can't scale or iteratively improve. We still need to eat, we still need to sleep, we can basically only think on one thread at a time, and we take 20 years to get to our prime, which is a fleeting moment, while most of our lifespan is spent in a state of declining capability. An AI humanoid robot from the near future doesn't need to eat or sleep, can work 24/7, can compute thousands of processes in parallel, and is the same fungible unit as any other humanoid robot, forever, with some maintenance. Why justify sustaining an inefficient human in that modern world? It is more profitable for the company to have humans go extinct and maximize planetary resource use to the fullest extent possible.<p>Seems we are digging our graves as a species and don't even realize it. I mean, Sam Altman is already saying that it taking 20 years to train a human is a Big Problem.
I don't think it will be cost effective to build humanoid robots to do most tangible work. Why assemble an expensive masterpiece of servomotors, chips, plastic and steel, when billions of desperate humans are <i>right there</i> and only cost 2.5 meals a day and a small shelter?<p>Of course, <i>intelligence</i> will be a solved problem so "20 years of training" won't be needed. You'll just be the hardware. AI will tell you to pick up that box, place it on that conveyor belt, place the autowelder at that seam and wait for the green light, turn the wrench to install bolt B in part C. If you don't wish to, or no longer can, so be it. Another, hungrier human will replace you. After all more are made every day, and they are capable of doing this type of labor by age 10 or so. And what else would they do with their time, go to school and get a completely useless education?<p>All of this will of course be in service of our technofeudal lords, the owner class. Some robots <i>will</i> be needed for heavy lifting and for the jobs that are too sensitive to trust a human in, like personal security and strikebreaking. Can't risk trusting a serf for those tasks. But for most physical grunt work humans will be cheaper. Shockingly cheap, when they have no other options.<p>Did that make you less depressed?
> I don't think it will be cost effective to build humanoid robots to do most tangible work. Why assemble an expensive masterpiece of servomotors, chips, plastic and steel, when billions of desperate humans are right there and only cost 2.5 meals a day and a small shelter<p>If all you have to offer people is this kind of sad fucking "2.5 meals a day and a small shelter" while you live on yachts and eat like a king, eventually they <i>will</i> gang up and kill you
> eventually they will gang up and kill you<p>I’m looking around the world and thinking this “eventually” isn’t happening very fast.<p>Not an optimistic thought.
I keep wondering when the West will get tired of having kings, and they keep surprising me. I assume humanity gets to The Culture eventually, but I'm starting to doubt that Americans will be leading the way on that front.<p>But maybe Altman's AI will break out and do it for us.
I sure hope you are right.
>and don't even realize it.<p>Oh, many of us realize it, but doing anything about Moloch is much, much harder.
Isn't the problem that Altman and his peers are calling the shots here? We could use robots to work less and spend more time enjoying life, but we can only imagine being crushed under a boot and starving.
Surely we can accelerate human training. Just install a brain implant which administers an electric shock whenever the subject deviates from the official training plan.
To what end though? Are the robots going to take over and trade busy work amongst themselves forever? What would that accomplish?
> Why justify sustaining an inefficient human in that modern world?<p>I should not need to justify my existence; that is the problem with being led by psychopaths.<p>Twenty years to train humans for what? A tech job? That is not why we get an education. It is not my purpose to be a cog in the wheel for some psychotic billionaire.
Yes, and the software industry has never been truly serious about security either: it's implied table stakes rather than an advertised product feature.<p>Also, customers outsource the risk to their vendors, so as long as there's someone to sue, nobody worries about doing it right. Ship it now and pay the lawyers later.
This is never getting to the Skynet-launching-the-nukes stage. It's not that clever and never will be.<p>Humans will kill us with it by amplifying their worst characteristics.<p>Thus we'll die of a pandemic because some idiot LLM'ed up positive-looking virology data when they were too lazy to verify something. Everyone will trust it because they don't really care as long as it looks about right.
> We won't change until Skynet fires off the nukes.<p>And then we won't need to, because at that point it will be too late.
It has never been serious about security, quality, and performance. Only new sloppy features. And now everyone is bragging on LinkedIn about how fast they create more slop: "Look, CC generated thousands of lines of code for me! Approve and merge!"
Agents are providing employees the long-overdue benefit that limited liability companies have long enjoyed: gambling with the upside for themselves and the downsides for other people.
I’ve never had respect for the industry as a whole, only individuals within it. There has been a serious lack of rigor and professionalism in software engineering for as long as I’ve been a part of it.
I think it might be because we (or at least I) used to associate insecure actions with people, not computers. Computers should know better, right? Recently, I spotted that Opus 4.6 found config files for one of its tools and gave itself access to my whole filesystem. Similarly, Gemini CLI will rewrite itself if you let it.
> Then cut to 2026 and suddenly its like we just collectively decided software quality doesn't matter<p>Is this new to people? I figured this out when I first entered the industry. The messages have never been particularly subtle.
It’s a nightmare… the problem is it’s far too easy for people to set these agents up without understanding the security implications.<p>We’ve covered so many issues already on our blog (grith.ai)
The number of wasted hours spent talking about code quality and patterns has to be astronomical.
Don't worry, AI read all the transcripts and blogs and emails and has at least ingested some of that ethos into its outputs.<p>I taught myself and wrote a small SaaS in 2017. It pays well enough to support me.<p>I'm building a new one using AI this year. I promise you, it's better built and more secure than my previous, still-in-use SaaS.
The frustrating part is watching all the careful thinking about reliability and failure modes get thrown out the window the second something new gets hyped. It's not even that people disagree with the principles, they just stop applying them.
There's nothing "collectively" about it. I don't know what industry you work in, but in mine it's a top down mandate to use AI everywhere, tracked with KPIs, from the CEO down, and supported and pressured by companies like Amazon and MS.<p>We're the dummies that have to run around picking up dookies like a new puppy in the house.
> cut to 2026 and suddenly its like we just collectively decided software quality doesn't matter<p>I saw the sea change in 2008 when quality process got replaced with velocity and testing tasks. I've watched everything from Experian and health record data leaks to Windows 11 since that change. Software quality hasn't mattered for a long time.
The media isn’t helping. This wasn’t a “rogue AI”. It was a system that was given permission by a human operator.<p>We don’t say “a rogue plane killed 300 people today when it crashed into a mountain”.<p>The only difference in the AI case is that some people are attempting to shift blame for their incompetence onto a computer system, and the media is going along with it because it increases clicks.
People salivate so hard at the thought of the high level of automation promised that they're willing to do away with privacy altogether and live in Data Communism.<p>My thinking is, this will increase the demand for backup and other resilience solutions.
I think it's batshit crazy. That's why I wrote yoloAI, so I could sandbox it up properly and control EXACTLY what comes out of that sandbox, diff style.<p><a href="https://github.com/kstenerud/yoloai" rel="nofollow">https://github.com/kstenerud/yoloai</a><p>I can't go back anymore. Going back to a non-sandboxed Claude feels like going back to a non-adblocked browser.
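Not the actual yoloAI internals, just a rough sketch of the sandbox-plus-diff idea it's built around: copy the repo somewhere disposable, let the agent loose on the copy, and reduce everything it did to a diff you review before it touches your real working tree. The agent command below is a hypothetical placeholder.

    # Minimal sketch of the sandbox-plus-diff idea; the agent command is hypothetical.
    import shutil, subprocess, tempfile
    from pathlib import Path

    def run_agent_sandboxed(repo: Path, agent_cmd: list[str]) -> str:
        with tempfile.TemporaryDirectory() as tmp:
            sandbox = Path(tmp) / "work"
            shutil.copytree(repo, sandbox)  # the agent only ever sees this copy
            subprocess.run(agent_cmd, cwd=sandbox, check=True)
            # Everything the agent did, reduced to a reviewable patch.
            diff = subprocess.run(["git", "diff"], cwd=sandbox,
                                  capture_output=True, text=True, check=True)
            return diff.stdout

    # patch = run_agent_sandboxed(Path("."), ["some-agent", "-p", "fix the failing test"])
    # print(patch)  # read it, then `git apply` only if you approve

The point isn't the ten lines of Python; it's that nothing the agent does escapes the sandbox except a diff you've read.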
How can you respect an industry that doesn't respect itself?
Meta has never in its entire existence been known for caring about software quality.
Turns out all of the frenzy of the ZIRP era is piddling compared to what happens when ZIRP is taken away.
The whole agent ecosystem is a ridiculous shitshow. All of this because you need to ASAP find something believable to sell your overinflated, bullshit machine to the masses. Otherwise the bubble will burst.