One fascinating thing about the whole AI phenomenon is how incredibly hostile it is to _standards_. Whether something works properly, or is ethical, or is true, no longer matters at all; all that matters is "pls use our AI".<p>Microsoft spent literal decades rehabilitating their reputation. And then set fire to the whole thing in an offering to their robot gods.<p>And it's not just them. There was a time that Google cared deeply about UX. Now, on macOS Google remaps CMD-G in Google Docs to launch some LLM bullshit (EDIT: huh, they may have fixed this; it was definitely doing it a couple of weeks ago), because, after all, it has only had a standard universal meaning on macOS for about three decades, no big deal.
It's a complete takeover by technically incompetent management that feels it can finally execute its ideas to the fullest instead of relying on those pesky swengs with their obstructions, complaints and problems. We'll soon get the management utopia everywhere.
Principal engineer balks at bad UX when the PM should know better (it's their job)<p>2023: Ah well I guess we can't do it<p>2025: you're fired. Hey kid we hired two weeks ago, implement bad idea please
To be fair, it was already done by bad managers long before.
I saw a trend of UX/UI designers coming in with practices which I knew were wrong. But they insisted. E.g. hijacking browser-native controls.<p>I'll never know whether they were passing along some manager/PM commandments or were just incompetent.
To be fair, the native browser controls have had too many quirks and features for UX/UI consistency.<p>Corporate needs their Brand™ look precisely as specified in their expensive Style Guide. IBM wouldn't want the Google vibes of Android Material Design TextFields, I imagine.<p>Scratch beneath the visuals, and starker technical differences appear.<p>Safari on iOS has (used to have?) a 350ms debounce delay on every tap / click, in case you want to do a multitouch gesture.<p>Before 2015, JavaScript (frameworks) were the only way to reduce this arbitrary delay to user input; that's when Apple finally released a native API for it.<p><a href="https://webkit.org/blog/5610/more-responsive-tapping-on-ios/" rel="nofollow">https://webkit.org/blog/5610/more-responsive-tapping-on-ios/</a>
With some resistance. Now they do it far more often.
That's how I got my first opportunity 20 years ago
It wasn’t AI that brought us Apple’s gray on slightly-lighter-gray UI standards, nor the 10,000,000 ••• menus that have infested every webapp in the past 10 years as an alternative to thoughtful UI design. We humans made everything shitty before we made AI.
> Apple’s gray on slightly-lighter-gray UI standards<p>It's a tangential point, but I turned on System Settings -> Accessibility -> Display -> Increase Contrast (the on/off option, not Display Contrast) and now at least the windows are outlined sharply.
Good thing we trained our fortune teller calculators on all that historic shittiness!
Maybe, but at least the 10,000,000 options used to be there, instead of it being deemed that they are not to be used by those pesky users. And now they are not just hidden. They are simply not there.
Guns and bombs also didn't create war. But they did make it way more lethal.
100%<p>This AI boom is not a boom because its good for developers or users. It's a boom because it's a management dream; the promise of pumping up growth while reducing expensive workforce is simply too good for them to not throw decades of platitudes and "best practices" out the window. When people point out where AI fails, they're not seeing past the end of their nose. They don't realize they're not the real customers. It is leadership with millions in buying power who are the customers, and they're the same ones who only ever cared about managing the perception of success and growth; your clean code and user-focused development practices didn't matter to them back then and they certainly don't matter to them at all now. When it comes to an absolute state of garbage products and software, we still ain't seen nothin' yet.
Bring on the feature creep and epic downtime
On the other hand, no one to place the blame on if management does it themselves.
Perennial HN trope: all bad tech evolutions are management's fault. Engineers are flawless paragons of technical purity.
Hard to blame the engineer when the engineer gets fired for not implementing management's whims. As much as I'd like to hold people accountable and say they should just accept getting fired instead of compromising the ideals, the truth is I've got a family now and if they paid me enough I'd do the same.
Aren't you guys glad there are no programmers gatekeeping programming with their "morals" and "etiquette"? Any marketer with an LLM can update the programming tool now. AI really levels the playing field and it's time for pesky programmers to get off their high horse, don't you think? :)
> all that matters is "pls use our AI".<p>If you look at the staggering amounts of money that have been put into the tech, this attitude becomes practically mandatory, in an inhuman sense. They have to get ROI, at literally any cost. And it shows.
> Microsoft spent literal decades rehabilitating their reputation. And then set fire to the whole thing in an offering to their robot gods.<p>Probably they thought the new generations forgot about how awful they were in the not so distant past.<p>I think they set it all on fire because greed got the better of them again.
> greed<p>Is a greed/not greed scale really useful to discuss company behaviors?<p>I wanted to say I get what you mean, but even thinking about the company I root for the most, I can't think of a point where they're not driven by their desire to make a lot more money.<p>If your point is that there are good and bad ways to seek money, I'm not sure it's properly encompassed by "greed", which I interpret as the intensity of a desire, not its nature or validity.<p>To you "greed" might mean something else, but is it properly conveyed?
Approximately everybody would like more money.<p><i>Greedy</i> people put the desire for more money above the welfare of the business, themselves, and others. Greedy people literally put their desire for more personal wealth above the very lives of others.<p>Greed/not greed is a very fair way of putting it. One can operate a business that requires profit without wanting to destroy everyone and everything that stands in the way of more money.
I think there's one more factor that is crucially important — greedy people lack long-term vision, and care a lot more about money <i>now</i> than they do about potentially much more money in the future.<p>I suppose it's kind of interesting that you could measure greed as an unusually high discount rate for the time value of money?
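That last point can be made concrete with a toy present-value calculation. This is just a sketch; the rates and amounts are invented purely for illustration:

```python
def present_value(future_amount, annual_rate, years):
    """Value today of money received `years` from now, discounted at `annual_rate`."""
    return future_amount / (1 + annual_rate) ** years

# A patient actor discounting at 5%/year still values $1M ten years out
# at roughly $614k; a "greedy" actor discounting at 50%/year values the
# same future $1M at only about $17k -- so they grab whatever is on the
# table right now instead.
patient = present_value(1_000_000, 0.05, 10)
greedy = present_value(1_000_000, 0.50, 10)
print(round(patient), round(greedy))  # → 613913 17342
```

Under this framing, "greed" shows up as nothing more than the discount rate: the higher it is, the less any future payoff matters.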
> Greedy people put the desire for more money above the welfare of the business<p>In my experience, it's much simpler.<p>People are greedy if they make things I want cost more.
The Seven Deadly Sins provide an interesting perspective to human psychology even in modern times. Greed / avarice is defined as wanting more than you need.
> Probably they thought the new generations forgot about how awful they were in the not so distant past.<p>More likely, never learned about it in the first place, save a few whispers. Who's got time to go digging in deep, when there's 'experiments to run, research to be done' ...<p>> I think they set it all on fire because greed got the better of them again.<p>new blood, new greed
Whoever at Microsoft is making these decisions and oversees all this, yeeeesh
AI psychosis. Divide between rich and poor. They live in their own golden bubbles and there are no sanity checks. The workers are so far removed from the realm of competence and influence that it's just CEOs and VPs trying to pump the next 6 months' stock value regardless of anything.<p>It's like the zeitgeist has decided the only thing that matters is their own farts and how they don't smell.
The industry spent decades preaching power savings to us, with Microsoft's Settings application lecturing us about power saving and the update app scheduling updates for peaks in renewable generation, only to... waste gigawatts by forcing Copilot on us everywhere.<p>If Microsoft were consistent, which it isn't, power saving mode would disable AI features.
While working on stuff that requires an SSO login, I noticed that it makes what I considered hostile, anti-user choices in defaulting to tracking pieces of information I didn't want to track and hadn't mentioned.<p>Fair enough that I didn't explicitly instruct it to make more pro-user choices; it just seemed to assume that slurping as much information as possible into the backend was the default intention. I wasted a few more tokens iterating to remove things, but it was IMO interesting enough that I finally submitted feedback about what I imagine is an interesting training problem.
Yeah, even .NET is now plagued with AI: see the AI dashboard in Aspire, AI components in Blazor, the .NET Upgrade Assistant now being an AI agent, ...<p>VSCode hasn't been rebranded as VS Copilot yet by pure luck.
Has always been the case. Corporations hate standards and would rather lock you in except where market forces prevent them. It was a miracle we have something like the internet - and the government had to create it.<p>Microsoft's decade-long PR rehabilitation has worked wonders for them.
> Microsoft spent literal decades rehabilitating their reputation.<p>Mmm... I think I missed that part.
Not everyone bought it, but they campaigned hard...and now see it was all just a dog and pony show. The hold-outs were right...
Not really. A company is not one monolithic entity with a single will. Far more plausible than "it was all a trick" is that for a time, people were in charge who really were trying to improve things, and now, those people have been replaced with others who are willing to burn it all down.
Before 2010 or so, “serious” internet developers wouldn’t touch Microsoft stuff — Microsoft was for office memos and poorly structured spreadsheets and that was it.<p>So yeah, Azure being a real option at the highest levels of internet-scale operations is a turnaround from where they were.
That’s not an accurate take. Microsoft has had a monopoly on the PC desktop OS. Anyone writing applications for users was targeting Windows and using Microsoft. To call most of these developers “not serious” is quite an overstatement. This includes all PC game developers, DAW, CAD, Adobe…?<p>Azure expanded the Microsoft franchise, and provides another prong to their whole integration story, just like cloud AD services and online Office 365 provide another way to stay integrated into their ecosystem.<p>Yeah, they needed to work on their image somewhat, but their image never negatively impacted them.
> Anyone writing applications for users was targeting Windows and using Microsoft.<p>Developers as users, sure. MSFT was common. Developers as responsible for infrastructure, MSFT anything was considered a huge risk and unreliable in the 90s.<p>Granted, my memory retains only a general narrative...I remember a shift by 2002ish when I started to see windows servers as perfectly fine machines for closet/under-the-table infra you didn't care too much about anyway. By 2004 they were moving out of the closet, so to speak. Then those machines became more important because more was being done with them and were considered "just as good" as any other OS. Developers that had experience, with their MSFT certs in hand, were cheaper too. It was a slow progression to eat into the corporate marketshare. By 2006 virtual machines were ubiquitous and you could run MSFT virtualized. Many companies do that by default today for workspace controls. I have never and would never choose to use MSFT products (including Azure) for business critical infra. MSFT acquiring Github was great for them, and the death of it for me. I'm probably an old outlier, but I 'member.
> PC game developers, DAW, CAD, Adobe<p>Right, those are all desktop applications. Microsoft has long owned that market.<p>I said “internet developers” meaning web sites, servers, apps, etc. Microsoft’s early offerings in that space, plus all the pain they inflicted with Internet Explorer, is what took years to overcome.
As an MS dev at the time: MS missed The Web and Mobile, thinking Office would be enough. Everything since is catch-up.<p>On the one hand MS was a web pioneer — asynchronous web calls and ActiveX technologies that were surprisingly capable — but these were peripheral to their main goals.<p>Instead of MS extending their unified development platform outwards, something .Net promised to enable, effectively the opposite happened. .Net chased Java, but Java was being pushed out by Ruby on Rails. .Net web starts chasing RoR, but then Node is getting cool. .Net web starts chasing Node, and that effort splits .Net into uhhhhh ‘Framework’ uhhh ‘Standard’ (ie old-and-working) and .Net Core (what a container-based web-stack VM needs to look like).<p>The problem at that point, IMO/IME, is that Node is JavaScript, and those awesome server-side geniuses dump too-easy tooling while recreating every problem of every stack ever (ie LeftPad, loosey-goosey versioning, and NPM being a crypto hacker's wet dream). The .Net that started as Enterprise Server Stuff is now kinda sorta ‘whatever’ about versioning, stability, roadmaps, and platform planning. Everything from data access to GUI was churned needlessly for almost a decade, and everyone using that platform looks and feels like an a-hole because huge swaths of MS tech are abandonware, resulting in perpetual rewrites of recent work and silos of competence.<p>No one can explain what framework to use to write a basic Windows application anymore… Office uses React, and Windows does too… the fat cats who made MS into M$ knew better than that; the M$ that chased cloud growth and cut staff for stock price has never cared.
They went from demonizing open source software to buying GitHub, releasing their own open source software (including VSCode), and hosting Linux on Azure. Huge changes! But of course it ends up being another Embrace and Extend move by the masters of that tactic
Hackernews used to experience a collective paroxysm of joy every time a new Visual Studio Code dropped. There definitely was a pervasive belief that the Nadella era ushered in a cuddly new Microsoft.
I remember a time, way back, around 2010 maybe?, where Microsoft was referred to as "M$" in this place and generally perceived as an evil corporation o.O
Most likely more a difference of venue. I saw lots of that on Slashdot. Less of it on Digg or Reddit. Virtually none of it here, but it seems to be making a resurgence in the form of "Macroslop" and related epithets
Lol, yep! That actually goes way back before 2010. It probably started in the early 90's, at least
Both things can be true. VSCode did help us get to the point where I can use it on Linux, macOS, or Windows and have a lot of interoperability. It's the typical cycle. All it takes is a couple of people getting their hands on managing the code to turn anything into garbage.
Remember “Microsoft <3 Linux”
Gmail on the web is so shitty, I literally switched over to another provider. I don't know how anyone can use it as their webmail client. You can't make sense of longer mail threads with forwards, answers etc. in between; it becomes an unreadable hot mess.
They invested billions. They're scared.
> They invested billions. They're scared.<p>They could have shipped a good product with all those billions they spent in reinventing Clippy.<p>I have this feeling that their bet was that all the Microsoft shops will jump on Copilot without looking at alternatives, so they did not really have to make it as good as their competition.
"good" is not important for software anymore, at least in the regular consumer market. Companies have discovered that people will just continue to accept subpar, unfinished and sometimes even partially-functioning software.
"accept" is such a weird word for this, though I don't know of a better one in English.<p>What we seem to be experiencing is a combination of monopoly power/abuse, and regulatory/government/court capture to keep it in place.
if internet comments are any kind of indication (which they very well may not be) I've seen lots of people complaining about win11 but remaining because they can't give up playing their favorite online hero shooter. That's acceptance to me
"tolerate" would be the better word to describe it
Agree that acceptance is irrelevant. No one has a choice, because all the “competitors” in any given niche (phone, cloud platform, PC operating system) are executing the same play. Enshittify, extract profit from ~suckers~ customers, ignore any churn because with the limited choices available there will be new suckers to replace them.<p>We accept this the same way we accept the air quality wherever we are.<p>Yes, Linux is there, but consider the barriers to the average person of truly adopting a strict Free Software life. Consider how many things in life now simply demand for you to have an Android or iOS phone. Things as simple as parking.
Well, now no one has to convince anyone to shell out for upgrades because everything is a subscription. What worked perfectly well can now get replaced out from under you overnight
Making good products simply no longer seems to be on the agenda for most of these companies.
> They could have shipped a good product with all those billions they spent in reinventing Clippy.<p>I really liked Copilot - it gave you a lot of tokens across a bunch of models and their agentic features were perfectly serviceable, alongside it being really affordable! And then they moved over to usage based billing and it no longer has that advantage over the alternatives: <a href="https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/" rel="nofollow">https://github.blog/news-insights/company-news/github-copilo...</a><p>I still think they have a really good AI tab autocomplete implementation and it's nice to be able to use that in VSC without swapping to another editor altogether... but that's not enough to really make me pay for their subscription. I could probably move to Zed altogether if I had a problem with VSC itself, though at least the base editor doesn't feel like it has been enshittified and I quite like it, all things considered.
Microsoft continues to make billions in profit despite its spending on AI, because it has a diversified business that generates revenue. I don't get why they would be "scared"? It's basically a calibrated risk at that level.
Good products are not profitable <i>enough</i>. It's not that good products aren't profitable, but if a product doesn't make disgusting amounts of money <i>this quarter</i> it's not worth considering at all.<p>We've reached the phase of "infinite shareholder growth" where physics says no, and that is so unacceptable that we'd rather burn down the entire global economy than accept less than exponential growth. It isn't that growth is impossible either; there just can't be <i>enough</i> growth. Break-even is apparently a fate worse than death
> They could have shipped a good product with all those billions<p>They did. It's called Azure: <a href="https://www.geekwire.com/2026/microsoft-tops-wall-street-expectations-reports-accelerating-azure-growth-and-37b-ai-run-rate/" rel="nofollow">https://www.geekwire.com/2026/microsoft-tops-wall-street-exp...</a>
Not sure "good product" and "Azure" really belong in the same sentence.<p>Have you read this?<p><a href="https://isolveproblems.substack.com/p/how-microsoft-vaporized-a-trillion" rel="nofollow">https://isolveproblems.substack.com/p/how-microsoft-vaporize...</a>
I know a few people who worked on Azure’s FedRAMP ATOs, and “good” is not a word I’ve ever heard them use.
That's largely a product of work in the 2010s. What's their next Azure? Clippy on steroids probably won't cut it.
Their next Azure is the same as the next App Store and the next YouTube; they are services, you just keep operating them while they're in the green.<p>Microsoft's B2C reputation is undeniably burnt, but their B2B mindshare is unshakable.
The cloud gets used because execs have already got a Microsoft contract. (Not to mention the fun licensing problems.)
And they aren't the only ones! The bubble might be reaching its size limits
They invested billions. They can exit in 6 months if this thing stays afloat.<p>I don't think it's fear; it's greed.
> And it's not just them. There was a time that Google cared deeply about UX<p>Are we talking about the same Google? They still haven't fixed Android gesture navigation after almost a decade.
The thing that annoys me the most (to use polite language) is that product design went out the window with the AI craze. You could probably ship actual products that actual people would want to use, but instead everyone wants to turn everything into a chatbot, as if chatbots are the pinnacle of user interface, the crabs of software, the purpose, goal, and telos of technology. It drives me nuts.
> turn everything into a chatbot, as if chatbots are the pinnacle of user interface<p>I have seen this first-hand: so many chatbots added to so many screens... like, how about just making the UX better? Well, that wouldn't look good at individual/team review time because it's not "using AI", so it's no surprise that's what we are getting.
The only question is "number go up?": will this result in more money from investors or not?
It's even worse in my eyes: they don't even offer a model they themselves maintain.
The entire selling point is "you no longer have to conform to standards in input to get usable output"; why would they conform to standards in output, or in process?
This particular change feels... human driven.
Wait, when did they rehabilitate their reputation?
Before AI they were already shoving crap down our throats through Windows 11.
What do you mean, there are many, perhaps too many, AI standards. MCP, SKILLS.md, A2A, two different ACPs, ECA.
Claude code not supporting specifying an alternate location to look for agent skills is another example.
Not that surprising when you consider the monumental investments. It's heinous but right in line with modern corporate business ethics.
Sent from iPhone
> And then set fire to the whole thing in an offering to their robot gods.<p>It's the bourgeoisie dream: A means of production that also does the labor 24/7 and can't complain, infinitely spawnable. Theoretical slavery+, so of course they're throwing everything into the furnace for it.
These next few years are the real turning point. If they are right about AI and robotic workforces, then it's checkmate--they don't need us anymore, and we're next for the furnace. If they're wrong... well, I don't know... Will there be any consequences? Maybe a few people lose a few percent of their net worth.
The AI tool providers need companies and customers to pay for the tools and automation. If all the white collar jobs in the Western world are replaced by AI or AI-generated SAAS products, some 60% of the workforce suddenly won't have jobs. If such a large percentage of the workforce has no income through employment, who will be able to pay for the services from SAAS providers and thus ultimately the AI providers?<p>The tradesmen working on my house renovations aren't consuming SAAS products during their day jobs.<p>The white collar workforce can't rapidly switch to blue collar jobs.<p>So for these companies to remain viable, they need the white collar workers to still somehow end up with enough money to pay for services that ultimately the companies provide.<p>Maybe the turning point will be a recognition that companies can't only focus on maximising shareholder value. They also need to consider their role in maintaining and improving the societies they operate in.
There will always be jobs for private security, firefighters, and utility repairmen to protect / restore the data centers when people inevitably attack them.<p>There will be a period of rapid change. If we are lucky, the political class will see and adjust policy quickly. Otherwise we will see US urban areas gutted like the Rust Belt was after NAFTA / WTO. They are making the same mistakes but in a different industry.
Why will there always be these jobs, if the technofascists are right? They're creating enslaved sentience. Even the class traitor police want a union, fight for more pay.<p>What's uniquely un-automate-able about those jobs in their dream future?
Google will definitely lose. LLMs supplant search. But not the old document search, which they stopped doing long ago.<p>Add in the fact that open-weight models are 6-12 months behind frontier models, and AI companies aren't building a moat; they're on a treadmill. And treadmills don't justify the valuations OR the hype.<p>AI companies are in trouble.
I see one profitable enterprise for AI that involves spying on everyone, managing their lives (or otherwise) tightly, automating foreign conquests and needing to make only the top decisions while delegating everything else, like a king. I can see a group or one could say a class of people that would happily invest in such future.
Exactly. I keep saying, AI is not useful to us. There will be no AI companies.<p>Even in this supposedly profitable enterprise, the people involved are absolutely too moronic to be able to control the thing they're trying to invent; it will just be a matter of time before it turns around and eliminates them as well...
Not all AI companies are the same.<p>Some are piling on masses of debt to built capacity (eg. Oracle). Others are just reinvesting the profits from the rest of their company (eg. Google, Meta).<p>Anthropic’s moat is their best tool, Claude Code.<p>OpenAI’s moat is the brand of ChatGPT, once the fastest growing app in the history of the world.<p>It’s possible that open weight models keep pace, but it’s also possible that the investment to train them becomes prohibitively expensive and open weight models cease to keep pace with the large foundation model companies.
I really don't think open models will lose. I think they are cheaper to train because they have to be more efficient than the monstrosities we have now.<p>There is no theory that says the current frontier models cannot exist in models with 1/100th the compute waste ;). When we start trending in that direction, and oh wow we truly are, there will be no reason for these services. You could run them on your own hardware without serious investment.<p>The moat OpenAI and Anthropic have is that they, among others, have attempted to buy up all the compute hardware for the next two years. That's intentional. They know the only existential threat to them is anyone coming up with a way to do this better than them. It's already happened, and it's going to become more and more divergent.
Open weight models will keep pace because capable open-weight models are China's strategy for preventing a closed takeover of AI by the West.
US megatechs stole copyrighted data to train their hyper-expensive models.<p>Chinese megatechs stole copyrighted data AND trained their models on derivative / synthetic data that came from the US foundation models.<p>I’m happy Chinese foundation model trainers were able to use Huawei (homegrown) hardware to train their models (also because having Nvidia dominate that sector is terrible for competition), but if Chinese megatech companies are just deriving their open-weight models from US companies, then this is just an IP theft exercise.
One of the double-edged swords I see is that devs/evangelists pushing agentic coding lean on the 'good enough' argument. If that holds, and those asking for software can live with good-enough AI code, then the moment free local models hit that level, the party is over for the continual push to the premium, tip-of-the-spear models.
We might already be there. I've been running Qwen-3.6-27B with 8-bit quantization locally with llama.cpp (~100k context window), and to be honest, for my use case it is more usable than claude-code 40-50% of the time. I only have the $20/mo plan, so I often hit rate limits after 2-3 prompts. And while the local model is slower, it just keeps chugging, is practically free, and more often than not produces code similar to claude's. I wouldn't be surprised if in 6-12 months we have local models comparable to opus 4.6... which I'd personally consider the tipping point at which agentic coding becomes practical.
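A local setup like that can be scripted against llama.cpp's OpenAI-compatible HTTP endpoint exposed by its bundled llama-server. This is a stdlib-only sketch; the port, URL path, and temperature are assumptions based on llama.cpp's standard server, not anything the comment above specifies:

```python
import json
import urllib.request

def build_request(prompt, url="http://127.0.0.1:8080/v1/chat/completions"):
    """Build a single-turn chat-completion request for a local llama.cpp server."""
    body = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature for more deterministic code edits
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Sending the request requires a running llama-server, e.g.:
#   llama-server -m model-q8_0.gguf -c 100000 --port 8080
# with urllib.request.urlopen(build_request("Refactor this function...")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint mimics OpenAI's API shape, most agentic tooling that accepts a custom base URL can be pointed at it unchanged.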
What does their patent moat look like?
Google owns the core transformer patent(s), for one thing, e.g. <a href="https://patents.google.com/patent/US10452978B2/en" rel="nofollow">https://patents.google.com/patent/US10452978B2/en</a>.<p>I haven't read the claims, so I don't know how easy it will be to work around them. This particular one seems to cover encoder-decoder networks, so it's not necessarily applicable to later LLM implementations. But I'd be amazed if Google didn't have several other relevant patents in their arsenal.
I guess if they are wrong, the world economy crashes and burns again, because they wasted all these shiny dollars on infra build-out. It's lose-lose.
Initially I assumed that when the bubble burst, some VCs would go bust, Oracle would go bust, a few hyperscalers would take a significant haircut but carry on, and life would pretty much go on. However there's now sufficient dodgy AI-related debt making its way onto the debt markets that the bubble burst could be a lot messier, and it may be more than a few percent.
A few percent of your net worth, when you're sitting on top of a pile of gold like a dragon on a yacht is one thing, but when you're a retiree, and you're on a fixed income, living off the proceeds from an annuity and a reverse mortgage, and inflation in all its forms is eating into the plan you had, and you don't have any backup, yes there will be consequences!
> Maybe a few people lose a few percent of their net worth.<p>the entire US economy rides on this now, so it'll be more than a few people and <i>a lot</i> more than a few percent.
LOL.<p>Robotics isn't even 1% of the way to replacing anything.<p>Consider why every neat demo is a backflip and not washing the dishes or laying bricks or something.
People (well, American people (disclosure, I am an American)), used to be scared/worried that Silicon Valley will eventually move to Bangalore or Shenzhen, because of wage-discrepancies, and so on -- and it is not a totally unreasonable concern, considering that the _Silicon_ part of Silicon Valley has been slowly relocated to Taipei, Seoul, Tokyo, and a few others. At this point, maybe we should start pushing that the _rest_ of Silicon Valley gets relocated somewhere else, too.<p>It's a breeding ground for Edisons and Morgans, not Teslas. It is profoundly depressing that SV is doing everything it can (knowingly or unknowingly, not sure which is worse) to get the entire planet to stop taking it seriously and to shun it.
If you have worked in Silicon Valley you know that Bangalore and Shenzhen came here ;)<p>In all seriousness, the silicon is still designed in Silicon Valley but maybe you don't hear about that as much? Broadcom, Qualcomm, Intel, Samsung, AMD, Nvidia, etc. all have a huge presence there still.
No country would want them.
BUT it is a trap: <a href="https://arxiv.org/html/2603.20617v1" rel="nofollow">https://arxiv.org/html/2603.20617v1</a>
One thing's for sure: I won't be buying any SaaS, streaming, or ordering from Amazon if I have no future prospects for work. I already stopped most of my subscriptions because of a layoff unrelated to AI.<p>We buy food and go for walks as entertainment. It's been refreshing but also obviously scary.
Didn’t get the “scary” part. I also keep my entertainment to the minimum dependencies possible. I try to rely on stuff I own: music cds, iso videogames + emulators, physical books or ebooks (thanks Anna), exercise outdoors… ditching streaming like netflix/youtube, buying crap on amazon, uber, etc
Scary = “if I have no future prospects for work”<p>It’s the combination of AI changing the workplace, the large techs shedding double digit headcount, recruiting / hiring departments being so broken by the AI arms race hitting job applications, and the macro business environment generally being on the downward slope at the moment.
Scary part is not having a job right now that's all. It's not scary walking around getting more vitamin d
This feels like the same mechanism as climate change. The actors don't care, since they're not solely responsible for the outcome and they benefit from ignoring it.
Turns out it's not infinitely spawnable after all.
AI is the ultimate grifting tool, grifters gonna grift.
5 years ago it was blockchain & NFT’s.<p>Same hypers just moved to different technology.
In my circles it literally was the same people. Instead of trying to get me to buy ETH they started talking only via LLMs. Unsurprisingly we aren't in touch anymore... Maybe they are happier with their chatbots, I'll never know that's for sure
All of the "carbon credit" guys I know are now all in on AI with zero sense of self awareness.
Yep, 25 years ago it was the web. And remember the great electricity grift 100 years ago. And horseless carriage grifters like Ford!
See how fast so many of the crypto and NFT/Web 3 lot shifted to AI, like rats on a sinking ship.<p>I think VCs saw Crypto and dreamt of being able to create the same amount of irrational value. AI has the same technical complexity "You can't easily explain it in a single sentence" energy but unlike Crypto and NFTs, enough actual utility to not seem completely illegitimate. It literally is the perfect hype grift tool. Crypto has survived almost 20 years off of nonsense, how long can this crap last. <i>sigh</i>
If you still think crypto and AI are nonsense, then I guess you will carry these beliefs the rest of your life, but these beliefs won't outlive you, as they have no relation to reality.
I said AI has utility but drives irrational levels of investment. Crypto has little utility besides a place to gamble, con credulous people and otherwise act as a really shitty store of wealth.<p>Most modern crypto projects barely bother to promise to do anything useful let alone achieve anything useful, which the overwhelming majority do not.<p>These aren't beliefs but statements of fact.
Those things you listed have lots of utility. Gambling is one of the most lucrative industries to be in.
That's very uncharitable. Crypto has been extremely useful for all sorts of grifters and enabled separating fools from their money at true web scale.
[flagged]
Ok. Well have fun. Bye.
Who is investing in NFTs today?<p>Who is building their company using permission-less blockchain as the database? The average person still uses a bank checking account, not replacing it with a crypto account.<p>I haven’t heard of any progress on tokens in the Governance direction.<p>Stablecoins without a public audit trail have so far stayed relevant, but there are several which are suspiciously reminiscent of the mistakes that SBF made.<p>We all see the transfer of funds and the ostensible store of wealth when it comes to buying influence or presidential pardons. Those of us not wearing crypto-colored glasses don’t see the promise that VCs sold us on the industry 5-10 years ago.
I never spoke about NFTs nor do I have to speak about them, not today and not ever, so save your bait. It's in the same way that you didn't speak about bank bailouts, so I won't bait you into it.<p>Most people obviously use multiple accounts of different types. Those who have crypto wallets will never reveal them to you in the interest of their privacy.<p>Stablecoin firms make so much cash via interest that they're easily over-capitalized.<p>If you're foolish enough to be manipulated by VC interests, that's your own fault. I would focus on the tech, not on what VCs want you to believe. This applies generally, irrespective of the sector. I don't know why this is hard to understand.
NFTs are stupid. But I have a feeling as governments default on their debt and economies collapse in the next few decades cryptocurrencies will be of increasing importance.
> There was a time that Google cared deeply about UX<p>Have we been using the same Google?
Their search homepage was supposed to be minimal. I was at a tech talk given by Google sometime around 2012 and they said that their ad service is not under any circumstances allowed to slow down the page load - if the ads don't return before the page is ready, the page is rendered without ads.<p>Chrome had so many great UX choices originally, such as tabs all staying the same size when you were closing them so that you could close multiple easily, only resizing after a second or two (that stopped working around a year ago). Hell, there are even rumours that Chrome is called Chrome because it was a polished UX.<p>Their original products were so smooth compared to what was there before. Search compared to AltaVista, mail compared to Hotmail, both compared to Yahoo!. I really don't know where your perspective comes from. GCP?
This comment and a few others here make me feel old and sad for the people too young to remember that time. Yes, Google was an enormous breath of fresh air when it came out. 1000% better UI and features than the competition. Search was incredible. Gmail was a revelation. The whole company culture was night and day compared to the stodgy old tech companies like IBM. Just mind blowingly awesome. And then maps?? How did they even do that? The tech world felt entirely fresh and new and hopeful.
We have. That's why the parent said _there was a time_, implying that this is no longer true.
Admittedly, it's a while ago. But original gmail, say, really did put a huge amount of effort into it.
Some people seem to think they cared, at some point. I’m not one of them.
> Microsoft spent literal decades rehabilitating their reputation.<p>"Decades" is a stretch. There was a brief window around the Windows 7/8 era and then, like a dog returning to his vomit, they returned to their user-hostile bullshit. Windows 11 is the culmination of that, but Windows 10 was plenty bad. Remember how Windows 10 made <i>Solitaire</i> a subscription service? Sticking copilot into everything is just more of the same.
What did Command+G do in OSX? Online results are saying it "advances to the next search result after doing find". In other OS', that's just the enter key, if I am understanding the context correctly.
A challenge for AI-assisted coding is this rush by some companies to shove AI everywhere, usually without a clear understanding of software development or how LLMs work. Code generated by AI doesn't have a traditional author (1), and regardless of how we describe these tools, LLMs and their creators have (at best) very limited accountability for the output. If "vibe coding" becomes the norm, it will greatly diminish any genuine value that AI-generated code can offer.<p>1: <a href="https://ossature.dev/blog/ai-generated-code-has-no-author/" rel="nofollow">https://ossature.dev/blog/ai-generated-code-has-no-author/</a>
This feels like the modern version of 'Sent from my iPhone' but much more invasive. Git commits are legal and technical records. Falsifying who authored a piece of code just to pump up AI usage stats is a huge breach of trust and it is disappointing to see Microsoft prioritize branding over the integrity of the developer's log.
I expect my IDE to record what happened, not what the marketing department wants people to think happened.....
Absolutely, messing with commits is more invasive than messages. It gets worse:<p>"Sent from my iPhone" appears in the authoring view, and you can delete it.<p>Co-authored-by: NEVER appears in the commit message UI - it is added without the user even seeing it.
And also those early Spotify days where Spotify would automatically post what you’re listening to to your Facebook wall.<p>I’ve always seen that practice of using the user as your recommendation lever without their consent as unethical.
I think it's kinda cute that you don't see it as an attempt to <i>steal code</i> by claiming they "co-authored" it. How long before they claim they can use any code co-authored by Copilot in training? How long before you see your own code, "co-authored by Copilot" as an output in a commercial product that YOU aren't making a profit from? Just a thought :)
Good point. That fake commit addendum means that the entire commit contents would not be under copyright protection. AI generated code is not currently copyrightable.
Is this actually decided yet? The closest thing was the image-generation cases. What's your go-to source for this?
It doesn't mean that. A Co-Authored-By header isn't a legal signature or legal assertion of AI generated code.
Not that simple… this is great read: <a href="https://legallayer.substack.com/p/who-owns-the-claude-code-wrote" rel="nofollow">https://legallayer.substack.com/p/who-owns-the-claude-code-w...</a>
Outside this instance, how can one prove code was AI generated beyond a reasonable doubt? Also, do you (or anyone else) know how much AI/copied-code has to be modified for it to be considered independent?<p>If AI generates code, and one just renames some variables/method signatures, then what?
> how can one prove code was AI generated beyond a reasonable doubt?<p>Subpoena the provider they use.<p>Even if they don’t retain the full context, they have to save API calls for billing and analytics. If you’re clauding for the hour up to and after the commit, one can reasonably assume you built it with (if not exclusively by) AI.
One could argue that Co-Authored by Copilot means 'not under copyright'
The headline literally says the line is being inserted regardless of usage, which makes it easy to argue that it’s entirely meaningless as an indicator of AI use at all.
Yeah, the current guidance from the US Copyright Office is that if the code were said to be solely authored by Copilot, it would not be eligible for copyright. If it were said to be solely authored by human A (who happened to use Copilot), the elements and arrangement of it not generated by Copilot would be copyrightable. I'm not sure the Copyright Office has released guidance on attempting to register AI as a co-author; I assume the registration would be rejected, but you'd be able to re-submit as sole human author.
I am the person who approved this PR and would like to acknowledge and apologize for the mistake of turning this feature on by default without sufficient upfront validation.<p>There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code. As folks mentioned here - many similar tools do this as well.<p>Obviously, it should not be on when disableAIFeatures is on and it should not be reporting changes that were not done by AI. I'll work on fixing those and meanwhile revert default to off in 1.119 update.<p>I am open to any (constructive) comments/suggestions - please feel free to reach me directly (my alias @microsoft.com) or open an issue on GitHub. Happy to answer anything here as well.
Changing the default behavior for all of your users with no notification is pretty unforgivable. Even if this feature worked correctly (it obviously doesn't), this should at minimum be a prompt after upgrade to let the user confirm that this is what they want - but honestly it should be opt-in for those that want it.<p>To have it silently start adding marketing copy to git commit messages is pretty bad. To have that added text not even be visible to the user in the UI, so they can remove it before committing, is much worse.<p>This kind of thing being released speaks to a greater dysfunction over there. Not a good look at all, and I am not a Microsoft or AI hater. But my commit messages are not where you move fast and break things.
Well, the good news is commit messages are some of the most visible things, and no truly silent modification is possible there.<p>The bad news is - where else has this happened in VS Code?<p>- A happy user of (n)vim
Then how are they going to be able to sue and take ownership stake in projects that have their clanker's authorship in it when they gain traction, or its advantageous for them to do so?
[dead]
I think the constructive criticism is best directed at whatever process you are following. That process allowed a very visible user facing change in a widely used piece of software. How did this change make it to production without some process catching the impact of this change? Was there really no internal discussion from a code review at least? This seems hard for me to believe. I expect more from Microsoft.
> Was there really no internal discussion from a code review at least? This seems hard for me to believe.<p>The outlined story feels unfortunately very believable to me.<p>Teams need to push out the most number of features, and nobody stops even for a second to think about how a feature might affect other flows or other users not in the feature request.<p>It might have been quickly reviewed to check if the code does what it needs to do (add the coauthor note).<p>Do you think reviewers will think about unwanted effects, when they need get back to feeding their own poorly thought out and underspec’d features to their LLMs?
<i>> There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code.</i><p>What metric did Microsoft use to assess that VS Code users "expect" their commits to have unsolicited messages added to them?<p><i>> Obviously, it should not be on when disableAIFeatures is on and it should not be reporting changes that were not done by AI.</i><p>Did you discuss adding these messages with your legal department?<p>What is Microsoft's position on adding such authorship statements to the code Microsoft did not author?<p>Or is Microsoft stating that using LLM assistants makes Microsoft a co-author of the code?<p>Does Microsoft have copyright claims on the code if LLM assistants are used at any time during its creation?
I think there’s a few of us who appreciate you being up front. I’d still question the intent and whether it was really a mistake, especially since the commit[0] message reverting said functionality cites “widespread criticism” and this very HN article - which makes the revert look like a response to negative PR rather than to the mistake itself.<p>[0]<a href="https://github.com/microsoft/vscode/pull/313725/commits/1e70c1a7d11c4a43a6b3d4fbaa18b8d64ef62982" rel="nofollow">https://github.com/microsoft/vscode/pull/313725/commits/1e70...</a>
I just wanted to say, while I think this feature was a bad idea, I sincerely applaud your willingness to post here, knowing you'll get roasted. Seriously brave and commendable.
Interesting case:<p>- a project manager vibe-coded the change without thinking it through at all<p>- the PR was reviewed by an LLM<p>- an actual engineer gave LGTM without really reviewing the changes, trusting the LLM<p>Did I get this right?
><i>a project manager vibe-coded the change without thinking it through at all</i><p>The PMs vibe-coding and having no idea what they're doing isn't even the main issue (although it is pretty bad).<p>The main issue is: how are the <i>actual</i> engineers supposed to "review" the slop? They probably report to the same PM or are below them in the org chart and might be evaluated by them. Not just at MS, but at any company.<p>Such a conflict of interest would be detrimental to quality anywhere. You wouldn't build a bridge like this, nor should you build software this way.
You can't make this shit up.
The LLM actually points out the problem tho
Why does the commit editor hide the coauthored message? Why not pre-populate the text field and users take or leave it when committing?
Co-Authored-By is normally a trailer, and trailers aren’t part of the commit message. It’s likely the commit editor isn’t set up to show trailers. They’re not exactly obscure, but it does seem that they’re relatively unknown.
What do you mean they aren’t part of the commit message? Trailers like (signed off by) are absolutely part of the message. Tools can choose to treat them as special metadata, but they’re part of the commit.<p>The docs for the function to interpret trailers even says this explicitly: <a href="https://git-scm.com/docs/git-interpret-trailers" rel="nofollow">https://git-scm.com/docs/git-interpret-trailers</a><p>> Add or parse structured information in commit messages
I mean that they’re not necessarily part of the --message parameter to `git commit`, but instead part of the --trailer parameter. I don’t know how VSCode is programmed, but it seems plausible that trailers are handled separately from the message parameter.<p><a href="https://git-scm.com/docs/git-commit" rel="nofollow">https://git-scm.com/docs/git-commit</a>
Don't you understand that the default shouldn't be changed at all in this case? It improves nothing and affects every single user. If an org/project wants this behavior then it can enforce this flag for its contributions. The only valid reason for this change is someone's performance somewhere in Microsoft is dependent on VS Copilot usage metric.
Absolute clown car of an operation. Just abdicated responsibility even when it comes to very basic testing. This is bonzi buddy scam software bad, intended or not. Have fun Microsoft, but this is where we part ways.
thank you for doing this, it gave me the push I needed to finally switch to zed. vscode has really been going downhill for a while now. it's sad to watch, it used to be a really nice editor
> a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code<p>Literally who?
I could easily see companies, especially enterprise-level companies, expect code that was generated with AI to have some level of ownership attributed to that AI. Whether a simple "Co-Authored-by Copilot" byline on the commit is the right way to do that is another question though.
No one, which is why he refuses to reply further to any of these inquiries.
Just for any future mea culpa, I'd recommend not hedging with comments like this one:<p>> As folks mentioned here - many similar tools do this as well.<p>It's really doubtful they have the same behavior people are complaining about here: namely including the authored by Copilot statement when it wasn't used (or even enabled).
Anthropic does by default. I had to put “no co-authored by lines in commits, ever” into my global settings.<p>That’s pretty close to “included when it wasn’t used (or even enabled)”, since it’s on by default and you have to explicitly say no. It’s not even clear where to turn it off; I just rely on the AI to figure out not to do it.
> I am open to any (constructive) comments/suggestions<p>Here's one:<p>I think a senior sysadmin needs to sit you down in their office and have a very serious talk with you about the <i>responsibility</i> that comes with writing code other people run. I am serious. We used to have these talks with everyone who got sudo access. You shouldn't be shipping code if you don't understand the trust that is required of people in your position.<p>This isn't just about this "feature" being active when AI features are disabled; the way you mis-implemented it has resulted in it modifying the commit message without the user even seeing it! That is malicious behavior, not an innocent little feature "to make life easier".<p>I've fully switched off of VS Code to Kate now, which is faster and better behaved in most cases anyway. Bye.
To be fair, looks like a PM vibe coded it and this person “just” gave it an approval with no comments after an LLM review.
He makes more money than you and you’re responding to his ai chatbot<p>Seethe
> There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code. As folks mentioned here - many similar tools do this as well.<p>Then make it an extension, not an IDE-behaviour thing. Is that so complicated, so difficult?
> There was no ill intent by evil corporation<p>I simply do not believe you
Thanks for facing this head-on here; mistakes happen.<p>I think the default to on should also be reconsidered regardless. The assessment (co-authored by AI) may be valid but the assumption the user wants that advertising is exactly that, an assumption, and a dubious one at that.
I appreciate you acknowledging that this was a mistake, but as you surely know from your own experience with other people’s mistakes, some mistakes are so egregious that they cast doubt on the intentions of the people involved even if they are corrected later.<p>To me, “let’s add false attribution to every commit by default without informing the user” falls squarely into that category. I don’t think I’ve ever worked in an environment where something like that wouldn’t have been red-flagged in three seconds by anyone who took even a casual glance. I’d honestly be embarrassed if such a proposal even made it into a public pull request for my organization, nevermind that pull request getting merged.
Have it as an add-on said customers can add. Opt-in, not opt-out. No AI without consent.
Just disable everything AI by default bro.
[dead]
Brave man. RIP your inbox.
The best part is that copilot commented on the PR saying that this doesn’t actually change the behaviour, creates inconsistency in the codebase and suggested reverting the change! (This comment seems to have been ignored…)<p>> The configuration schema default was changed to "all", but the runtime fallback in extensions/git/src/repository.ts still calls config.get('addAICoAuthor', 'off'). This is now out of sync and can lead to unexpected behavior in contexts where the contributed configuration defaults aren't loaded (e.g., some tests/hosts), and it makes the intended default unclear. Update the runtime fallback to match the schema default (or omit the fallback so the contributed default is used).
That's pretty standard review practice in there by now.
I also liked the bot posting screenshot diffs that are all false positives, while apparently not capturing the default change (is it not in some menu somewhere?)
There are two commits in the PR, the second of the two seems to update the fallback config to avoid the inconsistency that Copilot was complaining about.
To everyone who bought the "developer-friendly" Microsoft of VSCode fame from a few years ago: this is what they forever did, and forever will do.<p>This company has been pulling these tricks since the early 90s.<p>If you fell for this once again, there's nobody else to blame but yourself.
You’re forgetting the fact that the newer generations coming into the industry don’t know that. They don’t even know what a VHS tape is and some don’t even know what a DVD is — this isn’t a problem it’s just their baseline is different from ours. Global warming is an example of this: newer generations see today’s conditions as normal but we older generations see them as broken and a problem.<p>To be direct about this: this is actually our fault they fell for this. It’s your fault too. We’re the ones building the future for the next generation/s, so whatever “tricks” they fall for are created by our generation (to extract or generate wealth, amongst other things.)<p>That’s on us to do better through education and fighting back.
The younger generations aren't really that stupid. They know what a DVD is for gosh sake.<p>They also know the conditions they have to endure - economic, climate, whatever - are not normal or okay. They're well aware of who to blame for those.
> If you fell for this once again, there's nobody else to blame but yourself.<p>We don’t need snarky comments like this, especially when the technology in question is so pervasive and takes a lot of cognitive effort to avoid. The blame lies solely with Microsoft.
The very young do not always do as they are told.<p>If one hasn't been personally betrayed yet, it is easy to minimize or ignore the warnings of others who have been through the predatory/anticompetitive, EEE, stack ranking, etc. eras of MS.
You may be surprised to learn some of the employed adults on this site were born after the 90’s.
Unfortunately, quite a few of these young adults ignored the people who lived through it last time and were repeatedly warning them about it.
And hopefully those employed adults have done their due diligence and read some history.
Ok but I’ve used VSCode for almost 10 years, got mad at this once, and disabled it instantly. This sucks, but maybe don’t overreact?
I'd like my tools to not have a time-bomb attached to them, no matter if it takes 10 years to explode.<p>And honestly I think this case is just a perpetually clueless manager getting over-joyous with vibecoding (to the point of being marveled at changing two lines of code without blowing everything up).<p>It's probably going to be reverted in the coming days. Which doesn't change the fact that it's a very Microsoft way of operating.
- Automatically activated audio cues (purportedly for accessibility) without consideration for users with auditory sensitivity; continued to release changes that would override attempts to disable the unwanted sound; dismissed with "but how else could we <i>possibly</i> notify people that we added the feature?"<p>- Refusing for <i>over seven years</i> to offer a simple UI to clear "issues" pane, instead blaming plugin authors for not 'owning' the content. <a href="https://github.com/microsoft/vscode/issues/66982" rel="nofollow">https://github.com/microsoft/vscode/issues/66982</a><p>Microsoft hasn't cared about the actual users of VSCode for a very long time.
Just because you can opt out doesn't mean that they're not shitty for defaulting you to opt in.
It is certainly bad behavior that Microsoft did this. But it's irrational to jump from there to "this is what they always did and always will do" as OP did. Corporations are not unchangeable monoliths, and it was perfectly reasonable to use Microsoft tools when they were acting decently towards their users. Now that they have turned user-hostile, it makes sense to avoid them until they learn their lesson, and so on.<p>People act like a corporation has character traits, as a person does. But it doesn't. You can't strongly predict future behavior based on the present the way you can with a person, so it makes no sense to have seething eternal hatred for a company.
Hatred for a corporation is as useful as hatred for a nuclear bomb. No matter how harmful or destructive, it lacks any sort of free will that would make it a reasonable target for such hate.<p>There's actual people <i>making it happen</i>, though.
Microsoft is unforgivable. So much time and so much money and they made the world worse whenever they could make a buck. They need to be broken up.
This is bad. I need to start Monday warning my team about this and installing validation hooks in our repos that catch any commits with this. We don't have a non-AI policy, but we have an "approved AI" policy due to data security, and having all your commits say "Co-Authored-by Copilot" is more or less the same as "I ** on infosec". We also have a "short commit messages" policy, and that "Co-Authored" thingy takes characters.
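A minimal `commit-msg` hook along those lines might look like the following. The grep pattern, and the assumption that the offending trailer always contains "copilot", are my own; adjust to whatever actually lands in your history:

```shell
# Hypothetical commit-msg hook: reject any commit whose message carries
# an AI co-author trailer. Install as .git/hooks/commit-msg (executable),
# or point core.hooksPath at a shared directory to apply it repo-wide.
cat > commit-msg <<'EOF'
#!/bin/sh
# $1 is the path of the file holding the proposed commit message.
if grep -qiE '^co-authored-by:.*copilot' "$1"; then
    echo "commit rejected: unapproved AI co-author trailer found" >&2
    exit 1
fi
exit 0
EOF
chmod +x commit-msg
```

Client-side hooks are easy to bypass or forget to install, so pairing this with a server-side `pre-receive` check is the only way to actually enforce the policy.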
FYI, they changed the default of 'git.addAICoAuthor' to 'chatAndAgent' afterwards:
<a href="https://github.com/microsoft/vscode/pull/312880" rel="nofollow">https://github.com/microsoft/vscode/pull/312880</a><p>So it was 'off' -> 'on' -> 'chatAndAgent'
Changed back or not, this demonstrates that they're either willing to make sweeping changes like this that hurt a massive number of users, or that they're incompetent to the point of not realising the impact of the first change. They'd have had to just blindly make the change, since the original PR was approved and merged within the same minute by the original author (no additional eyes, at least that we can see), or ignore user complaints and make it anyway. Both cases demonstrate terrible stewardship of VSCode.
This should be higher, as this dates from 5 days ago I wonder why OP didn't bother to mention this follow-up
To be fair to OP, that follow-up doesn't appear to be mentioned anywhere in the discussion on #310226, either. They probably should have left a note about that change before locking the thread.
To be honest, I didn't see the follow up. It just incensed me enough that they would do that to begin with.<p>Right up there with Zed being pretty open that they siphon your code through their API surface and have a "Just Trust Us Bro" data retention policy, along with no way to turn the collaboration features off.<p>- OP
"Sent from my iPhone" marketing only works if people want everyone to know they're using the product.
Huh. I always thought the point of "Sent from my iPhone" (or the earlier "Sent from my Blackberry") was that it indicated "I don't have access to my desktop and file server right now so don't expect me to send that file".
That's one way that it works, but it's not the main driver.<p>This kind of tagline marketing works best with people who aren't even aware that they're participating, and who aren't bothered to do anything differently even if they become aware.<p>The juice isn't worth the squeeze, so the marketing remains.<p><pre><code> Sent from my iPhone
Downloaded from Demonoid
Rusty n Edie's: The world's friendliest BBS 216-726-0737</code></pre>
But, also, I think in this case, it makes people less likely to use the product, as there's a lot of baggage around agent-written code. People who shouldn't be using it are using it to make so many PRs it's become a DoS attack for some projects, so a lot of project maintainers are rightly sniffy about AI-written code.
I'd like to think that the level of cognitive sophistication necessary to assess the situation negatively would be very widely available. That would be a very pleasant line of thought for me.<p>But then, I look at the modern-world empires that are built upon advertising and realize that reality just isn't that way. At all.
There's no such thing as bad publicity. If people who didn't know about your product become angry about your product, they're more likely to buy it.
100% I have one ~tiny~ project that has a handful of stars and actual people seem to use it. End of last year I received a huge slop drive-by PR on it. Spent 20 minutes reading it, realised it was just nonsense. I want my friggin' 20 minutes back.<p>I can't imagine how infuriating this is for maintainers of projects with much more footfall. I'm frankly shocked more aren't just outright closing the doors to PRs from unknown contributors
Dang, now I wanna call Rusty n Edie's BBS for some reason.
However, there's one counterexample: some email clients in the past experienced explosive growth by adding signatures. It was annoying, but it definitely worked.
Someone, somewhere, probably has a "% of commits co-authored by copilot" KPI.
Doubly so, because you are being used as ad-channel and not being compensated for it either.
I don't really send emails anymore but when I actually used email to keep in touch with friends (during the interesting bit of time between smart phones becoming mainstream and SMS and other messaging services becoming more popular than email), I changed my signature to be "Sent from your iPhone" even though I used an android and mainly sent emails from my computer, just to be an edgy teenager. Got some interesting responses from that.<p>It's interesting to see how communication, digital and otherwise, has evolved over time.
Microsoft already does this with their mobile Outlook. Sent by Outlook Android / iOS on the bottom of the message.
But you can see it and remove it before sending. It’s definitely not the same.
Does anybody else remember Tapatalk? They did the same with signatures in forums.
"sent from my iphone" originally meant more than just "i have a fancy phone that lets me send email" in the early days it meant "I'm not at my desk right now."
This is pumping <i>someone's</i> metrics up inside of Microsoft, <i>somewhere</i>.<p>The question is - will their boss revert it or encourage it when they discover the source of the stats being juiced?
A Principal Software Engineer at Microslop merged this - <a href="https://www.linkedin.com/in/dmitriy-vasyura-9191611/" rel="nofollow">https://www.linkedin.com/in/dmitriy-vasyura-9191611/</a><p>This is the author of the MR - <a href="https://github.com/cwebster-99" rel="nofollow">https://github.com/cwebster-99</a> - A Product Manager at Microslop<p>I've routinely spoken about the uselessness, and oftentimes detriment, of product managers in tech.<p>Leadership pushing for vanity metrics like PMs writing code doesn't help either.
I can’t access that LinkedIn link without going through their Persona ID process, which requires all kinds of PII.<p>> LinkedIn users attempting identity verification may be unknowingly handing sensitive personal data to Persona Identities Inc., a company that distributes information to government agencies, credit bureaus, utilities, and mobile providers.<p>^ Link from a LinkedIn page I found on a Kagi search.<p>I can view some LinkedIn pages but not others without logging in.<p>Even though I’ve never posted to LinkedIn and only use it as a public résumé, my account was flagged as needing identity verification. I’m pretty sure this happened a year or two ago when I changed my email address from one domain I owned to another domain I owned.<p>I’ve never been able to log in since then, and there is no support path. The only available way past it is to simply submit all the info to Persona.
I'm here in case you have something to say to me directly.
The role feels like it’s borne out of a desire to see employees as fungible.
Isn't that someone the person who created the PR?
"Product Manager at @microsoft working on VS Code and GitHub Copilot!" it says on her profile
Isn't it also because they want to tag those commits so that they don't feed them into Copilot training?
My first thought when I read this was that it was accidental. But the title of the PR makes it look like they aren't even trying to hide it.
That someone saw Google's claim that 75% of their code is written with AI and said "hold my beer".<p>Juiced stats? No such thing, at least as long as stock numbers go up.
Isn’t this a kind of “leopards ate my face” situation? I thought we had all “agreed” that letting AI write code and take control of software repositories is good, even if we have no idea what is going on beyond a thin surface layer, because well it’s fast and we can fix it later and lol who needs testing? My customers are my testers.<p>And now it’s suddenly bad because the developer is the customer?
The sneaky commit modification is triggered by very modest usage of AI, such as auto-completion.<p>Look, if an agent writes the code and the commit message, then adding a Co-authored-by by default is OK. Not even showing it before the commit is made is not, and neither is adding the message when the AI was just completing code.
I genuinely think it's not ok even then. Copilot is a tool, one of many I use. That tool has no business polluting commit messages without my knowledge.<p>The appended message isn't even adding any new information, as in this day and age a vast majority of commits is probably "co-authored" by an LLM.
I should have been clearer: the hidden addition is never OK.<p>If I ask Claude to write a commit message, it will insert a co-author line (and an ad), but I can see it and disapprove, add a counter-instruction to CLAUDE.md, etc.
Glorified autocomplete, syntax reminder and random snippet generator thinks it's co-authoring things.
Microsoft is such a master class in how to make me hate you, quickly.
I know you didn’t mean it that way, but boy did that make me feel old.<p>Anyone else remember the bill gates borg category on slashdot?
There's more of this going on. For me, Microsoft's SwiftKey keyboard app sabotages the use of a competing search engine (DuckDuckGo) in Firefox on Android. When typing a multi-word double-quoted search phrase, it doesn't allow it to be typed correctly.
Next it will be Co-authored by Co-Pilot with help from Dominos Pizza
Next Microsoft will sue you to get a share of revenues and ownership as co-author, if your product ever makes success.
But only if you watched this 1 min Segment of today's sponsor...<p>Your free commit today is brought to you by duff beer
This will be so true hahaha
More like Carl's Jr.: Fuck you! We're eating.
<p><pre><code> microsoft locked as spam and limited conversation to collaborators 6 minutes ago</code></pre>
This is especially hostile to users given that courts are ruling that AI written code can’t be copyrighted.<p>When Hotmail inserted “sent using Hotmail” in emails as a growth hack it didn’t have legal consequences. This might.
My newest yocto image mounts a 640K RO tmpfs on top of $HOME/.vscode-server to prevent people using VSCode from shitting all over the relatively small emmc.
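For reference, the same trick expressed as an fstab line (a sketch; the mount path shown is a hypothetical example, and the Yocto image presumably does the equivalent in its own recipe):

```
# mask the directory with a tiny read-only tmpfs so vscode-server cannot write to it
tmpfs  /home/builder/.vscode-server  tmpfs  ro,size=640k  0  0
```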
The PR author didn't even bother to properly capitalize their subject or add a description. What a double standard for code quality Microslop is applying to internal vs. external contributions.
Jeez, you can see so many things wrong with this new all-in AI direction Microsoft is taking: a commit by a product manager who probably never actually dug through the code before… the automated AI review not catching the problem, and the vibe-coded PR introducing the error itself.
Even large companies like Anthropic and Microsoft keep pushing out features without proper code and/or product review. This has become a bottleneck in software engineering.
It's a sad truth, but in the end, for companies, software is just a means to an end: more revenue.<p>And if they can get more revenue with less quality and by cutting corners, they will do it; see countless examples of such scandals in many industries...
I have been in this situation. A major driving force is some kind of a demand from the leadership to see the KPI for AI adoption. And this unfortunately is the easiest one to implement.<p>The other aspect is virality. I think by now the implementing team should know that most people do not appreciate Claude inserting itself into the commit message. It's the job of the team to feed that to the leadership.
Within ONE week, Microsoft unilaterally decided to<p>1. increase the LLM usage by 20x in Copilot<p>2. add hourly (roughly 4-hour blocks) and weekly rate limits to model use in Copilot<p>3. introduce credit-based billing where you can't roll over unused credits<p>4. and now insert itself into commits as co-author<p>Man, I really feel like they want us to hate them
Adding Copilot as co-author: For when just stealing other people's code doesn't cut it anymore.
Wow. Just like using ungoogled-chromium instead of chrome, lineage os instead of oem android, using vscodium instead of vscode is again justified. These decisions really are the ones that I'll never regret.<p>In addition, using the word microslop instead of microsoft is again justified, too.
Search for "AICoauthor" in VSCode settings and turn it off.
To be precise,<p>"git.addAICoAuthor": "off"
Until they change it to "CoauthorAI" in the next version. This shouldn't be a default in the first place.
Wonder if they're going to claim copyright interest based on inserting that crap.
And here I’m thinking that my text editor should have zero interaction with anything git other than as a diff viewer.<p>lazygit is text editor agnostic and works brilliantly to give some near perfect porcelain to git specifically. And it works the same with Ghostty, Terminal, zed, VS Code, any environment I happen to be in, while saving so many keystrokes.
Recent and related:<p><i>Tell HN: VS Code v1.117.0 automatically adds GitHub Copilot as your co author</i> - <a href="https://news.ycombinator.com/item?id=47958353">https://news.ycombinator.com/item?id=47958353</a> - April 2026 (36 comments)<p>(Looks like that one never made the front page, so we won't treat the current one as a dupe)
If your code is "co-authored by Copilot", does that then allow future AI to train on it without your consent?
At no point in time have companies been so desperate for developer attention. It feels like the general consensus is that it's a "winner takes all" race, and everyone has to add as many dark patterns as possible to increase stickiness.
What I'm missing in this whole thread is <i>why</i> this is happening. Presumably to be transparent about whether code has been co-written by AI?<p>What's in it for Microsoft?<p>If we accept that AI can't copyright or own IP rights on something, then why?
I have a sneaky suspicion that there's some lobbying in the works to overturn that ruling going forward. In the past, it was OK to build models from copyrighted data etc. that one might have found by the wayside. But, in the future, no such thing for you. Everything generated by the AIs will then belong (at least partly) to the megacorps (maybe THEY can co-own the copyright if the AI cannot). Nice pulling up the ladder, if true.<p>This could also be a move against other countries' IP position.<p>I've seen the explanation from dmitriv [1], but I am not convinced. These markings achieve very little, as people can clearly work around them by copy-pasting code from another place, or using other companies' tools, like claude code or antigravity (or not even using the GUI).<p>I suppose the answer might just be "don't attribute to malice ...", even if Microsoft has proven us wrong before; they generally know exactly what they are doing strategically.<p>I guess, in a few years we will know.<p>[1] <a href="https://news.ycombinator.com/threads?id=dmitriv#47991835">https://news.ycombinator.com/threads?id=dmitriv#47991835</a>
So what's next ... Is this groundwork for when they are going to charge you a 30% commission on your sales for products built with their tools?
I've been hesitant to use Zed, mostly because I didn't want to learn new keybindings, but last week I finally jumped in and remapped to keys that I like. It works really well.
This is really bad.
Time to leave for something else if you haven't already; vscode has been good to us, but this kind of behavior is only going to ramp up as Microsoft seeks a return on its AI investments.
Just when you think they've reached the bottom, they just keep digging.
Whenever I use Cursor's voice dictation, my prompts get "Thank you" inserted at the end of the sentence.
That happens in most speech to text systems, even Superwhisper, Monologue and Wispr Flow. I read somewhere it comes from training on YouTube audio and happens when there is silence. I guess it depends on the model but most of them are based on Whisper which has this problem
> I read somewhere it comes from training on YouTube audio<p>Does it also insert "please like & subscribe?"
Ha, I also have this happen all the time in response to mouse clicks. When playing with Apple Foundation Models + Whisper I noticed that it happens so often that I had to explicitly filter this out before acting on transcriptions.
Wow that pr itself looks amateurish. I reported this a while ago <a href="https://news.ycombinator.com/item?id=47958353">https://news.ycombinator.com/item?id=47958353</a>
Great, here’s how to remove it from your commits:<p>Run git commit --amend<p>Your text editor will open. Delete the line: Co-authored-by: Github Copilot <noreply@github.com><p>Save and exit<p>Force push the change: git push --force-with-lease
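A non-interactive variant of the same steps (a sketch; the grep pattern assumes the exact trailer text quoted above, so adjust it if your editor injects a different line):

```shell
# rewrite the last commit message in place without opening an editor;
# `git commit -F -` reads the amended message from stdin
strip_copilot_trailer() {
  git log -1 --pretty=%B \
    | grep -v 'Co-authored-by: Github Copilot' \
    | git commit --amend -F -
}
```

Follow it with the same `git push --force-with-lease` if the commit was already pushed.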
The original commit is (publicly) preserved on Github.com in perpetuity, so I wouldn't use the term 'remove'.
Or people could instead not use Microslop software, easy fix for the AI bullshit. But yeah of course you're technically right.
Looks like it comes into play for telemetry and here in actual commits:<p><a href="https://github.com/microsoft/vscode/blob/4e312e3c3a18d13c26d49de2c554d8121a832fd3/extensions/git/src/repository.ts#L1459" rel="nofollow">https://github.com/microsoft/vscode/blob/4e312e3c3a18d13c26d...</a>
Determining AI provenance is really tricky and difficult when you have so many different ways to author code.
Looks like VS Code has decided that by stamping all code as AI generated, it is more likely to be right than wrong. Some PM must have declared that false negatives are a lot more dangerous than false positives when it comes to AI provenance tracking
"microsoft locked as spam and limited conversation to collaborators 6 hours ago" :)
quote:
"Thank you all for your feedback, professional or otherwise.
Sorry about the regression. I will work on fixing this in 1.119.<p>There is a number of issues with the Co-Author functionality:<p>It should never have been enabled when disableAIFeatures is on.
It should not add attribution to changes that were not done by AI.
We need to make sure it receives a more test coverage before change the default.
If you have additional (constructive) feedback, please ping me directly or open an issue."
We got a positive response just before "microsoft locked as spam": <a href="https://github.com/microsoft/vscode/pull/310226#event-25100330987" rel="nofollow">https://github.com/microsoft/vscode/pull/310226#event-251003...</a>
Changes the defaults. Does not care to provide a pull request description. Talk about hubris.
PMs at Microsoft have incredibly bad taste
It's all because of the ridiculous performance-review systems at some $BIGTECH$<p>"Here, we increased the number of commits by Copilot from X to Y, a Z% increase"
Seems like MS from the Gates/Ballmer era is back
Wondering what else I'm using from MS that might be at risk. <glances at TypeScript>
This is why I never ever commit through the gui.
Is this when you add a commit through VSC or does the editor add some git hook?
It's your own fault for using anything made by microsoft at this point.
So GitHub reached its tipping point, I guess vscode will follow
Adding anything to my commits or PRs or anything else is a deal breaker TBH
Does that make the code uncopyrightable? Non-human authorship?
If it's actually <i>co</i> authored then you should be fine on copyright.<p>And of course dumb messages that aren't true won't affect copyright.
The real question is why Anthropic was able to use DMCA takedown requests "in good faith" against the Claude leaks when their own CTO claimed it is a 100% slopcoded codebase, and they themselves argue that all LLM generated code is transformed enough to not be copyrightable. Which they have to state without being able to turn back because they violated millions of book and software licenses during training.<p>Make it make sense.
Truth, law and consequences (for the capital class) are <i>so last year</i>.
You can lie.<p>You can get away with lying.<p>You can lie to judges.<p>You can get away with lying to judges.<p>You can profit from getting away with lying to judges.<p>A judge isn't involved, anyway. The leaker would have to take you to court and then prove that your request was in bad faith and that they didn't infringe copyright.<p>Competent programmers understand how to tell the computer what needs to happen. Really good programmers understand how the computer executed the code, and take advantage of it - they know about speculative execution and cache prefetching. Competent lawyers know what the law says. Really good lawyers understand how the law is executed, and take advantage of it - they know when it won't be enforced.
What? Training is not inference. Reading books is not the same as writing.
Maybe read up on how transformers, their encoders and decoders, and the attention matrix works?<p><a href="https://arxiv.org/abs/1706.03762" rel="nofollow">https://arxiv.org/abs/1706.03762</a>
The courts have determined that, yes, and that is the position of the Copyright Office. And the Supreme Court has rejected appeal, so that's the standing precedent.<p>Realistically, look forward to SOX style audits and having to maintain evidence of how much of a code base has human authorship vs machine generation. Or reject slop.<p>I can't wait for:<p>* The first company to do perjury for litigating over a nonexistent copyright for machine generated code.<p>* The first company to get nailed to the wall for reverse engineering and replicating high profile copyrighted code, like Windows.
Having a tool involved isn't the same as being entirely generated by a tool<p>For example, without any AI, if I generate a lookup table for the sine function in my code, that table may not be copyrightable because it was machine-generated, but it doesn't somehow make the rest of the code not copyrightable either<p>"Co-authored by" doesn't imply it was entirely machine-generated
Things are getting shittier by the day. Lovely
My early paranoia about corporate AI is really maturing. No one’s really laughing at me anymore either.
> <i>No description provided.</i><p>Right because of course you wouldn’t provide an explanation for why such a change would be made.<p>Providing zero description or background or explanation for why a change is made is probably the only thing that pisses me off as much as a pure AI-slop description of a change: your job in a PR description is to give the background for why a change is being made. Honestly, any PR which doesn’t do this should be insta-closed by policy. But it totally tracks with the level of quality I’d expect from the company in question.
I saw this the other day and was pretty confused - I prefer to write my own commit messages and wondered if I’d accidentally let the AI do it this time. Nope, just MS changing things behind my back. Sigh.
Having to scroll through 3 screens worth of giant automated comments on the linked PR before seeing any comments written by humans is the cherry on top.<p>So many repositories look like this now, it's honestly sad.
Co-authored-by iPhone
A lot of bitching about Microsoft here, for something Claude has been doing forever. I have a git hook that rejects any commit containing the line Co-authored by Claude
That's a fair point, but claude code is not an editor (yet?), and when you use claude code, and allow it to commit things, it's almost certainly "co-authored by llm".<p>Back to vscode, people get the "co-authored" line even if they didn't use the AI features.
Well, claude does it if you ask it to commit instead of you, and it lets you review it; that is not the case with this feature, judging by the comments on the PR. Sometimes it says co-authored by copilot even when the code is not generated by AI. Also, it will never say co-authored by claude or whatever, always copilot. And why would my IDE care about this and not the AI itself?
Are you ashamed of other people finding out you used Claude? I think the co-authored-by bit should not be a setting at all, AI-generated code should be clearly identified.
I use Claude at work. I've never instructed it to make a commit, and it's never attempted to make one. It would fail anyway because my commits are signed by Yubikey and it requires presence detection, so I have to tap it.<p>But I don't want it to make commits, and I don't want to review its code in the Claude Code TUI, either. I want to read its changes in my text editor, decide what to drop or revise or revert, and then stage individual hunks or regions into logical commits.<p>If anyone asks I'll tell them I used an LLM, idc. I often mention it in commit messages or PRs. But I don't want LLM agents to write commits at all.
> AI-generated code should be clearly identified.<p>Let AI autonomously produce code of a quality that I care about and I might consider giving it credit. I don't know how other people write code but I come up with an idea and use a multitude of LLMs to brainstorm a reasonably comprehensive spec that any reasonably competent person can read and produce a working program from, including a locally working Q2 quant of Qwen 3.6. Even Kimi is as good as Claude at most coding tasks, and I don't see why any single agent deserves any credit for my design.<p>Let artists and filmmakers start watermarking their output with the tools they use and I might reconsider my decision.
I think the Linux kernel's standard of disclosure via the "Assisted-By" trailer is the right move.<p>Makes it clear you used a bullshit machine, without implying it's an author.<p>...assuming you think using them at all is a good move - I won't deny they have some utility (though I'd argue much lower than many seem to think), but I do presently believe they're a disaster for humanity.<p>The ruination of the Internet with slop, the massive propagation of propaganda, and the insanely easy-to-wield tools for abuse are in no way worth the ability to accrue tech debt at 10x velocity (though to be clear, accruing tech debt can absolutely be a useful strategy, if one I personally dislike).
Basically what you’re saying is that if AI does anything on your computer, anything the AI impacts you should lose control over. If the AI touched it at all in any way, big or small, you now lose ownership of the actions your computer takes (on open source tools, I might add).<p>In case you need reminding of common sense, I’m supposed to be allowed to decide what my commit messages are because <i>it’s my fucking computer.</i><p>I prefer that my software is not a morality police.
Mind-boggling that people are trying to hide this; it tells you all you need to know about our “profession.” Presence of that hook or the like in a place of business should be a fireable offense.
I've never had Claude Code in VSCode add attribution to a commit when I didn't use it. VSCode is adding the attribution even when you have all copilot features disabled and therefore could not have used it.
I already added a pre-commit hook to strip this out. Having to defend myself from my own editor is absurd.
Please do share
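Not the parent's actual hook, but a minimal commit-msg sketch along those lines (the trailer pattern is an assumption; adjust it to whatever your editor actually injects):

```shell
#!/bin/sh
# .git/hooks/commit-msg -- git passes the message file as $1;
# delete any Copilot co-author trailer before the commit is recorded
strip_trailers() {
  tmp=$(mktemp)
  grep -iv '^co-authored-by:.*copilot' "$1" > "$tmp" || true
  mv "$tmp" "$1"
}
if [ -n "$1" ]; then
  strip_trailers "$1"
fi
```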
Does anyone happen to know, what, if any, are the ownership/copyright/intellectual property liabilities and/or rights that come from a `co-authored by copilot/claude/codex/whatever`<p>Right now these companies are dealing with legal troubles from taking other's code/IP without honoring the license or copyright.<p>My theory that could be a bit of stretch is; if they can eventually replace all that copyright'd code that is trained into these models with versions their agent services created during the millions of uses daily, they can train future versions on code they wrote. If they hold any ownership stake or usage rights on that code, due to those co-authored lines, which are saying "this agent and by extension the company that owns it was a part of creating this code", they effectively will have laundered the license away from the original owners and removed any way to pursue legal action because they won't even be using the stuff stolen anymore, and worse yet, if they now have their own copyright or other legal grounds due to their agents co-authoring all new code, they could start going after smaller ai companies for the same thing individuals were going after them for.<p>I know that's a pessimistic outlook, but I feel like the co-authored lines are being placed there for more than marketing exposure. It's a commit message after all, how much could that help marketing. It's the ownership/author attribution aspect that concerns me.
Default or mandatory gift authorship?
Microsoft is enshittifying VS Code. I have already started looking for a lifeboat.<p>Imagine what this is going to look like in 2 years.
This is not just a joke, it is a legal nightmare. You may be giving away the copyright ownership, or at least part of it, to Microsoft.
The day I see it does this is the day I switch to zed, or whatever.
I personally don't mind if an AI inserts its "Co-Authored by" tag into commits it has worked on - it's transparency: I used its help, and it should get credit for good work, or disdain for bad.<p>But just inserting the tag because it's being used for git commands - there's a line there.
I’m sorry, I don’t get it: a piece of software needs credit for creating another piece of software? Like, would you credit GCC for adding optimisations to your binary?
It's useful <i>as metadata</i> (like how JPEGs can store the camera model it was taken on, or PDFs contain the program used to generate it), but yes, I don't like LLMs giving themselves co-author credit. I turn this off in Claude Code.
It's a useful warning label for LLMed code. (When an editor isn't gratuitously adding it to non-LLMed code.)
GCC isn't making editorial decisions.
> it should get credit for good work, or disdain for bad<p>Hard disagree. The "credit" it gets is through the form of charging my credit card.<p>Imagine for a moment that you are a company which hired a human developer to create your app rather than AI. In this case, the developer sold his or her right to credit by way of becoming a paid employee. All credit/rights/etc to the code become the ownership of Company, not the developer.
I am paid by my company to write code - does that mean I shouldn't be given credit for the work I create?<p>Dennis Ritchie and Ken Thompson are credited with creating C and Unix, but they were paid employees of AT&T - where's the issue with them being credited for their work?
You, and those others, are people. The clanker is not, and should not get the privileges of a person.
The LLM is just a database. Would you be fine if this was done when cribbing stuff from Github, StackOverflow, tutorials and so on, or do you think some databases are more special than others in this regard, and if so, on what merit?
So much for their commitment to quality: <a href="https://blogs.windows.com/windows-insider/2026/03/20/our-commitment-to-windows-quality/" rel="nofollow">https://blogs.windows.com/windows-insider/2026/03/20/our-com...</a>
Just use vscodium (open-source vscode without Microsoft's spyware) and stop giving an increasingly incompetent org more control over your data, people.<p><a href="https://vscodium.com/" rel="nofollow">https://vscodium.com/</a><p>Claude, amp, cline, kilo etc. plugins all work great with it; for ssh, Open Remote works great with it too.
Previously: <a href="https://news.ycombinator.com/item?id=47958353">https://news.ycombinator.com/item?id=47958353</a>
... they don't require code review by at least one other person before merging..?<p>If this is indicative of practices over at MS these days, it explains a lot.
In typical Microsoft form, they locked further comments on the GitHub PR.
Poor Courtney
Finally the usage metrics look amazing, the masses have woken up
I got tired of Claude adding their signatures to my commits against my instructions (the settings schema changed at some point), so I added a commit-msg hook that blocks multi-line commits. Easy and works like a charm, and would block this sort of M$ intrusion.<p>What a despicable behaviour from M$.
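A commit-msg hook of the kind described above might look like this (a sketch, under the assumption that you never want multi-line message bodies at all):

```shell
#!/bin/sh
# .git/hooks/commit-msg -- reject any message with more than one
# non-empty, non-comment line; this also blocks injected co-author trailers
check_single_line() {
  lines=$(grep -v '^#' "$1" | grep -c .)
  [ "$lines" -le 1 ]
}
if [ -n "$1" ]; then
  check_single_line "$1" || {
    echo "commit-msg: multi-line commit message rejected" >&2
    exit 1
  }
fi
```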
Speaking of which, why does anybody use VS Code?<p><a href="https://vscodium.com/" rel="nofollow">https://vscodium.com/</a><p>I do at work because nobody listens to me, but at home I have never ever used VS Code. I just use Codium.
Wasn’t it discussed here that no copyrights apply to code generated by AI? I’m asking myself whether adding "Co-authored-by: Copilot" means the code is not protected by the GPL, or even allows Microsoft to own your code...
Claude code and codex do this all the time too. Fucking annoying.
There's a large gap between what they do (the same env var has disabled this since the beginning) vs Microsoft bucking its way through AI co-authorship credit in a multi-potential-author china shop, though.
That isn't the same thing.<p>If you're using an AI tool to code, obviously the tool should be given due credit on the commits, for ethics.<p>But in this case Microslop is branding <i>any</i> commits as "co-authored by Copilot", even if the user never used any AI tool.<p>This is a blatant violation of commit authorship ethics and user rights.
Well, that's good news for all the developers working at companies with delusional management proclaiming "100% of code will be written by AI in 6 months"!
Growth hacking at its best /s
"chat.disableAIFeatures": true
I really hope the editor wars don't start again. I've been happily using VsCode for years now. More than happy in fact, it's one of the best pieces of software I've ever used, as evidenced by how AI companies basically started as VsCode forks.<p>But this is going full-throttle on enshittification.<p>WTF happened at microsoft (github, openai partnership, copilot pricing) that all this shit just ramped up to 11?
The editor wars never ended, and VSCode has been user hostile since inception. It came with unavoidable telemetry right out the gate.
vim and emacs are both still great choices.
> WTF happened at microsoft (github, openai partnership, copilot pricing) that all this shit just ramped up to a 11?<p>"Make a great free product so that we can enshittify it later" is an infamous MS playbook. Maybe nothing happened, maybe just the usual MS at work.
If you're angry about this then what are you going to do about it?
Moved to Zed and recommended my team do the same.
Make sure to delete VSCode fully from any PC I have access to and annoy all my coworkers to get rid of it.
I would think the thing to do about it (if you want to use VS Code at all; some people, such as myself, don't) is to send a patch that prevents adding the Co-authored-by line if Copilot is disabled, so that the line is only added when Copilot is enabled.
Turn it off and rage on social media.<p>If it gets bad enough, look into Zed. Their tagline is literally "your last next editor".
Zed currently does not have a revenue stream. It's only a matter of time before the same shenanigans ensue.
They're a commercial entity that sells AI plans and enterprise features.
Like how GNU Emacs is completely saturated with AI now?<p>(That's sarcasm, in case anyone wants to pretend I'm being serious.)
Zed is a nonstarter for me as long as they install additional software (third party runtimes to run LSPs) without asking my permission. That isn't acceptable behavior.
Unfortunately, Zed is years behind VSCode in terms of polish. Microsoft-supported LSPs just work better in VSCode, they are better integrated, and Zed can't do anything about LSPs' memory or performance.
> Zed is years behind VSCode in terms of polish<p>One could think that.
But VSCode is the one that occasionally failed to simply <i>render</i> text.<p>No idea what happened these handful of times, but the UI was just completely screwed up, as if it were one of these "scratch to reveal" games, but with the file’s content (and unresponsive, obviously).
I tried VSCode some years ago (immediately moved to Codium) and yes, it is extremely well-done for what it is. But Zed is good enough for me. Everything I care about for Python, TS/JS/CSS and C programming is available. I do not even miss the JetBrains tooling for these.
I'm rooting for Zed but it does feel quite underbaked still right now.