> the uniquely American animosity toward artificial intelligence<p>The poll they cite shows this is clearly not unique: <a href="https://www.pewresearch.org/global/2025/10/15/concern-and-excitement-about-ai/" rel="nofollow">https://www.pewresearch.org/global/2025/10/15/concern-and-ex...</a>. The US is (just barely) at the top, but no country is anywhere close to "more excited than concerned", and several countries are basically equal to the US.<p>I live in America so that's my perspective, but I would be surprised if this article couldn't accurately describe a lot of other countries.
This is obviously anecdata but I was back home in Central Europe over Christmas and a staggering number of people use ChatGPT (in public). Most usage I've seen was on public transit and in restaurants. My mom has replaced her Google usage with ChatGPT. Meanwhile in the US, my friend group makes it a point of pride not to use "AI" for anything, and 90% are not in tech. I had a feeling that we Americans are being forced to use Copilot/Gemini/whatever more than Europeans and have slightly more animosity towards it.
It’s economic anxiety. The average American wants jobs to come back (look at the election data), and seeing AI shoehorned into every service is not an indicator that industry is going to start hiring lower-level positions anytime soon.<p>The EU has strong worker protections and a robust social safety net. It’s not surprising to hear they are less antagonistic towards AI.
The EU has the same problems, and those 1-3 month notice periods aren't helping that much when the whole world is in recession. Whole segments of IT are getting selectively laid off, QAs for example, while LLMs are being shoved in everywhere.
One can use LLMs and at the same time dislike them and fear the long-term consequences of mass proliferation. I sometimes use them, either to answer a multi-item question or to generate three paragraphs of business-speak spam if that is the only way. I don't like either of these things, spam especially. But even when LLMs are genuinely useful, it's only because normal search engines have failed me. Had they been more powerful, I wouldn't need to ask a random character generator.
It's not even that, everyone feels like it does not work well and its results are wonky/unreliable. I live in central Europe, for what it's worth.
Because "Why do people hate AI" confirms you are in the majority. This title suggests that it is uniquely American and misguided.
Note that most countries in the list are developed countries.<p>Separate research at <a href="https://www.bloomberg.com/news/articles/2025-06-20/trust-in-ai-strongest-in-china-developing-countries-un-study" rel="nofollow">https://www.bloomberg.com/news/articles/2025-06-20/trust-in-...</a> found that low-income countries have higher trust in AI.
Also SK and, to a lesser extent, Japan (and Germany?)<p>I wonder about China. More generally, do countries deemed collectivist (<a href="https://worldpopulationreview.com/country-rankings/collectivist-countries" rel="nofollow">https://worldpopulationreview.com/country-rankings/collectiv...</a>) and supportive of their government tend to lean towards AI?
<a href="https://hai.stanford.edu/ai-index/2025-ai-index-report" rel="nofollow">https://hai.stanford.edu/ai-index/2025-ai-index-report</a><p>> In countries like China (83%), Indonesia (80%), and Thailand (77%), strong majorities see AI products and services as more beneficial than harmful. In contrast, optimism remains far lower in places like Canada (40%), the United States (39%), and the Netherlands (36%).
The chart just shows how rapidly countries have adopted AI. Everyone starts off optimistic, and when the job losses and enshittification start to pile up it quickly turns to distrust.
Not true. China is definitely adopting AI more universally than western countries, and I have friends in China losing design jobs due to AI. They remain optimistic because (1) society typically doesn't blame technological advancement, and (2) they switched to being AI-powered content creators (fortune telling and meme videos) and continued to make money
And presumably much less experience with it.
I'm a European living in the US and my sense is that perceptions towards AI are generally more positive here than in Europe if anything (I do work in tech though which skews things a good bit).<p>This article almost feels like some kind of psychological manipulation: "Jeez Americans, can't you just get on board like the rest of the world?"
Especially lately, it seems to be fashionable to dump on Americans as somehow odd, weird, and afraid of things we supposedly shouldn't be. So much for "tolerance."
Hello citizen,<p>We're going to build a massive data center in your town. In one day, it will use as much electricity as you use in 10 years, and will produce more written words than you could write in 10 lifetimes. Its main purpose will be to eliminate your job, but it will have other uses, like generating images of your daughter in a bikini.<p>We do this in the hopes that it makes me (not you) very rich. Sounds good? Just kidding, we're not asking you!
You could write a low-effort fearmongering "meme" like this for almost every single technological innovation.
Incorrect. Previously at least some lip-service consideration to public benefit was given.<p>Television, for example, had many FCC regulations at its inception to ensure it served the public interest. This of course devolved over time into nominal compliance, like showing community bulletins at 5am when no one was watching.<p>You might be somewhat correct about the release of the Internet upon the public in the early '90s, but imagine if common carrier rules were not in effect for the phone lines everyone was using to access the Internet back then. The phone companies would have loved to collect the per-minute charges AOL initially was doing before they went to unlimited. They already had a data solution in place - ISDN - but it was substantially more expensive from what I understand and targeted to business only.<p>With AI, it's the complete opposite: everything is full steam ahead and the government seems to be giving it its full blessing.
>Previously at least some lip service consideration to public benefit was given<p>The public benefit here is that all sorts of "compliance" is made cheaper. I can see it already in the construction industry. Stuff you used to hire a firm for, you now use cheap labor for; they use AI, you have your "one old guy whose engineering license is kept up to date" check it, it gets some tweaks, then it passes his scrutiny. He submits it. The town approves it because it's legitimately right. High fives all around: three people just did something that used to take a much bigger team. The engineer would have had to decline that job before. The contractor too.<p>Of course, this all comes at the expense of whoever benefited from having that barrier there in the first place.
Sure, the automobile wiped out the horse-and-buggy industry. The difference is that the average person’s quality of life vastly improved as a result.<p>Most genAI has been laughably poor at doing what it’s advertised as doing for the average person. People didn’t ask for and don’t need a shoddy summary of their text notifications, and they don’t want AI to take away their creative hobbies.
Let's see you do it! Do cars and solar panels.
So do you not think that's a big part of why Americans hate it?
You forgot about the other part "you're fired. Your boss says it's because you're replaced by AI".<p>The boss is lying, it's because Trump has caused a severe recession, and your boss is seeing revenue drop, not because AI is truly capable of replacing people. The boss simply wants to fire people due to recession WITHOUT signalling to shareholders that the next quarterly report is going to be ... unpleasant. But that's not what people hear.
<i>Everyone</i> - not just America - hates AI, because it has now become clear that the tool isn't a way to empower the masses (as it was initially sold) but instead to move power and wealth up the ladder. Automated (shit) customer support. AI interviewing. Brainrot content creation. Deepfakes. Parasocial relationships. Dangerous medical advice/therapy. Propaganda. Surveillance.<p>AI is simply not making <i>anyone's</i> life better, so what's there to love exactly?
I do really wonder how it's going to shake out.<p>I fully expect AI can and will be used to hurt the masses in those ways. But on the other hand, the masses may very well be made substantially wealthier by AI cutting through all the arcane memorization and bureaucratic bullshit that society has levied upon them to keep them down over the past 100 years.<p>Why do I need a $$$$ dentist to assess my X-rays? Why not a $$ hygienist assisted by AI and checked by the dentist in the odd cases? Now multiply that sort of labor reshuffling by every sector of the economy.
Every negative outcome of the existence of various generative AIs that you listed already existed beforehand and with the exception of "Brainrot content creation" probably won't even exacerbate the problem.
Therefore, why support/like something that will do only the same but faster?
AI eliminates the chance someone in the system could accidentally take pride in what they do.
Asia doesn't hate AI.<p>It may be true that AI is unpopular in the west more generally though, not just America.
AI is lifechanging for me when it comes to programming; it made something that I've been doing for more than a decade exciting again. But I do accept that it's very bad for everyone involved, since the end goal is to make humans seem like an inefficiency.<p>But again, we pretty much accomplished all the major goals of evolution; now we're pretty much just weaponizing it for enjoyment, pleasure and entertainment to keep ourselves from being bored. For the rest of us it just seems like the natural continuation of the curiosity in our brains: as depressing, dystopian and heartbreaking as it sounds, we are currently heading towards the final innovation we will produce as a human race: creating something superior to our biological lifeforms.<p>Maybe uploading ourselves into a supercomputer isn't as sci-fi as we thought, since it seems living as a normal human will become extremely difficult.
The poll they're referring to is here: <a href="https://www.pewresearch.org/global/2025/10/15/concern-and-excitement-about-ai/" rel="nofollow">https://www.pewresearch.org/global/2025/10/15/concern-and-ex...</a><p>The bucket they're referring to is labeled "more concerned than excited." It seems like rounding that to "hate" (in the headline) is misleading?<p>People can be "more concerned than excited" about the future while still using ChatGPT (or Claude Code) a lot. Even much of the management and workers at top AI labs could be put in the "more concerned than excited" bucket.<p>Maybe the headline should be "Why are Americans more concerned than excited about AI?"
To put it very simply, I and other Americans I discuss this with have absolutely zero faith that all of the promises being made around AI will lead to a better life.
For one, if AI stands to threaten our jobs, these jobs are critical to our very survival in this country that has done everything within its power to remove safety nets and programs that benefit those experiencing hard times.
The state of the American system is so poor when it comes to supporting anyone who is not wealthy that it is fully expected we would be placed into a category of "Undesirables", where more effort will be placed into purging or removing us than helping us get back on our feet.<p>Additionally, Americans are very technologically astute in comparison to other countries; we were early adopters of AI.
I think at this point it's been proven that AI is underperforming all expectations, and thus we are starting to resent the sentiment that more and more of our economy needs to be directed towards this technology that still remains a pipe dream rather than a reality.
My speculations:<p>- Potential job loss, particularly in the bottom half or so of jobs.<p>- Further wealth inequality due to so many factors but primarily because the companies providing these tools will capture the dollars that would’ve been spent on the jobs mentioned above.<p>- NIMBY-ism. AI = data centers and people are overwhelmingly deciding they don’t want these near their homes. I live in the Midwest and it’s been amazing how much opposition has been showing up for these projects.<p>Of course all of these are based on the speculation and “promises” of the tech. Many feel the time is to act now rather than once it’s too late, on the off chance these things do happen.
>- Potential job loss, particularly in the bottom half or so of jobs.<p>"the bottom half" of desk jobs, maybe. But most jobs in "the bottom half" overall are not desk jobs, and therefore aren't going to be replaced with AI anytime soon. Think burger flippers, waiters, and retail clerks.
The US seems to mostly look down on blue-collar work, service workers and other non-desk jobs. Maybe that's part of the reason why AI as a threat to low-skilled desk work is seen as such an offense. Those affected might slide into the "lower class" of people who use their hands to earn an income, and those in the "lower class" will have a harder time climbing up into a desk job.
I wonder how many of those desk jobs actually create value though.<p>Some paper-pushing asshole working for the government demands some paper bullshit. Some other paper-pushing asshole working for bigco produces said paper. Is value actually created? Perhaps there's some risk mitigation, but enough to justify their respective wages? And the need to push that paper back and forth locks the little guy out of competing in that market.<p>Yeah, it'll suck for a lot of people in the interim. But it will also put downward price pressure on a ton of things whose cost makes other value-producing things not worth doing. If legal, design, engineering, etc. services are made cheap in the "boring" cases, then that becomes a competitive advantage for the buyers, which over time trickles down to their buyers and their buyers.
Good. They should be skeptical.<p>> People in many other developed democracies — Japan, Israel, Sweden, South Korea — had warm views of social media in a 2022 survey<p>I guess they cite that as some kind of jab at Americans ("look how crazy and backwards they are"), but, I am sorry, I don't see warm views of ad-fueled tech megacorps sucking people's attention and harvesting clicks as a positive thing. If anything, someone else could turn right around and reword the article as "look at these other countries blindly trusting American tech corps with their privacy and attention".
<a href="https://archive.ph/ooir9" rel="nofollow">https://archive.ph/ooir9</a>
More pertinent question --- why do non-Americans love AI?<p><i>Lawyers use it to draft legal briefs.</i><p>Court sanctions lawyers for fake citations generated by AI.<p><a href="https://natlawreview.com/article/court-sanctions-attorneys-submitting-brief-ai-generated-false-citations" rel="nofollow">https://natlawreview.com/article/court-sanctions-attorneys-s...</a><p>AI is inherently unreliable and untrustworthy. In this case, distrust is not just some emotional reaction but pretty well grounded in fact.<p>Once lawyers learn this themselves from experience, I expect they will move toward legally impressing this upon any who are slow/reluctant to admit as much.<p>Using technology that is widely known to be flawed for any sort of serious work is a textbook example of "negligence".
The example in the article really is true negligence. I can't imagine that lawyer did good work even prior to using AI.<p>My approach is that you're responsible for anything you ship, and I don't care (within reason) how you generated it. Once it hits production, it's yours, and if it has any flaws, I don't want to hear "Well, the AI just missed it or hallucinated." I don't fucking care. You shipped it, it's _your_ mistake now.<p>I use Claude Code constantly. It's a great tool. But you have to review the output, make necessary adjustments, and be willing to put your name on it.
> AI is inherently unreliable and untrustworthy<p>big statement that doesn’t hold up under any technical scrutiny. “AI”, meaning neural networks, is used reliably in production all over the place: signals filtering/analysis, anomaly detection, background blurring, medical devices, and more.<p>assuming you mean LLMs, this still doesn’t hold up. it depends on the system around it. naively asking ChatGPT to construct a legal brief is a stupid use of the tool. constructing a system that can reliably query over and point you to relevant data from known databases is not.
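The comment's point, that reliability comes from the system around the model rather than the model alone, can be sketched in miniature: instead of trusting generated citations, check each one against a trusted database before accepting the draft. This is only an illustration; the simplified citation format, the case names, and the `KNOWN_CASES` set are all invented here, not taken from any real system:

```python
import re

# Hypothetical trusted database of real citations (invented for illustration).
KNOWN_CASES = {
    "Smith v. Jones, 410 U.S. 113",
    "Doe v. Roe, 530 U.S. 428",
}

def extract_citations(text: str) -> list[str]:
    # Match a deliberately simplified "Name v. Name, 123 U.S. 456" pattern.
    pattern = r"[A-Z][a-z]+ v\. [A-Z][a-z]+, \d+ U\.S\. \d+"
    return re.findall(pattern, text)

def verify_brief(text: str) -> tuple[bool, list[str]]:
    """Return (ok, unverified_citations): ok is False if any cited case
    does not resolve to an entry in the trusted database."""
    cites = extract_citations(text)
    bad = [c for c in cites if c not in KNOWN_CASES]
    return (len(bad) == 0, bad)

draft = "As held in Smith v. Jones, 410 U.S. 113, and in Fake v. Case, 999 U.S. 1, ..."
ok, bad = verify_brief(draft)
print(ok, bad)  # the invented second citation fails verification
```

A real guardrail would resolve citations against an actual legal database and would also flag drafts that cite nothing at all (this sketch passes those trivially); the point is only that grounding outputs in known data is a system-design choice, not a property of the model.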
Another reason why the public hates AI is because it has developed a cult around it of people who deny its fallibility and insist, with unshakable faith, that it will make their socially destructive fantasies come true.
Because it's forced down our throat, non-functional, and expensive. Do we really need a whole article for that?
60% of the time, A.I. works every time
AI works every single time. No exceptions!<p>Oh, wait, you are correct, AI never works. I see it now.
The problem is that it's being used for things it shouldn't be used for. Everyone these days knows that you shouldn't use a microwave to cook a steak, but I'm sure when it was first invented there were many chefs who tried.
I hired a local plumber to do some work on a bathroom remodel. They had a human secretary when I first reached out and they sent a plumber to give me a quote; everything was going smoothly.<p>After I get the quote, I call back and get a phone AI assistant instead of the human secretary to handle scheduling the job. The AI assistant does not understand my home address and keeps asking me over and over to repeat it; it was the most Black Mirror experience I've had in years. There was no option to speak to a person, so I hung up without getting the job scheduled. I wrote a bad Google review detailing my experience.<p>I called a different plumbing company (with a human on the other end) and got a second quote that was 30% less than the original; they came out and did the job.<p>Three weeks later, the owner of the first company reached out making excuses as to why he wasn't flagged about my issue sooner and apologizing, but the job was done by that point.<p>I'd rather pay extra for services like plumbing if it means I can talk to a human, because these LLM voice systems can't do basic scheduling calls. If they go off the rails, the customer is totally hosed. I don't wanna hear "oh it'll get better". The future we're headed towards is banal and dystopian beyond comprehension.
As an experienced software engineer, I’m incredibly excited about what AI is going to be able to do; on a scale of 1-10, I’m an 8.<p>As a citizen who doesn’t trust the government, or the media, or giant corporations, I’m also an 8/10 concerned.<p>That means I’m as concerned about AI as I am excited. I might be more concerned than someone who isn’t excited at all about AI (1/10) but is mildly concerned (5/10).
"Why do Americans dislike what billionaires like?"<p>The whiny petulant owner class punches down on the commoners again. To those so privileged: I prefer my privacy and sanity to that of your goofy futuristic fever dreams. Please knock it off. If you want me to build a statue in your honor then fix healthcare.
One hypothesis might be the crony administration’s embrace of AI, with no apparent checks on their behavior.<p>Zuck can get an audience with the President, who can basically override any so-called independent agency’s determinations. Congress is neutered and SCOTUS seems to enable this behavior, all while our power bills go up and we fuel data centers by polluting our environment.<p>So I wonder if it’s not just AI; it’s AI with seemingly no recourse from the public to check capitalist excesses.
I just want to say that the AI companies themselves use extremely hyperbolic, apocalyptic language. What else do we expect?
The only Americans I've seen excited about AI are CEOs, which should come as a shocker to nobody.
I think people see through it as a way for corporations to cut spending and jobs while offering a lower quality product.<p>My apartment complex recently switched to an "AI assistant" that replaced the front office person. I haven't heard a single positive thing about it from anybody. It's utterly terrible, and they're absolutely not "passing the savings on" to residents by lowering rents.<p>Even with "vibe coding", we accept that it does a shit job, because it does an "okay enough" shit job that it's often usable, but nobody wants to maintain "vibe code".
For me it's oversaturation. AI "features" are getting pushed into every product whether I want them or not. I've found AI useful in some contexts, but having it forced in everywhere screams desperation. Do people actually want this stuff or are companies just hoping they do?
A much better question is why anyone likes AI or sees any value in it.
It is very confident in producing first output. Confidence works on a lot of people. And then, well, there is a lot of demand for good-enough-looking content. Be it email, easy answers or art. Or translation.<p>I can actually see value in AI-generated content for stuff that is already considered low value. Slap an image on your spam post. Hell, just write that spam post. Do that enough times and you might make more than you spend in time and money.<p>Seems like, on net, for individuals doing this there is some value in it.
It does work, at a good enough level, for people who would have needed an expensive professional in the past.
The current AI bubble seems like a bad proposition for most people regardless of how it shakes out. The way I've seen it described elsewhere is: either the bubble pops, causing a significant recession, or it doesn't and loads of people lose their livelihoods to AI. In either case average people lose.<p>The problems with AI aren't technical; they are political and economical. This topic is discussed in Max Tegmark's "Life 3.0", in which he theorises about various outcomes if we do invent AGI. He describes one possibility where we move to a post-scarcity society and people spend their days doing art and whatever else they fancy. Another option looks more like the world described in Elysium. I suspect the latter prediction feels more likely to most people.
Because it's a marginal improvement and though useful it's being sold by degenerate tech grifters as the science fiction dream come true to increase their influence and wealth. Everyone is sick and tired of these overgrown teenagers running amok.
I'm not American but I guess they don't want it because it's shit.
A few years ago in my lefty friend circles, everyone was talking about Graeber’s “Bullshit Jobs” and how we’ve built a system to keep us occupied with meaningless busy work.<p>But now AI is threatening (promising?) to make those jobs go away, and the same folks are pissed.<p>If you wanted people to get on board with this, there should probably be some sort of UBI/expansive social safety net in place, because it turns out that if people have to choose between unfulfilling drudgery and not affording food… people take the drudgery
If the job's point wasn't creating things, a machine that creates things can't take them away. Even if it was able to do what the hype claims.<p>AI pushers are promising to take other jobs away, not the bullshit ones.
i think the press fixates on AI job losses for creatives (e.g., voice actors, designers, etc), but in absolute numbers the losses are going to be much larger for bullshit middle managers. the types of people who couldn't <i>really</i> explain what they do at work and just fill their days sitting in meetings and forwarding emails.
People hate AI instead of humans - themselves (job), others (customer support). People love AI instead of nothing - coding help, medical questions when not at doctor's office, companionship when quality human interaction is scarce. You can get any answer you want depending on how you phrase your question.
The U.S. is the most media-saturated realm in the world. Americans' minds have been saturated by artificial information for 50 years, and since the mid-2000s FAANG era the American psyche has been cracking apart with the strain. In 2025, AI is amplifying the psychic chaos and creating an economic existential disaster while accelerating the most tragic and corrosive aspects of monetary wealth. But other than this, what's to hate?
I feel like it's because the people pushing it have a terrible track record. They've shown themselves to be manipulative, untrustworthy, and willing to do whatever it takes to make themselves rich. Why should we trust them with AI?<p>In better hands the technology would probably make the world a better place. But not in the hands of silicon valley billionaires.
Uhh AI bros are promising Wall Street they're going to eliminate wages while telling us we're on the precipice of a workless utopia (while also being in bed with the far right who want to destroy anything resembling a social safety net). That's not even taking into account the Thiel/Yarvin/Musk/Trump/etc techno-feudalist aspirations ("Freedom" cities).<p>I love the tech, and despise the people pushing it and how they want to use it. What should be used to free us from some work and allow us to focus more on human things (family, arts) and science will instead be used to further divide and subjugate us, all in the interest of shifting more wealth and power from the working class up to those who have both in abundance.
The worst part is that AI is getting blamed for this. It's not why these things are happening.<p>1) Trump's tax-and-spending policy (last term) has caused a severe recession.<p>2) Trump's recession is causing people to fire workers, BUT CEOs and ... don't want to admit it is because revenue is dropping and about to drop more, i.e. because management is close to being forced to report a total disaster of a quarterly report. So no, you're fired "because AI can do your job" (meanwhile, actual demonstrations of AI actually doing a single minimum-wage job ... even OpenAI "for some reason" doesn't demonstrate that).<p>What people don't seem to get is that AI's history of overpromising and underdelivering is about 3x as long as Nuclear Fusion's.
Because the entire press spent all year telling the public to hate it, and neither the writers nor the readers know much about it.
Sometimes I feel like I’m living in a parallel universe because all I’ve seen for the past 3+ years until very recently has been breathless think-pieces about how this specific version of AI is the future and everyone has to get on board or be left behind. Every time one of these CEOs with massive vested interests open their mouths, the press goes to bat for them and publishes dozens of new scary headlines showing number go up.
I think that's true for the tech & financial press, for obvious reasons. Outside of that bubble journalists and writers have been very anxious, given AI seems to directly threaten their profession even further.
The tech press does not universally view AI as useful. Indeed, there is an entire subindustry of anti-tech press that stands ready to reflexively denounce AI at every opportunity. But even seemingly neutral or pro-tech outlets such as Ars Technica are ambivalent. Ars has one editor apparently dedicated to saying negative things about Gemini, for example.
Nor do most folks using it. Which isn't totally novel: I have almost no idea how an internal combustion engine works. But I do know enough to realize that it's not appropriate to drive my car on a bike path.