If an app makes a diagnosis or a recommendation based on health data, that's Software as a Medical Device (SaMD) and it opens up a world of liability.<p><a href="https://www.fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd" rel="nofollow">https://www.fda.gov/medical-devices/digital-health-center-ex...</a>
Not surprised. Another example is Minecraft-related queries. I'm searching with the intention of eventually going to a certain wiki page at minecraft.wiki, but started to just read the summaries instead. It will combine fan forums discussing desired features/ideas with the actual game bible at minecraft.wiki - so it mixes one source of truth with one source of fantasy. The result is ridiculously inaccurate summaries.
A few months ago in a comment here on HN I speculated about the reason an old law might have been written the way it was, instead of more generally. If it had been written without the seemingly arbitrary restrictions it included there would have been no need for the new law that the thread was about.<p>A couple hours later I decided to ask an LLM if it could tell me. It quickly answered, giving the same reason that I had guessed in my HN comment.<p>I then clicked the two links it cited as sources. One was completely irrelevant. The other was a link to <i>my</i> HN comment.
I had a similar thing happen to me just today. A friend of mine had finished a book in a series. I have read the series but it was almost 10 years ago, and I needed a refresher with spoilers, so I went looking.<p>Well, some redditor had posted a comparison of a much later book in the series, and drawn all sorts of parallels and foreshadowing and references between this quite early book I was looking for and the much later one. It was an interesting post so it had been very popular.<p>The AI summary completely confused the two books because of this single reddit post, so the summary I got was hopelessly poisoned with plot points and characters that wouldn't show up until nearly the conclusion. It simply couldn't tell which book was which. It wasn't quite as ridiculous as having, say, Anakin Skywalker face Kylo Ren in a lightsaber duel, but it was definitely along those same lines of confusion.<p>Fortunately, I finished the later book recently enough to remember it, but it was like reading a fever dream.
I find it's tricky with games, especially ones updated as frequently as Minecraft over the years. I've had some of this trouble with OSRS. It brings in old info, or info from a League/Event that isn't relevant. Easier to just go to the insanely well-curated wiki.
What's interesting to me is that this kind of behavior -- slightly-buffleheaded synthesis of very large areas of discourse with widely varying levels of reliability/trustworthiness -- is actually sort of one of the best things about AI research, at least for me?<p>I'm pretty good at reading the original sources. But what I don't have in a lot of cases is a gut that tells me what's available. I'll search for some vague idea (like, "someone must have done this before") with the wrong jargon and unclear explanation. And the AI will... sort of figure it out and point me at a bunch of people talking about exactly the idea I just had.<p>Now, sometimes they're loons and the idea is wrong, but the search will tell me who the players are, what jargon they're using to talk about it, what the relevant controversies around the ideas are, etc... And I can take it from there. But without the AI it's actually a long road between "I bet this exists" and "Here's someone who did it right already".
I run a small business that buys from one of two suppliers of the items we need. The supplier has a TRASH website search feature. It's quicker to Google it.<p>Now that AI summaries exist, I have to scroll past half a page of results and nonsense about a Turkish oil company before I find the item I'm looking for.<p>I hate it. It's such a minor inconvenience, but it's just so annoying. Like a sore tooth.
Or you can take the alternative approach, where Microsoft's own "Merl" support agent says it knows anything to do with Minecraft, and then replies to basically any gameplay question with "I don't know that".
"Dangerous and Alarming" - it tough; healthcare is needs disruption but unlike many places to target for disruption, the risk is life and death. It strikes me that healthcare is a space to focus on human in the loop applications and massively increasing the productivity of humans, before replacing them...
<a href="https://deadstack.net/cluster/google-removes-ai-overviews-for-medical-queries" rel="nofollow">https://deadstack.net/cluster/google-removes-ai-overviews-fo...</a>
> <i>Google … constantly measures and reviews the quality of its summaries across many different categories of information, it added.</i><p>Notice how little this sentence says about whether anything is any good.
This incessant, unchecked[1] peddling is what robs "AI" of the good name it could earn for the things it's good at.<p>But alas, infinite growth or nothing is the name of the game now.<p>[1] Well, not entirely unchecked, thanks to people investigating.
The fact that it reached this point is further evidence that if the AI apocalypse is a possibility, common sense will not save us.
... at the same time, OpenAI launches their ChatGPT Health service: <a href="https://openai.com/index/introducing-chatgpt-health/" rel="nofollow">https://openai.com/index/introducing-chatgpt-health/</a>, marketed as "a dedicated experience in ChatGPT designed for health and wellness."<p>So interesting to see the vastly different approaches to AI safety from all the frontier labs.
Ars rips off this original reporting, but makes it worse by leaving out the word "some" from the title.<p>‘Dangerous and alarming’: Google removes some of its AI summaries after users’ health put at risk:
<a href="https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation" rel="nofollow">https://www.theguardian.com/technology/2026/jan/11/google-ai...</a>
Removing "some" doesn't make it worse. They didn't include "all" AI titles which it would. "Google removes AI health summaries after investigation finds dangerous flaws " is functionally equivalent to "Google removes some of its AI summaries after users’ health put at risk"<p>Oh, and also, the Ars article itself still contains the word "Some" (on my AB test). It's the headline on HN that left it out. So your complaint is entirely invalid: "Google removes some AI health summaries after investigation finds “dangerous” flaws"
How could they even offer that without a Medical Device license? Where is the FDA when it comes to enforcement?
But only for some highly specific searches, when what it should be doing is checking if it's any sort of medical query and keeping the hell out of it because it can't guarantee reliability.<p>It's still baffling to me that the world's biggest search company has gone all-in on putting a known-unreliable summary at the top of its results.
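To be concrete about what "keeping the hell out of it" could look like: the gate itself is trivial to build, even as a crude recall-over-precision filter; the hard part is a search company accepting the lost summary coverage. A toy sketch in Python (the term list and the stub render function are made up for illustration, nothing like Google's actual pipeline):<p><pre><code># Toy sketch: suppress the AI summary for anything that might be a medical
# query. Tuned for recall, not precision -- a false positive just means the
# user sees plain links, which is the safe failure mode.

MEDICAL_TERMS = {
    "symptom", "dosage", "dose", "treatment", "side effect", "diagnosis",
    "medication", "cancer", "blood pressure", "pregnant", "pain",
}

def looks_medical(query: str) -> bool:
    """Crude check: any medical-ish term in the query blocks the summary."""
    q = query.lower()
    return any(term in q for term in MEDICAL_TERMS)

def render_results(query: str) -> str:
    links = f"[ten blue links for {query!r}]"
    if looks_medical(query):
        return links  # no generated summary at all
    return f"[AI overview of {query!r}]\n" + links

print(render_results("ibuprofen max dosage"))      # links only
print(render_results("minecraft redstone clock"))  # summary + links
</code></pre><p>In practice you'd want a trained classifier rather than a keyword list, but the design point stands: when the cost of a wrong summary is bodily harm, the default should be to show nothing generated at all.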
The AI summary is total garbage. Probably the most broken feature I've seen released in a while.
Google is really wrecking its brand with the search AI summaries thing, which is unbelievably bad compared to their Gemini offerings, including the free one. The continued existence of it is baffling.
It's mystifying. A relative showed me a heavily AI-generated video claiming a Tesla wheelchair was coming (self-driving of course, with a sub-$800 price tag). I tried to Google it to quickly debunk and got an AI Overview confidently stating it was a real thing. The source it linked to: that same YouTube video!
Yeah. It's the final nail in the coffin of search, which now actively surfaces incorrect results when it isn't serving ads that usually deliberately pretend to be the site you're looking for. The only thing I use it for any more is to find a site I know exists but I don't know the URL of.
The AI summaries clearly aren’t bad. I’m not sure what kind of weird shit you search for that you consider the summaries bad. I find them helpful and click through to the cited sources.
Good. I typed in a search for some medication I was taking and Google's "AI" summary was bordering on criminal. The WebMD site had the correct info, as did the manufacturer's website. Google hallucinated a bunch of stuff about it, and I knew then that they needed to put a stop to LLMs slopping about anything to do with health or medical info.
huh.. so google doesn't trust its own product.. but openai and anthropic are happy to lie? lol
Google for "malay people acne" or other acne-related queries. It will readily spit out the dumbest pseudo science you can find. The AI bot finds a lot of dumb shit on the internet which it serves back to you on the Google page. You can also ask it about the Kangen MLM water scam. Why do athletes drink Kangen water? "Improved Recovery Time" Sure buddy.<p>Also try "health benefits of circumcision"...
I agree with your point.<p>Going off-topic: the "health benefits of circumcision" bogusness has existed for decades. The search engines return bogus information because the topic is mostly relevant for its social and religious implications.<p>The topic is personally relevant to me, and the discussion is similar to politics: most people don't care and will stay quiet, while a very aggressive group will sell it as a panacea.
The problem isn't that search engines are polluted; that's well known. The problem is that people perceive these AI responses as something greater than a search query; they view them as an objective viewpoint that was reasoned out by some sound logical method -- and anyone who understands the operation of LLMs knows that they don't really do that, except in some very specific edge cases.
ChatGPT told me I'm the healthiest guy in the world, and I believe it