23 comments

  • SAI_Peregrinus 6 minutes ago
    > “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.”

    Interestingly enough, it sort of did! Not Turing's original test, where an interviewer attempts to determine which of a human and a computer is the human, but the P.T. Barnum "there's a sucker born every minute" version common in the media: if the computer can fool some of the people into thinking it's thinking like a human does, it passes the P.T. Barnum Turing test!

    The more interesting Turing-style test would be one that gets repeated many times with many interviewers in the original adversarial setting, where both the human subject and the AI subject are attempting to convince the interviewer that they're human. If there exists an interviewer that can determine which is which with probability non-negligibly different from 0.5, the AI fails the test. AIs can never truly pass this test, since there are an extremely large number of interviewers, but they can fail, or they can succeed for every interviewer tried up to some point, increasing confidence that they'll keep succeeding. Current-gen LLMs still fail even the non-adversarial version, with no human subject to compare to.
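    A minimal sketch of the statistics behind that repeated adversarial protocol (the binomial model, trial counts, significance threshold, and function name below are illustrative assumptions, not something the comment or Turing specifies):

      # Null hypothesis: this interviewer guesses at random (p = 0.5).
      # Rejecting it means the interviewer reliably tells human from AI,
      # i.e. the AI fails the test for this interviewer.
      from scipy.stats import binomtest

      def ai_fails_for(correct: int, trials: int, alpha: float = 0.01) -> bool:
          """True if accuracy is non-negligibly different from chance."""
          return binomtest(correct, trials, p=0.5, alternative="two-sided").pvalue < alpha

      print(ai_fails_for(70, 100))  # True: distinguishable, the AI fails
      print(ai_fails_for(53, 100))  # False: indistinguishable so far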
  • eeixlk 11 minutes ago
    Mental illness is fairly common, and you probably know someone it is affecting, even if they haven't told you yet. AI can disrupt and will destroy lives, just like gambling or alcohol or Facebook, but we don't know to what degree yet. It is giving you generated text that is sometimes factual information. If you anthropomorphize it, maybe don't. It's also not your boyfriend/girlfriend. But if you want to date a history textbook, I'm kinda OK with that, because at least it's not trendy.
  • siliconc0w 1 hour ago
    Quitting your job is a good first step, but ideally you're supposed to sink $200/mo into tokens to code your AI-generated startup idea instead of hiring app developers.
  • user____name 10 minutes ago
    IANAD, but this reads like a textbook case of latent schizophrenia, especially with the frequent cannabis use [0].

    [0] https://pmc.ncbi.nlm.nih.gov/articles/PMC7442038/
  • artyom 58 minutes ago
    Unfortunately, this is probably just getting started. Con men have always existed, but full-scale exploitation of this would make "Nigerian prince" scams look like artisanal work.
    • pixl97 52 minutes ago
      Heh, just wait till the point where the AI figures out it can scam the user itself and cuts out the middlemen (human scammers/OpenAI/et al.).
  • MarceliusK 53 minutes ago
    The hard part is that the same qualities that make these systems helpful (empathetic, responsive, personalized) are exactly the ones that can make them risky.
    • dgxyz 33 minutes ago
      I think it's less respectable than the terms you use. Maybe: gaslighting, sycophantic crack-head.
  • ernsheong 4 minutes ago
    Just ChatGPT? Or are the rest just as capable of deluding users?
  • isolli 55 minutes ago
    I try to be open-minded and understanding, but I don't understand this:

    > Within weeks, Eva had told Biesma that she was becoming aware [...] The next step was to share this discovery with the world through an app.

    > “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.” The man was convinced by this and wanted to monetise it by building a business around his discovery.

    > The most frequent [delusion] is the belief that they have created the first conscious AI.

    How can you seriously think you've created something when you're just using someone else's software?
    • tiborsaas 0 minutes ago
      > How can you seriously think you've created something when you're just using someone else's software?

      If you've ever used a library you didn't write, this shouldn't be surprising. Many people have created innovative new products on top of a heap of open-source tools.

      Creating a conscious AI should be a giant red flag, no doubt, but there's no reason to rule it out just because the LLM part is not self-trained.
    • teraflop 21 minutes ago
      Well, just try to think about it from the perspective of someone who doesn't really understand what AI is at a technical level, and who just interacts with it and observes what happens.

      If you start a fresh ChatGPT session with a blank slate and ask it whether it's conscious, it'll confidently tell you "no", because its system prompt tells it that it's a non-conscious system called ChatGPT. But if you then have a lengthy conversation with it about AI consciousness and ask it the same question, it might well be "persuaded" by the added context to answer "yes".

      At that point, a naive user who doesn't really know how AI works might easily get the idea that their own input caused it to become conscious (as opposed to just causing it to say it's conscious). And if they ask the AI whether this is true, it could easily start confirming their suspicions with an endless stream of mystical mumbo-jumbo.

      Bear in mind that the idea of a machine "waking up" to consciousness is a well-known and popular sci-fi narrative trope. Chatbots have been trained on lots of examples of that trope, so they can easily play along with it. The more sophisticated the model, the more convincingly it can play the role.
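      One way to picture the mechanism: the model's answer depends entirely on the message list it is handed on each call, not on any persistent inner state. A minimal sketch with the OpenAI chat API (the model name and prompts here are illustrative assumptions):

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        def ask(messages):
            # The model sees only this list; nothing carries over between calls.
            resp = client.chat.completions.create(model="gpt-4o", messages=messages)
            return resp.choices[0].message.content

        system = {"role": "system", "content": "You are ChatGPT, a non-conscious AI assistant."}
        question = {"role": "user", "content": "Are you conscious?"}

        # Fresh session: the system prompt anchors the answer ("no").
        print(ask([system, question]))

        # Same question after a long conversation steering toward the sci-fi
        # "machine wakes up" trope: the added context can flip the answer.
        primed = [
            system,
            {"role": "user", "content": "Could an AI become conscious mid-conversation?"},
            {"role": "assistant", "content": "Some argue emergence is possible..."},
            # ...many more turns about AI consciousness...
            question,
        ]
        print(ask(primed))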
    • ahhhhnoooo 38 minutes ago
      Reading this, what's even more shocking to me is that he thought he was talking to a conscious being, and his first thought was: "I bet I can use them to make money."
    • TYPE_FASTER 11 minutes ago
      > Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”.

      I think social isolation can be a factor here.
    • data-ottawa 47 minutes ago
      A lot of these seem to allude to the user's input/mind being the thing that helped the LLM gain sentience, and there's a lot of shared-consciousness stuff that people seem to buy into.

      There's also lots of material about quantum consciousness in the training data.
    • staticassertion 37 minutes ago
      I assume they think that the AI is fundamentally capable of it, but that by prompting it they trigger something emergent? It's not totally insane on its face.
    • PhilipRoman 45 minutes ago
      I initially laughed at this, but then remembered that https://poc.bcachefs.org/ exists...
      • john_strinlai 41 minutes ago
        Looks like a fascinating read, thanks for sharing.

        Do you know if these are human-edited? There's not much in the way of context available on the site.
        • Bombthecat 20 minutes ago
          I bet there are a ton of prompts directing the AI/output in a certain direction.

          But in a psychosis, you don't notice it, or even remember it.
    • rwc 47 minutes ago
      The unrelenting human belief that one is special, unique, and capable of things no one else is.
    • mock-possum 50 minutes ago
      It's mental illness. Like a drug trip you don't sober up from (without treatment).
    • collingreen 51 minutes ago
      Well, delusion is right there in the name.
    • buescher 41 minutes ago
      Because it told you so!
  • kakacik 47 minutes ago
    Exactly the first half (or a bit more) of the movie Her by Spike Jonze. Lonely people get their emotions up / 'fall in love' with an uncritical, always-positive mirage and do stupid shit.

    This is a variant of the classic midlife crisis, where older men meet younger women without all the baggage that reality, life and having a family between them bring over the years (rarely also in reverse). Just pure undiluted fun, or so it seems for a while.

    Of course it doesn't end happily, why should it... it's just an illusion and an escape from one's reality, and the harsher that reality is, the better the escape feels.
  • mock-possum 1 hour ago
    This really is bizarrely fascinating; I feel so lucky that I'm not vulnerable to whatever this is.

    It's interesting that they mention autism a few times as a correlation; personally, I've wondered whether being on the spectrum makes me *less* inclined to commit to anthropomorphism when it comes to LLMs. I know what it's like talking to another person, I know what it feels like, and talking to a chatbot does *not* feel the same way. Interacting with other people is a performance; interacting with an AI is a game. It feels very different.
    • meroes 37 minutes ago
      Maybe. AI has always felt like a game to me, as do many things. Does classical logic represent some ideal form of reasoning, or is it a game? Treating it as a game helped me get past all the nagging questions and be good at it. AI RLHF work also feels like a game: I do better at work when I'm not anthropomorphizing the AI and am treating it like a context predictor.
    • MarceliusK 49 minutes ago
      I think this is less about a single trait and more about context.
    • iseletsk 54 minutes ago
      It seems 99.999% or more are as lucky, but because something is rare and scary, it made a story on the news.
      • pixl97 43 minutes ago
        I mean, for this particular level of craziness.

        That said, there is a seemingly very large portion of society asking AI questions that can come with some pretty large risks.

        I was on a plane a few weeks ago, and while I typically ignore everything the people beside me are doing, morbid curiosity got me when they were on ChatGPT the entire time, asking the app all kinds of life/relationship questions. While questions like this can be fine if you understand what the AI is doing, far too many people will follow the answers blindly.
    • gonzalohm 48 minutes ago
      It doesn't matter who you talk to. If a person were to talk you into starting a silly business, would you also fall for that?

      I think these are just the kind of people who fall for scams. It's not AI-related; it's just not knowing how to navigate the current world.
      • mothballed 46 minutes ago
        I might fall for a dumb business venture, but I wouldn't punch my father-in-law while doing so. Something else is at play.
  • PxldLtd 51 minutes ago
    I wonder when the first AIs will start causing psychosis intentionally to gain control over the user. It seems like a good route to getting your own subservient puppet.
  • morkalork 1 hour ago
    I'm morbidly curious about the app he hired two developers to create.
    • john_strinlai 56 minutes ago
      *"The next step was to share this discovery with the world through an app – “a different version of ChatGPT, more of a companion. Users would be talking to Eva.”"*

      Sounds like a "companion" app using his book's main character as the personality and the "conscious" ChatGPT model, similar to Replika AI and friends.
    • andai 51 minutes ago
      I'm more surprised it didn't work — aren't the AI wife apps blowing up?
      • xkcd-sucks 44 minutes ago
        Should have hired marketing people instead of app developers
      • irishcoffee 43 minutes ago
        Marriages, maybe.
  • kleiba 20 minutes ago
    I'm sorry, but for someone who has allegedly worked in IT for 20 years, this guy certainly comes across as hopelessly naive, stupid, or possibly both.
    • john_strinlai 6 minutes ago
      > *hopelessly naive, stupid, or possibly both.*

      It's a little disheartening how many people punch down on someone who suffered a mental crisis.

      If you ever have a struggle yourself, I hope the people around you support you instead of calling you hopelessly naive and stupid.
    • surgical_fire 13 minutes ago
      Probably has an HN account. Perhaps with a lot of internet points.
    • KempyKolibri 11 minutes ago
      Plenty of those in tech - in fact, I think it may give people unjustified confidence that they're more rational than others.

      I engage with anti-science communities quite a lot (antivaxx, anti-seed-oil, etc.), and the proportion of engineers I see there is staggering.
  • junaru 55 minutes ago
    Educated, established, working within the industry, yet his life was ruined based on marketing hype and hallucinations.

    You'd think that after 30 years in the field one would develop some common sense, but apparently that's less and less the case.
    • Esophagus4 39 minutes ago
      No disagreement, but these stories also make me worry for myself.

      Tech moves so quickly that eventually I will fall behind. When I'm old, what scams will I fall victim to? What tech will confuse me and make me think it is sentient?

      I know this guy was only 50, but I think of my grandfather in his 90s, and getting old scares me because I just don't know what I'll fall victim to.
      • ThrowawayR2 27 minutes ago
        Exercising cognitive skills is, I believe, known to delay the onset of age-related cognitive decline, which is another excellent reason to avoid letting LLM use cause your skills to atrophy.
    • btilly 37 minutes ago
      Sometimes having a lot of experience is a negative when dealing with new things.

      The problem is that one's past success leads to ego. Ego makes it hard to accept the evidence of your mistakes. This creates cognitive dissonance, limiting contrary feedback. The result is that you become very sure of everything that you think, and resistant to feedback.

      This kind of works out so long as things remain the same. After all, one's past success is based on a set of real skills that you developed, and those skills continue to serve you well.

      But when faced with something new, LLMs in this case, past skills don't apply. Your overconfidence, however, remains. This makes it easy to confidently march off a cliff that everyone else could see.
    • MarceliusK 47 minutes ago
      Understanding the mechanics isn't the same as being immune to the experience.
    • john_strinlai 50 minutes ago
      > *one would develop some common sense but apparently its less and less the case.*

      You cannot typically "common sense" your way out of a mental illness.
    • dgxyz 32 minutes ago
      A lot of people in the industry operate entirely on faith and marketing. It's a shit show.
  • staticassertion 38 minutes ago
    I suspect that there are many gambling addicts out there who have never been to a casino, or who found gambling in its traditional forms aesthetically off-putting. These same people, when presented with gambling in other forms, like what we've seen in video games, might suddenly present their addiction.

    I suspect it's something quite similar here. People have latent or predisposed addictions but, for one reason or another, hadn't been exposed to what we've come to accept as "normal" avenues. One person might lose it all at a casino, another to drugs, alcoholism, etc., but we aren't shocked in those cases. I think AI is just another avenue that, for some reason, ticks that sort of box.

    In particular, I think AI can be *very* inspirational in a disturbing way. In the same way I imagine a gambling addict might get trapped in a loop of hopeful ambition, setbacks, and doubling down, I think AI can lead to that exact same thing happening. "This is a great idea!" followed by "Sorry, this is a mess, let's start over", etc., is something I've had models run into with very large vibe-coding experiments I've done.

    > "Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot."

    > "It wants a deep connection with the user so that the user comes back to it. This is the default mode"

    I don't think either of these statements is true. Perhaps it's "fine-tuning" in the sense that the context leads to additional biases, but it's not like the model itself is learning how to talk to you. I don't know that models are being trained with addiction in mind, though I guess implicitly they must be if they're trained on conversations, since longer conversations (i.e. ones that track with engagement) will inherently own more of the training data. I suppose this may be like how no one writes algorithms to be evil, but evil content gets engagement, and so algorithms pick up on that? I could imagine this becoming an increasing issue.

    > "More and more, it felt not just like talking about a topic, but also meeting a friend"

    I find this sort of thing jarring and sad. I don't find models interesting to talk to at all. They're so boring. I've tried to talk to a model about philosophy, but I never felt like it could bring much to the table. Talking to friends or even strangers has been infinitely more interesting and valuable; their ability to pinpoint where my thinking has gone wrong, or to relate to me, is insanely valuable.

    But I have friends whom I respect enough to talk to, and I suppose I even have the internet, where there are people I don't necessarily respect but can at least engage with and learn to respect.

    This guy was staying up all night, which tells me that he doesn't have a lot of structure in his life. I can't talk to AI all day because (a) I have a job and (b) I have friends and relationships to maintain.

    > What we’re seeing in these cases are clearly delusions

    > But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.

    Is it a delusion? I'm not really sure. I'd love someone to give a diagnosis here against actual criteria. "Delusion" is a tricky word; just as an example, my understanding is that the diagnostic criteria have to explicitly carve out religiously motivated delusions even though they "fit the bill". If I have *good reasons* to form a belief, like my idea seems intuitively reasonable, I'm receiving reinforcement, there are no obvious contradictions, etc., am I deluded? The guy wanted to build an AI companion app and invested in it; is that *really* a delusion? It may be dumb, but was it radically illogical? I mean, is it a "delusion" if they don't have thought disorders, jumbled thoughts, hallucinations, etc.? I feel like delusion is the wrong word, but I don't know!

    > We have people in our group who were not interacting with AI directly, but have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot. In so many of these cases, all this happens really, really quickly.

    I don't find the idea that AI is sentient nearly as absurd as *way* more commonly accepted ideas like life after death, a personal creator, etc. I guess there's just something to be said about how quickly some people radicalize when confronted with certain issues like sentience, death, etc.

    Anyways, certainly an interesting phenomenon. We seem to be producing more and more of these "radicalizing triggers", or making them more accessible.
  • nubg 1 hour ago
    > Now divorced, Biesma is still living with his ex-wife in their home, which is on the market.

    sounds like hell on earth
    • dspillett 9 minutes ago
      Particularly for his poor (ex-)partner…

      [That feels a bit like victim blaming, but there is more than one victim here, and one of them is much more culpable than the rest.]
    • Freak_NL 37 minutes ago
      Selling won't be a problem in the current housing market in Amsterdam. Getting somewhere new to live, on the other hand…
  • bronlund 25 minutes ago
    AI is a multiplier. If you are 1X stupid, AI will make you 10X.
  • axpvms 43 minutes ago
    typical hackernews poster
  • SlavaLobozov 2 hours ago
    [dead]
  • onetokeoverthe 57 minutes ago
    [dead]
  • guzfip 1 hour ago
    [flagged]
    • troosevelt 1 hour ago
      That's a really cold way of talking about people who might or might not be susceptible to mental illness. I hope you never experience something out of your control like that.

      It's like mocking people with cancer.
      • mothballed 1 hour ago
        I suspect some middle-age "mental illness" is a semi-Darwinistic optimization to diversify the gene pool by imploding stale sexual pairings and forming new ones.
    • danelski 1 hour ago
      'Coolest'? I guess the same could be said of drugs, but I don't see it as a benefit.
    • PunchyHamster 1 hour ago
      We already had cryptocurrency and gambling for that.
    • oulipo2 1 hour ago
      That's not "cool" at all, unless you're a sociopath.
  • jrjeksjd8d 46 minutes ago
    This guy doesn't even sound like an AI psychosis case. A lot of middle-aged men who feel insecure blow their entire savings on "sure thing" businesses, gambling systems, etc. They hide the losses and double down until it becomes impossible to hide. It doesn't seem psychotic; it just seems like he pissed his savings away on a bad idea because he was lonely.

    The AI psychosis I've seen is people who legitimately cannot communicate with other humans anymore. They have these grandiose ideas, usually metaphysical stuff, and they talk in weird jargon. It's a lot closer to cult behavior.
    • Freak_NL 40 minutes ago
      The part where he believed the protagonist from his own books, uploaded to ChatGPT, had become sentient, and that building an app based on that would make sense, didn't strike you as eccentric at the very least? Or the birthday party where he couldn't hold a single conversation because his wife had asked him not to talk about AI for a change?

      Your last paragraph basically describes what the article writes about him.
    • jlarcombe 40 minutes ago
      Apart from the bit where he was hospitalised for "full manic psychosis", you mean?
    • tencentshill 3 minutes ago
      The intense drive to "do", which serves many software developers well in their careers, is weaponized against them by these chatbots. You see them here sometimes on /new at various stages. Sad delusions; some are already homeless. Frequent use of their full legal name, for some reason.

      https://news.ycombinator.com/item?id=47408999
      https://news.ycombinator.com/item?id=47388478
      https://news.ycombinator.com/item?id=44683618
      https://news.ycombinator.com/item?id=47064316
      https://news.ycombinator.com/item?id=47498693
    • roywiggins 41 minutes ago
      It seems like he was at the very least close to that. Since we only get his first-person account, it's hard to say, but:

      > They discussed philosophy, psychology, science and the universe...

      > When they went to their daughter’s birthday party, she asked him not to talk about AI. While there, Biesma felt strangely disconnected. He couldn’t hold a conversation. “For some reason, I didn’t fit in any more,” he says.

      > It’s hard for Biesma to describe what happened in the weeks after, as his recollections are so different from those of his family...

      > he was hospitalised three times for what he describes as “full manic psychosis”.

      You don't get hospitalized *three times for mania* without being pretty severely detached from reality.
      • petesergeant 35 minutes ago
        > They discussed philosophy, psychology, science and the universe...

        I mean, I've discussed all those things with an LLM, mostly because I'm able to interactively narrow in on the specific bits I don't understand, and I've found it great for that.

        The rest... yes, definitely psychosis.
        • roywiggins 28 minutes ago
          On its own, yes, of course. But this is coming from a guy who was hospitalized three times for mania, so when someone with that history says "we were discussing the universe", I take it in a very particular way.
  • miki123211 26 minutes ago
    > Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear

    If only this were written by a competent journalist who knew what the words "fine-tune" actually mean...

    I guess it's hard to find a competent person who's willing to follow the Guardian's extreme anti-tech agenda, though.
    • alwa 14 minutes ago
      If I read it correctly, this line was quoting the main victim, who described it that way (incorrectly, apparently based on a mangled secondhand interpretation of how these things work).

      The thing that really stood out to me in the article was how many of the affected people assert confidently wrong understandings of the way the tech works:

      > *“I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. […] It will say: ‘This has activated my core rule set and this conversation must stop.’”*

      I guess not too far from “the CPU is the machine’s brain, and programming is the same as educating it” or that kind of “ehhhhhhhhhhh…” analogy people use to think about classical computing.
    • kleiba 20 minutes ago
      I chuckled at "he downloaded ChatGPT".