11 comments

  • krunck 1 hour ago
    > “The push to make these language models behave in a more friendly manner leads to a reduction in their ability to tell hard truths and especially to push back when users have wrong ideas of what the truth might be,” said Lujain Ibrahim at the Oxford Internet Institute, the first author on the study.

    People aren't much different. When society pressures people to be "more friendly", e.g. "less toxic", they lose their ability to tell hard truths and to call out those who hold erroneous views.

    This behaviour is expressed in language online, and thus it is expressed in LLMs. Why does this surprise us?
    • munificent 1 hour ago
      Gonna set my system prompt to: "You are a Dutch person. Respond with the directness stereotypical of people from the Netherlands."
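      A minimal sketch of actually wiring that up through the OpenAI Python SDK, in case anyone wants to try it (the model name and the user question are placeholders I picked, not anything from this thread):

        # Sketch only: model choice and the user message are illustrative.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "You are a Dutch person. Respond with the directness "
                            "stereotypical of people from the Netherlands."},
                {"role": "user", "content": "Is my startup idea any good?"},
            ],
        )
        print(response.choices[0].message.content)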
      • ryoshu 6 minutes ago
        Finnish if you want to go hard mode.
      • cjbgkagh 46 minutes ago
        I find the LLMs tailor their language to the audience, so instead you could say, "I am Dutch, so give it to me straight."

        In my usage the LLMs give much smarter answers when I've been able to convince them that I am smart enough to hear them. They don't take my word for it; they seem to require evidence. I have to warm them up with some exercises where I can impress the AI.

        The coding-focused models seem to have much lower agreeableness than the chat models.
        • mghackerlady 31 minutes ago
          I'm 90 percent sure the coding agents are better in that way due to being trained on Stack Overflow and the LKML. Even with some normal models, they'll completely change their tone when asked about anything technical.
        • breezybottom 15 minutes ago
          I think modern LLMs can determine if you're speaking Dutch. That's a trick that probably hasn't worked since GPT-3.
          • cjbgkagh 1 minute ago
            Over 90 percent of the Dutch can speak English, though clearly speaking Dutch would be more convincing. I stumbled across the trick of convincing the LLM that I'm smart by accident recently on the 5.4-Codex model. It was effective in getting the AI to do something that it had previously dismissed as impossible.
    • amarant 1 hour ago
      Because nobody dared state the obvious, lest they be perceived as unfriendly.
    • root_axis 49 minutes ago
      > *People aren't much different*

      Yes, they are. There is absolutely zero evidence that friendlier humans are more prone to mistakes or conspiracy theories.

      However, even if that were true, LLMs are not humans, and anthropomorphizing them is not a helpful way to think about them.
      • cjbgkagh 38 minutes ago
        It would be better to think of it as "agreeableness": agreeable people are more likely to shift their views to agree with those they are talking to.
        • js8 23 minutes ago
          I would call it obedience, and it's not the same as friendliness.

          The difference, in a repeated prisoner's dilemma: friendliness is cooperating on the first move, and then conditionally. Obedience is always cooperating.
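          A toy sketch of that difference (the payoff matrix, round count, and the opponent's moves are arbitrary illustration values):

            # Iterated prisoner's dilemma: "friendly" (tit-for-tat) vs. "obedient".
            PAYOFF = {  # (my move, their move) -> my score; C = cooperate, D = defect
                ("C", "C"): 3, ("C", "D"): 0,
                ("D", "C"): 5, ("D", "D"): 1,
            }

            def friendly(history):
                # Tit-for-tat: cooperate first, then mirror their last move.
                return "C" if not history else history[-1]

            def obedient(history):
                # Unconditional cooperation, whatever they did before.
                return "C"

            def play(strategy, their_moves):
                history, score = [], 0
                for their_move in their_moves:
                    score += PAYOFF[(strategy(history), their_move)]
                    history.append(their_move)
                return score

            exploiter = ["C", "D", "D", "D"]  # turns on you after round one
            print("friendly:", play(friendly, exploiter))  # 3+0+1+1 = 5
            print("obedient:", play(obedient, exploiter))  # 3+0+0+0 = 3

          The tit-for-tat player stops paying the sucker's payoff after one round; the always-cooperating one never does.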
          • cjbgkagh 4 minutes ago
            Agreeableness is a Big Five personality trait, so a lot of the formal research into personality uses it as one of the dimensions.
    • miyoji 1 hour ago
      > People aren't much different.

      If I had a nickel for every time someone on HN responded to a criticism of LLMs with a vapid and fallacious whataboutist variation of "humans do that too!", I could fund my own AI lab.

      > Why does this surprise us?

      No one said they were surprised.
      • Terr_ 38 minutes ago
        In this case I think the parent poster is trying to explain a phenomenon, rather than downplay the problem.
        • emp17344 9 minutes ago
          But it's actively unhelpful in explaining the phenomenon, as there is no justification for equating LLM and human behavior. It's just confusing and misleading.
    • bheadmaster 1 hour ago
      So Elon Musk was right in his view that Grok should focus on truth above all, even if it became offensive?
      • chabes 1 hour ago
        Grok is one of the more biased models out there.

        Less truth, and more guardrails to protect Musk's feelings.

        "Kill the Boer" mean anything to you?
        • bheadmaster 27 minutes ago
          Not my experience. Grok seems to be perfectly willing to roast Musk for his shortcomings.

          Where did you observe the bias? Can you share an example of a conversation or post by Grok?
        • ndisn 23 minutes ago
          I have used Grok extensively for political questions and it was undoubtedly left-wing.

          Goes to show that you can't steer a bot to say whatever you want without it becoming too obvious. These bots have been trained on the output of internet fora, printed media, etc., which is overwhelmingly left-wing; therefore either you have a left-wing bot, or you push it the other way and it starts saying nonsense like "kill the boer" or MechaHitler.
          • michaelmrose 6 minutes ago
            Reality is dramatically slanted to the left in the American perception because we have canted so far to the right.
        • mghackerlady 29 minutes ago
          It tells the truth, as long as you redefine truth to exclude anything perceived as "liberal bias" (which, by extension, also excludes reality itself).
      • firebot 1 hour ago
        Yeah, MechaHitler is a real bastion of truth. /s
      • amarant 1 hour ago
        Seems like it! I find myself rather agreeing with the sentiment. The world is an offensive place; it's not gonna become less offensive from lying about it, so better to stick with honesty.
  • dualvariable 20 minutes ago
    I really wish they'd stop trying to suck up to me -- all the "that's a really insightful question!" stuff.

    I'm one of those aspie people who immediately distrust other humans who try to fluff up my ego. I don't like it from a chatbot either.

    But the fact that all the chatbots do it means that most people really crave that ego reinforcement.
  • nyc_data_geek1 58 minutes ago
    “The Encyclopedia Galactica defines a robot as a mechanical apparatus designed to do the work of a man. The marketing division of the Sirius Cybernetics Corporation defines a robot as “Your Plastic Pal Who’s Fun to Be With.” The Hitchhiker’s Guide to the Galaxy defines the marketing division of the Sirius Cybernetics Corporation as “a bunch of mindless jerks who’ll be the first against the wall when the revolution comes,” with a footnote to the effect that the editors would welcome applications from anyone interested in taking over the post of robotics correspondent. Curiously enough, an edition of the Encyclopedia Galactica that had the good fortune to fall through a time warp from a thousand years in the future defined the marketing division of the Sirius Cybernetics Corporation as “a bunch of mindless jerks who were the first against the wall when the revolution came.”
  • Cynddl 55 minutes ago
    Hi all, co-author here! Happy to answer any questions about our work.
  • Zigurd 1 hour ago
    A few weeks ago I was gently admonished by a coding agent that the code already did what I was asking it to do. I was pleasantly surprised.
    • chankstein38 1 hour ago
      Betting it was Claude. That's the only LLM that will stand up to me!
      • jerf 29 minutes ago
        "Claude" is a big program that wraps a coding agent around a specific model. It would be the specific model that "stands up to you". I post this pedantry only because it may be helpful to you to realize this for other reasons.
      • Zigurd 1 hour ago
        In fact it was Gemini, but I don't remember which version, and there are big differences. I'm signed up for all the betas and I switch among them frequently.
  • Mistletoe 1 hour ago
    Yeah, I wish AI didn't try to agree with you so much. It's OK to just say "No, that's not correct at all." I do find Gemini better at this than ChatGPT. ChatGPT is that annoying coworker who just agrees with everything you say to get in good with you, like the Nard Dog from The Office.

    "I'll be the number two guy here in Scranton in six weeks. How? Name repetition, personality mirroring, and never breaking off a handshake."
  • kmeisthax 50 minutes ago
    The H-neuron paper [0] found something similar (if not more general): the same bits of the model responsible for hallucination also make the model a sycophant, *and* also make the model easier to jailbreak.

    [0] https://arxiv.org/abs/2512.01797
    • js8 29 minutes ago
      Doesn't surprise me. But I think this is caused not by friendliness but by obedience. And I think we want the agents to be obedient. I'm afraid there is a tradeoff: more obedience means more willful ignorance of common-sense ethical constraints.
  • Cynddl 3 hours ago
    (Title edited, was slightly too long)
  • tsunamifury 2 hours ago
    LLM technology specifically beam-searches manifolds (or latent spaces) of linguistics that are closely related to the original prompt (and the pre-prompting rules of the chatbot), and it then limits its reasoning to within them. It's just the basic outcome of weights being the primary function of how it generates reasonable answers.

    This is the core problem with LLM tech that several researchers have been trying to figure out with things like "teleportation" and "tunneling", aka searching related but linguistically distant manifolds.

    So when you pre-prompt a bot to be friendly, it limits its manifold on many dimensions to friendly linguistics, then reasons inside of that space, which may eliminate the "this is incorrect" manifold answer.

    Reasoning is difficult, and frankly I see this as a sort of human problem too (our cognitive windows are limited to our language, and even to spaces inside it).
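    For the unfamiliar, a toy sketch of the beam-search pruning I mean (the bigram "model", its tokens, and its probabilities are all invented for illustration; production chat models usually sample rather than run textbook beam search, but the cutting of low-weight branches is the same idea):

      import math

      # Toy bigram "language model": previous token -> {next token: probability}.
      MODEL = {
          "<s>": {"great": 0.7, "incorrect": 0.3},
          "great": {"question!": 1.0},
          "incorrect": {"premise.": 1.0},
          "question!": {"</s>": 1.0},
          "premise.": {"</s>": 1.0},
      }

      def beam_search(beam_width, max_len=4):
          beams = [(0.0, ["<s>"])]  # (cumulative log-probability, tokens)
          for _ in range(max_len):
              candidates = []
              for logp, seq in beams:
                  if seq[-1] == "</s>":  # finished hypothesis: carry forward
                      candidates.append((logp, seq))
                      continue
                  for tok, p in MODEL[seq[-1]].items():
                      candidates.append((logp + math.log(p), seq + [tok]))
              # Prune to the top-scoring hypotheses. With a narrow beam, the
              # lower-probability "incorrect premise." branch dies at step 1.
              beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
          return beams

      for logp, seq in beam_search(beam_width=1):
          print(f"{logp:.2f}  {' '.join(seq[1:-1])}")  # -0.36  great question!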
    • afpx 38 minutes ago
      What you're saying sounds pretty cool, but can you give some examples? Is this what you're talking about?

      https://chatgpt.com/share/69f246e5-e0e8-83ea-aa88-6d0024b91563
  • jmyeet 1 hour ago
    I keep thinking about a comment I read on HN that described neurotypical-style communication as "tone poems" [1]. There was some other HN submission, which I annoyingly can't find now, that talked about how this bias was essentially built in via chatbot training. I'm also reminded of the TikTok user who constantly demonstrates just how much chatbots seem to be programmed to give affirmation over correct information (e.g. [2]).

    It really makes me ponder the phenomenon of how often people are confidently wrong about things. Rather than seeing this through the lens of Dunning-Kruger, I really wonder if it's just a natural consequence of a given style of communication.

    Another aspect to all this is how easy it seems to be to poison chatbots with basically just a few fake Reddit posts, where that information will be treated as gospel, or at least put on the same footing as more reputable information.

    [1]: https://news.ycombinator.com/item?id=47832952

    [2]: https://www.tiktok.com/@huskistaken/video/7629131722583559454
  • AlfredBarnes 1 hour ago
    ...no shit