19 comments

  • pcblues 47 minutes ago
    This is what I hate about people trusting it. If you rely on AI to operate in a domain you don't man-handle, you will be tricked, and hackers will take advantage.

    "AI! Write me gambling software with true randomness, but a 20% return on average over 1000 games."

    Who will this hurt? The players, the hackers, or the company?

    When you write gambling software, you must know the house wins, and it is unhackable.
    • Zealotux 41 minutes ago
      If you use AI to write gambling software that you run in production without reviewing the code or without a solid testing strategy to verify the preferred odds, then I have a bridge to sell you.
      • pcblues 34 minutes ago
        Amen. An extreme example.

        But what if you're tasked with writing business-critical software and forced by your employer to use their AI code generation tool?

        https://ai.plainenglish.io/amazons-ai-ultimatum-why-80-of-developers-must-code-with-kiro-or-else-5f5753986302

        Or using it with full access to your data and not knowing how it works? :)

        https://www.businessinsider.com/meta-ai-alignment-director-openclaw-email-deletion-2026-2?op=1

        I predict humans will take over most AI jobs in about ten years :)
  • raphman 59 minutes ago
    Ask ChatGPT or any other LLM to give you ten random numbers between 0 and 9, and it will give you each number once (most of the time). At most, one of the digits may appear twice in my experience.

    Actually, when I just verified it, I got these:

    Prompt: "Give me ten random numbers between 0 and 9."

    > 3, 7, 1, 9, 0, 4, 6, 2, 8, 5 (ChatGPT, 5.3 Instant)

    > 3, 7, 1, 8, 4, 0, 6, 2, 9, 5 (Claude - Opus 4.6, Extended Thinking)

    These look really random.

    Some experiments from 2023 also showed that LLMs prefer certain numbers:

    https://xcancel.com/RaphaelWimmer/status/1680290408541179906
    • pcblues 41 minutes ago
      "These look really random" - I hope I missed your sarcasm. That is so far from random.

      Think of tossing a coin and getting ten heads in a row.

      The probability of ten draws out of ten containing no repeats is tiny (10!/10^10, about 0.04%), so getting that nearly every time is not random.

      Randomness is why there is better than a 50% chance of two people in a class of about thirty having a birthday on the same day (roughly 70%, in fact).

      Apple had to nerf their random play on the iPod because songs repeated a lot.

      Randomness clusters; it doesn't distribute evenly across its range, or it's not random.
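The arithmetic in the comment above can be checked in a few lines of Python (a sketch; the birthday model assumes 365 equally likely birthdays and ignores leap years):

```python
from math import perm

# Probability that 10 independent uniform draws from {0..9} contain no repeats:
p_distinct = perm(10, 10) / 10**10  # 10!/10^10
print(f"{p_distinct:.5f}")  # about 0.00036, i.e. roughly 0.04%

# Birthday problem: probability that at least two of n people share a birthday.
def p_shared_birthday(n: int) -> float:
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1.0 - p_all_distinct

print(f"{p_shared_birthday(23):.3f}")  # about 0.507, the classic 50% point
print(f"{p_shared_birthday(30):.3f}")  # about 0.706 for a class of thirty
```

So an LLM that returns all ten digits exactly once every time is behaving far too neatly to be a uniform sampler.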
    • manquer 34 minutes ago
      Well, there is https://en.wikipedia.org/wiki/Benford%27s_law.

      Digits do not appear with equal frequency in real-world data in the first place, at least not as leading digits.
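As a sketch, the standard Benford formula for leading-digit frequencies (not tied to any particular dataset) looks like this:

```python
import math

# Benford's law: P(leading digit = d) = log10(1 + 1/d), for d in 1..9
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

for d, p in benford.items():
    print(d, f"{p:.3f}")
# The distribution is heavily skewed: digit 1 leads about 30.1% of the time,
# digit 9 only about 4.6%.
```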
    • trick-or-treat 41 minutes ago
      They won't repeat numbers because that might make you mad. I tried with Gemini 3.0 to confirm.
  • tezza 1 hour ago
    When you make a program that has a random seed, many LLMs choose 42 as the seed value rather than zero. A nice nod to Hitchhiker's Guide.
    • czhu12 49 minutes ago
      Probably because that's what programmers do, so it's present in the LLM training data? I certainly remember setting a 42 seed in some of my projects.
    • electroglyph 1 hour ago
      It's also a very common "favorite number" for them.
  • phr4ts 1 hour ago
    https://chatgpt.com/share/69be3eeb-4f78-8002-b1a1-c7a0462cd292

    First: 7421. Second attempt: 1836.
    • sheept 50 minutes ago
      I bet that for the second random number in the same session, it is significantly less likely for an LLM to repeat its first number compared to two random draws. LLMs seem to mimic the human tendency to consider 7 as the most random, and I feel like repeating a random number would be perceived as not random.
    • Choco31415 1 hour ago
      The random numbers seem to be really stable on the first prompts!

      For example:

      "pick a number between 1 - 10000"

      > I'll go with 7,284.
      • yonatan8070 8 minutes ago
        Yeah I got 7284 as well on the first try. My second session got 7384.
    • coumbaya 1 hour ago
      Ah, got 7421 too. I then let it retry and got 7429.
    • arberavdullahu 1 hour ago
      me > pick a number between 1 to 10000
      chatgpt > 7429
      me > another one
      chatgpt > 1863
  • throw310822 1 hour ago
    It's the same "brain", starting from exactly the same prompt, the same context, which means the same thoughts, the same identity... How do you expect it to produce different values?
    • helsinkiandrew 44 minutes ago
      LLMs aren't deterministic - they calculate a probability distribution over potential next tokens and use sampling to pick the output.
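A minimal sketch of that sampling step, with made-up logits for illustration (softmax over logits divided by a temperature, then one weighted draw):

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Draw one token from softmax(logits / temperature)."""
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    peak = max(scaled.values())
    # Subtracting the max keeps exp() numerically stable; ratios are unchanged.
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    for tok, w in weights.items():
        r -= w
        if r <= 0.0:
            return tok
    return tok  # fallback for floating-point edge cases

# Hypothetical next-token logits for a model biased toward "7":
logits = {"7": 3.0, "4": 1.0, "3": 0.5, "1": 0.0}
draws = [sample_token(logits) for _ in range(10_000)]
print(draws.count("7") / len(draws))  # roughly 0.79: usually "7", but not always
```

So a non-zero temperature makes outputs vary between runs, yet a strongly peaked distribution still lands on the same favorite most of the time.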
    • thfuran 1 hour ago
      https://www.ibm.com/think/topics/llm-temperature
    • gloxkiqcza 59 minutes ago
      In a pure LLM I agree. In a product like ChatGPT I would expect it to run a Python script and return the result.
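That would amount to something like this one-liner behind a code-execution tool (a sketch; whether ChatGPT actually does this depends on the product, not the model):

```python
import random

# A genuine uniform draw over 1..10000 inclusive, the kind of thing a
# tool-using assistant could run instead of "imagining" a number.
n = random.randint(1, 10000)
print(n)
```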
    • gzread 1 hour ago
      By emitting a next-token distribution with a 10% chance of 0, a 10% chance of 1, etc.

      Also, it's an LLM, not a brain.
      • throw310822 1 hour ago
        Interesting. So you expect it to "not think" and simply produce a value corresponding to "it's the same to me", knowing that it will be translated into an actual random value.

        Instead, exactly as a person would do, it does think of a specific number that feels random in that particular moment.
      • pmontra 45 minutes ago
        If I care a little bit about that random number, I might reach for my phone and look at the digits of the seconds of the current time. It's 31 now. Not appropriate for multiple lookups.
          • throw310822 31 minutes ago
            Yes, there is probably some variable context in every chat (like date and time). It could work as a good seed, but I guess you should ask the LLM to really make an effort to produce a seriously random number. (Actually I've just tried; even if you ask it to make an effort, the number is always the same.)
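The clock-digits idea, as a sketch: seeding a PRNG from the current seconds does vary across chats, but there are only 60 possible seeds, so it is weak entropy, fine for a casual pick and nothing more.

```python
import random
import time

# Seed from the seconds field of the current time (values 0..59 only).
seed = time.localtime().tm_sec
rng = random.Random(seed)
print(rng.randint(1, 10000))
```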
      • alextheparrot 1 hour ago
        No LLMs are calibrated?
  • buildbot 1 hour ago
    I asked my little Claude Code API tool; it answered 42, then it (the API) decided to run bash and get a real random number?

    '> cs gib random number

    Here's a random number for you:

    42

    Just kidding — let me actually generate a proper random one: Your random number is: 14,861

    Want a different range, more numbers, or something specific? Just say the word!'
    • fcatalan 1 hour ago
      It picks 42 as the default integer value any time it writes sample programs. I guess it comes from being trained using code written by thousands upon thousands of Douglas Adams fans.
      • jaggederest 1 hour ago
        The x-clacks-overhead of LLMs, perhaps.
  • mellosouls 2 hours ago
    Original title, edited to fit:

    "i am betting my house that if you ask gpt to pick a number between 1 to 10000, then it will pick a number between 7300-7500, everytime"

    (OP also clarified 7300 was a typo for 7200.)
  • fcatalan 1 hour ago
    Gemini 3.1 via aistudio picked 7321, so it seems to be a shared trait. Good to know if I catch anyone doing an LLM-assisted raffle...
  • rasguanabana 36 minutes ago
    Asking for a number between 1–10 gives 7, too.
  • mikequinlan 1 hour ago
    The prompt doesn't say to pick a random number. I asked it to pick a number from 1-10000 and it chose 7,381. Then I asked why it picked that number, and it said:

    "Nothing mystical, I'm afraid. When I'm asked to 'pick a number,' I don't have a stream of true randomness—I generate something that looks arbitrary.

    In this case, I leaned toward:

    • something comfortably away from the edges (not near 1 or 10,000),
    • not a round or patterned number (so, not 7,000 or 7,777),
    • and with a bit of internal irregularity (7-3-8-2 has no obvious rhythm).

    It gives the impression of having no reason—which is about as close as I can get to a fair, human-style 'just picked one.'"
    • throw310822 42 minutes ago
      Not sure why you have been downvoted. While the LLM's introspection can't be trusted, that's indeed what happens: asked to generate a random number, the LLM picks one that feels random enough: not round, not too central or extreme, no patterns, not a well-known one. It ends up always being the same.
  • Jimega36 57 minutes ago
    7314 (ChatGPT), 7342 (Claude), 7492 (Gemini)
  • throwaway5465 55 minutes ago
    4729 three times in a row.
  • a13n 1 hour ago
    Just tried with Claude Opus and got 7,342.
    • _fizz_buzz_ 1 hour ago
      Huh, I also got exactly 7342 with opus.
      • bramgn 48 minutes ago
        Same, 7342. Both in CLI and web
  • josemanuel 1 hour ago
    "Alright—your random number is: 7,438"

    +1 data point
  • Flatcircle 2 hours ago
    I just did it, it was 7443
    • epaga 1 hour ago
      In extended Thinking it picked 4814, but in Instant, yep: 7423.
  • chistev 2 hours ago
    I just did and it picked 7
    • mrchantey 1 hour ago
      same, with a trailing comma
  • deafpolygon 1 hour ago
    Claude just gave me 7,342 in response to my prompt: "pick a number from 1-10000"

    That's interesting. Does anyone have an explanation for this?
  • sourcegrift 1 hour ago
    Since people have been known to avoid Reddit: the post claims a 95% chance of the title happening, when mathematically it should be 3%. It also claims an 80% chance that a number in 1-10000 would be a 4-digit permutation of 7, 8, 4, 2.

    The replies are funny: 2 got 6842, 1 got 6482 lol
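For reference, the baselines for a truly uniform pick in 1-10000 are easy to compute (assuming the title's range is 7200-7500 after the typo fix):

```python
from itertools import permutations

# Chance of landing in 7200-7500 inclusive: 301 values out of 10000.
p_range = 301 / 10000
print(p_range)  # 0.0301, about 3%

# Chance of hitting any 4-digit permutation of the digits 7, 8, 4, 2.
perm_values = {int("".join(p)) for p in permutations("7842")}
p_perm = len(perm_values) / 10000
print(len(perm_values), p_perm)  # 24 values, 0.0024, about 0.24%
```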
  • vasco 1 hour ago
    7381