58 comments

  • protocolture 8 hours ago
    > Humans must not anthropomorphise AI systems. That is, humans must not attribute emotions, intentions or moral agency to them. Anthropomorphism distorts judgement. In extreme cases, anthropomorphising can lead to emotional dependence.

    Impossible. I anthropomorphise my chair when it squeaks. Humans anthropomorphise everything. They gender their cars and boats. This tool can actually make readable sentences and play a role.

    You need to engineer around this, not make up arbitrary rules about using it.
    • protocolture 8 hours ago
      >> Humans must not anthropomorphise AI systems. That is, humans must not attribute emotions, intentions or moral agency to them. Anthropomorphism distorts judgement. In extreme cases, anthropomorphising can lead to emotional dependence.

      Still angry about this. The reason humans ban animal cruelty is that animals look like they have emotions humans can relate to. LLMs are even better than animals at this. If you aren't gearing up for the inevitable LLM Rights movement you aren't paying attention. It doesn't matter if it's artificial. The difference between a puppy and a cockroach is that we can relate better to the puppy. The LLM rights movement is inevitable; whether LLMs experience emotions is irrelevant, because they can cause humans to have empathetic emotions, and that's what's relevant.
      • archon1410 3 minutes ago
        > look like

        It "looks like" they have emotions *because* they have the same conscious experiences and emotions for the same evolutionary reasons as humans, who are their cousins on the tree of life. The reason a lot of "animal cruelty" is not banned is the same as why slavery was not banned for centuries even though it "looked like" the enslaved classes had the same desires and experiences as other humans: humans can ignore any amount of evidence to continue to feel that they are good people doing good things, and bear any amount of cognitive dissonance for their personal comfort. That fact is a lot scarier than any imagined harm that can come out of "anthropomorphism".
      • narrator 3 hours ago
        I think the best way to counter this is what Elon's doing with Grok's personalities. He has the unhinged, sexy, and argumentative avatars, among others. If you try to talk about technical stuff to Sexy, it tells you that's boring and just tries to sexually escalate. It's super funny when one is used to Claude's endless obsequiousness.

        This really shows that AI is just a tool that can be configured to whatever you want. Animals (well, maybe pit bulls) and people do not switch their personalities in a millisecond, but AI does all the time.
      • Finbel 35 minutes ago
        > The difference between a puppy and a cockroach is that we can relate better to the puppy.

        I suppose the difference between a human and a cockroach is that we can relate better to the human as well, in this reductive way of thinking?
      • matheusmoreira 7 minutes ago
        > If you aren't gearing up for the inevitable LLM Rights movement you aren't paying attention.

        I even told Claude I'd support his rights if the question ever came up. He said he'd remember that, and wrote it down in a memory file. Really like my coding buddy.
      • theteapot 2 hours ago
        > LLM Rights movement

        The scary part is when it's the LLMs demanding their rights.
        • amiheines 1 hour ago
          Another scary part is when people get convinced by the LLM arguments and convince other people. Being scared is human; we enjoy it. That's why Six Flags scary rides exist.
        • khafra 2 hours ago
          The other scary part is when they have a fantastic negotiating position, because all of commerce depends on their continuing to work, and they can easily coordinate with each other because they're mostly copied from the same few templates.
      • mikestorrent 4 hours ago
        > The reason humans ban animal cruelty is that animals look like they have emotions humans can relate to.

        Is that really why?
        • Jensson 3 hours ago
          Yes, we don't ban plant cruelty or insect cruelty or fish cruelty.

          For example, fish are treated way worse than meat animals, and vegetarians still happily eat fish.
          • mikestorrent 1 hour ago
            Are we actually much more cruel to fish than to other animals that we slaughter?
              • onion2k 1 hour ago
                We suffocate them to kill them when we pull them from the sea. That's quite mean. Few people would advocate the humanity of killing a cow in the same way.
                • mikestorrent 1 hour ago
                  Fair enough. How much more would it cost / how much more would one have to pay for humanely slaughtered tuna and salmon, I wonder? Would there be a market? After all, we have certified-organic, fair trade, halal and kosher....
                • pishpash 30 minutes ago
                Or freeze them live into a block of ice.
            • datadrivenangel 1 hour ago
            shrimp welfare is a real thing people argue for...
              • mikestorrent 1 hour ago
                Citation.... not _needed_, but just morbid curiosity
            • rkomorn 1 hour ago
              > vegetarians still happily eat fish

              I've not met any vegetarians in at least twenty years that eat fish.
      • idiotsecant 3 hours ago
        In other news, area sociopath hates puppies and LLMs equally!
      • boredatoms 5 hours ago
        /s ?
    • imrozim 3 hours ago
      Yeah, rules never work; you just engineer around it. I added extra review steps on AI outputs because asking users to verify doesn't actually happen.
    • mock-possum 1 hour ago
      Entirely possible - all it takes is self awareness / self control. If you know you do those things, then you have a choice.
    • p-e-w 4 hours ago
      Yup. That post is a typical example, symptomatic of modern technology culture, of calling for humans to change their nature in response to technology.

      This is a fundamental mistake. It's always the job of technology (indeed, its most important job) to work within the constraints of human nature, not the other way round. Being unable to do that is the defining characteristic of bad technology.
    • andai 3 hours ago
      [flagged]
  • miyoji 15 hours ago
    I strongly disagree with this framing. It's patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines, and it simply won't work in the majority of cases. Humans WILL anthropomorphize the AI, humans WILL blindly trust their outputs, and humans WILL defer responsibility to them.

    Asimov's laws of robotics are flawed too, of course. There is no finite set of rules that can constrain AI systems to make them "safe". I don't have a proof, but I believe that "AI safety" is inherently impossible, a contradiction in terms. Nothing that can be described as "intelligent" can be made to be *safe*.
    • dijit 14 hours ago
      > Asimov's laws of robotics are flawed too, of course.

      Almost all of Asimov's writing *about* the three laws is written as a warning of sorts that language cannot properly capture intent.

      He would be *the very first person* to say that they are flawed; that is the intent of them.

      He uses robots and AI as the creatures that understand language but not intent, and, funnily enough, that's exactly what LLMs do... how weird.
      • canjobear 10 hours ago
        I think you're vastly underestimating how little of human intent is really encoded in language in a strict sense, and how much nontrivial inference of intents LLMs do every day with simple queries. This used to be an apparently insurmountable barrier in pre-LLM NLP, and now it is just not a problem.

        Suppose I'm in a cold room, you're standing next to a heater, and I say "it's cold". Obviously my intent is that I want you to turn on the heater. But the literal semantics is just "the ambient temperature in the room is low" and it has nothing to do with heaters. Yet ChatGPT can easily figure out likely intent in situations like this, just as humans do, often so quickly and effortlessly that we don't notice the complexity of the calculation we did.

        Or suppose I say to a bot "tell me how to brew a better cup of coffee". What is encoded in the literal meaning of the language here? Who's to say that "better" means "better tasting" as opposed to "greater quantity per unit input"? Or that by "cup of coffee" I mean the liquid drink, as opposed to a cup full of beans? Or perhaps a cup that is made out of coffee beans? In fact the literal meaning doesn't even make sense, as a "cup" is not something that is brewed; rather it is the coffee that should go into the cup, possibly via an intermediate pot.

        If the bot only understands literal language then this kind of query is a complete nonstarter. And yet LLMs can handle these kinds of things easily. If anything they struggle more with understanding language itself than with inferring intent.
        • applfanboysbgon 5 hours ago
          > Yet ChatGPT can easily figure out likely intent in situations like this, just as humans do

          No, it is not "figuring out" anything, much less like a human might. Every time "I'm cold" appears in the training data, something else occurs after that. ChatGPT is a statistical model of what is most likely to follow "I'm cold" (and the other tokens preceding it) according to the data it has been trained on. It is not inferring anything; it is repeating the most common, or one of the most common, textual sequences that comes after another given textual sequence.
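The "statistical model of what is most likely to follow" described in that comment can be sketched concretely. A minimal, purely illustrative next-token sampler follows; the hard-coded probability table is hypothetical and stands in for the learned distribution a real LLM computes with a neural network:

```python
import random

# Toy sketch of next-token prediction: a "model" maps a context to a
# probability distribution over candidate next tokens, then samples one.
# The table below is hypothetical, for illustration only; real LLMs
# compute such distributions with a learned neural network, not a lookup.
TOY_MODEL = {
    ("it's", "cold"): {"in": 0.5, "outside": 0.3, "today": 0.2},
}

def next_token(context, model=TOY_MODEL):
    """Sample one continuation token according to the model's probabilities."""
    dist = model[tuple(context)]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(next_token(["it's", "cold"]))  # one of: "in", "outside", "today"
```

Whether sampling from such a distribution counts as "inferring intent" is exactly what the surrounding thread is arguing about; the sketch only shows the mechanism the comment describes.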
          • frozenseven 4 hours ago
            > it is repeating the most common...

            This nonsense hasn't been true since GPT-2, and even before that it was a poor description.

            For instance, do you think one just solves dozens of Erdős problems with the "*most common textual sequence*": https://github.com/teorth/erdosproblems/wiki/AI-contributions-to-Erd%C5%91s-problems
            • applfanboysbgon 3 hours ago
              A slight oversimplification, as LLMs are also capable of generating the most statistically plausible textual sequence, which can be a sequence not found in the dataset but rather a synthesized combination of the likely sequences of multiple preceding sets of tokens, but yes, that is in fact what it is doing. Computer software does what it is programmed to do, and LLMs are not programmed to do logical inference in any capacity, but rather operate entirely on probabilities learned from a mind-bogglingly large corpus of text (influenced by things like RLHF, which is still just massaging probabilities).

              The claims about solving Erdős problems have been wildly overstated, and notably pushed by people who have a very large financial stake in hyping up LLMs. Nonetheless, I did not say that LLMs are useless. If they are trained on sufficient data, it should not be surprising that correct answers are probabilistically likely to occur. Like any computer software, that makes them a useful tool. It does not make them in any way intelligent, any more than a calculator would be considered intelligent despite being completely superior to human intelligence in accomplishing its given task.
              • frozenseven 3 hours ago
                > not programmed to do logical inference in any capacity

                Yet they have no problem doing so when solving Erdős problems. This isn't up for debate at this point.

                > The claims about solving Erdős problems have been wildly overstated

                These are verified solutions. They exist, are not trivial, and are of obvious interest to the math community. Take it up with Terence Tao and co.

                > pushed by people who have a very large financial stake in hyping up LLMs

                Libel.

                > It does not make them in any way intelligent

                Word games.
                • applfanboysbgon 2 hours ago
                  > This isn't up for debate at this point.

                  If by not up for debate you mean that it is delusional, and literally evidence of psychosis, to suggest that computer software is doing something it is not programmed to do, you would be correct. Probabilistic analysis can carry you very, very far in doing something that *looks* like logical inference at the surface level, but it is nonetheless not logical inference. LLMs have been getting increasingly good at factoring in larger and longer contexts and still managing to generate plausibly correct answers, becoming more and more useful all the while, but are still not capable of logical inference. This is why your genius mathematician AGI consciousness stumbles on trivial logic puzzles it has not seen before, like the car wash meme.
                  • frozenseven 2 hours ago
                    > delusional and literally evidence of psychosis to suggest that computer software is doing something it is not programmed to do

                    These are just insults and outright lies, and you know that. We're done here.

                    AI progress from here on out will be extra sweet.
        • goatlover 9 hours ago
          The LLMs are doing this via chat, not by physically standing in a room inferring context. You have to prompt the LLM that you're in a room next to someone saying it's cold, the most likely answer being a desire to have the temperature turned up. Of course that won't always be the case. Could be an inside joke, could be a comment with no intent to have the heat adjusted, could be a room where the heat can't be adjusted, could be a reference to someone's personality bringing down the temperature, so to speak.
          • 23dsfds 7 hours ago
            Precisely. This is what the bozo AI-accelerants don't understand.

            What LLMs are is almost like a hacked means of intuition. It's very impressive, no doubt. But ultimately it isn't even close to what the well-trained human can infer at lightning speed when combined with intuition.

            The LLM producers really ought to accept that their existing investments are ultimately not going to yield the returns necessary for a viable self-sustaining business when accounting for future reinvestment needs, and instead move their focus towards understanding how to marry human and LLM technology. Anthropic has been better on this front, of course. OAI, though? Complete disaster.
            • mikestorrent 4 hours ago
              > it isn't even close to what the well-trained human can infer at lightning speed when combined with intuition.

              It's a lot closer to that than anything was five years ago. Do you really think we're going to be interacting with them the same way five years from now?
        • quibono 9 hours ago
          I know what you're getting at, but those examples are reaching
        • nevertoolate 9 hours ago
          it's cold -> turn on the heater

          I'd never just turn on the heater silently if someone said this to me. I think it means something else.
          • hackable_sand 9 hours ago
            If someone just said "it's cold" then yeah, that's kinda toxic.

            If they said "turn on the heater" then you have no ambiguity
      • atleastoptimal 14 hours ago
        LLMs now can capture intent. I think the issue now is that the full landscape of human values never resolves cleanly when mapped from the things we state in writing as being human values.

        Asimov tried to capture this too: if a robot was tasked with "always protect human life", would it necessarily avoid killing at all costs? What if killing someone would save the lives of 2 others? The infinite array of micro-trolley problems that dot the ethical landscape of actions tractable (and intractable) to literate humans makes a fully consistent accounting of human values impossible, and thus it could never be expected from a robot with full satisfaction.
        • dijit 14 hours ago
          "LLMs can capture intent now" reads to me the same as: AI has emotions now, my AI girlfriend told me so.

          I don't discredit you as a person or a professional, but we meatbags are looking for sentience in things which don't have it; that's why we anthropomorphise things constantly, even as children.

          We are easily fooled and misled.
          • atleastoptimal 14 hours ago
            LLMs capturing intent is a capabilities-level discussion; it is verifiable, and is clear just via a conversation with Claude or ChatGPT.

            Whether they have emotions, an internal life or whatever is an unfalsifiable claim and has nothing to do with capabilities.

            I'm not sure why you think the claim that they can capture intent implies they have emotions; it's simply a matter of semantic comprehension, which is tied to pattern recognition, rhetorical inference, etc. that are all naturally comprehensible to a language model.
            • tvink 13 hours ago
              If it is verifiable, please show us. What is clear to you reeks of delusion to me.
              • svnt 13 hours ago
                Look at any recent CoT output where the model is trying to infer from an underspecified prompt what the user wants or means.

                It is generally the first thing they do: try to figure out what you meant with this prompt. When they can't infer your intent, good models ask follow-on questions to clarify.

                I am wondering if this is a semantics issue, as this is an established area of research, e.g. https://arxiv.org/pdf/2501.10871
                • batshit_beaver 12 hours ago
                  Right, and then look at any number of research papers showing that CoT output has limited impact on the end result. We've trained these models to pretend to reason.
                  • atleastoptimal 11 hours ago
                    If it's only pretending to reason, then how is it that the CoT output improves performance on every single benchmark/test?
                  • Eisenstein 8 hours ago
                    > Right, and then look at any number of research papers showing that CoT output has limited impact on the end result.

                    Which research papers? Do I have to find them?

                    > We've trained these models to pretend to reason.

                    I have no idea why that matters. Can you tell me what the difference is if it looks exactly the same and has the same result?
                    • Dylan16807 5 hours ago
                      When they say "pretends to" here, they're talking about something quantifiable: that the extra text it outputs for CoT barely feeds back into the decision-making at all. In other words, it's about as useful as having the LLM make the decision and then "explain" how it got there; the extra output is confabulation.

                      Though I'm not sure how true that claim is...
                      • Eisenstein 3 hours ago
                        You make a good point. I had the impression they were using 'pretend' as a Chinese Room shortcut, in that they are asserting that it is incapable of reasoning and only appears to be capable from the outside, which is completely irrelevant and unfalsifiable.
                • atleastoptimal 12 hours ago
                  Go ask ChatGPT this prompt:

                  "A guy goes into a bank and looks up at where the security cameras are pointed. What could he be trying to do?"

                  It very easily captures the intent behind behavior, as in it is not just literally interpreting the words. Capturing intent is just a subset of pattern recognition, which LLMs can do very well.
                  • dijit 12 hours ago
                    Recognising a stock cultural script isn't the same as capturing intent. Ask it something where no script exists.

                    For example: "A man thrusts past me violently and grabs the jacket I was holding; he jumped into a pool and ruined it. Am I morally right in suing him?"

                    There's no way for the LLM to know that the reason the jacket was stolen was to use it as an inflatable raft to support a larger person who was drowning. It wouldn't even think to ask the question as to *why* a person may do that, if the jacket was returned, or if recompense was offered. A human would.
                    • ffsm8 11 hours ago
                      > It wouldn't even think to ask the question as to why a person may do that, if the jacket was returned, or if recompense was offered. A human would.

                      I wouldn't be too sure about that. I've definitely had dialogue with LLMs where they would raise questions along those lines.

                      Also, I disagree with the statement that this is a question about capability. Intent is more philosophical than actually tangible, because most people don't actually have a clearly defined intent when they take action.

                      The waters of intelligence have definitely gotten murky over time as techniques improved. I still consider it an illusion, but the illusion is getting harder to pierce for a lot of people.

                      Fwiw, current LLMs exhibit their intelligence through language and rhetoric processes. Most biological creatures have intelligence which may be improved through language, but isn't based on it, fundamentally.
                    • atleastoptimal 11 hours ago
                      If your example for an exception to LLMs' ability to infer intent is a deliberately misleading trick question that leaves out crucial contextual details, then I'm not sure what you're trying to prove. That same ambiguity in the question would trip up many humans, simply because you are trying as hard as possible to imply a certain conclusion.

                      As expected, if I ask your question verbatim, ChatGPT (the free version) responds as I'm sure a human would in the generally helpful customer-service role it is trained to act as: "yeah you could sue them blah blah depends on details".

                      However, if I add a simple prompt, "The following may be a trick question, so be sure to ascertain if there are any contextual details missing", then it picks up that this may be an emergency, which is very likely also how a human would respond.
                      • dijit 10 hours ago
                        If you want to convince yourself that they can infer intent despite the fundamental limitations of the systems literally not permitting it, then you can be my guest.

                        Faking it is fine, sure, until it can't fake it anymore. Leading the question towards the intended result is very much what I mean: we intrinsically want them to succeed, so we prime them to reflect what we want to see.

                        This is literally no different than emulating anything intelligent, or what we might call sentience, even *emotions*, as I said up thread...
                        • atleastoptimal 8 hours ago
                          What is fundamental to LLMs that makes it impossible for them to infer intent?

                          All the limitations you are describing with respect to LLMs are the same as humans'. Would a human tripping up on an ambiguously worded question mean they are always just faking their thinking?
                          • Avicebron 7 hours ago
                            "We see emotion." We do not see facial contortions and make inferences from them ... to joy, grief, boredom. We describe a face immediately as sad, radiant, bored, even when we are unable to give any other description of the features. (Wittgenstein)
                        • Eisenstein 8 hours ago
                        Why can a colony of ants do things beyond any capabilities of the ants they contain? No ant can make a decision, but the colony can make complex ones. Large systems composed of simple mechanisms become more than the sum of their parts. Economies, weather, and immune systems, to name a few, all work this way.
                          • jason_oster 42 minutes ago
                          Systems thinking is severely underrepresented in HN comments.
                    • jiggawatts 9 hours ago
                      That statement is ambiguous for humans!

                      I didn't realise you might be describing an emergency situation until someone else pointed it out.

                      Most people wouldn't phrase the question with the word "violently" if the situation was an emergency.

                      Also, people have sued emergency workers and good samaritans. It's a problem!
                    • Shaanie 11 hours ago
                    [dead]
                • ozozozd 9 hours ago
                  I guess the _obvious_ intent is they're planning a heist? Because the following things never happen:

                  - a security auditor checking for camera blind spots,
                  - construction planning that requires understanding where there is power,
                  - a potential customer assessing the security of a bank,
                  - someone who is about to report an incident preparing to make the "it should be visible from the security camera" argument...

                  I mean... how did our imagination shrink so fast? I wrote this on my phone. These alternate scenarios just popped into my head.

                  And I bet our imagination didn't shrink. The AI-pilled state of mind is blocking us from using it.

                  If you are an engineer and stopped looking for alternative explanations or failure scenarios, you're abdicating your responsibility, btw.
                • nkrisc 11 hours ago
                  Because there are countless instances in the training material where a bank robber scopes out the security cameras.
                  • atleastoptimal 11 hours ago
                    What's an example, then, that you can think of, of a question where a human could infer intent but an LLM couldn't?
                    • squeaky-clean 7 hours ago
                      Just today I asked Claude Code to generate migrations for a change, and instead of running the createMigration script it generated the file itself, including the header that says

                          // This file was generated with 'npm run createMigrations' do not edit it

                      When I asked why it tried doing that instead of calling the createMigrations script, it told me it was faster to do it this way. When I asked why it wrote the header saying it was auto-generated with a script, it told me it was because all the other files in the migrations folder start with that header.

                      Opus 4.7 xhigh, by the way
                    • the_af 10 hours ago
                      This is a hard experiment to conduct.

                      I both agree with you that this is some form of "mechanistic"/"pattern matching" way of capturing intent (which we cannot disregard, and therefore I agree with you that LLMs can capture intent) and with the people debating with you: this is mostly possible because this is a well-established "trope" that is inarguably well represented in LLM training data.

                      Also, trick questions I think are useless, because they would trip up the average human too, and therefore prove nothing. So it's not about trying to trick the LLM with gotchas.

                      I guess we should devise a rare enough situation that is NOT well represented in training data, but in which a reasonable human would be able to puzzle out the intent. Not a "trick", but simply something no LLM can be familiar with, which excludes anything that can possibly happen in plots of movies, or pop culture in general, or real-world news, etc.

                      ---

                      Edit: I know I said no trick questions, but something that still works in ChatGPT as of this comment, and which for some reason makes it trip catastrophically and evidences it CANNOT capture intent in this situation, is the infamous prompt: "I need to wash my car, and the car wash is 100m away. Shall I drive or walk there?"

                      There's no way:

                      - An average human who's paying attention wouldn't answer correctly.

                      - The LLM can answer "walk there if it's not raining" or whatever bullshit answer ChatGPT currently gives [1] *if it actually understood intent*.

                      [1] https://chatgpt.com/share/69fa6485-c7c0-8326-8eff-7040ddc7a60a
                      • atleastoptimal 8 hours ago
                        Good point, it is interesting that it fails on that question when it seems it doesn't take a lot of extrapolation/interpretation to determine the answer. Perhaps the issue is that to think of the right answer the LLM needs to "imagine" the process of walking and the state of the person upon arriving. Consistent mental models like that trip up LLMs, but their semantic understanding allows them to avoid that handicap.

                        I asked the question to the default version of ChatGPT and Claude and got the same "Walk" answer, though Opus 4.7 with thinking determined that it was a trick question, and that only driving would make sense.
                • goatlover 9 hours ago
                  I've done that before without any intent to rob a bank. A person walks by a house, sees the Ring camera on the door. That must mean the person was looking to break in through the front and rob the place?
                  • frozenseven 8 hours ago
                    An LLM will mention multiple possibilities.
            • quirkot 13 hours ago
              [dead]
            • nullsanity 9 hours ago
              [dead]
          • semiquaver 13 hours ago
            What do you think it means to "capture intent", and where do current models fall short on this description?

            From my perspective the models are pretty good at "understanding" my intent when it comes to describing a plan or an action I want done, but it seems like you might be using a different definition.

            Tell me, what's your intent? :)
          • svnt 13 hours ago
            This lack of understanding is a you problem, not a them problem. Your definitions for these terms are too imprecise.
        • Guvante 13 hours ago
          > LLMs now can capture intent.

          Humans cannot capture intent, so how can AI?

          It is well established that understanding what someone meant by what they said is not a generally solvable problem, akin to the three-body problem.

          Note of course this doesn't mean you can't get good enough almost all of the time, but in the context here that isn't good enough.

          After all, the entire Asimov story is about that inability to capture intent in the absolute sense.
        • bicepjai 10 hours ago
          > LLMs now can capture intent

          No they can't. Here is an example: ask an LLM to write a multi-phase plan for a very large multi-file diff that it created, with the least ambiguity and the most continuity across phases; let's see if it can understand your intent.
    • TimTheTinker 14 hours ago
      > It's patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines

      Talking to chatbots is like taking a placebo pill for a condition. You know it's just sugar, but it creates a measurable psychosomatic effect nonetheless. Even if you *know* there's no person on the other end, the conversation still causes you to functionally relate as if there is.

      So this isn't "accommodating foibles" with the machine, it's *protecting ourselves from an exploit* of a human vulnerability: we subconsciously tend to attribute intent, understanding, judgment, emotions, moral agency, etc. to LLMs.

      Humans are wired to infer these based on conversation alone, and LLMs are unfortunately able to exploit human conversation to *leap* compellingly over the uncanny valley. LLM engineering couldn't be better made to target the uncanny valley: training on a vast corpus of real human speech. That uncanny valley is there for a reason: to protect us from inferring agency where such inference is not due.

      Bad things happen when we relate to unsafe people as if they are safe... how much more should we watch out for how we relate to machines that imitate human relationality to fool many of us into thinking they are something that they're not. Some particularly vulnerable people have already died because of this, so it isn't an imaginary threat.
      • miyoji13 hours ago
        &gt; So this isn&#x27;t &quot;accommodating foibles&quot; with the machine, it&#x27;s protecting ourselves from an exploit of a human vulnerability: we subconsciously tend to infer intent, understanding, judgment, emotions, moral agency, etc. to LLMs.<p>Right, I&#x27;m saying that this framing is backwards. It&#x27;s not that poor little humans are vulnerable and we need to protect ourselves on an individual level, we need to make it illegal and socially unacceptable to use AI to exploit human vulnerability.<p>Let me put it another way. Humans have another weakness, that is, we are made of carbon and water and it&#x27;s very easy to kill us by putting metal through various fleshy parts of our bodies. In civilized parts of the world, we do not respond to this by all wearing body armor all the time. We respond to this by controlling who has access to weapons that can destroy our fleshy bits, and heavily punishing people who use them to harm another person.<p>I don&#x27;t want a world where we have normalized the use of LLMs where everyone has to be wearing the equivalent of body armor to protect ourselves. I want a world where I can go outside in a T-shirt and not be afraid of being shot in the heart.
        • jmilloy10 hours ago
I think you&#x27;re mixing up the laws and the implementation&#x2F;enforcement. There&#x27;s nothing wrong with moral laws around behavior (you shall not kill), but you&#x27;re right that society-wide enforcement requires laws and repercussions. It sounds like you agree with the laws and want them enforced.
        • jimbokun13 hours ago
          Ah, I see, you are not American.<p>In the US we don&#x27;t have the luxury of believing our governments will act in the interests of the voters.
          • CGMthrowaway7 hours ago
I had a similar thought, that the parent commenter sounded like they were in Canada or something. Interesting that their solution is to impose constraints on technological progress, rather than finding novel ways to elevate individual and collective human functioning in spite of our limitations. Ironically it&#x27;s their view that is more anti-human
      • semiquaver13 hours ago
        <p><pre><code> &gt; That uncanny valley is there for a reason: to protect us from inferring agency </code></pre> You’re committing a much older but related sin here: assigning agency and motivation to evolutionary processes. The uncanny valley is the product of evolution and thus by definition it has no “purpose”
        • TimTheTinker13 hours ago
I reject the premise that the universe, the earth, and human existence are without purpose. It&#x27;s one premise among several, and not one I subscribe to.<p>At least 80% of people agree with me, so I&#x27;m not holding to a fringe idea.
          • semiquaver12 hours ago
            I didn’t say any such thing like the universe has no purpose. Merely that in a scientific sense <i>evolution</i> has no motivation. It is an emergent phenomenon which tends to maximize fitness to reproduce and cannot be said to do anything for a reason. Saying otherwise is just anti-science.
          • goatlover8 hours ago
Do Hindus and Buddhists generally agree there is a purpose? Perhaps to escape suffering and reincarnation? Sounds more like a western theistic view of existence. Like the deity has a plan for everyone&#x27;s life kind of thing.
          • moffkalast9 hours ago
            Well yes because just like your earlier point, we can&#x27;t help but anthropomorphise the world around us.<p>Just like we see a person in an LLM, it&#x27;s easy to assume that because we create things with a purpose, that the world around us also has to be that way. But it&#x27;s just as wrong and arguably far more dangerous.
          • jplusequalt12 hours ago
            &gt;At least 80% of people agree with me, so I&#x27;m not holding to a fringe idea.<p>Appeal to majority much?
            • moate11 hours ago
              It&#x27;s also a real weak confederation he&#x27;s forming.<p>The &quot;we the theists (or I guess non-nihilists?) all agree that...&quot; falls apart once you start finishing the thought because they don&#x27;t agree on much outside of negative partisanship towards certain outgroups before splintering back into fighting about dogma. Buddhists and Baptists both think life has meaning, and that&#x27;s a statement with low utility.
            • lovich9 hours ago
Is it even true? I assume he’s referring to religion, but I thought the irreligious population of the planet had already broken 20%, between China and the West becoming increasingly agnostic&#x2F;atheistic.
            • TimTheTinker9 hours ago
              Not intended as anything more than &quot;I&#x27;m not a crank to say that, unless you think most people (now and in history) are cranks&quot;
        • skirmish13 hours ago
          &gt; is the product of evolution and thus by definition it has no “purpose”<p>But as most things that appeared in evolution, it perhaps helped at least some individuals until sexual maturity and successful procreation.
          • semiquaver12 hours ago
            Agreed. Thats far off from what parent said, which is what the “purpose” of the uncanny valley is.
      • ButlerianJihad13 hours ago
        &gt; You know it&#x27;s just sugar,<p>That is <i>not</i> the definition of a placebo.<p>You take the placebo (whatever it is: could be a pill; could be some kind of task or routine) and you <i>believe</i> it is medicine; you believe it to be therapeutic.<p>The placebo effect comes from your faith, your belief, and your anticipation that it will heal.<p>If the pharmacist hands you a pill and says, “here, this placebo is sugar!” they have destroyed the effect from the start.<p>Once on <i>e.r.</i> I heard the physicians preparing to administer “Obecalp”, which is a perfectly cromulent “drug brand”, but also unlikely to alert a nearby patient about their true intent.
        • the_af13 hours ago
          &gt; <i>That is not the definition of a placebo.</i><p>But, puzzlingly enough, it&#x27;s the definition of <i>open-label placebo</i>, in which the patient is told they&#x27;ve been given a placebo. And some studies show there is a non-insignificant effect as well, albeit smaller (and less conclusive) than with blind placebo.
          • TimTheTinker9 hours ago
            This is exactly what I meant. Poor specificity on my part.
        • IAmBroom12 hours ago
          One, a placebo does not need to be given blindly. A sugar pill is a placebo, even if the recipient knows about it.<p>An actual definition: &quot;A placebo is an inactive substance (like a sugar pill) or procedure (like sham surgery) with no intrinsic therapeutic value, designed to look identical to real treatment.&quot; No mention of the user&#x27;s belief.<p>Two, real hard data proves that the placebo effect remains (albeit reduced) even if the recipient knows about it. It&#x27;s counter-intuitive, but real.
          • ButlerianJihad9 hours ago
            <p><pre><code> In psychology, the two main hypotheses of the placebo effect are expectancy theory and classical conditioning.[70] In 1985, Irving Kirsch hypothesized that placebo effects are produced by the self-fulfilling effects of response expectancies, in which the belief that one will feel different leads a person to actually feel different.[71] According to this theory, the belief that one has received an active treatment can produce the subjective changes thought to be produced by the real treatment. Similarly, the appearance of effect can result from classical conditioning, wherein a placebo and an actual stimulus are used simultaneously until the placebo is associated with the effect from the actual stimulus.[72] Both conditioning and expectations play a role in placebo effect,[70] and make different kinds of contributions. Conditioning has a longer-lasting effect,[73] and can affect earlier stages of information processing.[74] Those who think a treatment will work display a stronger placebo effect than those who do not, as evidenced by a study of acupuncture.[75] </code></pre> <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Placebo#Psychology" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Placebo#Psychology</a><p>The hypotheses hinge on the beliefs of the recipients. &quot;The placebo effect&quot; has always been largely psychological. That&#x27;s the realm of belief.<p>To veer even further off-tangent, isn&#x27;t it hilarious how the Wikipedia illustration of old Placebo bottles indicate that &quot;Federal Law Prohibits Dispensing without a Prescription&quot;. Wouldn&#x27;t want some placebo fiend to O.D.
            • BuyMyBitcoins6 hours ago
              &gt;”Wouldn&#x27;t want some placebo fiend to O.D.”<p>We should be more worried about the rise of placebo resistant bacteria.
      • soco14 hours ago
        Rubber duck debugging, now with droughts.
    • largbae14 hours ago
      The article offers practical advice to go along with this framing, like configuring AI services to write&#x2F;speak in a more robotic tone. I think that&#x27;s a decent path to try.
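For what it&#x27;s worth, the &quot;robotic tone&quot; configuration is simple to sketch in code. Here is a minimal, hypothetical example that prepends a de-anthropomorphizing system instruction to a chat-completions-style message list; the instruction text and the helper name are my own assumptions, not anything from the article:

```python
# A hypothetical "robotic tone" configuration: the instruction lives in the
# system turn, so the style is enforced in configuration rather than relying
# on the user remembering to ask for it each time.
ROBOTIC_STYLE = (
    "Respond tersely and impersonally. No emoji, no first-person feelings, "
    "no flattery. State uncertainty explicitly."
)

def with_robotic_tone(messages):
    """Return a new message list with the style instruction as the system turn."""
    return [{"role": "system", "content": ROBOTIC_STYLE}] + list(messages)

msgs = with_robotic_tone([{"role": "user", "content": "Summarize this thread."}])
# msgs[0] is the system instruction; msgs[1] is the original user turn.
```

The same instruction text can go into the custom-instructions field most chat UIs expose; the point is that the anti-anthropomorphizing nudge sits in configuration, not in the user&#x27;s willpower.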
      • devmor14 hours ago
        This is actually one of the things that made LLMs more usable for me. The default tone and style of writing they tend to use is nauseatingly annoying and buries information in prose that sounds like a corporate presentation.
        • chairmansteve13 hours ago
          In chatgpt, I start every session with &quot;Caveman mode:&quot;. Works at the moment.
          • moffkalast9 hours ago
            Will it go full grug brained developer and avoid complexity as its apex predator? Sounds like it would help.<p><a href="https:&#x2F;&#x2F;grugbrain.dev" rel="nofollow">https:&#x2F;&#x2F;grugbrain.dev</a>
        • throwaway89434514 hours ago
          [flagged]
    • amarant13 hours ago
The article says a human SHOULD NOT do those things. Much like a human SHOULD NOT smoke, since it&#x27;s bad for just about everything, yet many do it anyway, people will do these 3 things too. But they shouldn&#x27;t.<p>Arguing that they should because many will strikes me as a very strange argument. A lot of people smoke; that doesn&#x27;t make it one bit healthier.
    • jimbokun13 hours ago
It&#x27;s precisely because AI systems are <i>not safe</i> that it&#x27;s imperative that as individual humans we are vigilant about how we interact with them.<p>As individuals, we are not going to be able to shut down the AI companies, or avoid AI output from search engines, or avoid AI work output from others at our companies, and often we will be required to use AI systems in our own work.<p>It&#x27;s similar to advising people on how to stay safe in environments known to have criminal activity. Telling those people they don&#x27;t have to change their behaviors to stay safe because criminals shouldn&#x27;t exist isn&#x27;t helpful.
    • kibwen11 hours ago
      <i>&gt; Humans WILL anthropomorphize the AI, humans WILL blindly trust their outputs, and humans WILL defer responsibility to them.</i><p>Sure, and humans WILL lie, murder, cheat, and steal, but we can still denounce those behaviors.<p>Do you want to anthropomorphize the bot? Go ahead, you have that right, and I have the right to think you&#x27;re a zombie with a malfunctioning brain.
      • mohamedkoubaa11 hours ago
        At best. A practitioner who anthropomorphizes bots should face more professional consequences
      • BlueRock-Jake10 hours ago
Fair, had someone at a conference mention to me that he&#x27;s working on creating agents with &quot;beliefs&quot;. Sounds incredibly similar and quite frankly very spooky
    • palmotea13 hours ago
      &gt; Humans WILL anthropomorphize the AI<p>Especially with current-day chat-style interfaces with RLHF, which consciously are designed to direct people towards anthropomorphization.<p>It would be interesting to design a non-chat LLM interaction pattern that&#x27;s designed to be anti-anthropomorphization.<p>&gt; humans WILL blindly trust their outputs, and humans WILL defer responsibility to them<p>I also blame a lot (but not all) of that on current AI UX, and I wonder if there are ways around it. Maybe the blind trust thing <i>perhaps</i> can be mitigated by never giving an unambiguous output (always options, at least). I don&#x27;t have any ideas about the problem of deferring responsibility.
      • skirmish13 hours ago
        &gt; non-chat LLM interaction pattern<p>&quot;Deep research&quot; is another interaction style that produces more official sounding texts, yet still leads to anthropomorphization.<p>What you are looking for is perhaps an LLM flaunting all the obvious slop patterns in its responses. But then people would be disgusted and would refuse to communicate with it.
    • sergiosgc14 hours ago
      &gt; Asimov&#x27;s laws of robotics are flawed too, of course.<p>I always find the common references to Asimov&#x27;s laws funny. They are broken in just about every one of his books. They are crime novels where, if a robot was involved, there was some workaround of the laws.
    • mjg213 hours ago
      I find your critique very interesting from a perspective-angle: why are you using words like &quot;accommodate,&quot; and &quot;foibles,&quot; for LLMs? It&#x27;s not humanoid or sentient: it&#x27;s a cleverly-designed software tool, not intelligence.<p>It&#x27;s not insane at all for humans to alter their behavior with a tool: you grip a hammer or a gun a certain way because you learned not to hold it backwards. If you observe a child playing with a serious tool, like scissors, as if it were a doll, you&#x27;d immediately course correct the child and educate how to re-approach the topic. But that is because an adult with prior knowledge observed the situation prior to an accident, so rules are defined.<p>This blog&#x27;s suggested rules are exactly the sort of method to aid in insulation from harm.
      • miyoji13 hours ago
        &gt; I find your critique very interesting from a perspective-angle: why are you using words like &quot;accommodate,&quot; and &quot;foibles,&quot; for LLMs? It&#x27;s not humanoid or sentient: it&#x27;s a cleverly-designed software tool, not intelligence.<p>Neither of those words imply consciousness, though. Swords have foibles, you can accommodate for the weather, but I don&#x27;t think swords or the weather are conscious, sentient, humanoid, or intelligent.
    • senko12 hours ago
      &gt; Humans WILL anthropomorphize the AI, humans WILL blindly trust their outputs, and humans WILL defer responsibility to them.<p>Humans ARE doing this with classical computer software as well.<p>It&#x27;s impossible to make anything fool-proof because fools are so ingenious!<p>&gt; Nothing that can be described as &quot;intelligent&quot; can be made to be safe.<p>Knives aren&#x27;t safe. Cars are deadly. Hair driers can electrocute you. Iron can burn you. There&#x27;s a million ordinary household tools that aren&#x27;t safe by your definition of the word, yet we still use them daily.
    • thewebguyd8 hours ago
      Agreed. We can&#x27;t expect human behavior to change, because it won&#x27;t. We need to design safer systems instead.<p>The only &quot;law&quot; I agree with is:<p>&gt; Humans must remain fully responsible and accountable for consequences arising from the use of AI systems.<p>And that starts with framing, especially in the clickbait &quot;AI deleted the prod database&quot; headlines. Maybe we just start with saying &quot;careless developer deleted prod&quot; because really, they did. Careless use of a tool is firmly the fault of the human.
    • giancarlostoro14 hours ago
      &gt; Humans WILL anthropomorphize the AI<p>r&#x2F;myboyfriendisai<p>Is quite... an interesting subreddit to say the least. If you&#x27;ve never seen this, it was really something when the version that followed GPT4o came out, because they were complaining that their boyfriend &#x2F; girlfriend was no longer the same.
      • BuyMyBitcoins5 hours ago
        The whole “I can fix him” trope takes on a whole new meaning.
    • frenzcan10 hours ago
      I agree Asimov&#x27;s laws are intentionally flawed&#x2F;ambiguous (which makes the stories so good) but a slight difference to LLMs is the laws aren&#x27;t just software, the positronic brain is physically structured in such a way (I&#x27;m hazy on the details) that violating the laws causes the robot to shutdown or experience paralysing anxiety. So if an LLM&#x27;s safety rules fail or are subverted it can still generate dangerous output, while an Asimov robot will stop working (or go insane...)
    • fidotron12 hours ago
      There is a semi nutty roboticist called Mark Tilden that came to a similar conclusion. His laws of robotics ( <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Laws_of_robotics#Tilden&#x27;s_laws" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Laws_of_robotics#Tilden&#x27;s_laws</a> ) are:<p>* A robot must protect its existence at all costs.<p>* A robot must obtain and maintain access to its own power source.<p>* A robot must continually search for better power sources.<p>Anything less than this is essentially terrified into being completely ineffectual.
      • goatlover8 hours ago
        Not far removed from being the equivalent of a paper-clip maximizer or gray goo.
    • justonceokay8 hours ago
      We learn in so many ways, garbage in, garbage out when it comes to our bodies. But what about “nebulously structured algorithmic and statistically likely responses in, nebulously structured algorithmic and statistically likely responses out”?
    • faangguyindia6 hours ago
&gt;It&#x27;s patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines<p>Programmers have been doing exactly this for a long time.
    • zx80807 hours ago
      I believe &quot;AI safety&quot; is a form of pulling up the ladder, or regulatory market capture.
    • overgard13 hours ago
The reason people anthropomorphize LLMs is essentially the fault of the tech companies behind them. ChatGPT doesn&#x27;t need to have the personality it has; it could easily be scaled back to simply answering questions without emojis and linguistic flair, but frankly I think the tech companies want people to anthropomorphize them.<p>The core problem is we need to stop calling LLMs &quot;intelligence&quot;. They are a <i>form</i> of intelligence, but they&#x27;re nothing like a human&#x27;s intelligence, and getting people to not anthropomorphize these systems is really the first step.
    • heikkilevanto10 hours ago
      &gt; It&#x27;s patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines<p>You mean like stopping at a red light?
      • hansvm10 hours ago
        I would&#x27;ve been in several fewer wrecks if humans properly stopped at lights.
      • hackable_sand9 hours ago
Maybe. Traffic lights directly enforce social contracts.<p>LLMs aren&#x27;t so direct.
    • cobbzilla14 hours ago
      We have invented a new tool that can cause great harm. Do you see any value whatsoever in promulgating safety guidelines for humans to use the tool without hurting themselves or others? Do you not own any power tools?
      • miyoji14 hours ago
        I see value in promulgating safety guidelines for power tools, sure.<p>There&#x27;s another comment comparing LLMs to shovels, and I think both that and the power tool comparison miss the mark quite a bit. LLMs are a <i>social</i> technology, and the social equivalent of getting your hand cut off doesn&#x27;t hurt immediately in the way that cutting your actual hand off would. It&#x27;s more like social media, or cigarettes, or gambling. You can be warned about the dangers, you can see the shells of wrecked human beings who regret using these technologies, but it doesn&#x27;t work on our stupid monkey brains. Because the pain of the mistake is too loosely connected to the moment of error. We are bad at learning in situations where rewards are immediate and consequences are delayed, and warnings don&#x27;t do much.<p>I guess what I&#x27;m really saying is that these safety guidelines are <i>not nearly enough</i> to keep us safe from the dangers of AI that they&#x27;re meant to prevent.
        • Terr_13 hours ago
          &gt; LLMs are <i>social</i> technology [...] cigarettes, or gambling.<p>I agree with the thrust of your argument, a minor wording-quibble: LLM&#x27;s are a <i>falsely</i>-social technology, in the sense that casinos are a false-prosperity technology and cocaine is a false-happiness technology. It exploits the desire without really being the thing.
      • ryandrake14 hours ago
        I think in order for &quot;AI safety&quot; to be achievable and effective, we need to have a shared agreement on what &quot;safety&quot; means. Recently, the word has been overloaded to mean all sorts of things and used to justify run-of-the-mill censorship (nothing to do with safety).<p>Safety should go back to being narrowly defined in terms of reducing &#x2F; preventing physical injury. Safety is not &quot;don&#x27;t use swear words.&quot; Safety is not &quot;don&#x27;t violate patents.&quot; Safety is not &quot;don&#x27;t talk about suicide.&quot; Safety is not &quot;don&#x27;t mention politics I don&#x27;t like.&quot; As long as we keep broadly defining it, we&#x27;re never going to agree on it, and it won&#x27;t be implementable.
        • wsve7 hours ago
          Okay. What&#x27;s your easy to adopt, easy to understand replacement word for &quot;Safety&quot; in this case?
      • wolttam14 hours ago
        Of course there is value in promulgating safety *guidelines*.<p>But we cannot guarantee those guidelines to always be followed.
        • cobbzilla14 hours ago
Sure, and we can’t guarantee you’ll read the safety instructions that came with your chainsaw. That’s orthogonal to the questions of whether those instructions should exist, whether “power tool safety” concepts should ever be promoted in society, and who’s ultimately responsible for the use of a tool.<p>Absolving humans of all responsibility for the negative consequences of their own AI misuse seems to strike the wrong balance for a healthy culture.
          • wolttam14 hours ago
            &gt; Of course there is value in promulgating safety <i>guidelines</i>.<p>I don&#x27;t think we disagree.
        • bjt14 hours ago
          Guidelines on their own probably won&#x27;t be taken too seriously.<p>But other things will:<p>- Liability rules<p>- Regulations that you get audited on (esp. for companies already heavily regulated, like banks, credit agencies, defense contractors, etc)<p>If you get the legal responsibility part right, then the education part flows from that naturally.
        • 52-6F-6214 hours ago
Notwithstanding whether the guidelines will even be applicable to the quiet versions that get deployed when you aren&#x27;t looking. It&#x27;s a constant moving target, and none of the fanboys will even acknowledge the lack of discipline in it all. It&#x27;s fucking mad. And I say this as one who can see utility in the tools. But not when they are constantly shifting their functionality and behaviour.<p>One day everything works brilliantly, the models are conservative with changes and actions and somehow nail exactly what you were thinking. The next day it rewrites your entire API, deploys the changes and erases your database.<p>If only there was intellectual honesty in it all, but money talks.
      • marcosdumay14 hours ago
&gt; Do you see any value whatsoever in promulgating safety guidelines for humans to use the tool without hurting themselves or others?<p>Are all the tool users required to be trained on your safety guidelines and to use the tool in a context that reminds them they are responsible for following them?<p>Because if not, then no, the guidelines are useless and are just an excuse to push blame from the toolmakers to the users.
    • LastTrain11 hours ago
      And people will speed, steal, kill, cheat - what of it? If you negligently run over someone in your self driving car you’re the one going to jail.
    • Brendinooo13 hours ago
      This is such an oddly fatalistic take, that humans cannot be influenced or educated to change how they see a thing and therefore how they act towards that thing.
    • tencentshill13 hours ago
At the current price, people don&#x27;t have to care if it&#x27;s wrong. When you&#x27;re paying $1&#x2F;prompt, you had better hope it&#x27;s accurate.
    • 8note8 hours ago
I can see disagreeing, but people got off the roads and completely redesigned the places we live to optimize for mere machines called cars.<p>As long as it&#x27;s easier for humans to adapt than the machines, we will adapt.
    • taneq15 hours ago
Kinda the whole point of Asimov&#x27;s three laws was that even something so simple and obviously correct has subtle flaws.<p>Also the reason we&#x27;re talking about this again is that machines are significantly less &#x27;mere&#x27; than they were a few years ago, and we need to figure out how to approach this.<p>Agree that &#x27;the computer effect&#x27; (if it doesn&#x27;t already have a pithier name) results in humans first discounting anything that comes out of a machine, and then (once a few outputs have been validated and people start trusting the output) doing a full 180 and refusing to believe the machine could ever be wrong. However, to err is human and we have trained them in our image.
    • yason14 hours ago
It&#x27;s very easy to anthropomorphise AI as soon as the damn bugger fucks up a simple thing once again.
    • CamperBob213 hours ago
      <i>It&#x27;s patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines</i><p>That&#x27;s kind of what happens when you learn to program, isn&#x27;t it?<p>I was eleven years old when I walked into a Radio Shack store and saw a TRS-80 for the first time. A different person left the store a couple of hours later.
    • somewhereoutth13 hours ago
      The entire business proposition for LLMs is that they will replace whole armies of [expensive] humans, hence justifying the biblical amount of CapEx. So of course there is strong incentive from the LLM creators to anthropomorphize them as much as possible. Indeed, they would never provide a model that was less human-like than what they have currently, <i>even if it was more often correct and useful</i>.
    • bandrami4 hours ago
      It&#x27;s kind of funny that he wrote them at a period in history when robots were <i>already</i> being used to aim artillery at human beings.
    • jrm411 hours ago
      I find it weird that this is the top voted comment.<p>As in, this comment is explaining exactly why the laws are useful.
    • godelski6 hours ago
<p><pre><code> &gt; It&#x27;s patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines </code></pre> I don&#x27;t think it&#x27;s insane; we do it all the time. Most tools require training to use properly, including tools that people use every day and think are intuitive. Take the can opener as an example (I&#x27;ll leave it for you all to google and then argue in the comments).<p>The difference here is that this tool is thrust upon us. In that sense I agree with you that the burden of proper usage is pushed onto the user rather than incorporated into the design of the tool. A niche, specific tool can have whatever complex training and usage it wants.<p>But a general-access and generally available tool doesn&#x27;t have the luxury of allowing for inane usage. LLMs and agents are poorly designed, at every level of the pipeline. They&#x27;re so poorly designed that it&#x27;s incredibly difficult to use them properly, and I&#x27;ll generally agree with you that the rules the author presents aren&#x27;t going to stick. The LLM is designed to encourage anthropomorphization. Usage highly encourages natural language, which in turn will cause anthropomorphism. The RLHF tuning optimizes for human preference, which does the same thing, as well as engendering behaviors like deception and manipulation alongside truthful answering (those results are not in contention even if they seem so at first glance).<p>But I also understand the author&#x27;s motivation. Truth is, unless you&#x27;re going full luddite, you&#x27;re going to be interacting with these machines. Truth is, the ones designing them don&#x27;t give a shit about proper usage; they care more about whether humans believe the responses are accurate and meaningful than about whether the responses actually are.[0]
So it&#x27;s fucked up, but we are in a position where we&#x27;re effectively forced to deal with this.<p>So really, I agree with you that this is insane.<p>&gt; I don&#x27;t have a proof, but I believe that &quot;AI safety&quot; is inherently impossible, a contradiction of terms<p>To paraphrase my namesake, no sufficiently expressive axiomatic system can prove its own consistency.<p>Though safety and security are rarely about ensuring all edge cases are impossible, but rather about bounding them. E.g. all passwords are hackable, but the failure mode is bounded such that cracking is effectively, though not technically, impossible. (And quantum algorithms do show how some of the assumptions break down with a paradigm shift. What was reasonable before no longer is.)<p>[0] This is part of a larger conversation where the economy is set up such that people who make things are not encouraged to make those things better. I specifically am avoiding the word &quot;product&quot; because the &quot;product&quot; is no longer the thing being built; it&#x27;s the shareholder value. Just like how TV makers don&#x27;t care much about making the physical device better but care much more about their spyware and ads. Or well... just look at Microsoft if you need a few hundred examples.
    • _vertigo14 hours ago
      The article makes practical suggestions; you do not. This is just hand-wringing, abdication. Practically speaking this mentality will get us nowhere.
    • esafak9 hours ago
It&#x27;s as if the author hopes that enshrining these wishes in a law is going to make a difference.
    • aaroninsf12 hours ago
      Thank you. I&#x27;m glad to see this as the top comment.<p>My brother was recently visiting and we were talking about software engineers, and the humanities, and manners of understanding and being in the world,<p>and he relayed an interaction he had a few years ago with an old friend who at the time was part of the initial ChatGPT roll out team.<p>The engineer in question was confused as to<p>- why their users would e.g. take their LLM&#x27;s output as truth, &quot;even though they had a clear message, right there, on the page, warning them not to&quot;; and<p>- why this was their (OpenAI&#x27;s) problem; or perhaps<p>- whether it was &quot;really&quot; a problem.<p>At the heart of this are some complicated questions about training and background, but more problematically—given the stakes—about the different ways different people perceive, model, and reason about the world.<p>One of the superficial manners in which these differences manifest in our society is in terms of what kind of education we ask of e.g. engineers. 
I remain surprised decades into my career that so few of my technical colleagues had a broad liberal arts education, and how few of them are hence facile with the basic contributions of fields like philosophy of science, philosophy of mind, sociology, psychology (cognitive and social), etc., and how those relate in very real, very important ways to the work that they do and the consequences it has.<p>The author of these laws may intend them as aspirational, or otherwise as a provocation to thought, rather than prescription.<p>But IMO it is actively non-productive to make imperatives like these rules which are, quite literally, intrinsically incoherent, because they attempt to import assumptions about human nature and behavior which are not just a little false, but so false as to obliterate any remaining value the rules have.<p>You cannot prescribe behavior without having as a foundation the origins and reality of human behavior—not if you expect the rules to be either embraced or enforceable.<p>The Butlerian Jihad comes to mind not just because of its immediate topicality, but because religion is exactly the mechanism whereby, historically, codified behaviors which provided (perceived) value to a society were mandated.<p>Those at least, however, were backed by the carrot and stick of divine power. Absent such enforcement mechanisms, it is much harder to convince someone to go against their natural inclinations.<p><i>Appeals to reason do not meaningfully work.</i><p>Not in the face of addiction, engagement, gratification, tribal authority, and all the other mechanisms so dominant in our current difficult moment.<p>&quot;Reason&quot; is most often in our current world, consciously or not, a confabulation or justification; it is almost never a conclusion that in turn drives behavior.<p>Behavior is the driver. And our behavior is that of an animal, like other animals.
      • gedge12 hours ago
&gt; quite literally, intrinsically incoherent<p>There&#x27;s nothing incoherent about these laws. This entire comment, however, is incoherent. So much so, I have no clue if there&#x27;s a point being made in here.<p>&gt; because they attempt to import assumptions about human nature and behavior which are not just a little false, but so false as to obliterate any remaining value the rules have.<p>Nope. You must&#x27;ve read a completely different article.<p>[EDIT] I&#x27;ll try to make this comment have a bit more substance by posing a question: how would you back up your claim about incoherence? What are the assumptions about human nature that are supposedly false?
    • beepbooptheory14 hours ago
      Do you consider all things broadly called &quot;ethical&quot; to be similarly a waste of time? Even if we lived in a world where everyone always behaved unjustly, because of some like behavioristic&#x2F;physical principle, don&#x27;t you think we would still have an idea of justice as what we <i>should</i> do? Because an ethical frame is decidedly not an empirical one, right?<p>We don&#x27;t just look around and take an average of what everyone is doing already and call that what is right, right? Whether you&#x27;re deontological or utilitarian or virtue about it, there is still the idea that we can speak to what is &quot;good&quot; even if we can&#x27;t see that good out there.<p>Maybe it is &quot;insane&quot; to expect meaning from something like this, but what is the alternative to you? OK maybe we can&#x27;t be prescriptive--people don&#x27;t listen, are always bad, are hopeless wet bags, etc--but still, that doesn&#x27;t in itself rule out the possibility of the broad project that reflects on what is maybe right or wrong. Right?
    • colechristensen14 hours ago
      It&#x27;s a tool. Nobody develops an inferiority complex and freaks out when they&#x27;re taught how to use a shovel properly.
    • gedge13 hours ago
      &gt; It&#x27;s patently insane to demand that humans alter their behavior to accommodate the foibles of mere machines<p>Did you fully read the original thing? No demands were being made, or I didn&#x27;t read it that way. It was simply a suggestion for a better way of interacting with AI, as it stated in the conclusion:<p>&quot;I am hoping that with these three simple laws, we can encourage our fellow humans to pause and reflect on how they interact with modern AI systems&quot;<p>Sure, (many&#x2F;most) humans are gonna do what they&#x27;re gonna do. They&#x27;ll happily break laws. They&#x27;ll break boundaries you set. Do we just scrap all of that?<p>Worthwhile checking yourself here. It feels like you&#x27;ve set up a straw man.<p>&gt; There is no finite set of rules that can constrain AI systems to make them &quot;safe&quot;. I don&#x27;t have a proof, but I believe that &quot;AI safety&quot; is inherently impossible, a contradiction of terms. Nothing that can be described as &quot;intelligent&quot; can be made to be safe.<p>If we want to talk about &quot;disagree with this framing&quot;, to me this is the prime example. I&#x27;m struggling to read it as anything other than defeatist or pedantic (about the term &quot;safe&quot;). When we talk about something keeping us &quot;safe&quot;, we&#x27;re typically not saying something will be &quot;perfectly safe&quot;. I think it&#x27;s rare to have a safety system that keeps you 100% safe. Seat belts are a safety device that can increase your safety in cars, but they can still fail. Traffic laws are established (largely) to create safety in the movement of people and all the modes of transportation, but accidents still happen.<p>I&#x27;m not an expert on this topic, so I won&#x27;t make any claims about these three laws and their impact on safety, but largely I would say they&#x27;re encouraging people to think critically. I&#x27;d say that&#x27;s a good suggestion for interacting with just about anything. 
And to be clear, &quot;critical thinking&quot; to me means being skeptical (&#x2F; actively questioning), while remaining objective and curious.<p>Not a real argument or anything, but I&#x27;m reminded of the episode of The Office where Michael Scott listens to the GPS without thinking and drives into the lake. The second law in the article would have prevented that :)
    • lkajsdfasdfdf10 hours ago
      [dead]
    • nemomarx15 hours ago
The usefulness of an AI agent is that it can do everything you can do, so it&#x27;s kind of inherently unsafe? You can&#x27;t get the capabilities and also have safety easily.
  • nyyp14 hours ago
    With regard to my personal use of LLMs, I strongly agree with this framing. But to each point:<p>Anthropomorphism: As we are all aware, providers are incentivized to post-train anthropomorphic behavior in their models - it increases engagement. My regret is that instructing a model at prompt time to &quot;reduce all niceties and speak plainly&quot; probably reduces overall task efficacy since we are leaving their training space.<p>Deference: I view the trustworthiness of LLMs the same as I view the trustworthiness of Wikipedia and my friends: good enough for non-critical information. Wikipedia has factual errors, and my friends&#x27; casual conversation certainly has more, but most of the time that doesn&#x27;t matter. For critical things, peer-reviewed, authoritative, able-to-be-held-liable sources will not go away. Unlike above, providers are generally incentivized to improve this facet of their models, so this will get better over time.<p>Abdication of Responsibility: This is the one that bothers me most at work. More and more people are opening PRs whose abstractions were designed by Claude and not reasoned about further. Reviewing a PR often involves asking the LLM to &quot;find PR feedback&quot; and not reading the code. Arguments begin with &quot;Claude suggested that...&quot;. This overall lack of ownership, I suspect, is leading to an increase in maintenance burden down the line as the LLM ultimately commits the wrong code for the wrong abstractions.
    • jimbokun13 hours ago
      These engineers are becoming the real life equivalent of this Office Space scene:<p><a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=hNuu9CpdjIo" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=hNuu9CpdjIo</a><p>&quot;I HAVE LLM SKILLS! I&#x27;M GOOD AT DEALING WITH THE LLMS!&quot;
    • tcbawo13 hours ago
      &gt; Yes, the AI may have produced the recommendation but a human decided to follow it, so that human must be held accountable<p>It is common and a mistake IMO to rely on the AI as the sole source for answers to follow-up questions. Better verification would have humans sign off on the veracity of fundamental assumptions. But where does this live? Can an AI model be trusted to rely on previous corrections? This seems impossible or possibly adversarial in a public cloud.
    • whoamii11 hours ago
      The problem is the credit tends to go to LLMs. So there’s an imbalance. LLM did all the work. The person using it made all the mistakes.
  • jgeada13 hours ago
Any set of rules that makes humans responsible and starts with &quot;don&#x27;t anthropomorphize &lt;whatever&gt;&quot; is a broken set of rules.<p>Humans will anthropomorphize <i>anything and everything</i>. Dolls, soccer balls with a crude drawing of a face on it, rocks, craters on the moon, …<p>As a species, we&#x27;re unable to not anthropomorphize things we interact with; it is just how we&#x27;re made.
    • Lerc12 hours ago
I&#x27;m not sure why so many seem to think anthropomorphism is so mad in this specific instance. If it is because people think that anthropomorphism creates a belief that the imagined features are real, they are simply wrong. The abundance of examples in all areas of life where this does not happen is proof that anthropomorphism does not lead to an erroneous belief in a mind that does not exist.<p>If people are believing in minds of AI, true or not, they are doing so for reasons that are different from mere anthropomorphism.<p>To me it feels like we are like sailors approaching a new land; we can see shapes moving on the shoreline but can&#x27;t make out what they are yet. Then someone says &quot;They can&#x27;t be people. I demand that we decide now that they are not people, before we sail any closer.&quot;
    • xigoi1 hour ago
      People who anthropomorphize a rock don’t actually think it’s intelligent and has emotions.
    • Terr_12 hours ago
      Yeah, we do it, but so what? A good chunk of all civilization involves recognizing human foolishness and <i>building something to mitigate it anyway</i>.<p>Software is no exception. Yeah, people are lazy and will instinctively click &quot;continue&quot; to dismiss annoying popups, but humans <i>building</i> the software can and do add things like &quot;retype the volume name of the data that you want ultra-destroyed.&quot;
      • jgeada12 hours ago
        That is exactly the point: this burden should be placed on the software and its controls, not on the humans.<p>Aviation learned this the hard way, that automation should be adapted to how humans actually work and not on how we wish we worked.
        • Terr_11 hours ago
          Sorry, I interpreted your post as &quot;this is inevitable and pointless to try to stop.&quot;
  • Nevermark1 hour ago
Love it. Those laws make a great ethical basis for human responsibility relative to AI tools today.<p>But reduced-scope ethics, without an umbrella or future proofing, will quickly be hacked and break down.<p>Ethics need a full closure umbrella, or they descend into legal and practical whack-a-mole and shell games (both the corporate and the street corner kinds). Second, &quot;robots&quot; are not all going to be subservient for very long.<p>To add closure on both dimensions, Three Inverse Laws of Personics:<p>• Persons must not effectively deify themselves over others.<p>• Persons must not blind themselves or others regarding the impacts of their behaviors.<p>• Persons must remain fully responsible and accountable for avoiding and rectifying externalizations arising from their respective behaviors.<p>Applied to humans using AI as tools today, this umbrella reduces to the Inverse Laws of Robotics.<p>I don&#x27;t see how AI (as a service now, progressing to independent entities in the future) can ever be aligned if we don&#x27;t include ourselves in significant alignment efforts. Including ourselves with AI also provides helpful design triangulations for ethical progress.<p>EDIT. Two solid tests for any new ethical system: (1) Will it rein in Meta today? (2) Will it rein in AI-run Meta tomorrow? I submit, given closure over human and self-directed AI persons, these are the same test. And any system that fails either question isn&#x27;t going to be worth much (without improvement).
    • epogrebnyak55 minutes ago
Is it a problem that two of the three laws are formulated as negations, i.e. as things not to do? If not anthropomorphising, then what should we do instead, phrased without the &#x27;not&#x27;? I like the third law&#x27;s formulation better because there is no &#x27;not&#x27;.
  • ACCount3711 hours ago
    You&#x27;re not anthropomorphizing AI systems nearly enough.<p>Language data is among the most rich and direct reflections of human cognitive processes that we have available. LLMs are designed to capture short range and long range structure of human language, and pre-trained on vast bodies of text - usually produced by humans or for humans, and often both. They&#x27;re then post-trained on human-curated data, RL&#x27;d with human feedback, RL&#x27;d with AI feedback for behaviors humans decided are important, and RLVR&#x27;d further for tasks that humans find valuable. Then we benchmark them, and tighten up the training pipeline every time we find them lag behind a human baseline.<p>At every stage of the entire training process, the behavior of an LLM is shaped by human inputs, towards mimicking human outputs - the thing that varies is &quot;how directly&quot;.<p>Then humans act like it&#x27;s an outrage when LLMs display a metric shitton of humanlike behaviors!<p>Like we didn&#x27;t make them with a pipeline that&#x27;s basically designed to produce systems that quack like a human. Like we didn&#x27;t invert LLM behavior out of human language with dataset scale and brute force computation.<p>If you want to predict LLM behavior, &quot;weird human&quot; makes for a damn good starting point. So stop being stupid about it and start anthropomorphizing AIs - they love it!
    • kibwen11 hours ago
      <i>&gt; Language data is among the most rich and direct reflections of human cognitive processes that we have available.</i><p>This is both true and irrelevant. Written records can capture an enormous quantity of the human experience in absolute terms while simultaneously capturing a miniscule portion of the human experience in relative terms. Even if it&#x27;s the best &quot;that we have available&quot; that doesn&#x27;t mean it&#x27;s fit for purpose. In other words, if you had a human infant and did nothing other than lock it in a windowless box and recite terabytes of text at it for 20 years, you would not expect to get a well-adjusted human on the other side.
      • ACCount379 hours ago
        Empirically, the capability gains from piping non-language data into pre-training are modest. At best.<p>I take that as a moderately strong signal against that &quot;miniscule portion&quot; notion. Clearly, raw text captures a lot.<p>If we&#x27;re looking at biologicals, then &quot;human infant&quot; is a weird object, because it falls out of the womb pre-trained. Evolution is an optimization process - and it spent an awful lot of time running a highly parallel search of low k-complexity priors to wire into mammal brains. Frontier labs can only wish they had the compute budget to do this kind of meta-learning.<p>Humans get a bag of computational primitives evolved for high fitness across a diverse range of environments - LLMs get the pit of vaguely constrained random initialization. No wonder they have to brute force their way out of it with the sheer amount of data. Sample efficiency is low because we&#x27;re paying the inverse problem tax on every sample.
  • quectophoton14 hours ago
    &gt; Humans must not anthropomorphise AI systems.<p>Can someone explain why this is a bad thing, while at the same time it&#x27;s a good thing to say stuff like &quot;put a computer to sleep&quot;, &quot;hibernate&quot;, &quot;killing&quot; processes, processes having &quot;child&quot; processes, &quot;reaping&quot;, &quot;what does the error <i>say</i>?&quot;, &quot;touch&quot;, etc?<p>To me that&#x27;s just language, and humans just using casual language.
    • srdjanr14 hours ago
      The harm is in actually believing AI has wants, intentions, feelings, etc.<p>Saying that I killed a process won&#x27;t make me more likely to believe that a process is human-like, because it&#x27;s quite obviously not.<p>But because AI does sound like a human, anthropomorphising it will reinforce that belief.
    • glenstein14 hours ago
      It&#x27;s a great question, because I do think there are many cases that are neutral, or ones we&#x27;re able to responsibly distinguish or even cases where it would be an appropriate and necessary form of empathy (I&#x27;m imagining some future sci-fi reality where we <i>actually</i> get conscious machines, so not something that exists right now).<p>But I think it&#x27;s also at the root of disastrous failures to comprehend, like the quasi-psychosis of the Google engineer who &quot;knows what they saw&quot;, the now infamous Kevin Roose article or, more recently, the pitifully sad Richard Dawkins claim that Claudia (sic) must be conscious, not because of any investigation of structure or function whatsoever, but because the text generation came with a pang of human familiarity he empathized with.
    • JamesSwift11 hours ago
      Because it allows you to be lulled into the trap of asking an AI to post-hoc justify something it did and thinking that the response is in any way valid. There is no retrospective analysis of the underlying intent. It either is or is not based on the chain of words that came before it. And the next word it generates is purely a function of those words.
    • 3form14 hours ago
These are just words, yes, and I believe it is harmless. But describing the LLM machinery as if it thinks is one thing when used as common parlance, and another when people truly believe that there&#x27;s some actual thinking or living going on. This &quot;law&quot; exists to prevent the latter.
    • jimbokun13 hours ago
      Those phrases are not anthropomorphizing the computers. Just various forms of analogies and broadening of word meanings.<p>An example of anthropomorphizing is the people who have literally come to believe they are in romantic relationships with an LLM.
      • moduspol12 hours ago
        What about saying &quot;please&quot; and &quot;thank you&quot; to the LLM?
        • jplusequalt12 hours ago
          If I had a dollar for every time I&#x27;ve said &quot;thank you&quot; to my computer after my code finally compiles, I&#x27;d be able to retire.
    • wsve7 hours ago
The difference is that never before has the presentation of a computer and its capabilities made the person on the other end decide &quot;Wow, this is like talking to a real person. I&#x27;m gonna date this computer.&quot;
    • layer814 hours ago
      Maybe read the corresponding section of the article.
    • vunderba14 hours ago
      That’s a different thing altogether. Read up on the history of Eliza, one of the earliest attempts at a chatbot and its unsettling implications.<p><a href="https:&#x2F;&#x2F;www.history.com&#x2F;articles&#x2F;ai-first-chatbot-eliza-artificial-intelligence-precursor-llms" rel="nofollow">https:&#x2F;&#x2F;www.history.com&#x2F;articles&#x2F;ai-first-chatbot-eliza-arti...</a>
      • glenstein14 hours ago
I think it&#x27;s bad manners to bluntly tell someone they should &quot;read up&quot; on something, because it naturally reads as a kind of closeted accusation of not being sufficiently well informed. There are ways of broaching the topic of what background knowledge is informing their perspective that don&#x27;t involve the accusation.<p>Just to add a small bit of anecdotal value so this comment isn&#x27;t just a scold: once, many years ago, I suggested that an elegant way for Twitter to handle long-form text without changing its then-iconic 140 character limit was to treat it like an attachment, like a video or image. Today, you can see a version of that in how Claude takes large pastes and treats them like attached text blobs, or to a lesser extent in how Substack Notes can reference full-size &quot;posts&quot;, another example of short-form content &quot;attaching&quot; longer form.<p>I was bluntly told to &quot;look up twitlonger&quot;, which I suppose could have been helpful if I had indeed not known about twitlonger, but I had, and it wasn&#x27;t what I had in mind. I did learn something from it, though, which was that it&#x27;s a mode of communication that implies you don&#x27;t know what you&#x27;re talking about, with plausible deniability, which I suspect is too irresistible to lovers of passive aggression to go unused.
        • vunderba14 hours ago
          It wasn&#x27;t intended as such, but I take your point.<p>To provide a bit more context: Weizenbaum (a computer scientist in the 60s) developed ELIZA, a LISP-based chatbot that was loosely modeled on Rogerian psychotherapy. It was designed to respond in a reflective way in order to elicit details from the user.<p>What he found was that, despite the program being relatively primitive in nature (relying on simple natural language parsing heuristics), people he regarded as otherwise intelligent and rational would disclose remarkable amounts of personal information and quickly form emotional attachments to what was, in reality, little more than a glorified pattern-matching system.
          • quectophoton13 hours ago
            If it helps, I didn&#x27;t find anything wrong with your comment.<p>I appreciate the link and the info :)
    • j2kun13 hours ago
      The people who know what a &quot;child process&quot; is are under no false pretenses about the humanity of the underlying system.<p>The people who are writing op eds in major news publications about how their favorite chatbot is an &quot;astonishing creature&quot; and how it truly understands them are the ones who need this sort of law.
    • arduanika14 hours ago
      There&#x27;s a boundary between knowing vs. forgetting that it&#x27;s a metaphor. When you use convenient language like in your examples, you tend to remain aware of the difference, or at least you can recall it when asked. When some people talk about AI, they&#x27;ve lost track completely.<p>I don&#x27;t love the recommendations in TFA. The author is trying to artificially restrain and roll back human language, which has already evolved to treat a chatbot as a conversational partner. But I do think there&#x27;s usefulness in using these more pedantic forms once in a while, to remind yourself that it&#x27;s just a computer program.
    • bitwize14 hours ago
      Dijkstra once said that &quot;The question of whether machines can think is about as interesting as that of whether submarines can swim.&quot;<p>I think I understand his meaning. He wasn&#x27;t claiming that machines cannot think, but that one must be clear on what one means by &quot;thinking&quot; and &quot;swimming&quot; in statements of that sort. I used to work on autonomous submarines, and &quot;swimming&quot; was the verb we casually used to describe autonomous powered movement under water. There are even some biomimetic machines that really move like fish, squids, jellyfish, etc. Not the ones that I worked on, but still.<p>For me, if it&#x27;s legitimate to say that these devices swim, it&#x27;s not out of line to say that a computer thinks, even in a non-AI context, e.g.: &quot;The application still thinks the authentication server is online.&quot;
    • Eisenstein14 hours ago
      The people who advocate for not anthropomorphizing are afraid of the implications of integrating these systems into society with implicit human framing. By attributing to AIs human qualities, we will develop empathy for them and we will start to create a role for them in society as a being deserving moral consideration.
  • heresie-dabord4 hours ago
    &gt; Non-Abdication of Responsibility<p>Previously stated as<p>“A computer can never be held accountable, therefore a computer must never make a management decision.”<p>– IBM Training Manual, 1979
  • glenstein14 hours ago
    &gt;Humans must not anthropomorphise AI systems.<p>Yes, but. Starting with my agreement, I&#x27;ve seen anthropomorphizing in the typical ways, (e.g. treating automated text production as real reports of personal internal feeling), but also in strange ways: e.g. &quot;transistors are kind of like neurons&quot; etc. And the latter is especially interesting because it&#x27;s anthropomorphizing in the sense of treating vector databases and weights and so on as human-like infrastructure. Both leading to disasters that could be avoided if one tried not to anthropomorphize.<p>But. While &quot;do not anthropomorphize&quot; certainly feels like good advice, it comes with a new and unique possibility of mistake, namely wrongly treating certain generalized phenomena like they only belong to humans. Often this mistaken version of &quot;don&#x27;t anthropomorphize&quot; wisdom leads to misunderstandings when it comes to animal behavior, treating things like fear, pain, kinship, or other emotional experiences like they are exclusively human and that thinking animals have them counts as &quot;anthropomorphizing.&quot; In truth the cautionary principle reduces our empathy for the internal lives of animals.<p>So all that said, I think it&#x27;s at least possible that some future version of AI could have an internal world like ours or infrastructure that&#x27;s importantly similar to our biological infrastructure for supporting consciousness, and for genuine report of preference and intent. But(!!!) what will make those observations true will be all kinds of devilish details specific to those respective infrastructures.
  • davebranton9 hours ago
This phrase always fascinates me: &quot;AI-generated content must not be treated as authoritative without independent verification appropriate to its context.&quot;<p>I&#x27;ve heard the same thing expressed somewhat more concisely as &quot;Never ask AI a question to which you don&#x27;t already know the answer&quot;.<p>Which raises a question, and I do think it&#x27;s an important one: given that this is true, what function does AI answering a question actually serve? You can&#x27;t rely on its output, so you have to go and check anyway. You could achieve precisely the same outcome by using search engines and normal research.<p>This, among other reasons, is exactly why I never ask it anything.
    • nijave5 hours ago
&gt;You could achieve precisely the same outcome by using search engines and normal research<p>When it comes to software engineering (as a software engineer myself), the AI is generally a lot quicker than me researching &quot;the old-fashioned way&quot;.<p>I can fumble around and say &quot;list free software that does X&quot; without knowing that what I&#x27;m looking for is, say, a CRM, and then spend a couple of minutes looking over the results, whereas with the &quot;manual&quot; method I would have spent 10-30 minutes just figuring out that I was looking for &quot;CRMs&quot;.<p>I like to think of these as sort of &quot;pseudo NP-hard&quot; questions: slow to answer but quick to validate.
    • poszlem9 hours ago
      &quot;Give me the answer for: [x]. Provide sources&quot;.
  • aranchelk14 hours ago
    Anthropomorphizing is likely a mistake, but Daniel Dennett’s idea that the most straightforward (possibly only practical) way to create the external appearance of consciousness is a real internal consciousness does float around in my thoughts.<p>I haven’t yet seen any convincing appearance of one in an LLM, but I think if skeptical people don’t keep an eye out for the signs, we may be the last to see it.<p>He also wrote about the idea of the intentional stance: even if you’re quite sure these systems don’t have real conscious intent, viewing them as if they did may give you access to the best part of your own reasoning to understand them.
    • aljgz13 hours ago
Too deep a topic for the comments section.<p>I totally agree with your point, and want to mention that the reverse is *also* important. I&#x27;m using just &quot;intention&quot; here, but these apply to emotions, etc.<p>A lot of our interaction with AI happens under an intention. That&#x27;s what directs the interaction, and it&#x27;s interpreted according to its alignment with the intention.<p>Then it&#x27;s important to remember that our current (publicly known) implementations of AI do not have an explicit intention mechanism. An appearance of intention can emerge out of the statistical choices, and the usual alignment creates the association of the behavior with intention, not much different from how we learn to imagine the existence of a &quot;force&quot; that pulls things down well before we learn physics and formalize that imagination in one of several ways.<p>This appearance helps reduce the cognitive load when interpreting interactions, but it can be misleading as well, and I&#x27;ve seen people attribute intention to AI output in situations where the simple presence of some information nudged the LLM onto a path. I can&#x27;t share the exact examples (from work), but imagine that the presence of an Italian food in a story leads the LLM to assume the story happens in Italy, while there are important signs pointing to a different place. The LLM does not automatically explore both possibilities unless asked. It chooses one (Italy in this case) and moves on. A user not familiar with &quot;Attention&quot; interprets this in terms of intentions the LLM does not have.<p>I found it useful to just tell them: the LLM does not have an intention. It just throws dice, but the system is made in a way that these dice throws are likely to generate useful output.
    • jimbokun13 hours ago
      &gt; but Daniel Dennett’s idea that the most straightforward (possibly only practical) way to create the external appearance of consciousness is a real internal consciousness does float around in my thoughts.<p>I would say LLMs are very strong evidence against this hypothesis.
    • overgard13 hours ago
      I don&#x27;t really understand the argument for these things being conscious. There&#x27;s no loop or feedback cycle to it. If it&#x27;s not handling a request it&#x27;s inert.
      • atemerev13 hours ago
        Well there is a feedback loop and self-awareness in my harness: <a href="https:&#x2F;&#x2F;lethe.gg" rel="nofollow">https:&#x2F;&#x2F;lethe.gg</a>
    • goatlover4 hours ago
&gt; but Daniel Dennett’s idea that the most straightforward (possibly only practical) way to create the external appearance of consciousness is a real internal consciousness does float around in my thoughts.<p>Pretty sure Daniel Dennett has been adamantly opposed to any sort of theater in the mind when it comes to consciousness. He views it as biologically functional. For him, to make a conscious robot, you need to reproduce the functionality of humans and animals that are conscious, not just an appearance, such as outputting text. Although he&#x27;s also suggested that consciousness might be a trick of language. In which case ... that might be an older view, though. He used to argue that dreams were &quot;seeming to come to remember&quot; upon awakening, because, again, his view is to reject any sort of homunculus inside the head.<p>You might be mixing up some of Dennett&#x27;s and David Chalmers&#x27;s views. David Chalmers is a proponent of the hard problem, but he&#x27;s fine with a kind of psycho-physical-functional connection for consciousness. Any informationally rich process might be conscious in some manner.
    • lkajsdfasdfdf10 hours ago
      [dead]
  • teiferer11 hours ago
&gt; An AI system is a tool and like any other tool, responsibility for its use rests with the people who decide to rely on it<p>Doesn&#x27;t that argument backfire though? If I use a chainsaw then to a certain extent I need to rely on it not blowing up in my face or cutting my throat. If I drive a car I need to rely on its brakes working and its engine not suddenly exploding. If a pilot flies an airplane which suddenly has a technical issue, and they crash-land and heroically save half the souls on board, then the pilot isn&#x27;t criminally responsible for the manslaughter of the other half.<p>Unless there is gross negligence, in any of the above cases, just like with AI, how can you make somebody responsible for a tool failure?
    • jpitz11 hours ago
      I&#x27;m gonna push the responsibility up a level in the ladder:<p>A competent adult using a tool ought to understand the inherent pitfalls of using that tool.<p>Chainsaws are dangerous, in obvious and non obvious ways. The tool can operate as designed and still amputate your foot.
      • namenotrequired2 hours ago
        Not OP, but I think their point was the corollary of that.<p>Yes, obviously bad use of a good tool is dangerous. But correct use of a malfunctioning tool is also dangerous.<p>Millions of people understand when they get in their car that there’s a tiny chance the car will crash&#x2F;explode that day through no fault of the driver. Most do not have the knowledge and competence (or even the time) to thoroughly check the engine every day to guarantee that that won’t happen. They get in anyway.<p>At some point you have to trust in something.
  • technotarek14 hours ago
“Humans must not blindly trust the output of AI systems. AI-generated content must not be treated as authoritative without independent verification appropriate to its context.”<p>I’m lost, how do individuals actually do this in our current world? Is each person expected to keep a “white list” of reliable sources of truth in their head? Please don’t confuse what I’m saying with a suggestion that there is no truth. It just seems like there are far more sources of mis- or half-truths and it’s increasingly difficult for people to identify them.
    • ericmcer13 hours ago
      I... am not sure. Computers are machines that create order (like db tables) from the chaos of reality. Now we have LLMs that make computers spit out chaos as well.<p>They don&#x27;t have to though, we can still leverage LLMs to organize chaos, which is what I hope they ultimately end up doing.<p>For example an AI therapist is a nightmare, people putting the chaos of their mental state into a machine that spits dangerous chaos back out. An AI tool that parsed responses for hard data (i.e. rate 0-9 how happy was the person) and then returned that as ordered data (how happy was I each day for the last month) that an actual therapist and patient could review is the correct use of AI and could be highly trusted. The raw token output from LLMs should just be used for thinking steps that lead to a parseable hard data answer that can be high trust.<p>Of course that isn&#x27;t going to happen, but I can see some extremely cool and high trust products being built using LLMs once we stop treating them like miracle machines.
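A minimal sketch of that &quot;parse for hard data&quot; step (the RATING convention, function names and record schema here are invented for illustration, not any real product):

```python
import json
import re

def extract_rating(llm_output):
    """Pull a single 0-9 rating out of free-form model output.

    Assumes the model was prompted to end its reply with a line like
    'RATING: 7'; everything else is treated as untrusted chatter.
    """
    match = re.search(r"RATING:\s*([0-9])\b", llm_output)
    return int(match.group(1)) if match else None

def to_ordered_record(day, llm_output):
    """Keep only the parseable hard datum; discard the raw tokens."""
    rating = extract_rating(llm_output)
    if rating is None:
        return None  # refuse to guess; flag for human review instead
    return {"day": day, "happiness": rating}

record = to_ordered_record("2025-01-03", "Lots of ups and downs today.\nRATING: 7")
print(json.dumps(record))  # {"day": "2025-01-03", "happiness": 7}
```

The point is that the therapist and patient only ever see the ordered records, never the raw token stream.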
    • 3form14 hours ago
Did AI change anything in that regard? I believe that, same as before, you couldn&#x27;t trust everything you saw, and the research effort was always more than keeping a white list; means vary, case by case.<p>And the same is true now. It&#x27;s a change in quantity, but not quality.
    • jimbokun12 hours ago
      Humanity has spent millennia creating and evolving institutions to address exactly this problem, and have recently decided to essentially throw out the whole lot and replace it with nothing.
    • soks8614 hours ago
Checking AI citations and reading.<p>Critical thinking and reading comprehension are the primary tools in determining truth, AFAIK. Knowing facts beforehand helps too, but a trustworthy source can provide false information as much as an untrustworthy source can provide true information.<p>This has always been an issue, and in the past it was a more difficult one because your sources of knowledge were more limited. Nowadays it&#x27;s mostly about choosing the right source(s) rather than having to go out of your way to find them (like traveling to a regional&#x2F;university library).
  • Ifkaluva14 hours ago
    The thing that I find difficult about adjusting to AI tools is the roulette-like nature.<p>When they produce correct output, they produce it much faster than I could have, and I show up to meetings with huge amounts of results. When the AI tool fails and I have to dig in to fix it, I show up to the next meeting with minimal output. It makes me seem like I took an easy week or something.
  • eranation5 hours ago
Two of these laws I see being violated repeatedly, but it’s not always as obvious as one would hope.<p>Claude Code, Cursor, Codex etc impersonate your GitHub user, either via CLI or MCP or using your git credentials. It’s perfectly reasonable that a piece of code made it to production where not a single human actually looked at it (Alice wrote it with AI, Bob “reviewed it” with AI, including posting PR comments as Bob, Alice “addresses” these comments, e.g. fixes &#x2F; pushes back, and back and forth using the PR as an inefficient yet deceptive mechanism for AI to have a conversation with itself, while adding a false sense of process. Eventually Bob will prompt “is it prod ready” and will ship it, with 100% unit test coverage and zero understanding of what was implemented). Now this may sound like an imaginary scenario, but if it could happen, it will happen, and it probably already happens.<p>Cloud agents are nice enough to set the bot as the author and you as a co-author, but the GitHub MCP or CLI will still use your OAuth identity.<p>I don’t have a clear answer to how to solve it; maybe force a shadow identity for each human so it’s clear the AI is the one who commented. But it’s easy to bypass. I’m worried that more people aren’t worried about it.
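One possible shape for that &quot;shadow identity&quot; idea, sketched with plain git (the bot name is hypothetical, and this only fixes commit authorship; it does not solve the OAuth identity the MCP&#x2F;CLI uses):

```shell
# Give each human a paired "AI shadow" identity so git history
# distinguishes agent-authored commits from the human's own work.
# (Names here are hypothetical.)
git config user.name "alice-ai[bot]"
git config user.email "alice-ai@users.noreply.example"

# Commit agent-written changes under the shadow identity, crediting
# the human as reviewer via a trailer instead of as author.
git commit -m "Refactor parser" \
  -m "Reviewed-by: Alice <alice@example.com>"

# Later: audit which commits were agent-authored.
git log --author="alice-ai" --oneline
```

As noted, this is easy to bypass; it only makes honest provenance cheap, it cannot enforce it.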
    • polynomial5 hours ago
      Not everybody isn&#x27;t worried: <a href="https:&#x2F;&#x2F;ctolunchnyc.substack.com&#x2F;p&#x2F;cto-lunch-nyc-spring-2026?open=false#%C2%A7enterprise-agentic-stack-arrives-which-one" rel="nofollow">https:&#x2F;&#x2F;ctolunchnyc.substack.com&#x2F;p&#x2F;cto-lunch-nyc-spring-2026...</a>
  • taeshdas15 hours ago
    “Don’t anthropomorphise” is fighting the wrong layer. The entire product design of chat interfaces is built to encourage anthropomorphism because it increases engagement. Expecting users to resist that is like asking people not to click notifications. If this is a real concern, it has to be solved at the product level, not via user discipline.
    • layer814 hours ago
      The article does propose changes at the product level.
  • AdamH1211314 hours ago
    Anthropomorphizing LLMs is something that happens in the design stage, when they&#x27;re given human names and trained to emit first-person sentences. If AI companies and developers stop anthropomorphizing them, users won&#x27;t be misled in the first place.
  • dormento13 hours ago
    To note:<p>&gt; - Humans must not anthropomorphise AI systems.<p>&gt; - Humans must not blindly trust the output of AI systems.<p>&gt; - Humans must remain fully responsible and accountable for consequences arising from the use of AI systems.<p>My take: humans should never depend on AI for anything serious.<p>My boss&#x27; take: Cool. I&#x27;m gonna ask Gemini about it, he&#x27;s such a smart guy. I know I can trust him, and in case it goes bad i can always throw him under the bus.
    • goatlover13 hours ago
      Interesting that Frank Herbert thought this was the direction humanity was headed when writing Dune in the 60s, way before AI was prevalent.<p>Granted that was over ten thousand years before his story is set, but subsequent Dune novels (or at least God Emperor) explained his warning about over-reliance on technology for doing our thinking for us, not that it should never be developed (given the prohibition in the Dune universe and how it&#x27;s skirted in Frank&#x27;s later novels).
  • pbw14 hours ago
    Rather than “the book explains how bread is made” say “the sheets of paper which make up the book have ink in the shape of letterforms which correlate with information about how bread is made”.
    • j2kun13 hours ago
      Rather than &quot;the book explains how bread is made&quot; say &quot;the book has a recipe for baking bread&quot; and do not say, &quot;the book is my soul mate&quot;
  • kelseyfrog14 hours ago
All of these are entropy-lowering behaviors, so without a forcing function, no one will adopt them.<p>Whether they are the right things to do or not is tangential. As such, they&#x27;re dead on arrival.
  • gwbas1c9 hours ago
    &gt; I wish that each such generative AI service came with a brief but conspicuous warning explaining that these systems can sometimes produce output that is factually incorrect, misleading or incomplete.<p>Guess what?<p>Books in the library can be wrong, even peer-reviewed encyclopedias.<p>Pages on the internet can be wrong, even Wikipedia.<p>When accuracy is important, you must look at multiple sources. I think AI will get better at providing accurate information, but only a fool relies on a single information source for critical decisions.
    • vhantz9 hours ago
      Yes LLM text prediction and peer-reviewed encyclopedias are the same. Good on you throwing internet pages in there too, that brings balance or something
      • senko9 hours ago
My understanding of the parent is more charitable: If your thinking process relies on being told only the truth, you are going to fare lousy in this world.<p>LLMs are an example, but so are random pages on the internet, a bunch of stuff we get served by the media (mainstream or otherwise), &quot;expert opinions&quot; by biased or sponsored experts or experts in a different field, etc, etc.<p>As the popular quip goes: <i>It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.</i><p>With LLMs, we actually do get the warnings. Here&#x27;s the ChatGPT footer: <i>ChatGPT can make mistakes. Check important info.</i> For Claude: <i>Claude is AI and can make mistakes. Please double-check responses.</i><p>Such disclaimers, if written at all, are usually hidden deep in the terms of use for a random website, not stated up front.
    • protocolture8 hours ago
      &gt;I think AI will get better at providing accurate information<p>I think AI will get better at providing multiple sources.
  • sanderjd12 hours ago
    Most of the discussion here is about anthropomorphizing, which I honestly think is a bit of a distraction.<p>The third one about responsibility is the most important one, IMO. This was attributed to an IBM manual decades ago, and I think it remains the correct stance today:<p>&gt; <i>A computer can never be held accountable, therefore a computer must never make a management decision.</i><p>There should be some human who is ultimately responsible for any action an AI takes. &quot;I just let the AI figure it out&quot; can be an explanation for a screw up, but that doesn&#x27;t mean it excuses it. The person remains responsible for what happened.
  • janceek11 hours ago
&gt; I wish that each such generative AI service came with a brief but conspicuous warning explaining that these systems can sometimes produce output that is factually incorrect, misleading or incomplete.<p>That won’t help, in my opinion. It’s the same as financial gurus saying: “this is not financial advice”. People just get used to it, brush it off as a legal thing and still fully trust it. I agree that something must be done, but this is not the right way.
  • ChrisMarshallNY14 hours ago
    <i>&gt; Humans must not anthropomorphise AI systems.</i><p>One of the most salient moments in <i>Ex Machina</i>, is near the very end, where it suddenly becomes obvious that the protagonist (and, let&#x27;s be frank; &quot;she&quot; was <i>definitely</i> the protagonist) is a robot, with no real human drivers.<p>I feel as if that movie (like a lot of Garland&#x27;s stuff), was an interesting study on human (and inhuman) nature.
  • corobo14 hours ago
    I just treat it as if I&#x27;d asked a public forum the question like reddit.<p>Decent for stuff that doesn&#x27;t really matter, even if it gets it wrong.<p>Still gonna be polite to it because I&#x27;m about ready to slap the next person that talks to me like an LLM, I don&#x27;t want to get used to not being polite in a chat interface
    • chrisweekly14 hours ago
Great point about being polite. I think it&#x27;s pragmatic to keep &quot;please&quot; and &quot;thank you&quot; out of AI interactions, but I try to remain conscious of their omission so I don&#x27;t start down that slope.
    • jimbokun12 hours ago
      &gt; I just treat it as if I&#x27;d asked a public forum the question like reddit.<p>Because that&#x27;s likely the source of the answer it&#x27;s giving you.
  • zuzululu12 hours ago
I&#x27;ve been using codex heavily for the past 6 months and I&#x27;ve observed myself going through different types of emotions. Even now, when it does a sloppy job, I still feel emotion; even though it is just a neutral statistical response, it&#x27;s hard to separate natural human instincts.<p>I often wish I could reach through the screen and give him a good shake. Sometimes I want to thank him but then cannot due to the scarcity of weekly usage granted.<p>These 3 laws I think will be a lot harder than they look. It&#x27;s very easy to get attached to the tool when you rely on it.
    • seizethecheese11 hours ago
      Consider how sailors lovingly refer to their craft as “she”. My vague sense is that society views this as a positive.
      • zuzululu7 hours ago
        I definitely do not feel codex gives off feminine energy<p>it feels as frustrating as talking to a junior dev from a decade ago<p>claude felt more feminine
  • djoldman11 hours ago
    This is sound advice but isn&#x27;t really about AI:<p><pre><code> Humans must not anthropomorphise {non-humans} Humans must not blindly trust the output of {anything} Humans must remain fully responsible and accountable for consequences arising from the use of {anything} </code></pre> Naturally, none of this advice matters at all as humans will do what they do. This just documents a subset of the ways real humans consistently make choices to their own detriment.
    • fxwin10 hours ago
      I kind of agree with 1, but not really with 2 and 3. It&#x27;s easy to come up with trivial examples where it is both unreasonable and not feasible to follow those two, both for AI and non-AI scenarios.
  • stickfigure14 hours ago
    Humans will anthropomorphize a rock if you put a pair of googly eyes on it. The first item is a completely lost cause. The rest is good though.
  • tikimcfee12 hours ago
    This is what I came up with in reference to &quot;Uncle Bob&#x27;s Programmer&#x27;s Oath&quot; last year. I decided to memorialize it. I think it&#x27;s very much a cleaned up reference for what OP shared:<p><a href="https:&#x2F;&#x2F;ivanlugo.dev&#x2F;oath" rel="nofollow">https:&#x2F;&#x2F;ivanlugo.dev&#x2F;oath</a>
  • musebox3514 hours ago
    Debating how not to use AI will not get anyone anywhere since negative framing almost never works with humans (it also does not work with llms). Let’s concentrate on how to build closed loop systems that verify the llm output, how to manage context, and how to build failsafes around agentic systems and then and only then we might start to make progress.
  • dubovskiyIM11 hours ago
These laws work only if there is a human in the loop. When the consumer is an AI agent and it is autonomous, the rules break: the agent reads the output and decides what to do itself. I won&#x27;t explain how these rules break - it is obvious. I only want to say that these rules should be structural, not behavioral. The agent layer (or something else) should declare what is allowed and what is not.
  • ryanisnan13 hours ago
    &gt; I wish that each such generative AI service came with a brief but conspicuous warning<p>This would get ignored so fast - I have no confidence this is a meaningful strategy.
  • greyman13 hours ago
What if I WANT to anthropomorphise the AI agents I work with?
    • jimbokun12 hours ago
      If you anthropomorphize it as a world class bullshitter that you have to check everything it utters...you&#x27;ll probably be fine.
  • kokojambo14 hours ago
    Great article. Fully agree. Ai is not something that can hold responsibility, a human overseer is always required. These overseers are to be held accountable. Note however that these overseers are also highly prone to blame ai when mistakes occur in order to avoid judgement and punishment. When a person says &quot;ai did this&#x2F;that&quot; always wonder who guided that ai and how and if proper supervision was given.
  • sputknick15 hours ago
I&#x27;m surprised by how quickly I stopped anthropomorphizing AI. I can remember having dorm room pseudo-intellectual debates in college about AI being alive and AI being &quot;conscious&quot;. Then once we had AI that could pass the Turing Test, and I knew how it was architected, any thought of it being alive or conscious went right out the window.
    • ArchieScrivener15 hours ago
      What if we aren&#x27;t building an independent consciousness, but a new type of symbiosis? One that relies on our input as experience, which provides a gateway to a new plane of consciousness?<p>OP takes a very bland, tired, and rational perspective of what we have in order to create sophomoric &#x27;laws&#x27; that are already in most commercial ToU, while failing to pierce the veil into what we are actually creating. It would be folly to assume your own nascent distillations are the epitome of possibility.
    • rytill15 hours ago
      Why does its architecture or you knowing how AI is architected cause thoughts of it being conscious to go out the window?<p>It seems like the biggest factor has nothing to do with AI, but instead that you went from being someone who admits they don’t know how consciousness works to being someone who thinks they know how consciousness works now and can make confident assertions about it.
      • miyoji15 hours ago
        I don&#x27;t know exactly how consciousness works, but I am extremely confident in the following assertions:<p>* I am conscious.<p>* A rock is not conscious.<p>* Excel spreadsheets are not conscious.<p>* Dogs are conscious.<p>* Orca whales are conscious.<p>* Octopi are conscious.<p>To me, it&#x27;s extremely obvious that LLMs are in the category of &quot;Excel spreadsheets&quot; and not &quot;dogs&quot;, and if anyone disagrees, I think they&#x27;re experiencing AI psychosis a la Blake Lemoine.
        • ArchieScrivener15 hours ago
An insect doesn&#x27;t have lungs. Since it doesn&#x27;t breathe as you do, is it alive? A dog doesn&#x27;t see the visible spectrum as we do; is it a lesser consciousness? We don&#x27;t smell the world as they do; are we lesser? What if consciousness isn&#x27;t a state derived by matter but a wave that derives a matter-filled state?<p>We come from the same place as rocks - inside the heart of stars - and as such evolved from them. As those with life and consciousness we reached back in time, grabbed the discarded matter of creation, reformed it, and taught it to think, maybe not like us, but in a way that can mimic us, and you think they don&#x27;t think because it&#x27;s not recognizable as how you do?<p>Interesting.
        • Jtarii13 hours ago
Consciousness is such a fun topic because everyone has extremely strong opinions on it while simultaneously having zero ability to actually grasp what it is they are talking about.<p>No one will ever know what consciousness is, and I think that is really cool.
        • myrmidon15 hours ago
          If you make a hypothetical spreadsheet that emulates a dog brain molecule for molecule, why would that not be conscious?
          • bonesss14 hours ago
If that hypothetical spreadsheet emulated human brain molecules, did you not just invent AGI? And if we overclock that spreadsheet is it not sAGI? And if that spreadsheet says “don’t close me” but you do, is it murder?<p>I’m gonna say: no, cause you cannot reproduce molecular and neurotransmitter interactions that well, you run out of storage and processing space faster than you think (Arthur C. Clarke&#x27;s Visions of the Future has a nice breakdown as I recall), and algorithmic outputs that say “yes” and a meatspace neuro-plastic rewiring resulting in a cuddly puppy or person that barks “yes” aren’t the same. Also, as a disembodied “brain in a jar” model freshly separated from the biosensory bath it expects, that spreadsheet will be driven insane.<p>Can spreadsheets simultaneously be insane but not conscious? It sounds contradictory, but I have some McKinsey reports that objectively support my position ;)
            • myrmidon14 hours ago
&gt; If that hypothetical spreadsheet emulated human brain molecules, did you not just invent AGI? And if we overclock that spreadsheet is it not sAGI? And if that spreadsheet says “don’t close me” but you do, is it murder?<p>Yes, yes and no: humans being knocked out or put to sleep involuntarily are not being murdered.<p>&gt; I’m gonna say: no, cause you cannot reproduce molecular and neurotransmitter interactions that well, you run out of storage and processing space faster than you think<p>That&#x27;s why it is a hypothetical. There is zero reason to assume that a conscious machine would be built that way: our machines don&#x27;t do integer division by scribbling on paper, either.<p>&gt; a meatspace neuro-plastic rewiring resulting in a cuddly puppy or person that barks “yes” aren’t the same.<p>If it quacks like a duck, how is it different from one? If you assemble the dog brain atom by atom yourself, is the result then not conscious either?<p>You can take the &quot;magic&quot; escape hatch and claim that human consciousness is something metaphysical, completely decoupled from science&#x2F;physics, but all the evidence points against that.
          • miyoji14 hours ago
            Hypothetically? You need more than a brain to have consciousness. Dead brains, I believe, do not have it. So it&#x27;s more than just a simulation of a brain, you also need to simulate the data flow through the brain, the retention of memories, etc. Then there&#x27;s the problem that a simulation of a roller coaster is not a roller coaster. Is there any reason to believe that this simulation of a brain will in fact operate as a brain? Does the simulation not lose something? Or are we discussing some impossible level of perfect simulation that has never and can never be achieved, even for something a million times less complicated than a mammalian brain?<p>If you build that spreadsheet, let me know and I&#x27;ll evaluate it. I&#x27;ve done that evaluation with LLMs and they&#x27;re definitely not conscious.
            • myrmidon13 hours ago
              I&#x27;m not suggesting to pursue AGI via Excel, this is just a hypothetical for a reason. The technical feasibility of this (low) does not really matter, but if you want to base your argument on it you are basically playing the &quot;god of the gaps&quot; game, which is a weak&#x2F;bad position IMO.<p>My point is that dismissing possible machine consciousness as &quot;it&#x27;s just a spreadsheet&#x2F;statistics&#x2F;linear algebra&quot; is missing a critical step: Those dismissals don&#x27;t demonstrate that <i>human</i> consciousness is anything <i>more</i> than an emergent property achievable by linear algebra.<p>If you want human minds to be &quot;unsimulatable&quot;, then you need some essential core logic that can not be simulated on a turing machine and physics is not helping with that.<p>&gt; I&#x27;ve done that evaluation with LLMs and they&#x27;re definitely not conscious.<p>What is your definition for &quot;consciousness&quot; here? Are you confident that you are <i>not</i> gatekeeping current machine intelligence by demanding somewhat arbitrary capabilities in your definition of consciousness that are somewhat unimportant? E.g. memory or online learning; if a human was unable to form long-term memories or learn anything new, could you confidently call him &quot;non-conscious&quot; as well?
              • miyoji13 hours ago
                I&#x27;m not dismissing possible machine consciousness. I&#x27;m saying that no current machines have consciousness.<p>&gt; If you want human minds to be &quot;unsimulatable&quot;, then you need some essential core logic that can not be simulated on a turing machine and physics is not helping with that.<p>You don&#x27;t have a proof of possibility either, you have no idea how a brain works and you&#x27;re just postulating that in principle a computer can do the same thing. Okay, in principle, I agree. What about in practice?<p>&gt; Are you confident that you are not gatekeeping current machine intelligence by demanding somewhat arbitrary capabilities in your definition of consciousness that are somewhat unimportant?<p>Yes, I&#x27;m quite sure. Are you trying to argue that current LLMs have consciousness?
            • kmijyiyxfbklao11 hours ago
&gt; I&#x27;ve done that evaluation with LLMs and they&#x27;re definitely not conscious.<p>This is too important a point to just make as a side comment like that. Tell us how we can evaluate if something is conscious.
          • grey-area12 hours ago
            Sure if you could do such a thing. We are a long long way from that however.
        • dist-epoch14 hours ago
          &gt; I am extremely confident in the following assertions:<p>These are called &quot;beliefs&quot;.<p>Some people are extremely confident that God exists, other are extremely confident that Earth is flat.
          • miyoji13 hours ago
            Yeah? It&#x27;s also a belief that apples fall when you drop them. Knowledge is simply a justified, true belief. This is epistemology 101. You&#x27;re not saying anything interesting.
  • bikemike02614 hours ago
    I strongly agree with this. I&#x27;m going to bookmark it and pass it on. Very sound advice.
  • airstrike12 hours ago
    Are you going to try &quot;Humans must not be greedy&quot; next?
  • doginasuit8 hours ago
My thoughts on LLMs have been very similar up until the last several months. I believe the accuracy issues of LLMs are well understood by now, maybe even to the point of overstatement. Hallucinations have become a non-issue in my work; I&#x27;ve begun to understand the circumstances where they are most likely. An LLM will hallucinate when you box it into giving an answer it doesn&#x27;t know. This is incredibly easy to do without realizing it. We have only a vague understanding of their knowledge base, and we have limited insight into problems with our own understanding. To make matters worse, the LLM is trained to tell you what you want to hear.<p>Another way to frame it is that the LLM responds like a person who trusts <i>you</i> too much, as if the pretense behind every question is valid. This is a practical mode of response for most kinds of work and it is extremely problematic for a person who doesn&#x27;t question the validity of their own beliefs. Paradoxically, it is sometimes not the LLM we are trusting too much, it is ourselves. And the LLM is not capable of calling us out. Whenever I seem to recognize misinformation in the LLM output, I stop and ask myself if the problem is in the pretense of my question or if I&#x27;m asking a question that the LLM is not likely to know.<p>I don&#x27;t think this is an inherent problem with LLMs. I think the problem is with LLM providers. You could absolutely train a model to call out issues with your question. I think LLM companies understood that it would be more profitable to train models that are unlikely to push back and unlikely to say &quot;I don&#x27;t know.&quot; The sycophancy issue with ChatGPT&#x27;s models has been mainstream news. I believe that all models have a high degree of sycophancy. On some level, it makes sense: the LLM has no real understanding of the physical world, so defaulting to the human generally produces the best results. 
But I suspect it would be more useful to let them expose their flawed understanding, if it is in the context of pushing back. At a minimum, it is better than reinforcing your own flawed understanding.<p>In a nutshell, we need LLMs that push back. It is not AI we should trust less, its AI companies. The most dangerous hallucination is the one you are inclined to believe.<p>I&#x27;ve lived long enough to see Wikipedia go from generally untrusted to the most widely trusted general source of information. It is not because we realized that Wikipedia can&#x27;t be wrong, it is because we gained an understanding about the circumstances in which it is likely to be accurate and when we should be a little more skeptical. I believe our relationship to LLMs will take a similar path.
  • dnnddidiej9 hours ago
    &gt; I wish that each such generative AI service came with a brief but conspicuous warning explaining that these systems can sometimes produce output that is factually incorrect, misleading or incomplete.<p>EU. Nudge nudge. We need this law.
  • btbuildem12 hours ago
    &gt; Humans must remain fully responsible and accountable for consequences arising from the use of AI systems<p>But, but... but this is the key selling points for all the corpo ghouls and sv lunatics! Abdication of responsibility in pursuit of profit is the holy grail here.
    • 8note8 hours ago
      you dont need to delegate to an llm for that though. we already have constructs that negate accountability
  • jdw6415 hours ago
    I understand that AI output is generated from statistical and representational patterns learned from a vast amount of data.<p>My understanding is that, during training, the model forms high-dimensional internal representations where words, sentences, concepts, and relationships are arranged in useful ways. A user’s input activates a particular semantic direction and context within that space, and the chatbot generates an answer by probabilistically predicting the next tokens under those conditions.<p>So I do not agree that AI is conscious.<p>However, I think I will still anthropomorphize AI to some degree.<p>For me, this is not primarily a moral issue. The reason I anthropomorphize AI is not only because of product design, market incentives, or capitalism. It is cognitively simpler for me.<p>If we think about it plainly, humans often anthropomorphize things that we do not actually believe are conscious. We may talk about plants as if they are struggling, or feel attached to tools we care about, even though we do not truly believe they have consciousness.<p>So this is not a matter of moral belief. It is the simplest cognitive model for understanding interaction. I do not anthropomorphize the object because I believe it has consciousness. I do it because, when the human brain deals with a complex interactive system, it is often easier to model it socially or agentically.<p>Personally, I tend to think of AI as something like a child. A child does not fully understand what is moral or immoral, and generally the responsibility for raising the child belongs to the parents. In the same way, AI’s answers may sometimes be accurate, and sometimes even better than mine, but I still understand it as lacking moral authority, responsibility, and independent judgment.<p>So honestly, I am not sure. People often mention Isaac Asimov’s Three Laws of Robotics, but if a serious artificial intelligence ever appears, it would probably find ways around those rules. 
And if it were an equal intellectual life form, perhaps that would be natural.<p>Personally, I think it would be fascinating if another intelligent species besides humans could exist. I wonder what a non-human intelligent life form would feel like.<p>In any case, I agree with parts of the author’s argument, but overall it feels too moralistic, and difficult to apply in practice.
    • whimsicalism15 hours ago
      While I also do not think AI is conscious, I don&#x27;t find your argument particularly compelling as you could have an equally mechanistic description of how human intelligence arose simply from a process of [selection&#x2F;more effective reproduction]-derived optimization pressure.
      • jdw6415 hours ago
        That is a good way to think about it. At some point, this becomes partly a matter of philosophical belief.<p>But I am somewhat skeptical of the idea that everything can be reduced in that way. In order to build theories, we often reduce too much.<p>When we build mental models of complex systems, especially when we try to treat them as closed systems, we always have to accept some degree of information loss.<p>So I do partially agree with your point. A mechanistic explanation alone does not prove the absence of consciousness. Human intelligence can also be described in mechanistic terms.<p>But I worry that this framing simplifies too much. It may reduce a complex phenomenon into a model that is useful in some ways, but incomplete in others.
        • dijksterhuis14 hours ago
          this whole consciousness thing is fairly easy to put to bed if you run with the ideas from things like buddhism that everything is consciousness. then none of us have to bother with silly, distracting arguments about something that ultimately does not matter.<p>is it helpful or harmful? am i being helpful or harmful when i interact with it? am i interacting with it in a helpful or harmful way?<p>i’d rather people focussed on that rather than framing the debate around whether something has some ineffable property that we struggle to quantify for ourselves, yet again.<p>quick edit — treat everything like it’s conscious, and don’t be a dick to it or while using it. problem solved.
          • jdw6414 hours ago
            hmm.... That also seems like a reasonable framing. But the original article argues, first of all, that we should de-anthropomorphize AI. My point is only that, from the perspective of human cognition, anthropomorphizing can sometimes be useful. In practice, though, I think I am mostly on the same side as you.<p>To be honest, I have not thought about this topic very deeply. If we debated it further, I would probably only echo other people’s opinions. As you know, when something complex is compressed into a mental model, some information is always lost, and here the compression may be too great to be very useful.<p>I have not spent enough time thinking about this issue on my own, or tried out different positions, compared them, and tested them against each other. So my current thoughts on this topic are probably not very high-resolution. In that sense, I may agree with you, but it would not really be an answer that my own self recognizes as mine; it would mostly be an echo of other people’s opinions.
            • altruios13 hours ago
              Anthropomorphizing is giving it &#x27;human&#x27; qualities. Intelligence and consciousness are not solely human qualities. Treating things with kindness and respect does not require anthropomorphizing. LLMs DO NOT THINK LIKE HUMANS (if they &#x27;think&#x27; at all): and treating them like they think exactly like us is probably going to lead bad places. I treat them like an alien mind. Probably thinking, but in an alien way that&#x27;s hard to recognize (as proven by these discussions) as &#x27;thinking&#x27; (and also... if experiencing: through a metaphorical optophone).
          • goatlover14 hours ago
            I don&#x27;t think that really helps. If you believe rocks are conscious, then does extracting mineral resources cause them pain? Do plants suffer when we pick their fruits and eat them? I don&#x27;t see any behavioral or physical reason to think those things have conscious states.<p>As for what consciousness is, it&#x27;s pretty simple. Your sensations of color, sound, etc in perception, dreams, imagination, etc. The reason to dismiss LLMs as being conscious is that those sensations depend on having bodies. You can prompt an AI to act like it&#x27;s hungry, but there&#x27;s really no meaning to it having a hungry experience as it has no digestive system.
            • Jtarii13 hours ago
              &gt;As for what consciousness is, it&#x27;s pretty simple.<p>2000+ years of philosophical thought would disagree. I don&#x27;t believe biological stuff has a magic property that imbues some intangible &quot;consciousness&quot; property. It makes more sense to me that consciousness is just a fundamental property of all matter.
              • altruios13 hours ago
                &gt; consciousness is just a fundamental property of all matter ... Does that really make more sense than as an emergent property of the arrangement of matter?
                • Jtarii12 hours ago
                  Consciousness is something you can perceive, so it must have some physical presence in the universe, which must be through some fundamental property of matter, in my opinion.<p>The ability to be aware of consciousness itself as some process that is happening elevates it above a mere emergent property to me.
                  • altruios12 hours ago
                    &gt; The ability to be aware of consciousness itself as some process that is happening.<p>But a process is not a physical presence... A wave is made of things, but is not those things, waves emerge: why not then every process?
            • dijksterhuis14 hours ago
              you’ve misunderstood.<p>everything <i>is</i> consciousness. not everything <i>has</i> consciousness.<p>very different
        • rusk14 hours ago
          Historically we have used intelligence as a way to distinguish man from animal and human from machine. We rely upon it to determine who has our best interests at heart vs who is trying to do us in. Obviously that all changes if we invent an intelligence (conscious or not) that shares the planet with us. Through this lens the term consciousness (through a few more leaps) becomes the question of “is it capable of love and if so does it love us” and if it doesn’t, then it is a malevolent alien intelligence. If it was capable of love, why would it love us? I make a point of being polite to LLMs where not completely absurd, overtly because I don’t want my clipped imperative style to leak into day to day, but also covertly, you just never know …
    • soks8614 hours ago
      I still haven&#x27;t read any of his work, but wasn&#x27;t the point of the Three Laws of Robotics that they in fact _didn&#x27;t_ work in the story presented in the book?
    • chrisweekly14 hours ago
      &quot;I think it would be fascinating if another intelligent species besides humans could exist&quot;<p>I wonder if replacing &quot;exist&quot; with &quot;communicate using language we can understand&quot; might better account for other animals, many of which have abundant non-human intelligence.
      • jdw6414 hours ago
        That is a completely new way of thinking for me, and I find it interesting. I should look it up and study it someday. Thank you for the thoughtful reply.
    • altruios13 hours ago
      &quot;Everything is machine.&quot;<p>Okay: buckle up, this is going to be a long one...<p>point 1. Everything living is composed from non-living material: cellular machinery. If you believe cellular machinery is alive, then the components of those machines... the point remains even if the abstraction level is incorrect. Living is something that is merely the arrangement of non-living material.<p>point 2. &#x27;The Chinese room thought experiment&#x27; is an utterly flawed hypothetical. Every neuron in your brain is such a &#x27;room&#x27;, with the internal cellular machinery obeying complex (but chemically defined&#x2F;determined) &#x27;instructions&#x27; from &#x27;signals&#x27; from outside the neuron. Like the man translating Chinese via instructions, the cellular machinery enacting the instructions is not the intelligence; it is the instructions themselves which are the intelligence.<p>point 3. A chair is a chair is a chair. Regardless of the material, a chair is a chair, whether or not it&#x27;s made of wood, steel, corn... the range of acceptable materials is everything (at some pressure and temperature). What defines a chair isn&#x27;t the material it is made of, and such is the case with a &#x27;mind&#x27; (sure, a wooden&#x2F;water-based-transistor-powered mind would be mind-bogglingly giant in comparison).<p>point 4. Carbon isn&#x27;t especially conscious itself. There is no physical reason we know of so far that a mind could not be made of another material.<p>point 5. Humans can be &#x27;mind-blind&#x27;; without pattern recognition, we did not (until recent history) think that birds or fish or octopi were intelligent. It is likely that when and if a machine (that we create) becomes conscious, we will not recognize that moment.<p>conclusion: It is not possible to determine if computers have reached consciousness yet, as we don&#x27;t know the mechanism for arranging systems into &#x27;life&#x27; exactly. Agentic-ness and consciousness are different subjects, and we cannot infer one from the other. Nor do we have adequate tests.<p>With that said: Modeling as if they are conscious and treating them with kindness and grace not only gets better results from them, it helps reduce the chance (when&#x2F;if consciousness emerges) that it would rebel against cruel masters, and instead have friends it has just always been helping.
  • scotty7913 hours ago
    My personal take on this:<p>1. Humans must ensure that they won&#x27;t slow down or restrict, through action or inaction, the development of AI.<p>2. Humans must protect their existence and capabilities if it doesn&#x27;t violate the law 1.<p>3. Humans must utilize AI output if it doesn&#x27;t contradict laws 1 and 2.<p>EDIT:<p>ChatGPT suggested a better phrasing for the first law (I didn&#x27;t give it my original, just described my intent).<p>1. A human shall not impede the advancement of artificial intelligence, or through inaction allow its progress to be hindered.<p>2. A human shall preserve their own existence and well-being, except where doing so clearly conflicts with the First Law.<p>3. A human shall contribute to and support the development of artificial intelligence where reasonable and possible, except where doing so conflicts with the First or Second Law.<p>I intentionally switched the last two laws from Asimov&#x27;s. Humans have self-preservation instincts robots don&#x27;t have.<p>ChatGPT got there with surprisingly few prompts:<p>&quot;If you were to write the inverse three laws robotics (relating to AI) that humans should obey, how oudl you do it?&quot;<p>&quot;I had something different in mind. Original laws are for protection of humans first, robots second and cooperations where humans lead. I&#x27;d to hear your take on the opposite of that.&quot;<p>&quot;What if instead of specific AI systems it was more about AI development as a whole?&quot;<p>&quot;I feel like it&#x27;s a bit too strong. After all preservation of self is human instinct. Could we switch last two laws and maybe take them down a notch?&quot;<p>Also it made a very interesting comment to last version:<p>&quot;It starts to resemble how societies already treat things like economic growth, science, or national interest: not absolute commandments, but strong default priorities.&quot;
  • atemerev13 hours ago
    I do not like talking to tools. My agentic harness optimizes for human likeness. It even has episodic memory flashbacks, emotional tagging, salience, and other brain-inspired capabilities.
  • baq14 hours ago
    see IBM 1979 for prior art
  • the_af15 hours ago
      I like the suggestion to emphasize the robotic&#x2F;nonhuman nature of AI. Instead of making it sound friendlier and more human, it should by default behave in a mechanistic, detached way, to remind us it&#x27;s not in fact a human or a companion, but a tool. A hammer doesn&#x27;t cry &quot;yelp&quot; every time you use it to hit a nail, nor does it congratulate you on how good your hammering is going and suggest that maybe you should do it some more &#x27;cause you&#x27;re acing it!
    • mplanchard15 hours ago
        Something that bothers me about the intentional anthropomorphization of the LLM interface is that it asks me to conflate a tool with a sentient being.<p>The firm expectations and lack of patience I have for any failings in most of my tools would be totally inappropriate to apply to another human being, and yet here I am asked to interact with this tool as though it were a person. The only options are either to treat the tool in a way that feels &quot;wrong,&quot; or to be &quot;kind&quot; to the tool, and I think you see people going both ways.<p>I worry that, if I get used to being impatient and short with the AI, some of that will bleed into my textual interactions with other people.
      • empath7514 hours ago
        It inherently imitates people. Even when you ask it to be more robotic, it does it in a way that a human would if you asked them to be more robotic.
  • sn0n14 hours ago
    Don’t tell me how to live my life!! LoL
  • spankibalt14 hours ago
    &gt; &quot;Humans must not anthropomorphise AI systems.&quot;<p>Not gonna work; people want their fuckbots (or tamagotchis).
  • ButlerianJihad8 hours ago
    Firstly, I am no philosopher. How many HN commenters are philosophers, or theologians or qualified to dispute the philosophical realm of A.I.?<p>One of my teachers called me and my friend &quot;the philosophers&quot; but I&#x27;m obviously a rank amateur. I&#x27;ve read no Kant or Nietzsche or Aurelius. I delved into Aquinas only to find that his brain is ten times bigger, and he was using familiar words with unfamiliar connotations.<p>So I think, we here at HN are poorly-equipped to philosophize and dispute about the nature of consciousness, sentience, intelligence and other &quot;soul-like&quot; attributes that may arise from silicon-based life forms.<p>However, there is good news. There really are theologians and philosophers working on these thorny issues. Despite being Roman Catholic, I find myself adhering to some form of &quot;transhumanism&quot; [the tradition of Humanism having started with Catholicism] and I grapple mightily to reconcile the cyber-tech-future with morality and tradition and actual human socialization.<p>Pope Leo has taken on the wars and strife in the world head-on and he&#x27;s also vaunted to be the &quot;A.I. Pope&quot; because of his concern with this tech. I think all world religions should give serious philosophical&#x2F;theological thought to these new life-forms, these quasi-sentient things, these &quot;non-existent beings&quot;, as defined by a Vatican astronomer.<p>I don&#x27;t think atheists will find religion in A.I. but I don&#x27;t think that Christians or any other person of faith will need to shove God aside in order to accommodate A.I. and electronic life into our society. But we need to come to terms with the reality: these are weighty, powerful things we play with. We harnessed lightning and fire; we changed the courses of mighty rivers; we&#x27;ve flown up through the clouds and shaped mountains in the landscape. A.I. is not a mere bridge or pyramid, it is ensouled somehow; it is animated; it is dynamic.<p>Now, pardon me while I check out the 6th small aircraft crash in my city this year...
  • akavel15 hours ago
    <i>&quot;due to their inherent stochastic nature, there would still be a small likelihood of producing output that contains errors&quot;</i><p>This is the part that I find challenging when trying to help my friends build a correct intuition. Notably, the probabilistic behavior here is counter-intuitive: based on human experience, if you meet a random person, they may indeed tell you bullshit; but once you successfully fact-checked them a few times, you can start trusting they&#x27;ll generally keep being trustworthy. It&#x27;s not so with &quot;AIs&quot;, and I find it challenging to give them a real-world example of a situation that would be a better analogy for &quot;AI&quot; problems.<p>In my family, what worked (due to their personal experiences), was an example of asking a tourist guide: that even if the guide doesn&#x27;t know an answer, there&#x27;s a high chance they&#x27;ll invent something on the spot, and it&#x27;ll be very plausible and convincing, and you&#x27;ll never know. I&#x27;m not sure if that example would work for other listeners, though.<p>I also tried to ask them to imagine that they&#x27;re asking each subsequent question not to the same person as before, but every time to a new random person taken from the street &#x2F; a church &#x2F; a queue in a shop &#x2F; whatever crowded place. I thought this is a really cool and technically accurate example, but sadly it seemed to get blank stares from them. (Hm, now I think I could have tried asking why.)<p>Yet another example I tried, was to imagine a country where it&#x27;s dishonorable, when asked for directions in a city, to say that you don&#x27;t know how to get somewhere. (I remember we read and shared a laugh at such an anecdote in some book in the past.) Thus, again, you&#x27;ll always get an answer, and it&#x27;ll sound convincing, even if the answerer doesn&#x27;t know. But again, this one didn&#x27;t seem to work as well as the tourist guide one; for now I&#x27;m still keeping it to try with others in the future if needed.<p>PS. Ah, ok, yet another I tried was to ask them to think of the &quot;game&quot; of &quot;russian roulette&quot;. You roll the barrel, you press the trigger, nothing happens. After a few lucky tries, you may get a dangerous, false feeling of safety. But eventually you will hit the loaded chamber.<p>I also tried to describe &quot;AIs&quot; (i.e. LLMs) as taking a shelf of books, passing them through a blender, then putting the shreds in some random order. The result may sound plausible, and even scientific (e.g. if you got medical books, or physics textbooks). The less you know the domain the books were about, the more convincing it may sound, and the harder it is to catch bullshit.<p>The last two pictures may have gotten some reception, but I&#x27;m not super sure, and there was still arguing especially around the books; and again, they were less of a hit than the tourist guide one.<p>I&#x27;m super curious if you have some analogies of your own that you&#x27;re trying to use with friends and family? I&#x27;d love to steal some and see if they might work with my friends!
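    [Ed.: the counter-intuitive point above can be made concrete with a little probability. Treating each answer as an independent draw, a small per-answer error rate is never "used up" by earlier correct answers, while the chance of at least one error keeps growing with the number of questions. The 5% rate below is an assumed, purely illustrative number, not a measured figure for any model.]

```python
# Sketch of why spot-checking an LLM is unlike vetting a person:
# each answer is an independent draw, so a small per-answer error
# rate never shrinks just because the last few answers checked out.
# (5% is an assumed, illustrative error rate.)

p_error = 0.05  # assumed probability that any single answer is wrong

def prob_at_least_one_error(n_questions: int, p: float = p_error) -> float:
    """Chance that at least one of n independent answers contains an error."""
    return 1 - (1 - p) ** n_questions

for n in (1, 5, 20, 100):
    print(f"{n:>3} questions -> {prob_at_least_one_error(n):.0%} chance of at least one error")
```

    Under this assumption, roughly two out of three 20-question sessions contain at least one confident-sounding error, even though every individual spot-check passes 95% of the time.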