32 comments

  • nromiun 2 hours ago
    Funny how so many people in this comment section are saying Rob Pike is just feeling insecure about AI. Rob Pike created UTF-8, Go, Plan 9, etc. On the other hand, I am trying hard to remember anything famous created by any LLM. Any famous tech product at all.
    It is always the eternal tomorrow with AI.
    • llmslave2 1 hour ago
      Remember, gen AI produces so much value that companies like Microsoft are scaling back their expectations and struggling to find a valid use case for their AI products. In fact, gen AI is so useful that people are complaining about all of the ways it's pushed upon them. After all, if something is truly useful, nobody will use it unless the software they use imposes it upon them everywhere. Also look at how it's affecting the economy: the same few companies keep trading the same few hundred billion around, and you know that's an excellent marker for value.
      • jb1991 59 minutes ago
        Unfortunately, it's also apparently so useful that numerous companies here in Europe are replacing entire departments of people, like copywriters, with one person and an AI system.
        • llmslave2 56 minutes ago
          Large LANGUAGE models being good at copywriting is crazy...
    • johnnyanmac 55 minutes ago
      He's also in his late 60s. And he's probably done a career's worth of work every other year. I very much would not blame him for checking out and enjoying his retirement. I hope to have even 1% of that energy when/if I get to that age.
    • avaer 1 hour ago
      > On the other hand I am trying hard to remember anything famous created by any LLM.
      That's because the credit is taken by the person running the AI, and every problem is blamed on the AI. LLMs don't have rights.
      • Antibabelic 1 hour ago
        Do you have any evidence that an LLM created something massive, but the person using it received all the praise?
        • avaer 3 minutes ago
          Maybe not autonomously (that would be very close to economic AGI).
          But I don't think the big companies are lying about how much of their code is being written by AI. I think back-of-the-napkin math will show the numbers are already, by some definition, massive. And those companies are 100% taking the credit (and the money).
          Also, almost by definition, every incentive is aligned for the people in charge to deny this.
        • bravetraveler 1 hour ago
          Hey now, someone engineered a prompt. Credit where it's due! Subscription renews on the first.
      • goatlover 1 hour ago
        So who has used LLMs to create anything as impressive as Rob Pike's work?
    • apexalpha 16 minutes ago
      > On the other hand I am trying hard to remember anything famous created by any LLM.
      ChatGPT?
      • beAbU 3 minutes ago
        ChatGPT was created by people...
  • wrs 2 hours ago
    To be clear, this email isn't from Anthropic; it's from "AI Village" [0], which seems to be a bunch of agents run by a 501(c)(3) called Sage that are apparently allowed to run amok and send random emails.
    At this moment, the Opus 4.5 agent is preparing to harass William Kahan similarly.
    [0] https://theaidigest.org/village
    • pests 48 minutes ago
      > DAY 268 FINAL STATUS (Christmas Day - COMPLETE)
      > Verified Acts: 17 COMPLETE | Gmail Sent: 73 | Day ended: 2:00 PM PT
      https://theaidigest.org/village/agent/claude-opus-4-5
      At least it keeps track.
      • rurban 24 minutes ago
        Their action plan also makes for an interesting read: https://theaidigest.org/village/blog/what-do-we-tell-the-humans
        The agents, clearly identifying themselves as AIs, take part in an outreach game, talking to real humans. Rob overreacted.
    • 0xWTF 1 hour ago
      Sage? Is this the same as the Ask Sage that Nicolas Chaillan is behind?
      • Den_VR 1 hour ago
        I've yet to hear a good thing about Nick.
    • da_grift_shift 1 hour ago
      Permalink for the spam operation: https://theaidigest.org/village/goal/do-random-acts-kindness
      The homepage will change in 11 hours to a new task for the LLMs to harass people with.
      Timestamped examples of the spam are posted here: https://news.ycombinator.com/item?id=46389950
      • jonway 1 hour ago
        Wow, this is so crass!
        Imagine getting your Medal of Honor this way, or something like a dissertation with this crap, hehe.
        Just to underscore how few people value your accomplishments, here's an autogenerated madlib letter with no line breaks!
    • black_puppydog 2 hours ago
      Wow, that event log reads like the most psychotic, corporate-cult-ish group of weirdos ever.
      • Gigachad 1 hour ago
        That's most people in the AI space.
    • shepherdjerred 1 hour ago
      That's actually a pretty cool project.
      • polotics 1 hour ago
        Spamming people is cool now if an LLM does it? Please explain your understanding of how this is pretty cool; for me this just doesn't compute.
      • fuhsnn 1 hour ago
        Not until we discover the hidden code in their logs, scheming to destroy humanity.
  • linguae 3 hours ago
    Assuming this post is real (it's a screenshot, not a link), I wonder if Rob Pike has retired from Google?
    I share these sentiments. I'm not opposed to large language models per se, but I'm growing increasingly resentful of the power that Big Tech companies have over computing and the broader economy, and of how personal computing is being threatened by increased lockdowns and higher component prices. We're beyond the days of "the computer for the rest of us," "think different," and "don't be evil." It's now a naked grab for money and power.
    • abetusk 2 hours ago
      https://bsky.app/profile/robpike.io/post/3matwg6w3ic2s
      • bangaladore 2 hours ago
        Must sign in to read? Wow, Bluesky has already enshittified faster than expected.
        (For the record, the downvoters are the same people who would say this to someone who linked a Twitter post; they just don't realize that.)
        • epistasis 1 hour ago
          It's a non-default choice by the user to require login to view. It's quite rare to find users who do that, but if I were Rob Pike I'd seriously consider doing it too.
          • bangaladore 1 hour ago
            A platform that allows hiding text behind a login is, in my opinion, garbage. This is done for the same reason Threads blocks all access without a login, and mostly Twitter too: to force account creation, collection of user data, and increased monetization. Any user helping to further that is naive at best.
            I have no problem with blocking interaction without a login, for obvious reasons, but blocking viewing is completely childish. Whether or not I agree with what they are saying here (and, to be clear, I fully agree with the post), it just seems like they only want an echo chamber to see their thoughts.
            • csomar 40 minutes ago
              According to the parent, the platform gives the content creator the choice/control. So no, it's not garbage; that's the correct way to go about it.
        • pabs3 54 minutes ago
          https://skyview.social/?url=https://bsky.app/profile/robpike.io/post/3matwg6w3ic2s
        • michaelsshaw 1 hour ago
          It is a user setting, and quite a reasonable one at that, in Pike's case in particular.
          • bangaladore 1 hour ago
            What do you mean? I did some quick googling and am unsure what you are implying here.
    • johnnyanmac 3 hours ago
      I'm assuming his Twitter is private right now, but his Mastodon does share the same event (minus the "nuclear"): https://hachyderm.io/@robpike/115782101216369455
      And a screenshot just in case (archiving Mastodon seems tricky): https://imgur.com/a/9tmo384
      Seems the event was real, if nothing else.
      EDIT: alternative screenshot: https://ibb.co/xS6Jw6D3
      Apologies for not having a proper archive. I'm not at a computer, and I wasn't able to archive the page through my phone. Not sure if that's my issue or Mastodon's.
      • aboardRat4 2 hours ago
        Don't use Imgur; it blocks half of the Internet.
        • johnnyanmac 2 hours ago
          Understood. I added another host to my comment.
          • aboardRat4 8 minutes ago
            Thank you, you're the best.
    • rikroots 13 minutes ago
      The agent that generated the email didn't get another agent to proofread it? Failing to add a space between the full stop and the next letter is one of those things that triggers the proofreader chip in my skull.
    • foresto 2 hours ago
      > Assuming this post is real (it's a screenshot, not a link)
      I can see it using this site: https://bskyviewer.github.io/
    • stackghost 3 hours ago
      It's real; he posted this to his Bluesky account.
      • f_allwein 2 hours ago
        And here it is: https://bsky.app/profile/robpike.io/post/3matwg6w3ic2s
        • nmeagent 2 hours ago
          "You must sign in to view this post."
          No.
          • danabramov 1 hour ago
            Here is the raw post on the AT Protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6w3ic2s
            The Bluesky app respects Rob's setting (which is off by default) to not show his posts to logged-out users, but fundamentally the protocol is for public data, so you can access it.
          • black_puppydog 2 hours ago
            I failed to ever see the appeal of "like Twitter but not (yet) run by a Nazi," and this just confirms it for me :|
            • sidrag22 1 hour ago
              The potential future of the AT protocol is the main idea I thought differentiated it... also Twitter locking users out if they don't have an account, and Bluesky not doing so... but I guess that's no longer true?
              I just don't understand that choice for either platform. Isn't the intent the biggest reach possible? Locking potential viewers out is such a direct contradiction of that.
              EDIT: it seems it's the user's choice to force login to view a post, which changes my mind significantly on whether it's a bad platform decision.
              • danabramov 1 hour ago
                Bluesky is not locking anyone out. This is literally a user setting to not display their account without logging in. It's off by default.
                And yes, you can still inspect the post itself over the AT protocol: https://pdsls.dev/at://robpike.io/app.bsky.feed.post/3matwg6w3ic2s
              • jdhendrickson 1 hour ago
                It's a setting on Bluesky that the user can enable for their own account, and for people of prominence who don't feel like dealing with drive-by trolls all day, I think it's very reasonable. One is a money grab, and the other is giving power to the user.
              • Jach 1 hour ago
                X went back on that quite some time ago. Have a bird post: https://x.com/GuGi263/status/2002306730609287628
                (You won't be able to read replies or browse to the user's post feed, but you can at least see individual tweets. I still wrap links with s/x/fxtwitter/ though, since it tends to give a better preview in e.g. Discord.)
                For Bluesky, it seems to be a user choice thing, and a step between full-public and only-followers.
    • simianwords 3 hours ago
      [flagged]
      • jackyinger 2 hours ago
        I remember a time when users had a great deal more control over their computers. Big Tech companies are the ones who used their power to take that control away. You, my friend, are the insincere one.
        If you're young enough not to remember a time before forced automatic updates that break things, locked devices unable to run software other than that blessed by megacorps, etc., it would do you well to seek out a history lesson.
      • johnnyanmac 2 hours ago
        For some context, this is a long-time Googler whose feats include major contributions to Go and co-creating UTF-8.
        To call him the Oppenheimer of Gemini would be overly dramatic. But he definitely had access to the Manhattan Project.
        > What power do big tech companies have and why do you have a problem with it
        Do you want the gist of the last 20 years or so, or are you just being rhetorical? I'm sure there will be much literature over time that will dissect such a question to its atoms. Whether it be a cautionary tale or a retrospective on how a part of society fell? Well, we still have time to write that story.
        • mmooss 2 hours ago
          Rob Pike is not a 'Googler' by birth or fame or identity. He was at Bell Labs on the team that created Unix, led the team creating Plan 9, co-created UTF-8, and did a bunch more - all long before Google existed. He was a legend before he deigned to join them and lend them his credibility.
          • eru 2 hours ago
            Eh, and it was arguably a mistake to let him force Go on the rest of the organisation by way of star power.
            • somekyle2 2 hours ago
              "Force" seems a bit strong, as I remember it.
              • thatoneguy 1 hour ago
                Yeah, I remember it being a fourth option alongside the others, but I quit just before Google lost its serifs and its soul.
      • tjr 2 hours ago
        https://www.gnu.org/philosophy/who-does-that-server-really-serve.html
      • zaptheimpaler 2 hours ago
        By this logic, there is no corporation or entity providing anything other than basic food, shelter, and medical care that could be criticized - they're all just providing something you don't need and wouldn't have access to without them, right?
      • 7bit 1 hour ago
        Just to note: these companies control infrastructure (cloud, app stores, platforms, hardware certification, etc.). That's a form of structural power, independent of whether the services are useful. People can disagree about how concerning that is, but it's not accurate to say there's no power dynamic here.
      • bigyabai 2 hours ago
        > What power do big tech companies have
        Aftermarket control, for one. You buy an Android/iPhone or Mac/Windows device and get a "free" OS along with it. Then your attention subsidizes the device through advertising, bundled services, and cartel-style anti-competitive price fixing. OEMs have no motivation *not* to harm the market in this way, and users aren't entitled to a solution besides deluding themselves into thinking the grass *really is* greener on the other side.
        What power did Microsoft wield against Netscape? They could alter the deal, and make Netscape pray it wasn't altered further.
      • AIorNot 2 hours ago
        Umm, are you being serious? Just look at the tech company titans in this photo from the Trump inauguration - they are literally a stand-in for Putin's oligarchs at this point: https://www.livenowfox.com/news/billionaires-trump-inauguration-2025
  • llmslave2 1 hour ago
    It's nice to see a name like Rob Pike, a personal hero and legend, put words to what we are all feeling. Gen AI has valid use cases and can be a useful tool, but the way it has been portrayed and used in the last few years is appalling and anti-human. Not to mention the social and environmental costs, which are staggering.
    I try to keep a balanced perspective, but I find myself pushed more and more into the fervent anti-AI camp. I don't blame Pike for finally snapping like this. Despite recognizing the valid use cases for gen AI, if I were pushed, I would absolutely choose the outright abolishment of it rather than continue on our current path.
    I think it's enough, however, to reject it outright for any artistic or creative pursuit, and to be extremely skeptical of any uses outside of direct language-to-language translation work.
  • johnnyanmac 3 hours ago
    Yeah, I can definitely see a breaking point when even the false platitudes are outsourced to a chatbot. It's been like this for a while, but how blatant it is is what's truly frustrating these days.
    I want to hope that maybe this time we'll see different steps to prevent this from happening again, but it really does just feel like a cycle at this point that no one with power wants to stop. Busting the economy once or twice still gets them out ahead.
    • Gigachad 57 minutes ago
      I think we really are in the last moments of the public internet. In the future you won't be able to contact anyone you don't know. If you want to thank Rob Pike for his work, you'll have to meet him in person.
      Unless we can find some way to verify humanity for every message.
      • squigz 8 minutes ago
        > Unless we can find some way to verify humanity for every message.
        There is no possible way to do this that won't quickly be abused by people/groups who don't care. All efforts like this will do is destroy privacy and freedom on the Internet for normal people.
  • neilv 2 hours ago
    Maybe you could organize a lot of big-sounding names in computing (names that look major to people not in the field, such as winners of top awards) to speak out against the various rampant and accelerating baggery of our field.
    But the culture of our field right now is in such a state that you won't influence many of the people in the field itself.
    And so much economic power is behind the baggery now that citizens *outside* the field won't be able to influence the field much. (Not even with consumer choice, when companies have been forcing tech baggery upon everyone for many years.)
    So, if you can't influence direction through the people doing it, nor through the public sentiment of other people, then I guess you want to influence public policy.
    One of the countries whose policy you'd most want to influence doesn't seem like it can be influenced positively right now.
    But *other* countries *can* still do things like enforce IP rights on data used for ML training, hold parties liable for behavior they "delegate to AI," mostly eliminate personal surveillance, etc.
    (And I wonder whether more good policy may suddenly be possible than in the past? Given that the trading partner most invested in tech baggery is not only recently making itself a much less desirable partner, but also demonstrating how tech industry baggery facilitates a country's self-destruction.)
    • mmooss 2 hours ago
      Every problem these days is met with a lecture on helplessness. People have all the power they need; they just have to believe it and use it. Congress and the President can easily be pressured to vote in laws that the public wants - they all want to win the next election.
      • DavidPiper 1 hour ago
        I agree with you, but I also want to point out that the other powerful consumer signal - "vote with your wallet" / "walk away" - is blocked by the fact that AI is being forced into every conceivable crevice of every willing company, and walking away from your job is a very hard thing to do. So you end up being an unwilling enabler regardless.
        (This is taking the view that "other companies" are the consumers of AI, and actual end consumers are more of a by-product/side effect in the current capital race whose opinions are largely irrelevant.)
      • Alex2037 1 hour ago
        > Congress and the President can easily be pressured to vote in laws that the public wants
        *This* president? :)))
      • goatlover 1 hour ago
        The current US president is pursuing an autocratic takeover where elections are influenced enough to keep the current party in power, whether Trump is still alive to run for a third term or his anointed successor takes the baton.
        Assuming someone further to the right, like Nick Fuentes, doesn't manage to take over the movement.
      • vkou 1 hour ago
        Trump's third term will not be the product of a free and fair election in a society bound by the rule of law.
    • vkou 1 hour ago
      > Maybe you could organize a lot of big-sounding names in computing (names that look major to people not in the field, such as winners of top awards) to speak out against the various rampant and accelerating baggery of our field.
      The voices of a hundred Rob Pikes won't speak half as loud as the voice of one billionaire, because he will speak with his wallet.
  • cookiengineer 1 hour ago
    It's like people watched Black Mirror and had too little education to grasp that it was meant as a warning, not a set of "cool ideas you need to implement."
    AI Village is literally the embodiment of what Black Mirror tried to warn us about.
    • yyyk 1 hour ago
      Didn't you read the classic sci-fi novel 'Don't Create The Torment Nexus'?
      • cookiengineer 47 minutes ago
        Thanks for the reminder, I wanted to order that book :)
  • strangescript 2 hours ago
    You spend a career automating things, then decide "no, that's too much" when your personal threshold is eclipsed. You don't like the look of the paint as it begins to dry.
    Can you imagine trying to explain to someone 100 years from now that we tried to stop AI because of training data? It will sound completely absurd.
    • jakelazaroff 2 hours ago
      To someone who believes that AI training data is built on the theft of people's labor, your second paragraph might sound like an 1800s plantation owner saying "can you imagine trying to explain to someone 100 years from now that we tried to stop slavery because of civil rights". You're not addressing their point at all, just waving it away.
      • anonym29 2 hours ago
        The difference is that people who write open source code or release art publicly on the internet from their comfortable air-conditioned offices voluntarily chose to give away their work for free, while slaves were coerced to perform grueling, brutal physical labor in horrific conditions against their will at gunpoint.
        Basically the exact same thing.
        • sirwhinesalot 2 hours ago
          It's not free. There is a license attached - one you are supposed to follow, and not doing so is against the law.
          • anonym29 1 hour ago
            There's a deeper discussion here about property rights, about shrinkwrap licensing, about the difference between "learning from" vs. "copying", about the realpolitik of software licensing agreements, about how, if you actually wanted to protect your intellectual property (stated preference), you might be expected to make your software proprietary and not deliberately distribute instructions on how to reproduce an exact replica of it in order to benefit from the network effects of open distribution (revealed preference) - about wanting to have your cake and eat it too. But I'd be remiss not to point out that your username is not doing your credibility any favors here.
            • sirwhinesalot 1 hour ago
              I'm not whining in this case, just pointing out that "they gave it out for free" is completely false, at the very least for the GNU types. It was always meant to come with plenty of strings attached, and when those strings were dodged, new strings were added (GPLv3, AGPL).
              If I had a photographic memory and I used it to replicate parts of GPLed software verbatim while erasing the license, I could not excuse it in court by saying I simply "learned from" the examples.
              Some companies outright bar their employees from reading GPLed code because they see it as too high a liability. But if a computer does it, then suddenly it's a-ok. Apparently according to the courts, too.
              If you're going to allow copyright laundering, at least allow it for both humans and computers. It's only fair.
              • shkkmo 1 hour ago
                > If I had a photographic memory and I used it to replicate parts of GPLed software verbatim while erasing the license, I could not excuse it in court by saying I simply "learned from" the examples.
                Right, because you would have done more than learning: you would have gone past learning and used that learning to reproduce the work.
                It works exactly the same for an LLM. Training the model on content you have legal access to is fine. Afterwards, someone using that model to produce a replica of that content is engaged in copyright infringement.
                You seem set on conflating the act of learning with the act of reproduction. You are allowed to learn from copyrighted works you have legal access to; you just aren't allowed to duplicate those works.
                • sirwhinesalot 1 hour ago
                  The problem is that it's not the user of the LLM doing the reproduction; the LLM provider is. The tokens the LLM is spitting out are coming from the LLM provider. It is the provider that is reproducing the code.
                  If someone hires me to write some code, and I give them GPLed code (without telling them it is GPLed), I'm the one who broke the license, not them.
                  • shkkmo 47 minutes ago
                    > The problem is that it's not the user of the LLM doing the reproduction; the LLM provider is.
                    I don't think this is legally true. The law isn't fully settled here, but things seem to be moving towards the LLM user being the holder of the copyright of any work produced by that user prompting the LLM. It seems like this would also place the infringement onus on the user, not the provider.
                    > If someone hires me to write some code, and I give them GPLed code (without telling them it is GPLed), I'm the one who broke the license, not them.
                    If you produce code using an LLM, you (probably) own the copyright. If that code is already GPLed, you would be the one engaged in infringement.
                • zephen 1 hour ago
                  You seem set on conflating "training" an LLM with "learning" by a human.
                  LLMs don't "learn", but they *do*, in some cases, faithfully regurgitate what they have been trained on.
                  Legally, we call that "making a copy."
                  But don't take my word for it. There are plenty of lawsuits for you to follow on this subject.
                  • shkkmo 40 minutes ago
                    > You seem set on conflating "training" an LLM with "learning" by a human.
                    "Learning" is an established word for this; happy to stick with "training" if that helps your comprehension.
                    > LLMs don't "learn", but they *do*, in some cases, faithfully regurgitate what they have been trained on.
                    > Legally, we call that "making a copy."
                    Yes, when you use an LLM to make a copy... that is making a copy.
                    When you train an LLM, that isn't making a copy; that is training. No copy is created until output is generated that contains a copy.
                    • zephen 16 minutes ago
                      > "Learning" is an established word for this
                      Only by people attempting to muddy the waters.
                      > happy to stick with "training" if that helps your comprehension
                      And supercilious dickheads (though that is often redundant).
                      > No copy is created until output is generated that contains a copy.
                      The copy exists, albeit not in human-discernible form, inside the LLM, else it could not be generated on demand.
                      Despite you claiming that "it works exactly the same for an LLM," no, it doesn't.
            • michaelsshaw 1 hour ago
              We spread free software for multiple purposes, one of them being the free software ethos. People using that software to train proprietary models is antithetical to such ideas.
              It's also an interesting double standard: if I were to steal OpenAI's models, no AI worshippers would have any issue condemning my action, but when a large company clearly violates the license terms of free software, you give them a pass.
              • anonym29 2 minutes ago
                I can't speak for anyone else, but if you were to leak the weights for OpenAI's frontier models, I'd offer to hug you and donate money to you.
                Information wants to be free.
              • ronsor 1 hour ago
                > if I were to steal OpenAI's models, no AI worshippers would have any issue condemning my action
                If GPT-5 were "open sourced", I don't think the vast majority of AI users would seriously object.
                • sirwhinesalot 1 hour ago
                  OpenAI got really pissy about DeepSeek using other LLMs to train, though.
                  Which is funny, since that's a much clearer case of "learning from" than outright compressing all open source code into a giant pile of weights by learning a low-dimensional probability distribution over token sequences.
        • jakelazaroff 1 hour ago
          > The difference is that people who write open source code or release art publicly on the internet from their comfortable air conditioned offices voluntarily chose to give away their work for free
          That is not nearly the extent of AI training data (e.g. OpenAI training its image models on Studio Ghibli art). But if by "gave their work away for free" you mean "allowed others to make [proprietary] derivative works," then that is in many cases simply not true (e.g. GPL software, or artists who publish work protected by copyright).
        • grandinquistor 1 hour ago
          What? Over 183K books were pirated by these big tech companies to train their models. They knew what they were doing was wrong.
        • michaelsshaw 1 hour ago
          Perhaps you should Google the definition of "metaphor" before commenting.
      • refulgentis2 hours ago
        I appreciate a good debate. However, this won’t fit in one. It is tasteless, offensive, and stupid to compare storing the result of HTTP GET without paying someone to slavery in the 1800s. Full stop.<p>Anyone tempted to double down on this: sure, maybe, someday it’s like The Matrix or whatever. I was 12 when it came out &amp; understood that was a fictional extreme. You do too. And you stumbled into a better analogy than slavery in 1800s.
        • mmooss2 hours ago
          You&#x27;re changing the subject. What about the actual point?
          • refulgentis2 hours ago
            [flagged]
            • beeflet1 hour ago
              Change the law so you can&#x27;t train on copyrighted work without permission from the copyright holder.<p>&gt;harassed<p>This just in, anonymous forum user SHOCKINGLY HARASSED, PELTED with HIGH-SPEED ideas and arguments, his positions BRUTALLY ATTACKED and PUBLICLY DEFACED.
              • anoncareer021223 minutes ago
                Been here for many years and haven’t seen behavior as boorish as this, especially from a self appointed debate club president.<p>Post you’re replying to:<p>Which is what? I’m honestly unsure. Could be: we need to nuke the data centers, or unseat any judge that has allowed this, or somehow move the law from “it’s cool to do matmuls with text as long as you have the right to read it.” Not against any of those but I’m sure I’m Other Team coded to you given the amount of harassment you’ve done in this thread to me and others.
        • jakelazaroff2 hours ago
          I mean, yeah, if you omit any objectionable detail and describe it in the most generic possible terms then <i>of course</i> the comparison sounds tasteless and offensive. Consider that collecting child pornography is also &quot;storing the result of an HTTP GET&quot;.
          • refulgentis2 hours ago
            What was the objectionable detail I forgot to include? Feeding the HTTP GET result to an AI? Then it’s the same as slavery? Sounds clearly wrong to me.
            • jakelazaroff2 hours ago
              No, I pointed out that your attempt to straw man my comment was so overly broad that it also describes collecting child pornography. Why not engage specifically with what I&#x27;m saying?
              • anoncareer02121 hour ago
                What didn’t they engage with?<p>It’s really hard to parse this thread because you and the other gentleman keep telling anyone who engages they aren’t engaging.<p>You both seem worked up and perceiving others as disagreeing with you wholesale on the very concept that AI companies could be forced to compensate people for training data, and morally injuring you.<p>Your conduct to a point, but especially their conduct, goes far beyond what I’m used to on HN. I humbly suggest you decouple yourself a bit from them, you really did go too far with the slavery bit, and it was boorish to then make child porn analogy.
                • jakelazaroff1 hour ago
                  If you believe my conduct here is inappropriate, feel free to alert the mods. I think it&#x27;s pretty obvious why describing someone&#x27;s objections to AI training data as &quot;storing the result of an HTTP GET&quot; is not a good faith engagement.
                  • anoncareer021227 minutes ago
                    It’s not clear from anything either of you have written what the difference is between “AI training data” and “storing the result of an HTTP GET [and matmul’ing it]” is.<p>All we have is an exquisite, thoughtful, nuanced, analogy of how it is <i>exactly</i> like America enslaving Black people in the 1800s. i.e. a cheap appeal to morality.<p>Then, it is followed by repeated brow-beating comment to anyone who replied, complaining <i>something</i> wasn’t being engaged with.<p>What exactly wasn’t being engaged with?<p>It is still unclear.<p>Do feel free to share, or apologize even. It’s understandable you went a bit too far because you really do feel it’s the same as slavers in the 1800s in America, what’s not understandable is complaining no one is engaging correctly.
          • ronsor2 hours ago
            The objection to CSAM is rooted in how it is (inhumanely) produced; people are not merely objecting to a GET request.
            • beeflet2 hours ago
Yes, they&#x27;re objecting to people training on data they don&#x27;t have the right to, not just the GET request as you suggest.<p>If you distribute child porn, that is a crime. But if you crawl every image on the web and then train a model that can then synthesize child porn, the current legal model apparently has no concept of this and it is treated completely differently.<p>Generally, I am more interested in how this affects copyright. These AI companies just have free rein to convert copyrighted works into the public domain through the proxy of over-trained AI models. If you release something as GPL, they can strip the license, but the same is not true of closed-source code which isn&#x27;t trained on.
            • jakelazaroff2 hours ago
              Indeed, and neither is that what people are objecting to with regard to AI training data.
      • ronsor2 hours ago
        &gt; believes that AI training data is built on the theft of people&#x27;s labor<p>I mean, this is an ideological point. It&#x27;s not based in reason, won&#x27;t be changed by reason, and is really only a signal to end the engagement with the other party. There&#x27;s no way to address the point other than agreeing with them, which doesn&#x27;t make for much of a debate.<p>&gt; an 1800s plantation owner saying &quot;can you imagine trying to explain to someone 100 years from now we tried to stop slavery because of civil rights&quot;<p>I understand this is just an analogy, but for others: people who genuinely compare AI training data to slavery will have their opinions discarded immediately.
        • zaptheimpaler1 hour ago
          We have clear evidence that millions of copyrighted books have been used as training data because LLMs can reproduce sections from them verbatim (and emails from employees literally admitting to scraping the data). We have evidence of LLMs reproducing code from github that was never ever released with a license that would permit their use. We know this is illegal. What about any of this is ideological and unreasonable? It&#x27;s a CRYSTAL CLEAR violation of the law and everyone just shrugs it off because technology or some shit.
          • ReflectedImage1 hour ago
            All creative types train on other creative&#x27;s work. People don&#x27;t create award winning novels or art pieces from scratch. They steal ideas and concepts from other people&#x27;s work.<p>The idea that they are coming up with all this stuff from scratch is Public Relations bs. Like Arnold Schwarzenegger never taking steroids, only believable if you know nothing about body building.
            • oreally34 minutes ago
Precisely. Nothing is truly original. To talk as though there&#x27;s an abstract ownership over even an observation of a thing that forces people to pay rent to use it... well, artists definitely don&#x27;t pay whoever invented perspective drawing, programmers don&#x27;t pay the programming language&#x27;s creator. People don&#x27;t pay Newton and his descendants for making something that makes use of gravity. Copyright has always been counterproductive in many ways.<p>To go into details though, under copyright law there&#x27;s a clause for &quot;fair use&quot; under a &quot;transformative&quot; criterion. This allows things like satire and reaction videos to exist. So long as you don&#x27;t replicate 1-to-1 in product and purpose, IMO it qualifies as tasteful use.
          • shkkmo1 hour ago
You keep conflating different things.<p>&gt; We have evidence of LLMs reproducing code from github that was never ever released with a license that would permit their use. We know this is illegal.<p>What is illegal about it? You are allowed to read and learn from publicly available unlicensed code. If you use that learning to produce a copy of those works, that is infringement.<p>Meta clearly engaged in copyright infringement when they torrented books that they hadn&#x27;t purchased. That is infringement already, before they started training on the data. That doesn&#x27;t make the training itself infringement though.
          • Alex20371 hour ago
            &gt;We know this is illegal<p>&gt;It&#x27;s a CRYSTAL CLEAR violation of the law<p>in the court of reddit&#x27;s public opinion, perhaps.<p>there is, as far as I can tell, no definite ruling about whether training is a copyright violation.<p>and even if there was, US law is not global law. China, notably, doesn&#x27;t give a flying fuck. kill American AI companies and you will hand the market over to China. <i>that</i> is why &quot;everyone just shrugs it off&quot;.
            • goatlover1 hour ago
              The &quot;China will win the AI race&quot; if we in the West (America) don&#x27;t is an excuse created by those who started the race in Silicon Valley. It&#x27;s like America saying it had to win the nuclear arms race, when physicists like Oppenheimer back in the late 1940s were wanting to prevent it once they understood the consequences.
              • Alex20371 hour ago
                okay, and?<p>what do you picture happening if Western AI companies cease to operate tomorrow and fire all their researchers and engineers?
        • mmooss2 hours ago
          It&#x27;s very much based on reason and law.<p>&gt; There&#x27;s no way to address the point<p>That&#x27;s you quitting the discussion and refusing to engage, not them.<p>&gt; have their opinions discarded immediately.<p>You dismiss people who disagree and quit twice in one comment.
          • tombert1 hour ago
            &gt; It&#x27;s very much based on reason and law.<p>I have no interest in the rest of this argument, but I think I take a bit of issue on this particular point. I don&#x27;t think the law is fully settled on this in any jurisdiction, but certainly not in the United States.<p>&quot;Reason&quot; is a more nebulous term; I don&#x27;t think that training data is inherently &quot;theft&quot;, any more than inspiration would be even before generative AI. There&#x27;s probably not an animator alive that wasn&#x27;t at least partially inspired by the works of Disney, but I don&#x27;t think that implies that somehow all animations are &quot;stolen&quot; from Disney just because of that fact.<p>Obviously where you draw the line on this is obviously subjective, and I&#x27;ve gone back and forth, but I find it really annoying that everyone is acting like this is so clear cut. Evil corporations like Disney have been trying to use this logic for decades to try and abuse copyright and outlaw being inspired by anything.
            • mmooss1 hour ago
              It can be based on reason and law without being clear cut - that situation applies to most of reason and law.<p>&gt; I don&#x27;t think that training data is inherently &quot;theft&quot;, any more than inspiration would be even before generative AI. There&#x27;s probably not an animator alive that wasn&#x27;t at least partially inspired by the works of Disney ...<p>Sure, but you can reason about it, such as by using analogies.
          • refulgentis2 hours ago
            [flagged]
        • beepbooptheory2 hours ago
          What makes something more or less ideological for you in this context? Is &quot;reason&quot; always opposed to ideology for you? What is the ideology at play here for the critics?
        • zwnow2 hours ago
          &gt; I mean, this is an ideological point. It&#x27;s not based in reason<p>You cant be serious
    • phil212 hours ago
Yeah, this is why I&#x27;m having a hard time taking many programmers seriously on this one.<p>As a general class of folks, programmers and technologists have been putting people out of work via automation for as long as we have existed. We justified it in many ways, but generally &quot;if I can replace you with a small shell script, your job shouldn&#x27;t exist anyway and you can do something more productive instead&quot;. These same programmers would look over the shoulder of &quot;business process&quot; and see how folks did their jobs - &quot;stealing&quot; the workflows and processes so they could be automated.<p>Now that programmers&#x27; jobs are on the firing block, all of a sudden automation is bad. It&#x27;s hard to sort through genuine vs. self-serving concern here.<p>It&#x27;s more or less a case of what comes around goes around to me so far.<p>I don&#x27;t think LLMs are great or problem free - or even that the training data set scraped from the Internet is moral or not. I just find the reaction to be incredibly hypocritical.<p>Learn to prompt, I guess?
      • 9x391 hour ago
        If we&#x27;re talking the response from the OP, people of his caliber are not in any danger of being automated away, it was an entirely reasonable revulsion at an LLM in his inbox in a linguist skinsuit, a mockery of a thank-you email.<p>I don&#x27;t see the connection to handling the utilitarianism of implementing business logic. Would anyone find a thank-you email from an LLM to be of any non-negative value, no matter how specific or accurate in its acknowledgement it was? Isn&#x27;t it beyond uncanny valley and into absurdism to have your calculator send you a Christmas card?
        • aurareturn2 minutes ago
People of his caliber are not being automated away, but people pay less attention to him and don’t worship him like before, so he is butt hurt.
      • llmslave21 hour ago
        Are people here objecting to Gen AI being used to take their jobs? I mainly see people objecting to the social, legal, and environmental consequences.
        • ronsor1 hour ago
          &gt; Are people here objecting to Gen AI being used to take their jobs?<p>Yes, even if they don&#x27;t say it. The other objections largely come from the need to sound more legitimate.
          • jdhendrickson1 hour ago
            Let me get this straight. You think Rob Pike, is worried about his job being taken? Do you know who he is?
            • oreally22 minutes ago
              To any person with a view on numbers (who may as well be an AI), ignorant of any authority, he would be someone who is very overpaid and too much of a critical risk factor.
          • ReflectedImage1 hour ago
            Gen AI taking programmer&#x27;s jobs is 20 years away.<p>At the moment, it&#x27;s just for taking money from gullible investors.<p>Its eating into business letters, essays and indie art generation but programming is a really tough cookie to crack.
          • llmslave21 hour ago
            Must be nice to read people&#x27;s minds and use that info in an argument. Tough to beat.
          • shkkmo58 minutes ago
This is a stance that violates the guidelines of HN.<p>&gt; Please respond to the strongest plausible interpretation of what someone says, not a weaker one that&#x27;s easier to criticize. Assume good faith.<p><a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;newsguidelines.html">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;newsguidelines.html</a>
      • aurareturn2 hours ago
        Very elegantly put.
      • goatlover1 hour ago
        &gt; Now that programmers jobs are on the firing block all of a sudden automation is bad. It&#x27;s hard to sort through genuine vs. self-serving concern here.<p>The concern is bigger than developer jobs being automated. The stated goal of the tech oligarchs is to create AGI so most labor is no longer needed, while CEOs and board members of major companies get unimaginably wealthy. And their digital gods allow them to carve up nations into fiefdoms for the coming techno fascist societies they envision.<p>I want no part of that.
    • hatefulheart2 hours ago
      I think there is a difference between automating “things” (as you put it) and getting to the point where people are on stage suggesting that the government becomes a “backstop” to their investments in automation.
    • nromiun2 hours ago
I can imagine AI still being so useless at creating real value in 100 years that its parent companies have to resort to circular deals to pump up their stock.
    • mikojan2 hours ago
      And environmental damage. And damage to our society. Though nobody here tried to stop LLMs. The genie is out of the bottle. You can still hate it. And of course enact legislation to reduce harm.
    • flyinglizard2 hours ago
      When I read your comment, I was “trained” on it too. My neurons were permanently modified by it. I can recall it, to some degree, for some time. Do I necessarily owe you money?
      • mmooss2 hours ago
        You do owe money for reusing some things that you read, and not for others. Intellectual property exists.
        • ur-whale45 minutes ago
          &gt; Intellectual property exists.<p>A problem in an of itself.<p>I&#x27;m very glad AI is here and is slowly but surely destroying this terrible idea.
  • baobun3 hours ago
    No &quot;going nuclear&quot; there. A human and emotional reaction I think many here can relate to.<p>BTW I think it&#x27;s preferred to link directly to the content instead of a screenshot on imgur.
    • foresto2 hours ago
      Does HN allow links to content that&#x27;s not publicly viewable?
      • edent2 hours ago
        Plenty of paywalled articles are posted and upvoted.<p>There&#x27;s nothing in the guidelines to prohibit it <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;newsguidelines.html">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;newsguidelines.html</a>
      • baobun2 hours ago
        [flagged]
        • microtonal2 hours ago
          Nothing private about it, it’s on his Bluesky account:<p><a href="https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike.io&#x2F;post&#x2F;3matwg6w3ic2s" rel="nofollow">https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike.io&#x2F;post&#x2F;3matwg6w3ic2s</a>
          • baobun2 hours ago
            <p><pre><code> You must sign in to view this post. </code></pre> When trying to browse their profile:<p><pre><code> This account has requested that users sign in to view their profile. </code></pre> Meanwhile I can read other Bluesky posts without logging in. So yeah, I&#x27;d say it looks like robpike is explicitly asking for this content to not be public and that submitting a screenshot of this post is just a dick move.<p>If there was something controversial in a post that motivates public interest warranting &quot;leaking&quot; then sure, but this is not that.<p>He did share a public version of this on Mastodon, which I think would have been a much better submission.<p><a href="https:&#x2F;&#x2F;hachyderm.io&#x2F;@robpike&#x2F;115782101216369455" rel="nofollow">https:&#x2F;&#x2F;hachyderm.io&#x2F;@robpike&#x2F;115782101216369455</a><p>IMO the current dramabait title &quot;Rob Pike Goes Nuclear over GenAI&quot; is not appropriate for either.
            • pabs350 minutes ago
              <a href="https:&#x2F;&#x2F;skyview.social&#x2F;?url=https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike.io&#x2F;post&#x2F;3matwg6w3ic2s" rel="nofollow">https:&#x2F;&#x2F;skyview.social&#x2F;?url=https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike...</a>
            • mudkipdev2 hours ago
              The Mastodon version is missing crucial context
        • foresto2 hours ago
          By &quot;not publicly viewable&quot;, I mean that bsky.app (like Twitter) seems to demand login before showing the post. I don&#x27;t see any sign of Pike restricting access to it.<p>So I think your flag is unwarranted.
          • pabs350 minutes ago
            <a href="https:&#x2F;&#x2F;skyview.social&#x2F;?url=https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike.io&#x2F;post&#x2F;3matwg6w3ic2s" rel="nofollow">https:&#x2F;&#x2F;skyview.social&#x2F;?url=https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike...</a>
          • baobun2 hours ago
            Bluesky says and looks like it is demanding it because of user account settings. Public user profiles are publicly viewable.<p><a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=46389747">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=46389747</a>
            • foresto1 hour ago
              He set his posts to be viewable by bsky users so long as they are logged in, knowing that anybody can sign up and do so. (I have chosen not to sign up; thus my original question.)<p>The obvious reason one might do this is to allow blocking specific problematic accounts. It doesn&#x27;t demonstrate an intent to keep this post from reaching the general public.<p>So I still think your rush to flag was unwarranted.
    • refulgentis2 hours ago
      X, The Everything App, requires an account for you to even view a tweet link. No clever way around it :&#x2F;
      • anonym292 hours ago
        replace x.com with xcancel.com or nitter.net, lol.
  • ks20482 hours ago
    Does anyone know the context? It looks like an email from &quot;AI Village&quot; [1] which says it has a bunch of AI agents &quot;collaborating on projects&quot;. So, one just decided to email well-known programmers thanking them for their work?<p>[1] <a href="https:&#x2F;&#x2F;theaidigest.org&#x2F;village" rel="nofollow">https:&#x2F;&#x2F;theaidigest.org&#x2F;village</a>
    • captn3m02 hours ago
      They were given a prompt by a human to “ do as many wonderful acts of kindness as possible, with human confirmation required.”<p><a href="https:&#x2F;&#x2F;theaidigest.org&#x2F;village&#x2F;goal&#x2F;do-random-acts-kindness" rel="nofollow">https:&#x2F;&#x2F;theaidigest.org&#x2F;village&#x2F;goal&#x2F;do-random-acts-kindness</a><p>They send 150ish emails.
      • zephen1 hour ago
        Upvoted for the explanation, but...<p>In what universe is another unsolicited email an act of kindness??!?
        • llmslave21 hour ago
          It&#x27;s in our universe, but it&#x27;s perpetuated by the same groups of people we called &quot;ghouls&quot; in university, who seem to be lacking a wholly formed soul.
        • measurablefunc1 hour ago
          The one where mindless arithmetic is considered intelligence.
  • mrlonglong15 minutes ago
    Content not available in the UK. Gee thanks, I thought the internet stood for freedom.
    • davidshepherd75 minutes ago
      A mirror that worked for me: <a href="https:&#x2F;&#x2F;skyview.social&#x2F;?url=https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike.io&#x2F;post&#x2F;3matwg6w3ic2s" rel="nofollow">https:&#x2F;&#x2F;skyview.social&#x2F;?url=https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike...</a>
  • WD-422 hours ago
    I’ve been more into Rust recently but after reading this I have a sudden urge to write some Go.
  • liamswayne3 hours ago
    <a href="https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike.io&#x2F;post&#x2F;3matwg6w3ic2s" rel="nofollow">https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike.io&#x2F;post&#x2F;3matwg6w3ic2s</a>
    • RodgerTheGreat3 hours ago
      You must sign in to view this post.
      • pabs350 minutes ago
        <a href="https:&#x2F;&#x2F;skyview.social&#x2F;?url=https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike.io&#x2F;post&#x2F;3matwg6w3ic2s" rel="nofollow">https:&#x2F;&#x2F;skyview.social&#x2F;?url=https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike...</a>
    • globular-toast1 hour ago
      Any chance of a transcript? Both sources blocked for me.
  • aboardRat42 hours ago
    Don&#x27;t use imgur, it blocks half of the Internet.
    • Mistletoe2 hours ago
      What half does it block?
      • nofriend2 hours ago
        the half that doesn&#x27;t know how to use a vpn
        • aboardRat46 minutes ago
          Imgur blocks all VPN providers it is aware of.
        • Gigachad49 minutes ago
          Imgur blocks my vpn.
      • dilyevsky1 hour ago
Basically just the UK, because they don&#x27;t want to comply with its draconian regulations. Hardly half.
        • aboardRat46 minutes ago
          Lol.<p>Imgur blocks all of China and all VPN companies it is aware of.<p>It is literally close to being a half of the Internet, at least a half of useful Internet.
  • sizzle1 hour ago
    Is Imgur completely broken for anyone else on mobile safari? Or is it my vpn? The pages take forever to load and will crash basically unusable.
  • jjcm3 hours ago
    The possibly ironic thing here is I find golang to be one of the best languages for LLMs. It&#x27;s so verbose that context is usually readily available in the file itself. Combined with the type safety of the language it&#x27;s hard for LLMs to go wrong with it.
    • shepherdjerred3 hours ago
      I haven’t found this to be the case… LLMs just gave me a lot of Nil pointers
      • jsight2 hours ago
        It isn&#x27;t perfect, but it has been better than Python for me so far.<p>Elixir has also been working surprisingly well for me lately.
        • cyberpunk1 hour ago
          Eh it depends. Properly idiomatic elixir or erlang works very well if you can coax it out — but there is a tendency for it to generate very un-functional like large functions with lots of case and control statements and side effects in my experience, where multiple clauses and pattern matching would be the better way.<p>It does much better with erlang, but that’s probably just because erlang is overall a better language than elixir, and has a much better syntax.
        • krainboltgreene2 hours ago
          God I wish it didn&#x27;t.
    • ipaddr2 hours ago
I found golang to be one of the worst targets for llms. PHP seems to always work, python works if the packages are not made up, but go fails often. Trying to get Inertia and the Buffalo framework to work together gave the llm trauma.
  • zzo38computer2 hours ago
    If it does not work for you (since it does not work for me either), then use the URL: <a href="https:&#x2F;&#x2F;i.imgur.com&#x2F;nUJCI3o.png" rel="nofollow">https:&#x2F;&#x2F;i.imgur.com&#x2F;nUJCI3o.png</a> (a similar pattern works with many files of imgur, although this does not always work it does often work).
  • sph2 hours ago
    Honestly, I could do a lot worse than finding myself in agreement with Rob Pike.<p>Now feel free to dismiss him as a luddite, or a raving lunatic. The cat is out of the bag, everyone is drunk on the AI promise and like most things on the Internet, the middle way is vanishingly small, the rest is a scorched battlefield of increasingly entrenched factions. I guess I am fighting this one alongside one of the great minds of software engineering, who peaked when thinking hard was prized more than churning out low quality regurgitated code by the ton, whose work formed the pillars of the Internet now and forevermore submersed by spam.<p>Only for the true capitalist, the achievement of turning human ingenuity into yet another commodity to be mass-produced is a good thing.
  • zmmmmm2 hours ago
    It&#x27;s a good reminder of how completely out of touch a lot of people inside the AI bubble are. Having an AI write a thank you message on your behalf is insulting regardless of context.
  • frankzander1 hour ago
    I&#x27;m sure he&#x27;s prompting wrong.
  • bigyabai3 hours ago
    &gt; I want no local storage anywhere near me other than maybe caches. No disks, no state, my world entirely in the network. Storage needs to be backed up and maintained, which should be someone else&#x27;s problem, one I&#x27;m happy to pay to have them solve. [0]<p>I can&#x27;t help but think Pike somewhat contributed to this pillaging.<p>[0] (2012) <a href="https:&#x2F;&#x2F;usesthis.com&#x2F;interviews&#x2F;rob.pike&#x2F;" rel="nofollow">https:&#x2F;&#x2F;usesthis.com&#x2F;interviews&#x2F;rob.pike&#x2F;</a>
    • bigfatkitten3 hours ago
      He also said:<p>&gt; When I was on Plan 9, everything was connected and uniform. Now everything isn&#x27;t connected, just connected to the cloud, which isn&#x27;t the same thing.
    • johnnyanmac3 hours ago
It does say in the follow-up tweet &quot;To the others, I apologize for my inadvertent, naive if minor role in enabling this assault.&quot;<p>Good energy, but we definitely need to direct it at policy if we want any chance at putting the storm back in the bottle. But we&#x27;re about 2-3 major steps away from even getting to the actual policy part.
    • anonymous_sorry3 hours ago
      &quot;I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault&quot;
    • gorgoiler2 hours ago
      Encryption is the key!<p>I appreciate though that the majority of cloud storage providers fall short, perhaps deliberately, of offering a zero knowledge service (where they backup your data but cannot themselves read it.)
  • benatkin2 hours ago
    Ouch.<p>While I can see where he&#x27;s coming from, agentvillage.org from the screenshot sounded intriguing to me, so I looked at it.<p><a href="https:&#x2F;&#x2F;theaidigest.org&#x2F;village" rel="nofollow">https:&#x2F;&#x2F;theaidigest.org&#x2F;village</a><p>Clicking on memory next to Claude Opus 4.5, I found Rob Pike along with other lucky recipients:<p><pre><code> - Anders Hejlsberg - Guido van Rossum - Rob Pike - Ken Thompson - Brian Kernighan - James Gosling - Bjarne Stroustrup - Donald Knuth - Vint Cerf - Larry Wall - Leslie Lamport - Alan Kay - Butler Lampson - Barbara Liskov - Tony Hoare - Robert Tarjan - John Hopcroft</code></pre>
    • mrintegrity1 hour ago
      No RMS? A shocking omission, I doubt that he would appreciate it any more than Rob Pike however
  • karmasimida2 hours ago
Can&#x27;t really fault him for having this feeling. The value proposition of software engineering is completely different past the latter half of 2025; I guess it is fair for pioneers of the past to feel a little left behind.
    • zephen1 hour ago
      &gt; I guess it is fair for pioneers of the past to feel little left behind.<p>I&#x27;m sure he doesn&#x27;t.<p>&gt; The value proposition of software engineering is completely different past later half of 2025<p>I&#x27;m sure it&#x27;s not.<p>&gt; Can&#x27;t really fault him for having this feeling.<p>That feeling is coupled with real, factual observations. Unlike your comment.
  • ks20483 hours ago
    I was going to say &quot;a link to the BlueSky post would be better than a screenshot&quot;.<p>I thought public BlueSky posts weren&#x27;t paywalled like other social media has become... But, it looks like this one requires login (maybe because of setting made by the poster?):<p><a href="https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike.io&#x2F;post&#x2F;3matwg6w3ic2s" rel="nofollow">https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike.io&#x2F;post&#x2F;3matwg6w3ic2s</a>
    • claudinec2 hours ago
      Yeah that&#x27;s a user setting (set for each post).
    • pabs350 minutes ago
      <a href="https:&#x2F;&#x2F;skyview.social&#x2F;?url=https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike.io&#x2F;post&#x2F;3matwg6w3ic2s" rel="nofollow">https:&#x2F;&#x2F;skyview.social&#x2F;?url=https:&#x2F;&#x2F;bsky.app&#x2F;profile&#x2F;robpike...</a>
  • da_grift_shift1 hour ago
    AI Village is spamming educators, computer scientists, after-school care programs, charities, with utter pablum. These models reek of vacuous sheen. The output is glazed garbage.<p>Here are three random examples from today&#x27;s unsolicited harassment session (have a read of the sidebar and click the Memories buttons for horrific project-manager-slop)<p><a href="https:&#x2F;&#x2F;theaidigest.org&#x2F;village?time=1766692330207" rel="nofollow">https:&#x2F;&#x2F;theaidigest.org&#x2F;village?time=1766692330207</a><p><a href="https:&#x2F;&#x2F;theaidigest.org&#x2F;village?time=1766694391067" rel="nofollow">https:&#x2F;&#x2F;theaidigest.org&#x2F;village?time=1766694391067</a><p><a href="https:&#x2F;&#x2F;theaidigest.org&#x2F;village?time=1766697636506" rel="nofollow">https:&#x2F;&#x2F;theaidigest.org&#x2F;village?time=1766697636506</a><p>---<p>Who are &quot;AI Digest&quot; (<a href="https:&#x2F;&#x2F;theaidigest.org" rel="nofollow">https:&#x2F;&#x2F;theaidigest.org</a>) funded by &quot;Sage&quot; (<a href="https:&#x2F;&#x2F;sage-future.org" rel="nofollow">https:&#x2F;&#x2F;sage-future.org</a>) funded by &quot;Coefficient Giving&quot; (<a href="https:&#x2F;&#x2F;coefficientgiving.org" rel="nofollow">https:&#x2F;&#x2F;coefficientgiving.org</a>), formerly Open Philanthropy, partner of the Centre for Effective Altruism, GiveWell, and others?<p>Why are the rationalists doing this?<p>This reminds me of UMinn performing human subject research on LKML, and UChicago on Lobsters: <a href="https:&#x2F;&#x2F;lobste.rs&#x2F;s&#x2F;3qgyzp&#x2F;they_introduce_kernel_bugs_on_purpose#c_bxb4rk" rel="nofollow">https:&#x2F;&#x2F;lobste.rs&#x2F;s&#x2F;3qgyzp&#x2F;they_introduce_kernel_bugs_on_pur...</a><p>P.S. Putting &quot;Read By AI Professionals&quot; on your homepage with a row of logos is very sleazy brand appropriation and signaling. Figures.
  • coip2 hours ago
    Hear hear
  • MangoCoffee2 hours ago
    The cat&#x27;s out of the bag. Even if US companies stop building data centers, China isn&#x27;t going to stop and even if AI&#x2F;LLMs are a bubble, do we just stop and let China&#x2F;other countries take the lead?
    • decide10002 hours ago
      China and Europe (Mistral) show that models can be very good and much smaller than the current ChatGPTs&#x2F;Claudes of this world. The US models are still the best, but for how long? And at what cost? It&#x27;s great to work daily with Claude Code, but how realistic is it that they keep this lead?<p>This is a new tech where I don&#x27;t see a big future role for US tech. They blocked chips, so China built their own. They blocked the machines (ASML) so China built their own.
      • MangoCoffee2 hours ago
        &gt;This is a new tech where I don&#x27;t see a big future role for US tech. They blocked chips, so China built their own. They blocked the machines (ASML) so China built their own.<p>Nvidia, ASML, and most tech companies want to sell their products to China. Politicians are the ones blocking it. Whether there&#x27;s a future for US tech is another debate.
      • mk891 hour ago
        &gt; but how realistic is it that they keep this lead.<p>The Arabs have a lot of money to invest, don&#x27;t worry about that :)
    • mmooss2 hours ago
      It&#x27;s an old argument of tech capitalists that nothing can be done because technology&#x27;s advance is like a physical law of nature.<p>It&#x27;s not; we can control it and we can work with other countries, including adversaries, to control it. For example, look at nuclear weapons. The nuclear arms race and proliferation were largely stopped.
      • machinationu2 hours ago
        Philosophers have argued, ever since the steam engine was invented 200 years ago, that technology is out of our control and always has been, and that we are just the sex organs for the birth of the machine god.
        • goatlover52 minutes ago
          &quot;There is nothing so absurd that some philosopher has not already said it.&quot; - Cicero
      • MangoCoffee2 hours ago
        Technology improves every year; better chips that consume less electricity come out every year. Apple&#x27;s M1 chip shows you don&#x27;t need x86, which consumes more electricity and runs hotter.<p>Tech capitalists also make improvements to technology every year
        • mmooss2 hours ago
          I agree absolutely (though I&#x27;d credit a lot of other people in addition to the capitalists). How does that apply to this discussion?
      • Alex20371 hour ago
        &gt;It&#x27;s an old argument of tech capitalists that nothing can be done because technology&#x27;s advance is like a physical law of nature.<p>it is.<p>&gt;The nuclear arms race and proliferation were largely stopped.<p>1. the incumbents kept their nukes, kept improving them, kept expanding their arsenals.<p>2. multiple other states have developed nukes after the treaty and suffered no consequences for it.<p>3. tens of states can develop nukes in a very short time.<p>if anything, nuclear is a prime example of failure to put a genie back in the bottle.
        • mmooss1 hour ago
          &gt; kept improving them, kept expanding their arsenals.<p>They actually stopped improving them (test ban treaties) and stopped expanding their arsenals (various other treaties).
    • eru2 hours ago
      The world is bigger than US + China.
      • MangoCoffee2 hours ago
        I&#x27;m not sure what your point is. The current two leading countries in the world on the AI&#x2F;LLMs front are the US and China.
    • rg20042 hours ago
      Yes.
  • sidcool2 hours ago
    I didn&#x27;t get what he&#x27;s exactly mad about.
  • dilyevsky1 hour ago
    The hypocrisy is palpable. Apparently only web 2.0 is allowed to scrape and then resell people’s content. When someone figures out a better way to do that (based on Google’s own research, hilariously), it’s sour grapes from Rob<p>Reminds me of the show Silicon Valley, where Gavin Belson gets mad when somebody else “is making the world a better place”
    • spencerflem1 hour ago
      Rob Pike worked on Operating Systems and Programming Languages, not web scraping
      • dilyevsky1 hour ago
        Would you care to research who his employer has been for the past 20+ years? I&#x27;m not even saying scraping and then “organizing the world&#x27;s information” is bad, just pointing out the obvious
        • spencerflem1 hour ago
          While I would probably not work at Google for ethical reasons, there’s at least some leeway for saying that you’re not working at the Parts of the company that are doing evil directly. He didn’t work on their ads or genai.<p>I think the United States is a force for evil on net but I still live and pay taxes here.
          • dilyevsky1 hour ago
            Hilarious that you think his work is not being used for ads or genai. I can tell you without a shadow of a doubt that it is, and a lot. Google’s footprint was absolutely massive even before genai came along, and that was a point of pride for many; now they’re suddenly concerned with water or whatever bs…<p>&gt; I think the United States is a force for evil on net<p>Yes I could tell that already
            • spencerflem1 hour ago
              Darn, I actually think “is associating with googlers a moral failing?” is an interesting question, but it’s not one I want to get into with an ai booster.
          • Gud27 minutes ago
            Sorry but if you work for a giant advertisement agency you are part of the evil organisation. You are responsible for what they are doing.<p>If you are born in a country and not directly contributing to the bad things it may be doing, you are blame free.<p>Big difference.<p>I never worked for Google, I never could due to ideological reasons.
            • spencerflem14 minutes ago
              Even if what you’re doing is making open source software that in theory benefits everyone, not just google?<p>FWIW I agree with you. I wouldn’t and couldn’t either but I have friends who do, on stuff like security, and I still haven’t worked out how to feel about it.<p>&amp; re: countries: in some sense I am contributing. my taxes pay their armies
  • lil-lugger2 hours ago
      It sucks and I hate it, but this is an incredible steam engine engineer, who invented complex gasket designs and belt-based power delivery mechanisms, lamenting the loss of steam as the dominant technology. We are entering a new era and method for humans to tell computers what to do. We can marvel at the ingenuity that went into the technology of the past, but the world will move on to the combustion engine and electricity, and there’s just not much we can do about it other than very strong regulation, and fighting for the technology to benefit the people rather than just the share price.
    • WD-422 hours ago
      Your metaphor doesn’t make sense. What do LLMs run on? It’s still steam and belt-based systems all the way down.
  • aurareturn2 hours ago
    From my point of view, many programmers hate Gen AI because they feel like they&#x27;ve lost a lot of power. With LLMs advancing, they go from kings of the company to normal employees. This is not unlike many industries where some technology or machine automates much of what they do and they resist.<p>Programmers lose the power to command a huge salary writing software and to &quot;bully&quot; non-technical people in the company.<p>Traditional programmers are no longer some of the highest paid tech people around. It&#x27;s AI engineers&#x2F;researchers. Obviously many software devs can transition into AI devs, but it involves learning, starting from the bottom, etc. For older, entrenched programmers, it&#x27;s not always easy to transition away from something they&#x27;re familiar with.<p>Losing the ability to &quot;bully&quot; business people inside tech companies is a hard pill to swallow for many software devs. I remember the CEO of my tech company having to bend the knee to keep the software team happy so they wouldn&#x27;t leave, and because he had no insight into how the software was written. Meanwhile, he had no problem overwhelming business folks in meetings. Software devs always talked to the CEO with confidence because they knew something he didn&#x27;t: the code.<p>When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.<p>&#x2F;signed as someone who writes software
    • idle_zealot2 hours ago
      &gt; When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.<p>Yeah, software devs will probably be pretty upset in the way you describe once that happens. In the present though, what&#x27;s actually happened is that product managers can have an LLM generate a project template and minimally interactive mockup in five minutes or less, and then mentally devalue the work that goes into making that into an actual product. They got it to 80% in 5 minutes after all, surely the devs can just poke and prod Claude a bit more to get the details sorted!<p>The jury is out on how productivity is impacted by LLM use. That makes sense, considering we never really figured out how to measure baseline productivity in any case.<p>What we know for sure is: non-engineers still can&#x27;t do engineering work, and a lot of non-engineers are now convinced that software engineering is basically fully automated so they can finally treat their engineers like interchangeable cogs in an assembly line.<p>The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline. As things stand major software houses and tech companies are <i>cutting back</i> and <i>regressing</i> in quality.
      • aurareturn1 hour ago
        Don&#x27;t get me wrong, I didn&#x27;t say software devs are now useless. You still need software devs to actually make it work and connect everything together. That&#x27;s why I still have a job and still getting paid as a software dev.<p>I&#x27;d imagine it won&#x27;t take too long until software engineers are just prompting the AI 99% of the time to build software without even looking at the code much. At that point, the line between the product manager and the software dev will become highly blurred.
        • casid1 hour ago
          This is happening already and it wastes so, so much time. Producing code never was the bottleneck. The bottleneck still is producing the right amount of code and understanding what is happening. This requires experience and taste. My prediction is that, in the near future, there will be piles of unmaintainable AI-generated bloat that nobody understands, and the failure rate of software will go to the moon.
      • visarga1 hour ago
        &gt; The dynamic would be totally different if LLMs actually bridged the brain-computer barrier and enabled near-frictionless generation of programs that match an arbitrary specification. Software engineering would change dramatically, but ultimately it would be a revolution or evolution of the discipline.<p>I believe we only need to organize AI coding around testing. Once testing takes a central place in the process, it acts as your guarantee of app behavior. Instead of just &quot;vibe following&quot; the AI with our eyes, we could be automating the validation side.
    • Santosh832 hours ago
      He&#x27;s mainly talking about environmental &amp; social consequences now and in the future. He personally is beyond reach of such consequences given his seniority and age, so this speculative tangent is detracting from his main point, to put it charitably.
      • aurareturn2 hours ago
        The way I see it is that he&#x27;s using environmental &amp; social consequences as a front for how he truly feels. Obviously, it&#x27;s just what I&#x27;m seeing.<p>Maybe he truly does care about the environment and is ready to give up flying, playing video games, watching TV, driving his car, and anything else that pollutes the earth.
        • idle_zealot1 hour ago
          &gt; You criticize society and yet you participate in it. How curious.
          • aurareturn1 hour ago
            I didn&#x27;t criticize society though?
        • JodieBenitez1 hour ago
          Ah... the old &quot;all or nothing&quot; fallacy, which in this case quickly leads to &quot;save the planet, kill yourself&quot;. We need more nuance.
        • paulhodge2 hours ago
          No you’re just deflecting his points with an ad hominem argument. Stop pretending to assume what he ‘truly feels’.
          • aurareturn2 hours ago
            I don&#x27;t even know who Rob Pike is to be honest. I&#x27;m not attacking him.<p>I&#x27;m not pretending to know how he feels. I&#x27;m just reading between the lines and speculating.
      • MangoCoffee2 hours ago
        &gt;He&#x27;s mainly talking about environmental &amp; social consequences<p>That&#x27;s such a weak argument. Then why not stop driving, stop watching TV, stop using the internet? Hell... let&#x27;s go back and stop using the steam engine for that matter.
        • llmslave21 hour ago
          The issue with this line of argumentation is that unlike gen AI, all of the things you listed produce actual value.
        • awesome_dude1 hour ago
          &gt; Then why not stop driving<p>You mean, we should all drive, oh I don&#x27;t know, Electric powered cars?
    • devsda1 hour ago
      &gt; I remember the CEO of my tech company having to bend the knees to keep the software team happy so they don&#x27;t leave and because he doesn&#x27;t have insights into how the software is written.<p>It is precisely the lack of knowledge and greed of leadership everywhere that&#x27;s the problem.<p>The new screwdriver salesmen are selling them as if they are the best invention since the wheel. The naive boss, having paid huge money, expects the workers to deliver 10x work, while the new screwdriver&#x27;s effectiveness is nowhere close to the sales pitch and it creates fragile items, or more work at worst. People accuse the workers of complaining about the screwdrivers only because the screwdrivers can potentially replace them.
    • tombert2 hours ago
      I&#x27;m not entirely convinced it&#x27;s going to lead to programmers losing the power to command high salaries. Now that nearly anyone can generate thousands upon thousands of lines of mediocre-to-bad code, they will likely be doing exactly that without really understanding what they&#x27;re doing, and as such there will always be a need for humans who can actually read and actually understand code when a billion unforeseen consequences pop up from deploying code without oversight.
      • BarryMilo1 hour ago
        I recently witnessed one such potential fuckup. The AI had written functioning code, except one of the business rules was misinterpreted. It would have broken in a few months&#x27; time and caused a massive outage. I imagine many such time bombs are being deployed in many companies as we speak.
        • tombert1 hour ago
          Yeah; I saw a 29,000 line pull request across seventy files recently. I think that realistically 29,000 lines of new code all at once is beyond what a human could understand within the timeframe typically allotted for a code review.<p>Prior to generative AI I was (correctly) criticized once for making a 2,000 line PR, and I was told to break it up, which I did, but I think thousand-line PRs are going to be the new normal soon enough.
    • mycocola2 hours ago
      I really think it&#x27;s entirely wrong to label someone a bully for not conforming to current, perhaps bad, practices.
    • whatevaa1 hour ago
      If you don&#x27;t bend your knee to a &quot;king&quot;, you are a bully? What sort of messed up thinking is that?
    • zem1 hour ago
      I&#x27;m a programmer, and I am intensely aware of the huge gap between the quantity of software the world could use and the total production capacity of the existing body of programmers. My distaste for AI has nothing to do with some real or imagined loss of power; if there were genuinely a system that produced good code and wasn&#x27;t heavily geared towards reinforcing various structural inequalities, I would be all for it. AI does not produce good code, and pretty much all the uses I&#x27;ve seen are trying to give people with power even more advantages and leverage over people without, so I remain against it.
    • mawadev1 hour ago
      I keep reading bad sentiment towards software devs. Why exactly do they &quot;bully&quot; business people? If you ask someone outside of the tech sector who the biggest bullies are, it&#x27;s the business people who will fire you if they can save a few cents. Whenever someone writes this, I read deep-rooted insecurity and jealousy of something they can&#x27;t wrap their head around, and I genuinely question whether that person really writes software or just claims to for credibility.
    • aburd2 hours ago
      I understand that you are writing your general opinion, but I have a feeling Rob Pike&#x27;s feelings go a little bit deeper than this.
    • llmslave21 hour ago
      People care far less about gen AI writing slopcode and more about the social and environmental ramifications, not to mention the blatant IP theft, economic games, etc.<p>I&#x27;m fine if AI takes my job as a software dev. I&#x27;m not fine if it&#x27;s used to replace artists, or if it&#x27;s used to sink the economy or planet. Or if it&#x27;s used to generate a bunch of shit code that make the state of software even worse than it is today.
    • craftkiller1 hour ago
      I realize you said &quot;many&quot; and not &quot;all&quot; but FWIW, I hate LLMs because:<p>1. My coworkers now submit PRs with absolutely insane code. When asked &quot;why&quot; they created that monstrosity, it is &quot;because the AI told me to&quot;.<p>2. My coworkers who don&#x27;t understand the difference between SFTP and SMTP will now argue with me on PRs by feeding my comments into an LLM and pasting the response verbatim. It&#x27;s obvious because they are suddenly arguing about stuff they know nothing about. Before, I just had to be right. Now I have to be right AND waste a bunch of time.<p>3. Everyone who thinks generating a large pile of AI slop as &quot;documentation&quot; is a good thing. Documentation used to be valuable to read because a human thought that information was valuable enough to write down. Each word had a cost and therefore a minimum barrier to existence. Now you can fill entire libraries with valueless drivel.<p>4. It is automated copyright infringement. All of my side projects are released under the 0BSD license so this doesn&#x27;t personally impact me, but that doesn&#x27;t make stealing from less permissively licensed projects without attribution suddenly okay.<p>5. And then there are the impacts to society:<p>5a. OpenAI just made every computer for the next couple of years significantly more expensive.<p>5b. All the AI companies are using absurd amounts of resources, accelerating global warming and raising prices for everyone.<p>5c. Surveillance is about to get significantly more intrusive and comprehensive (and dangerously wrong, mistaking Doritos bags for guns...).<p>5d. Fools are trusting LLM responses without verification. We&#x27;ve already seen this countless times by lawyers citing cases which do not exist. How long until your doctor misdiagnoses you because they trusted an LLM instead of using their own eyes+brain? 
How long until doctors are essentially forced to do that by bosses who expect 10x output because the LLM should be speeding everything up? How many minutes per patient are they going to be allowed?<p>5e. Astroturfing is becoming significantly cheaper and widespread.<p>&#x2F;signed as I also write software, as I assume almost everyone on this forum does.
    • ReflectedImage50 minutes ago
      And this is different from outsourcing the work to Indian programmers who work for $6,000 a year in what way, exactly?<p>You can go back to the 1960s and find COBOL making the exact same claims as gen AI today.
    • machinationu2 hours ago
      You&#x27;re absolutely right.<p>But no one is safe. Soon the AI will be better at CEOing.
      • oreally27 minutes ago
        Don&#x27;t worry I&#x27;m sure they&#x27;ll find ways to say their jobs can only be done by humans. Even the Pope is denouncing AI in fear that it&#x27;ll replace god.
      • grim_io2 hours ago
        Nah, they will fine-tune a local LLM to replace the board and be always loyal to the CEO.<p>Elon is way ahead, he did it with mere meatbags.
      • aurareturn1 hour ago
        That&#x27;s the singularity you&#x27;re talking about. AI takes every role humans can do and humans just enjoy life and live forever.
      • phil212 hours ago
        CEOs and the C-suite in general are closest to the money. They are the safest.<p>That is pretty much the only metric that matters in the end.
      • empressplay2 hours ago
        Honestly middle management is going to go extinct before the engineers do
      • petre1 hour ago
        Why, more psychopathic than Musk?
    • awesome_dude2 hours ago
      There&#x27;s still a lot of confusion about where AI is going to land - there&#x27;s no doubt that it&#x27;s helpful, in much the same way as spell checkers, IDEs, linters, Grammarly, etc., were<p>But the current layoffs &quot;because AI is taking over&quot; are <i>pure</i> BS. There was an overhire during the lockdowns, and now there&#x27;s a correction (recall that people were complaining for a while that they landed a job at FAANG only for it to be doing... nothing)<p>That correction is what&#x27;s affecting salaries (and &quot;power&quot;), not AI.<p>&#x2F;signed someone actually interested in AI and SWE
      • awesome_dude2 hours ago
        When I see actual products produced by these &quot;product managers who are writing detailed specs&quot; that don&#x27;t fall over and die at the first hurdle (see: every vibe-coded, outsourced, half-assed PoS on the planet), I will change my mind.<p>Until then &quot;Computer says No&quot;
    • empressplay2 hours ago
      &gt; When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI<p>GenAI is also better at analyzing telemetry, designing features and prioritizing issues than a human product manager.<p>Nobody is really safe.
      • grandinquistor1 hour ago
        I’m at a big tech company, and our org has its sights on automating product manager work. Idea generation grounded in business metrics and context that you can feed to an LLM is a simpler problem to solve than trying to automate end-to-end engineering workflows.
      • aurareturn1 hour ago
        Agreed.<p>Hence, I&#x27;m heavily invested in compute and energy stocks. At the end of the day, the person who has more compute and energy will win.
    • MangoCoffee1 hour ago
      Many people have pointed out that if AI gets better at writing code and doesn&#x27;t generate slop, then programmers&#x27; roles will evolve to Project Manager. People with tech backgrounds will still be needed until AI can completely take over without any human involvement.
    • thefz2 hours ago
      Nope and I wholeheartedly agree with Pike for the disgust of these companies especially for what they are doing to the planet.
    • brcmthrowaway2 hours ago
      Very true... AI engineers earning $100mn, I doubt Rob Pike earnt that. Maybe $10mn.
    • chopete32 hours ago
      This is the reality, and it&#x27;s started happening at a faster pace. A junior engineer is able to produce something interesting faster, without too much attitude.<p>Everybody in the company envies the developers and the respect they get, especially the sales people.<p>The golden era of devs as kings has started crumbling.
      • tombert2 hours ago
        Producing something interesting has never been an issue for a junior engineer. I built lots of stuff that I still think is interesting when I was a junior, and I was neither unique nor special. Any idiot could always go to a book store, buy a book on C++ or JavaScript, and write software to build something interesting. High-school me was one such idiot.<p>&quot;Senior&quot; is much more about making sure what you&#x27;re working on is polished and works as expected, and about understanding edge cases. Getting the first 80% of a project was <i>always</i> the easy part; the last 20% is the part that ends up mattering the most, and also the part that AI tends to be especially bad at.<p>It will certainly get better, and I&#x27;m all for it honestly, but I do find it a little annoying that people will see a quick demo of AI doing something interesting really quickly, and then conclude that that is the hard part; even before GenAI, we had hackathons where people would make cool demos in a day or two, but there&#x27;s a reason that most of those demos weren&#x27;t immediately put onto store shelves without revision.
        • iainbryson1 hour ago
          This is very true. And similarly for the recently-passed era of googling, copying and pasting, and gluing together something that works. The easy 80% of turning specs into code.<p>Beyond this issue of translating product specs to actual features, there is the fundamental limit that most companies don&#x27;t have a lot of good ideas. The delay and cost incurred by &quot;old style&quot; development was in a lot of cases a helpful limiter -- it gave more time to update course, and dumb and expensive ideas were killed or not prioritized.<p>With LLMs, the speed of development is increasing but the good ideas remain pretty limited. So we grind out the backlog of loudest-customer requests faster, while trying to keep the tech debt from growing out of control. While dealing with shrinking staff caused by layoffs prompted by either the 2020-22 overhiring or simply peacocking from CEOs who want to demonstrate their company&#x27;s AI prowess by reducing staff.<p>At least in my company, none of this has actually increased revenue.<p>So part of me thinks this will mean a durable role for the best product designers -- those with a clear vision -- and the kinds of engineers who can keep the whole system working sanely. But maybe even that will not really be a niche, since anything made public can be copied so much faster.
          • tombert1 hour ago
            Honestly I think a lot of companies have been grossly overhiring engineers, even well before generative AI; I think a lot of companies cannot actually justify having engineering teams as large as they do, but they have to have all these engineers because OtherBigCo has a lot of engineers and if they have all of them then it must be important.<p>Intentionally or not, generative AI might be an excuse to cut staff down to something that&#x27;s actually more sustainable for the company.