15 comments

  • kennywinker1 hour ago
    Anybody who stays at openai is signing on to build machines that will be used to kill innocent people and control people who think that’s a bad idea.
    • rvz37 minutes ago
      That's fine. But they shouldn't be lecturing anyone about "principles" or moral superiority while being paid or holding RSUs, since that would make them completely dishonest themselves.

      It just shows that they did poor research on the company before joining (Meta is just as bad), that they are in on the grift (they joined OpenAI only after ChatGPT took off), and that this employee does not believe what they are saying.
    • ed_mercer1 hour ago
      I’m worried that China will build said killing machines and that we’ll be unprepared.
      • dbtc1 hour ago
        I'm worried that China will build said killing machines only because they see that we are and feel the need to be prepared.
        • sidcool8 minutes ago
          Game theory in action
        • bilbo0s35 minutes ago
          This.

          Everyone will do this, because everyone will believe that everyone will do this.

          Even worse, there really is no guarantee that the great powers will create the best terminators. Everyone talks about China and the US. (And we should.) At the same time, however, we should all keep in mind that nations from India and Indonesia to North and South Korea will not simply be sitting on their hands while the US and China forge ahead.

          A future where 4-million-dollar American or Chinese terminators are easily overwhelmed by thousands and thousands of 5-dollar Indian autonomous devices is not at all outside the realm of possibility.

          That's what makes it all so concerning. We can kind of see where it leads in terms of enhanced capability potential for non-state actors, but we can't really see a way to avoid that future.
      • suzzer992 minutes ago
        I got scared when I saw China's synchronized drone swarms at the Beijing Olympics, which I believe was the point.
      • orwin1 hour ago
        Unless you're living in Taiwan, I don't think you have a lot to prepare for.
      • t0lo58 minutes ago
        The myth of American moral superiority has been dead for a while. Why would China be any more evil than the US, which has waged far more colonialist wars and killed far more people abroad in recent times? (Look at the news today for inspiration.)
        • afavour50 minutes ago
          I don’t see any contradiction with what the OP said, though. You don’t have to be morally superior to still be concerned about a country’s forces killing you.
        • smallnix24 minutes ago
          Uighur concentration camps? Falun Gong organ harvesting?
          • ceejayoz13 minutes ago
            ICE is building a bunch of concentration camps as we speak.
        • lakrici8828449 minutes ago
          [dead]
      • henry20231 hour ago
        “I’m afraid my neighbor would kill my son, therefore, I’ll kill my son myself”
      • jimmydoe39 minutes ago
        While you are worried about China, the US has committed a genocide and started a new war.
        • infinitewars32 minutes ago
          And the U.S./Thiel/Musk are trying to start an AI-powered nuclear war next: https://wikipedia.org/wiki/Golden_Dome_(missile_defense_system)
      • jatari50 minutes ago
        China is currently a more morally virtuous country than the US.
        • jimmydoe37 minutes ago
          Believe me, China hasn't shown its true face yet, but it will; just wait.

          And while we are waiting, there are a few more wars to be fought.
          • throw31082228 minutes ago
            Maybe the true face of China so far is that it hasn't shown its true face. While the true face of the US is what it has shown again and again.
        • rockskon39 minutes ago
          They welded shut the doors of Uyghur Muslims' homes and left donated food stacked outside in one giant pile that they couldn't get to. It either rotted away or was eaten by animals.
        • onetokeoverthe46 minutes ago
          [dead]
  • ta90003 minutes ago
    “I don’t think we should spy on Americans and I don’t think we should kill people without human oversight but I still have respect for the guy willing to do that”. Please, make it make sense.
  • mkl1 hour ago
    https://xcancel.com/kalinowski007/status/2030320074121478618 to see replies.
  • usr110611 minutes ago
    In Germany it even made the general news: https://www.spiegel.de/wirtschaft/unternehmen/openai-managerin-caitlin-kalinowski-kuendigt-nach-umstrittenem-deal-mit-pentagon-a-30e406b8-bdb6-4a01-8f82-623223b3e094

    So it wouldn't even be worth an HN submission. Well, I think it can still go under the exception for exceptional news.
  • wrs1 hour ago
    I have a hard time with this separation of “principle” from “people”. Isn’t it people who have principles?
    • 000ooo0001 hour ago
      Easier to remain in the industry if you are shittalking principles instead of people.
      • skeeter20201 hour ago
        Yep - it really softens your actions, which in this case seem like a big step. So if you respect the people, why didn't you stay? Or if you disagree this strongly with their actions, how can you still respect them?

        I get that there's nuance, but this feels like they want to make a big ethical stand without burning any bridges. You can have one of those.
        • pdpi1 hour ago
          There are people I've worked with who I'll never work with again. There are others I'd be willing to work with if they got their act together.

          "If you disagree this strongly with their actions, how can you still respect them?" is a decent description of the latter.
    • Aurornis57 minutes ago
      “It’s not X, it’s Y” is a common ChatGPT trope used to give a sense of depth to a statement but the specific contrast is generally murky like this. This Tweet was either written by ChatGPT or heavily influenced by ChatGPT style.
    • rvz56 minutes ago
      There are no "principles" in big tech, and I call bullshit on this tweet and their reasoning.

      OpenAI already had military contracts while this employee was at the company, and there was no open letter last year about that.

      Prior to that, they were at Meta, and they joined OpenAI *after* ChatGPT took off.

      If they thought that AGI was about "principles", then not only were they naive, but it leads me to believe they were only there for the RSUs, just like during their time at Meta.

      Why is it so hard to be honest and just say you were there for the money, fame, and RSUs, and not for so-called "AGI"?
  • voganmother4240 minutes ago
    Respect for standing up
  • mrcwinn51 minutes ago
    Whatever happened to that all-powerful nonprofit that would ensure OpenAI does right? Something tells me they just cashed in and are running a corrupt shell at this point.
  • monkaiju1 hour ago
    Always surprised when these "smart people" didn't see these things coming from several years away... It's honestly hard for me to believe.

    Going to work for these big SV corps is, and always has been, directly in service of US empire; that's literally what built the valley in the first place.
    • conartist61 hour ago
      Haha, that's what I thought, but my thought was that I can't believe Sam Altman didn't see a serious backlash coming when Anthropic rejected a contract saying "the only two things we won't do are mass surveillance and autonomous killer drones" and within 6 hours Sam was all over that.
    • pan691 hour ago
      It&#x27;s easier to defer principled decision making to the future while you can rake in the cash in the meantime.
      • monkaiju24 minutes ago
        Yeah i think this is pretty much it tbh
  • structuredPizza2 hours ago
    Autocomplete > Automurk
  • lostmsu1 hour ago
    To save a click:

    > I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn't an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I'm proud of what we built together.
    • naveen9947 minutes ago
      With open borders, how do you differentiate Americans from aliens. How do you differentiate criminals from innocents ?
      • brayhite19 minutes ago
        If it wasn’t for the space before the question mark, I’d have assumed you were a bot.
        • naveen9911 minutes ago
          I am not. But irrelevant.
  • mrcwinn52 minutes ago
    Good for Caitlin. Sam Altman is awful. He literally admitted on Twitter that they rushed their military contract to get it done. Are you kidding me? You rushed your military contract?

    Any employee who stays, especially given the financial cushion they have, is complicit. Shame on all of them.

    But here's the sad truth: most of the knowledge workers at OpenAI won't be of any value sometime soon because of the very tool they're building.
    • sudo_cowsay42 minutes ago
      You can't just blame everyone at OpenAI.

      Everyone has their own unique situation.
  • camillomiller1 hour ago
    If you don’t wanna upset your stomach, don’t make the mistake of reading the replies. What a cesspool of humanity X is.
  • ClaudioAnthrop1 hour ago
    [dead]
  • LeoPanthera1 hour ago
    Their justification rings hollow when they continue to use X.
    • jmull1 hour ago
      Doesn't seem to be an equivalency there.
      • idlerig1 hour ago
        There isn&#x27;t, just inserting politics into a discussion on principles.
    • cozzyd1 hour ago
      Leaving a job is easy. Social media on the other hand...
  • threethirtytwo1 hour ago
    That Twitter post was clearly written by AI, along with instructions for the AI to avoid "tells" and other tropes common to AI.

    Absolutely nothing wrong with something written with AI. Just pointing it out.
    • bombcar1 hour ago
      But was it written with an OpenAI AI?
    • fredoliveira40 minutes ago
      And you say this based on what?
      • threethirtytwo32 minutes ago
        The way everyone else can tell. My instincts. AI has a flavor.
    • Aeglaecia46 minutes ago
      If that's the case, the AI failed to remove the negative parallel construction (my current top AI smell, aside from slanted inverted commas). What signs are there of this being AI asked not to sound like AI?
      • threethirtytwo31 minutes ago
        Right, that's the sign. AI often fails to do what it's told, so that's the sign of it being asked not to sound like an AI. I told AI to do this for my current post as well.
        • Aeglaecia17 minutes ago
          Ok. Do you see any more concrete signs? To me it smells like OpenAI output with the newlines removed. But aside from smell (and the negative parallel construction), one could argue that this may be the output of a human who has been influenced by the prose of AI.