Our principles

(openai.com)

35 points by tosh 1 hour ago

25 comments

  • KuSpa 1 hour ago
    > Here are the principles that guide our work.

    > 1. Democratization. We will resist the potential of this technology to consolidate power in the hands of the few.

    For example, they could publish their models and research... instead of doing the opposite of what they claim is their very first principle.
    • Lalabadie 28 minutes ago
      Or they could resist harvesting everyone's work for free to turn into their own revenue.
  • cdrnsf 4 minutes ago
    Why should we trust a for-profit tech concern to do the right thing *this time*, given the historical context available to us?
    • cyanydeez 0 minutes ago
      forget all context; believe capitalism will resolve it this time using $TECHNOLOGY
  • aeternum 50 minutes ago
    Remember this one of OpenAI's principles?

    > We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be "a better-than-even chance of success in the next two years."

    What do people think is the probability that OpenAI would ever actually do this?
  • kelseyfrog 1 hour ago
    Glaring lack of "We will not participate in the creation or operation of 'kill bots'."

    I can't believe it has to be said. Yet, here we are. Nice-to-haves include: "We will not participate in the use of AI for mass surveillance," and "We will not participate in the use of AI for (cyber-)warfare."
    • dyauspitr 1 hour ago
      Outside of coding, that is probably the most lucrative place for AI to be used. The theatre of war is forgiving of small collateral damage, and it's as if this technology was built for kill bots.
      • kelseyfrog 1 hour ago
        What a great opportunity to demonstrate the alignment problem between harmful AI, economic incentives, and standing up for one's principles in spite of having something to gain.
      • himata4113 1 hour ago
        The technology OpenAI sells is actually not that good for kill bots; we have Boston Dynamics for that. To be real here, they're already better than human soldiers: deploying 100 of the doggies and letting them run loose could wipe out any fortified group.

        Especially if you include things that are not normally acceptable, such as suicide bombers, poison gas, etc.

        Also, it has been proven that in real modern warfare cheap drones dominate. So unless we get a kill bot that can withstand explosives while staying lightweight and operable with a good kill/death ratio (drones manage 1.0 or less), kill bots would have to reach a K/D of 100 just to break even.
        • zulux 1 hour ago
          Counterpoint: kill bots are vulnerable to smaller, cheaper bots deployed in defensive positions.
  • throwaw12 1 hour ago
    "Principles" of "Open" AI:

    * This will change anytime we want, whether you agree or not

    * For employees who follow the current "principles": when we change them, if you are strict about principles, then please leave; we will hire new people
  • piskov 39 minutes ago
    Funny how true adherence to this would require open-sourcing the models.

    And that won't happen.

    So it's another "don't be evil" bla-bla which will ultimately be dropped.
  • wxw 1 hour ago
    > Power in the future can either be held by a small handful of companies using and controlling superintelligence, or it can be held in a decentralized way by people. We believe the latter is much better...

    Superintelligence aside, power in the present is already held by a small handful of companies, at least in the West. The principles are pretty good, vacuous though they may be.
    • tomComb 47 minutes ago
      Having it in the hands of public companies or foundations seems preferable to me to having it in the hands of private companies or individuals.
    • xigoi 34 minutes ago
      This principle would be good if not for the irony of ClosedAI saying that.
  • pton_xd 1 hour ago
    > We envision a world with widespread flourishing at a level that is currently difficult to imagine

    Help me imagine it: what are some examples of widespread flourishing we can look forward to?
    • tmvphil 8 minutes ago
      I'm not super optimistic personally, but isn't the optimistic outcome obvious? If AGI takes over, solves robotics (and doesn't kill us all), then we could see the elimination of all human labor for the purpose of meeting necessities.
    • deaux 1 hour ago
      SamA becoming more powerful and wealthy.

      SamA's hypothetical friends - not sure if he has any _real_ friends, as that requires trust, which is famously antithetical to interacting with SamA - becoming more powerful and wealthy.

      Need any more examples of flourishing?
      • ryandrake 41 minutes ago
        In a simplified world with 1 trillionaire and 99 people who have $1 each, the average person has ~$10 billion. The average person is flourishing!
    • Sivart13 35 minutes ago
      This is the key: every AI company makes a big deal about how AGI will be transformative, but we're just supposed to take it on faith that this transformation will be good.

      Absolute underpants-gnomes reasoning.
    • willturman 1 hour ago
      I read "widespread flourishing" as referring to a scope of influence and "at a level that is difficult to imagine" as referring to an amount of accumulated wealth.

      But surely the people who won't commit to not using the technology for autonomous death deserve a more charitable reading.
    • jampekka 1 hour ago
      Maybe something like swarms of autonomous killer drones invading Greenland and Canada?
    • orphereus 1 hour ago
      There are people who will call you a luddite for this.
    • thomasingalls 1 hour ago
      The more specific, the better.
    • sushisource 1 hour ago
      "The world is going to be so much better! We're not really sure how, but, trust us, definitely better!"
    • dyauspitr 1 hour ago
      Without succumbing to cynicism, I can imagine the total elimination of food shortages by making farming entirely robot-driven; AI-driven gene therapy to end most disease; scalable power generation through automated solar panel construction, installation, and maintenance; one robot at home that can replace all tradesmen; etc. A lot of this could be possible if the promises pan out, and they don't seem completely out of reach at this point.
      • elevatortrim 22 minutes ago
        The reason for food shortages is not scarcity of food, and the same goes for shelter, clothing, and transportation.

        We have enough technology and resources to make life a heaven for everyone, except for unpreventable diseases and the like.

        Yet we do not, because we have not cracked the human-alignment problem.

        That's the problem we need to solve, not the resource problem.

        If we solve the resource problem before the human-alignment problem, we will cause unimaginable suffering.
        • dyauspitr 7 minutes ago
          It will sort itself out over time. It will be painful. Probably not within our lifetime.
  • stephc_int13 5 minutes ago
    Translation: we know that many of you don't trust us and that our image has degraded a lot recently, but we need you to trust us.

    Show, don't tell.

    Snatching this contract with the military was not a good sign of things to come from OpenAI or Sam Altman.

    The documented pattern of lies and scheming is real.
  • ladax72707 1 hour ago
    Those are my principles, and if you don't like them... well, I have others!
    • elktown 57 minutes ago
      First thing I thought of. I have no reason to believe it's not accurate for most contemporary tech corps.
  • dbgrman 48 minutes ago
    When a company writes down principles, you should be highly skeptical.

    - Democratization. Why is it your prerogative, Sam bro? In other words, what he means is: consolidate access so "we" can democratize. We choose who gets what.

    - Empowerment. People are empowered by default. It's the totalitarians who curtail that empowerment. The fact that Sam thinks he has the power to "empower" people is arrogant at best. People are empowered already; you just build the tools and make them accessible at a reasonable price.

    - Universal prosperity. This one pisses me off the most. Who TF made you the benevolent mayor of the universe? Are you running for president of the universe, and people ask: "Hey Sam, what would you do as president of the universe?" "I will bring universal prosperity." Yaaaay, Sama for president. FFS!

    - Adaptability. Yep, we'll kiss the ring of whoever is in power, until we get in power. Then we will adapt, if needed, to your needs.

    You know who else has principles: Meta. (https://www.meta.com/about/company-info/?srsltid=AfmBOooT6i0pCeWiR9aqNtZtwqvFhS3CQEOZ1_BwwgI9xhYV2SnGfzV8)

    - Give people a voice. Read: ensure you control their voice. My take: who tf are you to give anyone a voice? Everyone HAS a voice.

    - Build connection and community. Read: ensure that you control all the connections and communities so that you can steer elections and other important things. My take: people have been connecting already for thousands of years.

    - Serve everyone. Read: control whom you serve. My take: serve everyone, except totalitarian regimes and people with ideas not aligned with ours.

    etc. etc.
  • transitivebs 55 minutes ago
    Was expecting this page to be blank...
  • _doctor_love 15 minutes ago
    A prompt I fired:

        read: https://openai.com/index/our-principles/
        then analyze it critically through the lens of https://www.orwellfoundation.com/the-orwell-foundation/orwell/essays-and-other-works/politics-and-the-english-language/
        translate the original article using Orwell's principles - what is it really saying?

    Gemini gave a pretty harsh interpretation; ChatGPT was curiously fence-sitty. Make of it what you will.

    tldr: *We are building a machine that we do not fully understand, which will upend the global economy and introduce severe new risks, but we promise we are doing it for your own good.*
  • josh-sematic 1 hour ago
    I wonder how much money OpenAI has spent lobbying for alternative economic models research (or better: spent on such research themselves). Then compare that to how much they've spent lobbying to enable deploying all this as fast as possible. Those numbers will speak louder than any words about their principles.
  • dabedee 40 minutes ago
    Here is the translated version:

    1. Democratization is centralization. We will resist the potential of this technology to consolidate power in the hands of the few, by consolidating it in the hands of us, who are not few but correct.

    2. Empowerment is compliance. We believe AGI can empower everyone to achieve the goals we have determined are worth achieving.

    3. Prosperity is scarcity. We want a future where everyone can have an excellent life, which will require new economic models because the old ones will no longer function, for reasons unrelated to us.

    4. Resilience is dependence. AGI will introduce new risks, which only AGI can solve, which only we can build.

    5. Adaptability is revisionism. We continue to believe the only way to meet the challenges of an unpredictable future is to be prepared to update our positions, our charter, our nonprofit status, our safety commitments, our board, our cofounders, and our prior statements, all of which were operative at the time and are now inoperative and were never said.
    • outside1234 37 minutes ago
      6. Please don't look at our financials. They are horrible, and we are hoping to sucker people into an IPO before all of this implodes. The least your grandma can do for us is give us 2% of her S&P 500 portfolio so we can exit before it goes to zero. This is AGI, after all.
    • rvz 37 minutes ago
      What is the true definition of "AGI" in this context?
  • simonreiff 31 minutes ago
    I believe Groucho Marx once said: "I'm a man of principles. If you don't like them, I have others!"
  • smlacy 40 minutes ago
    PRINCIPLES.md
  • jethronethro 1 hour ago
    Interesting how this was released on the eve of the Musk v. Altman trial ...
  • dbvn 1 hour ago
    Newsflash: they&#x27;re changing... again
  • surgical_fire 5 minutes ago
    This was an impressive load of bullshit. I wonder if they asked ChatGPT to generate it.
  • ChrisArchitect 38 minutes ago
    Related:

    Altman's 'beliefs' in his response to the Molotov cocktail

    https://blog.samaltman.com/2279512 (https://news.ycombinator.com/item?id=47724921)
  • Matl 1 hour ago
    Why even put this out? This comes weeks after the whole Department of War thing and days after the article about Sam Altman being a pathological liar (as if we needed one).

    If the intention is to bury all that, then I think it's going to have the exact opposite effect and make everyone remember.
  • freshnode 1 hour ago
    Dystopian humblebrag gymnastics.
  • Teever 48 minutes ago
    Why do companies put stuff like this out in 2026?

    Like, who is the intended audience, and what purpose does this serve?

    I can't imagine that this will have the same powerful effect that Google's 'don't be evil' stuff did all those years ago.

    People are just too cynical and have enough experience being burned by big tech companies. You might think that I'm speaking from a place of age and experience, but I think this applies to everyone, young and old -- we're all using these devices and services from the cradle now, it seems, and we've all been burned by them or know someone who has. Kids know the big tech rug pull just like they know the rug they crawl on while sucking on a pacifier.

    So what's the point of this? Is the intended audience internal? Is it just for the people who work at OpenAI, to distract them from the stories they hear in the news about their company and the things they hear people say about it in social gatherings before they admit that they work for OpenAI?
    • DaiPlusPlus 10 minutes ago
      > Like who is the intended audience and what purpose does this serve?

      Green Party voters; technophobic readers of The Guardian[1]; account managers at image-washing nonprofits; and possibly an anti-Roko's Basilisk.

      [1] As opposed to technophilic Guardian subscribers, myself included; just to be clear that I'm not dunking on the newspaper itself.
  • gigatexal 1 hour ago
    lol, "principles", as if Sam Altman has any...

    All the major AI shops are out trying to be the king of the jungle -- I don't think there can be a market in the end for all of them to be worth $2T+ as giants.