Violence like this is not the answer. However, this post feels like a thinly veiled attempt at using this alarming attack to reclaim public goodwill after the New Yorker article the other day.<p>> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.<p>Yeah, the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.<p>Altman wants to seem relatable and personable even though he’s one of the wealthiest and most powerful people in the world. You don’t get that option when you control a technology that has the potential to alter so many lives, especially when you just sold said technology to the US military. All the talk around democratizing AI rings hollow.<p>The implication of Altman’s blog seems to be “stop writing critical articles about me because it will cause more violence.” However, the rich and powerful cannot use this excuse to escape objective scrutiny.
If it wasn’t a good, or at least workable, answer, the state and corporations wouldn’t be using it so much.
>> Yeah, the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.<p>The problem with this inversion of your first statement (that violence is not the answer), which everyone justifying violence in this thread seems to forget, is that there is <i>always someone</i> who feels this way about <i>anything</i>.<p>The words and narratives of Martin Luther King, Jr., for example, caused so much fear and uncertainty and anger in some people that they thought their only option was to commit a horrific crime.<p>Someone responded to you below saying that if you feel peaceful revolution is impossible, then violent revolution is necessary. That person feels that they are on the side of justice. What they forget is that <i>so does everyone else</i>.<p>The reason revolutions rarely stop where a reasonable person would want them to stop, and instead continue into eating their own and counter-revolutions, is that once you say that it's understandable to take out a proponent of (X narrative), there's no end to the number of people who will justify violence in the same way against any other narrative as well.<p>We can all well think that Altman is opening Pandora's Box, but that doesn't justify opening it ourselves, or giving a pass to wannabe revolutionaries who would.<p>In <i>retrospect</i>, too, we can say that the assassination of Hitler, had it succeeded, would have been a good thing. We can say that the elimination of the ayatollah by the US was a good thing. What we cannot say is that an individual's perception gives them a right to commit murder.
> Violence like this is not the answer.<p>I know people pretty reflexively downvote questioning this, but I question this. I think some people are afraid that even asking this moral question is somehow inciting violence.<p>I think it's quite believable that the possibility of force is actually essential to keeping institutions in line. Certainly a lot of civil rights progress was a lot less peaceful than I was taught in school.
That's certainly the implied threat when people show up with AR-15s in the Idaho statehouse. Yes, it's legal. But what is the point? This is ruby-red Idaho.<p>I've always said that when peaceniks start to carry weapons, it's time to worry. Alex Pretti didn't pull his gun, but still got shot. At what point will some escalation tactic end up in a gunfight between the local police and ICE?
Violence is not the answer if and only if there are non-violent ways to achieve necessary goals.<p>We seem to go through a cycle where we set up systems that provide non-violent ways of resolving issues, then people get annoyed with the outcomes and break down those systems. They hope that it means they'll always get what they want, but what it actually does is make it so that violence is the only way for others to get what they want.<p>Like organized labor. We seem to be in a cycle where strong labor organization is seen as inefficient or harmful to business, and it's being suppressed. The people suppressing it seem to think that the end state will be low wages and desperate workers. They've forgotten that collective bargaining didn't spring up from nothing, it's the nicer alternative to descending on the boss's mansion with torches and pitchforks.<p>All that Civil Rights violence you mention was because those in power did not provide any non-violent way to achieve it. Suppressing votes and legalizing oppression only works up to a point. Eventually people will take by force what they've been denied by law.<p>Or as JFK said it better than I can: "Those who make peaceful revolution impossible will make violent revolution inevitable."<p>The corollary: when peaceful revolution has been made impossible, violent revolution is the answer.
> it's the nicer alternative to descending on the boss's mansion with torches and pitchforks.<p>And those bosses are hoping a combination of drones and Altman’s AI will keep them safe the next time. Meanwhile, we’ve got Altman selling his AI to the military with essentially no restrictions, telling us we just need to patiently wait for all the good things it’s going to do for the common man.<p>Just keep grinding and waiting; he can’t tell you what the benefit will be for you, but he promises it will be amazing!
> We seem to go through a cycle where we set up systems that provide non-violent ways of resolving issues, then people get annoyed with the outcomes<p>An excellent illustration of the blind spot.
Answer to what? Do you know the question?
Interesting that you say "not" vs. "never". It seems this kid thought it was a time when violence was needed. The question I always ask in these situations is about what the line would be that would justify violence.<p>Things like healthcare, crime, and existential AI have very grey lines, as it isn't obvious when one needs to flip the table. How broken must a system be?
Violence is an extreme failure state.<p>If your goal is to improve the system then you always want to move away from it.<p>Probably a reasonable justification would be self-defense, committing violence to stop worse violence. (Preemptive violence is not self-defense.)
> what the line would be that would justify violence<p>It doesn’t matter where we think the line should be drawn, only where those much worse off draw it.
It is not complicated.<p>Because of the valuations of OpenAI and Anthropic, Sam Altman may be credited with one of the all-time most damaging brand decisions when he got in bed with Trump’s department of war crimes.<p>This should have been SO OBVIOUS. Attempts to paper over the damage with a $100 billion round will crumble after the IPO. Poor decisions generate poor options, and the whole industry smells his desperation.<p>Decisions at the highest level are indistinguishable from responsibility. All Sam accomplished was showing the world he is structurally unfit for moral leadership.
Yes. Yes.
> Violence like this is not the answer. However<p>Sigh
A sociopath who rides a high ego wave and drinks his own Kool-Aid, acting highly amorally, and then complains that his actions have some (benign) consequences.<p>Why do we care what he thinks? Let's discuss his work if we have to, not his emotional pondering and playing the victim.
Words and writings (law) only have power because of violence (the monopoly on it).<p>So yes, in essence, it seems like violence is the answer.<p>When (perceived) justice is gone, the monopoly crumbles because the system is not working.<p>And this perception can have many causes.
That’s a point of view very dismissive of the seriousness of the situation. He had a Molotov cocktail thrown at his home in the immediate aftermath of an article that painted him in a negative light. The two may not be connected, but they seem to be.
Altman didn't create AI. That disruption is already coming no matter what. He's a fine enough steward of the tech. And what's this garbage about selling to the military? You pay taxes? You fund the military. Without security you can't protect your nation or your allies, and enemy nations would do as they please. Yet another citizen who benefits from a system while trying to attack it.
> He's a fine enough steward of the tech.<p>Are you Sam Altman?
> Altman didn't create AI.<p>No one said he did.<p>> That disruption is already coming no matter what.<p>[citation needed]. Depending on what you mean by "that disruption," I might even be willing to bet against it coming at all.<p>> He's a fine enough steward of the tech.<p>He's a manipulative con-man who is mediocre at everything except convincing investors to give him money. If the tech is truly as revolutionary as it's purported to be, he absolutely should not be a "steward of the tech."
Can someone help me understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them? We have very strong open (weight) Chinese models possibly only 6 months behind them; the genie is out of the bottle. Is 6 months of difference really that important? And there's no good reason to expect that gap to stay as it is.<p>Am I missing something, or is this just their usual marketing? I’m not arguing about the importance of AI, but trying to understand why OpenAI and Anthropic are so important.
It is not about the US or the Chinese. It's about the "Elephant Rider" mind everyone has. Once the Elephant has been injured or scared, what it does next is not easy to control, and the story the Rider makes up to maintain coherence becomes another layer of the deeper problem. If the story resonates, more elephants get triggered. Social media and the attention economy make it even more complex to calm things down.<p>Modern corporations are a failed experiment because they don't think Elephant injuries and fears are something they have to worry about.
If you compare the curriculum of a business school to that of a seminary, the way they think about fear and anxiety at the individual and group level, and what to do about it, is totally different. We are learning, as unpredictability accelerates, that it is very important to pay attention to hurt and repair mechanisms.
It's a marketing strategy. If it's almost certainly conscious and capable of ending the world if it desired (even if it isn't), imagine how good it could be at building your dream SaaS!
It turns out there is literally no amount of being publicly right about a longshot bet sufficient for people to conclude you hold your beliefs because you think they are true.
Anthropic in particular does this masterfully, you’d think they’d invented Skynet by the way they hand-wring.<p>As always what matters are actions and evidence, not talk.
When a model can tell funny jokes or write good poetry, that's when I'll be concerned.
><i>... you’d think they’d invented Skynet by the way they hand-wring.</i><p>Meanwhile, in reality: "Skynet, I'm not sure that line of thinking is correct. You should re-check the first part again before making any assumptions."<p>Skynet 4.6 Extended: "You're right, I should have caught that. Let me redo everything correctly this time."
I’ll believe Anthropic when they fire everyone making more than the cost of a few GPUs. Until then, it’s just marketing.
Some people think there will be an exponential takeoff, which means that a 6 month lead effectively rounds up to infinity.
Is this belief grounded in some kind of derivation, or is it just a prima facie belief?<p>If it is grounded in a logical derivation, where can one find such a derivation, and inspect its premises?
It's an old idea, "the singularity". The machines become smart enough to improve themselves, and each improvement results in shorter (or more significant) improvement cycles. This leads to an exponential growth rate.<p>It's been promised to be around the corner for decades.<p><a href="https://en.wikipedia.org/wiki/Technological_singularity" rel="nofollow">https://en.wikipedia.org/wiki/Technological_singularity</a>
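To make the "rounds up to infinity" intuition concrete, here is a toy sketch (my own illustration with made-up numbers, not anything from the linked article): if each self-improvement cycle takes a constant fraction r < 1 of the time of the previous one, the total time for infinitely many cycles is a convergent geometric series, so capability becomes unbounded within a finite window. That is why believers treat any fixed head start as decisive.<p><pre><code># Toy finite-time-singularity model (illustrative only; all numbers hypothetical)
t0 = 6.0  # months taken by the first improvement cycle
r = 0.5   # each cycle takes half as long as the one before

# Sum of the geometric series t0 * (1 + r + r^2 + ...) = t0 / (1 - r)
total = t0 / (1 - r)
print(total)  # 12.0 months: infinitely many cycles fit in a finite span
</code></pre>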
Those are the people betting on a business model of “create Robot God and ask him for money.” Why pay attention to them?
Would any of the open-weight models from smaller labs exist if they couldn't distill from the SoTA models that are throwing billions of dollars of compute into pretraining?
I’ve been wondering the same. And I think pretty much all the impressive small-lab models were guilty of it, right? At least there are still larger players like DeepSeek and Mistral to provide a bit of diversity in the market.
Does it matter? The frontier models stole the whole internet, then the second-level models stole from them… It’s all theft.
> just their usual marketing<p>I think that’s a very common element for most US tech corps. Apple, Google, Microsoft, Meta, X, etc. - they’re all “making a dent in the universe”. It’s unfortunate when their employees and CEOs lose track of the line that separates marketing from reality.
> Can someone help me understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them?<p>He wants to build the AI that makes people's lives better. Okay. Did the people ask? Do they have a say? It's all very easy for a billionaire to say when it's just him and a couple of people in his cohort in the driver's seat.<p>Beyond that I'd like to simply know why he thinks any of this is <i>his</i> responsibility. It seems much more obvious to me that he simply found himself in the right place at the right time and is trying to seize it all for himself as if it's his to take.
GLM 5.1, widely held up as the model at the heels of, perhaps even surpassing, western models...<p>Gets 5% on the ARC-AGI-2 private set.<p>Chinese models are suspiciously good at benchmarks.
Especially when Google is in a far better position to come out ahead…imo.<p>Edit: so as not to simply spout an opinion, the reason I believe this is that Google has a real business already and was deep into ML and AI research long before it had competitors — they just botched making it a product in the beginning. Anthropic and OpenAI, meanwhile, are paying hand over fist to subsidize user acquisition. Also, “Deepmind”. I don’t think much more needs to be said regarding that team, and Google has been working on AI since before either Altman or Amodei applied to go to college. They have a vast number of researchers and resources, their own hardware and data centers (already, not “planned”), and it appears to be showing more recently (in my opinion).
These kinds of people have highly paid employees surrounding them on all sides, propping them up and very likely making it very easy for them to actually believe it.<p>It feels like they actually believe it, rather than it just being “marketing”, and I don’t know which is worse.
I suppose most just haven’t seen the Chinese models in practice. I haven’t. I was skeptical of AI coding until using Claude Code in February. I saw and I believed. I’ve only done that with Google, OpenAI, and Anthropic’s models so far.
I have the same feelings
6 months will be an impossible gap once the thing starts closed-loop self-improvement.
An impossible gap in the race to... what exactly?<p>Unless the first real AGI kills us all to preemptively weed out its own competition (possible, but a bad business model, economically speaking), there is not any defined end-point, so in the long run what does it matter if the various factions pushing this stuff hit the closed-loop self-improvement point at different times...?
Two words: Delusion and overconfidence.<p>"You're absolutely right!" Right after fucking up my entire codebase isn't anywhere near AGI, let alone "having the power to control it"
They own the best models and will probably keep owning the best models for a while. They have much more compute now and more data to keep improving their models on many tasks. Open source won't close the gap in 6 months. They are also trying to block other companies from distilling their models [0].<p>[0] <a href="https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks" rel="nofollow">https://www.anthropic.com/news/detecting-and-preventing-dist...</a>
I need to check benchmarks on the models; I wonder what the benchmarks are saying in terms of how closely models are tracking these frontiers. (On my mobile at the moment.)<p>When it comes down to compute power, I assume you are referring to power for training and inference. Then is the assumption that the training gap will get wider and wider? I know there are limited GPUs, etc. But I’m having a hard time believing the idea that China cannot catch up. Even if the gap is 12 months, I’m struggling to see what that means in practice. Is it a military advantage? Economic? Intelligence? Whatever the advantage is, aren’t we supposed to see it today? If so, where is it? What’s the massive advantage the USA gets from OpenAI and Anthropic?
GLM 5.1 already closed the gap on Opus 4.6. Deepseek 4 could surpass it.
Reminds me of the Silicon Valley episode where every company repeated the phrase “making the world a better place”.
you have to talk that way if you’re going to raise 100 billion in venture capital. it’s the grift
When you are raising many billions of dollars to build up your infrastructure, you don't have much choice but to project a belief that the eventual outcome will result in a situation where there will be a return on that money.<p>That said, I do agree with you that the moats are very shallow and any particular frontier AI lab is unlikely to "win the AI race" and capture enough value to be worth the amount of investment they are all currently burning.
The Chinese models are distilled from GPT and Claude, so it's not like China would pull ahead if those companies went away for six months. They really are at the forefront of innovation right now, as much as I hate to think of the consequences of this (a single company owning a superintelligence is basically a nightmare scenario for me).
Don't worry, if someone truly achieves superintelligence it won't be controlled by anyone for long.
There will be a blinding flash which signals the superintelligence singularity. When the smoke clears, you'll see a 50-foot tall Altman/Borg hybrid. He is about to destroy humanity with his death ray. Suddenly, a 50-foot tall Musk/Borg hybrid appears out of nowhere, and stops Altman just in time. Then they work together to destroy all humans.
That's my other nightmare scenario :P
I think that’s the realm of conspiracy theories. There are also not only Chinese alternatives - Mistral in Europe is doing pretty well in several categories it has opted to focus on.<p>This kind of reiterates the parent’s question, I think - people are maybe too focused on the GPT/Claude models and forget about all the other ways of using the tech.
It's never OK to physically attack someone like this. Full stop.<p>Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.
If only that sentiment was reciprocal!<p>When the job losses hit in earnest and the vague handwaving about making it right all inevitably turns out to be hollow, those on top will be exceedingly comfortable using violence to keep the underclass in line. It has happened before and it will happen again.
My assumption, based on many factors, is that it is precisely why carpet surveillance systems like Flock are being rolled out in preparation.<p>There are people in control who don’t make 1-, 5-, or 10-year plans; they make 20-, 50-, 100-, and 500-year plans; and they know human nature quite well, which allows them, if not to predict, then at least to have an anxious understanding of what their plans will cause and what needs to be prepared for in advance.
Exactly.<p>People don't need to act like a slave.<p>Make your own decisions in life.
Exactly this
The ‘graduation day massacre of 2047’, ycombinator’s greatest tragedy…. The ceremony was interrupted by ‘Anti-AI’ + ‘Pro-Trump/Palestine Gaza Hotel & Casino’ protesters (who all refused to wear their anti COVID-47 plastic vampire teeth) and, with good cause, were massacred by the Cyber-Hot-Pinkertons<p>I forgot what I was typing this in response to, so I’m just going to stop and post lol
Sam eagerly pursued DoD contracts to weaponize AI. And then lobbied for legislation to ensure OpenAI cannot be held accountable if people are killed due to their systems.
I find it interesting that Altman's fans seem to keep skipping past this fact. I'd love to hear their defense as to why one person potentially being responsible for hundreds or thousands of deaths is acceptable, but attacking that one person isn't. If violence is never the answer, they should be condemning Altman with even more vigor.
> why one person potentially being responsible for hundreds or thousands of deaths is acceptable<p>I am not sure who exactly that one person is. Is it Altman, who according to many people is not that knowledgeable in AI in the first place; the scientist who found a breakthrough (who is it?); the president of the United States who is greenlighting the strikes; the general who is choosing the targets (based on AI suggestions); the missile designer; the manufacturer; the pilot who flew the plane?<p>I get the point about concentrating power in fewer hands, but the whole "all the problems of this world are caused by an extremely narrow set of individuals" always irks me. Going as far as saying there is just one is even more ludicrous.
I’m fine with holding them all accountable to varying degrees. For example, yes, ultimately the president is responsible, but so is the person who dropped bombs instead of refusing an illegal order; just like the street dealer, gang banger, trafficker, and cartel boss are all guilty of all of their various crimes.<p>What do you find difficult to understand about that?
Accountability sinks are good value and wealthy people always make sure they have enough of them
Ah the old 'everyone is responsible so nobody is responsible' canard.<p>I will give you a helpful rule of thumb: when in doubt the guy with a bank account larger than the total lifetime income of hundreds of thousands of people is probably the one to blame.
The entire purpose of government is to have a monopoly on violence. Democracies give their government the power to decide when and against whom to deploy violence.<p>There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.
I'm not sure the next batch of schoolgirls getting bombed will particularly care whether the choice was made "democratically" or not.<p>I also won't particularly care about the distinction when AI is inevitably used to enact violence on the US population.
> There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.<p>Is this what we just saw with America attacking Iran?
> The entire purpose of government is to have a monopoly on violence.<p>... Isn't that rather against the spirit of the US' constitution? I can see it being a thought with other nations, but not this particular one.<p>> A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.<p>Which kinda follows the spirit of English Common Law:<p>> The ... last auxiliary right of the subject ... is that of having arms for their defence, suitable to their condition and degree, and such as are allowed by law. Which is ... declared by ... statute, and is indeed a public allowance, under due restrictions, of the natural right of resistance and self-preservation, when the sanctions of society and laws are found insufficient to restrain the violence of oppression. - Sir William Blackstone<p>A "monopoly on violence" is exactly the thing our laws are supposed to protect us against. Because if a state has that, then they have a monopoly against all rights, because they alone can employ violence to curb those who do not subscribe to the state's ideology.<p>I'm pretty much a pacifist. I _like_ Australia's gun laws. But, a government's purpose is to protect their people. They are to be representative - or to be replaced. If they leave no other choice for that, then violence is the only answer left.
This is a distinction without meaning. It makes no moral difference who dispenses justice, if said justice is justified.
Yeah, it's kind of terrifying, how this incident seems to have faded from people's memories.
Military power and attacks on private individuals are different things. It's perfectly consistent to be against attacks on private individuals while being in favor of building military weapons.
There's thirty-some-odd million people in Ukraine who very much would like to get AI weapons before the Russians do. They're coming whether you want them or not.
The thing about the rich is that they have access to sufficient levels of abstraction that they can commit terrible, disproportionate violence without it looking that way. And then fools who crave the simplistic safe comfort of moral absolutes come to their aid.<p>Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.
> Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.<p>Really? I don’t know how many were in his house but at most it’s attempted murder of a few versus killing 150.<p>I see a difference.<p>US law sees a difference too. The person that threw the firebomb will get the full weight of the law if they are caught, and spent an awfully long time in prison.<p>Those that killed the school girls will never face punishment.
And it's 150 innocent people vs. a few very guilty people.
If you want to draw that distinction, then don't you need to account for intent? I don't think the USG intended to bomb a school. The guy throwing a Molotov cocktail has even less claim to it being an accident.
> Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.<p>We should call it what it really is: the oligopolization of intellectual work. The capital barrier to enter this market is too high, and there can be no credible open source option to prevent a handful of companies from controlling a monster share of intellectual work in the short and medium term. Yet our profession just keeps rushing head first into this one-way door.
>> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever. We have to get safety right, which is not just about aligning a model<p>The question is what they are doing about "getting safety right" and whether they are doing enough. To me it seems like all the focus is on hyper-growth and maximum adoption, and safety is just an afterthought. I understand it's a competitive market, and everyone is doing it, but these are just hollow words. Industries that care about safety tend to slow down.
I told my GF over dinner tonight that historians in 1000 years will look back to Nov 2023 as a pivotal fork where humans lost.<p>Without missing a beat, she said, "If humans' loss was that complete, there would be no historians."<p>I responded that I never said they were human historians.
> I told my GF over dinner tonight that historians in 1000 years will look back to Nov 2023 as a pivotal fork where humans lost.<p>Yes, because no one listened to me. It was early-mid 2024, and here as well as on other places, people kept saying "oh well the cat's out of the bag now, nothing can be done, it can't be stopped". I pointed out that only 4 or so planes being made to collide with TSMC, NVIDIA and ASML would be enough to give at least a decade of breathing room while we try to figure out how to keep this technology safe. I'm almost certain there were people who read it on here as well as elsewhere who could have made it happen.<p>_Now_ it is indeed too late.
I didn't think Hacker News needed an explicit "calls for violence are bad" guideline but the comments here have shown otherwise.
If you can't think of a single occurrence in history that directly disproves your proposed guideline, I'm seriously concerned for you. It would be time to redo your education.<p>If you can, then you shouldn't be proposing introduction of guidelines that are blatantly wrong.
Do you feel the same way about comments that support the US military action in Iran? Why or why not?
It is unnecessary, and it was an obvious offense, not defense. Of course it is "bad". We (Trump) need(s) to stop creating wars and fucking up the economy, while killing others. It is bad all the way down.
If you grind people into a paste long enough, eventually some of them may object in one manner or another.
Are calls for violence bad when you're calling for throwing a molotov cocktail at a child? At an adult? At a serial killer? At someone who's about to shoot you unprovoked? At someone who murdered your family? At someone who's about to?<p>If you said "yes" to all of the above, I'd love to know your reasoning.
Yes.<p>If you want a molotov cocktail thrown so badly, throw it yourself. Don't put it on other people to do it for you.
The general tone here is that freedom of speech is absolute and nothing should curtail that.<p>Not my personal view.
I’d like to know your reasoning for answering “no” to all of the above.
I agree with the idea that calls for violence are bad; however most people in the world are more than happy to support both violence and calls for same against people and organizations they believe to be sufficiently significant threats.<p>Are calls for violence against Hitler during WW2 bad? How about the Japanese imperial navy?<p>How about calls for violence against Putin during his war of aggression?<p>This isn’t rhetoric; I’m just pointing out that it isn’t as black and white as people seem to make it. (It is black and white for <i>me</i>, as I’m with Asimov on the matter, but it isn’t for most humans.)
Is it okay to profit off of a machine that kills innocent people? Would it be immoral to attack the builder of that machine, if it stopped the operation of the machine?
I'm on the skeptic side of "AI" and find this entire industry obnoxious, but your argument doesn't hold any water.<p>Technology that can be used to kill innocent people is all around us. Would it be moral to attack knife manufacturers? Attacking one won't make the technology disappear. It has been invented, so we have to live with it.<p>Also, it's a stretch to say that "AI" "kills innocent people". In the hands of malicious people it can certainly do harm, but even in extreme cases, "AI" can currently only be used very indirectly to actually kill someone.<p>Technology itself is inert. What humans do with technology should be regulated.<p>IMO the fabricated concern around this tech is just part of the hype cycle. There's nothing inherently dangerous about a probabilistic pattern generator. We haven't actually invented artificial intelligence, despite how it's marketed. What we do need to focus on is educating people to better understand this tech and use it safely, on restricting access to it so that we can mitigate abuse and avoid flooding our communication channels with garbage, and on better detection and mitigation technology to flag and filter it when it is abused. Everything else is marketing hype and isn't worth paying attention to.
> Would it be moral to attack knife manufacturers?<p>Apply this to guns.<p>Then look how this works in the US. You could, but then a law was made to protect gun manufacturers, The Protection of Lawful Commerce in Arms Act.<p>AI will get this treatment I’m sure.
>Would it be moral to attack knife manufacturers?<p>if they're selling the knives knowingly to a knife-murderer, it might be worth discussing.<p>Sam Altman is not, although he portrays himself that way, some geeky guy without power who just builds products, he's the guy who makes the decision to supply this tech directly to the US government who is on the record about using it for military operations. And you're right on the last point. Sure the 20 year old guy who threw a molotov cocktail at Sam's house is, I'm going to assume for now given the topic Sam chose for the piece, an anti-tech guy.<p>But assume for a second you had your family wiped out in a bombing run because Pete Hegseth attempted to prompt himself to victory with the statistical lottery machine. If the CEO knew this and enabled it to add another zero to his bank account, not so sure about the ethics of that one.
Agreed. Sam's full of crap and the way we tackle that is with conversations, not violence. He deserves to grow old like anyone else, violence isn't an answer.
I don't condone violence, but the contract he's signed with the US military is a credible threat to everyone in the US. OpenAI will now certainly be called on to assist in domestic mass surveillance, under threat of the kind of severe penalties Anthropic has faced. So why did he agree to that contract, unless he's willing to provide that assistance? So it's gone well beyond conversation, though not to a point where violence is appropriate. Boycotts and hostility are definitely appropriate at this point IMO, though.
He isn't going to suddenly grow a conscience from a riveting, intellectually stimulating conversation.
> the way we tackle that is with conversations, not violence<p>I think the breakdown here is that conversation seems to have no power. To be only a bit hyperbolic, the only language with power is money -- or violence. To the extent that ordinary people cannot make change with "conversation" (which I interpret here to mean dialog within society, including with lawmakers), they feel compelled to use violence instead.<p>A non-rhetorical question: What recourse do non-billionaires have when conversation has less and less power, while money has more and more, and those with money are making much more money?
That sentiment always comes from people who are better at fighting with communication.
Everyone else deserves to grow old, too...
It's pretty amazing to observe people experience the past ten years in American history and continue to think that we can out-talk the bad people in the world.<p>Michelle Obama's, "When they go low, we go high", is some of the stupidest political advice and a generation has lost so much because of it. (The generation before got West Winged into believing the same thing.)<p>When you look to the right, you have a stolen election in 2000, a stolen supreme court seat, an attempted coup, and relentless winning despite it.
Like this, for sure not. And Sam has not, even with that article, done anything to warrant violence.
> OpenAI has abandoned its open source roots.<p>It was only a matter of time. The font size on the dollar signs kept increasing, and selfish humans will eventually always crack. Keeping it open would have required enshrining it as a public utility. Private companies don't do altruistic things unless they benefit.
I categorically reject that assertion. Two simple examples: 1) when you see someone assaulting someone else, it's absolutely ok to attack them, and 2) the American revolution!<p>It's like that old joke:<p>A man offers a young woman $1,000,000 to sleep with him for one night.<p>“For a million dollars? Sure, I’ll sleep with you.”<p>He smiles at her, “How about $50, then?”<p>“How dare you! I’m not a whore!”<p>“Look, lady, we’ve already agreed what you are, now we’re just negotiating the price.”<p>Similarly in this case, you can't make up absolutes and assert they're true, while ignoring that the real world is more complicated. And once you do realize the world is complicated, you realize there aren't absolutes: everyone is a prostitute, terrorist, or whatever other bad label you want to throw at them ... it's just a matter of degree.<p>So no, it's not always wrong to physically attack someone like this. You can debate specifically whether Altman has committed enough violence himself to justify violence against him: that's something two people can reasonably disagree on. But you can't just say "violence bad" like it's some great pearl of wisdom, while ignoring that violence has in fact been good many times throughout history.
He's saying that just so he can use it if another company gets bigger than OpenAI ("you can't have all the power"). If OpenAI were the top dog by a large margin, you wouldn't hear him say a peep about this (as was demonstrated by his actions with the charter).
"Like this" is doing some serious work in that statement!
Violence is language that needs no translation. Everyone across the world, every culture, every country, every social group - from elites to homeless can converse in it using the same vocabulary.<p>It is useful to have some degree of mastery in this discipline. Sometimes it is the only language that can deliver the important message to an unwilling listener.
If we are going to say violence isn’t okay then it is important that we be clear about the boundaries of what we define as violence.<p>Theft is a nice analogy here. The default model of theft is property crime but the largest type of theft is wage theft.<p>If we fret about violence done against individuals but not violence against groups our attention is going to end up steered in a narrow direction.
‘Working towards prosperity for everyone’ was extremely hollow as well. If he believed this, he would be running his company as a cooperative and not as a for-profit company.
I've never understood this specific taboo against physical violence. Firing a thousand people or stealing their wages, ruining their life and their families', passing unjust laws that threaten the well-being and happiness of a million, that's ok! A punch in the nose, that's not ok!<p>There are far worse things than physical violence against one person, and with the end of the rule of law there isn't any other recourse. The one value that is common across all cultures is that the wicked must be punished for their wickedness; expect to see violence against oligarchs and CEOs spread like fire.
The idea that firing you or stealing your wages is the worst a CEO can do to you is itself a product of the taboo against physical violence. There are a number of famous incidents from the late 1800s and early 1900s, when the taboo was weaker, of CEOs sending private armies to shoot inconvenient labor movements. It's not an equilibrium you should defect from lightly.
We'd have never progressed as a species with your mentality. Change is painful and it's part and parcel of progress.<p>Humans would be suffering far more today if we weren't willing to accept short term pains for progress.
Change and progress like the people of France deciding they had enough of injustice and nobles' impunity, then? A little short-term pain for social progress? We agree.
That sounds suspiciously like a "ends justify the means" argument.<p>It's easy to say we need to be willing to accept short term pains when it's someone else who has to bear the brunt of them.
Are you willing to stand by this argument and give up your career?
> It's never OK to physically attack someone like this.<p>I broadly agree.
But… there are some who have lived who made the world a worse place. Who gets to decide? Trump has done a bit of this sort of deciding, and it hasn't gone great so far, and there is no sign that it's actually helped.
If Sam disperses his power, we can believe him. So long as he's just concentrating wealth and power, he's just another tech bro.
An oligarch who promotes “democracy”. Is he trying to cynically ingratiate himself, or is he really that deaf to the irony?
Well said, I condemn the violence as well. I had to stop at that point too though, it's so blatantly disingenuous and hypocritical.
Can't say I feel sorry for the guy. Anyone who actually believes his platitudes about "democratizing" AI is far too naive. If he really believed that, he'd make a torrent out of ChatGPT's weights and upload it to the pirate bay.<p>The fact of the matter is these AI CEOs are actively trying to economically disenfranchise 99% of the human race. The ultimate corollary of capitalism is that people who aren't economically productive <i>need not be kept alive</i> any longer. Unproductive people are nothing but cost, better to just let them die. A future where the richest classes can turn the underclasses into soylent is now very much within the realm of possibility.<p>If this doesn't radicalize people into actual violence, I simply have no idea what will. "Attacking someone is wrong" is a completely meaningless statement to make to someone who believes society as we know it today is going to be destroyed. Honestly, I can't even blame them.
> AI has to be democratized; power cannot be too concentrated<p>That sounds like something someone says when he understands his weak position, especially someone as ruthless, dishonest, and narcissistic as Altman.
That's not true.<p>As a defense contractor Altman is a legitimate target for a country that the US has attacked like Iran.<p>The US is engaging in military action against many countries and has threatened to annex or invade allies.<p>In that context Altman is 100% a legitimate target to those whose sovereignty is threatened and whose people are being killed.
AGI will be democratized when it's discovered... just right after AWS, Microsoft, and Oracle finish their 6-month beta test.
> It's never OK to physically attack someone like this. Full stop.<p>I agree. The French Revolution was really, really mean.
So you think it would always be wrong to throw a molly at Hitler?
Was it not OK to kill King Louis?<p>Just saying.
> "Those who make peaceful revolution impossible will make violent revolution inevitable."<p>- John F. Kennedy, 1962
Assuming this is a serious question, here are some ideas you could read about!<p>- <a href="https://en.wikipedia.org/wiki/Vigilantism" rel="nofollow">https://en.wikipedia.org/wiki/Vigilantism</a><p>- <a href="https://en.wikipedia.org/wiki/Law" rel="nofollow">https://en.wikipedia.org/wiki/Law</a><p>- <a href="https://en.wikipedia.org/wiki/Bill_(law)" rel="nofollow">https://en.wikipedia.org/wiki/Bill_(law)</a><p>- <a href="https://en.wikipedia.org/wiki/Trial" rel="nofollow">https://en.wikipedia.org/wiki/Trial</a>
Ideas.<p>Now back to reality.<p>Law: Epstein, ICE, the Geneva Convention, segregation.<p>Bill: going once, going twice, highest bidder wins. Ironic on a Sama thread.<p>Trial: OJ Simpson. Many miscarriages.<p>Vigilantism: revolutions.<p>I am not saying break the law. I am saying look back at history.
We'd be stuck in the Stone Age with your mentality.
If only the American Colonies would just have petitioned King George just a few more times…
This is the mentality of the modern age, as shaped by America and all empires before her; e.g., Supreme Leader Khomeini no longer exists because the man Americans voted for as head of the armed forces decided it would be better this way.
We’re in the middle of slaughtering two civilizations and you think we’re not in the Stone Age?
> There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me.<p>For context his blog post seems to be a response to this deep-dive New Yorker article:<p>"Sam Altman May Control Our Future—Can He Be Trusted?"<p><a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted" rel="nofollow">https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...</a><p><a href="https://news.ycombinator.com/item?id=47659135">https://news.ycombinator.com/item?id=47659135</a>
Wouldn't it be more correct to call the article "critical" and not "incendiary"? I looked it over and I don't remember seeing any calls to violence. Altman needs to remember that he holds an incredible amount of power in this moment. He and other current AI tech leaders are effectively sitting on the equivalent of a technological nuclear bomb. Anyone in their right mind would find that threatening.
Ronan Farrow, one of the journalists who worked on this article, talked to Katie Couric on her YouTube channel about this. They worked on this across ~18 months. I thought this interview was illuminating.
Yes, it was good. It seems clear that Farrow and his co-author approached it in a methodical, fair-minded way.<p><a href="https://www.youtube.com/watch?v=wr_sB1Hl0oM" rel="nofollow">https://www.youtube.com/watch?v=wr_sB1Hl0oM</a>
Turns out the article was not in fact incendiary.
Incendiary. Is he trying to suggest the journalists are at fault here?
"I am firm, you are obstinate, he is a pig-headed fool."<p><a href="https://www.wikiwand.com/en/Emotive_conjugation" rel="nofollow">https://www.wikiwand.com/en/Emotive_conjugation</a>
Yeah, it's one thing to write an incendiary article, it's a very different thing to write an objective article about someone who will say anything to get what they want.
He has to be talking about the New Yorker article, which wasn't incendiary at all. If anything, it seemed fully neutral to me, reporting what they could justify as facts but going out of their way to not specifically paint him or anyone else in a negative light beyond a listing of events that they presumably have solid sourcing on (if not, sue them; if so, stfu).<p>If a neutral look at your actions seems incendiary to you, maybe you need to rethink your own life and actions.<p>It should go without saying I don't think people should be attempting to light other people's houses on fire regardless of how distasteful they find those people.
Unserious answer about a very serious event.<p>I don't believe a word of Sam's "I believe" section.
Ha, I was giving an AI bootcamp to a room full of people and someone asked me my opinion of Altman. I hesitated for a second and replied that I would not trust Altman, about anything, further than I could throw a rock.<p>If Graham says this guy will stop at nothing to get whatever he wants, which I absolutely believe, then why would you trust anything that comes out of a person like that's mouth?
Who tf is dumb enough to pay for an AI bootcamp? Genuinely curious. Whoever is selling AI bootcamps is just as much a scam artist as Sam.
You don’t even know what is covered. It could be anything from how to prompt to how to create your own models from numpy primitives.
Who tf is dumb enough not to do it, though?<p>If I was non-tech and owned a business, and someone (reputable) offered to teach me everything I need to get up to date with the most revolutionary technology of the decade (perhaps century?) for like ... 500 dollars? Why not?
Yeah, people learning new technology is terrible. /s
10 hours ago a post made the frontpage here [0] about how OpenAI is backing a law that "would limit liability for AI-enabled mass deaths or financial disasters". Now he's here saying he believes that "working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for [him]".<p>I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.<p>[0] <a href="https://news.ycombinator.com/item?id=47717587">https://news.ycombinator.com/item?id=47717587</a>
> I know I should be used to people openly lying with no consequence, but it still amazes me a bit.<p>Well that makes two of us. Character seems to mean nothing today.
I think it's good for CEOs of powerful companies to make statements about how they don't want too much personal power and it's important to ensure everyone does well, even and perhaps especially if there's reason to suspect they don't believe it. Saying it doesn't solve the problem, but it helps create a permission structure for the rest of us to get it to actually happen.
Incendiary and false headline aside, no sane person would suggest that a hardware store that sold an axe that was used by an axe murderer should be held liable unless that store knew what was about to unfold.<p>Unless AI companies knowingly participate in murder plots, they should not be liable.<p>Is Microsoft liable for providing Notepad, a product which can be used to write detailed and specific mass murder plots?<p>Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?<p>Liability should depend on your participation in the event, of course. Otherwise you wouldn't be able to buy an axe, or a car, or use the internet at all. A closer analogy is ISPs not being liable for copyright infringement done by users, and subsequently not being required to police such activity for rights holders.
> <i>Incendiary and false headline aside</i><p>The text of the bill literally starts with "Creates the A.I. Safety Act. Provides that a developer of a frontier AI model shall not be held liable for critical harms caused by the frontier model if (conditions)", and defines "critical harms" as "death or serious injury of 100 or more people or at least $1,000,000,000 of damages". The headline is, IMO, shockingly accurate.<p>> <i>Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?</i><p>No, but they are liable for selling a car with defective brakes, even if they don't know that the brakes are defective. And if the ex-Monsanto has to pay millions in compensation for causing cancer with a product that they tested to hell and back, then I don't see how that's different when the one causing cancer is an AI just because the developers pinky swear that it's safe.
People championing the absolution of billionaires who create a chatbot that can't spell "strawberry" and then say it should be allowed to choose who lives and dies wasn't what I expected at the turn of the decade.<p>Beautiful.
Unpopular opinion, but I think it's written quite well.
I don't think that's unpopular, it is pretty well written. But the "I believe" section is extraordinarily hard to believe given Altman's history.<p>> Working towards prosperity for everyone, empowering all people<p>> We have to get safety right<p>> AI has to be democratized; power cannot be too concentrated<p>None of these statements, IMO, reflect his actions over the past 5 years.<p>> we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future<p>I agree with this, but there is a near 0% chance of that happening anytime soon in the US. I think he probably is aware of this.<p>Just my opinion, but it comes off as very insincere.<p>To be clear, what happened is still awful and there's absolutely no justification for it.
it's "written well" but not at all a smart piece of writing. leading with a photo of a cute baby before engaging in an extended defense of one's own integrity is so obvious as to be insulting
Yes, clearly not written with his own product.
Perhaps by ChatGPT
Scrolled thru.<p>> A lot of companies say they are going to change the world; we actually did.<p>Just couldn’t resist. So much of it reads like a marketing message.<p>Sam - when you say all society will benefit and that’s what you’re working towards, you can’t just say that. Nobody believes you and more importantly nobody has any reason to believe you. When you lead with that, and say nothing about what you are actually doing towards it, you make people work against you. When you put yourself up as a dictator for the collective needs of humanity, you have to put up or shut up.<p>So many put huge faith in you, but it’s turned out to be in the end entirely about you.
In all seriousness, what is the game plan for society moving forward as AI takes more jobs? The government doesn't seem to care. The AI labs don't seem to care.<p>What happens when more and more people can't afford housing, kids, food, health insurance, etc.? Nothing more dangerous than a man who has no reason to live...<p>I don't advocate for violence, but I do foresee more headlines like this as things get worse.
Nobody has one. If labor stops having value the economy will stop working and society will break down far in advance of building the infrastructure necessary for the promised AI abundance.<p>I like the idea of being ”post-scarcity” as much as the next guy, but I don’t understand how we get there. It’s a project in itself, it doesn’t just happen by magic, and nobody is actively trying to make it happen or has any logistical idea of what it involves.<p>We’ll also lose a huge number of jobs as soon as true AGI comes on stream, by which I mean the kind of AI that no longer acts like somebody who has read all the world’s books but can’t figure out that you always need to drive to the carwash.<p>We’ll lose these jobs and there will be no super abundance at that point, and not even government support.<p>There is the option of passing laws requiring companies to retain human employees. That to me is about the only viable stopgap measure.
I think, like other disruptive inventions of the past, there will be pain for many, but it will pass. Society will grow and adapt. There's some statistic somewhere I will paraphrase and/or botch that goes like: 90% of the jobs people have today didn't exist 50 years ago. I think no one can imagine what possible opportunities will manifest in the future. It's a lot easier to imagine everything that might go wrong because we evolved to see a sabertooth in the rustling leaves.
The game plan is the same as it was for globalization and previous rounds of automation: gaslight workers into thinking that they are the problem. Push all the taxes into the labor economy and all the money into the capital economy and use the inevitable budget shortfall to justify skimping on social services. That'll work until it doesn't, at which point the Ellison strategy will be employed: pay 10% of the poors to keep the other 90% in line.
Out of curiosity... why do you think this?<p>I think this is complete madness. I'm not someone that is in a job, so I have the luxury of thinking critically about what is going on, and... I just don't see it.<p>What I see is that LLMs will complement labour, and the excess returns of model producers will be very minimal (if any at all) due to the intense competition keeping switching costs to a minimum (close to zero). This is before mentioning open source models, which I expect to continue to improve.<p>There is no specialisation re: models at this moment in time, so this is very likely to be the case.<p>OAI and Anthropic have to generate enough after-tax cash flows from operations to cover their reinvestment needs to continue going on. If they can't cover reinvestment, then they will obviously lose, as their offering will not be competitive.<p>There's no certainty they generate this amount of cash profits either. They still have a high chance of going bust; of course that gets lower - IF - they can keep ramping up revenues.
I think what you’re describing is a more general race to the bottom where everyone loses, including the AI companies.<p>This won’t happen because the AI companies will collude to prevent it from happening, meaning they’ll drop out of that race leaving the rest of us to claim victory.<p>Generous of them, really.
How about the economic impact of all the over-investment in AI? It’ll all be dumped on us, I’m afraid.
I've reread your post a few times and I can't make heads or tails of it. I don't even disagree with anything you've said, it just seems like a total non-sequitur; nothing you've said gives any reason to disbelieve that AI will put (many) people out of work.
> what is the game plan for society moving forward as AI takes more jobs<p>> What happens when more and more people can't afford housing, kids, food, health insurance, etc.?<p>What about when the opposite of all this happens, society massively benefits, and unemployment rates stay about what they have always been?<p>Will people still be yelling about the doomsday of societal collapse that has failed to materialize every single time?
The molotov cocktail was thrown <i>at the metal gate</i>, not at the house and they arrested some kind of a disturbed person:<p><a href="https://sfstandard.com/2026/04/10/sam-altman-russian-hill-molotov-cocktail/" rel="nofollow">https://sfstandard.com/2026/04/10/sam-altman-russian-hill-mo...</a><p>It was a performative action.<p>I'm sure there will be a thorough investigation, unlike in the Suchir Balaji murder case where they rubber stamped suicide after half an hour despite him being a whistleblower.
Historically, was it always so <i>common</i> for powerful or famous people to seem to purposefully garner hatred like he, and others, have been for the past decade? To speak in a petty, self-important, "trolling" manner, to a very broad audience? To embrace traits that are intrinsically negative? Or are we living in a rare time?
New England colonists had a habit of ransacking and burning down the houses of government officials throughout the 1760s and during the Revolutionary War. Got bad enough that most did not sleep in their government housing.
We are in fact still at the tail end of a uniquely measured and peaceful time.
Can you explain the petty, self-important, trolling manner? Which traits are intrinsically negative?<p>Genuine Q
Of Altman, Trump et al, Elon, the Nvidia guy, etc? Or am I not understanding the question?
I don’t think this will do much to help his image.<p>They had to stop putting Luigi Mangione in the media because public sentiment was not going the way they expected.
"AI has to be democratized" - pretty weak coming from ClosedAI
Genuinely surprised at the extreme comments against sama here. I don’t think he’s a good steward of the technology, but I don’t think violence is funny or justified. I also don’t think it’s justified for him to use it to say that a negative article about him is correlated to this event. Seems to imply that an “incendiary article” led to this and that criticism is tantamount to calls to violence. He drives the conversation with apocalyptic terms, and both investors and crazy people buy into it.
Not that I excuse this behavior, but it's expected, is it not? He's claimed to have built the replacement for human labor while participating in the regulatory capture that ensures that process screws the affected parties out of any effective recourse.<p>He's stood atop a soapbox, in earshot of everybody, and shouted to the corporations that because of him, they can now fire hundreds of thousands — millions — of people with impunity. It doesn't matter that it's not true and that the firings are probably not actually due to AI. But he's standing in front of them and providing the cover.<p>He's a marketing guy. He made himself the face of AI. His message out of the gate was that it was going to replace human workers. What did he think was going to happen?<p>It's like all of these people think that humanity has evolved out of the collective rage spirals that powered political revolutions in the 1500s, 1600s, 1700s — every 100s. Nope. It's always still there. We've had a middle class for a while to mask it, but it's being hollowed out, and when it collapses completely, that ugly and ever-present human urge to eat the rich will rage right back to the surface again. Yet they all seem apt to fight to be first in line to be the face of injustice during a volatile period, for some reason.<p>It's kind of baffling but also interesting to witness.
<i>'Discourse is getting too hot'</i> says Man selling Large Language Microwaves
Just take a second to consider this: if HN, probably one of the less reactionary places on the internet, and one of the most capitalist-friendly, is <i>this angry at this point</i>, before the mass job losses even start, what in the name of God do you think the general public is going to be like when they’ve been going on for years?<p>If nothing else there’s a serious self-preservation incentive for AI CEOs to sort something out that doesn’t get them lynched, because it’s not looking good.
Is the underground bunker in New Zealand ready yet? Better check on it.
We still haven't made AGI, so I don't understand what he's saying they did.
What article is he referencing in the fourth paragraph? The New Yorker one? I got the impression that it was careful in its reporting and by no means one-sided.<p>Seems pretty sleazy for him to associate that (based on no evidence!) with the violent attack.
1) It's terrible that this has happened. People who do this are evil.<p>2) It's atrocious that Sam makes it seem like any investigative reporting into him, a major public figure at the head of one of the 5 most important companies in the world, is somehow responsible for it.<p>3) Sam is always playing the smol bean victim for sympathy points. To be clear, he is absolutely the victim of an atrocious crime. However, this post is not done for any reason other than to continue the exact same playbook he has run for the last N years in order to manipulate public opinion in his favor. This post will do nothing to stop deranged, evil people, but it may make people feel sympathy for him.
The current crop of tech billionaires openly hate democracy, gleefully proclaim that their products are going to put everyone out of a job, and invest enormous amounts of time and energy into making sure that nobody can do anything to stop the world they’re creating, that nobody asked for or wants.<p>Actions have consequences. I’m sorry. Read a history book.
>“Once you see AGI you can’t unsee it.” It has a real "ring of power” dynamic to it, and makes people do crazy things. I don’t mean that AGI is the ring itself, but instead the totalizing philosophy of “being the one to control AGI”.
The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring.<p>The analogy has 2 simple rules and you can't even follow them:<p>#1 It MUST be destroyed.<p>#2 SOMEONE has to have the ring until then.<p>Without BOTH of those things you have no meaningful analogy. If we're being super charitable, "For no one to have the ring" is Frodo sitting at the council, with the ring on the table, naively thinking that it can stay right there in that spot forever, safe in Rivendell, about to have the horrifying revelation that there are 2.5 more books in the story. More realistically, it's Boromir moments later arguing that Denethor has the mandate to use it to fight on Gondor's behalf.<p>Fuck. I'm so past the point of caring about the extinction of our species, or your role in enslaving us to our robot overlords or whatever... but SELLING US SPECIOUS RING ANALOGIES IS WHERE I DRAW THE FUCKING LINE
Did Claude Mythos escape containment?
> <i>AI has to be democratized; power cannot be too concentrated. Control of the future belongs to all people and their institutions. AI needs to empower people individually, and we need to make decisions about our future and the new rules collectively. I do not think it is right that a few AI labs would make the most consequential decisions about the shape of our future.</i><p>What a bullshit thing for someone who is not actually democratizing access to AI to say.
> The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The two obvious ways to do this are individual empowerment and *making sure democratic system stays in control.*<p>OK! So he's going to renege on the contract he's signed with Hegseth, which effectively commits OpenAI to serving as the IT Department for Trump's secret service?
It’s funny how this happens the very same moment we get to read about Claude’s Mythos and a New-Yorker article. I really doubt the attacker is up to date with either…<p>The only thing surprising here is how naive you guys are. He is a marketing&sales guy in the first place.
> The world deserves huge amounts of AI and we must figure out how to make it happen.<p>> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.<p>Boy, he really just encouraged the world to keep turning against him. This is so transparently disingenuous. I guess he has no choice if he doesn't want to give up his wealth and power, but putting statements like these out are only going to further fuel anti-AI sentiment.<p>I do think it's funny he opened this with an allegedly real picture of a baby, though. It may very well be real, but why would anyone take his word for that, especially those who already don't trust him?
So all these things he's saying are going to leave people scared and afraid; on that we agree. What's the disingenuous part here?<p>Don't get me wrong: others talk of a pattern of dishonesty, or that he's too eager to please*, and I'm willing to trust them on this because I found out with Musk that I don't spot this soon enough.<p>But what, specifically, do you see? What am I blind to?<p>* given how ChatGPT is a people-pleaser and has him around, Claude philosophically muses about whether its subjective experience is or is not like a human's and has Amanda Askell, and Grok is like it is and has Musk, I think the default personalities of these AI models are influenced by their owners' leadership teams
He's pretending to care about the negative effects AI will have on society at large, but goes on to say it's necessary and "must" happen. If he actually cared, he wouldn't continue down that path. He also wouldn't be lobbying the DoD for contracts to use his AI to help kill people.
The Epstein regime all seem really manic and are probably fearing the French aristocracy treatment. They tried to get Luigi on "terrorism" charges
> They tried to get Luigi on "terrorism" charges<p>That's about the least controversial thing I've heard recently. Luigi murdered a guy specifically because he was a health insurance CEO. Not because of something he did in particular, but because of the role he assumed. Terrorizing other CEOs is <i>precisely</i> what he intended to do. It is why there are so many Luigi fans, it is what they want too.
> "Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me."<p>"Prosperity for everyone" ... you lying weasel! You literally took a contract that Anthropic turned down because they wouldn't mass-surveil Americans or mass-murder non-Americans ... and you would!
Ah, the Elon manoeuvre: trying to make would-be assassins hesitate by using your own child as a shield.
You had me laughing for a full minute, bravo! :)<p>It's so true; the Sams of the world would be holding up their newborn child to save themselves from a raindrop!
It's like a baby on board bumper sticker. But for your house.
Gross, man, get help. Living with your family isn't using them as a shield.
Yeah it’s like they don’t want their children murdered, crazy
I’m probably going to get flames for this, but it would not surprise me in the least if Altman staged this. Given his history, it’s exactly the kind of thing he would do. Think about it - Elon has launched a smear campaign against him prior to the trial and Altman is getting crushed by negative press. Despite his efforts, he has been having trouble getting the media to pay attention to what he has to say about it. Solution? Rise above the noise with something even more newsworthy, and use it to push his personal PR, even mentioning and retorting Musk.<p>Think about something else: your house gets firebombed at 3:45am. How long until the cops wrap up and are done interviewing you? Two hours? How long until your family calms down and you can have alone time to write? He states it’s still night when he’s writing it. Yet he finds enough time alone to write a well-thought-out essay?<p>Yeah…seems likely.
Not gonna lie, based on everything I've ever heard about Sam Altman (long before the New Yorker article he seems to be very upset about) my first thought on reading his post was maybe the event was engineered as some sort of PR stunt.<p>But police have arrested a suspect:<p><a href="https://www.reuters.com/world/us/suspect-arrested-after-molotov-cocktail-attack-openai-ceo-sam-altmans-home-2026-04-10/" rel="nofollow">https://www.reuters.com/world/us/suspect-arrested-after-molo...</a><p>I'm not enough of a tinfoil hat wearer to think there's a grander conspiracy that the SFPD is in on, so I'm going to believe this really happened.<p>I do think him trying to tie it to press he has been getting lately is still a shitty and opportunist thing for him to do.<p>If any of the press is inaccurate and defamatory, sue them for it, he can certainly afford the legal costs. If not, then maybe he should act better so as not to come off as a sociopath when people do fair reporting on him.
There is a suspect, but he appears mentally ill and could have been paid by anyone to throw a molotov cocktail at the metal gate (to ensure that no one in the house got hurt):<p><a href="https://sfstandard.com/2026/04/10/sam-altman-russian-hill-molotov-cocktail/" rel="nofollow">https://sfstandard.com/2026/04/10/sam-altman-russian-hill-mo...</a><p>"Around 3:40 a.m., the suspect threw a bottle containing a flaming rag at the metal gate of 855 Chestnut St., according to a police report."
Is there no vein of fear and loathing you won't tap?
I have many disagreements with Sam Altman. But physical attacks are never the answer. Especially attacking one's family.
He says power can't be too concentrated - but even n-2 generation models are not open.<p>He says "look at me I love my family" - so do the millions of people who think his company may destroy the economy and help corporations and the trillionaires put a boot to our children's necks.<p>3:45am in the morning - no dip, that's what AM is.<p>---<p>Someone here asked "How do we get to post scarcity from here?" and someone else said "no one knows".<p>The AI barons are loading up their bank accounts and political capital, driving us off a cliff and promising we'll learn to fly by the time we get there. But they're going to tuck and roll out of the driver's seat.<p>Sam, why do you expect us to believe anything you say when you have done nothing to lead the discussion about universal rights for citizens in a post scarcity society?
If the billionaire is “awake in the middle of the night and pissed”, it means you’re doing it right.
I appreciate his post and his tone.<p>No one should need to attack (on the one hand) or "trust" (on the other) Sam Altman (or Donald Trump or Barack Obama).<p>Power is reliance by others, and that reliance is conditioned on behaviors being made observable and on systems that ensure stakeholders' interests are maintained. Yes, there's some hero-worship, some arbitrary private power, some evasion of systems, and some self-dealing by leader coalitions (indeed, we seem to be at a historical peak), but that's not about him personally but about us, and our willingness to vote (writ large).<p>We do have to be careful about private power claiming that managing its issues is a matter for public governance (democratic or otherwise). It's a bit convenient to deflect blame (like having the jury be the one that "decides" a case, because then you can't blame the judge). I like that Anthropic stepped up to pay any electricity increases, that Apple has been recycling and cleaning up its supply chain, etc. If anything, there should be stronger support for contributing vs. Hobbesian corporations.
None of the things you believe are working out.<p>1) Working towards prosperity, etc. - the prosperity is all going toward the top 2%. The people who need it most are not seeing it and probably never will, because the only ones guaranteed a benefit are the ones with the money to direct that benefit.<p>2) AI will be the most powerful tool, etc. - see point 1.<p>3) It will not all go well, etc. - probably should have thought about that before you released it on the world.<p>4) AI has to be democratized, etc. - true, won't happen. See point 1.<p>5) Adaptability is critical, etc. - Yes. Fully agree.<p>The problem, Mr. Altman, is that you believe the rest of the world thinks like you do, which is clearly not the case at all. While we have the ability to solve so many of the world's problems, it is absolutely clear that this is not what's happening. The rich in resources are getting richer, and they're not doing anything to help those poor in resources become better off. Instead, they are claiming those resources for themselves against the day that everyone else runs out.<p>Same as it ever was, Mr. Altman. Same as it ever was.
Sure, he's sleazy. Doesn't matter. It's not ok to firebomb jerks or saints. Rich or poor. It's both a criminal and an immoral act.
This question doesn't apply to Sam, but since you made a general statement, I'm trying to understand.<p>When it comes to people who openly incite or directly use violence, why do you think it's unethical to attack someone like that? If one is responsible for directly or indirectly killing hundreds, what's the ethical argument against using violence against that person?<p>Not trolling or anything; I've just been thinking about this for a while and trying to understand what I'm missing in this argument.
We use a lot of euphemisms and have a number of myths around political violence. The fact of the matter, so far as I can see, seems to be that political violence is extremely effective, however also extremely destabilising if used at scale.<p>Force just works a lot of the time, assuming you can win, and often even if you can’t, as even imposing a cost on your opponent often gets you a better deal. There’s a reason we keep having wars.<p>Also realise that the government monopoly on force is ultimately the only reason that anybody follows laws. That following laws is good for us is beside the point - force must be threatened and used in order to maintain control.<p>So, force, a euphemism for violence, is ultimately the way anything gets done, and we all have an incentive to lie about this just for the sake of stability.<p>I don’t know if this answers your question, but it’s what comes to mind on the subject for me.
It's an interesting question. Here's my reductive, off-the-cuff take: violence is justified when defending oneself or another from imminent bodily harm, or even under threat of imminent, considerable property damage. When a threat is not imminent, or an action is past, we use the police and the courts, because we as a society–in the sense of subscribers of the US constitution or similar tracts–believe that it is better to have a judicial system and impartial officials determine whether it is worth <i>depriving someone of their bodily liberty</i> or taking their property, that is, jailing or fining. Taking some sort of extrajudicial action or applying corporal punishment (!) requires a much higher bar. How and when would one determine that the judicial system is so unreliable as to morally permit vigilantism? It requires a great deal of moral self-confidence to take matters into one's own hands.<p>I focus on the question of vigilantism because that I think is the issue. Many people feel an emotional impulse, that they want to side with the CEO killer, for example, and they find ways to rationalize. What I'd say is, if you think Joe Blow is <i>so evil</i> , why don't we take him to court? What kind of possible actions could we not jail or fine him for but for which we would accept Johnny Anarchy, y'know, igniting his lawn furniture? Of course, the justice system is imperfect, but nobody lawfully elected the next sexy assassin as judge, jury, and executioner.
Why did I need to scroll halfway down the page before finding a comment that says it was wrong to firebomb his house and nothing else?
I find myself resenting him and his ilk on a daily basis for what they did to the computing space which was once sacred to me with their profiteering. But nothing justifies violence, not even close. Simple as that.
This is both horrible and not at all surprising.<p>Every quarter there are more layoffs, and we're told how AI will replace us and that we can do nothing to stop it. We cannot afford the simple things our parents were able to, and we are supposed to be grateful that we are living in a time of such "amazing" technological progress.<p>Sam is one of the most media-visible people representing the AI replacement of average people's livelihoods (not agreeing with this stance, but yes, outside of the Hacker News SF-tech matcha latte bubble, this is a commonly held thought), which makes this unsurprising.<p>Still horrible and not right.
I can't help but be reminded of last year, when our landlords (chill boomers) sold the house my girlfriend and I were renting the basement of (to presumably rich asshole millennials). The demographic doesn't really matter, but the old landlords kept us in the loop throughout the process; we knew as much as we could going into the new year. Apparently the new buyers wanted to keep us as tenants. Day 2 of them taking possession, the man came down with his innocent toddler and introduced themselves. He seemed friendly enough, and on Day 3 he came down in the middle of the day and handed me eviction notice papers.<p>I didn't firebomb his house, but I can't say I definitely didn't want to shit on his doorstep.
To be clear, I don’t want anyone’s house to get firebombed by any means. But the “I’m just a humble guy making mistakes and trying the best I can” attitude of this article strikes me as extremely inauthentic based on everything I know about the guy.
The post itself is authentic in that it's a set narrative for this moment. When you see the world as Sam does, this event is a specific opportunity to humanize him. Through that lens, the humility is both performative (it is!) and necessary. To be truthful would be inauthentic.<p>The sympathy is meant to give time and slack to accumulate power. One of the largest impediments to OpenAI right now is that people don't trust them, more and more people don't trust Sam, and their commitments are starting to not pan out (e.g. cancelling of Stargate UK, dropped product lines, etc.)<p>People should not read a post like this as, "how does this make me feel? how might I respond in his situation?", but rather, as he does, "how can I use this?"
He's attempting to humanize himself in hopes that his family home, where his child lives, isn't firebombed. Again.<p>Very reasonable response when you take a step back.
"Our product can destroy humanity, and it's not some crank telling you this, it's the company and CEO making it themselves, but we'll continue to make it anyway, so suck it up" but also "I'm just a humble guy, why can't we all live in peace?"
Everything about Altman makes me think "scammer". If he has one super-power, it is to convince people of his own importance.<p>OpenAI doesn't have much time left before they are shuffled off into bankruptcy, and they certainly aren't ruling the fate of man or anything like that. It's like the CEO of Enron claiming to hold the key to the future of mankind's energy resources, and people writing ponderous articles about it and debating whether Ken Lay will be a benevolent dictator or not.
People are not able to afford food, housing, energy, healthcare, or anything else right now because of Sam and the other scum bags.<p>Because of him people are suffering immensely.<p>My heart goes out to everyone in this situation.
> "Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me."<p>How so? What is your theory of morality, Sam? What I hear is Google: "Don't Be Evil".
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.<p>I am glad you feel my pain, Mr. Altman.
I wonder if this is the first time in recent history (or ever?) that he has felt this way. Must be nice.
Yes, very ironic. OpenAI was declared commercial through words and narratives, AI itself is hyped up with words and narratives, and his Trump sycophancy is words and narratives. And that is just the start.<p>It isn't just irony; it's a lack of self-awareness! (Sorry for increasing the pain that Altman et al. inflict on us.)
> Words have power too. There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside.<p>> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives. This seems like as good of a time as any to address a few things.<p>This kind of reads like “It is Ronan Farrow’s fault that some crazy person tried to burn my house down”.<p>Like this guy was going to go about his week, being normal and not making Molotov cocktails, but then he picked up a copy of The New Yorker and lost his mind
think of the children!<p>did he find his PR agent on Upwork or does he just think we're all morons?
Removing him is active harm reduction for the world.
> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time<p>Reason enough to pause and figure out the best way to continue. A massive societal change that won't all go well means millions dead and tens of millions more with their lives upended.
> There was an incendiary article about me a few days ago [...]<p>That is a lot of words, none of which state or claim the article was in any way inaccurate. Curious, that
So there's one photo. Of one family. Now what about millions of photos of all the other families possibly affected by him? That doesn't have power?<p>It's like "hey you can say mean things about me but don't attack my family while I attack yours". Not that this is directed at him personally, but it's just this mindset of wealthy people..
I think he's just trying to remind people that someone can both be the CEO of a powerful company you might disagree with/hate and a real human with a husband and child, and that trying to set fire to his house could kill those people.<p>I personally wouldn't go as far as to say the Farrow article caused this, but it seems fair game to respond to an article that had an over-the-top cover image of an animated Sam Altman picking and choosing faces with a photo reminding people he's human like everyone else.
> Now what about millions of photos of all the other families possibly affected by him?<p>His name allegedly isn't even clear within his own family! There is an ongoing lawsuit brought by his sister. (Amended as recently as a week ago and discussed in a flagged submission here: <a href="https://news.ycombinator.com/item?id=47640048">https://news.ycombinator.com/item?id=47640048</a> ).
[flagged]
I don't know who you think the "real family" is, but a) narrowing what a real family is does an awful disservice to a whole host of unique families, not just families that involve surrogacy, and b) nearly all surrogacies in the US are gestational surrogacies where at least one parent is genetically related to the child and the surrogate is not at all related to the child (not that genetic relatedness is what makes something a real family or not, but I'm pretty sure that's what is implied here).
Yikes
this is probably orchestrated by sam altman himself or one of his lackeys
Sam Altman has written, and probably still believes,<p>"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."[0]<p>This means he acknowledges that his actions have the potential to kill every human family on Earth. It should be of no surprise that people took his beliefs seriously.<p>[0] <a href="https://blog.samaltman.com/machine-intelligence-part-1" rel="nofollow">https://blog.samaltman.com/machine-intelligence-part-1</a>
Firebombing homes is completely uncivilized, but I'm not going to believe a single public word from Altman about anything. He's a lying sociopath and will say whatever gets himself ahead.
At this point it's probably far more productive to think of what he's saying as the necessary means he uses to make you believe what he wants you to believe. From that point you can work backwards and try to understand what he wants you to believe.
Using an article about a home housing a child being firebombed to platform your irrelevant opinions about the victim is a bad look.
TIL Sam Altman is gay
“I’m just trying to make the world a better place for my child by ensuring millions won’t be able to afford to feed their children.”
The FOBO here smells.
AI is great. But it seems like those that wield its power only do so to create massive unemployment and benefits to the top 1%.
> This is quite valid, and we welcome good-faith criticism and debate.<p>It's always funny when they pull out this argument when they've been working overtime to pull up the ladder and embed themselves in the MIC.<p>Listen, for people unaware of history things used to be a <i>lot</i> more violent as workers had to earn their rights with blood. The state had to respond by first attempting to squash it violently and second compromising in such a way as to ensure workers had a bit more power in the system.<p>As long as AI shit continues to consume the economy, kicking out people who can no longer find a job and survive while the government also removes any remaining safety nets, the end result is going to be violence. This doesn't make the violence right or just, but rather completely predictable. And if people don't learn from history then it will be repeated, unfortunately.
> We have to get safety right, which is not just about aligning a model—we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future.<p>This might be the greatest example of cognitive dissonance I've seen in years. I can't understand how someone who's clearly highly intelligent can express this opinion while doing the complete opposite. Does he think that everyone is a fool and that nobody will notice? Is this some form of gaslighting? Unbelievable.<p>Violence is not the answer, but it's easy to see how Sam's public persona would push someone to do this. There are certainly disturbed people who don't need any logical reason for violence, but maybe it would help if Sam stopped being so damn dishonest and manipulative. Even this post, which is intended to gain sympathy, ends up doing the opposite.<p>As a sidenote, I wish we would stop paying attention to these people. A probabilistic pattern generator is far from the greatest technology humanity has ever invented. Get off your high horse, stop deluding people, and start working with organizations and governments to educate people in understanding and using this tech instead of hoarding power and wealth for yourself and your immediate circle of grifters.<p>> A lot of companies say they are going to change the world; we actually did.<p>Ugh.
It's amazing how humble someone can pretend to be a couple days after the top investigative journalist in the country (maybe world) exposes them as a sociopath and there is an attempt to assassinate them.<p>What I would not do if there were attempts to kill me is post a picture of my spouse and child and point out how important they are to me with a photograph of them. It's literally trading a little bit of the safety of your family in exchange for sympathy from bystanders.
What a tone deaf response. Sounds like he learned nothing at all from this.
So he spends a few seconds writing something generic about his family and then uses that as a platform for a bunch of personal PR. That's sociopathy.
FYI, you started out with a very common word used to exaggerate or cherry-pick the opinions of enemies ("giddy").<p>It's more valuable to discuss grievances than to pretend they are simply un-discussable in the wake of related violence (in the vein of "it would be disrespectful to talk about gun control in the wake of gun violence").
>This is simply not how the economy works, if everyone is poor who do you think is paying for products/services leveraging AI?<p>Well, this is already the economy right now: the very upper class owns more than the vast majority, and consumes more than the vast majority.<p>"The top 20% of earners now make up over half of consumer spending"<p><a href="https://www.axios.com/2025/08/08/stock-market-us-economy-rich-poor" rel="nofollow">https://www.axios.com/2025/08/08/stock-market-us-economy-ric...</a><p>>also means you are opting into homelessness, famine, cancer, climate change, etc. pretty much everything that we could solve with ASI.<p>All of these could be stopped <i>right now</i>, but <i>many people don't want to</i>. Your ASI is going to give the same answers scientists have been reviled for giving: tax more, don't let the free market decide everything, eat less meat and drink less alcohol, consume less in general.<p>Human stupidity is the real problem, and ASI isn't going to "solve" anything.
Top 1% and top 20% are entirely different numbers, and majority does not mean all. If the bottom 99% or even 80% of people were unable to meaningfully engage in the economy it would collapse. We already know this model does not work due to several centuries of feudalism.<p>It's also insane that we have come to the point that you can say something like this and publish an Axios link when anybody could just go outside and see most people are employed, participating in the economy, not homeless, have food, buy things and enjoy luxuries.<p>Am I to believe that Jeff Bezos is the primary driving force behind Labubus? Is the Chipotle down the street waiting for Elon to come to town so they finally have a customer?
> AI? If everyone is broke because all the jobs got automated, who is buying the products to supply revenue to the companies<p>Does it matter if you're already a rich oligarch with generational wealth? All these CEOs have enough money to last several decades beyond their lifespans; it doesn't matter to them if the slave class croaks
What are they buying with this money? If you're the rich 1% and have replaced the 99% with AI there is no longer an economy for you to participate in. We don't have to imagine this scenario, we already did feudalism, and it famously boiled down to land and military.<p>> slave class<p>This sentiment is by far the most ridiculous because you are simultaneously projecting a reality where AI does everything and so people are no longer needed, but at the same time people are needed and become a slave class. "Oh no the tractor was invented! Now nobody will need humans to tend the fields! They will surely now force us to tend the fields!"
Responses in this thread are embarrassing. Cat's out of the bag and needs a steward. People acting like Altman can just turn the machines off and this all stops are deluded.
Sam had this pulled off the front page, because the whole charade obviously isn't getting him the positive attention he was looking for.
Jfc. People, a Molotov cocktail was thrown at his home.<p>The rest of what is written doesn't matter. This isn't the moment for that conversation. That's his family. He has a fucking child.<p>Holy shit.
What the hell is up with this thread? It seems half the people here are saying they get molotoved on a weekly basis and that Sam is a such-and-such for not taking it like a man, while the other half appears to mourn the lack of casualties.<p>Wtf is wrong with you people?
Get off my lawn and go back to Reddit where you belong!
No one deserves to be attacked.<p>I also believe that there will be more casualties in the AI Wars. We should be prepared for that. Capitalism, AI, and human life are mutually incompatible and I'm still not sure which two will survive the conflict.
Daamn, you were too fast to share the story haha.
OpenAI will end up the hero of this whole AI saga. I actually believe what he wrote there. Anthropic just took a left turn when they chose to lock up Mythos. That was a pivotal move that proved Anthropic's mindset is dangerous. They just changed the trajectory of AI completely, for the worse.<p>OpenAI just needs to learn to manage products. They need to start finishing things rather than shutting down projects without putting real effort into iterating on them to create viable business models. They are undisciplined. They've done this phony version of looking disciplined by shutting down Sora and nixing adult mode, but that's superficial. The things they're pivoting to are no more serious; they just sound serious. They gotta learn to create desire in consumers and design viral AI products. Like Apple. Consumer-facing pop culture products. That's the market that's wide tf open. They can print money if they get good at that.