Kagi Translate was kind enough to turn this from LinkedIn Speak to English:<p><i>The Microsoft and OpenAI situation just got messy.</i><p><i>We had to rewrite the contract because the old one wasn't working for anyone. Basically, we’re trying to make it look like we’re still friends while we both start seeing other people. Here is what’s actually happening:</i><p><i>1. Microsoft is still the main guy, but if they can't keep up with the tech, OpenAI is moving out. OpenAI can now sell their stuff on any cloud provider they want.</i><p><i>2. Microsoft keeps the keys to the tech until 2032, but they don't have the exclusive rights anymore.</i><p><i>3. Microsoft is done giving OpenAI a cut of their sales.</i><p><i>4. OpenAI still has to pay Microsoft back until 2030, but we put a ceiling on it so they don't go totally broke.</i><p><i>5. Microsoft is still just a big shareholder hoping the stock goes up.</i><p><i>We’re calling this "simplifying," but really we’re just trying to build massive power plants and chips without killing each other yet. We’re still stuck together for now.</i>
This was actually really helpful. I feel like it should be done for all PR speak.
It's better than the original, but still off.<p>"The Microsoft and OpenAI situation just got messy" is objectively wrong–it has been messy for months [1]. Nos. 1 through 3 are fine, though "if they can't keep up with the tech, OpenAI is moving out" parrots OpenAI's party line. No. 4 doesn't make sense–it starts out with "we" referring to OpenAI in the first person but ends by referring to them in the third person "they." No. 5 is reductive when phrased with "just."<p>It would seem the translator took corporate PR speak and translated it <i>into</i> something between the LinkedIn and short-form blogger dialects.<p>[1] <a href="https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-are-reaching-a-boiling-point-4981c44f?mod=article_inline" rel="nofollow">https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-ar...</a>
Being objectively correct isn't the goal of the translator; the translator can't possibly know if a statement is truthful. What the translator does is, well... translate, specifically from a kind of corporate speak that is really difficult for many people, including myself, to understand, into something more familiar.<p>I don't expect the translation to take OpenAI's statements and make them truthful or to investigate their veracity, but I genuinely could not understand OpenAI's press release as they have worded it. The translation at least makes it easier to understand what OpenAI's view of the situation is.
For reference: <a href="https://translate.kagi.com/?from=LinkedIn+speak&to=en" rel="nofollow">https://translate.kagi.com/?from=LinkedIn+speak&to=en</a>
It’s insane how they talk about AGI, as if it were some scientifically quantifiable thing that is certain to happen any time now. When I become the Olympic champion in javelin, I will buy a vegan ice cream for everyone with an HN account.
It sounds really similar to Uber's pitch about how they were going to have a monopoly as soon as they replaced those pesky drivers with their own fleet of self-driving cars. That was supposed to be their competitive edge against other taxi apps. In the end they sold ATG at the end of 2020 :D
It’s pretty much a religious eschatology at this point
The continued fleecing of investors.
They redefined AGI to be an economic thing, so they can keep making up their stories. All that talk is really just business; no real science in the room there.
Do the investments make sense if AGI is not less than 10 years away?
> <i>Do the investments make sense if AGI is not less than 10 years away?</i><p>They can. If one consolidated the AI industry into a single monopoly, it would probably be profitable. That doesn't mean in its current state it can't succumb to ruinous competition. But the AGI talk seems to be aimed more at retail investors and philosopher podcasters than at institutional capital.
Best way to achieve AGI: Redefine AGI.
The investments don't make sense.
Make mine p p p p p p vicodin
> AGI<p>We already have several billion useless NGI's walking around just trying to keep themselves alive.<p>Are we sure adding more GI's is gonna help?
When I realized that sama isn't much of an AI researcher, it became clearer that this is more akin to a group delusion for hype purposes than a real possibility.
You can read the leaked emails from the Musk lawsuit.<p>At the very least, Ilya Sutskever genuinely believed it, even when they were just making a DOTA bot, and not for hype purposes.<p>I know he's been out of OpenAI for a while, but if his thinking trickled down into the company's culture, which given his role and how long he was there I would say seems likely, I don't think it's all hype.<p>Grand delusion, perhaps.
He’s a glorified portfolio manager (questionable how good he actually is, given the results vs. Anthropic and how quickly they closed the valuation gap with far less money invested) plus an expert hype man for raising money on risky projects.
At this point, AGI is either here, or perpetually two years away, depending on your definition.
Full Self-Driving 2.0
It's always been this way. I remember, speaking of Microsoft, when they came to my school around 2002 or so giving a talk on AI. They very confidently stated that AGI had already been "solved": we knew exactly how to do it, and the only problem was the hardware. But they estimated that would come in about ten years...
AGI is right around the corner, and we're all going to be rich, there's going to be abundance for everyone, universal high income, everyone will live in a penthouse...<p>...just please stop burning our warehouses and blocking our datacenters.
Where do I sign up?
Any sufficiently complex LLM is indistinguishable from AGI
> <i>Any sufficiently complex LLM is indistinguishable from AGI</i><p>Isn't this a tautology? We've <i>de facto</i> defined AGI as a "sufficiently complex LLM."
If we take that statement as fact, then I don't believe we are even close to an LLM being sufficiently complex.<p>However, I don't think it is even true. LLMs may not even be on the right track to achieving AGI, and without starting from scratch down an alternate path it may never happen.<p>LLMs to me seem like a complicated database lookup. Storage and retrieval of information is just a single piece of intelligence. There must be more to intelligence than a statistical model of the probable next piece of data. Where is the self-learning without intervention by a human? Where is the output that wasn't asked for?<p>At any rate, no amount of hype is going to get me to believe AGI is going to happen soon. I'll believe it when I see it.
Some might be missing the reference: <a href="https://en.wikipedia.org/wiki/Clarke's_three_laws" rel="nofollow">https://en.wikipedia.org/wiki/Clarke's_three_laws</a>
This agreement feels so friendly towards OpenAI that it's not obvious to me why Microsoft accepted this. I guess Microsoft just realized that the previous agreement was kneecapping OpenAI so much that the investment was at risk, especially with serious competition now coming from Anthropic?
Microsoft is a major shareholder of OpenAI; they don't want their investment to go to 0. You don't just take a loss on an investment worth well over a hundred billion dollars.
Probably more that they are compute constrained. In his latest post, Ben Thompson talks about how Microsoft had to use their own infrastructure for OpenAI, displacing outside customers in the process, so this is probably to free up compute.
> Microsoft will no longer pay a revenue share to OpenAI.<p>This seems like a nice thing to have, given they remain the primary cloud provider. If Azure improves its overall quality, I don't see why this doesn't end up as a money printing press, as long as OpenAI keeps delivering good models.
Does this mean Microsoft gets OpenAI's models for "free" without having to pay them a dime until 2032?<p>And on top of that, OpenAI still has to pay Microsoft a share of their revenue made on AWS/Google/anywhere until 2030?<p>And Microsoft owns 27% of OpenAI, period?<p>That's a damn good deal for Microsoft. Likely the investment that will keep Microsoft's stock relevant for years.
OpenAI was also threatening to accuse "Microsoft of anticompetitive behavior during their partnership," an "effort [which] could involve seeking federal regulatory review of the terms of the contract for potential violations of antitrust law, as well as a public campaign" [1].<p>[1] <a href="https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-are-reaching-a-boiling-point-4981c44f?mod=article_inline" rel="nofollow">https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-ar...</a>
Does anyone expect Azure quality to improve? Has it improved at all in the last 3 years? Does leadership at MS even think it needs to improve?<p>I doubt it
Am I crazy, or was this press release fully rewritten in the past 10 minutes? The current version is around half the length of the old one, which did not frame it as a "simplification" "grounded in flexibility" but as a deeper partnership. It also had word salad about AGI, and said Azure retained exclusivity for API products but not other products, which the new statement seems to contradict.<p>What was I looking at?
They forgot the "hey ChatGPT, rewrite this to have better impact on the company stock" before submitting it
I noticed the exact same thing. I read the original, went back to read it again and it’s completely changed.
Interesting side effect of this is that Google Cloud may now be the only hype scaler that can resell all 3 of the labs' models? Maybe I'm misinterpreting this, but that would be a notable development, and I don't see why Google would allow Gemini to be resold through any of the other cloud providers.<p>Might really increase the utility of those GCP credits.
Might not be good for Gemini long term if Anthropic and OpenAI can and will sell through every cloud provider they can find, while businesses can only use Gemini via Google Cloud.
Good for Google Cloud, bad for Gemini = ??? for Google
How is it good for Gemini that it's not available on two out of three major cloud platforms?
That will likely mean the end of Gemini models...
The AGI talk is shocking but not surprising to anyone looking at how bombastic Sam Altman's public statements are.<p>The circular economy section really is shocking: OpenAI committing to buying $250 billion of Azure services, while MSFT's stake is clarified as $132 billion in OpenAI. Same circular nonsense as NVIDIA and OpenAI passing the same hundred billion back and forth.
Dennis: I think we made every single one of our Paddy's Dollars back, buddy.<p>Mac: You're damn right. Thus creating the self-sustaining economy we've been looking for.<p>Dennis: That's right.<p>Mac: How much fresh cash did we make?<p>Dennis: Fresh cash! Uh, well, zero. Zero if you're talking about U.S. currency. People didn't really seem interested in spending any of that.<p>Mac: That's okay. So, uh, when they run out of the booze, they'll come back in and they'll have to buy more Paddy's
Dollars. Keepin' it moving.<p>Dennis: Right. That is assuming, of course, that they will come back here and drink.<p>Mac: They will! They will because we'll re-distribute these to the Shanties. Thus ensuring them coming back in, keeping the money moving.<p>Dennis: Well, no, but if we just re-distribute these, people will continue to drink for free.<p>Mac: Okay...<p>Dennis: How does this work, Mac?<p>Mac: The money keeps moving in a circle.<p>Dennis: But we don't have any money. All we have is this. ... How does this work, dude!?<p>Mac: I don't know. I thought you knew.
The original "AGI" agreement was always a bit suspect and open to wild interpretations.<p>I think this is good for OpenAI. They're no longer stuck with just Microsoft. It was an advantage that Anthropic can work with anyone they like but OpenAI couldn't.
> OpenAI has contracted to purchase an incremental $250B of Azure services, and Microsoft will no longer have a right of first refusal to be OpenAI’s compute provider.<p>Azure is effectively OpenAI's personal compute cluster at this scale.
What fraction of Azure compute does OpenAI represent? (Does the $250bn commitment have a time period? Is it legally binding?)
Azure did $75B last quarter.<p>That article doesn't give a timeframe, but most of these use 10 years as a placeholder. I would also imagine it's not a requirement for them to spend it evenly over the 10 years, so could be back-loaded.<p>OpenAI is a large customer, but this is not making Azure their personal cluster.
I wonder how this figure was settled on. Is it based on consumer pricing? Couldn't Microsoft and OpenAI just make a number up, aside from a minimum to cover operating costs? When does the number become just a marketing ploy to make the deal seem huge, important, and inevitable (and too big to fail)?
Really interesting. Why would Microsoft have done this deal? I'm a bit lost. Sure they get to not pay a revenue share _to_ OpenAI but surely that's limited to just OpenAI products which is probably a rounding error? Losing exclusivity seems like a big issue for them?
Biggest upside of this is I expect OpenAI models to be available on Bedrock, which is huge for not having to go back to all your customers with data protection agreements.
Isn’t that an “API product”? I read this assuming the whole point of renegotiation was to let OpenAI sell raw inference via bedrock, but that still seems to be blocked except for selling to the US Government.
> OpenAI can now jointly develop some products with third parties. API products developed with third parties will be exclusive to Azure. Non-API products may be served on any cloud provider.<p>This seems impossible.
Pursue "new opportunities"? Microslop is dumping OpenAI and wishes it well in its new endeavors.
I read this as the other way. OpenAI was desperate to dump Microsoft.
In retrospect all those OAI announcements are gonna look so cringe.<p>They did not need to go so hard on the hype - Anthropic hasn’t in relative terms and is generating pretty comparable revenues at present.
"Advancing Our Amazing Bet" type post
Looks like MS is shafting OpenAI.
"We want to sell surveillance services to the US gov. MSFT was hesitant so we gave ourselves room to do it without them."
Impossible to take any of this seriously when it constantly refers to AGI.
Why is this being made public?
It’s an agreement between a public company and a highly scrutinized private company. Several of the provisions will change what happens in the marketplace, which everyone will see.<p>I imagine the thinking was that it’s better to just post it clearly than to have rumors and leaks and speculations that could hurt both companies (“should I risk using GCP for OpenAI models when it’s obviously against the MS / OpenAI agreement?”).