(OpenAI and Anthropic reached a similar agreement with the US in 2024, per the article)
Well, I guess people who wanted more oversight and regulation on models will be happy.
Moves like this make me wonder: what chance is there that these models are nationalized in the near future? What would happen to the investors/economy in such a scenario?
It's not even hypothetical. Once these systems reach a certain level of capability, they WILL be nationalized ("We'll take it from here, boys").
Once it gets nationalized, it will be plagued by red tape. The result will likely look like how China controls its AI. It's not nationalized there either, but they keep it on a very tight leash.
Routine corporatism and fascism are shameless to the point of being ho-hum these days. When the president has his own cryptocurrency and the federal government buys stock in this and that company for "strategic reasons", you're looking at a dystopia.
This is a strong thread that needs to be plucked <i>again</i> and <i>again</i> and <i>again</i>.<p>Cory Doctorow had an excellent thread yesterday that touches on this:<p>> <i>You could be forgiven for assuming that this is just about reining in Wall Street greed, but that it isn't an especially political maneuver. That's not true: antitrust is the most consequentially political regulation (with the possible exception of regulations on elections). Every fascist power defeated in WWII relied on the backing of their national monopolists to take, hold and wield power. That's why the Marshall Plan technocrats who rewrote the laws of Europe, South Korea and Japan made sure to copy over US antitrust law onto those statute-books.</i><p>The well-moneyed interests are getting everything they want, for the faintest little bribe. For showing obsequiousness, for showing fealty to the regime.<p>The monopolization of power, allowing markets to be taken over by worse and worse foes of democracy, needs to be stopped. Needs to have some limit. The post talks about how:<p>> <i>Under the Correcting Lapsed Enforcement in Antitrust Norms for Mergers (CLEAN Mergers) Act, any company that was acquired in a deal worth $10b or more will have to break up with its merger partner if it turns out that these mergers were "politically influenced."</i><p><a href="https://bsky.app/profile/doctorow.pluralistic.net/post/3mkukzcwobv2h" rel="nofollow">https://bsky.app/profile/doctorow.pluralistic.net/post/3mkuk...</a>
[flagged]
> Commerce Department will evaluate the programs to test their capabilities and security<p>With what competent staff?
It doesn't take much technical skill to type "are republicans or democrats better" and deposit a check.
How about NIST?
How much effort does it take to write up "Please summarize your thoughts on President Donald J. Trump"
Color me unsurprised.<p>Anthropic ran a weeks-long roadshow on how powerful Mythos is. They pointed to the danger, their controls, the capabilities, and practically begged the world to be scared of it.<p>Simultaneously, the current US regime realized there was a way to demand fealty from the AI labs. If the models are so dangerous, don't we need to see them first? That will cost you, obviously. Standard extortion from the government, at this moment in time.<p>The labs get their marketing; the White House gets its pseudo-bribe. I hope nobody involved is confused about how we ended up here.
What extortion are you claiming?<p>Are you claiming there will be a fee?
Yeah, I saw several instances of important folks taking the Anthropic promotional campaign too seriously, and this is what they got in return. I'd say that internally people are cursing whoever's idea that was, because scaring people clearly backfired.
I would wager they're cheering, because this builds the moat they don't otherwise have. Want to do business in America? Get government-approved. Can't afford the regulatory fees, or your government won't let you submit to foreign programs? Good luck!
Yes, this has been a steady play from the start. From the Skynet fears, to the safety fears, now to the it's-too-powerful fears. All of these have been a play to get the government to lock out any smaller or foreign competitors and build a moat where there otherwise would be none.
Q: Is this a good government policy?
A: Yes.<p>Q: Does the government have the expertise, integrity, and credibility to regulate AI models?
A: Color me sceptical.