https://archive.is/jWJw0
Is Wang even able to achieve superintelligence? Is anyone? I'm unable to make sense of Wang's compensation package. What actual, practical skills does he bring to the table? Is this all a stunt to drive Meta's stock value?
The way it sounds, Zuckerberg believes that they can, or at the very least has people around him telling him that they can. But Zuckerberg also thought that the Metaverse would be a thing.

LeCun obviously thinks otherwise and believes that LLMs are a dead end, and he might be right. The trouble with LLMs is that most people don't really understand how they work. They *seem* smart, but they are not; they are really just good at appearing to be smart. But that may have created the illusion, in the minds of many people including Zuckerberg, that true artificial intelligence is much closer than it really is. And obviously, there now exists an entire industry that relies on that idea to raise further funding.

As for Wang, he's not an AI researcher per se; he basically built a data sweatshop. But he apparently is a good manager who knows how to get projects done. Maybe the hope is that giving him as many resources as possible will allow him to work his magic and get their superintelligence project on track.
Wang is a networking machine and has connected with everyone in the industry. Likely was brought in as a recruiting leader. Mark being Mark, though, doesn’t understand the value of vision and figured getting big names in the same room was better than actually having a plan.
Your last sentence suggests that he deliberately chose not to create a vision and a plan.

If, for whatever reason, you *don't* have a vision and a plan, hiring big names to help kickstart that process seems like a way better next step than "do nothing".
You're right: the way I phrased it assumes "having a plan" is a possibility for him. It isn't. The best he was ever going to do was get talent in the room, make a Thinking Machines knockoff blog post with some hand-wavy word salad, and stand around until they do something useful.
How to draw an owl:

1. Hire an artist.

2. Draw the rest of the fucking owl.
3. Scribble over the draft from the artist. Tell them what is wrong and why. Repeat a few times.

4. In frustration, use some AI tool to generate a couple of drafts that are close to what you want and hand them to the artist.

5. Hire a new artist after the first one quits because you don't respect the creative process.

6. Dig deeper into a variety of AI image-generating tools to get really close to what you want, but not quite get there.

7. Hire someone from Fiverr to tweak it in Photoshop because the artists, both bio and non-bio, have burned through your available cash and time.

8. Settle for the least bad of the lot because you have to ship, and accept that you will never get the image you have in your head.
Wang was not Zuck's first choice. Zuck couldn't get the top talent he wanted, so he got Wang. Unfortunately Wang is not technical; he excels at managing the labeling company and being the top provider of such services.

That's why I also think the hiring angle makes sense. It would actually be astonishing if he could turn technical and compete with the leaders at OpenAI/Anthropic.
> They seem smart, but they are not; they are really just good at appearing to be smart

There are too many different ways to measure intelligence.

Speed, matching, discovery, memory, etc.

We can combine those levers infinitely to create/justify "smart". Are they dumb? Absolutely, but are they smart? Very much so. You can be both at the same time.

Maybe you meant genius? Because that standard is quite high, and there's no way they're geniuses today.
They're neither smart nor dumb, and I think that trying to measure them along that scale is a fool's errand. They're combinatorial regurgitation machines. The fact that we keep pointing to that as an approximation of intelligence says more about us than about them; namely, that we don't understand intelligence and that we look for ourselves in other things to define intelligence. This is why, when experts use these things within their domain of expertise, they're underwhelmed, but when the things are used outside of those domains they become halfway useful.

Trying to create new terminology ("genius", "superintelligence", etc.) seems to only shift goalposts and define new ways of approximation.

Personally, I'll believe a system is intelligent when it presents something novel and challenges our understanding of the world as we know it (not as I personally do, because I don't have the corpus of the internet in my head).
> You can be both at the same time.

Smart and dumb are opposites, so this seems dubious. You can have access to a large base of trivial knowledge (mostly in a single language), as LLMs do, but have absolutely no intelligence, as LLMs demonstrate.

You can be dumb yet good at Jeopardy. That is not a dichotomy.
> Are they dumb? Absolutely, but are they smart? Very much so. You can be both at the same time.

This has to be bait
> they are really just good at appearing to be smart.

In other words, functionally speaking, for many purposes, they are smart.

This is obvious in coding in particular, where with relatively minimal guidance, LLMs outperform most human developers in many significant respects. Saying that they're "not smart" seems more like an attempt to claim specialness for your own intelligence than a useful assessment of LLM capabilities.
> They seem smart, but they are not; they are really just good at appearing to be smart.

Can you give an example of the difference between these two things?
Imagine an actor who is playing a character speaking a language that the actor does not speak. Due to a lack of time, the actor decides against actually learning the language and instead opts to just memorise and rehearse their lines without actually understanding the content. Let's assume they are doing a pretty convincing job, too. Now, the audience watching these scenes may think that the actor is actually speaking the language, but in reality they are just mimicking.

This is essentially what an LLM is. It is good at mimicking, reproducing and recombining the things it was trained on. But it has no creativity to go beyond this, and it doesn't even possess true reasoning, which is how it ends up making mistakes that are immediately obvious to a human observer, yet the LLM is unable to see them, because it is just mimicking.
> Imagine an actor who is playing a character speaking a language that the actor does not speak. Due to a lack of time, the actor decides against actually learning the language and instead opts to just memorise and rehearse their lines without actually understanding the content.

Now imagine that, during the interval, you approach the actor backstage and initiate a conversation in that language. His responses are always grammatical, always relevant to what you said modulo ambiguity, largely coherent, and accurate more often than not. You'll quickly realise that "actor who merely memorised lines in a language he doesn't speak" does not describe this person.
You've missed the point of the example; of course it's not the exact same thing. With regard to LLMs, the biggest difference is that an LLM is a regression against the world's knowledge, like an actor who has memorized every question that happens to have an answer written down in history. If you give him a novel question, he'll look at similar questions and just hallucinate a mashup of the answers, hoping it makes sense, even though he has no idea what he's telling you. That's why LLMs do things like make up nonsensical API calls when writing code, calls that seem right but have no basis in reality. It has no idea what it's doing; it's just trying to regress the code in its knowledge base to match your query.
I don't think I missed the point; my point is that LLMs do something more complex and far more effective than memorise->regurgitate, and so the original analogy doesn't shed any light. This actor has read billions of plays and *learned many of the underlying patterns*, which allows him to come up with novel and (often) sensible responses when he is forced to improvise.
You are describing Searle's "Chinese Room argument" [1] to some extent.

It's been discussed a lot recently, but anyone who has interacted with LLMs at a deeper level will tell you that there is *something* there; not sure if you'd call it "intelligence" or what. There is plenty of evidence to the contrary too. I guess this is a long-winded way of saying "we don't really know what's going on"...

[1] https://plato.stanford.edu/entries/chinese-room/
1. I would argue that an actor performing in this way does actually understand what his character means.

2. Why doesn't this apply to you, from my perspective?
Wisdom vs knowledge, where the word "knowledge" is doing a lot of work. LLMs don't "know" anything, they predict the next token that has the aesthetics of a response the prompter wants.
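To make "predict the next token" concrete, here's a minimal sketch. It's a toy bigram table with made-up counts, nothing like a real transformer, but the shape of the loop is the point: generation is just repeated sampling from learned statistics, and nothing in it represents meaning.

    import random

    # Hypothetical bigram "model": how often each token followed
    # another in some training text (counts invented for the demo).
    bigrams = {
        "the": {"cat": 3, "dog": 1},
        "cat": {"sat": 2, "ran": 1},
        "sat": {"down": 1},
    }

    def next_token(token):
        # Sample the next token in proportion to training frequency;
        # pure statistics, no understanding of cats or sitting.
        candidates = bigrams.get(token)
        if not candidates:
            return None  # no continuation learned: stop
        return random.choices(list(candidates),
                              weights=list(candidates.values()))[0]

    token, output = "the", ["the"]
    while (token := next_token(token)) is not None:
        output.append(token)
    print(" ".join(output))  # e.g. "the cat sat down"

Real models condition on far more context and billions of parameters, but the generation loop is the same kind of thing: sample what's statistically plausible next.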
I suspect a lot of people, but especially nerdy folks, might mix up knowledge and intelligence, because they've been told "you know so much stuff, you are very smart!"

And so when they interact with a bot that knows everything, they associate it with smart.

Plus, we anthropomorphise a lot.

Is Wikipedia "smart"?
It doesn't seem obvious to me that predicting a token that is the answer to a question someone asked would require anything less than coming up with that answer via another method.
Hallucinating things that never existed?
Being able to learn to play Moonlight Sonata vs. being able to compose it. Being able to write a video game vs. being able to write a video game that sells. Being able to recite Newton's equations vs. being able to discover the acceleration of gravity on Earth.
What are the differences between a person that is smart and an LLM that seems smart but isn't?
The ability to generate novel ideas.
What's your definition of a novel idea? How do you measure that?

I've had a successful 15+ year career as a SWE so far. I don't think I've had a single idea so novel that today's LLMs could not have come up with it.
Well, that's not true; see the Terry Tao article on using AlphaEvolve to discover new proofs.

Additionally, generating "novel ideas" isn't something that all smart people do, so why would it be a requirement for AI?
How many people generate novel ideas? When I look around at work, most people basically operate like an LLM. They see what’s being done by others and emulate it.
In my experience, discernment and good judgment. The idea-generation capability is good. The text summarization capabilities are great. However, when it comes to making reasoned choices, it seems to lose all ability, and even worse, it will sound grossly overconfident or sycophantic or both.
The LLM is not a person.
it's in the eye of the beholder
Humans aren't smart, they are really just good at appearing to be smart.

Prove me wrong.
You'll just claim we only "appeared" to prove you wrong ;)
If you don’t think humans are smart, then what living creature qualifies as smart to you? Or do you think humans created the word but it describes nothing that actually exists in the real world?
I think most things humans do are reflexive, type-one "thinking" that AIs do just as well as humans.

I think our type-two reasoning is roughly comparable to LLM reasoning when it is within the LLM's reinforcement-learning distribution.

I think some humans are smarter than LLMs out-of-distribution, but only when we think carefully, and in many cases LLMs perform better than many humans even then.
Wang is able to accurately gauge zuck’s intelligence.
If Zuck throws $2-4Bn towards a bunch of AI "superstars" and that's enough to convince the market that Meta is now a serious AI company, it will translate into hundreds of billions in market-cap increases.

Seems like great bang for the buck.
> What actual, practical skills does he bring to the table?

This hot dog, this no hot dog.
So FAIR has been effectively disbanded, LeCun is moving out, Wang is doing 996 and teams are hiring to fire to insulate people who need to vest their stock. How long until the company accumulates enough stress to rupture completely?
As someone whose startup got bought out by Facebook many years ago, it's not surprising to read.

The politics surrounding Zuck are wild. Cox left, then came back, mainly because he's not actually that good, and has terrible judgement when it comes to features and how to shape effective teams (just throw people at it; features should be purely metric-based, or a straight copy of competitors' products; there is no cohesive vision of what a Meta product should be; just churn out microchanges until something sticks).

Zuck also has pretty bad people instincts. He is surrounded by egomaniacs, and Boz is probably the sanest out of all of them. It's a shame he doesn't lead engineering that well (i.e. getting into fights with plebs in the comments about food and shuttle timings).

He also is very keen on flashy new toys and features, but has no instinct for making a product. He still thinks that incremental, slightly broken, but rapidly released features are better than a product that works well, is integrated, and has a simple, well-tested UI pathway for everything. Common UI language? Pah, that's for Android/Apple. I want that new shiny feature, I want it now. What do you mean it's buggy? Just pull people off that other project to fix it. No, the other one.

Schrep also was an insightful and good leader.

Sheryl is a brilliant actor who helped shape the culture of the place. However, there was always a tinge of poison, which was mostly kept in check until about 2021. She went full politician, started building her own brand, and generally left a massive mess.

Zuck went full bro, decided that empathy made shit products, and decided that he liked the taste of engineers' tears.

But back to TBD.

The problem for them is that they have to work collaboratively with other teams at Facebook to get the stuff they need, and the teams/orgs they are fighting against have survived by ruthlessly competing against others. TBD doesn't have the experience to fight the old-timers, and they also don't really have experience making frontier models.

They are also being swamped by non-ML engineers looking to ride the wave of empire building. This generates lots of alignment meetings and no progress.
All facts in this post. FB management always had such a shockingly different tone than other big tech companies. It felt like a bunch of friends who'd been there from the start and were in a bit over their heads, with way too much confidence.

I have a higher opinion of Zuck than this, though. He nailed a couple of really important big-picture calls (mobile, ads, Instagram) and built a really effective organization.

The metaverse always felt like the beginning of the end to me, though. The whole company kinda lived or died by Zuck's judgement, and that was where it just went off the rails. I guess Boz was just whispering in his ear too much.
Computer scientists spending a career building advertising inventory and private data lakes while at the same time desperate to never be perceived in this light. It must make for an interesting "culture."
I mean, yeah, booo Facebook.

The problem with that assessment is that really only the monetisation team were the ones abusing the data. They were an organisation very much apart from the rest, with a different culture and different rules.

For the longest while you could be actually making things better, or think you were.

When problems popped up, we _could_ apply pressure and get things fixed. The blatant content discrimination in India, Instagram Kids, and a load of other changes were forced by employees.

However, in 2023 there were some rule changes aimed at stopping "social justice warrior-ing" internally. They were repeatedly tightened until questioning the leaders was considered against the rules.

It's no coincidence that product decisions are getting worse.
It's both sad and believable when I hear that Boz is the most sane of them all.

Boz is such a grifter in his online content. He naturally weasel-words every little point, and while I have no doubt he's smart, I don't think I could trust him to provide an honest opinion publicly.

My friends at Meta tend not to hold him in the highest esteem, but they echo largely what you said about the politics and his standing amongst them.
A CEO with terrible judgement? Egomaniac executives? Products that A/B test and stick with what works? Chasing trends? Internal competition?

Sounds like every company.
I'm as ready to hate on Meta as anyone, but this article is a bit of a nothingburger.

So there are disagreements about resource allocation among staff. That's normal and healthy. The CEO's job is to resolve those disagreements, and it sounds like Zuck is doing it. The suggestion to train Meta's products on Instagram and Facebook data was perfectly reasonable from the POV of the needs of Cox's teams. You'd want your skip-level to advocate for you the same way. It was also fine for AW to push back.

> On Thursday, Mr. Wang plans to host his annual A.I. holiday party in San Francisco with Elad Gil, a start-up investor... It's unclear if any top Meta executives were invited.

Egads, they _might_ not get invited to a 28-year-old's holiday party? However will they recover??
When I read that the dude was asked to take $2B from Reality Labs and spend it on AI, I was shocked... that they were still spending $2B on virtual reality nonsense in 2025.

That said, from what I understand, X is working on using Grok to improve the algorithm.

Why can't FB do the same and coexist?
I still don't think Meta is wrong about VR. It's just still not the year for it. (I know the market has been saying that for 30 years)
Bro they spend $4B on RL *every quarter*.
Meta prints money as an ad company but clearly resents being one.

VR was a ~$100B+ attempt to buy a pivot, and it's generated ~single-digit billions in revenue. The tech worked, maybe, but the vibe sucked, and the problem was that people don't want to live or work there. Also, Meta leadership personalities are toxic to a lot of people.

Now they're doing the same thing with AI: throw money at it, overpay new talent, and force an identity shift from the top. Long-term employees are still well paid, just not AI-gold-rush paid, which is gonna create fractures.

The irony is that Meta already had what most AI companies don't, in distribution, data, and monetization. AI could have been integrated into revenue products instead of treated as a second escape from ads.

You can't typically buy your way out of your business model, especially with a clear lack of vision. Yes, dude got lucky in a couple of acquisitions, but so would you if you were throwing billions around.
> clearly resents being one.

Do they? It seems to me that they're just aware that social media and the internet are trendy, and they need to be out there ready to control the next big thing if they want to put ads on it. Facebook has been dying for years. Instagram makes them more ad revenue per user than FB, but it's not the most popular app of its class.
I for one have been trying to use the term “ad tech” in lieu of “big tech/faang/etc.” for a couple of years now hoping it will catch.
> from what I understand, X is working on using Grok to improve the algorithm.

> Why can't FB do the same and coexist?

I'm sorry, but what does this mean? Like, are they prompting Grok for suggestions on improvements? Or having it write code? Or something else?
If you think Meta RL loses money, wait until OpenAI goes public.
Meta should replace Mr Z with a saner person. At this point, he is like a mad emperor.
Zuck has unilateral majority voting power. This was probably a good thing during the financial crisis, but appears to be more of a liability these days.
Would it be a successful business? That is what matters in the market.
I feel like many of the comments are focused on the trees and not the forest. The new head of Facebook AI is 28 years old? That's not OK; that's too young. Too inexperienced and not worldly-wise enough by a long shot. No shit they're having problems. Can you imagine being a Facebook lifer, or one of the LLM pros they've bribed/hired over to the company, and being bossed around by someone with very little life experience? No shit it isn't going well.
That’s much older than when Zuckerberg founded Facebook. Also older than when Bill Gates founded Microsoft, Steve Jobs founded Apple, and Larry Page and Sergey Brin founded Google. We’re talking about running a tech company, not being a politician. Clearly there’s no need to be 50+ and have a bunch of “life experiences” to be successful.
When these companies were founded, they had nowhere near the scale and resources that are now in the hands of the current set of folks. Zuckerberg at 28 was riding a bike, and this is a rocketship (whether pointed up or down is not yet clear).
You’re comparing being the founder and CEO, to being an employee hired to run a fraction of an organisation?
Founding a faang and growing it provides a very different set of life experiences than being a startup owner thrust into it.
> The new head of Facebook AI is 28 years old? That's not OK; that's too young. Too inexperienced and not worldly-wise enough by a long shot.

This is ageist in a way I don't usually expect from the Valley. Plenty of entrepreneurs have built successful or innovative companies in their 20s. It is OK to state that Wang is incompetent, but that has little to do with his age and more to do with his capability.
Mr. Z pays engineers well; that's what counts in my book. I like Mr. Z.
Even if both "sides" really wanted to get along, working with someone making 100x (if not 1,000x) more than you is poised to be a weird interaction.

It must also be massively demoralizing, particularly if you're an engineer who has been there for 10+ years and has shipped features which directly bring in revenue, etc.

Btw,

> But Mr. Wang, who is developing the model, pushed back. He argued that the goal should be to catch up to rival A.I. models from OpenAI and Google before focusing on products, the people said.

That would be a massive mistake. Wang is either a one-trick pony or someone who cares more about his other venture than Meta's. Sad.
There was a similar dynamic when FB bought WhatsApp. Although I think people kind of forgot about it after a year or two.
same is true in many startups
He's not wrong: you can't compete against blue-sky R&D if you're focused on making something profitable. It's the innovator's dilemma.
I agree, classic innovator's dilemma. It's a new business enterprise and has nothing to do with Meta's existing business or products. They can't be under the same roof and must have independent goals.
> That team, called TBD Lab (for "to be determined"), was placed in a siloed space next to Mr. Zuckerberg's office at the center of Meta's Silicon Valley headquarters, surrounded by glass panels and sequoia trees.
Hooli XYZ? Silicon Valley was over 10 years ago, and it seems to have aged pretty well. I wonder if this is going to be like "Yes Minister", which is close to 50 and still completely on point.
> TBD Lab's researchers have come to view many Meta executives as interested only in improving the social media business

That cannot have been a surprise to anyone joining.
Meta doesn’t really have a social media business, they have an ad business that’s driven by a massive dumping operation in social media.
That framing is silly. "NBC doesn't have a television business, they have an ad business." "Google doesn't have a search business, they have an ad business." "Amazon doesn't have a retail business, they have an ad business."

It doesn't provide any value to reframe it this way, unless you think it's some big secret that ads are the main source of revenue for these businesses.
I'd contrast this with Flickr. Flickr was the original social network. They have a modest loss leader, a reasonable free tier, but nothing like the permanent money bonfire that the big tech firms operate.

They were kinda the first real Web 2.0 social media site, with a social graph, privacy controls, a developer API, tagging, and RSS feeds.

I feel that they never really reached their full potential exactly because these big VC-backed dumping operations in social media (like Facebook) were able to kill them in the crib.

If we're going to accept that social media is a natural monopoly: great. Regulate them strictly, as you should with any monopoly.
Flickr failed because they sold to Yahoo, which was a bad place to end up. But a successful Flickr would look a lot like Instagram.

Del.icio.us is the same story. Good product ahead of its time, bought by Yahoo, and died. Could have been Pinterest.
Fair point, there's a good chance we'd be living in a techno-utopia right now if someone were able to go back in time and prevent Yahoo from murdering so many promising startups. Conversely, if Yahoo had just spent the relative pocket change that Google was asking for back in the day, perhaps we'd be living under the oppressive thumb of a trillion-dollar-market-cap AltaVista.
> VC-backed dumping operations

Which is very reassuring, considering some of them are fairly obviously on the wrong side of history, with very naive viewpoints: https://news.ycombinator.com/item?id=7852246
NBC produces their own content; Facebook and Instagram, meanwhile, are the equivalent of public-access TV with ads. There is no unique "brand" that Facebook has; anything posted there is also posted everywhere else.
Restaurants don't have a food business, they have a charging people money through bills business.
> "NBC doesn't have a television business, they have an ad business."

They do broadcast TV, the purpose of which is to display ads. That does make sense.

> "Google doesn't have a search business, they have an ad business."

When Google started out, in the "don't be evil", simple-home-page days, they were a search company. It is hardly true any more; ads are now the centre of their business.

> "Amazon doesn't have a retail business, they have an ad business."

Well, duh! Quite obvious these days. That is where they get the lion's share of the revenue, outside AWS.

I am impressed, you hit the nail on the head!
What is the difference between the two? What kind of social media business is there other than selling ads?
I know we're so defeated as consumers that we can hardly imagine it, but you could just... charge customers for access to the social media network. Kinda like every other service that charges money.

It would have the side effect of making the whole business less ghoulish and manipulative, since the operators wouldn't be incentivized to maximize eyeball-hours.

It's impossible to imagine this because government regulation is so completely corrupted that a decades-long anticompetitive dumping scheme is allowed to occur without the slightest pushback.
Unlike most businesses, social media relies on having high market saturation to provide value, so a subscription model doesn't work very well.

Of course, perhaps it's a bit different now, since most people consume content from a small set of sources, making social media largely the same as traditional media. But then traditional media also has trouble being supported by subscriptions.
App.net was a wonderful experience with great developer buy-in. It is also my understanding that it was operating at break-even when it was mothballed. The VC backing it wanted Facebook returns. It was an amazing experience because it didn't depend on advertisers. I have no idea how it would have fared through Covid and election dramas, but it remains my platonic ideal for a social network.
It's basically Mastodon. The infrastructure is paid for by its owners and often relies on donations from their users.
I hate the ad business model as much as the next person, but this is a pipe dream. Meta had ~$50B in revenue on ads last quarter, and 3.54B "daily active people", whatever that means. That's on the order of $1/"dap"/week, and there is just absolutely no way any meaningful proportion of their userbase would pay that much for these apps.
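For what it's worth, the back-of-envelope arithmetic checks out (a quick sketch using the figures as quoted above, which I haven't independently verified):

    # Rough check of the ~$1/"dap"/week figure.
    quarterly_ad_revenue = 50e9   # ~$50B in ad revenue per quarter, as quoted
    daily_active_people = 3.54e9  # Meta's "daily active people" metric
    weeks_per_quarter = 13

    per_person_per_week = quarterly_ad_revenue / daily_active_people / weeks_per_quarter
    print(f"${per_person_per_week:.2f} per person per week")  # ~$1.09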
Mastodon is not ad funded.
Dreamwidth has been about for fifteen years now and is entirely user funded.<p>Scaling is harder. But you can have a niche which works fine.
Perhaps not, but you can bet that they were told the opposite when Zuckerberg was recruiting them. Indeed, ring-fencing the lab does suggest some real attempt to do it.
I know more about AI than any of these people.
With the exception of Instagram and FB Marketplace, Meta just looks and feels like a chaotic, sloppy mess of a company. Between the incoherent and buggy garbage that is Ads Manager (something I have used for my own business) and Zuck saying he laid off poor performers (effectively screwing those people for no reason), it all looks like poor business operations. So it's no surprise they can't figure out AI even with all the ads profits and brain power.

An adult needs to show up, put Zuck back in a corner, and right the ship.
> zuck saying he laid off poor performers (effectively screwing those people for no reason)

Were they not actually performing poorly, then? Maybe I'm missing some context, but laying off poor performers is a good thing, last I checked. It's identifying them that's difficult, the further removed you are from the action (or lack thereof).
Several of my colleagues were laid off. We all worked on the same project. I reviewed their code and was in meetings with them daily, so I know what their performance was like. They were absolutely not poor performers and it was ridiculous that they were laid off and labeled as poor performers. The project was a success too.
You're replying to someone (rightfully) pointing out that you can lay off poor performers without proclaiming it with one of the farthest-reaching voices in the industry.

Anyone who's worked in a large org knows there's absolutely zero chance that those layoffs didn't touch a single bystander or special case.
From what I heard, Eric Lippert was one of the layoff victims. I find it unlikely that he was actually a poor performer, since he's an industry legend.
"[My probabilistic languages] team in particular was at the point where we were regularly putting models into production that on net reduced costs by millions of dollars a year over the cost of the work.<p>...<p>We foolishly thought that we would naturally be protected from any layoffs, being a team that reduced costs of any team we partnered with.<p>...<p>The whole Probability division was laid off as a cost-cutting measure. I have no explanation for how this was justified and I note that if the company were actually serious about cost-cutting, they would have grown our team, not destroyed it."<p><a href="https://ericlippert.com/2022/11/30/a-long-expected-update/#:~:text=We%20were%20very,not%20destroyed%20it." rel="nofollow">https://ericlippert.com/2022/11/30/a-long-expected-update/#:...</a>
I refuse to believe that companies are allocating major ad spend to Facebook in 2025. Instagram, yes.
As someone who pivoted to agentic work and quit the job that tried to get the existing team to do agentic work:

All companies are structuring like this, and some are more equipped to do it than others.

Basically, the executive team realizes the corporate hierarchy is too rigid for the lowly engineers to surface any innovation or workflow adjustments above the AI-anxiety-riddled middle management and the bandwagon chasers' desperate pleas for job security, so the executives create a team exempt from it, operating in a new structure.

Most agentic work impacts organizations that are outside the tree of that software/product team, and there is no trust in getting the workflow altered unless a team from on high overwrites the targeted organization.

We are at that phase now; I expect this to accelerate as executives catch on, through at least mid-summer 2026.
It's not even a new thing; see Skunk Works. It's completely natural for new/developing technology to be formed in new organizations separate from encumbered corporate bureaucracy. IIRC, IBM did this with the PC, which later languished under the bureaucrats, and there are many other examples over the past half century.

I think the biggest issue with Meta here is how much visibility they have into adjacent orgs, which is not too surprising given the expenditures, but still surprising. It should be a separate unit, and the expenses absolutely thought of as separate from the rest of the org(s).
Sounds like DOGE, a resounding success!