I see things like two-sentence menu summaries in Uber Eats that are completely off in tone.<p>A quick sample from my app right now:<p>“Authentic Caribbean Flavours. Jerk Chicken, Curry Goat, and more. A vibrant culinary journey awaits.” - local Caribbean place<p>“Customisable burgers with 250,000+ toppings. Hand-cut fries and rich milkshakes await.” - Five Guys<p>“Authentic Indian cuisine bursting with rich flavours. Perfect for late-night cravings” - local Indian<p>Everything is Authentic, or Rich, or whatever.<p>—-<p>They’re investing in the wrong bits of AI. I’m sure they’re A/B testing these soulless, often inaccurate blurbs, but I just cannot see how investing money in them actually sells more product.<p>On the other hand, if they had a coherent product vision and trusted their engineers to use AI as they see fit, then I’m sure they would be more successful, and it would be cheaper.
Aside from the hilarious "250,000+ toppings" error, these summaries seem... fine? I would be unsurprised to learn that a human came up with them, even. Seems like pretty common/standard marketing copy.
So, I've just read a few dozen student reports, which I'm 95% sure were mostly generated by AI.<p>The problem isn't one page of one report. It's not even one whole report. But the more you read, the more irritating it gets. It's hard <i>not</i> to notice the AIisms, and once you know them, it gets really obvious. And I know some people will say 'Oh, I say X' for any particular X, but what people don't do is use the same construction at least twice a page, every page, forever.<p>Now, I can imagine there ends up being a bit of a battle, where AIs try to learn to write 'less AI', but for now, it's very obvious if you read enough AI-generated stuff.
>It's hard not to notice the AIisms, and once you know them, it gets really obvious.<p>Maybe I haven't read enough uber eats descriptions to notice, but at least from the sampling above it doesn't seem too obviously AI. There might be a lot of cliche wording, but it's not even clear whether it's worse than human reviews/descriptions.
Maybe each one is fine in isolation - what doesn’t come across from the sample is that every single one is practically the same. If you have Uber Eats, open up the app and look through the summaries for a bunch of restaurants and you’ll see what I mean.<p>And besides that, this just feels like something nobody asked for that probably doesn’t sell more food compared to, for example, more pictures.
Most restaurant food is all the same though because they buy everything from Sysco.
It seems to have been at least slightly improved, but YouTube video summaries suffered from this to an almost comical degree not long ago. The AI voice is already pretty recognizable and stilted; then you constrain it to avoid saying anything negative or spoilery about the video, and (presumably) don't let it remember past output. No surprise it's extremely repetitive. For humans you're at least getting different people's voices, on different days, who remember that they just wrote about how the last one was a "unique look highlighting the importance of design".
Don't worry, they'll also use AI to add more pictures, which will all look strangely similar while bearing at best passing resemblance to anything you might actually receive after placing an order.
I think that's exactly the point. It's the distillation of the most common marketing copy possible, and when that tone is applied everywhere it becomes very same-y, like those cookie cutter neighborhoods where every house is the exact same. Which to some extent defeats the purpose of marketing as it doesn't stand out at all, just sanitized sameness. It's boring and a bit creepy.
That's not an error, it's what 5 Guys advertise. It's the number of combinations for their toppings.
That sounds like an error to me. "Number of toppings" is not the same as "Number of possible combinations of toppings".
Shouldn’t it be a lot more? With around 20 toppings, any combination (order doesn’t matter) would be 2^20 ≈ 1,048,576, no?
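The arithmetic is easy to check directly. Since each topping is either on the burger or not, n toppings give 2^n unordered combinations; the topping counts below are assumptions for illustration, not Five Guys' official numbers.

```python
# Counting unordered topping combinations: each topping is either on or off,
# so n toppings give 2**n possible subsets (including the plain burger).
def topping_combinations(n: int) -> int:
    return 2 ** n

# Illustrative counts (actual Five Guys topping list may differ):
print(topping_combinations(15))  # 32768
print(topping_combinations(18))  # 262144
```

With 18 toppings the count already clears 250,000, so the advertised figure is plausible as a combination count, not a topping count.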
But why does Uber need to spend 3.4B on injecting a useless blob of text between me and an overpriced burger delivered by a struggling illegal immigrant in a smoke-belching jalopy?<p>I know the counter-argument. "This will increase sales". You know what else would increase sales? Spending the 3.4B to replace the above with a uniformed delivery service similar to UPS. That job could pay benefits.
My last job did something similar. An AI blurb feature was researched and built, and it costs a good chunk of resources to run, for no reason other than being able to tell investors AI was being used.<p>I proposed a solution using simple heuristics that would have accomplished the same output, would have been cheaper to build, and would have cost next to nothing to run, but being economical, efficient and boring doesn't make exciting PowerPoint slides.
> But why does Uber need to spend 3.4B on injecting a useless blob of text<p>They didn't. 3.4B was their total R&B cost. Don't blame AI for your human hallucination.
My R&B cost used to amount to buying the occasional Al Green album, how times have changed
Why does a fast food delivery service need a research arm in the first place? It's not exactly rocket surgery.
For the same reason the shoe industry spends billions sponsoring athletes and sports teams to hawk their gear. It's to build layer upon layer of abstractions to move the conversation away from how the sausage is made, and towards something that could justify their own bloated salaries, like promoting "sporting excellence" or "tech innovation".
they make me feel embarrassed to be human
Right, it seems just like a marketing blurb, which is <i>bad</i>, because marketing isn't meant to be informative, it's meant to be manipulative.<p>If we're using AI and we're still getting the gobbledygook nothing-burger marketing word soup, then what are we doing here?<p>No, not everything IS rich and authentic. And no, it's not awaiting me!
Yeah, what's going on in these cos is that a PM is tasked to find ways of integrating AI into the product, and well, if someone's payroll depends on it they _will_ find ways to integrate AI into the product. "Hey, I couldn't find ways to integrate AI into the product" is not an acceptable response.<p>And it's not just Uber. My weather app has an AI weather summary these days.
Product reviews from real people are useful because they are allowed to say negative things.<p>Once you bypass the real reviews for a summary, all those useful negative signals get glossed over because the host platform doesn’t want to piss off the restaurants by propagating those negative comments.
The article seems to suggest the unexpected spend was primarily on coding tools, like Claude Code.<p>One would hope Uber could manage one-sentence AI summaries (regardless of their quality) for less than $3.4 billion.
You're absolutely right.
I don't want my fries and milkshakes to await. I want them to be made fresh when I order them. Silly robots.
> They’re investing in the wrong bits of AI<p>They’re investing in the wrong bits, full stop, not just the wrong bits of AI. No matter how many features they come up with after spending billions of dollars, customers are not any more likely to order food than they already are. The money is better spent reducing their atrocious fees and making sure the restaurant isn’t marking up every single menu item by 25%.
The linked article is not about usage of AI in the product. It's about blowing the budget on AI coding tools, which is a much more interesting topic to discuss given how heavily they are being pushed by some companies.<p>If AI coding tools were having the benefits promised by AI vendors then Uber would be dropping staff, not the tools themselves.
I've never cared about those menu summaries! I always look at menu items and their descriptions. They are fine, at least to me.
> if they had a coherent product vision, and trusted their engineers to use AI how they see fit, then I’m sure they would be more successful<p>Out of curiosity, what do you think might be a successful application for AI in Uber's business? It seems like this <i>is</i> the sort of thing AI applications end up being. Does it actually get better than this?
You misunderstand. AI cannot fail. It can only _be_ failed. In this case, it was failed by the restaurant industry's lack of actual diversity. _They_ need to do better, not AI.
The only accurate summary is "shit and overpriced"
> <i>According to The Information, Chief Technology Officer Praveen Neppalli Naga said Uber is now "back to the drawing board" after a surge in the use of AI coding tools, particularly Anthropic's Claude Code, has blown past internal expectations.</i><p>Of usage costs?<p>> <i>The payoff is starting to show. Around 11% of Uber's live backend code updates are now written by AI agents, up sharply in just a few months. These systems power everything from ride-matching to pricing and bug fixes.</i><p>That's not a payoff.<p>What is the immediate cost of those code updates, what is the quality, how do they affect longer-term maintenance, how does that compare to doing it without "AI", etc.<p>Are these articles written to inform or to hype?<p>> <i>UNLOCKED: 5 NEW TRADES EVERY WEEK. Click now to get top trade ideas daily, plus unlimited access to cutting-edge tools and strategies to gain an edge in the markets.</i><p>There's my answer. Here's a helpful uBlock Origin filter:<p><pre><code> ||finance.yahoo.com^</code></pre>
Only 11%!? Slackers. My team's project is 100% coding agent generated as pushed for by our dear leaders. Yes I'm very scared for when it all crashes down and really hope I'm not there when it does.
Yes, yahoo “journalism” is garbage. The primary source of this story is paywalled, so I can’t actually see what it said, but this AI (or otherwise crappy) summary is worthless.
><i>Despite spending $3.4 billion on research and development, the company has already exhausted its planned AI budget just months into 2026.</i><p>This, and the rest of the article, does not seem to support the claim that they spent 3.4B on AI. The text implies that the R&D budget <i>for the entire company</i> is 3.4 billion (which sounds vaguely reasonable given their market cap), and the portion of that which was earmarked for AI is already spent. I have no idea what the AI spend is (although I assume it's not small), and the article doesn't provide any number either.<p>Those are <i>extremely</i> different things (unless there's evidence that 100% of R&D is spent on AI), and that headline seems intentionally misleading.
Based on my direct experience of similar budgets, you are exactly correct. AI coding tools don’t incur costs like this. AI costs are dominated by runtime and offline inference as part of business- and customer-facing workloads.<p>AI isn’t cheap, but what is especially not cheap is trying to get results that exceed ~80% in quality. Developers can tolerate gaps, customers won’t.
“Engineers were actively encouraged to use tools like Claude Code and Cursor, even ranking them on internal leaderboards based on usage”<p>Token maxxing? Might explain high costs if you are actively encouraging developers to spend as many tokens as possible.
I was told AI makes people more productive so the costs should easily pay for themselves in the form of more revenue.
I'm coming around to it being like getting a pair of industrial grade yak clippers. Yes, there will be a lot of shiny yaks, but the market for shiny yaks is low.
Maybe the lesson here is we shouldn't rely on the guys selling picks and shovels when determining if we should be buying picks and shovels.
The main question is: what is the demand elasticity for software?<p>If it is low, and lower prices won’t generate much new demand, we should expect AI to improve engineering productivity and companies to reduce staff.<p>If it is high, then we should see companies hire more engineers, increase output, and lower prices (and earn more).
Bureaucracy creates work so long as it owns the production function. In software that's typically through system upgrades, new APIs, etc. The system will grow in internal complexity to its carrying capacity. You'd need someone who understands how to replace parts to prune, but they don't really have the incentive. This effect is reduced where software is less essential to the product, but any software-heavy product (particularly with a moat) will be more susceptible.<p>Companies try to manage it via CI/CD, outsourcing and internal competition, but no, companies can't magically reduce staff. They can, however, inject fear, which is good for reducing overt bureaucratic games, but actually increases covert bureaucracy and reduces knowledge-sharing, making the problem worse.<p>Only when incentives are aligned - when developers have an (equity) stake in growing the company - can the culture be open and efficient.
Every time the cost of software development has gone down due to higher level tools we've gotten higher software budgets, more software developers, and more released software. Demand appears to be effectively infinite.
>what is demand elasticity for software?<p>NP-hard
3.4B in 4.5 months…is that all going to Anthropic? Makes it seem so with the wording and how they’re pivoting to Codex too
That’s the <i>entire</i> R&D budget - the article is completely lacking in actual details, such as how much was spent on AI
It's probably all AI spending, including them doing AI stuff for their products.
oh man uber is acquiring the company I work for [1] and we currently really like Claude ... but if Codex is better so be it. I just really, really, really like Claude Code as a front end. Guess I'll have to make it talk Codex instead.<p>[1] it's public knowledge <a href="https://investor.uber.com/news-events/news/press-release-details/2026/Uber-to-Acquire-Global-Chauffeur-Service-Leader-Blacklane/default.aspx" rel="nofollow">https://investor.uber.com/news-events/news/press-release-det...</a>
"My delivery service CEO told me the AI keep eating his tokens so I asked how many tokens he has and he said he just goes to the token shop and gets a new batch of tokens afterwards so I said it sounds like he’s just feeding tokens to the AI and then his laid off workers started crying."
If it is anything like my company: sign enormous deals with AI startups that have existed for 8 months and do little more than provide wrappers around someone else's model. Then hire three different firms that do the same thing, because each division has to prove how much more AI it is than the others. Have a handful of internal engineers who have no idea what they are doing, but get approval to build and run an internal B200 server farm. Ensure any big jobs are done through some kind of white-glove offering from Amazon/Azure that removes complexity but charges astronomical rates.
Apparent source: <a href="https://www.theinformation.com/newsletters/applied-ai/uber-cto-shows-claude-code-can-blow-ai-budgets" rel="nofollow">https://www.theinformation.com/newsletters/applied-ai/uber-c...</a>
This makes it sound like they spent $3.4B on tooling, but is it actually on salaries? Hardware?<p>Probably 5k-6k hires in the department, at say $350k per employee in costs, is $2.1B, which still leaves a ton of extra costs somewhere. Are they sending $1B to Anthropic?<p>Weird and uninformative article.
Large companies have been incentivizing token spend and correlating it with performance, thus creating needless token spend for now. Goodhart's Law and all that.
Hi everyone. We are going to rate your performance directly on how much money you cost the company to do your job. The more money you spend doing your job, the better your performance review.<p><i>6 months later</i><p>Hi everyone. We are over budget.
I think the article is very deceptive in its framing. They spent 9% more, not $3.4B more. That’s roughly $300M more, which is frankly not that much. Firms spent way more than that on cloud adoption, or before that web adoption, in prior cycles.
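A quick sanity check on that "roughly $300M" figure, taking the article's numbers at face value ($3.4B total R&D in 2025 after a 9% rise):

```python
# Back-of-envelope check: if R&D hit $3.4B in 2025 after rising 9%,
# the implied 2024 base and the year-over-year increase are:
total_2025 = 3.4e9
base_2024 = total_2025 / 1.09    # prior-year base
delta = total_2025 - base_2024   # the actual increase

print(round(base_2024 / 1e9, 2))  # 3.12
print(round(delta / 1e9, 2))      # 0.28
```

So the increase is about $0.28B, consistent with "roughly $300M more".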
Syndicated from <a href="https://www.benzinga.com/markets/tech/26/04/51828848/ubers-anthropic-ai-push-hits-wall-cto-says-budget-struggles-despite-spend" rel="nofollow">https://www.benzinga.com/markets/tech/26/04/51828848/ubers-a...</a><p>It's a poorly written junk article upvoted based on Uber/Anthropomorphic sentiment. I recommend flagging it.
This reads as if 100% of Uber’s R&D spend went to tokens. It did not.
This article is very unclear. It says that "Despite spending $3.4 billion on research and development, the company has already exhausted its planned AI budget just months into 2026" - does this mean the budget for AI coding tools is $3.4B? Or the R&D (product development) budget at Uber?<p>Then later: "Uber's R&D expenses rose 9% to $3.4 billion in 2025, and the company expects that figure to keep climbing—suggesting AI may be as much a cost driver as a productivity lever" - so what's the 2026 budget? (From a quick googling, Uber operates on a December 31-ending financial year.)<p>Then the article says "Chief Technology Officer Praveen Neppalli Naga said Uber is now 'back to the drawing board' after a surge in the use of AI coding tools". But then: "The financial pressure is already building. Uber's R&D expenses rose 9% to $3.4 billion in 2025, and the company expects that figure to keep climbing". So are they "back to the drawing board" (pulling back on the tooling) or plowing on and continuing to grow the costs?<p>The article goes on to say "The payoff is starting to show. Around 11% of Uber's live backend code updates are now written by AI agents, up sharply in just a few months. These systems power everything from ride-matching to pricing and bug fixes."<p>So what am I to draw from this? What actually was the budget? Was it blown? And if so, what is the consequence? This is just such a bizarre piece.
I wonder how many engineers are using their unlimited token budgets to moonlight new startups.
Holy misleading headlines Batman. They're not spending $3.4B on solely tokens for Anthropic are they? I don't think so...<p>If anything the CTO is just saying, we're blowing through token budgets way faster than expected as the uptake is so immense. I think that's right from what I've seen. Once people get it, they start using AI for everything. Obviously that's not going to be sustainable forever. I do think we're going to see a lot of adaptive routing in the future to cheaper models for more mundane tasks, whereas right now everything is getting routed to Opus regardless of real need.
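A hypothetical sketch of what that kind of adaptive routing could look like (the model names and the complexity heuristic are invented for illustration; this is not anything Uber or Anthropic actually ships):

```python
# Hypothetical adaptive model router: send short, mundane requests to a cheap
# model and escalate only requests that look complex to an expensive one.
# Model names and heuristic thresholds are illustrative assumptions.
CHEAP_MODEL = "claude-haiku"
EXPENSIVE_MODEL = "claude-opus"

# Keywords that (in this toy heuristic) suggest a harder task.
COMPLEX_HINTS = ("refactor", "architecture", "debug", "prove", "migrate")

def route(prompt: str) -> str:
    """Pick a model based on prompt length and complexity keywords."""
    looks_complex = (
        len(prompt) > 500
        or any(hint in prompt.lower() for hint in COMPLEX_HINTS)
    )
    return EXPENSIVE_MODEL if looks_complex else CHEAP_MODEL

print(route("Rename this variable"))                      # claude-haiku
print(route("Debug this race condition in the matcher"))  # claude-opus
```

Real routers would likely use a small classifier model rather than keywords, but the cost logic is the same: the expensive model only sees traffic that needs it.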
Look, we're using Uber Eats to order food for "free food Tuesdays" in our office.<p>I'm struggling to not puke using their interface, and a couple of times I gave up ordering even though it was free.<p>Every click can take 2-5 seconds to be processed, without any indication. Menus glitch. I once got 2 copies of my order because I rage-clicked the "Finish" button several times.<p>So you're trying to do high-end AI when you can't make a basic fucking form-based webapp work?!? What do you expect?