52 comments

  • amazingamazing6 days ago
    Gemma 4, in my view, is good enough to do things similar to Gemini 2.5 Flash, meaning if I point it at code and ask for help and there is a problem with the code, it'll answer correctly in terms of suggestions, but it's not great at using all tools or one-shotting things that require a lot of context or "expert knowledge".

    If a couple more iterations of this, say Gemma 6, is as good as current Opus and runs completely locally on a Mac, I won't really bother with the cloud models.

    That's a problem.

    For the others, anyway.
    • slopinthebag6 days ago
      Yep, and to be honest we don't really *need* local models for intensive tasks. At least not yet. You can use OpenRouter (and others) to consume a wide variety of open models which *are* capable of using tools in an agentic workflow, close to the SOTA models, which are essentially commodities - many providers, each serving the same model and competing with each other on uptime, throughput, and price. At some point we will be able to run them on commodity hardware, but for now the fact that we can have competition between providers is enough to ensure that rug pulls aren't possible.

      Plus, having Gemma on my device for general chat ensures I will always have a privacy-respecting offline oracle which fulfils all of the non-programming tasks I could ever want. We are already at the point where the moat for these hyperscalers has basically dissolved for the general public's use case.

      If I were OpenAI or Anthropic I would be shitting my pants right now and trying every unethical dark pattern in the book to lock in my customers. And they are trying hard. It won't work. And I won't shed a single tear for them.
    • mark_l_watson5 days ago
      I agree. At first I was really turned off by the Gemma 4 line of models because they didn't function with coding agents as well as the Qwen 3.5 line of models. However, I found that for other use cases Gemma 4 was very good.

      EDIT: I just saw this: "Ollama 0.20.6 is here with improved Gemma 4 tool calling!" I will rerun my tests after breakfast.
    • swazzy6 days ago
      similar vibes to "640K ought to be enough for anybody"
      • Philip-J-Fry5 days ago
        I think the difference is that with LLMs, in a lot of cases you do see some diminishing returns.

        I won't deny that the latest Claude models are fantastic at one-shotting loads of problems. But we have an internal proxy to a load of models running on Vertex AI, and I accidentally started using Opus/Sonnet 4 instead of 4.6. I genuinely didn't know until I checked my configuration.

        AI models will get to the point where, for 99% of problems, something like Gemma is going to work great for people. Pair it up with an agentic harness on the device that lets it open apps and click buttons and we're done.

        I still can't fathom that we're in 2026, in the AI boom, and I still can't ask Gemini to turn shuffle mode on in Spotify. I don't think model intelligence is as much of an issue as people think it is.
        • dimmke5 days ago
          100% agree here. The actual practical bottleneck for most tasks is the harness and agentic abilities.

          It's the biggest thing that stuck out to me using local AI with open source projects vs Claude's client. The model itself is good enough, I think - Gemma 4 would be fine if it could be used with something as capable as Claude.

          And that's going to stay locked down, unfortunately, especially on mobile and in cars - it needs access to APIs to do that stuff, and not just regular APIs that were built for traditional invocation.

          The same way that websites are getting llms.txt files, I think APIs will also evolve.
        • Tianning5 days ago
          Agree on the diminishing returns - the Opus 4.6 anecdote is a good signal.
        • wj5 days ago
          I'm not sure I understand your last paragraph? The two sentences seem to contradict?
          • BoorishBears5 days ago
            GPT 3.5 was intelligent enough to understand that command and turn it into a correctly shaped JSON object; the platforms don't have tight enough integration to take advantage of the intelligence.
        • bawana5 days ago
          I think security is the issue - AI is good at circumventing it. For example, AI can read paywalled articles that you cannot. Do you really want AI to have 'free range'?
        • mewpmewp25 days ago
          I mean, to me even the difference between Opus and Sonnet is as clear as day and night, and even Opus and the best GPT model. Opus 4.6 just seems much more reliable in terms of me asking it to do something, and that actually happening.
          • Philip-J-Fry5 days ago
            It depends what you're asking it, though. Sure, in a software development environment the difference between those two models is noticeable.

            But think about the general user. They're using the free Gemini or ChatGPT. They're not using the latest and greatest. And they're happy using it.

            And I am willing to bet that a lot of paying users would be served perfectly fine by the free models.

            If a capable model is able to live on device and solve 99% of people's problems, then why would the average person ever need to pay for ChatGPT or Gemini?
            • mewpmewp25 days ago
              But even other tasks, like research etc., where dates are important, little details and connections are important, reasoning is important - background research activities or usage of tools outside of software development - this is where I am finding the LLMs most useful for my life.

              Even Opus makes mistakes with dates, or with not understanding news correctly in chronological context, and it would be even worse with smaller and less performant models.

              Scheduling, planning, researching products, shopping, trip plans, etc...
          • acidtechno3035 days ago
            You're quick to say "to me" in your comparison.

            My experience is very different from yours. Codex and CC yield very different results, both because of the harness differences and the model differences, but neither is noticeably better than the other.

            Personally, I like Codex better just because I don't have to mess with any sort of planning mode. If I imply that it shouldn't change code yet, it doesn't. CC is too impatient to get started.
            • mewpmewp25 days ago
              I guess yes, that's a harness difference, and you can also configure CC as a harness to behave very differently. But still, with the same harness and guidance, "to me" there's still a difference between Opus 4.6 and e.g. GPT 5.4 - or which GPT model do you use? I've been using Claude Code, Codex and OpenCode as harnesses presently, but for serious long-running implementation I feel like I can only really rely on CC + Opus 4.6.
              • acidtechno3035 days ago
                Yes, 5.4.

                Perhaps Opus is superior and I'm just jaded.

                I come from Cursor, before having adopted the TUI tools. Opus was nothing short of pathetic in their environment compared to the -codex models. I would only use it for investigations and planning because it was faster.

                Like you've said, though, that could just be a harness issue.
            • charcircuit5 days ago
              I have the opposite experience. Codex gets to work much faster than Claude Code. Also, I've never seen the need to use planning mode for Claude. If it thinks it needs a plan, it will make one automatically.
              • acidtechno3035 days ago
                I'll drink to the idea that it's all in my head.
      • shermantanktop6 days ago
        Well, you can do a *lot* with 640K... if you try. We have 16 GB in base machines and very few people know how to try anymore.

        The world has moved on; that code-golf time is now spent on ad algorithms or whatever.

        Escaping the constraint delivered a different future than anticipated.
        • throwaw126 days ago
          > you can do a lot with 640k…if you try.

          It is economically not viable to try anymore.

          "XYZ Corp" won't allow their developers to write their desktop app in Rust because they want to consume only 16 MB of RAM, then another implementation for mobile with Swift and/or Kotlin, when they can release a good-enough solution with React + Electron consuming 4 GB of RAM and reuse components with React Native.
          • KerrAvon5 days ago
            Strangely enough, AI could turn this on its head. You can have your cake and eat it too, because you can tell Claude/Codex/whatever to build you a full-featured Swift version for iOS, and Kotlin for Android, and whatever you want on Windows and Mac. There's still QA for the different builds, but you already have to QA each platform separately anyway if you really care that they all work, so in theory that doesn't change.

            Of course, it's never that simple in reality; you need developers who know each platform for that to work, because you must run the builds and tell the AI what it's doing wrong and iterate. Currently, you can probably get away with churning out Electron slop and waiting for users to complain about problems instead of QAing every platform. Sad!
          • gerdesj5 days ago
            My Commodore 64 begs to differ.
        • stavros5 days ago
          The simple fact is that a 16 GB RAM stick costs much less than the development time to make the app run on less.
          • throw0101d5 days ago
            > *The simple fact is that a 16 GB RAM stick costs much less than the development time to make the app run on less.*

            The costs are borne by different people: development by the company, RAM sticks by the customer.

            A company is potentially (silently?) adding to the cost of the product/service that the customer has to bear, by their needing to have more RAM (or having the same amount but not being able to do as much with it).
            • stavros5 days ago
              Yep, and since companies care about TCO, they reward the software with the lower TCO, which happens to be the one that uses more RAM but is cheaper to produce.
              • lucketone5 days ago
                Until RAM prices increase significantly.
          • loloquwowndueo5 days ago
            One stick does. How about all the sticks needed for all the people who want to run the software?
            • stavros5 days ago
              Still cheaper, since it amortizes over all the software.
              • appreciatorBus5 days ago
                Some software has millions or even billions of users. The cost of 16 GB multiplied by millions or billions would pay for a lot of refactoring.

                That said, I think it's more of a collective action problem. The person who could pay for the refactor to operate in 640 K is not the same person who has to pay for the 16 GB. And yes, the 16 GB is cheap enough in comparison to other costs that the latter group doesn't necessarily notice that they are subsidizing inefficient development.
                • loloquwowndueo5 days ago
                  I think stavros means amortization on an individual level - if all software is bloated and requires 16 GB to run, then my expense for a 16 GB stick is not caused by a single piece of software, but by everything I use.

                  Not that I agree, of course :) I'm talking more of the net negative of everyone needing to buy 16 GB sticks so developers can YOLO vibe-coded unoptimized garbage. But at least I think the former explanation is what stavros meant :)
        • raverbashing6 days ago
          Especially if the 640k are "in your hand" and the rest is "in the cloud"
        • jstummbillig6 days ago
          People get hung up on bad optimization. If you are working at sufficiently large scale, yes, thinking about bytes might be a good use of your time.

          But most likely, it's not. At a system level we don't want people to do that. It's a waste of resources. Making a virtue out of it is bad, unless you care more about bytes than humans.
          • TeMPOraL5 days ago
            These bytes are human lives. The bytes and the CPU cycles translate to software that takes longer to run, that is more frustrating, that makes people accomplish less in more time than they could, or should. Take too much, and you prevent them from using other software in parallel, compounding the problem. Or you're forcing them to upgrade hardware early, taking away money they could better spend in other areas of their lives. All this scales with the number of users, so for most software with any user base, not caring about bytes and cycles wastes far more people-hours than it saves in dev time.
            • jstummbillig5 days ago
              Creating people able to do these optimizations costs human life too, which is then not spent on other things, like building the unoptimized version of another product.
              • dns_snek5 days ago
                We're not talking about writing assembly by hand here. If your software has a million daily users and wastes a minute of their day, that's about *9 work-years of labour* wasted every single day.

                In a 5-year lifecycle that's about 10,000 years of human labour wasted. Yes, I had to quadruple-check this myself.

                Does it take 10,000 work-years of effort, per project, to train its developers to write reasonably performant code?

                Of course, not all of this would translate into actual productivity gains, but it doesn't have to.
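                The arithmetic can be checked quickly (a sketch; the ~2,000-hour work-year is an assumption, and the totals shift with it):

```python
# Rough check of the wasted-time arithmetic: one million daily users,
# one wasted minute each per day, over a 5-year product lifecycle.
# Assumption (not from the thread): a work-year is ~2,000 hours.
daily_users = 1_000_000
minutes_wasted_per_user = 1
work_year_hours = 2_000

wasted_hours_per_day = daily_users * minutes_wasted_per_user / 60
work_years_per_day = wasted_hours_per_day / work_year_hours   # ~8.3

lifecycle_days = 5 * 365
work_years_total = work_years_per_day * lifecycle_days        # ~15,000
```

                With those assumptions the daily figure is ~8.3 work-years and the 5-year total is ~15,000 - the same order of magnitude as the figures quoted above.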
                • jstummbillig5 days ago
                  What world are you living in where the median piece of software has a million users? Or even a hundredth of that?
                  • shermantanktop4 days ago
                    I work on software with millions of users..? Sure, it's not the median, but it exists and there's a lot of it.

                    Unfortunately, the number of users and the collective value of their wasted time doesn't make arguing for efficiency and performance any easier.
                  • herewulf3 days ago
                    The one we're in, where "software" doesn't just mean an app that someone downloads from a website or an app store. Software includes lots of server-side components, etc.

                    I once noticed my name in the Chromium OS credits due to a patch I had submitted to a library that's on every Chromebook. 1 million would be a small number for Chromebooks alone.
                  • dns_snek5 days ago
                    I'm not talking about the median piece of software with 2 users and 0.1 developers (I made that up).

                    The ones that stick out are actively maintained, widely used, and well funded. It doesn't have to be a million active users, but they should be the first to get their act together.
                  • TeMPOraL5 days ago
                    It's what many software companies *dream of* and *aim for*. But the same argument works with 10k users, or even 1k users.
                • charcircuit5 days ago
                  You are failing to consider the opportunity cost: how many more work-years could be saved by building a new feature instead.
      • pdpi6 days ago
        Look at the whole history of computing. How many times has the pendulum swung from thin to fat clients and back?

        I don't think it's even mildly controversial to say that there will be an inflection point where local models get Good Enough and this iteration of the pendulum shall swing to fat clients again.
      • flir6 days ago
        Assuming improvements in LLMs follow a sigmoid curve, even if the cloud models are always slightly ahead in terms of raw performance, it won't make much of a difference to most people, most of the time.

        The local models have their own advantages (privacy, no as-a-service model) that, for many people and orgs, will offset a small performance advantage. And, of course, you can always fall back on the cloud models should you hit something particularly chewy.

        (All IMO - we're all just guessing. For example, good marketing or an as-yet-undiscovered network effect of cloud LLMs might distort this landscape.)
      • iso16315 days ago
        More than "a 3-year-old laptop is fine":

        My ThinkPad is nearly 10 years old. I upgraded it to 32 GB of RAM and have replaced the battery a couple of times, but it's absolutely fine apart from that.

        If AI which was leading-edge in 2023 can run on a 2026 laptop, then presumably AI which is leading-edge in 2026 will run on a 2029 laptop. Given that 2023 was world-changing, that capacity is now on today's laptop.

        Either AI grows exponentially, in which case it doesn't matter, as all work will be done by AI by 2035; or it plateaus in, say, 2032, in which case by 2035 those models will run on a typical laptop.
    • shaz0x5 days ago
      Even Gemma 4 E2B is more useful than you'd think if you give it the right harness. I've been running it on Android via llama.rn and it handles function calling natively - the model outputs structured tool calls without any prompt engineering. It won't replace Opus for hard reasoning, but for a mobile app that needs to pick a tool and run it, the cost math is hard to argue with: $0/query, forever.
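      The flow described here - a small on-device model emitting a structured tool call that the app parses and dispatches - can be sketched roughly like this (a minimal Python illustration, not llama.rn code; the tool names and the JSON shape are invented for the example, not taken from any particular model's format):

```python
import json

# Hypothetical tools the app exposes; names and signatures are illustrative.
TOOLS = {
    "set_shuffle": lambda enabled: f"shuffle {'on' if enabled else 'off'}",
    "play_track": lambda title: f"playing {title}",
}

def dispatch(model_output: str) -> str:
    """Parse a structured tool call emitted by the model and run it.

    Expects JSON like: {"tool": "set_shuffle", "arguments": {"enabled": true}}
    """
    try:
        call = json.loads(model_output)
        fn = TOOLS[call["tool"]]
        return fn(**call["arguments"])
    except (json.JSONDecodeError, KeyError, TypeError) as exc:
        # A small local model will sometimes emit malformed calls;
        # the harness has to handle that instead of crashing.
        return f"tool call failed: {exc}"

result = dispatch('{"tool": "set_shuffle", "arguments": {"enabled": true}}')
```

      The parsing is the easy part; the hard part, as the thread notes, is wiring such a tool table up to real platform APIs.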
    • colechristensen6 days ago
      Local models seem somewhere between 9 and 24 months behind. I'm not saying I won't be impressed with what online models will be able to do in two years, but I'm pretty satisfied with the prediction that I won't really need them in a couple of years.
      • Gigachad6 days ago
        We still aren't going to be putting 200 GB of RAM in a phone in a couple of years to run those local models.
        • Tuna-Fish5 days ago
          HBF (high-bandwidth flash) is coming fast, with the first examples expected to be sampling to users this year.

          Flash memory can be optimized to be as fast as, and more energy-efficient than, DRAM at large linear reads; there was just little demand before, because doing so costs you ~half of your density and doesn't improve your writes at all. All the flash memory manufacturers realized that this is a huge opportunity for model weights and are now chasing it.

          Or in other words, after the initial price peak stabilizes in a few years, it will be reasonable to put ~500 GB of weights into a device for ~$100 in memory costs.
        • jurmous6 days ago
          We don't need 200 GB of RAM on a phone to run big models - just 200 GB of storage, thanks to Apple's "LLM in a flash" research.

          See: https://x.com/danveloper/status/2034353876753592372
          • adrian_b6 days ago
            Yes, I agree that this is the right solution, because for a locally hosted model I value the quality of the output more than the speed with which it is produced, so I prefer the models as they were originally trained, not with further quantization.

            While that paper praises Apple's advantage in SSD speed, which allows decent performance for inference with huge models, nowadays SSD speeds equal to or greater than that can be achieved in any desktop PC that has dual PCIe 5.0 SSDs, or even one PCIe 5.0 and one PCIe 4.0 SSD.

            Because I had also independently reached this conclusion, like I presume many others, I just started work a week ago on modifying llama.cpp to use weights stored on SSDs in an optimal manner, while also batching many tasks so that they share each pass through the SSDs. I assume that in the following months we will see more projects in this direction, so local hosting of very large models will become easier and more widespread, allowing avoidance of the high risks associated with external providers, like the recent enshittification of Claude Code.
            • alwillis5 days ago
              > While that paper praises the Apple advantage in SSD speed, which allows a decent performance for inference with huge models, nowadays SSD speeds equal or greater than that can be achieved in any desktop PC that has dual PCIe 5.0 SSDs, or even one PCIe 5.0 and one PCIe 4.0 SSDs.

              Apple's advantage is their unified memory architecture, where the CPU, GPU and Neural Engine share the same memory and the SSD is directly connected to the SoC - less latency than PCIe. Memory bandwidth starts at 300+ GB/s.
              • adrian_b5 days ago
                In an optimized implementation of model inference, the latency of SSD access has no importance, because no random accesses are done.

                The purpose of optimizing model inference for weights stored on SSDs is to achieve continuous reading from the SSDs at the maximum throughput provided by the hardware, taking care that any computations and any accesses to main memory are overlapped with the SSD reading.
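                A back-of-the-envelope for this approach (the bandwidth and model-size numbers are illustrative assumptions, not measurements): if a dense model's weights are streamed from SSD once per token, single-stream speed is bounded by SSD bandwidth divided by model size, and batching lets many independent tasks share each pass.

```python
# Illustrative estimate of SSD-streamed inference throughput.
# Assumed numbers (not measurements): a dense model whose weights
# don't fit in RAM, read in full from SSD once per token.
weights_gb = 500          # model weights on disk
ssd_gb_per_s = 14         # e.g. two fast PCIe 5.0 SSDs read in parallel

seconds_per_pass = weights_gb / ssd_gb_per_s       # one full weight read
tokens_per_s_single = 1 / seconds_per_pass          # ~0.03 tok/s: unusable alone

batch = 64                                          # independent tasks per pass
tokens_per_s_total = batch / seconds_per_pass       # ~1.8 tok/s aggregate
```

                This is why batching matters so much in this setup; mixture-of-experts models fare better still, since only the active experts need to be read per token.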
        • anon3738395 days ago
          That amount of RAM won't be necessary. Gemma 4 and comparably sized Qwen 3.5 models are already *better* than the very best, biggest frontier models were just 12-18 months ago. Now in an 18-36 GB footprint, depending on quantization.
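          The quoted footprint range follows from simple quantization arithmetic (a sketch; the ~36B parameter count is an assumption for illustration): weight memory ≈ parameters × bits per weight / 8, plus KV-cache and activation overhead.

```python
# Approximate in-memory footprint of quantized model weights.
# Assumption for illustration: a ~36B-parameter model.
params = 36e9

def weights_gb(bits_per_weight: float) -> float:
    """Weight storage in GB, ignoring KV-cache/activation overhead."""
    return params * bits_per_weight / 8 / 1e9

q4 = weights_gb(4)   # ~18 GB at 4 bits per weight
q8 = weights_gb(8)   # ~36 GB at 8 bits per weight
```

          A 4-bit quantization of such a model lands near the bottom of the quoted 18-36 GB range; 8-bit near the top.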
        • alwillis5 days ago
          > We still aren't going to be putting 200gb ram on a phone in a couple years to run those local models.

          You can already buy an iPhone with 2 TB of storage. The CPU, GPU and Neural Engine all share the same pool of RAM, and the SSD is directly connected to all of this. You won't need 200 GB of RAM to run local models when you essentially have 500 GB of virtual memory.
        • mh-6 days ago
          A lot of people are making the mistake of noticing that local models have been 12-24 months behind SotA ones for a good portion of the last couple of years, and then drawing a dotted line assuming that continues to hold.

          It simply doesn't. The SotA models are enormous now, and there's no free lunch on compression/quantization here.

          Opus 4.6 capabilities are not coming to your (even 64-128 GB) laptop or phone in the popular architecture that current LLMs use.

          Now, that doesn't mean that a much narrower-scoped model with very impressive results can't be delivered. But that narrower model won't have the same breadth of knowledge, and it's TBD whether it's possible to get the quality/outcomes seen with these models *without* that broad "world" knowledge.

          It also doesn't preclude a new architecture or other breakthrough. I'm simply stating it doesn't happen with the current way of building these.

          edit: forgot to mention the notion of ASIC-style models on a chip. I haven't been following this closely, but last I saw, the power requirements are too steep for a mobile device.
          • am17an6 days ago
            Don't underestimate the march of technology. Just look at your phone: it has more FLOPS than there were in the entire world 40 years ago.
            • kuboble6 days ago
              And I think it's very likely that with improved methods you could get Opus 4.6-level performance on a wristwatch in a few years.

              You needed a supercomputer to win at chess, until you didn't.

              Current local models' performance on natural language is much better than any algorithm running on a supercomputer cluster just a few years ago.
            • root_axis6 days ago
              Yeah, but that's the current state of the art after decades of aggressive optimization; there's no foreseeable future where we'll ever be able to cram several orders of magnitude more RAM into a phone.
              • TeMPOraL5 days ago
                We already cram several orders of magnitude more flash storage into phones than RAM (e.g. my phone has 16 GB of RAM but 1 TB of storage); even now, with some smart coding, if you don't need *all* that data at the same time for random access at sub-millisecond speed, it's hard to tell the difference.
                • alwillis5 days ago
                  Agreed. Apple sells an iPhone Pro Max with 2 TB of storage.
            • vrighter5 days ago
              But it doesn't have that many more FLOPS than it did a couple of years ago.
          • colechristensen6 days ago
            There's been plenty of free lunch shrinking models thus far with regard to capability vs parameter count.

            Contradicting that trend takes more than "It simply.. doesn't."

            There's plenty of room for RAM sizes to double, along with bus speed. It idled for a long time as a result of limited need for more.
          • slopinthebag5 days ago
            The gap between SOTA models and open/local models continues to diminish, as SOTA is seeing diminishing returns on scaling (and that seems to be the main way they are "improving"), whereas local models are making real jumps. I'm actually more optimistic that local models will catch up completely than that SOTA will take any great leaps forward.
          • grumbel5 days ago
            Would the model even need that breadth of knowledge? Humans just look things up in books or on Wikipedia, which you can store on a plain old HDD, not VRAM. All books ever written fit into about 60 TB if you OCR them - and the useful information in them probably into a lot less - which is well within the range of consumer technology.
          • baq6 days ago
            Pretty sure there’s at least a couple orders of magnitude in purely algorithmic areas of LLM inference; maybe training, too, though I’m less confident here. Rationale: meat computers run on 20W, though pretraining took a billion years or so.
        • herewulf3 days ago
          My phone connected via mesh VPN to a server at home is a local enough model for me.
    • rldjbpin3 days ago
      > If a couple more iterations of this, say gemma6 is as good as current opus and runs completely locally on a Mac, I won't really bother with the cloud models.

      > That's a problem.

      While improvements should continue rolling in, and might even match current SOTA in benchmarks down the line, is that "good enough"?

      It's hard to believe we have reached that stage with current models, which will continue to stretch beyond what we can economically run. Call it a skill issue, or try to fix it with a revolutionary harness - it seemingly takes a village to get it all working. Maybe by then we will have a good enough ecosystem in these layers too, but if current capabilities are the benchmark, it might need more time in the oven.
    • docstryder5 days ago
      The economy is, more or less, a competition.

      If someone gets a really great axe and is happy with it, that's great for them.

      But then, other people will be on bulldozers.

      They can say they are happy with the axe, but they are not in the competition at that point.
      • ectospheno5 days ago
        I think the article was wondering how many billion-dollar bulldozers the world needs. My local hardware store sells a variety of axes. I myself am a happy axe user. I even replace them.
    • logicallee5 days ago
      > if I point it code and ask for help and there is a problem with the code it'll answer correctly in terms of suggestions

      Could I ask how you do that? I installed openclaw and set it to use Gemma 4, but it didn't act in an agent mode at all; it only responded in the chat window while doing nothing, and didn't read any files or do anything that you wrote (though I see you do mention that it's not great at using all tools). What are you using exactly?
      • data-ottawa5 days ago
        I had the same issues. I had to tell it to use subagents explicitly, and instead of saying "set a cron", say "set an openclaw cron".

        I generally do like the model; it's not a great agent, though.

        It's good for summarization tasks and small tool use, and it has pretty good world knowledge, though it does hallucinate.
    • blitzar6 days ago
      > it's not great at using all tools

      Glad it wasn't just me - I was impressed with the quality of Gemma 4, it just couldn't write the changes to file 9/10 times when using it with opencode.
      • seaal6 days ago
        https://huggingface.co/google/gemma-4-31B-it/commit/e51e7dcdb6febd74c182fe0cb41c236363ae2ac5

        There was an update to tool calling 3 days ago. I haven't tested it myself but hope it helps.
        • blitzar5 days ago
          Wow, that is *so much* better! I didn't exactly test it extensively, but my issues are gone.
        • sroussey6 days ago
          Hmm... is there an updated ONNX?
      • erichocean6 days ago
        > *it just couldnt write the changes to file 9/10 times when using it with opencode*

        You might want to give this a try; it dramatically improves Edit tool accuracy without changing the model: https://blog.can.ac/2026/02/12/the-harness-problem/
    • pojzon4 days ago
      Like you say, Google is not playing the game to win. They play it to make others lose.<p>Looking at current advancements - this is the horse I would bet my money on.
    • vasco5 days ago
      But that difference, at the moment, is the difference between it being OK on its own with a team of subagents, given good enough feedback/review mechanisms, and having to babysit it prompt by prompt.

      By the time Gemma 6 allows you to do the above, the proprietary models supposedly will already be on the next step change. It just depends whether you need to ride the bleeding edge - but especially because it's "intelligence", there's an obvious advantage in using the best version, and it's easy to hype it up and generate FOMO.
      • oblio5 days ago
        > But that difference atm is the difference between it being OK on its own with a team of subagents given good enough feedback

        Do people actually build meaningful things like that?

        It's basically impossible to leave any AI agent unsupervised, even with an amazing harness (which is incredibly hard to build). The code slowly rots and drifts over time if not fully reviewed and refactored constantly.

        Even if teams of agents working almost fully autonomously were reliable from a functional perspective (they would build a functional product), the end product would have ever-increasing structural chaos over time.

        I'd be happy to be proven wrong.
    • gorgmah6 days ago
      When that happens, you'll have FOMO from not using Opus 5.x. The numbers that they showed for Mythos show that the frontier is still steadily moving (and maybe even at a faster pace than before).
      • random20215 days ago
        I would be surprised by that behavior even for 10% of people doing real AI-usable work. Very few people buy a new motherboard or CPU or graphics card every 3 months.

        Even now, just because the latest Anthropic model is super great doesn't mean people are not using other models. Not everyone is subscribed to only the best.
    • blcknight6 days ago
      There is a cognitive ceiling on what you can do with smaller models. Animals with simpler neural pathways often outperform what we think they are capable of, but there's no substitute for scale. I don't think you'll ever get a 4B or 8B model equivalent to Opus 4.6. *Maybe* just for coding tasks, but certainly not Opus' breadth.
      • zarzavat6 days ago
        The only thing that we are sure can't be highly compressed is knowledge, because you can only fit so much information in a given entropy budget without losing fidelity.

        The minimal size limits of reasoning abilities are not clear at all. It could be that you don't need all that many parameters, in which case the door is open for small, focused models to converge to parity with larger models in reasoning ability.

        If that happens, we may end up with people using small local models most of the time, and only calling out to large models when they actually need the extra knowledge.
        • idle_zealot6 days ago
&gt; and only calling out to large models when they actually need the extra knowledge<p>When would you want a lossy encoding of lots of data bundled together with your reasoning? If it is true that reasoning can be done efficiently with fewer parameters, it seems like you would always want it operating normal data search and retrieval tools to access knowledge rather than risk hallucination.<p>And re: this discussion of large data centers versus local models, do recall that we already know it&#x27;s possible to make a pretty darn clever reasoning model that&#x27;s small and portable and made out of meat.
          • aldonius4 days ago
I guess we can imagine a pure reasoning model (if that&#x27;s even the right word any more) with almost zero world knowledge. How does it know what to look for? How does it do any meaningful communication at all?<p>So I think it&#x27;s useful to have an imprecise-but-fairly-accurate set of world knowledge as part of an otherwise reasoning-heavy model. It&#x27;s a cache.<p>And if it&#x27;s an LLM, or something like that, I think it basically has to have world knowledge built in, because what is natural language if not communication about the world?
          • Gareth3214 days ago
I find it difficult to understand the distinction between parametric knowledge and reasoning skills in LLMs. I still think of them as distinct, but I understand there is significant overlap. Arguably, they are the same thing in LLMs. So I would assume that if <i>reasoning</i> is high quality, using RAG could be logical (if much slower). However, if the lack of parametric knowledge impacts reasoning, then use of larger models seems warranted. A dumb LLM wouldn&#x27;t offer sufficient results even with all the RAG in the world.
          • dryarzeg5 days ago
&gt; we already know it&#x27;s possible to make a pretty darn clever reasoning model<p>There is a problem though: we know that it is possible, but we don&#x27;t know how (at least not yet, and as far as I am aware). So we know the answer to the &quot;what?&quot; question, but we don&#x27;t know the answer to the &quot;how?&quot; question.
          • adrianN6 days ago
            I would call brains with the needed support infrastructure small.
        • yorwba6 days ago
          I think you underestimate the amount of knowledge needed to deal with the complexities of language in general as opposed to specific applications. We had algorithms to do complex mathematical reasoning before we had LLMs, the drawback being that they require input in restricted formal languages. Removing that restriction is what LLMs brought to the table.<p>Once the difficult problem of figuring out what the input is supposed to mean was somewhat solved, bolting on reasoning was <i>easy</i> in comparison. It basically fell out with just a bit of prompting, &quot;let&#x27;s think step by step.&quot;<p>If you want to remove that knowledge to shrink the model, we&#x27;re back to contorting our input into a restricted language to get the output we want, i.e. programming.
      • charcircuit6 days ago
        I think you are underestimating the strength a small model can get from tool use. There may be no substitute for scale, but that scale can live outside of the model and be queried using tools.<p>In the worst case a smaller model could use a tool that involves a bigger model to do something.
        • sroussey6 days ago
          Small models are bad at tool use. I have liquidai doing it in the browser but it’s super fragile.
          • dd8601fn5 days ago
            I don’t really understand this, but I hear it a lot so I know it’s just confusion on my part.<p>I’m running little models on a laptop. I have a custom tool service made available to a simple little agent that uses the small models (I’ve used a few). It’s able to search for necessary tool functions and execute them, just fine.<p>My biggest problem has been the llm choosing not to use tools at all, favoring its ability to guess with training data. And once in a while those guesses are junk.<p>Is that the problem people refer to when they say that small models have problems with tool use? Or is it something bigger that I wouldn’t have run into yet?
      • dathinab6 days ago
except you don&#x27;t want knowledge in the model, and most of that &quot;size&quot; comes from &quot;encoded knowledge&quot;, i.e. overfitting. The goal should be to only have language handling in the model, and the knowledge in a database you can actually update, analyze, etc. It&#x27;s just really hard to do so.<p>&quot;world models&quot; (for cars) maybe make sense for self driving, but they are also just a crude workaround to have a physics simulation to push understanding of physics. Though in contrast to most topics, basic physics tends not to change randomly, and it&#x27;s based on observation of reality, so it probably can work.<p>Law, health advice, programming stuff etc. on the other hand changes all the time and is all based on what humans wrote about it. Which in some areas (e.g. law or health) is very commonly outdated, wrong, or at least incomplete in a dangerous way. And programming changes all the time.<p>Having this separation of language processing and knowledge sources is ... hard; language is messy and often interleaves with information.<p>But this is most likely achievable with smaller models. Actually it might even be easier with a small model. (Though whether the necessary knowledge bases can fit and run on a Mac is another topic...)<p>And this should be the goal of AI companies, as it&#x27;s the only long-term sustainable approach as far as I can tell.<p>I say should because it may not be: if they solve it that way and someone manages to clone their success, then they lose all their moat for specialized areas, as people can create knowledge bases for those areas with know-how OpenAI simply doesn&#x27;t have access to. (Which would be a preferable outcome, as it means actual competition and a potentially fair working market.)
        • dathinab6 days ago
as a concrete outdated case:<p>the TLS key exchange group X25519MLKEM768 is recommended to be enabled on servers which support it<p>last time I checked, AI didn&#x27;t even list it when you asked it for a list of TLS 1.3 key exchange groups (though it has been widely supported since even before it was fully standardized..)<p>this isn&#x27;t surprising, as most input sources AI can use for training are outdated and also don&#x27;t list it<p>maybe someone at OpenAI will spot this and feed it explicitly into the next training cycle, or people will cover it more and through this it is fed in implicitly<p>but what about all the niche but important information with just a handful of outdated stack overflow posts or similar? (which are unlikely to get updated now that everyone uses AI instead..)<p>The current &quot;let&#x27;s just train bigger models with more encoded data&quot; approach just doesn&#x27;t work. It can get you quite far, though. But then it hits a ceiling. And trying to fix that by also giving it additional knowledge &quot;it can ask for if it doesn&#x27;t know&quot; has so far not worked, because it reliably doesn&#x27;t realize it doesn&#x27;t know when it has enough outdated&#x2F;incomplete&#x2F;wrong information encoded in the model. Only by ensuring it doesn&#x27;t have any specialized domain knowledge can you make sure that approach works IMHO.
  • grtteee6 days ago
    This is the classic apple approach - wait to understand what the thing is capable of doing (aka let others make sunk investments), envision a solution that is way better than the competition and then architect a path to building a leapfrog product that builds a large lead.
    • HerbManic6 days ago
      Pretty much it. That said, they did try to appease the markets by announcing &#x27;Apple Intelligence&#x27; so they didn&#x27;t appear to be behind everyone.<p>They did do the smart thing of not throwing too much capital behind it. Once the hype crumbles, they will be able to do something amazing with this tech. That will be a few years off but probably worth the wait.
      • Gigachad6 days ago
        For consumers AI has anti hype right now. It&#x27;s off-putting to see consumer products slapped with a hundred AI labels. I see people talk about how you can turn off all of Apple Intelligence with one toggle rather than hundreds on Samsung.<p>Firefox is also marketing how easy it is to disable AI.
        • rtpg6 days ago
I think a lot of people are not hyped about AI in their toaster, but... I don&#x27;t think people are generally turned off from deeper integration in the OS itself. Especially when for some people this represents ideas similar to how programmer-types get excited about Shortcuts.<p>Decently accessible automation and discovery, without having to go figure out a bunch of stuff.
          • crote6 days ago
&gt; Decently accessible automation and discovery, without having to go figure out a bunch of stuff<p>Sure, but is this <i>actually happening</i>? Last time I tried, Atlassian&#x27;s heavily-pushed AI couldn&#x27;t even turn a Jira ticket number in Confluence into a clickable link. Similarly, Windows has been actively moving away from surfacing locally-installed applications in the Start menu search towards offering random internet garbage.<p>I&#x27;m all for using an LLM to make something like Siri able to understand both &quot;Siri, turn off the lights&quot; and &quot;Siri, make it dark!&quot; - but that&#x27;s not what&#x27;s being pushed onto consumers, because there is <i>no way</i> anyone is going to pay $100&#x2F;month for <i>any</i> version of that.
            • wookmaster5 days ago
Unfortunately companies seem to be in such panic mode about making ANY offering, so as not to become irrelevant, that they&#x27;re giving AI overall a bad reputation. Everyone made a mad dash and didn&#x27;t spend enough time making those products well thought out. Some got burned by it, such as Microsoft.<p>Everyone seems well convinced AI can just replace 90% of software out there, but I&#x27;ve yet to see any evidence of that. Sure, it can stand up a blog or get a simple app together pretty quick, but once you get into larger-scale software it&#x27;s not capable of doing it by itself and you still need teams of developers working together.
          • Gigachad6 days ago
People like features, benefits, and outcomes. AI isn&#x27;t a feature, it&#x27;s a technology that can enable features. But it&#x27;s being marketed as the only thing that matters.<p>The user does not give two shits if the new laptop &quot;has AI&quot;. This is how Apple has been killing it lately: they market the MacBooks as being powerful, cheap, with long batteries, and a premium feel. Things the user cares about. Most of the stuff marketers are just blanket labeling &quot;AI&quot; will eventually be shuffled to the background and rebranded with a more specific term to highlight the feature being delivered rather than the fact that it&#x27;s AI.
            • grtteee5 days ago
              Nailed it.<p>I reckon most humans never learn the valuable lessons of the past.<p>As you put it - nobody cares about the technology in and of itself. They care about “ok cool what’s in it for me?”. That’s what determines their decision to purchase&#x2F;use a thing.
          • tuyiown6 days ago
You&#x27;re right, there is plenty of space for features that require AI to work but that are indistinguishable from &quot;classical&quot; features. Better autocompletion is a proven one, for example.
          • drcongo6 days ago
            Sentiment among my teenage kids and their peers is that AI can fuck right off. It&#x27;s way over the line into actual <i>hate</i> of anything AI.
      • GeekyBear5 days ago
Apple&#x27;s Neural Engine and the CoreML framework to leverage it are almost a decade old now.<p>Apple Intelligence was a rebrand, and Apple has made some unique decisions rolling it out.<p>For instance, hallucinations from the new chatbot version of Siri were seen as unacceptable, so its release was delayed.<p>Is a chatbot that regularly provides false information really an advancement?<p>Apple chose not to do photorealistic generative images, so they can&#x27;t be used for deepfakes.<p>Apple chose not to add a feature to write text for you, just one to clean up what you write, because they don&#x27;t want to help kids cheat in school.
        • crena5 days ago
That is definitely true, but some time ago Apple&#x27;s marketing team also put out some pretty cringey commercials to the contrary. It&#x27;s wild how they seemed to be encouraging people to cheat and hide their ineptitudes, rather than just being honest about it.
          • GeekyBear4 days ago
            As far as I&#x27;m concerned, I would rather see the industry prioritize AI that doesn&#x27;t provide false information over pushing slop out as quickly as possible.<p>Hell, there are chatbots out there trying to convince kids that suicide is a great idea.<p>How is prioritizing pushing out slop as quickly as possible with no consideration of the consequences acceptable?
      • grtteee6 days ago
        Yeah exactly the Apple Intelligence thing was pure BS to shut people up who kept saying apple was going to get disrupted by missing out.<p>Apple seems to follow the values that Steve laid out. Tim isn’t a visionary but he seems to follow the principles associated with being disciplined with cash quite well. They haven’t done any stupid acquisitions either. Quite the contrast with OAI.
    • eastbound6 days ago
      &gt; wait to understand what the thing is capable of doing<p>My parents use Android to ask “What are the 5 biggest towers in Chicago” or “Remove the people on my picture” while apparently iPhone is only capable of doing “Hey Siri start the Chronometer &#x2F; There is no contact named Chronometer in your phone”.<p>My iPhone is lagging a ridiculous 10 years behind. It’s just that I don’t trust Google with my credit card.
      • Gigachad6 days ago
        These are software&#x2F;cloud features. You can install gemini on iphone if you want to talk about towers in Chicago.<p>The only reason to care about it being OS integrated is to interact with functions of the OS, which siri does fine.
        • jeroenhd6 days ago
          Apple&#x27;s AI stuff also uses cloud features, though you can&#x27;t use them on other platforms. The problem with Apple&#x27;s new cloud features is that they generally just suck. I&#x27;m surprised iCloud works so well with how hard they&#x27;re fumbling basic stuff like this.
          • Gigachad5 days ago
            At least all of the ones I have tried work locally. I’ve entered airplane mode and things like magic eraser in images works fine.
        • satvikpendem6 days ago
          Siri does not do it fine, it&#x27;s literally the example the above commenter showed.
          • dwaite5 days ago
Knowing the building heights around Chicago is not an OS feature. Even if Siri were perfect, they still aren&#x27;t going to ship a Wikipedia object graph on every phone.<p>Likewise, the phone does not understand removing people from a photo. It is a feature specific to the photo app, and Siri allows you to wire in commands for the features in your app just fine, and has for years. If Google decided for competitive reasons not to ship this feature to non-Pixel or non-Android users, that&#x27;s not a Siri fault. That Apple did not integrate this as a voice command into their Photos app is also not a Siri fault (is it really common to remove all people from a photo, vs specific people?)
            • satvikpendem5 days ago
              &gt; Hey Siri start the Chronometer &#x2F; There is no contact named Chronometer in your phone<p>Is what I was referring to, Siri often fails at even opening apps which is an OS feature. Regardless, even for your examples at a certain point an AI assistant not being able to do certain things while others can does become the fault of that AI.
              • Gigachad5 days ago
                Just ask for the timer. That&#x27;s what it&#x27;s called in iOS. I&#x27;ve never seen someone stuck on this before.
      • realusername6 days ago
        Siri is one step below that for me, it still doesn&#x27;t understand my accent, I feel like its voice recognition didn&#x27;t improve from 2010...
      • smt886 days ago
        &quot;10 years behind&quot; would be an improvement for Siri. It&#x27;s actively broken much of the time in a way that Google Assistant or Alexa never has been.
        • adrithmetiqa6 days ago
I would argue that they are as bad as each other. I have to repeat most voice commands to Siri and Alexa rather than getting them right the first time. No experience with Google.
          • ghaff5 days ago
            Voice assistants were going to be this revolutionary new category. I think Amazon was going to populate a whole office tower in Boston with Alexa engineers at one point. There have been incremental improvements here and there but, to a first approximation, none of it has really worked out.
      • Barrin926 days ago
        I want the reverse version of this, if Apple can promise me to &#x27;lag behind&#x27; for another ten years I&#x27;ll buy my first Apple device in ten years
    • m4636 days ago
      Quietly they are doing things on-device. The OCR + copy&#x2F;paste is genuine goodness - modestly functional.
      • gyomu5 days ago
        This feature has been around since iOS 16 (2022) though - no relationship to Apple Intelligence (2024) or the current LLM hype (2023 onwards).
      • lern_too_spel6 days ago
        That&#x27;s also literally years behind the competition. <a href="https:&#x2F;&#x2F;www.androidpolice.com&#x2F;2018&#x2F;05&#x2F;09&#x2F;android-ps-new-recents-ui-includes-smart-text-selection-image-sharing-pulling-text-images&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.androidpolice.com&#x2F;2018&#x2F;05&#x2F;09&#x2F;android-ps-new-rece...</a>
        • crote6 days ago
          The competition has also attached it to a toxic brand and heavily integrated it with actively user-hostile applications. It doesn&#x27;t matter if your <i>tech</i> is years ahead when people expect using it will mean your image content info will be sold to anyone willing to pay a cent for it.
          • lern_too_spel5 days ago
            It&#x27;s more that nontechnical users prefer luxury brands over utility brands. A much smaller issue, which you alluded to, is that some technical users aren&#x27;t technical enough to know real privacy vs. marketed privacy. This feature exists in base Android, which doesn&#x27;t require any Google services.
            • Miserlou575 days ago
Does anyone else tire of hearing Apple referred to as a &quot;luxury&quot; brand? Sorry, it&#x27;s more Honda than LVMH.
        • nirava5 days ago
LOL, at the risk of sounding like a shill, I think Apple was right on time with these features. They added them after the on-device CPU&#x2F;neural engine was finally powerful and efficient enough. These features arrived at once on Macs, iPhones, and iPads, and they arrived at the same time on your friends&#x27; devices.<p>IMO Android suffers from not controlling its hardware. I can never be sure if the hyped new feature will come to my phone because I&#x27;m not using a Pixel or a Samsung.
        • bdavbdav6 days ago
          But everyone talks about it like it was Apple, and isn’t that what matters (to Apple)?
          • oasisaimlessly6 days ago
            I&#x27;ve never heard anybody (mis)attribute that to Apple.
            • drcongo6 days ago
              I would have, and I work in tech. I&#x27;d guess that most people who use iOS have zero idea of what Android can and can&#x27;t do, because they never use it and probably never will so what&#x27;s the point of trying to find out.
              • 9dev5 days ago
                Seconded. I never knew Android had this—but then again I couldn&#x27;t care less about what Android can do. There is so much stuff fundamentally off-putting for me about the entire Google ecosystem that I&#x27;d never consider switching anyway.
          • dwaite5 days ago
            The text from images feature launched as a Pixel 2 series-only feature.<p>There&#x27;s a lot clearer message to consumers on iPhone, since so many features are available on &quot;every phone made in the last five years, once you update the software.&quot;<p>On Android, that feature might be bound to an OS version, or might be rolled out in a Play Store update, it might be specific to just Google or Samsung, or even just to one of their phones. There&#x27;s much less word of mouth &quot;have you tried this new thing?&quot;
        • hurfdurf6 days ago
          Remember when Google added Car Crash Detection to Pixel in early 2020? Nobody does.<p>But when Apple added it in iPhone 14 (2022)...
          • 107292875 days ago
In French we talk about &quot;le savoir-faire&quot; vs &quot;le faire-savoir&quot; (know-how vs making it known) and the importance of good communication. Apple are the bestest at it. Remember the iPod shuffle and the lack of a screen marketed as a feature to spice up your life.
    • socalgal26 days ago
      Yea, they nailed that with the Newton, Apple Pippin, and the Apple Vision Pro
      • Krutonium6 days ago
        The Vision Pro was a Development Kit; Just like the first generation Apple Watch. It&#x27;s not meant for the consumers, it&#x27;s meant for the developers among the consumers.<p>We will see if they ever release a new VisionOS device, but it&#x27;s not the first time they did that; see also the Apple Watch.
        • jeroenhd6 days ago
          You can explain away every failed product launch with &quot;it&#x27;s a developer product&quot;, not meant for consumers.<p>This wasn&#x27;t like HoloLens or Google Glass. They marketed these devices to consumers and then sold these devices to consumers.
        • socalgal25 days ago
          A development kit wouldn&#x27;t have a dedicated section of sofas in Apple stores for demoing them.
      • blitzar6 days ago
        How amazing is that Apple car
        • ponector5 days ago
          It has an excellent reputation of having no reported accidents!
          • blitzar5 days ago
            My regular tow truck driver tells me he sees all sorts; fords, audis, mercedes, teslas, even the odd exotic car like a lambo - but never once an apple car would you believe it.
        • treetalker6 days ago
          Depending on price I would or would not buy an Apple car; but I am quite interested in options for a car that (1) is electric; (2) doesn&#x27;t spy on me and sell my data; (3) doesn&#x27;t take video of me and my passengers and do weird things with it; and (4) doesn&#x27;t support Republicans &#x2F; white supremacists &#x2F; Elon Musk.<p>And I imagine that like-minded consumers are a pretty large market.
          • SebastianKra5 days ago
            Knowing Apple&#x27;s track history with materials, I guess the seats will look like used iPad Smart Keyboard Folios after two years.
          • brikym6 days ago
            (5) Doesn&#x27;t support a dictatorship with camps.
      • anjel6 days ago
Apple learned to hang back from plowing the unsold Lisas into a landfill.
      • MagicMoonlight6 days ago
        The Vision Pro is the best AR&#x2F;VR product ever created.
        • Fricken6 days ago
          All the king&#x27;s horses and all the king&#x27;s men couldn&#x27;t come up with a killer app.
          • ajdegol5 days ago
            I think it was $1000 too expensive to take off. It’s also too heavy, they should drop the front screen and ruthlessly save weight.<p>Chicken and egg problem, if no-one buys it, no-one will develop any killer apps.<p>Whether it’s pleasant to have any screens that close to your eyes - or ever will be - is maybe the bigger question for VR.
            • ascagnel_5 days ago
              &gt; Chicken and egg problem, if no-one buys it, no-one will develop any killer apps.<p>Disagree on this. Going back as far as VisiCalc, it&#x27;s about a device making space for a killer app, and that killer app selling devices. Apple has torched so much developer good-will that even a lower price wouldn&#x27;t make the space for a killer app.<p>When was the last time a new, mobile-first killer app came out?
      • pjmlp4 days ago
        You missed Taligent, Opendoc, A&#x2F;UX, Mk Linux, Copland,...
    • SoftTalker6 days ago
      When have they done that since the first iPhone in 2007? The watch maybe? Though not sure that&#x27;s &quot;leapfrog&quot; better than anyone else&#x27;s smartwatch, but I don&#x27;t have one so maybe I&#x27;m wrong.
      • gmerc6 days ago
        Their own chips, vertically integrating.
      • tiffanyh6 days ago
        - AirPods<p>- Apple Watch<p>- AirTag<p>Those are a few that come to mind. All do multi-billions in revenue per year.
        • smt886 days ago
          None of those are the best product in their category, and all are only huge sellers because Apple anti-competitively privileges them in its ecosystem.
          • drBonkers6 days ago
            What’s better than AirPods and AirTags? I want them
            • tsycho6 days ago
              The parent poster is saying (and I agree) that Airpods and Airtags are only superior because Apple anti-competitively privileges their integration with iPhones. It&#x27;s not that they are better at the hardware level by itself.<p>And since iPhones form the largest single company&#x27;s device network in the rich countries, that is a pretty big advantage.
              • benbristow5 days ago
                Surely it&#x27;s less of an advantage in rich countries because naturally less theft occurs?
          • npunt5 days ago
            Bad take. Each of these offered novel user experience improvements at launch. Yes they leveraged ecosystem (and yes I agree ecosystem lock-in does move devices) but thats also exactly what unlocked the better UX.<p>In your other thread you mentioned people don&#x27;t necessarily want iPhones but they buy them to not be excluded from iMessage. I think you vastly underestimate how much regular people want low-bullshit tech experiences and are willing to pay for that.
      • Bud6 days ago
        [dead]
    • sidkshatriya6 days ago
Will this strategy work every time? Maybe for AI it will (the market is competitive and Apple can just purchase the best model for its consumers).<p>But this approach may not work in other areas: e.g. building electric batteries, wireless modems, electric cars, solar cell technology, quantum computing, etc.<p>Essentially Apple got lucky with AI, but it needs to keep investing in cutting-edge technology in the various broad areas it operates in and not let others get too far ahead!
      • Crestwave6 days ago
        It works often enough for the company to be wildly successful. They can simply cut their losses and withdraw from industries where it hasn&#x27;t, such as EVs.
      • dwaite5 days ago
        Their focus is investing in areas where they see something being a competitive differentiator, or where the market has failed to create a competitive environment.<p>They do not make their own screens because they can source screens from multiple sources and work with those manufacturers to create screens with the properties they want. Same thing with them relying on others for electric batteries - there are plenty of manufacturers to provide batteries to Apple&#x27;s spec.<p>They created their own wireless modems because there&#x27;s only one company they were able to purchase modems from, and those modems did not necessarily have the features Apple wanted.<p>Apple hasn&#x27;t announced any interest in selling electric cars, solar cell technology, or quantum computing platforms. I wouldn&#x27;t expect them to do so until they had a consumer product ready for sale. I doubt they are planning to come out with products in any of these categories soon.
      • codeptualize6 days ago
        I think their M chips are a good example. They ran on intel for so long, then did the impossible of changing architecture on Mac, even without much transition pain.<p>Obviously that was built upon years of iPhone experience, but it shows they can lag behind, buy from other vendors, and still win when it becomes worth it to them.
        • moondev6 days ago
          How is changing the architecture of a platform that only you make hardware for doing the impossible?<p>They could change the architecture again tonight, and start releasing new machines with it. The users will adopt because there is literally no other choice.<p>Every machine they release will be fastest and most capable on the platform, because there is no other option
          • crote6 days ago
            The hard part is doing so without completely ruining the existing app ecosystem. Rosetta 2 is <i>genuinely</i> impressive.
            • codeptualize5 days ago
Exactly this! Rosetta + the whole app developer community who really quickly released builds for M chips (voluntarily or forced, but it did happen).<p>I had the initial M1 Air, and it was remarkable how usable it was. You&#x27;d expect all sorts of friction and issues, but mostly things just worked (very fast). Even with some Rosetta overhead it was still fast compared to Intel Macs.
            • oarsinsync6 days ago
              Rosetta 1 delivered 50-80% of the performance of native, during the PPC-&gt;Intel transition. It turns out, you can deliver not particularly impressive performance and still not ruin your app ecosystem, because developers have to either update to target your new platform, or leave your platform entirely.<p>You can also voluntarily cut off huge chunks of your own app ecosystem intentionally, by giving up 32bit support and requiring everything to be 64bit capable.<p>...because users have no other choice when only one vendor controls the both the hardware+software. They can either use the apps still available to them, or they can leave. And the cost of leaving for users is a lot higher.
            • lern_too_spel5 days ago
              Vs. FEX and Prism?
              • dwaite5 days ago
                Yes. Apple put custom hardware support in the M series chips based on the needs of Rosetta 2. The x86_64 performance on Rosetta 2 was often higher at launch than the prior generation of Intel chips running those same binaries natively.<p>Microsoft and Qualcomm already knew the performance of x86 app emulation on windows was killing the ARM machine lineup, so Qualcomm was working on extensions to their chips and Microsoft on having Windows support them already, but ARM64EC and Prism didn&#x27;t launch for two years after the M1 shipped.
        • Krutonium6 days ago
It&#x27;s also notably not the first time they switched. They did the Motorola (I think MIPS?) architecture, then IBM PowerPC, then Intel x86 (for a single generation, then x86_64), and now the Apple M-series.
          • spicymaki6 days ago
The Motorola chip was called the 68000.
      • danaris6 days ago
        But Apple doesn&#x27;t just try to do everything.<p>They do the things they think they can do very well.<p>Why would they <i>try</i> to build electric batteries, wireless modems, electric cars, solar cells, or quantum computers, if their R&amp;D hadn&#x27;t already determined that they would likely be able to do so Very Well?<p>It&#x27;s not like any of those are really in their primary lines of business anyway.
      • iSnow5 days ago
        &gt;wireless modems<p>They (Apple) bought out intel&#x27;s wireless modems and are using them instead of Qualcomm&#x27;s chips. IIRC, they aren&#x27;t the best in class when it comes to raw throughput, but quite good in terms of throughput vs power consumption.
    • maplethorpe5 days ago
      Didn&#x27;t they rush to integrate ChatGPT into their OS back in 2024? Reality doesn&#x27;t seem to align with your description.
      • dwaite5 days ago
I wouldn't describe it as 'rushed'. It's integrated pretty much exactly the way they said it would be, as a fallback from Siri when you ask world-knowledge questions.<p>The part that doesn't work is having Siri locally smart enough to use it as a tool.
        • maplethorpe5 days ago
          I used &quot;rushed&quot; because they decided to add it while everyone was still figuring out what it was for. That behaviour goes against the OP&#x27;s claim that Apple &quot;waits to understand what the thing is capable of doing&quot; before acting.<p>It feels like we&#x27;re rewriting history. There was a lot of blowback at the time.
      • swiftcoder5 days ago
They certainly announced they were going to. I've yet to meet someone who actually used that integration. Like many of these things, it seems to have been a sop to investors who were accusing Apple of ignoring the AI wave.
    • SirMaster5 days ago
      Apple waited on smartphones?<p>I thought the original iPhone was basically first.<p>Do you count blackberry and palm pilot as Apple waiting to see?
      • kergonath5 days ago
> Apple waited on smartphones?<p>They were not waiting for smartphones, but they did wait for the technology to enable them. They had been working on prototypes for a couple of years before releasing the first iPhone, and smartphones were not really a new thing at that point. What made it possible was improvements in digitisers and batteries (they were not the first to use capacitive digitisers, but they were the first to use one at that scale, for a full screen), as well as progress on the software side, which took some effort.<p>It was the same for the first iPod. They jumped when they got a hard drive they thought was small enough to fit in a product they believed was good.<p>So yeah, they tend to wait and see, but they consider technologies, not only final products.
      • dwaite5 days ago
        I would absolutely count blackberry and palm pilot, along with windows ce-based phones. Just because Apple leap-frogged them (and they all eventually folded those lines of business) doesn&#x27;t mean they weren&#x27;t existing products in the market.<p>The difference, if any, was focus. The premium on smartphones before Apple hit the market was on business&#x2F;professional users who could afford the high premium. Apple instead targeted making a premium consumer product - that professionals then started to jump to over time, depending on how addicted they were to their blackberry keyboard.
      • Eric_WVGG5 days ago
        Apple was considered <i>very</i> late to the smartphone game at the time.<p>Windows CE was introduced on PDAs around 1996, and was on phones by 2003, so the iPhone was arguably between four and eleven years late depending on how you define the space.<p>Microsoft’s dominance was a safe bet because they had never really failed to dominate any market at that point in history. Also nobody imagined that the size of the mobile market would eclipse laptops, so “Windows CE already won” wasn’t an absurd statement at all.
        • SirMaster5 days ago
I guess it's just hard for me to consider those even "smartphones", with such small screens, no capacitive multi-touch, and web browsers that didn't work properly on so many websites, at least compared to Safari on the iPhone.<p>And even other factors like music and video playback were so poor, granted they were built for business use, which didn't really need a good media consumption experience.
          • grtteee5 days ago
I suggest you watch the original iPhone launch video - Steve compares the iPhone with the existing smartphones of the time.<p>You're taking for granted that we know how things panned out in hindsight. A complete touch-screen phone with no fixed buttons seemed nuts at that time.
            • SirMaster5 days ago
That's exactly my point. It seemed like a new, futuristic product category, the first of its kind.<p>I watched it live and I bought one as soon as they became available. To me it delivered a user experience unlike anything else that existed.<p>And yes, I had used some Palm Pilots and BlackBerrys as well.
      • hoytschermerhrn5 days ago
        Yes
    • scotty794 days ago
      That&#x27;s what they famously did for VR. &#x2F;s
    • dangus6 days ago
      It’s even more superpowered than previous implementations of this strategy.<p>When they made the iPhone, iPod, and Apple Watch they had no specific hardware advantage over competitors. Especially with early iPhone and iPod: no moat at all, make a better product with better marketing and you’ll beat Apple.<p>Now? Good luck getting any kind of reasonably priced laptop or phone that can run local AI as well as the iPhone&#x2F;MacBook. It doesn’t matter that Apple Intelligence sucks right now, what matters is that every request made to Gemini is losing money and possibly always will.<p>This is especially true in 2026 where Windows laptops are climbing in price while MacBooks stay the same.
      • dwaite5 days ago
All three of those products launched with custom hardware made by partnered manufacturers.<p>At the iPhone launch, I seem to remember Apple still having quite a bit of the flash memory market tied up through their exclusive iPod contracts - Apple basically helped finance new factories being spun up in return for exclusive access to their production.<p>The Apple Watch had the S1 system-in-package, which included an Apple custom CPU. There were a number of miniaturization techniques and custom parts Apple used which I remember competitors lagging on being able to replicate, due to the broader market tendency to integrate off-the-shelf products (but I don't have more part examples or timelines).<p>Since they try to stay secretive about upcoming products, competitors may only get hints about what Apple is doing through your typical industrial-espionage channels until the product comes out. That creates quite a bit of lag when you are starting a new product design cycle based on a product your competitor just hit the market with.
        • dangus4 days ago
I think we can’t overestimate how lucky Apple got with the success of the iPhone. It really wasn’t a guaranteed hit by any means, and despite the success of the iPod it was launched by a much more modest company than today’s Apple.<p>Samsung literally makes flash memory and was one of the primary competitors of the iPhone along with its Microsoft Windows Mobile/Phone and/or Android products of that era.<p>Are you saying that iPhone competitors couldn’t have made similar investments in factories and couldn’t have secured flash chips? These were all mega-corporations like Microsoft, Samsung, LG, and Nokia.<p>Android Inc. had been in negotiations with companies like Samsung and LG in 2005 before Google acquired it. In a very slightly alternate universe, Android could have been acquired by a powerhouse phone OEM like Samsung rather than Google, who I would argue squandered Android’s potential. To this very day Google struggles to make competitive hardware with their platform.<p>The iPhone launched as one of the most expensive smartphones on the market, from a company with zero experience in selling cellular devices and a very small list of cellular networks that would even work with them.<p>Their competitors had ample opportunity to respond, but simply could not execute. In a very, very slightly alternate universe, something like the Nokia/Microsoft partnership would have obliterated Apple.<p>The Apple Watch had no hardware advantage in the sense that it had no special capabilities above competitors. Yes, Apple custom-designed the SoC, but it wasn’t considered ahead of its competition. The LG G Watch and Moto 360 were available contemporaneously with or earlier than the Apple Watch, and the Apple Watch had no specific advantage in terms of performance, battery life, etc.<p>What made the Apple Watch a lot different from the iPhone was the ecosystem that Apple had built up to this point, Apple’s focus on watches as a fashion purchase and the failure of competitors to recognize the same, and Apple’s arguably-illegal restriction of competing smartwatch devices on their dominant mobile platform (which the EU is forcing them to open up on now).
      • skybrian6 days ago
        How do you know Gemini is losing money on inference?
        • nl6 days ago
          &gt; How do you know Gemini is losing money on inference?<p>It&#x27;s not. People make this claim with zero evidence.<p>But Google made around $20B <i>profit</i> on Google search in 2025 Q4, and that includes AI search.
          • grtteee5 days ago
Until the day comes that they properly break out the financials, neither you nor the other poster has any idea what the numbers are.
            • dangus5 days ago
              And if AI was making lots of money they’d break it out and proudly display it in their financials on its own.
              • nl5 days ago
                They can&#x27;t break it out because it is embedded in other services.<p>But to quote:<p>&gt; Overall, we’re seeing our AI investments and infrastructure drive revenue and growth across the board.<p>and<p>&gt; Revenue from AI solutions built by our partners increased nearly 300% year-over-year, and commitments from our top 15 software partners grew more than 16X year-over-year.<p><a href="https:&#x2F;&#x2F;blog.google&#x2F;company-news&#x2F;inside-google&#x2F;message-ceo&#x2F;alphabet-earnings-q4-2025&#x2F;#full-stack-approach-ai" rel="nofollow">https:&#x2F;&#x2F;blog.google&#x2F;company-news&#x2F;inside-google&#x2F;message-ceo&#x2F;a...</a>
                • dangus5 days ago
                  Revenue != Profit<p>Operating margin has been declining since approximately 3&#x2F;2025 at Alphabet.<p>You think Google has no ability to tell us whether a traditional search makes more revenue than an AI Summary search? I think we would be naive to assume they don&#x27;t know that.
                • grtteee5 days ago
No, they choose not to break it out because under accounting principles you can get away with it.<p>Lmao, don't talk about subjects you clearly are not an expert in.<p>The only real metric one can use to gauge new investment is marginal ROIC, which is very noisy to say the least.
                  • nl5 days ago
                    Ok..<p>So as I said: Google made around $20B in profit on 2025 Q4 which includes AI search.<p>Both revenue and profit grew with the introduction of AI search.<p>So where exactly is this big loss you speak so confidently of?
                    • grtteee5 days ago
                      I did not comment on whether it was a loss or profit position.<p>“Until the day comes that they properly break out the financials you, nor the other poster have any idea as to what the numbers are.”<p>Until one of the private firms goes public nobody has a clean view of what the financials of a model business look like.
              • grtteee5 days ago
                Yep lol. Every investor and portfolio manager in the world is begging for a signal to say the returns really are coming on all this capex spend.
        • OccamsMirror6 days ago
          They&#x27;re talking about free inference like Android and Google Home devices. No one is paying subscription fees for these and they&#x27;re running their inference in the cloud. Apple Intelligence, for the most part, is running on the device.
          • knowaveragejoe6 days ago
            Isn&#x27;t some of Gemini&#x27;s functionality on Android on-device?
            • nl5 days ago
              Yes it is
        • dangus5 days ago
          Is there any evidence of <i>any company</i> making money in inference?
      • grtteee6 days ago
Apple's advantage was that they did everything in-house and had the marketing and distribution capabilities. And now you've got the ecosystem lock-in.<p>In hindsight it's obvious why they pulled it off - nobody else could do it. They all had pieces missing.
  • Traster5 days ago
    People can correct me if I&#x27;m wrong, but I think the core logic behind OpenAI&#x27;s valuation was essentially that AI would work like search. Google had the best search engine, it became a centre of gravity that sucked everything in and suddenly network effects meant it was the centre of the universe. There seem to be 2 big problems with that though. The first is that for search, queries are both demand for the product and a way of making the product better. The second, is that Google was genuinely the best product for a very long time.<p>Maybe point (1) was unclear at some point, but I think it&#x27;s mostly clear today that&#x27;s not happening. Training the model is modestly distinct from inference.<p>Point (2) is really funny - because sure, at some point OpenAI was the best, and then Sam Altman blew the place up and spawned a whole host of competitors who could replicate and eventually surpass OpenAI&#x27;s state of the art.<p>It now looks like AI is a death march. You must spend billions of dollars to have the best model or you won&#x27;t be able to sell inference. But even if you do, a whole host of better funded competitors are going to beat you within months so your inference charges better pay off extremely quickly. When the gap between models starts to drop, distribution becomes king and OpenAI can&#x27;t compete in that field either.<p>Google can do that. Meta can do that. MSFT probably can do that. Amazon can do that. OpenAI cannot. They do not have the cash to do it.
    • toyetic5 days ago
I think a large part of its valuation was its ability to compete with search, but that's understating it a bit. Unlike search, it could/can be the platform users primarily interact with (a la a social media replacement) while having huge impacts on enterprise work and automation. I think it's the combination of one company effectively being able to compete on every front in the modern web ecosystem that's contributed to the valuation.<p>It's also important to note the valuation is not just based on its possible concrete economic implications in these areas but also on future "unknown" possibility (i.e. whatever "AGI" means to investors). That's not to say I believe it's possible to achieve this, but rather that a huge part of Sam Altman's job is increasing valuation through unfounded claims about AGI's possibility and potential impact.
      • Traster5 days ago
        Yeah to zoom out, I think it was less specifically search and more generally: There was the PC, the winner became a behemoth. Then was Search, the winner became a behemoth. Then smartphones, the winner became a behemoth, Then there was social media and the winner became a behemoth.<p>The logic was basically &quot;AI is going to be the next thing. The winner is going to be massive, let&#x27;s back the person who looks best placed to do that&quot;. To be fair, it&#x27;s probably correct. The people betting on OpenAI probably have plenty of money in Google shares and almost certainly have a share of Anthropic, grok, you name it. Most of them will go to 0, but the 1 winner <i>could</i> pay off. I&#x27;m not sure even 1 will pay off.
      • chasd005 days ago
I've almost forgotten about AGI; that was supposed to be the reason for the valuations and all the hope/fear. Then it just sort of went away, and AI turned into the Software Developer doomsday machine. We're on month 4 since the models got really good at code and we were all going to be out of a job in 6 months. I guess we only have 2 more months of employment left /s
    • alex11385 days ago
I'm just sad Google was intent on ruining their own product, whether it was removing the + operator (seriously - Google+ is not an excuse, I don't care if it conflicts with search, don't do that) or some of their political censorship
    • jacobgold5 days ago
      <i>&quot;Google had the best search engine, it became a centre of gravity...&quot;</i><p>Almost no one made serious attempts at competing with Google. And not because of network effects or any other hard blocker. In the early 2000s, the industry just wasn&#x27;t mature enough to heavily fund serious competition.<p>By the 2020s the industry has funding and founders ready to jump on any huge opportunity that presents itself.<p>There are of course downsides, but this competitive landscape in AI seems like a huge net win for users in terms of lower costs and faster progress.
      • ethbr15 days ago
        Yahoo? MSN Search?
        • jacobgold4 days ago
          There were lots of search engines but very few that made serious well-funded attempts at competing head-to-head with Google.
          • ethbr13 days ago
            I&#x27;d consider both of those well-funded.
      • tim3335 days ago
        Microsoft had a good go with Bing.
        • jacobgold4 days ago
          They did as well competing with Google on search as they have competing with Apple on smartphones.
    • chasd005 days ago
      that&#x27;s been my feeling for a while now. Google just has to keep up while OpenAI and Anthropic go bankrupt. I can see MSFT and Amazon eventually consuming OpenAI and Anthropic respectively when the money runs out but I still think Google is the eventual winner. I also have been pointing out that Apple making a deal with Google vs trying to do it on their own is another vote in that direction.
    • pstuart5 days ago
      For actual searching, it seems like RAG would be the way. Instead of rebuilding models, focus on curating datasets and sources.
  • hapticmonkey6 days ago
    Apple aren’t in the business of building chatbots to impress investors (other than some WWDC2024 vaporware they’d rather not talk about any more). They’re in the business of consumer hardware.<p>Consumers want iPhones and (if Apple are right) some form of AR glasses in the next decade. That’s their focus. There’s a huge amount of machine learning and inference that’s required to get those to work. But it’s under the hood and computed locally. Hence their chips. I don’t see what Apple have to gain by building a competitor to what OpenAI has to offer.
    • discordance6 days ago
      ~25% of Apple&#x27;s revenue came from services in FY25 (and 50% from iPhone, ~25% from other hardware). They made $415B in that year, so ~$100B from services alone!
      • planb5 days ago
Services revenue is mostly just the 30% cut from App Store sales. This means every time a user buys a pro subscription for ChatGPT or Claude on their phone, Apple makes more money than they could with a self-deployed model.
        • benoau5 days ago
You're not wrong that they collect a ton of rent off AI apps; rumours a few weeks back claimed $900m in fees last year, with 75% of that just from OpenAI.<p>But services revenue is:<p>- their 36% share of Google Ads for being the default search engine, about $21 billion/year of pure profit<p>- their IAP fees; court testimony reveals a 75% profit margin<p>- their first-party subscriptions; there's an antitrust suit about iCloud that alleges 52% of iPhone users are on paid plans and that the profit margins are 80-ish percent!<p><a href="https://9to5mac.com/2026/03/19/report-apple-made-roughly-900m-from-generative-ai-apps-in-2025/" rel="nofollow">https://9to5mac.com/2026/03/19/report-apple-made-roughly-900...</a>
        • mdorazio5 days ago
          Not true. App Store is less than 1&#x2F;3 of services revenue.
          • planb5 days ago
Source? But it's true that I forgot about the Google Search deal.
    • goalieca5 days ago
      &gt; (if Apple are right) some form of AR glasses in the next decade.<p>Pretty sure this is just a hedge or simple research project and not a main bet.
    • smt886 days ago
      Consumers don&#x27;t necessarily want iPhone. They don&#x27;t want to be excluded from iMessage, which is a completely different motivation.
      • tokioyoyo6 days ago
        Yeah, that just doesn&#x27;t pass the simplest sniff tests. I barely use iMessage, and yet I&#x27;m an iPhone user. Basically everyone around me is the same.
        • presbyterian6 days ago
          Agreed, I’ve been a loyal iPhone user for a long time, and very few people I know use iMessage. I use it with my parents because they don’t have any other messenger, and they don’t even really know it’s iMessage, they just think of it as texting. Everyone I know is using something else for messages, whether it’s Discord, Instagram DMs, WhatsApp, or occasionally Telegram or Snapchat.
        • smt885 days ago
          &gt; <i>Yeah, that just doesn&#x27;t pass the simplest sniff tests. I barely use iMessage, and yet I&#x27;m an iPhone user.</i><p>A single anecdote isn&#x27;t data. You&#x27;re not a typical consumer.<p>The only major market where iPhone outsells Android (number of handsets) is the US, and it&#x27;s because of iMessage. Android is 70% of the world market and dominates LatAm, Africa, and Asia.
          • avianlyric5 days ago
Why are you comparing a single phone manufacturer's market position to the market position of an entire OS?<p>iOS vs Android isn't relevant when discussing hardware. It's Apple vs Samsung etc. iOS doesn't need majority market share for Apple to completely dominate their hardware competitors in a market.
          • carlosjobim5 days ago
            But now you&#x27;ve argued that people are buying Android instead of iPhone just because of social pressure from their peers.
        • array_key_first5 days ago
          In the US it&#x27;s mostly iMessage, and that includes people who say it&#x27;s not mostly iMessage.<p>iPhones are more expensive, on average, for a similar or worse experience. The thing that drives iphone sales is social. People want iPhones because their friends do, and that&#x27;s a very good reason.
      • bdavbdav6 days ago
        US centric view, which I believe to be wrong. UK is predominantly WhatsApp, and the bulk of handsets sold are still iPhones.<p>Income is a much tighter correlation than messaging platform. Rack up those market shares by phone value and the scales tip even harder.
        • hiq6 days ago
          &gt; the bulk of handsets sold are still iPhones<p>According to <a href="https:&#x2F;&#x2F;gs.statcounter.com&#x2F;os-market-share&#x2F;mobile&#x2F;united-kingdom" rel="nofollow">https:&#x2F;&#x2F;gs.statcounter.com&#x2F;os-market-share&#x2F;mobile&#x2F;united-kin...</a> it&#x27;s closer to 50&#x2F;50.
      • drooopy5 days ago
        That must be an american thing because I guarantee you that it doesn&#x27;t mean anything for the rest of the world.
        • smt885 days ago
          It is, and the iPhone doesn&#x27;t have overwhelming market share in any other large market, which is my point.
          • avianlyric5 days ago
            But they’re still the largest individual player in just about every market they’re in. So there’s clearly a strong demand for iPhones.
      • prmoustache5 days ago
        That is a very US centric opinion.<p>In other part of the globe iphone users are mostly using whatsapp or Line and couldn&#x27;t care less about imessage.
        • smt885 days ago
          And in those countries, iOS has a much smaller market share. You&#x27;re proving my point.
          • prmoustache5 days ago
            The sum of all these smaller markets is still bigger than the US one.
            • carefree-bob5 days ago
When measured in terms of mouths, yes, but when measured in terms of surplus spending power that can go to Apple, certainly not.<p>India has 1.3 billion people in terms of counting mouths, but not wallets with $1000 to send to Apple for a new iPhone.
              • prmoustache5 days ago
The biggest region in terms of revenue for Apple is the Americas, but it is still only around 43%, which means Apple makes more money from Africa, Europe, Asia and Oceania combined than from the whole American continent. So that is even less for the USA alone.<p><a href="https://bullfincher.io/companies/apple/revenue-by-geography" rel="nofollow">https://bullfincher.io/companies/apple/revenue-by-geography</a>
                • carefree-bob5 days ago
You are doing a lot of heavy lifting lumping Europe and Asia together, and inserting Africa and Oceania, which is so small Apple doesn't even report revenue for it.<p><pre><code>North America is 43%
Europe is 27%
China 15%
Japan 7%
Rest of Asia: 8%
Africa, Middle East, Oceania: effectively rounds to 0%
</code></pre>Basically North America and Japan make up 50% of Apple's revenue, but are nowhere near 50% of global GDP or population.
                  • prmoustache4 days ago
Nobody is arguing against the fact that the US is their biggest market. That doesn't change the fact that more than half of their revenue is made in areas where iMessage is totally irrelevant.
      • Maxion6 days ago
        iMessage is AFAIK only really a big thing in the US.
        • smt885 days ago
          Yes, and the US is by far Apple&#x27;s most important handset market. The other iOS-majority countries are small markets for Apple.
        • kimos5 days ago
          Canada too.<p>I only have WhatsApp installed for when I leave the country.
          • barbazoo5 days ago
It depends on your social circle, obviously. I had a single person I used iMessage with, but we've since switched to SMS. Not many people where I live have iPhones.
        • camkego6 days ago
I totally buy this as someone located in the US, but what is everybody else using? It can't all be WhatsApp, can it? Is everyone sending all their connection graph data to Meta?
          • andrewl-hn5 days ago
A lot of SMBs use Instagram to connect with their clients, so Instagram's built-in messenger is a default option for a lot of people (especially women) in many parts of the world.<p>Some places have regional messengers that are very entrenched, like Line in Japan or KakaoTalk in Korea.<p>WhatsApp <i>is</i> the default option in a large number of countries, including most of the Middle East, parts of Europe, Brazil, most of Africa, and Southern Asia. To me it is surprising, too, because out of all the messaging options WhatsApp seems like the least developed and least ergonomic.<p>And yes, this does mean that most people share whatever data Big Tech wants. They use Meta to talk to each other, auto-upload their photos to Google, and click "accept" on every cookie banner so that thousands of no-name companies around the world know where they are and what they are doing at all times.
          • saberience5 days ago
            Everyone in the UK and Western Europe uses WhatsApp as their primary messenger.<p>The only time I ever open iMessage is when I get an SMS 2FA verification code or something similar.<p>Also, in the Middle East everyone also just uses WhatsApp or Telegram.
          • puelocesar6 days ago
People who care about privacy (very, very few) use Signal; everyone else uses WhatsApp
          • austhrow7435 days ago
            You understand that Facebook and Instagram are also very popular yes?
          • russelldjimmy6 days ago
            It’s WhatsApp. No one thinks about sending data to Meta. The world is much bigger than the HN bubble, where almost no one thinks about privacy implications.
            • lnsru6 days ago
Absolutely this. No one cares about privacy. 99.9% of the population has no clue how tech works. “Oh, it’s an app on my phone.” That’s what the typical consumer understands. How text travels from one phone to another is something magical.<p>I got WhatsApp because there is no other channel to communicate with customers. It’s literally used by everyone without exception. Really scary.
          • agos5 days ago
            in my country it&#x27;s Whatsapp, and has been since before it was acquired by Meta
      • KaiserPro5 days ago
I doubt 80% of iPhone users would be able to tell you whether iMessage was on or not.<p>They might say that some people's messages are green, but not much more.
      • russelldjimmy6 days ago
        No one uses iMessage in my country. Yet iPhones are sought after. Some of us just really like iPhones for the experience - not everything is a conspiracy. People can have different tastes and are more free to choose than people on HN like to believe.
        • smt885 days ago
          What country?
  • pjmlp6 days ago
What I don't get about Apple is that when everyone else was giving up on yet another VR attempt and moving into AI, they decided AI wasn't worth it and that it was the right time for a me-too VR headset.<p>So: no VR, given the price and lack of developer support, and a late arrival to AI.
    • lugu5 days ago
It is the same pattern: late on VR, late on AI. Those two technologies have a pricing problem. I would guess that Apple is working to create the conditions to make this tech cheap enough to sell to everyone.
      • pjmlp5 days ago
        For everyone that can afford Apple, that is.<p>They do have the mobile phone market duopoly advantage though, far from the 90&#x27;s mistakes that almost closed shop.
    • a13o5 days ago
      I think of it like a technology checkpoint. Make sure you got as far as everyone else when they gave up, so when the next innovation in that space comes along you can start back up on even footing.<p>You want to have your own pathway to production that dodges competitors’ patents, is somewhat defensible itself, maybe a brand, etc.
      • bigyabai5 days ago
        From the outside looking in, it&#x27;s an incredible waste of resources on Apple&#x27;s behalf to use Vision Pro as a design exercise.<p>Apple would have had a much stronger position (for much cheaper) if they supported OpenXR and obsoleted the Windows Mixed Reality&#x2F;Quest&#x2F;Hololens brand for good. As a &quot;15 competing standards&quot; platform, visionOS is a net negative project when it could have been a direct shot at Microsoft and Valve.
    • KaiserPro5 days ago
> it was the right time for a me too VR headset.<p>I think it was more that the experience was pretty much there. Hardware takes a loooong time to mature, even more so if it's a new style or package. I'm assuming they were prototyping this in 2015-18.<p>Also, Apple knows that AR glasses, if done right and not turned into a cesspool of perverts (i.e. Google Glass), will be a massive platform. However, it's going to take at least another 5 years to get something usable. So if it's possible, I expect Apple to come out with something just after Meta either gives up or has a string of failures.
  • pram6 days ago
    I&#x27;ve had it turned off since Sequoia, and this I truly appreciate. It hasn&#x27;t nagged me once to turn it or Siri on, and it isn&#x27;t mandatory.<p>When I open up JIRA or Slack I am always greeted with multiple new dialogues pointing at some new AI bullshit, in comparison. We hates it precious
    • TheDong6 days ago
      I don&#x27;t like companies forcing their newest features on me noisily and constantly trying to ship new features and see what sticks so you can&#x27;t trust whether a feature advertised one week will even be there the next.<p>However, I have even less patience for companies forcing paid-for third-party ads down my throat on a paid product. Slack at least doesn&#x27;t sell my eyeballs. Facebook, Twitter, Google&#x27;s ads are worse to me than new feature dialogues.<p>Which brings me to Apple. I pay for a $1k+ device, and yet the app store&#x27;s first result is always a sponsored bit of spam, adware, or sometimes even malware (like the fake ledger wallet on iOS, that was a sponsored result for a crypto stealer). On my other devices, I can at least choose to not use ad-ridden BS (like on android you can use F-Droid and AuroraStore, on Linux my package manager has no ads), but on iOS it&#x27;s harder to avoid.<p>Apple hasn&#x27;t sunk to Google levels in terms of ads, but they&#x27;ve crossed a line.
      • nirava5 days ago
        I agree. The App Store is really horrible. Why is it that when I&#x27;m searching for a first-party or very, very popular app, the first result and many of the other results are weird scammy malware-like things? I don&#x27;t particularly care about the stupid homepage ads tho, I think that&#x27;s just because I have &quot;personalize app store recommendations&quot; turned off.<p>Search inside Settings (both Mac and iOS) was also really, really stupid for a long while. Why are you taking me to some random accessibility toggle when I&#x27;m looking for &quot;displays&quot;? But I checked right now and it&#x27;s good.
      • colechristensen6 days ago
        I get it but... well I think of App Store as... a store. I don&#x27;t have to go there.<p>I&#x27;m actually pretty disappointed in the lack of discovery available in the App Store, but I rarely go there. I&#x27;m fine with advertising being there. I wish it was better but I&#x27;m not offended that there is paid promotion in a store.
        • DaedalusII6 days ago
          &gt;get letter from bank<p>&gt;&quot;to fix this, please install our app&quot;<p>&gt;search BankName<p>&gt;comes up with other banks, BankName&#x27;s US app (not the country you are in)<p>&gt;revolut etc (can&#x27;t use in the country you are in)<p>&gt;ten minutes later<p>even worse when it&#x27;s your telecom telling you to install their Official App so you can pay your bills or they will cut your cellular service, and you can&#x27;t find it
          • Someone6 days ago
            I don’t see what that has to do with (increased) advertising on the App Store (IMO search there has never been good) or the comment you replied to in which <i>colechristensen</i> said: “I&#x27;m actually pretty disappointed in the lack of discovery available”.<p>I think paid advertising may even help improve discoverability on the App Store because, instead of making 10 or 20 to-do list apps and hoping to get them to rank high by a combination of sheer luck and SEO tricks, scammers may only make one, and pay to get that to the top of the list.<p>In supermarkets product placement is affected by two factors: how much producers are willing to pay for a good spot (e.g. by offering lower wholesale prices if the product gets a more visible place) and vetting by the store owner.<p>I don’t think different solutions exist in the App Store. Apple doesn’t want to do much vetting, making advertising the only thing that may help (and yes, it would be awesome if there were a store that did do much vetting, but that requires a world where many different stores exist, and we aren’t there (yet))
            • TheDong5 days ago
              &gt; I think paid advertising may even help improve discoverability on the App Store<p>So my grandmother searching &quot;Powerpoint&quot; and getting malware instead of the microsoft app is good actually?<p>Let me compare some search terms and see if ads are giving me &quot;better&quot; results:<p>* ublock - surfshark vpn<p>* wordle - spammy adware word game<p>* slack - spammy adware game<p>* microsoft word - spammy spyware office app (not the one made by MS)<p>* every bank I could think of - different financial app<p>Like, this isn&#x27;t a good user experience. The ads aren&#x27;t relevant, even when you type in a hyper-popular app&#x27;s name exactly, something like 80% of the time a competitor has sniped the top spot.<p>For the &quot;microsoft word&quot; search, the spam app had an identical logo to word, and I have no doubt many people have been fooled. If you look at the reviews, some of the 1 star reviews are detailed complaints, and all the 5 star reviews are inhuman sounding &quot;This helped me do my job&quot; and &quot;great app&quot; reviews.<p>&gt; I don’t think different solutions exist in the App Store<p>Sorting roughly by popularity and reviews, and also doing a little more to combat fake reviews, seems like it would be better. It at least would mean that if I searched &quot;bank name&quot; my bank&#x27;s app would come up, since for every bank I tried the first non-ad result was in fact the bank in question.<p>It would save grandmothers around the world who just click on the first result.
              • Someone5 days ago
                &gt; So my grandmother searching &quot;Powerpoint&quot; and getting malware instead of the microsoft app is good actually?<p>Where do I claim that? My argument is that, with paid advertising, the store may show fewer items, making it easier to find the right thing.<p>And no, I’m not claiming that’s ideal; only that it c&#x2F;would be an improvement.
                • TheDong5 days ago
                  So you&#x27;re saying a hypothetically well implemented advertisement service could be better than a hypothetical poorly implemented ranking service.<p>The reality is right now we have a poorly implemented advertisement service that shows malware, and if you ignore the ads and look at the search results based on relevance, they&#x27;re clearly better.<p>The claim &quot;A good ad service would be good&quot; is a truism, but that&#x27;s not the reality we live in.
          • instalabsai6 days ago
            As someone who recently moved to NL from the US I encounter this issue about once a week and it’s blocking me from doing serious things like paying for parking, taxes, utilities or government services, all of which have apps that are only available on the Dutch app store.<p>I have a separate Dutch Apple ID I can switch to, but each time I log out I risk accidentally deleting all my data.
            • matwood6 days ago
              &gt; all of which have apps that are only available on the Dutch app store.<p>This isn’t really on Apple though. Blame the companies&#x2F;developers for geo gating their apps. It’s a simple checkbox in the store to make it available for other countries.
          • nozzlegear6 days ago
            That letter from the bank would probably include a QR code linking directly to their app oui?
        • marcus_holmes6 days ago
          Where do you install apps from then?<p>I get an app recommendation from a friend, I go to the App Store and search for it. I have to be super careful about which link I&#x27;m actually clicking on and which app I&#x27;m installing, because the App Store is riddled with spam and malware.<p>I wouldn&#x27;t mind, except that Apple charge 30% of <i>everything</i> with the justification that they are keeping the ecosystem free of spam and malware...
          • anon70006 days ago
            I’ve been installing apps from the App Store for more than a decade and have never ever accidentally downloaded spam or malware. I’m sure it’s there but it’s really not “riddled” with it in my experience searching for apps. What it’s riddled with is subscription-based apps whose free tier is worthless
          • rconti6 days ago
            I install a new app maybe once every 6 months. I agree that the app store is trash, littered with ads and casino games for kids.<p>I just don&#x27;t find it hard to find the app I want, when I want something specific, and install, and then _get the hell out of that shithole_.
          • nozzlegear6 days ago
            I thought the justification was that they curate an ecosystem of apps with loyal&#x2F;paying customers
      • karel-3d6 days ago
        It&#x27;s best to avoid App Store and look for apps on Google (with ad blocker).
      • slopinthebag6 days ago
        I haven&#x27;t noticed this at all and I wonder if you&#x27;re mistaking curation for advertising? When I open up the App Store I get a panel titled &quot;games we love&quot; and a listing of indie games that are clearly not paid-for ads. The ads in search are visibly marked as ads, and while I don&#x27;t particularly like ads in general, they are pretty easy to avoid.
        • 16bitvoid6 days ago
          On iOS, if you open the App Store and click on the Today tab (it&#x27;s the default tab if you kill and reopen), there are ads interspersed with curations.<p>For me, the second tile is an ad for Upside, some cashback app
          • slopinthebag6 days ago
            Mine is Moneris Go, and the top review is titled &quot;Garbage App!!!!&quot; lol<p>Honestly the last time I remember using the App Store was years ago and I can&#x27;t recall if they had ads or not. Imo it&#x27;s distasteful and I wish they didn&#x27;t have them. Still leagues better than the fucking ads in the start menu which caused me to give up on gaming and Windows forever.
        • TheDong6 days ago
          If I open the app store and search &quot;Gemini&quot;, the first result is &quot;ChatGPT (advertisement)&quot;<p>If I search for my bank, I get another bank. If I search for &quot;Wordle&quot;, I get a bunch of ad-supported spamware (both the ad and non-ad results) before the real NYT Games app.<p>The app store has ads in search results. This is the primary way that my technologically inept relatives end up with the wrong app installed btw, is by searching and clicking the first result, and getting complete trash adware.<p>Apple should be ashamed of selling out their users.
    • oefrha6 days ago
      Apple keeps nagging me to upgrade to godawful Tahoe. Every time there’s a system update (which includes Safari, Safari TP, CLT etc. updates) Tahoe is always default checked. Even when I specifically click on a Sequoia point update, the Tahoe update is always checked instead of that point release. This has way more destructive potential than “try our new AI feature” in apps.<p>To add insult to injury, the one AI feature that I may want to evaluate—Claude Code integration in Xcode—is gated behind Tahoe upgrade, even though it has absolutely no reason to do so, given that every other IDE integrates AI features just fine on any recent OS.<p>Edit: Oh and I’m not getting bombarded in Slack at all, maybe because my company doesn’t pay for any of the AI stuff there. Last time I got a banner or something like that was months ago.
    • nunez5 days ago
      This right here is why I uninstalled Google Maps from my phone. Them pushing the AI-generated Know Before You Go, which you can&#x27;t turn off and which blocks you from getting to reviews written _by humans who went to that restaurant_, is absurd. And this is getting normalized everywhere. Amazon with Rufus. Uber with their AI-first support. Google Workspace with Gemini EVERYWHERE (which requires hoops to properly turn off). Lots of sites with their &quot;Ask $HUMAN_NAME&quot; features.
  • an0malous5 days ago
    The best part is that it&#x27;ll all run on your device, instead of siphoning off your data to the provider. Local-first AI.<p>I think the creatives will also come around from their seething hatred of AI when it comes to Apple&#x27;s AI, because it uses more ethical training data and it feels more like they own their AI: no one&#x27;s charging them a subscription fee to use it and then using their private data for training.
    • contagiousflow5 days ago
      Why do you think &quot;creatives&quot; have a &quot;seething hatred of AI&quot;?
      • decimalenough5 days ago
        Have you talked to an artist like a musician, an illustrator or a web designer about AI? It&#x27;s ripping off their work without credit <i>and</i> making them unemployable.
        • satvikpendem5 days ago
          A lot of them already use AI, for example Photoshop&#x27;s AI features. It also seems to be a bimodal distribution: there are those who use it without caring what anyone else says about it, especially the loud minority, and there are those who don&#x27;t use it.
        • contagiousflow5 days ago
          So why would Apple&#x27;s AI features change their minds on either of those points?
  • int32_646 days ago
    Nvidia restricts gamer cards in data centers through licensing, eventually they will probably release a cheaper consumer AI card to corner the local AI market that can&#x27;t be used in data centers if they feel too much of a threat from Apple.<p>Imagine a future where Nvidia sells the exact same product at completely different prices, cheap for those using local models, and expensive for those deploying proprietary models in data centers.
    • walterbell6 days ago
      Nvidia-Mediatek Arm laptops will compete with Qualcomm and Apple, <a href="https:&#x2F;&#x2F;www.forbes.com&#x2F;sites&#x2F;jonmarkman&#x2F;2026&#x2F;03&#x2F;16&#x2F;the-arm-invasion-nvidia-targets-200-billion-pc-market-with-n1x-chips&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.forbes.com&#x2F;sites&#x2F;jonmarkman&#x2F;2026&#x2F;03&#x2F;16&#x2F;the-arm-i...</a><p><pre><code> [WSJ] sources expect.. first units in H1 2026, with GTC as the most likely unveiling stage.. NPU reportedly exceeds both Intel and AMD’s current neural processing units.. If the integrated GPU delivers RTX 5070-class performance in a thin laptop form factor, it would eliminate the need for a separate GPU die, fundamentally changing how gaming laptops are designed.</code></pre>
      • whizzter5 days ago
        If they can get Valve&#x2F;Steam onto an OS that handles most games well, that could in fact be huge, especially if the price point is a bit lower initially but with plenty of unified RAM (both for AI and for games).<p>That said, gaming laptops&#x27; cooling issues are so often around the GPU that it&#x27;d also require a seasoned manufacturer to get it right.
        • pjmlp4 days ago
          NVidia already has the Shield, and GeForce NOW.
          • whizzter4 days ago
            Isn&#x27;t that a streaming device+service? I&#x27;m thinking more along the lines that the DGX devices can run fairly modern games like Cyberpunk at good framerates. The Windows foothold has been games and enterprise, and an ARM device with a mainstream GPU fits with the work Valve has done to make Steam devices.<p>There is the Steam Deck and the Steam Machine (which can double as a desktop and is still x86 based); the only real thing missing in that lineup is a laptop form factor machine, and if NVidia can provide a somewhat cooler and fairly power-efficient alternative (gaming laptops are still damn loud) it could very well be dang enticing for many.
            • pjmlp4 days ago
              And actual native games instead of relying on Windows developers.
    • kube-system6 days ago
      There’s long been professional segmentation for GPUs, long before people started running AI models on them
    • KaiserPro5 days ago
      &gt; Nvidia restricts gamer cards in data centers through licensing<p>So does Intel, so do a lot of companies.<p>but<p>The processor is only half of the equation; memory volume, type and bandwidth are also a big factor in cost. Sure, consumer GPUs are cheaper, but they have less memory and (often) less bandwidth. The proc might be the same, or binned, but that&#x27;s only part of the price.
    • rpmisms6 days ago
      Having your cake and eating it too. Consumer goodwill and printing money.
  • harrouet6 days ago
    Thing is, Apple never considered racing against LLM runners. Apple&#x27;s success comes from human-centered design; it is not trying to launch a me-too product just because it increases their stock price. The iPod was not the first MP3 player. The iPhone was not even 3G at launch -- in the middle of the 3G marketing craze.<p>They sure got lucky that unified memory is well-suited for running AI, but they just focused on having cost- and energy-efficient computing power. They&#x27;ve had glasses in their sights for the last 10 years (when was Magic Leap&#x27;s first product?) and these chips have been developed with that in mind. But not only the chips: nothing was forcing Apple to spend the extra money for blazing fast SSDs -- but they did.<p>So yes, Apple is a hardware company. All the services it sells run on their hardware. They&#x27;ve just designed their hardware to support their users&#x27; workflows, ignoring distractions.<p>With that said, LLMs make GPU + memory bandwidth fun again. NVidia can&#x27;t do it alone, Intel can&#x27;t do it alone, but Apple positioned itself for it. It reminds me of how everyone was surprised when they introduced 64-bit ARM for everyone: very few people understood what they were doing.<p>Tbh there are NVidia GPUs that beat Apple perf 2x or 3x, but these are desktop or server chips consuming 10x the power. Now all Apple needs to do is keep delivering performance out of Apple Silicon at good prices and the best energy efficiency. Local LLMs make sense when you need them immediately, anywhere, privately -- hence you need energy efficiency.
  • ebbi5 days ago
    I think introducing the MacBook Neo now, at that price point, was a genius move. While they&#x27;re playing the waiting game on AI, they&#x27;re cementing the next generation into the Apple ecosystem and getting them to not sway towards whatever device(s) OpenAI is cooking up.<p>The MacBook Neo feels like the iPod of this generation.
    • bigyabai5 days ago
      &gt; getting them to not sway towards whatever device(s) OpenAI is cooking up.<p>Why is this even relevant? The Macbook Neo&#x27;s biggest competitor is not an imaginary future product, it&#x27;s the Chromebooks and Windows devices that constitute the majority of laptops sold.
      • ebbi5 days ago
        It&#x27;s relevant because it hooks people into an ecosystem. It builds brand loyalty when they have a good experience with the product...and the services that come with it (Music, Apple TV, iCloud, etc.).<p>I&#x27;m not suggesting MacBook Neo is a competitor to what OpenAI will release. I&#x27;m suggesting Apple will be a brand competitor, and getting people familiar with the Apple ecosystem from now will potentially lead to higher loyalty when it comes time to decide between OpenAI product vs Apple AI product.
    • rk065 days ago
      can it run windows?
  • bigyabai6 days ago
    I just realized that next year Apple&#x27;s Neural Engine will be 10 years old, just like the &quot;NPUs will <i>change</i> AI forever!&quot; puff pieces.<p>Here&#x27;s to another 10 years of scuffed Metal Compute Shaders, I guess.
  • andsoitis6 days ago
    Using the author’s logic, it is Google then that will lead.<p>Unlike Apple, they have even more devices in the field PLUS they have strong models PLUS Apple uses Google models.
    • aflag6 days ago
      Google is an advertisement company at the end of the day and that&#x27;s a conflict of interest with user privacy.
      • lern_too_spel5 days ago
        So is Apple. Worse, Apple is a company that is comfortable with the idea of restricting user control, so you can&#x27;t get privacy even if you want it.
    • HelloUsername5 days ago
      &gt; Apple uses Google models<p>Source?
  • jayd166 days ago
    My capex is even less than Apple, I can ship to user&#x27;s Apple hardware and I can&#x27;t access iPhone user photos either...so really I&#x27;m the winner.
  • sublinear6 days ago
    &gt; Pure strategy, luck, or a bit of both? I keep going back and forth on this, honestly, and I still don’t know if this was Apple’s strategy all along, or they didn’t feel in the position to make a bet and are just flowing as the events unfold maximising their optionality.<p>Maximizing the available options is in fact a &quot;strategy&quot;, and often a winning one when it comes to technology. I would love to be reminded of a list of tech innovators who were first and still the best.<p>Anyway, hasn&#x27;t this always been Apple&#x27;s strategy?
  • waffletower5 days ago
    I wouldn&#x27;t characterize Apple&#x27;s AI strategy as &quot;smart&quot;. &quot;Accidental&quot; is a perfect descriptor here. &quot;Apple Intelligence&quot; and &quot;Liquid Glass&quot; show they are asleep at the wheel. I wrote an email to Tim last year imploring him to leverage Apple Silicon and its unified memory for private AI. I didn&#x27;t tell him that I had dumped 95% of my Apple shares.
  • pdhborges6 days ago
    Apple&#x27;s accidental moat now is letting the AI-driven rise in hardware prices eat into their margins while they just expand the Mac user base.
  • -16 days ago
    Maybe they thought an investment in a product with lots of substitutes &amp; high capital requirements wasn&#x27;t very attractive.
  • rldjbpin3 days ago
    the core thesis of the article and its comparison with non-&quot;losers&quot; seem vague at best. comparing model providers with hardware vendors with software ecosystems is comparing apples to oranges.<p>if the hardware moat is to be discussed, then compare with nvidia, amd and google&#x27;s tpu division perhaps. in-house intelligence is best left alone for apple. they are relying on the &quot;peers&quot; for underlying capabilities as is. [1] [2]<p>outside of inference and the (pro&#x2F;con)sumer space, there is little on offer for the enterprise or for the people developing the lowest end of the stack. even the recent tinygrad egpu is shockingly slow [3], which might make the gb10 look much more capable for in-house training.<p>regardless, most of the industry&#x27;s &quot;moat&quot; does not appear sustainable at best. only time will tell how it will turn out for everyone, but on a positive note, apple does not put all its eggs in this basket, which is probably wiser.<p>[1] <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=40636980">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=40636980</a><p>[2] <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=46589675">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=46589675</a><p>[3] <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=C4KWsmezXm4" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=C4KWsmezXm4</a>
  • schnitzelstoat5 days ago
    When using Siri recently it really struck me how much worse it feels after using ChatGPT. It struggles to understand what I say correctly and you have to give commands in more of a &#x27;computer-friendly&#x27; form.<p>I hope they can at least fix this, as I really only use it as a hands-free system while driving.
  • Ifkaluva5 days ago
    I’m confused why he keeps calling out “the Mac Mini craze after claw went viral”. I thought the various versions of claw used remote models, not local models, and I thought the point of using a Mac mini was that it can send and receive iMessages, not anything about the hardware.
    • nedt5 days ago
      You have a lot of private data. So running it locally makes you use less credits and then you also don&#x27;t have your emails as training data for cloud models.
  • sky22246 days ago
    Honestly, I think part of the reason Apple hasn&#x27;t jumped deep into AI is due to two big reasons:<p>1) Apple is not a data company.<p>2) Apple hasn&#x27;t found a compelling, intuitive, and most of all, consistent, user experience for AI yet.<p>Regarding point 2: I haven&#x27;t seen anyone share a hands down improved UX for a user driven product outside of something that is a variation of a chat bot. Even the main AI players can&#x27;t advertise anything more than, &quot;have AI plan your vacation&quot;.
    • jpalomaki6 days ago
      Put proper LLM into Siri. Encourage developers to expose the functionality of their apps as functions, allow Siri LLM to access those (and sprinkle some magic security dust over it).<p>Boom, you have an agent in the phone capable of doing all the stuff you can do with the apps. Which means pretty much everything in our life.
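      The "expose app functions, let the assistant call them" loop described above can be sketched in a few lines. This is a toy illustration only: the registry shape, the function names (`set_timer`, `send_message`), and the hard-coded `fake_model` are all hypothetical stand-ins, not a real Apple, Siri, or App Intents API.

```python
import json

# Hypothetical registry: apps expose their functionality as plain
# functions plus a natural-language description the model can read.
REGISTRY = {}

def tool(name, description):
    """Register a function under `name` so the assistant can call it."""
    def wrap(fn):
        REGISTRY[name] = {"fn": fn, "description": description}
        return fn
    return wrap

@tool("set_timer", "Set a timer for a number of minutes")
def set_timer(minutes: int) -> str:
    return f"timer set for {minutes} min"

@tool("send_message", "Send a text message to a contact")
def send_message(to: str, body: str) -> str:
    return f"sent to {to}: {body}"

def fake_model(user_request: str) -> str:
    # Stand-in for the LLM: a real model would read REGISTRY's
    # descriptions and emit a JSON function call. Hard-coded here.
    return json.dumps({"name": "set_timer", "args": {"minutes": 10}})

def handle(user_request: str) -> str:
    # The agent loop: ask the model which tool to call, then call it.
    call = json.loads(fake_model(user_request))
    entry = REGISTRY[call["name"]]
    return entry["fn"](**call["args"])

print(handle("set a ten minute timer"))  # prints: timer set for 10 min
```

      The dispatch itself is the easy part; the "magic security dust" is deciding which registered functions a given request may invoke at all.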
    • 0rbiter6 days ago
      As for consistency, Apple&#x27;s latest UI shows they don&#x27;t give a damn any more.
      • f6v5 days ago
        I&#x27;m pretty sure most people didn&#x27;t notice any kind of inconsistency. I myself have a hard time figuring out what&#x27;s going on. I&#x27;m so focused on doing the work with the computer that I don&#x27;t have the time to notice what&#x27;s &quot;wrong&quot; with the OS. Which makes me wonder if the whole thing is blown out of proportion.
  • ajross6 days ago
    This seems mistaken to me. The core idea is that LLMs are commoditizing and that the UI (Siri in this case) is what users will stick with.<p>But... what&#x27;s the argument that the bulk of &quot;AI value&quot; in the coming decade is going to be... Siri Queries?! That seems ridiculous on its face.<p>You don&#x27;t code with Siri, you don&#x27;t coordinate automated workforces with Siri, you don&#x27;t use Siri to replace your customer service department, you don&#x27;t use Siri to build your documentation collation system. You don&#x27;t implement your auto-kill weaponry system in Siri. And Siri isn&#x27;t going to be the face of SkyNet and the death of human society.<p>Siri is what you use to get your iPhone to do random stuff. And it&#x27;s great. But ... the world is a whole lot bigger than that.
  • 464931686 days ago
    Apple is almost 2 years out from their announcement of Apple Intelligence. It has barely delivered on any of the hype. New Siri was delayed and barely mentioned in the last WWDC; none of the features are released in China.<p>In other news, people keep buying iPhones, and Apple just had its best quarter ever in China. AAPL is up 24% from last year.
    • trueno6 days ago
      i dont even care about apple intelligence. stays off. not sure anyone who is interested in what this ai shenanigans is about on a local device really cares about it either. i think people keep conflating apple intelligence with all these convos about how macs are kinda dope for joe consumer wanting to tinker with llms.<p>that&#x27;s the other part of the story that matters, not apple intelligence. this writeup tries to touch on that: apple is uniquely positioned to do really well in this arena if&#x2F;when local llms become commodities that can do really impressive stuff. we&#x27;re getting there a lot faster than we thought; someone had a trillion parameter qwen3.5 model going on his 128gb macbook and now people are thinking of more creative ways to swap out what&#x27;s in memory as needed.
    • foobar19626 days ago
      A lot of the people that bought iPhones are now buying Macs as well.
      • 464931686 days ago
        Indeed, a lot of the people that bought iPhones are now buying Macs with a binned version of the chip they already bought. So much so that Apple is in danger of running out of them.
        • foobar19625 days ago
          Indeed. I just watched a &quot;leak&quot; video that claimed Apple is already releasing the MacBook Neo 2 with A19 chip because they are running out of A18 chips.
    • adamddev16 days ago
      It&#x27;s almost like people don&#x27;t actually want LLMs all over their core tools...
  • rekabis5 days ago
    It really comes down to the old saying,<p>“The early bird might get the worm, but it’s the <i>second</i> mouse that gets the cheese.”<p>Apple is hanging back, waiting for mousetraps to be triggered as AI companies make mistakes that could not have been reliably foreseen. Then it’ll swoop in, adopt the best bits, and put out a product that is immensely polished and easy to use.<p>That has been, after all, one of its most important strategies over the years. They realized that early adopters only became industry leaders if their error rate remained low enough to keep ahead of those who let others make mistakes for them.
  • bawana5 days ago
    Any field with abstraction becomes susceptible to AI disruption. In fact, AI susceptibility is proportional to the amount of abstraction: the more abstraction, the more AI will displace people (my observation). This turns the millennia-old model upside down. Traditionally more abstraction required more schooling and experience and was rewarded with more financial rewards. Until robots and world models become safe, affordable and ubiquitous, the financial apex of careers will be those that are abstraction-resistant (technicians, EMTs, trades, etc) and those protected by regulation and the regulators (politicians, CEOs)
  • ianbooker5 days ago
    Why is Nvidia so central to LLMs? Because they embraced ML a decade ago. Apple did as well; machine learning is central to so many things in the iPhone. It&#x27;s not so surprising, then, that a strong showing in ML sets you up well for LLMs.
  • nielsbot6 days ago
    &gt; I am actually of the opinion that without some kind of bailout, OpenAI could be bankrupt in the next 18-24 months, but I am horrible at predictions<p>I find this intriguing.. Does anyone here have enough insight to speculate more?
    • sky22246 days ago
      It&#x27;s probably one of the biggest headlines right now. OpenAI has about $96 billion in debt and they don&#x27;t have a revenue generating product yet.
      • dwedge6 days ago
        I might be wrong but should you not have said profit generating? I pay them $20 a month so they have at least $20 of revenue
    • Maxion6 days ago
      1) Put data on X&#x2F;Y chart<p>2) Find ruler and pencil<p>3) Draw line<p>Doing this you will make all kinds of fun predictions.
    • operatingthetan6 days ago
      I don&#x27;t think I have unique insight on this but the common belief is they are desperately trying to reach AGI or at least have some halo model that will allow them to rise over the other companies. The problem is they have a hilariously large monthly burn paying for compute. If they don&#x27;t produce something, they are in trouble if investors stop offering capital.
  • 10keane6 days ago
    there are always three elements in the equations of a business model: 1. marginal cost 2. marginal revenue 3. value created<p>for llm providers, i always believe the key is to focus on high-value problems such as coding or knowledge work, because of the high marginal cost of serving new customers - the tokens burnt - and the low marginal revenue if the problem is not valuable enough. in this sense no llm provider can scale like previous social media platforms without taking huge losses. and no meaningful user stickiness can be built unless you have users&#x27; data. and there is no meaningful business model unless people are willing to pay a high price for the problem you solve, in the same way as paying for a saas.<p>i am really not optimistic about the llm providers other than anthropic. it seems that the rest are just burning money, and for what? there is no clear path to monetization.<p>and when local llms are powerful enough, they will soon be obsolete for the cost, and the unsustainable business model. at the end of the day, i do agree that it is the consumer hardware providers that can win this game.
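    The marginal-cost argument above can be made concrete with a toy calculation. Every number here is an assumption chosen for illustration, not real provider pricing:

```python
# Toy check of the marginal-cost argument. All numbers below are
# made-up assumptions for illustration, not real provider pricing.
subscription = 20.0            # $/month flat fee per user (assumed)
cost_per_mtok = 3.0            # $ blended inference cost per million tokens (assumed)
tokens_per_user = 12_000_000   # monthly tokens a heavy coding user burns (assumed)

marginal_cost = tokens_per_user / 1_000_000 * cost_per_mtok
margin = subscription - marginal_cost
print(f"marginal cost: ${marginal_cost:.2f}, margin: ${margin:.2f} per user/month")
# prints: marginal cost: $36.00, margin: $-16.00 per user/month
```

    Under these assumptions a heavy user costs $36 to serve against $20 of revenue, so each additional heavy user deepens the loss, which is the opposite of a social-media cost curve where marginal users are nearly free.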
    • dandaka6 days ago
      I am super bullish on Google, they are my best bet to earn from models. Mostly because they are vertically integrated (other revenue streams) + open to provide services to other companies (Apple deal).
      • 10keane2 days ago
        idk man. what do you think is their business model? is it more like ai value-add on their existing services?
  • 90d5 days ago
    Apple is a marketing company now, not an innovative tech company. They will wait until model progress is not moving in such quick progressive strides then create a new flagship product by rebadging [top model] to match their aesthetic and increase the price 3x.
  • nottorp5 days ago
    &gt; Think about the App Store. Apple didn’t build the apps, they built the platform where apps ran best, and the ecosystem followed.<p>As far as I remember Apple basically got forced into opening the platform to 3rd party developers. Not by regulation but by public pressure. It wasn&#x27;t their initial intention to allow it.
  • m3kw95 days ago
    Looks like Apple fell into a winning&#x2F;winnable position in the AI wars. Their privacy&#x2F;safety-first culture is the reason they didn&#x27;t embrace AI as aggressively as other, more maverick companies. Their AI was always hindered by privacy, and local-first AI is their saviour.
  • rvz6 days ago
    Apple never competed in the &quot;AI race&quot; in the first place, because they already knew they were already at the finish line.<p>This was really unsurprising [0].<p>[0] <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=40278371">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=40278371</a>
    • bigyabai6 days ago
      &gt; This is an obvious moat for Apple who can offer a cheaper alternative for training, inference AI server farms.<p>According to Bloomberg, Apple&#x27;s inference server farms are a flop: <a href="https:&#x2F;&#x2F;9to5mac.com&#x2F;2026&#x2F;03&#x2F;02&#x2F;some-apple-ai-servers-are-reportedly-sitting-unused-on-warehouse-shelves-due-to-low-apple-intelligence-usage&#x2F;" rel="nofollow">https:&#x2F;&#x2F;9to5mac.com&#x2F;2026&#x2F;03&#x2F;02&#x2F;some-apple-ai-servers-are-rep...</a><p><pre><code> the chips [...] are not powerful enough to run the latest frontier models like Gemini, which the new Siri will be based on</code></pre>
      • russelldjimmy6 days ago
        Go a little bit deeper than what the media directly wants you to think.
        • bigyabai5 days ago
          By all means, enlighten me. What did I miss?
    • bitpush6 days ago
Your linked comment argues the opposite.<p>&gt; Won&#x27;t be surprised for the re-introduction of Xserve again but for AI.<p>This means Apple is going to spend a lot of money standing up data centers (CapEx). And the article in question is essentially saying that Apple is smart not to spend any money.<p>It sounds like there&#x27;s a bit of wishful thinking here - whatever Apple is doing is 4D chess. Apple not spending any money - that&#x27;s genius. Apple re-introducing Xserve racks - genius.
  • boxed6 days ago
    It&#x27;s the same everywhere: great fundamentals pay off. It&#x27;s true of martial arts, dance, and absolutely about software platforms. You just have to trust that process and invest in it, which Apple does (although frustratingly not enough!).
  • javchz6 days ago
What I think was a wasted opportunity was not bringing the Xserve back; it would have been one of the few end-to-end solutions out there at scale.
  • mring336215 days ago
    The moat is that they saved their money and can remain in business indefinitely!
  • jbverschoor6 days ago
    So Apple’s AI acceleration and memory architecture is accidental, but nvidia’s is not?
    • bigyabai6 days ago
      Nvidia has research papers on accelerating Machine Learning as far back as 2014: <a href="https:&#x2F;&#x2F;research.nvidia.com&#x2F;publications?f%5B0%5D=research_area%3A26" rel="nofollow">https:&#x2F;&#x2F;research.nvidia.com&#x2F;publications?f%5B0%5D=research_a...</a>
      • jbverschoor5 days ago
        Apple&#x27;s website from 2017 <a href="https:&#x2F;&#x2F;machinelearning.apple.com&#x2F;research?page=1&amp;sort=oldest" rel="nofollow">https:&#x2F;&#x2F;machinelearning.apple.com&#x2F;research?page=1&amp;sort=oldes...</a><p>That&#x27;s also the year where they released on-chip acceleration for certain things, so they probably started a year or 2 before working on that tech? Not as accidental as assumed.
        • bigyabai5 days ago
Apple&#x27;s Neural Engine from 2017 is an NPU that&#x27;s basically obsolete today in light of Metal Compute Shaders. It <i>was</i> accidental, and Apple is redesigning their GPU architecture to subsume it.<p>CUDA, on the other hand, continues to be relevant, and the compute capabilities from 2014 are still instrumental for accelerating training and inference workloads.
  • asdev6 days ago
    Apple is just waiting for all the slop to inevitably crash to see what actually works
  • oliver2365 days ago
The whole premise is that if you don&#x27;t get to AGI first then you lose. The idea is that Anthropic with AGI could build a better version of Apple, or whatever it wants.<p>This was the conversation like 1 year ago. What has changed?
    • teekert5 days ago
Nothing changed, it&#x27;s new ground, and we are searching it with a searchlight. From some vantage points our view on things may feel quite complete, even insightful. Then we look at it differently and feel lost. It&#x27;s a process we are in together.
    • signa115 days ago
      if you actually got to A.G.I, why would you rent it out ?
  • livinglist6 days ago
But why do I feel like the quality of the software from Apple has declined sharply in recent years? The Liquid Glass design feels very unpolished and poorly thought out almost everywhere… it seems like even Apple can’t resist falling victim to AI slop
    • linguae6 days ago
I don’t think it’s AI slop. Even before modern generative AI, I’ve noticed a decline in Apple’s software quality.<p>Rather, I feel that Apple has forgotten its roots. The Mac was “the computer for the rest of us,” and there were usability guidelines backed by research. What made the Mac stand out against Windows during a time when Windows had 95%+ marketshare was the Mac’s ease of use. The Mac really stood out in the 2000s, with Panther and Tiger being compelling alternatives to Windows XP.<p>I think Apple is less perfectionistic about its software than it was 15-20 years ago. I don’t know what caused this change, but I have a few hunches:<p>0. There’s no Steve Jobs.<p>1. When the competition is Windows and Android, and there are no other commercial competitors, there’s a temptation to just be marginally better than Windows&#x2F;Android rather than the absolute best. Windows’ shooting itself in the foot doesn’t help matters.<p>2. The amazing performance and energy efficiency of Apple Silicon is carrying the Mac.<p>3. Many of the people who shaped the culture of Apple’s software from the 1980s to the 2000s are retired or have even passed away. Additionally, there are not a lot of young software developers who have heard of people like Larry Tesler, Bill Atkinson, Bruce Tognazzini, Don Norman, and other people who shaped Apple’s UI&#x2F;UX principles.<p>4. Speaking of Bruce Tognazzini and Don Norman, I am reminded of this 2015 article (<a href="https:&#x2F;&#x2F;www.fastcompany.com&#x2F;3053406&#x2F;how-apple-is-giving-design-a-bad-name" rel="nofollow">https:&#x2F;&#x2F;www.fastcompany.com&#x2F;3053406&#x2F;how-apple-is-giving-desi...</a>) where they criticized Apple’s design as being focused on form over function. It’s only gotten worse since 2015. The saving grace for Apple is that the rest of the industry has gone even further in reducing usability.<p>I think what it will take for Apple to readopt its perfectionism is competition forcing it to.
      • 72deluxe5 days ago
        I agree that there is a decline in usability. If you took a Mac from those early days, it is still very usable and everything is where you&#x27;d expect it to be. In recent years this has changed and the general iOS-ification of the OS has occurred. I have avoided upgrading to Tahoe due to seeing how awful my wife&#x27;s iPhone looks now. It looks like a children&#x27;s toy.
    • slopinthebag6 days ago
      Software quality decline has been a recognised trend long before LLMs took the limelight. Apple included.
  • amelius6 days ago
    In the larger scheme of things, the great winner will be open source, as we&#x27;ll simply use AI to recreate the entire MacOS ecosystem :)
    • 0rbiter6 days ago
      If AI coding does go anywhere and stays affordable, this would be a great outcome.
      • bredren5 days ago
        I think AI needs to greatly accelerate open hardware design and make advanced manufacturing more accessible to really make a dent.<p>User facing software is not the limiting factor in AI assisted replacement of Apple products.
  • f_allwein5 days ago
    maybe “The Only Way to Win is Not to Play”
  • gambutin6 days ago
    That’s actually by design. Apple never jumps on the tech hype bandwagon.<p>they wait until the dust settles before making their well-thought-out moves.<p>Every time they’ve jumped the hype train too quickly it hasn’t worked out, like Siri for example.
    • bitpush6 days ago
      How do you rate Vision Pro? It was not the first one, but it was certainly the best one. Total dud though, while Meta Ray Bans are selling like hot cakes (irrespective of what you think of the company)
  • nl6 days ago
&gt; Then Stargate Texas was cancelled, OpenAI and Oracle couldn’t agree terms, and the demand that had justified Micron’s entire strategic pivot simply vanished. Micron’s stock crashed.<p>Well... no. The Stargate <i>expansion</i> was cancelled; the originally planned 1.2 GW datacenter is going ahead:<p>&gt; The main site is located in Abilene, Texas, where an initial expansion phase with a capacity of 1.2 GW is being built on a campus spanning over 1,000 acres (approximately 400 hectares). Construction costs for this phase amount to around $15 billion. While two buildings have already been completed and put into operation, work is underway on further construction phases, the so-called Longhorn and Hamby sections. Satellite data confirms active construction activity, and completion of the last planned building is projected to take until 2029.<p>&gt; The Stargate story, however, is also a story of fading ambitions. In March 2026, Bloomberg reported that Oracle and OpenAI had abandoned their original expansion plans for the Abilene campus. Instead of expanding to 2 GW, they would stick with the planned 1.2 GW for this location. OpenAI stated that it preferred to build the additional capacity at other locations. Microsoft then took over the planning of two additional AI factory buildings in the immediate vicinity of the OpenAI campus, which the data center provider Crusoe will build for Microsoft. This effectively creates two adjacent AI megacampus locations in Abilene, sharing an industrial infrastructure. The original partnership dynamics between OpenAI and SoftBank proved problematic: media reports described disagreements over site selection and energy sources as points of contention.<p><a href="https:&#x2F;&#x2F;xpert.digital&#x2F;en&#x2F;digitale-ruestungsspirale&#x2F;" rel="nofollow">https:&#x2F;&#x2F;xpert.digital&#x2F;en&#x2F;digitale-ruestungsspirale&#x2F;</a><p>&gt; Micron’s stock crashed.
[the link included an image of dropping to $320]<p>Micron’s stock is back to $420 today.<p>&gt; One analysis found a max-plan subscriber consuming $27,000 worth of compute with their $200 Max subscription.<p>Actually, no. They&#x27;d miscalculated and consumed $2,700 worth of tokens.<p>The same place that checked that claim also points out:<p>&gt; In fact, Anthropic’s own data suggests the average Claude Code developer uses about $6 per day in API-equivalent compute.<p><a href="https:&#x2F;&#x2F;www.financialexpress.com&#x2F;life&#x2F;technology-why-is-claude-actually-limiting-your-usage-viral-theory-debunked-4191594&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.financialexpress.com&#x2F;life&#x2F;technology-why-is-clau...</a><p>I like Apple&#x27;s chips, but why do we put up with crappy analysis like this?
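The arithmetic behind the corrected figures is quick to check. Taking the numbers quoted above at face value (the $6/day average and the $2,700 corrected outlier are from the linked article; the 30-day month is an assumption for the sketch):

```python
# Back-of-the-envelope check of the subscription economics quoted above.
# DAILY_COMPUTE_USD and the $2,700 outlier come from the linked article;
# the 30-day month is a simplifying assumption.

DAILY_COMPUTE_USD = 6.0    # average Claude Code developer, per the article
SUBSCRIPTION_USD = 200.0   # Max plan price
DAYS_PER_MONTH = 30

avg_monthly_cost = DAILY_COMPUTE_USD * DAYS_PER_MONTH  # 6 * 30 = 180
margin = SUBSCRIPTION_USD - avg_monthly_cost           # 200 - 180 = 20

# The corrected outlier ($2,700, not $27,000) is still well above the
# subscription price - heavy users are subsidised by average ones.
outlier_ratio = 2700.0 / SUBSCRIPTION_USD              # 13.5

print(avg_monthly_cost, margin, outlier_ratio)
```

So on the article's own average, the $200 plan covers a typical user with a small margin, and even the corrected viral outlier consumed 13.5x the subscription price.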
    • bitpush6 days ago
      Apple&#x27;s reality distortion field is really really strong. People love to claim Apple is doing 4D chess, when in reality Apple has certain strengths but AI is anything but.<p>Which is why they were completely caught offguard with botched rollout of Apple Intelligence. Even when they were playing to their strengths, things have not gone for them (Apple Vision Pro). Liquid Glass has had mixed reception, and that&#x27;s often explained away as &quot;Apple is setting up a world for Spatial Computing by unifying design language&quot; and when the lead designer was fired it was like &quot;Thank God Alan Dye is gone, he was bad for Apple anyway&quot;.<p>So essentially, Apple can do no wrong.
  • rickdeckard6 days ago
I think the article is missing a whole aspect of how Apple is ensuring they don&#x27;t face <i>actual</i> competition while they&#x27;re &quot;playing it safe&quot;:<p>Even if the investment is overblown, there is market demand for the services the AI industry offers. In a competitive playing field with equal opportunities, Apple would be affected by not participating. But they are once again establishing their digital-market concept, where they prevent a level playing field for Apple users.<p>Like they did with the App Store (where Apple owns the marketplace but also competes in it), they are setting themselves up as the &quot;bank always wins&quot; gatekeeper for AI services in the Apple ecosystem, by making &quot;Apple Intelligence&quot; an ecosystem orchestration layer (and thus themselves the gatekeeper).<p>1. They made a deal with OpenAI to close Apple&#x27;s competitive gap on consumer AI, allowing users to upgrade to paid ChatGPT subscriptions from within the iOS menu. OpenAI has to pay at least (!) the usual revenue share for this, but considering that Apple integrated them directly into iOS, I&#x27;m sure OpenAI has to pay MORE than that (also supported by the fact that OpenAI doesn&#x27;t allow users to upgrade to the 200 USD Pro tier via this path, only the 20 USD Plus tier) [1]<p>2. Apple&#x27;s integration is set up to collect data from this AI digital market they created: their legal text for the initial release with OpenAI already states that all requests sent to ChatGPT are first evaluated by &quot;Apple Intelligence &amp; Siri&quot; and &quot;your request is analyzed to determine whether ChatGPT might have useful results&quot; [2]. This architecture requires(!) them to not only collect and analyze data about the type of requests, but also gives them first right of refusal for all tasks.<p>3. Developers are &quot;encouraged&quot; to integrate Apple Intelligence right into their apps [3]. This will have AI tasks first evaluated by Apple.<p>4.
Apple has confirmed that they are interested in enabling other AI providers via the same path [4]<p>--&gt; Apple will be the gatekeeper deciding whether they fulfill a task themselves or offer to hand it off to a 3rd-party service provider.<p>--&gt; Apple will be in control of the &quot;Neural Engine&quot; on the device, and I expect them to use it to run inference models they created based on the statistics from step #2 above.<p>--&gt; I expect that AI orchestration, including training those models and distributing&#x2F;maintaining them on devices, will be a significant part of Apple&#x27;s AI strategy. This could cover a lot of text and image processing and already significantly reduce their datacenter cost for cloud-based AI services. For the remaining, more compute-intensive AI services, they will be able to closely monitor (via step #2 above) when it will be most economical to in-source a service instead of &quot;just&quot; getting revenue share for it (via step #1 above).<p>So the juggernaut Apple is making sure to get the reward from those taking the risk.
I don&#x27;t see the US doing much about this anti-competitive practice so far, but at least in the EU this strategy has been identified and is being scrutinized.<p>[1] <a href="https:&#x2F;&#x2F;help.openai.com&#x2F;en&#x2F;articles&#x2F;7905739-chatgpt-ios-app-upgrading-to-a-paid-subscription" rel="nofollow">https:&#x2F;&#x2F;help.openai.com&#x2F;en&#x2F;articles&#x2F;7905739-chatgpt-ios-app-...</a><p>[2] <a href="https:&#x2F;&#x2F;www.apple.com&#x2F;legal&#x2F;privacy&#x2F;data&#x2F;en&#x2F;chatgpt-extension&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.apple.com&#x2F;legal&#x2F;privacy&#x2F;data&#x2F;en&#x2F;chatgpt-extensio...</a><p>[3] <a href="https:&#x2F;&#x2F;developer.apple.com&#x2F;apple-intelligence&#x2F;" rel="nofollow">https:&#x2F;&#x2F;developer.apple.com&#x2F;apple-intelligence&#x2F;</a><p>[4] <a href="https:&#x2F;&#x2F;9to5mac.com&#x2F;2024&#x2F;06&#x2F;10&#x2F;craig-federighi-says-apple-hopes-to-add-google-gemini-and-other-ai-models-to-ios-18&#x2F;" rel="nofollow">https:&#x2F;&#x2F;9to5mac.com&#x2F;2024&#x2F;06&#x2F;10&#x2F;craig-federighi-says-apple-ho...</a>
  • hansmayer6 days ago
    For the love of all that&#x27;s holy - folks please <i>stop</i> using AI to publish smart sounding texts. While you may think you are &quot;polishing&quot; your text, you are just disrespecting your readers. Write in your own words.
  • Kevin_VAI5 days ago
    [dead]
  • rupayanc5 days ago
    [dead]
  • gayboy5 days ago
    [flagged]
  • worthless-trash6 days ago
    Don&#x27;t worry, when apple introduce it, it&#x27;ll be revolutionary and 10% thinner.
    • Gigachad6 days ago
      Apple will just drip feed locally running models that enable minor conveniences. They will probably drop the Apple Intelligence label later and just have things with their own names like &quot;magic eraser&quot;.
      • bitpush6 days ago
Apple have had Siri for over a decade without any meaningful movement. If you think Apple is suddenly going to get better, that&#x27;s just wishful thinking. Apple has neither the expertise nor the capability to do any of that. They&#x27;d have demonstrated it with Siri a long time back.<p>What Apple does is build beautiful hardware. The software has been a shambles for a really long time.
  • dzonga5 days ago
one day people will realize that Tim Cook is one of the best killer CEOs.<p>by now he has more hits than Steve Jobs. His precision and ability to manage risk, maybe due to his supply chain background, have made Apple into the killer it is today.<p>if we were in the age of robber barons he would&#x27;ve been up there with them.
  • microslop20266 days ago
    I like how we are acting like this market is so novel and emergent revering the luck of some while lamenting the failures of others when it was all &quot;roadmapped&quot; a decade ago. It&#x27;s like watching a Shaanxi shadow puppet show with artificial folk lore about the origins of the industry. I hate reality television!