10 comments

  • causal 58 minutes ago
    Run an incredible 400B parameters on a handheld device.

    0.6 t/s, wait 30 seconds to see what these billions of calculations get us:

    "That is a profound observation, and you are absolutely right ..."
    • intrasight 31 minutes ago
      Better than waiting 7.5 million years to have it tell you the answer is 42.
      • thinkingtoilet 14 minutes ago
        Maybe you should have asked a better question. :P
        • patapong 0 minutes ago
          What do you get if you multiply six by nine?
    • WarmWash 36 minutes ago
      I don't think we are ever going to win this. The general population loves being glazed way too much.
      • baal80spam 31 minutes ago
        > The general population loves being glazed way too much.

        This is 100% correct!
        • WarmWash 19 minutes ago
          Thanks for the short warm blast of dopamine; no one else ever seems to grasp how smart I truly am!
          • timcobb 11 minutes ago
            That is an excellent observation.
      • tombert 10 minutes ago
        That's an astute point, and you're right to point it out.
        • actusual 9 minutes ago
          You are thinking about this exactly the right way.
    • amelius 4 minutes ago
      I mean, size says nothing; you could do it on a Pi Zero with sufficient storage attached.

      So this post is like saying that yes, an iPhone is Turing complete. Or at least not locked down so far that you're unable to do it.
    • Aurornis 21 minutes ago
      I thought you were being sarcastic until I watched the video and saw those words slowly appear.

      Emphasis on slowly.
  • firstbabylonian 1 hour ago
    > SSD streaming to GPU

    Is this solution based on what Apple describes in their 2023 paper 'LLM in a flash' [1]?

    1: https://arxiv.org/abs/2312.11514
    • simonw 1 hour ago
      Yes. I collected some details here: https://simonwillison.net/2026/Mar/18/llm-in-a-flash/
    • foobiekr 16 minutes ago
      This is not entirely dissimilar to what Cerebras does with their weight streaming.
      • manmal 3 minutes ago
        And IIRC the Unreal Engine Matrix demo for PS5 was streaming textures directly from SSD to the engine as well?
    • zozbot234 55 minutes ago
      A similar approach was recently featured here: https://news.ycombinator.com/item?id=47476422

      Though the iPhone Pro has very limited RAM (12GB total), which you still need for the active part of the model. (Unless you want to use Intel Optane wearout-resistant storage, but that was power hungry and thus unsuitable for a mobile device.)
      • Aurornis 22 minutes ago
        > Though iPhone Pro has very limited RAM (12GB total) which you still need for the active part of the model.

        This is why mixture of experts (MoE) models are favored for these demos: only a portion of the weights are active for each token.
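        A minimal sketch of what "only a portion of the weights are active" means in practice. The expert count, top-k, and dimensions below are made up for illustration, not the actual Qwen configuration:

```python
import numpy as np

def moe_forward(x, router_w, experts, k=2):
    """Route one token through only the top-k experts."""
    logits = x @ router_w                  # score every expert for this token
    topk = np.argsort(logits)[-k:]         # indices of the k best-scoring experts
    w = np.exp(logits[topk] - logits[topk].max())
    w /= w.sum()                           # softmax over the selected experts only
    # Only these k expert matrices need to be resident in RAM for this token;
    # the remaining experts can stay on flash until the router picks them.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, topk))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
y = moe_forward(rng.standard_normal(d), rng.standard_normal((d, n_experts)), experts)
print(y.shape)  # (8,)
```

        Real routers are trained jointly with the experts; this just shows why per-token memory scales with k, not with the total expert count.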
      • simonw 45 minutes ago
        Yeah, this new post is a continuation of that work.
  • cj00 1 hour ago
    It’s 400B, but it’s a mixture of experts, so how many are active at any time?
    • simonw 1 hour ago
      Looks like it's Qwen3.5-397B-A17B, so 17B active. https://github.com/Anemll/flash-moe/tree/iOS-App
    • anshumankmr 3 minutes ago
      Aren't most companies doing MoE at this point?
  • _air 40 minutes ago
    This is awesome! How far away are we from a model of this capability level running at 100 t/s? It's unclear to me if we'll see it from miniaturization first or from hardware gains.
    • originalvichy 18 minutes ago
      On smartphones? It's not worth it to run a model this size on a device like this. A smaller model fine-tuned for specific use cases is not only faster but possibly more accurate. All those gigs of unnecessary knowledge are useless for the tasks usually done on smartphones.
    • Tade0 21 minutes ago
      The only way to have hardware reach this sort of efficiency is to embed the model in hardware.

      This exists[0], but the chip in question is physically large and won't fit in a phone.

      [0] https://www.anuragk.com/blog/posts/Taalas.html
      • intrasight 1 minute ago
        I think for many reasons this will become the dominant paradigm for end-user devices.

        Moore's law will shrink it to 8mm soon. I think it'll be like a microSD card you plug in.

        Or we develop a new silicon process that can mimic synaptic weights in biology. Synapses have plasticity.
  • ashwinnair99 1 hour ago
    A year ago this would have been considered impossible. The hardware is moving faster than anyone's software assumptions.
    • cogman10 1 hour ago
      This isn't a hardware feat, this is a software triumph.

      They didn't make special-purpose hardware to run a model. They crafted a large model so that it could run on consumer hardware (a phone).
      • pdpi 1 hour ago
        It's both.

        We haven't had phones running laptop-grade CPUs/GPUs for that long, and that is a very real hardware feat. Likewise, nobody would've said running a 400B LLM on a low-end laptop was feasible, and that is very much a software triumph.
        • bigyabai 1 minute ago
          > We haven't had phones running laptop-grade CPUs/GPUs for that long

          Agree to disagree, we've had laptop-grade smartphone hardware for longer than we've had LLMs.
      • smallerize 57 minutes ago
        The iPhone 17 Pro launched 8 months ago with 50% more RAM and about double the inference performance of the previous iPhone Pro (also 10x prompt processing speed).
    • Aurornis 18 minutes ago
      It wasn't considered impossible. There are examples of large MoE LLMs running on small hardware all over the internet, like giant models on a Raspberry Pi 5.

      It's just so slow that nobody pursued it seriously. It's fun to see these tricks implemented, but even on this 2025 top-spec iPhone Pro the output is 100x slower than output from hosted services.
      • zozbot234 0 minutes ago
        If the bottleneck is storage bandwidth, that's not "slow". It's only slow if you insist on interactive speeds, but the point of this is that you can run cheap inference in bulk on very low-end hardware.
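        A back-of-envelope bound makes the storage-bandwidth ceiling concrete. The bandwidth and quantization numbers below are illustrative guesses, not measurements of this demo:

```python
# If every token must stream its active weights from flash, the token rate is
# bounded by storage bandwidth divided by bytes read per token.
active_params = 17e9      # A17B: ~17 billion active parameters per token
bytes_per_param = 0.5     # assuming roughly 4-bit quantization
ssd_bandwidth = 3e9       # ~3 GB/s sequential read, plausible for phone NVMe
bytes_per_token = active_params * bytes_per_param   # 8.5 GB per token
tokens_per_sec = ssd_bandwidth / bytes_per_token
print(tokens_per_sec)  # ~0.35 tokens/s upper bound
```

        Caching of shared (non-expert) weights and reused experts can push real throughput above this bound, which is consistent with the ~0.6 t/s reported above.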
    • mannyv 42 minutes ago
      The software has real software engineers working on it instead of researchers.

      Remember when people were arguing about whether to use mmap? What a ridiculous argument.

      At some point someone will figure out how to tile the weights and the memory requirements will drop again.
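      For what it's worth, the mmap trick is tiny to sketch: map the whole weight file and let the OS page in only the slices the router actually touches. The file layout and shapes below are hypothetical, not flash-moe's actual format:

```python
import mmap
import os
import tempfile
import numpy as np

def load_expert(path, expert_id, shape, dtype=np.float16):
    """Map the weight file and touch only one expert's pages."""
    count = int(np.prod(shape))
    nbytes = count * np.dtype(dtype).itemsize
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Only the pages backing this slice are faulted in from disk;
    # experts that are never routed to cost no RAM at all.
    return np.frombuffer(mm, dtype=dtype, count=count,
                         offset=expert_id * nbytes).reshape(shape)

# Demo: 4 experts of shape (2, 2), stored back to back in one file.
weights = np.arange(16, dtype=np.float16).reshape(4, 2, 2)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(weights.tobytes())
    path = f.name
e2 = load_expert(path, 2, (2, 2))
print(e2)  # expert 2: [[8., 9.], [10., 11.]]
os.unlink(path)
```

      The point of the old argument: with mmap you get demand paging and the OS page cache for free, at the cost of less control over eviction than explicit reads.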
      • snovv_crash 31 minutes ago
        The real improvement will be when the software engineers get into the training loop. Then we can have MoEs that use cache-friendly expert utilisation and maybe even learned prefetching of the next experts.
  • pier25 50 minutes ago
    https://xcancel.com/anemll/status/2035901335984611412
    • dang 8 minutes ago
      Added to toptext. Thanks!
  • jee599 1 hour ago
    [dead]
  • anemll 1 hour ago
    [flagged]
    • lostmsu 1 hour ago
      This has nothing to do with Apple, and everything to do with MoE and the fact that everyone forgot you can re-read the necessary bits of the model from disk for each token.

      This is extremely inefficient, though. For efficiency you need to batch many requests (like 32+, probably more like 128+), and when you do that with MoE you lose the advantage of only having to read a subset of the model during a single forward pass, so the trick does not work.

      But this did remind me that with dense models you might be able to use disk to achieve high throughput at high latency on GPUs that don't have a lot of VRAM.
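      The batching point is easy to sanity-check with a toy simulation. Assuming uniform routing over 64 experts with 4 active per token (both numbers made up), the fraction of experts a batch needs grows quickly with batch size:

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, k = 64, 4   # hypothetical MoE: 64 experts, 4 active per token

def frac_experts_touched(batch_size, trials=500):
    """Average fraction of experts needed by at least one request in a batch."""
    total = 0.0
    for _ in range(trials):
        active = set()
        for _ in range(batch_size):
            # assume each request routes uniformly to k distinct experts
            active.update(rng.choice(n_experts, size=k, replace=False))
        total += len(active) / n_experts
    return total / trials

for b in (1, 8, 32, 128):
    print(b, round(frac_experts_touched(b), 2))
# A batch of 1 reads ~6% of the experts; a batch of 128 reads essentially all
# of them, so the "stream only the active subset" saving disappears.
```

      Real routers are not uniform, which helps a little, but the qualitative conclusion is the same: large batches touch nearly every expert.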
  • rwaksmunski 1 hour ago
    Apple might just win the AI race without even running in it. It&#x27;s all about the distribution.
    • dzikimarian 36 minutes ago
      Because someone managed to run an LLM on an iPhone at unusable speed, Apple won the AI race? Yeah, sure.
      • naikrovek 31 minutes ago
        whoa, save some disbelief for later, don't show it all at once.
    • raw_anon_1111 1 hour ago
      Apple is already one of the winners of the AI race. It's making much more profit (i.e. it isn't losing money) on AI from ChatGPT, Claude, and Grok subscriptions through the App Store (you would be surprised at how many incels pay to make AI-generated porn videos).

      It's only paying Google $1 billion a year for access to Gemini for Siri.
      • detourdog 54 minutes ago
        Apple's entire yearly capex is a fraction of the AI spend of the presumed AI winners.
        • foobiekr 18 minutes ago
          Fantasy buildouts of hundreds of billions of dollars for gear that has a 3-year lifetime may be premature.

          Put another way, there is no demonstrated first-mover advantage in LLM-based AI so far, and all of the companies involved are money furnaces.
        • devmor 35 minutes ago
          Which is mostly insane amounts of debt leveraged entirely on the moonshot that they will find a way to turn a profit on it within the next couple of years.

          Apple's bet is intelligent; the "presumed winners" are staking our economic stability on a miracle, like a shaking gambling addict at a horse race who just withdrew his rent money.
      • qingcharles 36 minutes ago
        Plus all those pricey 512GB Mac Studios they are selling to YouTubers.
        • icedchai 12 minutes ago
          They don't offer the 512 gig RAM variant anymore. Outside of social media influencers and the occasional AI researcher, the market for $10K desktops is vanishingly small.
          • Multiplayer 0 minutes ago
            My understanding is that the 512GB offering will likely return with the new M5 Ultra coming around WWDC in June. Fingers crossed anyway!
  • simopa 1 hour ago
    It's crazy to see a 400B model running on an iPhone. But moving forward, as the information density and architectural efficiency of smaller models continue to increase, getting high-quality, real-time inference on mobile is going to become trivial.
    • volemo 5 minutes ago
      > moving forward, as the information density and architectural efficiency of smaller models continue to increase

      *If* they continue to increase.