19 comments

  • Waterluvian9 hours ago
    At what point do the OEMs begin to realize they don’t have to follow the current mindset of attaching a GPU to a PC and instead sell what looks like a GPU with a PC built into it?
    • lizknope1 hour ago
      The vast majority of computers sold today have a CPU / GPU integrated together in a single chip. Most ordinary home users don't care much about GPU or local AI performance.

      In this video Jeff is interested in GPU-accelerated tasks like AI and Jellyfin. His last video was about using a stack of 4 Mac Studios connected by Thunderbolt for AI work.

      https://www.youtube.com/watch?v=x4_RsUxRjKU

      The Apple chips have powerful CPU and GPU cores, but also a huge amount of directly connected memory (up to 512GB), unlike most Nvidia consumer-level GPUs, which have far less.
    • themafia1 hour ago
      At this point what you really need is an incredibly powerful heatsink with some relatively small chips pressed against it.
    • animal5314 hours ago
      It's funny how ideas come and go. I made this very comment here on Hacker News probably 4-5 years ago and received a few downvotes for it at the time (although I was thinking of computers in general).

      It would take a lot of work to make a GPU do current CPU-type tasks, but it would be interesting to see how it changes parallelism and our approach to logic in code.
      • PunchyHamster9 minutes ago
        It would just make everything worse. Some (if not most) tasks are far less parallelisable than typical GPU loads.
      • sharpneli52 minutes ago
        Is there any need for that? Just have a few good CPUs there and you're good to go.

        As for what the HW looks like, we already know. Look at Strix Halo as an example. We are just getting bigger and bigger integrated GPUs. Most of the flops on that chip are in the GPU part.
        • amelius13 minutes ago
          I still would like to see a general GPU back end for LLVM just for fun.
      • goku124 hours ago
        > I made this very comment here on Hacker News probably 4-5 years ago and received a few down votes for it at the time

        HN isn't always very rational about voting. It would be a loss to judge any idea on that basis.

        > It would take a lot of work to make a GPU do current CPU type tasks

        In my opinion, that would be counterproductive. The advantage of GPUs is that they have a large number of very simple cores. Instead, just put a few separate CPU cores on the same die, or on a separate die. Or you could even have a forest of GPU cores with a few CPU cores interspersed among them - sort of like how modern FPGAs have logic tiles, memory tiles and CPU tiles spread across the die. I doubt it would be called a GPU at that point.
        • zozbot2343 hours ago
          GPU compute units are not that simple; the main difference from CPUs is that they generally use a combination of wide SIMD and wide SMT to hide latency, as opposed to the power-intensive out-of-order execution used by CPUs. Performing tasks that can't take advantage of either SIMD or SMT on GPU compute units might be a bit wasteful.

          Also, you'd need to add extra hardware for various OS support functions (privilege levels, address space translation/MMU) that are currently missing from the GPU. But the idea is otherwise sound; you can think of the proposed 'Mill' CPU architecture as one variety of it.
          • goku1229 minutes ago
            > GPU compute units are not that simple

            Perhaps I should have phrased it differently. CPU and GPU cores are designed for different types of loads. The rest of your comment seems similar to what I was imagining.

            Still, I don't think that enhancing the GPU cores with CPU capabilities (OoO execution, privilege rings, an MMU, etc. from your examples) is the best idea. You may end up with the advantages of neither and the disadvantages of both. I was suggesting that you could instead have a few dedicated CPU cores distributed among the numerous GPU cores. Finding the right balance of GPU to CPU cores may be the key to achieving the best performance on such a system.
        • Den_VR4 hours ago
          As I recall, Gartner made the outrageous claim that upwards of 70% of all computing will be "AI" within some number of years - nearly the end of CPU workloads.
          • deliciousturkey1 hour ago
            I'd say over 70% of all computing has already been non-CPU for years. If you look at your typical phone or laptop SoC, the CPU is only a small part. The GPU takes up the majority of the area, with other accelerators also taking significant space. Manufacturers would not spend that money on silicon if it were not already being used.
            • PunchyHamster7 minutes ago
              Going by raw operations performed, that's probably true for computers/laptops whenever the workload uses 3D rendering for the UI. Watching a YouTube video is essentially the CPU shuttling data between the internet and the GPU's video decoder, and feeding the GPU-accelerated UI.
            • goku1221 minutes ago
              > I'd say over 70% of all computing has already been non-CPU for years.

              > If you look at your typical phone or laptop SoC, the CPU is only a small part.

              Keep in mind that die area doesn't always correspond to the throughput (average rate) of the computations done on it. That area may be allocated for higher computational bandwidth (peak rate) and lower latency - in other words, to get the results of a large number of computations faster, even if the circuits then idle for the remaining cycles. I don't know the situation on mobile SoCs with regard to those quantities.
              • deliciousturkey5 minutes ago
                This is true, and my example was a very rough metric. But the computation density per unit area is actually way, way higher on GPUs compared to CPUs. CPUs only spend a tiny fraction of their area doing actual computation.
          • yetihehe1 hour ago
            Looking at home computers, most of the "computing", when counted as flops, is done by GPUs anyway, just to show more and more frames. CPUs are only used to organise all the data to be crunched by the GPUs. The rest is browsing web pages and running Word or Excel a few times a month.
      • deliciousturkey1 hour ago
        HN in general is quite clueless about topics like hardware, high-performance computing, graphics, and AI performance. So you probably shouldn't care if you are downvoted, especially if you honestly know you are right.

        Also, I'd say that if you buy, for example, a MacBook with an M4 Pro chip, it already is a big GPU attached to a small CPU.
        • philistine11 minutes ago
          People on here tend to act as if 20% of all computers sold were laptops, when it’s the reverse.
    • nightshift18 hours ago
      Exactly. With the Intel-Nvidia partnership signed this September, I expect to see some high-performance single-board computers released very soon. I don't think the ATX form factor will survive another 30 years.
      • bostik5 hours ago
        One should also remember that Nvidia *does* have organisational experience in designing and building CPUs[0].

        They were a pretty big deal back in ~2010, and I have to admit I didn't know that Tegra was powering the Nintendo Switch.

        0: https://en.wikipedia.org/wiki/Tegra
        • goku123 hours ago
          I had a Xolo Tegra Note 7 tablet (marketed in the US as the EVGA Tegra Note 7) around 2013; I preordered it, as far as I remember. It had a Tegra 4 SoC with a quad-core Cortex-A15 CPU and a 72-core GeForce GPU. Nvidia used to claim it was the fastest SoC for mobile devices at the time.

          To this day, it's the best mobile/Android device I have ever owned. I don't know if it was the fastest, but it certainly was the best-performing one I ever had. UI interactions were smooth, apps were fast, the screen was bright, touch was perfect, and it still had long enough battery life. The device felt very thin and light, but sturdy at the same time. It had a pleasant matte finish and a magnetic cover that lasted as long as the device did. It spoiled the feel of later tablets for me.

          It had only 1 GB of RAM. We have much more powerful SoCs today, but nothing ever felt that smooth (iPhones excluded). I don't know why. Perhaps Android was light enough for it back then, or it may have had a very good selection and integration of subcomponents. I was very disappointed when Nvidia discontinued the Tegra SoC family and tablets.
    • pjmlp6 hours ago
      So basically going back to the old days of Amiga and Atari, in a certain sense, when PCs could only display text.
      • goku124 hours ago
        I'm not familiar with that history. Could you elaborate?
        • pjmlp3 hours ago
          In the home computer universe, such computers were the first ones with a programmable graphics unit that did more than push a framebuffer to the screen.

          PCs at the time were still displaying text - or, if you were lucky enough to own a Hercules card, grey text, or maybe a CGA card with 4 colours.

          Meanwhile the Amigas, which I am more comfortable with, were doing this in the mid-80s:

          https://www.youtube.com/watch?v=x7Px-ZkObTo

          https://www.youtube.com/watch?v=-ga41edXw3A

          The original Amiga 1000 had on its motherboard (later shrunk to fit into the Amiga 500) a Motorola 68000 CPU, a programmable sound chip with DMA channels (Paula), and a programmable blitter chip (Agnus, an early GPU of sorts).

          You would build the audio or graphics instructions for the respective chip in RAM, set the DMA parameters, and let them loose.
          • goku1218 minutes ago
            Thanks! Early computing history is very interesting (I know this wasn't the earliest). It also sometimes explains certain odd design decisions that are still followed today.
          • nnevatie2 hours ago
            Hey! I had an Amiga 1000 back in the day - it was simply awesome.
    • amelius1 hour ago
      Maybe at the point where you can run Python directly on the GPU. At which point the GPU becomes the new CPU.

      Anyway, we're still stuck with "G" for "graphics", so the name doesn't make much sense any more, and I'm actually looking for a vendor that takes its mission more seriously.
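      (You can already get part of the way there with Numba, which JIT-compiles a restricted subset of Python into CUDA kernels. A minimal sketch, assuming a CUDA-capable card and the numba package installed:)

          # Compile a restricted subset of Python to a GPU kernel with Numba's CUDA JIT.
          import numpy as np
          from numba import cuda

          @cuda.jit
          def add_kernel(x, y, out):
              i = cuda.grid(1)          # global thread index
              if i < x.size:
                  out[i] = x[i] + y[i]

          x = np.arange(1_000_000, dtype=np.float32)
          y = 2 * x

          d_x = cuda.to_device(x)               # explicit host-to-device copies
          d_y = cuda.to_device(y)
          d_out = cuda.device_array_like(x)

          threads = 256
          blocks = (x.size + threads - 1) // threads
          add_kernel[blocks, threads](d_x, d_y, d_out)
          print(d_out.copy_to_host()[:5])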
  • yjftsjthsd-h18 hours ago
    I've been kicking this around in my head for a while. If I want to run LLMs locally, a decent GPU is really the only important thing. At that point, the question becomes, roughly: what is the cheapest computer to tack on the side of the GPU? Of course, that assumes that everything does in fact work; unlike OP I am barely in a position to *understand* e.g. BAR problems, let alone try to fix them, so what I actually did was build a cheap-ish x86 box with a half-decent GPU and called it a day :) But it is still stuck in my brain: there must be a more efficient way to do this, especially if all you need is just enough computer to shuffle data to and from the GPU and serve that over a network connection.
    • binsquare17 hours ago
      I run a crowd-sourced website collecting data on the best and cheapest hardware setups for local LLMs here: https://inferbench.com/

      Source code: https://github.com/BinSquare/inferbench
      • kilpikaarna14 hours ago
        Nice! Though for older hardware it would be nice if the price reflected the current second-hand market (harder to get data for, I know). E.g. the Nvidia RTX 3070 ranks as the second-best GPU in tok/s/$ even at its MSRP of $499, but you can get one for half that now.
        • binsquare13 hours ago
          Great idea - I've added it by manually browsing eBay for that initial data.

          So it's just a static value in this hardware list: https://github.com/BinSquare/inferbench/blob/main/src/lib/hardware-data.ts#L55

          Let me know if you know of a better way, or contribute :D
      • nodja15 hours ago
        Cool site. I noticed the 3090 is on there twice:

        https://inferbench.com/gpu/NVIDIA%20GeForce%20RTX%203090

        https://inferbench.com/gpu/NVIDIA%20RTX%203090
        • binsquare15 hours ago
          Oh nice catch, I'll fix that.

          Edit: Fixed
      • jsight11 hours ago
        It seems like verification might need to be improved a bit? I looked at Mistral-Large-123B, and someone is claiming 12 tokens/sec on a single RTX 3090 at FP16.

        Perhaps some filter could cut out submissions that don't really make sense?
    • tcdent17 hours ago
      We're not yet at the point where a single PCIe device will get you anything meaningful; IMO 128 GB of RAM available to the GPU is essential.

      So while you don't need a ton of compute on the CPU, you do need enough PCIe lanes to address multiple devices. A relatively low-spec AMD EPYC processor is fine if the motherboard exposes enough lanes.
      • p1necone14 hours ago
        I'm holding out for someone to ship a GPU with DIMM slots on it.
        • tymscar12 hours ago
          DDR5 is a couple of orders of magnitude slower than really good VRAM. That's one big reason.
          • zrm5 hours ago
            DDR5 is ~8 GT/s, GDDR6 is ~16 GT/s, GDDR7 is ~32 GT/s. It's faster, but the difference isn't crazy, and if the premise were to have a lot of slots then you could also have a lot of channels. 16 channels of DDR5-8200 would have slightly more memory bandwidth than an RTX 4090.
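            (The arithmetic behind that comparison, with each DDR5 channel being 64 bits wide and the 4090's published 384-bit / 21 GT/s memory figures:)

                # Rough bandwidth check for the comparison above.
                channels = 16
                transfers_per_s = 8.2e9        # DDR5-8200
                bytes_per_transfer = 8         # 64-bit channel
                ddr5_bw = channels * transfers_per_s * bytes_per_transfer / 1e9
                print(f"16ch DDR5-8200: ~{ddr5_bw:.0f} GB/s")      # ~1050 GB/s

                rtx_4090_bw = (384 // 8) * 21e9 / 1e9              # 384-bit bus at 21 GT/s
                print(f"RTX 4090:       ~{rtx_4090_bw:.0f} GB/s")  # ~1008 GB/s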
          • dawnerd12 hours ago
              But it would still be faster than splitting the model up across a cluster, right? I've also wondered why they haven't just shipped GPUs in sockets, like CPUs.
            • cogman1010 hours ago
                Man, I'd love to have a GPU socket. But it'd be pretty hard to get a standard going that everyone would support. Look at sockets for CPUs: we barely get compatibility across two generations.

                But boy, a standard GPU socket so you could easily BYO cooler would be nice.
          • cogman1011 hours ago
              For AI, "really good" isn't really a requirement. If a middle-ground memory module could be made, it'd be pretty appealing.
        • anon2578312 hours ago
          Would that be worth anything, though? What about the overhead of clock cycles needed for loading from and storing to RAM? Might not amount to a net benefit for performance, and it could also potentially complicate heat management I bet.
        • kristianp12 hours ago
          A single CAMM might suit better.
      • skhameneh17 hours ago
        There is plenty that can run within 32/64/96 GB of VRAM. IMO models like Phi-4 are underrated for many simple tasks. Some quantized Gemma 3 models are quite good as well.

        There are larger/better models too, but those tend to really push the limits of 96 GB.

        FWIW, once you start pushing into 128 GB+, the ~500 GB models really start to become attractive, because at that point you're probably wanting just a bit more out of everything.
        • tcdent17 hours ago
          IDK, all of my personal and professional projects involve pushing the SOTA to the absolute limit. Using anything other than the latest OpenAI or Anthropic model is out of the question.

          Smaller open-source models are a bit like 3D printing in the early days: fun to experiment with, but really not that valuable for anything other than making toys.

          Text summarization, maybe? But even then I want a model that understands the complete context and does a good job. Even for things like "generate one sentence about the action we're performing", I usually find I can just incorporate it into the output schema of a larger request instead of making a separate request to a smaller model.
          • xyzzy12316 hours ago
            It seems to me like the use case for local GPUs is almost entirely privacy.

            If you buy a 15k AUD RTX 6000 96GB, that card will _never_ pay for itself on a gpt-oss:120b workload vs just using OpenRouter - no matter how many tokens you push through it - because the cost of residential power in Australia means you cannot generate tokens cheaper than the cloud even if the card were free.
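            (A back-of-envelope version of that argument; every number below is an assumption chosen just to show the shape of the calculation, not a measurement:)

                # Illustrative electricity-only cost per million generated tokens.
                power_w = 400            # assumed average draw of the card under load
                tok_per_s = 60           # assumed generation speed for a ~120B model
                tariff_aud = 0.35        # assumed residential price, A$/kWh
                cloud_aud_per_m = 0.50   # assumed hosted price per million output tokens

                seconds_per_m = 1_000_000 / tok_per_s
                kwh_per_m = power_w * seconds_per_m / 3_600_000   # W*s -> kWh
                print(f"electricity per 1M tokens: A${kwh_per_m * tariff_aud:.2f}")  # ~A$0.65
                print(f"hosted price per 1M tokens: A${cloud_aud_per_m:.2f}")

            Under those assumptions the electricity alone already costs more per token than the hosted price, before the price of the card is even counted.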
            • girvo15 hours ago
              > because the cost of residential power in Australia

              This *so* doesn't really matter to your overall point, which I agree with, but:

              The rise of rooftop solar and home battery storage flips this a bit in Australia now, IMO. At least where I live, every house has solar panels on it.

              Not worth it *just* for local LLM usage, but an interesting change to energy economics IMO!
            • joefourier15 hours ago
              There are a few more considerations:

              - You can use the GPU for training and run your own fine-tuned models

              - You can have much higher generation speeds

              - You can sell the GPU on the used market in ~2 years' time for a significant portion of its value

              - You can run other types of models, like image, audio or video generation, that are not available via an API or cost significantly more

              - Psychologically, you don't feel like you have to constrain your token spending, and you can, for instance, just leave an agent to run for hours or overnight without feeling bad that you just "wasted" $20

              - You won't be running the GPU at max power constantly
            • 1515515 hours ago
              Or censorship avoidance
          • popalchemist16 hours ago
            This is simply not true. Your heuristic is broken.

            The recent Gemma 3 models, which are produced by Google (a little startup - heard of 'em?), outperform the last several OpenAI releases.

            Closed does not necessarily mean better. Plus, the local ones can be fine-tuned to whatever use case you may have, won't have any inputs blocked by censorship functionality, and you can optimize them by distilling to whatever spec you need.

            Anyway, all that is extraneous detail - the important thing is to decouple "open" and "small" from "worse" in your mind. The most recent Gemma 3 model specifically is incredible, and it makes sense, given that Google has access to many times more data than OpenAI for training (something like a factor of 10 at least). Which is a very straightforward idea to wrap your head around: Google was scraping the internet for decades before OpenAI even entered the scene.

            So just because their Gemma model is released in an open-source (open-weights) way doesn't mean it should be discounted. There's no magic voodoo happening behind the scenes at OpenAI or Anthropic; the models are essentially of the same type. But Google releases theirs to undercut the profitability of their competitors.
            • tcdent15 hours ago
              This one? https://artificialanalysis.ai/models/gemma-3-27b
    • seanmcdirmid17 hours ago
      And you don't want to go the M4 Max/M3 Ultra route? It works well enough for most mid-sized LLMs.
    • zeusk18 hours ago
      Get the DGX Spark computers? They’re exactly what you’re trying to build.
      • Gracana10 hours ago
        They’re very slow.
        • geerlingguy9 hours ago
          They're okay, generally, but slow for the price. You're paying more for the ConnectX-7 networking than for inference performance.
          • Gracana8 hours ago
            Yeah, I wouldn't complain if one dropped in my lap, but they're not at the top of my list for inference hardware.

            Although... is it possible to pair a fast GPU with one? Right now my inference setup for large MoE LLMs has shared experts in system memory, with the KV cache and dense parts on a GPU, and a Spark would do a better job of handling the experts than my PC, if only it could talk to a fast GPU.

            [edit] Oof, I forgot these have only 128GB of RAM. I take it all back; I still don't find them compelling.
    • dist-epoch17 hours ago
      This problem was already solved 10 years ago by crypto-mining motherboards, which have a large number of PCIe slots, a CPU socket, one memory slot, and not much else.

      > Asus made a crypto-mining motherboard that supports up to 20 GPUs

      https://www.theverge.com/2018/5/30/17408610/asus-crypto-mining-motherboard-gpus

      For LLMs you'll probably want a different setup, with more memory and some M.2 storage.
      • jsheard17 hours ago
        Those only gave each GPU a single PCIe lane though, since crypto mining barely needed to move any data around. If your application doesn't fit that mould then you'll need a much, much more expensive platform.
        • dist-epoch17 hours ago
          After you load the weights into the GPU and keep the KV cache there too, you don't need any other significant traffic.
          • numpad017 hours ago
            Even in tensor-parallel modes? I thought that only works if you're fine with stalling all but n GPUs for n users at any given moment.
      • skhameneh17 hours ago
        In theory, it's only sufficient for pipeline parallelism, due to the limited lanes and interconnect bandwidth.

        Generally, scalability on consumer GPUs falls off between 4-8 GPUs for most. Those running more GPUs are typically using a higher quantity of smaller GPUs for cost effectiveness.
      • zozbot23417 hours ago
        M.2 is mostly just a different form factor for PCIe anyway.
    • Eisenstein15 hours ago
      There is a whole section in here on how to spec out a cheap rig and what to look for:

      * https://jabberjabberjabber.github.io/Local-AI-Guide/
  • numpad017 hours ago
    Not sure what was unexpected about the multi-GPU part.

    It's very well known that most LLM frameworks, including llama.cpp, split models by layers, which have a sequential dependency, so multi-GPU setups are completely stalled unless there are n_gpu users/tasks running in parallel. It's also known that some GPUs are faster at "prompt processing" and some at "token generation", so combining Radeon and Nvidia cards sometimes does something useful. Reportedly the inter-layer transfer sizes are in the kilobyte range, and PCIe x1 is plenty, or something like that.

    Doing better takes an appropriate backend with "tensor parallel" mode support, which splits the neural network parallel to the direction of data flow - and that obviously benefits substantially from a good interconnect between GPUs, like PCIe x16 or NVLink/Infinity Fabric bridge cables, and/or inter-GPU DMA over PCIe (called GPU P2P or GPUDirect or some lingo like that).

    Absent those, I've read that people can sometimes watch GPU utilization spikes walk across their GPUs in nvtop-style tools.

    Looking for a way to break up tasks for LLMs so that there are multiple tasks to run concurrently would be interesting - maybe creating one "manager" and a few "delegated engineer" personalities. Or simulating multiple domains of the brain (speech center, visual cortex, language center, etc.) communicating in tokens might be an interesting way to work around this problem.
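    (For the parallelism distinction above, a toy numpy sketch of the tensor-parallel idea, not tied to any particular framework: each device holds a column slice of a layer's weight matrix, the slices are multiplied in parallel, and the partial outputs are concatenated.)

        # Toy tensor parallelism: two "devices" each hold half the columns of one
        # layer's weights, compute their half of the output, and the halves are joined.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal((1, 4096))        # activations for one token
        W = rng.standard_normal((4096, 8192))     # one layer's weight matrix

        W0, W1 = np.hsplit(W, 2)                  # column split across two devices
        y_parallel = np.concatenate([x @ W0, x @ W1], axis=1)  # both halves can run at once

        assert np.allclose(y_parallel, x @ W)     # same result as the unsplit layer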
    • syntaxing10 hours ago
      There are some technical implementations that make it more efficient, like EXO [1]. Jeff Geerling recently did a review of a 4 Mac Studio cluster with RDMA support, and you can see that EXO has a noticeable advantage [2].

      [1] https://github.com/exo-explore/exo

      [2] https://www.youtube.com/watch?v=x4_RsUxRjKU
      • sgt5 hours ago
        At this point I'd consider a cluster of top-specced Mac Studios to be worthwhile in production. I just need to host them properly in a rack in a co-lo data center.
    • zozbot23417 hours ago
      > Looking for a way to break up tasks for LLMs so that there are multiple tasks to run concurrently would be interesting - maybe creating one "manager" and a few "delegated engineer" personalities.

      This is pretty much what "agents" are for. The manager model constructs prompts and contexts that the delegated models can work on in parallel, returning results when they're done.
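      (A bare-bones sketch of that fan-out/gather pattern; query_model here is a hypothetical stand-in for whatever inference endpoint each worker actually calls.)

          # "Manager" hands independent prompts to workers in parallel, then gathers results.
          from concurrent.futures import ThreadPoolExecutor

          def query_model(prompt: str) -> str:
              # Hypothetical placeholder: call your local inference server here.
              return f"(result for: {prompt})"

          subtasks = [
              "Summarise module A",
              "Summarise module B",
              "List the open TODOs in module C",
          ]

          with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
              results = list(pool.map(query_model, subtasks))
          print(results)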
    • nodja15 hours ago
      > Reportedly the inter-layer transfer sizes are in the kilobyte range, and PCIe x1 is plenty, or something like that.

      Not an expert, but napkin math tells me that more often than not this will be in the order of megabytes, not kilobytes, since it scales with sequence length.

      Example: Qwen3 30B has a hidden state size of 5120; even quantized to 8 bits, that's 5120 bytes per token. It would pass the MB boundary with just a little over 200 tokens. Still not much of an issue when a single PCIe lane is ~2 GB/s.

      I think device-to-device latency is more of an issue here, but I don't know enough to assert that with confidence.
      • remexre13 hours ago
        For each token generated, you only send one token’s worth between layers; the previous tokens are in the KV cache.
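        (Putting rough numbers on both cases, using the 5120 hidden size and 8-bit activations assumed above:)

            # Rough per-boundary transfer estimate for a layer (pipeline) split.
            hidden_size = 5120        # elements per token crossing a layer boundary
            bytes_per_elem = 1        # 8-bit activations (assumption from the thread)
            prompt_len = 2048         # example prompt length for prefill

            decode_bytes = hidden_size * bytes_per_elem              # one new token per step
            prefill_bytes = hidden_size * bytes_per_elem * prompt_len

            pcie_x1_bps = 2.0e9       # ~2 GB/s on one PCIe 4.0 lane (rough figure from above)
            print(f"decode:  {decode_bytes / 1024:.0f} KiB per step "
                  f"(~{decode_bytes / pcie_x1_bps * 1e6:.1f} us on x1)")
            print(f"prefill: {prefill_bytes / 1e6:.1f} MB for a {prompt_len}-token prompt")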
    • scotty793 hours ago
      > Not sure what was unexpected about the multi-GPU part. It's very well known that most LLM frameworks, including llama.cpp, split models by layers, which have a sequential dependency, so multi-GPU setups are completely stalled

      Oh, I thought the point of transformers was being able to split the load vertically to avoid sequential dependencies. Is that true just for training, or not at all?
  • 3eb7988a166318 hours ago
    Datapoints like this really make me reconsider my daily driver. I should be running one of those $300 mini PCs at <20W. With ~flat CPU performance gains, it would be fine for the next 10 years. Just remote into my beefy workstation when I actually need to do real work. Browsing the web, watching videos, even playing some games is easily within their wheelhouse.
    • PunchyHamster3 minutes ago
      Slapping $300 worth of solar panels on your roof/balcony will probably get you ahead on power usage.
    • themafia1 hour ago
      > I should be running one of those $300 mini PCs at <20W.

      Yes. They're basically laptop chips at this point. The thermals are worse, but the chips are perfectly modern and can handle reasonably large workloads. I've got an 8-core Ryzen 7 with Radeon 780 graphics and 96GB of DDR5. Outside of AAA gaming this thing is absolutely fine.

      The power draw is a huge win for me. It's like 6W at idle. I live remotely, so grid power is somewhat unreliable, and saving watts when running on solar batteries extends their lifetime massively. I'm thrilled with them.
    • samuelknight17 hours ago
      Switching from my 8-core Ryzen mini PC to an 8-core Ryzen desktop makes my unit tests run way faster. TDP limits can tip you off to very different performance envelopes in otherwise similarly specced CPUs.
      • adrian_b12 hours ago
        A full-size desktop computer will always be much faster for any workload that fully utilizes the CPU.

        However, a full-size desktop computer seldom makes sense as a *personal* computer, i.e. as the computer that interfaces with a human via display, keyboard and pointer.

        For most of the activities done directly by a human - reading and editing documents, browsing the Internet, watching movies and so on - a mini PC is powerful enough. The only exception is playing games designed for big GPUs, but there are many computer users who are not gamers.

        In most cases the optimal setup is to use a mini PC as your personal computer and a full-size desktop as a server on which you launch any time-consuming tasks, e.g. compilation of big software projects, EDA/CAD simulations, test suites, etc.

        The desktop used as a server can use Wake-on-LAN to stay powered off when not needed and wake up whenever it must run some task remotely.
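        (Waking it from the mini PC is only a few lines of code; a minimal sketch that builds and broadcasts the standard "magic packet", with a placeholder MAC address and assuming WoL is enabled in the desktop's firmware and NIC:)

            # Wake-on-LAN: 6 x 0xFF followed by the target MAC repeated 16 times, sent as a UDP broadcast.
            import socket

            def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
                mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
                packet = b"\xff" * 6 + mac_bytes * 16
                with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                    s.sendto(packet, (broadcast, port))

            wake("aa:bb:cc:dd:ee:ff")   # placeholder: the desktop NIC's MAC address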
        • whatevaa3 hours ago
          Not everything supports remoting well - many IDEs, for example - unless you run RDP with the whole graphical session on the remote machine.

          Also, having to buy two computers costs money. It makes sense to use one for both use cases if you have to buy the desktop anyway.
      • loeg13 hours ago
        Even if you could cool the full TDP in a micro PC, in a full-size desktop you might be able to use a massive AIO radiator with fans running at very slow, very quiet speeds instead of the jet-turbine howl of the micro case. The quiet, and the ease of working in a bigger case, are mostly a good tradeoff for a slightly larger form factor under a desk.
    • ekropotin18 hours ago
      As an experiment, I decided to try using a Proxmox VM with an eGPU and the USB bus passed through to it as my main PC for browsing and working on hobby projects.

      It's just 1 vCPU with 4 GB of RAM, and you know what? It's more than enough for these needs. I think hardware manufacturers have falsely convinced us that every professional needs a beefy laptop to be productive.
    • reactordev17 hours ago
      I went with a Beelink for this purpose. Works great.

      Keeps the desk nice and tidy while "the beasts" roar in a soundproofed closet.
    • jasonwatkinspdx13 hours ago
      For just basic Windows desktop stuff, a $200 NUC has been good enough for like 15 years now.
  • haritha-j3 hours ago
    I currently have a £500 laptop hooked up to an eGPU box with a £700 GPU. It's not a bad setup.
  • jonahbenton19 hours ago
    So glad someone did this. I have been running big GPUs in eGPU enclosures connected to spare laptops and thinking: why not Pis?
  • omneity10 hours ago
    I wish for a hardware + software solution enabling direct PCIe interconnect using lanes independent of the chipset/CPU - a PCIe mesh of sorts.

    With the right software support from, say, PyTorch, this could suddenly turn old GPUs and underpowered PCs like the one in TFA into very attractive and competitive solutions for training and inference.
    • snuxoll9 hours ago
      PCIe already allows DMA between peers on the bus, but, as you pointed out, the traces for the lanes have to terminate somewhere. However, it doesn't have to be the CPU (which is, of course, the PCIe root in modern systems) handling the traffic - a PCIe switch may be used to facilitate DMA between devices attached to it, if it supports routing DMA traffic directly.
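      (From the software side, a quick way to see whether peer-to-peer access is actually available between two cards - a sketch assuming a machine with at least two CUDA devices and PyTorch installed:)

          # Check and exercise GPU peer-to-peer access from PyTorch.
          import torch

          assert torch.cuda.device_count() >= 2, "needs at least two GPUs"
          print("GPU0 -> GPU1 peer access:", torch.cuda.can_device_access_peer(0, 1))

          # Device-to-device copy; with P2P enabled it goes over the bus/switch
          # directly, otherwise it is staged through host memory.
          a = torch.randn(1024, 1024, device="cuda:0")
          b = a.to("cuda:1")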
      • ComputerGuru9 hours ago
        That's what happened in TFA.
        • omneity1 hour ago
          You're right. Let me correct myself: a hobbyist-friendly hardware solution. Dolphin's PCIe switches cost more than 8 RTX 3090s on a Threadripper machine.
  • moebrowne4 hours ago
    I'd be interested to see if workloads like Folding@home could be efficiently run this way. I don't think they need a lot of bandwidth.
  • pjmlp5 hours ago
    Of course. Just go to any computer store: most gamer setups on affordable budgets go with the combo of a beefy GPU and an i5, instead of an i7 or i9 Intel CPU.
  • Wowfunhappy18 hours ago
    I really would have liked to see gaming performance, although I realize it might be difficult to find a AAA game that supports ARM. (Forcing the Pi to emulate x86 with FEX doesn't seem entirely fair.)
    • 3eb7988a166318 hours ago
      You might have to thread the needle to find a game which does not bottleneck on the CPU.
  • kristjansson18 hours ago
    Really, why have the PCI/CPU artifice at all? Apple and Nvidia have the right idea: put the MPP on the same die/package as the CPU.
    • PunchyHamster2 minutes ago
      We need low-power but high-PCIe-lane-count CPUs for that, purely for shoving models from NVMe to the GPU.
    • bigyabai17 hours ago
      > put the MPP on the same die/package as the CPU.

      That would help in latency-constrained workloads, but I don't think it would make much of a difference for AI or most HPC applications.
  • kgeist16 hours ago
    What about constrained decoding (with JSON schemas)? I noticed my vLLM instance is using one CPU core at 100%.
  • jauntywundrkind15 hours ago
    PCIe 3.0 is the nice, easy, convenient generation where 1 lane = 1 GB/s. Given the overhead, that's pretty close to 10Gb Ethernet speeds (with lower latency, though).

    I do wonder how long the cards are going to need host systems at all. We've already seen GPUs with M.2 SSDs attached - the Radeon Pro SSG dates back to 2016! You still need a way to get the model onto the card in the first place and to move work in and out, but a 1GbE port and a small RISC-V chip (Nvidia already uses RISC-V for its management cores) could suffice. Maybe even an RPi on the card. https://www.techpowerup.com/224434/amd-announces-the-radeon-pro-ssg

    Given the gobs of memory these cards have, they probably don't even need storage; they just need big pipes. Intel had 100Gb Omni-Path on their Xeon & Xeon Phi parts (10x what we saw here!) in *2016*! GPUs that just plug into the switch, talk across 400GbE or Ultra Ethernet or switched CXL, and run semi-independently feel sensible rather than outlandish. https://www.servethehome.com/next-generation-interconnect-intel-omni-path-released/

    It's far off for now, but flash makers are also looking at radically many-channel flash, which can provide absurdly high GB/s: High Bandwidth Flash. And potentially integrating some extremely parallel tensor cores on each channel. Switching from DRAM to flash for AI processing could be a colossal win for fitting large models cost-effectively (and perhaps power-efficiently) while still having ridiculous gobs of bandwidth - with the possible win of doing processing and filtering extremely near to the data, too. https://www.tomshardware.com/tech-industry/sandisk-and-sk-hynix-join-forces-to-standardize-high-bandwidth-flash-memory-a-nand-based-alternative-to-hbm-for-ai-gpus-move-could-enable-8-16x-higher-capacity-compared-to-dram
  • lostmsu17 hours ago
    Now compare batched training performance. Or batched inference.

    Of course prefill is going to be GPU-bound. You only send a few thousand bytes to the card and don't ask it to return much. But after prefill is done, unless you use batched mode, you aren't really using your GPU for anything more than its VRAM bandwidth.
  • Avlin6711 hours ago
    Tired of Jeff Geerling everywhere...
    • manarth2 hours ago
      I personally find his work and his posts interesting, and enjoy seeing them pop up on HN.

      If you prefer not to see his posts on the HN list pages, a practical solution is to use a browser extension (such as Stylus) to customise the HN styling to hide the posts.

      Here is a specific CSS style which will hide submissions from Jeff's website:

          tr.submission:has(td a[href="from?site=jeffgeerling.com"]),
          tr.submission:has(td a[href="from?site=jeffgeerling.com"]) + tr,
          tr.submission:has(td a[href="from?site=jeffgeerling.com"]) + tr + tr {
              opacity: 0.05
          }

      In this example, I've made it almost invisible, whilst it still takes up space on the screen (to avoid confusion about the post number increasing from N to N+2). You could use { display: none } to completely hide the relevant posts.

      The approach can be modified to suit any origin you prefer not to come across.

      The limitation is that the style modification may need refactoring if HN changes the markup structure.