72 comments

  • the_arun · 3 minutes ago
    Curious to know who will spend this much money without external funding. Would you spend VC-invested money on this nameless brand? Are there any guardrails or clauses to protect this kind of expense?
  • bastawhiz · 18 hours ago
    There's no way the red v2 is doing anything with a 120b parameter model. I just finished building a dual A100 AI homelab (80GB VRAM combined, with NVLink). Similar stats otherwise. 120b only fits with very heavy quantization, enough to make the model schizophrenic in my experience. And there's no room for KV, so you'll OOM around 4k of context.

    I'm running a 70b model now that's okay, but it's still fairly tight. And I've got 16GB more VRAM than the red v2.

    I'm also confused why this is 12U. My whole rig is 4U.

    The green v2 has better GPUs. But for $65k, I'd expect a much better CPU and 256GB of RAM. It's not like a Threadripper 7000 is going to break the bank.

    I'm glad this exists but it's... honestly pretty perplexing.
    • sosodev · 13 minutes ago
      What models are you testing? A 120b model with hybrid attention should fit within 80GB of VRAM fine at a 4-bit quant. Also, 4-bit quants that are done well are generally fine. They certainly don't make the model unusable.
    • oceanplexian · 18 hours ago
      It will work fine, but it's not necessarily insane performance. I can run a q4 of gpt-oss-120b on my Epyc Milan box that has similar specs and get something like 30-50 tok/sec by splitting it across RAM and GPU.

      The thing that's less useful is the 64G VRAM / 128G system RAM config; even the large MoE models only need 20B for the router, so the rest of the VRAM is essentially wasted (mixing experts between VRAM and system RAM has basically no performance benefit).
      • androiddrew · 2 hours ago
        Could you share what you are using for inference and how you are running it? I have a 64G VRAM / 128G system RAM setup.

        • sosodev · 2 minutes ago
          Most people are using something in the llama family for inference. llama-server is my go-to. The Unsloth guides describe how to configure inference for your model of choice.
      • datadrivenangel · 2 hours ago
        Yeah, I've got the q4 gpt-oss-120b running at ~40-60 tokens per second on an M5 Pro.
      • syntaxing · 17 hours ago
        Splitting between RAM and GPU impacts it more than you think. I would be surprised if the red box doesn't outperform you by 2-3X for both PP and TG.
    • overfeed · 16 hours ago
      > I'm also confused why this is 12U. My whole rig is 4U.

      I imagine that's because they are buying a single SKU for the shell/case. I imagine their answer to your question would be: "In order to keep prices low and quality high, we don't offer any customization to the server dimensions."
      • ottah · 14 hours ago
        That's just a massively oversized server for the number of GPUs. It's not like they're doing anything special either. I can buy an appropriately sized Supermicro chassis myself and throw some cards in it. They're really not adding enough value to justify overspending on anything.

        • randomgermanguy · 4 hours ago
          The major selling point of the tinyboxes is that you're able to run them in your office without any hassle.

          I used to own a Dell PowerEdge for my home office, but those fans even on minimal settings kept me up at night.
    • ericd · 15 hours ago
      Was that cheaper than a Blackwell 6000?

      But yeah, 4x Blackwell 6000s are ~$32-36k; not sure where the other $30k is going.
      • bastawhiz · 14 hours ago
        I bought the A100s used for a little over $6k each.

        • ericd · 14 hours ago
          Oh, why'd you go that route? Considering going beyond 80 gigs with NVLink or something?
      • segmondy · 14 hours ago
        Folks have more money than sense. gpt-oss-120b at full quant runs on my quad 3090 at 100 tk/sec, and that's with llama.cpp; with vLLM it will probably run at 150 tk/sec, and that's without batching.
        • integralid · 54 minutes ago
          Thanks for chiming in. I'm looking for a reasonably cheap local LLM machine, and multiple 3090s is exactly what I planned to buy. Do you have any recommendations, or recommend any reading material, before I decide to spend money on that?

          Edit: Found your comment about /r/localllama, but if you have anything more to add I'm still very interested.
        • amarshall · 14 hours ago
          You're almost certainly (definitely, in fact) confusing the 120b and 20b models.
          • segmondy · 1 hour ago
            I'm most certainly not doing so.

                seg@seg-epyc:~/models$ du -sh * /llmzoo/models/* | sort -n
                4.0K  metrics.txt
                4.0K  opus
                4.0K  start_llama
                8.2G  nvidia_Orchestrator-8B-Q8_0.gguf
                12K   config.ini
                34G   Qwen3.5-27B
                47G   Qwen3.5-35B
                51G   Qwen3.5-27B-BF16
                61G   gpt-oss-120b-F16.gguf
                65G   Qwen3.5-35B-BF16
                106G  Qwen3.5-122B-Q6
                117G  GLM4.6V
                175G  MiniMax-M2.5
                232G  /llmzoo/models/small_models
                240G  Ernie4.5-300B
                377G  DeepSeekv3.2-nolight
                380G  /llmzoo/models/DeepSeek-V3.2-UD
                400G  /llmzoo/models/Qwen3.5-397B-Q8
                424G  /llmzoo/models/KimiK2Thinking
                443G  DeepSeek-Math-v2
                443G  DeepSeek-V3-0324-Q5
                500G  /llmzoo/models/GLM5-Q5
                546G  /llmzoo/models/KimiK2.5
            • amarshall · 29 minutes ago
              Oh, I missed the "quad" before 3090.
          • Aurornis · 12 hours ago
            > gpt-oss-120b full quant runs on my quad 3090

            A 120B model cannot fit on 4 x 24GB GPUs at full quantization.

            Either you're confusing this with the 20B model, or you have 48GB modded 3090s.
            • segmondy · 1 hour ago
              Some of you folks on here love to argue. gpt-oss-120b was trained in 4 bits, so it pretty much takes up 60GB.
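The 60GB figure checks out as simple arithmetic, sketched below (this assumes roughly 4 bits per parameter across the board, as the comment states for gpt-oss-120b's native MXFP4 weights, and ignores the small fraction of tensors typically kept at higher precision):

```python
# Back-of-envelope weight footprint for a model with a given parameter count
# and bit width. Ignores embeddings/norms kept at higher precision.

def weight_gb(params_billion: float, bits_per_param: float) -> float:
    """Raw weight size in GB (10^9 bytes)."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

print(weight_gb(120, 4))    # 60.0 -- matches "pretty much takes up 60gb"
print(weight_gb(120, 16))   # 240.0 -- the same model at fp16, for contrast
```

The fp16 line is why the "can't fit on 4 x 24GB" intuition is right for most 120B models but wrong for one stored natively at ~4 bits.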
              • Aurornis · 1 hour ago
                Good point, but you still need KV cache and more. Fitting the model weights alone into memory doesn't get the job done.

                • segmondy · 56 minutes ago
                  Yeah, it doesn't take much. I'm looking at it right now: KV cache is about 4GB of VRAM, compute buffer ≈ 1.5GB at full 128k context.
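A rough KV-cache formula makes the ~4GB figure plausible. The config numbers below (36 layers, 8 KV heads, head dim 64, fp16 cache) are assumptions for gpt-oss-120b, not values from the thread; models that use a short sliding window on a subset of layers store far fewer tokens for those layers, which roughly halves the full-attention total:

```python
# Rough KV-cache size: 2 tensors (K and V) per layer, each storing
# kv_heads * head_dim values per cached token. Config values below
# are assumptions; plug in your model's actual numbers.

def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 ctx_tokens: int, bytes_per_elem: int = 2) -> float:
    return 2 * layers * kv_heads * head_dim * ctx_tokens * bytes_per_elem / 2**30

full = kv_cache_gib(layers=36, kv_heads=8, head_dim=64, ctx_tokens=128 * 1024)
print(full)        # 9.0 GiB if every layer attends over the full 128k context
print(full / 2)    # ~4.5 GiB if half the layers use a short sliding window
```

The sliding-window variant lands right around the "about 4gb" the commenter reports.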
        • ericd · 14 hours ago
          How're you fitting a model made for 80-gig cards onto a GPU with 24 gigs at full quant?

          • Havoc · 5 hours ago
            He said quad 3090, not single.

            • ericd · 1 hour ago
              Yeah, pretty sure that was edited in after I commented, because 150 toks/sec was also new, but could've just missed it.
          • zozbot234 · 13 hours ago
            Offloading the MoE layers to CPU inference is the easiest way, though it's a bit of a drag on performance.

            • ericd · 13 hours ago
              Yeah, I'd just be pretty surprised if they were getting 100 tokens/sec that way.

              EDIT: Either they edited that to say "quad 3090s", or I just missed it the first time.
              • segmondy · 1 hour ago
                You are correct, I did forget to add "quad". You should join us in r/localllama and check out what other people are getting. You're welcome.

                https://www.reddit.com/r/LocalLLaMA/comments/1nunq7s/gptoss120b_performance_on_4_x_3090/
                https://www.reddit.com/r/LocalLLaMA/comments/1p4evyr/most_economical_way_to_run_gptoss120b_for_10_users/

                • ericd · 1 hour ago
                  Thanks for the confirmation, wasn't sure if I was just going a bit senile heh. Yeah, I love /r/localllama, some of the best actual practitioners of this stuff on the internet. Also, crazy awesome frankenrigs to try and get that many huge cards working together.

                  I was considering picking up a couple of the 48-gig 4090/3090s on an upcoming trip to China, but I just ended up getting one of the Max-Qs. But maybe the token throughput would still be higher with the 4090 route? Impressive numbers with those 3090s!

                  What's the rig look like that's hosting all that?
    • Aurornis · 12 hours ago
      > There's no way the red v2 is doing anything with a 120b parameter model.

      I don't see the 120B claim on the page itself. Unless the page has been edited, I think it's something the submitter added.

      I agree, though. The only way you're running 120B models on that device is either extreme quantization or offloading layers to the CPU. Neither will be a good experience.

      These aren't a good value buy unless you compare them to fully supported offerings from the big players.

      It's going to be hard to target a market where most people know they can put together the exact same system for thousands of dollars less and have it assembled in an afternoon. RTX 6000 96GB cards are in stock at Newegg for $9,000 right now, which leaves almost $30,000 for the rest of the system. Even with today's RAM prices it's not hard to do better than that CPU and 256GB of RAM when you have a $30,000 budget.
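The build-it-yourself budget in that comment reduces to a few lines of arithmetic (using only the prices quoted above; the $65,000 figure is the green v2's stated price elsewhere in the thread):

```python
# DIY budget split from the quoted figures: four RTX 6000 96GB cards at
# the $9,000 street price vs. a $65,000 system price.

gpu_count, gpu_price, system_price = 4, 9_000, 65_000
gpus_total = gpu_count * gpu_price
remainder = system_price - gpus_total

print(gpus_total)   # 36000 -- the four GPUs
print(remainder)    # 29000 -- left over for CPU, RAM, chassis, and margin
```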
    • zozbot234 · 18 hours ago
      > And there's no room for kv, so you'll OOM around 4k of context.

      Can't you offload KV to system RAM, or even storage? It would make it possible to run with longer contexts, even with some overhead. AIUI, local AI frameworks include support for caching some of the KV in VRAM, using an LRU policy, so the overhead would be tolerable.

      • tcdent · 18 hours ago
        Not worth it. It is a very significant performance hit.

        With that said, people are trying to extend VRAM into system RAM or even NVMe storage, but as soon as you hit the PCI bus with the high-bandwidth layers like KV cache, you eliminate a lot of the performance benefit that you get from having fast memory near the GPU die.
        • zozbot234 · 16 hours ago
          > With that said, people are trying to extend VRAM into system RAM or even NVMe storage

          Only useful for prefill (given the usual discrete-GPU setup; iGPU/APU/unified memory is different and can basically be treated as VRAM-only, though a bit slower), since the PCIe bus becomes a severe bottleneck otherwise as soon as you offload more than a tiny fraction of the memory workload to system memory/NVMe. For decode, you're better off running entire layers (including expert layers) on the CPU, which local AI frameworks support out of the box. CPU-run layers can in turn offload model parameters/KV cache to storage as a last resort, but if you offload too much to storage (insufficient RAM cache), that then dominates the overhead and basically everything else becomes irrelevant.
      • bastawhiz · 14 hours ago
        The performance already isn't spectacular with it running all in VRAM. It'll obviously depend on the model: MoE will probably perform better than a dense model, and anything with reasoning is going to take _forever_ to even start its actual output.
      • ranger_danger · 18 hours ago
        I know llama.cpp can; it certainly improved performance on my RAM-starved GPU.
    • ottah · 14 hours ago
      Honestly, two RTX 8000s would probably have a better return on investment than the red v2. I have an eight-GPU server: five RTX 8000s, three RTX 6000 Adas. For basic inference, the 8000s aren't bad at all. I'm sure the green with four RTX Pro 6000s is dramatically faster, but there's a $25k markup I honestly don't understand.
  • ivraatiems · 19 hours ago
    There's some irony in the fact that this website reads as extremely NOT AI-generated, very human in the way it's designed and the tone of its writing.

    Still, this is a great idea, and one I hope takes off. I think there's a good argument that the future of AI is in locally-trained models for everyone, rather than relying on a big company's own model.

    One thought: the ability to conveniently get this onto a 240V circuit would be nice. Having to find two different 120V circuits to plug this into will be a pain for many folks.
    • solarkraft · 16 hours ago
      I find that the most respected writing *about* AI has very few signs of being written *by* AI. I'm guessing that's because people in the space are very sensitive to the signs and signal vs. noise.

      • rimeice · 15 hours ago
        And because people writing anything worth reading are using the process of writing to form a proper argument and develop their ideas. It's just not possible to do that by delegating even a small chunk of the work to AI.
      • Aperocky · 16 hours ago
        I found it useful to preface with

        * this section written by me typing on keyboard *

        * this section produced by AI *

        Usually both exist in a document or lengthy communication. This gets my point across with exactly my intention, and then I can attach a 10x-length AI appendix with helpful indexing and references.
        • jolmg · 13 hours ago
          > attach 10x length worth of AI appendix that would be helpful indexing and references.

          Are references helpful when they're generated? The reader could've generated them themselves. References would be helpful if they were personal references of stuff you actually read and curated. The value then would be getting your taste. References from an AI may well be good-looking nonsense.
          • Aperocky · 52 minutes ago
            "The user could have written the code themselves."

            Yes, sometimes this is true, but not always.

            Note, it's not one prompt (there isn't really "one prompt" any more; prompt engineering is such a 2023-2024 thing), or purely unreviewed output. It's curated output that was created by AI but iterated on with me, since it has to match my intention. And most of the time I don't directly prompt the agent any more. I go through a layer of agent management that injects more context into the agent that actually works on it.
          • cgio · 8 hours ago
            I agree wholeheartedly. I don't see any balance between the effort someone dedicated to generating text and the effort of me consuming it. If you feel there's further insight to be gained from an LLM, give me the prompt, not the output. Any communication channel reflects a balance of information content flowing, and we are still adjusting to the proper etiquette.
    • jofzar · 13 hours ago
      Good? That's what I want out of all websites. I don't want to read what an AI believes is the best thing for a website, I want to know the honest truth.
    • agnishom · 11 hours ago
      I don't view this as irony. This seems like good sense in understanding when AI usage will make things better and when it will not.
    • Lerc · 18 hours ago
      I am a little surprised that they openly solicit code contributions with "Invest with your PRs" but don't have any statement on AI contributions.

      Maybe the volume for them is low enough that well-intentioned but poor-quality PRs can be politely (or otherwise, culture depending) disregarded, and the method of generation is not important.

      • KeplerBoy · 18 hours ago
        Tinygrad sure shared a few opinions on AI PRs on Twitter. I believe the gist was "we have Claude Code as well; if that's all you bring, don't bother."
        • all2 · 12 hours ago
          That's a pretty excellent take, IMO. Just an undirected AI model doesn't do much, especially when the core team has time with the code, domain expertise, _and_ Claude.
      • cyanydeez · 17 hours ago
        I'm starting to think that if you have an AI repo that's basically about codegen, you should just close all issues automatically, then manually (or whatever) reopen the ones you/maintainers actually care about. That's about the only way to deal with the signal/noise problem AIs are creating.

        Then you could focus fire, like the script kiddies did with DDoS in the old days, on fixing whatever preferred issues you have.
    • wat10000 · 18 hours ago
      If you're spending $65,000 on this thing, needing two circuits seems like a minor problem.

      • ycui1986 · 15 hours ago
        They could have gone with the Max-Q version of the RTX PRO 6000 and only required a 120V circuit. 10% performance hit, but half the power.

        Fundamentally, it looks like they are shipping consumer off-the-shelf hardware in a custom box.
        • ericd · 13 hours ago
          Yeah, the other big benefit is that the Max-Qs have blowers that exhaust the hot air out of the box; the workstation cards would each blow their exhaust straight into the intake of the card behind them. The last card in that chain would be cooking, as the air has already been heated by 1800W, essentially a hair dryer on high.

          Or they could be the server-edition 6000s that just have a heatsink and rely on the case to drive air through them; those are 600W cards.
      • ivraatiems · 17 hours ago
        The $12,000 one also requires it.

        • knollimar · 16 hours ago
          Easier to get two circuits than rewire a breaker in an office you might be renting, no?

          (I work for an electrical contractor, so my sense of ease might be overcorrecting.)

          • markdown · 14 hours ago
            And 240V is orders of magnitude more common worldwide than 120V.
        • isatty · 17 hours ago
          Surprisingly affordable, but I'm not really interested in the 9070 XT.

          If it shipped with like a 4090+ (for a higher price) it'd be more tempting.

          • dmarcos · 17 hours ago
            They offered a version a few months ago with 4x 5090 for $25k:

            https://x.com/__tinygrad__/status/1983917797781426511

            Stopped due to rising GPU prices:

            https://x.com/__tinygrad__/status/2011263292753526978
          • ycui1986 · 14 hours ago
            The 9070 XT provides roughly the same inference performance at double the power and half the cost of the RTX PRO 4500. So this one is optimized for total BOM cost.
        • wat10000 · 15 hours ago
          The specs show that it only has one PSU. The docs just say that it has 2 and thus needs two circuits, but I'd guess that was meant to be for the more expensive one.
    • imjustmsk · 11 hours ago
      Big companies are pushing cloud really hard, and yeah, hardware prices are a problem too. People still buy Google Cloud and OneDrive when they could literally pick up an old computer from the trash and Frankenstein it into a NAS server.
    • harvey9 · 8 hours ago
      If I'm spending at least 12k USD on the machine, then doing some electrical work to accommodate it is not a big deal.
    • adrianwaj · 13 hours ago
      "locally-trained models for everyone"

      Wouldn't there be a massive duplication of effort in that case? It'll be interesting to see how the costs play out. There are security benefits to think about as well in keeping things local-first.

      • all2 · 12 hours ago
        There are multiple efforts for "folding at home" but for AI models at this point. I get the impression that we will see a frontier model released this year built on a system like this.
    • kube-system · 11 hours ago
      When you're dealing with this kind of power, it's easier just to colocate, where you'll typically get two separate feeds of 208V.
    • trollbridge · 18 hours ago
      A typical U.S. 240V circuit is actually just two 120V circuits. Fairly trivial to rewire for that.
      • Salgat · 16 hours ago
        It's more accurate to say that the typical 120V circuit is just a 240V source with the neutral tapped into the midpoint of the transformer winding.

        • reactordev · 16 hours ago
          This. It definitely comes in at a higher voltage.

          • amluto · 13 hours ago
            Sort of? It's 120V RMS to ground.
            • razingeden · 11 hours ago
              Yes, this is accurate for the US and "works," but it's against code here. You'll get mildly shocked by metallic cabinets and fixtures, especially if you're barefoot and become the new shortest path to ground.

              Old construction in the US sometimes did this intentionally (so old the house didn't have grounds, or to "pass" an inspection and sell a place), but if a licensed electrician sees this they have to fix it.

              I'm dealing with a 75-year-old house that's set up this way; the primary issue this is causing is that a 50-amp circuit for the HVACs is taking a shorter path to ground inside the house instead of in the panel.

              As a result, the 50-amp circuit has blown through several of the common 20-amp grounds and neutrals and left them with dead light fixtures and outlets, because they're bridged all over the place.

              If an HVAC or two does this, I'd advise against it for your 3200-watt AI rig.

              EU: you don't want to try to energize your ground. They use step-down transformers or power supplies capable of taking 115-250V (their systems are 240-250V across the load and neutral lines, not 120V across the load and neutral like ours).

              In the US, you're talking about energizing your ground plane with 120V, and I don't want to call that safe... but it's REALLY NOT SAFE to make yourself the shortest path to ground on, say, a wet bathroom floor, with 220-250V.
              • amluto · 8 minutes ago
                > I'm dealing with a 75 year old house that's set up this way

                I can't tell what practice you're referring to. Are you perhaps referring to older wiring that connects large appliances to a neutral and two hots but no ground, e.g. NEMA 10-30R receptacles? Those indeed suck and are rather dangerous. Extra dangerous if the neutral wiring is failing or undersized anywhere.

                But even NEMA 10-30R receptacles are still 120V RMS phase-to-ground. (And, bizarrely, there's an entire generation of buildings where you might find proper 4-conductor wiring to the dryer outlet and a 10-30R installed; you can test the wiring and switch to 14-30R without any rewiring.)

                The exception for residential wiring is when the neutral feed from the utility transformer fails, in which case you may have 240V phase-to-phase with the actual Earth floating somewhere in the middle (via the service's ground connection), which can result in phase-to-neutral and phase-to-ground measured anywhere in the house varying from 0 to 240V RMS.

                > wet bathroom floor

                A GFCI receptacle adds a considerable degree of safety and can be installed with arbitrarily old wiring. It's even permitted by code to install one with no ground connection as long as you label it appropriately; look it up in your local code.
      • projektfu · 9 hours ago
        Yes, if you have a 240V US split-phase circuit, you could make a little sub-panel with a 40A breaker feeding two 20A 120V circuits and plug the two power supplies into each side. (1600W would need a 20A breaker, because 13.3A would be too much for a 15A circuit.) But it would probably make more sense to just plug them both into the same 40A 240V circuit. If you use NEMA 6-20, make sure you label it appropriately and probably color it red.

        In Europe, you could plug the two power supplies into an appropriately sized 240V circuit.

        In an apartment you can't rewire, you could set it up in your kitchen, which under the modern US code should have two separate 20A circuits. You will need to put it to sleep while you use appliances.
      • razingeden · 11 hours ago
        A US circuit is.

        But this is re: European 240/250V, which is 240V between its load and neutral.

        I'd say don't energize either system's ground plane, but, really, don't do this in the EU.
      • 0xbadcafebee · 15 hours ago
        I think you're forgetting the wires? If you have one outlet on a 15-20A 120V circuit, then the wiring is almost certainly rated for 15-20A. If you just "combined" two 120V circuits into a 240V circuit, you still need an outlet that is rated for 30A, the wires leading to it also need to be rated for 30A, and it definitely needs a neutral. So you still need a new wire run if you don't have two 120V circuits right where you wanna plug in the box. To pass code you also may need to upsize conduit. If load is continuously near peak, it should be 50A instead of 30.

        So basically you need a brand-new circuit run if you don't have two 120V circuits next to each other. But if you're spending $65k on a single machine, an extra grand for an electrician to run conduit should be peanuts. While you're at it I would def add a whole-home GFCI, lightning/EMI arrestor, and a UPS at the outlet, so one big shock doesn't send $65k down the toilet.
        • briandw · 13 hours ago
          Correct me if I'm wrong, but doubling the volts doesn't change the amps; it doubles the watts. Watts = V*A.

          • 0xbadcafebee · 8 hours ago
            Yes; I assumed 30A was the minimum requirement for 240V service in the US. Apparently I was wrong; 20A 240V is apparently normal. So in theory you could use a pre-existing 20A 120V circuit's wiring for 240V (assuming it was 12/3 cable). And apparently 4-wire is now the standard for 240V service in the US? Jesus, we have a weird grid.

          • subscribed · 13 hours ago
            Doubling the volts halves the amps (for the same power). P = I * V indeed.
        • fc417fc802 · 14 hours ago
          I think you might've misread GP. (Or maybe I did?)

          He's not saying you would use it as two separate 120V circuits sharing a ground, but rather as a single 240V circuit. His point is that it's easy to rewire for 240V, since it's the same as all the other wiring in your house, just with both poles exposed.

          Of course you do have to run a new wire rather than repurpose what's already in the wall, since you need the entire circuit to yourself. So I think it's not as trivial as he's making out.

          But then at that wattage you'll also want to punch an exhaust fan in for waste heat, so it's not like you won't already be making some modifications.

          • projektfu · 9 hours ago
            The wiring (at least in the US) to the 120V outlets is just one half of the split-phase 240V. If you want to send 240V down a particular wire, you can do that by changing the breaker, but then you lose the neutral. You also make the wires dangerous to people who don't realize that the white wire is now energized at 120V over ground. (Though it's best to test to be sure anyway, as polarity gets reversed by accident, etc.) Live wires should be black or red.
      • doubled112 · 18 hours ago
        I've actually had half of my dryer outlet fail when half of the breaker failed.

        Can confirm.
      • amluto · 17 hours ago
        Sometimes. 240V circuits may or may not have a neutral.
      • jcgrillo · 17 hours ago
        If you actually use two 120V circuits that way and one breaker flips, the other half will send 120V *through the load* back into the other circuit. So while that circuit's breaker is flipped, *it is still live*. Very bad. Much better to use a 240V breaker that picks up two rails in the panel.
        • HWR_14 · 6 hours ago
          They make connected circuit breakers for this use case, where one tripping automatically trips both.
        • amluto · 12 hours ago
          I assume the device has two separate PSUs, each of which accepts 120-240V, and neither of which will backfeed its supply.
        • ycui1986 · 14 hours ago
          I am guessing, without any proof, that when one breaker fails the server loses it all, or loses two GPUs, depending on whether the one connected to the CPU side failed.

          • fc417fc802 · 14 hours ago
            GPUs aren't electrically isolated from the motherboard, though. An entire computer is a single unified power domain.

            The only place where there's isolation is stuff like USB ports, to avoid dangerous ground-loop currents.

            That said, I believe the PSU itself provides full isolation and won't backfeed, so using two on separate circuits should (maybe?) be safe. Although if one circuit tripped, the other PSU would immediately be way over capacity. Hopefully that doesn't cause an extended brownout before the second one disables itself.
    • nutjob2 · 7 hours ago
      3200W at ~240V is ~15A; that's just a regular household socket, at least in Europe. I imagine 240V sockets in the US are at least 15A.

      No need for separate circuits, just use a double adapter.
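The circuit math running through this subthread is just I = P / V, sketched below. The 3200W figure is the draw discussed above; the 80% continuous-load derating is the usual US convention and is an added assumption, not something from the thread:

```python
# Current draw for the ~3200 W box at US and European voltages.
# US practice typically limits continuous loads to 80% of a breaker's
# rating, hence the derating helper.

def amps(watts: float, volts: float) -> float:
    return watts / volts

def min_breaker(watts: float, volts: float, derate: float = 0.8) -> float:
    """Smallest breaker rating (A) for a continuous load at 80% derating."""
    return amps(watts, volts) / derate

print(round(amps(3200, 240), 1))         # 13.3 A at 240 V -- one socket
print(round(amps(3200, 120), 1))         # 26.7 A at 120 V -- needs two circuits
print(round(min_breaker(3200, 240), 1))  # ~16.7 A continuous-rated at 240 V
```

The 120V line is why the thread keeps coming back to "two circuits": 26.7A exceeds any single standard 15-20A US branch circuit.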
    • aiiizzz · 7 hours ago
      Why is HN so obsessed about whether something is _written_ by AI or not? Who cares? Judge content, not form.

      Oh wait, I get it, it's bikeshedding.

      • dddgghhbbfblk · 7 hours ago
        I've been seeing variations on your comment a lot on HN lately, and I find it a rather vapid way of looking at something as intricate as human communication. Among other things, the medium is the message!
  • vessenes · 19 hours ago
    The exabox is interesting. I wonder who the customer is; after watching the Vera Rubin launch, I cannot imagine deciding I wanted to compete with NVIDIA for hyperscale business right now. Maybe it's aiming at a value-conscious buyer? Maybe it's a sensible buy for a (relatively) cash-strapped ML startup; actually, I just checked prices, and it looks like Vera Rubin costs half as much for a similar amount of GPU RAM. I'm certain that the interconnect will not be as good as NV's.

    I have no idea who would buy this. Maybe if you think Vera Rubin is three years out? But NV ships, man, they are shipping.

    • kulahan · 17 hours ago
      Sometimes you can compete with the big boys simply because they built their infra 5+ years ago and it's not economically viable for them to upgrade yet, because it's a multi-billion-dollar process for them. They can run a deficit to run you out of business, but if you're taking less than 0.01% of their business, I doubt they'd give a crap.
    • zozbot234 · 18 hours ago
      > The exabox is interesting.

      Can it run Crysis?

      • WithinReason · 17 hours ago
        "Only gamers understand that reference"

        -- Jensen Huang

        • zargon · 9 hours ago
          *Only gamers know that joke.
      • bastawhiz · 18 hours ago
        Probably; RDNA5 can do graphics. But it would be a huge waste, since you could probably only use one of the 720 GPUs.
      • dist-epoch · 17 hours ago
        Yes, it can generate Crysis with diffusion models at 60 fps.
  • paxys · 16 hours ago
    The problem with all these "AI box" startups is that the product is too expensive for hobbyists, and companies that need to run workloads at scale can always build their own servers and racks and save on the markup (which is substantial). Unless someone can figure out how to get cheaper GPUs & RAM, there is really no margin left to squeeze out.
    • nine_k · 15 hours ago
      Would a hedge fund that does not want to trust a public AI cloud just buy chassis, mobos, GPUs, etc., and build an equivalent themselves? I suspect they value their time differently.

      • paxys · 11 hours ago
        Why do you think a hedge fund can't hire a couple of IT guys? Most of the larger ones have technical operations that would put big tech to shame.

        • ViscountPenguin · 6 hours ago
          Medium-sized hedge funds are a good portion of the market, and only really want to hire just enough tech people to keep the quant pipelines running.
    • qubex · 8 hours ago
      They're kickstarting a TINY device that is pocketable and aimed at consumers. I've backed it (full disclosure).

      • jgrizou · 3 hours ago
        https://www.kickstarter.com/projects/tiinyai/tiiny-ai-pocket-lab
      • ankaz2 hours ago
        [dead]
    • kkralev15 hours ago
      I think the real gap isn&#x27;t at the high end, though. There&#x27;s a whole segment of people who just want to run a 7-8B model locally for personal use without dealing with cloud APIs or sending their data somewhere. You don&#x27;t need 4 GPUs for that; a Jetson or even a mini PC with decent RAM handles it fine. The $12k+ market feels like it&#x27;s chasing a different customer than the one who actually cares about offline&#x2F;private AI
      • wmf15 hours ago
        <i>just want to run a 7-8b model locally</i><p>This is already solved by running LM Studio on a normal computer.
        • zozbot23415 hours ago
          Ollama or llama.cpp are also common alternatives. But a 8B model isn&#x27;t going to have much real-world knowledge or be highly reliable for agentic workloads, so it makes sense that people will want more than that.
          • zach_vantio13 hours ago
            The compute density is insane, but giving a 70B model actual write access locally for agentic workloads is a massive liability. They still hallucinate too much. Raw compute without strict state control is basically just a blast radius waiting to happen.
  • alexfromapex14 hours ago
    $12,000 for the base model is insane. I have an Apple M3 Max with 128GB RAM that can run 120B parameter models using like 80 watts of electricity at about 15-20 tokens&#x2F;sec. It&#x27;s not amazing for 120B parameter models but it&#x27;s also not 12 grand.
    • Thaxll14 hours ago
      The M3 Max&#x27;s TFLOPS are tiny compared to the $12k box. It&#x27;s not even comparable.
      • davej9 hours ago
        It is very comparable if you work out the $&#x2F;tok&#x2F;s on inference. I did some napkin math and it looks like you’re getting roughly 3x the performance for 3x the cost. Red v2 vs Mac Studio M3 Ultra 96GB.<p>If you compare tokens&#x2F;kWh efficiency then my math has Mac Studio being about 1.5x more efficient.
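        Back-of-envelope, the $&#x2F;tok&#x2F;s comparison works out like this (the prices and throughputs below are illustrative assumptions, not measured benchmarks):

```python
# Cost per unit of decode throughput. All inputs here are assumed,
# for illustration only.
def dollars_per_tok_s(price_usd: float, tok_per_s: float) -> float:
    return price_usd / tok_per_s

mac_studio = dollars_per_tok_s(4_000, 20)   # hypothetical Mac Studio figures
red_v2 = dollars_per_tok_s(12_000, 60)      # hypothetical ~3x price, ~3x speed

print(mac_studio, red_v2)  # 200.0 200.0 -> roughly the same $ per tok/s
```

        With ~3x the throughput at ~3x the price, the ratio cancels out, which is why the two land at rough parity.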
      • zozbot23414 hours ago
        The M3 has tolerable decode performance for the price, and that&#x27;s what people care about most of the time. It underperforms severely on prefill, but that&#x27;s a fraction of the workload. AI, even agentic AI, spends most of its time outputting tokens, not processing context in bulk.
    • segmondy14 hours ago
      It&#x27;s for fools. I bought 160GB of VRAM for $1000 last year. 96GB of P40 VRAM can be had for under $1000, and it will run gpt-oss-120b Q8 at probably 30 tok&#x2F;sec
      • timschmidt14 hours ago
        The P40 is Pascal architecture (sold under the old Tesla branding), which is no longer receiving driver or CUDA updates, and it&#x27;s only available as used hardware. Fine for hobbyists, startups, and home labs, but there is likely a growing market of businesses too large to depend on used gear from eBay, but too small for a full rack solution from Nvidia. Seems like that&#x27;s who they&#x27;re targeting.
        • segmondy14 hours ago
          99% of interest is in inference. If you want to fine-tune a model, just rent the best gpu in the cloud. It&#x27;s often cheaper and faster.
          • timschmidt14 hours ago
            Great option if you don&#x27;t mind sharing your data with the cloud. Some businesses want to own the hardware their data resides on.
            • cootsnuck13 hours ago
              How many businesses have the capabilities and expertise to train their own models?
              • timschmidt13 hours ago
                No idea. Probably more every day.
            • segmondy13 hours ago
              renting GPU, how is that sharing data with the cloud? you can rent GPU from GCP or AWS
              • timschmidt12 hours ago
                I suppose if I rent a cloud GPU and just let it sit there dark and do nothing then I wouldn&#x27;t have to move any data to it. Otherwise, I&#x27;m uploading some kind of work for it to do. And that usually involves some data to operate on. Even if it&#x27;s just prompts.
                • segmondy1 hour ago
                  So you also believe when you rent a server you are sharing your data with the cloud? AWS and GCP are copying all private data on servers? Give me a break. There&#x27;s a big difference between renting a server and using an API.
  • roarcher12 hours ago
    &gt; In order to keep prices low and quality high, we don&#x27;t offer any customization to the box or ordering process. If you aren&#x27;t capable of ordering through the website, I&#x27;m sorry but we won&#x27;t be able to help.<p>Has this guy never worked on a B2B product before? Nobody is going to order a $10 million piece of infrastructure through your website&#x27;s order form. And they are definitely going to want to negotiate <i>something</i>, even if it&#x27;s just a warranty. And you&#x27;ll do it because they&#x27;re waving a $10 million check in your face.<p>The tone of this website is arrogant to the point of being almost hostile. The guy behind this seems to think that his name carries enough weight to dictate terms like this, among other things like requiring candidates to have already contributed to his product to even be considered for a job. I would be extremely surprised if anyone except him thinks he&#x27;s that important.
    • codemog7 hours ago
      I haven’t seen tinygrad used for any mainstream production project or thing of value, yet.<p>Besides a lot of self-congratulatory pats on the back for how elegant it is. Honestly, when I read it, it looked as confusing as all the other ML libraries. Not actually simple like Karpathy’s stuff.<p>All that to say, I do really want it to succeed. They should probably hire some practical engineers and not just guys and gals congratulating themselves on how elegant and awesome they are.
    • jen729w12 hours ago
      Your framing of this section is misleading. On the site it&#x27;s preceded by a FAQ-style &#x27;question&#x27;:<p>&gt; <i>Can you fill out this supplier onboarding form?</i><p>That&#x27;s very important context, as anyone who has been asked to fill out a supplier onboarding form (hi) will attest.
      • roarcher11 hours ago
        Filling out an onboarding form is an <i>example</i> of what he&#x27;s not willing to do, not the only thing he isn&#x27;t willing to do.<p>&gt; we don&#x27;t offer any customization to the box or ordering process<p>Every B2B deal of that size that I&#x27;ve ever seen requires at least weeks of meetings between the customer and vendor, in which every detail is at least discussed if not negotiated. That would certainly constitute a &quot;customization&quot; to this guy&#x27;s prescribed ordering process, which is to &quot;Buy it now&quot; [1] through the website at the stated price like you&#x27;re ordering a jar of peanuts on Amazon. This is not &quot;framing&quot;, it&#x27;s what the guy said. If it isn&#x27;t what he meant then he needs to fix his copy.<p>[1] Yes, there is an actual &quot;Buy it now&quot; button for a $65,000 business purchase that takes you to a page that looks just like a Stripe form. There isn&#x27;t even a textbox for delivery instructions. Wild.
        • awesomeMilou10 hours ago
          Then if they succeed, I guess you&#x27;re going to see a different process for the first time in your life.<p>On a website where we frequently talk about disruptive business models, this whole attitude kinda stinks.
          • roarcher9 hours ago
            &gt; Then if they succeed, I guess you&#x27;re going to see a different process for the first time in your life.<p>Sure, I guess. Far more likely that they won&#x27;t succeed, and it will be because of their pointless refusal to cooperate with others. I&#x27;m curious why you think we should &quot;disrupt&quot; companies putting a little due diligence into massive purchases.<p>&gt; On a website where we frequently talk about disruptive business models, this whole attitude kinda stinks.<p>I could say the same thing about making a comment like this on a website where groupthink is rightfully mocked.
          • pegasus7 hours ago
            &gt; you&#x27;re going to see a different process for the first time in your life<p>That sounds very neutral, but wouldn&#x27;t this, by removing the human element and flexibility from business transactions, be a further step along a general enshittification trend?
    • phrotoma6 hours ago
      &gt; arrogant to the point of being almost hostile<p>First encounter with geohot eh?
    • wmf12 hours ago
      He&#x27;s not actually selling the exabox yet. It sounds like he put up a hypothetical config to see if anyone is interested.
    • HWR_146 hours ago
      There isn&#x27;t a $10MM device right now, just $64M and under. I doubt the order process will remain the same in 12 months when the $10MM device becomes available
    • kube-system11 hours ago
      The specs for the “exabox” scream “this is a joke” to me.<p>&gt; 20,000 lbs<p>&gt; concrete slab<p>Huge-scale IT systems are typically delivered in one or more 42&#x2F;44u cabinets, and are designed to be installed on raised floors.
      • 0xbadcafebee8 hours ago
        It&#x27;s a shipping container. Look at the dimensions. They say concrete slab probably half as a joke, half because building code would require it to be considered a non-temporary structure.
      • wmf10 hours ago
        It&#x27;s a shipping container that you install outdoors.
        • kube-system10 hours ago
          Are you referring to the images of branded shipping containers on their Twitter page that have visible Gemini watermarks … and jokes in the comments about AI trailer parks?
          • wmf10 hours ago
            20x8x8.5 ft is the dimensions of a half shipping container. You think that render is a joke but it&#x27;s not. They don&#x27;t have photos yet because it&#x27;s a 2027 product (if it actually comes out which I would bet against).
      • roarcher10 hours ago
        It&#x27;s also funny that they explicitly list driver quality as &quot;good&quot; for the base option and &quot;great&quot; for the intermediate one. You&#x27;re really going to deliberately provide worse drivers for the machine I paid you for, just because I didn&#x27;t buy the more expensive one?<p>I mean I&#x27;m sure lots of companies do this in practice because tickets for higher-paying customers naturally get prioritized, but directly stating your intention to do it on your home page is hilarious.
        • wmf10 hours ago
          Nvidia drivers are better than AMD. It&#x27;s not really something they have control over. Geohot is definitely obsessed with bitching about driver bugs though.
          • roarcher10 hours ago
            That may be, but then it&#x27;s an inside joke that many of his customers won&#x27;t get. It just looks like a &quot;fuck you&quot; to anyone buying the cheaper system.<p>This guy desperately needs a marketing intern to look over his copy. Or hell, anyone who knows how to talk to humans.
            • fwipsy10 hours ago
              Not a joke. It&#x27;s just true.
              • roarcher10 hours ago
                It doesn&#x27;t matter if it&#x27;s a joke. The non-technical manager or VP making this purchase will not understand it and will expect poor treatment from this vendor, an expectation that will be reinforced by numerous other things on this page. There is no reason to include it at all.
                • kube-system10 hours ago
                  It doesn’t read as if they actually care about broad appeal, given their plain refusal to accommodate traditional procurement processes
                  • pegasus7 hours ago
                    So they&#x27;re only interested in taking on customers who are OK with being treated poorly?
                • vkazanov8 hours ago
                  It seems that you work a lot with managers who have no clue what they are buying and why.<p>I mean, you&#x27;re not wrong: buying enterprise software from Oracle or Microsoft or Salesforce is pure pain.<p>But nobody expects buying niche hardware from a tiny vendor to involve the usual 128 pre&#x2F;post-sale meetings and 256 hours of professional services.<p>Also, the VPs buying these things usually do understand the difference between the AMD and Nvidia stacks really well. Like, really-really well.
                  • roarcher8 hours ago
                    &gt; It seems that you work a lot with managers who have no clue what they are buying and why.<p>There are certain quirks of this platform&#x27;s user base that always make me laugh. For example, HNers absolutely love to imply something condescending about the other guy&#x27;s workplace in order to make their point.<p>Watch this, I can do it too: Working with managers who make $65,000 (or $10 million) purchases with no more due diligence than reading a marketing page and clicking &quot;Buy it now&quot; is not the flex you think it is.
                    • vkazanov6 hours ago
                      I was involved in IT-related deals on both the purchasing and selling sides. The sums involved were larger than both numbers you mentioned.<p>And I honestly see almost no correlation between the amount of negotiation involved and the value received.<p>Some of the most useful things we&#x27;ve integrated were either free or meant that only the &quot;buy it now&quot; button had to be clicked.<p>Some of the absolutely worst systems I had to work with were purchased after making a call to that &quot;let us know&quot; number.<p>This tinybox guy is mostly saying that he doesn&#x27;t have the time for enterprise bla-bla. I am not sure he can organise enterprise sales with this attitude, but I can definitely relate to it!
        • kube-system10 hours ago
          I took that as a dig against AMD vs Nvidia driver quality.
        • zekrioca10 hours ago
          I guess it is called ‘honesty’.
    • Havoc5 hours ago
      &gt; arrogant to the point of being almost hostile.<p>The YouTube rap video of geohot telling the Sony lawyers suing him to blow him is still up.<p>His style of dealing with corporate matters is certainly unconventional.
    • jrflowers12 hours ago
      I imagine that the FAQ might get updated when there’s actually a $10M machine for sale
      • roarcher12 hours ago
        Maybe. Frankly I&#x27;d be very surprised if any business ordered a $65k machine that way either.
        • jrflowers11 hours ago
          Yeah it’s a little odd. Maybe they are meant to be really <i>really</i> cool toys? People regularly spend more than $65k on things like cars to show off, so it could be like that.<p>I have no use for these but I might buy one anyway if I won the lottery. ¯\_(ツ)_&#x2F;¯
  • mellosouls12 hours ago
    Where is the 120B documented? This seems to be an editorialized title.<p>Edit: found a third party referencing the claim but it doesn&#x27;t belong in the title here I think:<p><i>Meet the World’s Smallest ‘Supercomputer’ from Tiiny AI; A Machine Bold Enough to Run 120B AI Models Right in the Palm of Your Hand</i><p><a href="https:&#x2F;&#x2F;wccftech.com&#x2F;meet-the-worlds-smallest-supercomputer-a-machine-bold-enough-to-run-120b-ai-models&#x2F;" rel="nofollow">https:&#x2F;&#x2F;wccftech.com&#x2F;meet-the-worlds-smallest-supercomputer-...</a>
    • Aurornis12 hours ago
      That third party link is from a different company (Tiiny with an extra i)<p>Now I&#x27;m wondering if the HN title was submitted by some AI bot that couldn&#x27;t tell the difference.
      • mellosouls9 hours ago
        Ha, good catch, I googled for Tinybox 120B and clearly didn&#x27;t read the article beyond the seeming match.
  • siliconc0w16 hours ago
    Tinybox is cool, but I think the market may be looking more for a turn-key, explicit promise of some level of intelligence at a certain tok&#x2F;s, like &quot;Kimi 2.5 at 50 tok&#x2F;s&quot;.
  • hmokiguess16 hours ago
    Is this like the new equivalent of crypto mining? I remember the early days when they would sell hardware for farming crypto, now it’s AI?
    • latchkey15 hours ago
      Kind of yes, except there is no block reward.
      • barnabee3 hours ago
        The block reward is firing humans and collecting ad revenue for slop
  • adrianwaj17 hours ago
    Perhaps this company should think about acting as a landlord for their hardware. You buy (or lease) but they also offer colocation hosting. They could partner with crypto miners who are transitioning to AI factories to find the space and power to do this. I wonder if the machines require added cooling, though, in what would otherwise be a crypto mining center. CoreWeave made the transition and also does colocation. The switchover is real.<p>I think Tinygrad should think about recycling. Are they planning ahead in this regard? Is anyone? My thought is that if there were a central database of who owns what and where, then at least when the recycling tech becomes available, people will know where to source their specific trash (and even pay for it). Having a database like that in the first place could even fuel the industry.
  • ekropotin18 hours ago
    IDK, I feel it’s quite overpriced, even with the current component prices.<p>I&#x27;m almost sure it’s possible to custom-build a machine as powerful as their red v2 within a 9k budget. And have a lot of fun along the way.
    • lostmsu18 hours ago
      AMD now has 32 GiB Radeon AI Pro 9700. 4 of these (just under 2k each) would put you at 128 GiB VRAM
      • ekropotin18 hours ago
        VRAM is not everything - GPU cores also matter (a lot) for inference
        • lostmsu18 hours ago
          4x Radeon will have significantly more GPU power than say Mac Studio or DGX Spark.
        • cyanydeez16 hours ago
          inference speed is like monitor Hz; sure, you go from 60 to 120Hz and that&#x27;s noticeable, but unless your model is AGI, at some point you&#x27;re just generating more code than you&#x27;ll ever realistically be able to control, audit, and rely on.<p>So context is probably worth more per programming dollar than inference speed.
  • operatingthetan18 hours ago
    The incremental price increases between products is funny.<p>$12,000, $65,000, $10,000,000.
    • znpy18 hours ago
      I was more worried by the 600 kW power requirement... that&#x27;s 200 houses at full load (3 kW) in southern Europe... which likely means 400 houses at half load.<p>The town near my hometown has 650–800 houses (according to ChatGPT).<p>Crazy.
      • nine_k15 hours ago
        Or it&#x27;s two 300kW fast EV chargers working together.<p>A typical home just consumes rather little energy, now that LED lighting and heat pump cooling &#x2F; heating became the norm.
        • delusional4 hours ago
          I think the above commenter is reflecting on the total energy use of a 600 kW load running 24&#x2F;7. I suppose the more interesting observation is the ~14 MWh of daily consumption, enough to charge 100 Rivians every day.
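          The arithmetic behind those figures (the Rivian pack size is an assumed ~135 kWh):

```python
# Daily energy of a constant 600 kW load, expressed as EV charges.
# The pack size is an assumed round figure, not an official spec.
load_kw = 600
daily_mwh = load_kw * 24 / 1000           # 14.4 MWh per day
rivian_pack_kwh = 135                     # assumed usable pack size
charges_per_day = daily_mwh * 1000 / rivian_pack_kwh

print(daily_mwh, round(charges_per_day))  # 14.4 107
```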
        • paganel5 hours ago
          &gt; and heat pump cooling &#x2F; heating became the norm.<p>We&#x27;re not all solidly middle-class (especially in Southern and Eastern Europe) and as such we cannot afford those heat pumps. But we&#x27;ll have to eat the increased energy costs brought by insane server configurations like the ones from the article, so, yeey!!!
        • znpy5 hours ago
          &gt; now that LED lighting and heat pump cooling &#x2F; heating became the norm.<p>My brother in Christ, you vastly overestimate southern europe
      • nutjob26 hours ago
        &gt; at full load (3kw)<p>Do you live in a deprived rural village in a very poor country? Because you can&#x27;t even run a heater and the oven with 3kW.
        • znpy5 hours ago
          No, it’s quite the norm actually.<p>Most residential power contracts give you a 3 kW supply. That’s the standard.<p>Bumping to 4.5 or 6 kW must be requested explicitly and costs extra on the base power supply bill.
      • ericd15 hours ago
        That’s surprising, 200 amp 240v service is pretty common in the US.
      • dist-epoch17 hours ago
        Your hometown also has public lightning, water pumps, and probably some other stuff.
    • sudo_cowsay18 hours ago
      I mean the difference in performance is quite big too. However, the 10,000,000 is a little bit too much (imo).
  • triwats1 hour ago
    This is cool, I&#x27;ll add these as desktops to <a href="https:&#x2F;&#x2F;flopper.io" rel="nofollow">https:&#x2F;&#x2F;flopper.io</a>!<p>How do you test&#x2F;generate these numbers?
  • algolint5 hours ago
    The most interesting part of Tinybox isn&#x27;t just the hardware, but the push for a more vertical integration with tinygrad. We&#x27;ve become so accustomed to the CUDA&#x2F;PyTorch stack that seeing a serious attempt at a different software-hardware synergy is refreshing, even if the hardware specs or price point relative to DIY homelabs raise some eyebrows for power users. It&#x27;s more about reducing the friction for researchers who want a &quot;just works&quot; environment without the nightmare of driver&#x2F;toolkit version hell.
  • mciancia15 hours ago
    Not sure why they stopped using 6 GPUs in their builds - with 4 GPUs, both the 9070 and the RTX 6000 come in 2-slot designs, so it&#x27;s easy to build it yourself using a slightly more expensive, but still fairly regular, motherboard.<p>With 6 GPUs you have to deal with risers, PCIe retimers, dual PSUs and a custom case, so the value proposition there was much better IMO
  • mmoustafa17 hours ago
    I would love to see real-life tokens&#x2F;sec values advertised for one or various specific open source models.<p>I&#x27;m currently shopping for offline hardware and it is very hard to estimate the performance I will get before dropping $12K, and would love to have a baseline that I can at least always get e.g. 40 tok&#x2F;s running GPT-OSS-120B using Ollama on Ubuntu out of the box.
    • atwrk2 hours ago
      For reference, 12k gets you at least <i>4</i> Strix Halo boxes <i>each</i> running GPT-OSS-120B at ~50tok&#x2F;s.
    • hpcjoe16 hours ago
      Look for llmfit on github. This will help with that analysis. I&#x27;ve found it reasonably accurate. If you have Ollama already installed, it can download the relevant models directly.
  • adi_kurian14 hours ago
    <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Decoy_effect" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Decoy_effect</a>
  • ks204815 hours ago
    &quot;... and likely the best performance&#x2F;$&quot;.<p>&quot;likely&quot; doesn&#x27;t inspire much confidence. Surely they have those numbers, and if it were the best, they&#x27;d publicize the comparisons.
  • wongarsu19 hours ago
    Sounds like a solid prebuilt with well-balanced components and a pretty case.<p>Not revolutionary in any way, but nice. Unless I&#x27;m missing something here?
    • eurekin19 hours ago
      It&#x27;s pretty close to what people have been frankenbuilding on r&#x2F;LocalLLaMA... It&#x27;s nice to have a prebuilt option.
      • speedgoose18 hours ago
        You could also order such configurations from a classic server reseller as far as I know. The case is a bit original there.
      • nextlevelwizard18 hours ago
        Tiny boxes are already several years old IIRC
    • llbbdd14 hours ago
      If you want a box built by geohot, most recently known for signing on at Elon&#x27;s Twitter and then bailing, it&#x27;s for you
      • asadm10 hours ago
        actually known for comma.ai
  • comrade123419 hours ago
    Cool that you have a dual power supply model. It says rack mountable or free standing. Does that mean two form factors? $65K is more than we can afford right now but we are definitely eventually in the market for something we can run in our own colo.<p>It&#x27;s funny though... we&#x27;re using deepseek now for features in our service and based on our customer-type we thought that they would be completely against sending their data to a third-party. We thought we&#x27;d have to do everything locally. But they seem ok with deepseek which is practically free. And the few customers that still worry about privacy may not justify such a high price point.
    • hrmtst9383718 hours ago
      Most privacy talk folds on contact with a quote. Latency and convenience beat philosophy fast once someone wants a dashboard next week, and a lot of &quot;data sensitivity&quot; talk is just the corporate version of buying &quot;organic&quot; food until the price tag shows up.<p>If private inference is actually non-negotiable, then sure, put GPUs in your colo and enjoy the infra pain, vendor weirdness, and the meeting where finance learns what those power numbers meant.
      • zozbot23418 hours ago
        The real case for private inference is not &quot;organic&quot;, it&#x27;s &quot;slow food&quot;. Offering slow-but-cheap inference is an afterthought for the big model providers, e.g. OpenRouter doesn&#x27;t support it, not even as a way of redirecting to existing &quot;batched inference&quot; offerings. This is a natural opening for local AI.
        • selectodude18 hours ago
          But how slow is too slow (faster than you’d think) and even then, you’re in for $25,000 for even the most basic on-premise slow LLM.
    • aplomb102617 hours ago
      [dead]
  • SmartestUnknown16 hours ago
    Regarding 2x faster than pytorch being a condition for tinygrad to come out of alpha:<p>Can they&#x2F;someone else give more details as to what workloads pytorch is more than 2x slower than the hardware provides? Most of the papers use standard components and I assume pytorch is already pretty performant at implementing them at 50+% of extractable performance from typical GPUs.<p>If they mean more esoteric stuff that requires writing custom kernels to get good performance out of the chips, then that&#x27;s a different issue.
  • mayukh18 hours ago
    What’s the most effective ~$5k setup today? Interested in what people are actually running.
    • emidoots16 hours ago
      At $7.2k + tax:<p>* RAM - $1500 - Crucial Pro 128GB Kit (2x64GB) DDR5 RAM, 5600MHz CP2K64G56C46U5, up to 4 sticks for 128GB or 256GB, Amazon<p>* GPU - $4700 - RTX Pro 5000 48GB, Microcenter<p>* CPU&#x2F;Mobo bundle - $1100 - AMD Ryzen 7 9800X3D, MSI X870E-P Pro, ditch the 32GB RAM, Microcenter<p>* Case - $220, Hyte Y70, Microcenter<p>* Cooler - $155, Arctic Cooling Liquid Freezer III Pro, top-mount it, Microcenter<p>* PSU - $180, RM1000x, Microcenter<p>* SSD - $400 - Samsung 990 Pro 2TB gen 4 NVMe M.2<p>* Fans - $100 - 6x 120mm fans, 1x 140mm fan, of your choice<p>Look into models like Qwen 3.5
      • aurareturn9 hours ago
        $7.2k just to run at best Qwen3.5-35B-A3B doesn&#x27;t seem worth it at all.<p>This is certainly not the most effective use of $7k for running local LLMs.<p>The answer is a 16&quot; M5 Max 128GB for $5k. You can run much bigger models than your setup while being an awesome portable machine for everything else.
      • cmxch13 hours ago
        Surprised to see X3D given the reports of failures. I’ve opted for a regular 9900x and X670E-E just to have a bit more assurance.
    • BobbyJo18 hours ago
      Depends. If token speed isn&#x27;t a big deal, then I think strix halo boxes are the meta right now, or Mac studios. If you need speed, I think most people wind up with something like a gaming PC with a couple 3090 or 4090s in it. Depending on the kinds of models you run (sparse moe or other), one or the other may work better.
    • bensyverson18 hours ago
      Sadly $5k is sort of a no-man&#x27;s land between &quot;can run decent small models&quot; and &quot;can run SOTA local models&quot; ($10k and above). It&#x27;s basically the difference between the 128GB and 512GB Mac Studio (at least, back when it was still available).
    • EliasWatson18 hours ago
      The DGX Spark is probably the best bang for your buck at $4k. It&#x27;s slower than my 4090, but 128GB of GPU-usable memory is hard to find anywhere else at that price. It being an ARM processor does make it harder to install random AI projects off of GitHub, because many niche Python packages don&#x27;t provide ARM builds (Claude Code can usually figure out how to get things running). But all the popular local AI tools work fine out of the box and PyTorch works great.
      • NickJLange10 hours ago
        It&#x27;s $4.7K now, darn inflation!<p><a href="https:&#x2F;&#x2F;marketplace.nvidia.com&#x2F;en-us&#x2F;enterprise&#x2F;personal-ai-supercomputers&#x2F;dgx-spark&#x2F;" rel="nofollow">https:&#x2F;&#x2F;marketplace.nvidia.com&#x2F;en-us&#x2F;enterprise&#x2F;personal-ai-...</a><p>A small joke at this weeks GTC was the &quot;BOGOD&quot; discount was to sell them at $4K each...
    • cco17 hours ago
      Biggest Mac Studio you can get. The DGX Spark may be better for some workflows, but since you&#x27;re interested in price, the Mac will maintain its value far longer than the Spark, so you&#x27;ll get more of your money out of it.
    • kristopolous18 hours ago
      Fully aware of the DGX Spark, I&#x27;ve actually been looking into AMD Ryzen AI Max+ 395&#x2F;392 machines. There are some interesting things here, like <a href="https:&#x2F;&#x2F;www.bee-link.com&#x2F;products&#x2F;beelink-gtr9-pro-amd-ryzen-ai-max-395" rel="nofollow">https:&#x2F;&#x2F;www.bee-link.com&#x2F;products&#x2F;beelink-gtr9-pro-amd-ryzen...</a> and <a href="https:&#x2F;&#x2F;www.amazon.com&#x2F;GMKtec-5-1GHz-LPDDR5X-8000MHz-Display&#x2F;dp&#x2F;B0FKYZF9HL" rel="nofollow">https:&#x2F;&#x2F;www.amazon.com&#x2F;GMKtec-5-1GHz-LPDDR5X-8000MHz-Display...</a> ... haven&#x27;t pulled the trigger yet, but apparently inferencing on these chips is not trash.<p>Machines with the 4xx chips are coming next month, so maybe wait a week or two.<p>It&#x27;s soldered LPDDR5X with AMD Strix Halo ... sglang and llama.cpp can handle that pretty well these days. And it&#x27;s, you know, half the price, and you&#x27;re not locked into the Nvidia ecosystem
      • ejpir17 hours ago
        unfortunately the bigger models are pretty slow in token speed. The memory is just not that fast.<p>You can check what each model does on AMD Strix halo here:<p><a href="https:&#x2F;&#x2F;kyuz0.github.io&#x2F;amd-strix-halo-toolboxes&#x2F;" rel="nofollow">https:&#x2F;&#x2F;kyuz0.github.io&#x2F;amd-strix-halo-toolboxes&#x2F;</a>
      • Tepix8 hours ago
        4xx chips are less capable than the 395
    • zozbot23417 hours ago
      &gt; What’s the most effective ~$5k setup today?<p>Mac Studio or Mac Mini, depending on which gives you the highest amount of unified memory for ~$5k.
    • borissk18 hours ago
      With $5k you have to make compromises. Which compromises you are willing to make depends on what you want to do - and so there will be different optimal setup.
    • oofbey18 hours ago
      DGX Spark is a fantastic option at this price point. You get 128GB VRAM which is extremely difficult to get at this price point. Also it’s a fairly fast GPU. And stupidly fast networking - 200gbps or 400gbps mellanox if you find coin for another one.
      • ekropotin18 hours ago
        I’m not very well versed in this domain, but I think it’s not going to be “VRAM” (GDDR) memory, but rather “unified memory”, which is essentially RAM (some flavour of DDR5, I assume). These two types of memory have vastly different bandwidth.<p>I’m pretty curious to see any benchmarks on inference on VRAM vs UM.
        • banana_giraffe13 hours ago
          A quick benchmark of float32 torch cuda-&gt;cuda copies, comparing some random machines:<p><pre><code> Raptor Lake + 5080: 380.63 GB&#x2F;s Raptor Lake (CPU for reference): 20.41 GB&#x2F;s GB10 (DGX Spark): 116.14 GB&#x2F;s GH200: 1697.39 GB&#x2F;s </code></pre>This is an &quot;eh, it works&quot; benchmark, but it should give you a feel for the relative performance of the different systems.<p>In practice, this means I can get something like 55 tokens a sec running a larger model like gpt-oss-120b-Q8_0 on the DGX Spark.
          • ekropotin13 hours ago
            Nice! Thanks for that.<p>55 t&#x2F;s is much better than I could expect.
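For anyone curious how such numbers are obtained, a rough CPU-side analogue of that kind of copy benchmark can be sketched in a few lines (the figures above came from torch cuda-to-cuda copies; this stdlib-only version and its helper name are just an illustration, not the original script):

```python
import time

def copy_bandwidth_gbs(size_mb=256, repeats=5):
    """Estimate sustained memory copy bandwidth (GB/s) by timing bulk buffer copies."""
    src = bytearray(size_mb * 1024 * 1024)
    dst = bytearray(len(src))
    dst[:] = src  # warm-up copy so allocation cost isn't timed
    start = time.perf_counter()
    for _ in range(repeats):
        dst[:] = src
    elapsed = time.perf_counter() - start
    bytes_moved = 2 * len(src) * repeats  # each copy reads src and writes dst
    return bytes_moved / elapsed / 1e9

print(f"host RAM copy: {copy_bandwidth_gbs():.1f} GB/s")
```

The same read-plus-write accounting applies to the GPU numbers above, and it is why the GH200's ~1.7 TB/s figure matters so much for inference: single-stream decode speed scales almost linearly with this number.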
        • oofbey18 hours ago
          I’m using VRAM as shorthand for “memory which the AI chip can use”, which I think is fairly common shorthand these days. For the Spark it is unified, and has lower bandwidth than almost any modern GPU. (About 300 GB&#x2F;s, which is comparable to an RTX 3060.)<p>So LLM inference is relatively slow because of that bandwidth, but you can load much bigger, smarter models than you could on any consumer GPU.
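That bandwidth argument can be made concrete with napkin math: single-stream decode has to stream every active weight from memory once per token, so bandwidth divided by resident model size gives a rough upper bound on tokens/sec. A toy calculation (the 300 GB/s and 60 GB figures below are illustrative, not measurements of any specific box):

```python
def decode_tok_per_sec_bound(bandwidth_gb_s: float, active_weights_gb: float) -> float:
    """Memory-bandwidth-bound upper limit on single-stream decode speed:
    each generated token must read every active weight once."""
    return bandwidth_gb_s / active_weights_gb

# ~300 GB/s unified memory vs a ~60 GB (4-bit) 120B dense checkpoint
print(decode_tok_per_sec_bound(300, 60))   # -> 5.0 tok/s ceiling
# the same checkpoint on ~1 TB/s of GDDR
print(decode_tok_per_sec_bound(1000, 60))  # roughly a 16-17 tok/s ceiling
```

MoE models only read their active experts per token, which is why something like gpt-oss-120b can decode noticeably faster than this dense-model bound suggests.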
      • BobbyJo18 hours ago
        Internet seems to think the SW support for those is bad, and that strix halo boxes are better ROI.
        • oofbey18 hours ago
          Meh. DGX is Arm and CUDA. Strix is x86 and ROCm. CUDA has better support than ROCm. And x86 has better support than Arm.<p>Nowadays I find most things work fine on Arm. Sometimes something needs to be built from source, which is genuinely annoying. But moving from CUDA to ROCm is often more like a rewrite than a recompile.
          • overfeed16 hours ago
            &gt; But moving from CUDA to ROCm is often more like a rewrite than a recompile.<p>Isn&#x27;t everyone* in this segment just using PyTorch for training, or wrappers like Ollama&#x2F;vllm&#x2F;llama.cpp for inference? None have a strict dependency on Cuda. PyTorch&#x27;s AMD backend is solid (for supported platforms, and Strix Halo is supported).<p>* enthusiasts whose budget is in the $5k range. If you&#x27;re vendor-locked to CUDA, Mac Mini and Strix Halo are immediately ruled out.
          • BobbyJo17 hours ago
            CUDA != Driver support. Driver support seems to be what&#x27;s spotty with DGX, and iirc Nvidia has only committed to updates for 2 years or something.
      • borissk18 hours ago
        Can even network 4 of these together, using a pretty cheap InfiniBand switch. There is a YouTube video of a guy building and benchmarking such setup.<p>For 5K one can get a desktop PC with RTX 5090, that has 3x more compute, but 4x less VRAM - so depending on the workload may be a better option.
        • ekropotin18 hours ago
          VRAM vs UM is not exactly apples to apples comparison.
  • alasdair_13 hours ago
    I just don’t believe that this can run inference on a 120 billion parameter model at actually useful speeds.<p>Obviously any Turing machine can run any size of model, so the “120B” claim doesn’t mean much - what actually matters is speed and I just don’t believe this can be speedy enough on models that my $5000 5090-based pc is too slow for and lacks enough vram for.
    • mnkyprskbd12 hours ago
      Look at the GPU and RAM spec; 120b seems workable.
      • Aurornis12 hours ago
        For the red v2?<p>120B could run, but I wouldn&#x27;t want to be the person who had to use it for anything.<p>To be fair, the 120B claim doesn&#x27;t appear on the webpage. I don&#x27;t know where it came from, other than the person who submitted this to HN
        • mnkyprskbd12 hours ago
          It is more than fair. Also, you&#x27;re comparing your $5k device to $12k and, more importantly, $65k and &gt;$10M devices.
          • Aurornis12 hours ago
            The &quot;to be fair&quot; part of my comment was saying that the tinygrad website doesn&#x27;t claim 120B.<p>Also, nobody is comparing this box to a $10M Nvidia rack-scale deployment. They&#x27;re comparing it to putting all of the same parts into their Newegg basket and putting it together themselves.
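The sizing arithmetic behind the 120B skepticism is simple: weights alone need params × bits / 8 bytes before any KV cache or activations. A quick sketch (KV cache and runtime overhead deliberately omitted):

```python
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory for model weights only: 1e9 * params * bits / 8 bytes = GB."""
    return params_billions * bits_per_weight / 8

for bits in (16, 8, 4):
    print(f"120B weights at {bits}-bit: {weight_memory_gb(120, bits):.0f} GB")
```

So a 4-bit 120B checkpoint just barely fits in 64 GB of VRAM with essentially nothing left over for KV cache, which matches the comments above about heavy quantization and tiny context windows.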
  • jmspring12 hours ago
    Tinygrad devices are interesting. I wish I had screen captures - but their prices have gone up and some specs like RAM have gone down.<p>A single box with those specs without having to build&#x2F;configure (the red and green) - I could see being useful if you had $ and not time to build&#x2F;configure&#x2F;etc yourself.
  • ilaksh17 hours ago
    I thought the most interesting thing about tinygrad was that theoretically you could render a model all the way into hardware similar to Taalas (tinygrad might be where Taalas got the idea for all I know).<p>I could swear I filed a GitHub issue asking about the plans for that but I don&#x27;t see it. Anyway I think he mentioned it when explaining tinygrad at one point and I have wondered why that hasn&#x27;t got more attention.<p>As far as boxes, I wish that there were more MI355X available for normal hourly rental. Or any.
  • saidnooneever7 hours ago
    it&#x27;s a bit weird to me that you&#x27;d need to be a contributor to their software to work in operations or hardware, but I suppose it&#x27;s ok for tinycompany. in the long term it&#x27;s likely better to have domain experts and not bias everything towards the same thing.<p>the boxes look cool but how good are they really? the cheapest box seems pricey at 12k for what is essentially a few gaming gpus. i don&#x27;t see why you couldn&#x27;t make that like half the price. you could do a PC&#x2F;server build that&#x27;s much, much faster for way less. size doesn&#x27;t matter if it&#x27;s more than twice the price, i think...<p>the more expensive box at least has real processing gpus, but afaik also not very popular ones; this one seems maybe more fairly priced (there seems to be a big difference in bang for buck between these???).<p>the third one suggested looks like a joke.<p>don&#x27;t get me wrong, this seems like a really cool idea. But i don&#x27;t see it taking off, as the prices are corporate but the product seems more home use.<p>maybe in time they will find a better balance. i do respect the fact that the component market now is sour as hell and making good products with stable prices is pretty much impossible.<p>i&#x27;d love one of these machines someday, maybe when i am less poor, or when they are xD.<p>(love the styling of everything, this is the most critical i could be from a dumb consumer perspective, which i totally am btw.)
  • jeremie_strand16 hours ago
    The AMD angle is interesting given the history — tinygrad has had to work around a lot of driver quirks to get ROCm into a usable state. At that price point, you&#x27;re essentially betting on a software stack that NVIDIA has had years to stabilize. Would be curious to see real-world utilization numbers vs. a comparable NVIDIA setup.
    • latchkey15 hours ago
      Old news. ROCm works a lot better now than it did a year ago.
      • Gigachad15 hours ago
        You are still really limited in what you can run. So much stuff is cuda only.
        • latchkey15 hours ago
          Like what? Most of the good stuff is ported over already and anything else, tag Anush on X and see what you get. Also happy to help.<p>The point is that they care now.
          • Gigachad15 hours ago
            Tbh my experience is in the non AI uses, recently I was looking at Gaussian splatting tools and it seemed the majority of it was CUDA only. I’m also still bothered AMD for ages claimed my card (5700xt) would be getting rocm but just abandoned it.
            • latchkey15 hours ago
              &gt;I was looking at Gaussian splatting tools and it seemed the majority of it was CUDA only.<p>Not surprising. True, the ecosystem is like early OSX vs. Windows. Eventually it&#x27;ll get ported over if there is demand.
          • djsjajah15 hours ago
            trl. give me a uv command to get that working.<p>But even in the AMD stack (things like CK and AITER), consumer cards are not even second-class citizens. They are a distant third at best. If you just want to run vLLM with the latest model, assuming you can get it running at all, there are going to be paper cuts all along the way, and even then the performance won&#x27;t be close to what you could be getting out of the hardware.
            • latchkey15 hours ago
              It is not perfect, but it isn&#x27;t that bad anymore. Tons of improvements over the last year.
  • himata411317 hours ago
    exabox reads as if it was making a joke of something or someone. if it&#x27;s real then it&#x27;s really interesting!
  • Buttons84013 hours ago
    Oh, this is geohot&#x27;s product?<p>He&#x27;s an interesting guy. Seems to be one who does things the way he thinks is right, regardless of corporate profits.
  • zahirbmirza17 hours ago
    10 mil today... 1k in 10 years. Are OpenAI and Anthropic overvalued?
    • Gigachad15 hours ago
      Looking at these prices I’m just thinking that as a user it makes no sense to buy this when you can just use the subsidised stuff from AI companies and then buy it a few years later at a tiny % of the cost.
  • p0w3n3d16 hours ago
    Quite expensive little bastard. I wonder how much sense it makes to invest in such a device, if you can get $0.40&#x2F;mtok from Hyperbolic, for example
    • sowbug12 hours ago
      If you&#x27;re OK letting them train on, and maybe keep, your data, then it&#x27;s hard to beat cloud prices vs. local.
  • DeathArrow2 hours ago
    I wonder how much he has sold.
  • agnishom11 hours ago
    Who is the intended customer for this product? I am genuinely curious.
    • moscoe10 hours ago
      Anyone who wants to run&#x2F;train&#x2F;finetune a local llm.<p>“Not your weights, not your brain.”
  • DeathArrow2 hours ago
    Why do I get the impression that I get more bang for the buck by going through OpenRouter? Of course, not everyone can do that, and there are security and other concerns.
  • heinternets19 hours ago
    exabox -<p>720x RDNA5 AT0 XL 25,920 GB VRAM 23,040 GB System RAM<p>~ $10 Million<p>Who is the target market here?
    • LorenDB18 hours ago
      I can&#x27;t find sources but I think they are building it for Comma.ai (geohot&#x27;s other company) so that Comma can scale up their training datacenter.
    • orochimaaru19 hours ago
      And... what about 20k lbs and 1360 cubic feet screams &quot;tiny&quot; :)
      • smoyer18 hours ago
        That is very close to a half-length shipping container.
    • mayukh18 hours ago
      A non-trivial share of this market won’t show up in public data. That makes most estimates unreliable by default
    • spiderfarmer19 hours ago
      VC funded startups
    • dist-epoch17 hours ago
      A company which doesn&#x27;t want the big LLM providers to see its prompts or data - military, health, finance, research
  • andai17 hours ago
    Can someone explain the exabox? They say it &quot;functions as a single GPU&quot;. Is there anything like that currently existing?
    • wmf17 hours ago
      An NVL72 rack or Helios rack also &quot;functions as a single GPU&quot;.
    • progbits17 hours ago
      TPU pods
  • sudo_cowsay18 hours ago
    I always wonder about these expensive products: Does the company make them once it&#x27;s ordered or do they just make them beforehand?
    • wmf14 hours ago
      He builds a batch every few months.
    • cyanydeez16 hours ago
      In this case, they&#x27;re taking wire transfers, so they&#x27;re definitely building them once they get the cash.
  • qubex8 hours ago
    I just backed their TINY on Kickstarter.
    • rick_dalton7 hours ago
      That thing is NOT related to tinybox or tinygrad in any way. It is basically copyright infringement. Unless you’re astroturfing here I suggest you get your money back.
      • qubex7 hours ago
        Wasn’t astroturfing, I’ll look into it, thanks.
        • rick_dalton6 hours ago
          Sorry for even mentioning astroturfing, haha. It’s just because the promotion of the device is based on trying to fool people it was made by tiny corp.
          • qubex6 hours ago
            In my case they apparently succeeded.
  • operatingthetan18 hours ago
    Are we at the point where 2x 9070XT&#x27;s are a viable LLM platform? (I know this has 4, just wondering for myself).
    • oceanplexian18 hours ago
      These things either don’t have Flash Attention or have a really hacked-together version of it. Is it viable for a hobby? Sure. Is it viable for a serious workload with all the optimizations, CUDA, etc.? Not really.
    • cyanydeez16 hours ago
      I&#x27;d go with strix halo if you&#x27;re looking at that old of tech.<p>the latest AMD GPUs are RX 9070 XT w&#x2F;32GB each
  • jgarzik12 hours ago
    Skeptical of their engineering, with replies to questions like this: <a href="https:&#x2F;&#x2F;x.com&#x2F;jgarzik&#x2F;status&#x2F;2031312666036146460?s=20" rel="nofollow">https:&#x2F;&#x2F;x.com&#x2F;jgarzik&#x2F;status&#x2F;2031312666036146460?s=20</a>
    • creddit12 hours ago
      They answered your question with a pretty specific uptime target. Calling it a dodge and then moving the goalposts with a new question as your follow up doesn’t speak to you acting in good faith.
      • scratchyone12 hours ago
        tbh they really didn&#x27;t; tinygrad&#x27;s reply was clearly a joke. they were not providing a real uptime target.
    • potamic11 hours ago
      Can&#x27;t see replies, what did they say?
      • Moduke8 hours ago
        <a href="https:&#x2F;&#x2F;xcancel.com&#x2F;jgarzik&#x2F;status&#x2F;2031312666036146460?s=20" rel="nofollow">https:&#x2F;&#x2F;xcancel.com&#x2F;jgarzik&#x2F;status&#x2F;2031312666036146460?s=20</a>
  • orliesaurus19 hours ago
    I wonder if this is frontpage right now because of the other tiiny (the names are similar) video that went viral ... which turns out wasn&#x27;t an actual product by the tinygrad linked in this post[1]<p>[1]<a href="https:&#x2F;&#x2F;x.com&#x2F;ShriKaranHanda&#x2F;status&#x2F;2035284883384553953" rel="nofollow">https:&#x2F;&#x2F;x.com&#x2F;ShriKaranHanda&#x2F;status&#x2F;2035284883384553953</a>
  • droidjj18 hours ago
    Adding this to my list of ~beautifully~ designed things to buy when I win the lottery.
  • raincole14 hours ago
    How does this thing cool down?
  • ppap317 hours ago
    I thought there was a typo in the price
  • vlovich12319 hours ago
    Surprising to see this with AMD GPUs considering how George famously threw up his hands at AMD, declaring it not worth working with.
    • embedding-shape18 hours ago
      Yeah, and labeling AMD &quot;Driver Quality&quot; as &quot;Good&quot; (for comparison, they label nvidia&#x27;s driver quality as &quot;Great&quot;).
      • lostmsu18 hours ago
        Things changed. On my new Ryzen Strix Halo laptop I was able to run training experiments with PyTorch on Windows day 1: <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=46052535">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=46052535</a>
  • gymbeaux9 hours ago
    $12,000 gets you 1Gb&#x2F;s networking and vanilla Ubuntu 24.04. Napkin math on the hardware it looks like margins are around 50% which feels like a school fundraiser where everyone pays what is obviously way more than normal retail price for X because &quot;it&#x27;s for the children.&quot;<p>I&#x27;m not sure what tinygrad is but I assume the markup is because the customer is making a conscious choice to support the tinygrad project. But what&#x27;s unusual is there is apparently no reason whatsoever to buy this hardware, even if you plan on using tinygrad exclusively for your project. At least with System76 hardware I get (in theory) first class support for Pop!_OS.
  • throwatdem1231118 hours ago
    Finally, a computer that should be able to run Monster Hunter Wilds with decent performance.<p>But let’s be real, 12k is kinda pushing it - what kind of people are gonna spend $65k or even $10M (lmao WTAF) on a boutique thing like this. I don’t think these kinds of things go in datacenters (happy to be corrected) and they are way too expensive (and probably way too HOT) to just go in a home or even an office “closet”.
    • oofbey18 hours ago
      It’s not for people to buy. It’s for companies to buy. Compare to salary, and it’s cheap.
      • aziaziazi18 hours ago
        &gt; What&#x27;s the goal of the tiny corp? To accelerate. We will commoditize the petaflop and enable AI for everyone.<p>I had the same feeling as throwatdem when reading this. Your comment clarifies what they meant by &quot;everyone&quot;.
      • throwatdem1231118 hours ago
        What companies are buying this instead of like a Dell server or whatever?
        • flumpcakes17 hours ago
          These specs look enormously cheaper than doing it with dell servers. The last quote I had for a bog standard dell server was $50k and only if bought in the next few days or so. The prices are going up weekly.
          • throwatdem1231117 hours ago
            So what’s the catch? If it seems too good to be true it probably is.
            • wmf14 hours ago
              These are &quot;unsupported&quot; configurations. Nvidia&#x2F;AMD discourage running multiple gaming&#x2F;workstation cards and encourage customers to buy $500K SXM&#x2F;OAM servers.
      • lostmsu18 hours ago
        Hm, I compared my salary with $10M and it doesn&#x27;t feel cheap. I guess skill issue.
        • throwatdem1231118 hours ago
          But how will I make ad-supported youtube videos about how I automated my life with OpenClaw using a $10M boutique AI server to make a few thousand in ad revenue while burning tens of thousands per month on API cost.
  • mememememememo13 hours ago
    Give me token&#x2F;s for favourite models.
  • rpastuszak16 hours ago
    Who is this for?
  • kylehotchkiss12 hours ago
    Meanwhile M-series processors and Qwen are racing to do the same thing for a much more approachable price.
  • arunakt12 hours ago
    Great idea, can you publish the power consumption units for this device
  • renewiltord16 hours ago
    I have 8x RTX 6000 Pro. Better to run the 300 W version of the cards. And it costs close to their 4x version. I get why they make it so big. So you can cool it at home. I prefer to just put in datacenter. Much cheaper power.
  • aabaker9917 hours ago
    &gt; Can I pay with something besides wire transfer? In order to keep prices low and quality high, we don&#x27;t offer any customization to the box or ordering process. Wire transfer is the only accepted form of payment.<p>Sorry, what? Is this just a scam?
    • 10100817 hours ago
      Wire transfer has no commission or extra costs associated with it, so I find it very honest.
    • ejpir17 hours ago
      man, cmon. a little more effort.
      • aabaker9917 hours ago
        Sure thing. For those who don’t know, wiring money like this is a good way to lose your money.<p><a href="https:&#x2F;&#x2F;consumer.ftc.gov&#x2F;articles&#x2F;what-know-you-wire-money" rel="nofollow">https:&#x2F;&#x2F;consumer.ftc.gov&#x2F;articles&#x2F;what-know-you-wire-money</a>
        • metadata16 hours ago
          Wire transfer is a bank transfer, not money wire to Western Union and like.
          • aabaker9916 hours ago
            Yeah I agree the FTC article could be more clear here. I think they call out Western Union because those are tools that are commonly used by scammers.<p>But let’s be clear: the risks are the same whether you are wiring money through Western Union or through any other bank. Once you wire the money you do not have the same protections as with other payment mechanisms. And if you don’t get the product as described, you are likely out your money. This is compared to other forms of payment like credit cards, where you are protected. With a credit card you can issue a chargeback to the seller and get your money back in the case of fraud. With a wire transfer you cannot.
  • jauntywundrkind19 hours ago
    My interest in anything associated with geohot took a colossal nose dive today after seeing this post against democracy, quoting frelling M*ncius M*ldbug: <i>Democracy is a Liability.</i> <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=47469543">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=47469543</a> <a href="https:&#x2F;&#x2F;geohot.github.io&#x2F;&#x2F;blog&#x2F;jekyll&#x2F;update&#x2F;2026&#x2F;03&#x2F;21&#x2F;democracy-liability.html" rel="nofollow">https:&#x2F;&#x2F;geohot.github.io&#x2F;&#x2F;blog&#x2F;jekyll&#x2F;update&#x2F;2026&#x2F;03&#x2F;21&#x2F;demo...</a><p>There&#x27;s a lot there that makes sense &amp; I think needs to be considered. But a lot just seems to come out of the blue, included without connection, in my view. Feels like maybe they are in-group messages that I don&#x27;t understand. How this is framed as against democracy is unclear to me, and revolting. I do think we must grapple with the world as it is, and this post is strongly in that area, but letting fear be the dominant ruling emotion is one of the main definitions of conservatism, and its use here to scare us sounds bad.
    • kelvinjps1018 hours ago
      He was always defending democracy and freedom before, and that was his argument for the local AI thing? What changed?
    • yukIttEft7 hours ago
      He had a video on Youtube where he proudly gloated about how he voted for Trump in not one but two elections, how happy he is that he can now openly talk about it, how its a fresh start for US, how catastrophic Harris would have been.<p>Did he take down the video because of embarrassment or did he fear negative impact on his sales?
    • fragmede18 hours ago
      Damn, that&#x27;s a take.
    • tadfisher18 hours ago
      For those unaware, Mencius Moldbug is the pen name of Curtis Yarvin, thought leader for the Silicon Valley branch of right-wing technofascist weirdos which includes Peter Thiel and apparently half of a16z.
    • pencilheads18 hours ago
      Geohot has always been an arrogant cunt who thinks he&#x27;s better than everyone else. That blog post is totally on brand.
    • stale200218 hours ago
      Geohot&#x27;s politics are fairly straightforward once you understand his background. Geohot is the prodigy child who, at the age of ~16, accomplished amazing technical feats on his own.<p>And his politics are a derivative of Great Man Theory, and his positions on things like democracy follow from that. This idea, and those espoused by some of the VC&#x2F;tech elite like Peter Thiel, is that singular hardworking genius individuals can change the world on their own, and everyone who is not in this top 0.1% are borderline NPCs.<p>They do this both because of their genius&#x2F;hard work, and also because they are willing to break the rules that are set forth by this bottom 99.9%.<p>I&#x27;m starting to call this ideology Authoritarian techno-Libertarianism. It&#x27;s a deliberately oxymoronic name that I use, because these &quot;Great Men&quot; are <i>definitely</i> trying to change the world. IE, they are trying to impose their goals and values on the world without getting the buy-in of other people.<p>That&#x27;s the &quot;authoritarian&quot; part. And then the &quot;libertarian&quot; part is that they are going about this imposition of their will on the world by doing it all themselves, through their own hard work.<p>Think &quot;Person invents a world-changing technology that some people think is bad, and just releases it open source for anyone to use&quot;. AI models are a great example, in fact. Once that technology is out there the genie cannot be put back into the bottle, and a ton of people are going to lose their jobs, etc.<p>A disdain for democracy follows directly from things like this. You don&#x27;t wait for people to vote to allow you to change the world by inventing something new. You just <i>do</i> and watch the results.
      • overfeed15 hours ago
        &gt; also because they are willing to break the rules that are set forth by this bottom 99.9%[...] they are going about this imposition of their will on the world by doing it all themselves, through their own hard work.<p>I think all these wildly successful neo-feudalists get increasingly emboldened the more they get away with bigger and bigger social infractions.<p>It&#x27;s also clear that they haven&#x27;t experienced an environment with extreme inequality - it&#x27;s not safe for <i>anyone</i> there! They think the NPC plebs will continue to follow &quot;the rules&quot; <i>in perpetuum</i> without considering that doing so is a direct result of the stability they are actively undermining. They clearly don&#x27;t read enough history.
      • SilverElfin17 hours ago
        What makes it “Libertarianism” still? To me it feels like they’re taking away freedom, control, and influence from everyone who is not them. Even the concentration of wealth is itself taking away everyone else’s places in the world.
        • LogicFailsMe15 hours ago
          Scratch a libertarian and a fascist bleeds libertarianism here, no?
  • insane_dreamer11 hours ago
    Is this real? Reads like a joke. They sell a $12K machine, a $60K machine, and a $10M machine???
    • wmf10 hours ago
      Nvidia has $4K DGX Spark, $120K DGX Station, $500K DGX, and $7M NVL72.
  • flykespice17 hours ago
    &quot;tiny&quot; and it&#x27;s 20k lbs and costs about 10k...<p>Since when did our perception of tiny blow out of size in tech? Is it the influence of &quot;hello world&quot; Electron apps consuming 100mb of mem while idle setting the new standard? Anyway, being an AI bro seems like an expensive hobby...
  • Yanko_116 hours ago
    [dead]
  • WWilliam12 hours ago
    [dead]
  • jee59910 hours ago
    [flagged]
  • chloecv7 hours ago
    [dead]
  • jee59911 hours ago
    [flagged]
  • EruditeCoder1083 hours ago
    [dead]
  • caijia13 hours ago
    [flagged]
  • aplomb102616 hours ago
    [dead]
  • baibai00898917 hours ago
    [dead]
  • Heer_J18 hours ago
    [dead]
  • pink_eye18 hours ago
    [flagged]
  • fhn17 hours ago
    &quot;but if you haven&#x27;t contributed to tinygrad your application won&#x27;t be considered&quot; this company expects people to work for free?
    • paxys17 hours ago
      &gt; See our bounty page to judge if you might be a good fit. Bounties pay you while judging that fit.<p>Literally the line above that
      • roarcher13 hours ago
        They MIGHT pay you IF you&#x27;re a fit. They&#x27;re bounties, i.e. spec work. They also pay a max of $1000, most of them significantly less. You can see more info at the link in that line:<p>&gt; All bounties paid out at my (geohot) discretion. Code must be clean and maintainable without serious hacks.<p>No thanks. If you want to try before you buy, have your candidates do a paid test project. Founders need to stop acting like it&#x27;s a privilege to work for them. Any talent worth hiring has plenty of other options that will treat them with respect.