Curious to know who will spend this much money without external funding? Would you spend any VC invested money into this nameless brand? Are there any guardrails or clauses to protect the kind of expenses?
There's no way the red v2 is doing anything with a 120b parameter model. I just finished building a dual a100 ai homelab (80gb vram combined with nvlink). Similar stats otherwise. 120b only fits with very heavy quantization, enough to make the model schizophrenic in my experience. And there's no room for kv, so you'll OOM around 4k of context.<p>I'm running a 70b model now that's okay, but it's still fairly tight. And I've got 16gb more vram than the red v2.<p>I'm also confused why this is 12U. My whole rig is 4u.<p>The green v2 has better GPUs. But for $65k, I'd expect a much better CPU and 256gb of RAM. It's not like a threadripper 7000 is going to break the bank.<p>I'm glad this exists but it's... honestly pretty perplexing
What models are you testing? A 120b model with hybrid attention should fit within 80gb of VRAM fine at a 4-bit quant. Also, 4-bit quants that are done well are generally fine. They certainly don’t make the model unusable.
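For a rough sense of the footprint involved, here is a napkin-math sketch (not a benchmark; the layer dimensions are made-up round numbers, not any particular model's config):
<pre><code>
# Rough weight footprint for a 120B-parameter model at different quant widths,
# plus a toy KV-cache estimate. All dimensions below are illustrative assumptions.
PARAMS = 120e9
for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{PARAMS * bits / 8 / 1e9:.0f} GB")

# KV cache per token ~ 2 (K and V) * layers * kv_heads * head_dim * bytes.
# Hypothetical dims: 64 layers, 8 KV heads (GQA), head_dim 128, fp16 cache.
layers, kv_heads, head_dim, bytes_per = 64, 8, 128, 2
kv_per_token = 2 * layers * kv_heads * head_dim * bytes_per
for ctx in (4_096, 32_768):
    print(f"KV cache @ {ctx} tokens: ~{kv_per_token * ctx / 1e9:.1f} GB")
</code></pre>
With those assumed dims, 4-bit weights land around 60 GB and KV stays in the single-digit GB range even at long context, which is why a well-made 4-bit quant can be workable in 80 GB; an 8-bit quant plainly isn't.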
It will work fine but it’s not necessarily insane performance. I can run a q4 of gpt-oss-120b on my Epyc Milan box that has similar specs and get something like 30-50 Tok/sec by splitting it across RAM and GPU.<p>The thing that’s less useful is the 64G VRAM/128G System RAM config; even the large MoE models only need 20B for the router, the rest of the VRAM is essentially wasted (mixing experts between VRAM and/or system RAM has basically no performance benefit).
Could you share what you are using for inference and how you are running it? I have a 64G VRAM/128G system RAM setup.
Yeah I've got the q4 gpt-oss-120b running at ~40-60 tokens per second on an M5 Pro.
Split RAM and GPU impacts it more than you think. I would be surprised if the red box doesn’t outperform you by 2-3X for both PP and TG
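A toy model of why the split hurts so much (bandwidth numbers are assumptions, and this ignores prefill, compute, overlap, and MoE sparsity):
<pre><code>
# Decode is roughly memory-bandwidth bound: each token touches every resident
# weight once, so the slow memory dominates as soon as a meaningful fraction
# of the weights lives in system RAM. Bandwidths below are assumed, not measured.
def tok_per_s(active_gb, frac_in_vram, vram_bw=900, sys_bw=60):
    # seconds per token = GB read from VRAM / VRAM GB/s + GB read from RAM / RAM GB/s
    t = active_gb * frac_in_vram / vram_bw + active_gb * (1 - frac_in_vram) / sys_bw
    return 1 / t

active = 60  # hypothetical GB touched per token (dense 120B-class model at 4-bit)
for frac in (1.0, 0.8, 0.5):
    print(f"{frac:.0%} in VRAM: ~{tok_per_s(active, frac):.1f} tok/s")
</code></pre>
Even 20% of the weights spilling to system RAM cuts the toy estimate by ~4x, which is the shape of the 2-3X claim; sparse MoE models blunt the effect because far fewer bytes are touched per token.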
> I'm also confused why this is 12U. My whole rig is 4u.<p>I imagine that's because they are buying a single SKU for the shell/case. I imagine their answer to your question would be: <i>In order to keep prices low and quality high, we don't offer any customization to the server dimensions</i>
Was that cheaper than a Blackwell 6000?<p>But yeah, 4x Blackwell 6000s are ~32-36k, not sure where the other $30k is going.
I bought the A100s used for a little over $6k each.
folks have more money than sense, gpt-oss-120b full quant runs on my quad 3090 at 100tk/sec and that's with llama.cpp, with vllm it will probably run at 150tk/sec and that's without batching.
Thanks for chiming in. I'm looking for a reasonably cheap local LLM machine, and multiple 3090s is exactly what I planned to buy. Do you have any recommendations or recommend any reading material before I decide to spend money on that?<p>edit: Found your comment about /r/localllama, but if you have anything more to add I'm still very interested.
You're almost certainly (definitely, in fact) confusing the 120b and 20b models.
> gpt-oss-120b full quant runs on my quad 3090<p>A 120B model cannot fit on 4 x 24GB GPUs at full quantization.<p>Either you're confusing this with the 20B model, or you have 48GB modded 3090s.
How're you fitting a model made for 80 gig cards onto a GPU with 24 gigs at full quant?
He said quad 3090 not single
Offloading the MoE layers to the CPU is the easiest way, though it's a bit of a drag on performance
> There's no way the red v2 is doing anything with a 120b parameter model.<p>I don't see the 120B claim on the page itself. Unless the page has been edited, I think it's something the submitter added.<p>I agree, though. The only way you're running 120B models on that device is either extreme quantization or by offloading layers to the CPU. Neither will be a good experience.<p>These aren't a good value buy unless you compare them to fully supported offerings from the big players.<p>It's going to be hard to target a market where most people know they can put together the exact same system for thousands of dollars less and have it assembled in an afternoon. RTX 6000 96GB cards are in stock at Newegg for $9000 right now which leaves almost $30,000 for the rest of the system. Even with today's RAM prices it's not hard to do better than that CPU and 256GB of RAM when you have a $30,000 budget.
> And there's no room for kv, so you'll OOM around 4k of context.<p>Can't you offload KV to system RAM, or even storage? It would make it possible to run with longer contexts, even with some overhead. AIUI, local AI frameworks include support for caching some of the KV in VRAM, using a LRU policy, so the overhead would be tolerable.
Not worth it. It is a very significant performance hit.<p>With that said, people are trying to extend VRAM into system RAM or even NVMe storage, but as soon as you hit the PCI bus with the high bandwidth layers like KV cache, you eliminate a lot of the performance benefit that you get from having fast memory near the GPU die.
> With that said, people are trying to extend VRAM into system RAM or even NVMe storage<p>Only useful for prefill (given the usual discrete-GPU setup; iGPU/APU/unified memory is different and can basically be treated as VRAM-only, though a bit slower) since the PCIe bus becomes a severe bottleneck otherwise as soon as you offload more than a tiny fraction of the memory workload to system memory/NVMe. For decode, you're better off running entire layers (including expert layers) on the CPU, which local AI frameworks support out of the box. (CPU-run layers can in turn offload to storage for model parameters/KV cache as a last resort. But if you offload too much to storage (insufficient RAM cache) that then dominates the overhead and basically everything else becomes irrelevant.)
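To put rough numbers on the PCIe penalty (a toy estimate with assumed bandwidths, ignoring overlap and partial caching):
<pre><code>
# Every decoded token re-reads the whole resident KV cache. If that cache sits
# on the far side of the PCIe bus, the bus sets the ceiling. Figures are
# assumptions: ~900 GB/s on-card vs ~25 GB/s effective PCIe 4.0 x16.
kv_gb = 8  # hypothetical KV cache size for a long context
for name, bw_gb_s in (("on-card VRAM", 900), ("system RAM over PCIe", 25)):
    print(f"KV in {name}: ~{kv_gb / bw_gb_s * 1000:.0f} ms per token just for KV reads")
</code></pre>
Under those assumptions the PCIe case alone caps you at roughly 3 tok/s before the weights are even touched, which is why offloading whole layers to the CPU usually beats streaming KV over the bus.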
The performance already isn't spectacular with it running all in vram. It'll obviously depend on the model: MoE will probably perform better than a dense model, and anything with reasoning is going to take _forever_ to even start beginning its actual output.
I know llama.cpp can, it certainly improved performance on my RAM-starved GPU.
Honestly two rtx 8000s would probably have a better return on investment than the red v2. I have an eight gpu server, five rtx 8000, three rtx 6000 ada. For basic inference, the 8000s aren't bad at all. I'm sure the green with four rtx pro 6000s are dramatically faster, but there's a $25k markup I don't honestly understand.
There's some irony in the fact that this website reads as extremely NOT AI-generated, very human in the way it's designed and the tone of its writing.<p>Still, this is a great idea, and one I hope takes off. I think there's a good argument that the future of AI is in locally-trained models for everyone, rather than relying on a big company's own model.<p>One thought: The ability to conveniently get this onto a 240v circuit would be nice. Having to find two different 120v circuits to plug this into will be a pain for many folks.
I find that the most respected writing <i>about</i> AI has very few signs of being written <i>by</i> AI. I'm guessing that's because people in the space are very sensitive to the signs and signal vs. noise.
And because people writing anything worth reading are using the process of writing to form a proper argument and develop their ideas. It’s just not possible to do that by delegating even a small chunk of the work to AI.
I found it useful to preface with<p>* this section written by me typing on keyboard *<p>* this section produced by AI *<p>And usually both exist in documents and lengthy communications. This gets my point across exactly as I intended, and then I can attach a 10x-length AI appendix that serves as helpful indexing and references.
Good? That's what I want out of all websites. I don't want to read what an AI believes is the best thing for a website, I want to know the honest truth.
I don't view this as irony. This seems like good sense in understanding when AI usage will make things better and when it will not.
I am a little surprised that they openly solicit code contributions with "Invest with your PRs" but don't have any statement on AI contributions.<p>Maybe the volume for them is ok that well-intentioned but poor quality PRs can be politely(or otherwise, culture depending) disregarded and the method of generation is not important.
Tinygrad sure shared a few opinions on AI PRs on Twitter. I believe the gist was "we have Claude code as well, if that's all you bring don't bother".
That's a pretty excellent take, IMO. Just an undirected AI model doesn't do much, especially when the core team has time with the code, domain expertise, _and_ Claude.
I'm starting to think that if you have an AI repo that's basically about codegen, you should just close all issues automatically, then manually (or whatever) open the ones you/maintainers actually care about. That's about the only way to deal with the signal/noise problem AIs are creating.<p>Then you could focus fire, like the script kiddies did with DDoS in the old days, on fixing whatever issues you prefer.
If you’re spending $65,000 on this thing, needing two circuits seems like a minor problem
they could have gone with the Max-Q version of the RTX PRO 6000 and only require a 120V circuit. 10% performance hit, but half the power.<p>fundamentally, it looks like they are shipping off-the-shelf consumer hardware in a custom box.
Yeah, the other big benefit is that the Max-Q's have blowers that exhaust the hot air out of the box, the workstation cards would each blow their exhaust straight into the intake of the card behind it. The last card in that chain would be cooking, as the air has already been heated up by 1800W, essentially a hair dryer on high.<p>Or could be the server edition 6000s that just have a heatsink and rely on the case to drive air through them, those are 600W cards.
The $12,000 one also requires it.
Easier to get two circuits than rewire a breaker in an office you might be renting, no?<p>(I work for an electrical contractor so my sense of ease might be overcorrecting)
Surprisingly affordable but I’m not really interested in the 9070XT.<p>If it shipped with like 4090+ (for a higher price) it’d be more tempting.
They offered a version a few months ago with 4x5090 for 25k<p><a href="https://x.com/__tinygrad__/status/1983917797781426511" rel="nofollow">https://x.com/__tinygrad__/status/1983917797781426511</a><p>Stopped due to rising GPU prices:<p><a href="https://x.com/__tinygrad__/status/2011263292753526978" rel="nofollow">https://x.com/__tinygrad__/status/2011263292753526978</a>
The 9070 XT provides roughly the same inference performance as the RTX PRO 4500, at double the power and half the cost. So this one is optimized for total BOM cost.
The specs show that it only has one PSU. The docs just say that it has 2 and thus needs two circuits, but I’d guess that was meant to be for the more expensive one.
Big companies are pushing cloud really hard, and yeah, hardware prices are a problem too. People still buy Google Cloud and OneDrive when they could literally pick up an old computer from the trash and Frankenstein it into a NAS server.
If I'm spending at least 12k USD on the machine then doing some electrical works to accommodate it is not a big deal.
"locally-trained models for everyone"<p>Wouldn't there be a massive duplication of effort in that case? It'll be interesting to see how the costs play out. There are security benefits to think about as well in keeping things local-first.
When you’re dealing with this kind of power it’s easier just to colocate where you’ll typically get two separate feeds of 208v
A typical U.S. 240V circuit is actually just two 120V circuits. Fairly trivial to rewire for that.
It's more accurate to say that the typical 120V circuit is just a 240V source with the neutral tapped into the midpoint of the transformer winding.
Yes, if you have a 240V US split-phase circuit you could make a little sub-panel with a 40A breaker feeding two 20A 120V circuits and plug the two power supplies into each side. (1600W would need a 20A breaker because 13.3A would be too much for a 15A circuit). But it would probably make more sense to just plug them both into the same 40A 240V circuit. If you use NEMA 6-20, make sure you label it appropriately and probably color it red.<p>In Europe, you could plug the two power supplies into an appropriately sized 240V circuit.<p>In an apartment you can't rewire, you could set it up in your kitchen, which in the modern US code should have two separate 20A circuits. You will need to put it to sleep while you use appliances.
A US circuit is.<p>But this is re: European 240/250, which is 240 between live and neutral.<p>I’d say don’t energize either system's ground plane, but, really, don’t do this in the EU
I think you're forgetting the wires? If you have one outlet with a 15-20A 120V circuit, then the wiring is almost certainly rated for 15-20A. If you just "combined" two 120V circuits into a 240V circuit, you still need an outlet that is rated for 30A, the wires leading to it also need to be rated for 30A, and it definitely needs a neutral. So you still need a new wire run if you don't have two 120V circuits right where you wanna plug in the box. To pass code you also may need to upsize conduit. If load is continuously near peak, it should be 50A instead of 30.<p>So basically you need a brand new circuit run if you don't have two 120V circuits next to each other. But if you're spending $65k on a single machine, an extra grand for an electrician to run conduit should be peanuts. While you're at it I would def add a whole-home GFCI, lightning/EMI arrestor, and a UPS at the outlet, so one big shock doesn't send $65k down the toilet.
Correct me if I’m wrong, but doubling the volts doesn't change the amps, it doubles the watts. Watts = V*A.
Yes; I assumed 30A was minimum requirement for 240V service in US. Apparently I was wrong, 20A 240V is apparently normal. So in theory you could use a pre-existing 20A 120V circuit's wiring for a 240V (assuming it was 12/3 cable). And apparently 4-wire is now the standard for 240V service in US? Jesus we have a weird grid.
Doubling the volts halves the amps. P = I * V indeed.
I think you might've misread GP. (Or maybe I did?)<p>He's not saying you would use it as two separate 120v circuits sharing a ground but rather as a single 240v circuit. His point is that it's easy to rewire for 240v since it's the same as all the other wiring in your house just with both poles exposed.<p>Of course you do have to run a new wire rather than repurpose what's already in the wall since you need the entire circuit to yourself. So I think it's not as trivial as he's making out.<p>But then at that wattage you'll also want to punch an exhaust fan in for waste heat so it's not like you won't already be making some modifications.
The wiring (at least in the US) to the 120V outlets is just one half of the split-phase 240V. If you want to send 240V down a particular wire, you can do that, by changing the breaker, but then you lose the neutral. You also make the wires dangerous to people who don't realize that the white wire is now energized at 120V over ground. (Though it's best to test to be sure anyway, as polarity gets reversed by accident, etc.) Live wires should be black or red.
I’ve actually had half of my dryer outlet fail when half of the breaker failed.<p>Can confirm.
Sometimes. 240V circuits may or may not have a neutral.
If you actually use two 120V circuits that way and one breaker flips the other half will send 120V <i>through the load</i> back into the other circuit. So while that circuit's breaker is flipped <i>it is still live</i>. Very bad. Much better to use a 240V breaker that picks up two rails in the panel.
They make connected circuit breakers for this use case, where one tripping automatically trips both.
I assume the device has two separate PSUs, each of which accepts 120-240V, and neither of which will backfeed its supply.
i am guessing, without any proof, that when one breaker fails the server either loses everything or loses two GPUs, depending on whether the one connected to the cpu side failed.
3200W at ~240V is ~15A, that's just a regular household socket, at least in Europe. I imagine 240V sockets in the US are at least 15A.<p>No need for separate circuits, just use a double adapter.
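The nameplate math, for reference (a trivial sketch; PSU efficiency and continuous-load derating ignored):
<pre><code>
# Current draw for a 3200 W nameplate load at common supply voltages.
watts = 3200
for volts in (240, 230, 120):
    print(f"{volts} V: {watts / volts:.1f} A")
</code></pre>
So at 230-240 V it's about 13-14 A and fits on one ordinary European circuit; at 120 V it's roughly 27 A, hence the two-circuit requirement in the US.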
Why is hn so obsessed with whether something is _written_ by ai or not? Who cares? Judge content, not form.<p>Oh wait, I get it, it's bike shedding.
The exabox is interesting. I wonder who the customer is; after watching the Vera Rubin launch, I cannot imagine deciding I wanted to compete with NVIDIA for hyperscale business right now. Maybe it’s aiming at a value-conscious buyer? Maybe it’s a sensible buy for a (relatively) cash-strapped ML startup; actually I just checked prices, and it looks like Vera Rubin costs half for a similar amount of GPU RAM. I’m certain that the interconnect will not be as good as NV’s.<p>I have no idea who would buy this. Maybe if you think Vera Rubin is three years out? But NV ships, man, they are shipping.
Sometimes you can compete with the big boys simply because they built their infra 5+ years ago and it’s not economically viable for them to upgrade yet, because it’s a multi-billion dollar process for them. They can run a deficit to run you out of the business, but if you’re taking less than 0.01% of their business, I doubt they’d give a crap.
> The exabox is interesting.<p>Can it run Crysis?
Only gamers understand that reference<p>-- Jensen Huang
Probably, the rdna5 can do graphics. But it would be a huge waste, since you could probably only use one of the 720 GPUs
Yes, it can generate Crysis with diffusion models at 60 fps.
The problem with all these "AI box" startups is that the product is too expensive for hobbyists, and companies that need to run workloads at scale can always build their own servers and racks and save on the markup (which is substantial). Unless someone can figure out how to get cheaper GPUs & RAM there is really no margin left to squeeze out.
Would a hedge fund that does not want to trust to a public AI cloud just buy chassis, mobos, GPUs, etc, and build an equivalent themselves? I suspect they value their time differently.
They’re kickstarting a TINY device that is pocketable and aimed at consumers. I’ve backed it (full disclosure).
i think the real gap isn't at the high end tho. there's a whole segment of people who just want to run a 7-8b model locally for personal use without dealing with cloud APIs or sending their data somewhere. you don't need 4 GPUs for that, a jetson or even a mini pc with decent RAM handles it fine. the $12k+ market feels like it's chasing a different customer than the one who actually cares about offline/private AI
$12,000 for the base model is insane. I have an Apple M3 Max with 128GB RAM that can run 120B parameter models using like 80 watts of electricity at about 15-20 tokens/sec. It's not amazing for 120B parameter models but it's also not 12 grand.
M3 max tflops is tiny compared to the 12k box. It's not even comparable.
It is very comparable if you work out the $/tok/s on inference. I did some napkin math and it looks like you’re getting roughly 3x the performance for 3x the cost. Red v2 vs Mac Studio M3 Ultra 96GB.<p>If you compare tokens/kWh efficiency then my math has Mac Studio being about 1.5x more efficient.
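The formula for that comparison is simple enough to write down; the throughput, power, and price numbers below are placeholders for illustration only, not measurements, so substitute your own:
<pre><code>
# Template for the napkin math: dollars per (token/sec) and tokens per kWh.
# Every figure here is a placeholder assumption; plug in measured values.
systems = {
    "red v2":          {"price": 12_000, "tok_s": 45, "watts": 1600},
    "Mac Studio 96GB": {"price":  4_000, "tok_s": 15, "watts":  120},
}
for name, s in systems.items():
    print(f"{name}: ${s['price'] / s['tok_s']:.0f} per tok/s, "
          f"{s['tok_s'] * 3600 / (s['watts'] / 1000):,.0f} tok per kWh")
</code></pre>
The interesting part is less the exact numbers than that the two metrics can easily point in opposite directions: a box can win on $/tok/s while losing badly on tokens per kWh.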
M3 has tolerable decode performance for the price, and that's what people would care about most of the time. it underperforms severely wrt. prefill, but that's a fraction of the workload. AI, even agentic AI, spends most of its time outputting tokens, not processing context in bulk.
it's for fools. i bought 160gb of vram for $1000 last year. 96gb of p40 VRAM can be had for under $1000. And it will run gpt-oss-120b Q8 at probably 30tk/sec
The P40 is a Pascal-architecture Tesla card that is no longer receiving driver or CUDA updates. And it's only available as used hardware. Fine for hobbyists, startups, and home labs, but there is likely a growing market of businesses too large to depend on used gear from ebay, but too small for a full rack solution from Nvidia. Seems like that's who they're targeting.
> In order to keep prices low and quality high, we don't offer any customization to the box or ordering process. If you aren't capable of ordering through the website, I'm sorry but we won't be able to help.<p>Has this guy never worked on a B2B product before? Nobody is going to order a $10 million piece of infrastructure through your website's order form. And they are definitely going to want to negotiate <i>something</i>, even if it's just a warranty. And you'll do it because they're waving a $10 million check in your face.<p>The tone of this website is arrogant to the point of being almost hostile. The guy behind this seems to think that his name carries enough weight to dictate terms like this, among other things like requiring candidates to have already contributed to his product to even be considered for a job. I would be extremely surprised if anyone except him thinks he's that important.
I haven’t seen tinygrad used for any mainstream production project or thing of value, yet.<p>Besides a lot of self congratulatory pats on the back for how elegant it is. Honestly, when I read it, it looked confusing as all the other ML libraries. Not actually simple like Karpathy’s stuff.<p>All that to say, I do really want it to succeed. They should probably hire some practical engineers and not just guys and gals congratulating themselves how elegant and awesome they are.
Your framing of this section is misleading. On the site it's preceded by a FAQ-style 'question':<p>> <i>Can you fill out this supplier onboarding form?</i><p>That's very important context, as anyone who has been asked to fill out a supplier onboarding form (hi) will attest.
Filling out an onboarding form is an <i>example</i> of what he's not willing to do, not the only thing he isn't willing to do.<p>> we don't offer any customization to the box or ordering process<p>Every B2B deal of that size that I've ever seen requires at least weeks of meetings between the customer and vendor, in which every detail is at least discussed if not negotiated. That would certainly constitute a "customization" to this guy's prescribed ordering process, which is to "Buy it now" [1] through the website at the stated price like you're ordering a jar of peanuts on Amazon. This is not "framing", it's what the guy said. If it isn't what he meant then he needs to fix his copy.<p>[1] Yes, there is an actual "Buy it now" button for a $65,000 business purchase that takes you to a page that looks just like a Stripe form. There isn't even a textbox for delivery instructions. Wild.
Then if they succeed, I guess you're going to see a different process for the first time in your life.<p>On a website where we frequently talk about disruptive business models, this whole attitude kinda stinks.
> Then if they succeed, I guess you're going to see a different process for the first time in your life.<p>Sure, I guess. Far more likely that they won't succeed, and it will be because of their pointless refusal to cooperate with others. I'm curious why you think we should "disrupt" companies putting a little due diligence into massive purchases.<p>> On a website where we frequently talk about disruptive business models, this whole attitude kinda stinks.<p>I could say the same thing about making a comment like this on a website where groupthink is rightfully mocked.
> you're going to see a different process for the first time in your life<p>That sounds very neutral, but wouldn't this, by removing the human element and flexibility from business transactions, be a further step along a general enshittification trend?
> arrogant to the point of being almost hostile<p>First encounter with geohot eh?
He's not actually selling the exabox yet. It sounds like he put up a hypothetical config to see if anyone is interested.
There isn't a $10MM device right now, just $65k and under. I doubt the order process will remain the same in 12 months when the $10MM device becomes available
The specs for the “exabox” scream “this is a joke” to me.<p>> 20,000 lbs<p>> concrete slab<p>Huge-scale IT systems are typically delivered in one or more 42/44u cabinets, and are designed to be installed on raised floors.
It's a shipping container. Look at the dimensions. They say concrete slab probably half as a joke, half because building code would require it to consider it a non-temporary structure.
It's a shipping container that you install outdoors.
It's also funny that they explicitly list driver quality as "good" for the base option and "great" for the intermediate one. You're really going to deliberately provide worse drivers for the machine I paid you for, just because I didn't buy the more expensive one?<p>I mean I'm sure lots of companies do this in practice because tickets for higher-paying customers naturally get prioritized, but directly stating your intention to do it on your home page is hilarious.
Nvidia drivers are better than AMD. It's not really something they have control over. Geohot is definitely obsessed with bitching about driver bugs though.
That may be, but then it's an inside joke that many of his customers won't get. It just looks like a "fuck you" to anyone buying the cheaper system.<p>This guy desperately needs a marketing intern to look over his copy. Or hell, anyone who knows how to talk to humans.
I took that as a dig against AMD vs Nvidia driver quality.
I guess it is called ‘honesty’.
> arrogant to the point of being almost hostile.<p>The YouTube rap video of geohotz telling Sony lawyers suing him to blow him is still up.<p>His style of dealing with corporate matters is certainly unconventional
I imagine that the FAQ might get updated when there’s actually a $10M machine for sale
Where is the 120B documented? This seems to be an editorialized title.<p>Edit: found a third party referencing the claim but it doesn't belong in the title here I think:<p><i>Meet the World’s Smallest ‘Supercomputer’ from Tiiny AI; A Machine Bold Enough to Run 120B AI Models Right in the Palm of Your Hand</i><p><a href="https://wccftech.com/meet-the-worlds-smallest-supercomputer-a-machine-bold-enough-to-run-120b-ai-models/" rel="nofollow">https://wccftech.com/meet-the-worlds-smallest-supercomputer-...</a>
Tinybox is cool but I think the market is maybe looking more for a turn-key explicit promise of some level of intelligence @ a certain Tok/s like "Kimi 2.5 at 50Tok/s".
Is this like the new equivalent of crypto mining? I remember the early days when they would sell hardware for farming crypto, now it’s AI?
Perhaps this company should think about acting as a landlord for their hardware. You buy (or lease) but they also offer colocation hosting. They could partner with crypto miners who are transitioning to AI factories to find the space and power to do this. I wonder if the machines require added cooling, though, in what would otherwise be a crypto mining center. CoreWeave made the transition and also do colocation. The switchover is real.<p>I think Tinygrad should think about recycling. Are they planning ahead in this regard? Is anyone?
My thought is that if there were a central database of who owns what and where, then at least when the recycling tech becomes available, people will know where to source their specific trash (and even pay for it). Having a database like that in the first place could even fuel the industry.
IDK, I feel it’s quite overpriced, even with the current component prices.<p>I'm almost sure it’s possible to custom build a machine as powerful as their red v2 within a 9k budget. And have a lot of fun along the way.
The incremental price increases between products is funny.<p>$12,000, $65,000, $10,000,000.
I was more worried by the 600kW power requirement... that's 200 houses at full load (3kw) in southern europe... which likely means 400 houses at half load.<p>the town near my hometown has 650 – 800 houses (according to chatgpt).<p>crazy.
Or it's two 300kW fast EV chargers working together.<p>A typical home just consumes rather little energy, now that LED lighting and heat pump cooling / heating became the norm.
I think the above commenter is reflecting on the total energy use from having a 600KW load running 24/7. I suppose the more interesting observation is the 14 MWh of daily consumption, enough to charge 100 Rivians every day.
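The arithmetic, using the 3 kW-per-house figure from the comment above (a back-of-envelope sketch, nothing more):
<pre><code>
# Daily energy for a constant 600 kW load, and the rough household equivalent.
load_kw, house_kw = 600, 3
print(f"{load_kw * 24 / 1000:.1f} MWh/day; ~{load_kw / house_kw:.0f} houses at full load")
</code></pre>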
> and heat pump cooling / heating became the norm.<p>We're not all solidly middle-class (especially in Southern and Eastern Europe) and as such we cannot afford those heat pumps. But we'll have to eat the increased energy costs brought by insane server configurations like the ones from the article, so, yeey!!!
> now that LED lighting and heat pump cooling / heating became the norm.<p>My brother in Christ, you vastly overestimate southern europe
> at full load (3kw)<p>Do you live in a deprived rural village in a very poor country? Because you can't even run a heater and the oven with 3kW.
That’s surprising, 200 amp 240v service is pretty common in the US.
Your hometown also has public lighting, water pumps, and probably some other stuff.
I mean the difference in performance is quite big too. However, the 10,000,000 is a little bit too much (imo).
This is cool, I'll add these as desktops to <a href="https://flopper.io" rel="nofollow">https://flopper.io</a>!<p>How do you test/generate these numbers?
The most interesting part of Tinybox isn't just the hardware, but the push for a more vertical integration with tinygrad. We've become so accustomed to the CUDA/PyTorch stack that seeing a serious attempt at a different software-hardware synergy is refreshing, even if the hardware specs or price point relative to DIY homelabs raise some eyebrows for power users. It's more about reducing the friction for researchers who want a "just works" environment without the nightmare of driver/toolkit version hell.
Not sure why they stopped using 6 GPUs in their builds - with 4 GPUs, both the 9070 and the RTX 6000 come in 2-slot designs, so it's easy to build it yourself using a somewhat more expensive, but still fairly regular, motherboard.<p>With 6 GPUs you have to deal with risers, PCIe retimers, dual PSUs and a custom case, so the value proposition there was much better IMO
I would love to see real-life tokens/sec values advertised for one or various specific open source models.<p>I'm currently shopping for offline hardware and it is very hard to estimate the performance I will get before dropping $12K, and would love to have a baseline that I can at least always get e.g. 40 tok/s running GPT-OSS-120B using Ollama on Ubuntu out of the box.
For reference, 12k gets you at least <i>4</i> Strix Halo boxes <i>each</i> running GPT-OSS-120B at ~50tok/s.
Look for llmfit on github. This will help with that analysis. I've found it reasonably accurate. If you have Ollama already installed, it can download the relevant models directly.
<a href="https://en.wikipedia.org/wiki/Decoy_effect" rel="nofollow">https://en.wikipedia.org/wiki/Decoy_effect</a>
"... and likely the best performance/$".<p>"likely" doesn't inspire much confidence. Surely, they have those numbers, and if it was, they'd publicize the comparisons.
Sounds like a solid prebuilt with well-balanced components and a pretty case.<p>Not revolutionary in any way, but nice. Unless I'm missing something here?
It's pretty close to what people have been frankenbuilding on r/LocaLLaMa... It's nice to have a prebuild option.
If you wanted a box built by geohot, most recently known for signing on to Elon's Twitter and then bailing, it's for you
Cool that you have a dual power supply model. It says rack mountable or free standing. Does that mean two form factors? $65K is more than we can afford right now but we are definitely eventually in the market for something we can run in our own colo.<p>It's funny though... we're using deepseek now for features in our service and based on our customer-type we thought that they would be completely against sending their data to a third-party. We thought we'd have to do everything locally. But they seem ok with deepseek which is practically free. And the few customers that still worry about privacy may not justify such a high price point.
Most privacy talk folds on contact with a quote. Latency and convenience beat philosophy fast once someone wants a dashboard next week, and a lot of "data sensitivity" talk is just the corporate version of buying "organic" food until the price tag shows up.<p>If private inference is actually non-negotiable, then sure, put GPUs in your colo and enjoy the infra pain, vendor weirdness, and the meeting where finance learns what those power numbers meant.
The real case for private inference is not "organic", it's "slow food". Offering slow-but-cheap inference is an afterthought for the big model providers, e.g. OpenRouter doesn't support it, not even as a way of redirecting to existing "batched inference" offerings. This is a natural opening for local AI.
Regarding 2x faster than pytorch being a condition for tinygrad to come out of alpha:<p>Can they/someone else give more details as to what workloads pytorch is more than 2x slower than the hardware provides? Most of the papers use standard components and I assume pytorch is already pretty performant at implementing them at 50+% of extractable performance from typical GPUs.<p>If they mean more esoteric stuff that requires writing custom kernels to get good performance out of the chips, then that's a different issue.
What’s the most effective ~$5k setup today? Interested in what people are actually running.
At $7.2k + tax:<p>* RAM - $1500 - Crucial Pro 128GB Kit (2x64GB) DDR5 RAM, 5600MHz CP2K64G56C46U5, up to 4 sticks for 128GB or 256GB, Amazon<p>* GPU - $4700 - RTX Pro 5000 48GB, Microcenter<p>* CPU/Mobo bundle - $1100 - AMD Ryzen 7 9800X3D, MSI X870E-P Pro, ditch the 32GB RAM, Microcenter<p>* Case - $220, Hyte Y70, Microcenter<p>* Cooler - $155, Arctic Cooling Liquid Freezer III Pro, top-mount it, Microcenter<p>* PSU - $180, RM1000x, Microcenter<p>* SSD - $400 - Samsung 990 pRO 2TB gen 4 NVMe M.2<p>* Fans - $100 - 6x 120mm fans, 1x 140mm fan, of your choice<p>Look into models like Qwen 3.5
$7.2k just to run at best Qwen3.5-35B-A3B doesn't seem worth it at all.<p>This is certainly not the most effective use of $7k for running local LLMs.<p>The answer is a 16" M5 Max 128GB for $5k. You can run much bigger models than your setup while being an awesome portable machine for everything else.
Surprised to see X3D given the reports of failures. I’ve opted for a regular 9900x and X670E-E just to have a bit more assurance.
Depends. If token speed isn't a big deal, then I think strix halo boxes are the meta right now, or Mac studios.
If you need speed, I think most people wind up with something like a gaming PC with a couple 3090 or 4090s in it.
Depending on the kinds of models you run (sparse moe or other), one or the other may work better.
Sadly $5k is sort of a no-man's land between "can run decent small models" and "can run SOTA local models" ($10k and above). It's basically the difference between the 128GB and 512GB Mac Studio (at least, back when it was still available).
The DGX Spark is probably the best bang for your buck at $4k. It's slower than my 4090 but 128gb of GPU-usable memory is hard to find anywhere else at that price. It being an ARM processor does make it harder to install random AI projects off of GitHub because many niche Python packages don't provide ARM builds (Claude Code usually can figure out how to get things running). But all the popular local AI tools work fine out of the box and PyTorch works great.
It's $4.7K now, darn inflation!<p><a href="https://marketplace.nvidia.com/en-us/enterprise/personal-ai-supercomputers/dgx-spark/" rel="nofollow">https://marketplace.nvidia.com/en-us/enterprise/personal-ai-...</a><p>A small joke at this weeks GTC was the "BOGOD" discount was to sell them at $4K each...
Biggest Mac Studio you can get. The DGX Spark may be better for some workflows but since you're interested in price, the Mac will maintain its value far longer than the Spark so you'll get more of your money out of it.
Fully aware of the DGX spark I've actually been looking into AMD Ryzen AI Max+ 395/392 machines. There's some interesting things here like <a href="https://www.bee-link.com/products/beelink-gtr9-pro-amd-ryzen-ai-max-395" rel="nofollow">https://www.bee-link.com/products/beelink-gtr9-pro-amd-ryzen...</a> and <a href="https://www.amazon.com/GMKtec-5-1GHz-LPDDR5X-8000MHz-Display/dp/B0FKYZF9HL" rel="nofollow">https://www.amazon.com/GMKtec-5-1GHz-LPDDR5X-8000MHz-Display...</a> ... haven't pulled the trigger yet but apparently inferencing on these chips are not trash.<p>Machines with the 4xx chips are coming next month so maybe wait a week or two.<p>It's soldered LPDDR5X with amd strix halo ... sglang and llama.cpp can do that pretty well these days. And it's, you know, half the price and you're not locked into the Nvidia ecosystem
unfortunately the bigger models are pretty slow in token speed. The memory is just not that fast.<p>You can check what each model does on AMD Strix halo here:<p><a href="https://kyuz0.github.io/amd-strix-halo-toolboxes/" rel="nofollow">https://kyuz0.github.io/amd-strix-halo-toolboxes/</a>
4xx chips are less capable than the 395
> What’s the most effective ~$5k setup today?<p>Mac Studio or Mac Mini, depending on which gives you the highest amount of unified memory for ~$5k.
With $5k you have to make compromises. Which compromises you are willing to make depends on what you want to do - and so there will be different optimal setup.
DGX Spark is a fantastic option at this price point. You get 128GB VRAM which is extremely difficult to get at this price point. Also it’s a fairly fast GPU. And stupidly fast networking - 200gbps or 400gbps mellanox if you find coin for another one.
I’m not very well versed in this domain, but I think it’s not going to be “VRAM” (GDDR) memory, but rather “unified memory”, which is essentially RAM (some flavour of DDR5 I assume). These two types of memory have vastly different bandwidth.<p>I’m pretty curious to see any benchmarks on inference on VRAM vs UM.
A quick benchmark using float32 copies using torch cuda->cuda copies, comparing some random machines:<p><pre><code> Raptor Lake + 5080: 380.63 GB/s
Raptor Lake (CPU for reference): 20.41 GB/s
GB10 (DGX Spark): 116.14 GB/s
GH200: 1697.39 GB/s
</code></pre>
This is a "eh, it works" benchmarks, but should give you a feel for the relative performance of the different systems.<p>In practice, this means I can get something like 55 tokens a sec running a larger model like gpt-oss-120b-Q8_0 on the DGX Spark.
I’m using VRAM as shorthand for “memory which the AI chip can use” which I think is fairly common shorthand these days. For the spark is it unified, and has lower bandwidth than most any modern GPU. (About 300 GB/s which is comparable to an RTX 3060.)<p>So for an LLM inference is relatively slow because of that bandwidth, but you can load much bigger smarter models than you could on any consumer GPU.
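A rough way to see how that ~300 GB/s figure turns into token speed (napkin math only; active-parameter counts are ballpark assumptions):
<pre><code>
# Upper bound on decode speed when it's memory-bandwidth bound:
# tokens/sec ≈ usable bandwidth / bytes actually read per token.
def decode_ceiling_tok_s(bandwidth_gb_s, active_params, bits):
    bytes_per_token = active_params * bits / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

# ~300 GB/s memory, an MoE with roughly 5B active params at 4-bit:
print(f"~{decode_ceiling_tok_s(300, 5e9, 4):.0f} tok/s upper bound")
# same memory, a dense 70B model at 4-bit:
print(f"~{decode_ceiling_tok_s(300, 70e9, 4):.0f} tok/s upper bound")
</code></pre>
Which is the whole trade-off in one line: sparse MoE models stay usable on slower unified memory, while dense 70B-class models crawl on it no matter how much capacity you have.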
Internet seems to think the SW support for those is bad, and that strix halo boxes are better ROI.
Can even network 4 of these together, using a pretty cheap InfiniBand switch. There is a YouTube video of a guy building and benchmarking such setup.<p>For 5K one can get a desktop PC with RTX 5090, that has 3x more compute, but 4x less VRAM - so depending on the workload may be a better option.
I just don’t believe that this can run inference on a 120 billion parameter model at actually useful speeds.<p>Obviously any Turing machine can run any size of model, so the “120B” claim doesn’t mean much - what actually matters is speed and I just don’t believe this can be speedy enough on models that my $5000 5090-based pc is too slow for and lacks enough vram for.
Tinygrad devices are interesting, I wish I had screen captures - but their prices have gone up and some specs like RAM have gone down.<p>A single box with those specs without having to build/configure (the red and green) - I could see being useful if you had $ and not time to build/configure/etc yourself.
I thought the most interesting thing about tinygrad was that theoretically you could render a model all the way into hardware similar to Taalas (tinygrad might be where Taalas got the idea for all I know).<p>I could swear I filed a GitHub issue asking about the plans for that but I don't see it. Anyway I think he mentioned it when explaining tinygrad at one point and I have wondered why that hasn't got more attention.<p>As far as boxes, I wish that there were more MI355X available for normal hourly rental. Or any.
it's a bit weird to me that you'd need to be a contributor to their software to work in operations or hardware, but I suppose it's ok for tinycompany. in the long term it's likely better to have domain experts and not bias everything towards the same thing.<p>the boxes look cool but how good are they really? the cheapest box seems pricey at $12k for what is essentially a few gaming gpus. i don't see why you couldn't make that like half the price. you could do a PC/server build that's much, much faster for way less. size doesn't matter if it's more than twice the price, i think...<p>the more expensive box at least has real processing gpus, but afaik also not very popular ones; this one seems maybe more fairly priced (there seems to be a big difference in bang for buck between these???).<p>the third one suggested looks like a joke.<p>don't get me wrong, this seems like a really cool idea. but i don't see it taking off, as the prices are corporate but the product seems more home use.<p>maybe in time they will find a better balance. i do respect the fact that the component market now is sour as hell and making good products with stable prices is pretty much impossible.<p>i'd love one of these machines someday, maybe when i am less poor, or when they are xD.<p>(love the styling of everything, this is the most critical i could be from a dumb consumer perspective, which i totally am btw.)
The AMD angle is interesting given the history — tinygrad has had to work around a lot of driver quirks to get ROCm into a usable state. At that price point, you're essentially betting on a software stack that NVIDIA has had years to stabilize. Would be curious to see real-world utilization numbers vs. a comparable NVIDIA setup.
exabox reads as if it was making a joke of something or someone. if it's real then it's really interesting!
Oh, this is geohots product?<p>He's an interesting guy. Seems to be one who does things the way he thinks is right, regardless of corporate profits.
10 mil today... 1k in 10 years. Are OpenAI and Anthropic overvalued?
Quite expensive little bastard. I wonder how much does it make sense to invest in a such device, if you can get $0.40/mtok from hyperbolic for example
If you're OK letting them train on, and maybe keep, your data, then it's hard to beat cloud prices vs. local.
I wonder how much has he sold.
Who is the intended customer for this product? I am genuinely curious.
Why do I get the impression that I get more bang for the buck by going through OpenRouter? Of course, not anyone can do that and there are security and other concerns.
exabox -<p>720x RDNA5 AT0 XL
25,920 GB VRAM
23,040 GB System RAM<p>~ $10 Million<p>Who is the target market here?
I can't find sources but I think they are building it for Comma.ai (geohot's other company) so that Comma can scale up their training datacenter.
And... what about 20k lbs and 1360 cubic feet screams "tiny" :)
A non-trivial share of this market won’t show up in public data.
That makes most estimates unreliable by default
VC funded startups
A company which doesn't want the big LLM providers to see its prompts or data - military, health, finance, research
Can someone explain the exabox? They say it "functions as a single GPU". Is there anything like that currently existing?
I always wonder about these expensive products: Does the company make them once it's ordered or do they just make them beforehand?
He builds a batch every few months.
In this case, they're taking wire transfers, so they're definitely building them once they get the cash.
I just backed their TINY on Kickstarter.
Are we at the point where 2x 9070XT's are a viable LLM platform? (I know this has 4, just wondering for myself).
These things either don’t have Flash Attention or have a really hacked-together version of it. Is it viable for a hobby? Sure. Is it viable for a serious workload with all the optimizations, CUDA, etc.? Not really.
I'd go with strix halo if you're looking at that old of tech.<p>the latest AMD GPUs are RX 9070 XT w/32GB each
Skeptical of their engineering, with replies to questions like this: <a href="https://x.com/jgarzik/status/2031312666036146460?s=20" rel="nofollow">https://x.com/jgarzik/status/2031312666036146460?s=20</a>
They answered your question with a pretty specific uptime target. Calling it a dodge and then moving the goalposts with a new question as your follow up doesn’t speak to you acting in good faith.
Can't see replies, what did they say?
I wonder if this is frontpage right now because of the other tiiny (the names are similar) video that went viral ... which turns out wasn't an actual product by the tinygrad linked in this post[1]<p>[1]<a href="https://x.com/ShriKaranHanda/status/2035284883384553953" rel="nofollow">https://x.com/ShriKaranHanda/status/2035284883384553953</a>
Adding this to my list of ~beautifully~ designed things to buy when I win the lottery.
How does this thing cool down?
I thought there was a typo in the price
Surprising to see this with AMD GPUs considering how George famously threw up his hands as AMD not being worth working with.
$12,000 gets you 1Gb/s networking and vanilla Ubuntu 24.04. Napkin math on the hardware suggests margins are around 50%, which feels like a school fundraiser where everyone pays what is obviously way more than normal retail price for X because "it's for the children."<p>I'm not sure what tinygrad is but I assume the markup is because the customer is making a conscious choice to support the tinygrad project. But what's unusual is there is apparently no reason whatsoever to buy this hardware, even if you plan on using tinygrad exclusively for your project. At least with System76 hardware I get (in theory) first class support for Pop!_OS.
Finally, a computer that should be able to run Monster Hunter Wilds with decent performance.<p>But let’s be real, 12k is kinda pushing it - what kind of people are gonna spend $65k or even $10M (lmao WTAF) on a boutique thing like this. I don't think these kinds of things go in datacenters (happy to be corrected) and they are way too expensive (and probably way too HOT) to just go in a home or even an office “closet”.
Give me token/s for favourite models.
Who is this for?
Meanwhile M-series processors and Qwen are racing to do the same thing for a much more approachable price.
Great idea, can you publish the power consumption figures for this device?
I have 8x RTX 6000 Pro. Better to run the 300 W version of the cards. And it costs close to their 4x version. I get why they make it so big. So you can cool it at home. I prefer to just put in datacenter. Much cheaper power.
> Can I pay with something besides wire transfer?
In order to keep prices low and quality high, we don't offer any customization to the box or ordering process. Wire transfer is the only accepted form of payment.<p>Sorry, what? Is this just a scam?
Wire transfer has no commission or extra costs associated with it, so I find it very honest.
man, cmon. a little more effort.
My interest in anything associated with geohot took a colossal nose dive today after seeing this post against democracy, quoting frelling M*ncius M*ldbug: <i>Democracy is a Liability.</i> <a href="https://news.ycombinator.com/item?id=47469543">https://news.ycombinator.com/item?id=47469543</a> <a href="https://geohot.github.io//blog/jekyll/update/2026/03/21/democracy-liability.html" rel="nofollow">https://geohot.github.io//blog/jekyll/update/2026/03/21/demo...</a><p>There's a lot there that makes sense and I think needs to be considered. But a lot just seems to be out of the blue, included without connection, in my view. Feels like maybe they are in-group messages that I don't understand. How this is headered as against democracy is unclear to me, and revolting. I both think we must grapple with the world as it is, and this post is in that area, strongly, but to let fear be the dominant ruling emotion is one of the main definitions of conservatism, and its use here to scare us sounds bad.
He was always defending democracy and freedom before, and that was his argument for the local AI thing? What changed?
He had a video on Youtube where he proudly gloated about how he voted for Trump in not one but two elections, how happy he is that he can now openly talk about it, how its a fresh start for US, how catastrophic Harris would have been.<p>Did he take down the video because of embarrassment or did he fear negative impact on his sales?
Damn, that's a take.
For those unaware, Mencius Moldbug is the pen name of Curtis Yarvin, thought leader for the Silicon Valley branch of right-wing technofascist weirdos which includes Peter Thiel and apparently half of a16z.
Geohot has always been an arrogant cunt who thinks he's better than everyone else. That blog post is totally on brand.
Geohotz's politics are fairly straightforward once you understand his background. Geohotz is the prodigy child who, at the age of ~16, accomplished amazing technical feats on his own.<p>And his politics are a derivative of Great Man Theory, and his positions on things like democracy follow from that. This idea, and those espoused by some of the VC/tech elite like Peter Thiel, are that singular hardworking genius individuals can change the world on their own, and everyone who is not in this top 0.1% are borderline NPCs.<p>They do this both because of their genius/hard work, and also because they are willing to break the rules that are set forth by this bottom 99.9%.<p>I'm starting to call this ideology Authoritarian techno-Libertarianism. It's a deliberately oxymoronic name that I use, because these "Great Men" are <i>definitely</i> trying to change the world. IE, they are trying to impose their goals and values on the world without getting the buy-in of other people.<p>That's the "authoritarian" part. And then the "libertarian" part is that they are going about this imposition of their will on the world by doing it all themselves, through their own hard work.<p>Think "Person invents a world-changing technology, that some people think is bad, and just releases it open source for anyone to use". AI models are a great example, in fact. Once that technology is out there the genie cannot be put back into the bottle and a ton of people are going to lose their jobs, etc.<p>A disdain for democracy follows directly from things like this. You don't wait for people to vote to allow you to change the world by inventing something new. You just <i>do</i> and watch the results.
> also because they are willing to break the rules that are set forth by this bottom 99.9%[...] they are going about this imposition of their will on the world by doing it all themselves, through their own hard work.<p>I think all these wildly successful neo-feudalists get increasingly emboldened the more they get away with bigger and bigger social infractions.<p>It's also clear that they haven't experienced an environment with extreme inequality - it's not safe for <i>anyone</i> there! They think the NPC plebs will continue to follow "the rules" <i>ad perpetuam</i> without considering that it is a direct result of the stability they are actively undermining. They clearly don't read enough history.
What makes it “Libertarianism” still? To me it feels like they’re taking away freedom, control, and influence from everyone who is not them. Even the concentration of wealth is itself taking away everyone else’s places in the world.
Is this real? Reads like a joke. They sell a $12K machine, a $60K machine, and a $10M machine???
"tiny" and it's 20k lbs and cost about 10k...<p>Since when did our perception of tiny blow out of size in tech? Is it the influence of "hello world" eletron apps consuming 100mb of mem while idle setting the new standard? Anyway being an AI bro seems like an expensive hobby...
"but if you haven't contributed to tinygrad your application won't be considered" this company expects people to work for free?