I had my formative years in programming when memory usage was something you still worried about as a programmer. And then memory expanded so much that all kinds of “optimal” patterns for programming just became nearly irrelevant. Will we start to actually consider this in software solutions again as a result?
You're right in terms of fitting your program into memory, so that it can run in the first place.<p>But in performance work, the speed of RAM relative to computation has dropped so far that it's common wisdom to treat today's cache as the RAM of old (and today's RAM as the disk of old, etc.).<p>In software performance work it's been all about hitting the cache for a long time. LLMs aren't too amenable to caching, though.
AFAIK, you can't explicitly allocate cache the way you allocate RAM, however. It's a bit as if you could only work on files, with RAM used as a cache. Maybe I am mistaken?
You can in CUDA. You can have shared memory which is basically L1 cache you have full control over. It's called shared memory because all threads within a block (which reside on a common SM) have fast access to it. The downside: you now have less regular L1 cache.
You can't explicitly allocate cache, but you can lay things out in memory to minimize cache misses.
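A minimal sketch of what that layout control looks like in practice (the struct and field names are made up for illustration): summing one field of an "array of structs" drags every field through the cache, while a "struct of arrays" layout keeps the field you actually touch densely packed.

```c
#include <stddef.h>

/* Array-of-structs: each element drags 4 doubles into cache
   even when we only need x. */
struct particle { double x, y, z, mass; };

double sum_x_aos(const struct particle *p, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += p[i].x;   /* strided access: only 8 of every 32 fetched bytes are used */
    return s;
}

/* Struct-of-arrays: the x values are contiguous, so every byte of
   every fetched cache line is useful, and the hardware prefetcher
   sees a simple unit-stride stream. */
struct particles { double *x, *y, *z, *mass; };

double sum_x_soa(const struct particles *p, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += p->x[i];
    return s;
}
```

Both functions compute the same thing; only the memory traffic differs, which is exactly the kind of win you can get without any explicit control over the cache.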
A fun fact for the people who like to go on rabbit holes. There is an x86 technique called cache-as-RAM (CAR) that allows you to explicitly allocate a range of memory to be stored directly in cache, avoiding the DRAM entirely.<p>CAR is often used in early boot before the DRAM is initialized. It works because the x86 disable cache bit actually only decouples the cache from the memory, but the CPU will still use the cache if you primed it with valid cache lines before setting the cache disable bit.<p>So the technique is to mark a particular range of memory as write-back cacheable, prime the cache with valid cache lines for the entire region, and then set the bit to decouple the cache from memory. Now every access to this memory region is a cache hit that doesn't write back to DRAM.<p>The one downside is that when CAR is on, any cache you don't allocate as memory is wasted. You could allocate only half the cache as RAM to a particular memory region, but the disable bit is global, so the other half would just sit idle.
Out of curiosity, why has there not been a slight paradigm shift in modern system programming languages to expose more control over the caches?
Same as the failure of Itanium VLIW instructions: you don't actually <i>want</i> to force the decision of what is in the cache back to compile time, when the relevant information is better available at runtime.<p>Also, additional information on instructions costs instruction bandwidth and I-cache.
There are cache control instructions already. The reason it goes no further than prefetch/invalidate hints is probably that exposing a fuller cache-control API at the chip level would overcomplicate designs and wouldn't be a backwards-compatible/stable API. Treating the cache as RAM would also require a controller, which then also needs to receive instructions, or the CPU would suddenly have to manage the cache itself.<p>I can understand why they just decide to bake the cache algorithms into hardware, validate it and be done with it. I'd love it if a hardware engineer or more well-read fellow could chime in.
This is not applicable to most programming scenarios since the cache gets trashed unpredictably during context switches (including the user-level task switches involved in cooperative async patterns). It's not a true scratchpad storage, and turning it into one would slow down context switches a lot since the scratchpad would be processor state. Maybe this can be revisited once even low-end computers have so many hardware cores/threads that context switches become so rare that the overhead is not a big deal. But we are very far from anything of the sort.
Because programmers are in general worse at managing them than the basic LRU algorithm.<p>And because the abstraction is simple and easy enough to understand that when you do need close control, it's easy to achieve by just writing to the abstraction. Careful control of data layout and nontemporal instructions are almost always all you need.
There has! Intel has Cache Allocation Technology, and I was very peripherally involved in reviewing research projects at Boston University into this. One that I remember was allowing the operating system to divide up cache and memory bandwidth for better prioritization.<p><a href="https://www.intel.com/content/www/us/en/developer/articles/technical/introduction-to-cache-allocation-technology.html" rel="nofollow">https://www.intel.com/content/www/us/en/developer/articles/t...</a>
LLMs need memory bandwidth to stream lots of data through quickly, not so much caching. That is basically the same way a GPU uses memory.
OTOH, LLM inference tends to have very predictable memory access patterns. So well-placed prefetch instructions that can execute predictable memory fetches in parallel with expensive compute might help CPU performance quite a bit. I assume that this is done already as part of optimized numerical primitives such as GEMM, since that's where most of the gain would be.
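As a hedged sketch of the idea, using GCC/Clang's `__builtin_prefetch` (the lookahead distance and all names here are illustrative, and real GEMM kernels are far more elaborate): when the access pattern is known a few iterations ahead, you can start the fetch early and overlap it with the current iteration's compute.

```c
#include <stddef.h>

/* Gather values through an index array. The indices are known in
   advance, so we ask the CPU to begin fetching the element we will
   need PREFETCH_AHEAD iterations from now. PREFETCH_AHEAD is a
   tuning knob, not a magic constant. */
#define PREFETCH_AHEAD 8

double gather_sum(const double *data, const size_t *idx, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_AHEAD < n)
            /* args: address, 0 = read, locality hint 0..3 */
            __builtin_prefetch(&data[idx[i + PREFETCH_AHEAD]], 0, 1);
        s += data[idx[i]];
    }
    return s;
}
```

The prefetch is only a hint; dropping it changes nothing about the result, only (potentially) the latency hiding.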
I've started using Outlook and Teams through Chrome to free up some of my RAM; it easily saves 3-4 GB. It's gotten ridiculous how much RAM basic tools are using, leaving nothing for doing actual work.
> And then memory expanded so much that all kinds of “optimal” patterns for programming just become nearly irrelevant.<p>I don't think that ever happened. Using a relatively sparse amount of memory translates into better cache behavior, which in turn usually improves performance drastically.<p>And in embedded work, being good with memory management can make the difference between 'works' and 'fail'.
The need for optimal patterns didn't go away, but the techniques certainly did. Just as a quick example, it's usually a bad idea now to use lookup tables to accelerate small math workloads. The lookup table creates memory pressure on the cache, which ends up degrading performance on modern systems. Back in the 1980s, lookup tables were by far the dominant technique because math was <i>slow.</i>
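A hedged illustration with popcount (chosen only because it's small; both versions are standard techniques, not something from this thread): the table-driven version does one memory load per byte and competes with your real data for cache lines, while the ALU version stays entirely in registers. On modern hardware a dedicated instruction (e.g. via `__builtin_popcount`) beats both.

```c
#include <stdint.h>

/* 1980s style: a precomputed 256-entry table, one load per byte.
   Great when memory was fast relative to the ALU; today the table
   occupies cache lines that your actual data wanted. */
static uint8_t popcount_table[256];

void init_table(void) {
    for (int i = 0; i < 256; i++)
        popcount_table[i] = (uint8_t)((i & 1) + popcount_table[i >> 1]);
}

int popcount_lut(uint32_t v) {
    return popcount_table[v & 0xff] + popcount_table[(v >> 8) & 0xff]
         + popcount_table[(v >> 16) & 0xff] + popcount_table[v >> 24];
}

/* Modern style: a handful of register-only ALU ops, zero memory traffic. */
int popcount_alu(uint32_t v) {
    v = v - ((v >> 1) & 0x55555555u);
    v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u);
    return (int)((((v + (v >> 4)) & 0x0f0f0f0fu) * 0x01010101u) >> 24);
}
```

The only honest way to pick between such variants on a given machine is to benchmark both with a realistic working set.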
> Back in the 1980s, lookup tables were by far the dominant technique because math was <i>slow.</i><p>This actually generalizes in a rather clean way: compared to the 1980s, you now want to cheaply <i>compress</i> data in memory and use <i>succinct</i> representations as much as practicable, since the extra compute involved in translating a more succinct representation into real data is practically free compared to even one extra cacheline fetch from RAM (which is now hundreds of cycles latency, and in parallel code often has surprisingly low throughput).
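A minimal sketch of that trade (the names are illustrative): packing flags one per bit instead of one per byte means 8x fewer cache lines streamed, paid for with a shift and a mask that are practically free next to a RAM fetch.

```c
#include <stdint.h>
#include <stddef.h>

/* One byte per flag: 64 flags per 64-byte cache line.
   One bit per flag: 512 flags per cache line. The "decompression"
   cost is a shift and an AND, roughly one cycle each. */

static inline void bitset_set(uint64_t *bits, size_t i) {
    bits[i / 64] |= (uint64_t)1 << (i % 64);
}

static inline int bitset_get(const uint64_t *bits, size_t i) {
    return (int)((bits[i / 64] >> (i % 64)) & 1);
}

/* Counting set flags streams 8x less data than a byte array would. */
size_t bitset_count(const uint64_t *bits, size_t words) {
    size_t total = 0;
    for (size_t w = 0; w < words; w++)
        total += (size_t)__builtin_popcountll(bits[w]);
    return total;
}
```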
The way to approach this is to benchmark and then pick the best solution.
It obviously never became completely irrelevant. But I think programmers spend a lot less time thinking about memory than they used to. People used to do a lot of gymnastics and crazy optimizations to fit stuff into memory. I do quite a bit of embedded programming, and most of the time it seems easier to simply upgrade the MCU and spend 10 cents more (or whatever) than to make any crazy optimizations. But of course there are still cases where it makes sense.
When was the last time you used mergesort because you had to?
Coincidentally, last night, and I'm not pulling your leg! But to be fair, that's the first time in much more than a decade. I don't normally work with such huge files and this was one very rare exception. I also nearly crashed my machine by triggering the OOM killer after naively typing 'vi file' without first checking how large it had become. I'm working on a project that I probably should run on a more serious machine, but I don't feel like moving my whole work environment off the laptop that I normally use.
I never really bought into the anti-Leetcode crowd’s sentiment that it’s irrelevant. It has always mattered as a competitive edge: against other job candidates if you’re an employee, or against the competition if you’re a company. It only looked irrelevant because opportunities were everywhere during ZIRP, but good times never last.
It mattered for passing the interview, but not for the job itself. With all the leetcode geniuses at Microsoft, why are Teams and Windows so shitty?
Most developers work at banks, insurance companies and other “enterprise” jobs. Even most developers at BigTech and who are working “at scale” are building on top of scalable infrastructure and aren’t worrying about reversing a btree on a whiteboard.
It's not like most developers are wasting memory for fun by using Electron etc. It's just the simplest way to deploy applications that require frequent multiplatform changes. Until you get Apple to approve native app changes faster and Linux users to agree on a framework, app distribution, etc., it's the best way to ship a product and not just a program.
> for fun<p>Not for fun but for convenience (laziness occasionally?). Someone needed to "pay" for the app being available on all platforms. Either the programmer by coding and optimizing multiple times, or the user by using a bloated unoptimized piece of software. The choice was made to have the user pay. It's been so long I doubt recent generations of coders could even do it differently.
RAM didn't get more expensive to produce. It just got more desirable. The prices will come down again when supply responds. It may take some time, but it will happen eventually.
RAM production is highly inelastic and controlled by an oligopoly. The producers have little desire to increase production, considering the lead time and the risk that the AI demand might be transient.<p>They actively prefer keeping comfortable margins to competing with each other. They have already been found guilty of active collusion in the past.<p>New actors from China could shake things up a bit, but the geopolitical situation makes that complicated. The market can stay broken for a long time.
They are increasing production as fast as they can (which is not fast at all, it's more like slowly steering a huge ship towards the correct direction) because current prices are too high even when accounting for the historical oligopoly dynamics. They can easily increase their collective profits by making more.
RAM actually got more expensive to produce in the medium term because production is bottlenecked. It takes years to expand production.
We would have, if the expensive memory was a long term trend. It is not - eventually the supply will expand to match demand. There is no fundamental lack of raw materials underlying the issues, it is just a demand shock.
Also, it's not like we have regressed in the process itself either, which was historically the limiting factor. As you said this is purely an economics thing resulting from a greedy shift in business focus by e.g. Micron.
When I practice leetcode problems, I notice the best solution is usually the one that optimizes CPU (time) instead of memory, meaning adding a data index in memory instead of iterating over the main data structure. I thought: OK, that's fine, it's normal; you can (could) always buy more RAM, but you can't buy more time.<p>But I think there is no single right answer; there will always be a trade-off, case by case, depending on the context.
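As a hedged sketch of that trade-off (a direct-address table stands in for a hash map to keep it short; the function name, value range, and bounds are all assumptions for illustration): spend O(range) extra memory on a "seen" index so that checking whether any pair sums to a target takes one pass instead of an O(n²) nested scan.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Time-for-memory trade: instead of comparing every pair, record
   which values we've seen so each "does the complement exist?"
   check is O(1). Values are assumed to lie in [0, RANGE). */
#define RANGE 1024

bool has_pair_with_sum(const int *a, size_t n, int target) {
    bool seen[RANGE];
    memset(seen, 0, sizeof seen);
    for (size_t i = 0; i < n; i++) {
        int need = target - a[i];
        if (need >= 0 && need < RANGE && seen[need])
            return true;   /* some earlier a[j] satisfies a[j] + a[i] == target */
        if (a[i] >= 0 && a[i] < RANGE)
            seen[a[i]] = true;
    }
    return false;
}
```

The nested-loop version uses no extra memory but does ~n²/2 comparisons; this one does n table probes at the cost of a kilobyte of scratch space, which is the kind of context-dependent trade the parent describes.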
I've recently started a side project for the N64, and this is very relatable!
Working within such tight constraints is most of the fun.
I just heard a podcast where they talked about how powerful our devices are today, yet they don't feel faster than they did 15 years ago, and that it's because of what you describe here.
Most likely this bubble will pop in a couple of years, just like 8 and 16 years ago.<p>It's just the cartel cycle: gain profits now, then eliminate all investment in competitors when a flood of cheap RAM "suddenly" appears.
This is coming from an insane demand spike, not some nefarious plot by the RAM manufacturers.
> This is coming from an insane demand spike, not some nefarious plot by the RAM manufacturers.<p>Something something, 2000 dot-com bubble, something
I can never understand why so many people resort to conspiracy theories when the obvious answer is supply and demand. I know well-educated people who do this when they talk about the residential property market (including an accountant).
Which is in large part due to hoarding by OpenAI.<p>Although their stated reason for hoarding is that they "really need it", I think it was a strategic move to make their competitors' lives more difficult with little regard for the collateral consequences to non-competitors, such as regular people or companies needing new computers.
Yes, it's a nefarious plot of AI producers to attempt a monopoly with a product that no one seems capable of demonstrating has the exponential value they're betting on.
Once everybody has a decent amount of VRAM, they can just run local AIs, and the need to mess with ad-laden search results will fizzle. So of course they are desperate to grab a new monopoly. People haven't realised yet that local AIs are fast and produce good results on pretty average hardware. If they don't manage to grab a new monopoly, Google will be history.<p>But it doesn't really need a nefarious plot for the price spikes. There is a serious lack of VRAM deployed out there. Filling that gap will take quite some time. Add to that the nefarious plot and the situation will most likely get even worse...
LLM inference is mostly read only, so high-bandwidth flash looks like it could provide huge cost savings over VRAM. It's not yet in commercial products but there are working prototypes already. Previous HN discussion:<p><a href="https://news.ycombinator.com/item?id=46700384">https://news.ycombinator.com/item?id=46700384</a>
AI companies yes, RAM manufacturers no.
Eventually new capacity will come online, and the money the DRAM companies are making is going to accelerate even more new capacity. If you can get your new capacity going before your competitors, maybe you can avoid a bubble burst. If you don’t build new capacity, your competitors will, etc, etc…
If you are on Linux you can 'download' some RAM: enable zram and configure the sysctl variables to make good use of it.<p>Note that it won't help if your workload uses all your RAM at once.<p>If you have a bunch of stuff running in the background, it will help a lot.<p>I get a compression factor of 2 to 3 at all times with zstd. I calculated the benefit to be as if I had 20GB of extra RAM for what I do.
Back in the day, if you could find a deal on defective RAM (that wasn't going to degrade further?), Linux could be configured to avoid the defects. Unfortunately this isn't allowed with secure/UEFI boot.<p><a href="https://www.gnu.org/software/grub/manual/grub/html_node/badram.html" rel="nofollow">https://www.gnu.org/software/grub/manual/grub/html_node/badram.html</a>
That’s not particularly unique; macOS and Windows have had compressed memory for years. No fiddling with settings needed, either.
> No fiddling with settings needed either.<p>It's since become the default in several distributions, including Fedora.
It's not just that it's compressed - the OS is also intelligently handling which memory should be on hardware vs. virtual. Effectively a lot of the memory concerns have been offloaded to the OS and the VM where one exists.
I think Europe should invest into manufacturing RAM. RAM isn't going anywhere, all of modern compute uses it. This would be an opportunity to create domestic supply of it.
The worry is that these high prices aren't going to last long. And by the time you spend years building the capacity, the prices plummet making your facility uneconomical to run.<p>Ram will always be in some demand, but that doesn't mean it's viable for everyone to start building production.
There are a few things to note here:<p>1) Prices aren't returning to "normal".<p>The only way they will is if the hyperscalers and AI companies start to implode -- which will kill a huge portion of the US economy and lead to global recession, so, cheap RAM but nobody can afford it.<p>2) By building up capacity you influence the outcome.<p>If someone else enters the DRAM space, the duopoly has to actually start thinking about competing on price. Maybe they become price competitive before the launch of your new fab in order to kill it, but it will have an effect, probably before the fab even opens.<p>3) A western supply chain has benefits by itself.<p>There's a reason some industries are not allowed to die, most notably farming: security and resilience to external pressure.<p>---<p>Realistically there's no reason not to do this. It will be long, painful and expensive. The best time was a decade ago. The next best time is now.
> The only way they will is if the hyperscalers and AI companies start to implode -- which will kill a huge portion of the US economy and lead to global recession, so, cheap RAM but nobody can afford it<p>I disagree.<p>Modern RAM is made in fabs, which are ridiculously expensive to manufacture. Modern EUV lithography machines cost around 500M each. They're manufactured by hand. Only one company in the world knows how to manufacture them right now. So we can't exactly increase global manufacturing capacity overnight.<p>The way I see it, there are two ways this goes:<p>1. AI is a fad. RAM and storage demand falls. Prices drop back to normal.<p>2. AI is not a fad. Over time, more and more fabs come online to meet the supply needs of the AI industry. The price comes down as manufacturing supply increases.<p>Or some combination of the two.<p>The high prices right now are because of a demand shock. There's way more demand for RAM than anyone expected, so the RAM that is produced sells at a premium. High prices aren't because RAM costs more to manufacture than it did a couple of years ago; there's just not enough to go around. In 5-10 years, manufacturing capacity will match demand and prices will drop. Just give it time.
> Only one company in the world knows how to manufacture them right now.<p>And that company is in Europe, isn't it? The EU has a great opportunity to enter the market: it's a high-tech manufacturing job, not something that requires lots of cheap labor.
Yes, but it's not that important. Any complex high-tech product requires suppliers from all over the world. For example, I bet the EU company depends critically on a lot of Chinese companies.
Just like any airliner is produced by pretty much the whole world.
<i>> The EU has a great opportunity to enter the market: </i><p>You can't just get into RAM manufacturing overnight whenever you feel like it, like you're building washing machines. You need a lot more than just ASML machines, you need the supply chain, the IP, the experienced professionals with know-how, the education system, the energy, the right regulations, etc.<p>The EU exited the RAM manufacturing business a long time ago when RAM prices sunk, see Qimonda, meaning it would be a long, expensive uphill battle to get back in, and currently EU has no major semiconductor manufacturing ambitions, or ambitions in commodity hardware manufacturing of any kind, so that's not gonna happen.<p>Of course, RAM is no longer a commodity right now, but nobody can guarantee it won't be again when the AI bubble burst and RAM prices crash, so spinning up the know-how, manufacturing facilities and supply chains from the ground up just for RAM is insanely expensive and risky and might leave you holding the bag.<p><i>> it's a high-tech manufacturing job, not something that requires lots of cheap labor.</i><p>Except semiconductor manufacturing <i>DOES</i> require cheap labor relative to the high degrees of skills and specialization needed at that cutting edge. Unlike in Taiwan, skilled STEM grads in the EU (and even more in the US) who invest that time and effort in education and specialization, will go to better paying careers with better WLB like software or pharma, than in hardware and semi manufacturing that pays peanuts by comparison and works you to death in deadlines.<p>Also, profitable semi manufacturing requires cheap energy and lax environmental regulations, which EU lacks. So even more compounding reasons why you won't see too many new semi fabs opening here.
> nobody can guarantee it won't be again<p>I hope we (Europe) can try some things even when they are not guaranteed to succeed and generate huge profits. Otherwise we are toast, though it might take some time to realise it.<p>The concept of trying not-guaranteed things should not be so alien here on news.ycombinator.com I would think.
> In 5-10 years<p>And waiting 5-10 years for a lower price is a long wait for consumers.<p>If food prices were high, would you tell a starving person to wait 5-10 years for food?
<i>>Modern RAM is made in fabs, which are ridiculously expensive to manufacture. Modern EUV lithography machines cost around 500M each. They're manufactured by hand. Only one company in the world knows how to manufacture them right now. </i><p>You're wrong here. You don't need the most cutting edge ASML EUV machines to make RAM. Most RAM fabs still use standard DUV.
Are DUV machines cheap and easy to manufacture? I suspect if they were, we'd see a lot of cheap RAM hit the market.<p>Maybe some RAM chips don't need EUV lithography. but I suspect I'm still right about the economics.
> You're wrong here. You don't need the most cutting edge ASML EUV machines to make RAM. Most RAM fabs still use standard DUV.<p>Ah. Please check that. Which types of DRAM can be made in a DUV fab? Obviously the older ones, but are those obsolete for new computers? This really matters.
From Micron, everything up to their 1-beta node is DUV. Their 1-gamma node they debuted last year is the only EUV node they have. If you bought a Micron-based DDR5 RAM stick a year ago it would have been DUV and you could get those up to DDR5-8000. 1-gamma increases that to DDR5-9200, so if you can live with ~15% less performance DUV is good enough.
CXMT’s entire portfolio is made without EUV, and CXMT claims to have acceptable yield and performance comparable to other producers.<p>Keep in mind that the high bandwidths of modern RAM modules aren’t really a property of the RAM cells so much as a property of the read and write circuitry and the DDR or HBM transceivers, and those are a large part of the IP but a small part of the die. There is no such thing as “double data rate” or “high bandwidth” DRAM cells. Even DRAM cells from the 1990s could be read in microseconds. Reading and streaming your fancy AI model weights is an embarrassingly parallel problem and even 1 TB/sec does not even come close to stressing the ability of the raw cells to be read. This in contrast to, say, modern tensor processors where the actual ALUs set a hard cap on throughput and everyone works hard to come closer to the cap.<p>Take a look at what makes a modern computer with good RAM performance work: it’s the interconnect between the RAM and processor.
DDR4 and basic HBM is still in high demand right now and that was made before the first EUV fabs came online.
> 1) Prices aren't returning to "normal".<p>> The only way they will is if the hyperscalers and AI companies start to implode -- which will kill a huge portion of the US economy and lead to global recession, so, cheap RAM but nobody can afford it<p>RAM isn’t some commodity that gets mined at a fixed rate and therefore costs more when people want large amounts of it. It’s a manufactured good, made from raw materials that are available in huge quantity, that was produced and sold at a profit at 2024 prices, even accounting for the capex needed to produce it.<p>Two things have changed. First, demand increased <i>quickly</i>. Second, big buyers sort of demonstrated that they’re willing to pay current prices, at least temporarily, so maybe the demand price elasticity has changed, or at least people’s perception of it has changed.<p>None of prevents the price from going back down. The high prices have made it economical for new manufacturers to invest more to compete — look at CXMT. And CXMT doesn’t have EUV machines, which doesn’t appear to be a showstopper for them.
> at a profit at 2024 prices, even accounting for the capex needed to produce it<p>2024 prices were at a historical low, so we can't be sure that this is correct. Regardless, when production capacity is short-term constant, new RAM does get "mined" at a constant rate, a bit like bitcoin with its mining ASICs.
Your first point highlights the huge unmitigated risk: there is no guarantee that this won't all implode, triggering a huge recession. And even if no one can afford the RAM afterwards, they especially won't be able to afford the more expensive European RAM.<p>Really the only way it could work is if the government declares it a national security issue and promises to subsidize it. In a free market alone, it's most likely to flop.
> government [..] will promise to subsidize it<p>Subsidize what? Copilot prompt in the Run dialog or Notepad? Is this what you think might be considered for subsidizing?
It's been a headscratcher for me... The EU and the US have an issue with CCP-subsidised tech giants, but their sole reaction is banning them in some form or other? In the EU it is from public tenders, in the US it's from dealing with US companies.<p>This does not really help EU and US businesses to be competitive though, neither does it stop consumers going for the cheapest option...
You mean the US companies that the US government bought part of and whose expansions it funded, with no controls over the cartel...
<i>>The EU and the US have an issue with CCP-subsidised tech giants</i><p>Except EU and US tech giants also get massive government subsidies, making such accusations hypocritical. Silicon Valley has its roots in cold war defense funding.<p>What the US and EU don't like is that China has beaten them at their own game using their own rules, so now they need to move the goalposts on why we shouldn't buy Chinese RAM and protect western DRAM monopolies making amazing margins.
Prices are returning to normal, probably 2-3 years from now. SK Hynix is making absolutely monstrous investments in memory fabs, and CXMT will be entering the market in force more and more.<p>The biggest problem is that the industry wants HBM, whereas consumers want DRAM. Until the need for HBM has been sufficiently satisfied, fabs will prefer being tooled for HBM, because businesses can be squeezed much harder than consumers.<p>Then again, as a consumer you don't really need DDR5 or even DDR4 as long as you aren't using an iGPU. It's all about being around CL15 timings.
> The only way they will is if the hyperscalers and AI companies start to implode<p>You're missing that it's not companIES - the crisis was primarily caused by only one company, OpenAI, buying out wafers.<p>But even more than that, the wafer buyout is <i>an excuse</i> used by the cartel - there are several mechanisms that could have eased most of the problem (e.g. Samsung selling old equipment) that were not used, to ride the money wave.<p>(Also, said "hyperscalers and AI companies" existed in spring 2025 too, yet the price was low.)<p>The winners will not be the ones who build new fabs, but the ones with enough money and government subsidies/import taxes to protect such investments after the cartel decides to oversupply again, flushing the price down.
FWIW I agree with you. The US should provide stable, consistent policy & funding so companies understand the regulatory environment and do long-term planning.<p>Which is a good idea for when we don't have a dementia patient in charge of our country.<p>EU should get on that though.
> The only way they will is if the hyperscalers and AI companies start to implode -- which will kill a huge portion of the US economy and lead to global recession, so, cheap RAM but nobody can afford it<p>You can't reshore domestic manufacturing without creating legions of desperate workers with no other choice but to accept minimum wage factory jobs.
You think AI can't implode..wat
Not everyone, but a supplier in Europe would be a massive benefit long after the AI-driven demand dies off. It'd free Europe from dependence on other countries for a critical resource, making chips more affordable and the supply more stable. That's good, because the stability of the rest of the world is already questionable and big shocks are expected in the near future.
I guess the idea would have to be to look at it in the reverse: have some domestic capacity for those usual strategic supply independence reasons, even if it operates at a loss. And hope that occasionally, one of those demand surge waves will swipe through and make the cost not quite so bad. Outlier waves like the current one might even create a net positive, but that bet would not be the primary purpose, that honor would go to hedging against getting cut off.
There is another industry that is not economical in the EU but that we keep anyway: food production. Because it's a strategic decision. Not saying that RAM is of equal importance as food, but if there is a will, there is a way.
AI demand isn't going away. It will just move from the data center to the local machine. On device AI is much better for the customer than it being in the cloud. Expecting people to stick with a few dozen GB of HBM is going to be the 'no one needs more than 640kb' of the 2030s.
> On device AI is much better for the customer than it being in the cloud<p>Which is exactly how you know it will always be nerfed. The last thing these guys want is to take their claws out of our data.
It's being delayed from running on local consumer-grade machines by AI companies, specifically by making the cost of entry too expensive. OpenAI buys 40% of wafers to ensure the price of memory stays high.
Hmm, I never considered a targeted squeeze on consumer-run models by way of slowing hardware proliferation. It "made sense" to try and box out other AI companies, but I guess they also have a pretty strong vested interest in keeping VRAM low and preventing some kind of high-memory PCIe ASIC from getting cheap, broad adoption.<p>Another thread suggested that OpenAI's primary play is to get big enough that it's too big to fail. Funny to think that it's not a funding runway or algorithmic moat, just a hardware vault, and the longer you can stop boats crossing it, the more chance you get your fingers in all the pies.
> AI demand isn't going away.<p>I'm not sure about that. When was the last time you have used Copilot prompt in Run dialog or Notepad?
> And by the time you spend years building the capacity, the prices plummet making your facility uneconomical to run.<p>People forget quickly why we only have a handful of DRAM manufacturers today.
Europe needs to focus on energy and fixing their supply chain first. And the deregulation push keeps getting delayed. Just the other day the main person behind it in Germany got sacked because of internal power struggles, in part because of the Greens (part of the coalition).<p>For context, the German manufacturing sector is losing something like 15k jobs PER MONTH.
Aren’t Chinese manufacturers already expanding their capacity? Given that Samsung and SK Hynix have left that market in the pursuit of HBM4 chips, China is going to rule this market. At least that’s what analysts are saying.
In this market, CXMT is more likely to also move to HBM production rather than consumer-grade RAM. After all, China is also doing an AI push in competition with the US, and domestic Chinese companies are "recommended/guided" by the government to help, while consumers are pushed to lower priority.<p>The situation I'm worried about is that PC manufacturers could use this opportunity to push for more locked-down designs, such as soldered RAM or even SSDs. My current ThinkPad already has soldered LPDDR5 RAM chips with no user-end RAM upgrade possible, so there's reason to suspect they'll take more pages from Apple's book if they can get away with it, just like when they pushed out those internally mounted unswappable batteries.<p>My personal guess is that the RAM price will fall after this period of AI expansion is over and the major players start to consolidate. But it will not fall as much as we're hoping, because the manufacturers can just reduce production to control the price.
Chinese manufacturers like CXMT face the same kinds of issues that Huawei faced in entering the EU market - the EU is clamping down on Chinese suppliers across their supply chain [0].<p>Where can CXMT and other Chinese players export when Japan, South Korea, much of ASEAN, India, much of North America, the EU, the UK, Australia, NZ, and parts of the Gulf have enacted or begun enacting trade barriers against Chinese exports?<p>[0] - <a href="https://www.ft.com/content/eb677cb3-f86c-42de-b819-277bcb042295" rel="nofollow">https://www.ft.com/content/eb677cb3-f86c-42de-b819-277bcb042...</a>
RAM isn't a critical security category like 5G base stations.<p>Also, I don't think you've seen true consumer rage until the opposition in the EU starts pointing out that the current parties are making the smartphones, laptops, TVs and whatnot consumers wanna buy much more expensive (or crappier). Large parts of the EU are currently being crushed by one of the worst housing crises in the world, the economy seems to be wavering for young people especially, and cheap tech/gadgets were one of the sole rays of light left.
> RAM isn't a critical security category like 5G base stations.<p>Those base stations are only security critical because mobile networks are deliberately insecure to enable government surveillance.<p>And I can imagine backdooring RAM. At least the controller part.
> Large parts of the EU are currently being crushed by one of the worst housing crises in the world, the economy seems to be wavering for young people especially, and tech / gadgets being cheap was one of the sole rays of light left.<p>Huh?
So, the EU and US tend to actually implement such bans, but the rest of the countries on a list like that...<p>People appreciated cheap YMTC 232-layer NAND when that happened where I live.
Then those countries that didn't will have an advantage at selling their electronics to the world.<p>Or their consumers will enjoy cheap PC part prices. With possible gray zone re-export market.<p>Of course we could see retreat from global markets to mercantilism, but that has yet to fully happen.
Or China could stop antagonizing blocs like the EU through actions like solidifying ties with Russia [0][1][2], imposing rare earth export restrictions on the EU [3], and undermining EU institutions [4].<p>[0] - <a href="https://www.reuters.com/world/asia-pacific/xi-putin-hail-ties-video-call-ukraine-war-nears-anniversary-2026-02-04/" rel="nofollow">https://www.reuters.com/world/asia-pacific/xi-putin-hail-tie...</a><p>[1] - <a href="https://www.reuters.com/world/china/chinas-president-xi-meets-with-russias-prime-minister-mishustin-2025-11-04/" rel="nofollow">https://www.reuters.com/world/china/chinas-president-xi-meet...</a><p>[2] - <a href="https://www.reuters.com/world/china/china-calls-closer-defence-ties-with-russia-ministers-talks-2026-01-27/" rel="nofollow">https://www.reuters.com/world/china/china-calls-closer-defen...</a><p>[3] - <a href="https://www.reuters.com/world/china/eu-steps-up-efforts-cut-reliance-chinese-rare-earths-2025-10-25/" rel="nofollow">https://www.reuters.com/world/china/eu-steps-up-efforts-cut-...</a><p>[4] - <a href="https://www.scmp.com/news/china/diplomacy/article/3316875/china-tells-eu-it-cannot-afford-russian-loss-ukraine-war-sources-say" rel="nofollow">https://www.scmp.com/news/china/diplomacy/article/3316875/ch...</a>
> stop antagonizing blocs like the EU<p>Who antagonized who first, again?
Ahh, it is always China antagonizing others, isn't it?
There's nothing wrong EU coming to absurd lengths in all directions that are leading to destruction of its economy and society only to please the narrative of few degraded groups of individuals. Yet, it is others that are the cause. Nice, easy story telling.
All governments in the world turned to be on the dark side. But some are reaching new heights.
And Asia is mostly peaceful right now while war has returned to Europe on a scale not seen in 50 years. Those crazy Chinese, eh? Massive failure of their foreign policy establishment. Total inability to engage with Russia.<p>> and undermining EU institutions<p>If the <i>Europeans</i> had any common sense they'd be undermining EU institutions as well; those institutions have been disasters. They aren't doing a good job of keeping the peace, they aren't doing a good job of promoting prosperity, and they've had successes like forcing Apple to switch from Lightning to USB-C. The CCP on the other hand have been so successful in the last few decades that they're making authoritarianism look good. If the EU focused on figuring out what good policy looked like, then that wouldn't be the case. Although I assume sooner or later the ideological issues will catch up with China.
That's a good point.
> <i>Europe should invest into manufacturing RAM</i><p>It should. And it should enact the political reforms that would make large capital projects like fabs possible. The current confederacy is proving just as much a stepping stone for Europe as it was for America. I'm not saying a full united Europe should emerge. But a system of vetoes is barely a system at all.
There exists domestic supply, it's just not scaled up:<p><a href="https://www.goodram.com/en/" rel="nofollow">https://www.goodram.com/en/</a><p>You don't see their products in stores too often as they're focused on B2B - particularly the automotive sector.<p>That being said I have a 128GB memory stick from this manufacturer and I hope they make the most out of this windfall.
I'm pretty sure they're just assembling DIMMs (and SSDs), not fabricating the memory ICs on them, and it's the latter that are in short supply.
That's assembly: a couple of lanes of PCB reflow ovens. They have been at this since the mid-nineties, always offering a lifetime RAM warranty too. Their RAM and SSDs are pretty commonly seen in retail.
Idea: Take the money that Germany promised to Intel if they build a state of the art fab. Instead, ask SK Hynix, Samsung, or Micron to build a DRAM fab in Germany.
It may seem that these are very similar processes, but this is only if you do not take into account the bribes from Intel to specific officials and their relatives who make decisions about subsidizing Intel.<p>SK Hynix, Samsung, or Micron don't treat good people well enough to give them taxpayer money.
Qimonda says hello from the grave
Yup. The countries giving huge tax breaks to American corporations could start taxing them and use it to invest in RAM production
> I think Europe should invest into manufacturing RAM ... This would be an opportunity to create domestic supply of it<p>How?<p>Most foundries across Asia and the US are being given subsidies that outstrip those the EU is providing, and the only mega-foundry project in Europe was canceled by Intel last year [0].<p>Additionally, much of the backend work like OSAT and packaging is done in ASEAN (especially Malaysia), Taiwan, China, and India. As much of the work for memory chips is backend work (OSAT and packaging), this is a field the EU simply cannot compete in, given that it has FTAs with the US, Japan, South Korea, India, and Vietnam, so any EU attempt would be crushed well before it could replicate the process.<p>Furthermore, much of the IP in the memory space is owned by Korean, Japanese, Taiwanese, Chinese, and American champions who are largely investing either domestically or in Asia, as was seen with MUFG's announcement earlier today to create a dedicated end-to-end semiconductor fund specifically to unify Japan, Taiwan, and India into a single fab-to-fabless ecosystem [1]. SoftBank announced something similar a couple weeks ago to unify the US, Japan, Malaysia, and India into a similar end-to-end ecosystem [2]. Meanwhile, South Korea is trying to further shore up its domestic capacity [3] via subsidies and industrial policy.<p>When Japanese, Korean, and Taiwanese technology and capital partners are uninterested in investing in building European capacity, American technology and capital partners have pulled out of similar initiatives in Europe, and the EU is working to ban Chinese players [4], what can the EU even do?<p>----<p>Edit: can't reply<p>> Why are you overlooking European semiconductor champions<p>Because they don't have the IP for the flash memory supply chain.
And whatever capacity and IP they have in chip design, front-end fab, or back-end fab is domiciled in the US, ASEAN, and India.<p>> STMicroelectronics<p>Power electronics and legacy nodes (28nm and above) for IoT and embedded applications.<p>> Infineon<p>Power electronics and legacy nodes (28nm and above) for automotive applications.<p>> NXP<p>Power electronics and legacy nodes (28nm and above) for embedded applications.<p>> All of them are skilled enough to build and operate a DRAM fab in Europe. A bunch of EU dev banks can lend the monies to get it built.<p>They don't have the IP. Much of the IP for the memory space is owned by Japanese, American, Korean, Taiwanese and Chinese companies.<p>Additionally, most Asian funds own <i>both</i> the IP <i>and</i> capital (often with government backing), making European attempts futile.<p>Essentially, the EU would have to start from scratch, decades behind countries with whom the EU <i>already</i> has FTAs that have expanded capacity well before the EU and thus would be able to crush any incipient European competitor.<p>[0] - <a href="https://www.it-daily.net/shortnews-en/intel-officially-cancels-construction-of-chip-factory-in-magdeburg-germany" rel="nofollow">https://www.it-daily.net/shortnews-en/intel-officially-cance...</a><p>[1] - <a href="https://www.digitimes.com/news/a20260224VL219/taiwan-talent-semiconductor-industry-policy-labor.html" rel="nofollow">https://www.digitimes.com/news/a20260224VL219/taiwan-talent-...</a><p>[2] - <a href="https://asia.nikkei.com/economy/trade-war/trump-tariffs/softbank-to-form-consortium-for-33bn-trump-deal-power-plant" rel="nofollow">https://asia.nikkei.com/economy/trade-war/trump-tariffs/soft...</a><p>[3] - <a href="https://www.digitimes.com/news/a20251230PD220/semiconductor-industry-chips-roadmap-transistor-technology.html" rel="nofollow">https://www.digitimes.com/news/a20251230PD220/semiconductor-...</a><p>[4] - <a href="https://www.ft.com/content/eb677cb3-f86c-42de-b819-277bcb042295" rel="nofollow">https://www.ft.com/content/eb677cb3-f86c-42de-b819-277bcb042...</a>
Why are you overlooking European semiconductor champions? STMicroelectronics, Infineon Technologies, and NXP Semiconductors. All of them are skilled enough to build and operate a DRAM fab in Europe. A bunch of EU dev banks can lend the monies to get it built.
> I think Europe should invest into manufacturing RAM. RAM isn't going anywhere, all of modern compute uses it. This would be an opportunity to create domestic supply of it.<p>It's easy to build factories, much more difficult to train the engineers required to run them... and let's not even talk about all the crazy regulations and environmental rules at the EU level that make that task even more difficult, because yes, chip factories do pollute... a lot.<p>Countries like South Korea or Taiwan have adapted their legislation and their tax and environmental regulations to allow such factories to operate easily. The EU and EU countries will never do that... better to outsource pollution and claim they care about the planet...
I am a CAD engineer and software developer who has worked in manufacturing a lot in the UK, in various industries, on products as big as superyachts and as small as peristaltic pumps. I think if the UK and EU are to try to defend their weakening and shrinking manufacturing sectors (these industries have been disappearing for my entire adult life), it is possible but difficult... in 10 to 20 years it will be impossible.<p>The reason is as you have described. We are getting close to the point where the people with practical experience working in, managing, and designing things like the work processes and factory layouts of industries that build physical products are gone. We're losing a lot of capable practical engineers with hands-on experience. We can keep the universities going teaching the physical subjects, but those lecturers wouldn't even know where to begin on designing and building efficient factories, unfortunately.<p>We'd probably end up having to get Chinese and Taiwanese businesses to outsource their 'experts' back to us in order to actually do this, and pay them a fortune: basically the reverse of what was happening in the manufacturing sector in the 80s and 90s!
Doesn’t the EU have an excellent education system?
Even the most excellent education system takes several years to educate a high-schooler to the level of a <i>junior</i> engineer. Then several more years are needed for the best of them to become senior engineers, with the knowledge and experience that a university alone cannot provide.<p>So we're looking at a decade-long project at least, even if everything goes as planned, and crazy fast, in the technical and administrative departments.
> Doesn’t the EU have an excellent education system?<p>Excellent universities, overall. But results from primary and secondary schools are nose diving at a more than alarming rate in several EU countries. Literacy rates are falling, math grades are falling. There's IMO only so much time before universities begin to be affected as well.
It is a general phenomenon across the developed Western world; here is the account of a professor that went viral a few months ago: <a href="https://news.ycombinator.com/item?id=43522966">https://news.ycombinator.com/item?id=43522966</a>
Maybe having less RAM and using pen/paper instead of tablet/phone will improve things...
> Doesn’t the EU have an excellent education system?<p>Well, the EU has not manufactured a whole lot of chips in the last 30 years, so where do you get the people with the professional experience to teach new engineers... Oh, you mean you have to import the teachers from South Asia too? /s And it takes what, 5 years at minimum to train an engineer? France and the UK used to produce entire home computers... in the '80s...
Come on, STM, Nordic, Infineon, and NXP are all European. There are a bunch of chip-making installations in Dresden, Germany (GlobalFoundries, Bosch, etc.), and there's Intel Fab 34 in Ireland. BTW, TSMC is planning to open a production facility in Europe in 2027.<p>This is not comparable to Taiwan or the Shenzhen area, but it's definitely not nothing. Some local expertise exists, even though it may not be the most cutting-edge.
A parallel reply from me: <a href="https://news.ycombinator.com/item?id=47162226">https://news.ycombinator.com/item?id=47162226</a><p>The same applies to your comment.
Europe can do one simple thing to ensure low RAM prices: allow ASML to sell all its advanced machines to Chinese RAM producers.
I am working on my side product [1], where I was exploring a Rockchip part that required external memory (just 1 GB) which went from $3 to $32 and completely destroyed the economics for me. I settled on one with embedded memory and am optimizing my code instead :)<p>1. <a href="https://x.com/_asadmemon/status/1989417143398797424" rel="nofollow">https://x.com/_asadmemon/status/1989417143398797424</a>
I suspect game development will be similar: game companies will optimize their games, given that new consumer cards are not going to be released for a while or will be too expensive.
I hope so.<p>Resource usage has been on a hedonic treadmill at least since I came online in the 90s. Good things have come from that, of course, but there's also plenty of abstraction/waste that's permitted because "new computers can handle it."<p>With so many gaming devices based on the AMD Z1 Extreme platform (and its custom Valve corollaries) over the past few years, it'll be great to see that be the target/baseline for a while. It brings access to more players and staves off e-waste for longer.
I'm not sure how we got on to games as resource hogs when Teams uses 2GiB of RAM and Windows itself uses 4GiB of RAM.<p>I work in gamedev, so perhaps I'm a bit sensitive, and I understand that general purpose engines aren't as light on resources as the handcrafted ones that nobody can afford to make anymore... but we're not anywhere close to the layers of waste and abstraction that presents itself when using webtech for desktop apps by default.
Still, isn't part of the hedonic treadmill merely the marketers' dream of number-goes-up? If your software needs more specs, then surely it's doing more work, and the hardware boys will gladly give you a monitor with more Hz.<p>So the causal link is more: why would software makers need to optimize when it benefits them to pretend the user _needs_ more hardware? Especially in the games realm. Surely going from 60Hz to a 240Hz refresh rate was a practical loss in benefit per Hz halfway through. But it ate up hardware resources along the way.
RAM usage is super easy to dial down, it's just texture quality.
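As a rough sketch of why the texture slider dominates VRAM: memory scales with the square of resolution, so each quality tier down roughly quarters the footprint. The RGBA8 format and the 4/3 mip-chain factor below are the usual approximations; the sizes are illustrative, not from any particular engine.

```python
def texture_bytes(width, height, bytes_per_pixel=4, mipmaps=True):
    """Approximate GPU memory for one texture.

    A full mip chain adds about one third on top of the base level
    (1 + 1/4 + 1/16 + ... -> 4/3); RGBA8 is 4 bytes per pixel.
    """
    base = width * height * bytes_per_pixel
    return base * 4 // 3 if mipmaps else base

# Dropping one quality tier (halving each dimension) cuts memory ~4x:
ultra = texture_bytes(4096, 4096)  # ~85 MiB
high = texture_bytes(2048, 2048)   # ~21 MiB
```

Which is why a texture-quality setting is usually the first and cheapest lever for fitting a game into less VRAM.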
Not really new, Nvidia's GTX 1070 launched in 2016 with 8 GB of VRAM and they've been slow walking VRAM increases for the last decade.<p>Today's RTX 5060 has 8 GB for basically the same price that the 1070 did.<p>For $650 you can go up to 12 GB in the 5070, if you want 16 GB it's $1000 for the 5070 Ti, or hundreds more than that for the 5080.<p>I know there's inflation and $380 in 2016 was more money than it is today, but if you'd asked me 10 years ago I would've bet on VRAM capacity doing better than "the same money is worth less but still gets you exactly same amount of memory 10 years from now."<p>With prices going up, I half expect Nvidia to launch the RTX 6070 and tell everyone "It has 4 GB of memory and we think you're going to love it. $900." Or they'll just stop bothering with consumer GPUs entirely.
1060 6GB here. Figured the headroom would get me a couple extra years out of it. At this rate I’m wondering if the card is going to outlast the concept of owning graphics cards. Partly because, as you mention, maybe NVIDIA will stop selling them. Partly because, maybe APUs will get good enough…
Very few games target high specs to begin with.
win win, considering the slop game studios are pushing out these days.
For reference, Octopart is useful to track prices from many distributors, linked below [0] is a commonly used memory (1G) for Rockchip, Amlogic, Allwinner on many Radxa and Orange Pis.<p>[0] <a href="https://octopart.com/part/nanya/NT6AN256T32AV-J2" rel="nofollow">https://octopart.com/part/nanya/NT6AN256T32AV-J2</a>
Maybe this RAMmageddon will trigger a wave of optimized software that doesn't need GBs of memory for anything and everything.
Let's see.<p>A) Programmers will get their shit together and start shipping lean software.<p>OR<p>B) New laptops will become neutered thin clients, and all the heavy lifting will be done by cloud service providers.<p>Which one seems more likely?
B sounds more likely because LLMs are already in the cloud and agents are thin clients
B. But those neutered thin clients do not have enough ram to run software anyway even if all the real computation is in the cloud.
If electron dies, I'll forgive the people responsible for this shortage
Or you won’t need RAM because everything will be hosted in the cloud and used via browser or just desktop streaming :)
Only a matter of time before you hear about shipping trucks going missing or being stolen. China is opening up more production, but I don't see any relief coming soon.
I might not have bought NVDA or timed BTC correctly, but at least I have 512 GB of DDR5 in my server and 128 GB in my Macbook Pro haha. The reality is that these are insanely huge amounts of RAM. I'm glad to have them because I don't need these tab suspender extensions a bunch of my friends use, but really I'd prefer if GPUs were a bit cheaper, and server hardware was generally easier to get. An SXM5 based motherboard is really hard to get these days despite the fact that you can get super powered Epyc 9755s for comparatively nothing.<p>It reminds me of the heady days of Thai floods when hard drives were inaccessible.
Jeez. I'm glad I "splurged" for the 24gb RAM in my macbook air. Should last me a few more years..
The joke is that Apple RAM pricing is now close to market level, they still have margin in there even at market prices, and they are notorious for supply chain management and locking in contracts/prices ahead of time. So I doubt Apple will change anything here short term.<p>On the flip side, if you're buying a new computer in 2026, it's going to be even harder to justify not getting a MacBook: the chips are already 2 years ahead of PC, the price of the base models was super competitive, and now that RAM is super expensive, even the upgraded versions are competitive with the PC market. Oh, and Windows is turning into an even larger pile of shit on a daily basis.
Yeah, but the Linux support is still quite poor unfortunately.<p>I'd buy a mac in a sec otherwise.
Apple is releasing new devices next week, I wonder if they will take the opportunity to increase memory prices.
> The joke is that Apple RAM pricing is now close to market level<p>Probably not quite, but I was pricing a Lenovo laptop last week and this is the first time the lenovo price for RAM upgrades was lower than 3rd party RAM.
2 years ahead?
My canonical example: I bought 12 sticks of 64GB DDR4 LRDIMM for $400-430. Now each stick costs $320...
Just a year ago...
Apple also uses a different kind of RAM (iirc a custom LPDDR5X to use with their unified-memory SoC), not the same kind as the commodity RAM that everyone is putting in their PCs. So they aren't competing with everyone on it. Plus they probably locked in their rates back in 2024 with their suppliers.
"Bill of materials" (BOM) is not a monetary invoice, only an itemization of constituent parts. :)
Yes the normal term I’ve always heard and used for what they’re talking about is “BOM cost”, i.e. the combined cost of every item on the BOM to make a single unit.
Reading the title I wondered if this is about more components coming with their own memory, because I've never heard BOM used as a monetary bill.
Isn't there a full wafer ai chip mainframe for data centers now that blows anything needing ram out of the water?
I don't understand how the RAM shortage exists if companies have surpassed Nvidia.
> Isn't there a full wafer ai chip mainframe for data centers now that blows anything needing ram out of the water?<p>Guess what's inside these chips and what equipment they're made on.
How come there’s ASICs for mining but not AI? Seems like there would be almost unlimited demand
There is. It does 16k tokens per second. <a href="https://chatjimmy.ai/" rel="nofollow">https://chatjimmy.ai/</a>
Weren't Google's TPUs exactly that already? Wikipedia says they date from a decade ago.
Mining is all compute and no IO. Training particularly is heavy compute and insane IO.
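A back-of-the-envelope sketch of that asymmetry, using the common 16-bytes-per-parameter rule of thumb for mixed-precision Adam training (the 70B parameter count below is just illustrative):

```python
def adam_bytes_per_param():
    """Common mixed-precision Adam accounting: fp16 weights (2 B)
    + fp16 gradients (2 B) + fp32 master weights (4 B)
    + fp32 first and second moments (4 B + 4 B)."""
    return 2 + 2 + 4 + 4 + 4  # 16 bytes per parameter

def step_traffic_gb(params_billions):
    """GB of optimizer state that must be read and updated on every
    training step, before counting activations or all-reduce traffic."""
    return params_billions * adam_bytes_per_param()

print(step_traffic_gb(70))  # 1120 GB touched per update step
```

Compare that with mining, where a SHA-256d hash reads an 80-byte block header and keeps everything in registers; the memory system barely matters.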
There are ASICs for AI. They're called GPUs / TPUs.
I think China is about to step in and take every last bit of non-ai market share, and then when the bubble bursts companies like micron and samsung are going to be begging governments for a bail out.
90s: so little memory, programmers have to optimize their code to run properly.<p>2010s: so much memory, programmers used electron and chrome wrapping everything in js.<p>2026: so little memory, programmers have to optimize AI code to run properly.
Will this RAM shortage also affect the price of mobile phones?
I read that Apple will start feeling the heat in the third quarter of this year although nobody knows for sure. That will either shrink their margins a bit or iPhones prices will go up.
The Verge had some good coverage about this, but TL;DR: probably. Flagship phones may not rise in price enough to fully reflect it. They might cut costs elsewhere, like keeping the same camera, or eat <i>some</i> of the cost increase in their margin.<p><a href="https://www.theverge.com/tech/880812/ramageddon-ram-shortage-memory-crisis-price-2026-phones-laptops" rel="nofollow">https://www.theverge.com/tech/880812/ramageddon-ram-shortage...</a><p>They discussed it on the Decoder podcast as well.
Given how eagerly companies have used price rises - even unrelated - as an excuse to raise prices disproportionately since 2020 I expect so
Most base-level smartphones are loss leaders and wouldn't be severely impacted, and upper-tier smartphones tend to be priced at their true value. It's the mid-tier SKUs that get impacted.<p>Additionally, depending on which country you live in, telecom vendors reduce the upfront cost of the phone purchase and make up the difference via contracts.
Is there any hope for RAM to stabilize in prices again?
This is a good question. My guess: as long as the LLM boom continues and no new DRAM manufacturing capacity is added, the high prices will remain.
if people keep buying the RAM at current prices, then I can't imagine manufacturers will lower the prices
Most economists will tell you the cure for high prices is high prices. We will all adjust and ram manufacturers will invest in more capacity.
Recently ordered a number of machines with 32GB of RAM. Wanted 64, but was told prices couldn't be guaranteed, nor could delivery dates. Under the pressure of urgency, settled for whatever was available that day.
Expensive PCs/home servers mean more people on mobile crap plus someone else's cloud, which means students who do not learn PCs and FLOSS when they have the time, and so on. That's the real point.
>AI-driven “end-to-end planning processes"<p>Behold, the RAM cost is being optimized with AI.
What!? I always thought we could just download more RAM!
on Linux you can actually do that by enabling zram <a href="https://en.wikipedia.org/wiki/Zram" rel="nofollow">https://en.wikipedia.org/wiki/Zram</a>
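For the curious, a minimal sketch of what enabling zram by hand looks like (assumes root, a kernel built with zram support, and the zstd compressor available; many distros also ship helpers like zram-generator that do this for you):

```shell
# Load the zram module and expose one device
modprobe zram num_devices=1

# Choose the compressor before setting the size (order matters)
echo zstd > /sys/block/zram0/comp_algorithm

# Uncompressed capacity of the device; actual RAM used is the
# compressed size, often 2-3x smaller
echo 8G > /sys/block/zram0/disksize

# Format and enable it as high-priority swap so it is used
# before any disk-backed swap
mkswap /dev/zram0
swapon -p 100 /dev/zram0
```

The net effect is trading some CPU time for effectively more usable RAM, which is about the closest thing to "downloading more RAM" that actually works.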
<a href="http://computeradsfromthepast.substack.com/p/connectix-ram-doubler" rel="nofollow">http://computeradsfromthepast.substack.com/p/connectix-ram-d...</a>
Connectix was a big deal in its day. RAM Doubler was considered essential software.<p>They also marketed the first webcam, and made emulators mainstream. Their PlayStation emulator is the basis for the case law that says emulators are fair use, decided as a result of a suit from Sony.
> RAM was expensive at the time. For example, an 8 MB stick cost $300.<p>So what you're saying is that it could be worse, but not by much?
To the downvoters that don't get OP's reference:<p><a href="https://downloadmoreram.com" rel="nofollow">https://downloadmoreram.com</a><p>Idk if the owner changed or what, but the website used to be more comical.
A conspiracy theory I’m entertaining right now is that hogging RAM manufacturing by AI companies is not so much because they _need_ the RAM, but because they want to cripple existing and potential competitors, and that includes on-device models.<p>One thing that might support this is the fact AI companies are purchasing uncut wafers of DRAM. One use might be to hoard and stockpile them somewhere in a cave, so that no one else gets to them.<p>Another thing that might support this is that precisely the same strategy had been in use by software companies during the COVID hiring fever. Companies used to hire people for ridiculous pay with little actual work to perform so that among other things, competitors wouldn’t whisk those people away and be at an advantage.<p>This, of course, ended with massive layoffs once the reckoning came about, and I’m wondering about what is going to happen when (there’s no “if”) the reckoning comes for big AI, too.
I think we're at the peak, or close to it, for these memory shenanigans. OpenAI, which is largely responsible for the shortage, just doesn't have the capital to pay for it. It's only a matter of time before the chickens come home to roost and the bill is due. OpenAI is promising hundreds of billions in capex but has nowhere near that cash on hand, and its cash flow is abysmal considering the spend.<p>Unless there is a true breakthrough, beyond AGI into superintelligence on existing or near-term hardware, I just don't see how "trust me bro" can keep its spending party going. Competition is incredibly stiff, and it's pretty likely we're at the point of diminishing returns without an absolute breakthrough.<p>The end result is going to be RAM prices tanking in 18-24 months. The only upside will be for consumers, who will likely gain the ability to run much larger open source models locally.
Big tech wants all the chips and they get them. That is a Stalinist level of absurd planning.<p>People are missing the point. Mega-corporations distort the market. This is not capitalism; this is old aristocratic rule by power. If all these monopolies were divided into smaller chunks and regulated so they could not abuse that power, we would not be here.<p>This situation is not normal. Big tech is currently above the law and above the market economy, and if they fail, their plan is to make us pay *AGAIN* for their bad decisions. All businesses and individuals are already paying higher prices for big tech's folly, and we will be left with the bill when the AI boom fails, too.
This is a fairly odd statement, given that BOMs are managed in manufacturing systems, and for accounting and engineering purposes, in multiple different ways. This can be anything from sales data for a client, to information for the guys on the factory floor, to figures for the accountants. There are sales BOMs, manufacturing BOMs, procurement BOMs, nested BOMs, etc., all for different parts of the business process... you would have BOMs within the organisation that were probably nearly 70%, and others that were 0%!
I asked ChatGPT directly how it was fair that OpenAI bought 40% of the world’s RAM supply.<p>It denied this saying that the figures quoted were estimates only, that such massive RAM contracts would be easily obtainable public knowledge and that primarily the recent price increases were mostly cyclical in nature.<p>Any truth to this?<p>Edit to add: I am actually curious; I was under the impression that this 40% story going around was true and confirmed, rather than just hyperbole or speculation.
Please, just search for it and read some articles. It would take 10 mins
How about you ask a competitor's model? You could just be hitting built-in OpenAI propaganda on their own bot. :)