All of these smaller-model results suggest that we need to incorporate pruning into model training. NEAT was one of my favorite algorithms of all time. Same with the BitNet models, which keep showing that the information a neural network actually needs is not that much. And again, it is the same with us: we use much less energy than a regular network, so there seems to be an immense waste of energy in training these models.<p>My intuition tells me the pre-training paradigm will shift immensely in the near future, because we have started to understand that we don't need all these parameters; the subnetworks seem to be very robust at preserving information in high dimensions. We keep saying "curse of dimensionality", but it is more like the bliss of dimensionality we keep seeing. Network redundancy still seems to be very high, given that BitNet is more or less comparable to other LLMs.<p>This basically shows that over 50% of the neural net is gibberish! The reason is that the objective function simply does not account for it.<p>Again, my intuition tells me that the neural scaling laws are incomplete as they stand, because they lack an efficiency parameter that needs to be taken into account (or was simply left out due to corporate greed).<p>And this is what we are seeing as "the wall".<p>I am no expert in neural network theory nor in math, but I would assume the laws should be something in the vicinity of this formulation/simulation:<p><a href="https://colab.research.google.com/drive/1xkTMU2v1I-EHFAjoS86upFb0o8Bw4r6t?usp=sharing" rel="nofollow">https://colab.research.google.com/drive/1xkTMU2v1I-EHFAjoS86...</a><p>and encapsulate Shannon's channel capacity. I call them generalized scaling laws, since they include what should have been included in the first place: entropy.
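<p>As a rough illustration of what I mean (a toy sketch only; the constants are roughly the published Chinchilla fit values, but the "efficiency" term and the example numbers are entirely made up by me):
<pre><code>
# Toy sketch: a Chinchilla-style loss with an extra "efficiency" factor
# that turns raw parameter count into *effective* capacity, in the spirit
# of a Shannon channel-capacity limit. Illustrative only.
E, A, B = 1.69, 406.4, 410.7     # irreducible entropy + fit constants (Chinchilla)
alpha, beta = 0.34, 0.28

def loss(n_params, n_tokens, efficiency=1.0):
    """efficiency < 1 models capacity that is redundant / prunable."""
    n_eff = efficiency * n_params          # effective parameters actually used
    return E + A / n_eff**alpha + B / n_tokens**beta

# A 70B model where only half the weights carry signal behaves like a dense 35B:
print(loss(70e9, 15e12, efficiency=0.5))   # equals loss(35e9, 15e12)
print(loss(35e9, 15e12))
</code></pre>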
I seem to recall that there was a recent theory paper that got a best paper award, but I can't find it.<p>If I remember correctly, their counter-intuitive result was that big overparameterized models could learn more efficiently, and were less likely to get trapped in poor regions of the optimization space.<p>[This is also similar to how introducing multimodal training gives an escape hatch out of tricky regions.]<p>So with this hand-wavy argument, it might be the case that two-phase training is needed: a large, overcomplete pretraining phase focused on assimilating all the knowledge, and a second phase that makes the model compact. Or, alternatively, a hyperparameter that controls overcompleteness vs. compactness and is adjusted over training.
I don't see that as counter-intuitive at all. If you have a barrier in your cost function in a 1D model, you have to cross over it no matter what. In 2D it could be just a mound that you can go around. More dimensions mean more ways to go around.
This is also how the human brain works. A young baby will have something more similar to a fully connected network, whereas a Biden-type elderly brain will be more of a sparse, minimally connected feed-forward net. The question is (1) can this be adjusted dynamically in silico, and (2) if we succeed in that, does fine-tuning still work?
The lottery ticket hypothesis paper from 2018?
I guess we can think of it like one giant funnel; it gets narrower as it goes down.<p>Vs trying to fill something with just a narrow tube, you spill most of what you put in.
"Train large, then compress"
> This basically shows that over 50% of the neural net is gibberish! The reason is that the objective function simply does not account for it.<p>This is a mischaracterization of sparsity. Performance did drop, so the weights are <i>not</i> gibberish. And as for training vs. pruning: you can't train your way directly into the final sparse state, you can only prune your way there.
The fact that you can prune a model will not make it smarter, the wall still stands. I think what explains the wall is the fact that we can't scale organic data exponentially, and we have already covered the most useful types.<p>Going forward we will accumulate truly useful data at a linear growing rate. This fundamentally breaks the scaling game. If your model and compute expand exponentially but your training data only linearly, the efficiency won't be the same.<p>Synthetic data might help us pad up the training sets, but the most promising avenue I think is to use user-LLM chat logs. Those logs contain real world grounding and human in the loop. Millions of humans doing novel tasks. But that only scales linearly with time, as well.<p>No way around it - we only once had the whole internet for the first time in the training set. After that it's linear time.
Don't we still have a lot of video and other non-text real-world data to go through? Feels like a potential break could come from there.
Generally speaking, text-only models manage to learn a huge amount about the visual world. So when you train the model on video, it might have less to learn. Video is also less abstract than text, generally. But I am sure we can still extract useful learning from videos; it's probably expensive, but we'll have to do it at some point.
Given how much of the web is AI-generated slop now, I think going forward it's even worse than you suggest.<p>I have a copy of RefinedWeb locally, so I have a billion pre-ChatGPT documents for my long-term use.
In mice, ~30% of neurons are silent [1]. Neuralink team is finding that <i>most</i> are silent, where they probe [2]:<p>> Also, most of them are silent. They don’t really do much. Or their activities are… You have to hit it with just the right set of stimulus.<p>> ... When you place these electrodes, again, within this hundred micron volume, you have 40 or so neurons. Why do you not see 40 neurons? Why do you see only a handful? What is happening there?<p>(Yes, I understand LLM aren't brains.)<p>[1] <a href="https://news.mit.edu/2022/silent-synapses-brain-1130" rel="nofollow">https://news.mit.edu/2022/silent-synapses-brain-1130</a><p>[2] <a href="https://youtube.com/watch?v=Kbk9BiPhm7o&t=7056" rel="nofollow">https://youtube.com/watch?v=Kbk9BiPhm7o&t=7056</a>
After reading the article it seems to me that this is more like synaptic pruning where weak connections between neurons are eliminated in order to increase the efficiency of the neurons. Interesting to see that this also works for LLMs.<p><a href="https://en.wikipedia.org/wiki/Synaptic_pruning" rel="nofollow">https://en.wikipedia.org/wiki/Synaptic_pruning</a>
I might be missing something, but it would be great if the charts would show inference speed, model size (required VRAM) and quality (benchmark results) in one. It might be that the same quality and speed and size can be attained by just quantizing, perhaps with added fine-tuning, without the sparseness. The post seems to imply that their method is better, but if that's the case, they could show that.
I don't understand LLMs enough to know whether this is a silly question or not.<p>Is it possible to build domain-specific smaller models and merge/combine them at query/run time to give better responses or performance, instead of one large all-knowing model that learns everything?
I think that's the intuition behind MoE (Mixture of Experts). Train separate subnets for different tasks, train a router that selects which subnets to activate at inference time. Mixtral is a current open model which I believe implements this.
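<p>Conceptually the routed layer is quite small; here is a minimal sketch of top-k gating in PyTorch (not Mixtral's actual code, and all sizes are made-up toy values):
<pre><code>
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Minimal top-k mixture-of-experts layer (toy sizes, illustrative only)."""
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)        # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                   # x: [tokens, d_model]
        logits = self.router(x)                             # [tokens, n_experts]
        weights, idx = logits.topk(self.top_k, dim=-1)      # pick k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):                         # weighted sum of chosen experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

moe = TinyMoE()
print(moe(torch.randn(5, 64)).shape)   # torch.Size([5, 64])
</code></pre>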
No. MoE tends to change expert every other word. There’s a bit of pattern (like a lot of punctuation to one expert) but it’s not clear what. Nobody understands how or why the router chooses the expert. It’s so early.
It's got nothing to do with words, and many MoEs route to multiple experts per token (the well-known Mixtral variants, for example, activate 2 experts per token).
Weirdly it <i>does</i> have to do with words, but not intentionally. Mechanically the routing is per-token, but the routing is frequently stable across a word as an emergent property. At least, that's how I read the mixtral paper.
> Nobody understands how or why the router chooses the expert. It's so early.<p>Nobody understands how LLMs work either.
Are LLMs as "early" as MoE?
> MoE tends to change expert every other word<p>Any citation on this one?
It's covered in the original Mistral "Mixtral of Experts" paper [0].<p>0. <a href="https://arxiv.org/abs/2401.04088" rel="nofollow">https://arxiv.org/abs/2401.04088</a>
I believe it's actually per-token routing, not "every few words".
This is not how MoEs work at all. They are all trained together, often you have multiple experts activated for a single token. They are not domain specific in any way that is understandable by humans.
You might want to look into "task arithmetic" which aims at combining task-specific models post-training. For example:<p><a href="https://proceedings.neurips.cc/paper_files/paper/2023/file/d28077e5ff52034cd35b4aa15320caea-Paper-Conference.pdf" rel="nofollow">https://proceedings.neurips.cc/paper_files/paper/2023/file/d...</a>
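<p>The core trick is pleasantly simple: a fine-tuned checkpoint minus its base checkpoint gives a "task vector" that you can scale, add, or mix. A rough sketch (the checkpoint names and coefficients in the usage comment are placeholders, not a real recipe):
<pre><code>
import torch

def task_vector(base_state, finetuned_state):
    """Task vector = fine-tuned weights minus base weights, per tensor."""
    return {k: finetuned_state[k] - base_state[k] for k in base_state}

def merge(base_state, task_vectors, coeffs):
    """Add scaled task vectors back onto the base model's weights."""
    merged = {k: v.clone() for k, v in base_state.items()}
    for tv, lam in zip(task_vectors, coeffs):
        for k in merged:
            merged[k] += lam * tv[k]
    return merged

# Hypothetical usage (checkpoint names are placeholders):
# base = torch.load("base.pt"); math_ft = torch.load("math.pt"); code_ft = torch.load("code.pt")
# combined = merge(base, [task_vector(base, math_ft), task_vector(base, code_ft)], [0.5, 0.5])
</code></pre>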
It's possible; the question is how to choose which submodel will be used for a given query.<p>You can use a specific LLM, or a larger general LLM, to do this routing.<p>Also, some work suggests using smaller LLMs to generate multiple responses and a stronger, larger model to rank them (ranking is much more efficient than generating).
Taking a further step back from LLM’s, this is called portfolio / ensemble techniques in the literature.<p>A common practice in more formal domains is to have a portfolio of solvers and race them, allowing for the first (provably correct) solver to “win”<p>In less formal domains, adding/removing nodes/trees in an online manner is part of the deployment process for random forests.
This is called speculative decoding
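<p>Roughly: a small draft model proposes a few tokens cheaply, the big model checks them in one forward pass, and you keep the prefix it agrees with. A very loose sketch of the greedy variant (the model objects and their methods here are placeholders, not a real library API; real implementations use rejection sampling):
<pre><code>
def speculative_decode(draft_model, target_model, prompt, n_draft=4, max_len=256):
    """Sketch of draft-and-verify decoding (greedy acceptance, illustrative only).
    draft_model / target_model are placeholder objects, not a real API."""
    tokens = list(prompt)
    while len(tokens) < max_len:
        draft = draft_model.generate(tokens, n_draft)       # cheap: k proposed tokens
        # One forward pass of the big model scores every draft position at once.
        checked = target_model.greedy_next(tokens, draft)   # target's token at each position
        accepted = []
        for d, t in zip(draft, checked):
            if d == t:
                accepted.append(d)        # big model agrees, keep the cheap token
            else:
                accepted.append(t)        # disagreement: take the target's token and stop
                break
        tokens.extend(accepted)
    return tokens
</code></pre>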
LLobotoMy
> “By sourcing and filtering only the highest-quality and most representative data for LLM use cases, we reduced the pretraining set to just 13 billion tokens—drastically cutting the environmental impact of further training while preserving performance.”<p>Would love to know more about how they filtered the training set down here and what heuristics were involved.<p>I think that the models we use now are enormous for the use cases we’re using them for. Work like this and model distillation in general is fantastic and sorely needed, both to broaden price accessibility and to decrease resource usage.<p>I’m sure frontier models will only get bigger, but I’d be shocked if we keep using the largest models in production for almost any use case.
For those curious, Nvidia and Cerebras have been doing R&D on sparse neural nets for something like a decade. Nvidia began adding hardware support for them several generations ago (Ampere).<p>It is significantly more complex than it appears at first sight.
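<p>The "structured" part is what makes the hardware support tractable: Ampere's sparse tensor cores expect exactly 2 non-zero values in every group of 4 weights. A quick sketch of what magnitude-based 2:4 pruning looks like (illustrative only, not Nvidia's actual tooling):
<pre><code>
import torch

def prune_2_of_4(weight):
    """Zero the 2 smallest-magnitude weights in every group of 4 along the
    last dimension (2:4 structured sparsity). Illustrative, no fine-tuning."""
    w = weight.reshape(-1, 4)
    keep = w.abs().topk(2, dim=-1).indices                  # 2 largest per group
    mask = torch.zeros_like(w).scatter_(1, keep, 1.0)       # 1.0 where we keep
    return (w * mask).reshape(weight.shape)

w = torch.randn(8, 8)
print(prune_2_of_4(w))   # every group of 4 now has exactly 2 zeros
</code></pre>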
Surprising that the retained accuracy is so high after removing 1/2 of parameters. Does this help with being able to run inference on low-end GPUs?
The main constraint on consumer GPUs is the VRAM - you can pretty much always do inference reasonably fast on any model that you can fit. And most of that VRAM is the loaded parameters, so yes, this should help with running better models locally.<p>I wonder how much they'd be able to trim the recent QwQ-32b. That thing is actually good enough to be realistically useful, and runs decently well with 4-bit quantization, which makes it 16Gb large - small enough to fit into a 3090 or 4090, but that's about it. If it can be squeezed into more consumer hardware, we could see some interesting things.
You can run models up to 128GB on a MacBook Pro Max. So we're already at a point where you can run all but the biggest frontier models on consumer hardware.
Given the price tag, I don't think I'd call that "consumer" hardware, but rather "professional" hardware.<p>But perhaps that's just me…
Yeah, I also think that the ~5k price is quite hefty. It's difficult for me to imagine that running sizeable LLMs on commodity/consumer hardware will be possible without another breakthrough in the field. And I wouldn't expect GPU prices to fall if the technology proves its worth.
You're predicting the price of <i>computer chips</i> will not fall? They're just about the most price-fally truly useful thing in history.
They have been to date.<p>Massive increases in demand, due to this stuff being really, really useful, can cause prices to go <i>up</i> even for existing chips (NVIDIA is basically printing money, as they can sell all they can make for as much money as the buyers can get from their investors). I have vague memories of something like this happening with RAM in the late 90s, but perhaps it was just Mac RAM, because the Apple market was always its own weird oddity (the Performa 5200 I bought around then was also listed second-hand in one of the magazines for twice what I paid for it).<p>Likewise, prices can go up from global trade wars, e.g. the ones Trump wants for profit and Biden wants specifically to limit access to compute because AI may be risky.<p>Likewise from hot wars right where the chips are being made, say if North Korea starts fighting South Korea again, or if China goes for Taiwan.
Yes, I am.
I can imagine a world where "good enough" GPGPUs become embedded in common chipsets the same way "good enough" regular GPUs are embedded now, but we're definitely not there yet. That said, it was only a few years between the VooDoo cards coming to market and Intel integrated graphics showing up.
We already have something similar in the form of HW accelerators for AI workloads in recent CPU designs, but that's not enough.<p>LLM inference workloads are bound by compute power, sure, but that's not insurmountable IMO. The much bigger challenge is memory: not even the bandwidth, but the sheer amount of RAM you need just to load the LLM weights.<p>Specifically, even a single H100 will hardly suffice to host a mid-sized LLM such as llama3.1-70B. And an H100 is ~50k.<p>If that memory requirement is here to stay, and with the current transformer architecture it is, then the only option really left for affordable consumer HW is the smallest and least powerful LLMs. I can't imagine a built-in GPGPU with 80G of on-die memory. IMHO.
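<p>The back-of-the-envelope math that makes this painful (rough numbers, weights only; KV cache and activations come on top):
<pre><code>
# Rough weights-only memory estimate: parameters * bytes per parameter.
def weight_gb(n_params_billion, bits_per_weight):
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9   # GB

for bits in (16, 8, 4):
    print(f"70B @ {bits}-bit: ~{weight_gb(70, bits):.0f} GB")
# 70B @ 16-bit: ~140 GB  -> does not fit a single 80 GB H100
# 70B @  8-bit: ~70 GB
# 70B @  4-bit: ~35 GB   -> and KV cache / activations still come on top
</code></pre>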
> <i>more</i> consumer hardware
AMD Radeon series ≥6800 & ≥7800 have 16GB VRAM too.
Even RX 7600 XT has 16GB
I wonder if a 7600 XT is a cut-down 7800 XT then, because both normal and XT variants of the 6700 and 7700 only have 12GB VRAM.<p>Nonetheless, great info. Sounds like it might be the budget inference king!
Completely different chips; the VRAM differences come from how GDDR can be used, with either 1 or 2 chips on a single 32-bit bus; the configuration with 2 chips is called clamshell. The 7800 XT and 7600 XT have the same VRAM, but the 7800 XT has a 256-bit memory bus while the 7600 XT has a 128-bit memory bus. Meanwhile, the 7700 XT with 12 GB is on a 192-bit memory bus.<p>The workstation editions of GPUs usually use the clamshell configuration, so they can easily double the VRAM and ramp up the price by a couple thousand.
Does this mean that the model will be half the size?<p>If a 32B model @ 4-bit normally requires 16 GB VRAM, then at half the size it could be run @ 8-bit with 16 GB VRAM?<p>Isn't that tradeoff a great improvement? I assume the improved bit precision will more than compensate for the loss from the removal?
There is some improvement going from 4-bit to 8-bit quantization, but if you have VRAM to spare for that, you usually see more benefit from running a 2x larger model at 4-bit. So in scenarios where an LM already fits the existing VRAM budget, I would expect larger models instead.<p>The other thing is that VRAM is used not just for the weights, but also for prompt processing, and this last part grows proportionally as you increase the context size. For example, for the aforementioned QwQ-32, with base model size of ~18Gb at 4-bit quantization, the full context length is 32k, and you need ~10Gb extra VRAM on top of weights if you intend to use the entirety of that context. So in practice, while 30b models fit into 24Gb (= a single RTX 3090 or 4090) at 4-bit quantization, you're going to run out of VRAM once you get past 8k context. Thus the other possibility is that VRAM saved by tricks like sparse models can be used to push that further - for many tasks, context size is the limiting factor.
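<p>If anyone wants to sanity-check the context overhead, a rough KV-cache estimate looks like this (the architecture numbers below are ballpark figures I'm assuming for a ~32b GQA model, not exact specs):
<pre><code>
# KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes_per_value.
def kv_cache_gb(layers, kv_heads, head_dim, context_len, bytes_per_value=2):
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_value
    return per_token * context_len / 1e9

# Assumed ballpark config for a ~32b model with grouped-query attention:
print(kv_cache_gb(layers=64, kv_heads=8, head_dim=128, context_len=32768))
# ~8.6 GB of cache at full 32k context, on top of the ~18 GB of weights
</code></pre>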
For readability, I recommend reserving "b" for bits, "B" for byte, "p" for parameter.<p>I assume in your post that "30b" meant 30 billion, or in other words, 30Gp (giga-parameter).<p>Furthermore is 24Gb of VRAM 24 gigabits (power of 10), or 24 gibibits (power of 2)?
For readability I'm using the same convention that is generally used for these applications, where if you see "-Nb" after a model name, it always refers to the number of parameters. I have never once seen "p" for "parameter", never mind terms like "giga-parameter". Most certainly if you go searching for models on HuggingFace etc, you'll have to deal with "30b" etc terminology whether you like it or not.<p>With VRAM, this quite obviously refers to the actual amount that high-end GPUs have, and I even specifically listed which ones I have in mind, so you can just look up their specs if you genuinely don't know the meaning in this context.
I believe it definitely does. The inference cost will be much cheaper.
I wonder if the sparse model would perform worse on out of sample test data.
Curious if anybody can explain what a 2:4 sparsity pattern is. Are the 2 to be removed picked randomly?
Is it possible to rearrange a sparse matrix into a smaller dense matrix? Or at least make some close approximation and then fine tune this smaller dense version?
I'm curious - what happens if one prunes the halved model again (if that's possible with the same method), would it start losing accuracy?
Let’s take it a step further and accept some inaccuracy. If we apply the Pareto principle[1], we should get 80% of the accuracy for 20% of the size.<p>Compounding that four times, we should get .8^4 = 40% of the accuracy for .2^4 = .16% of the size.<p>That’d be about 1 GB for the current largest model.<p>[1]: <a href="https://en.wikipedia.org/wiki/Pareto_principle" rel="nofollow">https://en.wikipedia.org/wiki/Pareto_principle</a>
No need to just use 80/20 as the split. The article says (on one benchmark) you get 97.3% of the accuracy for 50% of the size. So blindly applying Pareto you get (compounding nine times, because why not) 78% of the accuracy for 0.2% of the size.<p>Something tells me that's a little optimistic.
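<p>For anyone who wants to play with the compounding themselves (same blind extrapolation; no claim that it actually holds):
<pre><code>
acc, size = 0.973, 0.5        # per-halving accuracy retention and size ratio
for n in (1, 4, 9):
    print(n, round(acc**n, 3), round(size**n, 4))
# 1 0.973 0.5
# 4 0.896 0.0625
# 9 0.782 0.002
</code></pre>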
At some point you will hit the interpolation threshold and your model will overfit perfectly to the training set.<p>The gargantuan # of parameters is what buys you the generalization properties that everyone is interested in. A very reduced model may still look & sound competent on the surface, but extensive use by domain experts would quickly highlight the cost of this.
I was thinking the same. On HF, I see 4bit gguf of this 2:4 model, and I'm like...that works?
It would fall over
Two legs, half a head, and enough wool to make a small knitted jumper
You do know that AIs are reading this stuff, right?<p>World's biggest LLM, three years from now: "What happens if we scoop out half of a human's brain? Probably not anything significant."
There was that 2007 case of the French man missing 90% of his brain and still quite functional:<p><a href="https://www.cbc.ca/radio/asithappens/as-it-happens-thursday-edition-1.3679117/scientists-research-man-missing-90-of-his-brain-who-leads-a-normal-life-1.3679125" rel="nofollow">https://www.cbc.ca/radio/asithappens/as-it-happens-thursday-...</a>
Functional yes, but an IQ of 84 isn't "slightly below the normal range", it's the 14th percentile. Not to say that it's not an achievement with just 10% of a brain, but he wasn't an average intelligence person, he likely struggles with a lot of things.
This is really interesting from the perspective of gradual replacement/mind uploading: what is the absolute minimum portion of the brain that we would have to target?<p>Understanding this could probably make the problem easier by some factor (but not "easy" in any sense.)
While that's an interesting question…<p>I was going to write "I don't think this specifically is where we need to look", but then I remembered there's two different reasons for mind uploading.<p>If you want the capabilities <i>and don't care either way about personhood of the uploads</i>, this is exactly what you need.<p>If you do care about the personhood of the uploads, regardless of if you want them to have it (immortality) or not have it (a competent workforce that doesn't need good conditions), we have yet to even figure out in a rigorous testable sense what 'personhood' really means — which is why we're still arguing about the ethics of abortion and meat.
<a href="https://www.badspacecomics.com/post/dementia-ward" rel="nofollow">https://www.badspacecomics.com/post/dementia-ward</a> Obligatory
Literally the plot of Westworld season 2.
It wasn't missing. It was squished by untreated hydrocephalus.
So 90% of our brains are spare capacity for the paper clip maximizers out there.
Is this purely a joke, or are you also trying to suggest something else, like that you think the answer is obvious, or that the question is badly formed?<p>I don't think either is true here: we are already legitimately interested in what happens when people lose (or otherwise lack) significant parts of their brains, and the results so far are complicated and could spur new theories and discoveries.
If they are, they now know you are worrying about how they read your posts. Perhaps they’ll see this as manipulative.
It turns out that assumption would be fairly accurate. Hemispherectomies are extreme but do happen.
Humans already speculate about that: <a href="https://www.cbc.ca/radio/asithappens/as-it-happens-thursday-edition-1.3679117/scientists-research-man-missing-90-of-his-brain-who-leads-a-normal-life-1.3679125" rel="nofollow">https://www.cbc.ca/radio/asithappens/as-it-happens-thursday-...</a>
You can't non-destructively edit a human brain
mostly junk DNA anyway...
This is a crazy thought lol
A 2-percentage-point drop is really big. Even Q4/Q6 quants drop accuracy on long-context understanding and complex questions, yet those claim less than a 1% drop in benchmarks. This would give the LLM functioning autism.