1 bit with an FP16 scale factor every 128 bits. Fascinating that this works so well.<p>I tried a few things with it. Got it driving Cursor, which in itself was impressive - it handled some tool usage. Via Cursor I had it generate a few web page tests.<p>On a Monte Carlo simulation of pi, it got the logic correct but failed to build an interface to start the test. Requesting changes mostly worked, but it left behind some stray symbols which caused things to fail. Required a bit of manual editing.<p>Tried Simon Willison's pelican-riding-a-bicycle test as well - very abstract, not recognizable at all as a bird or a bicycle.<p>Pictures of the results here: <a href="https://x.com/pwnies/status/2039122871604441213" rel="nofollow">https://x.com/pwnies/status/2039122871604441213</a><p>There doesn't seem to be a demo link on their webpage, so here's a llama.cpp instance running on my local desktop if people want to try it out. I'll keep this running for a couple of hours past this post: <a href="https://unfarmable-overaffirmatively-euclid.ngrok-free.dev" rel="nofollow">https://unfarmable-overaffirmatively-euclid.ngrok-free.dev</a>
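For anyone curious what that scheme looks like concretely, here's a minimal sketch of block-wise 1-bit quantization, assuming the usual recipe (sign bit per weight, mean-absolute FP16 scale per 128-weight block) - the actual packing in their fork may differ:

    # Minimal sketch: 1 bit per weight plus one FP16 scale per 128-weight
    # block. Assumes the common sign + mean-absolute-scale recipe; real
    # kernels pack the sign bits far more tightly than int8 does here.
    import numpy as np

    BLOCK = 128

    def quantize_blocks(w):
        """FP32 weights -> signs (the 1-bit part) + per-block FP16 scales."""
        w = w.reshape(-1, BLOCK)
        signs = np.where(w >= 0, 1, -1).astype(np.int8)
        scales = np.abs(w).mean(axis=1).astype(np.float16)
        return signs, scales

    def dequantize_blocks(signs, scales):
        """Reconstruct approximate weights as sign * per-block scale."""
        return (signs * scales[:, None].astype(np.float32)).ravel()

    w = np.random.randn(1024).astype(np.float32)
    w_hat = dequantize_blocks(*quantize_blocks(w))
    # Storage: 128 sign bits + 16 scale bits per block = ~1.125 bits/weight.
    print("reconstruction MSE:", np.mean((w - w_hat) ** 2))

That works out to roughly 1.125 bits per weight, which matches the "1 bit plus a scale every 128 bits" description.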
Thanks for sharing the link to your instance. Was blazing fast in responding. Tried throwing a few things at it with the following results:
1. Generating an R script that takes a city and country name, finds its lat/long, and maps it using ggmaps. It generated a pretty decent script (could be more optimal, but impressive for the model size) with warnings about using geojson if possible
2. Generating a LaTeX script to display the Gaussian integral - it generated a (I think) non-standard version using the probability density function instead of the general form (see the snippet after this list), but I still give it points for that. It gave explanations of the formula and parameters, as well as instructions on how to compile the script from bash etc
3. Generating a LaTeX script to display the Euler identity - this one it nailed.<p>Strongly agree that the knowledge density is impressive for being a 1-bit model with such a small size and such a blazing fast response
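For reference, here are the standard forms from items 2 and 3 as a compilable LaTeX snippet (the "general version" of the Gaussian integral I had in mind, rather than the PDF-flavored one the model produced):

    \documentclass{article}
    \begin{document}
    % General Gaussian integral (item 2):
    \[ \int_{-\infty}^{\infty} e^{-a x^{2}} \, dx = \sqrt{\frac{\pi}{a}}, \qquad a > 0 \]
    % Euler's identity (item 3):
    \[ e^{i \pi} + 1 = 0 \]
    \end{document}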
> Was blazing fast in responding.<p>I should note this is running on an RTX 6000 Pro, so it's probably the max speed you'll get on "consumer" hardware.
I must add that I also tried the standard "should I walk or drive to the carwash 100 meters away to wash the car" question, and it made the usual error of suggesting a walk given the distance, health reasons, etc. But then this does not claim to be a reasoning model, and I did not expect it, even remotely, to answer this correctly. Even previous-generation, larger reasoning models struggle with this
The speed is impressive; I wish it could be set up for something like speculative decoding
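In case it helps anyone picture it: the appeal would be using a tiny fast model like this as the draft model, with a bigger model verifying its proposals. A toy greedy sketch (`draft_next` and `target_next` are hypothetical stand-ins, not llama.cpp's actual API; real implementations verify all k drafts in a single batched pass with probabilistic accept/reject):

    # Toy greedy speculative decoding. The draft model proposes k tokens
    # cheaply; the target keeps the prefix it agrees with, then supplies
    # one token of its own on a mismatch so decoding always makes progress.
    def speculative_decode(prompt, draft_next, target_next, k=4, max_new=64):
        tokens = list(prompt)
        while len(tokens) < len(prompt) + max_new:
            proposal, ctx = [], list(tokens)
            for _ in range(k):                      # 1. cheap draft proposals
                t = draft_next(ctx)
                proposal.append(t)
                ctx.append(t)
            accepted = 0
            for i, t in enumerate(proposal):        # 2. target verifies prefix
                if target_next(tokens + proposal[:i]) != t:
                    break
                accepted += 1
            tokens += proposal[:accepted]
            if accepted < k:                        # 3. target fixes mismatch
                tokens.append(target_next(tokens))
        return tokens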
Here's the Google Colab link, <a href="https://colab.research.google.com/drive/1EzyAaQ2nwDv_1X0jaC5XiVC3ZREg9bdG?usp=sharing" rel="nofollow">https://colab.research.google.com/drive/1EzyAaQ2nwDv_1X0jaC5...</a>, since the ngrok link likely got DDoSed by the number of people coming along
Wow, that was cooler than I expected - curious to embed this for some lightweight semantic workflows now
What’s the trade-off? If it's smaller, faster, and more efficient - is the performance worse? A layman here, curious to know.
If you look at their whitepaper (<a href="https://github.com/PrismML-Eng/Bonsai-demo/blob/main/1-bit-bonsai-8b-whitepaper.pdf" rel="nofollow">https://github.com/PrismML-Eng/Bonsai-demo/blob/main/1-bit-b...</a>) you'll notice that it does have some tradeoffs, since model intelligence is reduced (page 10).<p>The average of MMLU Redux, MuSR, GSM8K, HumanEval+, IFEval, and BFCLv3 for this model is 70.5, compared to 79.3 for Qwen3; that said, the model is also 16x smaller and 6x faster on a 4090... so it's a pretty respectable tradeoff.<p>I'd personally be interested in fine-tuning code here
Their own (presumably cherry-picked) benchmarks put their models near the 'middle of the market' models (Llama 3 3B, Qwen3 1.7B), not competing with Claude, ChatGPT, or Gemini. These are not models you'd want to interact with directly, but they can be very useful for things like classification, simple summarization, or translation tasks (sketch below).<p>These models are quite impressive for their size: even an older Raspberry Pi could handle them.<p>There's still a lot of use for this kind of model
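As a concrete example of the classification use case, a zero-shot sentiment classifier through llama-cpp-python might look like this (the model filename is hypothetical, and constrained decoding via a grammar would be more robust than string matching):

    # Zero-shot classification sketch with a small local model via
    # llama-cpp-python. The model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(model_path="bonsai-8b-1bit.gguf", n_ctx=512, verbose=False)
    LABELS = ["positive", "negative", "neutral"]

    def classify(text):
        prompt = (
            "Classify the sentiment of the following text as one of "
            f"{', '.join(LABELS)}.\nText: {text}\nSentiment:"
        )
        out = llm(prompt, max_tokens=4, temperature=0.0)
        answer = out["choices"][0]["text"].strip().lower()
        return next((l for l in LABELS if l in answer), "unknown")

    print(classify("The response speed on this thing is unreal."))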
I expect the trend in large machine learning models to move toward bits rather than floats. There's a lot of inefficiency in floats: weights are typically something like normally distributed, so most values cluster in a small range and much of a float's dynamic range is wasted in both storage and computation. Neural networks may be founded on real-valued functions, which we simulate with floats, but float operations are just bitwise operations underneath. The only obstacles are that GPUs are built around float arithmetic and that standard ML theory works over the reals.
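A concrete illustration of that last point: once weights and activations are constrained to {-1, +1} and packed as bits, a dot product stops being multiply-accumulate and becomes XNOR plus popcount:

    # Why 1-bit weights invite bitwise kernels: encode +1 as bit 1 and -1
    # as bit 0, and a +/-1 dot product reduces to XNOR + popcount.
    import numpy as np

    def pack_signs(v):
        """Encode a +/-1 vector as an int bitmask (1 bit per element)."""
        bits = 0
        for i, x in enumerate(v):
            if x > 0:
                bits |= 1 << i
        return bits

    def binary_dot(a_bits, b_bits, n):
        """Dot product of two +/-1 vectors from their bitmasks."""
        matches = bin(~(a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
        return 2 * matches - n   # matches give +1, mismatches give -1

    a = np.random.choice([-1, 1], size=64)
    b = np.random.choice([-1, 1], size=64)
    assert binary_dot(pack_signs(a), pack_signs(b), 64) == int(a @ b)

GPUs don't expose this as naturally as they do FP16 matmuls, which is part of why these models lean on custom kernels.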
Doesn't Jevons paradox dictate larger 1-bit models?
Super interesting, building their llama.cpp fork on my Jetson Orin Nano to test this out.
I feel like it's a little disingenuous to compare against full-precision models. Anyone concerned about model size and memory usage is surely already using at least an 8-bit quantization.<p>Their main contribution seems to be hyperparameter tuning, and they don't compare against other quantization techniques of any sort.
Is Bonsai 1-bit or 1.58-bit?
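(For context: "1-bit" usually means binary weights {-1, +1}, while "1.58-bit" refers to ternary weights {-1, 0, +1}, since the information content per ternary weight is

    \[ \log_2 3 \approx 1.585 \ \text{bits} \]

The whitepaper title says 1-bit, but worth confirming which scheme the kernels actually use.)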
How do I run this on Android?
What is the value of a 1-bit model, for those of us that do not know?