43 comments

  • mythz1 day ago
    I consider HuggingFace more "Open AI" than OpenAI - one of the few quiet heroes (along with Chinese OSS) helping bring on-premise AI to the masses.

    I'm old enough to remember when traffic was expensive, so I've no idea how they've managed to offer free hosting for so many models. Hopefully it's backed by a sustainable business model, as the ecosystem would be meaningfully worse without them.

    We still need good value hardware to run Kimi/GLM in-house, but at least we've got the weights and distribution sorted.
    • data-ottawa1 day ago
      Can we toss in the work unsloth does too as an unsung hero?

      They provide excellent documentation and they’re often very quick to get high quality quants up in major formats. They’re a very trustworthy brand.
      • disiplus1 day ago
        Yeah, they're the good guys. I suspect the open source work is mostly advertising for them to sell consulting and services to enterprises. Otherwise, the work they do doesn't make sense to offer for free.
        • danielhanchen21 hours ago
          Haha for now our primary goal is to expand the market for local AI and educate people on how to do RL, fine-tuning and running quants :)
          • WanderPanda16 hours ago
            Amazing work - people should really appreciate that the opportunity costs of your work are immense (given the hype).

            On another note: I'm a bit paranoid about quantization. People are no longer good at discerning model quality at these levels of "intelligence", and I don't think a vibe check really catches the nuances. How hard would it be to systematically evaluate the different quantizations? E.g. on the Aider benchmark that you used in the past?

            I was recently trying Qwen 3 Coder Next, and there are benchmark numbers in your article, but they seem to be for the official checkpoint, not the quantized ones. It isn't really made clear (and chatbots confuse them for benchmarks of the quantized versions, btw).

            I think systematic/automated benchmarks would really bring the whole effort to the next level - basically something like the bar chart from the Dynamic Quantization 2.0 article, but always updated with all kinds of recent models.
            • danielhanchen8 hours ago
              Thanks! Yes, we actually did think about that - it can get quite expensive sadly. Perplexity benchmarks over short context lengths with small datasets are doable, but they're not an accurate measure, unfortunately. We're currently investigating what the most efficient course of action for evaluating quants would be - will keep you posted!
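              For anyone wondering what the cheap short-context perplexity check mentioned above looks like in practice, here is a rough sketch using the transformers library. The model id and text are stand-ins, not what Unsloth actually runs; a real eval would use a held-out dataset and the quantized GGUF via llama.cpp rather than an fp16 checkpoint:

                  # Minimal perplexity sketch: score a short text with a causal LM.
                  import torch
                  from transformers import AutoModelForCausalLM, AutoTokenizer

                  model_id = "Qwen/Qwen2.5-0.5B"  # placeholder small model
                  tok = AutoTokenizer.from_pretrained(model_id)
                  model = AutoModelForCausalLM.from_pretrained(model_id)
                  model.eval()

                  text = "The quick brown fox jumps over the lazy dog. " * 50
                  ids = tok(text, return_tensors="pt").input_ids[:, :512]  # short context only

                  with torch.no_grad():
                      # labels=ids makes the model return the average cross-entropy loss
                      loss = model(ids, labels=ids).loss

                  print(f"perplexity over {ids.shape[1]} tokens: {torch.exp(loss).item():.2f}")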
            • jychang9 hours ago
              > How hard would it be to systematically evaluate the different quantizations? E.g. on the Aider benchmark that you used in the past?

              Very hard. $$$

              The benchmarks are not cheap to run. It'll cost a lot to run them for each quant of each model.
              • danielhanchen8 hours ago
                Yes, sadly very expensive :( Maybe a select few quants could happen - we're still figuring out the most economical and efficient way to benchmark!
                • illusive40807 hours ago
                  Roughly how much does it cost to run one of the popular benchmarks? Are we talking $1,000, $10,000, or $100k?
            • Zetaphor15 hours ago
              This would be amazing
        • I hope that is exactly what is happening. It benefits them, and it benefits us.
      • swyx18 hours ago
        not that unsung! we've given them our biggest workshop spot every single year we've been able to, and will keep doing so until they are tired of us: https://www.youtube.com/@aiDotEngineer/search?query=unsloth
        • danielhanchen8 hours ago
          Appreciate it immensely haha :) Never tired - always excited and pumped for this year!
      • danielhanchen21 hours ago
        Oh thank you - appreciate it :)
      • cubie1 day ago
        I'm a big fan of their work as well, good shout.
    • Tepix1 day ago
      It's insane how much traffic HF must be pushing out of the door. I routinely download models that are hundreds of gigabytes in size from them. A fantastic service to the sovereign AI community.
      • razster1 day ago
        My fear is that these large "AI" companies will lobby to have these open source options removed or banned - a growing concern. I'm not sure how else to express how much I enjoy using what HF provides; I religiously browse their site for new and exciting models to try.
        • throwaway2744821 hours ago
          They can try. I don't think they'll be able to get the toothpaste back in the tube. The data will just move out of the country.
          • seanmcdirmid5 hours ago
            Many of the models on hugging face are already Chinese. It’s kind of obvious that local AI is going to flourish more in China than the USA due to hardware constraints.
        • culi1 day ago
          ModelScope is the Chinese equivalent of Hugging Face and a good backup. All the open models are Chinese anyways
          • thot_experiment22 hours ago
            Not true! Mistral is really really good, but I agree that there isn't a single decent open model from the USA.
            • culi21 hours ago
              Mistral is cool and I wish them success but it consistently ranks extremely low on benchmarks while still being expensive. Chinese models like DeepSeek might rank almost as low as Mistral but they are significantly cheaper. And Kimi is the best of both worlds with incredible benchmark results while still being incredibly cheap.

              I know things change rapidly so I'm not counting them out quite yet, but I don't see them as a serious contender currently.
              • thot_experiment17 hours ago
                Sure, benchmarks are fake and I use Mistral over equivalently sized models most of the time because it's better in real life. It runs plenty fast for me, and I don't pay for inference.
              • BoredomIsFun10 hours ago
                > it consistently ranks extremely low on benchmarks

                As general purpose chatbots, small Mistral models are better than comparably sized Chinese models, as they have better SimpleQA scores and more general knowledge of Western culture.
                • seanmcdirmid5 hours ago
                  It’s really hard to beat qwen coder, especially for role play where the instruction following is really useful. I don’t think their corpus is lacking in western knowledge, although I wonder if Chinese users get even better results from it?
                  • BoredomIsFun4 hours ago
                    > It’s really hard to beat qwen coder, for role play

                    I am not sure if you actually tried that. Mistrals are the widely accepted go-to models for roleplay and creative writing. No Qwens are good at prose, except for their latest big Qwen 3.5.

                    > I don’t think their corpus is lacking in western knowledge,

                    It absolutely is, especially pop culture knowledge.
                    • seanmcdirmid3 hours ago
                      Instruct and coder just follow instructions so well, though. I guess I've just never been able to make Mistral work well.
                      • BoredomIsFun2 hours ago
                        Qwen3 30B A3B and that big 400+ B Coder were absolutely terrible at editing fiction. I would tell them what to change in the prose and they'd just regurgitate text with no changes.
              • Eupolemos19 hours ago
                Why are you talking price when we are talking local AI?

                That doesn't make any sense to me. Am I missing something?
                • culi13 hours ago
                  Your electricity is free?
                  • thot_experiment10 minutes ago
                    for almost the entire year, yes.
                  • seanmcdirmid5 hours ago
                    Apple silicon is crazy efficient as well as being comparable to GPUs in performance for max and ultra chips.
                  • cpburns20096 hours ago
                    If you have the hardware to run expensive models, is the cost of electricity much of a factor? According to Google, the average price in the Silicon Valley area is $0.448 per kWh. An RTX 5090 costs about $4,000 and has a peak power consumption of 1000 W. Maxing out that GPU for a whole year would cost $3,925 at that rate. That's not particularly more expensive than the hardware itself.
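                    Quick back-of-the-envelope check of those numbers (all figures are the assumptions quoted above, not measurements):

                        price_per_kwh = 0.448          # USD, quoted Silicon Valley average
                        power_kw = 1.0                 # assumed worst-case draw of the GPU
                        hours_per_year = 24 * 365

                        annual_cost = price_per_kwh * power_kw * hours_per_year
                        print(f"${annual_cost:,.0f} per year")   # ~$3,924, matching the figure above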
                    • culi2 hours ago
                      At that point it'd be cheaper to get an expensive subscription to a cloud AI product. I understand the case for local LLMs, but it seems silly to worry about pricing for cloud-based offerings while not worrying about pricing for locally run models - especially since running locally can often be more expensive.
                • dirasieb12 hours ago
                  15 missed calls from your local power company
            • CamperBob215 hours ago
              To be fair, there are plenty of models worse than OpenAI's GPT-OSS-120b. It's not a standout next to the latest releases from China, but prior to the current wave it was considered one of the stronger local models you could reasonably run.
        • toofy8 hours ago
          it's only a matter of time. we have all seen first hand how … wrong … these companies behave, almost on a regular basis.

          there's a small tinfoil-hat part of me that suspects part of their obscene investments and cornering of the hardware market is driven by a conscious attempt to stop open source local AI from taking off. they want it all: the money, the control, and to be the only source of information to us.
        • dotancohen19 hours ago
          How do you choose which models to try for which workflows? Do you have objective tests that you run, or do you just get a feel for them while using them in your daily workflow?
      • vardalab1 day ago
        Yup, I have downloaded probably a terabyte in the last week, especially with the Step 3.5 model being released and the MiniMax quants. I wonder what my ISP thinks. I hope they don't cut me off. They gave me a fast lane, they'd better let me use it, lol.
        • fc417fc8021 day ago
          Even fairly restrictive data caps are in the range of 6 TB per month. P2P at a mere 100 Mb/s works out to about 1 TiB per 24 hours.

          Hypothetically my ISP will sell me unmetered 10 Gb/s service, but I wonder if they would actually make good on their word ...
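          Rough arithmetic behind those figures (the cap and line speed are just the example numbers above):

              cap_tb = 6.0
              mbit_per_s = 100
              bytes_per_day = mbit_per_s / 8 * 1e6 * 86_400   # ~1.08e12 bytes, i.e. roughly 1 TiB
              print(f"{bytes_per_day / 1e12:.2f} TB/day, "
                    f"{bytes_per_day * 30 / 1e12:.1f} TB over 30 days vs a {cap_tb} TB cap")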
          • 3eb7988a166318 hours ago
            I have a 1.2TB cap before you start getting charged extra, so you might need to recalibrate your restrictive level.
            • fc417fc80218 hours ago
              Is that with a WISP by chance? Or in a developing country? Or are there really wired providers with such low caps in the western world in this day and age?
              • Zetaphor15 hours ago
                AT&T once told me that if I didn't pay for their TV service then my home gigabit fiber would have a 1TB cap. They had an agreement with the apartment building, so I had no other choice of provider.
                • fc417fc80214 hours ago
                  Buy our off-brand Netflix or else we'll make it so you can't watch Netflix. How is that legal?
                  • Zetaphor2 hours ago
                    The law is written by the highest bidder, and the telecom lobbyists are very generous
              • nagaiaida16 hours ago
                well, it's my wired cap a stone's throw from buildings with Google Cloud logos on the side in a major US city, so...
              • zargon16 hours ago
                Comcast.
      • Onavo1 day ago
        Bandwidth is not that expensive. The Big 3 clouds just want to milk customers via egress. Look at Hetzner or Cloudflare R2 if you want to get an idea of commodity bandwidth costs.
    • zozbot2341 day ago
      > We still need good value hardware to run Kimi/GLM in-house

      If you stream weights in from SSD storage and freely use swap to extend your KV cache, it will be really slow (multiple seconds per token!) but it will run on basically anything. And that's still really good for stuff that can be computed overnight, perhaps even by batching many requests simultaneously. It gets progressively better as you add more compute, of course.
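      As a rough illustration of the "stream weights off SSD" approach, something like this with llama-cpp-python; the path, context size and prompt are placeholders, not a recommendation:

          # Sketch: let the OS page weights in from disk on demand via mmap,
          # with no GPU offload, trading speed for the ability to run at all.
          from llama_cpp import Llama

          llm = Llama(
              model_path="/models/big-moe-Q2_K.gguf",  # placeholder path
              n_ctx=4096,
              n_gpu_layers=0,     # CPU only; raise this as you add VRAM
              use_mmap=True,      # stream weights from SSD as needed
              use_mlock=False,    # allow pages to be evicted under memory pressure
          )

          out = llm("Summarize the attached report overnight:", max_tokens=256)
          print(out["choices"][0]["text"])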
      • Aurornis1 day ago
        > it will be really slow (multiple seconds per token!)

        This is fun for proving that it can be done, but that's 100x slower than hosted models and 1000x slower than GPT-Codex-Spark.

        That's like going from a real-time conversation to e-mailing someone who only checks their inbox twice a day, if you're lucky.
        • zozbot2348 hours ago
          You'd need real rack-scale/datacenter infrastructure to properly match the hosted models that keep everything in fast VRAM at all times, and even then you only get reasonable utilization by serving requests from many users. The ~100x-slower tier is totally okay for experimentation and non-conversational use cases (including some that are more agentic!), and you'd reach ~10x (quite usable for conversation) by running something like a good homelab.
      • HPsquared1 day ago
        At a certain point the energy starts to cost more than renting some GPUs.
        • vardalab1 day ago
          Yeah, that is hard to argue with, because I just go to OpenRouter and play around with a lot of models before I decide which ones I like. But there's something special about running it locally in your basement.
          • dotancohen19 hours ago
            I'd love to hear more about this. How do you decide that you like a model? For which use cases?
        • fc417fc8021 day ago
          Aren't decent GPU boxes in excess of $5 per hour? At $0.20 per kWh (which is on the high side in the US), running a 1 kW workstation 24/7 works out to the same price as 1 hour of GPU time per day.

          The issue you'll actually run into is that most residential housing isn't wired for more than ~2 kW per room.
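          Sanity check of that comparison (both rates are the comment's assumptions, not quotes from any provider):

              gpu_rental_per_hour = 5.00      # USD, assumed cloud GPU box rate
              kwh_price = 0.20                # USD per kWh, high-side US residential
              workstation_kw = 1.0

              day_of_electricity = workstation_kw * 24 * kwh_price
              print(f"24h of the 1 kW workstation: ${day_of_electricity:.2f} "
                    f"= about {day_of_electricity / gpu_rental_per_hour:.1f}h of GPU rental")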
    • sowbug1 day ago
      Why doesn't HF support BitTorrent? I know about hf-torrent and hf_transfer, but those aren't nearly as accessible as a link in the web UI.
      • > Why doesn't HF support BitTorrent?

        Harder to track downloads then. Only when clients hit the tracker would they be able to get download stats, and forget about private repositories or the "gated" ones that Meta/Facebook does for their "open" models.

        Still, if vanity metrics weren't so important, it'd be a great option. I've even thought of creating my own torrent mirror of HF to provide as a public service, as eventually access to models will be restricted, and it would be nice to be a bit better prepared for that moment.
        • Barbing1 hour ago
          That would be a very nice service. I think folks might rely on it for a number of reasons, including that we'll want to see how biases changed over time. What got sloppier, shillier...
        • sowbug1 day ago
          I thought of the tracking and gating questions too, when I vibed up an HF torrent service a few nights ago. (Super annoying, BTW, to have to download the files just to hash the pieces, especially when webseeds exist.) Model owners could disable or gate torrents the same way they gate the models, and HF could still measure traffic by .torrent downloads and magnet clicks.

          It's a bit like any legalization question - the black market exists anyway, so a regulatory framework could bring at least some of it into the sunlight.
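          For anyone curious why torrent creation needs the raw bytes at all, piece hashing is roughly this; the piece size and file name are arbitrary examples:

              # Each fixed-size piece is hashed (SHA-1 in classic BitTorrent) and
              # the hashes go into the .torrent metadata.
              import hashlib

              PIECE_SIZE = 4 * 1024 * 1024  # 4 MiB pieces, a common choice for large files

              def piece_hashes(path: str) -> list[bytes]:
                  hashes = []
                  with open(path, "rb") as f:
                      while chunk := f.read(PIECE_SIZE):
                          hashes.append(hashlib.sha1(chunk).digest())
                  return hashes

              # hashes = piece_hashes("model-00001-of-00009.safetensors")  # example file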
          • > Model owners could disable or gate torrents the same way they gate the models, and HF could still measure traffic by .torrent downloads and magnet clicks.

            But that'll only stop a small part; anyone could share the infohash, and if you're using the DHT/magnet without .torrent files or clicks on a website, no one can count those downloads unless they too scrape the DHT for peers who report they've completed the download.
            • fc417fc8021 day ago
              > unless they too scrape the dht for peers who are reporting they've completed the download.

              Which can be falsified. Head over to your favorite tracker and sort by completed downloads to see what I mean.
            • sowbug1 day ago
              Right, but that's already happening today. That's the black-market point.
        • homarp1 day ago
          how are all the private trackers tracking ratios?
        • taminka1 day ago
          most of the traffic is probably from open weights, just seed those, host private ones as is
        • jimbob451 day ago
          Wouldn’t it still provide massive benefits if they could convince/coerce their most popular downloaded models to move to torrenting?
          • intrasight8 hours ago
            Benefit to you, but great downside to the three letter agencies that inject their goods into these models.
    • Fin_Code1 day ago
      I still don't know why they are not using torrents. It's the perfect use case.
      • heliumtera1 day ago
        How can you be the man in the middle in a truly P2P environment?
      • freedomben1 day ago
        That would shut out most people working for big corp, which is probably a huge percentage of the user base. It's dumb, but that's just the way corp IT is (no torrenting allowed).
        • zozbot2341 day ago
          It's a sensible option, even if not everyone can use it. Linux distros are routinely transferred via torrent, so why not other massive, open-licensed data?
          • freedomben1 day ago
            Oh as an option, yeah I agree it makes a ton of sense. I just would expect a very, very small percentage of people to use the torrent over the direct download. With Linux distros, the vast majority of downloads still come from standard web servers. When I download distro images I opt for torrents, but very few people do the same
            • Const-me23 hours ago
              > very small percentage of people to use the torrent over the direct download

              The BitTorrent protocol is IMO better for downloading large files. When I want to download something that exceeds a couple of GB and I see two links, direct download and BitTorrent, I always click the torrent.

              On paper, HTTP supports range requests to resume partial downloads. IME, modern web browsers have neglected to implement this properly: they won't resume after the browser is reopened or the computer is restarted. Command-line HTTP clients like wget are more reliable, but many web servers these days require session cookies or one-time query string tokens, and it's hard to pass that stuff from the browser to the command line.

              I live in Montenegro, and CDN connectivity is not great here. Only a few of them, like Steam and GOG, saturate my 300 megabit/sec download link. Others are much slower, e.g. Windows updates download at about 100 megabit/sec. The BitTorrent protocol almost always delivers the full 300 megabit/sec.
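              For reference, resuming over HTTP is just a Range header; a rough sketch with the requests library (the URL and filename are placeholders, and the server has to actually support ranges):

                  import os
                  import requests

                  url = "https://example.com/model-q4_k_m.gguf"   # placeholder
                  out = "model-q4_k_m.gguf"

                  pos = os.path.getsize(out) if os.path.exists(out) else 0
                  headers = {"Range": f"bytes={pos}-"} if pos else {}

                  with requests.get(url, headers=headers, stream=True, timeout=60) as r:
                      r.raise_for_status()
                      # 206 Partial Content means the server honored the range
                      mode = "ab" if pos and r.status_code == 206 else "wb"
                      with open(out, mode) as f:
                          for chunk in r.iter_content(chunk_size=1 << 20):
                              f.write(chunk)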
            • zrm1 day ago
              With Linux distros they typically put the web link right on the main page and have a torrent available if you go looking for it, because they want you to try their distro more than they want to save some bandwidth.

              Suppose HF did the opposite, because the bandwidth saved is greater and they're not as concerned that you might download a different model from someone else.
          • thot_experiment22 hours ago
            I have terabytes of linux isos I got via torrents, many such cases!
  • simonw1 day ago
    It's hard to overstate the impact Georgi Gerganov and llama.cpp have had on the local model space. He pretty much kicked off the revolution in March 2023 by making LLaMA work on consumer laptops.

    Here's that README from March 10th 2023: https://github.com/ggml-org/llama.cpp/blob/775328064e69db1ebd7e19ccb59d2a7fa6142470/README.md

    > The main goal is to run the model using 4-bit quantization on a MacBook. [...] This was hacked in an evening - I have no idea if it works correctly.

    Hugging Face have been a great open source steward of Transformers, and I'm optimistic the same will be true for GGML.

    I wrote a bit about this here: https://simonwillison.net/2026/Feb/20/ggmlai-joins-hugging-face/
    • ushakov1 day ago
      i am curious, why are your comments always pinned to the top?
      • carbocation1 day ago
        Because many of us think simonw has discerning taste on this topic and like to read what he has to say about it, so we upvote his comments.
        • ushakov1 day ago
          i don't doubt this. i just find it questionable that one particular poster always gets the spotlight when AI is the topic - while other conversations, in my opinion, offer more interesting angles.
          • jonas211 day ago
            Upvote the conversations that you find to be more interesting. If enough people do the same, they too will make it to the top.
            • coldtea19 hours ago
              The parent implies there might be some "boosting" involved, in which case "upvote the conversations that you find to be more interesting" won't change anything...

              Not saying this is the case, but it's what the comment implies, so "just upvote your faves" doesn't really address it.
          • colesantiago1 day ago
            Agreed. I would like to see others being promoted to the top, rather than Simon's constant shilling for backlinks to his blog every time an AI topic is on the front page.
      • simonw1 day ago
        At a guess, that's because my comment attracted more upvotes than the other top-level comments in the thread.

        I generally try to include something in a comment that isn't information already under discussion - in this case, the link to and quote from the original README.
        • ushakov1 day ago
          of course your comment attracts more upvotes - it's at the top.
          • seanhunter1 day ago
            It’s at the top because of upvotes. They don’t have an "if simonw: boost" branch in the code.
            • ushakov1 day ago
              the code is not public, so we can't know. i think it's much more nuanced, and certain users' comments might get preferential treatment based on factors other than the upvote count - which itself is hidden from us.
              • > the code is not public, so we can't know.

                I feel like you're making this statement in bad faith, rather than honestly believing the developers of the forum software here have built in a clause to pin simonw's comments to the top.
              • satvikpendem23 hours ago
                > certain users' comments might get a preferential treatment

                This does not happen. It didn't even happen when pg made the forum in the first place.
                • dcrazy23 hours ago
                  I thought dang explicitly said it does happen? It certainly happens for stories.
          • ontouchstart1 day ago
            Attention feeds attention.<p>Attention is ALL You Need.
      • magicalhippo6 hours ago
        New comments get a boost, and as such are frequently near the top just due to that. Frequent upvotes also boost. There might be other factors.

        However, these things are dynamic and change over time. As I read the discussion just now, the GP comment was the ~5th top-level comment.
      • satvikpendem23 hours ago
        They aren't pinned, people just vote on them, and more so because simonw is a recognizable name with lots of posts and comments.
      • llm_nerd1 day ago
        HN goes through phases. I remember when patio11 was the star of the hour on here. At another time it was that security guy (can't remember his name).

        And for those who think it's just organic with all of the upvotes: HN absolutely does have a +/- comment bias for users, and it does automatically feature certain people and suppress others.
        • rymc1 day ago
          the security guy you mean is probably tptacek (https://news.ycombinator.com/user?id=tptacek)
        • imiric1 day ago
          > And for those who think it's just organic with all of the upvotes, HN absolutely does have a bias for authors, and it does automatically feature certain people and suppress others.

          Exactly.

          There are configurable settings for each account, which might be set automatically or manually - I'm not sure - that control the initial position of a comment in threads and how long it stays there. There might be a reward system where comments from high-karma accounts are prioritized over others, and accounts with "strikes", e.g. direct warnings from moderators, are penalized.

          The difference in upvotes an account ultimately receives, and thus its impact on the discussion, is quite stark. The more visible a comment is, i.e. the more at the top it is, the more upvotes it can collect, which in turn keeps it at the top, and so on.

          It's safe to assume that certain accounts, such as those of YC staff, mods, or alumni, or tech celebrities like simonw, are given the highest priority.

          I've noticed this on my own account. After being warned for an IMO bullshit reason, my comments started to appear near the middle and quickly float down to the bottom, whereas before they would usually be at the top for a few minutes. The quality of what I say hasn't changed, though the account's standing, and certainly the community itself, has.

          I don't mind, nor particularly care about an arbitrary number. This is a proprietary platform run by a VC firm. It would be silly to expect that they've cracked the code of online discourse, or that their goal is to keep it balanced. The discussions here are better on average than elsewhere because of the community, although that too has been declining over the years.

          I still find it jarring that most people vote on a comment depending on whether they agree with it, instead of engaging with it intellectually, which often pushes interesting comments to the bottom. This is an unsolved problem here, as much as it is on other platforms.
          • Eisenstein4 hours ago
            There is a saying that if everyone you encounter seems to be unreasonable, maybe it isn't the other people who are being unreasonable.

            This isn't to say that social media is fair, or that people vote properly, or that any ranking system based on reader agreement is a good one. However, when negativity is consistently being communicated to you and you are seeing consistently poor results around actions you take, it is useful to examine the possibility that there is a difference between how you perceive what you are doing and how others do. In that case, spending time figuring out ways in which you are being wronged, so that you can continue in the same manner, is time wasted.
            • imiric2 hours ago
              How are you getting a persecution complex from what I said? If anything, your comment might be feeding that delusion. :)

              My point is that HN definitely has certain weights associated with accounts, which control the karma, visibility, and ultimately the discussion of certain topics.

              This problem doesn't affect only negativity or downvotes, but upvotes as well. The most upvoted comments are not necessarily of the highest quality, nor do they contribute the most to the discussion. They just happen to be the most visible, and to generally align with the feeling of the hive mind.

              I know this because some of my own comments have been at the top without being anything special, while others I think are, barely get any attention. I certainly examine my thinking whenever it strongly aligns with the hive mind, as this community does not particularly align with my values.

              I also tend to seek out comments near the bottom of threads, and have dead comments enabled, precisely to counteract this flawed system. I often find quality opinions there, so I suggest everyone do the same.

              An essential feature of a healthy and interesting discussion forum is to accommodate different viewpoints. That starts by not burying those who disagree with the majority, or boosting those who agree. AFAIK no online system has gotten this right yet.
      • throwaway20271 day ago
        Time flies, and simonw's AI feedback isn't always received favorably; sometimes he pushes it too much.
      • francispauli1 day ago
        thanks for reminding me i need to follow his blog weekly again
  • car14 hours ago
    So great to see my two favorite Open Source AI projects/companies joining forces.

    Since I don't see it mentioned here, LlamaBarn is an awesome little—but mighty—macOS menubar program, making access to llama.cpp's great web UI and downloading of tastefully curated models easy as pie. It automatically determines the available model and context sizes based on available RAM.

    https://github.com/ggml-org/LlamaBarn

    Downloaded models live in:

        ~/.llamabarn

    Apart from running on localhost, the server address and port can be set via the CLI:

        # bind to all interfaces (0.0.0.0)
        defaults write app.llamabarn.LlamaBarn exposeToNetwork -bool YES

        # or bind to a specific IP (e.g., for Tailscale)
        defaults write app.llamabarn.LlamaBarn exposeToNetwork -string "100.x.x.x"

        # disable (default)
        defaults delete app.llamabarn.LlamaBarn exposeToNetwork
    • noisy_boy6 hours ago
      GitHub is showing me a unicorn - is there a Linux equivalent? I have an old ThinkPad with a puny Nvidia GPU; can I hope to find anything useful to run on that?
  • HanClinto1 day ago
    I'm regularly amazed that HuggingFace is able to make money. It does so much good for the world.

    How solid is its business model? Is it long-term viable? Will they ever "sell out"?
    • FT had a solid piece a few weeks back: "Why AI start-up Hugging Face turned down a $500mn Nvidia deal"

      https://giftarticle.ft.com/giftarticle/actions/redeem/9b4eca55-1214-4f9e-b85e-58571d8da8d4
      • jackbravo1 day ago
        sounds very interesting, but even though it says giftarticle.ft, I got blocked by a paywall.
        • <a href="https:&#x2F;&#x2F;archive.is&#x2F;zSyUc" rel="nofollow">https:&#x2F;&#x2F;archive.is&#x2F;zSyUc</a><p>To summarize, they rejected Nvidia&#x27;s offer because they didn&#x27;t want one outsized investor who could sway decisions. And &quot;the company was also able to turn down Nvidia due to its stable finances. Hugging Face operates a &#x27;freemium&#x27; business model. Three per cent of customers, usually large corporations, pay for additional features such as more storage space and the ability to set up private repositories.&quot;
          • bee_rider1 day ago
            Freemium seems to be working pretty well for them—what’s the alternative website, after all. They seem to command their niche.
        • culi1 day ago
          find the Bypass Paywalls Clean extension. Never worry about a paywall again
    • bityard1 day ago
      Their business model is essentially the same as GitHub's: host lots of stuff for free and build a community around it, then sell the upscaled/private version to businesses. They are already profitable.
      • HanClinto1 day ago
        This is what SourceForge did too, and they still had the DevShare adware thing, didn't they?

        GitHub is great - huge fan. To some degree they "sold out" to Microsoft and things could have gone south, but thankfully Microsoft has ruled them with a very kind hand, and overall I'm extremely happy with the way they've handled it.

        I guess I always retain a bit of skepticism about such things; the long-term viability and goodness of them never feels totally sure.
    • dmezzetti1 day ago
      They have paid hosting - https://huggingface.co/enterprise - and paid accounts, plus consulting services. Seems like a pretty good foundation to me.
      • julien_c1 day ago
        and a lot of traction on paid (private in particular) storage these days; sneak peek at the new landing page: https://huggingface.co/storage
    • heliumtera1 day ago
      > Will they ever "sell out"?

      Oh no, never. Don't worry, the usual investors are very well known for fighting for user autonomy (AMD, Nvidia, Intel, IBM, Qualcomm).

      They are all very pro-consumer, and all the backers are certainly here for your enjoyment only.
      • zozbot2341 day ago
        These are all big hardware firms, which makes a lot of sense as a classic 'commoditize the complement' play. Not exactly pro-consumer, but not quite anti-consumer either!
        • 5o1ecist1 day ago
          > AMD, Nvidia, Intel, IBM, Qualcomm

          > but not quite anti-consumer either!

          All of them are public companies, which means that their default state is anti-consumer and pro-shareholder. By law they are required to do whatever they can to maximize profits. History teaches that shareholders can demand whatever they want, with the respective companies following orders, since nobody ever really has to suffer consequences, and any and all potential fines are already priced in, in advance, anyway.

          Conversely, this is why Valve is such a great company. Valve is probably one of the only few actual pro-consumer companies out there.

          Fun fact! It's rarely mentioned anywhere, but Valve is not a public company! Valve is a private company! That's why they can operate the way they do! If Valve were a public company, then greedy, crooked billionaire shareholders would have managed to get rid of Gabe a long time ago.
          • RussianCow23 hours ago
            > By law they are required to do whatever they can to maximize profits.

            I know it's a nit-pick, but I hate that this always gets brought up when it's not actually true. Public corporations face pressure from investors to maximize returns, sure, but there is no law stating that they have to maximize profits at all costs. Public companies can (and often do) act against the interest of immediate profits for some other gain. The only real leverage that investors have is the board's ability to fire executives, and that assumes they have the necessary votes to do so. As a counter-example, Mark Zuckerberg still controls the majority of voting power at Meta, so he can effectively do whatever he wants with the company without major consequence (assuming you don't consider stock price fluctuations "major").

            But I say this not to take away from your broader point, which I agree with: the short-term profit-maximizing culture is indeed the default when it comes to publicly traded corporations. It just isn't something inherent in being publicly traded, and conversely, private companies often have the same kind of culture, so that isn't a silver bullet either.
            • chucksmash8 hours ago
              It's a worthwhile point to make, because if people believe that misconception then it lets companies wash their hands of flagrantly bad behavior. "Gosh, we should really get around to changing the law that makes them act that way."
            • 5o1ecist19 hours ago
              You're perfectly right, and I don't consider it a nitpick. I really should be more precise about this instead of spreading inaccuracies. Thank you!
          • HanClinto1 day ago
            Great points.

            Valve is one of my top favorite companies right now. Love the work they're doing, and their products are amazing.

            Can hardly wait for the Steam Frame.
        • smallerize1 day ago
          heliumtera is being sarcastic.
    • I_am_tiberius1 day ago
      I once tried Hugging Face because I wanted to work through some tutorial. They wanted my credit card details during registration, as far as I remember. After a month they invoiced me some amount of money and I had no idea what it was for. To be honest, I don't understand what exactly they do and what services I was paying for, but I cancelled my account and never touched it again. For me the whole process was completely opaque.
      • shafyy1 day ago
        Their pricing seems pretty transparent: https://huggingface.co/pricing
  • 0xbadcafebee1 day ago
    > The community will continue to operate fully autonomously and make technical and architectural decisions as usual. Hugging Face is providing the project with long-term sustainable resources, improving the chances of the project to grow and thrive. The project will continue to be 100% open-source and community driven as it is now.

    I want this to be true, but business interests win out in the end. Llama.cpp is now the de-facto standard for local inference; more and more projects depend on it. If a company controls it, that means that company controls the local LLM ecosystem. And yeah, Hugging Face seems nice now... so did Google originally. If we don't all want to be locked in, we either need a llama.cpp competitor (with a universal abstraction), or it should be controlled by an independent nonprofit.
    • zozbot2341 day ago
      Llama.cpp is an open source project that anyone can fork as needed, so any "control" over it really only extends to facilitating development of certain features.
      • 0xbadcafebee22 hours ago
        In practice, nobody does this, because you then have to keep the fork up to date with upstream plus your changes, and this is an endless amount of work.
  • mnewme1 day ago
    Huggingface is the silent GOAT of the AI space, such a great community and platform
    • lairv1 day ago
      Truly amazing that they've managed to build an open and profitable platform without shady practices.
      • al_borland1 day ago
        It’s such a sad state of affairs when shady practices are so normal that finding a company without them is noteworthy.
  • jgrahamc1 day ago
    This is great news. I've been sponsoring ggml/llama.cpp/Georgi since 2023 via GitHub. Glad to see this outcome. I hope you don't mind, Georgi, but I'm going to cancel my sponsorship now that you and the code have found a home!
  • beoberha1 day ago
    Seems like a great fit - kinda surprised it didn’t happen sooner. I think we are deep in the valley of local AI, but I’d be willing to bet it breaks out in the next 2-3 years. Here’s hoping!
    • breisa1 day ago
      I mean, they already supported the project quite a bit. @ngxson (and maybe others?) from Huggingface are big contributors to llama.cpp.
  • tkp-4151 day ago
    Can anyone point me in the direction of getting a model to run locally and efficiently inside something like a Docker container on a system with not-so-strong computing power (aka a MacBook M1 with 8GB of memory)?

    Or is my only option to invest in a system with more computing power? These local models look great, especially something like https://huggingface.co/AlicanKiraz0/Cybersecurity-BaronLLM_Offensive_Security_LLM_Q6_K_GGUF for assisting in penetration testing.

    I've experimented with a variety of configurations on my local system, but in the end it turns into a makeshift heater.
    • 0xbadcafebee1 day ago
      8GB is not enough to do complex reasoning, but you can do very small, simple things. Models like Whisper, SmolVLM, Qwen2.5-0.5B, Phi-3-mini, Granite-4.0-micro, Mistral-7B, Gemma 3, and Llama-3.2 all work in very little memory. Tiny models can do a lot if you tune/train them. They also need to be used differently: system prompt preloaded with information, few-shot examples, reasoning guidance, single-task purpose, strict output guidelines. See https://github.com/acon96/home-llm for an example. For each small model, check if Unsloth has a tuned version of it; it reduces your memory footprint and makes inference faster.

      For your Mac, you can use Ollama, or MLX (Mac/ARM specific, requires a different engine and a different on-disk model format, but is faster). Ramalama may help fix bugs or ease the process with MLX. Use either Docker Desktop or Colima for the VM + Docker.

      For today's coding & reasoning models, you need a minimum of 32GB VRAM combined (graphics + system), and the more of it in GPU memory the better. Copying memory between CPU and GPU is too slow, so the model needs to "live" in GPU space. If it can't fit entirely in GPU space, your CPU has to work hard and you get a space heater. That Mac M1 will do 5-10 tokens/s with 8GB (and the CPU on full blast), or ~50 tokens/s with 32GB RAM (CPU idling). And now you know why there's a RAM shortage.
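      A minimal sketch of that "preloaded system prompt + few-shot example" pattern, using llama-cpp-python; the model path and the task are made-up examples, any small instruct-tuned GGUF works the same way:

          from llama_cpp import Llama

          llm = Llama(model_path="/models/qwen2.5-0.5b-instruct-q4_k_m.gguf", n_ctx=2048)

          resp = llm.create_chat_completion(
              messages=[
                  # single-task purpose + strict output guideline in the system prompt
                  {"role": "system", "content":
                      "You extract the room name from a smart-home command. "
                      "Reply with the room name only."},
                  # one-shot example to anchor the tiny model
                  {"role": "user", "content": "turn off the lights in the kitchen"},
                  {"role": "assistant", "content": "kitchen"},
                  {"role": "user", "content": "dim the lamp in the master bedroom"},
              ],
              max_tokens=8,
              temperature=0,
          )
          print(resp["choices"][0]["message"]["content"])  # expected: "master bedroom"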
      • BoredomIsFun4 hours ago
        > Mistral-7B

        Is hopelessly dated. There are much better, newer models around.
    • mft_1 day ago
      There's no way around needing a powerful-enough system to run the model. So you either choose a model that can fit on what you have - i.e. a small model, or a quantised slightly larger model - or you access more powerful hardware, either by buying it or renting it. (IME you don't need Docker. For an easy start just install LM Studio and have a play.)

      I picked up a second-hand 64GB M1 Max MacBook Pro a while back for not too much money, for exactly this kind of experimentation. It's sufficiently fast at running any LLM model that fits in memory, but the gap between those models and Claude is considerable. However, this might be a path for you? It can also run all manner of diffusion models, but there the performance suffers (vs. an older discrete GPU) and you're sometimes waiting many minutes for an edit or an image.
      • ryandrake1 day ago
        I wasn't able to have very satisfying success until I bit the bullet and threw a GPU at the problem. Found an actually reasonably priced A4000 Ada generation 20GB GPU on eBay and never looked back. I still can't run the insanely large models, but 20GB should hold me over for a while, and I didn't have to upgrade my 10-year-old Ivy Bridge vintage homelab.
      • sigbottle1 day ago
        Are mac kernels optimized compared to CUDA kernels? I know that the unified GPU approach is inherently slower, but I thought a ton of optimizations were at the kernel level too (CUDA itself is a moat)
        • ttoinou19 hours ago
            There's this developer called nightmedia who converts a lot of models to Apple MLX. I can run Qwen3 Coder Next at 60 tps on my M4 Max. It works.
        • liuliu23 hours ago
          Depends on what you do. If you are doing token generation, compute-dense kernel optimization is less interesting (it is memory-bound) than latency optimizations elsewhere (data transfers, kernel invocations, etc.). And for these, Mac devices actually have a leg up on CUDA, since Metal shader pipelines are optimized for latency (a.k.a. games) while CUDA pipelines were not (until the cudagraph introduction; and of course there are other issues).
        • bigyabai1 day ago
          Mac kernels are almost always compute shaders written in Metal. That's the bare minimum of acceleration, done in a non-portable proprietary graphics API. It's optimized in the loosest sense of the word, but extremely far from "optimal" relative to CUDA (or hell, even Vulkan Compute).

          Most people will not choose Metal if they're picking between the two moats. CUDA is far-and-away the better hardware architecture, not to mention better supported by the community.
    • ontouchstart1 day ago
      This is the easiest setup on a Mac. You need at least 16GB on a MacBook:

      https://github.com/ggml-org/llama.cpp/discussions/15396
    • zozbot2341 day ago
      The general rule of thumb is that you should feel free to quantize even as low as 2 bits average if this helps you run a model with more active parameters. Quantized models are not perfect at all, but they're preferable to the models with fewer, bigger parameters. With 8GB usable, you could run models with up to 32B active at heavy quantization.
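      The arithmetic behind that rule of thumb, roughly (weights only, ignoring KV cache and runtime overhead):

          def weight_gb(params_billion: float, bits_per_weight: float) -> float:
              # parameter count x average bits per weight, converted to gigabytes
              return params_billion * 1e9 * bits_per_weight / 8 / 1e9

          for bits in (16, 8, 4, 2):
              print(f"32B params @ {bits}-bit = ~{weight_gb(32, bits):.0f} GB")
          # 2-bit lands around 8 GB of weights alone, i.e. the whole machine in this thread.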
      • zargon15 hours ago
        A large model (100B+, the more the better) may be acceptable at 2-bit quantization, depending on the task. But not a small model. Especially not for technical tasks. On top of that, one still needs room for OS, software and KV cache. 8GB is just not very useful for local LLMs. That said, it can still be entertaining to try out a 4-bit 8B model for the fun of it.
        • zozbot2348 hours ago
          100B+ is the total parameter count, whereas what matters here is active parameters - very different for sparse MoE models. You're right that there's some overhead for the OS/software stack, but it's not that much. The KV cache is a good candidate for being swapped out, since it only gets a limited number of writes per emitted token.
          • zargon5 hours ago
            Total parameters, not active parameters, is the property that matters for model robustness under extreme quantization.

            Once you're swapping from disk, the performance will be quite unusable for most people. And for local inference, the KV cache is the worst possible choice to put on disk.
    • xrd1 day ago
      I think a better bet is to ask on Reddit: https://www.reddit.com/r/LocalLLM/

      Every time I ask the same thing here, people point me there.
    • yjftsjthsd-h1 day ago
      With only 8 GB of memory, you're going to be running a really small quant, and it's going to be slow and lower quality. But yes, it should be doable. In the worst case, find a tiny GGUF and run it on the CPU with llamafile.
    • HanClinto1 day ago
      Maybe check out Docker Model Runner - it's built on llama.cpp (in a good way, not like Ollama) and handles, I think, most of what you're looking for:

      https://www.docker.com/blog/run-llms-locally/

      As for how to find good models to run locally, I found this site recently and I liked the data it provides:

      https://localclaw.io/
    • Hamuko1 day ago
      I tried to run some models on my M1 Max (32 GB) Mac Studio and it was a pretty miserable experience. Slow performance and awful results.
  • mhher8 hours ago
    It's great to see the ggml team getting proper backing. Keeping inference in bare-metal C/C++ without the Python bloat is the only way local AI is going to scale efficiently. Well deserved for Georgi, Johannes, Piotr, and the rest of the team.
  • am17an9 hours ago
    One often overlooked fact is that ggml, the tensor library underneath llama.cpp, is not based on PyTorch - it's just plain C++. In a world where PyTorch dominates, it shows that alternatives are possible and worth pursuing.
  • ontouchstart19 hours ago
    I have played with both mlx-lm and llama.cpp after I bought a 24GB M5 MacBook Pro last year.

    Then I fell down the rabbit holes of uv, Rust and C++ and forgot about LLMs. Today, after I saw this announcement and answered someone's question about how to set it up, I decided to play with llama.cpp again when I got home.

    I was surprised and impressed: https://ontouchstart.github.io/rabbit-holes/llama.cpp/

    I am not going to use mlx-lm or LM Studio anymore. llama.cpp is so much fun.
  • jpcompartir7 hours ago
    This is great; it brings clear benefits to both sides and the rest of us.

    Always rooting for Hugging Face.
  • mattfrommars1 day ago
    I don't know if this warrants a separate thread here, but I have to ask…

    How can I realistically get involved in the AI development space? I feel left out with what's going on, living in a bubble where AI is forced on me by my employer (GitHub Copilot). What is a realistic road map to slowly get into AI development, whatever that means?

    My background is full stack development in Java and React, albeit development is slow.

    I've only messed with AI on the very application side: created a local chat bot for demo purposes to understand what RAG is about, and run models locally. But all of this is very superficial and I feel I'm not in the deep end with what AI is about. I get that I'm too 'late' to be on the side of building the next frontier model, so that makes no sense - what else can I do?

    I know Python; is the next step maybe "LLM from scratch"? Or do I pick up the Google machine learning crash course certificate? Or do the recently released Nvidia certification?

    I'm open to suggestions.
    • w10-119 hours ago
      The competition for root-and-branch AI models and infrastructure is intense and skilled.

      But if you're adjacent to some leaf use-case for AI, you're likely already as good as anyone else at productizing it.

      And that's who is getting hired: people who show they can deliver product-market fit.
    • fc417fc8021 day ago
      I'm not entirely clear what your goals are, but roughly: just figure out an application that holds your interest and build a model for it from scratch. Probably don't start with an LLM, though. Same as for anything else, really. If you're interested in computer graphics, then decide on a small-scale project and go build it from scratch. Etc.
    • breisa23 hours ago
      Maybe look into model finetuning/distillation. Unsloth [1] has great guides and provides everything you need to get started on Google Colab for free.

      [1] https://unsloth.ai/
    • swyx18 hours ago
      go through the workshops here: https://www.youtube.com/@aiDotEngineer/
  • Does anyone have a good comparison of HuggingFace/Candle to Burn? I am testing them concurrently, and Burn seems to have an easier-to-use API. (And it can use Candle as a backend, which is confusing.) When I ask on Reddit or Discord channels, people overwhelmingly recommend Burn, but provide no concrete reasons beyond "Candle is more for inference while Burn is training and inference". This doesn't track, as I've done training with Candle. So, if you've used both: thoughts?
    • csunoser1 day ago
      I have used both (albeit 2 years ago, and things change really fast). At the time, Candle didn't have 2D conv backprop with strides properly implemented, and getting Burn running on the libtch backend was just a lot simpler.

      I did use Candle for WASM-based inference for teaching purposes - that was reasonably painless and pretty nice.
  • jimmydoe1 day ago
    Amazing. I like the openness of both projects and am really excited for them.

    Hopefully this does not mean consolidation due to resources drying up, but a true fusion of the best of both.
  • androiddrew1 day ago
    One of the few acquisitions I do support
  • kristianp1 day ago
    > Towards seamless "single-click" integration with the transformers library

    That's interesting. I thought they would be somewhat redundant. They do similar things, after all, except training.
  • sbinnee20 hours ago
    I am happy for the ggml team. They did so much work on quantization and actually made it available to everyone. Thank you.
  • fancy_pantser1 day ago
    Was Georgi ever approached by Meta? I wonder what they offered (I'm glad they didn't succeed, just morbid curiosity).
  • stephantul1 day ago
    Georgi is such a legend. Glad to see this happening
  • segmondy1 day ago
    Great news! I have always worried about ggml and their long-term prospects, and wished for them to be rewarded for their effort.
  • sheepscreek1 day ago
    Curious about the financials behind this deal. Did they close above what they raised? What’s in it for HuggingFace?
  • karmasimida1 day ago
    Does local AI have a future? The models are getting ridiculously big, storage hardware is hoarded by a few companies for the next 2 years, and Nvidia has stopped making consumer GPUs for this year.

    It seems to me there is no chance local ML gets anywhere beyond toy status compared to the closed source models in the short term.
    • rhdunn23 hours ago
      Mistral have small variants (3B, 8B, 14B, etc.), as do others like IBM Granite and Qwen. Then there are finetunes based on these models, depending on your workflow/requirements.
      • karmasimida21 hours ago
        True, but anything remotely useful is 300B and above
        • Eupolemos19 hours ago
          That is a very broad and silly position to take, especially in this thread.<p>I use Devstral 2 and Gemini 3 daily.
    • dust4223 hours ago
      I am actually doing a good part of my dev work now with Qwen3-Coder-Next on an M1 64GB with Qwen Code CLI (a fork of Gemini CLI). I very much like:

        a) having an idea of how many tokens I use,
        b) being independent of VC-financed token machines, and
        c) being able to use it on a plane/train.

      Also, I never have to wait in a queue, nor will I be told to wait for a few hours. And I get many answers in a second.

      I don't do full vibe coding with a dozen agents, though. I read all the code it produces and guide it where necessary.

      Last but not least, at some point the VC-funded party will be over, and when that happens one had better know how to be highly efficient with AI tokens.
      • ttoinou18 hours ago
        How many tokens per second are you getting?

        What's the advantage of Qwen Code CLI over opencode?
        • dust4216 hours ago
          320 tok/s PP and 42 tok/s TG with a 4-bit quant and MLX. Llama.cpp was half that for this model, but afaik it improved a few days ago; I haven't tested it yet though.

          I have tried many tools locally and was never really happy with any of them. I finally tried Qwen Code CLI on the assumption that it would run well with a Qwen model, and it does. YMMV; I mostly do JavaScript and Python. The most important setting was the max context size - it then auto-compacts before reaching it. I run with 65536 but may raise this a bit.

          Last but not least, OpenCode is VC funded, so at some point they will have to make money, while Gemini CLI / Qwen Code CLI are not the primary products of their companies but are definitely dog-fooded.
  • snowhale21 hours ago
    good to see them get proper backing. llama.cpp is basically infrastructure at this point and relying on volunteer maintainers for something this critical was starting to feel sketchy.
  • dhruv30061 day ago
    Huggingface is actually something that's driving good in the world. Good to see this collab.
  • superkuh1 day ago
    I'm glad llama.cpp and the ggml backing are getting consistent, reliable economic support. I'm glad that ggerganov is getting rewarded for making such excellent tools.

    I am somewhat anxious about "integration with the Hugging Face transformers library" and the possible Python ecosystem entanglements that might cause. I know llama.cpp and ggml already have plenty of Python tooling, but it's not strictly required unless you're quantizing models yourself or doing other such things.
  • dmezzetti1 day ago
    This is really great news. I've been one of the strongest supporters of local AI, dedicating thousands of hours to building a framework to enable it. I'm looking forward to seeing what comes of it!
    • logicallee1 day ago
      > I've been one of the strongest supporters of local AI, dedicating thousands of hours towards building a framework to enable it.

      Sounds like you're very serious about supporting local AI. I have a query for you (and anyone else who feels like donating) about whether you'd be willing to donate some memory/bandwidth resources, p2p, to hosting an offline model:

      We have a local model we would like to distribute but don't have a good CDN.

      As a user/supporter, would you be willing to donate some spare memory/bandwidth via a simple dedicated browser tab you keep open on your desktop? The tab plays silent audio (so it isn't backgrounded and deloaded), allocates 100 MB to 1 GB of RAM, and acts as a WebRTC peer serving checksummed models.[1] (Our server then only has to check from time to time that you still have the file, by sending you some salt and a part of the file to hash; your tab proves it still has it by doing so.) This doesn't require any trust, and the receiving user also hashes the file and reports any mismatch.

      Our server federates the p2p connections, so when someone downloads, they do so from a trusted peer (one who has contributed and passed the audits) like you. We considered building a binary for people to run, but we figured people couldn't trust our binaries, or someone would target our build process; we are paranoid about trust, whereas a web model is inherently untrusted and safer. Why do all this?

      The purpose is to host an offline model: we successfully ported a 1 GB model from C++ and Python to WASM and WebGPU (you can see Claude doing so here; we livestreamed some of it[2]), but the model weights, at 1 GB, are too much for us to host.

      Please let us know whether this is something you would contribute a background tab to on your desktop. It wouldn't impact you much, and you could set how much memory to dedicate to it, but you would have the good feeling of knowing that you're helping people run a trusted offline model, if they want, from their very own browser, with no download required. The model we ported is fast enough for anyone to run on their own machine. Let me know if this is something you'd be willing to keep a tab open for.

      [1] File sharing over WebRTC works like this: https://taonexus.com/p2pfilesharing/ (you can try it in two browser tabs).

      [2] https://www.youtube.com/watch?v=tbAkySCXyp0 and some other videos
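
      A minimal sketch of the audit step described above, under stated assumptions (the server remembers the file, picks a random salt and byte range, and the peer answers with a SHA-256 of salt plus chunk); the names and sizes are illustrative, not the actual protocol:

          # Sketch of a "proof of possession" audit: hash a server-chosen salt
          # together with a server-chosen byte range of the hosted file.
          # All names and sizes are illustrative.
          import hashlib
          import os

          def audit_challenge(file_size: int, chunk_size: int = 1 << 20):
              """Server side: pick a random salt and a random chunk offset."""
              salt = os.urandom(16)
              offset = int.from_bytes(os.urandom(4), "big") % max(1, file_size - chunk_size)
              return salt, offset, chunk_size

          def audit_response(path: str, salt: bytes, offset: int, length: int) -> str:
              """Peer side: prove possession by hashing salt || requested chunk."""
              with open(path, "rb") as f:
                  f.seek(offset)
                  chunk = f.read(length)
              return hashlib.sha256(salt + chunk).hexdigest()

          # The server computes the same digest from its own copy and compares;
          # a match means the peer still holds those bytes.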
      • HanClinto1 day ago
        Hosting model weights for a project like this is, I think, something you could do by uploading to a Hugging Face Space?

        What services would you need that Hugging Face doesn't provide?
      • echoangle1 day ago
        Maybe stupid question but why not just put it in a torrent?
        • liuliu1 day ago
          It is very simple: storage/bandwidth is not expensive; residential bandwidth is. If you can convince people to install bandwidth-sharing software on their residential connections, you can then charge other people $5 to $10 per GiB of bandwidth (useful mostly for botnets: getting around DDoS protections and other reCAPTCHA tasks).
          • logicallee1 day ago
            Thank you for your suggestion. Below are only our plans/intentions; we welcome feedback on them:

            We are not going to do what you suggest. Instead, our approach is to use the RAM people aren't currently using as a fast edge cache close to their area.

            We've tried this architecture and get very low latency and high bandwidth. People would not be contributing their resources to anything they don't know about.
        • logicallee1 day ago
          Torrents require users to download and install a torrent client! In addition, we would like to retain the possibility of pushing live updates to the latest version of a sovereign fine-tuned file; torrents don't auto-update. We want to keep improving what people get.

          Finally, we would like the option of setting up market dynamics in the future: if you aren't currently using all your RAM, why not rent it out? This matches the p2p edge architecture we envision.

          In addition, our work on WebGPU would allow you to rent out your GPU to a background tab whenever you're not using it. Why have all that silicon sit idle when you could rent it out?

          You could also donate it to help fine-tune our own sovereign model.

          All of this will let us bootstrap to the point where we could be trusted with a download.

          We have a rather paranoid approach to security.
      • liuliu1 day ago
        > We have a local model we would like to distribute but don't have a good CDN.

        That is not true. I am serving models off Cloudflare R2: 1 petabyte per month in egress, and I basically pay peanuts (~$200 everything included).
        • logicallee1 day ago
          1 petabyte per month is 1 million downloads of a 1 GB file. We intend to scale to more than 1 million downloads per month, and we have a specific scaling architecture in mind. We're qualified to say this because we've ported a billion-parameter model to run in your browser, fast, on either WebGPU or WASM. (You can see us doing it live at the YouTube link in my comment above.) There is a lot of demand for that.
          • liuliu1 day ago
            Bandwidth is free on Cloudflare R2; I pay for storage (~10 TiB of different models). If you only host a 1 GiB file there, you are paying about $0.01 per month, I believe.
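
            As a rough sanity check of those numbers, assuming R2's published pricing of roughly $0.015 per GB-month for storage and $0 for egress (which may have changed), the arithmetic looks like this:

                # Back-of-the-envelope R2 cost check. The prices are assumptions
                # (~$0.015 per GB-month storage, $0 egress) and may be out of date.
                STORAGE_PRICE_PER_GB_MONTH = 0.015
                EGRESS_PRICE_PER_GB = 0.0

                def monthly_cost(stored_gb: float, egress_gb: float) -> float:
                    return stored_gb * STORAGE_PRICE_PER_GB_MONTH + egress_gb * EGRESS_PRICE_PER_GB

                # A single 1 GB model file, downloaded a million times (~1 PB egress):
                print(monthly_cost(stored_gb=1, egress_gb=1_000_000))          # ~$0.015
                # ~10 TiB of assorted models, same egress:
                print(monthly_cost(stored_gb=10 * 1024, egress_gb=1_000_000))  # ~$154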
          • dirasieb12 hours ago
            How about you work on achieving 1 million downloads per month first? Talk about putting the cart before the horse.
  • geooff_1 day ago
    As someone who's been in the "AI" space for a while, it's strange how Hugging Face went from being one of the biggest names to not being part of the discussion at all.
    • r_lee1 day ago
      I think that's because there's less local AI usage now; the big labs offer all kinds of image models, so there's no longer a rush of people self-hosting Stable Diffusion and the like.

      The space moved from consumer to enterprise pretty fast as models got bigger.
      • zozbot2341 day ago
        Today's free models are not really bigger once you account for the use of MoE (with ever-increasing sparsity, meaning a smaller fraction of active parameters) and better ways of managing the KV cache. You can do useful things with very little RAM/VRAM; it just gets slower and slower the more you try to squeeze it where it doesn't quite belong. But that's not a problem if you're willing to wait for every answer.
        • r_lee1 day ago
          Yeah, but I mean more like the old setups where you'd just load a model on a 4090 or something. Even with MoE it's a lot more complex and takes more VRAM, right? It just seems hard to justify for most hobbyists.

          But maybe I'm just slightly out of the loop.
          • zozbot2341 day ago
            With sparse MoE it's worth running the experts in system RAM since that allows you to transparently use mmap, and inactive experts can stay on disk. Of course that's also a slowdown unless you have enough RAM for the full set, but it lets you run much larger models on smaller systems.
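
            A minimal sketch of that kind of setup using the llama-cpp-python bindings, under stated assumptions (mmap the GGUF so inactive experts can be paged in from disk, and offload only part of the model to the GPU); the file path and layer count are illustrative:

                # Sketch: mmap a large sparse-MoE GGUF so inactive experts stay on
                # disk, and offload only a slice of layers to the GPU.
                # The model path and numbers are illustrative.
                from llama_cpp import Llama

                llm = Llama(
                    model_path="./models/big-moe-q4_k_m.gguf",  # hypothetical file
                    use_mmap=True,      # weights are paged in from disk on demand
                    n_gpu_layers=20,    # put some layers in VRAM; the rest stays in RAM/on disk
                    n_ctx=8192,
                )

                out = llm("Q: Why does mmap help here? A:", max_tokens=128)
                print(out["choices"][0]["text"])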
    • segmondy1 day ago
      Part of what discussion? Anyone in the AI space knows and uses HF, but the public doesn't care, and why should they? It's just an advanced site where nerds download AI stuff. HF is super valuable with their transformers library, their code, tutorials, smol models, etc., but how does that translate to investor dollars?
    • LatencyKills1 day ago
      It isn't necessary to be part of the discussion if you are truly adding value (which HF continues to do). It's nice to see a company doing what it does best without constantly driving the hype train.
  • lukebechtel23 hours ago
    Thank you Georgi <3
  • option1 day ago
    Isn't HF banned in China? Also, how are so many Chinese labs on Twitter all the time?

    In either case, huge thanks to them for keeping AI open!
    • dragonwriter1 day ago
      > Isn't HF banned in China?

      I think, for some definition of "banned", that's the case. It doesn't stop the Chinese labs from having organization accounts on HF and distributing models there. ModelScope is apparently the HF equivalent for reaching Chinese users.
    • disiplus1 day ago
      I think in the West we assume everything is blocked. But, for example, if you book an eSIM, you already get direct access to Western services when you visit, because the traffic is routed through a server elsewhere. Hong Kong is totally different: people there basically use WhatsApp and Google Maps, and everything worked when I was there.
      • But also, yes, the parent is right: HF is more or less inaccessible, and ModelScope is frequently cited as the mirror to use (although many Chinese labs seem to treat HF as the mirror and ModelScope as the "real" origin).
    • woadwarrior011 day ago
      HF is indeed banned in China. The Chinese equivalent of HF is ModelScope[1].

      [1]: https://modelscope.cn/
  • forty22 hours ago
    Looks like someone tried to type "Gmail" while drunk...
    • rkomorn22 hours ago
      Looks like Gargamel of Smurfs fame to me.
  • periodjet1 day ago
    Prediction: Amazon will end up buying HuggingFace. Screenshot this.
  • moralestapia22 hours ago
    I hope Georgi gets a big fat check out of this; he deserves it 100%.
  • ukblewis1 day ago
    Honestly, I'm shocked to be the only one here with this opinion: Hugging Face's `accelerate`, `transformers`, and `datasets` have been some of the worst open-source Python libraries I have ever had to use. They constantly break backwards compatibility, even on APIs that are not underscore/dunder named, even on minor version releases, and without documenting it; they refuse PRs that add the `@overload` type annotations they are missing, which breaks type checking against their libraries; and they generally seem to have spaghetti code. I am not excited that another team is joining them and consolidating more engineering might in the hands of these people.
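
    To illustrate the overload complaint with a generic sketch (this is illustrative code, not Hugging Face's actual API): without `@overload` stubs a type checker can only infer a broad union return type, so every caller needs casts or asserts.

        # Generic sketch of why missing @overload annotations hurt type checking.
        # `load_table` is illustrative, not an actual Hugging Face API.
        from typing import Literal, overload

        # Without these stubs, a checker sees the return type as
        # "dict[str, int] | list[int]" at every call site.
        @overload
        def load_table(name: str, as_dict: Literal[True]) -> dict[str, int]: ...
        @overload
        def load_table(name: str, as_dict: Literal[False] = ...) -> list[int]: ...

        def load_table(name: str, as_dict: bool = False) -> dict[str, int] | list[int]:
            data = [1, 2, 3]  # stand-in for whatever `name` would load
            return {name: sum(data)} if as_dict else data

        rows: list[int] = load_table("train")                        # checks cleanly with overloads
        counts: dict[str, int] = load_table("train", as_dict=True)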
    • ukblewis1 day ago
      And clearly I say all of this in my own name, not my employer's.
    • ukblewis1 day ago
      And I said all of that despite us continuing to use their platform and libraries extensively… We just don't have a choice, given their dominance of open-source ML.
  • cyanydeez22 hours ago
    Is there a local web UI that integrates with Hugging Face?

    Ollama and webui seem to be rapidly losing their charm. Ollama now includes cloud APIs, which makes no sense for a local tool.
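
    Not a web UI, but on the Hugging Face integration side, here is a minimal sketch of pulling a GGUF straight from the Hub with huggingface_hub and loading it locally with llama-cpp-python; the repo id and filename are illustrative:

        # Sketch: fetch a GGUF from the Hugging Face Hub and run it locally.
        # The repo id and filename are illustrative; use any GGUF repo you trust.
        from huggingface_hub import hf_hub_download
        from llama_cpp import Llama

        gguf_path = hf_hub_download(
            repo_id="Qwen/Qwen2.5-3B-Instruct-GGUF",     # example repo
            filename="qwen2.5-3b-instruct-q4_k_m.gguf",  # example file
        )

        llm = Llama(model_path=gguf_path, n_ctx=4096)
        print(llm("Say hello in five words.", max_tokens=32)["choices"][0]["text"])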
  • rvz1 day ago
    This acquisition is almost the same as the acquisition of Bun by Anthropic.

    Both are $0-revenue "companies", but both created software that is essential to the wider ecosystem and carries mindshare value: Bun for JavaScript and ggml for AI models.

    But of course the VCs needed an exit sooner or later. That was inevitable.
    • andsoitis1 day ago
      I believe ggml.ai was funded by angel investors, not VC.