> Large language models are something else entirely. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn't sit well.<p>Am I being overly critical here, or is this a silly position to take right after saying neural machine translation is okay? Many of Firefox's LLM features like summarization are, afaik, powered by local models (hell, even Chrome has local model options). It's odd to say neural translation is not a black box while LLMs are black boxes whose handling of our data we cannot hope to understand, especially since, viewed a bit fuzzily, LLMs are scaled-up versions of an architecture originally built for neural translation. Neural translation's behavior is unverifiable in exactly the same sense.<p>I could read some of the data talk as being about non-local models, but this very much seems like a more general criticism of LLMs as a whole applied to Firefox features. Moreover, some of the critiques, like verifiability of outputs and unlimited scope, still don't make sense in this context. Browser LLM features, except in explicitly AI browsers like Comet, have so far had some scoping to their behavior, often very narrow scopes like translation or summarization. The broadest scope I can think of is the side panels that let you ask about a web page with its context. Even then, I do not see what is inherently problematic about such scoping, since the output behavior is confined to the side panel.
To be more charitable to TFA, machine translation is a field where there aren't great alternatives and the downside is pretty limited: if something is in another language, the alternative is not reading it at all. You can translate a bunch of documents, benchmark the result, and demonstrate that the model doesn't completely change simple sentences. A related area is OCR - there are sometimes mistakes, but it's tractable to create a model and verify it's mostly correct.<p>LLMs being applied to everything under the sun feels like we're solving problems that have other solutions, and the answers aren't necessarily correct or accurate. I don't need a dubiously accurate summary of an article in English; I can read and comprehend it just fine. The downside is real and the utility is limited.
There's an older tradition of rule-based machine translation. In these methods, someone really does understand exactly what the program does, in a detailed way; it's designed like other programs, according to someone's explicit understanding. There's still active research in this field; I have a friend who's very deep into it.<p>The trouble is that statistical MT (the things that became neural net MT) started achieving better quality metrics than rule-based MT sometime around 2008 or 2010 (if I remember correctly), and the distance between them has widened since then. Rule-based systems have gotten a little better each year, while statistical systems have gotten a lot better each year, and are also now receiving correspondingly much more investment.<p>The statistical systems are especially good at using context to disambiguate linguistic ambiguities. When a word has multiple meanings, human beings guess which one is relevant from overall context (merging evidence upwards and downwards from multiple layers within the language understanding process!). Statistical MT systems seem to do something somewhat similar. Much as human beings don't even perceive how we knew which meaning was relevant (but we usually guessed the right one without even thinking about it), these systems usually also guess the right one using highly contextual evidence.<p>Linguistic example sentences like "time flies like an arrow" (my linguistics professor suggested "I can't wait for her to take me here") are formally susceptible of many different interpretations, each of which can be considered correct, but when we see or hear such sentences within a larger context, we somehow tend to know which interpretation is most relevant and so most plausible. We might never be able to replicate some of that with consciously-engineered rulesets!
This is the bitter lesson.[1]<p>I too used to think that rule-based AI would be better than statistical, Markov chain parrots, but here we are.<p>Though I still think/hope that some hybrid system of rule-based logic + LLMs will end up being the winner eventually.<p>----------------<p>[1] <a href="https://en.wikipedia.org/wiki/Bitter_lesson" rel="nofollow">https://en.wikipedia.org/wiki/Bitter_lesson</a>
> There's an older tradition of rule-based machine translation. In these methods, someone really does understand exactly what the program does, in a detailed way<p>I would softly disagree with this. Technically, we also understand exactly what a LLM does, we can analyze every instruction that is executed. Nothing is <i>hidden</i> from us. We don't always know what the <i>outcome</i> will be; but, we also don't always know what the outcome will be in rule-based models, if we make the chain of logic too deep to reliably predict. There is a difference, but it is on a spectrum. In other words, explicit code may help but it does not guarantee understanding, because nothing does and nothing can.
Yep, some domains have no hard rules at all.<p>Time flies like an arrow; fruit flies like a banana.
LLMs are great because of exactly that: they solve things that have no other solutions.<p>(And also things that have other solutions, but where "find and apply that other solution" has way more overhead than "just ask an LLM".)<p>There is no deterministic way to "summarize this research paper, then evaluate whether the findings are relevant and significant for this thing I'm doing right now", or "crawl this poorly documented codebase, tell me what this module does". And the alternative is sinking your own time in it - while you could be doing something more important or more fun.
<i>and demonstrate that the model doesn't completely change simple sentences</i><p>A nefarious model would work that way though. The owner wouldn't want it to be obvious. It'd only change the meaning of <i>some</i> sentences <i>some</i> of the time, but enough to nudge the user's understanding of the translated text toward something the model owner wants.<p>For example, imagine a model that detects the sentiment of text about Russian military action and automatically translates it to something more positive if it's especially negative, but only 20% of the time (maybe ramping up as the model ages). A user wouldn't know, and someone testing the model for accuracy might assume it's just a poor translation. If such a model became popular it could easily shift the perception of the public a few percent in the owner's preferred direction. That'd be plenty to change world politics.<p>Likewise for a model translating contracts, or laws, or anything else where the language is complex and requires knowledge of both the language <i>and</i> the domain. Imagine a Chinese model that detects someone trying to translate a contract from Chinese to English and deliberately modifies any clause about data privacy to make it more acceptable. That might be paranoia on my part, but it's entirely possible on a technical level.
Aside: Does anyone actually use summarization features? I've never once been tempted to "summarize" because when I read something I either want to read the entire thing, or look for something specific. Things I want summarized, like academic papers, already have an abstract or a synopsis.
Yeah, basically every 15 minute YouTube video, because the amount of actual content I care about is usually 1-2 sentences, and it usually ends up being the first sentence of an LLM summary of the transcript.<p>If something has actual substance I'll watch the whole thing, but in my experience that's maybe 10% of the videos I find.
I'd wager there's 95% of the benefit for 0.1% of the CPU cycles just by having a "search transcript for term" feature, since in most of those cases I've already got a clear agenda for what kind of information I'm seeking.<p>Many years ago I made a little proof-of-concept for displaying the transcript (closed captions) of a YouTube video as text; highlighting a word would navigate to that timestamp, and vice versa. Such a thing might be valuable as a browser extension, now that I think of it.
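The "search transcript for term" part really is cheap. A minimal sketch of the idea, where the `(seconds, text)` cue format and the function name are my own assumptions rather than any real API:

```python
# Given timestamped caption cues, return the timestamps (in seconds)
# where a search term appears. Case-insensitive substring match.
def find_in_transcript(cues, term):
    term = term.lower()
    return [ts for ts, text in cues if term in text.lower()]

cues = [(0, "welcome back everyone"),
        (95, "now the actual benchmark results"),
        (610, "like and subscribe")]
print(find_in_transcript(cues, "benchmark"))  # [95]
```

In an extension, each returned timestamp would just become a link that seeks the player to that point.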
YouTube already supports that natively these days, although it's kind of hidden (and knowing Google, it might very well randomly disappear one day). Open the description of the video, scroll down and click "show transcript".
Searching the transcript has the problem of missing synonyms. This can be solved by the one undeniably useful type of AI: embedding vector search. Embeddings for each line of the transcript can be calculated in advance and compared with the embeddings of the user's search. These models need only a few hundred million parameters for good results.
One of the best features of SponsorBlock is crowd sourced timestamps for the meat of the video. Skip right over 20 minutes of rambling to see the cool thing in the thumbnail.
In-browser ones? No. With external LLMs? Often. It depends on the purpose of the text.<p>If the purpose is to read someone's _writing_, then I'm going to read it, for the sheer joy of consuming the language. Nothing will take that from me.<p>If the purpose is to get some critical piece of information I need quickly, then no, I'd rather ask an AI questions about a long document than read the entire thing. Documentation, long email threads, etc. all lend themselves nicely to the size of a context window.
> If the purpose is to get some critical piece of information I need quickly, then no, I'd rather ask an AI questions about a long document than read the entire thing. Documentation, long email threads, etc. all lend themselves nicely to the size of a context window.<p>And what do you do if the LLM hallucinates? For me, skim-reading still comes out on top because my own mistakes are my own.
Yes, several times a day. I use summarization for webpages, messages, documents and YouTube videos. It’s super handy.<p>I mainly use a custom prompt using ChatGPT via the Raycast app and the Raycast browser extension.<p>That said, I don’t feel comfortable with the level of AI being shoved into browsers by their vendors.
No, because an LLM cannot summarise. It can only <i>shorten</i> which is <i>not the same</i>.<p>Citation: <a href="https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actually-does-nothing-of-the-kind/" rel="nofollow">https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actu...</a>
Wonderful article showing the uselessness of this technology, IMO.<p>> I just realised the situation is even worse. If I have 35 sentences of circumstance leading up to a single sentence of conclusion, the LLM mechanism will — simply because of how the attention mechanism works with the volume of those 35 — find the ’35’ less relevant sentences more important than the single key one. <i>So, in a case like that it will actively suppress the key sentence.</i><p>> I first tried to let ChatGPT [summarise] one of my key posts (the one about the role convictions play in humans with an addendum about human ‘wetware’). ChatGPT made a total mess of it. <i>What it said had little to do with the original post, and where it did, it said the opposite of what the post said.</i><p>> For fun, I asked Gemini as well. Gemini didn’t make a mistake and actually produced something that is a very short summary of the post, but it is extremely short so it leaves most out. So, I asked Gemini to expand a little, <i>but as soon as I did that, it fabricated something that is not in the original article (quite the opposite),</i> i.e.: “It discusses the importance of advisors having strong convictions and being able to communicate them clearly.” Nope. Not there.<p>Why, after reading something like this, should I think of this technology as useful for this task? It seems like the exact opposite. And this is what I see with most LLM reviews: the author will mention spending hours trying to get the LLM to do a thing, or "it made xyz, but it was so buggy that I found it difficult to edit afterwards, and contained lots of redundant parts", or "it incorrectly did xyz".
And every time I read stuff like that I think — wow, if a junior dev did that the number of times the AI did, they'd be fired on the spot.<p>See also something like <a href="https://boston.conman.org/2025/12/02.1" rel="nofollow">https://boston.conman.org/2025/12/02.1</a> where (IIRC) the author comes away with a semi-positive conclusion, but if you look at the list near the end, most of those things are things any person would get fired for, and are <i>not positive</i> for industrial software engineering and design. LLMs appear to do a "lot", but they still confabulate and repeat themselves incessantly, making them worthless to depend on for practical purposes unless you want to spend hours chasing your own tail over something they hallucinated. I don't see how this isn't the case. I thought we were trying to reduce the error rate in professional software development, not increase it.
I occasionally use the "summarize" button on the iPhone Mobile Safari reader view if I land on a blog entry and it's quite long and I want to get a quick idea of if it's worth reading the whole thing or not.
You mean you don't summarize those terrible articles you come across that leave you a little intrigued, hoping there's some substance, and then you read and it just repeats the same thing over and over with different wording? Anyway, I sometimes still give them the benefit of the doubt, and end up doing a summary. Often they get summarized into one or two sentences.
Maybe I should start doing that but I usually just... don't read them.
No, not really. I don't even know how to really respond to this, but maybe:<p>1. I don't read "terrible articles"; I can skim an article and figure out if it's something I'm interested in.<p>2. Or I actually do read terrible articles and have terrible taste.<p>3. Any "summarization" I get that isn't from my direct reading is evaluated by the discussion around it, though nowadays that's more and more spotty.
Yes. I use it sometimes in Firefox with my local LLM server. Sometimes I come across an article I'm curious about but don't have the time or energy to read. Then I get a TL;DR from it. I know it's not perfect, but the alternative is not reading it at all.<p>If it does interest me then I can explore it. I guess I do this once a week or so, not a lot.
Haven’t tried them but I can see these features being really useful for screen reader users.
Yes.<p>Most recently, a new ISP contract: it's low-stakes enough that I don't care much about inaccuracies (it's a bog-standard contract from a run-of-the-mill ISP), there's basically no information in there that the cloud vendor doesn't already have (if they have my billing details), but I was also curious whether anything might jump out, all while not really wanting to read the 5 pages of the thing.<p>Just went back to that: it got all of the main items (pricing, contract terms, my details) right, and also caught the annoying fine print (which I cross-referenced, just in case). It also works pretty well across languages, though that depends a fair bit on the model in question.<p>I feel like if browsers or whatever get the UX of this down, people will upload all sorts of data to those vendors that they normally shouldn't. I also think that with nuanced enough data, we'll eventually have the LLM equivalent of Excel messing up data due to some formatting BS.
Nah, because anything not worth reading is also not worth summarizing.
No, because I know how to search and skim.
Looking back with fresh eyes, I definitely think I could’ve presented what I’m trying to say better.<p>On a purely technical level, you’re right that I’m drawing a distinction that may not hold up. Maybe the better framing is: I trust constrained, single-purpose models with somewhat verifiable outputs (seeing text go in and translated text come out, and comparing for consistency) more than I trust general-purpose models with broad access to my browsing context, regardless of whether they’re both neural networks under the hood.<p>WRT the “scope”, maybe I have picked up the wrong end of the stick with what Mozilla are planning to do - but they’ve already picked all the low-hanging fruit of AI integration with the features you’ve mentioned, and the fact that they seem to want to dig their heels in further signals, at least to me, that they want deeper integration? Although who knows, the post from the new CEO may also be a litmus test to see what response it elicits, and then go from there.
I still don’t understand what you mean by “what they do with your data” - because it sounds like exfiltration fear mongering, whereas LLMs are a static series of weights. If you don’t explicitly call your “send_data_to_bad_actor” function with the user’s I/O, nothing can happen.
I disagree that it’s fear mongering. Have we not had numerous articles on HN about data exfiltration in recent memory? Why would an LLM that is in the driver’s seat of a browser (not talking about current feature status in Firefox wrt sanitised data being interacted with) not have the same pitfalls?<p>Seems as if we’d be 3 for 3 in the “agents rule of 2” in the context of the web and a browser:<p>> [A] An agent can process untrustworthy inputs<p>> [B] An agent can have access to sensitive systems or private data<p>> [C] An agent can change state or communicate externally<p><a href="https://simonwillison.net/2025/Nov/2/new-prompt-injection-papers/" rel="nofollow">https://simonwillison.net/2025/Nov/2/new-prompt-injection-pa...</a><p>Even if we weren’t talking about such malicious hypotheticals, hallucinations are a common occurrence, as are CLI agents doing what they think best, sometimes to the detriment of the data they interact with. I personally wouldn’t want my history being modified or deleted; same goes for passwords and the like.<p>It is a bit doomerist, and I doubt it’ll have such broad permissions, but it just doesn’t sit well, which I suppose is the spirit of the article and the stance Waterfox takes.
> Have we not had numerous articles on HN about data exfiltration in recent memory?<p>there’s also an article on the front page of HN right now claiming LLMs are black boxes and we don’t know how they work, which is plainly false. this point is hardly evidence of anything and equivalent to “people are saying”
I believe you are conflating multiple concepts to prove a flaky point.<p>Again, unless your agent has access to a function that exfiltrates data, it is impossible for it to do so. Literally!<p>You do not need to provide any tools to an LLM that summarizes or translates websites, manages your open tabs, etc. This can be done fully locally in a sandbox.<p>Linking to simonw does not make your argument valid. He makes some great points, but he does not assert what you are claiming at any point.<p>Please stop with this unnecessary fear mongering and make a better argument.
Thinking aloud, but couldn't someone create a website with some malicious text that, when quoted in a prompt, convinces the LLM to expose certain private data to the web page, and couldn't the webpage send that data to a third party, without the need for the LLM to do so?<p>This is probably possible to mitigate, but I fear what people more creative, motivated and technically adept could come up with.
Firefox should look like Librewolf first of all, Librewolf shouldn’t have to exist. Mozilla’s privacy stuff is marketing bullshit just like Apple. It shouldn’t be doing ANYTHING that isn’t local only unless it’s explicitly opt in or user UI action oriented. The LLM part is absurd bc the entire overton window is in the wrong place.
As a side note, I was like "Isn't WaterFox the FF fork by that wolf guy?"<p>Then I thought, "Aha! Surely <i>LibreWolf</i> is the one I'm thinking of!"<p>Turns out no, it's a third one! It's PaleMoon...
It's frankly desperate trend chasing from management that lost after starting from near total market domination, and now has no idea what to do.
> starting from near total market domination<p>That's not really accurate: Firefox peaked somewhere around 30% market share back when IE was dominant, and then Chrome took over the top spot within a few years of launching.<p>FWIW, I think there's just no good move for Mozilla. They're competing against 3 of the biggest companies in the world who can cross-subsidise browser development as a loss-leader, and can push their own browsers as the defaults on their respective platforms. The most obvious way to make money from a browser - harvesting user data - is largely unavailable to them.
I would rather Firefox release a paid browser with no AI, or at least everything opt-in, and more user control, than see them stuff unwanted features onto users.<p>I used Firefox faithfully for a long time, but it's time for someone to take it out back and put it down.<p>Also, I switched to Waterfox about a year ago and I have no complaints. The very worst thing about it is that when it updates it's very in-your-face about it, and that is such a small annoyance that it's easily negligible.<p>Throw on an extension like Chrome Mask for those few websites that "require Chrome" (as if that is an actual thing), a few privacy extensions, Ecosia search, uBlacklist (to permablock certain sites from search results), and Content Farm Terminator to get rid of those mass-produced slop sites that weasel their way into search results, and you're going to have a much better experience than almost any other setup.
The thing about translation is that even a human translator will sometimes make silly mistakes unless they know the domain really well, so LLMs are not any worse. Translation is a problem with no deterministic solution (rule-based translation was always a bad joke). Properly implemented deterministic search/information retrieval, on the other hand, works extremely well. So well it doesn't really need any replacement - except when you also have some extra dynamics on top like "filtering SEO slop", and that's not something LLMs can improve at all.
No, it is disqualifyingly clueless. The author defends one neural network, one bag of effectively-opaque floats that get blended together with WASM to produce non-deterministic outputs which are injected into the DOM (translation), then righteously crusades against other bags of floats (LLMs).<p>From this point of view, uBlock Origin is also effectively un-auditable.<p>Or your point about them maybe imagining AI as non-local proprietary models might be the only thing that makes this make sense. I think even technical people are being suckered by the marketing that "AI" === ChatGPT/Claude/Gemini style cloud-hosted proprietary models connected to chat UIs.
> Machine learning technologies like the Bergamot translation project offer real, tangible utility. Bergamot is transparent in what it does (translate text locally, period), auditable (you can inspect the model and its behavior), and has clear, limited scope, even if the internal neural network logic isn’t strictly deterministic.<p>This really weakens the point of the post. It strikes me as a: we just don't like <i>those</i> AIs. Bergamot's model's behavior is no more or less auditable or a black box than an LLM's behavior. If you really want to go dig into a Llama 7B model, you definitely can. Even Bergamot's underlying model has an option to be transformer-based: <a href="https://marian-nmt.github.io/docs/" rel="nofollow">https://marian-nmt.github.io/docs/</a><p>The premise of non-corporate AI is respectable but I don't understand the hate for LLMs. Local inference is laudable, but being close-minded about solutions is not interesting.
It's not necessarily close-minded to choose to abstain from interacting with generative text, and to choose not to use software that integrates it.<p>I could say it's equally close-minded not to sympathize with this position, or the various reasoning behind it. For me, I feel that my spoken language is affected by those I interact with, and the more exposed someone is to a bot, the more they will speak like that bot. I don't want my language to be pulled towards the average redditor, so I choose not to interact with LLMs. (I still use them for code generation, but I wouldn't if I used code for self-expression. I just refuse to have a back-and-forth conversation on any topic. It's like that family that tried raising a chimp alongside a baby: the chimp did pick up some human-like behavior, but the baby human adapted to chimp-like behavior much faster, so they abandoned the experiment.)
I’m not <i>too</i> worried about starting to write like a bot. But, I do notice that I’m sometimes blunt and demanding when I talk to a bot, and I’m worried that could leak through to my normal talking.<p>I try to be polite just to not gain bad habits. But, for example, chatGPT is extremely confident, often wrong, and very weasely about it, so it can be hard to be “nice” to it (especially knowing that under the hood it has no feelings). It can be annoying when you bounce the third idea off the thing and it confidently replies with wrong instructions.<p>Anyway, I’ve been less worried about running local models, mostly just because I’m running them CPU-only. The capacity is just so limited, they don’t enter the uncanny valley where they can become truly annoying.
It's like using your turn signal even when you know there's nobody around you. Politeness is a habit you don't want to break.
That's an interesting example to use. I only use turn signals when there are other cars around that would need the indication. I don't view a turn signal as politeness; it's a safety tool to let others know what I'm about to do.<p>I do also find that only using a turn signal when others are around is good reinforcement to always be aware of my surroundings. I feel like a jerk when I don't use one and realize there was someone in the area, just as I feel like a jerk when I realize I didn't turn off my brights for an approaching car at night. In both cases, feeling like a jerk reminds me to pay more attention while driving.
I would strongly suggest you use your turn signals, <i>always</i>, without exception. You are relying on perfect awareness of your surroundings, which isn't going to hold over a longer stretch of time, and you are obliged to signal changes in direction irrespective of whether or not you believe there are others around you. I'm saying this as a frequent cyclist who has more than once been cut off by cars that were not indicating where they were going <i>because they had not seen me</i>, and I thought they were going to go straight instead of turning into my lane or the bike path.<p>Signalling your turns is zero cost; there is no reason to optimize this.
It's a matter of approach, and I wouldn't say what I've found to work for me would work for anyone else.<p>In my experience, I'm best served by trying to reinforce awareness rather than relying on it. If I got into the habit of always using blinkers regardless of my surroundings, I would end up paying less attention while driving.<p>I rode motorcycles for years and got very much into the habit of assuming that no one on the road actually knows I'm there, whether I'm on an old parallel twin or driving a 20' long truck. I need that focus while driving, and using blinkers or my brights as motivation for paying attention works to keep me focused on the road.<p>Signaling my turns is zero cost with regards to that action. At least for me, signaling as a matter of habit comes at the cost of focus.
The point of making signaling a habit is that you don't think about it at all. It becomes an automatic action that just <i>happens</i>, without affecting your focus.<p>I have also ridden motorcycles for many years, and I am very familiar with the assumption that nobody on the road knows I exist. I still signal, all the time, every time, because it is a habit which requires no thinking. It would distract me more if I had to decide whether signalling was necessary in each case.
This is all fine and good until you accidentally kill someone with your blinkers off and then you have to wonder 'what if' the rest of your life.<p>Seriously: signal your turns and stop defending the indefensible, this is just silly.
I am a frequent pedestrian and am often frustrated by drivers not indicating, but always grateful when they do!
> I only use turn signals when there are other cars around that would need the indication.<p>That is a very bad habit and you should change it.<p>You are not only signalling to other cars. You are also signalling to other road users: motorbikes, bicycles, pedestrians.<p>Your signal is <i>more important</i> to the other road users you are <i>less likely to see</i>.<p><i>Always</i> ALWAYS indicate. Even if it's 3AM on an empty road 200 miles from the nearest human that you know of. Do it anyway. You are not doing it to other cars. You are doing it to the world in general.
Not to dog-pile, just to affirm what jacquesm is saying. Remember, what you do consciously is what you end up doing unconsciously when you're distracted.<p>Here is a hypothetical: a loved one is being hauled away in an ambulance and it is a bad scenario, and you're going to follow them. Your mind is busy with the stress, trying to keep things cool while under pressure. What hospital are they going to, again? Do you have a list of prescriptions? Are they going to make it to the hospital? You're under a mental load here.<p>The last thing you need is to ask "did I use my turn signal?" as you merge lanes. If you do it automatically, without exception, chances are good your mental muscle memory will kick in and just do it.<p>But if it isn't a learned, innate behavior, you may forget while driving and cause an accident, simply because the habit isn't there.<p>It's similar for talking to bots as well. How you treat an object, a thing seen as lesser, could become how a person treats people they view as lesser, such as wait staff. If I am unerringly polite to a machine with no feelings, I'm more likely to be just as polite to people in customer service jobs. Because it is innate:<p>Watch your thoughts, they become words; watch your words, they become actions.
> when there are other cars around that would need the indication<p>This has a failure state of "when there's a nearby car [or, more realistically, cyclist / pedestrian] of which I am not aware". Knowing myself to be fallible, I <i>always</i> use my turn signals.<p>I do take your point about turn signals being a reminder to be aware. That's good, but could also work while, you know, still using them, just in case.
You're not the only one raising that concern here - I get it and am not recommending what anyone else should do.<p>I've been driving for decades now and have plenty of examples of when I was and wasn't paying close enough attention behind the wheel. I was raising this only as an interesting different take or lesson in my own experience, not to look for approval or disagreement.
I think it makes much more sense to treat the bot like a bot and avoid humanizing it. I try to abstain from any kind of linguistic embellishments when prompting AI chat bots. So, instead of "what is the area of the circle" or "can you please tell me the area of the circle", I typically prefer "area of the circle" as the prompt. Granted, this is suboptimal given the irresponsible way it has been trained to pretend it's doing human-like communication, but I still try this style first and only go to more conversational language if required.
Sure, I am more referring to advocating for Bergamot as a type of more "pure" solution.<p>I have no opinion on not wanting to converse with a machine, that is a perfectly valid preference. I am referring more to the blog post's position where it seems to advocate against itself.
> but I don't understand the hate for LLMs.<p>It's mostly a knee-jerk reaction from having AI forced upon us from every direction, not just the ones that make sense.
You can't really dig into a model you don't control. At least by running locally, you could in theory if it is exposed enough.<p>The focused purpose, I think, gives it more of a "purpose built tool" feel over "a chatbot that might be better at some tasks than others" generic entity. There's no fake persona to interact with, just an algorithm with data in and out.<p>The latter portion is less a technical and more an emotional nuance, to be sure, but it's closer to how I prefer to interact with computers, so I guess it kinda works on me... If that were the limit of how they added AI to the browser.
Yes I agree with this, but the blog post makes a much more aggressive claim.<p>> Large language models are something else entirely. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn’t sit well.<p>Like I said, I'm all for local models for the exact reasons you mentioned. I also love the auditability. It strikes me as strange that the blog post would write off the architecture as the problem instead of the fact that it's not local.<p>The part that doesn't sit well to me is that Mozilla wants to egress data. It being an LLM I really don't care.
Exactly this. The black box in this case is a problem because it's not on my computer. It transfers the user's data to an external entity that can use this data to train its model or sell it.<p>Not everyone uses their browser just to surf social media; some people use it for creating things, logging in to walled gardens to work creatively. They do not want to send this data to an AI company to train on, to make themselves redundant.<p>Discussing the inner workings of an AI isn't helping, this is not what most people really worry about. Most people don't know how any of it works but they do notice that people get fired because the AI can do their job.
Running locally does help get less modified output, but how does it help escape the black box problem?<p>A local model will have fewer filters applied to the output, but I can still only evaluate the input/output pairs.
Your tone is kind of ridiculous.<p>It’s insane this has to be pointed out to you but here we go.<p>Hammers are the best, they can drive nails, break down walls and serve as a weapon. From now on the military will, plumber to paratrooper, use nothing but hammers because their combined experience of using hammers will enable us to make better hammers for them to do their tasks with.
No, a tank is obviously better at all those things. They should clearly be used by everyone, including the paratrooper. Never mind the extra fuel costs or weight, all that matters is that it gets the job done best.
To me it sounds like a reasonable "AI-conservative" position.<p>(It's weird how people can be so anti-anti-AI, but then when someone takes a middle position, suddenly that's wrong too.)
The local part is the important part here. If we get consumer-level hardware that can run general LLM models, then we can actually monitor locally what goes in and what goes out, and that meets the privacy needs/wants of power users.
My take is that I'm ok with anything a company wants to do with their product EXCEPT when they make it opt-out by default or impossible to opt out of.<p>Firefox could have an entire section dedicated to torturing digital puppies built into the platform and... Ok, well, that's too far, but they could have a Costco warehouse full of AI crap and I wouldn't mind at all as long as it was off by default and preferably not even downloaded to the system unless I went in and chose to download it.<p>I know respecting user preference doesn't line their pockets, but neither does chasing users down and shoving services they never asked for and explicitly do not want into their faces.
Translation AI, though, has provable behavior cases: round-tripping.<p>An ideal translation is one which round-trips to the same content, which at least implies a consistency of representation.<p>No such example or test, as far as I know, exists for any of the summary or search AIs, since they expressly lose data in processing (I suppose you could construct multiple texts with the same meanings and see if they summarize equivalently, but it's certainly far harder to prove anything).
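The round-trip idea is easy to turn into a concrete check. A toy sketch below shows the shape of such a test; the `translate` function here is a hypothetical stand-in backed by a tiny phrasebook so the example runs on its own, and a real test harness would call an actual translation model (e.g. Bergamot) in its place:

```python
# Round-trip consistency check: translate src -> dst -> src and compare.
# translate() is a stand-in dictionary lookup, NOT a real model; swap in
# a real translation API to test an actual system.

PHRASEBOOK = {
    ("en", "de"): {"the cat sleeps": "die katze schläft"},
    ("de", "en"): {"die katze schläft": "the cat sleeps"},
}

def translate(text: str, src: str, dst: str) -> str:
    """Hypothetical translator; raises KeyError for unknown phrases."""
    return PHRASEBOOK[(src, dst)][text.lower()]

def round_trips(text: str, src: str, dst: str) -> bool:
    """True if translating src -> dst -> src reproduces the original text."""
    there = translate(text, src, dst)
    back = translate(there, dst, src)
    return back.lower() == text.lower()

if __name__ == "__main__":
    print(round_trips("The cat sleeps", "en", "de"))
```

For a real model the comparison would need to be fuzzier than exact string equality (synonyms and reordering are fine translations), but the test is still mechanical and benchmarkable in a way that "is this summary faithful?" is not.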
I think the author was close to something here but messed up the landing.<p>To me the difference between something like AI translation and an LLM is that the former is a useful feature and the latter is an annoyance. I want to be able to translate text across languages in my web browser. I don't want a chat bot for my web browser. I don't want a virtual secretary - and even if I did, I wouldn't want it limited to the confines of my web browser.<p>It's not about whether there is machine learning, LLMs, or any kind of "AI" involved. It's about whether the feature is actually useful. I'm sick of AI non-features getting shoved in my face, begging for my attention.
I just want Firefox to focus on building an absolutely awesome plugin API that exposes as much power and flexibility as possible - with the best possible security sandbox and permissions model to go with it.<p>Then everyone who wants AI can have it and those that don't .... don't.
I just want a browser that lets me easily install a good adblocker on all my operating systems. I don't care about their new toolbar or literally any other feature, because I will probably just disable it immediately anyway. But the nr 1 thing I use every day on every single site I visit is an adblocker. I'm always baffled when people complain about ads on mobile or something, because I literally haven't watched ads in decades now.
I just want an adblocker and tree style vertical tabs, where the tab bar minimises when the mouse isn't over it.<p>That's literally my entire use case for using firefox.
They've been quite forceful in the past in pushing 'plugins' by integrating them and turning them on repeatedly when people turned them off.<p>Did that achieve the last CEOs goals? Presumably if it did they'll use that route again.<p>Have Google required a default 'on' for Gemini use?
>Then everyone who wants AI can have it and those that don't .... don't.<p>The current trajectory of products with integrated online features worries me, due to the fact that the average computer/phone user isn't as tech-savvy as the average HN reader, to the point where they are unable to toggle stuff they genuinely never asked for, but they begrudgingly accept it because it's... there.<p>My mother complained about AI mode on Google Chrome, and the "press tab" on the address bar, but she's old and doesn't even know how to connect to the Wi-Fi. Are we safe to assume that she belongs to the percentage of Google Chrome users who embrace AI, based on the fact that she doesn't know how to turn it off, and there's no easy way to go about it?<p>I'm willing to bet that Google's reports will assume so, and demonstrate a wide adoption of AI by Chrome users to stakeholders, which will be leveraged as proof that everyone loves it.
I just want them to fix their goddamn rendering.
This whole backlash to firefox wanting to introduce AI feels a little knee-jerky. We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into.. and if so, it would cut down on the majority of the knee-jerk complaints. I think people want AI in the browser, they just don't want it to be the big-corp hosted AI...<p>[Update]: as I posted below, sample use cases would include translation, article summarization, asking questions from a long wiki page... and maybe with some agents built in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc
They are not "wanting" to introduce AI, they already did.<p>And now we have:<p>- An extra toolbar nobody asked for at the side. And while it contains some extra features now, I'm pretty much sure they added it just to have some prominent space to add an "Open AI Chatbot" button to the UI. And it is irritating as fuck because it remembers its state per window. So if you have one window open with the sidebar open, and you close it on another, then move to the other again and open a new window, it thinks "hey, I need to show a sidebar which my user never asked for!". Also I believe it sometimes opens itself when previously closed. I don't like it at all.<p>- An "Ask an AI Chatbot" option which used to be dynamically added and caused hundreds of clicks on wrong items on the context menu (due to muscle memory), because when it got added the context menu resized. Which was also a source of a lot of irritation. Luckily it seems they finally managed to fix this after 5 releases or so.<p>Oh, and at the start of this year they experimented with their own LLM a bit in the form of Orbit, but apparently that project has been shitcanned and memoryholed, and all current efforts seem to be based on interfacing with popular cloud based AIs like ChatGPT, Claude, Copilot, Gemini and Mistral. (likely for some $$$ in return, like the search engine deal with Google)
Every time i reinstall Firefox on a new machine, the number of annoyances that I need to remove or change increases.<p>Putting back the home button, removing the tabs overview button, disabling sponsored suggestions in the toolbar, putting the search bar back, removing the new AI toolbar, disabling the "It's been a while since you've used Firefox, do you want to cleanup your profile?", disabling the long-click tab preview, disabling telemetry, etc. etc.
All your complaints can be resolved in a few seconds by using the settings to customize the browser to your liking and not downloading extensions you don't like. And tons of people asked for that sidebar, by the way.<p>We have to put this all in context. Firefox is trying to diversify their revenue away from Google search. They are trying to provide users with a modern browser. This means adding the features that people expect, like AI integration, and it's a nice bonus if the AI companies are willing to pay for that.
> All your complaints can be resolved in a few seconds by using the settings to customize the browser to your liking and not downloading extensions you dont like<p>until you can't. Because the option goes from being an entry in the GUI to something in about:config, then is removed from about:config and you have to manually add it, and then is removed completely. It's just a matter of time, but I bet that soon we'll see on Nightly that browser.ml.enable = false and company do nothing
For me, the complaint isn’t the AI itself, but the updated privacy policy that was rolled out prior to the AI features. Regardless of me using the AI features or not, I must agree to their updated privacy policy.<p>According to the privacy policy changes, they are selling data (per the legal definition of selling data) to data partners.
<a href="https://arstechnica.com/tech-policy/2025/02/firefox-deletes-promise-to-never-sell-personal-data-asks-users-not-to-panic/" rel="nofollow">https://arstechnica.com/tech-policy/2025/02/firefox-deletes-...</a>
This is an absurd take. The meaning of "selling" is extremely broad, courts have found such language to apply to transactions as simple as providing an http request in exchange for an http response. Their lawyers must have been begging them to remove that language for the liability it represents.<p>For all purposes actually relevant to privacy, the updated language is more specific and just as strong.
The courts have found that providing an http request in exchange for an http response, where both the request and response contain valuable data, is selling data? Well that's interesting, because I too consider it selling of data. I'm glad the courts and I can agree on something so simple and obvious.
If they were only selling data in such an 'innocent' way, couldn't they clearly state that, in addition to whatever legalese they're required to provide?
Pay for what? It says it's a local AI model so how will AI companies be giving Firefox revenue from this?
What says that?<p><a href="https://support.mozilla.org/en-US/kb/ai-chatbot" rel="nofollow">https://support.mozilla.org/en-US/kb/ai-chatbot</a>
This page not only prominently features cloud based AI solutions, I can't actually even see local AI as an option.
> Firefox is trying to diversify their revenue<p>Nobody wants a browser that's focused on diversifying its revenue, especially from Mozilla which pretends to be a non-profit "free software community".<p>Chrome is paid for by ads and privacy violations, and now Firefox is paid for by "AI" companies? That is a sad state of affairs.<p>Ungoogled Chromium and Waterfox are at best a temporary measure. Perhaps the EU or one of the U.S. billionaires would be willing to fund a truly free (as in libre) browser engine that serves the public interest.
>This whole backlash to firefox wanting to introduce AI feels a little knee-jerky. We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into.. and if so, if would cut down on the majority of the knee jerk complaints. I think people want AI in the browser, they just don't want it to be the big-corp hosted AI...<p>Because the phrase "AI first browser" is meaningless corpospeak - it can be anything or nothing and feels hollow. Reminiscent of all past failures of firefox.<p>I just want a good browser that respects my privacy and lets me run extensions that can hook at any point of handling page, not random experiments and random features that usually go against privacy or basically die within short time-frame.
> [Update]: as I posted below, sample use cases would include translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc<p>I don't want <i>any</i> of this built into my web browser. Period.<p>This is coming from someone who pays for a Claude Max subscription! I use AI all the time, but I don't want it unless I ask for it!!!
> We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into.. and if so, if would cut down on the majority of the knee jerk complaints<p>Personally I'd prefer if Firefox didn't ship with 20 gigs of model weights.
This 100% -- the AI features already in Firefox, for the most part, rely on local models. (Right now there is translation and tab-grouping, IIRC.)<p>Local based AI features are great and I wish they were used more often, instead of just offloading everything to cloud services with questionable privacy.
Local models are nice for keeping the initial prompt and inference off someone else's machine, but there is still the question of what the AI feature will do with data produced.<p>I don't expect a business to make or maintain a suite of local model features in a browser free to download without monetizing the feature somehow. If said monetization strategy might mean selling my data or having the local model bring in ads, for example, the value of a local model goes down significantly IMO.
If we look at the last AI features they implemented, it doesn't look like they are betting on local models anymore.
I don't <i>feel</i> like I want AI in my browser. I'm not sure what I'd do with it. Maybe translation?
yeah, translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc<p>All this would allow for a further breakdown of language barriers, and maybe the communities of various languages around the world could interact with each other much more on the same platforms/posts
If I have to fill a form for anything that matters, I'm doing it by hand. I don't even use the existing historical auto-complete stuff. It can fill stuff incorrectly. LLMs regularly get factual stuff wrong in mysterious ways when I engage with them as chat bots. It might be less effort to verify correctness than type in all the fields, but IMO there's less risk of missing or forgetting to check one of the fields.
Super charged search on page would also be nice<p>Agents (like a research agent) could also be interesting
I like translation, it's come in handy a few times, and it's neat to know it's done locally.
FWIW, Firefox already has AI-based translation using local models.
The UX changes and features remind us of Pocket and all the other low value features that came with disruptive UX changes, as other commenters have noted.<p>Meanwhile, Mozilla canned the Servo and MDN projects, which really did provide value for their user base.
I just know I've already had to chase down AI in Firefox I definitely did not ask for or activate, and I don't recall consenting to.
>I think people want AI in the browser<p>I don't. And the whole idea of Firefox's marketing is that it won't force things on me. Of course I'm frustrated. My core browser should serve pages and manage said pages. Anything else should be an option.<p>I'm beyond tired of being told my preferences, especially by people with incentives to extract money out of me.
It doesn't matter what exactly they want to do; what matters is they're wasting resources on it instead of keeping the ... browsing part ... up to date.
There is also the matter of how training data was licensed to create these models. Local or not, it’s still based on stolen content. And really what transformative use case is there to have AI in the browser - none of the ones currently available step outside gimmicks that quickly get old and don’t really add value.
I want the people who make Firefox to make decisions about Firefox based on what users have been asking for, instead of based on what the CEO of a for-profit decides, which is still not going to make them any money, just like every other plan that got pitched in the last 10 years and failed to turn their losing streak around.<p>It's not a knee-jerk reaction to "AI", it's a perfectly reasonable reaction to Mozilla yet again saying they're going to do something that the user base doesn't want, won't regain them marketshare, and that's going to take tens of thousands of dev hours away from working on all the things that would make Firefox a better browser, rather than a marginally less nonprofitable product.
While I do sympathize with the thought behind it, the general user already equates an LLM chat box with 'better browsing'. In terms of simple positioning vis-a-vis a non-technical audience, this is one integration that does make fiscal sense.. if Mozilla was a real business.<p>Now, personally, I would like to have sane defaults, where I can toggle stuff on and off, but we all know which way the wind blows in this case.
I find that hard to believe, every general/average user I have spoken to does not use AI for anything in their daily lives and have either not tried it at all or only played with it a bit a few years ago when it first came out.
The problem with integrating a chat bot is that what you are effectively doing is the same thing as adding a single bookmark, except now it's taking up extra space.
There IS no advantage here, it's unnecessary bloat.
Firefox is not for general users, which is the problem Mozilla has had for a literal decade now. There is no way to make it <i>better</i> than Chrome or Safari (because it has to be better for everyday users to switch, not just "as good" or even "way more configurable but slightly worse". It has to be appreciably better).<p>So the only user base is the power user. And then yes: sane defaults, and a way to turn things on and off. And functionality that makes power users tell their power user friends to give FF a try again. Because if you can't even do that, Firefox firmly deserves (and right now, it has) its "we don't even really rank" position in the browser market.
The way to make Firefox better is by not doing the things that are making the other browsers <i>worse</i>. Ads and privacy are an example of areas where Chrome is clearly getting worse.<p>LLM integration... is arguable. Maybe it'll make Chrome worse, maybe not. Clunky and obtrusive integration certainly will.
These comments are full of people explaining how Firefox can differentiate from chrome and safari: don't force AI on us.
I don't think a locally hosted LLM would be powerful enough for the supposed "agentic browsing" scenarios - at least if the browser is still supposed to run on average desktop PCs.
Not yet, but we’ll hopefully get there within at most a few years.
Get there by what mechanism? In the near term a good model pretty much requires a GPU, and it needs a lot of VRAM on that GPU. And the current state of the art of quantization has already gotten us most of the RAM savings it possibly could.<p>And it doesn't look like the average computer with Steam installed is going to get above 8GB VRAM for a long time, let alone the average computer in general. Even focusing on new computers it doesn't look that promising.
By M-series and AMD Strix Halo. You don't actually need a GPU: if the manufacturer knows the use case will be running transformer models, a more specialized NPU coupled with the higher memory bandwidth of on-package RAM will do.<p>This will not result in locally running SOTA-sized models, but it could result in a percentage of people running 100B-200B models, which are large enough to do some useful things.
This is probably their plan to monetize this. They will partner with an AI company to 'enhance' the browser with a paid cloud model, and the local model has no monetary incentive not to suck.
>We don't know if firefox might want to roll out their own locally hosted LLM model that then they plug into..<p><a href="https://blog.mozilla.org/wp-content/blogs.dir/278/files/2025/11/Mozilla-Summary-Portfolio-Strategy.pdf" rel="nofollow">https://blog.mozilla.org/wp-content/blogs.dir/278/files/2025...</a><p>it's the cornerstone of their strategy to invest in local, sovereign ai models in an attempt to court attention from persons / organizations wary of us tech<p>it's better to understand the concern over mozilla's announcement the following way i think:<p>- mozilla knows that their revenue from default search providers is going to dry up because ai is largely replacing manual searching<p>- mozilla (correctly) identifies that there is a potential market in eu for open, sovereign tech that is not reliant on us tech companies<p>- mozilla (incorrectly imo) believes that attaching ai to firefox is the answer for long term sustainability for mozilla<p>with this framing, mozilla has only a few options to get the revenue they're seeking according to their portfolio, and it involves either more search / ai deals with us tech companies (which they claim to want to avoid), or harvesting data and selling it like so many other companies that tossed ai onto software<p>the concerns about us tech stack domination are valid and probably there is a way to sustain mozilla by chasing this, but breaking the us tech stack dominance doesn't require another browser / ai model, there are plenty already.
they need to help unseat stuff like gdocs / office / sharepoint and offer a real alternative for the eu / other interested parties -- simply adding ai is mozilla continuing their history of fad chasing and wondering why they don't make any money, and demonstrates a lack of understanding imo about, well, modern life<p>my concern over the announcement is that mozilla doesn't seem to have learned anything from their past attempts at chasing fads and likely they will end up in an even worse position<p>firefox and other mozilla products should be streamlined as much as possible to be the best N possible with these kinds of side projects maintained as first party extensions, not as the new focus of their development, and they should invest the money they're planning to dump into their ai ambitions elsewhere, focusing on a proper open sovereign tech stack that they can then sell to eu like they've identified in their portfolio statement<p>the announcement though makes it seem like mozilla believes they can just say ai and also get some of the ridiculous ai money, and that does not bode well for firefox as a browser or mozilla's future
We're still in bubble-period hyper-polarized discourse: "shoehorn AI into absolutely everything and ram it down your throat" vs "all AI is bad and evil and destroying the world."
I don't want any AI in anything apart from the Copilot app, where the AI that I use is. I don't want it in my IDE. I don't want it in my browser. I don't want it in my messaging client. I don't want it in my email app. I want it in the app where it is, where I can choose to use it, give it what it needs, and leave it at bloody that.
I also want to have complete control over what data I provide to LLMs (at least as long as inference happens in the cloud), but I’d love to have them everywhere, not just in a chat UI (which I suspect will come to be seen as a pretty bizarre way of doing non-chat tasks on a computer).
> I think people want AI in the browser<p>Sorry but no. I don't want another human's work summarized by some tool that's incapable of reasoning. It could get the whole meaning of the text wrong. Same with real-time translation. Languages are things even humans get wrong regularly, and I don't want some biased tool to do it for me.
I don't want to have to max out my gpu to browse reddit.
Waterfox is dependent on Firefox still being developed. Mozilla are adding these features to try to stay relevant and keep or gain market share. If this fails, and Firefox goes away, Waterfox is unlikely to survive.
That's true, but as a Waterfox user, I'm not worried!<p>If firefox really completely fails, and nobody is able to continue the open source project, I'll just find a new browser. That's not a huge hassle- Waterfox does what I need in the here and now, that's my only criterion.
I switched to Waterfox about a year ago because my poor old Linux box just couldn't keep up with the latest Firefox version (especially the Snap package! It was literally unusable for me) and I am very thankful that they aren't going to be including any of the LLM crud that Mozilla has been talking up.<p>I get the utility that this stuff can have for certain types of activities, but on top of not having great hardware to run the dang things, I just don't find any of the proposed use-cases that compelling for me personally.<p>It's just nice that the totalizing self-insistence of AI tech hasn't gobbled up every corner of the tech space, even if those crevices and niches are getting smaller by the day.
This feature can be easily disabled with policies:<p><a href="https://mozilla.github.io/policy-templates/#generativeai" rel="nofollow">https://mozilla.github.io/policy-templates/#generativeai</a><p><a href="https://mozilla.github.io/policy-templates/#preferences" rel="nofollow">https://mozilla.github.io/policy-templates/#preferences</a><p><a href="https://searchfox.org/firefox-main/source/browser/app/profile/firefox.js#2209" rel="nofollow">https://searchfox.org/firefox-main/source/browser/app/profil...</a><p><a href="https://searchfox.org/firefox-main/source/modules/libpref/init/all.js#3684" rel="nofollow">https://searchfox.org/firefox-main/source/modules/libpref/in...</a>
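For example, a minimal enterprise `policies.json` along the lines of those template docs might look like the sketch below. The pref names are taken from the searchfox links above; treat the exact keys as something to verify against the linked policy-templates page and your Firefox version:

```json
{
  "policies": {
    "Preferences": {
      "browser.ml.chat.enabled": {
        "Value": false,
        "Status": "locked"
      },
      "browser.ml.enable": {
        "Value": false,
        "Status": "locked"
      }
    }
  }
}
```

On Linux this file would typically go in a `distribution/` directory next to the Firefox binary; on Windows and macOS the same policies can also be set via GPO or configuration profiles, per the policy-templates documentation.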
This is like when people defend Windows 11's nonsense by saying "you can disable or remove that stuff". Yes, you can. But you shouldn't have to, and I personally prefer to use tools which don't shove useless things into the tool because it's trendy.
The difference is that on Windows all unwanted features eventually become mandatory, with no way of switching them off. With Firefox, it never happens.
If you listen to the doomers in this thread, it will.<p>They "will" remove the option from settings, hide it in about:config, then later on remove it from there!<p>Of course none of that is true...
They already have hidden these in about:config!<p>Right click anywhere: "ask an AI chatbot", right there. Go to settings, search "AI" or "Chatbot": nothing.
It's plausible because the team working on the settings screen will be reassigned to the "AI".
How is this different from linux? People happily spend hours customizing defaults in their OS. It’s usually a point of praise for open source software.
not to mention firefox routinely blows up any policies you set during upgrades, incompatibilities, and an endless about:config that is more opaque than a hunk of room temperature lard.
Easy for whom? 99% of people are not going to (or able to) set up Firefox policies.
Even if we ignore things like "they're chasing AI fads instead of better things" and "they're adding attack surface" and so forth, and <i>just</i> focus on the disabling feature toggles thing...<p>... Mozilla has <i>re-enabled</i> AI-related toggles that people have disabled. (I've heard this from others and observed it myself.) They also keep adding new ones that aren't controlled by a master switch. They're getting pretty user-hostile.
Is it really in all 4 of those places? Just need to change it in the first two, right? I hate the new AI tab feature and wish they had a non-AI option.
> Waterfox won't include them. The browser's job is to serve you, not think for you... Waterfox will not include LLMs. Full stop. At least and most definitely not in their current form or for the foreseeable future.<p>> If AI browsers dominate and then falter, if users discover they want something simpler and more trustworthy, Waterfox will still be here, marching patiently along.<p>This is basically their train of thought: provide something different for people who truly need it. There's nothing to criticize there.<p>However, let's not forget that other browsers can remove/disable AI features just as fast as they add them. If Waterfox wants to be *more than just an alternative* (a.k.a. be a competitor), they need to discover what people <i>actually</i> need and optimize heavily on that. But this is hard to do because people don't show their true motives.<p>Maybe one day it will turn out that people really do just want an AI that "thinks for them". That would be awkward, to say the least.
A browser is a tool that allows you to browse the internet. It should be able to display HTML elements, and stuff.<p>LLMs are also a tool, but it is not necessary for web browsing. It should be installed into a browser as extension, or integrated as such, so it should be quite easily enabled, or disabled. Surely it should not be intertwined with the browser in a meaningful way imho.
Also see related statement by vivaldi: <a href="https://xcancel.com/i/status/2000874212999799198" rel="nofollow">https://xcancel.com/i/status/2000874212999799198</a>
How do you disable the telemetry in Waterfox? It looks like they get their funding because they partnered with an Ad company. Do I just need to change the default search?
Did Firefox already add AI into Tabs? Today I just got my first 'Tab Grouping' and it says "Nightly uses AI to read your Open Tabs". That's the worst way to do grouping ever... just group hierarchically based on where it opened from...
Particularly since they clearly keep this info around - if you install TreeStyleTabs or Sideberry, you'll see it <i>immediately</i> show the historical-structure of your current tabs (in-process at least, I'm not 100% sure about after kill->restore). That info has to come from somewhere.
The problem with this is integration:
no one would complain if it was an official plugin/extension,
but integrating this plugin into Firefox is a forced and unexpected decision.
Firefox's telemetry, labs/experiments, and server-dependent features
will slowly cost it market share in favor of local-only browsers that
don't have online dependencies or forced bloatware.
Like many, I switched to LibreWolf long ago.
“Even if you can disable individual AI features, the cognitive load of monitoring an opaque system that’s supposedly working on your behalf would be overwhelming.”<p>99.9% of people haven’t ever had one single thought about how their software works. I don’t think they will be overwhelmed with cognitive load. Quite the opposite.
> AI browsers are proliferating<p>Are they, though? I get bombarded by AI ads very frequently and I have yet to see anything from those "AI browsers" mentioned on the article.
I was a FF user for ages and am now making the switch to a Chrome-based browser, simply because it's faster and websites are all tested against Chrome/Safari. I see both of these issues manifest IRL on a weekly basis. Why would I want to burn CPU cycles and seconds using FF when Chromium is literally faster?
If Kagi can make a search engine that charges users, why don't we have a $1/month open-source browser whose code can be verified, but which people pay monthly to use?
I guess that wouldn't really be "open source" in the traditional sense, but that's clearly a tangent.<p>Personally, I'd <i>love</i> a paid-for, high-quality browser that serves me rather than sneakily trying to get me to look at ads.<p>I think the challenge is that a browser is an incredibly difficult and large thing to build and maintain. So there aren't many wholly new browsers in existence, and therefore not very many business models being tried out.<p>Full agreement that I'd pay for such a thing: I have a browser and a terminal open non-stop during my workday. It's an important tool, and I'd easily pay for a better offering if that were an option.
Would it be profitable without some heavy investment?<p><a href="https://kagi.com/stats" rel="nofollow">https://kagi.com/stats</a>
Paying to get a browser fork with fewer features?
At that point, just pay $1 to Mozilla for Firefox instead.
With this, people will come and then go. I mean, consider the many GNU/Linux users I know (for whom Linux means Ubuntu) whom I could ask to try out Waterfox. But about installation: can't we have a .deb? I know we can easily install from the tarball, then set up the .desktop file, adjust the icon to display properly, and what not... But can we make it a bit simpler to try?
I completely agree with the main sentiment, which is: I want the browser to be a User Agent and nothing else. I don't need a crappy, unreliable intermediary between the already perfectly fine UA and the Internet.
How is adding AI chat different from asking a search engine? I think Mozilla wants to make sure it gets a cut for sending queries to AI providers, similar to its existing revenue model where it gets a cut for sending them to Google. As with search engines, users should have a choice of which AI to use, or whether to use one at all.
On Windows, Mozilla can't even handle disabling hardware acceleration (a.k.a. the GPU) from its settings page. Sure, you can toggle the button, but it doesn't work, as verified in Task Manager. What hope is there, then, that they can be trusted to disable AI? It's a feature I'd never want enabled. When that "feature" comes out, users will be forced to find a fork without it.
I guess it's nice for non-technical people who don't know how to use `about:config`, but beyond that I don't really see the need. Hopefully the extra layer of indirection doesn't mean users will have to wait too long for security patches.
PSA (for the nth time): about:config is not a supported way of configuring Firefox, so if you tweak features with about:config, don't be surprised if those tweaks stop working without warning.
Mozilla tells you to use it, so that seems supported enough to me (example:
<a href="https://support.mozilla.org/en-US/kb/how-stop-firefox-making-automatic-connections" rel="nofollow">https://support.mozilla.org/en-US/kb/how-stop-firefox-making...</a>)<p>That said, they're admittedly terrible about keeping their documentation updated and letting users know about added/deprecated settings, and they've even been known to go in and modify settings after you've explicitly changed them from the defaults, so the PSA isn't entirely unjustified.
Ugh. Because they also say:<p>"Two other forms of advanced configuration allow even further customization: about:config preference modifications and userChrome.css or userContent.css custom style rules. However, Mozilla highly recommends that only the developers consider these customizations, as they could cause unexpected behavior or even break Firefox. Firefox is a work in progress and, to allow for continuous innovation, Mozilla cannot guarantee that future updates won’t impact these customizations."<p><a href="https://support.mozilla.org/en-US/kb/firefox-advanced-customization-and-configuration" rel="nofollow">https://support.mozilla.org/en-US/kb/firefox-advanced-custom...</a>
about:config is a cat-and-mouse game, and I don't want to reconfigure my settings every time Firefox updates. That's just user-hostile design.
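For what it's worth, one way to make about:config tweaks survive updates is a `user.js` file in the Firefox profile directory, which Firefox re-reads at every startup and applies on top of whatever an update changed. A minimal sketch, with the caveat that the specific pref names below (particularly the AI-related ones) vary between Firefox versions and are assumptions on my part:

```javascript
// user.js — place in your Firefox profile directory.
// Firefox re-applies these values over about:config at every
// startup, so they persist even if an update resets the prefs.
// Pref names are examples and may differ across versions.
user_pref("browser.ml.chat.enabled", false);    // AI chatbot sidebar
user_pref("browser.ml.enable", false);          // local ML features
user_pref("toolkit.telemetry.enabled", false);  // telemetry
```

This doesn't stop Mozilla from renaming or removing prefs (the cat-and-mouse part), but it does undo silent resets.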
Does anyone have more information on this sentence from the second paragraph?:<p>> Alphabet themselves reportedly see the writing on the wall, developing what appears to be a new browser separate from Chrome.
Presumably Google Disco, an experimental AI focused web browser. There's also a few related HN threads but not much discussion.<p><a href="https://labs.google/disco" rel="nofollow">https://labs.google/disco</a>
<a href="https://news.ycombinator.com/item?id=46240952">https://news.ycombinator.com/item?id=46240952</a>
>A browser is meant to be a user agent, more specifically, your agent on the web.<p>At this point it's more of a sandboxed runtime bordering on an OS, but okay
Waterfox just released version 6.6.6. Are we sure it is not evil?
As I read the post by MrAlex94, I noticed a remark that Chrome is good as a user agent. To me, that's terrific! Looks like I'll have to consider Chrome again.<p>Here are what I find to be reasons to scream about Mozilla:<p>Popups:<p>(a) Several times a day, my attention and concentration get interrupted by the (for me) unwelcome announcement that there is a new version I can download. A new version can have changes I don't like, and genuine bugs. Sure, I could keep a copy of my favorite version from history, but that is system-management mud wrestling and an interruption of my work.<p>(b) Now I get told several times a day that my computer and cell phone can share access to a Web page. In doing so, Mozilla covers up what that page was showing, the content I wanted it to show. No thanks. When I'm at my computer, with an AMD 8-core processor, all my files and software tools, and a 1 Gbps optical-fiber connection to the Internet, looking at a Web page, I want nothing to do with a cell phone's presentation of that Web page.<p>(c) Some URLs are a dozen lines long, and Mozilla finds ways to present such URLs with all their lines, clearly pursuing their main objective: covering up the desired content.<p>Mozilla needs to make this covering up and changing of the screen optional, or just eliminate it.<p>Want me to donate? You've mentioned as little as $10. Deal: raise the $10 by a factor of 5 AND quit covering up my content and interrupting my work, and we've got a deal.
I still can’t give them money, so what’s the point? Just like with Mozilla, they rely on sponsors and you are the product.
You <i>can</i> give Waterfox your money. Just not for the browser itself. They sell ad free search[0].<p>[0] <a href="https://search.waterfox.net/" rel="nofollow">https://search.waterfox.net/</a>
As I mentioned in a comment below (<a href="https://news.ycombinator.com/item?id=46297617">https://news.ycombinator.com/item?id=46297617</a> ), Firefox does not rely only on sponsors. There are a few ways to pay money that goes directly towards Firefox.
> I still can’t give them money, so what’s the point?<p>What do you say about the following link, then?<p>> <a href="https://www.mozillafoundation.org/en/donate/" rel="nofollow">https://www.mozillafoundation.org/en/donate/</a>
That link is for Mozilla Foundation, which is a non-profit and donations to it <i>do not</i> go to the development of Firefox. Mozilla Corporation, the for-profit entity, owns and manages Firefox. The way to support Firefox monetarily is by buying Mozilla VPN where available (this is Mullvad in the backend) and buying some Firefox merchandise (like stickers, t-shirts, etc.). I think an MDN Plus subscription also helps.
New this year? <a href="https://web.archive.org/web/20250000000000*/https://www.mozillafoundation.org/donate/" rel="nofollow">https://web.archive.org/web/20250000000000*/https://www.mozi...</a><p>I agree it's counter-evidence right now, and I <i>think</i> there has been a way to donate for a long time now (just to "mozilla", not "firefox" or setting any restrictions), but I'm not sure what the historical option has been...
Related:<p><i>Mozilla appoints new CEO Anthony Enzor-Demeo</i><p><a href="https://news.ycombinator.com/item?id=46288491">https://news.ycombinator.com/item?id=46288491</a>
I just downloaded Waterfox; it looks nice.<p>When they say "AI browsers are proliferating." and "Their lunch is being eaten by AI browsers.", what does that mean? What's an "AI browser", and are they really gaining significant market share? For what?<p>I found this (1), which suggests that several "AI browsers" exist, which is "proliferating" in a sense.<p>1) <a href="https://www.waterfox.com/blog/no-ai-here-response-to-mozilla/" rel="nofollow">https://www.waterfox.com/blog/no-ai-here-response-to-mozilla...</a>
I, for one, am dreaming of AI-assisted ad removal, content summaries, automatic bookmark classification...
[flagged]
It's not really weird that two different people say different things.
I bet there's a big overlap between users of Firefox and those who complain about humans being replaced with AI, so I don't think it's weird.
...and keep your hand up if you've ever donated to Firefox
Why don't you go ahead and share the "donate to Firefox" page?<p>Last I knew, it doesn't exist. You can donate to the Mozilla Foundation, the group that has been agitating its own users and donors for years now.<p>People who want to support the Firefox team/product and have them focus on improving things like the development tools (or whatever else) literally cannot. Mozilla doesn't make that an option.
I gave them over $500 and I sure as hell will never do that again.
"...trust from other large, imporant [sic] third parties which in turn has given Waterfox users access to protected streaming services via Widevine."<p>The black box objection disqualifies Widevine.
I do think dipping your toes into the future is worth it. If it turns out the LLM is trying to kill us by cancelling our meetings and emailing people that we're crazy, that would suck. But I don't think this is any more dangerous than giving people a browser in the first place. They already have plenty of ways to shoot themselves in the foot.
I am more sceptical of AI in the context of a browser than of its general use. I think LLMs have great utility and have really helped push things along, but it's not as if they're completely risk-free.
> If it turns out the LLM is trying to kill us by cancelling our meetings and emailing people that we're crazy that would suck.<p>It's more likely it will try to kill us by talking depressed people into suicide and providing virtual ersatz boyfriends/girlfriends to replace real human relationships, which is functionally equivalent to cyber-neutering, given that people can't have children by dating LLMs.
Birth rates may also fall for those whom LLMs have made unemployable...
Just checking, but: what if, having largely eliminated cruel natural-selection pressures like predators and starvation, we are still, by necessity or accident, presented with a subtler, less cruel filter?
I don't mind Mozilla trying to make use of AI, but I'm also glad we have actual competition still.<p>In many other areas, there are zero "no AI" options at all.