"Google Chromium CSS contains a use-after-free vulnerability that could allow a remote attacker to potentially exploit heap corruption via a crafted HTML page. This vulnerability could affect multiple web browsers that utilize Chromium, including, but not limited to, Google Chrome, Microsoft Edge, and Opera."<p>That's pretty bad! I wonder what kind of bounty went to the researcher.
> That's pretty bad! I wonder what kind of bounty went to the researcher.<p>I'd be surprised if it's above 20K$.<p>Bug bounties rewards are usually criminally low; doubly so when you consider the efforts usually involved in not only finding serious vulns, but demonstrating a reliable way to exploit them.
The bounty could be very high. Last year one bug’s reporter was rewarded $250k. https://news.ycombinator.com/item?id=44861106
I think a big part of "criminally low" is that you'll make much more money selling it on the black market than getting the bounty.
I read this often, and I guess it could be true, but those kinds of transactions would presumably go through DNMs / forums like BF and the like. Which means crypto, and full anonymity. So either the buyer trusts the seller to deliver, or the seller trusts the buyer to pay. And once you reveal the particulars of a flaw, nothing prevents the buyer from running away (this actually also occurs *regularly* on legal, genuine bug bounty programs: they'll patch the problem discreetly after reading the report but never follow up, never mind pay, with little recourse for the researcher).

Even revealing enough details, but not everything, about the flaw to convince a potential buyer would be detrimental to the seller, as the level of details required to convince would likely massively simplify the work of the buyer should they decide to try and find the flaw themselves instead of buying. And I imagine many of those potential buyers would be state actors or organized criminal groups, both of which do have researchers in house.

The way this trust issue is (mostly) solved on drug DNMs is through the platform itself acting as an escrow agent; but I suspect such a thing would not work as well with selling vulnerabilities, because the volume is much lower, for one thing (too low for reputation building), and the financial amounts are generally higher, for another.

The real money to be made as a criminal alternative, I think, would be to exploit the flaw yourself on real-life targets, for example to drop ransomware payloads. These days ransomware groups even offer franchises: they'll take, say, a 15% cut of the ransom and provide assistance with laundering, exploiting the target, etc., and claim your infection in the name of their group.
I don't think you know anything about how these industries work and should probably read some of the published books about them, like "This Is How They Tell Me The World Ends", instead of speculating in a way that will mislead people. Most purchasers of browser exploits are nation-state groups ("gray market") who are heavily incentivized not to screw the seller and would just wire some money directly, not black market sales.
> Even revealing enough details, but not everything, about the flaw to convince a potential buyer would be detrimental to the seller, as the level of details required to convince would likely massively simplify the work of the buyer should they decide to try and find the flaw themselves instead of buying.<p>Is conning a seller really worth it for a potential buyer? Details will help an expert find the flaw, but it still takes lots of work, and there is the risk of not finding it (and the seller will be careful next time).<p>> And I imagine much of those potential buyers would be state actors or organized criminal groups, both of which do have researchers in house.<p>They also have the money to just buy an exploit.<p>> The real money to be made as a criminal alternative, I think, would be to exploit the flaw yourself on real life targets. For example to drop ransomware payloads; these days ransomware groups even offer franchises - they'll take, say, 15% of the ransom cut and provide assistance with laundering/exploiting the target/etc; and claim your infection in the name of their group.<p>I'd imagine the skills needed to get paid from ransomware victims without getting caught to be <i>very</i> different from the skills needed to find a vulnerability.
I am far from the halls of corporate decision making, but I really don't understand why bug bounties at trillion dollar companies are so low.
Because it's nicer to get $10k legally + public credit than to get $100k while risking arrest + prison time, getting scammed, or selling your exploit to someone that uses it to ransom a children's hospital?
Is it in fact illegal to sell a zero day exploit of an open source application or library to whoever I want?
Depends. Within the US, there are data export laws that could make the "whoever" part illegal. There are also conspiracy to commit a crime laws that could imply liability. There are also laws that could make *performing/demonstrating* certain exploits illegal, even if divulging it isn't. That could result in some legal gray area. IANAL but have worked in this domain. Obviously different jurisdictions may handle such issues differently from one another.
Thanks, great answer. I was just thinking from a simple market value POV.
What about $500K selling it to governments?
> but demonstrating a reliable way to exploit them<p>Is this a requirement for most bug bounty programs? Particularly the “reliable” bit?
So basically Firefox is not affected?
The listed browsers are basically skins on top of the same Chromium base.

It’s why Firefox and Safari are so important, despite HN’s wish they’d go away.
HN doesn't want Firefox to go away. HN wants Firefox to be better, more privacy/security focused, and to stop trying to copy Chrome out of the misguided hope that being a poor imitation will somehow make it more popular.

Sadly, Mozilla is now an adtech company (https://www.adexchanger.com/privacy/mozilla-acquires-anonym-a-privacy-tech-startup-founded-by-two-top-former-meta-execs/) and by default Firefox now collects your data to sell to advertisers. We can expect less and less privacy for Firefox users, as Mozilla is now fully committed to trying to profit from the sale of Firefox users' personal data to advertisers.
As a 25-year Firefox user, this is spot on. I held out for 5 years hoping they would figure something out, but all they did was release weird stuff like VPNs and half-baked services with a layer of "privacy" nail polish.

Brave is an example of a company doing some of the same things, but actually succeeding, it appears. They have some kind of VPN thing, but also have Tor tabs for some other use cases.

They have some kind of integration with crypto wallets I have used a few times, but I'm sure Firefox has a reason they can't do that, or would mess it up.

You can only watch Mozilla make so many mistakes while you suffer a worse Internet experience. The sad part is that we are paying the price now. All of the companies that can benefit from the Chrome lock-in are doing so. The web extensions are neutered - and more is coming - and the reasons are exactly what you would expect: more ads, and weird user-hostile features like "you must keep this window in the foreground" that attempt to extract a "premium" experience from basic usage.

Mozilla failed, and now the best we have is Brave. Soon the fingerprinting will be good enough that Firefox will be akin to running a Tor browser, with a CAPTCHA verification for every page load.
HN wants Firefox, but with better stewardship and fewer misdirected funds.

Mozilla - wrongly - believes that the majority of FF users care about Mozilla's hobby projects rather than about their browser.

That's why - as far as I know - to this day it is impossible to directly fund Firefox. They'd rather take money from Google than focus on the one thing that matters.
Particularly weird impulse for technically inclined people…

Although I must admit to the guilty pleasure of gleefully using Chromium-only features in internal apps where users are guaranteed to run Edge.
Firefox is safe from this because their CSS handling was the first thing they rewrote in Rust.
Firefox and Safari are fine in this case, yeah.
It's pretty hard to have an accidental use-after-free in the Firefox CSS engine because it is mostly safe Rust. It's possible, but very unlikely.
That came to my mind as well. CSS was one of the earliest major applications of Rust in Firefox. I believe that work was when the "Fearless Concurrency" slogan was popularized.
Firefox and Safari developers dared the Chromium team to implement :has() and Houdini, and this is the result!

/s
Presumably this affects all Electron apps, which embed Chromium too? Don't they pin the Chromium version?
"Actually, you forgot Brave."
Yeah, but let's keep downplaying use-after-free as something not worth eliminating in 21st-century systems languages.
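For reference, the bug class itself is trivial to write in C++. A minimal sketch (purely illustrative, nothing like the actual Chromium code):

```cpp
#include <iostream>
#include <string>

int main() {
    // Two raw pointers end up referring to the same heap allocation.
    std::string* style = new std::string("div:hover { color: red; }");
    std::string* alias = style;

    delete style;  // the allocation is returned to the heap

    // Use-after-free: `alias` now dangles. The allocator may already
    // have handed this memory to something else, so this read is
    // undefined behavior -- and with attacker-controlled heap layout,
    // an exploitation primitive rather than just a crash.
    std::cout << *alias << '\n';
}
```

Nothing here warns at compile time, and in a codebase the size of Chromium the free and the later use are typically separated by many layers of callbacks and object lifetimes, which is exactly why the class keeps recurring.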
"Use after free in CSS" is a funny description to see.
I don't quite understand the vulnerability: when exploited, you can get information about the page from which the exploit code is running. Without a sandbox escape or XSS, that seems almost completely harmless?

This is the "impact" section on https://github.com/huseyinstif/CVE-2026-2441-PoC:

- Arbitrary code execution within the renderer process sandbox
- Information disclosure — leak V8 heap pointers (ASLR bypass), read renderer memory contents
- Credential theft — read document.cookie, localStorage, sessionStorage, form input values
- Session hijacking — steal session tokens, exfiltrate via fetch() / WebSocket / sendBeacon()
- DOM manipulation — inject phishing forms, modify page content
- Keylogging — capture all keystrokes via addEventListener('keydown')
Browser exploits are almost always two steps: you exploit a renderer bug in order to get arbitrary code execution inside a sandboxed process, and then you use a second sandbox-escape exploit in order to gain arbitrary code execution in the non-sandboxed broker process. The first item of that (almost definitely AI-generated) summary is the bad part, and means that this is one half of a full browser compromise chain. The fact that you still need a sandbox escape doesn't mean that it is harmless, especially since if it's being exploited in the wild, that means whoever is using it probably *does* also have a sandbox escape they are pairing with it.
Maybe Chromium should also rewrite their rendering engine in Rust ;p
The fact that these still show up is pretty wild to me. Don't we have a bunch of tools that should create memory-safish binaries by applying the same validation checks that memory-safe languages get for free purely from their design?

I get that CSS has changed a lot over the years, with variables, scopes, and things adopted from Less/Sass/Coffee, but people use NoScript because JavaScript is risky; what if CSS can be just as risky... time to also have NoStyle?

Honestly, pretty excited for the full report, since it's either stupid as hell or a multi-step attack chain.
> Don't we have a bunch of tools that should create memory-safish binaries by applying the same validation checks that memory-safe languages get for free purely from their design?<p>No, we don't. All of the ones we have are heavily leveraged in Chromium or were outright developed at Google for similar projects. 10s of billions are spent to try to get Chromium to <i>not</i> have these vulnerabilities, using those tools. And here we are.<p>I'll elaborate a bit. Things like sanitizers largely rely on test coverage. Google spends a lot of money on things like fuzzing, but coverage is still a critical requirement. For a massive codebase, gettign proper coverage is obviously really tricky. We'll have to learn more about this vulnerability but you can see how even just that limitation alone is sufficient to explain gaps.
I heard they once created an entire language that would replace C++ in all their projects. Obviously they never rewrote Chrome in Go.

> 10s of billions are spent to try to get Chromium to not have these vulnerabilities, using those tools. And here we are.

Shouldn't pages run in isolated and sandboxed processes anyway? If that exploit gets you anywhere, it would be a failure of multiple layers.
They do run in a sandbox, and this exploit gives the attacker RCE inside the sandbox. It is not, in and of itself, a sandbox escape.

However, if you have arbitrary code execution, then you can groom the heap with malloc/new to create the layout for a heap overflow -> ret2libc or something similar.
> Things like sanitizers largely rely on test coverage.<p>And not in a trivial “this line is traversed” way, you need to actually trigger the error condition at runtime for a sanitizer to see anything. Which is why I always shake my head at claims that go has “amazing thread safety” because it has the race detector (aka tsan). That’s the opposite of thread safety. It is, if anything, an admission to a lack of it.
I'd love to see what the PoC code looks like, of course after the patch has been rolled out for a few weeks.
This is insane! What other zero-days are out there and being used?

Also, this seems Chromium-only, so it doesn't impact Firefox?
This doesn't affect the many browsers based on Chromium?
It does; it's just that the blog is for Chrome, so it doesn't mention other browsers.
Why on earth would you even assume something like this?

Honestly curious: do you think "based on Chrome" means they forked the engine and not just "applied some UI skin"?
"This vulnerability could affect multiple web browsers that utilize Chromium, including, but not limited to, Google Chrome, Microsoft Edge, and Opera"
Use after free... ahh, the irony.
DevTools is seemingly partially broken in this version: if I have DevTools open on a reasonably dynamic web app, Chrome will crash within a minute or two.
Isn't this a wrongly editorialized title? "Reported by Shaheen Fazim on 2026-02-11", so more like a 7-day.
It refers to how many days the software is available for, with zero implying it is not yet out, so you couldn't have installed a new version, and that's what makes it a risky bug.

The term has long been watered down to mean any vulnerability (since it was always a zero-day at some point before the patch release, I guess, is those people's logic? idk). Fear inflation and shoehorning seem to happen to any type of scary/scarier/scariest attack term. Might be easiest not to put too much thought into media headlines containing 0-day, hacker, crypto, AI, etc. Recently saw non-remote "RCEs" and supply-chain attacks not being about anyone's supply chain copied happily onto HN.

Edit: fwiw, I'm not the downvoter.
Its original meaning was days since software release, without any security connotation attached. It came from the warez scene, where groups competed to crack software and make it available to the scene earlier and earlier. A week after general release, three days, same day. The ultimate was 0-day software: software which was not yet available to the general public.

In a security context, it has come to mean days since a mitigation was released. Prior to disclosure or mitigation, all vulnerabilities are "0-day", which may be the case for weeks, months, or years.

It's not really an inflation of the term, just a shifting of context: "days since the software was released" -> "days since a mitigation for a given vulnerability was released".
I think the implication in this specific context is that malicious people were exploiting the vuln in the wild prior to the fix being released
I wonder if this was found with LLM assistance; if yes, with which one, and is it a one-off or does it mark the start of a new era (I assume it does)?