Dirty Frag: Universal Linux LPE

816 points
6 days ago
by flipped

Comments


firer

This is very similar in root cause and exploitation to Copy Fail.

Which illustrates pretty well something that's lost when relying heavily on LLMs to do work for you: exploration.

I find that doing vulnerability research using AI really hinders my creativity. When your workflow consists of asking questions and getting answers immediately, you don't get to see what's nearby. It's like a genie - you get exactly what you asked for and nothing more.

The researcher who discovered Copy Fail relied heavily on AI after noticing something fishy. If he had had to manually wade through lots of code by himself, he would have had many more chances to spot these twin bugs.

At the same time, I'm pretty sure that with slightly less directed prompting, a frontier LLM would have found these bugs for him too.

It's a very unusual case of negative synergy, where working together hurt performance.

6 days ago

eqvinox

No, unless I'm misreading it, it's the *same* root cause: high 32 bits of the extended sequence number (ESN) in IPsec == the authencesn module/cipher mode.

The wrong thing got fixed for copy.fail, because people jumped to blame AF_ALG.

[ed.: yes it's the same authencesn issue. https://github.com/V4bel/dirtyfrag/blob/892d9a31d391b7f0fccb... it doesn't say authencesn in the code, only in a comment, but nonetheless, same issue.]

[ed.2: the RxRPC issue is separate, this is about the ESP one]

6 days ago

firer

There are two vulnerabilities here.

The RxRPC one is definitely a different root cause (although caused by a very similar mistake).

For the ESP one it's a bit harder to tell. I don't think the wrong thing was fixed, just that there was a very similar bug in almost the same spot. Could be wrong about that though.

6 days ago

eqvinox

(you probably wrote this while I was editing my post.)

It's absolutely the same issue in authencesn/ESP. There's another one in RxRPC that is AIUI completely unrelated.

6 days ago

pepa65

But if dirtyfrag is mitigated, copyfail is still active...

5 days ago

timcobb

> When your workflow consists of asking questions and getting answers immediately, you don't get to see what's nearby.

Very much aligns with my experience. For me this is the most unsatisfying thing about AI-based workflows in general, they miss stuff humans would never miss.

All the time I wonder: what am I missing that's right nearby? It's remarkable how many times I have to ask Claude Code to fully ingest something before it actually puts it into context. It always tries to laser through to the target it's looking for, which is often not what you want it to look for, or at least not all of it. Getting these models to open up their field of vision is tough.

6 days ago

VladVladikoff

Actually lately I’ve been feeling the other way around with it. The LLM catches things I would have overlooked. I ask for a new feature in a certain file, and the LLM suggests fixing a tangentially related file to accommodate the new feature without breaking something else. Maybe this is just the crap legacy codebase I’m working with and how tangled up everything is, but I definitely have found several times now that it caught things I would have missed.

5 days ago

timcobb

> The LLM catches things I would have overlooked. I ask for a new feature in a certain file, and the LLM suggests fixing a tangentially related file to accommodate the new feature without breaking something else.

What are you using? Do you think this behavior is in response to prompting? My goal at times is to "rabbit hole" the LLM to get it to go down rabbit holes and find bigger and bigger picture issues until it homes in on something fundamentally broken that could have big impact if fixed. But it's not trivial to push the agent in that direction for me.

5 days ago

VladVladikoff

I sort of switch around: Claude, sometimes Codex. Probably more Claude than Codex. If Claude's down, then Codex, or Gemini.

5 days ago

ulrikrasmussen

Do you think this is inherent or an artifact of prompting? Curiosity and side quests lead to higher token usage and a longer time to finish, so I could understand why current harnesses and system prompts would not encourage that sort of thing.

But what if a coding agent were prompted to be more curious during development? Like a human developer, it could make mental notes of alternatives to try out and chase suspicious-looking code that may seem unrelated to the task at hand. It could even spawn rabbit-hole agents in parallel.

Taking a step back, this probably highlights a major hazard of the increased usage of LLMs for coding, which is that everyone's style of work is going to converge, because most code will be written by the 2-3 most popular models using the same system prompts.

6 days ago

timcobb

> Do you think this is inherent or an artifact of prompting?

Not sure! I mean, look at this sibling comment for example: https://news.ycombinator.com/item?id=48062797. Not my experience, but apparently others have this experience.

> But what if a coding agent was prompted to be more curious during development?

I've tried using the language of curiosity. My qualitative take was that it did have a positive impact, but not much. And I can only tinker with system prompting so much, before I get drawn into LLM driving :)

> which is that everyone's style of work is going to converge

yeah I imagine even people's styles of thinking will converge as a result of this, more so than from reading other people's prose or programs. I think I saw something on HN to this effect within the last month, too.

5 days ago

lloeki

I've seen something similar: solutions generated feel very Pythonic or Java-esque in languages that are neither Python nor Java (C, Rust, Ruby).

I've had to explicitly direct the machine to read existing sibling code and follow the specific idioms and patterns in use.

6 days ago

clbrmbr

It’s interesting to compare how the agentic search performs, with these targeted reads and lots of tool calls in the stream, versus the older but still valid paradigm of using a high-reasoning model like GPT-X-pro and feeding in all the relevant files at once with no tools.

I have found that the "pro" approach is much more holistic and able to tackle rather "creative" problems that require very careful design, and the overall artifact is tight and self-consistent. Claude Code, by comparison, is incredible at exploration and targeted implementation, but indeed is not great at seeing the forest.

5 days ago

dotancohen

> All the time I wonder what am I missing that's right nearby?

Add to the prompt "use coding conventions of the file which you are currently editing". That gets the machine (Opus and Sonnet at least) to go over the nearby code and occasionally mention something obvious.

5 days ago

[deleted]
6 days ago

tptacek

I don't follow. LLMs spotted these bugs in the first place. You seem to be saying that these discoveries are indications that they're bad for vulnerability discovery.

6 days ago

firer

From what I understand, the Copy Fail bug was found by a researcher who noticed something weird and then used AI to scan the codebase for instances where that becomes a problem.

I bet that with a slightly looser prompt/harness, the LLM could have found these twin bugs too.

Yet at the same time, I also think that if the human researcher had manually scanned the code, he'd have noticed these bugs too.

FWIW I do think LLMs are great tools for finding vulnerabilities in general. Just that they were visibly not optimally applied in this case.

6 days ago

aerodexis

They could also have found all these things at the same time - and are slow-rolling the disclosures.

6 days ago

eqvinox

I don't think the copy.fail people understood the issue they found, as is evident from the heavy focus on AF_ALG/aead_algif, which is essentially "innocent", as we're seeing here.

I think LLMs are great for vulnerability discovery, but you need to not skimp on the legwork and on understanding what it even is you just found.

6 days ago

tptacek

Right but without the LLM the bug doesn't get found at all.

6 days ago

_AzMoo

That's not necessarily true. Who's to say the security researchers wouldn't have found it if they'd searched the code manually?

6 days ago

tptacek

It's an AI security firm! You might just as productively ask "why did all the other engineers who ever looked at this code not find it, and why was Theori the one to actually surface it?".

6 days ago

UltraSane

It would have taken a LOT longer but often this kind of manual search is so tedious people just don't do it. LLMs don't get bored.

6 days ago

dgellow

> LLMs don't get bored

They do not get bored like a human, but they are trained on human language and replicate the same traits, such as laziness, and expressing boredom or annoyance (even if obviously they do not experience anything at all). It's actually a lot of effort to get them to engage with things at a deeper level without cutting corners.

6 days ago

cp9

I’m hardly going to simp for LLM tools but the fact that the bug existed and no one had reported it seems proof positive no one was about to find it without them

6 days ago

eqvinox

Yes, I agree. I'm not the GP poster.

6 days ago

baq

Safer to assume at least one of the NSA, Mossad, and a few others was sitting on it for years.

6 days ago

totallyrandom__

Am I missing something? Where does it say that the researcher who found Dirty Frag used an LLM to find it? Have you read the original report from the researcher?

5 days ago

parliament32

No, they did not. Careful of falling for the psychosis.

> This finding was AI-assisted, but began with an insight from Theori researcher Taeyang Lee, who was studying how the Linux crypto subsystem interacts with page-cache-backed data.

https://xint.io/blog/copy-fail-linux-distributions

6 days ago

tptacek

Theori is an AI security research firm.

6 days ago

duk3luk3

You appear to want to die on the hill of "This vulnerability would never have been found if we lived in a world without LLM AI" which is a very strange hill to die on.

There's no question that we live in the world where LLM AI was involved in finding the Copy Fail vulnerability at this specific time, and it's completely normal for people to see a vulnerability and then look closer and find related vulnerabilities or a deeper root cause. But there's no need to adopt an extreme "without LLM AI we don't find these vulnerabilities" position.

6 days ago

tptacek

It's weird to say I want to "die on this hill" because that's not even something I believe. There was nothing especially difficult about this particular vulnerability. My only observation is that nobody found it before; then an LLM security firm went out looking for Linux LPEs, and thus it was discovered.

That is a very difficult fact pattern to which to attach the conclusion "LLMs have sabotaged security research" (my paraphrase).

6 days ago

j16sdiz

Well... every new vulnerability is one nobody found before.

Otherwise it wouldn't be classified as "new".

--

Edit:

I think LLM is very useful here.

When a researcher spots something funny, instead of spending two days reading and testing, he can fire up an LLM and have it read all the code leading there in ~30 minutes.

5 days ago

Yokohiii

The finding started with human intuition and was assisted by an LLM. You can yell "AI sec firm" 1000 times. A human got it started. You shouldn't die on that hill.

6 days ago

furyofantares

Of the MANY things I've completed in the last year that I would never have done without an LLM, a human got 100% of them started. The ideas were mine in every case.

But it is still a fact that I have been taking on all sorts of tasks I would never have taken on if I didn't have power tools.

5 days ago

Yokohiii

My comment was solely about the correct attribution of who made the initial finding. It's not a comment about the value of AI. I think we can get facts right and still argue for or against AI.

4 days ago

furyofantares

That's a pet peeve of mine as well, the inability to discuss facts / correct mistaken things on the internet, if the fact/mistake is on the "wrong" side of an argument.

I don't think I'm doing that though.

The context is a discussion about whether it would have been found without the assistance of an LLM. I agree that further upthread there may be some misattribution, but it is not present in the post you were directly replying to, and it is not really the argument being made.

4 days ago

Yokohiii

His whole sentiment was yelling "LLMs did this" several times. He wanted to smuggle his pro-AI attribution in, one way or another. In that vein I could also argue "without humans, we wouldn't have LLMs." But it doesn't have value, right? I don't know why some try so hard to play down any human impact in this context. LLMs can help to find bugs. Without broader context, that's a good and interesting thing. There is no need to trample over everything left and right just to overhype it.

4 days ago

danudey

It seems as though this issue occurred to him, then he used their tool ("Xint Code") to analyze the codebase for instances of it.

6 days ago

ofjcihen

I don’t think that’s what the OP is saying at all, just that using LLMs needs to be a cooperative research process.

Also I see you jumping around a lot to the defense of LLMs when I don’t think anyone is really attacking them. Maybe cool it a bit.

6 days ago

tptacek

From the thread that ensued I feel comfortable that my interpretation of the comment (or rather, my confusion about it) was in fact germane.

6 days ago

ofjcihen

Germane or not, the knee-jerk reactions related to LLMs are getting ridiculous, and it seems like it's the same people throwing down at a moment's notice and then chalking it up to a misunderstanding.

So like I said, just chill out.

6 days ago

keybored

Right. Finding the bug is in itself a win. It seems we’re jumping from that spend-electricity-to-find-bugs win to arguing about how some things around it are not quite good or comfy.

6 days ago

rayiner

It’s incredible humans spot stuff like this. I guess even more incredible that LLMs can do it!

6 days ago

papascrubs

Or a follow-up prompt: "find similar classes of bugs". Once the actual case has been laid out, finding like bugs isn't too hard. I hear you on the creativity bit. Like any tool, AI can put blinders on. Using it to augment without it fully taking over your workflow is tough.

6 days ago

dgellow

Not just like any tool though. Interacting with agents can be incredibly boring and frustrating in a way that I personally do not experience with other technology

6 days ago

riedel

Just a side note: negative synergy does not seem so uncommon with machine learning. We did some research maybe 10 years ago on human/ML-based duplicate detection (for a municipal support ticket system). It showed that pure AI and pure human outperformed co-working; human oversight often overcorrected machine work, for example. I think it's actually a nice HCI problem to solve: how to amplify creativity and unique skills in such processes, particularly if they can be to some degree repetitive and tiresome.

6 days ago

refulgentis

It’s very hard to see a root vuln similar to, but not the same as, another discovered by AI, as a lesson about AI not exploring.

Is there a counterfactual where you would say it explored well enough, besides both vulnerabilities published as one?

6 days ago

harshreality

I don't know... after they found a high profile bug like copyfail, I wouldn't attribute not looking for similar bugs to them being overly dependent on AI. It's easy to stop exploring, for a while at least, after you've struck on a major find. Maybe they would've returned to it in a few months. It certainly inspired others to explore similar areas and find these new bugs. Isn't that enough?

5 days ago

YmiYugy

AI or not, it's always been reasonably common that a bunch of related vulnerabilities get discovered shortly after the original one.

5 days ago

formerly_proven

These are all page cache poisoning attacks (dirtyfrag, copyfail, dirtypipe). Maybe the page cache should have defense-in-depth measures for SUID binaries?

6 days ago

firer

SUID mitigations have nothing to do with the vulnerability itself - just the exploit.

If there's a root cronjob that runs a world readable binary, you could modify it in the page cache and exploit it that way.

Modifying the page cache is a really strong primitive with countless ways to exploit it.

6 days ago

formerly_proven

True! Building protections (e.g. physical pages in the page cache are not writeable 100% of the time) just for executables has of course countless circumventions as well (e.g. config files). Yeah, there is probably not that much to be done there, actually. Looking at some of the diffs, it seems to me like the kernel makes it really not particularly obvious when/how this goes wrong. E.g. the patch for this is to look at an additional flag on the socket buffer to fix an arbitrary page cache write. This feels rather like action at a distance. Logically this of course makes sense; the whole point of splice et al. is to feed data from one file-like into another file-like, whatever those ends might be. That erases the underlying provenance of the data.

6 days ago

eqvinox

splice() should maybe generally refuse to operate on things you can't write to.

6 days ago

toast0

splice is documented to return EBADF if "One or both file descriptors are not valid, or do not have proper read-write mode."

So it seems surprising to me that you can call it when the out fd is not writable? But I didn't retain the information about the vulnerability, so I'm missing something. There was something about copy on write, IIRC?

6 days ago

eqvinox

"proper read-write mode" for the input fd is reading only. The exploit is writing to the splice() input fd.

Also, NB, I said permission check, not mode check. The input fd to splice can and will be open for only reading quite often. Doesn't mean the kernel can't still do a write permission check.

(Except I didn't say that here. Oops. Getting confused with my posts.)

6 days ago

toast0

OK, I may likely have too much sleep debt to understand, but given the bug is that splice can write to the input fd, you're suggesting maybe splice should only let you use an input fd if the process has access to write to it?

But splice is more or less a generalization of sendfile, and sendfile is often used for web serving, where the serving process does not have ownership of the documents it is serving. It doesn't make sense to limit splice such that it can't do the task it was built for. Maybe splice should just not write to the input fd? :P

6 days ago

cyphar

> But splice is a more or less a generalization of sendfile

Not really; splice(2) is actually more limited. It's an optimisation for reading and writing data between files and pipes without needing to make copies.

sendfile(2) works with any fds because it just exists to remove a fair bit of the copy overhead when doing a userspace read/write loop, but it does actually do a copy.

6 days ago

eqvinox

Yes, it'd curtail splice() usage quite heavily. Maybe too much.

But apparently we can't be trusted with the page cache…

Maybe the kernel using supervisor-read-only flags could be made to work; the only issue then is what happens if something does in fact need to write…

6 days ago

semiquaver

Aren’t you just saying “don’t write bugs?”

6 days ago

varispeed

> When your workflow consists of asking questions and getting answers immediately, you don't get to see what's nearby.

That's why it's very, very important to just step out and use the saved time to go for a walk, to a park, sit on a bench, listen to birds, close your eyes, and zoom out.

The state we are in is actually brilliant.

6 days ago

zimbatm

Maybe. This phenomenon of security holes being found close to each other was also common before LLMs. People's attention gets directed to a place, and typically more issues get found.

5 days ago

totallyrandom__

Where does it say that the Dirty Frag one was found by LLMs? The research site with all the information doesn't mention LLM or AI at all.

5 days ago

[deleted]
6 days ago

SubiculumCode

Evidence or are you just riffing?

6 days ago

john_strinlai

"Because the embargo has now been broken, no patches or CVEs exist for these vulnerabilities."

link: https://github.com/V4bel/dirtyfrag

detailed writeup: https://github.com/V4bel/dirtyfrag/blob/master/assets/write-...

importantly:

"Copy Fail was the motivation for starting this research. In particular, xfrm-ESP Page-Cache Write in the Dirty Frag vulnerability chain shares the same sink as Copy Fail. However, it is triggered regardless of whether the algif_aead module is available. In other words, even on systems where the publicly known Copy Fail mitigation (algif_aead blacklist) is applied, your Linux is still vulnerable to Dirty Frag."

mitigation (I have not tested or verified!):

"Because the responsible disclosure schedule and the embargo have been broken, no patch exists for any distribution. Use the following command to remove the modules in which the vulnerabilities occur."

    sh -c "printf 'install esp4 /bin/false\ninstall esp6 /bin/false\ninstall rxrpc /bin/false\n' > /etc/modprobe.d/dirtyfrag.conf; rmmod esp4 esp6 rxrpc 2>/dev/null; true"
Conversation around the mitigation suggests you need a reboot, or to run this after the above on already-exploited machines:

    sudo echo 3 > /prox/sys/vm/drop_caches
6 days ago

progval

"sudo" in "sudo echo 3 > /prox/sys/vm/drop_caches" does not do anything, because sudo only runs echo, not the write.
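You can see why without root, using a scratch file in place of the sysctl (the temp file here is just a stand-in, not part of the original commands):

```shell
# The caller's shell performs the redirection *before* sudo runs, so the
# target file is opened with the caller's privileges. Demonstrated on a
# scratch file standing in for /proc/sys/vm/drop_caches:
tmp=$(mktemp)
sh -c "echo 3 > $tmp"            # redirection happens inside sh -c, so it works
cat "$tmp"                       # -> 3
echo 1 | tee "$tmp" >/dev/null   # tee opens the file itself: the sudo-tee pattern
cat "$tmp"                       # -> 1
rm -f "$tmp"
```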

And if a machine is already exploited, it's too late to do just that. You need to rebuild the whole disk image because anything on it could be compromised.

6 days ago

john_strinlai

>And if a machine is already exploited, it's too late to do just that. You need to rebuild the whole disk image because anything on it could be compromised.

this is more targeted at the people who run the PoC to see if their machine is vulnerable.

just transcribing some relevant stuff from https://github.com/V4bel/dirtyfrag/issues/1 so that people visiting this thread don't need to poke around a bunch of different places.

6 days ago

dundarious

You can't sudo echo and redirect from the non-sudo shell like that.

    echo 3 | sudo tee /proc/sys/vm/drop_caches
or

    sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
Also fixed your typo in /proc...
6 days ago

throw0101c

Also try:

     sudo sysctl -w vm.drop_caches=3
6 days ago

wpollock

Or more simply, use

   su -c 'echo 3 > /proc/sys/vm/drop_caches'
6 days ago

seba_dos1

echo 3 | sudo tee /proc/sys/vm/drop_caches

6 days ago

john_strinlai

thanks. copy pasting from the github via my phone, and should have taken the extra few mins

6 days ago

dundarious

No worries, overall a very useful summary comment.

6 days ago

sounds

Is there any additional info on where it was "published publicly by an unrelated third party"? From the timeline in the writeup:

> 2026-05-07: Submitted detailed information about the vulnerability and the exploit to the linux-distros mailing list. The embargo was set to 5 days, with an agreement that if a third party publishes the exploit on the internet during the embargo period, the Dirty Frag exploit would be published publicly.

> 2026-05-07: Detailed information and the exploit for this vulnerability were published publicly by an unrelated third party, breaking the embargo.

Edit: nevermind, details are further down in the thread:

https://openwall.com/lists/oss-security/2026/05/07/12

And

https://news.ycombinator.com/item?id=48055863

6 days ago

alecco

People are blaming the guy who wrote the exploit for breaking the embargo but it was actually broken in Linux by publishing a fix [1]:

> on 2026-05-05 Steffen Klassert pushed f4c50a4034 to netdev/net.git with Cc: stable@vger.kernel.org.

Once a fix is out it's usual for researchers to race to make the first exploit out of it.

[1] https://afflicted.sh/blog/posts/copy-fail-2.html

5 days ago

danudey

Just FYI, you can also mitigate it with `echo 1 > ...`; you don't need to drop everything, since writing `1` clears the page cache and that's enough.

Tested locally on Ubuntu 26.04:

1. Ran the exploit and got root

2. Configured the mitigations

3. Ran `su` again with no parameters and immediately got root again unprompted

4. Cleared the page cache

5. `su` asked for a password

6 days ago

eqvinox

And I ask again: why the f*ck is algif_aead getting all the flak for copy.fail? It's authencesn being stupid.

authencesn didn't get fixed. Now we got the results of that, turns out you can access the same (I believe) out of bounds write through plain network sockets.

I wish I thought of that, but I didn't.

[ed.: I'm referring to the through-ESP issue. The RxRPC one is AIUI completely unrelated.]

6 days ago

chromacity

If this indeed works on all major distributions, I just continue to be amazed by how irresponsible the maintainers are. We're talking about optional kernel functionality that's presumably useful to something like <0.1% of their userbase, but is enabled by default?... why?

This feels like the practice of Linux distros back in 1999 when they'd ship default installs with dozens of network services exposed to the internet. Except it's not 1999 anymore.

6 days ago

JeremyNT

Distro maintainers blacklisting specific functionality because they believe YAGNI is a pretty big ask. They just don't know who is using what. It's always possible for users to go back and tailor their builds for the stuff they actually want.

And... I remember the early days of Linux where I ran `make menuconfig` and selected exactly the functionality I wanted in my kernel. I'd... rather not end up back there.

That said, a target for an easy win here is RHEL, which compiles a lot of modules into the kernel rather than leaving them as loadable modules, so the mitigation for e.g. Copy Fail was impossible. Maybe they could do with a few fewer of those?

6 days ago

chromacity

You can make precisely the same argument for network services. Who knows, maybe you need telnet and UUCP and NFS and ftpd running on your system?... why should the distro maintainer decide?

Well, because you probably don't, and it's a security risk, so no need to put millions at risk for the benefit of that one person who wants to tinker with packet radio or whatever. Similarly, it would be prudent for distros to not allow autoloading of modules that are extremely niche while giving a simple way to adjust the settings if you want to. God knows they have plenty of GUI configurators and config files already.
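For what it's worth, the "simple way to adjust the settings" already exists: disabling autoload of a niche module is one line of modprobe config per module. A sketch, written to a scratch directory here so it runs unprivileged (the real location is /etc/modprobe.d/):

```shell
# "install <mod> /bin/false" makes modprobe run /bin/false instead of
# loading the module; "blacklist" additionally stops the module's internal
# aliases from triggering a load. Scratch dir used as a stand-in.
conf_dir=$(mktemp -d)
printf 'install rxrpc /bin/false\nblacklist rxrpc\n' > "$conf_dir/no-rxrpc.conf"
cat "$conf_dir/no-rxrpc.conf"
rm -rf "$conf_dir"
```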

6 days ago

akdev1l

The thing is that we could simply split those modules into separate packages

No reason why you couldn’t just `dnf install -y kmod-rxrpc` if for whatever reason you need that.

6 days ago

michaelt

Now that I think about it, it's kinda weird that non-root users can cause kernel modules to get loaded, without any hardware changes having happened.

If the kernel modules for esp4, esp6 and rxrpc aren't loaded - how is it that a non-root attacker can cause them to get loaded?

6 days ago

pepa65

It seems that this is allowed as part of a dependency chain...

6 days ago

TZubiri

>Distro maintainers blacklisting specific functionality because they believe YAGNI is a pretty big ask

We have forgotten what a distro is, and its modern corruption of the concept is now taken as the definition.

Distributions weren't meant to be competing generic universal bundles of userspace tools in addition to the kernel.

6 days ago

atgreen

Don't disagree, but there are eBPF mitigations that work as alternatives to unloading kernel modules.

6 days ago

cassianoleal

Can you elaborate on that?

5 days ago

atgreen

5 days ago

JeremyNT

I was aware of commercial antivirus vendors (Crowdstrike) doing something like this, but this is the first I've seen it published by somebody in the open!

Have you considered writing up a blog post and submitting this to HN?

5 days ago

cassianoleal

Thanks!

From the sound of it, the same mitigations for Copy Fail 1 are also effective here.

5 days ago

atgreen

No, they are different. I just bundled them together for convenience in this POC. The only real thing in common is that they both use eBPF.

5 days ago

cassianoleal

Got it, thanks!

5 days ago

0xbadcafebee

There is no way to disable components you think users won't use and not make it incredibly difficult to use the system. I personally would have no way to know what to enable or not enable based on what I want to do, and I've been using this stupid OS for 25 years.

Linux distro maintainers are the most responsible software maintainers on the planet. Their security practices are miles beyond the stupid programming language package managers, they maintain a select list of packages, vet changes, patch bugs, resolve complex packaging issues, backport fixes, use tiered releases, distribute files to global mirrors, and cryptographically validate all files. And might I remind you, they do all this for free.

6 days ago

nirui

> irresponsible the maintainers are

Today it's 0.1%, tomorrow it might become 100%. User demand is hard to anticipate, so it's reasonable to include small features that don't cost a lot to run by default.

It's not ideal, but you really don't want to prevent users from finishing their tasks, because maybe then they'll just give you a bad name and switch to another distro.

That is to say, it's not "irresponsible"; it's reasonable maximalism (at least trying to be).

6 days ago

lunar_rover

In many ways, non-mobile computers are very much still stuck in 1999. Android is significantly more secure than other Linux systems because it's much younger and had the chance to integrate mandatory access control into the entire stack.

6 days ago

croes

Unless your Android doesn’t get any security updates anymore.

https://durovscode.com/google-android-security-update-warnin...

6 days ago

akimbostrawman

That is a well-known and entirely different issue.

6 days ago

croes

Is it?

The claim is that Android is much more secure than other Linux, but if 40% of all Android devices don't get security patches and you can't even apply them yourself, I wouldn't call it more secure per se.

Hardening is one part of security, patchability another. Android lacks in the latter.

6 days ago

a96

You can take many computers from 1999 and update them to the best software available today. Most phones won't even do that for a few years. And that is security in the real sense of the word, as in "this won't just pull the rug from under me".

(Of course the problem isn't Android, it's the chipset vendors that the SW depends on. They drop support fast and never give enough info for anyone else to keep things up to date. Also Google.)

6 days ago

akimbostrawman

> if 40% of all Android devices don't get a security patch

No system will stay secure once it stops receiving updates. That does not exclude it from being more secure than another system, on security-feature merits, as long as it does get updated.

>Hardening is one part of security, patchability another. Android lacks in the latter.

That is not an inherent flaw of Android, but of OEM devices shipping modified Android that they don't bother keeping up to date. Some OEMs are trying to mitigate this by increasing security update support to 7 years, which still is not long enough, but also doesn't make them less secure than a desktop that gets updated for longer.

What people forget is that not only is desktop and mobile phone software different, but also the hardware. If your desktop PC hardware is out of date / EOL, usually nobody cares. Meanwhile, on a phone this can be a lot more relevant, because security expectations and threat models are a lot higher; for example, see all the zero/one-click compromise headlines.

5 days ago

croes

It is an inherent flaw of Android. Imagine no Windows updates because Lenovo stopped support for 4-year-old notebooks.

5 days ago

akimbostrawman

It's 7 years because the limiting factor is hardware firmware support. A lot of desktop hardware doesn't receive firmware updates beyond 4 years either, but that just gets shrugged off, like you're doing, because "the OS still gets updates so it must be secure".

5 days ago

koutakun

Funny comparison, seeing as Windows decided to drop support for any machines without a TPM (some as young as 2017/2018).

5 days ago

mike_hearn

So what? Most devices running Linux don't get security patched, it was ever thus. Think about all the kernels running in wifi routers and other embedded devices.

5 days ago

akimbostrawman

Your outdated Linux box is not reachable worldwide by a public phone number. Once again, phones do not have the same threat model as desktops. Try running your outdated Linux without a firewall and see how long it survives.

7 hours ago

Atlas26

The only thing Android shares with desktop/server Linux is the kernel; the entire rest of the OS built on top of it is completely different, so it's a pointless comparison. While it uses the Linux kernel, it's not the Linux everyone is commonly talking about here, i.e. GNU/Linux (insert copypasta here).

Mobile OS are also essentially required to be much more controlled and locked down due to FCC regulations and the strictness surrounding modems and other RF emitting devices.

4 days ago

akerl_

It’s not enabled by default. It’s an optional module that is loaded on demand. The entire setup of the kernel promotes compiling in the core set of things your users will need and offering basically everything else as a module to load on demand.

6 days ago

chromacity

This is pedantry for the sake of it. If it's present by default and an attacker can trivially cause it to be loaded, it's the same as "on by default".

6 days ago

akerl_

It’s radically different than on by default.

Having a service that automatically starts and listens on the network is radically different from having a module that a local administrator can load.

If you want to block module loads, you’re one sysctl flag away.
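For reference, the flag being alluded to is, I believe, `kernel.modules_disabled`, which can be set persistently like any sysctl but is a one-way switch at runtime:

```
# /etc/sysctl.d/99-no-modules.conf
# One-way switch: once set to 1 it cannot be cleared until reboot,
# so make sure everything you need is already loaded or built in.
kernel.modules_disabled = 1
```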

6 days ago

zzrrt

> having a module that a local administrator can load

This is a successful local privilege escalation, so local administrator privs were not needed. In default configuration of all distros, apparently.

> If you want to block module loads, you’re one sysctl flag away.

The modules aren't really the point, it's that unnecessary features (to 99% of us?) were accessible by default without privs.

6 days ago

zbentley

This is "a service that automatically starts". That's what automatic kernel module loading is for!

It's not any different from putting an always-running network service behind socket activation instead. The security boundary/risk is nearly identical between the two.

6 days ago

akerl_

One is remotely accessible. The other is locally accessible.

6 days ago

zbentley

The GP you were replying to mentioned a vulnerability "present by default and an attacker can trivially cause it to be loaded".

You responded contrasting a network service with an administrator-loadable module.

This is neither of those. It's an LPE, not a remote exploit. It doesn't require an administrator (root) to load anything. In context of this vuln, it's exactly analogous to socket activation. The scope of an LPE vuln is local; yes. What does that have to do with the rest of your comments?

6 days ago

akerl_

I don't understand what point you're trying to make here.

I originally replied to a comment saying "This feels like the practice of Linux distros back in 1999 when they'd ship default installs with dozens of network services exposed to the internet". It is not like that.

6 days ago

Sohcahtoa82

> This is a pedantry for the sake of it.

Par for the course for HN.

6 days ago

thayne

How would the attacker cause one of these modules to get loaded without already having root?

6 days ago

staticassertion

Trivially. Kernel modules autoload through various unprivileged mechanisms.

6 days ago

[deleted]
6 days ago

[deleted]
6 days ago

kro

Maybe it would be reasonable for sysadmins to proactively whitelist the modules they use and block all the exotic, unused ones that aren't needed in their system configuration.

This would reduce the amount of ring 0 code. But I've never seen such advice.
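One way to do that (a sketch; the module names here are illustrative, so check `lsmod`/`modinfo` for what's actually on your system) is a modprobe.d override, since plain `blacklist` lines only stop alias-based autoloading:

```
# /etc/modprobe.d/no-exotic-net.conf
# "install" redirects any load request for the module (including the
# net-pf-* aliases triggered by socket()) to a command that just fails.
install rxrpc /bin/false
install af_key /bin/false
```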

6 days ago

[deleted]
5 days ago

ActorNightly

Because in order to exploit this, you have to have direct access to the computer: either through a malicious USB device, or by exploiting the supply chain or a known piece of software that will be willingly or automatically installed. Furthermore, you need to be able to run essentially arbitrary terminal commands, which is a huge breach of isolation in that software.

If an attacker manages to do all that, it's already bad news for you. Escalation to root is the least of your worries at that point.

Like someone else below posted, https://xkcd.com/1200/

People need to understand what the vulnerability actually is before freaking out about it.

6 days ago

netheril96

You are assuming that LPE only applies to the user that holds all the sensitive stuff. But it also applies to users created specifically for isolation. Without LPE they would not have access to anything important even if they were compromised.

6 days ago

ActorNightly

It doesn't matter which "user" this goes through. If an attacker can take control of a user to the point where they can execute arbitrary scripts, you have already lost.

4 days ago

cluckindan

So a threat actor buys access to a managed kubernetes service, or other linux-based shared hosting platform, and now they have access to the computer.

Hell, GitHub Actions would do.

6 days ago

echoangle

Is there any service that relies on Linux user separation or containers to separate different user accounts? I’m pretty sure you’re not supposed to do that and the proper way is to run different instances in virtual machines.

6 days ago

ndiddy

Basically every shared webhost that uses cPanel works like this. The security mechanism they use is called CageFS (https://cloudlinux.com/getting-started-with-cloudlinux-os/41...), which makes it so users can't see other users, but it's not like a VM or something.

5 days ago

LelouBil

Right, you're not supposed to do that...

6 days ago

ActorNightly

Yes, because hypervisors are just programs that run under Linux, with no real CPU/memory isolation......

Lemme guess, you probably think this can be used to hack into the backend that runs AWS from any EC2 lol?

4 days ago

TacticalCoder

> ... but is enabled by default?... why?

We could also wonder why XZ was linked into SSH... but only on systemd-enabled distros (which is a lot of them).

Just... why?

And then people make sure to appeal to incompetence instead of malice and say nonsense like "Sure, it only factually affects systemd distros, but this is totally not related to systemd". All I saw, though, was a systemd backdoor (sorry, exploit).

Now, regarding copy.fail, which just happened: not all maintainers are irresponsible. And some have, rightfully, bragged that the security measures they preemptively took in their distros made them non-vulnerable.

But yup, I agree it's madness. Just why. And Ubuntu is a really bad offender: it's as if they did a "yes | ..." pipe into configure, including every single module directly in the kernel.

"We take security seriously, look we've got the IPsec backdoor (sorry, exploit) modules directly in the kernel". "There's 'sec' in 'IPsec', so we're backdoored (sorry, secure)".

6 days ago

chuckadams

xz was not directly linked to ssh, and systemd itself was not providing the backdoor. The weakness is embedded into the architecture of glibc (which has spread to other systems like FreeBSD as well): https://github.com/robertdfrench/ifuncd-up

6 days ago

AshamedCaptain

The entire argument here is ridiculous. There's a big jump from "IFUNC undermines RELRO" to "IFUNC is the issue". You could have gotten much the same effect by spawning a thread from a plain init or C++ constructor. No one should think that RELRO, W^X, ASLR, or anything like that is going to deter someone who can literally control the contents of the libraries being linked in. They could, literally, spawn a copy of sshd with a patched config if necessary.

6 days ago

TacticalCoder

Sure, but distros not using systemd were not affected.

6 days ago

seba_dos1

The only reason distros not using systemd were "not affected" is that this particular attack wasn't going after them. They were compromised nevertheless; their compromise was simply consequence-free due to the attacker's choices of what to do after the compromise.

5 days ago

[deleted]
6 days ago

baggy_trough

Disclosure Timeline

2026-04-29: Submitted detailed information about the rxrpc vulnerability and a weaponized exploit that achieves root privileges on Ubuntu to security@kernel.org.

2026-04-29: Submitted the patch for the rxrpc vulnerability to the netdev mailing list. Information about this issue was published publicly.

2026-05-07: Submitted detailed information about the vulnerability and the exploit to the linux-distros mailing list. The embargo was set to 5 days, with an agreement that if a third party publishes the exploit on the internet during the embargo period, the Dirty Frag exploit would be published publicly.

2026-05-07: Detailed information and the exploit for the esp vulnerability were published publicly by an unrelated third party, breaking the embargo.

2026-05-07: After obtaining agreement from distribution maintainers to fully disclose Dirty Frag, the entire Dirty Frag document was published.

6 days ago

flumpcakes

7 days from disclosure to publishing a how-to guide to get root to the entire planet doesn't scream "responsible" disclosure to me.

6 days ago

bawolff

Its not the reporter's fault that other people broke the embargo.

6 days ago

progval

They don't have to publish a working exploit as soon as the embargo is broken, though.

6 days ago

throw0101c

Perhaps, but if the exploit code is published folks can double-check that they implemented the mitigations properly.

If there's no PoC, how can you really be sure?

6 days ago

mike_d

Why not? There has already been a working exploit floating around, at least now it comes from an authoritative source.

6 days ago

john_strinlai

Anyone who will use the exploit maliciously will immediately and trivially be able to create a working exploit.

6 days ago

staticassertion

An exploit was already published.

6 days ago

j16sdiz

The third party posted an exploit.

5 days ago

firer

My immediate reaction was the same.

But this is very similar to Copy Fail, and I'm assuming there was a worry that others might discover it soon as well. Hence the urgency.

At least that's my charitable interpretation.

6 days ago

[deleted]
6 days ago

lofaszvanitt

Who cares? Publishing them without disclosure is the true way; otherwise no one would care about security and your data.

6 days ago

eqvinox

And again it's band-aiding the problem. Can authencesn not be fixed or what?

6 days ago

wolttam

Maybe write to the LKML if you have some privy information?

5 days ago

eqvinox

I don't have enough hubris to assume I know things the people on LKML (or actually netdev in this case) don't. If anything, I might unicast a mail to Steffen.

5 days ago

thom

After all these years, we finally have enough eyeballs that all bugs are shallow, and it kinda sucks. How many times a week am I going to be updating my kernel from now on?

6 days ago

tempaccount5050

I haven't updated mine. I have a firewall and it's not exposed to the Internet. Need a key to SSH in. Same with my public facing server. Almost none of these exploits are "drop everything now and patch" unless you are somehow exposing yourself stupidly.

6 days ago

mnw21cam

It's a "drop everything and patch" if you have a large multi-user server where you don't completely trust all of the users. Like say in a university with a server that students can log in to, like I have just had the joy of updating (and had RHEL break ZFS on me yet again).

But yes, in most other cases no it isn't a "drop everything" exploit - but it does mean one less layer in the multi-layer security, as unprivileged remote exploits now become root-access remote exploits.

5 days ago

rithdmc

> unless you are somehow exposing yourself stupidly

Or, y'know, offer some forms of compute as a service.

5 days ago

INTPenis

I understand where you're coming from, it's no reason to panic.

But this kind of thinking can be dangerous because it implies that your systems don't talk to the outside world at all, which they obviously do. I mean a very glaring example is container images, so it definitely takes more than a firewall and ssh keys to stay safe in general.

5 days ago

baq

If you’re running any sort of CI you’re probably going to have a bad couple of days if everything goes well

6 days ago

HugoTea

To be honest, CI has always been a massive risk, I'm a bit miffed at how blasé some people are about providing runners.

5 days ago

yread

unless you run pinned CI runners on hardware you control

5 days ago

midtake

I sort of always expect there to be an LPE to root on Linux tbh, if anything this is great news and Linux might be a useful multiuser system after all.

6 days ago

bjackman

Updating your kernel isn't good enough, it never was.

Native unsandboxed execution == root. Only thing that's new is some people started making websites for their LPEs.

https://github.com/google/security-research/tree/master/pocs...

6 days ago

brcmthrowaway

So you think someone is going to break into your house, find your default credentials somehow and get root access?

6 days ago

sureglymop

With physical access, root access is as simple as setting init=/bin/bash in the kernel parameters from a bootloader. No need for credentials or anything.
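Roughly like this (paths and version strings are placeholders):

```
# At the GRUB menu: highlight the entry, press 'e', and append
# init=/bin/bash to the line starting with "linux":
linux /boot/vmlinuz-6.x.y root=UUID=... ro quiet init=/bin/bash
# Ctrl-x to boot: you land in a root shell before any login prompt
# (the root filesystem is typically read-only; remount with
# "mount -o remount,rw /")
```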

6 days ago

anygivnthursday

Secure boot and disk encryption are not that unusual nowadays.

6 days ago

Asraelite

Secure boot doesn't provide security, just control for device manufacturers.

Physical access always means the device is pwned. You can install a keylogger or something similar.

6 days ago

qrobit

Secure boot ensures the image you boot was not tampered with, and you can't install a keylogger without tampering with the image. If you wanted to install a physical keylogger, you would need to open the device up, and at least my laptop detects bottom-cover removal, meaning the system will ask for a BIOS password if the laptop has been opened.

5 days ago

thom

I think when there’s a step change in our ability to find one type of vulnerability, other types of vulnerability are probably going to become more common as well. Let’s see where we stand at the end of the year.

6 days ago

baq

With how things are going the question should be ‘is twice a day often enough?’

6 days ago

dwd

At the moment it doesn't seem to be.

Within an hour of being advised of, and running, the mitigation for DirtyFrag, my upstream provider blocked all WHM/cPanel/SSH/FTP/SFTP access with a heads-up on:

CVE-2026-29201 CVE-2026-29202 CVE-2026-29203

which look like a repeat of CVE-2026-41940 a week ago.

6 days ago

int0x29

I'm curious what broke the embargo. Did it leak or did a third party find it independently?

6 days ago

reisse

No embargo exists (or could possibly exist) in the first place.

Linux is open source, so every patch fixing a security bug is immediately visible to everyone. There is no workaround for that in the very design of how the kernel is developed. The "embargo" people talk about is the rather silly notion that if people keep their mouths shut and don't write "THIS IS AN LPE" straight in the patch description, everyone can pretend the vulnerability hasn't leaked until the "official" message hits the mailing list.

This approach might have been defensible before, but in the LLM era, when people have automated pipelines feeding diffs straight from the mailing lists to SotA models and asking them to identify the probable security issues being fixed, it is both stupid and dangerous.

6 days ago

zbentley

My (novice) understanding is that embargoes are intended to provide time to 1) develop a patch and 2) distribute the patch.

For Linux/public open source, what you said is right about 2). Once the patch is visible to anyone, it's trivial to identify exploits for unpatched systems. But 1) is still a valid use-case for embargoes for Linux vulns, right? Like, if this patch had taken a few weeks to develop before being confirmed working and published, that's potentially valid grounds for not sharing details during that time (within reason), no?

6 days ago

yencabulator

Someone developed the patch and got it merged, so the timeline was past the 1) embargo that Linux can structurally have: https://openwall.com/lists/oss-security/2026/05/07/12

3 days ago

bjackman

Linux does actually have a proper embargo process. But, you're correct that in this case it wouldn't usually have been followed anyway. Bugs like this are fixed multiple times a week, anyone with basic kernel knowledge can see that they are potentially LPEs.

Usually, nobody even bothers to check. LPEs like this are too common to even categorise effectively.

6 days ago

account42

The linked announcement specifically mentions that an embargo has been broken.

5 days ago

either-orr

A link to the patch was posted on someone's X account. Someone else saw that and posted a working exploit in less than an hour (potentially written with an LLM, though beyond the quick turnaround, that claim isn't substantiated).

https://x.com/encrypted_past/status/2052409822998392962

6 days ago

ajross

It seems like the embargo bit is a bit spun. The exploit is a reasonable extension of ideas from Copy Fail and was independently discovered in public (on X, it seems) before Kim's "embargo" had expired. No one broke any trust; the independent discoverer just didn't follow as rigorous a process.

5 days ago

john_strinlai

it was published publicly by an unrelated third party

6 days ago

jacobgkau

They're asking the nature of the third party's discovery/publishing. Someone on the inside who decided to leak it anonymously? Someone else who was able to access some private communication they shouldn't have been able to see? Or a third party who happened to discover the same vulnerability (which seems less unlikely than normal since this is so similar to Copy Fail), but didn't follow disclosure procedures?

6 days ago

staticassertion

The commit for the fix was public. Someone noticed. An exploit was published.

6 days ago

ahartmetz

I think I read on the bug's website that "No fix has been released". I understood that as there being no public fix, but maybe it only means it's not in a tagged kernel version and no hotfixed distro kernels have been released?

6 days ago

danudey

The patch was posted to the kernel mailing list; someone saw the e-mail, read the patch, figured it out, and published an exploit very soon after.

6 days ago

tkel

The fix has been committed to the git tree of the `netdev` Linux subsystem fork. That's how it was noticed by the grsecurity guy who published an exploit. Then it will be merged by Linus, either into an RC/master for the next Linux minor release, or into the patch-release branch by GregKH/Sasha for already-released versions. Or in this case both, because it's a security fix.

6 days ago

staticassertion

Spender didn't publish any exploit afaik

6 days ago

lofaszvanitt

Following disclosure procedures? The main cause that kills the need to take security seriously.

6 days ago

KamiNuvini

Does anyone know whether Debian is vulnerable? I tried the exploit on a Debian 12+Debian 13 machine but wasn't able to reproduce it myself.

6 days ago

thaniri

I was able to reproduce this issue on kernel 6.12.57+deb13-amd64 running Debian 13 (Trixie), but unable to reproduce it on kernel 6.1.0-42-amd64 running Debian 12 (Bookworm).

For anyone not on the security stream of Debian packages for Bookworm, kernel version 6.1.0-42-amd64 is actually immune to copy.fail. Surprisingly, it looks to be immune to dirtyfrag too. If you haven't already patched from the security stream, you can choose any kernel version that kept commit 2b8bbc64b5c2. I suspect the same commit might accidentally be keeping certain Debian 12 kernel versions safe from dirtyfrag as well.

6 days ago

louwrentius

I tested on a fully up-to-date Debian 13 and the exploit works. The mitigation also works / confirmed.

6 days ago

[deleted]
5 days ago

cholmon

I just tried the exploit on a fresh Debian 13 droplet on digitalocean and it worked.

6 days ago

baggy_trough

Debian 13 is offering linux-image-6.12.86+deb13 now.

5 days ago

miduil

This again does not work under Android, at least in termux compiled with clang/gcc.

6 days ago

staticassertion

I assume because the rxrpc module is not loaded / provided and because unprivileged user namespaces are not allowed, which should be sufficient to mitigate. Curious if someone else has more details though.
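If anyone wants to check their own box, the relevant knobs (names vary by distro and kernel version, so this is best-effort) are something like:

```
# Mainline: setting this to 0 effectively disables unprivileged userns
sysctl user.max_user_namespaces

# Debian's out-of-tree knob (absent on many kernels)
sysctl kernel.unprivileged_userns_clone

# Ubuntu 24.04+ AppArmor-based restriction
cat /proc/sys/kernel/apparmor_restrict_unprivileged_userns
```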

6 days ago

jeroenhd

The exploit as posted contains x86 shellcode, so you'd need to drop in the appropriate shellcode to test if it really works.

Android wasn't vulnerable the last time, so far it's been a shining beacon of hope for proper SELinux configuration that I wish was more widely available in other places.

6 days ago

ronsor

Android has a lot of hardening and sandboxing that desktop Linux doesn't (and won't for UX reasons).

6 days ago

miduil

Yes, it demonstrates that it's possible to harden well, at least for some cases. Depending on the environment, hardened kernels / runtime environments with working safeguards appear to be pretty much achievable today already.

6 days ago

__float

> desktop Linux doesn't (and won't for UX reasons)

Can you elaborate?

6 days ago

akdev1l

A very comprehensive SELinux deployment for one.

SELinux will stop any process in android from loading kernel modules, that’s not allowed. The android permission model as a whole is ultimately backed by SELinux.

6 days ago

mike_hearn

Locking down a desktop OS to modern standards really requires what Apple did with macOS, which requires a degree of central coordination that's beyond the Linux community. It mandates huge changes in almost every area of the OS stack, and all apps have to be sandboxed by default out of the box.

Developers don't like mandatory sandboxing. It has to be forced on them. So you can see the difficulty of doing it in the open source community, which has for decades now had the worst security of any desktop OS platform (even Windows is better).

5 days ago

lunar_rover

To solve the issue from the source, you need to enforce security through means like mandatory access control. The problem is that existing desktop and server systems are too mature for that to be practical, you'll have to rework almost everything and users will certainly reject it violently due to the breakages.

6 days ago

mike_hearn

Apple have shown it can be done with macOS. Not only is every app sandboxed in a usefully robust way (even ones distributed outside the app store) but this has been done in a way smooth enough that users didn't revolt.

5 days ago

danudey

Not sure what specifically they're referring to, but Android (and iOS) add a lot of sandboxing to ensure that each application can only access its own files, can't access hardware willy-nilly (bluetooth, scanning wifi, etc), can only link against certain libraries, etc.

Imagine if Linux only let you run stuff from Flatpak, and if stuff didn't work in Flatpak then too bad for you. Most Linux users would hate it and it would be a mess a lot of the time, so, for user experience (UX) reasons, they don't do it. Android can get away with it because that's been the app paradigm for decades now.

6 days ago

croes

6 days ago

[deleted]
6 days ago

pjmlp

Because Android is not Linux, as much as some pretend it is.

In fact, given the official public APIs, Google could replace the Linux kernel with a BSD, and userspace wouldn't notice, other than rooted devices, and the OEMs themselves baking their Android distro.

6 days ago

grosswait

It absolutely is Linux, and yes the JVM could absolutely run on something else. But it is Linux and you can run Linux binaries directly on it - that just isn’t how it is used by end users.

6 days ago

akdev1l

The JVM has nothing to do with Android. There is no JVM running android apps.

There was Dalvik VM at one point but now it’s just the Android Runtime.

6 days ago

pjmlp

No, you cannot: the NDK has a specific set of official APIs, and the Android team feels within its rights to kill any application that doesn't follow the law of Android land.

Some folks, like the termux rebels, occasionally find out there is a sheriff in town.

> As documented in the Android N behavioral changes, to protect Android users and apps from unforeseen crashes, Android N will restrict which libraries your C/C++ code can link against at runtime. As a result, if your app uses any private symbols from platform libraries, you will need to update it to either use the public NDK APIs or to include its own copy of those libraries. Some libraries are public: the NDK exposes libandroid, libc, libcamera2ndk, libdl, libGLES, libjnigraphics, liblog, libm, libmediandk, libOpenMAXAL, libOpenSLES, libstdc++, libvulkan, and libz as part of the NDK API. Other libraries are private, and Android N only allows access to them for platform HALs, system daemons, and the like. If you aren’t sure whether your app uses private libraries, you can immediately check it for warnings on the N Developer Preview.

https://android-developers.googleblog.com/2016/06/improving-...

These stable APIs,

https://developer.android.com/ndk/guides/stable_apis

6 days ago

stevenhuang

That's all user space platform specifics, it has no relation to your previous statement where you said 'android is not linux'.

Someone can statically build a freestanding executable/so targetting arm64 linux (specifically the right android linux kernel version) and it will run fine on Android. The syscall interface, process model, file descriptors, signals, memory mapping, all of this is Linux, this is what people mean when they say Android is just Linux.

6 days ago

pjmlp

Yes, exactly: the Play Store isn't GNU/Linux, and normies don't use ADB.

6 days ago

tadfisher

What's amazing about Linux is that you don't have to use the system's libc, and you don't have to use dynamic linking.

That said, newer Androids use seccomp to restrict which syscalls you can use, basically to what bionic exposes anyway. This doesn't seem to affect Termux and friends, which can apparently run full X11 applications without root.

(edit) Notably, splice() is still callable, so maybe the POC needs to be tweaked...

6 days ago

pjmlp

Yes, at which point it isn't GNU/Linux, rather something else built on top of the Linux kernel.

As for termux,

https://wiki.termux.com/wiki/Termux_Google_Play

6 days ago

esseph

https://www.androidpolice.com/google-support-linux-kernels-a...

Google relies on Linux LTS kernels. When the Linux LTS team dropped support from 6 years down to 2 years, Google stepped in to cover the 4-year gap.

It is Linux. It's basically a distro.

6 days ago

pjmlp

When people say Linux they mean GNU/Linux.

6 days ago

cyphar

In common parlance, yes -- because there is no practical distinction. But in cases where something is using just the Linux kernel without GNU and other common userland components (and there is a practical distinction), then it's definitionally untrue to say it's "not Linux" when you really meant "it's not GNU/Linux".

6 days ago

esseph

I've always thought this was extremely interesting: https://chimera-linux.org/

5 days ago

grosswait

That is indeed interesting!

2 days ago

nonameiguess

Alpine Linux is not using GNU. I'm sure there are others. No definition you can ever come up with will have no exceptions in widespread use. Live with it.

5 days ago

grosswait

Maybe. It depends on context, and in this case, no, I do not mean GNU/Linux.

2 days ago

[deleted]
5 days ago

dzaima

That's specific libraries, when using the default linker. You could construct the same behavior on desktop Linux too. And you can avoid it equally well on Android: you can statically link things just fine, use libraries you actually control, and presumably use a custom linker if desired. It's utterly unsurprising that "you run code you don't control" results in "said code can do arbitrary things for unsupported uses". (Never mind that, instead of a "sheriff", they could have just renamed all private symbols, or naturally replaced them over time, breaking your code all the same, just in a more confusing way.)

Also some obligatory Linux vs GNU/Linux comment. (and it's not like GNU/Linux doesn't ever change under your feet - see the glibc DT_HASH debacle)

6 days ago

anthk

- Waydroid

- Is totally Linux

6 days ago

teaearlgraycold

Anyone here with experience providing multi-tenant Linux systems (CI and the like), do providers usually disable kernel modules they don’t need to eliminate attack surface? Every time one of these comes out I wonder if I should be rotating every key in my GitHub CI or PaaS host. So far I haven’t seen any reports from the providers I use that they were pwned by any of these exploits.

6 days ago

TheDong

A lot of these multi-tenant CI systems actually run everything in microVMs even if they present it to you as a container.

At this point, a microvm can be booted in ~200ms so you don't even have to keep a warm pool, you can just launch em on demand.

GitHub CI (actions) uses virtual machines.

6 days ago

fulafel

Both of these (copy fail and dirtyfrag) exploit obscure socket address families. Are these filtered by commonly used seccomp profiles in eg docker (assuming seccomp can express it)?

6 days ago

YZF

At least in the k8s setup I looked at, the dirtyfrag exploit was blocked (by default):

"XFRM SA registration requires CAP_NET_ADMIN".

6 days ago

fulafel

Right, so it blocked the first part of the chain, which normally uses an unprivileged network namespace to do net-admin operations inside it.

I had been thinking of an RxRPC AF block for the second part of the chain, which seems rarer.

Systemd seems to have this setting for units since 2011:

> The setting RestrictAddressFamilies aims to restrict what socket address families can be used. When using it, the default is that it is used as an allow-list and define what address families can be used.

> Example

> A common combination might look like this.

   [Service]
   SystemCallArchitectures=native
   RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
5 days ago

hughw

Ran as a fresh new default user in a ubuntu:latest container

  git clone https://github.com/V4bel/dirtyfrag.git && cd dirtyfrag && gcc -O0 -Wall -o exp exp.c -lutil && ./exp
Result:

  dirtyfrag: failed (rc=3)
Good news!
6 days ago

stsewd

I got the same result running it inside a container, but got a shell when running it directly on the host. This only shows that the exploit doesn't work inside a container as-is: either containers aren't vulnerable, or the script needs some adjustments to work in containers.

Since copy fail can be used to escape containers (https://github.com/Percivalll/Copy-Fail-CVE-2026-31431-Kuber...), I'm guessing the exploit needs some changes only.

6 days ago

cyphar

The repo you linked works by replacing files that are being used by other privileged containers on the same system. That works for the Kubernetes case (I'm a little surprised they don't use static binaries for their own privileged containers, seems a little dangerous to share any kind of data with untrusted tenants even if it's read-only) but not standalone containers.

However, there is a much easier way of doing a breakout -- you can corrupt the host runc binary in a way analogous to CVE-2019-5736. The next time a container is spawned, the host runc binary will get run as root and that's that.

Ironically, the first version of the protection against this attack I wrote also protected against page cache poisoning (by making a temporary copy of the runc binary during container setup in a sealed memfd and re-execing that) but the runtime cost of copying a 10MB binary at container startup was seen as too expensive by some users[1] so we ended up with a setup that shares the same page cache. I also distinctly remember arguing at the time that something like Dirty Cow could always happen in the future, and the memfd approach was better for that reason -- maybe I should've stuck to my guns more... :/

In practice the solution for containers is to update your seccomp policy to block the vulnerable syscall.

[1]: https://github.com/opencontainers/runc/issues/1980
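
For what it's worth, a minimal Docker seccomp fragment denying AF_RXRPC socket creation could look like the sketch below. This is not production-ready: a real profile should extend Docker's default allow-list rather than start from SCMP_ACT_ALLOW, the value 33 is assumed to be AF_RXRPC from Linux's <linux/socket.h>, and on some 32-bit arches socket() is multiplexed through socketcall so this won't match there.

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["socket"],
      "action": "SCMP_ACT_ERRNO",
      "args": [
        { "index": 0, "value": 33, "valueTwo": 0, "op": "SCMP_CMP_EQ" }
      ]
    }
  ]
}
```

Then run with something like `docker run --security-opt seccomp=deny-rxrpc.json ...` (filename is hypothetical).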

6 days ago

jguarnelli

[dead]

a day ago

percivalll

[dead]

5 days ago

Havoc

Wouldn't count on a container being a reliable testing platform for this. Loads of stuff - legitimate or otherwise - fails in containers

6 days ago

eqvinox

If you don't need it (rootless containers), you can disable unprivileged userns to block these two:

  echo 1 | sudo tee /proc/sys/kernel/apparmor_restrict_unprivileged_userns
May also break sandboxes (e.g. browser) though.
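
On distros without that AppArmor knob, roughly equivalent switches exist as sysctls (a sketch; the first is the upstream one, the second exists only on Debian/Ubuntu-patched kernels):

```shell
# Upstream: cap the number of user namespaces any user may create
sudo sysctl -w user.max_user_namespaces=0

# Debian/Ubuntu patched kernels expose a dedicated toggle instead
sudo sysctl -w kernel.unprivileged_userns_clone=0
```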
6 days ago

baggy_trough

6 days ago

RandomGerm4n

Perhaps we should consider designing distributions to be more tailored to specific purposes. Since no one needs the affected module on a desktop computer, distributions designed for that purpose should no longer include it by default. If this approach were consistently followed, significantly fewer systems would be vulnerable to such exploits. For most users, a system with a kernel as minimalistic as the Android GKI kernel, combined with sensible SELinux policies, would likely be sufficient.

6 days ago

fulafel

Both of the modules are (also) for desktop/workstation use. Though AFS could probably be retired generally.

6 days ago

unethical_ban

Here's a general question: are these vulnerabilities hitting Linux more than the BSDs due to it being a larger target, or because its architecture is less secure by design?

6 days ago

vsgherzi

It’s two things:

1. Fewer eyes are on the BSDs.

2. BSDs don’t have the same optimizations that Linux has. BSDs generally try to pursue correctness.

That being said, there were just a bunch of vulnerabilities in FreeBSD.

macOS has had its own Dirty Cow attack, and I know there are for sure more memory ones just based on the way the XNU kernel works.

So no, Linux isn’t really worse per se.

6 days ago

staticassertion

Larger target.

6 days ago

golem14

in many ways:

- more people are using it (assuming macOS is in its own bucket perhaps)

- bigger surface areas (esp. NetBSD has, in my limited understanding, just less stuff that can go boom)

- more churn, i.e. more new stuff that can be buggy, released more often.

Of course, because of that, more eyes are on Linux, so I'm not sure where that security tradeoff is.

6 days ago

ahartmetz

AFAIU, Linux and the BSDs have basically the same architecture - the BSDs just value secure, simple, understandable code more highly than Linux does, at the cost of features and performance.

6 days ago

angry_octet

This is really not a correct statement beyond the fact that both are a type of Unix.

6 days ago

cluckindan

Linux is not Unix: it is not derived from AT&T Unix.

6 days ago

ahartmetz

Linux 2.2 or 2.4 or so (possibly only Suse Linux) even had a kernel startup message "Unix compliance testing by UNIFIX" or something, back when Unix was considered more prestigious than Linux. It is / was by some official definition "a Unix", though not "UNIX the trademark by AT&T".

6 days ago

cluckindan

I’m fairly certain they’re referring to POSIX compatibility, not calling a Linux a Unix.

6 days ago

ahartmetz

Oh damn, you are probably right.

6 days ago

angry_octet

By that definition, nor is BSD. It's kind of their whole raison d'être.

5 days ago

cluckindan

BSD was originally a derivative of AT&T Unix.

4 days ago

angry_octet

You should read some BSD history.

3 days ago

cluckindan

Still originally a derivative of AT&T Unix. :)

2 days ago

ahartmetz

What are the differences? I think of both as Unix-type systems with macrokernels. I have no practical experience with BSDs.

6 days ago

ahartmetz

Jeez, care to reply instead of downvoting? I would really like to know. I do keep an eye on the BSDs as a good example in some areas where Linux is bad.

5 days ago

unethical_ban

Didn't downvote. But if you don't know the difference, then you seemed pretty confident in describing their differences.

BSDs are an actual fork of UNIX from the 90s. Linux is a kernel whose code is not forked from UNIX. The userland is most often GNU, which stands for GNU's Not Unix.

5 days ago

ahartmetz

That's not an architectural difference though. The way that the systems are technically structured seems very, very similar to me. There are quality differences (BSD generally better) and feature and performance differences (Linux generally better), but not basic approach differences - is that wrong?

4 days ago

zepearl

So if I understand correctly 3 modules are involved:

- esp4 (kernel config "CONFIG_AF_RXRPC")

- esp6 (kernel config "CONFIG_INET_ESP")

- rxrpc (kernel config "CONFIG_INET6_ESP")

Is this correct?

6 days ago

eqvinox

You mixed up the names vs. config options but yes killing those 3 options should make you "safe". No warranty.
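
To check whether a given box is exposed, one can look for the options in the running kernel's config and for the modules already loaded. A sketch (config file location varies by distro; some ship it as /proc/config.gz instead):

```shell
# Is the code built at all?
grep -E 'CONFIG_AF_RXRPC|CONFIG_INET_ESP|CONFIG_INET6_ESP' "/boot/config-$(uname -r)"

# Are the corresponding modules currently loaded?
lsmod | grep -E '^(rxrpc|esp4|esp6)'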

6 days ago

zepearl

damn you're right, thx

6 days ago

Luker88

I can't make it work on nixos. Kernel 7.0.1

I tried fixing the paths and even linking `/bin/bash` to the nix /run/current-system/sw/bin/bash

/etc/passwd is unmodified.

Can anyone else try? CopyFail1 did not work because `su` is only executable, not readable; CopyFail2 worked only partially (it changes /etc/passwd but the user is not passwordless)

5 days ago

titanomachy

I'm not a security expert, but I'm responsible for some (relatively low-stakes) production systems.

It sounds like these two most recent exploits depend on unprivileged user namespaces, and that in fact a high percentage of LPE exploits need this feature. I use rootless containers on a couple of systems (like my dev machine server), but on most of my systems I don't, so it sounds like disabling that would be a good step to hardening my systems against future exploits.

To the security experts: are there any other straightforward configuration changes with such broad-reaching improvement in security posture? Any well-written guides on this subject, something like "top kernel modules to consider disabling if you don't need them"? I'm not talking about the obvious stuff like "disable password SSH", I'm specifically looking for steps that are statistically likely to prevent as-yet-unknown privilege escalation attacks.

6 days ago

staticassertion

You don't need unprivileged user namespaces for this one if you're in a position to get the target kernel module loaded. But yeah, user namespaces are basically the single most significant privesc path in the kernel; maybe io_uring is second. Disabling both (or very carefully deciding what can use them) is one of the best ways to reduce your attack surface.

I don't have any guides but you can determine which kernel modules are already loaded in your system and then just compile those in and block module loading.
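
That "freeze the module set" idea can be sketched like this; kernel.modules_disabled is a one-way switch that only a reboot resets, so make sure everything you need is loaded first:

```shell
# Snapshot what is loaded today, so the list can be compiled into the next kernel
lsmod | awk 'NR > 1 { print $1 }' | sort > /root/modules-in-use.txt

# One-way switch: no further module loading (or unloading) until reboot
sudo sysctl -w kernel.modules_disabled=1
```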

Otherwise, shove everything into a container, ideally gvisor, and you've reduced attack surface by a large chunk again via seccomp.

5 days ago

netheril96

We need an easy way to ensure that only kernel modules in a whitelist can load. I’m tired of blacklisting modules I never need.
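
Until such a mechanism exists, the closest thing is a modprobe.d deny-list. A sketch, using the module names from this thread (`install <mod> /bin/false` also blocks on-demand autoloading, which plain `blacklist` does not):

```shell
# /etc/modprobe.d/deny-dirtyfrag.conf  (hypothetical filename)
install rxrpc /bin/false
install esp4 /bin/false
install esp6 /bin/false
```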

6 days ago

kinow

Just got an email from one HPC I have access to in Germany. I guess all HPCs and services like GH Actions are going to be offline for a bit. I think last time was on a Friday too, so it might be another Friday to organize emails, files, rotate backups/passwords...

6 days ago

m3nu

5 days ago

[deleted]
6 days ago

danborn26

The fragmentation logic in the networking stack has been a recurring source of bugs for years. It is surprising how these edge cases keep surviving multiple security audits.

5 days ago

kro

It's scary to think that some day it will be more than a local attack vector. I don't want to imagine the fallout from a remote rce via tcp/ip.

5 days ago

Tiberium

Do you think with modern LLMs in a few years projects like Linux will have all those low-hanging security bugs fixed? Are we witnessing a transition period, or will nothing change?

6 days ago

tetha

Out of this dataset of 2-3 vulnerabilities, I'm noticing a pattern: All of those are in older and/or niche kernel modules. That raises two thoughts:

Maybe the more regularly used kernel code has a lot of low-hanging security topics shaken out of it already.

And second, I'm indeed wondering what a good path to minimize the loadable kernel code is on a system looks like. My container hosts for example have a fairly well defined set of requirements, and IPSec certainly is not in there. So why not block everything solely made to support IPSec? I'm sure there is more than that.

After all, the most reliable way to higher security is to do less things.

6 days ago

spartanatreyu

LLMs don't matter; Linux's codebase has been growing much faster than it can be secured, so this is all inevitable.

Transitioning components to rust eliminates certain categories of bugs leaving the rest of the bugs to be dealt with.

We'd likely end up needing another language with stronger type and effect systems to eliminate more categories of bugs. Probably something which enforces linear types, capabilities, units of measure types, and effects.

And you'd have to update linux itself to switch to capabilities.

6 days ago

[deleted]
6 days ago

staticassertion

New vulns are introduced to Linux every day. Fuzzers trigger every single day on Linux. No, nothing will improve here from AI.

6 days ago

alex_duf

there's an argument to be made that new code will be inspected before being merged and therefore the classes of bugs an LLM is likely to find will not be merged until it's fixed.

6 days ago

Muromec

There is a finite number of bugs, and better tools that find them mean there are fewer bugs in the code.

6 days ago

staticassertion

We already find bugs constantly in Linux and they go unaddressed, no one even keeps up with syzkaller reports lol

AI is neat because it's higher signal but yeah no, we're not getting anywhere close to "safe linux", AI or not.

6 days ago

Muromec

I want to believe, okay

5 days ago

[deleted]
6 days ago

mikeweiss

Considering AWS just released patches for Copy Fail for Amazon Linux and Bottlerocket only yesterday... I imagine it will be over a week before we see patches for this. This is especially important to fix on Kubernetes nodes... does anyone have any recommendations for mitigating this issue before a patch is released?

6 days ago

caned

The enforcement of read-only protection for pagecache pages (and the scatterlists and/or other structures they point to) seems to be diffuse and incredibly fragile.

6 days ago

[deleted]
6 days ago

jcims

Tested Amazon Linux 2023 and it doesn't appear to be vulnerable in the default configuration. Would be interested if anyone finds anything different.

6 days ago

bytejanitor

Is there a CVE identifier available for this yet?

5 days ago

nxobject

I know this was a thing re: Copy Fail, but... LPE = "local privilege escalation", for everyone not directly involved in security.

5 days ago

Retr0id

Testing the rxrpc vuln on aarch64, I get a kernel data abort, which is interesting. Not looked into the root cause yet!

6 days ago

Retr0id

Huh why is this getting downvoted?

5 days ago

snvzz

There are, in practice, unlimited such bugs in the megabytes of kernel object code.

Monolithic UNIX-like kernels are a bankrupt design.

Only third generation microkernels like seL4[0] make sense in the present world. All effort put elsewhere is wasted outright.

0. https://sel4.systems/

5 days ago

BadBadJellyBean

Well this is getting tiresome. I wish there was a less stressful way to get fixes for such bugs. But the cat is out of the bag now.

Not criticizing whoever found the bug, of course.

6 days ago

fulafel

RxRPC is apparently an AFS (Andrew File System) thing.

6 days ago

[deleted]
6 days ago

oncallthrow

can this also be used to obtain container escape ?

6 days ago

synack

If your container has setuid binaries and these modules are loaded, yes.

6 days ago

lights0123

With the exploits published as-is, you'll only get root inside the container: there's no explicit namespace break, and calling setuid() in a container just gives you root in the container.

However, it can be used to modify files that are passed into the container (e.g. docker run -v), or files that are shared with other containers (e.g. other Docker containers sharing the same layers). kube-proxy with Kubernetes happens to share a trusted binary with containers by default, which is how it can be exploited: https://github.com/Percivalll/Copy-Fail-CVE-2026-31431-Kuber...

6 days ago

miduil

It's poisoning the filesystem cache, if you don't have a setuid binary handy you just poison anything else that gets executed by the host.

6 days ago

aaronmdjones

You don't need any setuid binaries. You could just as easily use the vulnerability to add a job to crontab(5) that causes the cron daemon to run whatever you want as root.

6 days ago

awoimbee

And your containers need to have specific capabilities enabled, which aren't by default on kubernetes and podman.

6 days ago

friedr12

[dead]

6 days ago

x4132

this is why you don't contact distro mailing list. responsible disclosure is dead.

6 days ago

zbentley

At present it looks to me like the embargo was broken by someone identifying the patch as fixing a vulnerability, not someone leaking the mailing list.

More information may come out, or I might be missing something, but assuming that the above is accurate, this isn't a problem with responsible disclosure or mailing list opsec; it's a problem with the nature of open source. Right? Or are folks seriously proposing that the patch/mitigations should have been circulated to distro maintainers privately before going to mainline?

6 days ago

collinmanderson

> Or are folks seriously proposing that the patch/mitigations should have been circulated to distro maintainers privately before going to mainline?

I always assumed that distro maintainers got early access to patches before going mainline but maybe that’s not true?

6 days ago

[deleted]
5 days ago

[deleted]
6 days ago

nicman23

well at least they are not commonly loaded - in like 12 machines I have

6 days ago

lyu07282

Two distro independent LPEs in such a short time, if only all Linux software could be this portable.

6 days ago

normie3000

So umm... should I rush home and turn off all my computers?

6 days ago

arcfour

Are they already vulnerable to RCE as an unprivileged user? Hopefully not.

An LPE only allows an attacker who can already execute code on the system to become root. So, bad, yes, but it doesn't mean you are immediately pwned.

6 days ago

hughw

Should I rush to Lambda or ECS and turn off all my containers sharing a host with who the hell knows?

6 days ago

PhilipRoman

AFAIK Lambda and everything else will use micro-VMs. No serious company would use a shared kernel design for workloads in different security contexts. (Personally I wouldn't even use the same hardware host, but sometimes sacrifices have to be made)

6 days ago

tkel

Like others have said, this will get you root inside the container. It isn't a container escape. File/volume mounts shared across containers would be vulnerable.

6 days ago

arcfour

Firecracker is extremely hardened, so I wouldn't worry about Lambda. As for ECS, getting root doesn't necessarily mean you have a container escape. I think you could escape containers with this exploit, but you would need a different payload than what's published. I could be wrong though.

I would assume AWS is pretty on the ball when it comes to handling stuff like this if they didn't have other defenses or mitigations in place already.

6 days ago

account42

And for a single user desktop, an LPE is almost meaningless as all the really important files are in $HOME and accessible without root.

5 days ago

arcfour

Perhaps, unless you want persistence.

5 days ago

dezgeg

For home computers, essentially https://xkcd.com/1200/ applies.

6 days ago

[deleted]
6 days ago

cynicalsecurity

Imagine how many undiscovered bugs and exploits exist in Windows.

6 days ago

tap-snap-or-nap

Noone has the time given how many windows bugs are already open and active long term.

5 days ago

[deleted]
6 days ago

WindyBolt907

[dead]

5 days ago

WindyBolt907

[dead]

5 days ago

Steinmark

[dead]

5 days ago

CalmBirch127

[dead]

5 days ago

HollowRidge427

[dead]

6 days ago

biennvops

[dead]

5 days ago

[deleted]
6 days ago

QuietLedge375

[dead]

5 days ago

CalmBirch127

[dead]

6 days ago

BoldBrook418

[dead]

5 days ago

QuietLedge375

[dead]

6 days ago

ftheplan9

[flagged]

6 days ago

infrapilot

[flagged]

6 days ago

staticassertion

> The old “quiet patch before disclosure” model may simply not work anymore in the LLM era.

It never did. Trawling the Linux commit history is a tried and true method for finding n-days.

6 days ago

ftheplan9

[flagged]

6 days ago

john_strinlai

>2026-05-07: After obtaining agreement from distribution maintainers to fully disclose Dirty Frag, the entire Dirty Frag document was published.

you think the reporters and the distribution maintainers colluded to... get 5 minutes of attention?

that would be exceptionally stupid of the distribution maintainers and destroy all trust.

6 days ago

acedTrex

Here we go again

6 days ago

7373737373

Tanenbaum was right

6 days ago

TZubiri

Go on...

6 days ago

xxpor

Linux is a single user system and should be treated as such. Run your services as root. Don't rely on unix user primitives for security.

6 days ago

wolttam

Running as root opens you up to a class of vulnerabilities (denial of service, mainly) that you can avoid by not running as root.

That said, running every process in its own micro VM is looking more attractive by the minute.

6 days ago

xxpor

Half the point is that you should always assume that there exists a complete LPE bug.

But yes, micro VMs are a great idea!

6 days ago

amarant

Everything in this comment is wrong.

6 days ago

xxpor

Technically yes. Practically, I disagree.

6 days ago

eqvinox

The part where you run everything as root is particularly stupid. But yes, user isolation has been weakened quite a bit.

6 days ago

Sohcahtoa82

This carries the same energy as "People will break into your car no matter what, so just leave your doors unlocked."

6 days ago

bigbuppo

You say that, but I know someone whose house had their front door kicked in by burglars even though it wasn't even locked.

6 days ago

yencabulator

This actually happened to me. The seats had moved and glovebox was open one morning. Then a second break-in a few days later, and this one damaged the door panel near the lock. I left the doors unlocked for a couple of weeks after, to decrease the break-in damage -- there was never anything of value in the vehicle.

3 days ago

tptacek

The energy here is "so don't leave anything valuable in your car".

6 days ago

angry_octet

Unfortunately that is not what they proposed. To stretch the automotive analogy too far, you could say: if you invite a carjacker in, their seatbelt is not going to stop them from carjacking you.

6 days ago

tptacek

"Avoid shared-kernel attack surfaces" is not an unreasonable proposition in 2026.

6 days ago

JackSlateur

Virtual machines are still the best design and have been for something like 20 years.

Containers are good, as long as they all share the same purpose (read: same application, no multi-tenant).

We all know that multi-user systems (and thus, containers) have a very wide attack surface, while VM attack surface is very limited.

This is why I am totally convinced that:

  - redhat and friends are a terrible idea (licencing forces collocation which reduces segmentation)
  - per-instance pricing (read: public cloud, but not only that) is terrible, for the same reason. Paying per consumed CPU/RAM is sane; paying per VM unit is damaging.
6 days ago

angry_octet

Yes that is reasonable, but dispensing with all on machine controls is not.

6 days ago

0123456789ABCDE

isn't root level access one of the selling points of the cloud vm product line?

5 days ago

angry_octet

That doesn't mean you should run your services as root, it means other users are not sharing your machine/ kernel.

3 days ago

__float

It is very good practical advice.

It also saddens me greatly, imagining what computing could look like if systems evolved differently.

6 days ago

256_

I agree with the general sentiment. I treat anything running arbitrary machine code as if it has full access to a machine. I don't know where you get "run your services as root" from that, though. The principle of least privilege doesn't just apply to running malicious code, but running buggy code whose attack surface is exposed to evil-doers.

6 days ago

fragmede

6 days ago

[deleted]
6 days ago

SupLockDef

Where is the famous "Linux is so much more secure than Windows"?

I would like to see the same hate comments about Linux as the ones we would see if it was a Windows vulnerability...

6 days ago

arian_

Every time someone finds a universal Linux privilege escalation, somewhere a sysadmin whispers 'this is why we don't run as root' while nervously checking if their containers are actually isolated.

6 days ago

minimaltom

This attack class lets you escalate from any user to UID 0. Not running as root won't save you; in fact, this attack is for those processes not running as root.

However, if you are in a user namespace where UID 0 doesn't map to system-wide capabilities, and you don't share page cache for the setuid binaries on the system, this attack doesn't lead to LPE.

6 days ago

delamon

setuid binaries are not the only way to get root. E.g. one can change /etc/crontab or /etc/passwd. Or add trojan to /bin/ls and wait until admin type 'ls'

5 days ago

quantummagic

It's not always as easy as you imply. All the attack vectors you mentioned, require root on the host, before you can make the change or install the trojan.

5 days ago

delamon

The attack gives you ability to overwrite any cached page. So you don't need to be root to "edit" /etc/passwd.

5 days ago

quantummagic

Not of the host system, assuming we're talking about a compromised VM, running as a non-root user.

5 days ago

delamon

I assume you mean container, not VM. But yes, container makes it harder.

5 days ago

minimaltom

Worth adding also that you can only use these vectors to corrupt the page cache for files reachable in your mount namespace.

Usually with containers, almost nothing is shared with the host namespaces (tho likely shared with other container namespaces, hopefully none of those are --priv).

5 days ago

oncallthrow

> this is why we don't run as root

The entire point is that you can escalate to root

6 days ago