GCC Preparing to Introduce “-Fhardened” Security Hardening Option
Comments
woodruffw
dooglius
As indicated in that page, this is only an issue when one takes the address of a nested function, it doesn't happen merely by using nested functions.
jcalvinowens
Why would you ever want to do that? It's a weird GNU extension, I've personally never seen it used in real code. I'm really curious.
woodruffw
I don't ever want to do it. I'm worried that code I rely on will do it, and I'll end up building (or running) binaries that are missing basic security mitigations.
jcalvinowens
You can always build with clang, it doesn't support this feature.
A desire to support clang is ubiquitous enough nowadays that weird stuff like this is getting ripped out of most active open source projects.
rwmj
I think the background to this is simplifying the mess of RPM macros which are needed to set all these flags:
https://src.fedoraproject.org/rpms/redhat-rpm-config/blob/ra... https://src.fedoraproject.org/rpms/redhat-rpm-config/blob/ra... https://src.fedoraproject.org/rpms/redhat-rpm-config/blob/ra...
(and several other places). I'm sure Debian has something similar as do other distros, so having one flag which does it all is an advantage.
nerpderp82
Do you know if any distros are going to enable it by default? I think -fhardened should be opt-out, not opt-in.
mrlonglong
Why is it "-Fhardened" and not "-fhardened"?
kevincox
That is Hacker News butchering the title. The flag is in fact -fhardened as stated in TFA.
mrlonglong
Glad to hear that. I was a bit worried the GCC boffins had lost it or something.
redfern314
Every instance in the source article uses lower case; not sure if they changed it or the title just got mangled when posting
tutfbhuf
Does it make sense to compile the Linux kernel with -fhardened?
rwmj
I don't think it would work. Linux uses internal string functions, while _FORTIFY_SOURCE works with help from glibc; plus PIE is just not an appropriate memory model for kernel code.
josefx
Hardening flags can be problematic depending on which assumptions they make. Some years ago Golang ran into problems with kernels proactively checking for the stack guard page; since Go doesn't use one, that safety feature just ended up corrupting random memory on every system call.
dmix
Did Go address these issues or is it more fundamental?
josefx
They switched to using a C shim for system calls, like they were already using for other operating systems without a stable/public system call interface.
I think the affected distros also ended up removing the check over other issues.
nicce
The performance impact would likely be too much. Control-flow integrity protections alone usually add around 5% overhead.
nerpderp82
This line of reasoning will lead to always using insecure systems over secure ones. We lost much larger percentages due to side-channel mitigations. Layout has more effect than -O1 to -O3 (Berger et al). Machines have never been faster than they are now; when will they be fast enough to also be safe by default?
nicce
It is not that simple.
Especially in the kernel: is the likelihood of bugs really so high that we should increase the electricity consumption of everyone using Linux by default?
Side-channel mitigations are not a valid comparison, since they protect against known, reproducible issues, not against "maybe there is a serious bug".
We should utilize the gains we get in increased computational power instead of wasting it all on inefficient software.
Machines are still not efficient enough to use garbage-collected languages alone, which do not have the issues these compiler options mitigate. With efficient software, we can reduce the electricity bill and other resources.
RetroTechie
"We should utilize the gains we get in increased computational power instead of wasting everything to inefficient software."
And yet, horribly inefficient scripting languages are a thing. They save the time of one or two developers ONCE, at the cost of countless users' compute capacity, again and again for as long as that software is used.
As long as that is considered acceptable, your argument holds no water.
Current hardware is way, way, waaayy fast enough for 99.999% of uses. Hardened software or not.
nicce
> As long as that is considered acceptable, your argument holds no water.
It is true that it costs more to hire more skilled developers, and for that reason we suffer. (e.g. Electron).
But even if that happens all the time, it does not mean that my point is invalid. If we want to reduce the energy waste, it starts from software as well.
> Current hardware is way, way, waaayy fast enough for 99.999% of uses. Hardened software or not.
For end-users, yes, but not for cloud computing, servers, and the like, where the real computing happens. You can make significant savings in required processors, memory and electricity simply by writing more efficient software. Just one more way to make the world a greener place.
MiguelX413
Safety is not the same as inefficiency. What's the point of being fast but wrong?
nicce
The point is to focus on writing correct code instead of relying on fallback mechanisms.
E.g. compare the increased popularity of Rust and what it means in this context.
chc4
Android already supports enabling kCFI[0], and says they saw negligible performance and code size impact. Even if it were 5%, security mitigations with a large security impact probably make sense to enable for a lot of use cases.
nicce
There is actually a study which covers both Android and the Linux kernel.
On the Linux kernel the performance impact ranged from 2% up to 25%, and code size increased by around 30%.
On Android there was too much variance, but they note that Google saw around 2-3% overhead, which sounds reasonable.
Without hardware acceleration (e.g. Intel CET), it will likely come at a great cost. But we have yet to see those benchmarks.
I would argue, though, that you can accept a bigger performance impact on Android or consumer phones anyway, since they are not usually performing heavy computation 24/7, and they already have more computing power than most users require.
https://www.duo.uio.no/bitstream/handle/10852/79829/master.p...
_a_a_a_
Can you explain what these integrity protections are and why they're needed, or give a link? TIA
nicce
Wikipedia summarises it quite well. In short, they attempt to prevent code-reuse attacks (ROP/JOP): https://en.wikipedia.org/wiki/Control-flow_integrity
LoganDark
Is it "-Fhardened", "-fhardened", or "-fhardening"?
rurban
-fhardened. see the article.
Just the HN headline is wrong
pxeger1
The auto-capitalisation HN does to headlines seems completely unnecessary to me.
jjgreen
For info, it is not applied to edits of the title
LoganDark
The article includes "-fhardening" as well. I agree that it's most likely to be -fhardened though.
rurban
See https://gcc.gnu.org/pipermail/gcc-patches/2023-September/630...
It's still discussed in gcc-patches, and the name is proposed as -fhardened. Rebased patch is here: https://github.com/rurban/gcc/tree/fhardened
landr0id
> -ftrivial-auto-var-init=pattern
Would be nice if this was zero instead of pattern.
formerly_proven
From the compiler's PoV this is buggy code, so it's better to make it predictably wrong rather than unboundedly incorrect (= security issues) or predictably correct (= people relying on UB).
woodruffw
On top of your reasons (which are good ones!), there’s another good reason to avoid default zero initialization in languages like C: zero is a special value for all kinds of sensitive operations (like UID 0 for root).
In other words: a mitigation that initializes all values to 0 may make some uses of uninitialized variables worse than they were before.
google234123
0 is also probably the most common variable value :p Hard to tell a valid state from an invalid one.
woodruffw
Yes, that's why the "uninitialized" part is important; we're talking about a mitigation that would make UB potentially easier rather than harder to exploit.
Having 0 as a default initialization value in a language where doing so is well defined makes perfect sense; this is primarily an issue for C and C++ (to a lesser extent).
twic
There is a proposal for C++ to zero-initialise automatic (ie local variables):
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p27...
If that goes through, zeroing automatics would just be doing the same thing.
(FYI the feedback section of that paper is quite funny)
tialaramex
Ideally C++ would tell people you can't do that and make it a compiler error ("ill-formed"), but my guess is that too many people insist they ought to be able to take arbitrary C++ 23 code, recompile it with C++ 26, and have that just always work (even though the standard doesn't deliver that), and so it won't happen.
P2723 is unlikely to happen. The "Erroneous behaviour" P2795 might have a better chance. This would say it's wrong to do uninitialized reads (whereas P2723 says they are initialized to zero and thus it's not wrong) but you always get zero anyway.
I think there's a fair chance WG21 manages to make everybody unhappy by kicking this can into the long grass as they have on many other controversial issues.
Zero is the wrong default, it's better than UB, but it's not good. This is actually a problem in languages like Go where zero defaults are core to the language design. The correct thing is that "I didn't initialize it" won't compile. Force the programmer to write what they meant, sometimes they meant zero, or None, or 0.0 or whatever, but surprisingly often when confronted with the question the programmer realises their design is wrong and needs a design level change.
tsimionescu
> Force the programmer to write what they meant, sometimes they meant zero, or None, or 0.0 or whatever, but surprisingly often when confronted with the question the programmer realises their design is wrong and needs a design level change.
I almost sympathize with your point, except we are talking about a language where `Type var;` is the explicit way to initialize many variables to a perfectly well defined value: it is the only way to call the no-arguments constructor for a variable on the stack. It's only for non-class types that this has the bizarre behavior of allocating but not setting any value.
It's even worse in a language with templates:

    template <class T>
    T foo() {
        T local;
        return local;
    }

Can be perfectly correct OR it can be UB depending on the type of T.
gpderetta
> [...] it is the only way to call the no-arguments constructor for a variable on the stack
The syntax:

    T var{};

always value-initializes a stack-allocated variable (or a member variable, or a global).
tsimionescu
I didn't know about the empty initializer list syntax.
Still, reading about it, there are cases where `T var{};` will do something different from `T var;`: if T is an aggregate type, then it will invoke aggregate initialization instead of calling the no-args constructor.
gpderetta
If T is an aggregate, there is, by definition, no constructor (no-argument or otherwise) to call. The aggregate initialization will then recursively value initialize each member, which is what you want.
The only catch is, as usual, list-initialization. You have to hope that T is sane and any list-initialization constructor with an empty list is equivalent to the nullary constructor.
tsimionescu
There was one case I found on SO and later reproduced, where a class B derives publicly from another class A which has a protected no-args constructor. In that case, `B b;` is valid, but `B b{};` is not, since it tries to construct an instance of A from the calling code itself using the protected constructor, which it's obviously not allowed to access.
Overall I think it's safe to say that the two syntaxes have different semantics, even if they overlap in most cases.
gpderetta
I have to try that. There are defect reports for corner cases.
In any case this is also allowed:

    T val = T();

And copy elision is now guaranteed.
tsimionescu
The example looks like this:

    class A {
    protected:
        A() {}
    };
    class B : public A {};

    B x;   // ok
    B y{}; // not ok, can't access A::A()

Would the `T val = T();` example work if you don't have a copy constructor at all, or no move constructor, or custom ones which do weird things?

Edit: I checked, and you're right - the syntax seems to be fully equivalent in C++17 or later. Great to hear!
gpderetta
Played with it a little bit: this seems to be a regression and breaking change from C++14, where B would not be an aggregate, so B{} would just invoke the default constructor. This is probably an oversight; I wonder if there is a Defect Report.
edit: it works by making the inheritance protected, as B is then no longer an aggregate. The right fix would be to also disqualify B from aggregate status if the base class constructor is unreachable.
Also making both A and B non-empty removes aggregate status, so it is really a dark corner of the language.
tialaramex
> it is the only way to call the no-arguments constructor for a variable on the stack.
Is that really true? Ouch. In many languages that wouldn't feel crazy, but in a language where there's a whole book about initialization https://leanpub.com/cppinitbook that feels kinda silly.
tsimionescu
A sibling response pointed out that adding an empty pair of braces (an empty initializer list) after the var name can also invoke the no-args constructor, but it can also do other things depending on the class. So yes, I believe this is the only way of explicitly calling the no-args constructor in-place on a stack variable.
Ideally the syntax `T var();` would have worked as well, but it turns out that it would be ambiguous with declaring a local function named var that takes no arguments and returns a T...
xamuel
>The correct thing is that "I didn't initialize it" won't compile
Flawless detection of uninitialized reads would require solving the halting problem, which is impossible. So requiring initialization does prevent optimal efficiency of some theoretical programs. Of course, this would only matter in cases where performance was extremely critical (and the whole point becomes moot if the alternative is to automatically zero the memory, which is even worse in this pedantic optimal-performance sense).
tialaramex
Having some means (as these C++ proposals all do) to explicitly say "I understand that you can't see why this is correct, but I assure you it is" would be fine, and needn't be introduced to beginners at all. The problem, as usual in C++, is that All The Defaults Are Wrong, and because they're defaults we need to warn beginners about them.
You won't write Rust's MaybeUninit<T>::assume_init() in your first program by mistake, whereas the equivalent mistake in C++ happens easily because it's the default.
tsimionescu
The question essentially is what the statement `T x;` should mean. Today, if T is a class, it means "allocate space for a value of type T and construct it using T::T()". However, if T is a built-in type, it means "allocate space for a value of type T with no defined value", which has proven to be highly problematic in practice.
The situation could be improved in two simple ways. One, you could unify the two meanings, and say that `T x;` allocates space and calls T::T() to initialize the value. The no-args constructor for built-in types already exists and initializes them to 0.
Or, you could also say `T x;` is illegal syntax, one must write `T x = val;` always (or at least when T is a built-in type).
In either case, an escape hatch is needed for allocating uninitialized space on the stack, since there are valid performance reasons for wanting that, in rare cases. But that should be new syntax, it really really shouldn't be the default. So you can still do something like `T x = std::uninitialized();` or whatever the syntax would be to get the current behavior in performance-critical cases, where the tradeoff makes sense.
Personally, especially given C++'s use of templates that don't distinguish between built-in types and classes, I believe the first option makes the most sense, and in fact removes an ugly inconsistency from the language.
saagarjha
The proposal discusses the above concern (as it should, since the author has collected almost every possible version of these concerns). Perhaps one of them will win out and alter the proposal appropriately.
vbezhenar
UB is a property of standard. GCC implements plenty of deviations from standard. Nothing wrong with that, as long as it's explicitly documented.
I'd even argue that defined behaviour is a subset of undefined behaviour. So I'd value compiler options to force well defined and "expected" behaviour instead of the current insanity.
Clang "optimized" away an empty loop. My MCU gets locked up because of it. I have to write `b .` in assembly, because C can't cut it. It is insanity.
saagarjha
Optimizing out an empty loop in C is illegal.
dzaima
That's C++, which is not C. Granted, the C++ behavior is weird and annoying. (the C behavior, while better for truly-infinite loops, is still "broken" for potentially-not-but-still-possibly-infinite loops, though such should be less common)
vbezhenar
Huh, didn't think about it, thanks.
Karellen
That doesn't fit with my understanding of the C abstract machine. Can you give any links that explain this further? (Or to the relevant part of the standard itself?)
dzaima
N1570, 6.8.5, point 6 under "Semantics":

> An iteration statement whose controlling expression is not a constant expression,156) that performs no input/output operations, does not access volatile objects, and performs no synchronization or atomic operations in its body, controlling expression, or (in the case of a for statement) its expression-3, may be assumed by the implementation to terminate.

Namely, the "not a constant expression" restriction is important here. So an empty loop with a non-constant end test can be assumed to terminate, but a constant one (e.g. while(1){} or for(;;);) cannot.

Note that the rules in C++ on this are different, and do allow even a constant-end-condition empty loop to be assumed to terminate.
tialaramex
Further bonus notes, the C++ behaviour is sufficiently controversial and disliked that there is a C++ 26 proposal to "fix" it: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p28...
And, Rust's only actual loop is an infinite loop. Rust's "loop" syntax is an infinite loop, and both "for" and "while" in Rust are just syntax sugar; the documentation explains how to transform your "for" or "while" into the exact same "loop" that the compiler will emit. They're not merely "equivalent": that's how it really works, via a process called "de-sugaring".
Interestingly, "loop" is categorically more powerful than "for" or "while" because it has a type. The type of a "for" or "while" is always the unit type, but the type of a "loop" can be anything: for example, maybe the loop finds a Goose, and the value of your loop is a Goose. This means that to exit the loop we need a Goose, and we can't leave the loop without one.
Because of the C++ misfeature, Rust has sometimes run into problems where LLVM is like "Oh, that's an infinite loop, I'll just ignore it" but LLVM is not a C++ compiler. Clang is a C++ compiler so Clang is allowed to obey C++ rules, but LLVM is not, it's supposed to provide an actual infinite loop, for both C and Rust to use.
gpderetta
I understand that LLVM implements the C++11 memory model, which specifies the termination requirements for side-effect-free loops.
tialaramex
It's true that C++ specifies this as part of its forward progress guarantees, and that's likely how it infected LLVM. But I'd deny it was part of their intent: LLVM's rather sparse documentation of its IR doesn't say "oops, we are actually only suitable as the core of a C++ compiler", especially since LLVM substantially pre-dates Clang...
Lattner started work on Clang in 2006, but LLVM is from 2000.
And sure enough, when the Rust project finds bugs in LLVM related to this, there is no "oh, you can't have the semantics we documented, we actually provide exactly whatever C++ says instead for some reason". Sometimes it's a doc bug, but most often the problem is that, as usual, the optimisation passes assumed something that's just not true outside of C++.
gpderetta
I have no idea where the formal spec for Rust is, but there's this: https://doc.rust-lang.org/nomicon/atomics.html
Edit: LLVM predates Clang, but DragonEgg was a thing.
In any case, before C++11 there was no memory model suitable for a system language[1], so it was the obvious solution.
[1] POSIX, OpenMP and the linux kernel all had memory models, but they were either underspecified, not sufficient or both.
tialaramex
That's telling you that Rust has the C++ Memory Ordering rules, not that it has the C++ Forward Progress guarantee.
C likewise has the C++ Memory Ordering, but not its Forward Progress guarantee. As I wrote earlier, C has infinite loops, they're spelled the way you'd obviously write them in C or C++, but in C they're supposed to actually work (whereas in C++ they are UB). Rust is only different here syntactically, the semantic feature is identical to C's choice.
javier_e06
In my field, zeroes are a problem when there is a byte shift (misaligned data), especially in data transfers. Alignment corruption cannot be detected when the memory area is all zeroes. We use things like 0xaaaa instead.
blackpill0w
I guess the reasoning behind this is that using a pattern (0xFE on GCC 12.2.0) is easier to recognise in a crash dump.
watersb
I suppose that this meta-flag won't work with musl libc.
Does anybody know how this interacts (or is intended to interact) with nested functions? My understanding is that GCC enables executable stacks when nested functions are used[1]; it'd be interesting to know whether they produce an error for this combination or continue to silently disable that mitigation.
[1]: https://gcc.gnu.org/onlinedocs/gccint/Trampolines.html#Suppo...