
GCC 15.1

(gcc.gnu.org)
270 points by jrepinc | 10 comments
Calavar ◴[] No.43792948[source]
> {0} initializer in C or C++ for unions no longer guarantees clearing of the whole union (except for static storage duration initialization), it just initializes the first union member to zero. If initialization of the whole union including padding bits is desirable, use {} (valid in C23 or C++) or use -fzero-init-padding-bits=unions option to restore old GCC behavior.

This is going to silently break so much existing code, especially union-based type punning in C code. {0} used to guarantee full zeroing and {} did not, and step by step we've flipped the situation to the reverse. The only sensible thing, in terms of not breaking old code, would be to have both {0} and {} zero-initialize the whole union.

I'm sure this change was discussed in depth on the mailing list, but it's absolutely mind-boggling to me.
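
To make the hazard concrete, here is a minimal sketch (my own example, not from the release notes) of the kind of union code whose meaning changes:

    /* Union type punning that old code relied on: with GCC <= 14, {0}
       cleared the entire union; per the GCC 15 notes it only initializes
       the first member (here `tag`), so the remaining bytes may be
       indeterminate unless you use {} (standard in C23/C++, a GNU
       extension before) or -fzero-init-padding-bits=unions. */
    #include <stdio.h>

    union msg {
        char tag;                 /* first member: the only thing {0} now zeroes */
        unsigned char bytes[16];
    };

    int main(void) {
        union msg a = {0};        /* old code assumed all 16 bytes were zero */
        union msg b = {};         /* empty initializer: whole union cleared */

        for (int i = 0; i < 16; i++)
            printf("%02x ", a.bytes[i]);   /* bytes 1..15 rely on the old behavior */
        printf("\n%02x\n", b.bytes[15]);   /* guaranteed 00 */
        return 0;
    }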

replies(14): >>43793036 #>>43793080 #>>43793121 #>>43793150 #>>43793166 #>>43794045 #>>43794558 #>>43796460 #>>43798312 #>>43798826 #>>43800132 #>>43800234 #>>43800932 #>>43800975 #
myrmidon ◴[] No.43794045[source]
I honestly feel that "uninitialized by default" is strictly a mistake, a relic from the days when C was basically cross-platform assembly language.

Zero-initialized-by-default for everything would be an extremely beneficial tradeoff IMO.

Maybe with a __noinit attribute or somesuch for the few cases where you don't need a variable to be initialized AND the compiler is too stupid to optimize the zero-initialization away on its own.

This would not even break existing code, just lead to a few easily fixed performance regressions, but it would make it significantly harder to introduce undefined and difficult-to-spot behavior by accident (because very often code assumes zero-initialization and gets it purely by chance, and that is most likely to happen in the edge cases that might not be covered by tests under a memory sanitizer, if you even have those).
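
For a sense of what that would look like in practice, here is a rough sketch using the closest existing knobs I know of -- GCC/Clang's -ftrivial-auto-var-init=zero plus the `uninitialized` variable attribute standing in for the hypothetical __noinit:

    /* Build with: gcc -O2 -ftrivial-auto-var-init=zero demo.c
       (assumption: the flag and attribute as in GCC >= 12 / recent Clang) */
    #include <stdio.h>

    int main(void) {
        int counter;    /* zero-filled under the flag; garbage in plain C semantics */

        /* Opt-out for a hot buffer that is provably written before use,
           playing the role of __noinit from the comment above. */
        unsigned char scratch[64] __attribute__((uninitialized));

        for (size_t i = 0; i < sizeof scratch; i++)
            scratch[i] = (unsigned char)i;
        for (size_t i = 0; i < sizeof scratch; i++)
            counter += scratch[i];          /* counter started at 0, not garbage */

        printf("%d\n", counter);
        return 0;
    }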

replies(6): >>43794119 #>>43794483 #>>43794611 #>>43794707 #>>43796274 #>>43799214 #
1. elromulous ◴[] No.43794119[source]
Devil's advocate: this would be unacceptable for OS kernels and super performance-critical code (e.g. HFT).
replies(5): >>43794300 #>>43794341 #>>43794380 #>>43795075 #>>43800870 #
2. sidkshatriya ◴[] No.43794300[source]
Would you rather have an HFT trade go correctly and a few nanoseconds slower, or a few nanoseconds faster but with some edge-case bugs related to variable initialisation?

You might claim that you can have both, but bugs are far more likely in the uninitialised-by-default scenario. I doubt that variable initialisation is the thing that would slow down HFT; I would posit it is things like network latency that dominate.

replies(1): >>43795813 #
3. myrmidon ◴[] No.43794341[source]
No, just throw the __noinit attribute at every place where it's needed.

You probably would not even need it in a lot of instances because the compiler would elide lots of dead stores (zeroing) even without hinting.
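
As a rough illustration of that point (a toy example of my own, not anything from GCC's docs), the explicit zeroing below is a dead store that the optimizer can drop, because every byte is overwritten before it is ever read:

    #include <stdio.h>
    #include <string.h>

    int checksum(const unsigned char *src) {
        unsigned char buf[64] = {0};     /* zero-init the compiler may elide... */
        memcpy(buf, src, sizeof buf);    /* ...since this overwrites all of it */

        int s = 0;
        for (size_t i = 0; i < sizeof buf; i++)
            s += buf[i];
        return s;
    }

    int main(void) {
        unsigned char data[64];
        for (size_t i = 0; i < sizeof data; i++)
            data[i] = (unsigned char)i;
        printf("%d\n", checksum(data));  /* gcc -O2 -S: the zeroing is typically gone */
        return 0;
    }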

4. pjmlp ◴[] No.43794380[source]
It is acceptable enough for Windows, Android and macOS, which have been doing it for at least the last five years.

That is the usual fearmongering whenever security improvements are made to C and C++.

5. TuxSH ◴[] No.43795075[source]
> this would be unacceptable for os kernels

Depends on the boundary. I can give a non-Linux, microkernel example (but one that was/is shipped on tens of millions of devices):

- prior to 11.0, Nintendo 3DS kernel SVC (syscall) implementations did not clear output parameters, leading to extremely trivial leaks. Unprivileged processes could retrieve kernel-mode stack addresses easily, making exploit code much easier to write; example here: https://github.com/TuxSH/universal-otherapp/blob/master/sour...

- Nintendo started clearing all temporary registers on the Switch kernel at some point (iirc x0-x7 and some more); on the 3DS they never did that, and you can leak kernel object addresses quite easily (iirc by reading r2), which made an entire class of use-after-free and arbwrite bugs easier to exploit (call SvcCreateSemaphore 3 times, get the sema kernel object address, use one of the now-patched exploits that can cause a double-decref on the KSemaphore, call SvcWaitSynchronization, profit)

more generally:

- uncleared padding in structures + copy to user = infoleak (see the sketch below)

so one ought, at the very least, to be careful when crossing privilege boundaries
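
A userspace sketch of that padding pattern (my own illustration: memcpy stands in for the kernel's copy-to-user primitive, and the struct and field names are made up):

    #include <stdio.h>
    #include <string.h>

    struct reply {
        unsigned char flag;   /* 1 byte, then (typically) 3 bytes of padding */
        unsigned int  value;  /* 4 bytes */
    };

    int main(void) {
        unsigned char user_buf[sizeof(struct reply)];

        struct reply r;                  /* members set below; padding never written */
        r.flag  = 1;
        r.value = 42;

        memcpy(user_buf, &r, sizeof r);  /* "copy to user": the padding bytes go too */

        for (size_t i = 0; i < sizeof r; i++)
            printf("%02x ", user_buf[i]);  /* bytes 1-3 may hold stale stack data */
        printf("\n");

        /* Fix: memset(&r, 0, sizeof r) before filling it in -- or an initializer
           that is guaranteed to clear padding, which loops back to the {0} vs {}
           discussion at the top of the thread. */
        return 0;
    }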

6. hermitdev ◴[] No.43795813[source]
> Would you rather have a HFT trade go correctly and a few nanoseconds slower or a few nanoseconds faster but with some edge case bugs related to variable initialisation ?

As someone who works in the HFT space: it depends. How frequently and how bad are the bad-trade cases? Some slop happens. We make trade decisions with hardware _without even seeing an entire packet coming in on the network_. Mistakes/bad trades happen. Sometimes it results in trades that don't go our way or missed opportunities.

Just as important as "can we do better?" is "should we do better?". Queue priority at the exchange matters. Shaving nanoseconds is how you get a competitive edge.

> I would posit is it things like network latency that would dominate.

Everything matters. Everything is measured.

edit to add: I'm not saying we write software that either has or relies upon uninitialized values. I'm just saying in such a hypothetical, it's not a cut-and-dried "do the right thing (correct according to the language spec)" decision.

replies(1): >>43798225 #
7. Imustaskforhelp ◴[] No.43798225{3}[source]
> We make trade decisions with hardware _without even seeing an entire packet coming in on the network_

Wait what????

Can you please educate me on high-frequency trading? I don't understand the point of it. Say one person has created an HFT bot; why the need for other bots, beyond simply running different trading strategies? And I don't think these are profitable / how do they compare in the long run with the Boglehead strategy?

replies(1): >>43798549 #
8. hermitdev ◴[] No.43798549{4}[source]
This is a vast, _vast_ over-simplification: the primary "feature" of HFT is providing liquidity to the market.

HFT firms are (almost) always willing to buy or sell at or near the current market price. HFT firms basically race each other for trade volume from "retail" traders (and sometimes each other). HFTs make money off the spread - the difference between the bid & offer - typically only a cent. You don't make a lot of money on any individual trade (and some trades are losers), but you make money on doing a lot of volume. If done properly, it doesn't matter which direction the market moves for an HFT, they'll make money either way as long as there's sufficient trading volume to be had.

But honestly, if you want to learn about HFT, best do some actual research on it - I'm not a great source as I'm just the guy that keeps the stuff up and running; I'm not too involved in the business side of things. There's a lot of negative press about HFTs, some positive.

9. saagarjha ◴[] No.43800870[source]
The same OS kernel that zeros out pages before handing them back to me?
replies(1): >>43800899 #
10. frontfor ◴[] No.43800899[source]
This is arguing in bad faith. Just because the kernel does that doesn’t mean it does that everywhere else.