derriz:
Sane defaults should be table stakes for toolchains but C++ has "history".

All significant C++ code-bases and projects I've worked on have had tens of lines (if not screens) of compiler and linker options - a maintenance nightmare, particularly around optimization. This stuff is brittle: who knows with which release of the compiler or linker a particular combination of optimization flags was actually beneficial? How do you regression-test it? So everyone is afraid to touch it.

Other compiled languages have similar issues but none to the extent of C++ that I've experienced.

rollcat:
It's because the UB must be continuously exploited by compilers for that extra 1% perf gain.

I've been eyeing Zig recently. It makes a lot of choices straightforward yet explicit, e.g. you choose between four optimisation strategies: Debug, ReleaseSafe, ReleaseSmall, and ReleaseFast. Individual programs/libraries can set a default or force one (for the whole program or a single compilation unit), but it's customary to delegate that choice to the person actually building from source.

Even simpler story with Go. It's been designed by people who favour correctness over performance, and most compiler flags (like -race, -asan, -clobberdead) exist to help debug problems.

I've been observing a lot of people complain about declining software quality; yearly update treadmills delivering unwanted features and creating two bugs for each one fixed. Simplicity and correctness still seem to be a niche thing; I salute everyone who actually cares.

nayuki:
> It's because the UB must be continuously exploited by compilers for that extra 1% perf gain.

Your framing of a compiler exploiting UB in programs to gain performance has an undeserved negative connotation. The fact is, programs are mathematical structures/arguments, and if any single step in the program code or execution is wrong, no matter how small, it can render the whole program invalid. Drawing from math analogies where one wrong step leads to an absurd conclusion:

* https://en.wikipedia.org/wiki/All_horses_are_the_same_color

* https://en.wikipedia.org/wiki/Principle_of_explosion

* https://proofwiki.org/wiki/False_Statement_implies_Every_Sta...

* https://en.wikipedia.org/wiki/Mathematical_fallacy#Division_...
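
For instance, the principle of explosion in a couple of lines of Lean (my sketch, not from the links above):

    -- ex falso quodlibet: from a proof of False, any proposition follows
    example (P : Prop) (h : False) : P :=
      False.elim h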

Back to programming, hopefully this example will not be controversial: If a program contains at least one write to an arbitrary address (e.g. `*(char*)0x123 = 0x456;`), the overall behavior will be unpredictable and effectively meaningless. In this case, I would fully agree with a compiler deleting, reordering, and manipulating code as a result of that particular UB.
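
For concreteness, a minimal sketch of the kind of code being described (the function and variable names here are mine, not from the thread):

    /* A wild write through an arbitrary address. Because the store is UB,
       a conforming compiler may assume this path is never taken and may
       delete, reorder, or rewrite anything around it. */
    int g;

    void poke(void)
    {
        *(char *)0x123 = 0x45;  /* UB: write to an arbitrary address */
        g = 1;                  /* no guarantee this store survives */
    }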

You could argue that C shouldn't have been designed so that reading out of bounds is UB. Instead, it should read some arbitrary value without crashing or cleanly segfault at that instruction, with absolutely no effects on any surrounding code.

You could argue that C/C++ shouldn't have made it UB to dereference a null pointer for reading, but I fully agree that dereferencing a null pointer for a method call or writing a field must be UB.

Another programming analogy: let's forget about UB. Say you're writing a hash table in Java (in the normal safe subset, without JNI or Unsafe). If you get even one statement wrong in the data structure implementation, there can still be arbitrarily large consequences: dropping values when you shouldn't, miscounting how many values exist, duplicating values when you shouldn't, or carrying incorrect state that causes subtle failures far in the future. The consequences are not as severe and pervasive as UB at the language level, but it will still produce corrupt data and/or unpredictable behavior for the user of that library code, which can in turn have arbitrarily large consequences. I guess the only difference compared to C/C++ UB is that C/C++ has more "spooky action at a distance", where one piece of UB can have very non-local consequences. But even incorrect code in safe Java can have large consequences, just perhaps not as large on average.

I am not against compilers "exploiting" UB for performance gain. But these are the ways forward that I believe in, for any programming language in general:

* In the language specification, reduce the number of cases/places that are undefined. Not only does it reduce the chances of bad things happening, but it also makes the rules easier to remember for humans, thus making it easier to avoid triggering these cases.

* Adding to that point, favor compile-time errors over run-time UB. For example, reading from an uninitialized local variable is a compile error in Java but UB in C. Rust's whole shtick about lifetimes and borrowing is one huge transformation of run-time problems into compile-time problems.

* Overwhelmingly favor safety by default. For example, array accesses should be bounds-checked using the convenient operator like `array[index]`, whereas the unsafe unchecked version should be something obnoxious and ugly like `unsafe { array.get_unchecked(index) }`. Make the safe way easy and make the unsafe way hard - the exact opposite of C/C++.

* Provide good (and preferably complete) sanitizer tools to check that UB isn't triggered at run time (see the sketch after this list). C and C++ did not have these for the first few decades of their existence, so you were flying blind when you triggered UB.
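
To make these points concrete, here is a small sketch (the file name and values are mine). Building with `-fsanitize=address` should flag the out-of-bounds read at run time, while the uninitialized read is MemorySanitizer territory (`clang -fsanitize=memory`):

    /* ub_demo.c -- two classic UB cases, silent under a plain -O2 build */
    int sum(void)
    {
        int a[4] = {1, 2, 3, 4};
        int x;                  /* never initialized */
        int s = x;              /* UB in C; a compile-time error in Java */
        for (int i = 0; i <= 4; i++)
            s += a[i];          /* i == 4 reads one past the end: more UB */
        return s;
    }

    int main(void) { return sum(); }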

motorest:
> Your framing of a compiler exploiting UB in programs to gain performance has an undeserved negative connotation. The fact is, programs are mathematical structures/arguments, and if any single step in the program code or execution is wrong, no matter how small, it can render the whole program invalid.

You're failing to understand the problem domain, and consequently you're oblivious to how UB is actually a solution to problems.

There are two sides to UB: the one associated with erroneous programs, where clueless developers unwittingly do things that the standard explicitly states lead to unknown and unpredictable behavior, and the one that leads to valid programs, where developers knowingly adopt an implementation that specifies exactly what behavior to expect from operations the standard leaves undefined.

Somehow, those who mindlessly criticize UB only parrot the simplistic take on it, the "nasal demons" blurb. They don't stop to think about what undefined behavior actually is, or why a programming language specification would purposely leave specific behavior undefined instead of unspecified or even implementation-defined. They don't understand what they are discussing, and don't invest a moment in understanding why things are the way they are and what problems they solve. They just parrot clichés.
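
For what it's worth, a minimal sketch of the three categories the standard distinguishes (the example values are mine, not the parent's):

    #include <limits.h>
    #include <stdio.h>

    static int f(void) { puts("f"); return 1; }
    static int g(void) { puts("g"); return 2; }

    int main(void)
    {
        int a = -1 >> 1;    /* implementation-defined: the compiler must
                               document the result of right-shifting a
                               negative value */
        int b = f() + g();  /* unspecified: f and g may be called in either
                               order, but some concrete order is chosen */
        int c = INT_MAX;
        c = c + 1;          /* undefined: signed overflow; the standard
                               imposes no requirements at all */
        printf("%d %d %d\n", a, b, c);
        return 0;
    }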

cowboylowrez:
From the ubc.pdf paper linked in this thread:

    int d[16];

    int SATD (void)
    {
        int satd = 0, dd, k;

        for (dd = d[k = 0]; k < 16; dd = d[++k]) {
            satd += (dd < 0 ? -dd : dd);
        }
        return satd;
    }
This was “optimized” by a pre-release of gcc-4.8 into the following infinite loop:

    SATD:
    .L2:
            jmp     .L2

(simply because the compiler concludes that k < 16 is always true, since k is used as an index into an array of known size 16)

I mean that's just sort of nuts - how do you loop over an array in a UB-free manner then? The paper describes this situation being remediated:

"The GCC maintainers subsequently disabled this optimization for the case occuring in SPEC"

I try to keep up with the UB thing, but for current code I just use -O0, because it's fast enough and apparently allows me to keep an array index in bounds. Reading about this leaves me thinking that some of the UB criticism might not be so mindless.

tyg13:
Leaving aside the fact that that code reads the array out of bounds (which is not a trivial security issue), it's a ridiculously obtuse way to write that loop. For-loop conditions should almost always be expressed in terms of their induction variable. A much cleaner and safer version is

    int d[16];

    int SATD (void)
    {
        int satd = 0, k;

        for (k = 0; k < 16; ++k)
            satd += d[k] < 0 ? -d[k] : d[k];
        return satd;
    }
cowboylowrez:
I don't understand your logic though. According to my obviously inadequate understanding of UB, "k < 16" can never be false because it's used as an array index for d[16]??? Is the critical difference here that the array access happens outside of the "for" expression?
tyg13:
> Is the critical difference here that the array access happens outside of the "for" expression?

Precisely: this means that `d[k]` is guaranteed to execute before the check that `k < 16`. In general, if you have some access like `d[k]` where `k` is some integer and `d` is some array of size `N`, you can assume that `k < N` on all paths which are dominated by the statement containing `d[k]`. In simpler terms, the optimizer will assume that `k < N` is true on every path after the access to `d[k]` occurs.

To make this clearer, consider an equivalent, slightly-transformed version of the original code:

    int d[16];

    int SATD (void)
    {
        int satd = 0, dd, k;

        dd = d[k = 0];
        do {
            satd += (dd < 0 ? -dd : dd);
            k = k + 1;
            dd = d[k];    // At this point, k must be < 16.
        } while (k < 16); // The value of `k` has not changed, thus our previous
                          // assumption that `k < 16` must still hold, and the
                          // check can be simplified to `true`.
        return satd;
    }
Now consider a slightly-transformed version of the correct code:

    int d[16];

    int SATD (void)
    {
        int satd = 0, k;

        k = 0;
        do {
            satd += d[k] < 0 ? -d[k] : d[k]; // At this point, k must be < 16.
            k = k + 1;
        } while (k < 16); // The value of `k` has changed -- at best, we can
                          // assert that `k < 17`, since its value increased by
                          // 1 after we last assumed it was less than 16. The
                          // assumption that `k < 16` no longer holds, and this
                          // check cannot be simplified.
        return satd;
    }
It's important to understand this in terms of dominance (in the graph-theoretical sense), because statements like "k < 16 can never be false because it's used in d[k] where k == 16" or "the compiler will delete checks for k < 16 if it knows that d[16] occurs", which seem equivalent to the dominance criterion stated above, simply are not. It's not that the compiler detects UB and then deletes your checks -- it's that it assumes UB never occurs in the first place.
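
The same assumption will happily fold an explicit check. A minimal sketch (the function is mine, not from the thread):

    int d[16];

    int get(int k)
    {
        int v = d[k];   /* this access dominates everything below, so the
                           optimizer may assume k < 16 from here on */
        if (k < 16)     /* dominated by the access: foldable to `true` */
            return v;
        return -1;      /* dead code as far as the optimizer is concerned */
    }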