
229 points pjmlp | 2 comments
derriz ◴[] No.43534525[source]
Sane defaults should be table stakes for toolchains but C++ has "history".

All significant C++ code-bases and projects I've worked on have had tens of lines (if not screens) of compiler and linker options - a maintenance nightmare, particularly for anything related to optimization. This stuff is so brittle: who knows with which release of the compiler or linker a particular combination of optimization flags was actually beneficial? How do you regression-test it? So everyone is afraid to touch it.

Other compiled languages have similar issues but none to the extent of C++ that I've experienced.

replies(4): >>43534781 #>>43535229 #>>43535747 #>>43543362 #
rollcat ◴[] No.43535747[source]
It's because the UB must be continuously exploited by compilers for that extra 1% perf gain.

I've been eyeing Zig recently. It makes a lot of choices straightforward yet explicit, e.g. you choose between four optimisation strategies: debug, safety, size, perf. Individual programs/libraries can have a default or force one (for the whole program or a compilation unit), but it's customary to delegate that choice to the person actually building from source.

Even simpler story with Go. It's been designed by people who favour correctness over performance, and most compiler flags (like -race, -asan, -clobberdead) exist to help debug problems.

I've been observing a lot of people complain about declining software quality; yearly update treadmills delivering unwanted features and creating two bugs for each one fixed. Simplicity and correctness still seem to be a niche thing; I salute everyone who actually cares.

replies(1): >>43539554 #
nayuki ◴[] No.43539554[source]
> It's because the UB must be continuously exploited by compilers for that extra 1% perf gain.

Your framing of a compiler exploiting UB in programs to gain performance has an undeserved negative connotation. The fact is, programs are mathematical structures/arguments, and if any single step in the program code or execution is wrong, no matter how small, it can render the whole program invalid. Drawing from math analogies where one wrong step leads to an absurd conclusion:

* https://en.wikipedia.org/wiki/All_horses_are_the_same_color

* https://en.wikipedia.org/wiki/Principle_of_explosion

* https://proofwiki.org/wiki/False_Statement_implies_Every_Sta...

* https://en.wikipedia.org/wiki/Mathematical_fallacy#Division_...

Back to programming, hopefully this example will not be controversial: If a program contains at least one write to an arbitrary address (e.g. `*(char*)0x123 = 0x456;`), the overall behavior will be unpredictable and effectively meaningless. In this case, I would fully agree with a compiler deleting, reordering, and manipulating code as a result of that particular UB.

You could argue that C shouldn't have been designed so that reading out of bounds is UB. Instead, it should read some arbitrary value without crashing or cleanly segfault at that instruction, with absolutely no effects on any surrounding code.

You could argue that C/C++ shouldn't have made it UB to dereference a null pointer for reading, but I fully agree that dereferencing a null pointer for a method call or writing a field must be UB.
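To make concrete what the "exploiting" looks like in practice, here is a minimal C sketch (the function and names are illustrative, not from any particular codebase): because dereferencing a null pointer is UB, the compiler may assume `p` is non-null after the first dereference and silently delete the later check.

    int read_field(int *p) {
        int x = *p;          /* UB if p == NULL */
        if (p == NULL) {     /* the optimizer may delete this check, */
            return -1;       /* since p was already dereferenced     */
        }
        return x;
    }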

Another programming analogy: let's forget about UB. Say you're writing a hash table in Java (in the normal safe subset, without JNI or Unsafe). If you get even one statement wrong in the data structure implementation, there can still be arbitrarily large consequences: dropping values when you shouldn't, miscounting how many values exist, duplicating values, carrying incorrect state that causes subtle failures far in the future, etc. The consequences are not as severe and pervasive as UB at the language level, but it will still result in corrupt data and/or unpredictable behavior for the user of that library code, which can in turn have arbitrarily large consequences. I guess the only difference compared to C/C++ UB is that C/C++ has more "spooky action at a distance", where some piece of UB can have very non-local consequences. But even incorrect code in safe Java can produce large consequences, maybe just not as large on average.

I am not against compilers "exploiting" UB for performance gain. But these are the ways forward that I believe in, for any programming language in general:

* In the language specification, reduce the number of cases/places that are undefined. Not only does it reduce the chances of bad things happening, but it also makes the rules easier to remember for humans, thus making it easier to avoid triggering these cases.

* Adding to that point, favor compile-time errors over run-time UB. For example, reading from an uninitialized local variable is a compile error in Java but UB in C (see the sketch after this list). Rust's whole shtick about lifetimes and borrowing is one huge transformation of run-time problems into compile-time problems.

* Overwhelmingly favor safety by default. For example, array accesses should be bounds-checked using the convenient operator like `array[index]`, whereas the unsafe unchecked version should be something obnoxious and ugly like `unsafe { array.get_unchecked(index) }`. Make the safe way easy and make the unsafe way hard - the exact opposite of C/C++.

* Provide good (and preferably complete) sanitizer tools to check that UB isn't triggered at run time. C/C++ did not have these for the first few decades of their lives, and you were flying blind when triggering UB.
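As a minimal C sketch of two of these UB cases (illustrative only): the uninitialized read below is a compile-time error in Java but compiles silently in C, and the signed overflow is UB that a sanitizer such as `clang -fsanitize=undefined` will flag at run time.

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int x;                /* uninitialized: reading x below is UB in C, */
        printf("%d\n", x);    /* but a compile-time error in Java           */

        int y = INT_MAX;
        y = y + 1;            /* signed overflow: UB that UBSan flags at run time */
        printf("%d\n", y);
        return 0;
    }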

replies(1): >>43543310 #
motorest ◴[] No.43543310[source]
> Your framing of a compiler exploiting UB in programs to gain performance has an undeserved negative connotation. The fact is, programs are mathematical structures/arguments, and if any single step in the program code or execution is wrong, no matter how small, it can render the whole program invalid.

You're failing to understand the problem domain, and consequently you're oblivious to how UB is actually a solution to problems.

There are two sides to UB: the one associated with erroneous programs, where clueless developers unwittingly do things that the standard explicitly states lead to unknown and unpredictable behavior, and the one that leads to valid programs, where developers knowingly adopt an implementation that specifies exactly what behavior to expect from doing things the standard leaves as UB.

Somehow, those who mindlessly criticize UB only parrot the simplistic take on UB, the "nasal demons" blurb. They don't even stop to think about what undefined behavior is, or why a programming language specification would purposely leave specific behavior undefined instead of unspecified or even implementation-defined. They do not understand what they are discussing and don't invest a moment in trying to understand why things are the way they are, and what problems are solved by them. They just parrot clichés.

replies(2): >>43546745 #>>43552317 #
cowboylowrez ◴[] No.43552317[source]
From the ubc.pdf paper linked in this thread:

    int d[16];

    int SATD(void)
    {
        int satd = 0, dd, k;
        for (dd = d[k = 0]; k < 16; dd = d[++k]) {
            satd += (dd < 0 ? -dd : dd);
        }
        return satd;
    }
This was “optimized” by a pre-release of gcc-4.8 into the following infinite loop:

    SATD:
    .L2:
            jmp .L2

(The compiler reasons that k < 16 must always be true: k is used as an index into an array of known size 16, and reading d[16] would be UB, so it assumes the exit condition can never fail.)

I mean, that's just sort of nuts - how do you loop over an array in a UB-free manner then? (One way is sketched below.) The paper referred to this situation being remediated:

"The GCC maintainers subsequently disabled this optimization for the case occuring in SPEC"

I try to keep up with the UB thing, but for current code I just use -O0 because it's fast enough and apparently allows me to keep an array index in bounds. Reading about this leaves me thinking that some of this UB criticism might not be so mindless.
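For reference, one UB-free way to write the same loop (a sketch - any form that tests the index before each access works):

    int d[16];

    int SATD(void)
    {
        int satd = 0;
        for (int k = 0; k < 16; k++) {  /* k is checked before d[k] is read */
            int dd = d[k];
            satd += (dd < 0 ? -dd : dd);
        }
        return satd;
    }

This processes all 16 elements and never touches d[16], so there is nothing for the compiler to "exploit".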

replies(3): >>43553392 #>>43553519 #>>43556332 #
dapperdrake ◴[] No.43556332[source]
After thinking about it and reading the other comments, the for loop needs to be changed (k<15), because ++k sets k to 16 in the last step of the version in the parent comment.

This one is nasty.

And it will still cause trouble with arrays of size close to INT_MAX.

Fun times.

replies(1): >>43556626 #
1. nayuki ◴[] No.43556626[source]
The problem isn't ++k per se. The problem is the expression d[++k], which immediately reads d[16] before realizing that `k < 16` is false. Look at your sibling comments for explanations and corrections: https://news.ycombinator.com/item?id=43553392 , https://news.ycombinator.com/item?id=43553519

> And it will still cause trouble close to arrays if size INT_MAX.

Fun fact: `for (int i = 0; i <= INT_MAX; i++) {}` is undefined behavior in C/C++ (the final `i++` overflows INT_MAX) but a well-defined infinite loop in Java (every int is <= INT_MAX, and overflow wraps).
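If you actually want to visit every non-negative int in C without UB, one sketch is to widen the loop variable (assuming `long long` is wider than `int`, as on all mainstream platforms):

    #include <limits.h>  /* for INT_MAX */

    /* visits 0 .. INT_MAX and terminates; i itself never overflows */
    for (long long i = 0; i <= INT_MAX; i++) {
        int x = (int)i;  /* safe: i is within int range inside the loop */
        /* ... use x ... */
    }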

replies(1): >>43558046 #
2. dapperdrake ◴[] No.43558046[source]
The problem is checking for k<16, incrementing k, and then using k to access array d[16] afterwards. That's how it goes out of bounds. The condition is inadequate for ensuring that k stays within the bounds of array d[16].

It seems like k is "obviously" an array index for array d[16] here, but whatever. Not in a position to have that discussion right now.

As for Java (too lazy to look that up right now): that at least sounds like they enforce two's-complement representation for signed int values. C89 has UB here because ancient hardware also used ones'-complement. And C89 wanted to be portable, as a portable assembler. Well, if UB really makes anything portable. They didn't even go with platform-dependent, a.k.a. implementation-defined, behavior. Instead they specifically chose undefined behavior. The argument brought forward, at least today, is that C isn't supposed to be a portable assembler. Now which one is it supposed to be? It cannot be both at the same time.
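For comparison, a small C sketch (illustrative): unsigned arithmetic in C wraps by definition, which is essentially the behavior Java mandates for its signed int, while signed overflow in C remains UB.

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        unsigned int u = UINT_MAX;
        u = u + 1;            /* well-defined in C: wraps to 0 */
        printf("%u\n", u);    /* prints 0 */

        int s = INT_MAX;
        /* s = s + 1;            UB in C; Java defines this to wrap
                                 to Integer.MIN_VALUE */
        (void)s;
        return 0;
    }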