
93 points | endorphine | 1 comment
ajross ◴[] No.43537017[source]
This headline badly misunderstands things. C/C++ date from an era where "correctness" in the sense the author means wasn't a feasible feature. There weren't enough cycles at build time to do all the checking we demand from modern environments (e.g. building medium-scale Rust apps on a Sparcstation would take literally *weeks* of build time).

And more: the problem faced by the ANSI committee wasn't that they were tempted to "cheat" by leaving behavior undefined. It's that there was live C code in the world that did this stuff, for real and valid reasons, and they knew that if they published a language that wasn't compatible, no one would use it. But there were also variant platforms and toolchains that didn't do things the same way. So instead of trying to enumerate them all individually (which probably wasn't possible anyway), they identified the areas where they knew they could define firm semantics and allowed the stuff outside that boundary to be "undefined", so existing environments could continue to implement them compatibly.

Is that a good idea for a new language? No. But ANSI wasn't writing a new language. They were adding features to the language in which Unix was already written.

replies(5): >>43537270 #>>43537327 #>>43537466 #>>43537560 #>>43537849 #
pjmlp ◴[] No.43537560[source]
The author is a famous compiler writer who has contributed to GCC's C and C++ compilers, among other work. Regardless of how Go is designed, he does know what he is talking about.
replies(1): >>43538066 #
ajross ◴[] No.43538066[source]
It's still a bad headline. UB et al. weren't added to the language for "performance" reasons, period. They were then, and remain today, compatibility features.
replies(2): >>43538405 #>>43541974 #
fooker ◴[] No.43541974[source]
You are wrong. The formalized concept of UB was introduced for exactly this reason: performance.

Let's take something as simple as divide by zero. Now, suppose you have a bunch of code with random arithmetic operations.

A compiler cannot optimize this code at all without somehow proving that every denominator is nonzero. What UB buys you is that you can optimize the program under the assumption that UB never occurs. If it actually does occur, who cares: the program would have done something bogus anyway.
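
To make that concrete, here is a minimal sketch (a hypothetical function, not from any real codebase) of the transformation that assumption licenses:

    /* Because division by zero is UB, a compiler may assume b != 0
       once the division has executed, and discard the later check. */
    int scaled(int a, int b) {
        int q = a / b;   /* UB if b == 0, so b is assumed nonzero from here on */
        if (b == 0)      /* under that assumption this branch is unreachable   */
            return -1;   /* ...so an optimizer may delete it entirely          */
        return q;
    }

GCC and Clang will typically fold that check away at -O2, precisely because the earlier division has already "promised" that b is nonzero.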

Now think about pointer dereferences, etc etc.
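
The same reasoning applies to pointers. Another hypothetical sketch, this time with the classic null check placed after the dereference:

    #include <stddef.h>

    /* Dereferencing a null pointer is UB, so once the dereference has
       happened the compiler may assume p is non-null. */
    int read_flag(const int *p) {
        int v = *p;      /* UB if p == NULL, so p is assumed non-null now */
        if (p == NULL)   /* assumed unreachable...                        */
            return 0;    /* ...so the guard can be optimized away         */
        return v;
    }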

replies(1): >>43546319 #
ajross ◴[] No.43546319[source]
UB was not introduced to facilitate optimization, period. At the time the ANSI standard was being written, such optimizations didn't even exist yet. The edge case trickery around "assume behavior is always defined" didn't start showing up until the late 90's, a full decade and a half later.

UB was introduced to allow for variant/incompatible platform behavior (in your example, how the hardware treats a divide-by-zero condition) in a way that allowed pre-existing code to remain valid on the platform it was written for, while leaving the core language semantics clear for future code.
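
For concreteness, a toy program (just an illustrative sketch, not anything from the standard) showing the kind of divergence the committee had to accommodate:

    #include <stdio.h>

    int main(void) {
        volatile int num = 1, den = 0;  /* volatile keeps the compiler from
                                           folding the division at build time */
        /* On x86 this division traps (idiv faults, delivered as SIGFPE on
           Unix); on AArch64 the sdiv instruction quietly returns 0. The
           standard calls the operation undefined rather than mandating
           either behavior. */
        printf("%d\n", num / den);
        return 0;
    }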