[1]: https://gavinhoward.com/2023/02/why-i-use-c-when-i-believe-i...
> The question is: should compiler authors be able to do whatever they want? I argue that they should not.
My question is this: I see so many C programmers bemoaning the fact that modern compilers exploit undefined behavior to the fullest extent, yet I almost never see those programmers actually writing a "reasonable"/"friendly"/"boring" C compiler. Why is no one willing to put their ~money~ time where their mouth is?
Because it is not much harder to simply write a new language, and then you can discard all the baggage? Lots of verbiage gets spilled about undefined behavior, but things like the preprocessor and the lack of "slices" are far bigger faults of C.
Proebsting's Law posits that compiler optimizations double performance every 18 years. That means you can implement a small handful of compiler optimizations in your new language and still be within a factor of 2 of the best compilers. And people are doing precisely that (see: Zig, Jai, Odin, etc.).
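Taking the law at face value, the back-of-envelope arithmetic behind that "factor of 2" goes something like this (my gloss, not the commenter's words): let S(t) be the speedup attributable to optimizer research over the last t years.

```latex
\[
  S(t) = 2^{\,t/18}
  \qquad\Longrightarrow\qquad
  S(18) = 2^{18/18} = 2 .
\]
% A new compiler whose optimizer is roughly 18 years behind the
% state of the art therefore gives up at most a factor of 2.
```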
This is only possible if you check for it at runtime, and that's a tradeoff most C programmers don't like.
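For a concrete sense of that tradeoff, here is a minimal sketch of signed addition with the overflow check made explicit (assuming a GCC/Clang toolchain, since `__builtin_add_overflow` is a compiler builtin rather than standard C; `checked_add` is a hypothetical helper, not anyone's actual proposal):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: signed addition with defined behavior on
 * overflow -- it aborts instead of letting the optimizer assume
 * the overflow can never happen. */
static int checked_add(int a, int b)
{
    int result;
    /* GCC/Clang builtin: returns nonzero if a + b overflowed. */
    if (__builtin_add_overflow(a, b, &result)) {
        fprintf(stderr, "signed overflow: %d + %d\n", a, b);
        abort();
    }
    return result;
}

int main(void)
{
    printf("%d\n", checked_add(2, 3)); /* prints 5 */
    return 0;
}
```

Every addition now carries a branch, which is exactly the cost being objected to; UBSan's `-fsanitize=signed-integer-overflow` inserts the same kind of check automatically.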
If it's implementation-defined, so you can turn the checks off when you're building for the PDP-11, I'm sold.
Exploiting undefined behavior for optimization only requires local analysis; detecting whether that undefined behavior arises (either unconditionally, or at all) requires global analysis. To put it differently: the compiler often simply doesn't know whether the undefined behavior arises, it only knows that the optimization it introduces is valid regardless.
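A hedged illustration of that local/global split (hypothetical function, but GCC and Clang really do perform this transformation at -O2): the optimizer only needs to look at this one function to delete the null check, because the dereference lets it assume `p` is non-null. Whether any caller anywhere actually passes NULL is the global question it never has to answer.

```c
int first_or_minus_one(int *p)
{
    int x = *p;     /* UB if p == NULL, so the compiler may
                       assume p != NULL from here on */
    if (p == NULL)  /* provably dead under that assumption... */
        return -1;  /* ...so this branch is deleted */
    return x;
}
```

The deletion is valid whether or not the program ever passes a NULL here; warning about it instead would require proving that a NULL actually flows in, which is exactly the global analysis the comment describes.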