They don't prioritize performance over correctness; they prioritize programmer control over compiler/runtime control.
Of course it can be difficult to know when you've unintentionally hit UB, which leaves room for footguns. This is probably an unpopular opinion, but to me that's not an argument for rolling back UB-based optimizations; it's an argument for better diagnostics ("are you *sure* you meant to do this?"), for rigorous testing, and for eliminating some particularly tricky instances of UB in future revisions of the standard.
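As a minimal sketch of the kind of optimization at stake, here's a classic example: because signed integer overflow is UB in C, the compiler is allowed to assume it never happens, and mainstream compilers will typically fold the comparison below to a constant at `-O2` (the exact behavior depends on the compiler and flags):

```c
#include <limits.h>
#include <stdio.h>

/* Signed overflow is UB, so the compiler may assume x + 1 never
 * wraps and fold this check to "return 1". */
int will_not_overflow(int x) {
    return x + 1 > x;
}

int main(void) {
    /* At -O2 this often prints 1 even for INT_MAX, where a wrapping
     * implementation would have printed 0. */
    printf("%d\n", will_not_overflow(INT_MAX));
    return 0;
}
```

This is exactly the trade described above: the programmer promises the overflow can't happen, and in exchange the compiler can simplify loops and bounds checks; the footgun is that nothing forces the promise to be kept.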
On the contrary, I'd argue that the idea that any arbitrary bug can have any arbitrary consequence whatsoever is what's odd.
There's nothing odd about expecting the extent of an operation to be bounded in space and time; it's a position with a great body of research backing it.