Saturation breaks the successor relation: at the type maximum, S(x) == x (i.e., x + 1 == x). Sometimes you want that, but it's extremely situational, and you rarely want the saturation point to be precisely the type max. Saturation is better served by explicit functions in C.
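A quick sketch of what that looks like at the boundary, in Rust with the standard saturating_add (u8 chosen just for brevity):

```rust
fn main() {
    let x: u8 = u8::MAX;
    // Saturating at the type max breaks the successor property:
    // the "successor" of 255 is still 255.
    assert_eq!(x.saturating_add(1), x);

    // Away from the boundary it behaves like ordinary addition.
    assert_eq!(200u8.saturating_add(50), 250);
}
```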
Trapping is fine conceptually, but it means every arithmetic operation can now fail. That's a severe ergonomic issue, it isn't particularly well defined on many systems, and it introduces a bunch of thorny issues for optimizations. Again, better as explicit functions in C.
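To make the ergonomic cost concrete, here's a rough Rust sketch where checked_add's Option stands in for a trap: even a simple sum has to thread the failure case through every step.

```rust
// Every arithmetic step can now fail, so the error handling leaks
// into the signature of anything that does math.
fn sum_checked(values: &[u32]) -> Option<u32> {
    let mut total: u32 = 0;
    for &v in values {
        total = total.checked_add(v)?; // None on overflow (a trap would abort here)
    }
    Some(total)
}

fn main() {
    assert_eq!(sum_checked(&[1, 2, 3]), Some(6));
    assert_eq!(sum_checked(&[u32::MAX, 1]), None);
}
```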
On the other hand, wrapping is the mathematical basis for CRCs, error-correcting codes, cryptography, bitwise math, and more. There are no wasted bits, it's the natural implementation in hardware, it's familiar behavior to students from a young age as "clock arithmetic", compilers can easily insert debug-mode checks for it (the way Rust does when you forget to use Wrapping<T>), etc.
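For reference, a small Rust example of that debug-mode check and the explicit opt-in to wrapping (nothing here beyond the standard Wrapping type and wrapping_add):

```rust
use std::num::Wrapping;

// With plain `+`, this panics on overflow in debug builds -- the check the
// compiler inserts when you "forget" to ask for Wrapping<T>.
fn plain_add(a: u8, b: u8) -> u8 {
    a + b
}

fn main() {
    // plain_add(255, 1) would panic with "attempt to add with overflow"
    // in a debug build, and wrap silently in a release build.
    assert_eq!(plain_add(1, 2), 3);

    // Explicit wrapping: clock arithmetic mod 2^8.
    assert_eq!((Wrapping(255u8) + Wrapping(1)).0, 0);
    assert_eq!(250u8.wrapping_add(10), 4);
}
```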
It's obviously not perfect either: like every fixed-size representation, it diverges from the unbounded integer math people are actually trying to do. But I don't think the alternatives would be better.
There's a 4th reasonable choice: pretend it doesn't happen. Now, before you crucify me for daring to suggest that undefined behavior can be a good thing, let me explain:
When you start working on a lot of peephole optimizations, you quickly discover that there are quite a few cases where two pieces of code are almost equivalent, except that they give different answers if someone overflowed (or hit some other edge case you don't really care about). Rather interestingly, even if you put a lot of effort into making the compiler aggressively infer that code can't overflow, you still run into problems, because those facts don't compose well: knowing that (A + (B + C)) can't overflow doesn't mean that ((A + B) + C) can't overflow--imagine B = INT_MAX and C = INT_MIN to see why.
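Here's that reassociation example worked through in Rust, with checked_add making the overflow visible (A = 1 is my own choice; B and C follow the INT_MAX/INT_MIN hint above):

```rust
fn main() {
    let (a, b, c) = (1i32, i32::MAX, i32::MIN);

    // A + (B + C): B + C is -1, so the sum is 0 and nothing overflows.
    let grouped_right = b.checked_add(c).and_then(|bc| a.checked_add(bc));
    assert_eq!(grouped_right, Some(0));

    // (A + B) + C: A + B already overflows, so the "equivalent" grouping
    // is not actually equivalent under fixed-width arithmetic.
    let grouped_left = a.checked_add(b).and_then(|ab| ab.checked_add(c));
    assert_eq!(grouped_left, None);
}
```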
And sure, individual peephole optimizations don't have much of a performance effect on their own. But they can sometimes have want-of-a-nail side effects, where the inability to assume non-overflow in one place causes another optimization to fail to kick in, and the domino effect results in measurable slowdowns. In one admittedly extreme example, I've seen a single this-might-overflow result in a 10× slowdown, since it alone kept the autoparallelization framework from kicking in. This has happened to me often enough that there are times I just want to shake the computer and scream "I DON'T FUCKING CARE ABOUT EDGE CASES, JUST GIVE ME THE DAMN FASTEST CODE."
The problem with undefined behavior isn't that it risks destroying your code if you hit it (that's a good thing!); the problem is that it too frequently comes without any way to opt out of it. There's room to argue over whether it should be opt-in or opt-out, but having no escape hatch at all is a step too far for me.
(Slight apologies for the rant, I'm currently in the middle of tracking down a performance hit caused by... inability to infer non-overflow of an operation.)
I don't consider destroying code semantics if you hit it a good thing, especially when there's no reliable and automatic way to observe it.
Actually, you don't. Unspecified behavior gives you a limited "blast radius" for the behavior; in LLVM's terminology, it would be roughly equivalent to "freeze poison"--i.e., the overflow returns some value, chosen nondeterministically, but that's the extent of the damage. By contrast, the way LLVM handles undefined overflow is to treat it as "poison", which is itself already weaker than C/C++'s definition of UB (which LLVM also has) [1].
Now, poison seems weirder in that it has properties like "it can be observed to be a different value by different uses (even within the same instruction)." But it also turns out, from painful experience, that if you don't jump to that level of weird behavior, you end up accidentally making optimizations like "x * 2" -> "x + x" illegal, because oops, your semantics accidentally made the number of uses of a value something that can't be increased, because formal semantics are hard. (Hats off to Nuno Lopes et al. for working on this! We need more formalism in our production compilers!)
[1] IMHO, C itself could benefit from introducing a poison-like flavor of UB into the standard, rather than having just one kind of undefined behavior. But that also requires getting the committee to accept that UB isn't an inherently evil thing that needs to be stamped out at all costs, and I don't have the energy to push hard on that front myself.