So the first reason not to do it is that you never want to change software without a good reason. And none of the use cases anybody's named here so far hold water. They're all either skill issues already well addressed by existing systems, or fundamental misunderstandings that don't actually work.
Changing basic assumptions about naming is an extra bad idea with oak leaf clusters, because it pretty much always opens up security holes. I can't point to the specific software where somebody's made a load-bearing security assumption about IP address certificates not being available (more likely a pair of assumptions "Users will know about this" and "This can't happen/I forgot about this")... but I'll find out about it when it breaks.
Furthermore, if IP certificates get into wide use (and Let's Encrypt is definitely big enough to drive that), then basically every single validator has to have a code path for IP SANs. Saying "you don't have to use it" is just as much nonsense as saying "you don't have to use IP". Every X.509 library ends up with a code path for IP SANs, and it effectively can't even be profiled out. Every library is that much bigger and that much more complicated and needs that much more maintenance and testing. It's a big externalized cost. It would be better to change the RFCs to deprecate IP SANs; they never should have been standardized to begin with.
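To be concrete about what that extra code path looks like: a toy sketch of SAN matching (heavily simplified — real dNSName matching also has to deal with wildcards and IDNA), just to show that the iPAddress branch is a separate piece of logic with its own parsing and comparison rules that every validator carries forever:

```python
import ipaddress

def match_san(san_type, san_value, requested):
    """Toy SAN matcher: every validator needs both branches."""
    if san_type == "DNS":
        # dNSName matching is textual and case-insensitive
        # (wildcards and IDNA omitted here).
        return san_value.lower() == requested.lower()
    if san_type == "IP":
        # iPAddress matching is binary: parse first, so that
        # "::1" and "0:0:0:0:0:0:0:1" compare equal.
        return ipaddress.ip_address(san_value) == ipaddress.ip_address(requested)
    return False
```

Two different notions of equality, two different parsers, and both have to be tested — in every library, whether or not its users ever touch an IP certificate.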
It also encourages a whole bunch of bad practices that make networks brittle and unmaintainable. You should almost never see an IP address outside of a DNS zone file (or some other name resolution protocol). You can argue that people shouldn't do stupid things like hardwiring IP addresses even if they're given the tools... but that's no consolation to the third parties downstream of those stupid decisions.
... and it doesn't even work for all IP addresses, because IP addresses aren't global names. So don't forget to special-case the locally administered space in every single piece of code that touches an X.509 certificate.
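As a sketch of what that special-casing means — the check itself is easy (Python's stdlib `ipaddress` module does the classification), but every issuance and validation path has to remember to do it:

```python
import ipaddress

def eligible_for_public_ip_cert(addr_text):
    """A publicly trusted CA must refuse addresses that aren't
    globally unique names: RFC 1918 space, ULAs, loopback,
    link-local, and so on."""
    addr = ipaddress.ip_address(addr_text)
    # is_global is False for private, loopback, link-local,
    # and other special-purpose ranges.
    return addr.is_global
```

The failure mode isn't writing this function; it's the one code path somewhere that forgets to call it.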
I'm not sure why private IP addresses would need to be treated differently anywhere other than in the issuance software at publicly trusted CAs (which is highly specialized and can absorb the few extra lines of code; it's not a big cost for the whole ecosystem). Private CAs can and do issue certs for private IP addresses.
Also, how would DoH or DoT work without this?
Still not wide use. It's when it gets into wide use that you end up having to include it in everything.
For now, it's a parlor trick, and it's a parlor trick that shouldn't work.
> nobody is going to remove support for things that work today just because it'd be slightly cleaner.
They work, but they shouldn't, and they aren't actually used except by crazy people.
> If that doesn't work in a given TLS client, this will be treated as a bug in that client, and rightly so.
I've tried to use TLS on microcontrollers that barely had the memory to parse X.509 at all. Including stuff just because you can doesn't make that better.
... and I'm not going to go check the relevant RFCs, but I very much doubt that IP SANs are listed as a MUST. If I'm wrong, well, that's still a bug in the RFCs.
> Also, how would DoH or DoT work without this?
Hardwired keys for your trusted resolvers. Given that the whole CA infrastructure long ago gave up on doing any really robust verification of who was asking for a cert, making your DNS dependent on X.509 is a bad idea anyway. But if you really want to do it even though it's a bad idea, you can also bootstrap via the local DNS resolver and then connect to your DoH/DoT server using a domain name.
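The "hardwired keys" option is just SPKI pinning in the style of RFC 7469, done by hand instead of via the CA infrastructure. A minimal sketch (the pin set here is a placeholder — you'd hardcode the pins of your own trusted resolvers):

```python
import base64
import hashlib

def spki_pin(spki_der):
    """RFC 7469-style pin: base64 of SHA-256 over the DER-encoded
    SubjectPublicKeyInfo from the resolver's certificate."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def resolver_key_is_trusted(spki_der, pinned_pins):
    # No CA, no chain validation: trust is exactly the hardwired key set.
    return spki_pin(spki_der) in pinned_pins
```

That removes X.509 issuance from the trust path entirely; the tradeoff is that key rotation becomes your problem instead of the CA's.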
DoH, of course, is a horrible idea in itself, but that's another can of worms.