It's all a question of risk management; for example, Google has historically used container-based sandboxes for their own code (even before Linux containers were a thing), and there an io_uring vulnerability could expose them to attacks by any software-developer employee. And where real performance is needed, the big boys are bypassing the kernel networking and block I/O stacks anyway (load balancers, ML, ...).
I think the real question to ask is why you are running hostile code outside a dedicated VM in the first place. Lots of places will happily give you root inside a VM, and in that context io_uring attacks are irrelevant. That trust boundary is probably just as complex (KVM, virtio, ring buffers very similar to io_uring's, really), but the trusted side is often written in Rust these days and is more trustworthy.
For "non-hostile code", frankly other attacks are typically simpler. That's likely the stuff your devs run on their workstations all the time. It likely has direct access to the family jewels and networking at the same time, without needing to use any exploit.
The real fix is to slowly push the industry off C/C++ and to figure out how to use formal methods to reason about shared-memory protocols better. For example, if your "received buffer" abstraction only lets you read every byte exactly once, you can't be vulnerable to TOCTOU. That'd be pretty easy to do safely, but the whole reason a shared-memory protocol was used in the first place was performance, and that trade-off is a lot less trivial.
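A rough sketch of that "read every byte exactly once" idea, in Rust since that's where the trusted side is heading anyway. Everything here (the ReadOnce type, its take method, the toy ring-slot layout) is made up for illustration, not any real io_uring or virtio API; the point is just that the trusted side copies each shared byte into private memory exactly once and only ever validates and uses that copy, so the untrusted peer has no window to flip the data between check and use.

    use std::ptr;

    /// Read-once cursor over a shared-memory region that an untrusted
    /// peer may still be writing to. Every byte is copied out of shared
    /// memory exactly once, so there is no second read to race against.
    struct ReadOnce {
        base: *const u8, // start of the mapped shared region
        len: usize,      // size of the region in bytes
        pos: usize,      // everything before `pos` has been consumed
    }

    impl ReadOnce {
        /// `base`/`len` would normally come from mmap'ing the shared region.
        unsafe fn new(base: *const u8, len: usize) -> Self {
            ReadOnce { base, len, pos: 0 }
        }

        /// Copy the next N bytes into private memory, or None if exhausted.
        fn take<const N: usize>(&mut self) -> Option<[u8; N]> {
            if self.len - self.pos < N {
                return None;
            }
            let mut out = [0u8; N];
            for i in 0..N {
                // Volatile read: performed exactly once, never re-fetched.
                out[i] = unsafe { ptr::read_volatile(self.base.add(self.pos + i)) };
            }
            self.pos += N;
            Some(out)
        }
    }

    fn main() {
        // Stand-in for one mapped ring slot: a 4-byte length + payload.
        let slot: [u8; 8] = [4, 0, 0, 0, b'p', b'i', b'n', b'g'];
        let mut rx = unsafe { ReadOnce::new(slot.as_ptr(), slot.len()) };

        // Parse a little-endian length, then the payload. Both come from
        // private copies, so a concurrent rewrite of the shared slot
        // cannot change a value after we've checked it.
        let len = u32::from_le_bytes(rx.take::<4>().unwrap()) as usize;
        if len <= 4 {
            if let Some(payload) = rx.take::<4>() {
                println!("payload: {:?}", &payload[..len]);
            }
        }
    }

The cost is the copy itself, which is exactly what the shared-memory ring was supposed to avoid, so as said above the trade-off isn't trivial.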