I don't know how io_uring solves this - does it return an error if the underlying NFS call times out? How long do you wait for a response before giving up and returning an error?
I don't agree that it was a reasonable tradeoff. Making an unreliable system emulate a reliable one is the very thing I find to be a bad idea. And this isn't unique to NFS; it applies to any network filesystem you try to present as if it were a local one.
> What does vi do when the server hosting the file you're editing stop responding? None of these tools have that kind of error handling.
That's exactly why I don't think it's a good idea to pretend a network connection is actually a local disk: tools aren't set up to handle it being down.
Contrast it with approaches where the client is aware of the network connection (like HTTP/gRPC/etc.)... the client can decide for itself how long to retry failed requests, whether to bubble failures up to the caller, or whether to work "offline" until it gets an opportunity to resync, etc. With NFS, the syscall just hangs forever by default.
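To make that contrast concrete, here's a minimal Go sketch of the client-aware side: the caller owns the per-attempt timeout, the retry count, and the backoff, and an unreachable server surfaces as an error the caller can act on. The URL, attempt count, and timeouts are made up for illustration; the point is that a plain read() on a hard NFS mount exposes none of these knobs.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// fetchWithRetry gives the caller explicit control over per-attempt timeouts,
// the number of retries, and the backoff between them.
func fetchWithRetry(url string, attempts int, perTry time.Duration) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		// Each attempt gets its own deadline; when it expires the call
		// returns an error instead of hanging indefinitely.
		ctx, cancel := context.WithTimeout(context.Background(), perTry)
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			cancel()
			return nil, err
		}
		resp, err := http.DefaultClient.Do(req)
		if err == nil {
			body, readErr := io.ReadAll(resp.Body)
			resp.Body.Close()
			cancel()
			if readErr == nil {
				return body, nil
			}
			lastErr = readErr
		} else {
			cancel()
			lastErr = err
		}
		// The caller decides what failure means: bubble it up, back off and
		// retry, or fall back to a local/offline copy.
		time.Sleep(time.Second << i)
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	// Hypothetical URL, purely for illustration.
	data, err := fetchWithRetry("https://example.com/some-file", 3, 2*time.Second)
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	fmt.Printf("got %d bytes\n", len(data))
}
```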
Distributed systems are hard, and NFS (and other similar network filesystems) just pretend it isn't hard at all, which is great until something goes wrong, and then the abstraction leaks.
(Also I didn't say io_uring solves this, but I'm curious as to whether its performance would be any better than blocking calls.)
The other extreme is S3, where appending to an object has only become possible within the last few years, as far as I can tell. Meanwhile, editing a file requires a full download and re-upload, which isn't great either.
For the NFS case, I can't say it's my favorite, but it's certainly easy to set up and run on your own. Obviously a rebooting server may cause issues while it's unavailable, but the NFS server should be highly available anyway. With NFSv4.1 you can swap/switch servers pretty quickly (given you connect to a DNS name/FQDN rather than the IP address).
Another point in its favor is that it's plug and play: with NFS, UNIX permissions, ownership/group details, the execute bit, etc. are all preserved nicely...
Besides, you could always have a "cache" server locally, similar to the GDrive or OneDrive clients: constantly syncing back and forth, caching the data locally, and using file handles to determine locks. That works pretty well _at scale_ (i.e. many concurrent users, in the case of GDrive or OneDrive).
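For what it's worth, a rough Go sketch of that local-cache idea is below, assuming a hypothetical local cache directory and a remote mount point. Real clients like GDrive/OneDrive also handle conflicts, locking, and partial syncs, which this skips entirely; the only point it illustrates is that writers hit the fast local copy and the background sync loop is what absorbs outages.

```go
package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
	"time"
)

const (
	cacheDir  = "/var/cache/mysync"  // hypothetical local copy the tools actually touch
	remoteDir = "/mnt/remote/mysync" // stand-in for the slow/unreliable side
)

// writeLocal writes to the fast local cache; callers never block on the network.
func writeLocal(name string, data []byte) error {
	path := filepath.Join(cacheDir, name)
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

// syncOnce copies locally cached files to the remote side. Failures are
// logged and retried on the next tick instead of hanging the writer.
func syncOnce() {
	_ = filepath.WalkDir(cacheDir, func(path string, d os.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, _ := filepath.Rel(cacheDir, path)
		dst := filepath.Join(remoteDir, rel)
		if err := copyFile(path, dst); err != nil {
			log.Printf("sync of %s failed, will retry: %v", rel, err)
		}
		return nil
	})
}

func copyFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := writeLocal("notes.txt", []byte("edited offline\n")); err != nil {
		log.Fatal(err)
	}
	// Background loop: the sync cadence, not the editor, deals with the network.
	for range time.Tick(30 * time.Second) {
		syncOnce()
	}
}
```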