Sigh. Piss-poor engineering, likely by humans. For the love of god, do atomic updates: duplicate the data first (a move-out-of-the-way strategy) before touching any metadata. And keep a backup of the metadata at each point in time to maximize crash consistency and recovery while minimizing the potential for data loss. An online defrag kernel module would likely be far more useful, but I don't trust them to handle such an undertaking.
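A minimal sketch of the move-out-of-the-way idea in userspace terms (Python; the `.bak` naming and function name are illustrative assumptions, not any filesystem's actual scheme): never modify the file in place, write a complete new copy, fsync it, then atomically rename it over the old one, keeping the previous version as a recovery backup.

```python
import os
import shutil
import tempfile

def atomic_update(path, new_bytes):
    """Move-out-of-the-way update: write a full new copy, make it
    durable, then rename it over the old one. A crash at any point
    leaves either the old file or the new file, never a torn mix."""
    dirname = os.path.dirname(os.path.abspath(path)) or "."
    # Keep a backup of the previous version so recovery is possible
    # even after the swap (illustrative ".bak" naming).
    if os.path.exists(path):
        shutil.copy2(path, path + ".bak")
    fd, tmp = tempfile.mkstemp(dir=dirname, prefix=".tmp-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(new_bytes)
            f.flush()
            os.fsync(f.fileno())      # new copy durable before the swap
        os.replace(tmp, path)         # atomic rename on POSIX
    except BaseException:
        try:
            os.unlink(tmp)            # clean up on failure
        except FileNotFoundError:
            pass
        raise
    # fsync the directory so the rename itself survives a crash
    dfd = os.open(dirname, os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

The key property: `os.replace` is a single atomic rename, so readers see either the complete old version or the complete new one, and the `.bak` copy gives you a per-update metadata snapshot to roll back to.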
If a user has double the storage available, it's probably best to do the old-fashioned "defrag": single-threaded, copy all files and file metadata to a freshly formatted volume.
replies(2):