burnt-resistor No.46253479
Sigh. Piss-poor engineering, likely by humans. For the love of god, do atomic updates by duplicating data first, such as with a move-out-of-the-way-first strategy, before touching the metadata. And keep a backup of the metadata at each point in time to maximize crash consistency and crash recovery while minimizing the potential for data loss. An online defrag kernel module would likely be much more useful, but I don't trust them to be able to handle such an undertaking.
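
A minimal userspace sketch of that duplicate-then-swap idea, assuming a POSIX filesystem (hypothetical helper, an illustration of the strategy rather than any filesystem's actual code):

    import os
    import tempfile

    def atomic_write(path, data):
        # Write new contents to a temp file in the same directory; the old
        # file is never modified in place, so a crash at any point leaves
        # either the complete old version or the complete new version.
        dirpath = os.path.dirname(os.path.abspath(path))
        fd, tmp = tempfile.mkstemp(dir=dirpath, prefix=".tmp-")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())      # make the new data durable first
            os.replace(tmp, path)         # atomic rename on POSIX
        except BaseException:
            try:
                os.unlink(tmp)
            except FileNotFoundError:
                pass
            raise
        # Flush the directory entry so the rename itself survives a crash.
        dfd = os.open(dirpath, os.O_DIRECTORY)
        try:
            os.fsync(dfd)
        finally:
            os.close(dfd)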

If a user has double the storage available, it's probably best to do the old-fashioned "defrag": a single-threaded copy of all files and file metadata to a newly formatted volume.
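
Roughly like this, assuming the fresh volume is already formatted and mounted at a hypothetical path (ownership and xattrs would need extra handling):

    import shutil

    # Hypothetical mount points for illustration.
    SRC = "/mnt/old_volume"
    DST = "/mnt/new_volume"

    # Single-threaded tree walk; copy2 (the default copy function) preserves
    # timestamps and mode bits, and writing into a fresh filesystem lays
    # files out contiguously instead of inheriting the old fragmented layout.
    shutil.copytree(SRC, DST, symlinks=True, dirs_exist_ok=True)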

replies(2): >>46255853 >>46260715
1. doubled112 No.46255853
That last paragraph sums up the ZFS defrag procedure at one shop I worked at. Buy new disks and send/receive the pool.
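
Something along these lines, with hypothetical pool names (flags from memory, so check the man pages before trusting it):

    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -F newtank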

At our size and for our use case, the timing usually worked out nearly perfectly: the pools were getting close to full and fragmented right as larger disks became inexpensive.