221 points finnlab | 9 comments
1. FloatArtifact No.43545753
Self-hosting like it's 2025... ugh...

Don't get me wrong, I love some of the software suggested. However, it's yet another post that doesn't take backups as seriously as the rest of the self-hosting stack.

Backups are stuck in 2013. We need plug-and-play backups for containers! No more rolling your own with ZFS datasets and backing up data at the filesystem level (using sanoid/syncoid to manage snapshots, or any of the alternatives).

replies(3): >>43546151 #>>43547094 #>>43547135 #
2. marceldegraaf No.43546151
Best decision of last year for my homelab: run everything in Proxmox VMs/containers and back up to a separate Proxmox Backup Server instance.

Fully automated, incremental, verified backups, and restoring is a single button click.

replies(1): >>43546756 #
3. FloatArtifact No.43546756
Yes, I'm considering that if I can't find a solution that is plug-and-play for containers, independent of the OS and file system. I don't mind something abstracting on top of ZFS, though; ZFS's mental overhead from the snapshot paradigm can lead to its own complexities. A traditional backup-and-restore front end would be great.

I find it strange that Docker, which already knows your volumes, app data, and config, can't automatically back up and restore databases and configs. Jeez, they could have built it right into Docker.

4. nunez No.43547094
rclone is great for this.

One could set up a Docker Compose service that uses rclone to gzip and back up your Docker volumes to something durable. An even more advanced version of this would automate testing the backups by restoring them into a clean environment and running some tests with BATS or whatever testing framework you want.

replies(1): >>43547276 #
5. nijave No.43547135
Why not ZFS snapshots? Besides Hyper-V machine snapshots, that's been by far the easiest way for me. No need to worry about the 20 different proprietary tools that go with each piece of software.

Each VM or container gets a data mount on a zvol. Containers go on the OS mount, and each OS has its own volume (so most VMs end up with two volumes attached).

replies(1): >>43573855 #
6. nijave No.43547276
rclone won't take a consistent snapshot, so you either need to shut the thing down or use some other tool to export the data first
replies(1): >>43549442 #
7. auxym No.43549442
zfs/btrfs snapshot and then rclone that snapshot?
replies(1): >>43551993 #
8. nijave No.43551993
I think that'd break incremental snapshots once you delete the intermediate ones, unless you uploaded a gigantic blob of the entire filesystem, wouldn't it?

Meaning you'd need to upload full snapshots on a fixed interval

9. FloatArtifact No.43573855
Well, one argument not to use ZFS is simply the resources it takes. It eats up a lot of RAM. Also, I'm under the impression that one should never live-snapshot a database without risking corruption.