
75 points markusw | 3 comments
1. 8organicbits No.45335921
I do this using the Docker approach, especially for low-scale web apps that run on a single VM. I like that it's full Postgres versus the sometimes odd limits of SQLite. My usual setup is a Traefik container for SSL, a Django+gunicorn web app, and a Postgres container, all running as containers on one VM. Postgres uses a volume, which I back up regularly. For testing I use `eatmydata`, which turns fsync into a no-op and speeds up test cycles by a couple percent.
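The single-VM layout described above might look roughly like this as a Compose file. This is a sketch, not the commenter's actual config: the image tags, service names, and credentials are all illustrative assumptions.

```yaml
# Illustrative single-VM stack: Traefik for TLS, Django+gunicorn, Postgres.
services:
  proxy:
    image: traefik:v3          # assumed tag; TLS/router labels omitted
    ports: ["80:80", "443:443"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  web:
    build: .
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
    depends_on: [db]
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app  # hypothetical creds
  db:
    image: postgres:16         # assumed version
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - pgdata:/var/lib/postgresql/data  # the volume that gets backed up
volumes:
  pgdata:
```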

I haven't tried the Unix socket approach; I suppose I should, but it's plenty performant as is. One project I built using this model hit the HN front page. Importantly, the "marketing page" was static content on a CDN, so the web app only saw users who signed up.
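For anyone curious, the Unix-socket variant mentioned above could work by sharing Postgres's socket directory between containers via a named volume, so the app skips TCP entirely. A hypothetical sketch (service names and the database name are assumptions; libpq interprets a `host` that starts with `/` as a socket directory):

```yaml
# Sketch: connect Django to Postgres over a Unix socket instead of TCP.
services:
  db:
    image: postgres:16
    volumes:
      - pgsocket:/var/run/postgresql   # Postgres creates its socket here
  web:
    build: .
    volumes:
      - pgsocket:/var/run/postgresql   # same directory mounted in the app
    environment:
      DATABASE_URL: postgres://app@/app?host=/var/run/postgresql
volumes:
  pgsocket:
```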

replies(1): >>45337362 #
2. markusw No.45337362
Yeah, basically the same here, except it's Caddy in front instead of Traefik.

So you do periodic backups, not incremental on every write or something (read replica-like)?

It's important to me not to lose any committed data, if at all possible.

(For testing, I've sped everything up by running migrations on `template1` and every test gets a random database name. Works wonders.)
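The `template1` trick works because Postgres clones new databases from a template, so migrations applied to `template1` once are already present in every database created afterwards. A minimal sketch of the name-generation and SQL side (helper names are mine, not from the comment; actually executing the statement needs a live connection, e.g. via psycopg):

```python
import secrets

def random_test_db_name(prefix: str = "test") -> str:
    """Generate a collision-resistant database name for one test run."""
    return f"{prefix}_{secrets.token_hex(8)}"

def create_db_sql(name: str) -> str:
    # TEMPLATE template1 is the default, but stating it makes the trick
    # explicit: the new database inherits template1's migrated schema.
    return f'CREATE DATABASE "{name}" TEMPLATE template1'

# With a real connection you would run something like:
#   conn.execute(create_db_sql(random_test_db_name()))
```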

replies(1): >>45361950 #
3. 8organicbits No.45361950
Good catch. I am doing periodic, not incremental, backups on that system. It all depends on risk, cost, and tolerance for data loss.
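To make the two options in this exchange concrete: a periodic backup is typically a scheduled `pg_dump`, while the "don't lose committed writes" goal markusw describes usually means continuous WAL archiving with a tool like wal-g or pgBackRest. Paths, names, and the `docker exec` invocation below are illustrative, not anyone's actual setup:

```shell
# Periodic: nightly logical dump from the running container.
docker exec db pg_dump -U app -Fc app > /backups/app-$(date +%F).dump

# Incremental: ship every WAL segment as it's written (postgresql.conf),
# which enables point-in-time recovery up to the last archived segment.
#   archive_mode = on
#   archive_command = 'wal-g wal-push %p'
```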