
264 points davidgomes | 1 comment | source
noirscape ◴[] No.41877487[source]
Because the actual process of upgrading Postgres is terrible.

I say this as someone who absolutely loves using it, but the actual process of upgrading Postgres takes significant downtime and is error-prone. You're often better off just dumping all the databases and reimporting them into a new data directory. (A good idea in general, since doing so also vacuums tables and compacts indexes, combining a few maintenance routines into your upgrade path.)

It requires having the previous version of Postgres installed, which can conflict with a number of distro policies (not to mention Docker, the most popular way to deploy software that typically relies on Postgres), and unlike most software with that problem, Postgres is software you want managed by your distro.

Therefore, most people only upgrade by necessity - when their distro forces the upgrade or the version they're using reaches EOL.
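To make the pain point concrete, here is roughly what a pg_upgrade invocation looks like. This is a sketch only: the version numbers and directory layout are assumptions and vary by distro; note that it needs the binaries of *both* the old and the new major version on the same machine.

```shell
# Sketch only: hypothetical paths, adjust to your distro's layout.
# Both the old (15) and new (16) binaries must be installed side by side.
OLD_BIN=/usr/lib/postgresql/15/bin
NEW_BIN=/usr/lib/postgresql/16/bin
OLD_DATA=/var/lib/postgresql/15/main
NEW_DATA=/var/lib/postgresql/16/main

# --check performs a dry run and reports incompatibilities without
# touching either cluster; drop it to run the real upgrade.
UPGRADE_CMD="pg_upgrade --check \
  --old-bindir=$OLD_BIN --new-bindir=$NEW_BIN \
  --old-datadir=$OLD_DATA --new-datadir=$NEW_DATA"
echo "$UPGRADE_CMD"
```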

replies(11): >>41877841 #>>41877921 #>>41877992 #>>41878101 #>>41878462 #>>41878670 #>>41879013 #>>41879161 #>>41879191 #>>41879259 #>>41879567 #
forinti ◴[] No.41878670[source]
I have always compiled from source so that I can have more than one version in place. An installation takes less than 1GB, it's quick and easy to compile, and I've never had any issues with upgrades. If you forget to compile an extension, you can do it later and copy the .so into the right folder; you don't even have to restart the cluster.

If you don't want to use pg_upgrade, you can dump from one cluster and pipe directly into another, with no need for a temporary file. It couldn't be easier.
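The dump-and-pipe approach amounts to a one-liner. A minimal sketch, assuming the old and new clusters are running side by side on ports 5432 and 5433 (both ports are assumptions):

```shell
# Sketch: stream the old cluster's dump straight into the new cluster,
# no temporary file needed. Ports are assumptions; adjust as required.
OLD_PORT=5432
NEW_PORT=5433
PIPE_CMD="pg_dumpall -p $OLD_PORT | psql -p $NEW_PORT -d postgres"
echo "$PIPE_CMD"
```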

Can't say the same for Oracle. Even applying a patch can ruin several weekends.

replies(2): >>41878915 #>>41880421 #
literalAardvark ◴[] No.41878915[source]
> It couldn't be easier

If you have a trivial database, yes.

If you have a large, complicated database that takes hours/days to restore and has an HA SLA, very hell no.

replies(2): >>41879032 #>>41879224 #
forinti ◴[] No.41879224{3}[source]
That's not going to be easy with any database.
replies(3): >>41879571 #>>41881588 #>>41887428 #
danudey ◴[] No.41881588{4}[source]
MySQL, for all its faults, actually handles this quite well. Generally speaking, assuming there aren't any major backwards incompatibilities (and in my experience there rarely have been), you can start up the new server in place of the old one and you're good to go. Some table rebuilding can be done in place if you need to do things like recreate indices, but _generally_ you're good to go from there.

For more complex deployments, for backwards incompatibilities in the data structure, or when I had reason to rebuild many large tables: set up a replication secondary, run pre-upgrade migrations (e.g. for deprecated functionality), upgrade the secondary, run post-upgrade migrations (e.g. for new functionality), and wait for replication to catch up 100%. Then, after sending traffic to it for a while to ensure it works correctly, swap the primary and secondary and repeat the process on the other node.
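The catch-up-and-promote step above can be sketched as below. The host name is hypothetical, and the SQL uses the `SHOW REPLICA STATUS` / `STOP REPLICA` syntax from MySQL 8.0.22+ (older versions use the SLAVE spelling):

```shell
# Sketch of the replica-swap flow; db-replica.internal is a made-up host.
REPLICA_HOST=db-replica.internal

# 1. After upgrading the secondary, poll its replication status until
#    Seconds_Behind_Source reaches 0 (i.e. it has fully caught up).
CATCHUP_CHECK="mysql -h $REPLICA_HOST -e 'SHOW REPLICA STATUS\G'"

# 2. Once caught up and verified under traffic, promote it: stop
#    replication and discard the replica configuration.
PROMOTE="mysql -h $REPLICA_HOST -e 'STOP REPLICA; RESET REPLICA ALL;'"

echo "$CATCHUP_CHECK"
echo "$PROMOTE"
```

The same two steps are then repeated in the other direction to bring the old primary back as an upgraded secondary.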