
264 points davidgomes | 5 comments
paulryanrogers No.41875055
Upgrades are hard. There was no replication in the before times. The original block-level replication didn't work among different major versions. Slony was a painful workaround based on triggers that amplified writes.

Newer PostgreSQL versions are better. Yet still not quite as robust or easy as MySQL.

At a certain scale even MySQL upgrades can be painful. At least when you cannot spare more than a few minutes of downtime.

replies(7): >>41875126 #>>41876174 #>>41876232 #>>41876375 #>>41877029 #>>41877268 #>>41877959 #
slotrans No.41876232
"Not as robust as MySQL"? Surely you're joking.
replies(3): >>41876309 #>>41876384 #>>41877139 #
sgarland No.41876309
They’re not wrong. If you’ve ever spent meaningful time administering both, you’ll know that Postgres takes far more hands-on work to keep it going.

To be clear, I like both. Postgres has a lot more features, and is far more extensible. But there’s no getting around the fact that its MVCC implementation means that at scale, you have to worry about things that simply do not exist for MySQL: vacuuming, txid wraparound, etc.
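For context on the wraparound point: Postgres transaction IDs are 32-bit counters compared circularly, so only about half the space (~2^31) is usable headroom, and autovacuum has to "freeze" old tuples before a table's oldest xid ages out. A back-of-the-envelope sketch of that arithmetic (the constants are documented Postgres defaults; the function itself is purely illustrative, not real Postgres code):

```python
# Illustrative arithmetic behind Postgres txid wraparound (not real Postgres code).
XID_SPACE = 2**32             # transaction IDs are 32-bit counters
HORIZON = XID_SPACE // 2      # circular comparison leaves ~2^31 of usable headroom
FREEZE_MAX_AGE = 200_000_000  # default autovacuum_freeze_max_age

def headroom(datfrozenxid_age: int) -> int:
    """Transactions left before Postgres stops assigning xids to avoid wraparound."""
    return HORIZON - datfrozenxid_age

# Even at the default freeze threshold there is plenty of margin,
# but only because autovacuum keeps advancing datfrozenxid:
print(headroom(FREEZE_MAX_AGE))  # 1947483648, i.e. ~1.9 billion transactions
```

In practice people monitor `age(datfrozenxid)` from `pg_database`; the point is simply that vacuum is load-bearing in a way that InnoDB's undo-log-based MVCC is not.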

replies(3): >>41876387 #>>41876650 #>>41877061 #
lelanthran No.41876650
My experience of both is that MySQL is easier for developers, PostgreSQL is easier for sysads.

That was true in 2012; dunno if it still applies though.

replies(3): >>41876791 #>>41877270 #>>41877996 #
sofixa No.41876791
I doubt it was true in 2012, because sysadmins would be the ones trying to make it run reliably, including things like replication, upgrades, etc.

Pretty sure that even in 2012 MySQL had very easy-to-use replication, which Postgres didn't have well into the late 2010s (does it today? It's been a while since I've run any databases).

replies(2): >>41876810 #>>41878057 #
1. yxhuvud No.41878057
In 2012 MySQL had several flavors of replication, each with its own serious pitfalls that could introduce corruption or data loss. I saw enough MySQL replication issues in those days that I wouldn't want to use it.

Sure, it was easy to get a proof of concept working. But when you tried to break it by turning off the network and/or machines, things failed in ways that were not recoverable. I'm guessing most people who set up MySQL replication didn't actually verify that it worked well when SHTF.
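The verification being described usually comes down to inducing failures and then comparing primary and replica contents; for MySQL, Percona's pt-table-checksum does this for real. A toy, order-independent checksum sketch of the idea (all names and the approach here are illustrative only, not a real tool):

```python
import hashlib

def table_checksum(rows) -> int:
    """Order-independent checksum of a table's rows (toy sketch, not a real tool)."""
    acc = 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode("utf-8")).digest()
        acc ^= int.from_bytes(digest[:8], "big")  # XOR makes row order irrelevant
    return acc

primary = [(1, "alice"), (2, "bob")]
replica = [(2, "bob"), (1, "alice")]   # same rows, different physical order
assert table_checksum(primary) == table_checksum(replica)

diverged = [(1, "alice"), (2, "b0b")]  # silent corruption on the replica
assert table_checksum(primary) != table_checksum(diverged)
```

The point of checking after a failure drill, rather than just confirming the replica is "running", is that replication can resume cleanly while having silently skipped or mangled rows.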

replies(3): >>41878828 #>>41880075 #>>41880546 #
2. sofixa No.41878828
Maybe that was true in 2012 (maybe it was related to MyISAM) but by ~2015 with InnoDB MySQL replication was rock solid.
replies(1): >>41879698 #
3. yxhuvud No.41879698
It was not related to MyISAM.

How did you verify that it was rock solid? And which of the variants did you use?

4. evanelias No.41880075
Many of the largest US tech companies were successfully using MySQL replication in 2012 without frequent major issues.

Source: direct personal experience.

5. est No.41880546
> pitfalls that could introduce corruption or loss of data

Sometimes, repairing broken data is easier than, say, upgrading a goddamn hot DB.

MVCC is overrated. Not every row in a busy MySQL table is your transactional wallet balance. But to upgrade a DB you have to deal with every field in every row of every table, and the data keeps changing, which is a real headache.

Fixing a range of broken data, however, can be done by a junior developer. If you rely on an RDBMS as your single source of truth, you are probably fucked anyway.

btw I do hate DDL changes in MySQL.