Most active commenters
  • sgarland(6)
  • bigwheels(4)
  • riku_iki(3)

131 points pgedge_postgres | 21 comments
1. sgarland ◴[] No.45534417[source]
You do not want multi-master. If you think you do, think again.

Source: I have operated a large multi-master Postgres cluster.

replies(4): >>45534725 #>>45534761 #>>45535023 #>>45535213 #
2. bigwheels ◴[] No.45534725[source]
I'd have imagined this position would depend almost entirely on the requirements of the project. Are you able to elaborate on why it's a universal "NO" for you?
replies(2): >>45534752 #>>45538649 #
3. gtowey ◴[] No.45534752[source]
That's just the point: it always sounds like a great idea to people not experienced in database operations.

The problem with the setup is that you will have a data corruption issue at some point. It's not an "if," it's a "when." If you don't have a plan to deal with it, then you're hosed.

This is why the parent is turning around the burden of proof. If you can't definitively say why you absolutely need this, and no other solution will do, then avoid it.

replies(1): >>45534777 #
4. phs318u ◴[] No.45534761[source]
Multi-master can be useful in cases where writes to the data are usually logically grouped by an attribute that correlates to the distribution of masters, e.g. sales info by geography. The chances of write conflicts become much smaller (though not zero).
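
A minimal sketch of that kind of routing (the names and regions are purely illustrative; it assumes one writable node per region and that every record carries a region attribute):

    # Route each write to the master that "owns" its region, so conflicting
    # writes to the same row can only originate on one node.
    REGION_PRIMARIES = {
        "amer": "postgresql://amer-primary.example.internal/sales",
        "emea": "postgresql://emea-primary.example.internal/sales",
        "apac": "postgresql://apac-primary.example.internal/sales",
    }

    def dsn_for(record: dict) -> str:
        # Records without a region fall back to a designated home region
        # instead of being written to an arbitrary node.
        return REGION_PRIMARIES[record.get("region", "amer")]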
replies(1): >>45537431 #
5. bigwheels ◴[] No.45534777{3}[source]
Believe it or not, Mrs. Bigwheels is pretty experienced in the database department. I've seen multi-master HA architecture work out great for 10M+ DAU games, and in many/most other cases I wouldn't recommend it - as in it wouldn't even enter my brain, because the tradeoffs are harsh.

IME it comes down to considering CAP against the business goals, and taking into account how much it will annoy the development team(s).

If you follow "the rules" WRT writes, it may fit the bill. Especially these days with beauties like RDS. But then again, Aurora is pretty awesome, and didn't exist/mature until ~5 years ago or so.

Definitely more of a wart than a panacea or silver bullet. Even still, I wouldn't dismiss it outright; always keen to compare alternatives.

Overall it sounds like we're in the same camp, heh.

replies(3): >>45534857 #>>45535129 #>>45538831 #
6. porridgeraisin ◴[] No.45534857{4}[source]
What would you say are the primary tradeoffs?
7. jasonthorsness ◴[] No.45535023[source]
Agree; the part of the application requiring multi-master semantics is probably a small piece, and it can be handled outside the database, where there is enough domain-specific knowledge to make it simpler and more obvious how conflicts, for example, are avoided or handled.
8. riku_iki ◴[] No.45535129{4}[source]
> I've seen multi-master HA architecture work out great for 10M+ DAU games

could you tell us what kind of DB that was, so we can understand whether it's an apples-to-apples comparison to multi-master PG?

replies(1): >>45540309 #
9. pgedge_postgres ◴[] No.45535213[source]
There are a lot of ways to approach the common problems found when running multi-master / active-active PostgreSQL. (A complete guide on this, specifically for PostgreSQL, was written by one of our solutions engineers, Shaun Thomas: https://www.pgedge.com/blog/living-on-the-edge)

Could you elaborate on what problems you experienced?

replies(1): >>45538499 #
10. ownagefool ◴[] No.45537431[source]
That's not multi-master in the typical sense; it's sharding, and done correctly you shouldn't have any write conflicts, because each shard should be strongly consistent within itself.

Typically, a strongly consistent (CP) system works by having a single elected master, where writes are only ack'd when they're written to a majority of the cluster. The downsides of this system are that you need a majority of the cluster working and up-to-date, plus the performance impact of doing so.

A multi-master (AP) system generally allows writes to any master node, but has some conflict-resolution scheme that picks and chooses winners among conflicting writes. It should be faster and more available, at the cost of potentially lost data.

There are some systems that claim to beat CAP, but they typically have caveats and assurances that are required. After all, if you ack a write and then that node blows up, how will it ever sync?
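
To make the "picks and chooses winners" part concrete, here's a toy last-write-wins resolver (one common policy; the names are illustrative, and the silently discarded version is exactly the "potentially lost data"):

    from dataclasses import dataclass

    @dataclass
    class RowVersion:
        node: str          # node that accepted the write
        updated_at: float  # wall-clock timestamp of the write
        data: dict

    def resolve(a: RowVersion, b: RowVersion) -> RowVersion:
        # Last-write-wins: the later timestamp survives and the other
        # version is silently discarded -- that discard is the lost data.
        if a.updated_at == b.updated_at:
            return a if a.node < b.node else b  # deterministic tie-break
        return a if a.updated_at > b.updated_at else b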

replies(1): >>45538050 #
11. sgarland ◴[] No.45538050{3}[source]
> There are some systems that claim to beat CAP but they typically have caveats and assurances that are required.

If by “caveats and assurances,” you mean “relax the definitions of CAP,” then yes. CAP, in its strict definition, has been formally proven [0].

> After-all, if you ack a write, and then that node blows up, how will it ever sync?

That’s just async replication.

0: https://www.comp.nus.edu.sg/~gilbert/pubs/BrewersConjecture-...

12. sgarland ◴[] No.45538499[source]
To clarify, I was working with 2ndQuadrant BDR (now EDB Postgres Distributed), running on some extremely large EC2 instances, in a global mesh - IIRC, five regions. Also in fairness, EDB told us that we were the largest mesh they had seen, and so we frequently ran into fun edge cases.

Each node had N replicas running vanilla Postgres attached, which were on EC2s with node-local NVMe drives for higher performance. This was absolutely necessary for the application. There were also a smattering of Aurora Postgres instances attached, which the data folk used for analytics.

In no particular order:

* DDL is a nightmare. BDR by default will replicate DDL statements across the mesh, but the locking characteristics combined with the latency between `ap-southeast-2` and `us-east-1` (for example) meant that we couldn't use it; thus, we had to execute it separately on each node. Also, since the attached Aurora instances were blissfully unaware of anything but themselves, for any table-level operations (e.g. adding a column), we had to execute it on those first, lest we start building up WAL at an uncomfortable pace due to replication errors.

* You know how it's common to run without FK constraints, because "scalability," etc.? Imagine the fun of having devs manage referential integrity combined with eventual consistency across a global mesh.

* Things like maximum network throughput start to become concerns. Tbf, this is more due to modern development's tendency to use JSON everywhere, and to have heavily denormalized tables, but it's magnified by the need to have those changes replicated globally.

* Hiring is _hard_. I can already hear people saying, "well, you were running on bare EC2s," and sure, that requires Linux administration knowledge as a baseline - I promise you, that's a benefit. To effectively manage a multi-master RDBMS cluster, you need to know how to expertly administrate and troubleshoot the RDBMS itself, and to fully understand the implications and effects of some of those settings, you need to have a good handle on Linux. You're also almost certainly going to be doing some kernel parameter tuning. Plus, in the modern tech world, infra is declared in IaC, so you need to understand Terraform, etc. You're probably going to be writing various scripts, so you need to know shell and Python.

There were probably more, but those are the main ones that come to mind.

replies(2): >>45539029 #>>45540532 #
13. sgarland ◴[] No.45538649[source]
I replied above with some problems I experienced, but this question is slightly different, so I'll add more here.

IME - both at a place using active-active, and at places that suggested using it - the core issue is developer competency. People in general like to think of themselves as above average in most areas of life (e.g. "I'm an above-average driver"). I'm certainly not excluded from this, but over the last several years, I like to think I've become self-aware enough to understand my own limitations, and to know what I am and am not an expert in.

So, you'll get devs who read some blog posts, and then when the CTO announces that they're going multi-region, they rush forward with the excitement of people not yet hardened by the horrors of distributed systems. They're probably running a distributed monolith, because obviously the original monolith had to be decomposed into microservices for trendy reasons, but since that wasn't done well, they now have a dependency chain, each with its own sub-dependencies.

There is also a general lack of understanding of computing fundamentals in the industry. By fundamentals, I mean knowledge of concepts like latency (and the relative latency of CPU cache levels, RAM, disk, network, etc.), IOPS, etc. People love to believe that these lower-order elements have been abstracted away, but abstractions leak, and then you're stuck. There are also more practical skills that I wrongly assumed were universal, like the ability to profile one's code, read logs, and read technical documentation for the tools you're using.

Finally, there is an overwhelming desire to over-complicate, and to build anew instead of using existing and proven technology. Why run HAProxy when you can build your own little health checker for fun in NodeJS (this actually happened to me)? Sure, we could redesign our schema to have better normalization, and stop using UUIDv4 PKs so our pages aren't scattered all around the B+tree, or we could just rent bigger servers, and add another caching layer.

14. sgarland ◴[] No.45538831{4}[source]
> But then again, Aurora is pretty awesome

Why do you like Aurora? Genuinely curious. Here's my list of pros and cons, after having used Aurora MySQL.

Pro: Buffer pool persistence after restart is admittedly a very cool trick. That's it. That's the pro. The cons are long.

It's slow as hell. I don't know why this comes as a shock to anyone, but it's probably due to my statement answering your other question about a lack of knowledge of computing fundamentals. When your storage lives on 6 nodes spread dozens of miles apart, and you need quorum ack to commit, you're gonna have some pretty horrendous write latency. I have run benchmarks (realistic ones for a workload at a previous employer) comparing Aurora to some 13-year old Dell servers I have, and the ancient Dells won every time, by a lot. They didn't technically even have node-local storage; they had NVMe drives in a Ceph pool over a Mellanox Infiniband network.
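
Back-of-the-envelope, the quorum math alone explains a lot (the 4-of-6 write quorum is from Aurora's design paper; the latencies below are made-up but plausible cross-AZ and local-NVMe numbers):

    # Commit latency is gated by the 4th-fastest ack out of 6 storage nodes
    # (Aurora's 4/6 write quorum), versus a single local NVMe fsync.
    cross_az_rtts_ms = [0.6, 0.7, 0.9, 1.1, 1.3, 1.6]  # illustrative RTTs
    quorum_ack_ms = sorted(cross_az_rtts_ms)[3]         # 4th ack closes the quorum
    local_nvme_fsync_ms = 0.05                          # illustrative

    print(f"quorum ack ~{quorum_ack_ms} ms vs local fsync ~{local_nvme_fsync_ms} ms "
          f"(~{quorum_ack_ms / local_nvme_fsync_ms:.0f}x)")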

The re-architecture - for MySQL anyway - required them to lose the change buffer. This buffers writes to secondary indices, which is quite helpful for performance. So now, not only do all writes to indices have to go directly to disk, they have to do so over a long distance, and achieve quorum. Oof.

Various InnoDB parameters that I would like to tune (and know how to do so correctly) are locked away.

I believe that AWS is being deceptive when they tout the ability to have 128 (now 256) TiB of storage. Yes, you can hit those numbers. Good luck operating there, though. Take one of the most common DDL operations performed: secondary index builds. AWS fully knows that this would take forever if written to the cluster volume, so they have a "local storage" drive (which is actually EBS) attached to the instance that's used for temporary storage of things like on-disk temp tables for sorts, and secondary index builds. This drive is sized vaguely proportionally to the size of the instance, and cannot be adjusted. If you have a large table - which you're likely to have if you're operating close to the cluster storage limits - you will likely discover that there isn't enough room on this drive to create an index. Sorry, have fun with that!

Finally, purely on a philosophical level, I find the idea of charging for I/O to be absolutely atrocious. Charge me a flat rate, or at the very least, a rate per byte, or some other unit that's likely to be understood by an average dev. "We charge you per page fetched from disk, except we charge for writes in 4 KiB segments, but sometimes they get batched together" - madness.

15. asah ◴[] No.45539029{3}[source]
"DDL is a nightmare"

Can I ask more about this? I assume you created a procedure around making DDL changes to the global cluster... What was that procedure like? What tools did you use (or create) to automate/script this? What failure modes did it encounter?

replies(1): >>45539707 #
16. sgarland ◴[] No.45539707{4}[source]
Bold of you to assume it was automated. The process I used was tmux with pane synchronization.

I asked to automate it (probably would've just been a shell script, _maybe_ Python, issuing SQL commands to stdin), but people were afraid of unknown unknowns.
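
Roughly the shape I had in mind, for what it's worth (psycopg2, the node names, and the example DDL are all hypothetical - this is a sketch, not something we actually ran): the attached Aurora instances go first, then each BDR node in turn.

    import psycopg2

    # Hypothetical node list: attached Aurora instances first (per the ordering
    # constraint mentioned upthread), then each BDR write node, one at a time.
    NODES = ["aurora-analytics-1", "aurora-analytics-2",
             "bdr-us-east-1", "bdr-ap-southeast-2"]

    DDL = "ALTER TABLE orders ADD COLUMN notes text;"

    for host in NODES:
        conn = psycopg2.connect(host=host, dbname="app", user="ddl_runner")
        conn.autocommit = True
        with conn.cursor() as cur:
            # Fail fast rather than queueing behind long-running queries.
            cur.execute("SET lock_timeout = '5s';")
            cur.execute(DDL)
        conn.close()
        print(f"applied on {host}")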

17. bigwheels ◴[] No.45540309{5}[source]
It was a huge cluster (30 large servers in total) of MySQL with the MyISAM engine running in a Master-Master configuration. No foreign keys allowed; the apps were responsible for correctly enforcing data constraints.

Nowadays it seems the answer has somehow become "Pay MongoDB a ton of money for a support contract" and call it a day (Fortnite by Epic Games). Let's just say this isn't really my style, but somehow the game does work. To be real with you, keeping track of player scores, doing lobby matchmaking, and storing a few hundred or thousand items is pretty straightforward, even at "high-scale".

replies(1): >>45540374 #
18. riku_iki ◴[] No.45540374{6}[source]
> MyISAM engine

MyISAM didn't support ACID transactions, so it can't be an apples-to-apples comparison; they're just very different niches.

There are plenty of distributed databases on the market today which could be used if you don't need ACID transactions.

replies(1): >>45540666 #
19. bonesmoses ◴[] No.45540532{3}[source]
I don't recall which customer you may have been, but the standard solution to that specific DDL issue with BDR is to use Stream Triggers to enable row versioning. One of the 2ndQuadrant customers used it extensively for multi-region cross-version app schema migrations that could last for months.

Essentially, what that boils down to is that you create stream triggers that intercept the logical stream and modify it to fit the column orientation by version. During the transition, the triggers are deployed to specific nodes while modifications are rolled out. Once everything is on the new version, the triggers are all dropped until the next migration.
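
Conceptually it amounts to something like this (a toy Python sketch of the per-version row mapping; the column names are made up, and this is not the actual stream-trigger syntax):

    # Rows arriving from a v1 node are reshaped into the v2 layout before
    # being applied locally; in BDR this logic lives in a stream trigger.
    def map_v1_to_v2(row: dict) -> dict:
        out = dict(row)
        out["address"] = out.pop("addr", None)  # v2 renamed "addr" -> "address"
        out.setdefault("country", "US")         # v2 added a column with a default
        return out

    def apply_incoming(row: dict, local_schema_version: int) -> dict:
        return map_v1_to_v2(row) if local_schema_version >= 2 else row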

Spock doesn't have anything like that _yet_, but as you observed, being unable to use DDL replication significantly increases complexity, and tmux is a poor substitute.

20. bigwheels ◴[] No.45540666{7}[source]
Oops - I wasn't fully awake yet! It was using InnoDB.

MyISAM is mostly a huge "NOPE!" from me -_-

replies(1): >>45541282 #
21. riku_iki ◴[] No.45541282{8}[source]
And do you know what the transaction+replication story was there? Were all changes transactionally and synchronously replicated?