Source: I have operated a large multi-master Postgres cluster.
The problem with the setup is that you will have a data corruption issue at some point. It's not an "if", it's a "when". If you don't have a plan to deal with it, you're hosed.
This is why the parent is turning around the burden of proof. If you can't definitively say why you absolutely need this, and why no other solution will do, then avoid it.
IME it comes down to considering CAP against the business goals, and taking into account how much it will annoy the development team(s).
If you follow "the rules" WRT writes, it may fit the bill, especially these days with beauties like RDS. Then again, Aurora is pretty awesome, and it didn't exist/mature until ~5 years ago.
Definitely more of a wart than a panacea or silver bullet. Still, I wouldn't dismiss it outright; I'm always keen to compare alternatives.
Overall it sounds like we're in the same camp, heh.
Why do you like Aurora? Genuinely curious. Here's my list of pros and cons, after having used Aurora MySQL.
Pro: Buffer pool persistence after restart is admittedly a very cool trick. That's it. That's the pro. The cons are long.
It's slow as hell. I don't know why this comes as a shock to anyone, but it probably ties back to my answer to your other question about a lack of knowledge of computing fundamentals. When your storage lives on 6 nodes spread dozens of miles apart, and you need quorum ack to commit, you're gonna have some pretty horrendous write latency. I have run benchmarks (realistic ones for a workload at a previous employer) comparing Aurora to some 13-year-old Dell servers I have, and the ancient Dells won every time, by a lot. They didn't technically even have node-local storage; they had NVMe drives in a Ceph pool over a Mellanox InfiniBand network.
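To make that concrete, here's a back-of-envelope sketch of why quorum commits across AZs have a hard latency floor. All the RTT numbers below are assumptions for illustration, not measurements:

    # Assumed round-trip times (ms) from the writer to six storage
    # nodes: two in the local AZ, four spread across two remote AZs.
    rtts_ms = [0.3, 0.3, 1.2, 1.3, 1.5, 1.6]
    quorum = 4  # Aurora acknowledges a write at 4-of-6 nodes

    # The commit can return once the 4th-fastest ack arrives, so the
    # floor is the quorum-th smallest RTT, i.e. a cross-AZ round trip.
    floor_ms = sorted(rtts_ms)[quorum - 1]
    print(f"commit latency floor: ~{floor_ms} ms per write")
    # A local NVMe fsync is typically tens of microseconds, which is
    # why old hardware with local(-ish) storage can still win.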
The re-architecture - for MySQL anyway - required them to lose the change buffer. This buffers writes to secondary indices, which is quite helpful for performance. So now, not only do all writes to indices have to go directly to disk, they have to do so over a long distance, and achieve quorum. Oof.
Various InnoDB parameters that I would like to tune (and know how to do so correctly) are locked away.
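To illustrate both of the last two points: on stock MySQL these are knobs you can actually inspect and turn. A minimal sketch, assuming a reachable self-managed MySQL endpoint and the mysql-connector-python package (host and credentials are placeholders):

    import mysql.connector

    conn = mysql.connector.connect(
        host="mysql.example.com", user="admin", password="...")
    cur = conn.cursor()

    # On stock MySQL the change buffer is tunable ('all', 'none',
    # 'inserts', ...); per the above, Aurora's storage re-architecture
    # dropped it entirely.
    cur.execute("SHOW VARIABLES LIKE 'innodb_change_buffering'")
    print(cur.fetchone())

    # Self-managed MySQL lets you pace background I/O to match your
    # hardware; on Aurora, knobs like this are fixed by the managed
    # parameter group (which ones, exactly, is part of the complaint).
    cur.execute("SET GLOBAL innodb_io_capacity = 4000")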
I believe that AWS is being deceptive when they tout the ability to have 128 (now 256) TiB of storage. Yes, you can hit those numbers. Good luck operating there, though. Take one of the most common DDL operations performed: secondary index builds. AWS fully knows that this would take forever if written to the cluster volume, so they have a "local storage" drive (which is actually EBS) attached to the instance that's used for temporary storage of things like on-disk temp tables for sorts, and secondary index builds. This drive is sized vaguely proportionally to the size of the instance, and cannot be adjusted. If you have a large table - which you're likely to have if you're operating close to the cluster storage limits - you will likely discover that there isn't enough room on this drive to create an index. Sorry, have fun with that!
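You can see how quickly that fixed drive becomes the constraint with some rough arithmetic; every number below is an assumption chosen for illustration:

    # Scratch space to sort one secondary index build on a big table.
    rows = 2_000_000_000     # plausible row count near the storage cap
    key_bytes = 32           # indexed column(s)
    pk_bytes = 8             # clustered PK appended to each index entry
    sort_overhead = 1.5      # merge-sort scratch factor; a guess
    temp_gb = rows * (key_bytes + pk_bytes) * sort_overhead / 1e9
    print(f"~{temp_gb:,.0f} GB of temp space for one build")  # ~120 GB
    # If the instance's fixed "local storage" volume is smaller than
    # this, the build fails, and you can't resize that volume.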
Finally, purely on a philosophical level, I find the idea of charging for I/O to be absolutely atrocious. Charge me a flat rate, or at the very least, a rate per byte, or some other unit that's likely to be understood by an average dev. "We charge you per page fetched from disk, except we charge for writes in 4 KiB segments, but sometimes they get batched together" - madness.
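For a sense of how hard that is to forecast, here's a toy estimate. The $0.20-per-million figure is the historical Aurora I/O rate, and the workload numbers are invented:

    price_per_million = 0.20    # USD per million I/O requests
    read_ios = 5_000_000_000    # pages fetched from storage (cache misses)
    write_bytes = 2 * 1024**4   # ~2 TiB of logical writes
    write_ios = write_bytes / 4096   # billed in 4 KiB units...
    # ...except batching can merge some of these, so the real bill is
    # lower by some amount you can't predict up front.
    monthly_usd = (read_ios + write_ios) / 1e6 * price_per_million
    print(f"~${monthly_usd:,.0f}/month in I/O charges alone")  # ~$1,107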
Nowadays it seems the answer has somehow become "pay MongoDB a ton of money for a support contract" and call it a day (see: Fortnite by Epic Games). Let's just say that isn't really my style, but somehow the game does work. To be real with you, keeping track of player scores, doing lobby matchmaking, and storing a few hundred or thousand items is pretty straightforward, even at "high scale".
MyISAM didn't support ACID transactions, so it's not an apples-to-apples comparison; they're just very different niches.
There are plenty of distributed databases on the market today which could be used if you don't need ACID transactions.