234 points benocodes | 13 comments
whalesalad ◴[] No.41836959[source]
So satisfying to do a huge upgrade like this and then see the actual proof in the pudding with all the reduced latencies and query times.
replies(1): >>41837062 #
hu3 ◴[] No.41837062[source]
Yeah, some numbers caught my attention, like the ~94% reduction in overall database lock time.

And to think they never have to worry about VACUUM. Ahh the peace.

replies(4): >>41837227 #>>41837317 #>>41837626 #>>41838255 #
1. anonzzzies ◴[] No.41837227[source]
Yeah, until vacuum is gone, I'm not touching Postgres. So many bad experiences with our use cases over the decades. I guess most people don't have our kind of workload, but I'm thinking Uber does.
replies(2): >>41837323 #>>41837537 #
2. RedShift1 ◴[] No.41837323[source]
Maybe just vacuum much more aggressively? Also, there have been a lot of changes to the vacuuming and autovacuuming process these last few years; these days you can pretty much forget about it.
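
For illustration, "more aggressive" mostly means lowering the thresholds and giving autovacuum more workers and memory. A rough sketch of the relevant postgresql.conf knobs (the values are placeholders, not recommendations):

    # postgresql.conf -- example values only, tune for your workload
    autovacuum = on
    autovacuum_max_workers = 6             # default is 3
    autovacuum_naptime = 10s               # check tables more often (default 1min)
    autovacuum_vacuum_scale_factor = 0.05  # vacuum after ~5% of rows change (default 0.2)
    autovacuum_vacuum_cost_limit = 2000    # let workers do more work per cycle
    autovacuum_work_mem = 1GB              # memory per autovacuum worker
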
replies(1): >>41837671 #
3. leishman ◴[] No.41837537[source]
Postgres 17 tremendously improves vacuum performance
replies(1): >>41838004 #
4. anonzzzies ◴[] No.41837671[source]
Not in our experience; for our cases it is still a resource hog. We discussed it less than a year ago with core devs and with a large Postgres consultancy; they said Postgres doesn't fit our use case, which was already our conclusion, no matter how much we want it to. MySQL is smooth as butter. I have nothing to gain from picking MySQL other than that it works; I'd rather use Postgres for the features / not-Oracle reasons, but...

Edit: also, as can be seen in the responses here and elsewhere on the web when this comes up, the fans say it's no problem, but many less religious users feel it's a massive design flaw (perfectly logical at the time, not so logical now) that sometimes stops people from using it, which is a shame.

replies(1): >>41837887 #
5. yeswecatan ◴[] No.41837887{3}[source]
What is your use case?
replies(1): >>41838128 #
6. mannyv ◴[] No.41838004[source]
Vacuuming is a design decision that may have been valid back in the day, but is really a ball and chain today.

In a low-resource environment, deferring work makes sense. But even in a low-resource environment, the vacuum process would consume huge amounts of resources to do its job, especially at any kind of scale. And the longer it's deferred, the longer the process takes. And if you actually are in a low-resource environment, it'll be a challenge to have enough disk space to complete the vacuum (I'm looking at you, SunOS 4) - and don't even talk about downtime.

I don't understand how large pgsql users handle vacuuming in production. Maybe they just don't do it and let the disk usage grow unbounded, because disk space is cheap compared to the aggravation of vacuuming?

replies(1): >>41838234 #
7. anonzzzies ◴[] No.41838128{4}[source]
We have 100,000s of tables per database, and 1,000s of those databases (think sensor/IoT data with some magic sauce that none of our competitors offer), all heavy on changes. And yes, maybe it's the wrong tool for the job (is it though, if it works without hiccups?), but migrating would be severe, so we would only attempt that if we were 100% sure it would work and the end result would be cheaper; remember, we are talking decades here, not a startup. MySQL has been taking this without any issues for decades now (including the rapid growth of the past decade), while far smaller setups with Postgres have been really painful, and all because of vacuum.

We were on Postgres in 1999, when we ran many millions of records through it, but that was when we could do a full vacuum at night without anyone noticing. The internet grew a little bit, so that's not possible anymore. Vacuum has improved too, like everyone says here, and I'm not spreading the gospel or whatever; it's just that fans (...what other word is there) blindly stating it can handle loads 'now' that they never considered is, well, weird.
replies(1): >>41838887 #
8. wongarsu ◴[] No.41838234{3}[source]
You run VACUUM often enough that you never need a VACUUM FULL. A normal VACUUM doesn't require any exclusive locks or a lot of disk space, so usually you can just run it in the background. Normally autovacuum does that for you, but at scale you transition to running it manually at low traffic times; or if you update rows a lot you throw more CPUs at the database server and run it frequently.

Vacuuming indices is a bit more finicky with locks, but you can just periodically build a new index and drop the old one when it becomes an issue.
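
For anyone following along, a rough sketch of what that looks like in SQL (table and index names are made up):

    -- Plain VACUUM: no exclusive lock, can run while traffic is flowing
    VACUUM (VERBOSE, ANALYZE) readings;

    -- Index bloat: build a replacement online, then swap it in
    CREATE INDEX CONCURRENTLY readings_sensor_id_idx_new ON readings (sensor_id);
    DROP INDEX CONCURRENTLY readings_sensor_id_idx;
    ALTER INDEX readings_sensor_id_idx_new RENAME TO readings_sensor_id_idx;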

replies(1): >>41838818 #
9. sgarland ◴[] No.41838818{4}[source]
People not realizing you can tune autovacuum on a per-table basis is the big one. Autovacuum can get a lot done if you have enough workers and enough spare RAM to throw at them.

For indices, as you mentioned, doing either a REINDEX CONCURRENTLY (requires >= PG12), or a CREATE INDEX CONCURRENTLY / DROP INDEX CONCURRENTLY (plus a rename if you'd like) is the way to go.

In general, there is a lot more manual maintenance needed to keep Postgres running well at scale compared to MySQL, which is why I'm forever upset that Postgres is touted as the default to people who have neither the slightest clue nor the inclination to do DB maintenance. RDS doesn't help you here, nor does Aurora; maintenance is still on you.
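
If it helps anyone, per-table tuning is just storage parameters; something like this (hypothetical table name, placeholder values):

    -- Make autovacuum far more eager on one hot table
    ALTER TABLE readings SET (
        autovacuum_vacuum_scale_factor = 0.01,
        autovacuum_vacuum_threshold    = 1000,
        autovacuum_vacuum_cost_limit   = 2000
    );

    -- PG12+: rebuild a bloated index in place without blocking writes
    REINDEX INDEX CONCURRENTLY readings_sensor_id_idx;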

replies(1): >>41840662 #
10. dhoe ◴[] No.41838887{5}[source]
I'd generally call this number of tables an antipattern - doing this basically implies that there's information stored in the table names that should be in rows instead, like IDs etc. But I'll admit that sensor-related use cases have a tendency to stress the system in unusual ways, which may have forced this design.
replies(1): >>41840639 #
11. anonzzzies ◴[] No.41840639{6}[source]
Especially back when we started. Now we would've done it differently, but I still think Postgres wouldn't really work. Guess we will never know, as even far smaller data sets do not work the way we need them to.
12. anonzzzies ◴[] No.41840662{5}[source]
We make good money 'saving' people from Aurora; you can throw traffic at it and pay more. We often migrate companies who then end up paying a fraction of the price.
replies(1): >>41842752 #
13. sgarland ◴[] No.41842752{6}[source]
I'm convinced that Aurora's team consists mostly of sales. There are certainly some talented engineers working on it (I've talked to a few), but by and large, all of my interactions with AWS about DB stuff have been them telling me how much better it is than other options.

I’ve tested Aurora Postgres and MySQL against both RDS and native (on my own, extremely old hardware), and Aurora has never won in performance. I’ve been told that “it’s better in high concurrency,” but IMO, that’s what connection poolers are for.
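
(For the record, "connection pooler" here means something like PgBouncer in front of Postgres. A minimal pgbouncer.ini sketch, with made-up names and sizes:)

    [databases]
    ; one backend Postgres database exposed through the pooler
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling: many app connections share few server connections
    pool_mode = transaction
    default_pool_size = 20
    max_client_conn = 2000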