264 points davidgomes | 71 comments
paulryanrogers ◴[] No.41875055[source]
Upgrades are hard. There was no replication in the before times. The original block-level replication didn't work across different major versions. Slony was a painful workaround, based on triggers, that amplified writes.

Newer PostgreSQL versions are better. Yet still not quite as robust or easy as MySQL.

At a certain scale even MySQL upgrades can be painful. At least when you cannot spare more than a few minutes of downtime.

replies(7): >>41875126 #>>41876174 #>>41876232 #>>41876375 #>>41877029 #>>41877268 #>>41877959 #
1. api ◴[] No.41875126[source]
I've always wondered why Postgres is so insanely popular. I mean, it has some nice things, like powerful support for a very comprehensive subset of SQL functionality, but most apps don't need all that.

It really feels like early 1990s vintage Unix software. It's clunky and arcane and it's hard to feel confident doing anything complex with it.

replies(9): >>41875168 #>>41875240 #>>41875306 #>>41875587 #>>41876346 #>>41876770 #>>41877119 #>>41877951 #>>41878394 #
2. stickfigure ◴[] No.41875168[source]
What's the alternative? MySQL? No transactional DDL, immediate fail.
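
For illustration, a minimal sketch of what transactional DDL buys you (table and column names invented):

  BEGIN;
  ALTER TABLE users ADD COLUMN email_verified boolean NOT NULL DEFAULT false;
  -- Wrong table? In Postgres the whole migration rolls back cleanly;
  -- in MySQL the ALTER would already have been implicitly committed.
  ROLLBACK;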
replies(3): >>41875276 #>>41875729 #>>41876552 #
3. justin_oaks ◴[] No.41875240[source]
> It really feels like early 1990s vintage Unix software. It's clunky and arcane and it's hard to feel confident doing anything complex with it.

How software "feels" is subjective. Can you be more specific?

replies(3): >>41875286 #>>41875625 #>>41877469 #
4. jes5199 ◴[] No.41875276[source]
I worked for a company that migrated from MySQL to Postgres, but then got big enough that they wanted to hire full-time database experts and ended up migrating back to MySQL because it was easier to find talent.
replies(4): >>41875436 #>>41875533 #>>41875636 #>>41879900 #
5. threeseed ◴[] No.41875286[source]
The command-line experience is old-school, e.g. to show tables:

  \c database
  \dt
Versus:

  use database
  show tables
replies(4): >>41875328 #>>41875350 #>>41875573 #>>41875689 #
6. tpmoney ◴[] No.41875306[source]
> I've always wondered why Postgres is so insanely popular.

In no particular order, my preference for postgres is driven by:

  * Date / time functions that don't suck
  * UTF-8 is really UTF-8
  * 99% of a backup can be done live with nothing more than rsyncing the data directory and the WAL files (see the sketch after this list)
  * Really comprehensive documentation
  * LTREE and fuzzy string match extensions
  * Familiarity from using it for years
MySQL/Maria I'm sure is fine, but it's one of those things where it's just different enough, and I haven't encountered a compelling use case for changing my preference.
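
A minimal sketch of that live-backup flow, assuming PostgreSQL 15+ (where pg_start_backup/pg_stop_backup were renamed) and a role with backup privileges:

  SELECT pg_backup_start('nightly', true);  -- second arg: take a fast checkpoint
  -- ...rsync the data directory and WAL files to the backup target here...
  SELECT * FROM pg_backup_stop();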
replies(3): >>41875336 #>>41876296 #>>41879736 #
7. dventimi ◴[] No.41875328{3}[source]
That's psql.
8. fhdsgbbcaA ◴[] No.41875336[source]
UTF-8 is what made me switch. It's insane that MySQL has something called utf8 that isn't really UTF-8, but does have a type utf8mb4 that actually is correct. This means if you use UTF-8 in MySQL, you can't use emoji, for example.
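
A quick illustration of the difference in MySQL (hypothetical tables; the old utf8 alias means utf8mb3, max 3 bytes per character):

  CREATE TABLE t_old (s VARCHAR(10) CHARACTER SET utf8mb3);
  INSERT INTO t_old VALUES ('😀');  -- errors (or, in non-strict mode, mangles): emoji need 4 bytes
  CREATE TABLE t_new (s VARCHAR(10) CHARACTER SET utf8mb4);
  INSERT INTO t_new VALUES ('😀');  -- stored correctly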
replies(3): >>41875430 #>>41876285 #>>41878032 #
9. fhdsgbbcaA ◴[] No.41875350{3}[source]
It’s also faster to type.
replies(1): >>41876111 #
10. bastawhiz ◴[] No.41875430{3}[source]
And the fact that adding real utf-8 support limited (limits?) the length of strings that can be indexed
replies(1): >>41875764 #
11. bastawhiz ◴[] No.41875436{3}[source]
Dunno if that says much about Postgres, but it says a lot about the company
12. ◴[] No.41875533{3}[source]
13. rootusrootus ◴[] No.41875573{3}[source]
I assume this is really what it comes down to. If psql added those verbose-but-descriptive commands a whole bunch of people comfortable with mysql would be a lot happier using postgres.
14. DonHopkins ◴[] No.41875587[source]
Because it's not tainted and cursed by Oracle, like MySQL (and Oracle).
replies(1): >>41877266 #
15. dalyons ◴[] No.41875625[source]
It requires a ton of somewhat arcane maintenance at scale: vacuum shenanigans, index fragmentation requiring manual reindexing, txid wraparounds. I like Postgres, but it's definitely way more work to maintain a large instance than MySQL. MySQL just kinda works.
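
The kind of routine care being described; a sketch, with hypothetical relation names:

  -- check txid wraparound headroom per database
  SELECT datname, age(datfrozenxid) AS xid_age FROM pg_database ORDER BY 2 DESC;
  -- manual cleanup when autovacuum falls behind
  VACUUM (VERBOSE, ANALYZE) big_table;
  -- rebuild a bloated index without blocking writes (PG 12+)
  REINDEX INDEX CONCURRENTLY big_table_pkey;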
16. georgyo ◴[] No.41875689{3}[source]
I started with MySQL in 2006 for my personal projects, but what first won me over to psql was those commands.

Today I use CLIs like usql to interact with MySQL and SQLite so I can continue to use those commands.

At first glance they may be less obvious, but they are significantly more discoverable: \? just shows you all of them. In MySQL it always feels like I need to Google it.

replies(1): >>41876427 #
17. cosmotic ◴[] No.41875729[source]
It's not just DDL that isn't transactional; there's a whole bunch of other things that aren't. And they break the transactionality silently. It's like an obstacle course where bumping into something might be fatal.
replies(1): >>41875779 #
18. evanelias ◴[] No.41875764{4}[source]
Postgres limits btree keys to 2704 bytes, which is actually slightly smaller than MySQL's limit of 3072 bytes, assuming the default InnoDB storage engine.

That said, when using utf8mb4 in an index key, MySQL uses the "worst case" of each character being 4 bytes. So it effectively limits the max key size to 3072/4 = 768 characters, when a column is using the utf8mb4 character set.

For practical purposes, this doesn't cause much pain, as it's generally inadvisable to use complete long-ish strings as a key. And there are various workarounds, like using prefixes or hashes as the key, or using binary strings as keys to get the full 3072 bytes (if you don't need collation behaviors).
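
A sketch of the prefix workaround in MySQL (hypothetical table; 191 characters keeps the key under the old 767-byte InnoDB limit at 4 bytes per character):

  CREATE TABLE pages (
    url VARCHAR(2048) NOT NULL,
    KEY idx_url_prefix (url(191))  -- index only the first 191 characters
  ) CHARACTER SET utf8mb4;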

replies(1): >>41875901 #
19. evanelias ◴[] No.41875779{3}[source]
What specific non-DDL things are you referring to here?

Aside from DDL, the only other major ones are manipulating users/grants, manipulating replication, a small number of other administrative commands, and LOCK TABLES.

This is all documented very clearly on https://dev.mysql.com/doc/refman/8.4/en/implicit-commit.html. Hardly an "obstacle course".

replies(1): >>41880682 #
20. bastawhiz ◴[] No.41875901{5}[source]
> So it effectively limits the max key size to 3072/4 = 768 characters, when a column is using the utf8mb4 character set.

This is exactly what I mean. 768 characters for an index is woefully bad. And for no obviously great reason: you can just index the encoded UTF-8 text.

This was literally the reason why a former company (who will remain nameless) refused to add Unicode support. It's not even an imagined problem.

replies(2): >>41876001 #>>41876347 #
21. evanelias ◴[] No.41876001{6}[source]
Why did you need to index fairly long strings in their entirety in a way that preserves collation behaviors?

And why is a 768 character limit woefully bad, but a 2704 character limit is totally fine?

replies(2): >>41876259 #>>41876450 #
22. eYrKEC2 ◴[] No.41876111{4}[source]
Not after you have to google, "What's the equivalent of `show tables` in postgres?", because the psql command names are completely arbitrary.
replies(5): >>41876166 #>>41877357 #>>41878020 #>>41878561 #>>41884086 #
23. fhdsgbbcaA ◴[] No.41876166{5}[source]
Which you need to do exactly once.
replies(1): >>41876478 #
24. bastawhiz ◴[] No.41876259{7}[source]
A URL, for instance, can't be safely stored in 768 characters, but it can be stored safely in 2704. If you then wanted to sort those URLs so that all URLs for each domain and path within that domain are adjacent, you need an index. Especially if you want to paginate over them with a cursor. Doing that without an index on the raw value is a royal pain in the ass.

Hell, even just being able to sort user-submitted strings up to a kilobyte. Why up to a kilobyte? Some users have strings that are kind of long. If I have to define a second column that's the truncated prefix, that's just a silly waste of space because MySQL decided to size utf8mb4 keys as if they were UTF-32 under the hood.

replies(3): >>41876474 #>>41876664 #>>41876823 #
25. sgarland ◴[] No.41876285{3}[source]
> This means if you use UTF-8 in MySQL, you can't use emoji, for example.

I for one have always viewed this as a perk.

replies(1): >>41876570 #
26. sgarland ◴[] No.41876296[source]
MySQL does have ON UPDATE for its DATETIME, though; something that Postgres inexplicably still lacks.
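
The MySQL column definition in question; in Postgres the usual substitute is a BEFORE UPDATE trigger that sets NEW.updated_at:

  CREATE TABLE notes (
    id INT PRIMARY KEY,
    body TEXT,
    updated_at DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
  );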
replies(1): >>41877969 #
27. fzeindl ◴[] No.41876346[source]
* transactional DDL

* comprehensive transaction model using different modes

* PostGIS and lots of other great extensions

* supports most of the current SQL standard and is clear on interpretation of edge-cases in the documentation

* support for writing stored procedures in any major programming language

* many useful functions regarding dates, sets, ranges, json, xml, ...

* custom datatypes

* extremely thought-out and standardized approach to development: if a feature is included it generally works well in interaction with everything else

* syntax, semantics and performance are all very predictable

* great documentation

Regarding MySQL / MariaDB: MySQL optimized for performance first. Until 2010 the standard storage engine, MyISAM, didn't even support transactions.

PostgreSQL always focused on correctness and stability and then made sure everything performed.
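
As a sketch of the custom-datatype point (names invented):

  -- a domain: a base type plus a reusable constraint
  CREATE DOMAIN email AS text CHECK (VALUE ~ '^[^@]+@[^@]+$');
  -- a composite type usable as a column or return type
  CREATE TYPE money_amount AS (amount numeric(12,2), currency char(3));
  CREATE TABLE invoices (
    id bigint PRIMARY KEY,
    billing_contact email,
    total money_amount
  );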

replies(1): >>41877457 #
28. sgarland ◴[] No.41876347{6}[source]
You should not be indexing 768 characters in any circumstance I can imagine. Go ahead and try it. Spin up two tables, fill them with a few million rows, and slap an index on them. Give one a reasonable prefix limit, and let the other go wild. Make sure you ANALYZE each, then run queries in a loop and check the times.

Spoiler: I literally did this a couple of days ago. The index size bloat means that any possible savings you might have gained from fewer collisions are obliterated by the extra page fetches. I tested with a measly 128 characters vs. a prefix of 16, and that was enough for the average query time to be equal, with the smaller index winning for the minimum.

29. stephenr ◴[] No.41876427{4}[source]
> At first glance they may be less obvious, but they are significantly more discoverable. \? Just shows you all of them. In MySQL it always feels like I need to Google it.

In MySQL either `?` or `help` or `\?` will show you the help...

30. fweimer ◴[] No.41876450{7}[source]
You might just load someone else's data, and the index is desirable in general for speeding up analytic queries. It's possible to work around that, of course. But depending on what you do, it can make writing efficient queries against the data more difficult. That's just a distraction because most of the time, those long columns won't matter anyway.
31. evanelias ◴[] No.41876474{8}[source]
> A URL, for instance

VARBINARY is typically ok for that I'd think? Then you can utilize the full 3072 byte limit for the key, since there's no character set applied.

> even just being able to sort user-submitted strings up to a kilobyte

As a software engineer, I completely agree. But as a DBA, I am obligated to make a "tsk" sound and scowl disapprovingly!

32. kalleboo ◴[] No.41876478{6}[source]
I need to manually admin my database server maybe once every 2 years or so. Definitely not remembering them 2 years later.
replies(1): >>41889579 #
33. cvalka ◴[] No.41876552[source]
TiDB
34. fhdsgbbcaA ◴[] No.41876570{4}[source]
A database that doesn’t give you back what you put into it is never a perk. It literally can’t handle storing and retrieving the data.
replies(1): >>41878624 #
35. crazygringo ◴[] No.41876664{8}[source]
To be honest, indexes aren't designed for that. They're meant for fast lookup of short identifiers. Things like people's names and product IDs. Not long URLs. It's not performant.

If you need to keep a million long URLs in a defined sort order, my first recommendation would be, don't -- see if there's another way to achieve your end result. But if you absolutely have to, then create a new integer column to be your sort key, and use a little bit of extra code to give it values that produce the same sort order.

Creating short numerical primary keys for long strings is a common database technique.
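
A sketch of that technique, using gap-based keys (hypothetical schema):

  CREATE TABLE urls (
    id bigint PRIMARY KEY,
    url text NOT NULL,
    sort_key bigint NOT NULL  -- assigned in URL order, in steps of 1000
  );
  CREATE INDEX idx_urls_sort ON urls (sort_key);
  -- to insert between neighbors 1000 and 2000, assign 1500;
  -- renumber with a batch job when a gap is exhausted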

replies(1): >>41880646 #
36. moogly ◴[] No.41876770[source]
> I've always wondered why Postgres is so insanely popular

Real answer: no licensing cost

37. The_Colonel ◴[] No.41876823{8}[source]
> it can be stored safely in 2704

No, it can't. A URL doesn't have any length limit, regardless of the fact that different software imposes different limits.

replies(1): >>41880612 #
38. Scramblejams ◴[] No.41877119[source]
> I've always wondered why Postgres is so insanely popular.

Just another anecdote: MySQL lost data for me (2004). I spent some time evaluating the projects and Postgres’ development process seemed much more mature — methodical, careful, and focused on correctness. Boring, which I loved.

I didn’t need whatever perf advantage MySQL had so I switched to Postgres and never looked back. And then the Oracle drama and Monty’s behavior around it — not saying he was wrong or right, but it was the opposite of boring — just reinforced my decision.

I like to play with new tech in various spots of the stack, but for filesystems and databases I go boring all the way.

39. immibis ◴[] No.41877266[source]
That's what MariaDB is for, right? I'm surprised to hear people recommend the Oracle fork of MySQL (still called MySQL because they own the trademark) rather than the original project (now called MariaDB)
40. ◴[] No.41877357{5}[source]
41. arkh ◴[] No.41877457[source]
> * custom datatypes

Good in theory. But last time I checked the main libs for connecting to pgsql, everything you get back from the database is a string. So you need something in your app to convert those strings to the equivalent data structures.

replies(1): >>41879925 #
42. arkh ◴[] No.41877469[source]
Having to tinker with pg_hba.conf files on the server to manage how users can connect.
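
The kind of entry being tinkered with (values illustrative):

  # TYPE  DATABASE  USER     ADDRESS       METHOD
  host    appdb     appuser  10.0.0.0/24   scram-sha-256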
replies(1): >>41878377 #
43. vbezhenar ◴[] No.41877951[source]
For me Postgres is 100% predictable and reliable. It's neither clunky nor arcane in my experience. I don't need to think about it, I just SQL it and that's about it. It quietly works in the background. At some scale there might be some issues, but there is always known path to solve things.
44. fanf2 ◴[] No.41877969{3}[source]
Isn’t ON UPDATE related to foreign keys and independent of the data type? https://www.postgresql.org/docs/current/ddl-constraints.html...
replies(1): >>41878336 #
45. Symbiote ◴[] No.41878020{5}[source]
They are clearly abbreviations.

\c is for connect.

\dt is for describe tables.

46. homebrewer ◴[] No.41878032{3}[source]
I won't defend that utf8 brain damage, but the defaults have been sane since 2018 — you don't need to set the encoding; it's set to proper utf8 out of the box. MySQL 8 cleaned up a lot of this legacy stuff.
replies(1): >>41882967 #
47. paulryanrogers ◴[] No.41878336{4}[source]
Maybe they're thinking of TIMESTAMP in MySQL, which IIRC would auto-update its value on any update to the row. That was useful for updated_at-like columns. Though I think they later limited it to only the first TIMESTAMP column in a table.
replies(1): >>41878876 #
48. paulryanrogers ◴[] No.41878377{3}[source]
I'd agree that's annoying, yet it's usually just a one-off task, unless you really want different IP allowlists per user.
replies(1): >>41880521 #
49. paulryanrogers ◴[] No.41878394[source]
I've never lost data with PostgreSQL. MySQL had enough data loss bugs and foot guns that I ran into a few of them.
50. mxey ◴[] No.41878561{5}[source]
\? shows the help
51. sgarland ◴[] No.41878624{5}[source]
I don’t want to see emoji in my database. The customer is only right in matters of taste, not engineering.
replies(2): >>41879747 #>>41882994 #
52. sgarland ◴[] No.41878876{5}[source]
No, it works for both [0] types. The first-TIMESTAMP thing you're referring to is that if explicit_defaults_for_timestamp isn't enabled, the first TIMESTAMP column automatically gets auto-initialization and auto-update applied, unless you explicitly define it not to. This was the default behavior in 5.7, but it has since been changed.

[0]: https://dev.mysql.com/doc/refman/8.0/en/timestamp-initializa...

53. ttfkam ◴[] No.41879736[source]
Transactional DDL!
54. ttfkam ◴[] No.41879747{6}[source]
Uhh… not wanting to see emojis is a matter of taste, not engineering.
55. icedchai ◴[] No.41879900{3}[source]
Ugh. I worked with MySQL earlier in my career (until about 10 years ago.) All the companies since have been Postgres. All my personal projects are Postgres. I can't imagine going back.
56. ttfkam ◴[] No.41879925{3}[source]
You're thinking only in terms of application. Types in the db save storage space, allow for better validation than plain strings, can be correlated cleanly with other columns with the same type, etc.

Yes, more drivers and libraries should support the more expansive data type list, but even just within the database itself there are multiple advantages.

57. Tostino ◴[] No.41880521{4}[source]
In complex environments it is not just a one off task. I dealt with it by automating my infrastructure with ansible, but without some tooling it sucks.
58. bastawhiz ◴[] No.41880612{9}[source]
Browser address bars have a limit of 2048 characters, so if that's your use case, yes, it's safe.
replies(1): >>41885721 #
59. bastawhiz ◴[] No.41880646{9}[source]
> indexes aren't designed for that. They're meant for fast lookup of short identifiers. Things like people's names and product ID's. Not long URL's. It's not performant.

This is objectively false. If this were true, indexes wouldn't serve range queries. You couldn't index on dates. You couldn't sort numbers.

> But if you absolutely have to, then create a new integer column to be your sort key, and use a little bit of extra code to give it values that produce the same sort order.

This fails when you need to insert new values into the table. Then you not only need to figure out the new integer value (how, if you can't efficiently compare sorted string values?), you also need to update all the integers to make room.

replies(1): >>41881416 #
60. stickfigure ◴[] No.41880682{4}[source]
"Aside from missing his head, the patient appears to be in fine shape."
replies(1): >>41880824 #
61. evanelias ◴[] No.41880824{5}[source]
That hardly seems equivalent. Why do you need to e.g. reconfigure replication inside of a transaction in the first place?

The lack of transactional DDL is a totally valid complaint, but the non-DDL stuff is just a total head-scratcher to me. Aside from DDL, implicit commits have literally never impacted me in my 21 years of using MySQL.

replies(1): >>41885276 #
62. crazygringo ◴[] No.41881416{10}[source]
Sorry, I was considering short things like dates and numbers as identifiers. I realize that's not quite right -- what I should have said was that indexes are designed for short things period (short identifiers being one of those things). Thanks.

> This fails when you need to insert new values into the table.

Yes, that's part of the extra code you need to keep the values accurately sorted. There are a lot of different particular code solutions that might work -- whether allowing for collisions and re-ordering every night with a cron job, or putting large gaps between numbers, or using floats.

But my main point stands, which is that standard relational databases are not designed to be able to maintain a sorted index of long URLs out of the box. Indexes aren't meant for that and they won't work, and this is by design. You're going to have to roll your own code for that.

Fortunately I've never come across a case in the wild where maintaining a globally sorted list of long items was required (though I'm not saying such cases never exist). E.g. if you're building a spider that needs to match against URLs, you'd index a short hash of the URL as a non-unique index. Or if you wanted to display sorted URLs for a site, you'd index by domain name only, and then sort the remainder of the URL at query time.

replies(1): >>41883759 #
63. fhdsgbbcaA ◴[] No.41882967{4}[source]
Good to hear they saw the light but after I switched to Postgres I never had a single regret.

In a competitive market where people make very long term engineering decisions based on stability and reliability you can’t fuck up this badly and survive.

64. fhdsgbbcaA ◴[] No.41882994{6}[source]
Ok so if you are doing sentiment analysis of user product reviews you want to silently truncate emoji because you don’t like them? That’s a good idea how?
65. bastawhiz ◴[] No.41883759{11}[source]
> But my main point stands, which is that standard relational databases are not designed to be able to maintain a sorted index of long URLs out of the box.

You keep saying that, but Postgres does a great job, with no issues and without any extra work. "It's not designed for that" isn't a good answer if it works great. Show me how the underlying data structures fail or perform poorly if it's really not something you should do.

replies(2): >>41883900 #>>41884084 #
66. crazygringo ◴[] No.41883900{12}[source]
No, Postgres doesn't. 2704 bytes is not long enough to hold all URLs encountered in the wild. But also, your performance will suffer if you use that whole length; you generally don't want to be doing that.

The difference between MySQL and Postgres here is negligible. It doesn't matter exactly where you define the limit of a short field, except that it should probably be able to hold a maximum-length filename (255 characters), plus some room to spare. Both MySQL and Postgres handle this fine.

67. evanelias ◴[] No.41884084{12}[source]
> MySQL is alone in being suboptimal.

It's only suboptimal if you choose the wrong column type for the task at hand. For storing URLs, you almost certainly don't want collation behaviors, such as accent insensitivity or case insensitivity. So VARBINARY is a better choice here anyway.

And as several other commenters have mentioned, at large scale, indexing a bunch of long URLs in b-trees is indeed a bad practice performance-wise in any relational database. You won't be able to fit many entries per page, so read performance will be slow, especially for range scans.

In that situation it's almost always better to use a non-unique index over a prefix (if you need sorting and range scans) or a hash (if you don't), and disambiguate collisions by having the full value in an unindexed column. And/or split the URL up between the domain name and path in separate columns. If needed, normalize the domain names into a separate table so that the URL table can refer to them by numeric ID. etc. All depends on the specific use-case.
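
One way the hash variant can look in MySQL (a sketch; assumes 5.7+ generated columns, names invented):

  CREATE TABLE urls (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    url VARBINARY(4096) NOT NULL,
    url_hash BINARY(32) AS (UNHEX(SHA2(url, 256))) STORED,
    KEY idx_url_hash (url_hash)
  );
  -- lookup: WHERE url_hash = UNHEX(SHA2(?, 256)) AND url = ?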

68. ahoka ◴[] No.41884086{5}[source]
They kinda make sense if you consider that Postgres was not an SQL database in the beginning. Quirky though.
69. stickfigure ◴[] No.41885276{6}[source]
Sorry - I was trying to make light of the discussion. DDL is so important that it's silly to talk about the other stuff.
70. The_Colonel ◴[] No.41885721{10}[source]
Safari allows 80,000 characters; Firefox, 65K.

There are plenty of needs to store URLs that will never go through a browser.

You can only claim that some URL use cases fit within 2048 characters.

71. dventimi ◴[] No.41889579{7}[source]
Sounds like a YP