Most active commenters
  • (4)
  • jeltz(4)
  • fforflo(3)
  • dfox(3)
  • srcreigh(3)
  • emptiestplace(3)
  • sgarland(3)

365 points virtualwhys | 132 comments
1. terminalbraid ◴[] No.41896249[source]
https://news.ycombinator.com/item?id=41892830
replies(4): >>41897128 #>>41897142 #>>41898153 #>>41898993 #
2. hlandau ◴[] No.41896827[source]
With every new PostgreSQL release we see yet more features and sugar added to the frontend, yet seemingly no meaningful improvement to the backend/storage layer, which suffers from these fundamental problems.

I wish the PostgreSQL community would stop chasing more frontend features and spend a concerted few years completely renovating their storage layer. The effort in each release seems massively and disproportionately skewed towards frontend improvements without the will to address these fundamental issues.

It's absurd that in 2024, "the world's most advanced open source database" doesn't have a method of doing upgrades between major versions that doesn't involve taking the database down.

Yes, logical replication exists, but it still doesn't do DDL, so it has big caveats attached.
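
As a rough sketch of the workflow being described (names like upgrade_pub and the connection string are made up), the logical replication route for a major-version upgrade looks something like this:

    -- On the old cluster (publisher):
    CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

    -- On the new cluster (subscriber), after copying the schema over
    -- with pg_dump --schema-only:
    CREATE SUBSCRIPTION upgrade_sub
        CONNECTION 'host=old-primary dbname=app user=replicator'
        PUBLICATION upgrade_pub;

    -- The caveat from above: DDL is not replicated, so any ALTER TABLE
    -- run on the publisher after this point has to be applied to the
    -- subscriber by hand before it can keep applying changes.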

replies(6): >>41896999 #>>41897049 #>>41897077 #>>41897182 #>>41905260 #>>41907744 #
3. factormeta ◴[] No.41896999[source]
what about https://github.com/orioledb/orioledb ?
4. paulryanrogers ◴[] No.41897049[source]
> a method of doing upgrades between major versions that doesn't involve taking the database down.

For large instances this is a big ask, especially of a project without a single person in charge. MySQL does have better replication, yet still often requires manually setting that up and cutting it over to do major version upgrades.

5. jandrewrogers ◴[] No.41897077[source]
The design of good storage layers in databases is deeply architectural. As a consequence, it is essentially a "forever" design decision. Fundamentally changing the storage architecture will alter the set of tradeoffs being made such that it will break the assumptions of existing user applications, which is generally considered a Very Bad Thing. The existing architecture, with all its quirks and behaviors, is part of the public API (see also: Hyrum's Law).

In practice, the only way to change the fundamental architecture of a database is to write a new one, with everything that entails.

replies(1): >>41901775 #
6. hn_throwaway_99 ◴[] No.41897093[source]
Wow, as someone who feels like I'm decently familiar with the ins and outs of Postgres, I thought this was a great article and I learned a ton.

It seems like one of the biggest fundamental flaws is that Postgres chose the O2N approach for tracking row versions instead of N2O. While switching to N2O wouldn't solve all problems (e.g. the article also talks about how Postgres stores full row copies and not just diffs), from an "80/20 rule" perspective, it seems like it would get rid of most of the downsides of the current implementation. For example, I'd assume that the vast majority of the time transactions want the latest row version, so using the N2O ordering means you could probably do away with storing each row version in an index, as you'd only need to traverse the linked list if you needed an older version, which should be much less common.
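
One way to see the behavior the article describes is to watch Postgres' hidden system columns across an update; a tiny sketch (the table t is made up):

    CREATE TABLE t (id int PRIMARY KEY, val text);
    INSERT INTO t VALUES (1, 'a');

    -- ctid is the tuple's physical location, xmin/xmax the creating and
    -- deleting transaction ids.
    SELECT ctid, xmin, xmax, val FROM t;

    UPDATE t SET val = 'b' WHERE id = 1;

    -- The live row now has a new ctid (a brand new full tuple was written);
    -- the old tuple stays behind as a dead version until VACUUM removes it.
    SELECT ctid, xmin, xmax, val FROM t;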

replies(1): >>41904048 #
7. metadat ◴[] No.41897128[source]
Thanks for pointing this out, it's an existing discussion started half a day ago with 5 comments:

The Part of PostgreSQL We Hate the Most (2023)

https://news.ycombinator.com/item?id=41892830

8. ◴[] No.41897140[source]
9. ◴[] No.41897142[source]
10. quotemstr ◴[] No.41897182[source]
Better yet: decouple front and back ends. Let them talk over a stable interface and evolve independently. The SQLite ecosystem is evolving in this direction, in fits and starts.
replies(2): >>41897304 #>>41909460 #
11. globular-toast ◴[] No.41897284[source]
The article says the benefit of O2N is there's no need to immediately update indexes, but then goes on to say postgres updates the indexes anyway! So is there actually any advantage to O2N at all?
replies(2): >>41897608 #>>41897609 #
12. anarazel ◴[] No.41897304{3}[source]
You can implement a different mvcc model today, without patching code.
13. nightfly ◴[] No.41897421[source]
> MySQL and Oracle store a compact delta between the new and current versions (think of it like a git diff).

Doesn't git famously _not_ store diffs and instead follows the same storage pattern postgres uses here and stores the full new and old objects?

replies(6): >>41897457 #>>41897486 #>>41897759 #>>41897885 #>>41899164 #>>41899189 #
14. jmholla ◴[] No.41897457[source]
That is correct. Each version of a file is a separate blob. There is some compression done by packing to make cloning faster, but the raw form git works with is these blobs.
replies(2): >>41897535 #>>41898446 #
15. ChadNauseam ◴[] No.41897486[source]
TBF, the quoted section doesn't say that git stores diffs (or anything about git storage), it just says that what MySQL and Oracle store is similar to a git diff.
replies(2): >>41900208 #>>41906576 #
16. vinnymac ◴[] No.41897492[source]
The part I hate the most is that in 2024 I still need a connection pooler (such as pgbouncer) in front of it to make it usable.
replies(1): >>41901827 #
17. avg_dev ◴[] No.41897553[source]
pretty informative. now i understand why people are often having issues with vacuum-related stuff. i like the diagrams too.
18. fweimer ◴[] No.41897588[source]
The big advantage is that you do not need any extra space if your workload mostly consists of INSERTs (followed by table drops). And it's generally unnecessary to split up insertion transactions because there is no size limit as such (neither on the generated data nor on the total number of rows changed). There is a limit on statements in a transaction, but you can sidestep that by using COPY FROM if you do not have to switch tables too frequently. From a DBA point of view, there is no need to manage a rollback/undo space separately from table storage.
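
For the COPY FROM point, a minimal sketch (table, columns and file path are made up; from psql, \copy does the same from the client side):

    -- One COPY replaces millions of individual INSERT statements, which
    -- sidesteps the per-transaction statement limit mentioned above.
    BEGIN;
    COPY measurements (ts, sensor_id, reading)
        FROM '/tmp/measurements.csv' WITH (FORMAT csv, HEADER true);
    COMMIT;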

Every application is a bit different, but it's not that the PostgreSQL design is a loser in all regards. It's not like bubble sort.

replies(4): >>41897732 #>>41902493 #>>41904079 #>>41905601 #
19. apavlo ◴[] No.41897608[source]
If all table pages exist in memory and you are using cooperative GC, then O2N can be preferable. As workers scan version chains, they can clean up dead tuples without taking additional locks.

This is what Microsoft Hekaton does.

20. kikimora ◴[] No.41897609[source]
Good question! They also point out that the famous Uber article erroneously attributed write amplification to what it assumed was N2O ordering. I don't know whether the write amplification is real or not, but if the design really is O2N, then there is no apparent reason for write amplification, and the entire Uber article might have been based on the wrong premise.
replies(1): >>41909499 #
21. indulona ◴[] No.41897732[source]
> but it's not that the PostgreSQL design is a loser in all regards

the article literally says that pg's mvcc design is from the 90s and no one does it like that any more. that is technology that is outdated by over 30 years. i'd say it does not make it a loser in all regards, but in the most important aspects.

replies(4): >>41898162 #>>41898235 #>>41898303 #>>41902986 #
22. derefr ◴[] No.41897751[source]
Question: is storing full new row-tuple versions something fundamental to Postgres as a whole, or is it just a property of the default storage engine / “table access method”?
replies(2): >>41897917 #>>41898523 #
23. Hendrikto ◴[] No.41897759[source]
Git diffs are generated on the fly, but diffs are still diffs.
24. simonw ◴[] No.41897771{4}[source]
Saying "that's incorrect" is a lot more productive than saying "that's a lie".

Calling something a lie implies that the incorrect information was deliberate.

25. vrosas ◴[] No.41897807[source]
I’ve been trashed for suggesting as much. People here seem to think any team or startup not choosing Postgres is committing some type of negligence.
replies(1): >>41898201 #
26. ori_b ◴[] No.41897810{4}[source]
Git does both. When you create a commit, it stores a full (zipped) copy of the object, without any deltas.

Periodically (I believe it used to be every thousand commits, though I'm not sure what the heuristic is today), git will take the loose objects and compress them into a pack.

The full blob format is how objects are manipulated by git internally: to do anything useful, the objects need to be extracted from the blob, with all deltas applied, before anything can be done with them.

It's also worth noting that accessing a deltified object is slow (O(n) in the number of deltas), so the length of the delta chain is limited. Because deltification is really just a compression format, it doesn't matter how or where the deltas are done -- the trivial "no deltas" option will work just fine if you want to implement that.

You can trivially verify this by creating commits and looking in '.git/objects/*' for loose objects, running 'git repack', and then looking in '.git/objects/pack' for the deltified packs.

27. paulddraper ◴[] No.41897885[source]
1. The comparison was to MySQL and Oracle storage using git diff format as an analogy, not git storage.

2. git storage does compress, and the compression is "diff-based" of sorts, but it is not based on commit history as one might naively expect.

28. haradion ◴[] No.41897887{4}[source]
The file contents are logically distinct blobs. Packfiles will aggregate and delta-compress similar blobs, but that's all at a lower level than the logical model.
replies(1): >>41902053 #
29. msie ◴[] No.41897894[source]
Nicely written article. Easy to read and understand!
30. paulddraper ◴[] No.41897917[source]
You could store partial tuples without disrupting the interface.

Though full tuples are pretty fundamental to the underlying implementation... MVCC, VACUUM, etc. It'd be a massive change, to say the least.

31. fforflo ◴[] No.41898053[source]
OrioleDB was supposed to tackle this problem with a new storage engine: https://github.com/orioledb/orioledb
replies(1): >>41898410 #
32. infogulch ◴[] No.41898153[source]
@dang maybe merge these threads?
replies(1): >>41901201 #
33. naranha ◴[] No.41898162{3}[source]
At least CouchDB is also append-only with vacuum. So it's maybe not completely outdated.
replies(1): >>41898658 #
34. rnewme ◴[] No.41898201{3}[source]
To be fair, that's sort of a positive self-fulfilling prophecy. The more it's used, the better it's going to get.
35. thr0w ◴[] No.41898232[source]
Don't agree with their characterization of `pg_repack`. `VACUUM FULL` is definitely crushing, but that's why repack exists as a faster/lighter alternative. Anyone have a different experience?
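
For context, the two operations being compared (table name made up; the pg_repack invocation is from memory, so treat it as approximate):

    -- VACUUM FULL rewrites the whole table but holds an ACCESS EXCLUSIVE
    -- lock for the duration, which is what makes it "crushing".
    VACUUM (FULL, VERBOSE) orders;

    -- pg_repack (CREATE EXTENSION pg_repack; plus the CLI, roughly
    -- "pg_repack --table=orders --dbname=app") builds a new copy of the
    -- table online and only takes the exclusive lock briefly for the swap.
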
replies(2): >>41898486 #>>41899466 #
36. mikeocool ◴[] No.41898235{3}[source]
When it comes to your data store, some people might consider using technology that’s been reliably used in production by many organizations for 30 years a feature not a bug.

I’d prefer not to be the first person running up against a limit or discovering a bug in my DB software.

replies(2): >>41898511 #>>41908322 #
37. kunley ◴[] No.41898303{3}[source]
Still, I am very happy to use every day the technology designed in the early 70s by Ken Thompson and colleagues. So far in that specific field many have tried to invent something more "modern" and "better" and failed, with the exception of a certain Finnish clone of that tech, also started in the 80s by the way.

So, newer does not always mean better, just saying.

replies(3): >>41899219 #>>41899463 #>>41907434 #
38. dfox ◴[] No.41898326[source]
> Oracle and MySQL do not have this problem in their MVCC implementation because their secondary indexes do not store the physical addresses of new versions. Instead, they store a logical identifier (e.g., tuple id, primary key) that the DBMS then uses to look up the current version’s physical address. Now this may make secondary index reads slower since the DBMS has to resolve a logical identifier, but these DBMS have other advantages in their MVCC implementation to reduce overhead.

Interesting behavior of MySQL that I have observed (~500GB database, with a schema that is more document-oriented than relational) is that when you update a single row, doing SELECT id WHERE something; UPDATE what WHERE id=id is orders of magnitude faster than UPDATE what WHERE something. I somehow suspect that this is the reason for this behavior. But well, the normal workload will not do that, and this only slows down ad-hoc DML when you fix some inconsistency.
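
Roughly, the two patterns being compared (schema made up):

    -- Slow form: the UPDATE itself evaluates the expensive predicate.
    UPDATE docs SET status = 'fixed' WHERE payload LIKE '%something%';

    -- Much faster in the case described above: resolve the row first,
    -- then update by primary key.
    SELECT id FROM docs WHERE payload LIKE '%something%';
    UPDATE docs SET status = 'fixed' WHERE id = 12345;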

replies(2): >>41898716 #>>41902118 #
39. kiwicopple ◴[] No.41898410[source]
(I’m on the Supabase team)

Oriole has joined us at supabase now and it’s being worked on full time by Alexander and his team. Here is the patch set:

https://www.orioledb.com/docs#patch-set

It will be available to try on the supabase platform later this year too

replies(3): >>41898435 #>>41899346 #>>41899603 #
40. fforflo ◴[] No.41898435{3}[source]
Yeah, I've been keeping an eye on the pgsql-hackers discussions. Alexander+team are doing great work.
41. quotemstr ◴[] No.41898446{3}[source]
git's model is a good example of layered architecture. Most of the code works in terms of whole blobs. The blob storage system, as an implementation detail, stores some blobs with diffs. The use of diffs doesn't leak into the rest of the system. Good separation of concerns
42. dfox ◴[] No.41898486[source]
pg_repack is a hack-ish solution to do what VACUUM FULL does without completely locking the relation in question. But when you care about either of these things, your workload has significant issues, with the typical case being using pgsql as a backend for something that was originally a thick client designed for some kind of RDBMS based on shared files (InterBase mentioned in TFA, MS Jet, whatever…)
43. mannyv ◴[] No.41898511{4}[source]
Well every product has issues. The question is, do you feel like dealing with those issues or not?

Flat files have also been reliably used in production for decades. That doesn't mean they're ideal...although amusingly enough s3 and its equivalent of flat files is what we've migrated to as a data store.

replies(1): >>41908961 #
44. dfox ◴[] No.41898523[source]
From the PQ protocol PoV the way this works is pretty much irrelevant, but the actual implementation of PostgreSQL contains a ridiculous number of places that depend on the “backward” MVCC implementation of the tuple heaps.
45. jongjong ◴[] No.41898538[source]
For most cases, MVCC sounds like over-engineering. From the problem description:

> The goal of MVCC in a DBMS is to allow multiple queries to read and write to the database simultaneously without interfering with each other when possible.

How is that a problem for most use cases?

If there is a read query which is taking a long time, with many rows, and some of the later rows happen to be updated mid-read while the earlier rows are not... it's not really a problem for the vast majority of applications. Why is it better for all rows to be delivered out of date versus just the first half fetched being out of date? It's not ideal in either case, but it's unavoidable that some requests can sometimes return out-of-date data. It seems like a tiny advantage.

I suspect the real need to implement MVCC arose out of the desire for databases like Postgres to implement atomic transactions as a magical black box.

IMO, two-phase commit is a simpler solution to this problem. It's not possible to fully hide concurrency complexity from the user; it ends up with tradeoffs.

replies(1): >>41898609 #
46. thih9 ◴[] No.41898543[source]
> Then in the 2010s, it was MongoDB because non-durable writes made it “webscale“.

Off topic, it was marketing all along: https://news.ycombinator.com/item?id=15124306

replies(1): >>41899286 #
47. kccqzy ◴[] No.41898609[source]
One person's over engineering is another person's essential feature. I personally like the fact that Postgres supports the serializable isolation level that simplifies application programming.

> It's not really a problem for the vast majority of application.

This is true, but I don't even want to have to think about when it is indeed not really a problem versus the few cases when it is a problem.

replies(1): >>41898746 #
48. jbellis ◴[] No.41898658{4}[source]
High performance has never been a reason to use couchdb.
49. whazor ◴[] No.41898716[source]
A SELECT is a read-only operation and can be performed in parallel. However, an UPDATE actually writes and might lock the table, whereas UPDATE ... WHERE id=id allows for row-level locking. There is also the risk of missing newly inserted records between the SELECT and the UPDATE.
replies(1): >>41898733 #
50. nine_k ◴[] No.41898733{3}[source]
SELECT FOR UPDATE was invented to address this.
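
A minimal sketch of the pattern (table and columns made up):

    BEGIN;
    -- Locks the matched rows so no other transaction can change or delete
    -- them before this transaction commits.
    SELECT id FROM accounts WHERE balance < 0 FOR UPDATE;
    UPDATE accounts SET flagged = true WHERE balance < 0;
    COMMIT;
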
replies(2): >>41898851 #>>41905546 #
51. magicalhippo ◴[] No.41898746{3}[source]
> I personally like the fact that Postgres supports the serializable isolation level that simplifies application programming.

Not sure how PG implements it, but I tried it in a case where I did need it in SQLAnywhere, and only found out a bit too late that while the docs stated it was very detrimental to performance, the docs didn't explicitly say why, and it was much worse than I had assumed.

I assumed it meant the transaction would lock the table, do its thing and release on commit/rollback. And of course, that would hurt performance a lot if there was high contention. But no, that's not what it did. It was much, much worse.

Instead of taking a lock on the whole table, it locked all the rows. Which went as swimmingly as you could expect on a table with thousands upon thousands of rows.

Not sure why they did it this way, but yeah had to ditch that and went with the good old retry loop.
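
For what it's worth, in Postgres the serializable level doesn't take those row locks up front; a conflicting transaction instead fails at commit time and the application retries, something like this (schema made up):

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT sum(amount) FROM ledger WHERE account_id = 42;
    INSERT INTO ledger (account_id, amount) VALUES (42, -100);
    COMMIT;
    -- On a conflict with a concurrent serializable transaction this fails
    -- with SQLSTATE 40001 (serialization_failure); the retry loop simply
    -- runs the whole transaction again.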

replies(1): >>41900778 #
52. OrvalWintermute ◴[] No.41898780[source]
This article is incorrect IMO - the following section in particular.

“ In the 2000s, the conventional wisdom selected MySQL because rising tech stars like Google and Facebook were using it. Then in the 2010s, it was MongoDB because non-durable writes made it “webscale“. In the last five years, PostgreSQL has become the Internet’s darling DBMS. And for good reasons! It’s dependable, feature-rich, extensible, and well-suited for most operational workloads.”

Smart engineers were choosing Postgres not because of the argumentum ad populum fallacy, but for the following reasons:

• Data safety - not MyISAM
• ACID
• Similarity to Oracle
• MVCC
• SQL standards adherence
• Postgres team
• Helpful, awesome community
• Data types
• High performance
• BSD flexibility

Above are the reasons I selected Postgres while at AT&T in the early 2000s, and our Oracle DBA found it a very easy transition. While MySQL went through rough transitions, PG has gone from strength to strength on an ever-improving path.

I think Bruce Momjian is a big part of this success; they truly have an excellent community. <3

replies(1): >>41903184 #
53. fipar ◴[] No.41898851{4}[source]
Or just select + update in a transaction, which, IIRC, with the default isolation level will use optimistic locking for the select part, unlike select for update.
replies(1): >>41910708 #
54. arp242 ◴[] No.41898972{4}[source]
Sjeez, tone it down. People can be incorrect without lying.
55. evanelias ◴[] No.41898993[source]
The overall topic was also discussed extensively in this subthread from 6 days ago:

https://news.ycombinator.com/item?id=41837317

56. ◴[] No.41899164[source]
57. epcoa ◴[] No.41899189[source]
Others have mentioned that it said “git diffs”. However git does use deltas in pack files as a low level optimization, similar to the MySQL comparison. You don’t get back diffs from a SQL query either.
58. nine_k ◴[] No.41899219{4}[source]
Speaking of which, if you try an actual System V in an emulator, or look at C code in K&R style, certain progress, as in "much more actually usable", can be noticed.

While persisting key architectural ideas certainly has benefits, so does evolving their implementations.

59. Hilift ◴[] No.41899286[source]
It was designed by former DoubleClick engineers as an afterthought DIY db for another service because no other db met their requirements. Supposedly version 4.2.8 (2020) is fairly solid, i.e. no dirty writes. https://en.wikipedia.org/wiki/MongoDB#Technical_criticisms
60. justinclift ◴[] No.41899346{3}[source]
As a data point, there are easily noticeable typos on that docs page. Might be a good idea to have someone run a spell checker over it at some point?
61. throwawayie6 ◴[] No.41899463{4}[source]
> exception of a certain Finnish clone of that tech

Are you referring to C++? That was actually created by a Danish guy, who was also inspired by the object oriented Simula language created in the 60s

replies(1): >>41899553 #
62. fovc ◴[] No.41899466[source]
The main downside is the requirement to have two copies of the table simultaneously, on systems that make it easy to add but not subtract storage. Otherwise pg_repack has worked really well.

We solved the 2x storage with partitions, but it feels like the tail wagging the dog

63. nneonneo ◴[] No.41899553{5}[source]
Pretty sure the OP was referring to UNIX and its “Finnish clone” Linux.
64. philippemnoel ◴[] No.41899603{3}[source]
The whole ParadeDB team is really excited for OrioleDB and Supabase to ship this :) It's long overdue in the Postgres ecosystem!
65. halayli ◴[] No.41899794[source]
This topic cannot be discussed without also talking about disks. SSDs write a 4k page at a time, meaning that if you're going to update 1 bit, the disk will read 4k, you update the bit, and it writes back a 4k page in a new slot. So the penalty for copying varies depending on the disk type.
replies(2): >>41900275 #>>41904085 #
66. mbreese ◴[] No.41900026[source]
So, if this is such a problem, my question is — are the poor MVCC choices of Postgres enough to make the authors (or people here) recommend another RDBMS?
replies(1): >>41900276 #
67. zdragnar ◴[] No.41900208{3}[source]
It's a little too easy to misinterpret if you're skimming and still have memories of working with SVN, mercurial, perforce, and probably others (I've intentionally repressed everything about tfvc).
68. srcreigh ◴[] No.41900275[source]
Postgres pages are 8kb so the point is moot.
replies(2): >>41901535 #>>41901808 #
69. abenga ◴[] No.41900276[source]
The last couple of paragraphs of the article answer this. (The answer is No).
replies(1): >>41900542 #
70. srcreigh ◴[] No.41900307[source]
It’s really annoying to see people write that Postgres has a “primary index” and “secondary indexes”. No. That’s not what those words mean. Every index in Postgres is a secondary index.
replies(1): >>41903808 #
71. mbreese ◴[] No.41900542{3}[source]
Thanks - I completely missed the “concluding remarks” paragraph the first time. After the “problems” sections, I apparently just stopped reading.

For others who are curious:

> But please don’t misunderstand our diatribe to mean that we don’t think you should ever use PostgreSQL. Although its MVCC implementation is the wrong way to do it, PostgreSQL is still our favorite DBMS. To love something is to be willing to work with its flaws.

72. WuxiFingerHold ◴[] No.41900626[source]
Putting aside the license / closed source issues with CockroachDB (CRDB) and just focusing on it technically: CRDB uses MVCC too, but its storage is a key-value store. I know it uses some kind of garbage collection to remove the old versions.

I wonder if CRDB (or other newer-designed DBs) has circumvented those issues? Or do we just not hear about those issues because CRDB and the other newer DBs are not that widely used and are mainly in the commercial space?

replies(2): >>41901017 #>>41901777 #
73. sitharus ◴[] No.41900778{4}[source]
One of the best things about postgresql is the documentation. They document not only the features, but the constraints and considerations for using it and why they exist.

So from reading https://www.postgresql.org/docs/17/transaction-iso.html#XACT... we can tell that using serializable transactions only locks data actually used.

replies(1): >>41901173 #
74. qaq ◴[] No.41901017[source]
CRDB will be slower than PG on the same hardware. With CRDB you are trading off performance for the ability to scale horizontally.
75. magicalhippo ◴[] No.41901173{5}[source]
Yea that's much, much better. I also note that it goes for the retry instead of locking.
76. wordofx ◴[] No.41901201{3}[source]
No
77. bluedonuts ◴[] No.41901448[source]
Loved this post! Could anyone recommend a book (or other medium) with similar content about RDBMS internals?
replies(2): >>41904497 #>>41910345 #
78. olavgg ◴[] No.41901535{3}[source]
The default is 8kb, but it can be recompiled for 4kb-32kb. I actually prefer 32kb because with ZSTD compression it will most likely only use 8kb after being compressed. The average compression ratio with ZSTD is usually between 4x-6x, but depending on how compressible your data is, you may also get a lot less. Note that changing this block size requires initializing a new data directory for your Postgres database.
79. Negitivefrags ◴[] No.41901775{3}[source]
You could select which storage approach to use on a per-table level. That way the new characteristics don't surprise anyone.
replies(1): >>41902154 #
80. ddorian43 ◴[] No.41901777[source]
This is fixed in YugabyteDB, which reuses the PostgreSQL query layer source code but uses its own storage: https://www.yugabyte.com/blog/improve-postgresql/ (other issues too, like XID wraparound etc).

It's also Apache 2.0 licensed.

replies(1): >>41907805 #
81. halayli ◴[] No.41901808{3}[source]
I am referring to physical pages on an SSD. The 8k pg page maps to 2 pages on a typical SSD. Your comment proves my initial point, which is that write amplification cannot be discussed without talking about disk types and their behavior.
replies(2): >>41902116 #>>41903957 #
82. bvrmn ◴[] No.41901827[source]
Let me guess, PHP?
replies(1): >>41903625 #
83. thaumasiotes ◴[] No.41902053{5}[source]
Is that relevant to something? The logical model is identical for every source control system. Deltas are a form of compression for storage in every source control system.
replies(1): >>41904694 #
84. emptiestplace ◴[] No.41902116{4}[source]
Huh? It seems you've forgotten that you were just saying that a single bit change would result in a 4096 byte write.
replies(1): >>41906755 #
85. fforflo ◴[] No.41902118[source]
I have a couple of read-heavy >2TB Postgres instances, document-oriented too. You're right that bulk updates can be too slow. Too many times I end up doing the updates incrementally (in batches) or even using COPY.
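
The batched variant looks roughly like this (schema made up); each chunk is its own transaction, so transactions stay short and autovacuum can keep up:

    UPDATE docs
       SET payload = jsonb_set(payload, '{migrated}', 'true')
     WHERE id IN (
           SELECT id
             FROM docs
            WHERE NOT (payload ? 'migrated')
            ORDER BY id
            LIMIT 10000);
    -- repeat until the statement reports 0 rows updated
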
replies(1): >>41903043 #
86. emptiestplace ◴[] No.41902154{4}[source]
This is the MySQL approach, but it isn't without downsides - consistency, predictability, etc.
87. mxey ◴[] No.41902815[source]
> The need for PostgreSQL to modify all of a table’s indexes for each update has several performance implications. Obviously, this makes update queries slower because the system has to do more work.

You know, I was wondering something regarding this write amplification. It's true that MySQL doesn't need to update its indexes like that. However, MySQL replication relies on the binlog, where every change has to be written in addition to the database itself (InnoDB redo log and so on). So, it seems to me, MySQL, if used in a cluster, has a different kind of write amplification. One that PostgreSQL does not have, because it reuses its WAL for the replication.

In addition, on the receiving side, MySQL first writes the incoming binlog to the relay log. The relay log is then consumed by the applier threads, creating more InnoDB writes and (by default) more binlog.

88. j16sdiz ◴[] No.41902986{3}[source]
> the article literally says that pg's mvcc design is from the 90s and...

Actually, it is 1980s. The article:

> Its design is a relic of the 1980s and before the proliferation of log-structured system patterns from the 1990s.

89. andruby ◴[] No.41903043{3}[source]
You also want to avoid long transactions to avoid lock contention. Every statement is also a transaction, so chunking it up helps a lot on busy databases.
replies(1): >>41904988 #
90. andruby ◴[] No.41903184[source]
Similar. My preference switched from MySQL to PostgreSQL in 2005 when I wanted to use database views to create a "live" compatibility layer between an old (AS400) database schema and a modern Rails app.

The preference kept growing thanks to data safety, DDL's in transactions, etc.

91. uvas_pasas_per ◴[] No.41903373[source]
My problem with PG is it just doesn't seem to help much with my situation. I want to write apps that work offline and sync data across devices using the 'cloud'. I think that means SQLite on the client, but ?? for the server. I have yet to find a good book explaining techniques for this kind of replication/syncing.
replies(4): >>41903396 #>>41903423 #>>41903746 #>>41904249 #
92. tapoxi ◴[] No.41903396[source]
CouchDB?
replies(2): >>41903464 #>>41903976 #
93. beng-nl ◴[] No.41903423[source]
At risk of saying something you know, CRDT’s may be a good fit for your use case.

The problem is, of course, making changes offline that the user assumes are permanent, but which later, when sync time comes, turn out to conflict with changes made in the meantime. So the changes can't be made permanent. That either requires difficult UX to reconcile, or something that will always give you a consistent result, like a CRDT.

94. EGreg ◴[] No.41903464{3}[source]
PouchDB
95. nobleach ◴[] No.41903625{3}[source]
I see this pattern all over the place, not just PHP. When leveraging containers, what's a better option? Certainly Java, Rust, even NodeJS all have decent enough connection pools for PG. But telling those apps that are running in those containers, "uhhh, you're just one among many so, your pooling strategy may not represent reality". I've seen folks try to solve this by literally doing replica math when an app boots up (are there 8 replicas running? ok, divide Postgres' default 100 connections by that). Moving the pool outside the app container seems like a better move.
96. sgarland ◴[] No.41903746[source]
I’ve never used this, but I know people who work here [0]. Might be useful for you?

[0]: https://ditto.live/

97. sgarland ◴[] No.41903808[source]
You’re not technically wrong, but even Postgres’ docs [0] use the term Primary Key.

[0]: https://www.postgresql.org/docs/current/ddl-constraints.html

replies(1): >>41904432 #
98. mschuster91 ◴[] No.41903957{4}[source]
> The 8k pg page maps to 2 pages in a typical SSD disk.

You might end up with even more than that due to filesystem metadata (inode records, checksums), metadata of an underlying RAID mechanism or, when working via some sort of networking, stuff like ethernet frame sizes/MTU.

In an ideal world, there would be a clear interface which a program can use to determine for any given combination of storage media, HW RAID, transport layer (local attach vs stuff like iSCSI or NFS), SW RAID (i.e. mdraid), filesystem and filesystem features what the most sensible minimum changeable unit is to avoid unnecessary write amplification bloat.

99. uvas_pasas_per ◴[] No.41903976{3}[source]
Looks interesting, but I don't see anything for how to build it into an iOS or Android app. https://docs.couchdb.org/en/stable/install/index.html
replies(1): >>41905390 #
100. topherjaynes ◴[] No.41904048[source]
You should check out Andy's History of Databases (CMU Databases / Spring 2020) on YouTube. He does the entire first class from the streets of Amsterdam because he can't get into his hotel... he's an interesting character and he's insanely good at explaining the ins and outs.
replies(1): >>41905905 #
101. winternewt ◴[] No.41904079[source]
> Every application is a bit different, but it's not that the PostgreSQL design is a loser in all regards. It's not like bubble sort.

When doing game development in the early 2000s I learned that bubble sort is not a loser in all regards. It performs well when a list is usually almost sorted. One situation when this is the case is in 3D rendering, which sorts objects by their distance from the camera. As you move the camera around or rotate it, bubble sort works very well for re-sorting the objects given the order they had in the previous frame.

To prevent bad worst-case scenarios you can count the number of comparisons that failed on the last pass and the number of passes you have performed so far, then switch to a different sort algorithm after reaching a threshold.

replies(1): >>41909394 #
102. rand_r ◴[] No.41904085[source]
Interesting! I wonder how this plays into AWS pricing. They charge a flat rate per MBps of IO, but I don't know if they have a rule to round up to the nearest 4K, or whether they actually charge you the IO amount from the storage implementation by tracking write volume on the drive itself rather than what you requested.
103. mannyv ◴[] No.41904249[source]
Well usually you use a datastore on a server as a master, then you pull/push based on timestamps.

Firebase and cognito/appsync work this way, basically.

You can use any data store you want on the server to do that. You could theoretically push a local sqlite db up to s3 as a sync mechanism, I suppose, if you do the locking correctly.

104. srcreigh ◴[] No.41904432{3}[source]
Postgres primary key indexes are not primary indexes.
replies(1): >>41905138 #
105. cfors ◴[] No.41904497[source]
While not strictly for RDBMS, I think this book is pretty close!

https://www.databass.dev/

106. didgetmaster ◴[] No.41904577[source]
How big of an issue is this really for db users who work daily with large tables that are frequently updated but still have to be fast for queries?

The article mentioned that there are nearly 900 different databases on the market. I am trying to build yet another one using a unique architecture I developed. It is very fast, and although it is designed for transactions, I haven't implemented them yet.

I think if I spend the time and effort to do it right, this could be a real game changer (I think the architecture lends itself very well to a superior implementation); but I don't want to waste too much time on it if people don't really care one way or the other.

107. haradion ◴[] No.41904694{6}[source]
> The logical model is identical for every source control system.

Most source control systems have some common logical concepts (e.g. files and directories), but there's actually significant divergence between their logical models. For instance:

- Classic Perforce (as opposed to Perforce Streams) has a branching model that's very different from Git's; "branches" are basically just directories, and branching/merging is tracked on a per-file basis rather than a per-commit basis. It also tracks revisions by an incrementing ID rather than hashes.

- Darcs and Pijul represent the history of a file as an unordered set of patches; a "branch" is basically just a set of patches to apply to the file's initial (empty) state.

All of that is above the physical state, which also differs:

- Perforce servers track files' revision histories in a directory hierarchy that mirrors the repository's file structure rather than building a pseudo-directory hierarchy over a flat object store.

- Fossil stores everything in an SQLite database.

> Is that relevant to something?

Yes. You can use a VCS reasonably effectively if you understand its logical model but not its physical storage model. It doesn't work so well the other way around.

108. atombender ◴[] No.41904988{4}[source]
Avoiding long transactions is also about preventing the transaction from holding back vacuuming. Postgres will not vacuum tuples that are still visible to old transactions (visible as the backend_xmin in the pg_stat_activity table).

Long transactions can also cause surprising locks, because many locks taken persist to the end of the transaction, even if the transaction is no longer doing anything. This can block DDL operations as well as things like REINDEX.
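
Something like this shows the sessions involved (columns are standard pg_stat_activity ones):

    -- Open sessions whose snapshot (backend_xmin) can hold back vacuum,
    -- oldest transaction first.
    SELECT pid, usename, state, backend_xmin,
           now() - xact_start AS xact_age
      FROM pg_stat_activity
     WHERE backend_xmin IS NOT NULL
     ORDER BY xact_age DESC;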

109. sgarland ◴[] No.41905138{4}[source]
I know that, I’m saying that it’s a fairly pedantic argument when even their own docs use the term loosely.
replies(2): >>41906024 #>>41909511 #
110. RegnisGnaw ◴[] No.41905260[source]
That's my big issue with PostgreSQL. I don't care about DDL with logical replication, as I can build a replica for the switch, but no LOBs?
111. tapoxi ◴[] No.41905390{4}[source]
There was a spinoff a while ago when they merged with Membase, I haven't used Couch in a few years but maybe this will work: https://docs.couchbase.com/couchbase-lite/current/swift/quic...
112. ◴[] No.41905546{4}[source]
113. 15155 ◴[] No.41905601[source]
> It's not like bubble sort.

Bubble sort is great in hardware and for mostly-sorted sets.

114. Tostino ◴[] No.41905905{3}[source]
The content that his group puts out on YouTube is great. Been a little while since I caught up, but I was extremely impressed.
115. j16sdiz ◴[] No.41906024{5}[source]
"Primary Key" is the term in SQL standard. The url you linked is in "Part II: The SQL Language"
116. layer8 ◴[] No.41906576{3}[source]
It’s not clear why they state “git diff” specifically. It’s simply a diff (git or otherwise).
117. Tostino ◴[] No.41906755{5}[source]
> a single bit change would result in a 4096 byte write

On (most) SSD hardware, regardless of what software you are using to do the writes.

At least that's how I read their comment.

replies(1): >>41907050 #
118. emptiestplace ◴[] No.41907050{6}[source]
Right, and if pg writes 8192 bytes every time, this is no longer relevant.
119. gregw2 ◴[] No.41907434{4}[source]
Err, Linux is a child of the 90s...

Linus began work on it in April 1991: https://groups.google.com/g/comp.os.minix/c/dlNtH7RRrGA/m/_R...

120. tristan957 ◴[] No.41907744[source]
Rearchitecting the storage layer takes time. A storage manager API didn't even exist until fairly recently, maybe 14. That API needs to undergo changes to account for things that Oriole is trying to do. Postgres is not developed by a team. It's a community effort, and people work on what they want to work on. If you're interested in a previous attempt to change the storage layer, you can learn about what happened with zheap[0].

[0]: https://pgpedia.info/z/zheap.html

121. knowitnone ◴[] No.41907805{3}[source]
Thanks for pointing out Yugabyte. I did a search for performance comparisons and there seem (seemed?) to be some performance issues, at least back in 2023: https://github.com/yugabyte/yugabyte-db/issues/10108

Hopefully performance has improved since.

replies(1): >>41911154 #
122. simne ◴[] No.41908322{4}[source]
Tell this to the developers of Ariane 5, who reused old, proven software from Ariane 4.

Many people consider this the most expensive bug in history: on the first flight of Ariane 5, the rocket entered a speed range that was strictly prohibited in the Ariane 4 software, which caused a software exception, and then about a billion crashed.

Honestly, they could have re-checked all the ranges, but they decided it would cost about as much as writing new software, so to save money the decision was made to just reuse the old software without additional checks.

123. diroussel ◴[] No.41908961{5}[source]
It would be quite nice to have some of the S3 semantics on local files, like no one else can see the file until after you've finished writing it and committed it, and being able to put almost any character in the file name (key). That is quite nice in S3.
124. jeltz ◴[] No.41909394{3}[source]
But wouldn't insertion sort be better than bubble sort in those cases?
125. jeltz ◴[] No.41909460{3}[source]
This has already been done.
126. jeltz ◴[] No.41909499{3}[source]
That article was just plain bad, written by someone who was obviously a beginner to PostgreSQL and did not understand the issue he was seeing.

Yes, there are real issues, but that article should be ignored.

127. jeltz ◴[] No.41909511{5}[source]
Primary keys and primary indexes are two mostly unrelated terms.
128. Izkata ◴[] No.41909591[source]
They mention autovacuum_vacuum_scale_factor and its default value, but give no hint whether they tried to change it. Obviously I have no access to their database, but one piece of advice for ages has been, in situations similar to theirs where a lot of dead tuples accumulate and autovacuum is having trouble finishing, to lower this value so autovacuum runs much more often; each run then has less to do and less gets blocked.
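
For reference, that knob can be lowered per table rather than globally (table name made up; the default scale factor is 0.2):

    -- Vacuum this table after ~1% of it is dead instead of the default 20%.
    ALTER TABLE orders
        SET (autovacuum_vacuum_scale_factor = 0.01,
             autovacuum_vacuum_threshold    = 1000);
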
129. xiasongh ◴[] No.41910345[source]
CMU's Intro to Database Systems course is one of the best resources. Andy Pavlo has his lectures all up on YouTube.
130. nuttingd ◴[] No.41910708{5}[source]
You would need to use serializable isolation for this to hold true. Any isolation level less than serializable will use the snapshot that was active at the time of the select.

In Postgres, even with the serializable isolation level, all transactions that touch the same rows must also be using the serializable isolation level or it's not really enforced. This is one aspect of serializable isolation in Postgres that seemed like a major gotcha for real world application development. There's no future proof solution: new code can be added that doesn't use the serializable isolation, and then the assumptions of isolation from the earlier code are no longer valid.

FOR UPDATE is the only real solution in my eyes

131. ddorian43 ◴[] No.41911154{4}[source]
Yes, performance is much better. But it's a bit of an apples-to-oranges comparison because of the distributed/sharding nature (example with bitmap scans https://www.yugabyte.com/blog/bitmap-scans-on-distributed-po...) and the always-synchronous replication (https://www.yugabyte.com/blog/yugabytedb-resiliency-vs-postg...).