334 points by gjvc | 48 comments
1. throwaway892238 ◴[] No.31849720[source]
This is the future of databases, but nobody seems to realize it yet.

One of the biggest problems with databases (particularly SQL ones) is that they're a giant pile of mutable state. The whole idea of "migrations" exists because it is impossible to "just" revert any arbitrary change to a database, diff changes automatically, or merge changes automatically. You need some kind of intelligent tool or framework to generate DDL, DML, and DCL; the changes have to be applied in order; something has to check whether they've already been applied; and so on. And of course you can't roll back a change once it's been applied, unless you create even more program logic to figure out how to do that. It's all a big hack.

By treating a database as version-controlled, you can treat any operation as immutable. Make any change you want and don't worry about conflicts. You can always just go back to the last working version, revert a specific change, merge in one or more changes from different working databases. Make a thousand changes a day, and when one breaks, revert it. No snapshotting and slowly restoring the whole database due to a non-reversible change. Somebody dropped the main table in prod? Just revert the drop. Need to make a change to the prod database but the staging database is different? Branch the prod database, make a change, test it, merge back into prod.
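
Concretely, that branch-and-merge workflow might look something like this in a Dolt-style SQL session (the table, branch names, and commit message are invented for illustration; the dolt_checkout / dolt_commit / dolt_merge procedures are used the way Dolt's docs describe them, so treat this as a sketch rather than a recipe):

CALL DOLT_CHECKOUT('-b', 'pricing-fix');  -- branch off prod
UPDATE products SET price = price * 0.9 WHERE category = 'clearance';
CALL DOLT_COMMIT('-a', '-m', 'Discount clearance items');
-- run tests against the branch, then fold it back into prod
CALL DOLT_CHECKOUT('main');
CALL DOLT_MERGE('pricing-fix');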

The effect is going to be as radical as the popularization of containers. Whether you like them or not, they are revolutionizing an industry and are a productivity force multiplier.

replies(11): >>31849825 #>>31849875 #>>31849951 #>>31850566 #>>31850778 #>>31851109 #>>31851356 #>>31852067 #>>31853553 #>>31858826 #>>31865675 #
2. blowski ◴[] No.31849825[source]
It looks incredible, but somehow seems too good to be true.

What are the tradeoffs here? When wouldn't I want to use this?

replies(1): >>31849955 #
3. 411111111111111 ◴[] No.31849875[source]
> This is the future of databases, but nobody seems to realize it yet

It's a pipe dream, not the future.

Either your database is too big or has too much throughput for this to work, or migrations just don't matter. And it's not like you wouldn't need migrations with a versioned schema anyway, as otherwise a rollback would mean data loss.

replies(4): >>31850130 #>>31850350 #>>31850869 #>>31854369 #
4. jahnu ◴[] No.31849951[source]
Hasn't Datomic been doing all this for some years now?
replies(1): >>31850672 #
5. timsehn ◴[] No.31849955[source]
Creator here.

It's slower. This is `sysbench` Dolt vs MySQL.

https://docs.dolthub.com/sql-reference/benchmarks/latency

We've dedicated this year to performance with a storage engine rewrite. We'll have some performance wins coming in the back half of the year. We think we can get under 2X MySQL.

It also requires more disk. Each change averages at least 4K on disk, so you might need more/bigger hard drives.

replies(2): >>31850114 #>>31851592 #
6. EarthLaunch ◴[] No.31850114{3}[source]
Another commenter noted a need for migrations in order to handle rollbacks without data loss.
7. packetlost ◴[] No.31850130[source]
I think the problem is that relational datasets like that don't fit well into that model. In reality, it's very possible. Look at Datomic. I agree that storage is a concern for high-throughput systems, but the general trend seems to be toward streaming data and retention policies anyway.
8. throwaway892238 ◴[] No.31850350[source]
You're thinking in terms of the present, but I'm saying it's the future. At present it doesn't make sense, because nobody has yet made a database which does version control on very big datasets with a lot of throughput. But when somebody does make it, it will be completely obvious that this was something we always needed.

It's all just software. There is essentially no limit to what we can make software do as long as the hardware supports it. And there's no hardware limit I know of that says version-controlled databases can't work. We just need to figure out how they will work, and then make 'em (or try to make 'em and in the process figure it out).

> And it's not like you wouldn't need migrations with a versioned schema, as otherwise a rollback would mean data loss.

When you roll back a code change, you don't lose code, as it's still in history. If you need to revert but keep some code, you branch the code, copy the needed code into some other part of the app, revert the old change, merge everything. If on merge there is a conflict (let's presume losing data is a conflict), it can prompt you to issue a set of commands to resolve the conflict before merge. You could do all of that in a branch, test it, merge into prod, and on merge it could perform the same operations. The database does all the heavy lifting and the user just uses a console the way they use Git today.

It's probably going to be required to lock the version of software and the version of the database together, such that both are changed/reverted at the same time. But because this is version control, we could actually serve multiple versions of the same database at the same time. You could have the database present two different versions of itself with the same data COW-overlayed for each version, and two different versions of an application. You could then blue/green deploy both the application and database, each modifying only its version. If you need to revert, you can diff and merge changes from one version to another.
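
A rough sketch of how serving two versions side by side could look, assuming Dolt-style per-session branches and AS OF queries (the orders table and branch names are invented; the exact syntax is illustrative):

-- "blue" session stays on the current prod branch
CALL DOLT_CHECKOUT('main');
SELECT * FROM orders LIMIT 10;
-- "green" session works against the new version
CALL DOLT_CHECKOUT('v2');
SELECT * FROM orders LIMIT 10;
-- or peek at another branch's data without switching
SELECT * FROM orders AS OF 'v2' LIMIT 10;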

replies(2): >>31850459 #>>31851388 #
9. iamnafets ◴[] No.31850459{3}[source]
I think the problem is that the tradeoffs already exist. Most users would prefer more usable space or less money to a full history of their data.

You might be making the argument that the usability of immutable data is not there yet, but there are well-established theoretical costs of maintaining full history and I don't think they're within bounds of many real-world use-cases.

replies(2): >>31850643 #>>31851860 #
10. simlevesque ◴[] No.31850566[source]
QLDB seems like something that goes in this direction. What's your opinion of it?
replies(1): >>31851032 #
11. throwaway892238 ◴[] No.31850643{4}[source]
If the user doesn't want full history they could configure the database to expunge it with a lifecycle policy, though I think keeping deltas of the changes would make any excess file storage negligible, as most people don't seem to ever get rid of data anyway.
12. zachmu ◴[] No.31850672[source]
Lots of databases offer time travel / historical querying, including datomic, MySQL, Postgres, etc (plugins required in some cases).

Dolt's unique functionality isn't time travel, although it has that. It's version control, i.e. branch and merge, push and pull, fork and clone. A bunch of database products give you some of this for schema migrations, but Dolt is the only one that does it for table data as well.

replies(1): >>31851414 #
13. LukeEF ◴[] No.31850778[source]
This has to be imagined in the context of a post-SQL future. Unless of course you are a 'SQL is the end of history' person!
14. zachmu ◴[] No.31850869[source]
You're suffering from a failure of imagination.

Consider a CMS, one of the most common forms of database-backed applications. What if you could give your customer a "dev" branch of all their data to make their changes on and test out new content, which you could then merge back to prod after somebody reviews it in a standard PR workflow?

This is the workflow one of our earliest customers built. They run network configuration software, and they use Dolt to implement a PR workflow for all changes their customers make.

More details here:

https://www.dolthub.com/blog/2021-11-19-dolt-nautobot/
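
As a sketch, the review step can happen right in SQL (branch and table names here are invented, and the exact diff columns will differ; Dolt exposes diffs through dolt_diff system tables and table functions):

-- what did the customer change on their branch relative to prod?
SELECT * FROM dolt_diff('main', 'customer-dev', 'device_configs');
-- looks good: merge it back
CALL DOLT_CHECKOUT('main');
CALL DOLT_MERGE('customer-dev');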

replies(3): >>31851146 #>>31852808 #>>31863592 #
15. zachmu ◴[] No.31851032[source]
QLDB isn't version controlled, it's an immutable ledger database. If that's what you're looking for, definitely choose it, it's a great product.

More discussion of the immutable DB product space:

https://www.dolthub.com/blog/2022-03-21-immutable-database/

16. hinkley ◴[] No.31851109[source]
In a similar vein, I am trying to work on a tool for a volunteer group, and one of the problems I'm trying to figure out how to model is not just tracking when the data changed, but why it changed.

Sometimes when you're first entering data you just get it wrong. You said the wrench was in storeroom A1 or the customer lives on 3rd St or the tree was five feet from the sidewalk. If the state of the asset changes due to human action, that's a new thing. But if you opened the wrong list taking inventory, or you missed a keystroke for 33rd St or you just eyeballed the coordinates, then that row was simply wrong, and the correction should be back-dated for most things.

But if I emptied out A1 because we aren't renting that space anymore, the customer moves or the tree got replanted, then it was here and now it's over there. Which might be important for figuring out things like overstock, taxes or success rates.

Similarly, if the migration introduces derived data, then the migrated data is assumed/inferred, whereas if we took that data from a user, that information is confirmed, which might introduce subtle differences in how best to relate to the user. These are things a mediocre business could easily ignore, but at a quality establishment a customer might be ill-pleased by, say, being asked to re-confirm information they already provided.

17. Johannesbourg ◴[] No.31851146{3}[source]
Working with timeseries data, my experience is that clients typically underestimate how much storage they need for a single state, let alone for historic versions. For a given storage spend, the decision makers want more data, not more snapshots. But that's timeseries.
replies(1): >>31851629 #
18. qaq ◴[] No.31851356[source]
You do realize that's how MVCC already works, right? Turn off the GC process that collects old versions on a production DB and see what happens. Reverting changes out of order is not possible in many cases.
replies(1): >>31851459 #
19. hinkley ◴[] No.31851388{3}[source]
Do we make databases smarter, or start asking for database-like behavior from version control systems?

I can't help thinking how much time the git team or JetBrains or Chrome or the Confluence team have spent on backing-store implementation/migration and file formats instead of using SQLite (like, why aren't V8 heap and perf snapshots just a SQLite database?), but then many of these things operate in that gap. So do we keep improving change-over-time functionality in databases, or make VCS backends with more formal database-like behavior?

IIRC, Trac stores its wiki history in a subversion repository. Since it already had to understand commit histories and show diffs, that was a sensible choice. Of course it is easier to live with such a decision if the API is good, but I haven't heard anyone say that about any version control system yet.

replies(1): >>31853994 #
20. rapnie ◴[] No.31851414{3}[source]
I think TerminusDB does that as well.
replies(1): >>31851507 #
21. zachmu ◴[] No.31851459[source]
Right, but it is with Dolt. That's the point.

CALL DOLT_REVERT('bad-commit-hash')

Works just like git revert: creates an inverse of the changes in the commit given and applies it as a patch.
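
For example, a typical session might look like this (the hash is a placeholder; dolt_log is the commit-history system table, with commit_hash and message among its columns):

-- find the offending commit
SELECT commit_hash, message FROM dolt_log LIMIT 10;
-- undo just that commit, leaving later changes intact
CALL DOLT_REVERT('bad-commit-hash');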

replies(1): >>31858636 #
22. zachmu ◴[] No.31851507{4}[source]
Yup, TerminusDB has a very similar set of capabilities, just for a graph DB instead of SQL / relational. Very cool product if you're in the market for a graph DB.
23. kragen ◴[] No.31851592{3}[source]
(Disclaimer: I haven't tried Dolt.)

In your benchmark it's only 2.1–7.4 times slower than MySQL, average 4.4. And any database someone could fit on a disk 20 years ago (I forget, maybe 8 GB?) fits in RAM now, which makes it about 256 times faster, which is a lot bigger than 4.4. You can get a 20 TB disk now, which is plenty of space. So anything that could be done with MySQL 20 years ago can be done faster and cheaper with Dolt now, which covers, I think the technical term is, a fucking shitload of applications. It probably includes literally every 20th-century application of relational databases.

Well, except for things that have over 5 billion transactions (20 TB ÷ 4 kB/txn) over their lifetime, I guess, so it might be important to find a way to compact that 4K. 5 billion transactions is 19 months at 100 TPS. If you could get that down to 256 bytes it would be almost 25 years of 100 TPS.

Also, as I understand it, and correct me if I'm wrong here, that 4.4× slowdown buys you a bulletproof and highly performant and scalable strategy for backups (with PITR), staging servers, data warehousing, readslaves, disk error detection and recovery, cryptographically secure audit logs, bug reproduction, and backtesting. Along with the legal security the Apache 2 license gives you, which you don't have with Datomic.

Sounds fantastic! It sounds like you're selling its performance a bit short. If someone is really concerned about such a small performance loss they probably aren't really in the market for a new RDBMS.

24. hinkley ◴[] No.31851629{4}[source]
They want more data but they don't want to pay for it. People want lots of things; that doesn't mean they get it or deserve it.

I can't recall which it was but one of the timeseries databases was bragging on the fact that there are certain situations where scanning a block of data is as cheap as trying to add finer grained indexes to it, especially with ad hoc queries. They did a bunch of benchmarks that said block scanning with compression and parallelism was workable.

And while compression typically leads to write amplification (or very poor compression ratios), in a timeseries database, or a regular database architected in a timeseries-like fashion, modifying the old data is deeply frowned upon (and in fact I've heard people argue for quasi-timeseries behavior because modifying old records is so punishing, especially as the application scales), so as long as you can decide not to compress some pages - new pages - this is not a problem.

25. CPLX ◴[] No.31851860{4}[source]
As a guy who's been doing technical stuff of one kind or another since the mid 90's I would say that any analysis that insists that a specific use case has tradeoffs due to lack of memory or processing speed has an inevitable expiration date.
26. jandrewrogers ◴[] No.31852067[source]
This is how relational databases have commonly worked since at least the 1990s and is called multi-version concurrency control (MVCC). Welcome to the future, it is called PostgreSQL. There are at least two reasons no sensible database designer would allow users to operate a database in this way even though they are technically capable of it.

First, keeping every version of every piece of data forever is an excellent way to consume non-intuitively vast amounts of storage even if your data model is tiny. Every time this feature has been offered by databases, it immediately causes a rash of "out of storage" errors that force the user to manually and permanently delete large numbers of old versions. This is extremely user-unfriendly, so the feature is almost immediately removed in subsequent versions because the pain it causes far outweighs the benefits even when used carefully. In typical MVCC systems, old versions are aggressively garbage collected automatically to limit out-of-storage errors.

Second, finding or reconstructing an arbitrary number of old versions of data is unavoidably expensive. Much of the architectural difference between various MVCC implementations comes from trying to manage the rather severe performance tradeoffs of maintaining multiple versions of data and navigating to the version you need, with the understanding that all of these versions live on storage and rarely in a single place. There is no optimal way, and keeping version chains short is critical for good performance.

There is very deep literature around MVCC-style databases. The challenges of generalizing and maximally exploiting MVCC as a user feature while having performance that is not poor to the point of unusability are thoroughly documented.

replies(1): >>31852336 #
27. zachmu ◴[] No.31852336[source]
MVCC is not version control, and time travel / historical querying is not version control.

Dolt's unique functionality isn't time travel, although it has that. It's version control, i.e. branch and merge, push and pull, fork and clone. A bunch of database products give you some of this for schema migrations, but Dolt is the only one that does it for table data as well.

replies(1): >>31854987 #
28. password4321 ◴[] No.31852808{3}[source]
See also: https://versionpress.com (not being actively developed)
replies(1): >>31854202 #
29. dizhn ◴[] No.31853553[source]
How would you revert, merge, rollback or otherwise make sense of real world data that necessarily reflects "state"? Or is this only for development?
30. throwaway892238 ◴[] No.31853994{4}[source]
Well, they're discrete problems. Version control of source code, packaged applications, container images, databases are all quite different.

Git is a distributed file manager that operates on files where every change is a commit, and a commit is a set of operations on files, and/or a change to a block of text strings terminated by newlines. Versions are Merkle trees of commits.

RPM/Deb/etc is a semi-centralized file manager that operates on files assuming each change is a collection of files with executable stages before and after copying/linking/unlinking. Versions are arbitrary key=value pairs which optionally depend on other versions, with extra logic to resolve relative versions.

Docker/OCI is a distributed file manager that operates on layers assuming every layer is a collection of files overlaid on other layers, with extra logic to do extra things with the layers at runtime. Versions are (I think?) Merkle trees of layers.

The database is going to need a helluva lot of custom heuristics and operations to do version-control, because how you use it is so much different than the above. Databases are much more complex beasts, require higher performance, higher reliability, tons more functionality.

replies(1): >>31866787 #
31. zachmu ◴[] No.31854202{4}[source]
If we were less busy we would adopt versionpress and port it to dolt, such a cool product. Some day.
32. whazor ◴[] No.31854369[source]
With big data such a model is even more promising. One of the big problems is that people keep copying data sets, which does not scale. Just syncing newer versions is much more efficient; look at delta.io.
33. jandrewrogers ◴[] No.31854987{3}[source]
The conceit here is the assumption that this has not been built many times by very clever software engineers. It is not a new idea. True git-like version control systems for managing large volumes of data have been built on MVCC kernels for decades -- branch and merge, push and pull, fork and clone.

There are fundamental computer science and technical issues that make scaling these systems for arbitrary data models extremely difficult. The platforms always had the ambition to be general, but the design tradeoffs required to make them scale mean narrowly overfitting for a particular type of data model, such that they can only be used for the original use case. And even then, the performance ends up being not good.

I've never designed one from scratch but I've worked on a few at large companies. All of them started with the vision you are proposing, all of them failed at achieving that vision because of the technical tradeoffs required to enable something resembling scalability. Unless you are proposing some novel computer science that renders these issues moot, you aren't presenting a credible defense that this hasn't been done before.

replies(1): >>31855232 #
34. zachmu ◴[] No.31855232{4}[source]
Git-like version control requires a Merkle DAG. Unless you know something I don't, there are no RDBMS products that incorporate a Merkle DAG for storage. Dolt is the first.

Table data is stored in a cross between a Merkle DAG and a B Tree (a prolly tree), which is what makes diff / merge performant and scalable. We didn't invent these data structures but we believe we are the first to build a SQL database on them.

https://docs.dolthub.com/architecture/storage-engine/prolly-...

replies(1): >>31855955 #
35. jandrewrogers ◴[] No.31855955{5}[source]
> Git-like version control requires a Merkle DAG.

This is false, you are conflating the abstract algorithm with a narrow implementation. That's like saying the only possible sorting algorithm is quicksort.

With all due respect, you seem to be only loosely familiar with database architecture, both theory and historical practice. Nothing you've described is actually novel. That you are unfamiliar with why no one builds things this way, despite many attempts, does not lend confidence.

I am actually a big fan of people trying unorthodox approaches to databases that have never been tried before, this just isn't such an example. Which doesn't make your approach wrong per se, but it leaves you exposed to learning why other people tried and abandoned it.

Tangentially, the "prolly tree" is intrinsically not a scalable data structure. That may satisfy your design requirements but I can't tell.

replies(3): >>31856467 #>>31871902 #>>31885817 #
36. throwaway892238 ◴[] No.31856467{6}[source]
> you are conflating the abstract algorithm with a narrow implementation

They're literally telling you Git uses a Merkle DAG and they wanted to recreate Git so they used a Merkle DAG. That's not conflating, it's copying.

> you seem to be only loosely familiar with database architecture

Based on what? The only comment GP has made about database architecture in this entire HN thread is "this is not MVCC", which is correct.

> I am actually a big fan of people trying unorthodox approaches to databases that have never been tried before

So stop discouraging the neophyte? Sometimes innovation requires the ignorant to figure things out without knowing better, because that way they won't quit before they've begun. Let them figure it out. And if it's not novel, who cares? It doesn't have to be some perfect architectural masterpiece to solve people's problems. If it works well enough just for WordPress blogs, that's pretty great already.

replies(1): >>31861146 #
37. qaq ◴[] No.31858636{3}[source]
So I have commit1: drop table blah; then commit2: create table blah with a different schema.

If I revert commit1, what would be the state?

replies(1): >>31871966 #
38. k_bx ◴[] No.31858826[source]
Hot take: migrations are useless

In the modern world of SaaS and connectivity, you use multiple "databases" and such (queues, identity providers) anyway, and the world has gotten connected enough that you don't need a local DB most of the time.

39. discreteevent ◴[] No.31861146{7}[source]
> They're literally telling you Git uses a Merkle DAG

They literally told them: "Git-like version control requires a Merkle DAG."

40. lazzlazzlazz ◴[] No.31863592{3}[source]
This is Hacker News. The norm is a deep, thoughtful understanding of the state of the art and a total inability to apply the imagination to the future.
41. AaronLasseigne ◴[] No.31865675[source]
One issue with immutable data stores for businesses is compliance with things like GDPR. You need some mechanism for scrubbing data and its version history, effectively making it non-immutable.
42. hinkley ◴[] No.31866787{5}[source]
> Well, they're discrete problems.

I'm not convinced they are. Invention is often a case of adopting a solution well known in another discipline. Discovering that these two things share a category. We keep discovering things that are isomorphic to each other.

In the end git is keeping a database of code changes as write once data entries, and presenting the snapshots for human consumption. It does a very bad job of pretending that's not the case.

43. aboodman ◴[] No.31871902{6}[source]
Sigh. Databases seem to be one of the last bastions of ivory-tower-ism in software engineering.

Databases aren't magic. They are software. And as you say, there is a deep well of literature, and almost all interesting modern databases are open source. So it's possible for anyone with the interest and motivation to learn how they work, and yes, improve them.

> With all due respect, you seem to be only loosely familiar with database architecture, both theory and historical practice. Nothing you've described is actually novel. That you are unfamiliar with why no one builds things this way, despite many attempts, does not lend confidence.

You claim that there is all this information out there saying why this won't work, and that it has been tried, but don't point to any of it.

> Tangentially, the "prolly tree" is intrinsically not a scalable data structure.

First: Dolt publishes industry-standard performance benchmarks, and it is on average 4.4x slower than MySQL:

https://docs.dolthub.com/sql-reference/benchmarks/latency

This is using the original storage format from Noms, which wasn't written for OLTP workloads. A new storage format is coming that is written for OLTP workloads, and it will dramatically improve this:

https://www.dolthub.com/blog/2022-05-20-new-format-alpha/

In any case 5x slower already shows, experimentally, that this approach works for database-style problems.

===

Second: In a purely algorithmic sense, prolly trees are the definition of scalable. They are log(n) (with a large base) for inserts, deletes, and seeks and they have efficient ordered scans. They have basically the same algorithmic complexity as B Trees, B+ Trees, LSM trees, etc -- very similar properties to the data structures used by other databases. The problem with prolly trees is actually the reverse: they are scalable, but have large constant factors due to the overhead of hashing.

But performance within a single-digit constant factor of MySQL, with versioning, seems like a great product for many applications.

If anyone reading this is interested, the Dolt team did a great job writing up how prolly trees work and how they compare to classic databases here:

https://www.dolthub.com/blog/2020-04-01-how-dolt-stores-tabl...

replies(2): >>31871982 #>>31874161 #
44. aboodman ◴[] No.31871966{4}[source]
It will be a conflict. But you can commit the revert to a different branch and run both schemas at the same time. This provides a lot of flexibility! Check out https://www.dolthub.com/blog/2022-04-29-dolt-revert/ for some discussion of this.
45. aboodman ◴[] No.31871982{7}[source]
Oh. Somebody said it better than me:

https://news.ycombinator.com/item?id=31851592

46. heisjustsosmart ◴[] No.31874161{7}[source]
Tip: I'd read the jandrewrogers comments before engaging. He actually does know more than you. He makes some unusual claims then doesn't back them up, because he has no need to. I've spent an hour or three and learnt nothing because he's so far ahead I can't keep up. Perhaps read his profile about the amazing things he's invented, such as http://www.jandrewrogers.com/2015/10/08/spacecurve/

Let me quote a bit. I admit, I don't understand it but then how could I?

Algorithm design using topology manipulation can be enormously challenging to reason about. You are often taking a conceptually simple algorithm, like a nested loop or hash join, and replacing it with a much more efficient algorithm involving the non-trivial manipulation of complex high-dimensionality constraint spaces that effect the same result. Routinely reasoning about complex object relationships in greater than three dimensions, and constructing correct parallel algorithms that exploit them, becomes easier but never easy.

Incredible! I wish I could grok this stuff.

You should save your breath and just get out of the way of the real experts.

replies(1): >>31887209 #
47. wstuartcl ◴[] No.31885817{6}[source]
I agree with everything you have said in this thread, the one area I would take a slightly different approach with is that some of the best algorithmic advances do come from people and teams looking at a problem anew -- without examining all previous works and failures.

Are they likely to find a way to jump forward? no. Are they likely to fall into the same traps and dead ends that are well documented in this space? yep. But in general I believe we should temper the negative feedback with constructive details:

"Hey, have you seen this previous and very similar work in the same space? Here is some info on where and how they failed to scale."

The stance (even an implied one) that this is not novel and therefore you should stop could have been taken against many of the teams working in vast areas of algorithms and architectures that have led to jumps forward. CS is and should be science, and to that end, our current knowledge and understanding is simply what is known today and must remain mutable to move forward. While their current approach is not scalable and a dead end, who knows whether this experience will lead them to identify previously unrealized solutions (even if partial).

48. aboodman ◴[] No.31887209{8}[source]
Yeah, I should have really checked who I was dealing with first. Standing aside.