
42 points | gm678 | 1 comment
fmjrey (No.44508095):
The article reads like a story of trying to fit a square peg into a round hole, weighing the pros and cons of cutting off the corners versus drilling a bigger hole. At some point one has to realize we're using the wrong primitives to build today's distributed systems. In other words, we've reached the limit of the traditional OO-plus-RDBMS approach that worked well enough for 2- and 3-tier systems. Clearly OO and RDBMS will not get us out of the tar pit. FP and NoSQL came to the rescue, but even these are not enough to reduce the accidental complexity of building distributed systems with today's volumes, data flows, and variability of data and use cases.

I see two major sources of inspiration that can help us get out of the tar pit.

The first is the EAV approach as embodied in databases such as Datomic, XTDB, and the like. This is about recognizing that tables and documents are too coarse-grained, and that the entity attribute is a better primitive for modeling data and defining schemas. While such flexibility really simplifies a lot of use cases, especially the polymorphic data from the article, the EAV model still assumes data is always about an entity with a specific identity. Once again the storage technology imposes a model that may not fit all use cases.
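For illustration, here is a minimal Python sketch of the EAV idea (not Datomic's or XTDB's actual API; the entity ids and attribute names are made up): every fact is an entity/attribute/value triple, and an "entity" is just whatever facts share an id, with no per-type table deciding up front which attributes are allowed.

    facts = [
        (1, "person/name",  "Ada"),
        (1, "person/email", "ada@example.com"),
        (2, "org/name",     "Acme"),
        (2, "org/country",  "NL"),   # heterogeneous entities share one store
    ]

    def entity(eid, facts):
        # Reassemble an entity from whatever attributes exist for it.
        return {a: v for e, a, v in facts if e == eid}

    print(entity(1, facts))  # {'person/name': 'Ada', 'person/email': 'ada@example.com'}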

The second source of inspiration, which I believe is more generic and promising, is the one embodied in Rama from Red Planet Labs, which lets any data shape be stored against a schema defined by composing vectors, maps, sets, and lists (and more, if custom serde are provided). This removes the whole impedance-mismatch issue between code and data store, and it embraces the fact that normalized data isn't enough by providing physical materialized views. To build these, Rama defines processing topologies in a dataflow language compiled and run by a clustered streaming engine. With partitioning as a first-class primitive, Rama handles the distribution of both compute and data together, effectively reducing accidental complexity and allowing horizontal scaling.
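To make the dataflow / materialized-view idea concrete, here is a hypothetical Python sketch (Rama's real API is Java/Clojure and looks nothing like this; the event shapes, partition count, and view layout are all assumptions): events are routed by key so the data and the compute that updates it land on the same partition, and the "view" is a composed structure of maps and sets rather than normalized rows.

    from collections import defaultdict

    NUM_PARTITIONS = 4
    # One materialized view per partition, shaped as nested maps/sets.
    views = [defaultdict(lambda: {"follows": set(), "post_count": 0})
             for _ in range(NUM_PARTITIONS)]

    def partition_of(user_id):
        # The same hash routes both the record and its processing
        # (a real system would use a stable hash, not Python's hash()).
        return hash(user_id) % NUM_PARTITIONS

    def handle(event):
        view = views[partition_of(event["user"])]
        if event["type"] == "follow":
            view[event["user"]]["follows"].add(event["target"])
        elif event["type"] == "post":
            view[event["user"]]["post_count"] += 1

    for e in [{"type": "follow", "user": "ada", "target": "alan"},
              {"type": "post",   "user": "ada"}]:
        handle(e)

    print(views[partition_of("ada")]["ada"])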

The difficulty we face today with distributed systems comes mostly from too many moving parts: multiple kinds of stores with different models (relational, KV, document, graph, etc.) and too many separate compute nodes (think microservices). Getting out of this mess requires platforms that handle the distribution and partitioning of data and compute together, built on primitives for both that are powerful enough, in combination, to handle any kind of data at any volume.

setr (No.44510220), in reply:
I mean, this particular problem would be resolved if the database let you define (and enforce) a UNIQUE constraint across tables. Then you could just do approach #2 without the psychotic check constraint.
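Mainstream relational databases don't support a UNIQUE constraint that spans tables directly; the usual emulation is to hoist the shared key into a parent table that both subtypes reference, so one UNIQUE index covers them all. A rough sqlite3 sketch of that workaround (table and column names are made up, not taken from the article):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when enabled
    conn.executescript("""
        -- One row per logical item; the cross-"table" uniqueness lives here.
        CREATE TABLE item (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL UNIQUE
        );

        -- Each subtype references the shared parent, so every name must
        -- pass through the single UNIQUE index above.
        CREATE TABLE file (
            id   INTEGER PRIMARY KEY REFERENCES item(id),
            size INTEGER NOT NULL
        );
        CREATE TABLE folder (
            id INTEGER PRIMARY KEY REFERENCES item(id)
        );
    """)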