As long as actions are immutable and any non-deterministic inputs are captured in their arguments, they can be (re)executed in total clock order from a known common state in the client database to arrive at a consistent state, regardless of when clients sync. One benefit I realized is that this works perfectly with authentication/authorization via Postgres row-level security. It's also efficient: clients sync the minimal amount of information and handle conflicts themselves, while the server keeps full authority over what clients can write.
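Roughly, the shape of it looks something like this (a minimal TypeScript sketch; the types and function names are illustrative, not from an actual implementation):

```typescript
// Hypothetical shapes -- `Action`, `createAction`, and `replay` are illustrative names.
type Action = {
  id: string;                     // globally unique id
  clock: number;                  // position in the total clock order
  name: string;                   // which business-logic function to run
  args: Record<string, unknown>;  // all inputs, including captured non-determinism
};

// Non-deterministic inputs (timestamps, random ids) are captured once at
// creation time, so re-executing the action later is deterministic.
function createAction(name: string, args: Record<string, unknown>, clock: number): Action {
  return {
    id: crypto.randomUUID(),
    clock,
    name,
    args: { ...args, now: Date.now(), nonce: crypto.randomUUID() },
  };
}

// Replay from a known common state: apply every action in total clock order.
// `applyAction` is the pure business-logic dispatcher; `State` stands in for
// whatever the client database snapshot looks like.
function replay<State>(
  commonState: State,
  actions: Action[],
  applyAction: (state: State, action: Action) => State,
): State {
  return [...actions]
    .sort((a, b) => a.clock - b.clock || a.id.localeCompare(b.id))
    .reduce(applyAction, commonState);
}
```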
There's a lot more detail involved in actually making it work: triggers to capture row-level patches and reverse patches in a transaction while executing an action; a client-local rollback mechanism to resolve conflicts by rolling back local DB state and replaying actions in total causal order; state-patch actions that reconcile the differences between expected and actual outcomes of replayed actions (for example due to private data and conditionals); and so on.
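For the rollback-and-replay piece specifically, a simplified sketch (assuming forward/reverse patches were captured per action, e.g. by row-level triggers or equivalent client bookkeeping; all names here are hypothetical):

```typescript
// `Action` matches the earlier sketch; everything else is illustrative.
type Action = { id: string; clock: number; name: string; args: Record<string, unknown> };
type Patch = { table: string; rowId: string; before: unknown; after: unknown };

type LoggedAction = {
  action: Action;
  patches: Patch[];         // forward patches produced when the action ran
  reversePatches: Patch[];  // inverse patches, applied newest-first to undo
};

interface ClientDb {
  applyPatches(patches: Patch[]): void;
}

// When a remote action arrives that sorts *before* some local, unsynced actions:
// 1. undo the local actions (reverse patches, newest first),
// 2. merge the remote action into the log by clock order,
// 3. re-execute everything on top of the restored state.
function integrateRemote(
  db: ClientDb,
  localLog: LoggedAction[],
  remote: Action,
  execute: (db: ClientDb, action: Action) => LoggedAction,
): LoggedAction[] {
  // 1. Roll the local database back to the known common state.
  for (const entry of [...localLog].reverse()) {
    db.applyPatches(entry.reversePatches);
  }
  // 2. Interleave the remote action into total clock order.
  const ordered = [...localLog.map((e) => e.action), remote].sort(
    (a, b) => a.clock - b.clock || a.id.localeCompare(b.id),
  );
  // 3. Replay; executing against the db produces a fresh patch log.
  return ordered.map((action) => execute(db, action));
}
```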
The big benefit of this technique is that it isn't just merging data; it's actually executing business logic to move state forward. That means it captures user intentions where a system based purely on merging data cannot. A traditional CRDT that merges data will end up at a consistent state, but it can provide zero guarantees about the semantic validity of that state to the end user. By replaying business-logic functions, I'm seeking to guarantee that the state is not only consistent but also maximally preserves the user's intentions when reconciling interleaved writes.
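A toy example (mine, purely for illustration) of the difference: two offline clients both try to take the last seat on a flight. Merging the raw counter writes could land at -1 seats, while replaying the reservation logic re-checks the invariant and rejects the second claim:

```typescript
// Illustrative only -- not from the actual implementation.
type Flight = { seatsLeft: number; reservations: string[] };

function reserve(state: Flight, userId: string): Flight {
  if (state.seatsLeft <= 0) {
    // The second replayed action fails validation instead of corrupting state,
    // and the client can surface that outcome to the user.
    return state;
  }
  return {
    seatsLeft: state.seatsLeft - 1,
    reservations: [...state.reservations, userId],
  };
}
```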
This is still a WIP and I don't have anything useful to share yet, but I think the core of the idea is sound. Exciting to see so much innovation in the data-sync space! It's a tough problem, and no solution (yet) handles the use cases of many different types of apps.