> Local-first architectures allow for fast and responsive applications that are resilient to network failures
So are we talking about apps that can work for days or weeks offline and then sync a lot of data at once, or about apps that can survive a few-second glitch in network connectivity? I think what is promised is the former, but what will make sense in practice is the latter.
In my experience, the distinction affects the architecture and performance in a significant way. If a client can go offline for an arbitrary period of time, doing a delta sync when it comes back online is trickier, since we need to sync a specific range of operation history (adjusted for the specific scope/permissions the client has access to). If you scale a system up to thousands or millions of clients, having them all issue arbitrary range queries doesn't scale well. For this reason I've seen sync engines simply force a client to do a complete re-sync if it "falls behind" on deltas for too long (e.g. more than a day or so). That said, maintaining an operation log that is set up and indexed for querying arbitrary ranges of operations (for a specific scope of data) can work well.
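To make the trade-off concrete, here is a minimal TypeScript sketch of what such a pull endpoint might look like: the client sends the last operation sequence number it has seen, and the server either returns the scoped delta or signals a full re-sync when that cursor falls outside the window it keeps cheaply queryable. All names (`pull`, `RETAINED_OPS`, the in-memory `opLog`) are hypothetical assumptions for illustration, not any particular sync engine's API.

```typescript
// Hypothetical delta-sync sketch: the client sends the last sequence number
// it has seen; the server either returns the ops after that point (filtered
// to the client's scope) or tells the client to do a full re-sync because
// the requested range is no longer cheap to serve. The in-memory array and
// all names are illustrative assumptions, not a real API.

interface Operation {
  seq: number;      // monotonically increasing position in the log
  scope: string;    // e.g. a workspace or document id the client may access
  payload: unknown; // the actual change (CRDT op, row patch, ...)
}

type PullResult =
  | { kind: "delta"; ops: Operation[]; cursor: number }
  | { kind: "resync" }; // client fell too far behind; start from a snapshot

// How far back the server keeps the log indexed for cheap range reads.
const RETAINED_OPS = 100_000;

// In a real system this would be a database table indexed on (scope, seq);
// an array stands in for it here.
const opLog: Operation[] = [];

function pull(clientCursor: number, allowedScopes: Set<string>): PullResult {
  const oldestRetained =
    opLog.length > 0 ? opLog[Math.max(0, opLog.length - RETAINED_OPS)].seq : 0;

  // If the client's cursor predates the retained window, serving the delta
  // would require an expensive arbitrary-range scan, so force a full re-sync.
  if (clientCursor < oldestRetained) {
    return { kind: "resync" };
  }

  // Otherwise return only the ops the client hasn't seen, restricted to the
  // scopes its permissions cover.
  const ops = opLog.filter(
    (op) => op.seq > clientCursor && allowedScopes.has(op.scope)
  );
  const cursor = ops.length > 0 ? ops[ops.length - 1].seq : clientCursor;
  return { kind: "delta", ops, cursor };
}
```

The key design knob in a sketch like this is the retention window: the larger it is, the longer a client can stay offline before it has to pay for a full snapshot download, but the more log the server has to keep indexed for range reads.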