Local-first software (2019)

(www.inkandswitch.com)
863 points by gasull | 32 comments
1. GMoromisato ◴[] No.44473808[source]
Personally, I disagree with this approach. This is trying to solve a business problem (I can't trust cloud-providers) with a technical trade-off (avoid centralized architecture).

The problems with closed-source software (lack of control, lack of reliability) were solved with a new business model: open source development, which came with new licenses and new ways of getting revenue (maintenance contracts instead of license fees).

In the same way, we need a business model solution to cloud-vendor ills.

Imagine we create standard contracts/licenses that define rights so that users can be confident of their relationship with cloud-vendors. Over time, maybe users would only deal with vendors that had these licenses. The rights would be something like:

* End-of-life contracts: cloud-vendors should contractually spell out what happens if they can't afford to keep the servers running.

* Data portability guarantees: Vendors must spell out how data gets migrated out, and all formats must be either open or (at minimum) fully documented.

* Data privacy transparency: Vendors must track/audit all data access and report to the user who/what read their data and when.

I'm sure you can think of a dozen other clauses.

The tricky part is, of course, adoption. What's in it for the cloud-vendors? Why would they adopt this? The major fear of cloud-vendors is, I think, churn. If you're paying lots of money to get people to try your service, you have to make sure they don't churn out, or you'll lose money. Maybe these contracts come only with annual subscription terms. Or maybe the appeal of these contracts is enough for vendors to charge more.

replies(12): >>44473922 #>>44474074 #>>44474164 #>>44474231 #>>44474286 #>>44474367 #>>44474424 #>>44474450 #>>44474769 #>>44475861 #>>44476561 #>>44477275 #
2. Habgdnv ◴[] No.44473922[source]
There are laws today, but not for hosting. Look at Steam's contract, for example, or Ubisoft's, or anyone else's - Q: What happens to your game collection if we shut down our servers? A: You own nothing and lose everything, GG!

It's like how we decided we must protect users' privacy from greedy websites, so we made the bad ones spell out that they use cookies to spy on users - and the result is what we have now with the banners.

replies(1): >>44474048 #
3. GMoromisato ◴[] No.44474048[source]
I agree with you! And your point about cookie banners underlines that we can't just rely on regulation (because companies are so good at subverting regulations, or outright lobbying their way out of them).

Just as with the open source movement, there needs to be a business model (and don't forget that OSS is a business model, not a technology) that competes with the old way of doing things.

Getting that new business model to work is the hard part, but we did it once with open source and I think we can do it again with cloud infrastructure. But I don't think local-first is the answer--that's just a dead end because normal users will never go with it.

replies(1): >>44476414 #
4. hodgesrm ◴[] No.44474074[source]
> * Data portability guarantees: Vendors must spell out how data gets migrated out, and all formats must be either open or (at minimum) fully documented.

This is not practical for data of any size. Prod migrations to a new database take months or even years if you want things to go smoothly. In a crisis you can do it in weeks, but it can be really ugly. That applies even when moving between the same version of an open source database, because there's a lot of variation between the cloud services themselves.

The best solution is to have the data in your own environment to begin with and just unplug. It's possible with bring-your-own-cloud management combined with open source.

My company operates a BYOC data product which means I have an economic interest in this approach. On the other hand I've seen it work, so I know it's possible.

replies(1): >>44474124 #
5. GMoromisato ◴[] No.44474124[source]
I'd love to know more about BYOC. Does that apply to the raw data (e.g., the database lives inside the enterprise) or the entire application stack (e.g., the enterprise is effectively self-hosting the cloud)?

It seems like you'd need the latter to truly be immune to cloud-vendor problems. [But I may not understand how it works.]

replies(1): >>44481405 #
6. WarOnPrivacy ◴[] No.44474164[source]
> End-of-life contracts: cloud-vendors should contractually spell out what happens if they can't afford to keep the servers running.

I'm trying to imagine how this would be enforced when a company shutters and its principals walk away.

replies(3): >>44474245 #>>44474255 #>>44478728 #
7. al_borland ◴[] No.44474231[source]
Does this really solve the problem? Let's say I'm using a cloud provider for some service I enjoy. They have documents that spell out that if they have to close their doors they will give X months of notice and allow for a data export. Ok, great. Now they decide to shut their doors and honor those agreements. What am I left with? A giant JSON file that is effectively useless unless I decide to write my own app, or some nice stranger does? The thought is there, it's better than nothing, but it's not as good as having a local app that will keep running, potentially for years or decades, after the company shuts their doors or drops support.
replies(1): >>44475087 #
8. GMoromisato ◴[] No.44474245[source]
It's a good question--I am not a lawyer.

But that's the point of contracts, right? When a company shuts down, the contracts become part of the liabilities. E.g., if the contract says "you must pay each customer $1000 if we shut down" then the customers become creditors in a bankruptcy proceeding. It doesn't guarantee that they get all (or any) money, but their interests are negotiated by the bankruptcy judge.

Similarly, I can imagine a contract that says, "if the company shuts down, all our software becomes open source." Again, this would be managed by a bankruptcy judge who would mandate a release instead of allowing the creditors to gain the IP.

Another possibility is for the company to create a legal trust that is funded to keep the servers running (at a minimal level) for some specified amount of time.

replies(2): >>44474339 #>>44476440 #
9. WarOnPrivacy ◴[] No.44474255[source]
(cont. thinking...) One possibility: a 3rd party manages a continually updating data escrow. It'd add some expense and complexity to the going concern.
10. maccard ◴[] No.44474286[source]
> Vendors must spell out how data gets migrated out, and all formats must be either open or (at minimum) fully documented.

Anecdotally, I’ve never worked anywhere where the data formats are documented in any way other than a schema in code.

11. WarOnPrivacy ◴[] No.44474339{3}[source]
> When a company shuts down, the contracts become part of the liabilities.

The asset in the contract is their customer's data; it is becoming stale by the minute. It could be residing in debtor-owned hardware and/or in data centers that are no longer getting their bills paid.

It takes time to get a trustee assigned and I think we need an immediate response - like same day. (NAL but prep'd 7s & 13s)

12. prmoustache ◴[] No.44474367[source]
> Personally, I disagree with this approach. This is trying to solve a business problem (I can't trust cloud-providers)

It is not only a business problem. I stay away from cloud-based services not only because of the subscription model, but also because I want my data to be safe.

When you send data to a cloud service, and that data is not encrypted locally before being uploaded (a rare feature), it is not a question of if but when that data will be pwned.
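
For illustration, a minimal TypeScript sketch of what "encrypt locally before upload" means (function names here are hypothetical; the point is that only ciphertext ever leaves the device):

    import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

    // The 32-byte key is generated and kept on the device; the server
    // only ever stores the opaque blob returned by encryptForUpload.
    function encryptForUpload(plaintext: Buffer, key: Buffer): Buffer {
      const iv = randomBytes(12); // unique nonce per upload
      const cipher = createCipheriv("aes-256-gcm", key, iv);
      const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
      return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]);
    }

    function decryptAfterDownload(blob: Buffer, key: Buffer): Buffer {
      const decipher = createDecipheriv("aes-256-gcm", key, blob.subarray(0, 12));
      decipher.setAuthTag(blob.subarray(12, 28)); // GCM integrity tag
      return Buffer.concat([decipher.update(blob.subarray(28)), decipher.final()]);
    }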

replies(2): >>44476464 #>>44476485 #
13. mumbisChungo ◴[] No.44474424[source]
A good contract can help you seek some restitution if wrongdoing occurs, you become aware of it, and you can prove it. It won't mechanically prevent the wrongdoing from happening.
replies(1): >>44476578 #
14. samwillis ◴[] No.44474450[source]
> This is trying to solve a business problem (I can't trust cloud-providers) with a technical trade-off (avoid centralized architecture).

I don't think that's quite correct. I think the authors fully acknowledge that the business case for local-first is not completely solved and is a closely related problem. These issues need both a business and a technical solution, and the paper proposes a set of characteristics of what a solution could look like.

It's also incorrect to suggest that local-first is an argument for decentralisation - Martin Kleppmann has explicitly stated that he doesn't think decentralised tech solves these issues in a way that could become mass market. He is a proponent of centralised standardised sync engines that enable the ideals of local-first. See his talk from Local-first conf last year: https://youtu.be/NMq0vncHJvU?si=ilsQqIAncq0sBW95

replies(1): >>44474576 #
15. GMoromisato ◴[] No.44474576[source]
I'm sure I'm missing a lot, but the paper is proposing CRDTs (Conflict-free Replicated Data Types) as the way to get all seven checkmarks. That is fundamentally a distributed solution, not a centralized one (since you don't need CRDTs if you have a central server).

And while they spend a lot of time on CRDTs as a technical solution, I didn't see any suggestions for business model solutions.

In fact, if we had a business model solution--particularly one where your data is not tied to a specific cloud-vendor--then decentralization would not be needed.

I get that they are trying to solve multiple problems with CRDTs (such as latency and offline support), but in my experience (we did this with Groove in the early 2000s) the trade-offs are too big for average users.

Tech has improved since then, of course, so maybe it will work this time.
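
For reference, the simplest illustration of the idea is a last-writer-wins register; a rough TypeScript sketch (names are illustrative, not from the paper) of the merge that lets two replicas converge without a central server:

    type LWWRegister<T> = { value: T; timestamp: number; nodeId: string };

    // Each replica writes locally and stamps the write.
    function write<T>(value: T, nodeId: string): LWWRegister<T> {
      return { value, timestamp: Date.now(), nodeId };
    }

    // Merge is commutative, associative, and idempotent, so replicas
    // can sync in any order and still converge on the same value.
    function merge<T>(a: LWWRegister<T>, b: LWWRegister<T>): LWWRegister<T> {
      if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
      return a.nodeId > b.nodeId ? a : b; // deterministic tie-break
    }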

16. AnthonyMouse ◴[] No.44474769[source]
> This is trying to solve a business problem (I can't trust cloud-providers) with a technical trade-off (avoid centralized architecture).

Whenever it's possible to solve a business problem or political problem with a technical solution, that's usually a strong approach, because those problems are caused by an adversarial entity and the technical solution is to eliminate the adversarial entity's ability to defect.

Encryption is a great example of this if you are going to use a cloud service. Trying to protect your data with privacy policies and bureaucratic rules is a fool's errand because there are too many perverse incentives. The data is valuable, neither the customer nor the government can easily tell if the company is selling it behind their backs, it's also hard to tell if the provider has cheaped out on security until it's too late, etc.

But if it's encrypted on the client device and you can prove with math that the server has no access to the plaintext, you don't have to worry about any of that.

The trouble is sometimes you want the server to process the data and not just store it, and then the technical solution becomes, use your own servers.

replies(1): >>44475038 #
17. GMoromisato ◴[] No.44475038[source]
I 100% agree, actually. If there were a technical solution, then that's usually a better approach.

For something like data portability--being able to take my data to a different provider--that probably requires a technical solution.

But other problems, like enshittification, can't be solved technically. How do you technically prevent a cloud vendor from changing their pricing?

And you're right that the solution space is constrained by technical limits. If you want to share data with another user, you either need to trust a central authority or use a distributed protocol like a blockchain. The former means trusting the central provider; the latter means doing your own key management (how much money has been lost by people forgetting the keys to their wallets?).

There is no technical solution that gets you all the benefits of central plus all the benefits of local-first. There will always be trade-offs.

replies(2): >>44479865 #>>44480488 #
18. GMoromisato ◴[] No.44475087[source]
Data portability is, I think, useful even before the service shuts down. If I'm using some Google cloud-service and I can easily move all my data to a competing service, then there will be competition for my business.

What if cloud platforms were more like brokerage firms? I can move my stocks from UBS to Fidelity by filling out a few forms and everything moves (somewhat) seamlessly.

My data should be the same way. I should be able to move all my data out of Google and move it to Microsoft with a few clicks without losing any documents or even my folder hierarchy. [Disclaimer: Maybe this is possible already and I'm just out of the loop. If so, though, extend to all SaaS vendors and all data.]

replies(1): >>44475261 #
19. al_borland ◴[] No.44475261{3}[source]
This mainly just requires the ability to export, and standard formats. For generic file storage, emails, contacts, calendars, etc, this is largely possible already. Though there are minor incompatibilities based on various implementations or customizations on top of the standard.

The big problem comes into play for new, or more custom types of applications. It takes a while for something to become ubiquitous enough that standard formats are developed to support them.
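
As a concrete example of why standard formats make exports portable, here is a TypeScript sketch emitting a (simplified) vCard, with a hypothetical Contact shape; any vendor that speaks the standard can import it:

    interface Contact { name: string; email: string }

    // vCard 3.0, simplified: any conforming client can import this.
    function toVCard(c: Contact): string {
      return [
        "BEGIN:VCARD",
        "VERSION:3.0",
        `FN:${c.name}`,
        `EMAIL:${c.email}`,
        "END:VCARD",
      ].join("\r\n");
    }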

20. satvikpendem ◴[] No.44475861[source]
> This is trying to solve a business problem (I can't trust cloud-providers)

Not necessarily. I like local-first due to robust syncing via CRDTs, not because I somehow want to avoid cloud providers.

21. sirjaz ◴[] No.44476414{3}[source]
I've found people want local software and access. This is a major reason people like mobile more than desktops now, beyond the obvious benefit of having it in their pocket. A mobile app gives you more of a private feel than going to a website and entering your info. And to an extent, mobile apps are kept local-first, due to sync issues.
22. bigfatkitten ◴[] No.44476440{3}[source]
No, not at all.

The entire point of Chapter 11 (and similar bankruptcy legislation internationally) is to allow companies to get out of contracts, so that they can restructure the business to hopefully continue on as a going concern.

23. bigfatkitten ◴[] No.44476464[source]
I have spent the last decade or so working in digital forensics and incident response for a series of well-known SaaS companies.

The experience has made me a big fan of self hosting.

24. HappMacDonald ◴[] No.44476485[source]
"Trust about whether or not another company will maintain confidentiality" still sounds like a business problem to me (or at least one valid way of perceiving the problem)

And the biggest advantage I see of this perspective over the "technical problem" perspective is that assigning responsibility completely covers the problem space, while "hope that some clever math formula can magic the problem away" does not.

replies(1): >>44478766 #
25. __MatrixMan__ ◴[] No.44476561[source]
Is it trying to solve a business problem? I think it's trying to solve a more general problem which has nothing to do with business.

It's ok to just solve the problem and let the businesses fail. Predation is healthy for the herd. Capitalism finds a way, we don't have to protect it.

26. HappMacDonald ◴[] No.44476578[source]
It can also help to align the incentives of multiple parties to actually care about the same goals.

"Mechanically preventing wrongdoing from happening" can be a bit of a Shangri-La. What Tech can mechanically do is increase the cost of wrongdoing, or temporarily deflect attempts towards easier targets. But that by definition cannot "solve the problem for everyone" as there will always be a lowest hanging fruit remaining somewhere.

What contracts can do is help to reduce the demand for wrongdoing.

27. solidsnack9000 ◴[] No.44477275[source]
This would make cloud vendors kind of like banks. The cloud vendor is holding a kind of property for the user in the user's account. The user would have clearly defined rights to that property, and the legal ability to call this property back to themselves from the account.

This calling back might amount to taking delivery. In a banking context, that is where the user takes delivery of whatever money and other property is in the account. In the cloud vendor case, this would be the user receiving a big Zip file with all the contents of the account.

Taking delivery is not always practical and is also not always desirable. Another option in a financial context is transferring accounts from one vendor to another: this can take the form of wiring money or sometimes involves a specialized transfer process. Transferring the account is probably way more useful for many cloud services.

This leads us to a hard thing about these services, though: portability. Say we delineate a clear property interest for users in their cloud accounts, and we delineate all of their rights. We have some good interests and some good rights; but what does it mean to take delivery of your Facebook friends? What does it mean to transfer your Facebook account from one place to another?

28. necovek ◴[] No.44478728[source]
Putting stuff in escrow is usually the way to go: escrow service is paid upfront (say, always for the next 3 months), and that's the time you've got to pull out your data.

My company does that with a few small vendors we've got for the source code we depend on.

29. necovek ◴[] No.44478766{3}[source]
Here at HN, I think most people see it differently (me included): having a clear mathematical proof of "confidentiality" is usually seen as both cheaper and more trustworthy.

Yes, there might be a breakthrough or a bug in encryption, but unless you've been targeted, you can respond. And we've seen and experienced too many breakdowns in human character (employees spying on customers, stealing data...), government policy, and company behaviour to trust the complexity and cost (lawyers) of enforcing accountability through policy.

In general, you do need both, but if you can only have one, the technical solution is usually more appealing to engineers.

30. AnthonyMouse ◴[] No.44479865{3}[source]
Listing key management as the thing that makes distributed protocols hard seems like an error. If your stuff is in the cloud, what are you using to access it? Some kind of password, TOTP, etc., which is maybe tied to your email, which itself is tied to some password, TOTP, etc. So what happens if you lose access to your email or whatever they're using for password recovery? You lose all your stuff.

But it's even worse in that case, because that can also happen if they mess something up. Your email account got banned by some capricious bot, or the provider abruptly decided to stop providing the service, and then the service tied to it decided to send you a verification code to the email you don't have access to anymore -- even though you didn't forget your password for either of them. So now you have even more ways to lose all your stuff.

Meanwhile if you were willing to trust some email provider to not screw you and you only needed some way to recover your keys if your computer falls into the sea, you could just email a copy of them to yourself. And then you wouldn't be relying on that provider to have the only means of recovery, because they're still on your device too.
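
For illustration, a minimal TypeScript sketch (hypothetical helper) of doing "email a copy to yourself" safely: the device key is wrapped under a passphrase-derived key, so the offsite copy is useless without the passphrase:

    import { scryptSync, randomBytes, createCipheriv } from "node:crypto";

    // Wrap the device key under a passphrase before storing it offsite
    // (e.g., emailing it to yourself). The provider never sees the key.
    function wrapKeyForBackup(deviceKey: Buffer, passphrase: string): Buffer {
      const salt = randomBytes(16);
      const wrappingKey = scryptSync(passphrase, salt, 32); // slow KDF
      const iv = randomBytes(12);
      const cipher = createCipheriv("aes-256-gcm", wrappingKey, iv);
      const wrapped = Buffer.concat([cipher.update(deviceKey), cipher.final()]);
      return Buffer.concat([salt, iv, cipher.getAuthTag(), wrapped]);
    }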

31. klabb3 ◴[] No.44480488{3}[source]
> How do you technically prevent a cloud vendor from changing their pricing?

Through regulating markets to ensure fierce competition - including things like portability, standard APIs, banning egress fees and similar lock-in techniques, and breaking up infrastructure (DCs and networking) from service providers. In cloud we have three vertically integrated mega-oligopolies. That’s not a healthy market.

> data portability […] probably requires a technical solution

Yes, formats and APIs are needed for technical reasons, but they already exist (or are fairly trivial to implement) and are not provided – sometimes actively obstructed – for business reasons. Imo interop is predominantly bottlenecked by social/business concerns.

32. hodgesrm ◴[] No.44481405{3}[source]
Here's how we do it for analytic systems: both the data and the software live in your environment. The software services are open source, running on Kubernetes. If you don't like the vendor or the vendor goes away, the existing services keep running. You can also maintain them yourself, because the stack is open source.

This is different from what the local-first article is describing, which addresses data for individuals. That's a much harder problem to solve at scale.