
175 points chilipepperhott | 2 comments
mkagenius ◴[] No.44474681[source]
Three clear advantages of local-first software:

1. No network latency; you don't have to send anything across the Atlantic.

2. You get privacy.

3. It's free; you don't need to pay any SaaS business.

An additional advantage: scale is built in. Every person runs their own setup, so no single central service has to handle everyone.

replies(6): >>44474744 #>>44474793 #>>44474889 #>>44474937 #>>44475878 #>>44516447 #
echelon ◴[] No.44474744[source]
This entire paradigm gets turned on its head with AI. I tried to do this with purely local compute, and it's a bad play. We don't have good edge compute yet.

1. A lot of good models require an amount of VRAM that is only present in data center GPUs.

2. For models that can run locally (Flux, etc.), you get dramatically different performance between top-of-the-line cards and older GPUs. Then you have to serve different models with different sampling techniques to different hardware classes.

3. GPU hardware is expensive and most consumers don't have GPUs. You'll severely limit your TAM if you require a GPU.

4. Mac support is horrible, which alienates half of your potential customers.

It's best to follow the Cursor model where the data center is a necessary evil and the local software is an adapter and visualizer of the local file system.
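The hardware-class dispatch described in point 2 above could be sketched roughly like this (the VRAM tiers and backend names are invented for illustration; a real client would also probe the actual device):

```python
# Hypothetical dispatcher illustrating point 2: route each hardware class
# to a model/sampler configuration it can plausibly run, falling back to
# the data center when there is no usable GPU. Thresholds are made up.
def pick_backend(vram_gb: float) -> str:
    if vram_gb >= 24:
        return "local-full"       # top-of-the-line card: full model, fast sampler
    if vram_gb >= 8:
        return "local-quantized"  # older GPU: quantized model, slower sampling
    return "cloud"                # no usable GPU: serve from the data center
```

This is exactly the maintenance burden the comment points at: every tier is a separate model + sampler combination you have to test and ship.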

replies(2): >>44474864 #>>44475280 #
datameta ◴[] No.44474864[source]
Define "good edge compute" in a way that doesn't have expectations set by server-based inference. I don't mean this to sound like a loaded request - we simply can't expect to perform the same operations at the same latency as cloud-based models.

These are two entirely separate paradigms. In many instances it is quite literally impossible to depend on models reachable over RF, as in an ultra-low-power forest mesh scenario, for example.

replies(1): >>44475137 #
echelon ◴[] No.44475137[source]
We're in agreement that not all problem domains are amenable to data center compute. Those that don't have internet, etc.

But for consumer software that can be internet-connected, data center GPUs are dominating local edge compute. That's simply because the models are being designed to utilize a lot of VRAM.

replies(1): >>44494147 #
datameta ◴[] No.44494147[source]
I think the concept/term "edge" has been diluted in meaning: near edge, far edge, etc. It includes everything from truly remote ultra-low-power compute to what is essentially a branch-office data center, which is closer to the cloud if you ask me.

"Not on the trunk" isn't a sufficient threshold for me nor is "not the cloud" a good filter.