
623 points magicalhippo | 3 comments
1. delegate ◴[] No.42622338[source]
I think this is version 1 of what's going to become the new 'PC'.

Future versions will get more capable and smaller, portable.

It could be used to train new types of models (not just LLMs).

I assume the GPU can do 3D graphics.

Several of these in a cluster could run multiple powerful models in real time (vision, LLM, OCR, 3D navigation, etc.).

If successful, millions of such units will be distributed around the world within 1-2 years.

A p2p network of millions of such devices would be a very powerful thing indeed.

replies(1): >>42622367 #
2. mycall ◴[] No.42622367[source]
> A p2p network of millions of such devices would be a very powerful thing indeed.

If you think RAM speeds are slow for transformer inference, imagine what 100 Mb/s would be like.
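The gap is easy to put numbers on. A rough sketch, using an assumed local memory bandwidth of 256 GB/s (an illustrative round figure, not any device's spec) against a 100 Mb/s link:

```python
# Back-of-envelope: local memory bandwidth vs. a 100 Mb/s network link.
# The 256 GB/s local figure is an illustrative assumption, not a spec.

local_mem_bw = 256e9        # bytes/s, assumed local RAM bandwidth
network_bw = 100e6 / 8      # 100 Mb/s link = 12.5e6 bytes/s

ratio = local_mem_bw / network_bw
print(f"Local RAM is ~{ratio:,.0f}x faster than a 100 Mb/s link")

# Time to stream a 7B-parameter model's weights at 8 bits/parameter:
model_bytes = 7e9
print(f"Over RAM:     {model_bytes / local_mem_bw:.3f} s")
print(f"Over network: {model_bytes / network_bw / 60:.1f} min")
```

Under those assumptions the link is roughly four orders of magnitude slower: reading a 7B model's weights takes tens of milliseconds locally but several minutes over the wire.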

replies(1): >>42622420 #
3. ben_w ◴[] No.42622420[source]
Depends on the details, as always.

If this hypothetical future is one where mixture-of-experts models are predominant, with each expert fitting on a single node, then the nodes only need enough bandwidth to accept inputs and return responses; they won't need the much higher bandwidth required to spread a single model across the planet.
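The point can be sketched numerically: a node hosting one expert only exchanges per-token activations, never the expert's weights. All sizes below (hidden dimension, precision, token rate) are illustrative assumptions, not figures from any particular model:

```python
# Why per-node MoE traffic is small: a node hosting one expert exchanges
# only activations per token; its weights stay local.
# All sizes are illustrative assumptions.

hidden_dim = 4096            # assumed model hidden size
bytes_per_act = 2            # fp16 activations
tokens_per_sec = 50          # assumed generation rate

# Traffic in and out of a node hosting one expert, per token:
per_token = hidden_dim * bytes_per_act * 2   # input + output activation vectors
bw_needed = per_token * tokens_per_sec       # sustained bytes/s

print(f"{per_token} bytes/token, ~{bw_needed / 1e6:.2f} MB/s sustained")
```

On those assumptions the node needs well under 1 MB/s sustained, comfortably inside a 100 Mb/s (12.5 MB/s) link, while the gigabytes of expert weights never cross the network at all.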