
468 points speckx | 1 comment
Aurornis ◴[] No.45302320[source]
I thought the conclusion should have been obvious: a cluster of Raspberry Pi units is an expensive nerd indulgence for fun, not an actual pathway to high-performance compute. Does anyone building a Pi cluster actually go into it thinking it will be a cost-effective endeavor? Maybe this is just YouTube-style headline writing spilling over to the blog for the clicks.

If your goal is to play with or learn on a cluster of Linux machines, the cost-effective way to do it is to buy a desktop consumer CPU, install a hypervisor, and create a lot of VMs. It's not as satisfying as plugging cables into different Raspberry Pi units and connecting them all together, if that's your thing, but once you're in the terminal you'll appreciate the desktop CPU, the extra RAM, and the flexibility of the system.
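The VM approach described above can be sketched with a multi-machine Vagrantfile, for example. This is a hypothetical four-node setup, assuming Vagrant with the VirtualBox provider; the box name, IP range, and resource sizes are illustrative choices, not anything from the original comment:

```ruby
# Vagrantfile: a small "cluster" of Linux VMs on one desktop machine
Vagrant.configure("2") do |config|
  config.vm.box = "debian/bookworm64"   # assumed base box

  # Define four identical nodes, node1..node4
  (1..4).each do |i|
    config.vm.define "node#{i}" do |node|
      node.vm.hostname = "node#{i}"
      # Private network so the nodes can talk to each other
      node.vm.network "private_network", ip: "192.168.56.#{10 + i}"
      node.vm.provider "virtualbox" do |vb|
        vb.cpus   = 2      # illustrative per-node sizing
        vb.memory = 2048   # MB; scale to the host's RAM
      end
    end
  end
end
```

After `vagrant up`, `vagrant ssh node1` (etc.) drops you into each node's terminal, which is the point the comment is making: once you're at the shell, the cluster topology is the same as with physical boards.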

replies(11): >>45302356 #>>45302424 #>>45302433 #>>45302531 #>>45302676 #>>45302770 #>>45303057 #>>45303061 #>>45303424 #>>45304502 #>>45304568 #
vlovich123 ◴[] No.45302531[source]
I’d say it’s inconclusive. For traditional compute it wins on power and cost (it’ll always lose on space). The article notes that inference can’t use the GPU because of llama.cpp’s Vulkan backend, AND that llama.cpp’s clustering support is bad. I’d say it’s probably still going to be worse for AI, but it’s inconclusive because that could be due to software immaturity (i.e., not worth it today, but it could be with better software).
replies(1): >>45303840 #
tracker1 ◴[] No.45303840[source]
But will there be a CM6 while you're waiting for the software to improve?