Aurornis:
I thought the conclusion should have been obvious: a cluster of Raspberry Pi units is an expensive nerd indulgence for fun, not an actual pathway to high-performance compute. I don't know if anyone building a Pi cluster actually goes into it thinking it will be a cost-effective endeavor. Do they? Maybe this is just YouTube-style headline writing spilling over to the blog for the clicks.

If your goal is to play with or learn on a cluster of Linux machines, the cost-effective way to do it is to buy a consumer desktop CPU, install a hypervisor, and create a lot of VMs. It's not as satisfying as plugging cables into different Raspberry Pi units and connecting them all together, if that's your thing, but once you're in the terminal you'll appreciate the desktop CPU, the RAM, and the flexibility of the system.
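
To make the hypervisor route concrete, here is a minimal sketch of spinning up a small fleet of KVM guests with libvirt's virt-install. The base image path, guest sizes, and node count are illustrative assumptions, not anything from this thread:

    #!/usr/bin/env python3
    """Sketch: stand up a handful of KVM guests on one desktop.

    Assumes a Linux host with libvirt, qemu-img, and virt-install
    installed, and a pre-built Debian cloud image at BASE_IMAGE
    (hypothetical path - adjust to taste).
    """
    import subprocess

    BASE_IMAGE = "/var/lib/libvirt/images/debian12-base.qcow2"  # assumption
    NODE_COUNT = 4

    for i in range(NODE_COUNT):
        name = f"node{i}"
        disk = f"/var/lib/libvirt/images/{name}.qcow2"
        # Give each guest its own copy-on-write overlay of the base disk.
        subprocess.run(
            ["qemu-img", "create", "-f", "qcow2",
             "-b", BASE_IMAGE, "-F", "qcow2", disk],
            check=True,
        )
        # Define and boot a 2-vCPU / 2 GiB guest on the default NAT network.
        subprocess.run(
            ["virt-install", "--name", name,
             "--vcpus", "2", "--memory", "2048",
             "--disk", disk, "--import",
             "--os-variant", "debian12",
             "--network", "network=default",
             "--graphics", "none", "--noautoconsole"],
            check=True,
        )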

bunderbunder:
The cost-effective way to do it is in the cloud, because there's a very good chance you'll learn everything you intended to learn, and then get bored with it, long before your cloud compute bill reaches the price of a desktop with even fairly modest specs for this purpose.
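
To put rough numbers on that (the prices below are illustrative assumptions; real rates vary by provider and instance size), the break-even point is easy to sketch:

    # Back-of-envelope: hours of cloud use before renting costs more than
    # buying a desktop outright. All prices are illustrative assumptions.
    desktop_cost = 800.00   # one-time cost of a modest desktop, $
    cloud_rate = 0.10       # hourly rate for a mid-sized VM, $/h

    break_even_hours = desktop_cost / cloud_rate
    print(f"{break_even_hours:.0f} hours")  # 8000 hours

    # At ten hours of tinkering per week, that is roughly 15 years of
    # evenings, hence the "you'll get bored first" argument.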

Almondsetat:
I can get a Xeon E5-2690V4 with 28 threads and 64GB of RAM for about $150. If you need cores and memory to make a lot of VMs, you can do it extremely cheaply.

kbenson:
Source? That seems like something I would want to take advantage of at the moment...

kllrnohj:
Note the E5-2690V4 is a 10-year-old CPU; they are talking about used servers. You can find those on eBay or wherever, as well as at stores specializing in used enterprise gear. Depending on where you live, you might even find them free, as they are often considered literal e-waste by the companies decommissioning them.

It also means it performs like a 10-year-old server CPU, so those 28 threads are not worth much. The Geekbench results, for whatever they're worth, are very mediocre in the context of anything remotely modern: https://browser.geekbench.com/processors/intel-xeon-e5-2690-...

A modern 12-thread Ryzen 5 9600X, for example, runs absolute circles around it: https://browser.geekbench.com/processors/amd-ryzen-5-9600x

mattbillenstein:
This is the correct analysis - there's a reason you see this stuff cheap or free.

The homelab group on Reddit is full of people who don't understand any of this - they have full racks in their house that could be replaced with one high-end desktop.

kllrnohj:
> The homelab group on Reddit is full of people who don't understand any of this - they have full racks in their house that could be replaced with one high-end desktop.

A lot of that group is making use of the IO capabilities of these systems to run lots of PCI-E devices and hard drives, and there's not exactly a cost-effective modern equivalent for that. If there were a cost-effective way to do something like take a PCI-E 5.0 x2 link and turn it into PCI-E 3.0 x8, that'd be incredible, but there isn't really. So raw PCI-E lane count is significant if you want cheap networking gear or HBAs or whatever, and raw PCI-E lane count is $$$$ if you're buying new.
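
For what it's worth, the bandwidth arithmetic behind that hypothetical x2-to-x8 conversion works out evenly; a quick sketch using the standard approximate per-lane rates:

    # Why a PCI-E 5.0 x2 -> 3.0 x8 bridge would be attractive: the same
    # bandwidth, but four times the lanes for slow-but-wide devices like
    # HBAs and older NICs. Per-lane figures are the usual approximations.
    GEN3_PER_LANE_GBPS = 8.0    # 8 GT/s, roughly 1 GB/s per lane
    GEN5_PER_LANE_GBPS = 32.0   # 32 GT/s, roughly 4 GB/s per lane

    upstream = 2 * GEN5_PER_LANE_GBPS    # PCI-E 5.0 x2 -> 64 Gbps
    downstream = 8 * GEN3_PER_LANE_GBPS  # PCI-E 3.0 x8 -> 64 Gbps
    print(upstream, downstream)          # 64.0 64.0, a clean 1:1 match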

These old systems also mean cheap RAM in large, large capacities. Something like 128GB of RAM to make ZFS or VMs purr is much cheaper on these used systems than on anything modern.

mattbillenstein:
Perhaps, but a lot of the time I don't really get the home use case for dozens of TB of storage either.

Like, if you have a large media library, you need to push maybe 10MB/s; you don't need 128GB of RAM to do that...

It's mostly just hardware porn - perhaps there are a few legit use cases for the old hardware, but they are exceedingly rare in my estimation.

kllrnohj:
> Like if you have a large media library, you need to push maybe 10MB/s,

For just streaming a 4K Blu-ray you need more than 10MB/s: Ultra HD Blu-ray tops out at 144 Mbit/s, which is 18 MB/s. Not to mention if that system is being hit by something else at the same time (backup jobs, etc...).

Is the 128GB of RAM just hardware porn? Eh, maybe, probably. But if you want 8+ bays for a decent-sized NAS, you're already quickly into price points where these used servers are significantly cheaper, and 128GB of RAM adds very little to the cost, so why not.

Kubuxu:
For 8+ bays you just need a SAS HBA card and one free PCI-E slot. Not to mention that many motherboards will have 6+ SATA ports already.

If anything, second-hand AMD gaming rigs make more sense than old servers. I say that as someone with an always-off R720xd at home due to noise and heat. It was fun when I bought it during winter years ago, until summer came.

zer00eyz:
Most of the workloads that people with homelabs run could be run on a 5-year-old i5.

A lot of businesses are paying obscene money to cloud providers when they could have a pair of racks and the staff to support them.

Unless you're paying attention to the bleeding edge of the server market, to its costs (and, better yet, its features and affordability), this sort of mistake is easy to make.

The article is by someone who does this sort of thing for fun, and for views/attention, and I'm glad for it... it's fun to watch. But it's sad when this same sort of misunderstanding happens in professional settings, and it happens a lot.

kllrnohj:
> For 8+ bays you just need a SAS HBA card and one free PCI-E slot. Not to mention that many motherboards will have 6+ SATA ports already.

And what case are you putting them into? What if you want it rack mounted? What about >1gig networking? What if I want a GPU in there to do Whisper for Home Assistant?

Used gaming rigs are great. But used servers are still valuable for plenty of things, too; compute just isn't one of them.

ssl-3:
> And what case are you putting them into?

Maybe one of the Fractal Design cases with a bunch of drive bays?

> What if you want it rack mounted?

Companies like Rosewill sell ATX cases that can scratch that itch.

> What about >1gig networking?

What about a PCI Express card? Regular ATX computers are expandable.

> What if I want a GPU in there to do Whisper for Home Assistant?

I mean... We started with a gaming rig, right? Isn't a GPU already implicit?

ThatPlayer:
I've been turning off my home server, even though it's a modern PC rather than old server hardware, because it idles at 100W, which is too much. It has a Ryzen 7900X in it.

Not sure if it's failing to reach lower power states, or if it's the 10 HDDs spinning. Or even the GPU. But I also don't really have anything important running on it, so I can just turn it off.

flas9sd:
I tend to use quite old hardware that is powered off when not in use for its intended purpose, and I coined the phrase "capability is its own quality".

For dedicated build boxes that crunch through lots of sources (whole distributions, AOSP) but run only seldom, getting your hands on lots of cores and RAM very cheaply can still trump buying newer CPUs with better perf/watt but higher cost.

kllrnohj:
> Companies like Rosewill sell ATX cases that can scratch that itch.

Have you looked at what they cost? Those cases alone cost as much as a used server, which comes with a case.

> What about PCI Express card? Regular ATX computers are expandable.

As mentioned higher up, they run out of lane count in a hurry, especially when you're using things like used ConnectX cards.

ssl-3:
A rackmount case from Rosewill costs a couple of hundred bucks or so, new. And they'll remain useful for as long as things like ATX boards and 3.5" hard drives are useful.

I mean: An ATX case can be paid for once, and then be used for decades. (I'm writing this using a modern desktop computer with an ATX case that I bought in 2008.)

PCI Express lanes can be multiplied. There should frankly be more of this going on than there is, but it's still a thing that can be done.

Consumer boards built on the AMD X670E chipset, for instance, have some switching magic built in. There are enough direct CPU-connected lanes for an x16 GPU and a couple of x4 NVMe drives, and the NIC(s) and/or HBA(s) can go downstream of the chipset.

(Yeah, sure: it's limited to an aggregate 64 Gbps at the tail end, but that's not a problem for the things I do at home, where my sights are set on 10Gbps networking and an HBA with a bunch of spinny disks. Your needs may differ.)
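
As a rough sanity check on that claim (the per-device throughput figures below are assumptions for illustration; only the 64 Gbps aggregate comes from the comment above):

    # Rough bandwidth budget for devices hanging off the chipset downlink.
    downlink_gbps = 64.0                 # aggregate figure quoted above

    nic_gbps = 10.0                      # 10GbE running flat out
    hdd_count = 8                        # a NAS worth of spinny disks
    hdd_mb_s = 250                       # sequential MB/s per 3.5" HDD
    hba_gbps = hdd_count * hdd_mb_s * 8 / 1000   # -> 16 Gbps

    total = nic_gbps + hba_gbps
    print(f"~{total:.0f} of {downlink_gbps:.0f} Gbps used")  # ~26 of 64 Gbps
    # Plenty of headroom for a home NAS, as the comment suggests.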

kbenson:
I have a couple units of free colocation cabinet space, with free bandwidth and power to go with it, waiting to be used, so inefficient hardware is less of an issue for me. I've just been fairly lazy about sourcing the hardware myself.

kbenson:
That's mostly irrelevant, because I have a few rack units of free space, with free power and bandwidth I can use if I want, but I haven't bothered because I don't have a need worth shelling out the money for a modern platform to put in it.

I'm well aware of the costs of power and the logistics of colocation; this is purely about how I'm more willing to spend $100-$200 on a toy than $1000-$2000.