1117 points kentonv | 4 comments

I wasn't quite sure if this qualified as "Show HN" given you can't really download it and try it out. However, dang said[0]:

> If it's hardware or something that's not so easy to try out over the internet, find a different way to show how it actually works—a video, for example, or a detailed post with photos.

Hopefully I did that?

Additionally, I've put code and a detailed guide for the netboot computer management setup on GitHub:

https://github.com/kentonv/lanparty

Anyway, if this shouldn't have been Show HN, I apologize!

[0] https://news.ycombinator.com/item?id=22336638

RulerOf:
> I've never heard of anyone else having done anything like this. This surprises me! But, surely, if someone else did it, someone would have told me about it? If you know of another, please let me know!

I never had the tenacity to consider my build "finished," and definitely didn't have your budget, but I built a 5-player room[1] for DotA 2 back in 2013.

I got really lucky with hardware selection, but ended up fighting with various bugs over the years... diagnosing a broken video card was an exercise in frustration because the virtualization layer made BSODs impossible to see.

I went with local disk-per-VM because latency matters more than throughput, and I'd been doing iSCSI boot for such a long time that I was intimately familiar with the downsides.

I love your setup (thanks for taking the time to share this BTW) and would love to know if you ever get the local CoW working.

My only tech-related comment is that I can also confirm those 10G cards are indeed trash, and would humbly suggest an Intel-based eBay special. You could still load iPXE (I assume you're using it) from the onboard NIC, continue using it for WoL, but shift the netboot over to the add-in card via a script (sketched below), and probably get better stability and performance.
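To illustrate the hand-off, something like the following iPXE script would do it. This is purely a hypothetical sketch: the interface names (net0/net1), the target IP, and the IQN are all made up, and your iPXE build may enumerate the NICs differently:

    #!ipxe
    # Chainloaded from the onboard NIC's PXE ROM; hand netboot duties
    # over to the Intel add-in card before touching iSCSI.
    ifclose net0    # drop the flaky onboard NIC
    ifopen net1     # bring up the add-in card
    dhcp net1       # get an address on the new interface
    # Boot the machine's iSCSI disk over the better NIC;
    # fall into a shell if the SAN boot fails so you can debug.
    sanboot iscsi:10.0.0.1::::iqn.2024-01.lanparty.example:gamerig1 || shell

The onboard NIC stays enabled in firmware, so WoL keeps working; it just never carries the iSCSI traffic.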

[1]: https://imgur.com/a/4x4-four-desktops-one-system-kWyH4

kentonv:
Hah, you really did the VM thing? A lot of people have suggested that to me but I didn't think it'd actually work. Pretty cool!

Yeah I'm pretty sure my onboard 10G Marvell AQtion ethernet is the source of most of my stability woes. About half the time any of these machines boot up, Windows bluescreens within the first couple minutes, and I think it has something to do with the iSCSI service crashing. Never had trouble in the old house where the machines had 1G network -- but load times were painful.

Luckily if the machines don't crash in the first couple minutes, then they settle down and work fine...

Yeah I could get higher-quality 10G cards and put them in all the machines but they seem expensive...

1. toast0:
> Yeah I could get higher-quality 10G cards and put them in all the machines but they seem expensive...

Bulk buying is probably hard, but ex-enterprise Intel 10G on eBay tends to be pretty inexpensive. Dual sfp+ x520 cards are regularly available for $10. Dual 10gbase-t x540 cards run a bit more, with more variance, $15-$25. No 2.5/5Gb support, but my 10g network equipment can't do those speeds either, so no big deal. These are almost all x8 cards, so you need a slot that can accommodate them, but x4 electrical should be fine. (I've seen reports that some enterprise gear has trouble working properly in x1/x4 slots beyond just the bandwidth restriction, which shouldn't be a problem here; and if a dual-port card needs x8 but you only have x4 and only use a single port, that should be fine.)

I think all of mine can pxeboot, but sometimes you have to fiddle with the eeprom tools, and they might be legacy-only (no uefi pxe), which is fine for me.
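For reference, the eeprom fiddling is typically done with Intel's BootUtil (bootutil64e on 64-bit Linux, from Intel's boot utility download). Rough sketch from memory -- port numbering and exact flag spelling vary by BootUtil version, so check its built-in help first:

    # List all Intel NICs and their current flash/boot settings
    ./bootutil64e

    # Enable the option rom on port 2, then select legacy PXE boot
    ./bootutil64e -NIC=2 -FLASHENABLE
    ./bootutil64e -NIC=2 -BOOTENABLE=PXE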

And you usually have to be OK with running them with no brackets, because they usually come with low-profile brackets only.

2. vueko:
+1 for eBay x520 cards. My entire 10g sfp+ home network runs on a bunch of x520s, fs.com DACs/AOCs, Mikrotik switches, and an old desktop running FreeBSD with a few x520s in it as the core router. Very, very cheap to assemble, and it has been absolutely bulletproof. IME the ixgbe driver is extremely stable at this point.
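The FreeBSD-as-core-router part is less exotic than it sounds -- it's mostly a few lines of rc.conf. A hypothetical fragment (addresses made up, assuming the x520s show up as ix0/ix1 under the ixgbe driver):

    # /etc/rc.conf fragment: route between two 10g segments
    ifconfig_ix0="inet 10.0.1.1/24 mtu 9000"   # segment A
    ifconfig_ix1="inet 10.0.2.1/24 mtu 9000"   # segment B
    gateway_enable="YES"                       # turn on IPv4 forwarding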

x520s with full-height brackets do exist (I have a box full of them), but you may pay like $3-5/ea more than the more common lo-pro bracket ones. If you're willing to pop the bracket off, you can also find full-height brackets standalone and install your own.

Also, in general: in my experience, avoiding 10gbe rj45 is very worthwhile. More expensive, more power consumption, more heat generation. If you can stick an sfp+ card in something, do it. IMO 10gbe rj45 is only worthwhile when you've got a device that supports it but can't easily take a pcie nic, like some Intel NUCs.

3. toast0:
sfp+ is clearly cheaper, with less heat/power, but I've got cat5e in the walls and between my house and detached garage, so I've got to use 10g-baseT to get between the garage and the house, and up to my office from the basement. At my two network closet areas, I use sfp+ for servers.

I think my muni fiber install happening this week might have a 10G-baseT handoff, and I've got a port for that open on my switch in the garage. If that works out, that will be neat, but I'll need to upgrade some more stuff to make full use of that.

4. vueko:
Oh true, good point, being wired for ethernet is another valid use case. I'm lucky in that my ONT is just a commodity Nokia switch, so I can slap any sfp+ form-factor transceiver I want into the appropriate port for the connection to the router; in my case, 10gbe rj45 is truly banishable to the devices I can't get a pcie card into. I'm still in the phase of masking-taping cables to the ceiling instead of doing real wall pulls, but when I do get around to that, I feel like I'm going to pick up an aliexpress fiber splicer and pull single-mode fiber to future-proof it and make sure I never have to deal with pulls again (and to avoid being stuck on an old ethernet standard in the magical future where I can get a 100gbit wan link).